WO2014075237A1 - Method for achieving augmented reality, and user equipment - Google Patents
- Publication number
- WO2014075237A1 (PCT/CN2012/084581)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information
- video frame
- user equipment
- virtual reality
- image
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
Definitions
- the present invention relates to the field of information technology (Information Technology, IT), and in particular to a method and user equipment for implementing augmented reality.
- Augmented Reality (AR) technology is an emerging human-computer interaction technology developed on the basis of virtual reality technology. It uses visual technology to superimpose virtual reality information onto images of the real world, and allows users to interact with the augmented reality application, which expands the user's perception of the real world. With the popularity of intelligent user equipment (User Equipment, UE), AR technology has developed rapidly in recent years.
- the user equipment can capture a video stream through the camera, use the captured video stream as real world information, obtain virtual reality information related to the real world information from the server side, superimpose the acquired virtual reality information onto the captured video stream, and display the superimposed video stream.
- the UE may send a request for acquiring virtual reality information to the server side after the video stream is captured, where the request includes information about a key frame captured by the UE or the location of the UE, and the key frame includes the pose image of the tracked object; after the server side obtains the virtual reality information according to the key frame captured by the UE or the location of the UE, it sends the virtual reality information to the UE, and the UE superimposes the received virtual reality information onto each frame of the captured video stream for display.
- the virtual reality information received by the UE is related to the tracked object in the real world or to the location where the UE is located.
- the AR experience begins when the UE overlays the received virtual reality information onto the captured video stream.
- the virtual reality information received by the UE is related to the real world; specifically, it is related to the tracked object in the real world or to the location where the UE is located. After the AR experience ends, if the user needs to experience the same AR experience again, the user has to go back to the original real-world scene. For example, suppose the user is located at location A. When the user queries restaurants near location A with the UE, the server side returns information about restaurants near location A, and the UE superimposes the obtained restaurant information onto the captured video frames; if the user later wants the same AR experience, the user is required to return to location A and capture the same video frames again.
- an object of embodiments of the present invention is to provide a method and user equipment for implementing augmented reality, so that after the end of the AR experience, the user can also experience the same AR experience again at any time.
- an embodiment of the present invention provides a method for implementing augmented reality, including:
- the user equipment stores an augmented reality context when the user experiences an augmented reality experience, the augmented reality context including virtual content information received by the user equipment from the server side and a video stream captured by the user equipment;
- the user equipment acquires virtual reality information according to the stored virtual content information
- the user equipment sequentially acquires the stored video frames in the video stream according to the sequence in which the video frames were captured, superimposes the acquired virtual reality information on the acquired video frames, and displays the superimposed video frames.
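By way of illustration only (not part of the original disclosure), the replay step above can be sketched in a few lines of Python; the names `replay`, the context dictionary layout, and the `display` callback are all hypothetical stand-ins for whatever the UE actually uses:

```python
def replay(context, display):
    """Replay a stored augmented reality context.

    context: dict with keys
      - "virtual": the virtual reality information to overlay
      - "frames":  mapping of capture timestamp -> video frame
    display: callback invoked once per superimposed frame.
    """
    shown = []
    # Acquire stored frames in the order they were captured.
    for ts in sorted(context["frames"]):
        frame = context["frames"][ts]
        # Superimpose the acquired virtual reality information on the frame
        # (a real UE would composite pixels; a tuple stands in here).
        composed = (frame, context["virtual"])
        display(composed)
        shown.append(ts)
    return shown

# Usage: frames captured at t = 0, 40, 80 ms are redisplayed in capture order.
ctx = {"virtual": "restaurant info", "frames": {80: "f2", 0: "f0", 40: "f1"}}
order = replay(ctx, display=lambda f: None)
```

Sorting by timestamp is what guarantees the "sequence in which the video frames were captured" is preserved on replay.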
- the user equipment sequentially captures video frames, stores a correspondence between the timestamp of each captured video frame and the tracked object information, removes the pose image of the tracked object from the captured video frame, updates the panoramic image according to the video frame after the pose image is removed, and stores a correspondence between the timestamp and the background information;
- the user equipment stores a standard image of the tracked object while capturing video frames, and stores the panoramic image when the user equipment stops capturing video frames;
- the tracked object information includes the location information of the pose image in the captured video frame, and the background information includes the location information of the captured video frame in the panoramic image.
- the tracked object information further includes a homography matrix of the pose image on the captured video frame
- the background information further includes a deflection angle of the captured video frame relative to the panoramic image.
- the user equipment acquires the stored standard image and the panoramic image
- the user equipment sequentially acquires the timestamp of the video frame to be displayed in the order in which the video frames were captured, obtains the tracked object information and the background information corresponding to the acquired timestamp, performs affine transformation on the stored standard image according to the homography matrix included in the tracked object information to obtain the pose image of the tracked object, crops the acquired panoramic image according to the location information and the deflection angle included in the background information and the display resolution to obtain a background image, and superimposes the obtained pose image onto the cropped background image according to the location information included in the tracked object information, generating the video frame currently to be displayed.
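The homography matrix mentioned above is a 3×3 projective transform; applying it to every pixel coordinate of the standard image regenerates the pose image. A minimal illustration of applying a homography to one point, in pure Python (the function name and the example matrix are hypothetical, not from the patent):

```python
def apply_homography(H, point):
    """Map a 2-D point through a 3x3 homography matrix H.

    The point is lifted to homogeneous coordinates (x, y, 1),
    multiplied by H, then divided by the projective scale w.
    """
    x, y = point
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    u = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    v = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return (u, v)

# A uniform 2x-scale homography doubles both coordinates.
H = [[2, 0, 0], [0, 2, 0], [0, 0, 1]]
mapped = apply_homography(H, (3, 4))
```

In practice the whole standard image is warped at once (e.g. by an image-processing library), but the per-point arithmetic is exactly this.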
- the virtual content information includes an identifier of the tracked object corresponding to the virtual reality information
- the superimposing of the acquired virtual reality information on the acquired video frame includes: when the virtual content information includes the identifier of the tracked object, the user equipment superimposes the acquired virtual reality information onto the video frame currently to be displayed, according to the position of the pose image of the tracked object in that video frame.
- the user equipment sequentially captures video frames, updates a panoramic image according to the captured video frames, and stores a correspondence between the timestamp of each captured video frame and the background information;
- the user equipment stores the panoramic image; wherein the background information includes location information of the captured video frame in the panoramic image.
- the background information further includes a deflection angle of the captured video frame relative to the panoramic image.
- the user equipment acquires the stored panoramic image
- the user equipment sequentially obtains the timestamp of the video frame currently to be displayed according to the sequence in which the video frames were captured, obtains the background information corresponding to the obtained timestamp, and crops the acquired panoramic image according to the location information and the deflection angle included in the background information and the display resolution, generating the video frame currently to be displayed.
- the virtual content information includes location information corresponding to the virtual reality information, and the background information further includes information about the location of the user equipment; the superimposing of the acquired virtual reality information on the acquired video frame includes:
- the user equipment superimposes the acquired virtual reality information on the currently displayed video frame according to the information about the location of the user equipment included in the background information and the location information included in the virtual content information.
- an embodiment of the present invention provides a user equipment, including:
- a receiving unit configured to receive virtual content information returned from the server side
- a video stream capturing unit configured to capture a video stream
- a storage unit configured to store an augmented reality context when the user experiences an augmented reality experience, where the augmented reality context includes the virtual content information received by the receiving unit and the video stream captured by the video stream capturing unit;
- a virtual reality information acquiring unit configured to acquire virtual reality information according to the virtual content information stored by the storage unit when the user needs to experience the augmented reality experience again;
- a video frame acquiring unit configured to sequentially acquire video frames in the video stream stored by the storage unit according to a sequence in which the video frames are captured;
- a superimposing unit configured to superimpose the virtual reality information acquired by the virtual reality information acquiring unit on the video frame acquired by the video frame acquiring unit;
- a display unit configured to display a video frame superimposed by the superposition unit.
- the video stream capturing unit is specifically configured to sequentially capture video frames
- the storage unit is specifically configured to store a correspondence between the timestamp of the video frame captured by the video stream capturing unit and the tracked object information, remove the pose image of the tracked object from the captured video frame, update the panorama according to the video frame after the pose image is removed, and store a correspondence between the timestamp and the background information;
- the tracked object information includes the location information of the pose image in the captured video frame, and the background information includes the location information of the captured video frame in the panoramic image.
- the tracked object information further includes a homography matrix of the pose image on the captured video frame
- the background information further includes a deflection angle of the captured video frame relative to the panoramic image.
- the virtual content information received by the receiving unit includes the identifier of the tracked object
- the superimposing unit is specifically configured to: when the virtual content information includes the identifier of the tracked object, superimpose the virtual reality information acquired by the virtual reality information acquiring unit onto the video frame currently to be displayed, generated by the video frame acquiring unit, according to the position of the pose image of the tracked object in that video frame.
- the video stream capturing unit is specifically configured to sequentially capture video frames
- the storage unit is specifically configured to update a panoramic image according to a video frame captured by the video stream capturing unit, and store a correspondence between a timestamp of the captured video frame and background information;
- the background information includes location information of the captured video frame in the panoramic image.
- the background information further includes a deflection angle of the captured video frame relative to the panoramic image.
- the virtual content information received by the receiving unit includes location information corresponding to the virtual reality information
- the background information further includes information about a location of the user equipment
- the superimposing unit is specifically configured to superimpose the virtual reality information acquired by the virtual reality information acquiring unit onto the video frame currently to be displayed, generated by the video frame acquiring unit, according to the information about the location of the user equipment included in the background information and the location information included in the virtual content information.
- a method and a user equipment for implementing an augmented reality experience are provided by an embodiment of the present invention.
- when the user experiences an augmented reality experience, the UE stores the virtual content information and the captured video stream in the augmented reality context; after the augmented reality experience ends, the UE acquires virtual reality information according to the stored virtual content information and superimposes it on each video frame in the video stream for display, enabling the user to experience the same augmented reality experience again at any time.
- FIG. 1 is a schematic structural diagram of a system for implementing augmented reality according to an embodiment of the present invention
- FIG. 2 is a flowchart of a method for implementing augmented reality according to an embodiment of the present invention
- FIG. 3 is a flowchart of another method for implementing augmented reality according to an embodiment of the present invention
- FIG. 4 is a flowchart of still another method for implementing augmented reality according to an embodiment of the present invention
- FIG. 5 is a structural diagram of a user equipment according to an embodiment of the present invention
- FIG. 6 is a structural diagram of another user equipment according to an embodiment of the present invention.
Detailed description
- FIG. 1 is a system architecture diagram for implementing augmented reality according to an embodiment of the present invention.
- the UE sends a request for acquiring the virtual content information to the server side, where the request includes information identifying the tracked object or information about the location of the UE; the information identifying the tracked object includes the pose image of the tracked object or the feature data of that pose image; the server side sends the virtual content information to the UE according to the request; after receiving the virtual content information, the UE stores the virtual content information and the video stream captured by the UE.
- the UE acquires virtual reality information according to the stored virtual content information, sequentially acquires the stored video frames in the video stream according to the sequence in which they were captured, superimposes the acquired virtual reality information on the acquired video frames, and displays the superimposed video frames.
- the embodiment of the present invention does not limit the type of the UE.
- the UE may include a smart phone, a personal computer, a tablet, glasses with augmented reality function, or other terminal with augmented reality function.
- the embodiment of the present invention does not limit the composition of the server side.
- the server side is composed of at least one server, and the servers in the server side may include a presentation layer server, an application layer server, and a database server.
- an embodiment of the present invention provides a method for implementing augmented reality. As shown in FIG. 2, the method includes:
- the UE stores an augmented reality context when the user experiences an augmented reality experience, where the augmented reality context includes the virtual content information received by the UE from the server side and the video stream captured by the UE;
- the stored video stream is a series of consecutive video frames
- the UE uses the video stream as real world information when the user experiences the augmented reality experience; the virtual content information includes virtual reality information or storage location information of virtual reality information; the augmented reality experience begins when the UE superimposes the acquired virtual reality information onto the captured video frames for display;
- the UE may remove the pose image of the tracked object from the captured video frame and store the pose image and the background image separately; when the current location in the real environment needs to be enhanced, that is, when the video stream captured by the UE does not include the pose image of the tracked object, the video frame captured by the UE may be used directly as the background image;
- the UE may merge the background images in the captured video frames to generate a panorama, and the UE may restore a background image according to its position in the panorama;
- the UE may store the captured video stream in any of the following manners:
- Manner 1, where the video stream captured by the UE includes the pose image of the tracked object: the UE sequentially captures video frames, stores a correspondence between the timestamp of each captured video frame and the tracked object information, removes the pose image of the tracked object from the captured video frame, updates the panorama according to the video frame after the pose image is removed, and stores a correspondence between the timestamp and the background information; the UE stores the standard image of the tracked object while capturing video frames, and stores the panorama when the UE stops capturing video frames;
- the timestamp is used to indicate the time at which the video frame was captured; by way of example and not limitation, the timestamp may be the time at which the video frame was captured relative to the start of the augmented reality experience;
- the tracked object information includes the location information of the pose image in the captured video frame, and the background information includes the location information of the captured video frame in the panorama;
- the tracked object information may further include a homography matrix of the pose image on the captured video frame, and the background information may further include a deflection angle of the captured video frame relative to the panoramic image;
- the tracked object refers to an object to be tracked in the real world, such as a toy car in the current real world;
- the pose image of the tracked object refers to the image of the tracked object in the captured video frame; for example, when the tracked object is a toy car in the current real world, the image of the toy car in a captured video frame is the pose image of the toy car;
- the standard image refers to an image captured when the tracked object is placed horizontally on a horizontal plane and the line of sight is perpendicular to that plane;
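To make the bookkeeping of Manner 1 concrete, here is a hypothetical layout of the stored context as plain Python dictionaries; the key names are illustrative only, since the patent specifies which pieces are stored but not any particular format:

```python
# Illustrative stored context for Manner 1 (names are not from the patent).
context = {
    "standard_image": "std.png",   # stored once, while capturing
    "panorama": None,              # stored when capture stops
    "tracked": {},    # timestamp -> {"position": ..., "homography": ...}
    "background": {}, # timestamp -> {"position": ..., "deflection": ...}
}

def store_frame(ctx, ts, pose_position, homography, bg_position, deflection):
    """Record one captured frame: the pose image is described by its
    position in the frame and its homography matrix; the background is
    described by its position in the panorama and its deflection angle."""
    ctx["tracked"][ts] = {"position": pose_position, "homography": homography}
    ctx["background"][ts] = {"position": bg_position, "deflection": deflection}

# One frame captured at t=0: pose image centered at (120, 80), identity
# homography, background centered at (500, 300) in the panorama, no rotation.
store_frame(context, 0, (120, 80), [[1, 0, 0], [0, 1, 0], [0, 0, 1]], (500, 300), 0.0)
```

Note that no pixel data is stored per frame; only the standard image and the panorama hold pixels, which is exactly the storage saving the patent claims.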
- Manner 2, where the video stream captured by the UE does not include the pose image of the tracked object: the UE sequentially captures video frames, updates the panorama according to the captured video frames, and stores a correspondence between the timestamp of each captured video frame and the background information; when the UE stops capturing video frames, the UE stores the panorama;
- the UE may obtain virtual reality information in the following manner:
- the user equipment may directly obtain the virtual reality information; or
- the user equipment may acquire the virtual reality information according to the storage location information; for example, by way of example and not limitation, the virtual content information may include a URI (Uniform Resource Identifier) of the virtual reality information, and the UE may obtain the virtual reality information according to the URI.
- the UE sequentially acquires the stored video frames in the video stream according to the sequence in which the video frames were captured, superimposes the acquired virtual reality information on the acquired video frames, and displays the superimposed video frames;
- the UE may determine the sequence in which the video frames are captured according to the timestamp of the video frame.
- the UE acquires the virtual reality information and the video stream stored when the augmented reality experience was experienced, and superimposes the acquired virtual reality information on each frame of the acquired video stream for display;
- Manner 1, corresponding to Manner 1 of storing the captured video stream in step S201: the UE acquires the stored standard image and panoramic image, sequentially acquires the timestamp of the video frame currently to be displayed according to the sequence in which the video frames were captured, obtains the tracked object information and the background information corresponding to the acquired timestamp, performs affine transformation on the stored standard image according to the homography matrix included in the tracked object information to obtain the pose image of the tracked object, crops the acquired panoramic image according to the location information and the deflection angle included in the background information and the display resolution to obtain a background image, and superimposes the obtained pose image onto the cropped background image according to the location information included in the tracked object information, generating the video frame currently to be displayed;
- Manner 2, corresponding to Manner 2 of storing the captured video stream in step S201: the UE acquires the stored panorama, sequentially acquires the timestamp of the video frame currently to be displayed according to the sequence in which the video frames were captured, obtains the background information corresponding to the acquired timestamp, and crops the acquired panoramic image according to the location information and the deflection angle included in the background information and the display resolution, generating the video frame currently to be displayed.
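The "crop the panorama according to the stored position and the display resolution" step can be sketched as follows; this is a simplified illustration (the function name is hypothetical, pixels are plain lists, and the deflection-angle rotation is deliberately omitted to keep the example short):

```python
def crop_panorama(panorama, center, resolution):
    """Cut a window out of the panorama around the stored center
    position, at the display resolution.

    panorama:   2-D list of pixel values (rows of columns)
    center:     (cx, cy) of the frame's center point in the panorama
    resolution: (width, height) of the display
    Rotation by the stored deflection angle is omitted for brevity.
    """
    cx, cy = center
    w, h = resolution
    x0, y0 = cx - w // 2, cy - h // 2
    return [row[x0:x0 + w] for row in panorama[y0:y0 + h]]

# A 6x6 panorama whose pixel at (row r, col c) is 10*r + c,
# cropped to a 2x2 background image centered at (3, 3).
pano = [[10 * r + c for c in range(6)] for r in range(6)]
bg = crop_panorama(pano, (3, 3), (2, 2))
```

A real implementation would also rotate the window by the deflection angle before cutting, so the restored background matches the original frame's orientation.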
- when the user starts to experience the augmented reality experience, the UE may also store, in the augmented reality context, user operation information that describes the interaction between the user and the UE, where the user operation information may include an operation type, an operation parameter, and a timestamp.
- the time stamp included in the user operation information is used to indicate the moment when the interaction occurs.
- the time stamp included in the user operation information may be a time when the interaction occurs relative to the start of the augmented reality experience.
- the UE may simulate the user's operation according to the operation type and the operation parameter at the moment corresponding to the timestamp included in the user operation information.
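The operation-replay mechanism described above amounts to a small scheduler: sort the stored interactions by timestamp and re-apply each one at its original offset from the start of the experience. A minimal sketch, assuming a `simulate` callback stands in for whatever the UE does to re-inject an operation (all names are hypothetical):

```python
import time

def replay_operations(operations, simulate, speed=1.0):
    """Re-apply stored user interactions at the moments they originally
    occurred, relative to the start of the experience.

    operations: list of (timestamp_seconds, op_type, op_params)
    simulate:   callback that performs the operation on the UE
    speed:      playback speed multiplier (>1 replays faster)
    """
    start = time.monotonic()
    applied = []
    for ts, op_type, params in sorted(operations):
        # Wait until the original moment of the interaction.
        delay = ts / speed - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)
        simulate(op_type, params)
        applied.append(op_type)
    return applied

# Two interactions recorded 10 ms and 20 ms after the experience started.
ops = [(0.02, "tap", {"x": 10, "y": 20}), (0.01, "swipe", {"dx": 5})]
done = replay_operations(ops, simulate=lambda t, p: None, speed=10.0)
```

Because the stored timestamps are relative to the start of the experience, the same list replays identically no matter when the user restarts it.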
- the UE may further send the augmented reality context to other UEs, so that other users may also experience the augmented reality experience, thereby enabling the The user can share the augmented reality experience with other users.
- in the method for implementing augmented reality provided by this embodiment, when the user experiences an augmented reality experience, the UE stores the virtual content information and the captured video stream in the augmented reality context; after the augmented reality experience ends, the UE acquires virtual reality information according to the stored virtual content information and superimposes it on each video frame in the video stream for display, so that after experiencing the augmented reality experience, the user can experience the same augmented reality experience again at any time.
- when the UE captures a video frame that includes the pose image of the tracked object, the UE stores the pose image of the tracked object separately from the background image: it stores the pose image by storing the location information of the pose image in the captured video frame and the homography matrix, and stores the background image by storing the location information of the captured video frame in the panorama, thereby saving storage resources of the UE; further, when the video frame captured by the UE does not include the pose image of the tracked object, the UE uses the captured video frame as a background image and stores it by storing the location information of the captured video frame in the panorama, likewise saving storage resources of the UE.
- FIG. 3 is a flowchart of a method for implementing augmented reality according to an embodiment of the present invention. The method is applied to a scene in which the captured video stream includes the pose image of the tracked object, and the method includes:
- the feature data of the pose image may be an outline of the pose image, and the pose image may be obtained by capturing a video frame;
- the UE receives virtual content information sent by the server, where the virtual content information includes virtual reality information or storage location information of virtual reality information.
- the virtual content information is obtained by the server side according to the information identifying the tracked object.
- the server side stores a correspondence between the feature data of the pose image of the tracked object and the identifier (Identifier) of the tracked object, and a correspondence between the identifier of the tracked object and the virtual content information; after obtaining the information identifying the tracked object, the server side obtains the feature data of the pose image of the tracked object, obtains the identifier of the tracked object according to the feature data, and obtains the virtual content information corresponding to the identifier of the tracked object according to that identifier;
- alternatively, the server side stores a correspondence between the feature data of the pose image of the tracked object and the virtual content information; after obtaining the information identifying the tracked object, the server side obtains the feature data of the pose image of the tracked object and obtains the virtual content information corresponding to that feature data;
- the server side may use a feature extraction algorithm to process the pose image of the tracked object to obtain the feature data;
- the UE stores the virtual content information.
- the UE may store the virtual content information in an augmented reality context
- the UE captures a video frame.
- the UE may sequentially capture video frames according to the frame rate of the captured video stream, where the video frames captured by the UE include the pose image of the tracked object;
- when the UE superimposes the virtual reality information acquired according to the virtual content information onto the captured video frames for display, the augmented reality experience starts;
- S305 The UE stores a correspondence between a timestamp of the captured video frame and the tracked object information.
- the tracked object information includes the location information of the pose image of the tracked object in the captured video frame; this location information may be the coordinates of the center point of the pose image of the tracked object in the captured video frame, the coordinates being determined when the UE tracks the tracked object;
- the tracked object information may further include the homography matrix of the pose image of the tracked object on the captured video frame;
- the homography matrix may be determined when the UE tracks the tracked object, and the UE may perform affine transformation on the standard image of the tracked object according to the homography matrix to obtain the pose image of the tracked object;
- the affine transformation of the standard image of the tracked object means that the coordinates of the standard image are multiplied by the homography matrix;
- the UE matches key points on the captured video frame with corresponding key points on the standard image to obtain the location information of the key points on the captured video frame and on the standard image; according to this location information, the RANSAC (RANdom SAmple Consensus) algorithm can be used to obtain the homography matrix;
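RANSAC works by repeatedly fitting a model to a minimal random sample of the matches and keeping the model most of the matches agree with, so mismatched key points (outliers) do not corrupt the estimate. The patent fits a full homography; the toy sketch below simplifies the model to a pure 2-D translation (one match suffices as a minimal sample) so it stays self-contained, but the sample-score-keep loop is the same:

```python
import random

def ransac_translation(matches, threshold=1.0, iterations=100, seed=0):
    """Toy RANSAC: estimate a 2-D translation from noisy key-point
    matches. A real homography fit would sample 4 matches and solve a
    linear system; a translation is used here purely for illustration.

    matches: list of ((x, y), (x', y')) key-point pairs.
    """
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iterations):
        (x, y), (u, v) = rng.choice(matches)   # minimal sample: 1 pair
        dx, dy = u - x, v - y                  # candidate model
        inliers = sum(
            1 for (ax, ay), (bx, by) in matches
            if abs(ax + dx - bx) + abs(ay + dy - by) <= threshold
        )
        if inliers > best_inliers:
            best, best_inliers = (dx, dy), inliers
    return best

# Three consistent matches shifted by (5, -2), plus one gross outlier.
pairs = [((0, 0), (5, -2)), ((1, 1), (6, -1)), ((2, 3), (7, 1)), ((9, 9), (0, 0))]
shift = ransac_translation(pairs)
```

The outlier pair can never collect more than one inlier, so any iteration that samples a good pair wins, which is the robustness property that makes RANSAC suitable for key-point matching.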
- the UE may store a correspondence between a timestamp of the captured video frame and the tracked object information in the augmented reality context;
- S306 The UE removes the pose image of the tracked object from the captured video frame, updates the panorama using the video frame with the pose image removed as a background image, and stores a correspondence between the timestamp and the background information;
- for the first captured video frame, the UE may initialize the panorama with the obtained background image; in that case, "updating the panorama according to the obtained background image" means "initializing the panorama according to the obtained background image";
- the background information includes the location information of the captured video frame in the panorama and the deflection angle of the captured video frame relative to the panorama;
- the location information of the captured video frame in the panorama may be the coordinates of the center point of the captured video frame in the panorama, and these coordinates may be determined when the UE updates the panorama
- the UE may store a correspondence between a timestamp of the captured video frame and the background information in the augmented reality context;
- the UE may determine the deflection angle of the captured video frame relative to the panorama when updating the panorama; specifically, the deflection angle is the angle by which the horizontal line of the captured video frame is rotated relative to the horizontal line of the panorama; for example, when the panorama is updated by using a video frame that has been rotated counterclockwise by 30°, the deflection angle of that video frame relative to the panorama is 30° counterclockwise;
- the operation of updating the panorama may include the following three steps:
- image registration: determining the portion of the captured video frame that overlaps with the panorama;
- image warping: mapping the panorama onto a spherical or cylindrical surface;
- image stitching: splicing the background image onto the panorama according to the portion of the captured video frame that overlaps with the panorama;
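The registration-and-splicing idea above can be illustrated with a deliberately simplified sketch. It registers a new grayscale frame against the existing panorama by a one-dimensional brute-force search over horizontal shifts and then splices the frame in, growing the canvas to the right; the spherical/cylindrical warping of a real stitcher is omitted, and the profile-matching heuristic is an illustrative assumption, not the disclosed method.

```python
import numpy as np

def register_shift(pano, frame):
    # image registration: find the horizontal shift at which the new
    # frame best matches the panorama (column-mean profile comparison)
    profile_p = pano.mean(axis=0)
    profile_f = frame.mean(axis=0)
    best_shift, best_score = 0, -np.inf
    for shift in range(len(profile_p)):
        overlap = min(len(profile_f), len(profile_p) - shift)
        if overlap < 8:          # require a minimum overlap
            break
        a = profile_p[shift:shift + overlap]
        b = profile_f[:overlap]
        score = -np.mean((a - b) ** 2)   # negative MSE: higher is better
        if score > best_score:
            best_score, best_shift = score, shift
    return best_shift

def update_panorama(pano, frame):
    # image stitching: splice the frame onto the panorama at the
    # registered position, enlarging the canvas if needed (planar case)
    shift = register_shift(pano, frame)
    h = pano.shape[0]
    new_w = max(pano.shape[1], shift + frame.shape[1])
    out = np.zeros((h, new_w), dtype=pano.dtype)
    out[:, :pano.shape[1]] = pano
    out[:, shift:shift + frame.shape[1]] = frame
    return out, shift
```

The returned shift corresponds to the stored "location information of the captured video frame in the panorama" in this one-dimensional toy setting.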
- step S307 The UE determines whether the augmented reality experience is over, and if so, step S308 is performed, otherwise, step S304 is performed;
- the UE may store a standard image of the tracked object when capturing video frames; specifically, the standard image of the tracked object may be stored before, after, or simultaneously with any of steps S304 to S306; the UE may generate the posture image of the tracked object according to the homography matrix between the image of the tracked object in a video frame captured by the UE and the standard image of the tracked object;
- the server side stores a standard image of the tracked object
- the UE may obtain a standard image of the tracked object from the server side
- the UE stops capturing video frames.
- the panorama stored by the UE is generated from the background images of the video frames captured by the UE, so the UE may restore the background image of a captured video frame from the panorama;
- the UE may obtain the virtual reality information in the following manner: when the virtual content information includes the virtual reality information, the user equipment directly obtains the virtual reality information; when the virtual content information includes the storage location information of the virtual reality information, the user equipment obtains the virtual reality information according to the storage location information;
- S310 The UE acquires the stored standard image and the panoramic image.
- S311 The UE acquires a timestamp of a video frame to be displayed, and obtains a posture image of the tracked object in the currently displayed video frame according to the obtained time stamp.
- after acquiring the timestamp of the video frame to be displayed, the UE obtains the tracked object information and the background information corresponding to the acquired timestamp, and performs an affine transformation on the acquired standard image according to the homography matrix included in the obtained tracked object information, to obtain the posture image of the tracked object;
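As a hedged illustration of how the posture image might be regenerated from the stored standard image and homography matrix, the following NumPy sketch applies a 3×3 homography by inverse mapping with nearest-neighbour sampling. The function name and sampling scheme are assumptions for this sketch; a real implementation would typically call something like OpenCV's `warpPerspective`.

```python
import numpy as np

def warp_standard_image(std_img, H, out_shape):
    # regenerate the posture image: map each output pixel back through
    # the inverse homography and sample the standard image (nearest
    # neighbour); pixels falling outside the standard image stay 0
    h_out, w_out = out_shape
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:h_out, 0:w_out]
    ones = np.ones_like(xs)
    pts = np.stack([xs, ys, ones], axis=-1).reshape(-1, 3) @ Hinv.T
    src = pts[:, :2] / pts[:, 2:3]
    sx = np.round(src[:, 0]).astype(int)
    sy = np.round(src[:, 1]).astype(int)
    valid = ((sx >= 0) & (sx < std_img.shape[1]) &
             (sy >= 0) & (sy < std_img.shape[0]))
    out = np.zeros((h_out, w_out), dtype=std_img.dtype)
    out.reshape(-1)[valid] = std_img[sy[valid], sx[valid]]
    return out
```

With the identity matrix the standard image is reproduced unchanged; with a translation homography the standard image appears shifted, which is the degenerate case of the pose reconstruction described above.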
- the UE may sequentially acquire timestamps of the video frames to be displayed in sequence according to the sequence in which the video frames are captured;
- S312 The UE obtains a background image of the video frame to be currently displayed.
- the UE crops the acquired panorama according to the display resolution and the position information and deflection angle in the obtained background information, to obtain the background image of the video frame to be displayed currently.
- specifically, the UE may generate a horizontal rectangular frame according to the resolution to be displayed; if the deflection angle of the current video frame to be displayed relative to the panorama is 30° counterclockwise, the UE rotates the horizontal rectangular frame counterclockwise by 30° and, according to the position of the current video frame to be displayed in the panorama, crops the panorama with the rotated rectangular frame to generate the background image of the current video frame to be displayed;
- the display resolution may be determined by the screen resolution of the UE; for example, if the screen resolution of the UE is 480×320, the UE may crop the acquired panorama at a resolution of 480×320;
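The rotated-rectangle interception described above might be sketched as follows: each output pixel of the display-resolution window is mapped into the panorama by rotating the sampling grid about the stored center point by the stored deflection angle. The sampling-grid formulation and nearest-neighbour rounding are illustrative assumptions.

```python
import numpy as np

def crop_rotated(pano, center, size, angle_deg):
    # cut a size=(h, w) window out of the panorama, centred at
    # center=(x, y) and deflected by angle_deg counterclockwise
    h, w = size
    cx, cy = center
    a = np.deg2rad(angle_deg)
    ys, xs = np.mgrid[0:h, 0:w]
    # offsets of each output pixel from the window centre
    dx = xs - (w - 1) / 2.0
    dy = ys - (h - 1) / 2.0
    # rotate the sampling grid by the deflection angle
    px = cx + dx * np.cos(a) - dy * np.sin(a)
    py = cy + dx * np.sin(a) + dy * np.cos(a)
    sx = np.clip(np.round(px).astype(int), 0, pano.shape[1] - 1)
    sy = np.clip(np.round(py).astype(int), 0, pano.shape[0] - 1)
    return pano[sy, sx]
```

At a deflection angle of 0° this reduces to an ordinary axis-aligned crop around the stored center point, matching the 480×320 example above.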
- S313 The UE generates the video frame to be displayed currently
- according to the position information, included in the tracked object information, of the posture image of the tracked object in the video frame, the UE superimposes the obtained posture image of the tracked object onto the cropped background image to generate the video frame to be displayed currently;
- S314 The UE superimposes the acquired virtual reality information on the generated video frame to be displayed, and displays the superimposed video frame.
- the virtual content information may further include the identifier of the tracked object corresponding to the virtual reality information, and the UE may superimpose the acquired virtual reality information onto the generated video frame to be displayed currently in the following manner:
- when the virtual content information includes the identifier of the tracked object, the UE superimposes the acquired virtual reality information onto the video frame to be displayed currently according to the position of the posture image of the tracked object in that video frame;
- step S315 The UE determines whether all video frames in the stored video stream have been acquired; if yes, the augmented reality experience ends; otherwise, step S311 is performed.
- the UE may sample the timestamp of the video frame.
- the UE stores a video frame corresponding to the timestamp obtained by sampling;
- the UE may perform interpolation processing; specifically, the UE obtains the tracked object information and the background information corresponding to the timestamp of the video frame to be displayed currently by interpolating between the stored sampled values;
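One possible (purely illustrative) realisation of the interpolation step: when only sampled timestamps were stored, background information for an intermediate frame can be linearly interpolated between the two neighbouring samples. The tuple layout — a timestamp, a center position, and a deflection angle — is an assumption made for this sketch.

```python
def interpolate_background(samples, t):
    # samples: list of (timestamp, (x, y), angle), sorted by strictly
    # increasing timestamp; returns interpolated (position, angle) for
    # a frame timestamp t that falls within the sampled range
    for (t0, p0, a0), (t1, p1, a1) in zip(samples, samples[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)
            x = p0[0] + f * (p1[0] - p0[0])
            y = p0[1] + f * (p1[1] - p0[1])
            return (x, y), a0 + f * (a1 - a0)
    raise ValueError("timestamp outside the sampled range")
```

The same scheme would apply to the tracked object information, e.g. by interpolating the position of the posture image between samples.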
- when the user starts to experience the augmented reality experience, the UE may further store user operation information, where the user operation information is used to describe the interaction between the user and the UE;
- the user operation information includes an operation type, an operation parameter, and a timestamp, and the time information included in the user operation information is used to indicate a moment when the interaction occurs;
- when the user experiences the augmented reality experience again, the UE may simulate the user's operation according to the operation type and the operation parameter at the time corresponding to the timestamp included in the user operation information;
- the interaction between the user and the UE may include any of the following types of operations: Click: for a click operation, the UE needs to store the coordinates of the clicked position and the timestamp at which the click operation occurs;
- Press and hold: for a press-and-hold operation, the UE needs to store the coordinates of the pressed position, the timestamp at which the operation occurs, and the duration for which the press is held;
- Drag: for a drag operation, the UE needs to store, at a certain frequency, the coordinates of points on the drag path and the timestamp at which each point is reached.
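The stored user operation information and its later replay could be modelled as below. The record layout, the JSON serialisation into the augmented reality context, and the `simulate` callback are illustrative assumptions, not the patent's storage format.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class UserOperation:
    op_type: str    # "click", "hold" or "drag" (operation type)
    timestamp: int  # ms since the experience started (when it occurred)
    params: dict    # operation parameters, per operation type

def record_click(log, t, x, y):
    log.append(UserOperation("click", t, {"x": x, "y": y}))

def record_hold(log, t, x, y, duration):
    log.append(UserOperation("hold", t, {"x": x, "y": y,
                                         "duration": duration}))

def record_drag(log, t, path):
    # path: [(t_i, x_i, y_i)] sampled along the drag at a fixed frequency
    log.append(UserOperation("drag", t, {"path": path}))

def serialize(log):
    # the operation log can be stored inside the augmented reality context
    return json.dumps([asdict(op) for op in log])

def replay(log, now, simulate):
    # on re-experience, fire each stored operation whose original
    # timestamp has been reached, in chronological order
    for op in sorted(log, key=lambda o: o.timestamp):
        if op.timestamp <= now:
            simulate(op)
```

During replay, `simulate` would dispatch each record to the same handler the live touch event would have reached, reproducing the interaction.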
- the UE may send the augmented reality context to other UEs, so that other users may also experience the augmented reality experience; in this way, the user's augmented reality experience can be shared with other users.
- In the method for implementing augmented reality provided by this embodiment, when a user experiences an augmented reality experience, the UE stores the virtual content information and the captured video stream in the augmented reality context; after the augmented reality experience ends, when the user needs to experience the augmented reality experience again, the UE acquires the virtual reality information according to the stored virtual content information and superimposes the acquired virtual reality information on each video frame in the video stream for display, so that the user can experience the same augmented reality experience again at any time.
- Further, when the UE captures video frames that include the posture image of the tracked object, the UE stores the posture image of the tracked object separately from the background image: the posture image of the tracked object is stored by storing the location information of the posture image in the captured video frame and the homography matrix, and the background image is stored by storing the location information of the captured video frame in the panorama, thereby saving the storage resources of the UE; in addition, the UE may superimpose the acquired virtual reality information onto the video frame to be displayed currently according to the position of the posture image of the tracked object in that video frame, so that the user can have a better augmented reality experience.
- FIG. 4 is a flowchart of another method for implementing augmented reality according to an embodiment of the present invention.
- the method is applied to a scene of a captured video stream that does not include a pose image of a tracked object.
- the video frames in the video stream captured by the UE are used as background images, and the method includes: S401: When the user wants to experience the augmented reality experience, the UE sends the information about the location of the UE to the server side.
- the UE may obtain the information about the location of the UE by using a positioning device; for example, the information about the location of the UE may be obtained by using a GPS (Global Positioning System) device;
- the UE receives virtual content information sent by the server, where the virtual content information includes virtual reality information or storage location information of virtual reality information.
- the virtual content information is obtained by the server side according to the information about the location of the UE; specifically, the server side stores a correspondence between location information and virtual content information, and after obtaining the information about the location of the UE, the server side obtains the virtual content information according to that information;
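The server-side correspondence between location information and virtual content information might, purely as an illustration, be realised as a nearest-neighbour lookup over a stored table. The table contents, URLs, identifiers, and the 5 km distance threshold below are all hypothetical.

```python
import math

# hypothetical server-side table mapping locations to virtual content info
CONTENT_TABLE = [
    {"lat": 39.9042, "lon": 116.4074,
     "content": {"id": "museum-guide",
                 "storage_url": "http://example.com/ar/museum"}},
    {"lat": 31.2304, "lon": 121.4737,
     "content": {"id": "mall-ads",
                 "storage_url": "http://example.com/ar/mall"}},
]

def lookup_virtual_content(lat, lon, max_km=5.0):
    # return the virtual content info whose stored location is nearest
    # to the reported UE location, if it lies within max_km kilometres
    def dist_km(a_lat, a_lon, b_lat, b_lon):
        # equirectangular approximation, adequate for short distances
        kx = 111.32 * math.cos(math.radians((a_lat + b_lat) / 2))
        dx = (a_lon - b_lon) * kx
        dy = (a_lat - b_lat) * 111.32
        return math.hypot(dx, dy)
    best = min(CONTENT_TABLE,
               key=lambda e: dist_km(lat, lon, e["lat"], e["lon"]))
    if dist_km(lat, lon, best["lat"], best["lon"]) <= max_km:
        return best["content"]
    return None
```

Consistent with the text above, the returned record may carry either the virtual reality information itself or, as here, its storage location information.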
- the UE stores the virtual content information.
- the UE may store the virtual content information in an augmented reality context
- S404 The UE captures a video frame.
- the UE may sequentially capture video frames according to a frame rate of the captured video stream
- when the UE superimposes the virtual reality information acquired according to the virtual content information onto the captured video frame for display, the augmented reality experience starts;
- S405 The UE updates the panorama by using the captured video frame as the background image, and stores a correspondence between the timestamp of the captured video frame and the background information.
- the video frame captured by the UE is directly regarded as a background image.
- the UE may store a correspondence between a timestamp of the captured video frame and the background information in the augmented reality context;
- step S406 The UE determines whether the augmented reality experience is over, and if so, step S407 is performed, otherwise, step S404 is performed; It should be noted that, when the augmented reality experience ends, the UE stops capturing video frames;
- the UE may store the panorama in the augmented reality context.
- S407 For a detailed description of this step, refer to step S308; details are not described herein again.
- S408 For a detailed description of this step, refer to step S309; details are not described herein again.
- S409 The UE acquires the stored panorama.
- S410 The UE acquires a timestamp of a video frame to be displayed, and obtains the current video frame to be displayed according to the obtained time stamp.
- after acquiring the timestamp of the video frame to be displayed, the UE obtains the background information corresponding to the acquired timestamp, and, according to the position information and the deflection angle in the obtained background information, crops the acquired panorama at the display resolution to generate the video frame to be displayed currently;
- the UE may sequentially obtain the timestamp of the video frame to be displayed in sequence according to the sequence in which the video frames are captured;
- S411 The UE superimposes the acquired virtual reality information on the generated video frame to be displayed, and displays the superimposed video frame.
- the virtual content information may further include location information corresponding to the virtual reality information, and the background information may further include the information about the location of the UE; the UE may superimpose the acquired virtual reality information onto the generated video frame to be displayed in the following manner: the UE superimposes the acquired virtual reality information according to the information about the location of the UE included in the background information and the location information included in the virtual content information.
- S412 The UE determines whether all video frames in the stored video stream have been acquired. If yes, the augmented reality experience ends; otherwise, step S410 is performed.
- the UE may sample the timestamp of the video frame.
- the UE stores a video frame corresponding to the timestamp obtained by sampling;
- the UE may perform interpolation processing; specifically, the UE obtains the background information corresponding to the timestamp of the video frame to be displayed currently by interpolating between the stored sampled values.
- when the user starts to experience the augmented reality experience, the UE may further store user operation information, where the user operation information is used to describe the interaction between the user and the UE.
- the user operation information includes an operation type, an operation parameter, and a timestamp, and the time information included in the user operation information is used to indicate a moment when the interaction occurs; when the user experiences the augmented reality experience again, the UE may At the time corresponding to the time stamp included in the user operation information, the operation of the user is simulated according to the operation type and the operation parameter.
- the UE may send the augmented reality context to other UEs, so that other users may also experience the augmented reality experience; in this way, the user's augmented reality experience can be shared with other users.
- In the method for implementing augmented reality provided by this embodiment, when a user experiences an augmented reality experience, the UE stores the virtual content information and the captured video stream in the augmented reality context; after the augmented reality experience ends, when the user needs to experience the augmented reality experience again, the UE acquires the virtual reality information according to the stored virtual content information and superimposes the acquired virtual reality information on each video frame in the video stream for display, so that the user can experience the same augmented reality experience again at any time.
- Further, when the video frames captured by the UE do not include the posture image of a tracked object, the UE uses each captured video frame as a background image, and stores the background image by storing the location information of the captured video frame in the panorama, thereby saving the storage resources of the UE; in addition, the UE may superimpose the acquired virtual reality information onto the video frame to be displayed currently according to the information about the location of the UE included in the background information and the location information corresponding to the virtual reality information included in the virtual content information, so that the user can have a better augmented reality experience.
- FIG. 5 is a structural diagram of a user equipment according to an embodiment of the present invention, where the user equipment includes:
- the receiving unit 501 is configured to receive virtual content information returned from the server side;
- a video stream capturing unit 502 configured to capture a video stream
- the storage unit 503 is configured to store an augmented reality context when the user experiences an augmented reality experience, where the augmented reality context includes the virtual content information received by the receiving unit 501 and the video stream captured by the video stream capturing unit 502 ;
- the virtual reality information acquiring unit 504 is configured to acquire virtual reality information according to the virtual content information stored by the storage unit 503 when the user needs to experience the augmented reality experience again;
- the video frame acquiring unit 505 is configured to sequentially acquire the video frames in the video stream stored by the storage unit 503 according to the sequence in which the video frames were captured;
- the superimposing unit 506 is configured to superimpose the virtual reality information acquired by the virtual reality information acquiring unit 504 on the video frame acquired by the video frame acquiring unit 505;
- the display unit 507 is configured to display the superimposed video frame of the superimposing unit 506.
- the video frame acquiring unit 505 can sequentially acquire video frames in the video stream according to the frame rate of the video playing.
- In the user equipment provided by this embodiment, the storage unit stores the virtual content information received by the receiving unit and the video stream captured by the video stream capturing unit; after the augmented reality experience ends, the superimposing unit superimposes the virtual reality information acquired by the virtual reality information acquiring unit on the video frame acquired by the video frame acquiring unit, and the display unit displays the video frame superimposed by the superimposing unit, so that the user can experience the same augmented reality experience again at any time after experiencing the augmented reality experience.
- when the tracked object needs to be enhanced, the tracked object exists in the real world where the user is located, and the video stream captured by the video stream capturing unit includes the posture image of the tracked object.
- the video stream capturing unit 502 may be specifically configured to sequentially capture video frames;
- the storage unit 503 may be specifically configured to store a correspondence between the timestamp of the video frame captured by the video stream capturing unit 502 and the tracked object information, remove the posture image of the tracked object from the captured video frame, update the panorama according to the video frame after the posture image is removed, and store a correspondence between the timestamp and the background information;
- the storage unit 503 may further store a standard image of the tracked object when the video stream capturing unit 502 captures a video frame, and store the panorama when the video stream capturing unit 502 stops capturing video frames; wherein the timestamp indicates the time at which the video frame is captured, the tracked object information includes the location information of the posture image in the captured video frame, and the background information includes the location information of the captured video frame in the panorama;
- the tracked object information may further include the homography matrix of the posture image on the captured video frame, and the background information may further include the deflection angle of the captured video frame relative to the panorama;
- the video frame acquiring unit 505 may be specifically configured to: when the user needs to experience the augmented reality experience again, sequentially acquire the timestamp of the video frame to be displayed according to the sequence in which the video frames were captured; obtain, according to the acquired timestamp, the tracked object information and the background information stored by the storage unit 503 corresponding to the acquired timestamp; perform an affine transformation on the acquired standard image according to the homography matrix included in the obtained tracked object information, to obtain the posture image of the tracked object; crop the acquired panorama at the display resolution according to the position information and the deflection angle included in the obtained background information, to obtain the background image; and superimpose the obtained posture image on the cropped background image according to the position information included in the tracked object information, to generate the video frame to be displayed currently;
- the virtual content information received by the receiving unit 501 may include the identifier of the tracked object corresponding to the virtual reality information; the superimposing unit 506 may be specifically configured to: when the virtual content information includes the identifier of the tracked object, superimpose the virtual reality information acquired by the virtual reality information acquiring unit 504 onto the video frame to be displayed generated by the video frame acquiring unit 505, according to the position of the posture image of the tracked object in that video frame;
- the user equipment may further include a sending unit, where the sending unit may be configured to send information indicating the tracked object to the server side before the receiving unit 501 receives the virtual content information returned from the server side, so that the receiving unit 501 receives the virtual content information, where the virtual content information is obtained by the server side according to the information indicating the tracked object, and the virtual content information may include the virtual reality information or the storage location information of the virtual reality information;
- the virtual reality information acquiring unit 504 may be specifically configured to directly acquire the virtual reality information when the virtual content information received by the receiving unit 501 includes the virtual reality information;
- or, when the virtual content information received by the receiving unit 501 includes the storage location information of the virtual reality information, acquire the virtual reality information according to the storage location information;
- when the video stream captured by the video stream capturing unit does not include a posture image of a tracked object, the video stream capturing unit 502 may be specifically configured to sequentially capture video frames;
- the storage unit 503 may be specifically configured to update a panoramic image according to the video frame captured by the video stream capturing unit 502, and store a correspondence between a timestamp of the captured video frame and background information;
- the background information may also include the deflection angle of the captured video frame relative to the panorama;
- the video frame acquiring unit may be specifically configured to sequentially acquire the timestamp of the video frame to be displayed currently according to the sequence in which the video frames were captured, obtain the background information corresponding to the acquired timestamp, and, according to the position information and the deflection angle in the obtained background information, crop the acquired panorama at the display resolution to generate the video frame to be displayed currently;
- the virtual content information received by the receiving unit 501 may include location information corresponding to the virtual reality information, and the background information may further include information about a location of the user equipment, where the superimposing unit 506 may Specifically, the virtual reality information acquired by the virtual reality information acquiring unit 504 is superimposed on the video frame according to the information about the location of the user equipment included in the background information and the location information included in the virtual content information.
- the user equipment may further include a sending unit, where the sending unit may be configured to send the information about the location of the user equipment to the server side before the receiving unit 501 receives the virtual content information returned from the server side, so that the receiving unit 501 receives the virtual content information, where the virtual content information is obtained by the server side by searching according to the information about the location of the user equipment, and the virtual content information may include the virtual reality information or the storage location information of the virtual reality information;
- the virtual reality information acquiring unit 504 may be specifically configured to directly acquire the virtual reality information when the virtual content information received by the receiving unit 501 includes the virtual reality information; or, when the virtual content information received by the receiving unit 501 includes the storage location information of the virtual reality information, acquire the virtual reality information according to the storage location information.
- the augmented reality context stored by the storage unit 503 may further include user operation information, where the user operation information includes an operation type, an operation parameter, and a timestamp;
- in this case, the user equipment may further comprise:
- the user operation simulation unit is configured to simulate the operation of the user according to the operation type and the operation parameter at a time corresponding to the time stamp included in the user operation information.
- FIG. 6 is a structural diagram of another user equipment according to an embodiment of the present invention. As shown in FIG. 6, the user equipment includes at least one processor 601, a communication bus 602, a memory 603, and at least one communication interface 604.
- the communication bus 602 is configured to implement a connection and communication between the components, and the communication interface 604 is configured to connect and communicate with an external device.
- the memory 603 is configured to store program code that needs to be executed.
- the program code may include a receiving unit 6031, a video stream capturing unit 6032, a storage unit 6033, a virtual reality information acquiring unit 6034, a video frame acquiring unit 6035, a superimposing unit 6036, and a display unit 6037; the processor 601 is configured to execute the units stored in the memory 603, and when the units are executed by the processor 601, the following functions are implemented:
- the receiving unit 6031 is configured to receive virtual content information returned from the server side;
- the video stream capturing unit 6032 is configured to capture a video stream.
- the storage unit 6033 is configured to store an augmented reality context when the user experiences an augmented reality experience, where the augmented reality context includes the virtual content information received by the receiving unit 6031 and the captured by the video stream capturing unit 6032 Video stream
- the virtual reality information acquiring unit 6034 is configured to acquire, according to the virtual content information stored by the storage unit 6033, the virtual reality information, the video frame acquiring unit 6035, when the user needs to experience the augmented reality experience again. Obtaining video frames in the video stream stored by the storage unit 6033 in sequence according to a sequence in which video frames are captured;
- the superimposing unit 6036 is configured to superimpose the virtual reality information acquired by the virtual reality information acquiring unit 6034 on the video frame acquired by the video frame acquiring unit 6035;
- the display unit 6037 is configured to display the superimposed video frame of the superimposing unit 6036.
- the video frame acquiring unit 6035 may sequentially acquire video frames in the video stream according to the frame rate of the video playing.
- In the user equipment provided by this embodiment, the storage unit stores the virtual content information received by the receiving unit and the video stream captured by the video stream capturing unit; after the augmented reality experience ends, the superimposing unit superimposes the virtual reality information acquired by the virtual reality information acquiring unit on the video frame acquired by the video frame acquiring unit, and the display unit displays the video frame superimposed by the superimposing unit, so that the user can experience the same augmented reality experience again at any time after experiencing the augmented reality experience.
- when the tracked object needs to be enhanced, the tracked object exists in the real world where the user is located, and the video stream captured by the video stream capturing unit includes the posture image of the tracked object; the video stream capturing unit 6032 may be specifically configured to sequentially capture video frames;
- the storage unit 6033 may be specifically configured to store a correspondence between the timestamp of the video frame captured by the video stream capturing unit 6032 and the tracked object information, remove the posture image of the tracked object from the captured video frame, update the panorama according to the video frame after the posture image is removed, and store a correspondence between the timestamp and the background information;
- the storage unit 6033 may further store a standard image of the tracked object when the video stream capturing unit 6032 captures a video frame, and store the panorama when the video stream capturing unit 6032 stops capturing video frames; wherein the timestamp indicates the time at which the video frame is captured, the tracked object information includes the location information of the posture image in the captured video frame, and the background information includes the location information of the captured video frame in the panorama;
- the tracked object information may further include the homography matrix of the posture image on the captured video frame, and the background information may further include the deflection angle of the captured video frame relative to the panorama;
- the video frame acquiring unit 6035 may be specifically configured to: when the user needs to experience the augmented reality experience again, sequentially acquire the timestamp of the video frame to be displayed according to the sequence in which the video frames were captured; obtain, according to the acquired timestamp, the tracked object information and the background information stored by the storage unit 6033 corresponding to the acquired timestamp; perform an affine transformation on the acquired standard image according to the homography matrix included in the tracked object information, to obtain the posture image of the tracked object; crop the acquired panorama at the display resolution according to the position information and the deflection angle included in the background information, to obtain the background image; and superimpose the obtained posture image on the cropped background image according to the position information included in the tracked object information, to generate the video frame to be displayed currently;
- the virtual content information received by the receiving unit 6031 may include the identifier of the tracked object corresponding to the virtual reality information; the superimposing unit 6036 may be specifically configured to: when the virtual content information includes the identifier of the tracked object, superimpose the virtual reality information acquired by the virtual reality information acquiring unit 6034 onto the video frame to be displayed generated by the video frame acquiring unit 6035, according to the position of the posture image of the tracked object in that video frame.
- the memory 603 may further include a sending unit.
- when the processor 601 executes the sending unit, the following functions may be implemented:
- the sending unit may be configured to send, before the receiving unit 6031 receives the virtual content information returned from the server side, information indicating the tracked object to the server side, where the information indicating the tracked object includes the posture image of the tracked object or the feature data of the posture image of the tracked object, so that the receiving unit 6031 receives the virtual content information, where the virtual content information is obtained by the server side according to the information indicating the tracked object;
- the virtual content information may further include the virtual reality information or storage location information of the virtual reality information;
- the virtual reality information acquiring unit 6034 may be specifically configured to acquire the virtual reality information directly when the virtual content information received by the receiving unit 6031 includes the virtual reality information, or to acquire the virtual reality information according to the storage location information when the virtual content information received by the receiving unit 6031 includes the storage location information of the virtual reality information;
- the video stream capturing unit 6032 may be specifically configured to sequentially capture video frames;
- the storage unit 6033 may be specifically configured to update a panoramic image according to the video frame captured by the video stream capturing unit 6032, and store a correspondence between a timestamp of the captured video frame and background information;
- the background information may also include a deflection angle of the captured video frame relative to the panorama;
- the video frame acquiring unit may be specifically configured to sequentially acquire the timestamp of the current video frame to be displayed in the order in which the video frames were captured, obtain the background information corresponding to the acquired timestamp, and crop the acquired panorama at the display resolution according to the position information and the deflection angle included in the background information, to generate the current video frame to be displayed;
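The timestamp-to-background lookup and panorama crop just described can be sketched as below. The record fields (`'pos'`, `'angle'`) are illustrative assumptions; the embodiment only states that background information holds the frame's position in the panorama and a deflection angle.

```python
import numpy as np

def background_frame(panorama, background_info, timestamp, size):
    """Generate a background-only frame to display for a given timestamp.
    `background_info` maps a frame timestamp to its stored record. A full
    implementation would also compensate for the stored deflection angle;
    only the positional crop is shown here."""
    info = background_info[timestamp]
    r, c = info['pos']            # position of the frame in the panorama
    h, w = size                   # display resolution
    return panorama[r:r + h, c:c + w]
```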
- the virtual content information received by the receiving unit 6031 may include location information corresponding to the virtual reality information, and the background information may further include information about the location of the user equipment; the superimposing unit 6036 may be specifically configured to superimpose the virtual reality information acquired by the virtual reality information acquiring unit 6034 onto the current video frame to be displayed generated by the video frame acquiring unit 6035, according to the information about the location of the user equipment included in the background information and the location information included in the virtual content information.
- the memory 603 may further include a sending unit.
- when the processor 601 executes the sending unit, the following functions may be implemented:
- the sending unit may be configured to send information about the location of the user equipment to the server side before the receiving unit 6031 receives the virtual content information returned from the server side, so that the receiving unit 6031 receives the virtual content information obtained by the server side according to the information about the location of the user equipment, where the virtual content information may further include the virtual reality information or storage location information of the virtual reality information;
- the virtual reality information acquiring unit 6034 may be specifically configured to acquire the virtual reality information directly when the virtual content information received by the receiving unit 6031 includes the virtual reality information, or to acquire the virtual reality information according to the storage location information when the virtual content information received by the receiving unit 6031 includes the storage location information of the virtual reality information.
- the augmented reality context stored by the storage unit 6033 may further include user operation information, where the user operation information includes an operation type, an operation parameter, and a timestamp;
- the memory 603 may further include a user operation simulation unit, and when the processor 601 executes the user operation simulation unit, the following functions may be implemented:
- the user operation simulation unit may be configured to simulate a user operation according to the operation type and the operation parameter, at the time corresponding to the timestamp included in the user operation information.
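The timed replay of stored user operations can be sketched as follows. The record layout and handler mapping are assumptions for illustration; the embodiment only states that each record holds an operation type, operation parameters, and a timestamp.

```python
import time

def replay_user_operations(operations, handlers):
    """Simulate stored user operations at the moments given by their
    timestamps (seconds relative to the start of the replayed experience).

    operations : list of dicts {'timestamp': float, 'type': str, 'params': dict}
    handlers   : maps an operation type to a callable invoked with the params
    """
    start = time.monotonic()
    for op in sorted(operations, key=lambda o: o['timestamp']):
        # Sleep until the moment corresponding to the stored timestamp.
        delay = op['timestamp'] - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)
        handlers[op['type']](**op['params'])
```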
- with the method for implementing augmented reality and the user equipment provided by the embodiments of the present invention, when the user experiences an augmented reality experience, the UE stores the virtual content information and the captured video stream as an augmented reality context; after the augmented reality experience ends, when the user needs to experience it again, the UE acquires virtual reality information according to the stored virtual content information and superimposes the acquired virtual reality information onto each video frame in the stored video stream for display, so that the user can experience the same augmented reality experience again at any time.
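The store-then-replay structure described above can be summarized in a minimal sketch. The class and field names are illustrative assumptions; the patent only specifies that the context holds the virtual content information and per-frame records keyed by timestamp.

```python
class ARContext:
    """Minimal sketch of the augmented reality context: the UE records the
    virtual content information received from the server together with
    per-frame information of the captured video stream, so the experience
    can be replayed later."""

    def __init__(self):
        self.virtual_content = None   # virtual content info from the server
        self.frames = []              # one record per captured video frame

    def store_virtual_content(self, info):
        self.virtual_content = info

    def store_frame(self, timestamp, tracked_object_info, background_info):
        self.frames.append({
            'timestamp': timestamp,
            'tracked_object': tracked_object_info,  # position + homography
            'background': background_info,          # position in panorama + angle
        })

    def frames_in_capture_order(self):
        # For replay, frames are acquired in the order they were captured.
        return sorted(self.frames, key=lambda f: f['timestamp'])
```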
- when the UE captures a video frame that includes the posture image of the tracked object, the UE stores the posture image of the tracked object separately from the background image: it stores the position information and the homography matrix of the posture image in the captured video frame, and stores the background image by storing the position information of the captured video frame in the panorama, thereby saving storage resources of the UE; in addition, the UE may superimpose the acquired virtual reality information according to the position of the posture image of the tracked object in the current video frame to be displayed.
- alternatively, the UE uses the captured video frame as a background image and stores it by storing the position information of the captured video frame in the panorama, thereby saving storage resources of the UE; the UE may then superimpose the acquired virtual reality information onto the current video frame to be displayed according to the information about the location of the UE included in the background information and the location information corresponding to the virtual reality information included in the virtual content information, so that the user can have a better augmented reality experience.
- Computer readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one location to another.
- a storage medium may be any available media that can be accessed by a computer.
- computer readable media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
- if the software is transmitted from a website, server, or other remote source using coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of the medium.
- disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer readable media.
Abstract
The present invention relates to the field of information technology, and in particular to a method for implementing augmented reality and user equipment. When a user experiences an augmented reality experience, a UE stores virtual content information and a captured video stream as the augmented reality context; after the augmented reality experience has ended, when the user needs to experience it again, the UE acquires virtual reality information according to the stored virtual content information and superimposes the acquired virtual reality information onto each video frame in the video stream for display, so that the user can experience the same augmented reality experience again at any time.
Description
A method for implementing augmented reality, and user equipment

TECHNICAL FIELD

The present invention relates to the field of information technology (IT), and in particular to a method and user equipment for implementing augmented reality.
BACKGROUND
Augmented Reality (AR) technology is an emerging human-computer interaction technology developed on the basis of virtual reality technology. With the aid of visualization technology, it applies virtual reality information to the real world: virtual reality information that cannot be obtained directly from the real world is superimposed on images of the real world, and users can interact with the augmented reality application, which expands the user's perception of the real world. With the popularity of intelligent user equipment (UE), AR technology has developed rapidly in recent years.
In existing AR applications, the user equipment can capture a video stream through a camera, use the captured video stream as real-world information, obtain virtual reality information related to that real-world information from the server side, superimpose the acquired virtual reality information on the captured video stream, and display the superimposed video stream.
Specifically, after capturing the video stream, the UE may send a request for acquiring virtual reality information to the server side, the request including a key frame captured by the UE or information about the location of the UE, where the key frame includes a posture image of a tracked object. After obtaining the virtual reality information according to the key frame captured by the UE or the information about the location of the UE, the server side sends the virtual reality information to the UE, and the UE superimposes the received virtual reality information onto each frame of the captured video stream for display. The virtual reality information received by the UE is related to the tracked object in the real world or to the location of the UE. The AR experience begins when the UE superimposes the received virtual reality information onto the captured video stream.
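The request flow just described — the UE sends either a captured key frame or its location, and the server returns the related virtual reality information — can be sketched as below. The message layout and the `send` callable are assumptions for illustration, not part of the disclosed protocol.

```python
def request_virtual_reality_info(send, key_frame=None, location=None):
    """Sketch of the prior-art request: the UE includes either a key frame
    (containing the tracked object's posture image) or its own location,
    and receives the related virtual reality information in return."""
    request = {'type': 'get_virtual_reality_info'}
    if key_frame is not None:
        request['key_frame'] = key_frame
    else:
        request['location'] = location
    return send(request)
```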
Through analysis of the prior art, the inventors found that it has at least the following problem: the virtual reality information received by the UE is tied to the real world — specifically, to the tracked object in the real world or to the location of the UE. After the AR experience ends, if the user wants to experience the same AR experience again, the user must return to the original real-world scene. For example, the user is at location A; when the user uses the UE to query restaurants near location A, the server side returns information about restaurants near location A, and the UE superimposes the obtained restaurant information onto the captured video frames. If the user later wants to experience the same AR experience, the user must return to location A and capture the same video frames again.
SUMMARY

To overcome the deficiencies of the prior art, an object of the embodiments of the present invention is to provide a method and user equipment for implementing augmented reality, so that after an AR experience ends, the user can experience the same AR experience again at any time.
In a first aspect, an embodiment of the present invention provides a method for implementing augmented reality, including:

the user equipment stores an augmented reality context of a user's augmented reality experience, the augmented reality context including virtual content information received by the user equipment from the server side and a video stream captured by the user equipment;

when the user needs to experience the augmented reality experience again, the user equipment acquires virtual reality information according to the stored virtual content information;

the user equipment sequentially acquires the video frames of the stored video stream in the order in which they were captured, superimposes the acquired virtual reality information onto the acquired video frames, and displays the superimposed video frames.
In a first possible implementation of the first aspect, the user equipment sequentially captures video frames, stores a correspondence between the timestamp of each captured video frame and tracked object information, removes the posture image of the tracked object from the captured video frame, updates a panorama according to the video frame with the posture image removed, and stores a correspondence between the timestamp and background information;

the user equipment stores a standard image of the tracked object when capturing video frames, and stores the panorama when the user equipment stops capturing video frames;

where the tracked object information includes the position information of the posture image in the captured video frame, and the background information includes the position information of the captured video frame in the panorama.
In conjunction with the first possible implementation of the first aspect, in a second possible implementation of the first aspect, the tracked object information further includes a homography matrix of the posture image on the captured video frame, and the background information further includes a deflection angle of the captured video frame relative to the panorama.
In conjunction with the second possible implementation of the first aspect, in a third possible implementation of the first aspect, the user equipment acquires the stored standard image and the panorama;

the user equipment sequentially acquires the timestamp of the current video frame to be displayed in the order in which the video frames were captured; obtains, according to the acquired timestamp, the tracked object information and the background information corresponding to that timestamp; performs an affine transformation on the acquired standard image according to the homography matrix included in the tracked object information, to obtain the posture image of the tracked object; crops the acquired panorama at the display resolution according to the position information and the deflection angle included in the background information, to obtain a background image; and superimposes the obtained posture image onto the cropped background image according to the position information included in the tracked object information, to generate the current video frame to be displayed.
In conjunction with the third possible implementation of the first aspect, in a fourth possible implementation of the first aspect, the virtual content information includes an identifier of the tracked object corresponding to the virtual reality information, and superimposing the acquired virtual reality information onto the acquired video frame includes: when the virtual content information includes the identifier of the tracked object, the user equipment superimposes the acquired virtual reality information onto the current video frame to be displayed according to the position of the posture image of the tracked object in that video frame.
In a fifth possible implementation of the first aspect, the user equipment sequentially captures video frames, updates a panorama according to the captured video frames, and stores a correspondence between the timestamp of each captured video frame and background information;

when the user equipment stops capturing video frames, the user equipment stores the panorama; where the background information includes the position information of the captured video frame in the panorama. In conjunction with the fifth possible implementation of the first aspect, in a sixth possible implementation of the first aspect, the background information further includes a deflection angle of the captured video frame relative to the panorama.
In conjunction with the sixth possible implementation of the first aspect, in a seventh possible implementation of the first aspect, the user equipment acquires the stored panorama;

the user equipment sequentially acquires the timestamp of the current video frame to be displayed in the order in which the video frames were captured, obtains the background information corresponding to the acquired timestamp, and crops the acquired panorama at the display resolution according to the position information and the deflection angle included in the background information, to generate the current video frame to be displayed.
In conjunction with the seventh possible implementation of the first aspect, in an eighth possible implementation of the first aspect, the virtual content information includes location information corresponding to the virtual reality information, and the background information further includes information about the location of the user equipment; superimposing the acquired virtual reality information onto the acquired video frame includes:

the user equipment superimposes the acquired virtual reality information onto the current video frame to be displayed according to the information about the location of the user equipment included in the background information and the location information included in the virtual content information.
In a second aspect, an embodiment of the present invention provides user equipment, including:

a receiving unit, configured to receive virtual content information returned from the server side;

a video stream capturing unit, configured to capture a video stream;

a storage unit, configured to store an augmented reality context of a user's augmented reality experience, the augmented reality context including the virtual content information received by the receiving unit and the video stream captured by the video stream capturing unit;

a virtual reality information acquiring unit, configured to acquire virtual reality information according to the virtual content information stored by the storage unit when the user needs to experience the augmented reality experience again;

a video frame acquiring unit, configured to sequentially acquire the video frames of the video stream stored by the storage unit in the order in which they were captured;

a superimposing unit, configured to superimpose the virtual reality information acquired by the virtual reality information acquiring unit onto the video frames acquired by the video frame acquiring unit;

a display unit, configured to display the video frames superimposed by the superimposing unit.
In a first possible implementation of the second aspect, the video stream capturing unit is specifically configured to sequentially capture video frames;

the storage unit is specifically configured to store a correspondence between the timestamp of each video frame captured by the video stream capturing unit and tracked object information, remove the posture image of the tracked object from the captured video frame, update a panorama according to the video frame with the posture image removed, and store a correspondence between the timestamp and background information; and

to store a standard image of the tracked object when the video stream capturing unit captures video frames, and to store the panorama when the video stream capturing unit stops capturing video frames;

where the tracked object information includes the position information of the posture image in the captured video frame, and the background information includes the position information of the captured video frame in the panorama.
In conjunction with the first possible implementation of the second aspect, in a second possible implementation of the second aspect, the tracked object information further includes a homography matrix of the posture image on the captured video frame, and the background information further includes a deflection angle of the captured video frame relative to the panorama.
In conjunction with the second possible implementation of the second aspect, in a third possible implementation of the second aspect, the video frame acquiring unit is specifically configured to acquire the standard image and the panorama stored by the storage unit; and

to sequentially acquire the timestamp of the current video frame to be displayed in the order in which the video frames were captured; obtain, according to the acquired timestamp, the tracked object information and the background information stored by the storage unit corresponding to that timestamp; perform an affine transformation on the acquired standard image according to the homography matrix included in the tracked object information, to obtain the posture image of the tracked object; crop the acquired panorama at the display resolution according to the position information and the deflection angle included in the background information, to obtain a background image; and superimpose the obtained posture image onto the cropped background image according to the position information included in the tracked object information, to generate the current video frame to be displayed.
In conjunction with the third possible implementation of the second aspect, in a fourth possible implementation of the second aspect, the virtual content information received by the receiving unit includes the identifier of the tracked object corresponding to the virtual reality information, and the superimposing unit is specifically configured to, when the virtual content information includes the identifier of the tracked object, superimpose the virtual reality information acquired by the virtual reality information acquiring unit onto the current video frame to be displayed generated by the video frame acquiring unit, according to the position of the posture image of the tracked object in that video frame.
In a fifth possible implementation of the second aspect, the video stream capturing unit is specifically configured to sequentially capture video frames;

the storage unit is specifically configured to update a panorama according to the video frames captured by the video stream capturing unit, and to store a correspondence between the timestamp of each captured video frame and background information; and

to store the panorama when the video stream capturing unit stops capturing video frames; where the background information includes the position information of the captured video frame in the panorama. In conjunction with the fifth possible implementation of the second aspect, in a sixth possible implementation of the second aspect, the background information further includes a deflection angle of the captured video frame relative to the panorama.
In conjunction with the sixth possible implementation of the second aspect, in a seventh possible implementation of the second aspect, the video frame acquiring unit is specifically configured to acquire the stored panorama; and

to sequentially acquire the timestamp of the current video frame to be displayed in the order in which the video frames were captured, obtain the background information corresponding to the acquired timestamp, and crop the acquired panorama at the display resolution according to the position information and the deflection angle included in the background information, to generate the current video frame to be displayed.
In conjunction with the seventh possible implementation of the second aspect, in an eighth possible implementation of the second aspect, the virtual content information received by the receiving unit includes location information corresponding to the virtual reality information, and the background information further includes information about the location of the user equipment; the superimposing unit is specifically configured to superimpose the virtual reality information acquired by the virtual reality information acquiring unit onto the current video frame to be displayed generated by the video frame acquiring unit, according to the information about the location of the user equipment included in the background information and the location information included in the virtual content information.
With the method and user equipment for implementing augmented reality provided by the embodiments of the present invention, when a user experiences an augmented reality experience, the UE stores the virtual content information and the captured video stream as an augmented reality context; after the augmented reality experience ends, when the user needs to experience it again, the UE acquires virtual reality information according to the stored virtual content information and superimposes the acquired virtual reality information onto each video frame of the stored video stream for display, so that the user can experience the same augmented reality experience again at any time.
附图说明 DRAWINGS
为了更清楚地说明本发明实施例的技术方案, 下面将对实施例或现有技术 描述中所需要使用的附图作筒单地介绍, 显而易见地, 下面描述中的附图仅仅 是本发明的一些实施例, 对于本领域普通技术人员来讲, 在不付出创造性劳动 的前提下, 还可以根据这些附图获取其他的附图。 In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings to be used in the embodiments or the description of the prior art will be briefly described below. Obviously, the drawings in the following description are only the present invention. For some embodiments, other drawings may be obtained from those skilled in the art without any inventive effort.
FIG. 1 is an architectural diagram of a system for implementing augmented reality according to an embodiment of the present invention;

FIG. 2 is a flowchart of a method for implementing augmented reality according to an embodiment of the present invention;

FIG. 3 is a flowchart of another method for implementing augmented reality according to an embodiment of the present invention;

FIG. 4 is a flowchart of still another method for implementing augmented reality according to an embodiment of the present invention;

FIG. 5 is a structural diagram of a user equipment according to an embodiment of the present invention;

FIG. 6 is a structural diagram of another user equipment according to an embodiment of the present invention.

DETAILED DESCRIPTION
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
As shown in FIG. 1, an architectural diagram of a system for implementing augmented reality is provided according to an embodiment of the present invention. When a user determines that an augmented reality experience is required, the UE sends a request for acquiring virtual content information to the server side, where the request includes information identifying a tracked object or information about the location of the UE, and the information identifying the tracked object includes a pose image of the tracked object or feature data of the pose image of the tracked object. The server side sends the virtual content information to the UE according to the request for acquiring the virtual content information. After receiving the virtual content information, the UE stores the virtual content information and the video stream captured by the UE. After the augmented reality experience ends, if the user determines that the augmented reality experience needs to be undergone again, the UE acquires virtual reality information according to the stored virtual content information, sequentially acquires the stored video frames in the video stream in the order in which the video frames were captured, superimposes the acquired virtual reality information onto each acquired video frame, and displays the superimposed video frames.
The embodiments of the present invention do not limit the type of the UE. By way of example and not limitation, the UE may include a smartphone, a personal computer, a tablet computer, glasses with an augmented reality function, or another terminal with an augmented reality function.
It should be noted that the embodiments of the present invention do not limit the composition of the server side. By way of example and not limitation, the server side consists of at least one server, and the servers on the server side may include a presentation layer server, an application layer server, and a database server.
Based on the system architecture diagram shown in FIG. 1, an embodiment of the present invention provides a method for implementing augmented reality. As shown in FIG. 2, the method includes:
S201: The UE stores an augmented reality context of a user undergoing an augmented reality experience, where the augmented reality context includes the virtual content information received by the UE from the server side and the video stream captured by the UE.
It should be noted that the stored video stream is a series of consecutive video frames. The UE uses the video stream as the real world information of the user undergoing the augmented reality experience, and the virtual content information includes virtual reality information or storage location information of the virtual reality information. The augmented reality experience starts when the UE superimposes the acquired virtual reality information onto a captured video frame for display.
When a tracked object needs to be augmented, that is, when the video stream captured by the UE includes a pose image of the tracked object, the UE may store the pose image of the tracked object and the background image obtained after removing the pose image separately. When the current position in the real environment needs to be augmented, that is, when the video stream captured by the UE does not include a pose image of the tracked object, the captured video frames may be stored directly as background images. For the background images in the video frames captured by the UE, the UE may merge the background images in the captured video frames to generate a panorama, and the UE may restore a background image according to its position in the panorama.
Specifically, the UE may store the captured video stream in either of the following manners.

Manner 1, where the video stream captured by the UE includes a pose image of the tracked object: The UE sequentially captures video frames, stores the correspondence between the timestamp of each captured video frame and the tracked object information, removes the pose image of the tracked object from the captured video frame, updates the panorama according to the video frame from which the pose image has been removed, and stores the correspondence between the timestamp and the background information. The UE stores a standard image of the tracked object while capturing video frames, and stores the panorama when the UE stops capturing video frames.
The timestamp indicates the moment at which a video frame is captured. By way of example and not limitation, the timestamp may be the time at which the video frame is captured relative to the start of the augmented reality experience. The tracked object information includes the position information of the pose image in the captured video frame, and the background information includes the position information of the captured video frame in the panorama.
The tracked object information may further include a homography matrix of the pose image on the captured video frame, and the background information may further include a deflection angle by which the captured video frame is deflected relative to the panorama.
It should be noted that the tracked object refers to an object to be tracked in the real world, for example, a toy car in the current real world. The pose image of the tracked object refers to the image of the tracked object in a captured video frame; for example, if there is a toy car in the current real world, then when a video frame is captured, the image of the toy car in the captured video frame is the pose image of the toy car. The standard image of the tracked object refers to an image captured with the field of view perpendicular to the horizontal plane while the tracked object is placed horizontally on the horizontal plane.
Manner 2, where the video stream captured by the UE does not include a pose image of the tracked object: The UE sequentially captures video frames, updates the panorama according to the captured video frames, and stores the correspondence between the timestamp of each captured video frame and the background information. When the UE stops capturing video frames, the UE stores the panorama.
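For illustration only, the two storage manners described above can be sketched as a context object keyed by timestamp. All type and field names below are hypothetical assumptions; the embodiments do not prescribe any particular data layout.

```python
from dataclasses import dataclass, field

@dataclass
class TrackedObjectInfo:
    position: tuple   # center of the pose image in the captured frame
    homography: list  # 3x3 homography of the pose image on the frame

@dataclass
class BackgroundInfo:
    position: tuple        # center of the captured frame in the panorama
    deflection_deg: float  # rotation of the frame relative to the panorama

@dataclass
class ARContext:
    virtual_content: dict = field(default_factory=dict)
    tracked_by_ts: dict = field(default_factory=dict)     # timestamp -> TrackedObjectInfo
    background_by_ts: dict = field(default_factory=dict)  # timestamp -> BackgroundInfo
    standard_image: object = None  # stored once while capturing (Manner 1 only)
    panorama: object = None        # stored when capture stops

    def store_frame(self, ts, tracked=None, background=None):
        # Manner 1 stores both correspondences; Manner 2 stores only the
        # timestamp-to-background-information correspondence.
        if tracked is not None:
            self.tracked_by_ts[ts] = tracked
        if background is not None:
            self.background_by_ts[ts] = background

    def timestamps(self):
        # Playback order is the capture order, recovered from the timestamps.
        return sorted(self.background_by_ts)
```

The sorted timestamps give the order in which frames are later regenerated for display.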
S202: When the user needs to undergo the augmented reality experience again, the UE acquires virtual reality information according to the stored virtual content information.
The UE may acquire the virtual reality information in the following manner:
If the virtual content information includes the virtual reality information, the user equipment may directly acquire the virtual reality information; or
If the virtual content information includes the storage location information of the virtual reality information, the user equipment may acquire the virtual reality information according to the storage location information. By way of example and not limitation, the virtual content information contains a URI (Uniform Resource Identifier) of the virtual reality information, and the UE may acquire the virtual reality information according to the URI of the virtual reality information.
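The two acquisition cases can be sketched minimally as follows. The dictionary keys and the `fetch_by_uri` callback are illustrative assumptions; the embodiments do not fix a retrieval protocol.

```python
def acquire_virtual_reality_info(virtual_content, fetch_by_uri):
    """Return the virtual reality information carried by the virtual content
    information, either directly or via its storage location (e.g. a URI).

    `virtual_content` is a hypothetical dict; `fetch_by_uri` is a
    caller-supplied retrieval function.
    """
    if "virtual_reality_info" in virtual_content:
        # Case 1: the virtual reality information itself is embedded.
        return virtual_content["virtual_reality_info"]
    # Case 2: only the storage location information (a URI) is embedded.
    return fetch_by_uri(virtual_content["uri"])
```

With a dictionary standing in for remote storage, `acquire_virtual_reality_info({"uri": u}, store.get)` resolves the indirect case.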
S203: The UE sequentially acquires the stored video frames in the video stream in the order in which the video frames were captured, superimposes the acquired virtual reality information onto each acquired video frame, and displays the superimposed video frames.
It should be noted that the UE may determine the order in which the video frames were captured according to the timestamps of the video frames. When the user needs to undergo the previously experienced augmented reality experience again, the UE needs to acquire the virtual reality information and the video stream of the previous augmented reality experience, and superimpose the acquired virtual reality information onto each frame of the acquired video stream for display.

Manner 1, corresponding to Manner 1 of storing the captured video stream in step S201: The UE acquires the stored standard image and the stored panorama, and sequentially acquires, in the order in which the video frames were captured, the timestamp of the video frame currently to be displayed. According to the acquired timestamp, the UE obtains the tracked object information and the background information corresponding to the acquired timestamp; performs, according to the homography matrix included in the obtained tracked object information, an affine transformation on the acquired standard image to obtain the pose image of the tracked object; crops, according to the position information and the deflection angle included in the obtained background information, the acquired panorama at the display resolution to obtain a background image; and superimposes, according to the position information included in the obtained tracked object information, the obtained pose image onto the cropped background image to generate the video frame currently to be displayed.
Manner 2, corresponding to Manner 2 of storing the captured video stream in step S201: The UE acquires the stored panorama, and sequentially acquires, in the order in which the video frames were captured, the timestamp of the video frame currently to be displayed. According to the acquired timestamp, the UE obtains the background information corresponding to the acquired timestamp, and crops, according to the position information and the deflection angle included in the obtained background information, the acquired panorama at the display resolution to generate the video frame currently to be displayed.
In this embodiment, when the user starts to undergo the augmented reality experience, the UE may further store user operation information in the augmented reality context, where the user operation information describes the interaction between the user and the UE. The user operation information may include an operation type, operation parameters, and a timestamp, where the timestamp included in the user operation information indicates the moment at which the interaction occurs. By way of example and not limitation, the timestamp included in the user operation information may be the time at which the interaction occurs relative to the start of the augmented reality experience. When the user undergoes the augmented reality experience again, the UE may simulate the user's operation, according to the operation type and the operation parameters, at the moment corresponding to the timestamp included in the user operation information.
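A minimal sketch of replaying the stored user operation information follows. The tuple layout and the callback are illustrative assumptions; a real UE would schedule each call at the recorded timestamp relative to the start of the replayed experience, whereas here the scheduling delay is omitted for brevity.

```python
def replay_operations(operations, apply_operation):
    """Replay stored user operations in timestamp order.

    `operations` is a hypothetical list of
    (timestamp, operation_type, parameters) tuples;
    `apply_operation` simulates one user interaction on the UE.
    """
    # Sort by the recorded timestamp so interactions are simulated in the
    # order in which they originally occurred.
    for ts, op_type, params in sorted(operations, key=lambda op: op[0]):
        apply_operation(ts, op_type, params)
```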
It should be noted that after the UE stores the augmented reality context, the UE may further send the augmented reality context to another UE, so that another user can also undergo the augmented reality experience, whereby the user can share the augmented reality experience with other users.
In the method for implementing augmented reality provided by this embodiment of the present invention, while a user undergoes an augmented reality experience, the UE stores the virtual content information and the captured video stream in an augmented reality context; after the augmented reality experience ends, when the user needs to undergo the augmented reality experience again, the UE acquires virtual reality information according to the stored virtual content information and superimposes the acquired virtual reality information onto each video frame in the video stream for display, so that after experiencing an augmented reality experience, the user can experience the same augmented reality experience again at any time. Further, when a video frame captured by the UE includes the pose image of the tracked object, the UE stores the pose image of the tracked object separately from the background image: it stores the pose image of the tracked object by storing the position information of the pose image in the captured video frame together with the homography matrix, and stores the background image by storing the position information of the captured video frame in the panorama, thereby saving storage resources of the UE. Furthermore, when a video frame captured by the UE does not include the pose image of the tracked object, the UE uses the captured video frame as the background image and stores the background image by storing the position information of the captured video frame in the panorama, thereby saving storage resources of the UE.

As shown in FIG. 3, a flowchart of a method for implementing augmented reality is provided according to an embodiment of the present invention. The method is applied to a scenario in which the captured video stream includes a pose image of a tracked object, and the method includes:
S301: When the user determines that an augmented reality experience is required, the UE sends information identifying a tracked object to the server side, where the information identifying the tracked object includes a pose image of the tracked object or feature data of the pose image of the tracked object.
By way of example and not limitation, the feature data of the pose image may be the contour of the pose image, and the pose image may be obtained by capturing a video frame.
S302: The UE receives the virtual content information sent by the server side, where the virtual content information includes virtual reality information or storage location information of the virtual reality information.
The virtual content information is obtained by the server side through processing according to the information identifying the tracked object. Specifically, the server side stores a correspondence between feature data of pose images of tracked objects and identifiers of the tracked objects, and stores a correspondence between identifiers of tracked objects and virtual content information. After obtaining the information identifying the tracked object, the server side acquires the feature data of the pose image of the tracked object, obtains the identifier of the tracked object according to the feature data, and obtains the virtual content information corresponding to the identifier of the tracked object according to the identifier of the tracked object.
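The two-step server-side lookup can be sketched as follows, with plain dictionaries standing in for the two stored correspondences; all names are illustrative, and the feature-matching step is reduced to an exact key lookup.

```python
def lookup_virtual_content(feature_data, id_by_feature, content_by_id):
    """Server-side resolution sketched above: feature data of the pose image
    -> tracked-object identifier -> virtual content information.

    `id_by_feature` and `content_by_id` stand in for the server-side stores.
    """
    object_id = id_by_feature[feature_data]  # first correspondence
    return content_by_id[object_id]          # second correspondence
```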
Optionally, the server side stores a correspondence between feature data of pose images of tracked objects and virtual content information. After obtaining the information identifying the tracked object, the server side acquires the feature data of the pose image of the tracked object and obtains the virtual content information corresponding to the feature data according to the feature data.
It should be noted that when the information identifying the tracked object includes the pose image of the tracked object, the server side may process the pose image of the tracked object by using a feature extraction algorithm to obtain the feature data.
S303: The UE stores the virtual content information.
The UE may store the virtual content information in the augmented reality context.
S304: The UE captures a video frame.
The UE may sequentially capture video frames at the frame rate of the captured video stream, and the video frames captured by the UE include the pose image of the tracked object.
It should be noted that the augmented reality experience starts when the UE superimposes the virtual reality information acquired according to the virtual content information onto a captured video frame for display.
S305: The UE stores a correspondence between the timestamp of the captured video frame and the tracked object information.
The tracked object information includes the position information of the pose image of the tracked object in the captured video frame. The position information of the pose image of the tracked object in the captured video frame may be the coordinates of the center point of the pose image of the tracked object in the captured video frame, and the coordinates may be determined while the UE tracks the tracked object.
The tracked object information may further include a homography matrix of the pose image of the tracked object on the captured video frame. The homography matrix of the pose image of the tracked object on the captured video frame may be determined while the UE tracks the tracked object. The UE may perform an affine transformation on the standard image of the tracked object according to the homography matrix to obtain the pose image of the tracked object; performing an affine transformation on the standard image of the tracked object means multiplying the standard image of the tracked object by the homography matrix.
It should be noted that after selecting key points of the tracked object, the UE matches the key points in the captured video frame with the corresponding key points in the standard image to obtain the position information of the key points in the captured video frame and in the standard image. Based on the position information of the key points in the captured video frame and in the standard image, the homography matrix can be obtained by using the RANSAC (RANdom SAmple Consensus) algorithm.
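For illustration, once a homography matrix has been estimated (the RANSAC estimation itself is omitted here), it maps a key point of the standard image into the captured video frame by a homogeneous multiply-and-normalize. A minimal sketch, assuming the matrix `H` is a plain 3x3 nested list:

```python
def apply_homography(H, point):
    """Map a point (x, y) of the standard image into the captured video frame
    using a 3x3 homography matrix H: multiply in homogeneous coordinates and
    normalize by the third component."""
    x, y = point
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return (u / w, v / w)
```

With an identity matrix the point is unchanged; with a pure-translation matrix the point is shifted by the translation components, which matches the "multiply the standard image by the homography matrix" description above applied per pixel coordinate.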
The UE may store the correspondence between the timestamp of the captured video frame and the tracked object information in the augmented reality context.
S306: The UE removes the pose image of the tracked object from the captured video frame, updates the panorama by using the video frame from which the pose image has been removed as the background image, and stores a correspondence between the timestamp and the background information.
It should be noted that after the UE removes the pose image of the tracked object from the captured video frame, a background image is obtained, and the panorama is updated according to the obtained background image. If the UE has not yet created a panorama when the background image is obtained, the UE may initialize the panorama with the obtained background image; in this case, "updating the panorama according to the obtained background image" means "initializing the panorama with the obtained background image".
The background information includes the position information of the captured video frame in the panorama and the deflection angle by which the captured video frame is deflected relative to the panorama.
The position information of the captured video frame in the panorama may be the coordinates of the center point of the captured video frame in the panorama, and the coordinates of the center point of the captured video frame in the panorama may be determined when the UE updates the panorama.
The UE may store the correspondence between the timestamp of the captured video frame and the background information in the augmented reality context.
When updating the panorama, the UE may determine the deflection angle by which the captured video frame is deflected relative to the panorama; specifically, it may determine the angle by which the horizontal line of the captured video frame is rotated relative to the horizontal line of the panorama. For example, if a video frame has been rotated counterclockwise by 30° when the panorama is updated with that frame, the rotation angle of the frame relative to the panorama is 30° counterclockwise.
It should be noted that the operation of updating the panorama may include the following three steps:
1) Image registration: determining the portion of the background image in the captured video frame that overlaps with the panorama. The non-overlapping portion of the background image may be used to extend the panorama, and from the overlapping portion the position information of the captured video frame in the panorama and the deflection angle of the captured video frame relative to the panorama can be determined.
2) Image warping: mapping the panorama onto a spherical or cylindrical surface, and stitching the background image onto the panorama according to the portion of the background image in the captured video frame that overlaps with the panorama.
3) Image blending: performing smoothing, color-difference removal, and de-ghosting on the stitched panorama to improve the rendering quality of the panorama.
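The registration-then-stitch logic of steps 1) and 2) can be illustrated on one-dimensional "frames" (lists of pixel values). Real registration operates on 2-D images and also recovers the deflection angle, and the blending step is omitted; all names here are illustrative.

```python
def register_offset(panorama, frame):
    """Step 1 (registration), reduced to 1-D: find the offset at which the
    frame's leading pixels coincide with the panorama's trailing pixels."""
    for off in range(len(panorama)):
        n = min(len(panorama) - off, len(frame))
        if panorama[off:off + n] == frame[:n]:
            return off
    return len(panorama)  # no overlap found: frame extends the panorama

def stitch(panorama, frame):
    """Step 2 (warping/stitching), reduced to 1-D: splice the frame in at the
    registered offset, so the non-overlapping part extends the panorama."""
    off = register_offset(panorama, frame)
    return panorama[:off] + frame
```

The registered offset plays the role of the frame's position information in the panorama, which is what the background information records per timestamp.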
S307: The UE determines whether the augmented reality experience has ended; if yes, step S308 is performed; otherwise, step S304 is performed.
While capturing video frames, the UE may store the standard image of the tracked object; specifically, the standard image may be stored before, after, or simultaneously with any one of steps S304 to S306. According to the homography matrix of the pose image of the tracked object on the video frame captured by the UE and the standard image of the tracked object, the UE can generate the pose image of the tracked object.
By way of example and not limitation, the server side stores the standard image of the tracked object, and the UE may obtain the standard image of the tracked object from the server side.
It should be noted that when the augmented reality experience ends, the UE stops capturing video frames.
S308: The UE stores the panorama.
It should be noted that when the augmented reality experience ends, the panorama stored by the UE has been obtained by processing the background images in the video frames captured by the UE, and the UE can restore the background image of a captured video frame from the panorama.
S309: After the augmented reality experience ends, when the user needs to undergo the augmented reality experience again, the UE acquires virtual reality information according to the stored virtual content information.
The UE may obtain the virtual reality information in the following manner:
If the virtual content information includes the virtual reality information, the user equipment directly acquires the virtual reality information; or
If the virtual content information includes the storage location information of the virtual reality information, the user equipment acquires the virtual reality information according to the storage location information.
S310: The UE acquires the stored standard image and the stored panorama.
S311: The UE acquires the timestamp of the video frame currently to be displayed, and obtains, according to the acquired timestamp, the pose image of the tracked object in the video frame currently to be displayed.
Specifically, after acquiring the timestamp of the video frame currently to be displayed, the UE obtains the tracked object information and the background information corresponding to the acquired timestamp, and performs, according to the homography matrix included in the obtained tracked object information, an affine transformation on the acquired standard image to obtain the pose image of the tracked object.
The UE may sequentially acquire the timestamps of the video frames to be displayed in the order in which the video frames were captured.
S312: The UE obtains the background image of the video frame currently to be displayed.
Specifically, the UE crops, according to the position information and the deflection angle included in the obtained background information, the acquired panorama at the display resolution to obtain the background image in the video frame currently to be displayed.
例如, 所述 UE可以根据所要显示的分辨率生成一个水平矩形框, 假设当 前所要显示的视频帧相对于全景图旋转的角度为逆时针方向 30° , 则所述 UE 将水平矩形框逆时针方向旋转 30。 ,并根据当前所要显示的视频帧在全景图中 的位置, 利用旋转后的矩形框截取全景图, 生成当前所要显示的视频帧中的背 景图; For example, the UE may generate a horizontal rectangular frame according to the resolution to be displayed. If the angle of the current video frame to be displayed is 30° in the counterclockwise direction with respect to the panorama, the UE will rotate the horizontal rectangular frame counterclockwise. Rotate 30. And according to the position of the current video frame to be displayed in the panorama, the panoramic image is captured by using the rotated rectangular frame to generate a background image in the current video frame to be displayed;
其中, 作为示例而非限定, 所述显示的分辨率可以由所述 UE的屏幕分辨 率决定, 例如所述 UE的屏幕分辨率为 480x320, 则所述 UE可以按照 480x320 的分辨率截取获取的所述全景图; As an example and not by way of limitation, the resolution of the display may be determined by the screen resolution of the UE. For example, if the screen resolution of the UE is 480×320, the UE may intercept the acquired location according to the resolution of 480×320. Panoramic view
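The rotated crop window described above can be sketched by computing its four corners in panorama coordinates. The center position, resolution, and function name are hypothetical; an actual implementation would additionally sample the panorama pixels inside the resulting quadrilateral.

```python
import math

def rotated_crop_corners(center, width, height, angle_deg):
    """Corners of a width x height crop window rotated about its center."""
    cx, cy = center
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    corners = []
    for dx, dy in [(-width / 2, -height / 2), (width / 2, -height / 2),
                   (width / 2, height / 2), (-width / 2, height / 2)]:
        # Rotate each corner offset, then shift to the window's center.
        corners.append((cx + dx * cos_a - dy * sin_a,
                        cy + dx * sin_a + dy * cos_a))
    return corners

# Hypothetical values: frame centered at (600, 400) in the panorama,
# 480x320 display resolution, rotated 30 degrees counterclockwise.
corners = rotated_crop_corners((600, 400), 480, 320, 30)
```

Rotation preserves the window's side lengths, so the crop still covers exactly a 480×320 region of the panorama, merely tilted.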
S313: The UE generates the video frame currently to be displayed.

Specifically, the UE superimposes the obtained pose image of the tracked object onto the cropped background image according to the position information, contained in the obtained tracked object information, of the pose image in the video frame, thereby generating the video frame currently to be displayed.
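The compositing in this step amounts to pasting the pose image onto the background at the stored position. A minimal sketch with hypothetical array sizes (real frames would be full-resolution color images, and the paste region might need alpha blending or clipping at the frame edges):

```python
import numpy as np

# Hypothetical sizes: a 4x6 grayscale background and a 2x2 pose image.
background = np.zeros((4, 6), dtype=np.uint8)
pose_image = np.full((2, 2), 255, dtype=np.uint8)

def compose_frame(background, pose_image, top_left):
    """Paste the tracked object's pose image onto the background at top_left."""
    frame = background.copy()   # keep the stored background intact
    y, x = top_left
    h, w = pose_image.shape
    frame[y:y + h, x:x + w] = pose_image
    return frame

frame = compose_frame(background, pose_image, (1, 2))
```

Because the background is copied before pasting, the same stored background can be reused to regenerate every frame of the replayed stream.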
S314: The UE superimposes the acquired virtual reality information onto the generated video frame currently to be displayed, and displays the superimposed video frame.

The virtual content information may further include the identifier of the tracked object corresponding to the virtual reality information, in which case the UE may superimpose the acquired virtual reality information onto the generated video frame currently to be displayed in the following manner:

when the virtual content information includes the identifier of the tracked object, the UE superimposes the acquired virtual reality information onto the video frame currently to be displayed according to the position of the pose image of the tracked object in that video frame.

S315: The UE determines whether all video frames in the stored video stream have been acquired. If so, the augmented reality experience ends; otherwise, step S311 is performed.
In this embodiment of the present invention, if the frame rate at which the video stream is captured is greater than the expected frame rate for capturing the video stream, only some of the video frames in the video stream may be stored. For example, the UE may sample the timestamps of the video frames, and store the video frames corresponding to the sampled timestamps.

If the frame rate of video playback is greater than the expected frame rate, the UE may perform interpolation. Specifically, the UE may interpolate the timestamp of the video frame currently to be displayed, together with the tracked object information and the background information corresponding to that timestamp.
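The interpolation can be sketched as linear interpolation of the stored background position between the two nearest stored timestamps. The data layout (timestamp paired with an (x, y) position) is a simplifying assumption; the patent's background information also carries a deflection angle, which could be interpolated the same way.

```python
def interpolate_background(t, samples):
    """Linearly interpolate stored (timestamp, (x, y)) background positions
    to estimate the background position at display time t."""
    samples = sorted(samples)
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        if t0 <= t <= t1:
            w = (t - t0) / (t1 - t0)
            return tuple(a + w * (b - a) for a, b in zip(p0, p1))
    raise ValueError("t outside stored range")

# Hypothetical stored correspondences: positions at 100 ms and 200 ms.
stored = [(100, (10.0, 0.0)), (200, (30.0, 10.0))]
print(interpolate_background(150, stored))  # midway -> (20.0, 5.0)
```

This lets playback run at a higher frame rate than capture: frames between stored timestamps are synthesized rather than stored.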
In this embodiment of the present invention, when the user starts to experience the augmented reality experience, the UE may further store user operation information. The user operation information describes the interaction between the user and the UE, and includes an operation type, operation parameters, and a timestamp, where the timestamp contained in the user operation information indicates the moment at which the interaction occurs. When the user experiences the augmented reality experience again, the UE may simulate the user's operation at the moment corresponding to the timestamp contained in the user operation information, according to the operation type and the operation parameters.

By way of example and not limitation, the interaction between the user and the UE may include any of the following types of operations:

Click: for a click operation, the UE needs to store the coordinates of the clicked position and the timestamp at which the click operation occurs.

Press and hold: for a press-and-hold operation, the UE needs to store the coordinates of the held position, the timestamp at which the press-and-hold operation occurs, and the duration of the operation.

Drag: for a drag operation, the UE needs to store, at a certain frequency, the coordinates of points on the drag path, together with the timestamp at which each point is reached.
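The record-and-replay scheme above can be sketched as follows. The record field names and the `dispatch` callback are illustrative assumptions; a real UE would inject the events into its input subsystem instead of calling a Python function.

```python
import time

# Hypothetical stored records for the three operation types described above.
operations = [
    {"type": "click", "timestamp": 0.0, "params": {"pos": (120, 80)}},
    {"type": "hold",  "timestamp": 0.5, "params": {"pos": (60, 200), "duration": 1.2}},
    {"type": "drag",  "timestamp": 2.0,
     "params": {"path": [((10, 10), 2.0), ((40, 25), 2.1), ((90, 30), 2.2)]}},
]

def replay(operations, dispatch, now=0.0, sleep=time.sleep):
    """Re-inject stored operations at the moments given by their timestamps."""
    for op in sorted(operations, key=lambda o: o["timestamp"]):
        sleep(max(0.0, op["timestamp"] - now))  # wait until the recorded moment
        now = op["timestamp"]
        dispatch(op["type"], op["params"])

seen = []
# Use a no-op sleep so the example runs instantly.
replay(operations, lambda kind, params: seen.append(kind), sleep=lambda s: None)
print(seen)  # ['click', 'hold', 'drag']
```

Injecting the sleep function as a parameter keeps the replay testable while still allowing real-time pacing during an actual replayed experience.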
It should be noted that, after the UE stores the augmented reality context, the UE may send the augmented reality context to other UEs, so that other users can also experience the augmented reality experience; the user can thereby share the augmented reality experience with other users.

In the method for implementing augmented reality provided by this embodiment of the present invention, while the user experiences an augmented reality experience, the UE stores the virtual content information and the captured video stream in an augmented reality context. After the augmented reality experience ends, when the user needs to experience it again, the UE acquires the virtual reality information according to the stored virtual content information, and superimposes the acquired virtual reality information onto each video frame of the video stream for display, so that the user can experience the same augmented reality experience again at any time. Further, when a video frame captured by the UE contains the pose image of a tracked object, the UE stores the pose image separately from the background image: the pose image is stored by means of its position information in the captured video frame and the homography matrix, and the background image is stored by means of the position information of the captured video frame in the panorama, which saves the storage resources of the UE. Moreover, the UE may superimpose the acquired virtual reality information onto the video frame currently to be displayed according to the position of the pose image of the tracked object in that frame, giving the user a better augmented reality experience.

As shown in FIG. 4, which is a flowchart of another method for implementing augmented reality according to an embodiment of the present invention, the method applies to a scenario in which the captured video stream does not contain the pose image of a tracked object. In this method, the video frames of the video stream captured by the UE may be used as background images. The method includes:
S401: When the user determines that an augmented reality experience needs to be experienced, the UE sends information about the location of the UE to the server side.

By way of example and not limitation, the UE may obtain the information about its location through a positioning device, for example, a GPS (Global Positioning System) device.
S402: The UE receives virtual content information sent by the server side, where the virtual content information includes virtual reality information or storage location information of virtual reality information.

The virtual content information is found by the server side according to the information about the location of the UE. Specifically, the server side stores a correspondence between location information and virtual content information; after obtaining the information about the location of the UE, the server side obtains the virtual content information according to that location information.

S403: The UE stores the virtual content information.

The UE may store the virtual content information in an augmented reality context.
S404: The UE captures a video frame.

The UE may capture video frames one by one at the frame rate at which the video stream is captured.

It should be noted that the augmented reality experience starts when the UE superimposes the virtual reality information acquired according to the virtual content information onto a captured video frame for display.
S405: The UE updates the panorama using the captured video frame as a background image, and stores the correspondence between the timestamp of the captured video frame and the background information.

It should be noted that, in this embodiment, the video frame captured by the UE is treated directly as a background image. For a detailed description of this step, refer to step S306; details are not repeated here.

The UE may store the correspondence between the timestamp of the captured video frame and the background information in the augmented reality context.
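One plausible in-memory shape for the augmented reality context described here is a mapping from frame timestamps to background information, alongside the virtual content and the panorama. All field names and values below are illustrative assumptions, not taken from the patent.

```python
# A minimal in-memory sketch of the augmented reality context.
ar_context = {
    "virtual_content": {"virtual_reality_info": "3d-label.mesh"},  # hypothetical
    "background_by_timestamp": {},  # timestamp -> background info
    "panorama": None,               # filled in when capture stops (S407)
}

def store_frame_background(context, timestamp, position, deflection_deg):
    """Record, per captured frame, where it sits in the panorama."""
    context["background_by_timestamp"][timestamp] = {
        "position": position,          # frame position in the panorama
        "deflection": deflection_deg,  # rotation relative to the panorama
    }

store_frame_background(ar_context, 33, (600, 400), 30)
store_frame_background(ar_context, 66, (610, 402), 29)
print(sorted(ar_context["background_by_timestamp"]))  # [33, 66]
```

Storing only these per-frame coordinates, rather than the frames themselves, is what lets the panorama stand in for the whole captured stream.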
S406: The UE determines whether the augmented reality experience has ended. If so, step S407 is performed; otherwise, step S404 is performed.

It should be noted that, when the augmented reality experience ends, the UE stops capturing video frames.

S407: The UE stores the panorama.

The UE may store the panorama in the augmented reality context. For a detailed description of this step, refer to step S308; details are not repeated here.
S408: After the augmented reality experience ends, when the user needs to experience the augmented reality experience again, the UE acquires virtual reality information according to the stored virtual content information.

For a detailed description of this step, refer to step S309; details are not repeated here.

S409: The UE acquires the stored panorama.
S410: The UE acquires the timestamp of the video frame currently to be displayed, and obtains that video frame according to the acquired timestamp.

Specifically, after acquiring the timestamp of the video frame currently to be displayed, the UE obtains the background information corresponding to the acquired timestamp, and crops the acquired panorama at the display resolution according to the position information and the deflection angle contained in the obtained background information, thereby generating the video frame currently to be displayed.

It should be noted that the UE may acquire the timestamps of the video frames to be displayed one by one, in the order in which the video frames were captured.
S411: The UE superimposes the acquired virtual reality information onto the generated video frame currently to be displayed, and displays the superimposed video frame.

The virtual content information may further include position information corresponding to the virtual reality information, and the background information may further include information about the location of the UE, in which case the UE may superimpose the acquired virtual reality information onto the generated video frame currently to be displayed in the following manner: the UE superimposes the acquired virtual reality information onto the generated video frame currently to be displayed according to the information about the location of the UE contained in the background information and the position information contained in the virtual content information.

S412: The UE determines whether all video frames in the stored video stream have been acquired. If so, the augmented reality experience ends; otherwise, step S410 is performed.
In this embodiment of the present invention, if the frame rate at which the video stream is captured is greater than the expected frame rate for capturing the video stream, only some of the video frames in the video stream may be stored. For example, the UE may sample the timestamps of the video frames, and store the video frames corresponding to the sampled timestamps.

If the frame rate of video playback is greater than the expected frame rate, the UE may perform interpolation. Specifically, the UE may interpolate the timestamp of the video frame currently to be displayed, together with the background information corresponding to that timestamp.
In this embodiment of the present invention, when the user starts to experience the augmented reality experience, the UE may further store user operation information. The user operation information describes the interaction between the user and the UE, and includes an operation type, operation parameters, and a timestamp, where the timestamp contained in the user operation information indicates the moment at which the interaction occurs. When the user experiences the augmented reality experience again, the UE may simulate the user's operation at the moment corresponding to the timestamp contained in the user operation information, according to the operation type and the operation parameters. For a detailed description of the user operation information, refer to the embodiment shown in FIG. 3; details are not repeated here.

It should be noted that, after the UE stores the augmented reality context, the UE may send the augmented reality context to other UEs, so that other users can also experience the augmented reality experience; the user can thereby share the augmented reality experience with other users.
In the method for implementing augmented reality provided by this embodiment of the present invention, while the user experiences an augmented reality experience, the UE stores the virtual content information and the captured video stream in an augmented reality context. After the augmented reality experience ends, when the user needs to experience it again, the UE acquires the virtual reality information according to the stored virtual content information, and superimposes the acquired virtual reality information onto each video frame of the video stream for display, so that the user can experience the same augmented reality experience again at any time. Further, when the video frames captured by the UE do not contain the pose image of a tracked object, the UE uses the captured video frames as background images and stores each background image by means of the position information of the captured video frame in the panorama, which saves the storage resources of the UE. Moreover, the UE may superimpose the acquired virtual reality information onto the video frame currently to be displayed according to the information about the location of the UE contained in the background information and the position information, corresponding to the virtual reality information, contained in the virtual content information, giving the user a better augmented reality experience.

As shown in FIG. 5, which is a structural diagram of user equipment according to an embodiment of the present invention, the user equipment includes:
a receiving unit 501, configured to receive virtual content information returned from the server side;

a video stream capturing unit 502, configured to capture a video stream;

a storage unit 503, configured to store an augmented reality context while the user experiences an augmented reality experience, where the augmented reality context includes the virtual content information received by the receiving unit 501 and the video stream captured by the video stream capturing unit 502;

a virtual reality information acquiring unit 504, configured to acquire virtual reality information according to the virtual content information stored by the storage unit 503 when the user needs to experience the augmented reality experience again;

a video frame acquiring unit 505, configured to acquire, one by one in the order in which the video frames were captured, the video frames of the video stream stored by the storage unit 503;

a superimposing unit 506, configured to superimpose the virtual reality information acquired by the virtual reality information acquiring unit 504 onto the video frame acquired by the video frame acquiring unit 505; and

a display unit 507, configured to display the video frame superimposed by the superimposing unit 506.

It should be noted that the video frame acquiring unit 505 may acquire the video frames of the video stream one by one at the frame rate of video playback.
In the user equipment provided by this embodiment of the present invention, while the user experiences an augmented reality experience, the storage unit stores, in an augmented reality context, the virtual content information received by the receiving unit and the video stream captured by the video stream capturing unit. After the augmented reality experience ends, when the user needs to experience it again, the superimposing unit superimposes the virtual reality information acquired by the virtual reality information acquiring unit onto the video frames acquired by the video frame acquiring unit, and the display unit displays the superimposed video frames, so that the user can experience the same augmented reality experience again at any time.
In one implementation of this embodiment of the present invention, when a tracked object needs to be augmented, the tracked object exists in the real world where the user is located, and the video stream captured by the video stream capturing unit contains the pose image of the tracked object. In this case, the video stream capturing unit 502 may be specifically configured to capture video frames one by one;

the storage unit 503 may be specifically configured to store the correspondence between the timestamps of the video frames captured by the video stream capturing unit 502 and the tracked object information, remove the pose image of the tracked object from each captured video frame, update the panorama according to the video frames from which the pose image has been removed, and store the correspondence between the timestamps and the background information; and

to store the standard image of the tracked object while the video stream capturing unit 502 is capturing video frames, and to store the panorama when the video stream capturing unit 502 stops capturing video frames, where each timestamp indicates the moment at which a video frame is captured, the tracked object information includes the position information of the pose image in the captured video frame, and the background information includes the position information of the captured video frame in the panorama;

the tracked object information may further include the homography matrix of the pose image on the captured video frame, and the background information may further include the deflection angle of the captured video frame relative to the panorama;
when the user needs to experience the augmented reality experience again, the video frame acquiring unit 505 may be specifically configured to acquire the stored standard image and the stored panorama; and

to acquire, one by one in the order in which the video frames were captured, the timestamp of the video frame currently to be displayed; obtain, according to the acquired timestamp, the tracked object information and the background information stored by the storage unit 503 corresponding to that timestamp; perform an affine transformation on the acquired standard image according to the homography matrix contained in the obtained tracked object information, to obtain the pose image of the tracked object; crop the acquired panorama at the display resolution according to the position information and the deflection angle contained in the obtained background information, to obtain the background image; and superimpose the obtained pose image onto the cropped background image according to the position information contained in the obtained tracked object information, to generate the video frame currently to be displayed;

the virtual content information received by the receiving unit 501 may include the identifier of the tracked object corresponding to the virtual reality information, and the superimposing unit 506 may be specifically configured to, when the virtual content information includes the identifier of the tracked object, superimpose the virtual reality information acquired by the virtual reality information acquiring unit 504 onto the video frame currently to be displayed generated by the video frame acquiring unit 505, according to the position of the pose image of the tracked object in that video frame;
It should be noted that the user equipment may further include a sending unit, configured to send information identifying the tracked object to the server side before the receiving unit 501 receives the virtual content information returned from the server side, where the information identifying the tracked object includes the pose image of the tracked object or feature data of the pose image of the tracked object, so that the receiving unit 501 can receive the virtual content information, where the virtual content information is obtained by the server side by processing the information identifying the tracked object, and the virtual content information may further include the virtual reality information or storage location information of the virtual reality information.

The virtual reality information acquiring unit 504 may be specifically configured to directly acquire the virtual reality information when the virtual content information received by the receiving unit 501 includes the virtual reality information, or to acquire the virtual reality information according to the storage location information when the virtual content information received by the receiving unit 501 includes the storage location information of the virtual reality information.

In another implementation of this embodiment of the present invention, when the current location in the real environment needs to be augmented, no tracked object exists in the real world where the user is located, and the video stream captured by the video stream capturing unit does not contain the pose image of a tracked object. In this case, the video stream capturing unit 502 may be specifically configured to capture video frames one by one;
the storage unit 503 may be specifically configured to update the panorama according to the video frames captured by the video stream capturing unit 502, and store the correspondence between the timestamps of the captured video frames and the background information; and

to store the panorama when the video stream capturing unit 502 stops capturing video frames, where each timestamp indicates the moment at which a video frame is captured, and the background information includes the position information of the captured video frame in the panorama;

the background information may further include the deflection angle of the captured video frame relative to the panorama;

when the user needs to experience the augmented reality experience again, the video frame acquiring unit 505 may be specifically configured to acquire, one by one in the order in which the video frames were captured, the timestamp of the video frame currently to be displayed; obtain the background information corresponding to the acquired timestamp; and crop the acquired panorama at the display resolution according to the position information and the deflection angle contained in the obtained background information, thereby generating the video frame currently to be displayed;
其中, 所述接收单元 501接收的所述虚拟内容信息可以包括与所述虚拟现 实信息对应的位置信息, 所述背景信息还可以包括所述用户设备所在位置的信 息, 则所述叠加单元 506可以具体用于根据所述背景信息包含的所述用户设备 所在位置的信息以及所述虚拟内容信息包含的位置信息, 将所述虚拟现实信息 获取单元 504获取的虚拟现实信息叠加到所述视频帧获取单元 505生成的所述 当前所要显示的视频帧上; The virtual content information received by the receiving unit 501 may include location information corresponding to the virtual reality information, and the background information may further include information about a location of the user equipment, where the superimposing unit 506 may Specifically, the virtual reality information acquired by the virtual reality information acquiring unit 504 is superimposed on the video frame according to the information about the location of the user equipment included in the background information and the location information included in the virtual content information. The current video frame to be displayed generated by the unit 505;
其中, 所述用户设备还可以包括发送单元, 所述发送单元可以用于在所述 接收单元 501接收从所述服务器侧返回的所述虚拟内容信息之前, 向所述服务 器侧发送所述用户设备所在位置的信息, 以便所述接收单元 501接收所述虚拟 内容信息, 其中, 所述虚拟内容信息由所述服务器侧根据所述用户设备所在位
置的信息进行查找得到, 所述虚拟内容信息还可以包括所述虚拟现实信息或所 述虚拟现实信息的存储位置信息; The user equipment may further include a sending unit, where the sending unit may be configured to send the user equipment to the server side before the receiving unit 501 receives the virtual content information returned from the server side. The location information, so that the receiving unit 501 receives the virtual content information, where the virtual content information is determined by the server side according to the location of the user equipment The set information is obtained by searching, and the virtual content information may further include the virtual reality information or storage location information of the virtual reality information;
the virtual reality information acquiring unit 504 may be specifically configured to: directly acquire the virtual reality information when the virtual content information received by the receiving unit 501 includes the virtual reality information; or, when the virtual content information received by the receiving unit 501 includes storage location information of the virtual reality information, acquire the virtual reality information according to the storage location information.

It should be noted that, regardless of whether a tracked object exists in the current real world, the augmented reality context stored by the storage unit 503 may further include user operation information, where the user operation information includes an operation type, operation parameters, and a timestamp;
then, the user equipment may further include:
a user operation simulating unit, configured to simulate the user's operation according to the operation type and the operation parameters at the moment corresponding to the timestamp included in the user operation information.

FIG. 6 is a structural diagram of another user equipment according to an embodiment of the present invention. As shown in FIG. 6, the user equipment includes at least one processor 601, a communication bus 602, a memory 603, and at least one communication interface 604.
The communication bus 602 is configured to connect the above components and enable communication between them, and the communication interface 604 is configured to connect and communicate with external devices.
The memory 603 is configured to store program code to be executed, which may specifically include: a receiving unit 6031, a video stream capturing unit 6032, a storage unit 6033, a virtual reality information acquiring unit 6034, a video frame acquiring unit 6035, a superimposing unit 6036, and a display unit 6037. The processor 601 is configured to execute the units stored in the memory 603; when executed by the processor 601, the units implement the following functions:
the receiving unit 6031 is configured to receive virtual content information returned from the server side;
the video stream capturing unit 6032 is configured to capture a video stream;
the storage unit 6033 is configured to store an augmented reality context of the user's augmented reality experience, where the augmented reality context includes the virtual content information received by the receiving unit 6031 and the video stream captured by the video stream capturing unit 6032;
the virtual reality information acquiring unit 6034 is configured to acquire virtual reality information according to the virtual content information stored by the storage unit 6033 when the user needs to experience the augmented reality experience again;

the video frame acquiring unit 6035 is configured to sequentially acquire the video frames in the video stream stored by the storage unit 6033, in the order in which the video frames were captured;
the superimposing unit 6036 is configured to superimpose the virtual reality information acquired by the virtual reality information acquiring unit 6034 onto the video frames acquired by the video frame acquiring unit 6035;
the display unit 6037 is configured to display the video frames superimposed by the superimposing unit 6036.
It should be noted that the video frame acquiring unit 6035 may sequentially acquire the video frames in the video stream at the playback frame rate.
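As a hedged illustration of the note above (the function and field names are mine, not the patent's), replaying stored frames "in the order in which they were captured" at the playback frame rate reduces to sorting by capture timestamp and pacing the display instants by 1/fps:

```python
def replay_schedule(frames, fps):
    """Sort stored frames by capture timestamp and attach the playback
    instant (in seconds) at which each should be displayed at `fps`."""
    ordered = sorted(frames, key=lambda f: f["timestamp"])
    return [(i / fps, f["data"]) for i, f in enumerate(ordered)]

# Frames stored out of order; the timestamp marks the capture moment (ms).
frames = [
    {"timestamp": 66, "data": "frame-2"},
    {"timestamp": 0, "data": "frame-0"},
    {"timestamp": 33, "data": "frame-1"},
]
schedule = replay_schedule(frames, fps=30)
```

A real video frame acquiring unit would additionally sleep until each scheduled instant; the sketch only computes the schedule.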
With the user equipment provided by this embodiment of the present invention, while the user experiences an augmented reality experience, the storage unit stores, as an augmented reality context, the virtual content information received by the receiving unit and the video stream captured by the video stream capturing unit. After the augmented reality experience ends, when the user needs to experience it again, the superimposing unit superimposes the virtual reality information acquired by the virtual reality information acquiring unit onto the video frames acquired by the video frame acquiring unit, and the display unit displays the superimposed video frames, so that the user can re-experience the same augmented reality experience at any time.
In one implementation of this embodiment of the present invention, when a tracked object needs to be augmented, the tracked object exists in the real world where the user is located, and the video stream captured by the video stream capturing unit therefore contains pose images of the tracked object. In this case, the video stream capturing unit 6032 may be specifically configured to capture video frames one by one;
the storage unit 6033 may be specifically configured to store the correspondence between the timestamps of the video frames captured by the video stream capturing unit 6032 and the tracked object information, remove the pose image of the tracked object from each captured video frame, update the panorama according to the video frame from which the pose image has been removed, and store the correspondence between the timestamps and the background information; and
to store a standard image of the tracked object while the video stream capturing unit 6032 is capturing video frames, and to store the panorama when the video stream capturing unit 6032 stops capturing video frames; where the timestamp indicates the moment at which a video frame was captured, the tracked object information includes the position information of the pose image within the captured video frame, and the background information includes the position information of the captured video frame within the panorama;
wherein the tracked object information may further include the homography matrix of the pose image on the captured video frame, and the background information may further include the deflection angle of the captured video frame relative to the panorama;
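A minimal sketch of the storage layout this implies (every field name is an illustrative assumption, not from the patent): the standard image and the panorama are each stored once, while per-frame storage shrinks to two small metadata records keyed by timestamp — which is the source of the storage savings the embodiment claims.

```python
ar_context = {
    "standard_image": "<reference image of the tracked object>",  # stored once, at capture start
    "panorama": "<stitched background panorama>",                 # stored once, at capture stop
    "tracked_object_info": {},  # timestamp -> pose-image position + homography matrix
    "background_info": {},      # timestamp -> frame position in panorama + deflection angle
}

def store_frame_metadata(ctx, timestamp, pose_position, homography,
                         frame_position, deflection_angle):
    # The pose image itself is removed from the frame; only its position and
    # homography are kept, so the frame pixels never need storing per frame.
    ctx["tracked_object_info"][timestamp] = {
        "position": pose_position,
        "homography": homography,
    }
    ctx["background_info"][timestamp] = {
        "position": frame_position,
        "deflection_angle": deflection_angle,
    }

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
store_frame_metadata(ar_context, 0, (120, 80), identity, (300, 200), 0.0)
```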
wherein, when the user needs to experience the augmented reality experience again, the video frame acquiring unit is configured to acquire the stored standard image and the stored panorama; and
to sequentially acquire, in the order in which the video frames were captured, the timestamp of the video frame currently to be displayed; obtain, according to the acquired timestamp, the tracked object information and the background information stored by the storage unit 6033 in correspondence with that timestamp; perform an affine transformation on the acquired standard image according to the homography matrix included in the obtained tracked object information, to obtain the pose image of the tracked object; crop the acquired panorama at the display resolution, according to the position information and the deflection angle included in the obtained background information, to obtain a background image; and superimpose the obtained pose image onto the cropped background image according to the position information included in the obtained tracked object information, thereby generating the video frame currently to be displayed;
wherein the virtual content information received by the receiving unit 6031 may include an identifier of the tracked object corresponding to the virtual reality information; the superimposing unit 6036 may then be specifically configured to, when the virtual content information includes the identifier of the tracked object, superimpose the virtual reality information acquired by the virtual reality information acquiring unit 6034 onto the video frame currently to be displayed that is generated by the video frame acquiring unit 6035, according to the position of the pose image of the tracked object within that video frame.
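The frame reconstruction described above can be sketched as follows. This is a toy version: pixel grids are plain lists, and the affine warp of the standard image by the homography is omitted (a real implementation would use something like OpenCV's `warpPerspective`); all names are assumptions for illustration only.

```python
def crop(grid, top, left, height, width):
    """Cut the background image out of the panorama at the display resolution."""
    return [row[left:left + width] for row in grid[top:top + height]]

def paste(background, patch, top, left):
    """Superimpose the (already warped) pose image onto the background image."""
    out = [row[:] for row in background]
    for i, patch_row in enumerate(patch):
        for j, value in enumerate(patch_row):
            out[top + i][left + j] = value
    return out

panorama = [[0] * 8 for _ in range(6)]   # stitched background, 8x6 "pixels"
pose_image = [[9, 9], [9, 9]]            # warped tracked-object pose image

background = crop(panorama, 1, 2, 4, 4)      # background info: position + resolution
frame = paste(background, pose_image, 1, 1)  # tracked-object info: position
```

Virtual reality content tied to the tracked-object identifier would then be drawn relative to the same pose position used in `paste`.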
It should be noted that the memory 603 may further include a sending unit; when the processor 601 executes the sending unit, the following functions may be implemented:
the sending unit may be configured to send information identifying the tracked object to the server side before the receiving unit 6031 receives the virtual content information returned from the server side, where the information identifying the tracked object includes a pose image of the tracked object or feature data of a pose image of the tracked object, so that the receiving unit 6031 receives the virtual content information, where the virtual content information is obtained by the server side by processing the information identifying the tracked object and may further include the virtual reality information or storage location information of the virtual reality information;
the virtual reality information acquiring unit 6034 may be specifically configured to: directly acquire the virtual reality information when the virtual content information received by the receiving unit 6031 includes the virtual reality information; or, when the virtual content information received by the receiving unit 6031 includes storage location information of the virtual reality information, acquire the virtual reality information according to the storage location information.

In another implementation of this embodiment of the present invention, when the current location in the real environment needs to be augmented, no tracked object exists in the real world where the user is located. In this case, the video stream capturing unit 6032 may be specifically configured to capture video frames one by one;
the storage unit 6033 may be specifically configured to update the panorama according to the video frames captured by the video stream capturing unit 6032, and to store the correspondence between the timestamps of the captured video frames and the background information; and
to store the panorama when the video stream capturing unit 6032 stops capturing video frames; where the timestamp indicates the moment at which a video frame was captured, and the background information includes the position information of the captured video frame within the panorama;
the background information may further include the deflection angle of the captured video frame relative to the panorama;
wherein, when the user needs to experience the augmented reality experience again, the video frame acquiring unit is configured to sequentially acquire, in the order in which the video frames were captured, the timestamp of the video frame currently to be displayed; obtain, according to the acquired timestamp, the background information corresponding to that timestamp; and crop the acquired panorama at the display resolution according to the position information and the deflection angle included in the obtained background information, thereby generating the video frame currently to be displayed;
wherein the virtual content information received by the receiving unit 6031 may include position information corresponding to the virtual reality information, and the background information further includes information on the location of the user equipment; the superimposing unit 6036 may then be specifically configured to superimpose the virtual reality information acquired by the virtual reality information acquiring unit 6034 onto the video frame currently to be displayed that is generated by the video frame acquiring unit 6035, according to the information on the location of the user equipment included in the background information and the position information included in the virtual content information.
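How a superimposing unit might turn "UE location plus content location" into a drawing position can be hedged as a flat-plane sketch. A real system would project through the camera pose and deflection angle; here the "projection" is a constant-scale offset, and every name below is an illustrative assumption rather than the patent's method.

```python
def overlay_offset(ue_location, content_location, pixels_per_metre=10):
    """Map the displacement between the UE and the location-anchored virtual
    content to a screen-space offset on the current frame (toy projection)."""
    dx = content_location[0] - ue_location[0]
    dy = content_location[1] - ue_location[1]
    return (dx * pixels_per_metre, dy * pixels_per_metre)

# Virtual content anchored 2 m east and 1 m north of the UE.
offset = overlay_offset((0.0, 0.0), (2.0, 1.0))
```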
The memory 603 may further include a sending unit; when the processor 601 executes the sending unit, the following functions may be implemented:
the sending unit may be configured to send information on the location of the user equipment to the server side before the receiving unit 6031 receives the virtual content information returned from the server side, so that the receiving unit 6031 receives the virtual content information, where the virtual content information is found by the server side according to the information on the location of the user equipment and may further include the virtual reality information or storage location information of the virtual reality information;

the virtual reality information acquiring unit 6034 may be specifically configured to: directly acquire the virtual reality information when the virtual content information received by the receiving unit 6031 includes the virtual reality information; or, when the virtual content information received by the receiving unit 6031 includes storage location information of the virtual reality information, acquire the virtual reality information according to the storage location information.
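The two branches of the acquiring unit — virtual reality information carried inline versus referenced by a storage location — reduce to a small dispatch. A hedged sketch; the key names, the example storage URI, and the caller-supplied loader are all assumptions for illustration:

```python
def resolve_virtual_reality_info(virtual_content, load):
    """Return the VR info directly if the virtual content information carries
    it; otherwise fetch it from the indicated storage location via `load`."""
    if "virtual_reality_info" in virtual_content:
        return virtual_content["virtual_reality_info"]
    return load(virtual_content["storage_location"])

store = {"store://models/1": "3d-model-1"}  # hypothetical storage backend
inline = resolve_virtual_reality_info({"virtual_reality_info": "3d-model-0"}, store.get)
fetched = resolve_virtual_reality_info({"storage_location": "store://models/1"}, store.get)
```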
It should be noted that, regardless of whether a tracked object exists in the current real world, the augmented reality context stored by the storage unit 6033 may further include user operation information, where the user operation information includes an operation type, operation parameters, and a timestamp;
then, the memory 603 may further include a user operation simulating unit; when the processor 601 executes the user operation simulating unit, the following function may be implemented:
the user operation simulating unit is configured to simulate the user's operation according to the operation type and the operation parameters at the moment corresponding to the timestamp included in the user operation information.
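Replaying the stored user operation information can be sketched as sorting the operations by timestamp and re-applying each one through a caller-supplied handler. This toy replay ignores real-time pacing (a real unit would wait until each timestamp's moment); the names are illustrative assumptions:

```python
def replay_operations(operations, apply):
    """Re-apply each stored operation in timestamp order; `apply` is the
    handler that simulates the user's operation (type + parameters)."""
    for op in sorted(operations, key=lambda o: o["timestamp"]):
        apply(op["type"], op["params"])

log = []
replay_operations(
    [
        {"timestamp": 40, "type": "tap", "params": {"x": 10, "y": 20}},
        {"timestamp": 10, "type": "drag", "params": {"dx": 5}},
    ],
    lambda kind, params: log.append(kind),
)
```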
With the method for implementing augmented reality and the user equipment provided by the embodiments of the present invention, while a user experiences an augmented reality experience, the UE stores the virtual content information and the captured video stream as an augmented reality context. After the augmented reality experience ends, when the user needs to experience it again, the UE acquires virtual reality information according to the stored virtual content information and superimposes the acquired virtual reality information onto each video frame of the video stream for display, so that the user can re-experience the same augmented reality experience at any time. Secondly, when the video frames captured by the UE contain pose images of a tracked object, the UE stores the pose images separately from the background images: it stores each pose image by storing its position information within the captured video frame together with its homography matrix, and stores the background image by storing the position information of the captured video frame within the panorama, thereby saving storage resources on the UE. In addition, the UE may superimpose the acquired virtual reality information onto the video frame currently to be displayed according to the position of the pose image of the tracked object within that frame, giving the user a better augmented reality experience. Thirdly, when the video frames captured by the UE do not contain a pose image of the tracked object, the UE uses each captured video frame as a background image and stores it by storing its position information within the panorama, again saving storage resources on the UE; moreover, the UE may superimpose the acquired virtual reality information onto the video frame currently to be displayed according to the information on the location of the UE included in the background information and the position information, corresponding to the virtual reality information, included in the virtual content information, likewise giving the user a better augmented reality experience.

From the description of the above embodiments, those skilled in the art will clearly understand that the present invention may be implemented in hardware, in software, or in a combination of the two. When implemented in software, the above functions may be stored in a computer-readable medium or transmitted as one or more instructions or code on a computer-readable medium. Computer-readable media include both computer storage media and communication media, where communication media include any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. By way of example and not limitation, computer-readable media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disc storage, magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. In addition, any connection may properly be termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using coaxial cable, fiber-optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber-optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of the medium. As used in the present invention, disk and disc include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
It should be noted that the embodiments in this specification are described in a progressive manner; for identical or similar parts, the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the apparatus embodiments are described relatively briefly because they are substantially similar to the method embodiments; for the specific functions performed by each unit, reference may be made to the corresponding description of the method embodiments. The apparatus embodiments described above are merely illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, i.e., they may be located in one place or distributed across multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments, which those of ordinary skill in the art can understand and implement without creative effort.
In summary, the above are merely preferred embodiments of the technical solutions of the present invention and are not intended to limit the protection scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.
Claims
1. A method for implementing augmented reality, characterized by comprising:
storing, by a user equipment, an augmented reality context of a user's augmented reality experience, the augmented reality context comprising virtual content information received by the user equipment from a server side and a video stream captured by the user equipment;
when the user needs to experience the augmented reality experience again, acquiring, by the user equipment, virtual reality information according to the stored virtual content information; and
sequentially acquiring, by the user equipment, the video frames in the stored video stream in the order in which the video frames were captured, superimposing the acquired virtual reality information onto the acquired video frames, and displaying the superimposed video frames.
2. The method according to claim 1, characterized in that the storing, by the user equipment, of the captured video stream comprises:
capturing, by the user equipment, video frames one by one, storing the correspondence between the timestamps of the captured video frames and tracked object information, removing the pose image of a tracked object from the captured video frames, updating a panorama according to the video frames from which the pose image has been removed, and storing the correspondence between the timestamps and background information; and
storing, by the user equipment, a standard image of the tracked object while capturing video frames, and storing the panorama when the user equipment stops capturing video frames;
wherein the tracked object information includes the position information of the pose image within the captured video frame, and the background information includes the position information of the captured video frame within the panorama.
3. The method according to claim 2, characterized in that the tracked object information further includes the homography matrix of the pose image on the captured video frame, and the background information further includes the deflection angle of the captured video frame relative to the panorama.
4. The method according to claim 3, characterized in that the sequentially acquiring, by the user equipment, of the video frames in the stored video stream in the order in which the video frames were captured comprises:
acquiring, by the user equipment, the stored standard image and the stored panorama; and
sequentially acquiring, by the user equipment, in the order in which the video frames were captured, the timestamp of the video frame currently to be displayed; obtaining, according to the acquired timestamp, the tracked object information and background information corresponding to the acquired timestamp; performing an affine transformation on the acquired standard image according to the homography matrix included in the obtained tracked object information, to obtain the pose image of the tracked object; cropping the acquired panorama at the display resolution according to the position information and the deflection angle included in the obtained background information, to obtain a background image; and superimposing the obtained pose image onto the cropped background image according to the position information included in the obtained tracked object information, to generate the video frame currently to be displayed.
5. The method according to claim 4, characterized in that the virtual content information includes an identifier of the tracked object corresponding to the virtual reality information, and the superimposing of the acquired virtual reality information onto the acquired video frames comprises:
when the virtual content information includes the identifier of the tracked object, superimposing, by the user equipment, the acquired virtual reality information onto the video frame currently to be displayed, according to the position of the pose image of the tracked object within that video frame.
6. The method according to claim 2, characterized in that, before the user equipment stores the augmented reality context, the method further comprises:
sending, by the user equipment, information identifying the tracked object to the server side, where the information identifying the tracked object includes a pose image of the tracked object or feature data of a pose image of the tracked object; and
receiving, by the user equipment, the virtual content information sent by the server side, where the virtual content information is obtained by the server side by processing the information identifying the tracked object, and includes the virtual reality information or storage location information of the virtual reality information;

and in that the acquiring, by the user equipment, of the virtual reality information according to the stored virtual content information comprises:
if the virtual content information includes the virtual reality information, directly acquiring, by the user equipment, the virtual reality information; or
if the virtual content information includes the storage location information of the virtual reality information, acquiring, by the user equipment, the virtual reality information according to the storage location information.
7. The method according to claim 1, characterized in that the storing, by the user equipment, of the captured video stream comprises:
capturing, by the user equipment, video frames one by one, updating a panorama according to the captured video frames, and storing the correspondence between the timestamps of the captured video frames and background information; and
storing, by the user equipment, the panorama when the user equipment stops capturing video frames; wherein the background information includes the position information of the captured video frame within the panorama.
8. The method according to claim 7, wherein the background information further includes a deflection angle of the captured video frame relative to the panorama.
9. The method according to claim 8, wherein the acquiring, by the user equipment in the order in which the video frames were captured, of the stored video frames in the video stream comprises:
The user equipment acquires the stored panorama;
The user equipment acquires, in the order in which the video frames were captured, the timestamp of the video frame currently to be displayed, obtains the background information corresponding to the acquired timestamp, and crops the acquired panorama at the display resolution according to the position information and the deflection angle included in the obtained background information, to generate the video frame currently to be displayed.
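The frame-reconstruction step above — look up the background information recorded for a timestamp and crop the stored panorama at the display resolution — might look like the following minimal Python sketch. All names are assumptions, the panorama is modeled as a nested list of pixel values, and rotation by the stored deflection angle is omitted for brevity.

```python
def reconstruct_frame(panorama, background_info, display_w, display_h):
    """Crop the stored panorama at the position recorded for one timestamp.

    `panorama` is a 2D list of pixels (rows of columns); `background_info`
    holds the top-left position of the original frame in the panorama.
    A full implementation would also rotate by the deflection angle.
    """
    x, y = background_info["position"]  # top-left corner in the panorama
    return [row[x:x + display_w] for row in panorama[y:y + display_h]]
```

Because each replayed frame is cut from one shared panorama, storage cost grows with the scene size rather than with the recording length, which is the point of this claim.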
10. The method according to claim 9, wherein the virtual content information includes position information corresponding to the virtual reality information, and the background information further includes information about the location of the user equipment; the superimposing of the acquired virtual reality information on the acquired video frame then comprises:
The user equipment superimposes the acquired virtual reality information on the video frame currently to be displayed, according to the information about the location of the user equipment included in the background information and the position information included in the virtual content information.
11. The method according to claim 7, wherein before the user equipment stores the augmented reality context, the method further comprises:
The user equipment sends information about the location of the user equipment to the server side; the user equipment receives the virtual content information sent by the server side, where the virtual content information is found by the server side according to the information about the location of the user equipment, and the virtual content information includes the virtual reality information or the storage location information of the virtual reality information; the acquiring, by the user equipment, of the virtual reality information according to the stored virtual content information then comprises:
If the virtual content information includes the virtual reality information, the user equipment acquires the virtual reality information directly; or
If the virtual content information includes the storage location information of the virtual reality information, the user equipment acquires the virtual reality information according to the storage location information.
12. The method according to any one of claims 1 to 11, wherein the augmented reality context further includes user operation information, and the user operation information includes an operation type, operation parameters, and a timestamp; the method then further comprises:
The user equipment simulates the user's operation, at the moment corresponding to the timestamp included in the user operation information, according to the operation type and the operation parameters.
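The replay described above — re-apply each recorded operation at the moment given by its timestamp — could be sketched as follows. The dispatch-table design and every name are assumptions; `dispatch` would map operation types to whatever tap/drag simulators the user equipment provides.

```python
import time

def replay_operations(operations, dispatch, clock=time.monotonic, sleep=time.sleep):
    """Re-apply recorded user operations at their recorded offsets.

    `operations` is a list of dicts with "timestamp" (seconds from the
    start of the recording), "type", and "params". `dispatch` maps an
    operation type to a handler that simulates that operation.
    """
    start = clock()
    for op in sorted(operations, key=lambda o: o["timestamp"]):
        delay = op["timestamp"] - (clock() - start)
        if delay > 0:
            sleep(delay)                      # wait until the recorded moment
        dispatch[op["type"]](**op["params"])  # simulate the user's operation
```

Injecting `clock` and `sleep` keeps the sketch testable without real waiting; a device implementation would likely schedule the operations on its UI event loop instead.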
13. A user equipment, comprising:
a receiving unit, configured to receive virtual content information returned from the server side;
a video stream capturing unit, configured to capture a video stream;
a storage unit, configured to store an augmented reality context of a user undergoing an augmented reality experience, where the augmented reality context includes the virtual content information received by the receiving unit and the video stream captured by the video stream capturing unit;
a virtual reality information acquiring unit, configured to acquire virtual reality information according to the virtual content information stored by the storage unit when the user needs to undergo the augmented reality experience again;
a video frame acquiring unit, configured to acquire, in the order in which the video frames were captured, the video frames in the video stream stored by the storage unit;
a superimposing unit, configured to superimpose the virtual reality information acquired by the virtual reality information acquiring unit on the video frame acquired by the video frame acquiring unit;
a display unit, configured to display the video frame superimposed by the superimposing unit.
14. The user equipment according to claim 13, wherein the video stream capturing unit is specifically configured to capture video frames in sequence;
the storage unit is specifically configured to store a correspondence between the timestamp of each video frame captured by the video stream capturing unit and tracked object information, remove the pose image of the tracked object from the captured video frame, update a panorama according to the video frame from which the pose image has been removed, and store a correspondence between the timestamp and background information; and
is configured to store a standard image of the tracked object while the video stream capturing unit captures video frames, and to store the panorama when the video stream capturing unit stops capturing video frames;
where the tracked object information includes position information of the pose image in the captured video frame, and the background information includes position information of the captured video frame in the panorama.
15. The user equipment according to claim 14, wherein the tracked object information further includes a homography matrix of the pose image on the captured video frame, and the background information further includes a deflection angle of the captured video frame relative to the panorama.
16. The user equipment according to claim 15, wherein the video frame acquiring unit is specifically configured to acquire the standard image and the panorama stored by the storage unit; and
is configured to acquire, in the order in which the video frames were captured, the timestamp of the video frame currently to be displayed; obtain the tracked object information and background information, stored by the storage unit, corresponding to the acquired timestamp; perform an affine transformation on the acquired standard image according to the homography matrix included in the obtained tracked object information, to obtain the pose image of the tracked object; crop the acquired panorama at the display resolution according to the position information and the deflection angle included in the obtained background information, to obtain a background image; and superimpose the obtained pose image on the cropped background image according to the position information included in the obtained tracked object information, to generate the video frame currently to be displayed.
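The warp above recovers the pose image by transforming the stored standard image with the per-frame homography. A hedged sketch of the point-level math follows (mapping standard-image corners into frame coordinates); warping the full pixel grid and blending it onto the background would normally be delegated to an image library such as OpenCV's `warpPerspective`. All names here are illustrative, not from the patent.

```python
def apply_homography(H, point):
    """Map a 2D point through a 3x3 homography H (row-major nested lists)."""
    x, y = point
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    # Divide by the homogeneous coordinate to return to the image plane.
    return (xh / w, yh / w)

def pose_image_corners(H, width, height):
    """Corners of the standard image after warping into the displayed frame."""
    corners = [(0, 0), (width, 0), (width, height), (0, height)]
    return [apply_homography(H, c) for c in corners]
```

Storing only the homography per timestamp, instead of the warped pixels, is what lets the claimed scheme keep a single standard image and still reproduce the tracked object's pose in every replayed frame.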
17. The user equipment according to claim 16, wherein the virtual content information received by the receiving unit includes an identifier of the tracked object corresponding to the virtual reality information; the superimposing unit is then specifically configured to, when the virtual content information includes the identifier of the tracked object, superimpose the virtual reality information acquired by the virtual reality information acquiring unit on the video frame currently to be displayed generated by the video frame acquiring unit, according to the position of the pose image of the tracked object in the video frame currently to be displayed.
18. The user equipment according to claim 14, further comprising a sending unit, configured to send, before the receiving unit receives the virtual content information returned from the server side, information identifying the tracked object to the server side, where the information identifying the tracked object includes the pose image of the tracked object or feature data of the pose image of the tracked object, so that the receiving unit receives the virtual content information, where the virtual content information is obtained by the server side by processing the information identifying the tracked object, and the virtual content information includes the virtual reality information or the storage location information of the virtual reality information;
the virtual reality information acquiring unit is then specifically configured to acquire the virtual reality information directly when the virtual content information received by the receiving unit includes the virtual reality information, or to acquire the virtual reality information according to the storage location information when the virtual content information received by the receiving unit includes the storage location information of the virtual reality information.
19. The user equipment according to claim 13, wherein the video stream capturing unit is specifically configured to capture video frames in sequence;
the storage unit is specifically configured to update a panorama according to the video frames captured by the video stream capturing unit, and to store a correspondence between the timestamp of each captured video frame and background information; and
is configured to store the panorama when the video stream capturing unit stops capturing video frames, where the background information includes position information of the captured video frame in the panorama.
20. The user equipment according to claim 19, wherein the background information further includes a deflection angle of the captured video frame relative to the panorama.
21. The user equipment according to claim 20, wherein the video frame acquiring unit is specifically configured to acquire the panorama stored by the storage unit; and
is configured to acquire, in the order in which the video frames were captured, the timestamp of the video frame currently to be displayed; obtain the background information corresponding to the acquired timestamp; and crop the acquired panorama at the display resolution according to the position information and the deflection angle included in the obtained background information, to generate the video frame currently to be displayed.
22. The user equipment according to claim 21, wherein the virtual content information received by the receiving unit includes position information corresponding to the virtual reality information, and the background information further includes information about the location of the user equipment; the superimposing unit is then specifically configured to superimpose the virtual reality information acquired by the virtual reality information acquiring unit on the video frame currently to be displayed generated by the video frame acquiring unit, according to the information about the location of the user equipment included in the background information and the position information included in the virtual content information.
23. The user equipment according to claim 19, further comprising a sending unit, configured to send, before the receiving unit receives the virtual content information returned from the server side, the information about the location of the user equipment to the server side, so that the receiving unit receives the virtual content information, where the virtual content information is found by the server side according to the information about the location of the user equipment, and the virtual content information includes the virtual reality information or the storage location information of the virtual reality information;
the virtual reality information acquiring unit is then specifically configured to acquire the virtual reality information directly when the virtual content information received by the receiving unit includes the virtual reality information, or to acquire the virtual reality information according to the storage location information when the virtual content information received by the receiving unit includes the storage location information of the virtual reality information.
24. The user equipment according to any one of claims 13 to 23, wherein the augmented reality context further includes user operation information, and the user operation information includes an operation type, operation parameters, and a timestamp;
the user equipment then further comprises:
a user operation simulation unit, configured to simulate the user's operation, at the moment corresponding to the timestamp included in the user operation information, according to the operation type and the operation parameters.
The user operation simulation unit is configured to simulate the operation of the user according to the operation type and the operation parameter at a time corresponding to the time stamp included in the user operation information.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201280001959.6A CN103959220B (en) | 2012-11-14 | 2012-11-14 | Method for achieving augmented reality, and user equipment |
PCT/CN2012/084581 WO2014075237A1 (en) | 2012-11-14 | 2012-11-14 | Method for achieving augmented reality, and user equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2012/084581 WO2014075237A1 (en) | 2012-11-14 | 2012-11-14 | Method for achieving augmented reality, and user equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014075237A1 true WO2014075237A1 (en) | 2014-05-22 |
Family
ID=50730471
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2012/084581 WO2014075237A1 (en) | 2012-11-14 | 2012-11-14 | Method for achieving augmented reality, and user equipment |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN103959220B (en) |
WO (1) | WO2014075237A1 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106354869A (en) * | 2016-09-13 | 2017-01-25 | 四川研宝科技有限公司 | Real-scene image processing method and server based on location information and time periods |
CN106446098A (en) * | 2016-09-13 | 2017-02-22 | 四川研宝科技有限公司 | Live action image processing method and server based on location information |
CN107423420A (en) * | 2017-07-31 | 2017-12-01 | 努比亚技术有限公司 | A kind of photographic method, mobile terminal and computer-readable recording medium |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109814704B (en) * | 2017-11-22 | 2022-02-11 | 腾讯科技(深圳)有限公司 | Video data processing method and device |
CN108230448A (en) * | 2017-12-29 | 2018-06-29 | 光锐恒宇(北京)科技有限公司 | Implementation method, device and the computer readable storage medium of augmented reality AR |
CN110166787B (en) * | 2018-07-05 | 2022-11-29 | 腾讯数码(天津)有限公司 | Augmented reality data dissemination method, system and storage medium |
CN112752119B (en) * | 2019-10-31 | 2023-12-01 | 中兴通讯股份有限公司 | Delay error correction method, terminal equipment, server and storage medium |
CN113452896B (en) | 2020-03-26 | 2022-07-22 | 华为技术有限公司 | Image display method and electronic equipment |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1694512A (en) * | 2005-06-24 | 2005-11-09 | 清华大学 | Synthesis method of virtual viewpoint in interactive multi-viewpoint video system |
CN102056015A (en) * | 2009-11-04 | 2011-05-11 | 沈阳隆惠科技有限公司 | Streaming media application method in panoramic virtual reality roaming |
CN102455864A (en) * | 2010-10-25 | 2012-05-16 | Lg电子株式会社 | Information processing apparatus and method thereof |
CN102625993A (en) * | 2009-07-30 | 2012-08-01 | Sk普兰尼特有限公司 | Method for providing augmented reality, server for same, and portable terminal |
- 2012-11-14 CN CN201280001959.6A patent/CN103959220B/en active Active
- 2012-11-14 WO PCT/CN2012/084581 patent/WO2014075237A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN103959220A (en) | 2014-07-30 |
CN103959220B (en) | 2017-05-24 |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 12888375; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 12888375; Country of ref document: EP; Kind code of ref document: A1 |