CN113674433A - Mixed reality display method and system

Info

Publication number: CN113674433A
Application number: CN202110984544.6A
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 李龙威, 曹正之, 黄康宁, 杨俊超
Applicant/Assignee: Surreal Film Production Shanghai Co ltd
Priority/filing date: 2021-08-25
Publication date: 2021-11-19
Prior art keywords: display screen, information, display, camera, posture
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)

Classifications

    • G06T 19/006 Mixed reality (under G06T 19/00 Manipulating 3D models or images for computer graphics; G06T Image data processing or generation, in general; G06 Computing; G Physics)
    • G06T 7/70 Determining position or orientation of objects or cameras (under G06T 7/00 Image analysis)
    • G06T 2207/10012 Stereo images (under G06T 2207/10 Image acquisition modality)
    • G06T 2207/30244 Camera pose (under G06T 2207/30 Subject of image; context of image processing)

Abstract

The application relates to a mixed reality display method and system, belonging to the field of film and television picture shooting, for solving the problem in the related art that film and television pictures shot with a display screen have a poor shooting effect. The system comprises a display screen, a spatial positioning device, and a control device. In the method and system, the spatial positioning device determines the position and posture of the camera in real time, and the control device adjusts the picture displayed by the display screen in real time according to changes in the position and posture of the real camera equipment, so that shooting the picture displayed on the display screen achieves the effect of shooting real scenery.

Description

Mixed reality display method and system
Technical Field
The application relates to the field of film and television picture shooting, in particular to a mixed reality display method and system.
Background
In photographic work, in order to meet the shooting requirements of certain film and television pictures while reducing shooting cost and/or improving the shooting effect, a display screen may be used to display some scenery or to serve as the background of a scene. That is, the content displayed by the display screen, as captured by the camera, becomes part of the film and television picture, or the scene staged in front of the display screen is shot as the film and television picture.
In the related art, the content displayed on the display screen mostly supports the distant view required in shooting, so the displayed content is mostly still pictures or pre-recorded video. However, the camera may move during shooting, while the viewing angle of a still picture or pre-recorded video displayed on the screen is fixed. This mismatch between the displayed content and the camera's actual shooting position and viewing angle results in a poor shooting effect for film and television pictures that use a display screen.
Disclosure of Invention
In order to improve the shooting effect of film and television pictures that use a display screen, the present application provides a mixed reality display method and a mixed reality display system.
In a first aspect, the present application provides a mixed reality display method. The method comprises the following steps:
acquiring the position and posture information of a camera;
determining the relative position and posture information of the display screen and the camera according to the position and posture information of the camera based on the position and posture information of the display screen;
and determining the display content of the display screen according to the relative position and posture information based on the demand information of the movie and television picture shooting, so that the change of the display content acquired by the camera accords with the change of the relative position and posture information.
By adopting this technical scheme, when the position and posture of the camera change while shooting a film and television picture that uses a display screen, the display content of the screen is controlled to match the change, so that the content captured by the camera matches the camera's motion. This achieves an effect similar to shooting a real scene: the mismatch between the content displayed on the screen and the camera's actual shooting position and angle is effectively overcome, and the shooting effect of film and television pictures that use a display screen is effectively improved.
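The scheme can be summarized as a per-frame control loop: sample the camera pose, derive the camera-to-screen relative pose, and re-render the screen. The following is a minimal sketch of such a loop; the tracker and screen interfaces and the render function are hypothetical names for illustration, as the patent prescribes no API.

```python
# A minimal sketch of the claimed loop, assuming hypothetical tracker/screen
# interfaces and a caller-supplied render function; the patent defines no API.
import numpy as np

def run_display_loop(tracker, screen, scene, render_scene, refresh_hz=60.0):
    """Sample the camera pose and re-render the screen once per frame."""
    screen_pose = tracker.screen_pose()        # 4x4 pose; fixed or tracked
    while screen.is_on():
        camera_pose = tracker.camera_pose()    # step 1: acquire camera pose
        # step 2: express the camera pose relative to the display screen
        relative = np.linalg.inv(screen_pose) @ camera_pose
        # step 3: choose the displayed content matching this relative pose
        screen.show(render_scene(scene, relative))
        screen.wait_vsync(refresh_hz)          # sample at the refresh rate
```

Tying the sampling frequency to the screen's refresh rate, as the description later suggests, keeps the displayed content from lagging the camera by more than one frame.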
Optionally, the position and posture information of the display screen is obtained in real time by a spatial positioning device configured on the display screen, or is pre-stored in a main device for executing the method.
Optionally, the display screen is formed by splicing a plurality of sub-display screens;
if the position and posture information of the display screen is acquired in real time by a space positioning device configured on the display screen, the method for determining the position and posture information of the display screen comprises the following steps:
acquiring the position and posture information of each sub display screen;
and determining the position and posture information of the display screen according to the information carried by the position and posture information of all the sub display screens.
Optionally, the position and posture information of the sub-display screen carries shape information or model information of the sub-display screen.
Optionally, the method for determining and controlling the display content of the display screen according to the relative position and posture information based on the demand information for movie and television picture shooting so that the change of the display content acquired by the camera conforms to the change of the relative position and posture information includes:
determining the positions and postures of the display screen and the camera in a virtual space displayed by the display screen according to the relative position and posture information;
calling a three-dimensional model of a specified virtual scenery to the virtual space based on the requirement information of movie and television picture shooting, wherein the three-dimensional model of the virtual scenery is positioned at a specified position in a specified posture;
and when the relative position and posture information changes, controlling the content displayed by the display screen to change so as to enable the visual range and the visual angle of the virtual scenery acquired by the camera to accord with the change of the relative position and posture of the camera and the virtual scenery in the virtual space.
Optionally, the method for obtaining the position and posture information through the spatial positioning device includes:
receiving the spatial positioning information sent by at least two spatial positioning devices configured on the same main body;
and determining the position and posture information of the main body according to the information carried by the at least two pieces of spatial positioning information.
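As a minimal illustration of why two positioning readings already fix more than position alone: with a known mounting layout, two tracked points determine the body's location and its facing direction (up to roll about that axis). The helper and marker layout below are assumptions, not part of the patent.

```python
# A minimal sketch, assuming each spatial positioning device reports one 3D
# point and the two devices' mounting order on the body is known; both the
# layout and the helper name are illustrative, not from the patent.
import numpy as np

def pose_from_two_markers(front: np.ndarray, rear: np.ndarray):
    """Two tracked points fix the body's position (their midpoint) and its
    facing direction; roll about that axis needs a third marker or sensor."""
    position = (front + rear) / 2.0
    forward = (front - rear) / np.linalg.norm(front - rear)
    return position, forward

# Example: markers mounted 0.2 m apart along the camera's optical axis.
pos, fwd = pose_from_two_markers(np.array([0.1, 0.0, 0.0]),
                                 np.array([-0.1, 0.0, 0.0]))
```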
In a second aspect, the present application provides a mixed reality display system. The system comprises: the system comprises a display screen, a space positioning device and a control device;
the camera and the display screen are both provided with the space positioning equipment so as to realize the acquisition of the position and posture information of the camera and the display screen;
the control apparatus is arranged to perform any of the methods as described in the first aspect above.
Optionally, the display screen is formed by splicing a plurality of sub-display screens; and each sub-display screen is provided with the space positioning equipment so as to obtain the position and posture information of each sub-display screen.
Optionally, at least two spatial locating devices are provided on the same body.
Optionally, the number of spatial locating devices on the same body is three to five.
In summary, the present application includes at least one of the following beneficial technical effects:
1. according to the method and the system, when the position and the posture of the camera change, the display content of the display screen correspondingly changes, so that the change of the content shot by the camera accords with the change of the position and the posture of the camera, and the shooting effect of a movie and television picture of the application display screen is effectively improved;
2. each sub-display screen of the display screen is provided with a spatial positioning device, which collects the position and posture of that sub-display screen; from these, the position and posture of the whole display screen are determined, which facilitates moving the display screen freely and obtaining its position and posture after reassembly;
3. the position and posture are determined from the spatial positioning information sent by at least two spatial positioning devices, which is efficient and accurate.
It should be understood that what is described in this summary section is not intended to limit key or critical features of the embodiments of the application, nor is it intended to limit the scope of the application. Other features of the present application will become apparent from the following description.
Drawings
The above and other features, advantages and aspects of various embodiments of the present application will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, like or similar reference characters designate like or similar elements, and wherein:
FIG. 1 illustrates a schematic diagram of an exemplary operating environment in which embodiments of the present application can operate.
Fig. 2 shows a schematic flow chart of a mixed reality display method in an embodiment of the present application.
Fig. 3 is a schematic diagram illustrating a principle of display content change of a display screen in an embodiment of the present application.
Fig. 4 shows a schematic structural diagram of a mixed reality display system in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In addition, the term "and/or" herein is only one kind of association relationship describing an associated object, and means that there may be three kinds of relationships, for example, a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
According to the method and device of the present application, when the content displayed on the display screen is shot by the camera, the displayed content is adjusted according to changes in the camera's position and posture, so that the change in the content captured by the camera conforms to the camera's change in position and posture, improving the effect of the resulting film and television pictures.
FIG. 1 illustrates a schematic diagram of an exemplary operating environment 100 in which embodiments of the present application can operate. As shown in fig. 1, the operating environment 100 includes a real image capturing apparatus 110, a display device 120, a position and orientation acquiring device 130, and a server 140.
The real image capturing apparatus 110 is a video camera used to shoot film and television pictures.
The display device 120 may specifically be a display screen serving as the background of a film and television picture. The display device 120 is generally large, so it may be formed by splicing a plurality of sub-display devices 120; depending on the shooting requirements of the picture, the assembled display device 120 may take any overall shape and may be planar or curved.
The position and orientation acquiring devices 130 are disposed on the real image capturing apparatus 110 and the display device 120, and acquire the position and orientation information of each. When the display device 120 is formed by splicing a plurality of sub-display devices 120, a position and orientation acquiring device 130 may be disposed on each sub-display device 120, so that even when the sub-display devices 120 are freely spliced and assembled, the position and orientation information of each sub-display device 120, and hence of the whole display device 120, can be acquired.
The position and orientation acquiring device 130 may specifically include at least two positioning modules, and determines the position and orientation of the sub-display device 120 or the real image capturing apparatus 110 on which it is mounted from the positions acquired by those modules. Alternatively, the position and orientation acquiring device 130 may combine a single positioning module with a gyroscope, or acquire the position and orientation information of its subject in some other way, which is not detailed here. In the embodiment of the present application, considering cost and positioning precision, the position and orientation acquiring device 130 is constructed from four positioning modules.
The server 140 is connected to the display device 120 and the position and orientation acquiring devices 130. The server 140 controls the display content of the display device 120 and receives the position and orientation information sent by the position and orientation acquiring devices 130.
Since the position and the posture of the display device 120 are rarely changed during the process of shooting the movie, the position and the posture of the display device 120 can be pre-stored in the server 140 for the server 140 to retrieve.
Fig. 2 shows a schematic flow chart of a mixed reality display method 200 in an embodiment of the present application. The method 200 can be performed by the server 140 of fig. 1.
As shown in fig. 2, the method 200 includes the steps of:
s210: the position and orientation information of the real image pickup apparatus 110 is acquired.
The position and orientation acquiring device 130 disposed on the real image capturing apparatus 110 acquires its position and orientation information in real time, and the server 140 reads this information at a certain frequency through a data acquisition module. Generally, to ensure that the content displayed by the display device 120 closely matches the changes in the position and posture of the real image capturing apparatus 110, the frequency at which the server 140 acquires the information is equal to the refresh frame rate of the display device 120.
If the position and orientation acquiring device 130 disposed on the real image capturing apparatus 110 is composed of four positioning modules, the position and orientation information of the real image capturing apparatus 110 carries four pieces of position information. The server 140 prestores the three-dimensional model of the real image capturing apparatus 110 and the positions of the positioning modules on it, so the relationship between the four pieces of position information and the model of the real image capturing apparatus 110 is fixed, and the server 140 can determine the position and orientation of the real image capturing apparatus 110 from the four pieces of position information.
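One standard way to recover the full pose from such readings is to rigidly align the four measured marker positions to their prestored positions in the camera's model frame. The patent names no algorithm, so the Kabsch/SVD solution below is only a plausible sketch.

```python
# A hedged sketch: rigid alignment (Kabsch/SVD) of measured marker positions
# to their prestored model-frame positions; one plausible implementation,
# not the algorithm the patent specifies.
import numpy as np

def pose_from_markers(model_pts: np.ndarray, world_pts: np.ndarray):
    """Return (R, t) with world_pts ~= R @ model_pts + t.
    Arrays are (N, 3); N >= 3 non-collinear points are required."""
    mc, wc = model_pts.mean(axis=0), world_pts.mean(axis=0)
    H = (model_pts - mc).T @ (world_pts - wc)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = wc - R @ mc
    return R, t
```

With four markers the fit is overdetermined, which also absorbs small per-marker measurement noise.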
If the position and orientation acquiring device 130 disposed on the real image capturing apparatus 110 is composed of one positioning module and a gyroscope, the position and orientation information carries one piece of position information and one piece of spatial angular displacement information. The server 140 prestores the three-dimensional model of the real image capturing apparatus 110, the position of the positioning module on it, and the mounting of the gyroscope on it; the positional relationship between the position information and the model is therefore fixed, as is the relationship between the gyroscope's spatial angular displacement and the orientation of the real image capturing apparatus 110, and the server 140 can determine the position and orientation of the real image capturing apparatus 110 from the position information and the spatial angular displacement information.
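In this variant, position and orientation arrive separately and only need to be composed into a single pose. A minimal sketch, assuming the gyroscope's integrated orientation is delivered as a unit quaternion and ignoring any lever-arm offset between the module and the camera body:

```python
# A minimal sketch, assuming the gyroscope's integrated orientation arrives
# as a unit quaternion (w, x, y, z); the quaternion delivery format and the
# ignored lever-arm offset are both assumptions for illustration.
import numpy as np

def pose_matrix(position: np.ndarray, q) -> np.ndarray:
    """Compose a 3D position and a unit quaternion into a 4x4 pose matrix."""
    w, x, y, z = q
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = position
    return T
```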
Those skilled in the art can deduce that other structures of the position and orientation acquiring device 130 can be matched with correspondingly different position and orientation information, and the ways in which the server 140 determines the position and orientation of the real image capturing apparatus 110 from such information are not exhaustively listed here.
S220: based on the position and orientation information of the display apparatus 120, the relative position and orientation information of the display apparatus 120 and the real image pickup apparatus 110 is determined from the position and orientation information of the real image pickup apparatus 110.
Before the method in this step, the server 140 needs to first obtain the position and orientation information of the display device 120, so that the display device 120 can be compared with the position and orientation information of the real camera device 110 in the method in this step to generate the relative position and orientation information.
A method for the server 140 to acquire the position and orientation information of the display device 120 is described below, taking as an example a large display device 120 formed by splicing a plurality of sub-display devices 120.
If the position and orientation information of the display device 120 is pre-stored in the server 140, the server 140 may obtain it as follows: after the display device 120 is built from the sub-display devices 120, a virtual space (which can be understood as a three-dimensional graphic editing space) and a three-dimensional space model of the display device 120 are built in the server 140, and the three-dimensional space model is placed at a specified position in the virtual space, whereby the three-dimensional space model and the position and posture information of the display device 120 are determined.
If the position and orientation information of the display device 120 is obtained by the position and orientation acquiring devices 130 disposed on the sub-display devices 120, the server 140 can obtain the position and orientation information of each sub-display device 120 on the same principle used to obtain that of the real image capturing apparatus 110 in the previous step. When the server 140 is connected to several position and orientation acquiring devices 130 and receives several pieces of position and orientation information, each piece carries the identifier of the device that sent it. The server 140 prestores a three-dimensional space model of each sub-display device 120, together with the correspondence between positions on those models and the identifiers of the position and orientation acquiring devices 130 mounted on them; the server 140 can therefore determine the three-dimensional space model and the position and posture of each sub-display device 120.
When the display device 120 is a single whole screen, its position and posture can be determined analogously to those of a sub-display device 120, and the details are not repeated.
The server 140 then constructs a virtual space and places a designated sub-display device 120 at a designated position in it; the positions of the other sub-display devices 120 can be determined accordingly, and the position and posture information and the three-dimensional space model of the whole display device 120 can likewise be determined.
When the server 140 has determined the three-dimensional space model of the display device 120 and its position and orientation in the virtual space, the server 140 may align the position and orientation information of the real image capturing apparatus 110 and the display device 120 in the virtual space through a calibration algorithm between the virtual space and the real space, so that the position and orientation information in the virtual space matches that in the real space.
When the position and orientation information of the real-world image pickup apparatus 110 and the display apparatus 120 in the virtual space and the real space are determined, the relative position and orientation information of the real-world image pickup apparatus 110 and the display apparatus 120 is determined. The relative position and orientation information can represent the relative positional relationship and the relative orientation relationship of the real image pickup apparatus 110 and the display apparatus 120 in the virtual space and the real space.
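Concretely, if both poses are represented as homogeneous 4x4 matrices in one calibrated world frame, the relative information reduces to a single matrix product. The representation below is an assumption for illustration, not the patent's notation.

```python
# A minimal sketch, assuming both poses are homogeneous 4x4 matrices in one
# calibrated world frame.
import numpy as np

def relative_pose(display_pose: np.ndarray, camera_pose: np.ndarray):
    """Return (rel, distance): the camera pose expressed in the display
    screen's own frame, and the camera-to-screen-origin distance."""
    rel = np.linalg.inv(display_pose) @ camera_pose
    return rel, float(np.linalg.norm(rel[:3, 3]))
```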
S230: based on the demand information of the movie and television picture shooting, the display content of the display device 120 is determined according to the relative position and posture information, so that the change of the display content acquired by the real camera device 110 conforms to the change of the relative position and posture information.
The requirements of the film and television pictures mainly concern the scenery, such as mountains, water, and trees, that the display device 120 needs to display during shooting; the three-dimensional model information of the required scenery is prestored in the server 140.
When a scene needs to be presented in a specified viewing angle and viewing distance in a video frame, if the scene in the video frame is realized by shooting an actual scene, the distance from the real camera device 110 to the scene and the angle at which the scene is shot can be controlled. When the position of the real-world image pickup apparatus 110 changes, the viewing distance and/or angle of view thereof toward the subject changes, and the viewing distance and/or angle of view of the subject in the captured movie picture changes accordingly.
Similarly, if a scene is required to be presented in a video frame at a specified viewing distance and viewing angle and the scene in the video frame is realized by shooting the scene displayed by the display device 120, a three-dimensional model of the scene can be constructed in a virtual space by setting the distance from the real camera device 110 to the three-dimensional model of the scene and the angle at which the three-dimensional model of the scene is shot. When the position of the real image pickup device 110 is changed, the view distance and/or the view angle thereof toward the three-dimensional space model of the scene is changed, and the view distance and/or the view angle of the scene in the captured movie picture is changed accordingly.
Based on the foregoing principle, the server 140 can acquire the positions and postures of the real image capturing apparatus 110 and the display device 120 in the virtual space. When a specified scene displayed by the display device 120 needs to be shot, the required viewing distance and viewing angle of the scenery are first determined according to the requirements of the film and television picture; this viewing distance and angle fix the relative position and posture of the real image capturing apparatus 110 and the three-dimensional model of the scenery in the virtual space. In the virtual space, the display device 120 lies between the three-dimensional model of the scenery and the real image capturing apparatus 110, and the projection of the scenery onto the display device 120 along the lines toward the real image capturing apparatus 110 is the content the display device 120 must display. The picture of the scenery that the real image capturing apparatus 110 obtains by shooting the display device 120 then has the specified viewing distance and viewing angle, with the same effect as a real picture.
When the position and posture of the real image capturing apparatus 110 change, its position and posture relative to the display device 120 and to the three-dimensional model of the scenery change, and so does the projection of the scenery's three-dimensional space model onto the display device 120 toward the real image capturing apparatus 110; that is, the content displayed by the display device 120 must also change for the effect to remain similar to shooting the real scene.
Based on this principle, the server 140 keeps the display device 120 displaying the projection of the scenery's three-dimensional model onto the display device 120 toward the real image capturing apparatus 110, so that shooting the virtual scenery with the real image capturing apparatus 110 achieves the effect of shooting the real scenery.
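The projection described here can be sketched as a ray-plane intersection: each scenery point is pushed along its sight line toward the camera's optical center until it hits the screen plane. The parameterization and names below are assumptions for illustration, not the patent's notation.

```python
# Illustrative ray-plane intersection; the plane parameterization and names
# are assumptions. The screen plane is given by a point on it and its normal.
import numpy as np

def project_to_screen(scene_pt, camera_pt, plane_origin, plane_normal):
    """Intersect the scene_pt -> camera_pt sight line with the screen plane.
    Returns the 3D point on the plane, or None if the line is parallel."""
    direction = camera_pt - scene_pt
    denom = plane_normal @ direction
    if abs(denom) < 1e-9:
        return None
    s = (plane_normal @ (plane_origin - scene_pt)) / denom
    return scene_pt + s * direction   # lies between them when 0 <= s <= 1
```

When the camera moves, camera_pt changes and every projected point slides across the plane, which is exactly why the displayed picture must be updated each frame.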
Fig. 3 is a schematic diagram illustrating a principle of changing display contents of the display device 120 in the embodiment of the present application.
As shown in fig. 3, for convenience of description, the three-dimensional principle is illustrated by analogy in a two-dimensional virtual space.
In the two-dimensional virtual space, A1 represents the first position and posture of the real image capturing apparatus 110, A2 represents its second position and posture, B represents a two-dimensional model of the scenery, and C represents the position and posture of the display device 120. The angle of view of the real image capturing apparatus 110 toward the two-dimensional model B is, in the first position and posture, the angle between straight lines L1 and N1, and in the second, the angle between straight lines L2 and N2. Line segment M1 represents the distance from the real image capturing apparatus 110 at position A1 to the two-dimensional model B, and line segment M2 the distance at position A2. The intersection of L1 with C is E, that of L2 with C is F, that of N1 with C is G, and that of N2 with C is H.
If the content displayed on the display device 120 when shot by the real image capturing apparatus 110 at A1 is the line segment EG, then when the real image capturing apparatus 110 moves to A2, the displayed content changes to the line segment FH, so that shooting the display C achieves the effect of shooting the actual scenery B.
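To make the figure concrete, here is a small numeric version with invented coordinates (the patent gives none), treating the camera's view-angle boundary lines as the sight lines to the two ends of B:

```python
# Invented coordinates for illustration: the screen C is the line y = 0, the
# scenery model B spans two endpoints behind it, and A1, A2 are two camera
# positions in front of it.
def cross_screen(scene_pt, camera_pt, screen_y=0.0):
    """x-coordinate where the scene -> camera sight line crosses y = screen_y."""
    (sx, sy), (cx, cy) = scene_pt, camera_pt
    t = (screen_y - sy) / (cy - sy)
    return sx + t * (cx - sx)

b_left, b_right = (-1.0, 3.0), (1.0, 3.0)   # ends of the scenery model B
a1, a2 = (0.0, -2.0), (1.5, -2.0)           # camera at A1, then moved to A2
eg = (cross_screen(b_left, a1), cross_screen(b_right, a1))  # (-0.4, 0.4)
fh = (cross_screen(b_left, a2), cross_screen(b_right, a2))  # (0.5, 1.3)
```

Moving the camera from A1 to A2 shifts the displayed segment from EG = (-0.4, 0.4) to FH = (0.5, 1.3) on C; automating exactly that shift is what the method does.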
The specific implementation principle of the method 200 is as follows: the real image capturing apparatus 110 shoots the scenery displayed by the display device 120; when the position and posture of the real image capturing apparatus 110 change, the displayed scenery changes correspondingly, so that the change in the captured image conforms to what shooting the actual scene would produce, thereby achieving the effect of shooting the actual scene and improving the effect of the resulting film and television pictures.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that the acts and modules referred to are not necessarily required in this application.
The above is a description of method embodiments, and the embodiments of the present application are further described below by way of apparatus embodiments.
Fig. 4 shows a schematic structural diagram of a mixed reality display system 400 in an embodiment of the present application. The system 400 can be included in the operating environment 100 of FIG. 1 or implemented as the operating environment 100 of FIG. 1.
As shown in fig. 4, the system 400 includes a display screen 410, a spatial positioning device 420, and a control device 430. The display screen 410 may be similar to the display device 120 in fig. 1, the spatial positioning device 420 may be similar to the position and orientation acquiring device 130 constructed from four positioning modules in fig. 1, and the control device 430 may be similar to the server 140 in fig. 1.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the system 400 described above may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (10)

1. A mixed reality display method, comprising:
acquiring the position and posture information of a camera;
determining relative position and posture information of the display screen (410) and the camera according to the position and posture information of the camera based on the position and posture information of the display screen (410);
and determining the display content of the display screen (410) according to the relative position and posture information based on the requirement information of movie and television picture shooting, so that the change of the display content acquired by the camera conforms to the change of the relative position and posture information.
2. The method according to claim 1, characterized in that the position and orientation information of the display screen (410) is obtained in real time by a spatial positioning device (420) arranged on the display screen (410) or pre-stored in a main device for performing the method.
3. The method of claim 2, wherein the display screen (410) is formed by a plurality of sub-display screens being tiled;
if the position and orientation information of the display screen (410) is obtained in real time by a spatial positioning device (420) configured on the display screen (410), the method for determining the position and orientation information of the display screen (410) comprises the following steps:
acquiring the position and posture information of each sub display screen;
and determining the position and posture information of the display screen (410) according to the information carried by the position and posture information of all the sub display screens.
4. The method of claim 3, wherein the position and orientation information of the sub-display carries shape information or model information of the sub-display.
5. The method according to any one of claims 1 to 4, wherein the method for determining and controlling the display content of the display screen (410) according to the relative position and posture information based on the demand information of the movie and television picture shooting so that the change of the display content acquired by the camera conforms to the change of the relative position and posture information comprises the following steps:
determining the position and the posture of the display screen (410) and the camera in a virtual space displayed by the display screen (410) according to the relative position and posture information;
calling a three-dimensional model of a specified virtual scenery to the virtual space based on the requirement information of movie and television picture shooting, wherein the three-dimensional model of the virtual scenery is positioned at a specified position in a specified posture;
and when the relative position and posture information changes, controlling the content displayed by the display screen (410) to change so as to enable the visual range and the visual angle of the virtual scenery acquired by the camera to accord with the change of the relative position and posture of the camera and the virtual scenery in the virtual space.
6. The method of any one of claims 1 to 4, wherein the method of obtaining position and orientation information via a spatial location device (420) comprises:
receiving spatial positioning information sent by at least two spatial positioning devices (420) configured on the same main body;
and determining the position and posture information of the main body according to the information carried by at least two pieces of space positioning information.
7. A mixed reality display system, comprising: a display screen (410), a spatial positioning device (420) and a control device (430);
the camera and the display screen (410) are both provided with the space positioning equipment (420) so as to realize the acquisition of the position and posture information of the camera and the display screen (410);
the control device (430) is configured to perform the method according to any one of claims 1 to 6.
8. The system of claim 7, wherein the display screen (410) is formed by a plurality of sub-display screens being tiled; each sub-display screen is configured with the spatial positioning device (420) to obtain the position and posture information of each sub-display screen.
9. The system according to claim 7 or 8, characterized in that at least two spatial positioning devices (420) are configured on the same body.
10. The system of claim 9, wherein the number of spatial positioning devices (420) on the same body is three to five.

Priority Applications (1)

Application number: CN202110984544.6A
Priority date / Filing date: 2021-08-25
Title: Mixed reality display method and system

Publications (1)

Publication number: CN113674433A
Publication date: 2021-11-19

Family

Family ID: 78546393

Family Applications (1)

Application number: CN202110984544.6A (pending)
Title: Mixed reality display method and system
Filing date: 2021-08-25

Country Status (1)

Country: CN
Publication: CN113674433A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination