CN113706720A - Image display method and device - Google Patents


Info

Publication number
CN113706720A
Authority
CN
China
Prior art keywords
image
user
mixed reality
scene
display screen
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111037944.2A
Other languages
Chinese (zh)
Inventor
龚江涛
张柳新
韩腾
田丰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN202111037944.2A
Publication of CN113706720A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03H HOLOGRAPHIC PROCESSES OR APPARATUS
    • G03H1/00 Holographic processes or apparatus using light, infrared or ultraviolet waves for obtaining holograms or for obtaining an image from them; Details peculiar thereto
    • G03H1/22 Processes or apparatus for obtaining an optical image from holograms
    • G03H1/2202 Reconstruction geometries or arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Projection Apparatus (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides an image display method and an image display device, applied to a control device in a holographic projection system. The holographic projection system further comprises a transparent holographic display screen, a projection device and an image acquisition device, the projection device and the image acquisition device being located on the side of a first display surface of the transparent holographic display screen. The method comprises: obtaining a scene freeze instruction; obtaining a mixed reality image corresponding to a mixed reality scene, wherein the mixed reality scene comprises a virtual scene presented by the transparent holographic display screen and at least one real person located on the side of a second display surface of the transparent holographic display screen; extracting, from the mixed reality image, a user image of at least one user who is a real person; superimposing the user image of the at least one user onto a virtual scene image to be projected to obtain a real stop-motion image to be projected; and projecting the real stop-motion image onto the first display surface of the transparent holographic display screen through the projection device. The scheme can present, within the virtual scene, an image of a real person's action from before the current moment.

Description

Image display method and device
Technical Field
The present disclosure relates to the field of display technologies, and in particular, to an image display method and apparatus.
Background
In a mixed reality scene, virtual scene information is presented within a real scene, and an interactive feedback loop is constructed among the real world, the virtual world and the user, enhancing the realism of the user experience.
However, at present, the virtual world and the real physical world are not deeply fused in mixed reality scenes. How to improve the degree of fusion between the virtual world and the real physical world in a mixed reality scene is therefore a technical problem to be solved by those skilled in the art.
Disclosure of Invention
The application provides an image display method and device.
The image display method is applied to a control device in a holographic projection system, where the holographic projection system comprises: the control device, a transparent holographic display screen, a projection device and an image acquisition device, the projection device and the image acquisition device being located on the side of a first display surface of the transparent holographic display screen. The method comprises:
obtaining a scene freeze instruction, wherein the scene freeze instruction is used for requesting to present a freeze image of a real character in a mixed reality scene, and the mixed reality scene comprises a virtual scene presented by the transparent holographic display screen and at least one real character positioned on one side of a second display surface of the transparent holographic display screen;
acquiring a mixed reality image corresponding to the mixed reality scene acquired by the image acquisition device;
extracting a user image of at least one user belonging to a real person from the mixed reality image;
superposing the user image of the at least one user to the virtual scene image to be projected to obtain a real stop-motion image to be projected;
and projecting the real stop-motion image to a first display surface of the transparent holographic display screen through the projection device.
In one possible implementation, the holographic projection system further includes: the wearable positioning device is worn on the body of the real person and is connected with the control equipment;
the method further comprises the following steps:
obtaining the position information of the real person positioned by the wearable positioning device;
the extracting of the user image of at least one user belonging to a real person from the mixed reality image includes:
determining a user image of at least one user belonging to a real person in the mixed reality image based on the position information of the real person;
extracting a user image of the at least one user from the mixed reality image.
In yet another possible implementation, the holographic projection system further includes: the stroboscopic control device is connected with the control equipment;
before the obtaining of the mixed reality image corresponding to the mixed reality scene acquired by the image acquisition device, the method further includes:
and sending a synchronous control instruction to the stroboscopic control device, wherein the synchronous control instruction is used for indicating the stroboscopic control device to control the image acquisition device and the projection device to be at the same refresh rate.
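For illustration only, the synchronization step can be sketched as follows. The JSON command format, its field names, and the byte-stream transport are assumptions made for the sketch; the application does not specify how the synchronization control instruction is encoded or delivered.

```python
import json

def build_sync_instruction(refresh_rate_hz: int) -> bytes:
    """Encode a synchronization control instruction telling the stroboscopic
    control device to drive the image acquisition device and the projection
    device at the same refresh rate. The wire format is hypothetical."""
    command = {"type": "SYNC_REFRESH", "refresh_rate_hz": refresh_rate_hz}
    return json.dumps(command).encode("utf-8")

def send_sync_instruction(transport, refresh_rate_hz: int) -> None:
    """Send the instruction over any object exposing sendall(bytes),
    e.g. a socket connected to the stroboscopic control device."""
    transport.sendall(build_sync_instruction(refresh_rate_hz))
```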
In another possible implementation manner, before the obtaining of the mixed reality image corresponding to the mixed reality scene acquired by the image acquisition apparatus, the method further includes:
sending a projection suspension instruction to the projection device, wherein the projection suspension instruction is used for instructing the projection device to stop projecting to the first display surface of the transparent holographic display screen.
In another possible implementation manner, after the obtaining of the mixed reality image corresponding to the mixed reality scene acquired by the image acquisition apparatus, the method further includes:
and sending a projection recovery instruction to the projection device, wherein the projection recovery instruction is used for instructing the projection device to recover the projection to the first display surface of the transparent holographic display screen.
In another possible implementation manner, the obtaining a mixed reality image corresponding to the mixed reality scene acquired by the image acquisition apparatus includes:
obtaining at least one frame of candidate mixed reality image corresponding to the mixed reality scene recently acquired by the image acquisition device;
and determining a frame of mixed reality image with the optimal picture quality in the at least one frame of candidate mixed reality image.
In another possible implementation manner, the superimposing of the user image of the at least one user onto the virtual scene image to be projected, to obtain the real stop-motion image to be projected, includes:
determining the relative position of the user with respect to the transparent holographic display screen;
determining a candidate superposition position corresponding to the relative position in the virtual scene image to be projected;
if at least part of an object image of a virtual object occupies the candidate superposition position in the virtual scene image, determining a target superposition position that is adjacent to the candidate superposition position and is not occupied by any part of a virtual object image, and superimposing the user image of the user at the target superposition position.
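As a non-limiting sketch of this placement logic, the following searches horizontally outward from the candidate superposition position for a footprint not occupied by any virtual object. The boolean occupancy-mask representation and the horizontal search order are assumptions of the sketch, not requirements of the application.

```python
import numpy as np

def find_free_overlay_position(occupied, candidate, size):
    """Return the candidate (row, col) position if the user-image footprint
    there is free of virtual objects; otherwise the nearest horizontally
    shifted position whose footprint is free. Returns None if nothing fits.
    `occupied` is a boolean mask that is True where the virtual scene image
    already contains part of a virtual object."""
    h, w = size
    rows, cols = occupied.shape
    cy, cx = candidate

    def fits(y, x):
        # Inside the image and no overlap with any virtual object pixel.
        return (0 <= y and y + h <= rows and 0 <= x and x + w <= cols
                and not occupied[y:y + h, x:x + w].any())

    if fits(cy, cx):
        return (cy, cx)
    # Scan outward to adjacent positions, nearest first.
    for shift in range(1, cols):
        for x in (cx - shift, cx + shift):
            if fits(cy, x):
                return (cy, x)
    return None
```

A vertical or spiral search would serve equally well; the patent only requires choosing an adjacent position not covered by a virtual object.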
In another possible implementation manner, the superimposing of the user image of the at least one user onto the virtual scene image to be projected includes:
determining the actual ratio of the user to the transparent holographic display screen based on the user image of the user in the mixed reality image and the virtual image area corresponding to the transparent holographic display screen;
and according to the actual proportion of the user image of the user in the virtual scene image to be projected, superposing the user image of the at least one user to the virtual scene image to be projected.
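The proportional-scaling step above can be illustrated as follows, with regions given as hypothetical (top, left, height, width) bounding boxes; a real system would measure the user's image region and the screen's image region in the captured mixed reality image.

```python
def scaled_user_size(user_box, screen_box, projected_shape):
    """Compute the pixel size the user image should occupy in the virtual
    scene image to be projected, so the user keeps the same proportion
    relative to the screen as observed in the captured mixed reality image.
    Boxes are (top, left, height, width); projected_shape is (height, width)
    of the virtual scene image. This linear mapping is a sketch of the
    'actual ratio' step, not a definitive implementation."""
    _, _, user_h, user_w = user_box
    _, _, screen_h, screen_w = screen_box
    proj_h, proj_w = projected_shape
    # Scale each dimension by the user-to-screen ratio seen in the capture.
    return (round(proj_h * user_h / screen_h), round(proj_w * user_w / screen_w))
```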
The image display device is applied to a control device in a holographic projection system, the holographic projection system comprising: the control device, a transparent holographic display screen, a projection device and an image acquisition device, where the projection device and the image acquisition device are located on the side of a first display surface of the transparent holographic display screen. The device includes:
the instruction obtaining unit is used for obtaining a scene freeze instruction, and the scene freeze instruction is used for requesting to present a freeze image of a real person in a mixed reality scene, wherein the mixed reality scene comprises a virtual scene presented by the transparent holographic display screen and at least one real person positioned on one side of a second display surface of the transparent holographic display screen;
the image acquisition unit is used for acquiring a mixed reality image corresponding to the mixed reality scene acquired by the image acquisition device;
an image extraction unit for extracting a user image of at least one user belonging to a real person from the mixed reality image;
the image superposition unit is used for superposing the user image of the at least one user onto the virtual scene image to be projected to obtain a real stop-motion image to be projected;
and the image projection unit is used for projecting the real stop-motion image to the first display surface of the transparent holographic display screen through the projection device.
In one possible implementation, the holographic projection system further includes: the wearable positioning device is worn on the body of the real person and is connected with the control equipment;
the device further comprises: the position obtaining unit is used for obtaining the position information of the real person positioned by the wearable positioning device;
the image extraction unit includes:
an image extraction subunit, configured to determine, based on the position information of the real person, a user image of at least one user belonging to the real person in the mixed reality image;
extracting a user image of the at least one user from the mixed reality image.
According to the above scheme, after an instruction requesting presentation of a stop-motion image of a real person in the mixed reality scene is obtained, the mixed reality image of the mixed reality scene captured by the image acquisition device can be obtained. On that basis, the user image of at least one user who is a real person is extracted from the mixed reality image and superimposed onto the virtual scene image to be projected, so that a stop-motion image of the real person in the physical world is embedded in the virtual scene image. After the virtual scene image carrying the stop-motion image is projected onto the transparent holographic display screen by the projection device, the real person's action from before the current moment can be presented within the virtual scene, improving the degree of fusion between the virtual scene and the real scene in the mixed reality scene.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic diagram of an exemplary holographic projection system;
FIG. 2 is a schematic diagram of a mixed reality scene suitable for use in embodiments of the present application;
fig. 3 is a schematic flowchart of an image display method according to an embodiment of the present application;
FIG. 4 is a scene schematic diagram of a real stop-motion image output by the transparent holographic display screen;
FIG. 5 is a schematic flow chart illustrating an image display method provided by an embodiment of the present application;
fig. 6 is a schematic diagram illustrating a composition structure of an image display device according to an embodiment of the present disclosure;
fig. 7 shows a schematic structural diagram of an electronic device according to an embodiment of the present application.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that embodiments of the application described herein may be practiced otherwise than as specifically illustrated.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without inventive step, are within the scope of the present disclosure.
The scheme of the application is suitable for the mixed reality scene realized based on the holographic projection system.
The display screen in the holographic projection system is a transparent holographic display screen, so a user can both see the image displayed on the transparent holographic display screen and see, through it, physical objects such as real persons behind the screen.
For ease of understanding, a holographic projection system to which the solution of the present application is applicable will be described.
Fig. 1 is a schematic diagram showing a structure of a holographic projection system according to the present application.
As shown in fig. 1, the holographic projection system includes: a control device 101, a transparent holographic display screen 102, a projection means 103 and an image acquisition means 104.
The transparent holographic display screen 102, the projection device 103, and the image collecting device 104 are all connected to the control device 101, for example, connected through a wired link or a wireless network, which is not limited herein.
In the present application, the number of the control devices in the holographic projection system may be one or more, and in the case of a plurality of control devices, the plurality of control devices actually form a cluster or a distributed system, which is not limited.
The transparent holographic display screen is a transparent planar screen based on the holographic display principle. Through it, a viewer can see all the characteristics of a stereoscopically displayed object, including parallax effects. In addition, because the transparent holographic display screen is transparent, a user can see physical objects in the real scene, such as real persons, on the other side of the screen.
In the present application, the projection device may be a holographic projector or any other device or apparatus capable of projecting an image.
The image capturing device may be any device capable of capturing an image, for example, the image capturing device may be any type of camera or the like.
In the present application, the transparent holographic display screen has two display planes, and the projection device and the image acquisition device are located on the side of the first display surface. The first display surface is either one of the two display planes of the transparent holographic display screen; that is, the projection device and the image acquisition device are arranged on the same side of one display plane of the screen.
Correspondingly, the projection device and the image acquisition device are both towards the first display surface of the transparent holographic display screen.
On the basis, the projection device can project the virtual image to the first display surface of the transparent holographic display screen, so that the transparent holographic display screen can display the corresponding virtual image. In a mixed reality scene where the holographic projection system is located, a user on one side of the first display surface of the transparent holographic display screen can see not only virtual images displayed on the transparent holographic display screen, but also figure behaviors and actions of real figures on one side of the second display surface of the transparent holographic display screen through the transparent holographic display screen.
The image acquisition device can at least acquire the image of the virtual scene presented on the transparent holographic display screen and the image of the real scene (such as a real person) on the second display surface side of the transparent holographic display screen.
As shown in fig. 2, which shows a schematic diagram of a mixed reality scene to which the solution of the present application is applicable. As can be seen from fig. 2, the mixed reality scene has a transparent holographic display 201, and there are a plurality of viewers 202 on the first display surface side of the holographic display, and a projection device (not shown) is located on the viewer side, and the projection device projects a virtual scene image toward the transparent holographic display. The virtual scene image as presented on the transparent holographic display of fig. 2 comprises at least a virtual user 203.
A plurality of real persons are also performing in the real scene on the second display surface side of the transparent holographic display 201 (i.e., the side behind the display from the viewer's perspective). The viewer can see the performance of real person 204 in the real scene through the transparent holographic display.
In the scene of fig. 2, since the image capturing device faces the first display surface of the transparent holographic display, the image it captures includes not only the transparent holographic display and the virtual scene displayed on it, but also the image of the real persons on the second display surface side of the transparent holographic display.
In the present application, the control device may control projection by the projection apparatus, image acquisition by the image acquisition apparatus, and the like. In addition, in the application, the control device can also output a stop motion image containing a person in a real scene to the transparent holographic display screen according to needs.
The image display method of the present application will be described with reference to the flowchart.
Fig. 3 is a schematic flow chart illustrating an embodiment of an image display method according to the present application.
The method of this embodiment is applied to the control device in a holographic projection system. As shown in fig. 1, the holographic projection system includes the control device, a transparent holographic display screen, a projection device and an image acquisition device, with the projection device and the image acquisition device located on the side of the first display surface of the transparent holographic display screen; see the foregoing description for details, which are not repeated here.
The method of the embodiment of the application can comprise the following steps:
s301, a scene freeze instruction is obtained.
Wherein the scene freeze instruction is used for requesting to present a freeze image of a real person in the mixed reality scene. The stop motion image of the real person is an image of the real person in a state of being maintained at a certain time.
For example, an image in which a real person holds a certain action at some historical moment before the current moment is a stop-motion image of that real person.
The mixed reality scene refers to a real physical scene where the holographic projection system is located and a scene formed by a virtual scene projected by the holographic projection system. As mentioned above, in the present application, the mixed reality scene includes at least a virtual scene presented by the transparent holographic display screen and at least one real character located on the second display surface side of the transparent holographic display screen.
And S302, acquiring a mixed reality image corresponding to the mixed reality scene acquired by the image acquisition device.
The mixed reality image is an image of the mixed reality scene. Because the image acquisition device faces the first display surface of the transparent holographic display screen, the mixed reality image it captures includes at least the virtual scene presented by the transparent holographic display screen and the image of at least one real person located on the second display surface side.
It will be appreciated that there are many possibilities for the specific manner in which the mixed reality image is obtained in the present application. For example, in one possible scenario, when the control device determines that a mixed reality image needs to be obtained, the control device may send an image capture instruction to the image capture device, the image capture instruction being for instructing the image capture device to capture an image of the mixed reality scene. On the basis, the control equipment can obtain the imaging of the mixed reality scene acquired by the image acquisition device.
In yet another possible case, where the image capture device is continuously capturing images of a mixed reality scene, the control apparatus may capture the image currently captured by the image capture device or capture a frame of mixed reality image from the image captured by the image capture device over the last period of time.
It can be understood that the differences in content between frames captured by the image acquisition device within a short time are almost negligible, but the quality of images captured at different moments may differ. Therefore, to obtain a high-quality mixed reality image and improve the image quality of the extracted real person, the application may obtain at least one candidate frame of the mixed reality scene recently captured by the image acquisition device, and then select from those candidates the frame with the best picture quality.
For example, after the image acquisition device obtains the image acquisition instruction, the candidate mixed reality images within a period of time can be continuously acquired and uploaded to the control device, and the control device can select one frame of candidate mixed reality image with the best picture quality from the multiple frames of mixed reality images acquired by the image acquisition device as the mixed reality image to be processed.
For another example, in a case where the image capturing device continuously captures images of a mixed reality scene, the control device may obtain multiple frames of candidate mixed reality images captured by the image capturing device before and after the reference time, and determine a mixed reality image with the best quality from the multiple frames of candidate mixed reality images. The reference time may be a time when the control device obtains the scene freeze instruction, or a reference time determined by the control device after the scene freeze instruction is obtained.
The image quality can be evaluated along several dimensions, such as sharpness, distortion and color uniformity. The specific metric used to evaluate picture quality can be set as needed and is not limited by the present application.
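As one minimal example of such a quality metric, the variance of a discrete Laplacian can rank candidate frames by sharpness. The application leaves the exact metric open, so this sharpness-only criterion is an illustrative assumption; a production system would combine several quality dimensions.

```python
import numpy as np

def sharpness(gray: np.ndarray) -> float:
    """Variance of a 4-neighbour discrete Laplacian over a grayscale frame;
    higher values indicate a sharper image."""
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def best_frame(frames):
    """Pick the candidate mixed reality frame with the highest sharpness."""
    return max(frames, key=sharpness)
```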
S303, a user image of at least one user belonging to the real person is extracted from the mixed reality image.
The real person refers to a real user in a real scene.
In the present application, it is desirable for the user to see, on the transparent holographic display screen, the action of the real person at one or more historical moments before the current moment. Therefore, in order to reproduce the real person's action at a historical moment on the transparent holographic display screen, the user images of the users who are real persons need to be extracted from the mixed reality image.
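For illustration only, one crude way to isolate real-person pixels is to difference the captured frame against a reference frame containing only the projected virtual scene. This background-differencing approach, the threshold value, and the grayscale simplification are assumptions of the sketch; a production system would more likely use person segmentation, possibly seeded by the wearable positioning data described elsewhere in the application.

```python
import numpy as np

def extract_user_pixels(mixed, virtual_only, threshold=0.1):
    """Estimate which pixels of the captured mixed reality image belong to
    real persons by differencing against a virtual-scene-only reference.
    Returns (mask, cutout): a boolean mask of person pixels and the mixed
    image with non-person pixels zeroed."""
    diff = np.abs(mixed.astype(float) - virtual_only.astype(float))
    # Collapse colour channels if present, then threshold the difference.
    mask = diff.max(axis=-1) > threshold if diff.ndim == 3 else diff > threshold
    cutout = np.where(mask[..., None] if mixed.ndim == 3 else mask, mixed, 0)
    return mask, cutout
```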
S304, overlapping the user image of at least one user to the virtual scene image to be projected to obtain the real stop-motion image to be projected.
The virtual scene image to be projected is an image which needs to be projected onto the transparent holographic display screen through the projection device after the current moment.
After the user image of the user is superposed on the virtual scene image to be projected, the user image and the virtual scene image are combined to form a new image, and the newly combined image is called a real stop-motion image.
It is understood that there are many possible specific ways to superimpose the user image onto the virtual scene image, which is not limited in this application.
In the application, the user image of the at least one user may be superimposed only onto the single nearest frame of virtual scene image to be output after the current moment, yielding one frame of real stop-motion image to be projected.
However, if the user image of the real person is superimposed on only one frame of the virtual scene image, the user may not see the real person clearly in the real stop-motion image before the display switches back to the ordinary virtual scene image; the user image may therefore be superimposed onto several consecutive frames of the virtual scene image instead.
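A minimal sketch of that multi-frame superposition is shown below, using 2-D grayscale arrays for brevity; the position handling and hard (non-blended) overlay are simplifications assumed for the sketch.

```python
import numpy as np

def composite_freeze_frames(virtual_frames, user_cutout, mask, top_left):
    """Superimpose a frozen user cutout onto each upcoming virtual scene
    frame, so the freeze image stays visible for several frames rather
    than flashing for a single one. `mask` is True where the cutout holds
    real-person pixels; `top_left` is the (row, col) overlay position."""
    y, x = top_left
    h, w = user_cutout.shape
    out = []
    for frame in virtual_frames:
        f = frame.copy()  # do not mutate the original virtual frames
        region = f[y:y + h, x:x + w]
        f[y:y + h, x:x + w] = np.where(mask, user_cutout, region)
        out.append(f)
    return out
```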
And S305, projecting the real stop-motion image to the first display surface of the transparent holographic display screen through the projection device.
It is understood that the real stop-motion image presents the user image of the real person located on the second display surface side of the transparent holographic display, and that this user image shows the real person's action and behavior before the current moment. Therefore, after the real stop-motion image is displayed on the transparent holographic display, a user on the first display surface side can see the real person's earlier image (e.g., actions, expressions) on the screen, producing the effect of a character freeze of the real person.
Meanwhile, while a user on the first display surface side sees the real person's image from before the current moment on the screen, the user can also see, through the transparent holographic display screen, the real person as they appear at the current moment, which enriches the ways the virtual scene and the real scene are fused in the mixed reality scene.
It is understood that the scene freeze instruction in this application may be a start instruction that triggers the continuous presentation of a freeze image of an actual person, in which case the control device will continue to perform the above S302 to S305 so that the user on the first display side of the transparent holographic display can continuously see the freeze image of the actual person on the side of the second display side.
The scene freeze instruction may also be an instruction to trigger presentation of a freeze image of a real person once, in which case the control device only needs to perform steps S302 to S305 once without continuously repeating the loop execution of these several steps.
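The one-shot versus continuous behavior of steps S302 to S305 can be sketched as a small control loop. The step callables and the stop condition are stand-ins assumed for illustration; they are not interfaces defined by the application.

```python
def run_freeze(capture, extract, superimpose, project, continuous, should_stop):
    """Drive steps S302-S305 once, or repeatedly while the freeze remains
    active, depending on whether the scene freeze instruction is a
    continuous trigger. Returns the number of iterations performed."""
    iterations = 0
    while True:
        frame = capture()               # S302: obtain mixed reality image
        users = extract(frame)          # S303: extract real-person pixels
        composite = superimpose(users)  # S304: overlay onto virtual scene
        project(composite)              # S305: project the freeze image
        iterations += 1
        if not continuous or should_stop():
            return iterations
```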
According to the above scheme, after an instruction requesting presentation of a stop-motion image of a real person in the mixed reality scene is obtained, the mixed reality image of the mixed reality scene captured by the image acquisition device can be obtained. On that basis, the user image of at least one user who is a real person is extracted from the mixed reality image and superimposed onto the virtual scene image to be projected, so that a stop-motion image of the real person in the physical world is embedded in the virtual scene image. After the virtual scene image carrying the stop-motion image is projected onto the transparent holographic display screen by the projection device, the real person's action from before the current moment can be presented within the virtual scene, improving the degree of fusion between the virtual scene and the real scene in the mixed reality scene.
It is understood that there are many possible ways for the control device to superimpose the user image of the real person, extracted from the mixed reality image, onto the virtual scene image to be projected. Several of them are described below.
In one possible overlay implementation, for the user image of each user, the present application determines the relative position of the user with respect to the transparent holographic display screen. A candidate superposition position corresponding to that relative position in the virtual scene image to be projected is then determined, and on this basis the user image is superimposed at the candidate superposition position corresponding to it in the virtual scene image.
The relative position of the user with respect to the transparent holographic display screen can be determined in a number of ways. For example, the reference position of the user image within the mixed reality image, or its reference position relative to the image of the transparent holographic display screen in the mixed reality image, may be taken as the relative position of the user with respect to the screen. As another example, the user may be located to obtain position information, from which the relative position is derived.
In this implementation, since the relative position of the user image within the mixed reality image reflects the user's historical position in the mixed reality scene, placing the user image at the corresponding position in the superimposed real stop-motion image restores the user's historical position in the real scene more faithfully.
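Mapping the relative position to a candidate superposition position in the virtual scene image can be sketched as below. The normalized-coordinate convention and function name are illustrative assumptions; the patent does not prescribe a specific coordinate system.

```python
# Hedged sketch: a user position expressed as normalized coordinates in
# [0, 1] relative to the transparent holographic display plane is mapped
# to a pixel position in the virtual scene image to be projected.
def candidate_superposition_position(rel_pos, scene_size):
    """rel_pos: (x, y) in [0, 1]; scene_size: (width, height) in pixels.
    Returns the pixel position at which the user image is placed."""
    rx, ry = rel_pos
    w, h = scene_size
    return (int(rx * w), int(ry * h))
```

For example, a user standing at the horizontal center and three quarters of the way down the screen maps to pixel (960, 810) in a 1920x1080 scene image.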
The description is still made in connection with the mixed reality scenario shown in fig. 2.
Assume that fig. 2 shows a schematic diagram of the mixed reality scene at a first moment. As can be seen from fig. 2, at the first moment the viewer 202 sees, through the transparent holographic display 201, the real person 203 behind it kicking a leg.
In the scenario of fig. 2, if the control device obtains a scene freeze instruction, the control device may acquire an image of the mixed reality scene, i.e., a mixed reality image. The mixed reality image includes at least what can be seen through the transparent holographic display, namely the image of the real person kicking a leg and the image of the virtual character reaching out. The control device extracts the image region of the real person 203 from the mixed reality image to obtain the user image of the real person.
On this basis, the control device can superimpose the user image of the real person 203 on the virtual scene image to be projected according to the relative position of the real person 203 with respect to the transparent holographic display screen 201, so as to obtain the real stop-motion image.
Fig. 4 is a schematic diagram of a real stop motion image output by the control device to the transparent holographic display at a second time after the first time.
Comparing fig. 2 and fig. 4, it can be seen that:
in fig. 2, the viewer can see, through the transparent holographic display, the real person kicking at the first moment.
Whereas in fig. 4, at the second moment the viewer 202 can see on the transparent holographic display 201 a virtual image 401 of the real person kicking, presented within the virtual scene. Moreover, the position and kicking motion of the real person shown through the virtual scene in fig. 4 are consistent with the actual position and kicking motion of the real person 204 at the first moment in fig. 2, so that at the second moment the viewer can still see the kicking motion the real person performed before that moment, achieving the effect of freezing the image of the real person.
Meanwhile, in fig. 4, the viewer can also see, through the display screen, the real person 402 making a fist in the real scene at the second moment.
It can be understood that, when the user image of the real person is superimposed at the corresponding position in the virtual scene image based on the real person's historical position, the user image may occlude a virtual object in the virtual scene image, so that the finally presented real stop-motion image suffers from object overlap and similar artifacts.
Based on this, in yet another possible overlay implementation, after the candidate superposition position corresponding to the relative position in the virtual scene image to be projected is determined for each user image, it may further be determined whether at least part of the object image of a virtual object is present at that candidate superposition position in the virtual scene image.
Correspondingly, if at least part of the object image of a virtual object is present at the candidate superposition position, a target superposition position is determined that is adjacent to the candidate superposition position and at which no part of the virtual object's image is present, and the user image of the user is superimposed at that target superposition position. It is to be understood that the candidate superposition position and the target superposition position may each be a position region, not necessarily a single point.
Of course, if no part of the object image of a virtual object is present at the candidate superposition position, the user image of the user can be superimposed directly at the candidate superposition position in the virtual scene image.
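This overlap check and the search for an adjacent free target superposition position can be sketched as follows. The axis-aligned bounding-box overlap test and the horizontal search strategy are illustrative assumptions; the patent only requires that the target position be adjacent and free of the virtual object's image.

```python
# Hedged sketch: positions are (x, y, w, h) rectangles in scene pixels.
def rects_overlap(a, b):
    """Axis-aligned bounding-box intersection test."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def resolve_superposition_position(candidate, user_size, virtual_objects,
                                   scene_w, step=10):
    """Shift the candidate region horizontally, in growing steps, until it
    no longer overlaps any virtual object's bounding box; return the
    nearest such target superposition position."""
    x, y = candidate
    w, h = user_size
    for dx in range(0, scene_w, step):
        for sx in (x + dx, x - dx):
            region = (sx, y, w, h)
            if 0 <= sx <= scene_w - w and \
               not any(rects_overlap(region, o) for o in virtual_objects):
                return (sx, y)
    return candidate  # fall back if no free region was found
```

If the candidate position is already free, it is returned unchanged, matching the "superimpose directly" branch above.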
In another possible superposition implementation, in order to present the real person's state at the earlier moment more realistically through the transparent holographic display screen, the present application can also keep the projected image of the real person in the real stop-motion image consistent with the person's actual size, so that the real person and the person's image are presented at a ratio of 1:1.
On this basis, the present application can determine the actual proportion of the user relative to the transparent holographic display screen based on the user image of the user in the mixed reality image and the image region corresponding to the transparent holographic display screen. Of course, this actual proportion may also be determined by infrared scanning or other means.
Correspondingly, the user image of the at least one user can be superimposed into the virtual scene image to be projected according to that actual proportion.
In this case, since the proportion of the superimposed user image within the virtual scene image equals the actual proportion of the user relative to the transparent holographic display screen, after the resulting real stop-motion image is projected onto the transparent holographic display screen, the displayed user image remains consistent with the real person's actual size, so that the user can perceive the real person's actions and behaviors at the historical moment more realistically through the virtual scene.
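The 1:1 scaling can be expressed as a simple proportion: the person's real height over the screen's real height, applied to the projected scene image's pixel height. The function below is an illustrative sketch; the measured heights are assumed inputs (e.g. from the positioning step or infrared scanning).

```python
# Hedged sketch: pixel height at which the user image must be rendered in
# the virtual scene image so that it appears at the person's actual size
# on the transparent holographic display screen.
def one_to_one_pixel_height(person_height_m, screen_height_m, scene_px_height):
    """Actual proportion of the person relative to the screen, applied to
    the scene image height (assumes the scene image fills the screen)."""
    ratio = person_height_m / screen_height_m
    return round(ratio * scene_px_height)
```

For instance, a 1.75 m person in front of a 2.5 m screen occupies 70% of the screen height, i.e. 756 pixels of a 1080-pixel scene image.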
As shown in fig. 2 and 4, the user image 301 of the real person presented in the transparent holographic display in fig. 4 is consistent with the actual size of the real person 204 in the real scene.
It can be understood that the above ways of superimposing the user image onto the virtual scene image control the superimposition from different dimensions, so in practical applications some or all of them may be combined as needed to superimpose the user image onto the virtual scene image to be projected; details are not repeated here.
It can be understood that, in the embodiments of the present application, since the mixed reality image may include both the user image of the real person and the image of a virtual character, in order to extract the user image belonging to the real person from the mixed reality image, the position of the real person's image within the mixed reality image needs to be determined.
The real person may be identified from the mixed reality image in various ways, several of which are described below.
In one possible implementation, a recognition model for recognizing the real person may be trained in advance using a plurality of user image samples labeled with the real person's position. On this basis, the mixed reality image can be input into the recognition model, which outputs the position of the real person's image in the mixed reality image.
However, it can be understood that training such a recognition model is complex, and because the appearance of real persons varies widely, the model may struggle to locate the real person precisely in the mixed reality image and is prone to misrecognition.
In another possible implementation manner, in order to reduce the complexity of locating the real person in the mixed reality image and improve the accuracy of locating the real person, the present application may further determine the position information of the real person in the mixed reality scene, and then determine the real person in the mixed reality image based on the position information. This implementation is described below in conjunction with a flow chart.
Fig. 5 shows a schematic flow chart of another embodiment of the image display method of the present application; the method of this embodiment is also applied to a control device in a holographic projection system.
The difference from the foregoing is that the holographic projection system in this embodiment further includes a wearable positioning device, worn on the body of a real person. Specifically, the wearable positioning device is worn by a real person on the second display surface side of the transparent holographic display screen.
The wearable positioning device is connected with the control equipment.
The wearable positioning device can be a smart watch, a positioning helmet or a positioning tag, and the like, without limitation.
The wearable positioning device can determine the position of the real person wearing it.
The process of this embodiment may include:
s501, a scene freeze instruction is obtained.
Wherein the scene freeze instruction is used for requesting to present a freeze image of a real person in the mixed reality scene.
The mixed reality scene at least comprises a virtual scene presented by the transparent holographic display screen and at least one real character positioned on one side of the second display surface of the transparent holographic display screen.
And S502, acquiring a mixed reality image corresponding to the mixed reality scene acquired by the image acquisition device.
And S503, obtaining the position information of the real person positioned by the wearable positioning device.
For example, the position information of each real person at the time of generation of the mixed reality image can be obtained.
It can be understood that, when multiple real persons are present on the second display surface side of the transparent holographic display screen, the position information reported by the wearable positioning device worn by each real person can be obtained separately, yielding the position information of all of them.
It is understood that the sequence of steps S502 and S503 is not limited to that shown in fig. 5, and in practical applications, the two steps may be executed simultaneously.
S504, determining a user image of at least one user belonging to the real person in the mixed reality image based on the position information of the real person.
For example, for each real person's position information, the relative position of that person with respect to the transparent holographic display screen may be determined, and the user image at that relative position in the mixed reality image is taken as the image of the user belonging to the real person.
For another example, the images of the users belonging to the real person may be matched according to the position information of the real person and the positions of the respective user images in the mixed reality image.
It is understood that, once the position information of the real person is available, locating the real person's image in the mixed reality image based on that position information only requires position comparison, without training a model or performing more complex data computation, and is therefore far less complex.
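The position-comparison matching of step S504 can be sketched as a nearest-position assignment between located persons and detected user-image regions. The greedy strategy and data shapes below are illustrative assumptions; the patent only requires matching by position.

```python
# Hedged sketch: match each located real person to the closest detected
# user-image region in the mixed reality image, by squared distance.
def match_user_images(person_positions, region_centers):
    """person_positions: {person_id: (x, y)} from the wearable devices;
    region_centers: list of (x, y) centers of detected user-image regions.
    Returns {person_id: region_index}, each region used at most once."""
    matches = {}
    free = list(range(len(region_centers)))
    for pid, (px, py) in person_positions.items():
        if not free:
            break
        best = min(free, key=lambda i: (region_centers[i][0] - px) ** 2
                                       + (region_centers[i][1] - py) ** 2)
        matches[pid] = best
        free.remove(best)
    return matches
```

For a globally optimal assignment (rather than greedy), a Hungarian-algorithm matcher could be substituted, but position comparison of this kind already avoids any model training.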
S505, a user image of the at least one user is extracted from the mixed reality image.
S506, determining the relative position of the user relative to the transparent holographic display screen for each user belonging to the real person, and executing the step S507.
And S507, determining a candidate superposition position corresponding to the relative position in the virtual scene image to be projected.
For example, for each user belonging to a real person, the position of the candidate superimposition position of the user with respect to the virtual scene image is the same as the relative position of the user with respect to the transparent holographic display screen, for example, the relative coordinate position is the same.
S508, if at least part of the object image of a virtual object is present at the candidate superposition position in the virtual scene image, determine a target superposition position that is adjacent to the candidate superposition position and at which no part of the virtual object's image is present, and superimpose the user image of the user at the target superposition position to obtain the real stop-motion image.
S509, if no part of the object image of a virtual object is present at the candidate superposition position in the virtual scene image, superimpose the user image of the user at the candidate superposition position in the virtual scene image to obtain the real stop-motion image.
It should be noted that, for ease of understanding, this embodiment takes the implementation in steps S506 to S509 as the example of superimposing the user image onto the virtual scene image. It is understood that, in practical applications, the other superimposition methods described above are equally applicable to this embodiment; details are not repeated.
And S510, projecting the real stop-motion image to the first display surface of the transparent holographic display screen through the projection device.
It can be understood that, because the projection device and the image acquisition device have different refresh rates, the mixed reality image acquired by the image acquisition device is prone to blur caused by strobing. For this reason, the present application also seeks to reduce the occurrence of strobing.
For example, in one possible implementation, a strobe control device may be included in the holographic projection system, and the strobe control device is connected to the control device, and is also connected to the image capture device and the projection device.
On the basis, before the control equipment obtains the mixed reality image acquired by the image acquisition device, the control equipment can also send a synchronous control instruction to the stroboscopic control device. Wherein the synchronous control instruction is used for instructing the stroboscopic control device to control the image acquisition device and the projection device to be at the same refresh rate.
The refresh rate is also referred to as the refresh frequency, and the refresh rate of the image acquisition device may be the acquisition frequency of the image acquisition device, while the refresh rate of the projection device is the image scanning frequency.
It can be understood that, when the projection device and the image acquisition device run at the same refresh rate, strobing caused by the projection device refreshing the image while the image acquisition device is capturing can be avoided, thereby improving the quality of the acquired mixed reality image.
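The effect of the synchronization instruction can be illustrated arithmetically: once both devices share a refresh rate, capture instants can be placed at a fixed phase within each refresh period, so no exposure straddles a projector refresh. The function and its parameters are an illustrative sketch, not an API of the strobe control device.

```python
# Hedged sketch: capture timestamps when camera and projector share one
# refresh rate. A fixed phase offset keeps every exposure at the same
# point of the projector's refresh cycle, avoiding mid-refresh sampling.
def synchronized_capture_times(refresh_hz, n_frames, phase_offset_s=0.0):
    period = 1.0 / refresh_hz
    return [i * period + phase_offset_s for i in range(n_frames)]
```

At 60 Hz, consecutive captures land exactly one refresh period (1/60 s) apart, each at the same phase of the projection cycle.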
In yet another possible implementation manner, the control device may further send a pause projection instruction to the projection apparatus before obtaining the mixed reality image acquired by the image acquisition apparatus, where the pause projection instruction is used to instruct the projection apparatus to stop projecting onto the first display surface of the transparent holographic display screen.
Accordingly, after the projection apparatus suspends projection, the control device may obtain the mixed reality image captured by the image capturing apparatus, for example, the control device instructs the image capturing apparatus to capture an image of the mixed reality scene.
Further, in order to ensure that the projection device performs normal and continuous projection, after the control device obtains the mixed reality image corresponding to the mixed reality scene collected by the image collection device, the control device may further send a projection resumption instruction to the projection device, where the projection resumption instruction is used to instruct the projection device to resume projection onto the first display surface of the transparent holographic display screen.
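The pause-capture-resume sequence above can be sketched with stub device classes. These classes and method names are illustrative stand-ins for the control device's instructions to the projection and image acquisition devices, not APIs from the patent.

```python
# Hedged sketch of the pause-projection / capture / resume-projection flow.
class Projector:
    def __init__(self):
        self.projecting = True
        self.log = []
    def pause(self):                       # "pause projection instruction"
        self.projecting = False
        self.log.append("pause")
    def resume(self):                      # "projection resumption instruction"
        self.projecting = True
        self.log.append("resume")

class Camera:
    def capture(self, projector):
        # Capture only while projection is paused, so no strobing occurs.
        assert not projector.projecting, "capture requires paused projection"
        return "mixed_reality_image"

def capture_without_strobe(projector, camera):
    projector.pause()
    try:
        return camera.capture(projector)
    finally:
        projector.resume()  # always restore continuous projection
```

The `try/finally` mirrors the requirement that normal, continuous projection is restored after the mixed reality image has been obtained.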
The application also provides an image display device corresponding to the image display method.
As shown in fig. 6, which shows a schematic diagram of a composition structure of an image display apparatus of the present application, the apparatus of the present embodiment can be applied to a control device in a holographic projection system, which includes: the control device, the transparent holographic display screen, the projection device and the image acquisition device are arranged on one side of the first display surface of the transparent holographic display screen.
The device includes:
an instruction obtaining unit 601, configured to obtain a scene freeze instruction, where the scene freeze instruction is used to request to present a freeze image of a real person in a mixed reality scene, where the mixed reality scene includes a virtual scene presented by the transparent holographic display screen and at least one real person located on a second display surface side of the transparent holographic display screen;
an image obtaining unit 602, configured to obtain a mixed reality image corresponding to the mixed reality scene acquired by the image acquisition apparatus;
an image extraction unit 603 configured to extract a user image of at least one user belonging to a real person from the mixed reality image;
an image overlaying unit 604, configured to overlay the user image of the at least one user onto a virtual scene image to be projected, so as to obtain a real stop-motion image to be projected;
and an image projection unit 605, configured to project the realistic stop motion image to the first display surface of the transparent holographic display screen through the projection device.
In one possible implementation, the holographic projection system further includes: the wearable positioning device is worn on the body of the real person and is connected with the control equipment;
the device further comprises: the position obtaining unit is used for obtaining the position information of the real person positioned by the wearable positioning device;
the image extraction unit includes:
an image extraction subunit, configured to determine, based on the position information of the real person, a user image of at least one user belonging to the real person in the mixed reality image;
extracting a user image of the at least one user from the mixed reality image.
In yet another possible implementation, the holographic projection system further includes: the stroboscopic control device is connected with the control equipment;
the apparatus may further include: and the stroboscopic control unit is used for sending a synchronous control instruction to the stroboscopic control device before the image acquisition unit acquires the mixed reality image, and the synchronous control instruction is used for indicating the stroboscopic control device to control the image acquisition device and the projection device to be at the same refresh rate.
In yet another possible implementation manner, the apparatus further includes: and the pause control unit is used for sending a pause projection instruction to the projection device before the image obtaining unit obtains the mixed reality image, and the pause projection instruction is used for instructing the projection device to stop projecting to the first display surface of the transparent holographic display screen.
Further, the apparatus further comprises: and the projection recovery unit is used for sending a projection recovery instruction to the projection device after the image obtaining unit obtains the mixed reality image, and the projection recovery instruction is used for instructing the projection device to recover the projection of the first display surface of the transparent holographic display screen.
In yet another possible implementation, the image obtaining unit may include:
the candidate obtaining unit is used for obtaining at least one frame of candidate mixed reality image corresponding to the mixed reality scene recently collected by the image collecting device;
and the image determining unit is used for determining one frame of mixed reality image with the optimal picture quality in the at least one frame of candidate mixed reality image.
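Selecting the candidate frame with the best picture quality can be sketched with a simple sharpness proxy. The intensity-variance metric below is an illustrative stand-in (a production system might use variance of the Laplacian or another focus measure); the patent does not specify the quality metric.

```python
# Hedged sketch: pick, among recently captured candidate mixed reality
# frames, the one with the best picture quality, using intensity variance
# as a crude sharpness/contrast proxy.
def sharpness(frame):
    """frame: 2D list of grayscale pixel values. Higher variance is
    treated as sharper (a blurred or strobed frame has low contrast)."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) ** 2 for p in pixels) / len(pixels)

def best_frame(candidates):
    return max(candidates, key=sharpness)
```

A uniformly gray (washed-out) frame scores zero, so any frame with visible detail is preferred over it.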
In yet another possible implementation manner, the image superimposing unit includes:
a first position determination unit for determining the relative position of the user with respect to the transparent holographic display screen;
the second position determining unit is used for determining a candidate superposition position corresponding to the relative position in the virtual scene image to be projected;
and if at least part of the object image of a virtual object is present at the candidate superposition position in the virtual scene image, determining a target superposition position that is adjacent to the candidate superposition position and at which no part of the virtual object's image is present, and superimposing the user image of the user at the target superposition position.
In yet another possible implementation manner, the image superimposing unit includes:
the proportion determining subunit is configured to determine an actual proportion of the user with respect to the transparent holographic display screen based on a user image of the user in the mixed reality image and a virtual image area corresponding to the transparent holographic display screen;
and the image superposition subunit is used for superposing the user image of the at least one user to the virtual scene image to be projected according to the actual proportion of the user image of the user in the virtual scene image to be projected.
In another aspect, the present application further provides an electronic device. Fig. 7 shows a schematic structural diagram of the electronic device, which may be the aforementioned control device and may take many specific forms. The electronic device includes at least a memory 701 and a processor 702;
wherein the processor 702 is configured to perform the image display method of any of the above embodiments.
The memory 701 is used to store the programs needed for the processor to perform its operations.
It is to be understood that the electronic device may further include a display unit 703 and an input unit 704.
Of course, the electronic device may have more or less components than those shown in fig. 7, which is not limited thereto.
In another aspect, the present application further provides a computer-readable storage medium having at least one instruction, at least one program, a set of codes, or a set of instructions stored therein, which is loaded and executed by a processor to implement the image display method according to any one of the above embodiments.
The present application also proposes a computer program comprising computer instructions stored in a computer readable storage medium. The computer program is for performing the image display method as in any one of the above embodiments when run on an electronic device.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. Meanwhile, the features described in the embodiments of the present specification may be replaced or combined with each other, so that those skilled in the art can implement or use the present application. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An image display method applied to a control device in a holographic projection system, the holographic projection system comprising: the control equipment, the transparent holographic display screen, the projection device and the image acquisition device are arranged on one side of a first display surface of the transparent holographic display screen, and the method comprises the following steps:
obtaining a scene freeze instruction, wherein the scene freeze instruction is used for requesting to present a freeze image of a real character in a mixed reality scene, and the mixed reality scene comprises a virtual scene presented by the transparent holographic display screen and at least one real character positioned on one side of a second display surface of the transparent holographic display screen;
acquiring a mixed reality image corresponding to the mixed reality scene acquired by the image acquisition device;
extracting a user image of at least one user belonging to a real person from the mixed reality image;
superposing the user image of the at least one user to the virtual scene image to be projected to obtain a real stop-motion image to be projected;
and projecting the real stop-motion image to a first display surface of the transparent holographic display screen through the projection device.
2. The method of claim 1, the holographic projection system further comprising: the wearable positioning device is worn on the body of the real person and is connected with the control equipment;
the method further comprises the following steps:
obtaining the position information of the real person positioned by the wearable positioning device;
the extracting of the user image of at least one user belonging to a real person from the mixed reality image includes:
determining a user image of at least one user belonging to a real person in the mixed reality image based on the position information of the real person;
extracting a user image of the at least one user from the mixed reality image.
3. The method of claim 1, the holographic projection system further comprising: the stroboscopic control device is connected with the control equipment;
before the obtaining of the mixed reality image corresponding to the mixed reality scene acquired by the image acquisition device, the method further includes:
and sending a synchronous control instruction to the stroboscopic control device, wherein the synchronous control instruction is used for indicating the stroboscopic control device to control the image acquisition device and the projection device to be at the same refresh rate.
4. The method according to claim 1, further comprising, before the obtaining of the mixed reality image corresponding to the mixed reality scene acquired by the image acquisition device:
sending a projection suspension instruction to the projection device, wherein the projection suspension instruction is used for instructing the projection device to stop projecting to the first display surface of the transparent holographic display screen.
5. The method of claim 4, further comprising, after the obtaining of the mixed reality image corresponding to the mixed reality scene acquired by the image acquisition device:
and sending a projection recovery instruction to the projection device, wherein the projection recovery instruction is used for instructing the projection device to recover the projection to the first display surface of the transparent holographic display screen.
6. The method of claim 1, wherein the obtaining of the mixed reality image corresponding to the mixed reality scene acquired by the image acquisition device comprises:
obtaining at least one frame of candidate mixed reality image corresponding to the mixed reality scene recently acquired by the image acquisition device;
and determining a frame of mixed reality image with the optimal picture quality in the at least one frame of candidate mixed reality image.
7. The method of claim 1, wherein the superimposing of the user image of the at least one user onto the virtual scene image to be projected to obtain the real-person freeze-frame image to be projected comprises:
determining a relative position of the user with respect to the transparent holographic display screen;
determining a candidate superposition position in the virtual scene image to be projected that corresponds to the relative position; and
if at least a partial object image of a virtual object occupies the candidate superposition position in the virtual scene image, determining a target superposition position adjacent to the candidate superposition position that is free of any partial image of a virtual object, and superimposing the user image of the user at the target superposition position.
8. The method of claim 1, wherein the superimposing of the user image of the at least one user onto the virtual scene image to be projected comprises:
determining an actual size ratio of the user to the transparent holographic display screen based on the user image of the user in the mixed reality image and the virtual image area corresponding to the transparent holographic display screen; and
superimposing the user image of the at least one user onto the virtual scene image to be projected at that actual ratio.
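The proportion rule of claim 8 amounts to: measure the user's height relative to the screen's apparent height in the captured frame, and preserve that ratio when compositing into the projected image. A one-line sketch of that arithmetic (parameter names are assumptions for illustration):

```python
def overlay_height(user_bbox_h, screen_bbox_h, target_screen_h):
    """Scale the extracted user image so that the user-to-screen height
    ratio observed in the camera frame is preserved in the projected
    virtual scene image.

    user_bbox_h    -- user's bounding-box height in the captured frame (px)
    screen_bbox_h  -- screen's apparent height in the captured frame (px)
    target_screen_h -- height of the virtual scene image to project (px)
    """
    ratio = user_bbox_h / screen_bbox_h
    return round(ratio * target_screen_h)
```

For example, a user spanning half the screen's height in the capture would be rendered at half the projected image's height.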
9. An image display apparatus applied to a control device in a holographic projection system, the holographic projection system comprising: the control device, a transparent holographic display screen, a projection device and an image acquisition device, wherein the projection device and the image acquisition device are located on the side of a first display surface of the transparent holographic display screen; the apparatus comprises:
an instruction obtaining unit, configured to obtain a scene freeze instruction, the scene freeze instruction requesting presentation of a freeze-frame image of a real person in a mixed reality scene, wherein the mixed reality scene comprises a virtual scene presented by the transparent holographic display screen and at least one real person located on the side of a second display surface of the transparent holographic display screen;
an image obtaining unit, configured to obtain a mixed reality image corresponding to the mixed reality scene captured by the image acquisition device;
an image extraction unit, configured to extract, from the mixed reality image, a user image of at least one user belonging to a real person;
an image superposition unit, configured to superimpose the user image of the at least one user onto a virtual scene image to be projected, obtaining a real-person freeze-frame image to be projected; and
an image projection unit, configured to project the real-person freeze-frame image onto the first display surface of the transparent holographic display screen through the projection device.
10. The apparatus of claim 9, wherein the holographic projection system further comprises a wearable positioning device worn by the real person and connected to the control device;
the apparatus further comprises a position obtaining unit, configured to obtain position information of the real person as located by the wearable positioning device; and
the image extraction unit comprises an image extraction subunit, configured to determine, based on the position information of the real person, a user image of at least one user belonging to a real person in the mixed reality image, and to extract the user image of the at least one user from the mixed reality image.
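Claim 10's position-guided extraction can be pictured, in a simplified one-axis form, as mapping the wearable tracker's horizontal position along the screen into a pixel column of the camera frame and cropping a window around it. A sketch under those assumptions (the linear position-to-pixel mapping and the fixed crop half-width are illustrative, not taken from the patent):

```python
def roi_from_position(pos_x_m, screen_w_m, frame_w, frame_h, half_w=80):
    """Map a tracked horizontal position (meters from the screen's left
    edge) to a pixel column in the camera frame, and return a crop
    window (left, top, right, bottom) centered on that column."""
    cx = int(pos_x_m / screen_w_m * frame_w)   # assumes camera frames the whole screen
    left = max(0, cx - half_w)
    right = min(frame_w, cx + half_w)
    return left, 0, right, frame_h
```

In practice the crop would seed a person-segmentation step rather than serve as the final extraction, but the tracker position plays the same role: disambiguating which region of the mixed reality image belongs to which real person.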
CN202111037944.2A 2021-09-06 2021-09-06 Image display method and device Pending CN113706720A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111037944.2A CN113706720A (en) 2021-09-06 2021-09-06 Image display method and device


Publications (1)

Publication Number Publication Date
CN113706720A true CN113706720A (en) 2021-11-26

Family

ID=78660384

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111037944.2A Pending CN113706720A (en) 2021-09-06 2021-09-06 Image display method and device

Country Status (1)

Country Link
CN (1) CN113706720A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114005168A (en) * 2021-12-31 2022-02-01 北京瑞莱智慧科技有限公司 Physical world confrontation sample generation method and device, electronic equipment and storage medium
CN114333031A (en) * 2021-12-31 2022-04-12 北京瑞莱智慧科技有限公司 Vulnerability detection method and device of living body detection model and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105929962A (en) * 2016-05-06 2016-09-07 四川大学 360-DEG holographic real-time interactive method
CN108205823A (en) * 2017-12-29 2018-06-26 妫汭智能科技(上海)有限公司 MR holographies vacuum experiences shop and experiential method
CN110013678A (en) * 2019-05-09 2019-07-16 浙江棱镜文化传媒有限公司 Immersion interacts panorama holography theater performance system, method and application
CN110673735A (en) * 2019-09-30 2020-01-10 长沙自由视像信息科技有限公司 Holographic virtual human AR interaction display method, device and equipment
US20210271075A1 (en) * 2018-08-29 2021-09-02 Sony Corporation Information processing apparatus, information processing method, and program



Similar Documents

Publication Publication Date Title
JP7132730B2 (en) Information processing device and information processing method
CN111540055B (en) Three-dimensional model driving method, three-dimensional model driving device, electronic equipment and storage medium
US9384588B2 (en) Video playing method and system based on augmented reality technology and mobile terminal
CN108668050B (en) Video shooting method and device based on virtual reality
CN113706720A (en) Image display method and device
CN113891060B (en) Free viewpoint video reconstruction method, play processing method, device and storage medium
KR20140082610A (en) Method and apaaratus for augmented exhibition contents in portable terminal
CN112653848B (en) Display method and device in augmented reality scene, electronic equipment and storage medium
US9773523B2 (en) Apparatus, method and computer program
CN102945563A (en) Showing and interacting system and method for panoramic videos
US20160381290A1 (en) Apparatus, method and computer program
JP7476375B2 (en) Information processing device, information processing method, and program
Reimat et al. Cwipc-sxr: Point cloud dynamic human dataset for social xr
CN105872521A (en) 2D video playing method and device
CN113676692A (en) Video processing method and device in video conference, electronic equipment and storage medium
CN113382224B (en) Interactive handle display method and device based on holographic sand table
CN113259544B (en) Remote interactive holographic demonstration system and method
CN109840948B (en) Target object throwing method and device based on augmented reality
CN114302234B (en) Quick packaging method for air skills
CN113315885B (en) Holographic studio and system for remote interaction
CN113875227A (en) Information processing apparatus, information processing method, and program
JP2020135290A (en) Image generation device, image generation method, image generation system, and program
CN115175005A (en) Video processing method and device, electronic equipment and storage medium
CN113938752A (en) Processing method and device
CN114625468A (en) Augmented reality picture display method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination