CN114793274A - Data fusion method and device based on video projection - Google Patents

Data fusion method and device based on video projection

Info

Publication number
CN114793274A
CN114793274A
Authority
CN
China
Prior art keywords
frame image
projection
target area
users
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111415533.2A
Other languages
Chinese (zh)
Inventor
王红光 (Wang Hongguang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Mengtebo Intelligent Robot Technology Co ltd
Original Assignee
Beijing Mengtebo Intelligent Robot Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Mengtebo Intelligent Robot Technology Co ltd filed Critical Beijing Mengtebo Intelligent Robot Technology Co ltd
Priority to CN202111415533.2A
Publication of CN114793274A
Pending legal-status Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3179Video signal processing therefor

Abstract

The embodiment of the disclosure provides a data fusion method and device based on video projection. The method includes: projecting a first projection frame image to a target area, and acquiring a first operation frame image in which a plurality of users in the target area operate on the first projection frame image; respectively determining the data change content of the first operation frame image after each single user's operation, fusing the data change contents corresponding to the operations of the plurality of users, and generating a second projection frame image; and projecting the second projection frame image to the target area to acquire a second operation frame image in which the plurality of users in the target area operate on the second projection frame image. In this way, the number of office devices required can be reduced, office costs are lowered, and the intuitive display of the joint office progress improves office efficiency.

Description

Data fusion method and device based on video projection
Technical Field
The present disclosure relates to the field of video projection technologies, and in particular, to a data fusion method and apparatus based on video projection.
Background
As the amount of information grows, the ways of presenting it have become more diverse. Advances in information technology have made virtual reality and augmented reality practical. Before such technology emerged, the real world around us could hardly present information beyond its own time and space. Through virtual technology and simulation, however, arbitrary information can be superimposed onto the real world, so that users perceive rich visual information and a sensory experience beyond reality. Virtual reality aims to seamlessly interface the information of the physical world with that of the virtual world, realizing a one-to-one mapping from atoms to bits. Most current augmented reality projects are implemented with head-mounted displays and glasses. Another augmented reality solution uses a smartphone: the image captured by the camera is called up and the information to be presented is rendered onto it.
In an existing office environment, users typically operate separately at the front end, and a background server then updates data according to the operation instructions. For a joint office scene in particular, this occupies a large amount of office space and requires a large amount of office equipment, increasing office costs; it also lacks an intuitive display of the joint office progress, which reduces office efficiency and degrades the user experience.
Disclosure of Invention
In order to solve the technical problems in the prior art, the embodiments of the present disclosure provide a data fusion method and apparatus based on video projection.
In a first aspect of the present disclosure, a data fusion method based on video projection is provided, including:
projecting a first projection frame image to a target area, and acquiring a first operation frame image in which a plurality of users in the target area operate on the first projection frame image;
respectively determining the data change content of the first operation frame image after each single user's operation, and fusing the data change contents corresponding to the operations of the plurality of users to generate a second projection frame image; and
projecting the second projection frame image to the target area, and acquiring a second operation frame image in which the plurality of users in the target area operate on the second projection frame image.
In some embodiments, the method further comprises:
receiving projection data content sent by a target user, and projecting the projection data content to the target area as the first projection frame image; or
acquiring a real object image in the target area, obtaining corresponding projection content from a cloud server according to the real object image, and projecting the projection content to the target area as the first projection frame image.
In some embodiments, the acquiring a first operation frame image in which a plurality of users in the target area operate on the first projection frame image includes:
acquiring a first operation frame image in which a plurality of users operate on a virtual projection image in the target area.
In some embodiments, the virtual projection image includes a virtual input device or model projection content;
the operation on the virtual projection image in the target area includes:
performing a key operation on the virtual input device in the target area, or performing a gesture operation on the model projection content in the target area.
In some embodiments, the acquiring a first operation frame image in which a plurality of users in the target area operate on the first projection frame image includes:
acquiring a first operation frame image in which a plurality of users operate on a physical input device in the target area.
In some embodiments, the respectively determining the data change content of the first operation frame image after each single user's operation includes:
for a single user among the plurality of users operating on the first projection frame image, determining a corresponding operation instruction according to the first operation frame image, and determining corresponding change data in the first projection frame image according to the operation instruction.
In some embodiments, the fusing the data change contents corresponding to the operations of the plurality of users to generate a second projection frame image includes:
summarizing the change data corresponding to the operation instructions of the plurality of users, modifying the corresponding areas in the first projection frame image, and generating the second projection frame image.
In a second aspect of the present disclosure, there is provided a video projection-based data fusion apparatus, including:
a first operation frame image acquisition module, configured to project a first projection frame image to a target area and acquire a first operation frame image in which a plurality of users in the target area operate on the first projection frame image, and to project a second projection frame image to the target area and acquire a second operation frame image in which the plurality of users in the target area operate on the second projection frame image; and
a data processing module, configured to respectively determine the data change content of the first operation frame image after each single user's operation, and fuse the data change contents corresponding to the operations of the plurality of users to generate the second projection frame image.
According to a third aspect of the present disclosure, an electronic device is provided. The electronic device includes: a memory having a computer program stored thereon and a processor implementing the method as described in the first aspect above when executing the program.
According to a fourth aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the method of the first aspect as described above.
It should be understood that this summary is not intended to identify key or essential features of the embodiments of the present disclosure, nor is it intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
With the data fusion method based on video projection, the number of office devices required can be reduced, office costs are lowered, and the joint office progress can be displayed intuitively, improving office efficiency.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. The accompanying drawings, which are incorporated in and constitute a part of this specification, are included to provide a further understanding of the present disclosure; like reference numerals refer to like or similar elements throughout the several views. In the drawings:
FIG. 1 shows a flow diagram of a video projection-based data fusion method according to an embodiment of the present disclosure;
FIG. 2 shows a block diagram of a video projection-based data fusion apparatus according to an embodiment of the present disclosure;
FIG. 3 illustrates a block diagram of an exemplary electronic device capable of implementing embodiments of the present disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments will be described clearly and completely with reference to the drawings; obviously, the described embodiments are some, but not all, embodiments of the present disclosure. All other embodiments obtained by a person skilled in the art from the embodiments disclosed herein without inventive effort fall within the scope of the present disclosure.
In addition, the term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. The character "/" herein generally indicates an "or" relationship between the associated objects before and after it.
The data fusion method based on video projection of the present disclosure can be applied to joint office work. The content to be displayed is projected by a projection device to a target area (for example, onto a transparent screen covered with a holographic film), so that everyone can see the projection content in the target area and operate on it directly or indirectly; the content projected to the target area is then updated immediately according to the users' operation instructions. In this way the joint office progress is displayed intuitively, improving the user experience.
Specifically, FIG. 1 shows a flowchart of a data fusion method based on video projection according to an embodiment of the present disclosure. The method of this embodiment may include the following steps:
s101: the method comprises the steps of projecting a first projection frame image to a target area, and acquiring a first operation frame image which is operated by a plurality of users in the target area aiming at the first projection frame image.
In this embodiment, the projection content may be obtained in advance and projected to a target area for multi-user operation in a joint office scene. During projection, an image may be projected to the target area at a preset time interval for the users to operate; after the users operate, the image projected to the target area is updated according to their operations, and this cycle repeats so that the data corresponding to the users' operations are continuously fused. One iteration of the cycle projects a first projection frame image to the target area and acquires a first operation frame image in which a plurality of users in the target area operate on the first projection frame image. The method of this embodiment may rely on a projection device and an image acquisition device: the projection device projects the content to the target area for the users to operate; the image acquisition device captures image frames of the users operating on the projected content; these frames are recognized and analyzed to determine each user's operation instruction and the projection content it targets; the data of the targeted objects are updated according to the instructions of the plurality of users simultaneously; and the resulting next frame image is projected to the target area.
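For illustration only, the projection-capture-fusion cycle described above can be sketched in Python as follows; the device handles (projector, camera) and the helpers recognize_operations and apply_changes are hypothetical placeholders standing in for the projection device, the image acquisition device, the recognition step, and the data update step, and are not part of this disclosure:

    import time

    FRAME_INTERVAL = 1.0 / 128  # the embodiment projects up to 128 frames per second

    def fusion_loop(projector, camera, recognize_operations, apply_changes, frame):
        # One pass: project the current frame, capture the users' operation frame
        # image, determine each user's instruction, and fuse the data changes.
        while True:
            projector.project(frame)                # project to the target area
            operation_image = camera.capture()      # operation frame image of all users
            instructions = recognize_operations(operation_image)  # [(user, instruction), ...]
            for user, instruction in instructions:
                frame = apply_changes(frame, user, instruction)   # fuse per-user changes
            time.sleep(FRAME_INTERVAL)

Each iteration corresponds to one S101-S103 cycle: the frame projected at the start of the loop body plays the role of the first projection frame image, and the fused frame carried into the next iteration plays the role of the second projection frame image.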
As an optional embodiment of the present disclosure, projection data content sent by a target user may be received and projected to the target area as the first projection frame image; alternatively, a real object image in the target area may be acquired, and corresponding projection content may be obtained from a cloud server according to the real object image and projected to the target area as the first projection frame image. For example, a data link sent by a user may be received, the projection data content fetched from the cloud by following the link, and the content projected to the target area as the initial projection content for the users to operate. Alternatively, a document sent by the user may be received and projected to the target area as the initial projection content. Or a real object image provided by a user and located in the target area may be captured by the image acquisition device, and a model of the corresponding real object obtained from the cloud server according to the real object image and projected to the target area for the users to operate.
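A minimal sketch of the two initial-content paths, under the assumption of hypothetical cloud helpers identify and fetch_model (the disclosure does not specify the cloud interface):

    def initial_projection_content(user_content=None, object_image=None, cloud=None):
        # Path 1: content sent directly by the target user (a document, or data
        # fetched by following a link the user sent).
        if user_content is not None:
            return user_content
        # Path 2: recognize the real object captured in the target area and fetch
        # the corresponding model from the cloud server.
        return cloud.fetch_model(cloud.identify(object_image))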
S102: respectively determining the data change content of the first operation frame image after each single user's operation, and fusing the data change contents corresponding to the operations of the plurality of users to generate a second projection frame image.
In this embodiment, after the first projection frame image is projected to the target area and the first operation frame image in which the plurality of users operate on it is obtained, the data change content produced by each single user's operation may be determined respectively, and the data change contents corresponding to the operations of the plurality of users may be fused to generate a second projection frame image.
In this embodiment, the operations of the plurality of users may be performed on a virtual projection image within the target area. For example, a virtual input device (e.g., a virtual keyboard and mouse) is projected into the target area, and a user operates it as usual; an image of the user operating the virtual input device (e.g., a click on the virtual keyboard or mouse) is captured by the image acquisition device and the corresponding operation instruction is determined. The operation instructions of the plurality of users are then fused, the data change content in the first projection frame image is determined, and the second projection frame image is generated.
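One plausible way to turn a captured operation frame image into key operation instructions is to test detected fingertip positions against the projected key regions; this sketch assumes an upstream fingertip detector and known key geometry, neither of which is specified by the disclosure:

    def detect_key_presses(fingertips, key_regions):
        # fingertips: [(user_id, x, y), ...] detected in the operation frame image
        # key_regions: {key_name: (x0, y0, x1, y1)} in projection coordinates
        presses = []
        for user_id, x, y in fingertips:
            for key, (x0, y0, x1, y1) in key_regions.items():
                if x0 <= x <= x1 and y0 <= y <= y1:
                    presses.append((user_id, key))
        return presses

The same detection would apply unchanged to the unconnected physical keyboard and mouse described below, since in both cases the instruction is recovered from the image rather than from an electrical signal.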
When the projection content is a virtual model, the gesture operations of the plurality of users can be recognized according to predefined gesture rules, the corresponding operation instructions determined respectively, the data change content in the first projection frame image determined, and the second projection frame image generated. For example, suppose the projection content is a vehicle model, user A's gesture means enlarge, user B's gesture means rotate, and user C's gesture means open the door. The gestures of the three users can be identified in the first operation frame image and the corresponding operation instructions determined; the data change content in the first projection frame image is then that the vehicle model is enlarged, rotated, and its door opened, and the vehicle model after these changes constitutes the second projection frame image.
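Continuing the vehicle example, the fusion step can be read as reducing each user's recognized gesture to an update of one shared model state and applying all updates before rendering the next frame. A sketch under assumed gesture labels and an assumed state layout:

    def fuse_gestures(model_state, gestures):
        # model_state: e.g. {"scale": 1.0, "rotation": 0, "door_open": False}
        # gestures: [(user_id, gesture), ...], one recognized gesture per user
        for _user, gesture in gestures:
            if gesture == "enlarge":
                model_state["scale"] *= 1.5
            elif gesture == "rotate":
                model_state["rotation"] = (model_state["rotation"] + 90) % 360
            elif gesture == "open_door":
                model_state["door_open"] = True
        return model_state  # rendering this state yields the second projection frame image

    # e.g. fuse_gestures({"scale": 1.0, "rotation": 0, "door_open": False},
    #                    [("A", "enlarge"), ("B", "rotate"), ("C", "open_door")])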
Of course, in some embodiments, first operation frame images in which a plurality of users operate on a physical input device in the target area may be acquired. For example, a physical keyboard and mouse may be connected to no device at all and serve only to give the users a familiar feel of operation: the changes to the projection content caused by the users pressing the keyboard or clicking the mouse are conveyed by image recognition. That is, the data change content in the first projection frame image is determined by capturing images of the users operating the physical input device, not from any electrical signal generated by the device, and the second projection frame image is generated after the data change content is determined.
S103: projecting the second projection frame image to the target area, and acquiring a second operation frame image in which the plurality of users in the target area operate on the second projection frame image.
In this embodiment, after the second projection frame image is generated, it may be projected to the target area to acquire a second operation frame image in which the plurality of users in the target area operate on the second projection frame image, and the cycle continues.
The data fusion method based on video projection can thus reduce the number of office devices required, lower office costs, and improve office efficiency by displaying the joint office progress intuitively.
In the embodiment of the present disclosure, 128 frames can be projected per second (a frame interval of roughly 7.8 ms), so on the one hand the users' operations on the projected content can be captured almost instantly, and on the other hand the projected content can be updated quickly, improving office efficiency.
It should be noted that for simplicity of description, the above-mentioned method embodiments are described as a series of acts, but those skilled in the art should understand that the present disclosure is not limited by the described order of acts, as some steps may be performed in other orders or simultaneously according to the present disclosure. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that acts and modules referred to are not necessarily required by the disclosure.
The above is a description of embodiments of the method, and the embodiments of the apparatus are further described below.
Fig. 2 is a block diagram illustrating a structure of a data fusion apparatus based on video projection according to an embodiment of the present disclosure. The data fusion device based on video projection of this embodiment includes:
a first operation frame image acquisition module 201, configured to project a first projection frame image to a target area and acquire a first operation frame image in which a plurality of users in the target area operate on the first projection frame image, and to project a second projection frame image to the target area and acquire a second operation frame image in which the plurality of users in the target area operate on the second projection frame image; and
a data processing module 202, configured to respectively determine the data change content of the first operation frame image after each single user's operation, and fuse the data change contents corresponding to the operations of the plurality of users to generate the second projection frame image.
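The division of labor between the two modules might be sketched as follows; class and method names are illustrative, and the projector, camera, recognizer, and updater are the same hypothetical handles as in the method sketch above:

    class OperationFrameImageAcquisitionModule:
        # Projects a projection frame image and captures the users' operation frame image.
        def __init__(self, projector, camera):
            self.projector = projector
            self.camera = camera

        def acquire(self, projection_frame):
            self.projector.project(projection_frame)  # project to the target area
            return self.camera.capture()              # users' operation frame image

    class DataProcessingModule:
        # Determines each user's data changes and fuses them into the next frame.
        def __init__(self, recognize_operations, apply_changes):
            self.recognize = recognize_operations
            self.apply = apply_changes

        def fuse(self, projection_frame, operation_frame):
            next_frame = projection_frame
            for user, instruction in self.recognize(operation_frame):
                next_frame = self.apply(next_frame, user, instruction)
            return next_frame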
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, reference may be made to the corresponding processes in the foregoing method embodiment for the specific working process of the described modules; details are not repeated here.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 3 shows a schematic block diagram of an electronic device 300 that may be used to implement embodiments of the present disclosure. As shown, the device 300 includes a Central Processing Unit (CPU) 301, which can perform various appropriate actions and processes according to computer program instructions stored in a Read-Only Memory (ROM) 302 or loaded from a storage unit 308 into a Random Access Memory (RAM) 303. The RAM 303 can also store various programs and data necessary for the operation of the device 300. The CPU 301, the ROM 302, and the RAM 303 are connected to one another via a bus 304. An input/output (I/O) interface 305 is also connected to the bus 304.
Various components in device 300 are connected to I/O interface 305, including: an input unit 306 such as a keyboard, a mouse, or the like; an output unit 307 such as various types of displays, speakers, and the like; a storage unit 308 such as a magnetic disk, optical disk, or the like; and a communication unit 309 such as a network card, modem, wireless communication transceiver, etc. The communication unit 309 allows the device 300 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The methods and processes described above may be implemented as a computer program tangibly embodied in a machine-readable medium, such as the storage unit 308. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 300 via the ROM 302 and/or the communication unit 309. When the computer program is loaded into the RAM 303 and executed by the CPU 301, one or more steps of the method described above may be performed. Alternatively, in other embodiments, the CPU 301 may be configured to perform the above-described method in any other suitable manner (e.g., by means of firmware).
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGA), Application Specific Integrated Circuits (ASIC), Application Specific Standard Products (ASSP), Systems on a Chip (SOC), Complex Programmable Logic Devices (CPLD), and the like.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowcharts and/or block diagrams to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (10)

1. A data fusion method based on video projection, characterized by comprising the following steps:
projecting a first projection frame image to a target area, and acquiring a first operation frame image in which a plurality of users in the target area operate on the first projection frame image;
respectively determining the data change content of the first operation frame image after each single user's operation, and fusing the data change contents corresponding to the operations of the plurality of users to generate a second projection frame image; and
projecting the second projection frame image to the target area, and acquiring a second operation frame image in which the plurality of users in the target area operate on the second projection frame image.
2. The data fusion method of claim 1, further comprising:
receiving projection data content sent by a target user, and projecting the projection data content to the target area as the first projection frame image; or
acquiring a real object image in the target area, obtaining corresponding projection content from a cloud server according to the real object image, and projecting the projection content to the target area as the first projection frame image.
3. The data fusion method according to claim 2, wherein the acquiring a first operation frame image in which a plurality of users in the target area operate on the first projection frame image comprises:
acquiring a first operation frame image in which a plurality of users operate on a virtual projection image in the target area.
4. The data fusion method of claim 3, wherein the virtual projection image includes a virtual input device or model projection content;
the operation on the virtual projection image in the target area comprises:
performing a key operation on the virtual input device in the target area, or performing a gesture operation on the model projection content in the target area.
5. The data fusion method according to claim 2, wherein the acquiring a first operation frame image in which a plurality of users in the target area operate on the first projection frame image comprises:
acquiring a first operation frame image in which a plurality of users operate on a physical input device in the target area.
6. The data fusion method according to claim 5, wherein the respectively determining the data change content of the first operation frame image after each single user's operation comprises:
for a single user among the plurality of users operating on the first projection frame image, determining a corresponding operation instruction according to the first operation frame image, and determining corresponding change data in the first projection frame image according to the operation instruction.
7. The data fusion method according to claim 6, wherein the fusing the data change contents corresponding to the operations of the plurality of users to generate a second projection frame image comprises:
summarizing the change data corresponding to the operation instructions of the plurality of users, modifying the corresponding areas in the first projection frame image, and generating the second projection frame image.
8. A data fusion device based on video projection, characterized by comprising:
an image acquisition module, configured to project a first projection frame image to a target area and acquire a first operation frame image in which a plurality of users in the target area operate on the first projection frame image, and to project a second projection frame image to the target area and acquire a second operation frame image in which the plurality of users in the target area operate on the second projection frame image; and
a data processing module, configured to respectively determine the data change content of the first operation frame image after each single user's operation, and fuse the data change contents corresponding to the operations of the plurality of users to generate the second projection frame image.
9. An electronic device, comprising a memory and a processor, wherein the memory stores a program, and the processor, when executing the program, implements the data fusion method based on video projection according to any one of claims 1 to 7.
10. A computer-readable storage medium on which a program is stored, wherein the program, when executed by a processor, implements the data fusion method based on video projection according to any one of claims 1 to 7.
CN202111415533.2A 2021-11-25 2021-11-25 Data fusion method and device based on video projection Pending CN114793274A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111415533.2A CN114793274A (en) 2021-11-25 2021-11-25 Data fusion method and device based on video projection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111415533.2A CN114793274A (en) 2021-11-25 2021-11-25 Data fusion method and device based on video projection

Publications (1)

Publication Number Publication Date
CN114793274A (en) 2022-07-26

Family

ID=82459910

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111415533.2A Pending CN114793274A (en) 2021-11-25 2021-11-25 Data fusion method and device based on video projection

Country Status (1)

Country Link
CN (1) CN114793274A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009083259A1 (en) * 2008-01-03 2009-07-09 Wunderworks B.V. Multi-user collaboration system and method
US20100188478A1 (en) * 2009-01-28 2010-07-29 Robinson Ian N Methods and systems for performing visual collaboration between remotely situated participants
CN102695032A (en) * 2011-02-10 2012-09-26 索尼公司 Information processing apparatus, information sharing method, program, and terminal device
US20150193979A1 (en) * 2014-01-08 2015-07-09 Andrej Grek Multi-user virtual reality interaction environment
CN108028871A (en) * 2015-09-11 2018-05-11 华为技术有限公司 The more object augmented realities of unmarked multi-user in mobile equipment
CN108205823A (en) * 2017-12-29 2018-06-26 妫汭智能科技(上海)有限公司 MR holographies vacuum experiences shop and experiential method
US20190251884A1 (en) * 2018-02-14 2019-08-15 Microsoft Technology Licensing, Llc Shared content display with concurrent views
US20200128106A1 (en) * 2018-05-07 2020-04-23 EolianVR, Inc. Device and content agnostic, interactive, collaborative, synchronized mixed reality system and method
CN111277893A (en) * 2020-02-12 2020-06-12 北京字节跳动网络技术有限公司 Video processing method and device, readable medium and electronic equipment

Similar Documents

Publication Publication Date Title
CN109829981B (en) Three-dimensional scene presentation method, device, equipment and storage medium
CN111414225B (en) Three-dimensional model remote display method, first terminal, electronic device and storage medium
CN107977141B (en) Interaction control method and device, electronic equipment and storage medium
CN107451272B (en) Information display method, medium, device and computing equipment
CN113220118B (en) Virtual interface display method, head-mounted display device and computer readable medium
US10921796B2 (en) Component information retrieval device, component information retrieval method, and program
US11354875B2 (en) Video blending method, apparatus, electronic device and readable storage medium
CN113359995A (en) Man-machine interaction method, device, equipment and storage medium
CN110874172A (en) Method, device, medium and electronic equipment for amplifying APP interface
CN113903210A (en) Virtual reality simulation driving method, device, equipment and storage medium
US20220405823A1 (en) Object comparison method, and device
CN110673886A (en) Method and device for generating thermodynamic diagram
CN110288523B (en) Image generation method and device
CN114793274A (en) Data fusion method and device based on video projection
CN115576470A (en) Image processing method and apparatus, augmented reality system, and medium
CN115643468A (en) Poster generation method and device, electronic equipment and storage medium
CN109675312B (en) Game item list display method and device
CN113836455A (en) Special effect rendering method, device, equipment, storage medium and computer program product
CN113989427A (en) Illumination simulation method and device, electronic equipment and storage medium
CN109062645B (en) Method and apparatus for processing information for terminal
CN109343703B (en) Multi-terminal collaborative information processing method, device, system and storage medium
CN110941389A (en) Method and device for triggering AR information points by focus
CN112328940A (en) Method and device for embedding transition page into webpage, computer equipment and storage medium
CN111913711A (en) Video rendering method and device
CN108499102B (en) Information interface display method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Shi Xuan

Inventor after: Kang Hua

Inventor after: Wang Hongguang

Inventor before: Wang Hongguang