CN111640191A - Projection screen picture acquisition processing method based on VR (virtual reality) all-in-one machine - Google Patents

Projection screen picture acquisition processing method based on VR (virtual reality) all-in-one machine

Info

Publication number
CN111640191A
CN111640191A (application CN202010506055.5A; granted as CN111640191B)
Authority
CN
China
Prior art keywords
picture
screen
texture
screen projection
rendering thread
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010506055.5A
Other languages
Chinese (zh)
Other versions
CN111640191B (en)
Inventor
孙鹏飞
高磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing IQIYI Intelligent Technology Co Ltd
Original Assignee
Nanjing IQIYI Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing IQIYI Intelligent Technology Co Ltd filed Critical Nanjing IQIYI Intelligent Technology Co Ltd
Priority to CN202010506055.5A
Publication of CN111640191A
Application granted
Publication of CN111640191B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a projection screen picture acquisition and processing method based on a VR (virtual reality) all-in-one machine. The method projects an updated picture of a 3D application into a screen projection module and, while the picture is being updated, allows different threads within the same process to share texture resources. Compared with the prior art, the technical scheme of the invention can flexibly handle multiple screen projection requests and adapts precisely to the size of the target screen projection end. The acquired and processed pictures are clear and complete, free of large black borders, and have little impact on the system rendering load. The output is well suited both for direct use by a user and for producing promotional material.

Description

Projection screen picture acquisition processing method based on VR (virtual reality) all-in-one machine
Technical Field
The invention relates to the technical field of VR (virtual reality), in particular to a projection screen picture acquisition and processing method based on a VR (virtual reality) all-in-one machine.
Background
VR (virtual reality) is a computer simulation technology for creating and experiencing virtual worlds. A computer generates a simulated environment, an interactive three-dimensional dynamic view and entity-behavior system with multi-source information fusion, and immerses the user in that environment.
When a VR device performs picture projection processing, the monocular picture inside the all-in-one machine can be projected to an external device or recorded into a video file. Two screen projection approaches are currently common. The first prior-art approach records the whole screen directly as the picture source by means of the Android virtual screen. The second prior-art approach uses the Android Presentation mechanism: the APK renders the picture to be projected to a separate SurfaceView, the Presentation associates that SurfaceView with a virtual screen, and the picture is finally transferred to the virtual screen.
However, both prior-art solutions still suffer from objective technical problems. The drawback of the first approach, recording the whole screen directly via the Android virtual screen, is that the projected picture consists of two circular pictures; the image seen at the receiving end is unattractive and the user experience is poor. The drawback of the second approach, the Android Presentation scheme, is low rendering efficiency, dependence on SurfaceFlinger, and difficulty in adding a post-processing step (such as watermarking). Specifically, when image data passes through SurfaceFlinger, extra copy operations are required to transfer the APK's picture to the virtual screen, which increases transmission delay and system load; and because the copy operation is not perfectly synchronized with the APK's rendering, the projected picture can exhibit tearing.
Therefore, how to overcome the above technical problems is a problem to be solved by those skilled in the art.
Disclosure of Invention
In view of this, an embodiment of the present invention provides a projection screen picture acquisition and processing method based on a VR all-in-one machine, so as to solve the above technical problems.
An embodiment of the present invention provides a projection screen picture acquisition and processing method based on a VR all-in-one machine, which projects an updated picture of a 3D application into a screen projection module and enables different threads in the same process to share texture resources while the picture is updated. The method specifically comprises the following operation steps:
Step S100: after the background screen projection service unit starts, it creates a background main rendering thread and, at the same time, an OpenGLES texture; the OpenGLES texture serves as a data-middle-end texture area providing shared texture resources;
Step S200: a screen projection module with a screen projection requirement sends a screen projection request to the background screen projection service unit; the service unit parses the received request and identifies the ID number of the Surface-x and the size information of the picture required by that Surface-x; the size information comprises the height and width of the screen onto which Surface-x is to be projected;
Step S300: after receiving the screen projection request of step S200, the background screen projection service unit creates a rendering thread A based on the Surface-x in the request; rendering thread A shares a context with the background main rendering thread, so that it can share the texture resources recorded by the OpenGLES texture in the background main rendering thread; rendering threads A correspond one-to-one with Surface-x; other rendering threads in the background screen projection service unit may call the texture resources recorded by the OpenGLES texture at will; meanwhile, the background screen projection service unit creates a Surface-APP based on the OpenGLES texture of the background main rendering thread;
Step S400: the background screen projection service unit passes the Surface-APP to the 3D application through an AIDL interface; the Surface-APP establishes the information link between the 3D application and the background screen projection service unit;
Step S500: the 3D application comprises a 3D main rendering thread; after receiving the Surface-APP, the 3D application creates, based on the Surface-APP, a rendering thread B that shares a context with the 3D main rendering thread; rendering thread B renders the monocular picture of the 3D main rendering thread and lands the result on the Surface-APP, ensuring that the picture of the Surface-APP is updated; the updated picture is finally added into the OpenGLES texture.
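As a rough illustration of the data flow in steps S100 to S500, the following Python sketch models the background screen projection service, the per-Surface registration, and the frame update through the Surface-APP. All class and method names here are hypothetical; the actual scheme relies on Android rendering threads, AIDL, and OpenGLES textures rather than plain Python objects.

```python
# Illustrative sketch of steps S100-S500 (hypothetical names; the real
# implementation uses Android threads, AIDL and OpenGLES textures).

class ScreenCastService:
    """Background screen projection service unit (step S100)."""
    def __init__(self):
        self.shared_texture = None   # OpenGLES texture acting as the data middle end
        self.render_threads = {}     # Surface-x ID -> rendering thread A (one-to-one)

    def handle_request(self, surface_id, width, height):
        """Steps S200/S300: register a screen projection request."""
        # One rendering thread A per Surface-x; it shares the main thread's context.
        self.render_threads[surface_id] = {"size": (width, height)}

    def create_surface_app(self):
        """Step S300: Surface-APP backed by the shared OpenGLES texture."""
        return self  # the service observes every frame written to Surface-APP

    def on_frame(self, frame):
        """Step S500: the 3D app's rendering thread B lands a monocular frame here."""
        self.shared_texture = frame  # latest picture, visible to all threads A


service = ScreenCastService()
service.handle_request("Surface-1", 1280, 720)  # e.g. a Miracast-style sink
surface_app = service.create_surface_app()      # passed to the 3D app over AIDL
surface_app.on_frame("monocular frame #1")      # rendering thread B updates the picture
print(service.shared_texture)
```

Here the service object itself stands in for the Surface-APP, so every frame posted by the 3D-application side is immediately visible to the service as the latest shared texture content.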
Preferably, as one possible embodiment, the screen projection module comprises an Rtmp screen projection module and a Miracast screen projection module.
Preferably, as one possible embodiment, step S500 further comprises an operation step of cropping the updated picture to fit the size of the screen projection module:
Step S510: after the updated picture is added into the OpenGLES texture, the size information recorded for the Surface-x of the target screen projection module is retrieved; the picture is cropped according to this size information so that the updated picture matches the screen projection module.
Preferably, as one possible embodiment, step S500 further comprises an operation step of adding a logo watermark to the updated picture:
Step S520: after the updated picture is added into the OpenGLES texture, the logo watermark to be added is obtained and added to the updated picture, and the updated picture is finally delivered to the target screen projection module.
Preferably, as one possible embodiment, step S520, obtaining the logo watermark to be added and adding it to the updated picture, specifically comprises the following operation steps:
Step S5210: after the updated picture is added into the OpenGLES texture, rendering thread A obtains the logo watermark to be added, adds it to the updated picture, and arranges the watermarked picture into the texture resources of the OpenGLES texture;
Step S5220: the target screen projection module accesses the OpenGLES texture, calls, according to the ID number of its Surface-x, the updated picture corresponding to that ID number on the same link, and projects the updated picture into the target screen projection module.
The embodiments of the invention have the following technical advantages:
The technical scheme of the projection screen picture acquisition and processing method based on the VR all-in-one machine provided by the embodiment of the invention offers at least the following two technical advantages.
First, the method reduces the system rendering load, mainly because the 3D application picture is transmitted to the background screen projection service through an AIDL interface and is not processed by SurfaceFlinger. Second, the method realizes sharing of a texture area among different rendering threads within the same process, which simplifies processing, reduces rendering storage space, and further lowers the system rendering load. Research shows that in the prior art different threads in the same process are isolated from each other and cannot share textures; after the present technical scheme is implemented, other rendering threads created subsequently in the same process can access at will the texture resources recorded by the OpenGLES texture in the background main rendering thread.
Drawings
In order to more clearly illustrate the technical solution of the present invention, the drawings required to be used in the embodiments will be briefly described below, and it should be understood that the following drawings only illustrate some embodiments of the present invention, and therefore should not be considered as limiting the scope of the present invention. Like components are numbered similarly in the various figures.
Fig. 1 is a schematic main flow diagram illustrating a projection screen image acquisition processing method based on a VR all-in-one machine according to an embodiment of the present invention;
fig. 2 is a schematic diagram illustrating a control principle structure of a projection screen image acquisition processing method based on a VR all-in-one machine in an embodiment of the present invention.
Reference numbers: 10 - screen projection module; 20 - background screen projection service unit; 21 - background main rendering thread; 22 - OpenGLES texture; 30 - 3D application; 31 - 3D main rendering thread.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments.
The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
Hereinafter, the terms "including", "having", and their derivatives, as used in various embodiments of the present invention, are intended only to indicate specific features, numbers, steps, operations, elements, components, or combinations thereof, and should not be construed as excluding the existence or addition of one or more other features, numbers, steps, operations, elements, components, or combinations thereof.
Furthermore, the terms "first," "second," "third," and the like are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which various embodiments of the present invention belong. The terms (such as those defined in commonly used dictionaries) should be interpreted as having a meaning that is consistent with their contextual meaning in the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein in various embodiments of the present invention.
Example one
As shown in fig. 1, a detailed description is provided below of a projection screen image acquisition processing method based on a VR all-in-one machine in an embodiment of the present invention.
The invention provides a projection screen picture acquisition and processing method based on a VR all-in-one machine, which projects an updated picture of a 3D application into a screen projection module and enables different threads in the same process to share texture resources while the picture is updated. The method specifically comprises the following operation steps:
Step S100: after the background screen projection service unit starts, it creates a background main rendering thread and, at the same time, an OpenGLES texture; the OpenGLES texture serves as a data-middle-end texture area providing shared texture resources. It should be noted that the background main rendering thread involved in step S100 is distinct from the 3D main rendering thread in step S500; both are instances of RenderThread, the rendering thread of the Android system.
Step S200: a screen projection module with a screen projection requirement sends a screen projection request to the background screen projection service unit; the service unit parses the received request and identifies the ID number of the Surface-x and the size information of the picture required by that Surface-x; the size information comprises the height and width of the screen onto which Surface-x is to be projected. For example, one request may contain Surface-1 and the size information of the picture required by Surface-1, while another screen projection module may send Surface-2 and the corresponding size information (that is, the number x in Surface-x is the ID number).
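The identification of the Surface-x ID number and picture size in step S200 can be sketched as a simple parsing routine. The request string format `Surface-<id>:<width>x<height>` used below is purely hypothetical, chosen only to illustrate the two pieces of information a screen projection request must carry.

```python
import re

def parse_cast_request(request: str):
    """Extract the Surface-x ID number and required picture size from a
    screen projection request (hypothetical text format, for illustration)."""
    m = re.fullmatch(r"Surface-(\d+):(\d+)x(\d+)", request)
    if m is None:
        raise ValueError(f"malformed screen projection request: {request!r}")
    surface_id, width, height = (int(g) for g in m.groups())
    return surface_id, width, height

# Two modules casting at the same time, as in the Surface-1/Surface-2 example:
print(parse_cast_request("Surface-1:1920x1080"))
print(parse_cast_request("Surface-2:1280x720"))
```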
Step S300: after receiving the screen projection request of step S200, the background screen projection service unit creates a rendering thread A based on the Surface-x in the request; rendering thread A shares a context with the background main rendering thread, so that it can share the texture resources recorded by the OpenGLES texture in the background main rendering thread; rendering threads A correspond one-to-one with Surface-x. Rendering thread A is also responsible for adding a logo to the monocular picture and arranging the drawing result into the texture resources of the OpenGLES texture. Other rendering threads in the background screen projection service unit may call the texture resources recorded by the OpenGLES texture at will. Meanwhile, the background screen projection service unit creates a Surface-APP based on the OpenGLES texture of the background main rendering thread. The screen projection module may be an Rtmp screen projection module, a Miracast screen projection module, or another screen projection module.
Step S400: the background screen projection service unit passes the Surface-APP to the 3D application (also called the 3D application APK) through an AIDL interface; the Surface-APP establishes the information link between the 3D application and the background screen projection service unit.
Step S500: the 3D application comprises a 3D main rendering thread; after receiving the Surface-APP, the 3D application creates, based on the Surface-APP, a rendering thread B that shares a context with the 3D main rendering thread. Rendering thread B renders the monocular picture of the 3D main rendering thread (also called the monocular texture; every time the 3D application changes its picture, the change is processed through the 3D main rendering thread) and lands the result on the Surface-APP, ensuring that the picture of the Surface-APP is updated; the updated picture is finally added into the OpenGLES texture. Because the Surface-APP is created from the OpenGLES texture in the background service unit, the background service unit also learns of every change of the Surface-APP's frame data, and at that moment the content of the OpenGLES texture in the background service unit is the latest picture.
It should be noted that the background main rendering thread and the rendering thread A created in the preceding step S100 both live in the process of the background screen projection service unit, whereas the 3D main rendering thread and the later-created rendering thread B live in the process of the 3D application. On one hand, the 3D application creates rendering thread B based on the Surface-APP; rendering thread B then draws the monocular texture, and the final picture lands on the Surface-APP. For example, if rendering thread B renders at a frame rate of 30 FPS, the picture of the Surface-APP is updated at 30 FPS. On the other hand, the Surface-APP is created by the background service unit, so the background service unit also learns of every change of the Surface-APP's frame data: each time the picture is updated by rendering thread B of the 3D application, the background service unit receives a system notification that the Surface-APP has been updated. At that moment the texture content held in the background service unit is the latest picture.
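The notification path described above, where every picture update by rendering thread B triggers a system notification to the background service, resembles a frame-available callback (comparable in spirit to Android's SurfaceTexture.OnFrameAvailableListener). A minimal Python stand-in, with all names hypothetical:

```python
class SurfaceApp:
    """Minimal stand-in for the Surface-APP: every frame written by rendering
    thread B triggers a notification to each subscriber (illustrative only)."""
    def __init__(self):
        self.listeners = []

    def on_frame_available(self, callback):
        """Subscribe to frame-update notifications."""
        self.listeners.append(callback)

    def post_frame(self, frame):
        """Rendering thread B lands a frame; subscribers are notified."""
        for cb in self.listeners:
            cb(frame)


received = []
surface_app = SurfaceApp()
surface_app.on_frame_available(received.append)  # the background service subscribes

# Rendering thread B drawing at 30 FPS would update Surface-APP 30 times per
# second; three updates suffice to show the mechanism:
for i in range(3):
    surface_app.post_frame(f"frame-{i}")
print(received[-1])  # the service always holds the latest picture
```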
When developing video rendering with OpenGL for Embedded Systems (OpenGLES), it is necessary to create a texture from decoded video frame data and render the image from that texture. In the technical scheme of the embodiment of the invention, the OpenGLES texture serves as a data-middle-end texture area that provides shared texture resources.
in the step S300, it should be noted that the rendering thread a and the background main rendering thread share a context, and the ultimate essential purpose is to share a texture resource described by the texture of OpenGLES in the background main rendering thread, so that other rendering threads (e.g., rendering thread B1, rendering thread B2, rendering thread B3, etc.) can also access the texture resource described by the texture of OpenGLES; therefore, the purpose of sharing the texture resource is to arbitrarily call the texture resource of the OpenGLES using the background main rendering thread to directly draw when other rendering threads in the same process draw. In addition, the shared context exists only between different threads of the same process. It should be noted that the background main rendering thread and the rendering thread a created in the preamble step S100 are both in the process of the background screen-casting service unit; however, the 3D main rendering thread, rendering thread B, created later, occurs in the process of the 3D application; it should be noted that, each time the background screen projection service unit receives the screen projection request in step S100, a rendering thread a is created based on the Surface-x in the screen projection request, that is, a rendering thread a is created for each Surface-x; the ID number for each Surface-x specifies one rendering thread A to be produced.
As shown in fig. 2, the figure illustrates the control principle applied by the projection screen picture acquisition and processing method based on the VR all-in-one machine according to the embodiment of the present invention, showing the screen projection module 10, the background screen projection service unit 20, the background main rendering thread 21, the OpenGLES texture 22 in the background main rendering thread, rendering thread A and Surface-x, the Surface-APP, the 3D application 30, the 3D main rendering thread 31, and rendering thread B.
In step S500, the method further comprises an operation step of cropping the updated picture to fit the size of the screen projection module:
Step S510: after the updated picture is added into the OpenGLES texture, the size information recorded for the Surface-x of the target screen projection module is retrieved; the picture is cropped according to this size information so that the updated picture matches the screen projection module.
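The cropping of step S510 amounts to computing a centered crop rectangle in the source picture whose aspect ratio matches the target screen, so that scaling afterwards produces no black borders. A sketch of that arithmetic follows; the function name and integer rounding policy are illustrative assumptions, not the patent's exact method.

```python
def center_crop(src_w, src_h, dst_w, dst_h):
    """Return a crop rectangle (x, y, w, h) inside the source picture whose
    aspect ratio matches the target screen, so no black borders appear."""
    src_ratio = src_w / src_h
    dst_ratio = dst_w / dst_h
    if src_ratio > dst_ratio:
        # Source is wider than the target: trim the left and right edges.
        w = round(src_h * dst_ratio)
        return ((src_w - w) // 2, 0, w, src_h)
    else:
        # Source is taller (e.g. a square monocular picture): trim top/bottom.
        h = round(src_w / dst_ratio)
        return (0, (src_h - h) // 2, src_w, h)

# A square 1440x1440 monocular picture cropped for a 16:9 sink:
print(center_crop(1440, 1440, 1920, 1080))  # -> (0, 315, 1440, 810)
```

The square-source case is exactly the scenario described in the text: without this crop, fitting a square picture onto a rectangular screen leaves black bars on two sides.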
It should be noted that in the prior art the monocular picture cannot be cropped. The monocular picture is generally square, whereas the display screen of the external screen projection module is rectangular; placing the square picture on the rectangular screen therefore leaves large black borders on both sides, which seriously harms the appearance. The embodiment of the invention avoids such black borders by cropping the picture to fit.
In step S500, the method further comprises an operation step of adding a logo watermark to the updated picture:
Step S520: after the updated picture is added into the OpenGLES texture, the logo watermark to be added is obtained and added to the updated picture, and the updated picture is finally delivered to the target screen projection module.
It should be noted that in the prior art a watermark cannot be applied to the updated picture; in the scheme of the embodiment of the invention, the updated picture is first added into the OpenGLES texture, after which the logo watermark is obtained and added to it.
In a specific technical solution, step S520 of obtaining the logo watermark and adding it to the updated picture comprises the following operation steps:
Step S5210: after the updated picture is added into the OpenGLES texture, rendering thread A obtains the logo watermark to be added, adds it to the updated picture, and arranges the watermarked picture into the texture resources of the OpenGLES texture;
Step S5220: the target screen projection module accesses the OpenGLES texture, calls, according to the ID number of its Surface-x, the updated picture corresponding to that ID number on the same link, and projects the updated picture into the target screen projection module.
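The watermarking of steps S5210 and S5220 is, at its core, an alpha blend of the logo over the updated picture before the result is written back into the shared texture. The following toy version operates on nested Python lists of RGB tuples; a real implementation would blend per-texel in an OpenGLES fragment shader, and the function names and alpha value here are assumptions for illustration only.

```python
def blend_pixel(frame_rgb, logo_rgb, alpha):
    """Alpha-blend one logo pixel over one frame pixel."""
    return tuple(round(alpha * l + (1 - alpha) * f)
                 for f, l in zip(frame_rgb, logo_rgb))

def add_watermark(frame, logo, origin, alpha=0.6):
    """Composite a small logo (2-D list of RGB tuples) onto the updated
    picture at `origin`; the result would then go back into the texture."""
    ox, oy = origin
    for y, row in enumerate(logo):
        for x, px in enumerate(row):
            frame[oy + y][ox + x] = blend_pixel(frame[oy + y][ox + x], px, alpha)
    return frame

frame = [[(0, 0, 0)] * 4 for _ in range(3)]  # tiny all-black picture
logo = [[(255, 255, 255)]]                   # 1x1 white logo
out = add_watermark(frame, logo, origin=(1, 1))
print(out[1][1])  # 60% white blended over black
```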
In summary, the projection screen picture acquisition and processing method based on the VR all-in-one machine provided by the embodiment of the invention can flexibly handle multiple screen projection requests and adapts precisely to the size of the target screen projection end. The picture is clear and complete, without large black borders, and the impact on the system rendering load is small. The output is well suited both for direct use by a user and for producing promotional material.
The above description covers only specific embodiments of the present invention, but the scope of the invention is not limited thereto; any change or substitution that a person skilled in the art can readily conceive within the technical scope of the invention shall fall within the protection scope of the invention.

Claims (5)

1. A projection screen picture acquisition processing method based on a VR all-in-one machine, characterized in that an updated picture of a 3D application is projected into a screen projection module and different threads in the same process share texture resources while the picture is updated, the method comprising the following operation steps:
step S100: after the background screen projection service unit is started, a background main rendering thread is created, and meanwhile, an OpenGLES texture is created; the texture of OpenGLES is used as a texture area of a data middle end for providing shared texture resources;
step S200: the screen projection module with the screen projection requirement sends a screen projection request to a background screen projection service unit; the background screen projection service unit identifies the received screen projection request and identifies the ID number of the Surface-x and the size information of the required picture of the Surface-x contained in the screen projection request; the size information comprises the height size and the width size of a screen required to be projected by Surface-x;
step S300: after receiving the screen-casting request in step S200, the background screen-casting service unit creates a rendering thread a based on Surface-x in the screen-casting request and shares a context with the background main rendering thread; the rendering thread A and the background main rendering thread share a context and are used for sharing texture resources recorded by the OpenGLES textures in the background main rendering thread; the rendering thread A corresponds to the Surface-x one by one; randomly calling texture resources recorded by the textures of OpenGLES by other rendering threads in the background screen projection service unit; meanwhile, the background screen-casting service unit can create a Surface-APP based on the texture of the OpenGLES of the background main rendering thread;
step S400: the background screen-casting service unit transmits Surface-APP to the 3D application through an AIDL interface, and the Surface-APP is used for establishing information link connection between the 3D application and the background screen-casting service unit;
step S500: the 3D application comprises a 3D main rendering thread, and after the 3D application receives the Surface-APP, the 3D application creates a rendering thread B which shares the context with the 3D main rendering thread based on the Surface-APP; and the rendering thread B is used for rendering and drawing a monocular picture in the 3D main rendering thread, and the rendered monocular picture is dropped on the picture of the Surface-APP so as to ensure that the picture of the Surface-APP is updated, and finally the updated picture is added into the texture of the OpenGLES.
2. The projection screen picture acquisition processing method based on a VR all-in-one machine of claim 1, wherein the screen projection module comprises an Rtmp screen projection module and a Miracast screen projection module.
3. The VR all-in-one machine-based screen projection and recording picture collection processing method of claim 2, wherein step S500 further comprises an operation of cropping the updated picture to fit the size of the screen projection module:
step S510: after the updated picture is added into the OpenGL ES texture, the size information recorded in the Surface-x of the target screen projection module is retrieved; cropping is performed according to the size information so that the size of the updated picture matches the screen projection module.
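The size matching of step S510 can be sketched as a crop-rectangle computation. The claim does not specify how the crop is aligned; a centered crop that preserves the target aspect ratio is assumed here, and the class name `CropCalculator` is illustrative only.

```java
// Hypothetical sketch of step S510's size matching: given a source frame and
// the width/height recorded for Surface-x, compute a centered crop rectangle
// with the target aspect ratio. Centering is an assumption; the claim only
// requires that the cropped picture match the screen projection module's size.
final class CropCalculator {
    // Returns {x, y, cropWidth, cropHeight} in source-pixel coordinates.
    static int[] centeredCrop(int srcW, int srcH, int dstW, int dstH) {
        // Compare aspect ratios without floating point: srcW/srcH vs dstW/dstH.
        long lhs = (long) srcW * dstH;
        long rhs = (long) dstW * srcH;
        int cropW = srcW, cropH = srcH;
        if (lhs > rhs) {
            // Source is wider than the target: trim the sides.
            cropW = (int) (rhs / dstH);  // = dstW * srcH / dstH
        } else if (lhs < rhs) {
            // Source is taller than the target: trim top and bottom.
            cropH = (int) (lhs / dstW);  // = srcW * dstH / dstW
        }
        int x = (srcW - cropW) / 2;
        int y = (srcH - cropH) / 2;
        return new int[]{x, y, cropW, cropH};
    }
}
```

For example, cropping a 1920×1080 frame for a hypothetical 720×720 projection target trims 420 pixels from each side, yielding the rectangle {420, 0, 1080, 1080}; the cropped region would then be scaled to the recorded width and height.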
4. The VR all-in-one machine-based screen projection and recording picture collection processing method of claim 3, wherein step S500 further comprises an operation of adding a logo watermark to the updated picture:
step S520: after the updated picture is added into the OpenGL ES texture, the logo watermark to be added is obtained and added to the updated picture, and the updated picture is finally delivered to the target screen projection module.
5. The VR all-in-one machine-based screen projection and recording picture collection processing method of claim 4, wherein in step S520, obtaining the logo watermark to be added and adding it to the updated picture specifically comprises the following operation steps:
step S5210: after the updated picture is added into the OpenGL ES texture, rendering thread A obtains the logo watermark to be added, adds the logo watermark to the updated picture, and arranges the watermarked picture into the texture resources of the OpenGL ES texture;
step S5220: the target screen projection module accesses the OpenGL ES texture, calls, according to the ID number of Surface-x, the updated picture corresponding to that ID number on the same link, and projects the updated picture into the target screen projection module.
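The watermarking of steps S520/S5210 amounts to compositing the logo over the updated picture. A minimal CPU sketch of that per-pixel blend is shown below; in the patented method rendering thread A would composite into the OpenGL ES texture on the GPU, and the class name `Watermarker` and ARGB array layout are assumptions for illustration.

```java
// Hypothetical CPU sketch of the watermarking in steps S520/S5210. Rendering
// thread A would do this in the OpenGL ES texture; here the same source-over
// alpha blend is shown on plain ARGB int arrays.
final class Watermarker {
    // Blend `logo` (logoW x logoH, ARGB) onto `frame` (frameW x frameH, ARGB)
    // with its top-left corner at (ox, oy), clipping at the frame edges.
    static void blendLogo(int[] frame, int frameW, int frameH,
                          int[] logo, int logoW, int logoH, int ox, int oy) {
        for (int y = 0; y < logoH; y++) {
            int fy = oy + y;
            if (fy < 0 || fy >= frameH) continue;       // clip vertically
            for (int x = 0; x < logoW; x++) {
                int fx = ox + x;
                if (fx < 0 || fx >= frameW) continue;   // clip horizontally
                int s = logo[y * logoW + x];
                int a = (s >>> 24) & 0xFF;              // logo alpha
                if (a == 0) continue;                    // fully transparent
                int d = frame[fy * frameW + fx];
                int r = blend((s >>> 16) & 0xFF, (d >>> 16) & 0xFF, a);
                int g = blend((s >>> 8) & 0xFF,  (d >>> 8) & 0xFF,  a);
                int b = blend(s & 0xFF,          d & 0xFF,          a);
                frame[fy * frameW + fx] = 0xFF000000 | (r << 16) | (g << 8) | b;
            }
        }
    }

    // Source-over blend of one 8-bit channel.
    private static int blend(int src, int dst, int alpha) {
        return (src * alpha + dst * (255 - alpha)) / 255;
    }
}
```

After the blend, the watermarked frame would be arranged back into the shared texture resource so the target screen projection module can fetch it by the Surface-x ID, as step S5220 describes.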
CN202010506055.5A 2020-06-05 2020-06-05 VR (virtual reality) -based method for collecting and processing projection screen images Active CN111640191B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010506055.5A CN111640191B (en) 2020-06-05 2020-06-05 VR (virtual reality) -based method for collecting and processing projection screen images

Publications (2)

Publication Number Publication Date
CN111640191A true CN111640191A (en) 2020-09-08
CN111640191B CN111640191B (en) 2023-04-21

Family

ID=72329826

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010506055.5A Active CN111640191B (en) 2020-06-05 2020-06-05 VR (virtual reality) -based method for collecting and processing projection screen images

Country Status (1)

Country Link
CN (1) CN111640191B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103902390A (en) * 2014-03-12 2014-07-02 深圳创维-Rgb电子有限公司 Inter-process communication method based on Android application layer and basic application communication system
US9473758B1 (en) * 2015-12-06 2016-10-18 Sliver VR Technologies, Inc. Methods and systems for game video recording and virtual reality replay
CN110248022A (en) * 2019-06-06 2019-09-17 武汉卡比特信息有限公司 A screen projection method for third-party applications based on mobile phone interconnection
CN110908624A (en) * 2018-09-18 2020-03-24 深圳市布谷鸟科技有限公司 Control method and system for screen abnormal display

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112565839A (en) * 2020-11-23 2021-03-26 青岛海信传媒网络技术有限公司 Display method and display device of screen projection image
CN112565839B (en) * 2020-11-23 2022-11-29 青岛海信传媒网络技术有限公司 Display method and display device of screen projection image
CN114650442A (en) * 2020-12-17 2022-06-21 青岛海尔多媒体有限公司 Method and device for mirror image screen projection and mirror image screen projection equipment
CN113891167A (en) * 2021-08-27 2022-01-04 荣耀终端有限公司 Screen projection method and electronic equipment
CN116887005A (en) * 2021-08-27 2023-10-13 荣耀终端有限公司 Screen projection method and electronic equipment
CN116887005B (en) * 2021-08-27 2024-05-03 荣耀终端有限公司 Screen projection method, electronic device and computer readable storage medium

Also Published As

Publication number Publication date
CN111640191B (en) 2023-04-21

Similar Documents

Publication Publication Date Title
CN107590771B (en) 2D video with options for projection viewing in modeled 3D space
US20220007083A1 (en) Method and stream-pushing client for processing live stream in webrtc
CN111640191A (en) Projection screen picture acquisition processing method based on VR (virtual reality) all-in-one machine
US8860716B2 (en) 3D image processing method and portable 3D display apparatus implementing the same
US7667704B2 (en) System for efficient remote projection of rich interactive user interfaces
US7884823B2 (en) Three dimensional rendering of display information using viewer eye coordinates
US20070070067A1 (en) Scene splitting for perspective presentations
CN107924587A (en) Object is directed the user in mixed reality session
EP3311565B1 (en) Low latency application streaming using temporal frame transformation
CN102834849A (en) Image drawing device for drawing stereoscopic image, image drawing method, and image drawing program
EP2126851A1 (en) Graphics rendering system
CN102780892B (en) 3d image processing method and portable 3d display apparatus implementing the same
EP4412227A1 (en) Immersive-media data processing method, apparatus, device, storage medium and program product
CN116672702A (en) Image rendering method and electronic equipment
US11417060B2 (en) Stereoscopic rendering of virtual 3D objects
WO2022033162A1 (en) Model loading method and related apparatus
WO2019118028A1 (en) Methods, systems, and media for generating and rendering immersive video content
CN114363687B (en) Three-dimensional scene interactive video creation method and creation device
CN107925657A (en) Via the asynchronous session of user equipment
CN111589111B (en) Image processing method, device, equipment and storage medium
CN117557701B (en) Image rendering method and electronic equipment
US20220329912A1 (en) Information processing apparatus, information processing method, and program
JP4080295B2 (en) Plural media display method in virtual space, plural media display terminal device in virtual space, virtual space management server device, plural media display program in virtual space, and recording medium recording this program
CN115665461B (en) Video recording method and virtual reality device
JP2008053884A (en) Image processing method and apparatus and electronic device utilizing them

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant