CN110597577A - Head-mounted visual equipment and split-screen display method and device thereof - Google Patents


Info

Publication number
CN110597577A
Authority
CN
China
Prior art keywords: layer, mode, video, screen, split
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910468041.6A
Other languages
Chinese (zh)
Inventor
刘丽琼
朱振华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZHUHAI QUANZHI TECHNOLOGY Co Ltd
Allwinner Technology Co Ltd
Original Assignee
ZHUHAI QUANZHI TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZHUHAI QUANZHI TECHNOLOGY Co Ltd
Priority to CN201910468041.6A
Publication of CN110597577A
Legal status: Pending (current)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 - Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F 3/1454 - Digital output to display device; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 - Arrangements for executing specific programs
    • G06F 9/451 - Execution arrangements for user interfaces

Abstract

The invention discloses a head-mounted visual device, together with a split-screen display method and a split-screen display apparatus for such a device, which can correctly display a 2D interface in split-screen form, enhancing the operability of the head-mounted device and enriching its content. At the same time, after the 2D interface has undergone split-screen processing, different types of video remain supported: besides basic 2D video, 3D video can be displayed in left-right split form so that its 3D effect can be experienced, greatly improving the user's experience. In addition, the split-screen display method for the head-mounted visual device is simple and easy to use, involves little system development difficulty, and has a low cost.

Description

Head-mounted visual equipment and split-screen display method and device thereof
Technical Field
The invention relates to the field of computers, and in particular to image display, especially image display for devices such as all-in-one virtual reality devices and all-in-one 3D viewing devices.
Background
Head-mounted visual devices based on the Android platform, such as all-in-one VR (Virtual Reality) devices and all-in-one 3D (three-dimensional) viewing devices, are increasingly popular and widely used. However, few applications that display separate left-eye and right-eye content have been developed specifically for these head-mounted visual devices, and developing them is difficult. Many existing 2D applications were not developed separately for head-mounted visual devices, so when they are displayed directly on the screen the user's left eye sees the left half of the application and the right eye sees the right half, which seriously affects the user's experience and interaction.
Currently, the industry generally performs split-screen processing of a 2D application using the Open Graphics Library (OpenGL), that is, by drawing the application onto the left screen and the right screen respectively; when left-right 3D video needs to be viewed, the 2D application is instead drawn directly across the whole screen.
For example, the prior art discloses a video playback control method in which, before a target video is selected, a split-screen mode is started: the playback control interface is duplicated into a dual-screen display and shown on the screen in a left-right arrangement, with the left screen corresponding to the viewer's left eye and the right screen to the right eye, after which the video is played according to the format of the target video. Although this method applies different strategies to the OpenGL vertex shader for different video types to perform split-screen processing, it only splits the video images and the control interface inside applications such as a player; system user interfaces such as system message notifications are left unsplit and coexist with the split-screen video images.
As another example, the prior art also discloses an interface display method in which an application identifier submitted by a target application is first obtained and the target application is determined, according to that identifier, to be a non-virtual-reality (non-VR) application; a left virtual screen is then constructed in the left screen and a right virtual screen in the right screen, and the N interfaces to be displayed are all overlaid according to their display order to obtain the interface content. The granularity of distinction in this method is the application program: VR applications are not split-screen processed while non-VR applications are, so the left and right content obtained from a non-VR application interface is identical, and the criterion for distinction is the application identifier, i.e. whether the application is a VR application.
To solve the technical problem that Android 2D applications cannot be used directly on VR devices, the prior art further discloses an image display method for a head-mounted visual device that modifies the screen width seen by the 2D application image to be displayed to half of the device's own screen width, obtains an undistorted image of the 2D application based on the modified width, and then calls the SurfaceFlinger module of the Android system to draw the undistorted image to the left and right screens with OpenGL. This method can solve the problem of image distortion and in particular improves the display of pictures and text, but the contents of the left and right screens are exactly the same, so there is no 3D effect when watching left-right 3D video.
Meanwhile, the prior art also includes an image data processing method for a VR device that determines in real time, while a virtual display application is running, whether the image data to be displayed in the data buffer can be applied to the virtual display scene, and, when it cannot, converts it into standard image data usable for virtual-scene display and displays it on the screen. Although this method can split-screen process image data that cannot be applied to a VR scene, it copies the entire image data to obtain the left-eye and right-eye images, so no 3D effect can be experienced for 3D video.
In summary, when addressing the technical problem that split-screen display of an Android 2D interface on a head-mounted visual device cannot at the same time support 3D video, the prior art either draws the entire 2D interface to the left and right screens respectively, i.e. simple content copying, with which the 3D effect of 3D video (left-right or top-bottom) cannot be experienced; or, as an improvement, performs split-screen processing only on the video image inside the 2D application so that the video is assigned to left-eye and right-eye pictures instead of drawing the whole interface to both screens, which allows the 3D effect to be experienced. With the latter approach, however, for the other interfaces, including the control interface and the system notification interface, the left eye sees only half of the picture and the right eye the other half, which impairs interaction, may even cause vertigo, and results in a poor user experience.
Disclosure of Invention
The object of the invention is to provide a split-screen display method, system and apparatus for a head-mounted visual device that can correctly display a 2D interface in split-screen form, enhancing the operability of the head-mounted device and enriching its content. At the same time, after the 2D interface has undergone split-screen processing, different types of video remain supported: besides basic 2D video, 3D video can be displayed in left-right split form so that its 3D effect can be experienced, greatly improving the user's experience. In addition, the split-screen display method for the head-mounted visual device is simple and easy to use, involves little system development difficulty, and has a low cost.
The invention discloses a split-screen display method for a head-mounted visual device, which comprises the following steps:
step 10, acquiring the video type;
wherein the video types include 2D video and 3D video, and the 3D video includes left-right 3D video and top-bottom 3D video;
step 20, comprehensively determining the display mode;
wherein the display mode includes a normal mode and a 3D mode, and the 3D mode includes a left-right 3D mode and a top-bottom 3D mode;
step 30, layer classification split-screen composition;
wherein layer classification split-screen composition means that, when all layers are composited, different split-screen processing is applied according to the layer type and the display mode; the layer types include an original video layer and a non-video layer;
the layer classification split-screen composition comprises the following steps:
configuring the image region to be acquired for the left canvas according to the layer type and the display mode:
when the layer type is a non-video layer, configuring acquisition of the whole image region in any display mode;
when the layer type is a video layer:
in the normal mode, configuring acquisition of the whole image region;
in the left-right 3D mode, configuring acquisition of the left half region;
in the top-bottom 3D mode, configuring acquisition of the upper half region;
subsequently, adjusting the display region of the image on the left canvas, i.e. adjusting the vertex matrix and the viewport according to the position of the layer on the whole interface and the left display region;
drawing onto the left side of the composition canvas;
then judging whether the mode is the normal mode;
if it is the normal mode, ending the split-screen processing of this layer;
otherwise, in the other modes, configuring the image region to be acquired for the right canvas according to the layer type and the display mode, wherein:
when the layer type is a non-video layer, configuring acquisition of the whole image region in any display mode;
when the layer type is a video layer:
in the left-right 3D mode, configuring acquisition of the right half region;
in the top-bottom 3D mode, configuring acquisition of the lower half region;
then adjusting the display region of the image on the right canvas, i.e. adjusting the vertex matrix and the viewport according to the position of the layer on the whole interface and the right display region;
drawing onto the right side of the composition canvas, and ending the split-screen processing of this layer.
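To make the per-layer decision logic above more concrete, the following is a minimal C++ sketch of it. All names (DisplayMode, LayerType, SourceRegion, drawToCanvasHalf and so on) are illustrative placeholders introduced for this example and are not taken from the patent; the actual processing runs inside the Android composition path.

```cpp
// Illustrative types only; not the patent's actual code.
enum class DisplayMode { Normal, LeftRight3D, TopBottom3D };
enum class LayerType   { OriginalVideo, NonVideo };
enum class SourceRegion { Full, LeftHalf, RightHalf, TopHalf, BottomHalf };

struct Layer { LayerType type; /* buffer handle, on-screen position, ... */ };

// Hypothetical helper: samples 'region' of the layer and draws it onto the
// left or right half of the composition canvas (vertex matrix + viewport).
void drawToCanvasHalf(const Layer& layer, SourceRegion region, bool leftHalf) {
    (void)layer; (void)region; (void)leftHalf;   // stub for the sketch
}

// Region to sample when drawing a layer onto the LEFT half of the canvas.
SourceRegion regionForLeftCanvas(LayerType type, DisplayMode mode) {
    if (type == LayerType::NonVideo) return SourceRegion::Full;   // non-video: whole image
    switch (mode) {
        case DisplayMode::LeftRight3D: return SourceRegion::LeftHalf;
        case DisplayMode::TopBottom3D: return SourceRegion::TopHalf;
        default:                       return SourceRegion::Full; // normal mode
    }
}

// Region to sample when drawing a layer onto the RIGHT half of the canvas
// (only reached in the 3D modes).
SourceRegion regionForRightCanvas(LayerType type, DisplayMode mode) {
    if (type == LayerType::NonVideo) return SourceRegion::Full;
    return (mode == DisplayMode::LeftRight3D) ? SourceRegion::RightHalf
                                              : SourceRegion::BottomHalf;
}

// Split-screen composition of one layer, as in step 30; layers are processed
// in display order, e.g. for (const Layer& l : visibleLayers) composeLayer(l, mode).
void composeLayer(const Layer& layer, DisplayMode mode) {
    drawToCanvasHalf(layer, regionForLeftCanvas(layer.type, mode), /*leftHalf=*/true);
    if (mode == DisplayMode::Normal) return;   // left and right eyes see the same content
    drawToCanvasHalf(layer, regionForRightCanvas(layer.type, mode), /*leftHalf=*/false);
}
```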
In one embodiment, the method further comprises:
step 40, split display of the composited layer;
if the mode is the normal mode, acquiring the left half region of the composited layer and displaying it on the left side of the total screen, and acquiring the left half region of the composited layer and displaying it on the right side of the total screen;
if the mode is any other mode, acquiring the left half region of the composited layer and displaying it on the left side of the total screen, and acquiring the right half region of the composited layer and displaying it on the right side of the total screen.
In one embodiment, in step 10, the video type is obtained by an external module calling an interface to set it.
In one embodiment, in step 10, the external module includes a player application, a multimedia module, and a key-response module.
In one embodiment, in step 20, comprehensively determining the display mode comprises the following steps:
traversing all visible layers and recording the number of original video layers;
judging whether the number of original video layers is 0;
when the number of original video layers is 0, determining the normal mode;
otherwise, when the number of original video layers is not 0, determining the mode corresponding to the video type, which includes the left-right 3D mode and the top-bottom 3D mode.
In one embodiment, in step 20, determining the normal mode when the number of original video layers is 0 comprises: when the number of original video layers is 0, the video is in flat 2D format and the normal mode is determined.
In one embodiment, in step 20, determining the mode corresponding to the video type when the number of original video layers is not 0 comprises: if the video is in left-right 3D format, determining the left-right 3D mode; and if the video is in top-bottom 3D format, determining the top-bottom 3D mode.
In one embodiment, in step 20,
the normal mode means that the picture content for the left eye and the right eye is the same;
the left-right 3D mode means that the left-eye picture content is the superposition of the left half of the video picture and all other non-video layers, and the right-eye picture content is the superposition of the right half of the video and all other non-video layers;
the top-bottom 3D mode means that the left-eye picture content is the superposition of the upper half of the video picture and all other non-video layers, and the right-eye picture content is the superposition of the lower half of the video and all other non-video layers.
In one embodiment, in step 30, the non-video layers include the interface layer of each 2D application and the system layers.
In one embodiment, the method further comprises: when the system is initialized, adjusting the width and height of the composition canvas of the Android system to be the same as the width and height of one terminal screen, or the total width and height of two terminal screens; wherein, in the case of two terminal screens, the width of the composition canvas is twice the width of the system window and the height of the composition canvas is the height of the system window.
The invention also discloses a split-screen display device for a head-mounted visual device, which comprises a video type acquisition module, a comprehensive display mode determination module and a layer classification split-screen composition module;
the video type acquisition module obtains the type of the video to be displayed; wherein the video types include 2D video and 3D video, and the 3D video includes left-right 3D video and top-bottom 3D video;
the comprehensive display mode determination module determines the display mode of the video to be displayed; the display mode includes a normal mode and a 3D mode, and the 3D mode includes a left-right 3D mode and a top-bottom 3D mode;
the layer classification split-screen composition module applies different split-screen processing to all layers during composition according to the layer type and the display mode; the layer types include an original video layer and a non-video layer;
the layer classification split-screen composition module operates as follows:
configuring the image region to be acquired for the left canvas according to the layer type and the display mode:
when the layer type is a non-video layer, configuring acquisition of the whole image region in any display mode;
when the layer type is a video layer:
in the normal mode, configuring acquisition of the whole image region;
in the left-right 3D mode, configuring acquisition of the left half region;
in the top-bottom 3D mode, configuring acquisition of the upper half region;
subsequently, adjusting the display region of the image on the left canvas, i.e. adjusting the vertex matrix and the viewport according to the position of the layer on the whole interface and the left display region;
drawing onto the left side of the composition canvas;
then judging whether the mode is the normal mode;
if it is the normal mode, ending the split-screen processing of this layer;
otherwise, in the other modes, configuring the image region to be acquired for the right canvas according to the layer type and the display mode, wherein:
when the layer type is a non-video layer, configuring acquisition of the whole image region in any display mode;
when the layer type is a video layer:
in the left-right 3D mode, configuring acquisition of the right half region;
in the top-bottom 3D mode, configuring acquisition of the lower half region;
then adjusting the display region of the image on the right canvas, i.e. adjusting the vertex matrix and the viewport according to the position of the layer on the whole interface and the right display region;
drawing onto the right side of the composition canvas, and ending the split-screen processing of this layer.
In one embodiment, the split-screen display device further includes a composited layer split display module, which performs the following operations according to the display mode:
if the mode is the normal mode, acquiring the left half region of the composited layer and displaying it on the left side of the total screen, and acquiring the left half region of the composited layer and displaying it on the right side of the total screen;
if the mode is any other mode, acquiring the left half region of the composited layer and displaying it on the left side of the total screen, and acquiring the right half region of the composited layer and displaying it on the right side of the total screen.
In one embodiment, the video type is obtained by an external module calling an interface to set it.
In one embodiment, the external module includes a player application, a multimedia module, and a key-response module.
In one embodiment, the comprehensive display mode determination module operates as follows:
traversing all visible layers and recording the number of original video layers;
judging whether the number of original video layers is 0;
when the number of original video layers is 0, determining the normal mode;
otherwise, when the number of original video layers is not 0, determining the mode corresponding to the video type, which includes the left-right 3D mode and the top-bottom 3D mode.
In one embodiment, determining the normal mode when the number of original video layers is 0 comprises: when the number of original video layers is 0, the video is in flat 2D format and the normal mode is determined.
In one embodiment, determining the mode corresponding to the video type when the number of original video layers is not 0 comprises: if the video is in left-right 3D format, determining the left-right 3D mode; and if the video is in top-bottom 3D format, determining the top-bottom 3D mode.
In one embodiment,
the normal mode means that the picture content for the left eye and the right eye is the same;
the left-right 3D mode means that the left-eye picture content is the superposition of the left half of the video picture and all other non-video layers, and the right-eye picture content is the superposition of the right half of the video and all other non-video layers;
the top-bottom 3D mode means that the left-eye picture content is the superposition of the upper half of the video picture and all other non-video layers, and the right-eye picture content is the superposition of the lower half of the video and all other non-video layers.
In one embodiment, the non-video layers include the interface layer of each 2D application and the system layers.
The invention also discloses a head-mounted visual device, which at least comprises a 2D-interface classified split-screen display device and a screen;
the 2D-interface classified split-screen display device comprises an upper-layer drawing module, a system split-screen composition device and a system split display module;
the system split-screen composition device comprises a video type acquisition module, a comprehensive display mode determination module and a layer classification split-screen composition module;
the video type acquisition module obtains the type of the video to be displayed; wherein the video types include 2D video and 3D video, and the 3D video includes left-right 3D video and top-bottom 3D video;
the comprehensive display mode determination module determines the display mode of the video to be displayed; the display mode includes a normal mode and a 3D mode, and the 3D mode includes a left-right 3D mode and a top-bottom 3D mode;
the layer classification split-screen composition module applies different split-screen processing to all layers during composition according to the layer type and the display mode; the layer types include an original video layer and a non-video layer;
the layer classification split-screen composition module operates as follows:
configuring the image region to be acquired for the left canvas according to the layer type and the display mode:
when the layer type is a non-video layer, configuring acquisition of the whole image region in any display mode;
when the layer type is a video layer:
in the normal mode, configuring acquisition of the whole image region;
in the left-right 3D mode, configuring acquisition of the left half region;
in the top-bottom 3D mode, configuring acquisition of the upper half region;
subsequently, adjusting the display region of the image on the left canvas, i.e. adjusting the vertex matrix and the viewport according to the position of the layer on the whole interface and the left display region;
drawing onto the left side of the composition canvas;
then judging whether the mode is the normal mode;
if it is the normal mode, ending the split-screen processing of this layer;
otherwise, in the other modes, configuring the image region to be acquired for the right canvas according to the layer type and the display mode, wherein:
when the layer type is a non-video layer, configuring acquisition of the whole image region in any display mode;
when the layer type is a video layer:
in the left-right 3D mode, configuring acquisition of the right half region;
in the top-bottom 3D mode, configuring acquisition of the lower half region;
then adjusting the display region of the image on the right canvas, i.e. adjusting the vertex matrix and the viewport according to the position of the layer on the whole interface and the right display region;
drawing onto the right side of the composition canvas, and ending the split-screen processing of this layer.
In one embodiment, the system split display module includes a composited layer split display module, which performs the following operations according to the display mode:
if the mode is the normal mode, acquiring the left half region of the composited layer and displaying it on the left side of the total screen, and acquiring the left half region of the composited layer and displaying it on the right side of the total screen;
if the mode is any other mode, acquiring the left half region of the composited layer and displaying it on the left side of the total screen, and acquiring the right half region of the composited layer and displaying it on the right side of the total screen.
In one embodiment, the screen is a single screen.
In one embodiment, the screen comprises a left screen and a right screen.
Drawings
Fig. 1 is a flowchart of the split-screen display method of the head-mounted visual device according to the present invention.
Fig. 2 is a flowchart of comprehensively determining the display mode in the split-screen display method of the head-mounted visual device according to the present invention.
Fig. 3 is a flowchart of layer classification split-screen composition in the split-screen display method of the head-mounted visual device according to the present invention.
Fig. 4 is a schematic diagram of the interface layers before split-screen processing in the split-screen display method of the head-mounted visual device according to the present invention.
Fig. 5 is a schematic diagram of the interface layers after split-screen processing in the split-screen display method of the head-mounted visual device according to the present invention.
Fig. 6 is a schematic diagram of the composition canvas and data storage in the normal mode when the composited layer is split for display in the split-screen display method of the head-mounted visual device according to the present invention.
Fig. 7 is a schematic diagram of the composition canvas and data storage in the left-right 3D mode or top-bottom 3D mode when the composited layer is split for display in the split-screen display method of the head-mounted visual device according to the present invention.
Fig. 8 is a block diagram of the head-mounted visual device of the present invention.
Fig. 9 is a diagram illustrating the display effect of split-screen display performed by a head-mounted visual device according to the related art.
Fig. 10 is a diagram showing the actual display effect of the split-screen display method of the head-mounted visual device according to the present invention.
Detailed Description
To describe the technical solutions of the present invention in more detail and to facilitate further understanding, specific embodiments of the invention are described below with reference to the accompanying drawings. It should be understood, however, that all of the illustrative embodiments and their descriptions are intended to illustrate the invention and are not to be construed as its only possible forms.
Referring to fig. 1, the split-screen display method of the head-mounted visual device disclosed by the invention comprises the following steps:
Step 10, acquiring the video type;
the video types include, but are not limited to, flat 2D, left-right 3D, and top-bottom 3D.
The video type is obtained by an external module calling an interface to set it; the external module includes, but is not limited to, a player application, a multimedia module, and a key-response module.
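As an illustration of how such an interface might look, here is a hedged C++ sketch; the class and method names (SplitScreenService, setVideoType) are hypothetical placeholders showing only the shape of the interface, not an actual Android or patent API.

```cpp
// Hypothetical interface sketch: a player application, multimedia module or
// key-response module reports the type of the video it is about to display,
// and the split-screen compositor reads it back when deciding the display mode.
enum class VideoType { Flat2D, LeftRight3D, TopBottom3D };

class SplitScreenService {
public:
    // Called by external modules (player app, multimedia module, key handler).
    void setVideoType(VideoType type) { mVideoType = type; }

    // Read by the compositor during display-mode determination (step 20).
    VideoType videoType() const { return mVideoType; }

private:
    VideoType mVideoType = VideoType::Flat2D;   // default: ordinary flat 2D content
};
```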
Step 20, comprehensively determining the display mode;
the display modes include a normal mode, a left-right 3D mode, and a top-bottom 3D mode.
The normal mode means that the picture content for the left eye and the right eye is the same;
the left-right 3D mode means that the left-eye picture content is the superposition of the left half of the video picture and all other non-video layers, and the right-eye picture content is the superposition of the right half of the video and all other non-video layers;
the top-bottom 3D mode means that the left-eye picture content is the superposition of the upper half of the video picture and all other non-video layers, and the right-eye picture content is the superposition of the lower half of the video and all other non-video layers.
Referring to fig. 2, the method of comprehensively determining the display mode includes the following steps:
traversing all visible layers and recording the number of original video layers (step 21); a visible layer is a layer that is still visible and needs to be drawn after the multiple layers of the Android system are overlaid, and an original video layer is image data output by a video decoder after decoding, i.e. original image data that has not been processed by OpenGL. The criterion for judging whether a layer is an original video layer may be that the format of the layer's graphic buffer is YUV; however, the invention is not limited to this criterion, and the judgment may also be based on other characteristics, for example a specific identifier placed on the graphic buffer by the decoder.
Judging whether the number of original video layers is 0 (step 22); when the number of original video layers is 0, the normal mode is determined (step 23); otherwise, the mode corresponding to the video type is determined (step 24). That is, it is determined whether an original video layer exists among the visible layers: if the video is in flat 2D format, the normal mode is determined; if the video is in left-right 3D format, the left-right 3D mode is determined; and if the video is in top-bottom 3D format, the top-bottom 3D mode is determined.
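A minimal sketch of this decision follows, assuming the YUV-buffer criterion described above; the types and the hasYuvBuffer flag are illustrative stand-ins for the real layer state, not the patent's code.

```cpp
#include <vector>

// Illustrative types for the sketch; an original (raw) video layer is modelled
// by a flag that is true when the layer's graphic buffer format is YUV.
enum class VideoType   { Flat2D, LeftRight3D, TopBottom3D };
enum class DisplayMode { Normal, LeftRight3D, TopBottom3D };
struct VisibleLayer { bool hasYuvBuffer; /* buffer, geometry, ... */ };

DisplayMode decideDisplayMode(const std::vector<VisibleLayer>& visibleLayers,
                              VideoType announcedType) {
    int originalVideoLayers = 0;
    for (const VisibleLayer& layer : visibleLayers)
        if (layer.hasYuvBuffer) ++originalVideoLayers;   // count decoder-output layers

    if (originalVideoLayers == 0)
        return DisplayMode::Normal;                      // no raw video: normal mode

    switch (announcedType) {                             // mode follows the video type
        case VideoType::LeftRight3D: return DisplayMode::LeftRight3D;
        case VideoType::TopBottom3D: return DisplayMode::TopBottom3D;
        default:                     return DisplayMode::Normal;   // flat 2D video
    }
}
```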
Step 30, layer classification split-screen composition;
specifically, layer classification split-screen composition means that, during composition, different split-screen processing is applied to all layers according to the layer type and the display mode. The layer types include an original video layer and a non-video layer; the non-video layers include the interface layer of each 2D application and the system layers.
In an embodiment, the canvas (FramebufferSurface, hereinafter referred to as the composition canvas) used for drawing when layers are composited in the Android system may be used for composition, and all layers to be composited are overlaid and drawn onto this canvas in display order using OpenGL. Layer classification split-screen processing means performing split-screen processing on the one or more visible layers in the system in turn, in display order.
Referring to fig. 3, the method of performing split-screen processing on a layer according to the invention includes the following steps (illustrative texture matrices for the region selections below are sketched after these steps):
configuring the image region to be acquired for the left canvas according to the layer type and the display mode (step 31);
when the layer type is a non-video layer, configuring acquisition of the whole image region in any display mode; specifically, the texture matrix of the vertex shader in OpenGL is set to sample the whole texture;
when the layer type is a video layer:
in the normal mode, configuring acquisition of the whole image region;
in the left-right 3D mode, configuring acquisition of the left half region; specifically, the texture matrix of the vertex shader in OpenGL is set to sample only the left half of the texture;
in the top-bottom 3D mode, configuring acquisition of the upper half region; specifically, the texture matrix of the vertex shader in OpenGL is set to sample only the upper half of the texture;
subsequently, adjusting the display region of the image on the left canvas (step 32), i.e. adjusting the vertex matrix and the viewport according to the position of the layer on the whole interface and the left display region;
drawing onto the left side of the composition canvas (step 33);
then judging whether the mode is the normal mode (step 34);
if it is the normal mode, the content seen by the left eye is the same as that seen by the right eye, so no second drawing is needed and the split-screen processing of this layer ends (step 38);
otherwise, in the other modes, configuring the image region to be acquired for the right canvas according to the layer type and the display mode (step 35), wherein:
when the layer type is a non-video layer, configuring acquisition of the whole image region in any display mode;
when the layer type is a video layer:
in the left-right 3D mode, configuring acquisition of the right half region; specifically, the texture matrix of the vertex shader in OpenGL is set to sample only the right half of the texture;
in the top-bottom 3D mode, configuring acquisition of the lower half region; specifically, the texture matrix of the vertex shader in OpenGL is set to sample only the lower half of the texture;
then adjusting the display region of the image on the right canvas (step 36), i.e. adjusting the vertex matrix and the viewport according to the position of the layer on the whole interface and the right display region;
drawing onto the right side of the composition canvas (step 37), and ending the split-screen processing of this layer (step 38).
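The texture matrices referred to in the steps above are not reproduced in this text. The following sketch shows, as an assumption, the general form such matrices typically take in OpenGL ES column-major convention, with the s axis running left to right and the t axis bottom to top; they are illustrative values, not the patent's exact matrices.

```cpp
#include <GLES2/gl2.h>

// Illustrative 4x4 texture matrices (column-major, OpenGL ES convention) that
// remap texture coordinates so only part of the source buffer is sampled.
// Assumption: s runs left-to-right, t runs bottom-to-top; with a different
// texture-coordinate origin the upper/lower pair would be swapped.
static const GLfloat kTexFull[16] = {        // identity: sample the whole image
    1, 0, 0, 0,   0, 1, 0, 0,   0, 0, 1, 0,   0, 0, 0, 1 };

static const GLfloat kTexLeftHalf[16] = {    // s' = 0.5 * s         -> left half
    0.5f, 0, 0, 0,   0, 1, 0, 0,   0, 0, 1, 0,   0, 0, 0, 1 };

static const GLfloat kTexRightHalf[16] = {   // s' = 0.5 * s + 0.5   -> right half
    0.5f, 0, 0, 0,   0, 1, 0, 0,   0, 0, 1, 0,   0.5f, 0, 0, 1 };

static const GLfloat kTexUpperHalf[16] = {   // t' = 0.5 * t + 0.5   -> upper half
    1, 0, 0, 0,   0, 0.5f, 0, 0,   0, 0, 1, 0,   0, 0.5f, 0, 1 };

static const GLfloat kTexLowerHalf[16] = {   // t' = 0.5 * t         -> lower half
    1, 0, 0, 0,   0, 0.5f, 0, 0,   0, 0, 1, 0,   0, 0, 0, 1 };
```

Before drawing the layer quad, the compositor would upload the chosen matrix to the vertex shader's texture-matrix uniform (for example with glUniformMatrix4fv) and set the viewport to the left or right half of the composition canvas.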
It should be noted that, when the system is initialized, the width and height of the composition canvas of the Android system need to be adjusted to be the same as the width and height of one terminal screen, or the total width and height of two terminal screens; in the case of two terminal screens, the width of the composition canvas is twice the width of the system window and the height of the composition canvas is the height of the system window.
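For clarity, a small sketch of this sizing rule follows; the Size struct and function name are illustrative only.

```cpp
// Illustrative: at system initialisation the composition canvas is sized to
// hold both eye images side by side, i.e. twice the system window width and
// the same height. This matches either one wide terminal screen (2W x H) or
// the combined extent of two W x H terminal screens.
struct Size { int width; int height; };

Size compositionCanvasSize(Size systemWindow) {
    return { systemWindow.width * 2, systemWindow.height };   // 2W x H
}
```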
The split-screen processing method is described below with a specific embodiment.
Fig. 4 shows the interface before split-screen processing. The current system has three layers whose display order is: the video layer 401 of the application, the playback progress bar layer 402 of the application, and the status bar layer 403 of the system; the display mode is the top-bottom 3D mode.
Split-screen processing is performed in the order 401, 402, 403. First, the upper half region of layer 401 is acquired and drawn onto the left canvas, and the lower half region of layer 401 is acquired and drawn onto the right canvas; then the whole region of layer 402 is acquired and drawn onto the left canvas, and the whole region of layer 402 is acquired and drawn onto the right canvas; finally the whole region of layer 403 is acquired and drawn onto the left canvas, and the whole region of layer 403 is acquired and drawn onto the right canvas.
Fig. 5 shows the interface after split-screen processing. The application interface and the system interface of the Android 2D interface are thus correctly displayed in split-screen form, and the top-bottom 3D video being played is split into correct left and right pictures.
Step 40, split display of the composited layer; specifically, different processing is performed according to the display mode:
if the mode is the normal mode, the left half region of the composited layer is acquired and displayed on the left side of the total screen, and the left half region of the composited layer is acquired and displayed on the right side of the total screen;
if the mode is any other mode, the left half region of the composited layer is acquired and displayed on the left side of the total screen, and the right half region of the composited layer is acquired and displayed on the right side of the total screen.
In one embodiment, for a composition canvas with a resolution of 2W x H, since the canvas is a logical concept, the memory is actually a continuous block of data storing the RGBA values of 2W x H pixels according to a certain rule; this block of memory is called the buffer. Given the inputs (buffer start address, x offset, y offset, x length, y length), the address of the RGBA value of each pixel on the screen is calculated and the data is read out for display.
Referring to fig. 6, which shows the composition canvas and data storage in the normal mode, the RGBA values of the pixels are stored from bottom to top and from left to right; since the right-hand region does not need to be drawn in the normal mode, the data in the second half of the buffer is invalid. For the left screen, the inputs are the buffer data with xoffset = 0, yoffset = 0, xlength = W, ylength = H, i.e. the data from (0, 0) to (W-1, H-1), which is the left half region of the composited layer. Likewise, for the right screen the inputs are the buffer data with xoffset = 0, yoffset = 0, xlength = W, ylength = H, so the data from (0, 0) to (W-1, H-1), i.e. the left half region of the composited layer, is also displayed on that screen. The composited image data is therefore reused in the normal mode, which avoids drawing the right-eye region and reduces overhead.
Referring to fig. 7, which shows the composition canvas and data storage in the left-right 3D mode or top-bottom 3D mode, the RGBA values of the pixels are again stored from bottom to top and from left to right. For the left screen, the inputs are the buffer data with xoffset = 0, yoffset = 0, xlength = W, ylength = H, i.e. the data from (0, 0) to (W-1, H-1), which is the left half region of the composited layer. For the right screen, the inputs are the buffer data with xoffset = W, yoffset = 0, xlength = W, ylength = H, so the data from (W, 0) to (2W-1, H-1), i.e. the right half region of the composited layer, is displayed on that screen.
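The split display step can be summarised with the following hedged sketch: it computes which half of the 2W x H composition buffer each physical screen should read and the byte address of a pixel inside that buffer. The Fetch struct, the row order and the 4-byte RGBA packing are assumptions made for the example, not the patent's exact memory layout.

```cpp
#include <cstddef>
#include <cstdint>

// Illustrative sketch of step 40 on a 2W x H RGBA composition buffer.
enum class DisplayMode { Normal, LeftRight3D, TopBottom3D };

struct Fetch { int xOffset, yOffset, xLength, yLength; };   // region a screen reads

// Which half of the composition buffer a screen scans out. In the normal mode
// both screens reuse the left half, so the right half is never drawn or read.
Fetch fetchForScreen(bool rightScreen, DisplayMode mode, int W, int H) {
    if (mode == DisplayMode::Normal || !rightScreen)
        return {0, 0, W, H};          // left half of the composited layer
    return {W, 0, W, H};              // right half, for the right screen in 3D modes
}

// Byte address of pixel (x, y) inside the continuous buffer, assuming 4-byte
// RGBA pixels and a row stride of 2W pixels.
const uint8_t* pixelAddress(const uint8_t* bufferStart, int x, int y, int W) {
    const std::size_t bytesPerPixel = 4;
    const std::size_t stride = static_cast<std::size_t>(2 * W) * bytesPerPixel;
    return bufferStart + static_cast<std::size_t>(y) * stride
                       + static_cast<std::size_t>(x) * bytesPerPixel;
}
```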
The invention also discloses a split-screen display device for a head-mounted visual device, which comprises a video type acquisition module, a comprehensive display mode determination module, a layer classification split-screen composition module and a composited layer split display module.
The video type acquisition module obtains the type of the video to be displayed; specifically, an external module calls an interface of the video type acquisition module to set the video type, and the external module includes, but is not limited to, a player application, a multimedia module, and a key-response module.
The comprehensive display mode determination module determines the display mode of the video to be displayed; the display modes include a normal mode, a left-right 3D mode, and a top-bottom 3D mode. The normal mode means that the picture content for the left eye and the right eye is the same; the left-right 3D mode means that the left-eye picture content is the superposition of the left half of the video picture and all other non-video layers, and the right-eye picture content is the superposition of the right half of the video and all other non-video layers; the top-bottom 3D mode means that the left-eye picture content is the superposition of the upper half of the video picture and all other non-video layers, and the right-eye picture content is the superposition of the lower half of the video and all other non-video layers.
The comprehensive display mode determination module performs the following operations:
traversing all visible layers and recording the number of original video layers; a visible layer is a layer that is still visible and needs to be drawn after the multiple layers of the Android system are overlaid, and an original video layer is image data output by a video decoder after decoding, i.e. original image data that has not been processed by OpenGL. The comprehensive display mode determination module traverses all visible layers, judges whether each is an original video layer, and records the number of original video layers; the criterion for judging whether a visible layer is an original video layer may be that the format of the layer's graphic buffer is YUV.
Judging whether the number of original video layers is 0; when the number of original video layers is 0, the normal mode is determined; otherwise, the mode corresponding to the video type is determined. That is, it is determined whether an original video layer exists among the visible layers: if the video is in flat 2D format, the normal mode is determined; if the video is in left-right 3D format, the left-right 3D mode is determined; and if the video is in top-bottom 3D format, the top-bottom 3D mode is determined.
The layer classification split-screen composition module applies different split-screen processing to all layers during composition according to the layer type and the display mode; the layer types include an original video layer and a non-video layer, and the non-video layers include the interface layer of each 2D application and the system layers.
The split-screen processing proceeds as follows:
configuring the image region to be acquired for the left canvas according to the layer type and the display mode: when the layer type is a non-video layer, configuring acquisition of the whole image region in any display mode; when the layer type is a video layer, configuring acquisition of the whole image region in the normal mode, of the left half region in the left-right 3D mode, and of the upper half region in the top-bottom 3D mode.
Adjusting the display region of the image on the left canvas, i.e. adjusting the vertex matrix and the viewport according to the position of the layer on the whole interface and the left display region;
drawing onto the left side of the composition canvas;
then judging whether the mode is the normal mode;
if so, i.e. in the normal mode, ending the split-screen processing of this layer;
if not, i.e. in the other modes, configuring the image region to be acquired for the right canvas according to the layer type and the display mode, wherein:
when the layer type is a non-video layer, configuring acquisition of the whole image region in any display mode;
when the layer type is a video layer, configuring acquisition of the right half region in the left-right 3D mode and of the lower half region in the top-bottom 3D mode.
Adjusting the display region of the image on the right canvas, i.e. adjusting the vertex matrix and the viewport according to the position of the layer on the whole interface and the right display region;
drawing onto the right side of the composition canvas, and ending the split-screen processing of this layer.
The composited layer split display module performs different processing according to the display mode:
if the mode is the normal mode, the left half region of the composited layer is acquired and displayed on the left side of the total screen, and the left half region of the composited layer is acquired and displayed on the right side of the total screen;
if the mode is any other mode, the left half region of the composited layer is acquired and displayed on the left side of the total screen, and the right half region of the composited layer is acquired and displayed on the right side of the total screen.
In addition, the invention also discloses a head-mounted visual device. As shown in fig. 8, the head-mounted visual device at least comprises a device for classified split-screen display of the 2D interface and a screen. The device for classified split-screen display of the 2D interface comprises an upper-layer drawing module 601, a system split-screen composition module 602 and a system split display module 603. The system split-screen composition module 602 includes a video type acquisition module, a comprehensive display mode determination module, and a layer classification split-screen composition module. The screen may be a single screen, or two screens comprising a left screen and a right screen; the display resolution of a screen is W (width) x H (height), as shown in fig. 8.
The two screens are transparent to the upper-layer drawing module 601, which draws according to the display resolution of one of them (i.e. a window resolution of W x H), including the drawing of the multiple layers in which the 2D applications and the system participate. The system split-screen composition module 602 performs split-screen processing on each visible layer according to the layer type and the display mode and composites an image with a resolution of 2W x H for displaying the left-eye and right-eye pictures; the specific method includes step 10 of acquiring the video type, step 20 of comprehensively determining the display mode, and step 30 of layer classification split-screen composition. The system split display module 603 selects the designated half region of the composition canvas and sends it to the two screens; the specific method includes step 40 of split display of the composited layer.
The split-screen display method and the split-screen display device of the head-mounted visual device have been implemented and used on a VR9 platform and proved feasible.
Fig. 9 illustrates display methods existing in the related art. In fig. 9, (a) shows the display effect of ordinary copy-based split-screen processing on left-right 3D video; (b) shows the display effect of ordinary copy-based split-screen processing on 2D video; (c) shows the system UI being displayed without split-screen in the left-right 3D mode; and (d) shows the inability to display top-bottom 3D video correctly.
Fig. 10 shows the actual display effect of the split-screen display method of the head-mounted visual device according to the present invention, including the display effect in the normal mode, the left-right 3D mode, and the top-bottom 3D mode.
Comparing fig. 10 with the prior-art display effects shown in fig. 9, it can be seen that in the actual display result the Android 2D interface is split correctly, the head-mounted visual device can interact normally, and left-right and top-bottom 3D video can be watched with the 3D effect experienced.
Taken together, the above embodiments and implementation results show that the display method of the invention achieves the following beneficial effects:
(1) Correct visual interaction and the 3D viewing experience are both achieved. The application interface and the system interface of the Android 2D interface are correctly displayed in split-screen form, while at the same time a 2D player playing 3D video produces correct left and right pictures so that the user can experience the 3D effect; this improves the system's compatibility with 2D software and reduces the development cost of the software the system requires.
(2) Multiple 3D video formats are supported, including left-right 3D and top-bottom 3D.
The above embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention. It should be understood that they are only exemplary embodiments and are not intended to limit the invention; any modification, equivalent substitution, improvement and the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (20)

1. A split-screen display method for a head-mounted visual device, characterized by comprising the following steps:
step 10, acquiring the video type;
wherein the video types include 2D video and 3D video, and the 3D video includes left-right 3D video and top-bottom 3D video;
step 20, comprehensively determining the display mode;
wherein the display mode includes a normal mode and a 3D mode, and the 3D mode includes a left-right 3D mode and a top-bottom 3D mode;
step 30, layer classification split-screen composition;
wherein layer classification split-screen composition means that, when all layers are composited, different split-screen processing is applied according to the layer type and the display mode; the layer types include an original video layer and a non-video layer;
the layer classification split-screen composition comprises the following steps:
configuring the image region to be acquired for the left canvas according to the layer type and the display mode:
when the layer type is a non-video layer, configuring acquisition of the whole image region in any display mode;
when the layer type is a video layer:
in the normal mode, configuring acquisition of the whole image region;
in the left-right 3D mode, configuring acquisition of the left half region;
in the top-bottom 3D mode, configuring acquisition of the upper half region;
subsequently, adjusting the display region of the image on the left canvas, i.e. adjusting the vertex matrix and the viewport according to the position of the layer on the whole interface and the left display region;
drawing onto the left side of the composition canvas;
then judging whether the mode is the normal mode;
if it is the normal mode, ending the split-screen processing of this layer;
otherwise, in the other modes, configuring the image region to be acquired for the right canvas according to the layer type and the display mode, wherein:
when the layer type is a non-video layer, configuring acquisition of the whole image region in any display mode;
when the layer type is a video layer:
in the left-right 3D mode, configuring acquisition of the right half region;
in the top-bottom 3D mode, configuring acquisition of the lower half region;
then adjusting the display region of the image on the right canvas, i.e. adjusting the vertex matrix and the viewport according to the position of the layer on the whole interface and the right display region;
drawing onto the right side of the composition canvas, and ending the split-screen processing of this layer.
2. The split-screen display method according to claim 1, characterized by further comprising:
step 40, split display of the composited layer;
if the mode is the normal mode, acquiring the left half region of the composited layer and displaying it on the left side of the total screen, and acquiring the left half region of the composited layer and displaying it on the right side of the total screen;
if the mode is any other mode, acquiring the left half region of the composited layer and displaying it on the left side of the total screen, and acquiring the right half region of the composited layer and displaying it on the right side of the total screen.
3. The split-screen display method according to claim 1, characterized in that, in step 10, the video type is obtained by an external module calling an interface to set it.
4. The split-screen display method according to claim 3, characterized in that the external module comprises a player application, a multimedia module, and a key-response module.
5. The split-screen display method according to any one of claims 1 to 4, characterized in that comprehensively determining the display mode in step 20 comprises the following steps:
traversing all visible layers and recording the number of original video layers;
judging whether the number of original video layers is 0;
when the number of original video layers is 0, determining the normal mode;
otherwise, when the number of original video layers is not 0, determining the mode corresponding to the video type, which includes the left-right 3D mode and the top-bottom 3D mode.
6. The split-screen display method according to claim 5, characterized in that determining the normal mode when the number of original video layers is 0 comprises: when the number of original video layers is 0, the video is in flat 2D format and the normal mode is determined.
7. The split-screen display method according to claim 5, characterized in that determining the mode corresponding to the video type when the number of original video layers is not 0 comprises: if the video is in left-right 3D format, determining the left-right 3D mode; and if the video is in top-bottom 3D format, determining the top-bottom 3D mode.
8. The split-screen display method according to any one of claims 1 to 7, characterized in that, in step 30, the non-video layers comprise the interface layer of each 2D application and the system layers.
9. The split-screen display method according to any one of claims 1 to 8, characterized by further comprising: when the system is initialized, adjusting the width and height of the composition canvas of the Android system to be the same as the width and height of one terminal screen, or the total width and height of two terminal screens; wherein, in the case of two terminal screens, the width of the composition canvas is twice the width of the system window and the height of the composition canvas is the height of the system window.
10. A split screen display device of a head-mounted visual device is characterized by comprising a video type acquisition module, a comprehensive judgment display mode module and a layer classification split screen synthesis module;
the video type obtaining module obtains the type of a video to be displayed; wherein the video types include 2D video and 3D video; the 3D videos comprise left and right 3D videos and up and down 3D videos;
the comprehensive judgment display mode module judges the display mode of the video to be displayed; the display mode comprises a common mode and a 3D mode, and the 3D mode comprises a left 3D mode, a right 3D mode and an up 3D mode;
the layer classification split screen synthesis module performs different split screen processing on all layers according to layer types and display modes during synthesis; the image layer types comprise an original video image layer and a non-video image layer;
the image layer classification split screen synthesis module operates to perform the following steps:
configuring an image area required to be acquired by the left canvas according to the layer type and the display mode;
when the layer type is a non-video layer, configuring and acquiring all image areas in any display mode;
when the layer type is a video layer:
in a common mode, configuring and acquiring all image areas;
configuring and acquiring a left half area in a left-right 3D mode;
configuring and acquiring an upper half area in a vertical 3D mode;
subsequently, adjusting the image display area on the left canvas; adjusting a vertex matrix and a viewport according to the position of the layer on the whole interface and a left display area;
drawing on the left side of the synthetic canvas;
then, judging whether the mode is a common mode;
if the layer is in the common mode, finishing the split screen processing of the layer;
otherwise, if the canvas is in other modes, configuring an image area needing to be acquired by the right canvas according to the layer type and the display mode; wherein the content of the first and second substances,
when the layer type is a non-video layer, configuring to acquire the entire image area in any display mode;
when the layer type is a video layer:
in the left-right 3D mode, configuring to acquire the right half area;
in the top-bottom 3D mode, configuring to acquire the lower half area;
then, adjusting the display area of the image on the right canvas, that is, adjusting the vertex matrix and the viewport according to the position of the layer on the whole interface and the right display area;
and drawing on the right side of the synthesis canvas to finish the split-screen processing of the layer.
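As a rough illustration of the source-region selection that claim 10 describes, the following sketch (hypothetical class and enum names, not the patented implementation) returns which part of a layer's image feeds the left or right half of the synthesis canvas:

```java
// Illustrative sketch of the source-region rules in claim 10 (assumed names):
// non-video layers always contribute their whole image; video layers contribute
// the half that matches the 3D packing and the side being composed. In the
// common mode only the left pass runs, so the right side is never requested.
public class LayerSourceRegion {
    enum LayerType { ORIGINAL_VIDEO, NON_VIDEO }
    enum DisplayMode { COMMON, LEFT_RIGHT_3D, TOP_BOTTOM_3D }
    enum Side { LEFT, RIGHT }

    /** Normalized source rectangle, expressed as fractions of the full image. */
    static final class SrcRect {
        final float left, top, right, bottom;
        SrcRect(float l, float t, float r, float b) { left = l; top = t; right = r; bottom = b; }
        @Override public String toString() { return "[" + left + "," + top + " .. " + right + "," + bottom + "]"; }
    }

    static SrcRect sourceRegion(LayerType type, DisplayMode mode, Side side) {
        if (type == LayerType.NON_VIDEO || mode == DisplayMode.COMMON) {
            return new SrcRect(0f, 0f, 1f, 1f);                       // entire image area
        }
        if (mode == DisplayMode.LEFT_RIGHT_3D) {
            return side == Side.LEFT ? new SrcRect(0f, 0f, 0.5f, 1f)  // left half
                                     : new SrcRect(0.5f, 0f, 1f, 1f); // right half
        }
        // TOP_BOTTOM_3D: upper half feeds the left canvas, lower half the right canvas.
        return side == Side.LEFT ? new SrcRect(0f, 0f, 1f, 0.5f)
                                 : new SrcRect(0f, 0.5f, 1f, 1f);
    }

    public static void main(String[] args) {
        System.out.println(sourceRegion(LayerType.ORIGINAL_VIDEO, DisplayMode.TOP_BOTTOM_3D, Side.RIGHT));
    }
}
```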
11. The split-screen display device according to claim 10, further comprising a synthesized layer split display module, wherein the synthesized layer split display module performs the following operations according to the display mode:
if the display mode is the common mode, acquiring the left half area of the synthesized layer and displaying it on the left side of the total screen, and also acquiring the left half area of the synthesized layer and displaying it on the right side of the total screen;
and if the display mode is one of the other modes, acquiring the left half area of the synthesized layer and displaying it on the left side of the total screen, and acquiring the right half area of the synthesized layer and displaying it on the right side of the total screen.
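Claim 11 amounts to a small routing rule, sketched below with hypothetical names: in the common mode both sides of the total screen show the left half of the synthesized layer (the same 2D content for both eyes), while in the 3D modes each side shows its own half.

```java
// Sketch of the display routing in claim 11 (names are illustrative only).
public class SynthesizedLayerRouter {
    enum Half { LEFT_HALF, RIGHT_HALF }

    static Half halfFor(boolean rightSideOfTotalScreen, boolean commonMode) {
        if (commonMode) {
            return Half.LEFT_HALF;   // duplicate the left half to both sides
        }
        return rightSideOfTotalScreen ? Half.RIGHT_HALF : Half.LEFT_HALF;
    }

    public static void main(String[] args) {
        System.out.println(halfFor(true, true));   // LEFT_HALF (common mode)
        System.out.println(halfFor(true, false));  // RIGHT_HALF (3D modes)
    }
}
```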
12. The split-screen display device according to claim 10, wherein the video type is acquired by being set through a call interface by an external module.
13. The split-screen display device according to claim 12, wherein the external modules include a player application, a multimedia module and a key response module.
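A possible shape for the call interface of claims 12 and 13 is sketched below; the interface, enum and method names are assumptions for illustration and are not taken from the patent:

```java
// Hypothetical call interface through which external modules (player
// application, multimedia module, key response module) set the video type.
public interface VideoTypeSetter {
    enum VideoType { PLANE_2D, LEFT_RIGHT_3D, TOP_BOTTOM_3D }

    void setVideoType(VideoType type);

    // Example caller: a key response module toggling between 2D and left-right 3D.
    static void onUserToggle3d(VideoTypeSetter compositor, boolean enable3d) {
        compositor.setVideoType(enable3d ? VideoType.LEFT_RIGHT_3D : VideoType.PLANE_2D);
    }
}
```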
14. The split-screen display device according to any one of claims 10 to 13, wherein the comprehensive judgment display mode module is operative to perform the following steps:
traversing all visible layers and recording the number of original video layers;
determining whether the number of original video layers is 0;
when the number of original video layers is 0, determining the display mode to be the common mode;
otherwise, when the number of original video layers is not 0, determining the mode corresponding to the video type, i.e. the left-right 3D mode or the top-bottom 3D mode.
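The decision in claim 14 reduces to a short function; the sketch below uses assumed enum names and is only an illustration of the rule, not the patented code:

```java
// Sketch of the display-mode decision in claim 14: with no original video
// layer visible, fall back to the common (plane 2D) mode; otherwise choose
// the mode that matches the reported video type.
public class DisplayModeDecider {
    enum VideoType { PLANE_2D, LEFT_RIGHT_3D, TOP_BOTTOM_3D }
    enum DisplayMode { COMMON, LEFT_RIGHT_3D, TOP_BOTTOM_3D }

    static DisplayMode decide(int originalVideoLayerCount, VideoType type) {
        if (originalVideoLayerCount == 0) {
            return DisplayMode.COMMON;
        }
        switch (type) {
            case LEFT_RIGHT_3D: return DisplayMode.LEFT_RIGHT_3D;
            case TOP_BOTTOM_3D: return DisplayMode.TOP_BOTTOM_3D;
            default:            return DisplayMode.COMMON;
        }
    }

    public static void main(String[] args) {
        System.out.println(decide(1, VideoType.LEFT_RIGHT_3D)); // LEFT_RIGHT_3D
        System.out.println(decide(0, VideoType.LEFT_RIGHT_3D)); // COMMON
    }
}
```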
15. The split-screen display device according to claim 14, wherein determining the common mode when the number of original video layers is 0 comprises: when the number of original video layers is 0, the video is in a plane 2D format and the display mode is determined to be the common mode.
16. The split-screen display device according to claim 14, wherein determining the mode corresponding to the video type when the number of original video layers is not 0 comprises: if the video is in a left-right 3D format, determining the display mode to be the left-right 3D mode; and if the video is in a top-bottom 3D format, determining the display mode to be the top-bottom 3D mode.
17. The split-screen display device according to any one of claims 10 to 16, wherein the non-video layers include the interface layer of each 2D application and the system layer.
18. A head-mounted visual device, comprising a 2D interface classification split-screen display device and a screen, characterized in that the 2D interface classification split-screen display device comprises an upper-layer drawing module, a system split-screen synthesis device and a system split-screen display module;
the system split-screen synthesis device comprises a video type acquisition module, a comprehensive judgment display mode module and a layer classification split-screen synthesis module; wherein:
the video type acquisition module acquires the type of a video to be displayed; the video types include 2D video and 3D video, and the 3D video includes left-right 3D video and top-bottom 3D video;
the comprehensive judgment display mode module determines the display mode of the video to be displayed; the display modes include a common mode and 3D modes, and the 3D modes include a left-right 3D mode and a top-bottom 3D mode;
the layer classification split-screen synthesis module performs different split-screen processing on all layers during synthesis according to the layer type and the display mode; the layer types include an original video layer and a non-video layer;
the layer classification split-screen synthesis module operates as follows:
configuring the image area to be acquired for the left canvas according to the layer type and the display mode;
when the layer type is a non-video layer, configuring to acquire the entire image area in any display mode;
when the layer type is a video layer:
in the common mode, configuring to acquire the entire image area;
in the left-right 3D mode, configuring to acquire the left half area;
in the top-bottom 3D mode, configuring to acquire the upper half area;
subsequently, adjusting the display area of the image on the left canvas, that is, adjusting the vertex matrix and the viewport according to the position of the layer on the whole interface and the left display area;
drawing on the left side of the synthesis canvas;
then, determining whether the display mode is the common mode;
if the display mode is the common mode, the split-screen processing of the layer is finished;
otherwise, in the other modes, configuring the image area to be acquired for the right canvas according to the layer type and the display mode; wherein:
when the layer type is a non-video layer, configuring to acquire the entire image area in any display mode;
when the layer type is a video layer:
in the left-right 3D mode, configuring to acquire the right half area;
in the top-bottom 3D mode, configuring to acquire the lower half area;
then, adjusting the display area of the image on the right canvas, that is, adjusting the vertex matrix and the viewport according to the position of the layer on the whole interface and the right display area;
and drawing on the right side of the synthesis canvas to finish the split-screen processing of the layer.
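The viewport adjustment referred to in claims 10 and 18 can be pictured with the sketch below. It assumes the synthesis canvas is twice the system-window width (claim 9) and that a layer keeps its interface coordinates inside each half; the class and parameter names are hypothetical:

```java
// Sketch of the viewport placement for claims 10 and 18 (illustrative names):
// a layer that occupies a rectangle on the whole 2D interface is drawn into
// the same rectangle inside the left half of the synthesis canvas, or shifted
// by one window width when it is drawn into the right half.
public class ViewportMapper {
    /** Integer rectangle on the synthesis canvas; (x, y) is the top-left corner. */
    static final class Rect {
        final int x, y, w, h;
        Rect(int x, int y, int w, int h) { this.x = x; this.y = y; this.w = w; this.h = h; }
        @Override public String toString() { return x + "," + y + " " + w + "x" + h; }
    }

    static Rect viewportFor(int windowWidth, int layerX, int layerY,
                            int layerW, int layerH, boolean rightHalf) {
        int xOffset = rightHalf ? windowWidth : 0;  // shift into the right half when needed
        return new Rect(xOffset + layerX, layerY, layerW, layerH);
    }

    public static void main(String[] args) {
        // A 200x100 layer at (100, 50) on a 1280-wide interface, drawn into the right half.
        System.out.println(viewportFor(1280, 100, 50, 200, 100, true)); // 1380,50 200x100
    }
}
```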
19. The head-mounted visual device according to claim 18, wherein the system split-screen display module comprises a synthesized layer split display module that performs the following operations according to the display mode:
if the display mode is the common mode, acquiring the left half area of the synthesized layer and displaying it on the left side of the total screen, and also acquiring the left half area of the synthesized layer and displaying it on the right side of the total screen;
and if the display mode is one of the other modes, acquiring the left half area of the synthesized layer and displaying it on the left side of the total screen, and acquiring the right half area of the synthesized layer and displaying it on the right side of the total screen.
20. The head-mounted visual device according to claim 18, wherein the screen is one screen or two screens, the two screens comprising a left screen and a right screen.
CN201910468041.6A 2019-05-31 2019-05-31 Head-mounted visual equipment and split-screen display method and device thereof Pending CN110597577A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910468041.6A CN110597577A (en) 2019-05-31 2019-05-31 Head-mounted visual equipment and split-screen display method and device thereof

Publications (1)

Publication Number Publication Date
CN110597577A (en) 2019-12-20

Family

ID=68852542

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910468041.6A Pending CN110597577A (en) 2019-05-31 2019-05-31 Head-mounted visual equipment and split-screen display method and device thereof

Country Status (1)

Country Link
CN (1) CN110597577A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105447898A (en) * 2015-12-31 2016-03-30 北京小鸟看看科技有限公司 Method and device for displaying 2D application interface in virtual real device
CN106792094A (en) * 2016-12-23 2017-05-31 歌尔科技有限公司 The method and VR equipment of VR device plays videos
WO2018086295A1 (en) * 2016-11-08 2018-05-17 华为技术有限公司 Application interface display method and apparatus
CN109271117A (en) * 2017-07-17 2019-01-25 北京海鲸科技有限公司 A kind of image display method, device and equipment

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113268302A (en) * 2021-05-27 2021-08-17 杭州灵伴科技有限公司 Display mode switching method and device of head-mounted display equipment
CN114079821A (en) * 2021-11-18 2022-02-22 福建汇川物联网技术科技股份有限公司 Video playing method and device, electronic equipment and readable storage medium
CN114079821B (en) * 2021-11-18 2024-02-20 福建汇川物联网技术科技股份有限公司 Video playing method and device, electronic equipment and readable storage medium

Similar Documents

Publication Publication Date Title
CN105447898B (en) The method and apparatus of 2D application interface are shown in a kind of virtual reality device
US8666147B2 (en) Multi-view image generating method and apparatus
CN108108140B (en) Multi-screen cooperative display method, storage device and equipment supporting 3D display
US10935788B2 (en) Hybrid virtual 3D rendering approach to stereovision
BRPI0911014B1 (en) METHOD OF CREATING A THREE-DIMENSIONAL IMAGE SIGNAL FOR RENDING ON A DISPLAY, DEVICE FOR CREATING A THREE-DIMENSIONAL IMAGE SIGNAL FOR RENDING ON A DISPLAY, METHOD OF PROCESSING A THREE-DIMENSIONAL IMAGE SIGNAL, AND DEVICE FOR PROCESSING A THREE-DIMENSIONAL IMAGE
JP2005534113A (en) Method and system enabling real-time mixing of composite and video images by a user
JP2012085301A (en) Three-dimensional video signal processing method and portable three-dimensional display device embodying the method
KR101090981B1 (en) 3d video signal processing method and portable 3d display apparatus implementing the same
US11589027B2 (en) Methods, systems, and media for generating and rendering immersive video content
CN110597577A (en) Head-mounted visual equipment and split-screen display method and device thereof
US8411110B2 (en) Interactive image and graphic system and method capable of detecting collision
Hutchison Introducing DLP 3-D TV
CN113301425A (en) Video playing method, video playing device and electronic equipment
JP5016648B2 (en) Image processing method for 3D display device having multilayer structure
US11297378B2 (en) Image arrangement determination apparatus, display controlling apparatus, image arrangement determination method, display controlling method, and program
CN107491934B (en) 3D interview system based on virtual reality
JP2008053884A (en) Image processing method and apparatus and electronic device utilizing them
CN115665461B (en) Video recording method and virtual reality device
US11962743B2 (en) 3D display system and 3D display method
CN109803163B (en) Image display method and device and storage medium
CN113268302B (en) Display mode switching method and device of head-mounted display equipment
CN102111630A (en) Image processing device, image processing method, and program
KR101438447B1 (en) An apparatus for displaying 3-dimensional subtitles and the method thereof
JP2003108112A (en) Method and device for object display
CN116527861A (en) Method and device for displaying preview image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination