CN116996658A - Image display method, system, device and storage medium - Google Patents


Info

Publication number
CN116996658A
Authority
CN
China
Prior art keywords
eye image
image
frame
display device
head
Prior art date
Legal status
Pending
Application number
CN202310047408.3A
Other languages
Chinese (zh)
Inventor
郑超
张�浩
苗京花
陈丽莉
范清文
郝帅
Current Assignee
BOE Technology Group Co Ltd
Beijing BOE Display Technology Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Beijing BOE Display Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd and Beijing BOE Display Technology Co Ltd
Priority to CN202310047408.3A
Publication of CN116996658A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G02: OPTICS
    • G02B: OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00: Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01: Head-up displays
    • G02B27/017: Head mounted
    • G02B27/0172: Head mounted characterised by optical features
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30: Image reproducers
    • H04N13/332: Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/344: Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30: Image reproducers
    • H04N13/398: Synchronisation thereof; Control thereof
    • G: PHYSICS
    • G02: OPTICS
    • G02B: OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00: Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01: Head-up displays
    • G02B27/0101: Head-up displays characterised by optical features
    • G02B2027/0132: Head-up displays characterised by optical features comprising binocular systems
    • G02B2027/0134: Head-up displays characterised by optical features comprising binocular systems of stereoscopic type

Abstract

The present disclosure provides an image display method, system, device, and storage medium. In the method, a head-mounted display device acquires a left eye image and a right eye image corresponding to the current frame display picture and, while performing 3D display based on them, transmits both images to a 3D display device, so that the 3D display device synchronously presents the 3D picture shown in the head-mounted display device.

Description

Image display method, system, device and storage medium
Technical Field
The disclosure relates to the technical field of display, and in particular to an image display method, an image display system, an image display device, and a storage medium.
Background
Currently, at venues with 3D display capability, such as VR (Virtual Reality) product exhibitions, the displayed content is typically mirrored synchronously to an external 2D display. A user wearing the head-mounted display device experiences the 3D effect well, but other viewers can only watch the 2D display and cannot share that 3D experience.
Disclosure of Invention
In view of the above, the present disclosure provides an image display method, system, apparatus, and storage medium that mitigate, or at least partially mitigate, the problem described above.
In a first aspect, an embodiment of the present disclosure provides an image display method, applied to a head-mounted display device, including:
acquiring a left eye image and a right eye image corresponding to a current frame display picture;
performing 3D display based on the left eye image and the right eye image; and
transmitting the left eye image and the right eye image to a 3D display device, so that the 3D display device performs 3D display based on the received left eye image and right eye image.
Further, transmitting the left eye image and the right eye image to the 3D display device includes:
adding a marker information header at a preset position of each of the left eye image and the right eye image to obtain a first image frame corresponding to the left eye image and a second image frame corresponding to the right eye image, wherein the marker information header includes image frame check information used to mark the transmission order of the image frames; and
sequentially transmitting the first image frame and the second image frame to the 3D display device in a preset order.
Further, the left eye image and the right eye image each include a gazing area and a non-gazing area, the resolution of the gazing area being greater than that of the non-gazing area, and adding the marker information header at the preset positions of the left eye image and the right eye image includes:
compressing the non-gazing areas in the left eye image and the right eye image respectively;
splicing each gazing area with its compressed non-gazing area to obtain a compressed left eye image and a compressed right eye image; and
adding the marker information header at the preset positions of the compressed left eye image and the compressed right eye image, the marker information header further including: partition parameters corresponding to the gazing area and the non-gazing area, and the compression ratio of the non-gazing area.
In a second aspect, an embodiment of the present disclosure provides an image display method, applied to a 3D display device, including:
receiving a left eye image and a right eye image transmitted by a target head-mounted display device, wherein the left eye image and the right eye image correspond to a 3D picture displayed by the target head-mounted display device;
displaying the 3D picture based on the received left eye image and right eye image.
Further, before receiving the left eye image and the right eye image transmitted by the target head-mounted display device, the method further comprises:
presenting a list of available devices, the list of available devices including at least one head mounted display device;
and determining a target head-mounted display device from the available device list, and carrying out pairing connection with the determined target head-mounted display device.
Further, determining a target head mounted display device from the list of available devices includes:
taking the head-mounted display device most recently paired with the 3D display device in the available device list as the target head-mounted display device; or
in response to a user operation, a target head mounted display device is selected from the list of available devices.
Further, receiving the left eye image and the right eye image transmitted by the target head-mounted display device includes:
receiving a current image frame transmitted by the target head-mounted display device, the current image frame carrying a marker information header that includes image frame check information; and
identifying the view type corresponding to the current image frame based on the image frame check information; if the view type is a left eye view, parsing the current image frame to obtain a left eye image, and if the view type is a right eye view, parsing the current image frame to obtain a right eye image.
Further, the image frame check information includes a parity bit, and identifying the view type corresponding to the current image frame based on the image frame check information includes:
judging the view type corresponding to the current image frame to be a left eye view if the parity bit indicates an odd frame, and a right eye view if the parity bit indicates an even frame.
Further, receiving the left eye image and the right eye image transmitted by the target head-mounted display device further includes:
detecting, based on the image frame check information, whether a frame was lost during transmission, and if so, multiplexing the image of the corresponding view type from the previous frame display picture.
Further, the image frame check information includes a frame flag bit, and detecting whether a frame was lost during transmission and, if so, multiplexing the image of the corresponding view type from the previous frame display picture includes:
if the view type corresponding to the current image frame is a left eye view, waiting for the next image frame; and
if the view type corresponding to the current image frame is a right eye view and the view type corresponding to the previous image frame is a left eye view, judging, based on the frame flag bit, whether the current image frame and the previous image frame belong to the same frame display picture; if not, judging that the right eye image of the current frame display picture and the left eye image of the next frame display picture have been lost, multiplexing the right eye image of the previous frame display picture as the right eye image of the current frame display picture, and multiplexing the left eye image of the current frame display picture as the left eye image of the next frame display picture.
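As an illustration only (the disclosure does not prescribe an implementation), the frame-loss handling above can be sketched in Python. Each received frame is modeled as a (view, flag, image) tuple, where view is 'L' or 'R', flag is the frame flag bit identifying the frame display picture, and image is the payload; the tuple layout and all names are assumptions:

```python
def handle_frames(frames):
    """Replay the frame-loss handling described above on a sequence of
    (view, flag, image) tuples.  Returns the list of (left, right)
    pairs actually displayed."""
    displayed = []
    prev_left = prev_right = None
    pending_left = None       # left-eye image waiting for its right-eye mate
    pending_flag = None
    for view, flag, image in frames:
        if view == 'L':
            pending_left, pending_flag = image, flag
            continue          # wait for the next (right-eye) frame
        # view == 'R'
        if pending_left is not None and flag == pending_flag:
            left, right = pending_left, image        # complete pair
        elif pending_left is not None:
            # Flags differ: the right-eye frame of the pending picture and
            # the left-eye frame of this one were lost.  Finish the pending
            # picture with the previous right image, then reuse its left
            # image for the current picture.
            displayed.append((pending_left, prev_right))
            prev_left, prev_right = pending_left, prev_right
            left, right = pending_left, image
        else:
            left, right = prev_left, image           # lone right-eye frame
        displayed.append((left, right))
        prev_left, prev_right = left, right
        pending_left = pending_flag = None
    return displayed
```

With the input `[('L', 1, 'L1'), ('R', 1, 'R1'), ('L', 2, 'L2'), ('R', 3, 'R3')]`, the mismatched flag on the final right-eye frame causes picture 2 to reuse R1 and picture 3 to reuse L2, matching the multiplexing rule above.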
Further, the current image frame includes a gazing area and a compressed non-gazing area, the resolution of the gazing area being greater than that of the non-gazing area, and the marker information header further includes partition parameters corresponding to the gazing area and the non-gazing area and the compression ratio of the non-gazing area. Parsing the current image frame then includes:
performing pixel expansion on the compressed non-gazing area in the current image frame based on the partition parameters and the compression ratio, and removing the marker information header from the current image frame.
Further, when the display screen of the 3D display device is a 3D lenticular display screen, displaying the 3D picture based on the received left eye image and right eye image includes:
multiplexing the received left eye image and right eye image multiple times to generate a multi-view mosaic; and
displaying the 3D picture based on the multi-view mosaic.
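A real lenticular panel maps sub-pixels to viewpoints according to its lens pitch and slant; as a deliberately simplified illustration of multiplexing the two received views into a mosaic, image columns can be assigned to the left and right views cyclically:

```python
def interleave_columns(left, right):
    """Build a two-view column-interleaved mosaic from left/right
    images given as lists of pixel rows: even columns take pixels
    from the left view, odd columns from the right view.  This is a
    stand-in for the panel-specific sub-pixel mapping of an actual
    3D lenticular display screen."""
    views = [left, right]
    height, width = len(left), len(left[0])
    return [[views[x % 2][y][x] for x in range(width)]
            for y in range(height)]
```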
In a third aspect, embodiments of the present disclosure provide an image display system, including a head-mounted display device and a 3D display device, wherein:
the head-mounted display device is used for acquiring a left eye image and a right eye image corresponding to a current frame display picture, performing 3D display based on the left eye image and the right eye image, and transmitting the left eye image and the right eye image to the 3D display device;
The 3D display device is configured to perform 3D display based on the received left-eye image and right-eye image.
In a fourth aspect, embodiments of the present disclosure provide a head mounted display device, comprising: a processor, a memory and a computer program stored on the memory and executable on the processor, which when executed by the processor, implements the steps of the image display method of the first aspect described above.
In a fifth aspect, embodiments of the present disclosure provide a 3D display device, including: the image display device comprises a 3D display screen, a processor, a memory and a computer program stored on the memory and capable of running on the processor, wherein the computer program realizes the steps of the image display method according to the second aspect when being executed by the processor.
In a sixth aspect, embodiments of the present disclosure provide a computer-readable storage medium storing computer instructions that, when executed on a computer, cause the computer to perform the steps of the image display method of the first or second aspects described above.
The technical solutions provided in the embodiments of the present disclosure have at least the following technical effects or advantages:
The embodiments of the disclosure provide an image display method, system, device, and storage medium in which the head-mounted display device, while performing 3D display of the acquired left and right eye images, also transmits them to the 3D display device, so that the 3D display device synchronously presents the 3D picture shown in the head-mounted display device. Thus, while one user experiences the head-mounted display device, other viewers can see the same 3D effect on the 3D display device and obtain a better 3D experience.
The foregoing is merely an overview of the technical solutions of the embodiments of the present disclosure. To make those technical means clearer and implementable according to the content of the specification, and to make the above and other objects, features, and advantages of the embodiments more readily understandable, specific implementations are described below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
Fig. 1 is a schematic structural diagram of an image display system according to a first aspect of an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of an exemplary encoding compression process in an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of an exemplary decoding process in an embodiment of the present disclosure;
FIG. 4 is a flowchart of an image display method according to a second aspect of an embodiment of the present disclosure;
FIG. 5 is a flowchart of an image display method provided by a third aspect in an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a 3D display device according to a fifth aspect of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present specification will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present specification are shown in the drawings, it should be understood that the present specification may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
It should be noted that the term "and/or" herein merely describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. The term "plurality" means two or more.
In a first aspect, embodiments of the present disclosure provide an image display system. As shown in fig. 1, the image display system 10 includes: a 3D display device 102 and a head mounted display device 101 capable of data interaction with the 3D display device 102. It should be noted that, the image display system 10 may include one or more head-mounted display devices 101 and one or more external 3D display devices 102, and the numbers of the head-mounted display devices 101 and the 3D display devices 102 shown in fig. 1 are only illustrative, and the specific numbers are determined according to practical application scenarios.
For example, the head-mounted display device 101 may be a VR device such as VR glasses or a VR helmet, or another head-mounted device with a 3D display function, which is not limited in this embodiment. The 3D display device 102 is a smart display with a 3D display function; it may use, for example, a grating-type display screen or a 3D lenticular display screen, which is not limited in this embodiment.
In some examples, the 3D display device 102 and the head-mounted display device 101 may be communicatively connected by wireless transmission such as WIFI or Bluetooth to implement wireless screen projection. Wireless projection may also be referred to as wireless mirroring, screen casting, or screen sharing: the screen content of a mobile device (such as a head-mounted display device, mobile phone, tablet, notebook, or computer) is displayed in real time on the screen of another device, and the output content includes various media information as well as the live operation screen. Of course, in other examples, the 3D display device 102 and the head-mounted display device 101 may also be connected through a data line, which is not limited in this embodiment.
For example, the 3D display device 102 and the head mounted display device 101 may be disposed in the same local area network, and the 3D display device 102 and the head mounted display device 101 are connected to the local area network through a wired or wireless network connection. In the local area network, the 3D display device 102 and the head-mounted display devices 101 may implement network connection through a router or the like, and the router may assign an IP address to each access device in the local area network, so that intercommunication between the head-mounted display device 101 and the 3D display device 102 is implemented according to the IP address.
For example, when the image display system 10 is applied to a VR product exhibition hall, the head-mounted display device 101 is a VR product exhibited in the hall, such as a VR helmet, and the 3D display device 102 is a 3D smart display provided in the exhibition hall that can be viewed by audiences other than the wearer of the head-mounted display device 101.
In use, the head-mounted display device 101 acquires a left eye image and a right eye image corresponding to the current frame display picture, performs 3D display based on them, and transmits them to the 3D display device 102, which performs 3D display based on the received left eye image and right eye image. In this way, while a player uses the head-mounted display device 101, the 3D picture the player sees through it is synchronously shared to the 3D display device 102, so that other nearby viewers can see the 3D picture and experience the 3D effect, further enriching the functions of a VR exhibition hall.
In some examples, when the system includes a plurality of head-mounted display devices 101, the 3D display device 102 must first determine its paired head-mounted display device 101 after starting, that is, select the head-mounted display device 101 whose 3D picture is to be synchronously presented on the 3D display device 102; only after the pairing connection is completed can the display data transmitted by that head-mounted display device 101 be received. For distinction, the head-mounted display device 101 whose 3D picture is synchronously presented on the 3D display device 102 is referred to herein as the target head-mounted display device corresponding to the 3D display device 102.
Taking WIFI communication as an example, after the 3D display device 102 starts, it first starts its WIFI module, traverses the head-mounted display devices 101 in the local area network to obtain information on the currently available head-mounted display devices 101, and displays an available device list containing at least one head-mounted display device 101. A target head-mounted display device is then determined from the list and a pairing connection is established with it.
In use, the target head-mounted display device may be determined from the available device list in either of two ways:
First: the head-mounted display device 101 most recently paired with the 3D display device 102 is taken from the available device list as the target head-mounted display device, enabling quick pairing and improving image display efficiency. In some usage scenarios, if a previously paired head-mounted display device 101 is present in the available device list, the 3D display device 102 may automatically connect to it; if several previously paired head-mounted display devices 101 are present, the most recently connected one is automatically selected for pairing.
Second: in response to a user operation, the target head-mounted display device is selected from the available device list. This allows the user to flexibly choose which head-mounted display device 101 to present.
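The two selection rules can be combined in a small helper; the function name, signature, and device identifiers are illustrative assumptions. An explicit user operation takes precedence, otherwise the most recently paired device is reused:

```python
def choose_target(available, last_paired=None, user_choice=None):
    """Determine the target head-mounted display device from the
    available-device list: a user-selected device wins; otherwise
    fall back to the most recently paired device if it is still
    available; otherwise return None so the caller can wait for a
    user operation."""
    if user_choice is not None and user_choice in available:
        return user_choice
    if last_paired is not None and last_paired in available:
        return last_paired
    return None
```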
There are various ways in which the user may operate the 3D display device 102, for example, the 3D display device 102 may be configured with function keys, which may include touch keys displayed on a screen, and/or physical keys through which the user performs corresponding operations. For another example, the 3D display device 102 has a remote control function, and the user may perform a corresponding operation through a remote control or a function key in an application program.
For example, when the 3D display device 102 is in an unpaired state, the user may select the name of the head mounted display device 101 to be presented from a list of available devices displayed on the screen, and then click on a pairing button, so that the 3D display device 102 handshakes with the selected target head mounted display device to achieve data transmission. When the 3D display device 102 is in the paired state, if the user wants to switch the target head-mounted display device, the name of the corresponding head-mounted display device 101 can be reselected from the list of available devices displayed on the screen, and then the switch button is clicked, so that the switching and the pairing of the target head-mounted display device can be realized.
Of course, the user may perform other operations in addition to the selection and switching operations. For example, if the available device list includes the head-mounted display device 101 that does not need to be displayed, the user may select the name of the head-mounted display device 101 that needs to be hidden from the available device list displayed on the screen, and then click a hidden button to implement hiding of the corresponding head-mounted display device 101. For another example, when the 3D display device 102 is in the paired state, the user may also click the disconnect button to disconnect the 3D display device 102 from the target head-mounted display device, and after the disconnection, the 3D display device 102 does not synchronously display the 3D screen displayed by the target head-mounted display device.
The image display process executed by the target head-mounted display device after it has successfully paired with the 3D display device 102 is described below.
The target head-mounted display device performs data processing according to the scene to obtain the left eye image and right eye image corresponding to the current frame display picture. In some examples, the gazing areas of the user's left and right eyes are determined through eye tracking; the gazing area and the non-gazing area outside it are then rendered at different resolutions, and the rendered gazing-area and non-gazing-area images are spliced to obtain the left eye image and the right eye image corresponding to the current frame display picture.
In this case, the resolution of the gazing area in the left and right eye images is greater than that of the non-gazing area. The gazing area may be understood as a high-definition area centered on the gaze point of the human eye, while the non-gazing area surrounds it and may have a relatively low resolution. This ensures high image quality in the visually attended area while reducing the overall data volume of the image to be displayed and the computational load on the processor.
For example, if the original left and right eye images have 4K resolution, the non-gazing area can be compressed by a factor of 4 so that its resolution is one quarter of the original, while the gazing area retains the full 4K resolution, realizing high/low-definition partitioned rendering.
Further, the target head-mounted display device divides the left eye image and the right eye image obtained by processing into two paths, wherein one path is rendered on a screen of the target head-mounted display device for 3D display, and the other path is transmitted to the 3D display device 102.
In some examples, the left and right eye images may be transmitted in two frames. In that case, transmitting them to the 3D display device 102 may include: adding a marker information header at a preset position of each of the left eye image and the right eye image to obtain a first image frame corresponding to the left eye image and a second image frame corresponding to the right eye image; and sequentially transmitting the first image frame and the second image frame to the 3D display device 102 in a preset order.
For example, the images can be transmitted left eye first and right eye second, with the first image frame sent as an odd frame and the second as an even frame; transmitting the left- and right-eye data separately in odd and even frames reduces the bandwidth required for wireless transmission.
The marker information header includes image frame check information used to mark the transmission order of the image frames, so that the 3D display device 102 can identify the order of the received image frames and ensure the accuracy of the displayed picture.
For example, the image frame check information may include a parity bit, from which the 3D display device 102 can identify the view type of a received image frame, i.e., whether it is a left eye view or a right eye view. Further, since the first and second image frames are transmitted and parsed alternately but actually carry data of the same frame display picture, the image frame check information may also include a frame flag bit to avoid misaligned parsing of images belonging to adjacent frame display pictures: the frame flag bit identifies whether consecutively received first and second image frames belong to the same frame display picture, so that frame loss during transmission can be detected.
In some examples, to reduce the data volume of each image frame and thus further reduce the bandwidth required for 3D wireless transmission, the left eye image and the right eye image to be transmitted may first be compressed, and the marker information header then added at the preset positions of the compressed images. In this case, besides the image frame check information described above, the marker information header may include encoding/compression information so that the 3D display device 102 can decode the received image frames accordingly. The marker information header may also carry other information, for example the resolution of the image before compression, set according to the needs of the actual scene, which is not limited in this embodiment.
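Collecting the fields mentioned above, the marker information header might be modeled as follows; the field names, types, and layout are illustrative assumptions, since the description fixes no on-wire format:

```python
from dataclasses import dataclass

@dataclass
class MarkerHeader:
    """Illustrative marker information header for a transmitted
    image frame (not a normative wire format)."""
    parity_bit: int            # 1 = odd frame (left eye), 0 = even frame (right eye)
    frame_flag: int            # identifies the frame display picture a pair belongs to
    partition: tuple           # partition parameters of the gazing / non-gazing areas
    compression_ratio: int     # n-fold compression applied to non-gazing areas
    source_resolution: tuple   # resolution before compression, e.g. (3840, 2160)
```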
For example, when the images to be transmitted are left and right eye images rendered and spliced by high/low-definition partitions, the non-gazing area of each image can be compressed, and the compressed non-gazing area then spliced with the original gazing area to obtain the compressed left eye image and compressed right eye image. In this case, the encoding/compression information may include: the partition parameters corresponding to the gazing area and the non-gazing area, and the compression ratio of the non-gazing area.
In some examples, the left and right eye images may each include one gazing area and several non-gazing areas. A compression strategy for each non-gazing area can be set in advance according to its positional relationship with the gazing area, and in use each non-gazing area is compressed according to its strategy.
For example, as shown in fig. 2, the left eye image (L) and the right eye image (R) may each be divided into nine areas in a nine-grid layout. Before compression, area A is the gazing area, and the non-gazing areas include: areas B directly to the left and right of gazing area A, areas C directly above and below it, and areas D at its four corners (upper left, upper right, lower left, lower right). The gazing area A and the non-gazing areas B, C, and D may all be rectangular. The data volume of gazing area A is kept unchanged, areas B are compressed n-fold laterally, areas C are compressed n-fold longitudinally, and areas D are compressed n-fold in both directions, with the specific value of n determined by the actual scene.
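The nine-grid compression can be sketched in Python, representing each region as a list of pixel rows and using simple stride decimation as a stand-in for whatever downsampling filter an implementation might use (the region layout and the factor n follow the description; the decimation choice and all names are assumptions):

```python
def decimate(region, row_step, col_step):
    """Downsample a region (list of pixel rows) by keeping every
    row_step-th row and every col_step-th column."""
    return [row[::col_step] for row in region[::row_step]]

def compress_nine_grid(regions, n):
    """Compress the non-gazing areas of a nine-grid partition.

    regions maps region names to lists of pixel rows:
      'A'  - gazing area, kept at full resolution
      'B*' - areas left/right of A, compressed n-fold laterally
      'C*' - areas above/below A, compressed n-fold longitudinally
      'D*' - corner areas, compressed n-fold in both directions
    """
    out = {}
    for name, region in regions.items():
        kind = name[0]
        if kind == 'A':
            out[name] = region                  # unchanged data volume
        elif kind == 'B':
            out[name] = decimate(region, 1, n)  # lateral only
        elif kind == 'C':
            out[name] = decimate(region, n, 1)  # longitudinal only
        else:                                   # 'D' corners
            out[name] = decimate(region, n, n)
    return out
```

With 8x8 regions and n = 4, area A stays 8x8, B becomes 8x2, C becomes 2x8, and D becomes 2x2, mirroring the description above.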
After compression is completed, the uncompressed gazing area and the compressed non-gazing areas are spliced according to their original positional relation to obtain the compressed left eye image and the compressed right eye image. For example, in fig. 2, the area A1 represents the gazing area of the left eye image, and B1, C1, D1 represent the compressed areas corresponding to the respective non-gazing areas in the left eye image; the area A2 represents the gazing area of the right eye image, and B2, C2, D2 represent the compressed areas corresponding to the respective non-gazing areas in the right eye image.
It should be noted that the partition and the corresponding compression manner shown in fig. 2 are only examples, and the partition may be specifically performed according to actual needs, which is not limited in this embodiment. For example, in some examples, the regions immediately above, above left, and above right in fig. 2 may be divided into the same non-gazing region, and accordingly, the regions immediately below, below left, and below right in fig. 2 may be divided into the same non-gazing region.
Further, a marker information header is added to each of the compressed left eye image and the compressed right eye image. For example, a marker information header corresponding to the left eye image and a marker information header corresponding to the right eye image may be generated respectively, and each header may be spliced into the corresponding compressed image by rows or by columns. As shown in fig. 2, E1 represents the marker information header of the left eye image and E2 that of the right eye image; E1 is spliced to the start line of the compressed left eye image to obtain a first image frame, and E2 is spliced to the start line of the compressed right eye image to obtain a second image frame.
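Splicing the header by rows can be sketched as prepending one extra pixel row. The field layout below (parity bit, frame flag, partition parameters and compression ratio packed into the first bytes of the row) is an assumption for illustration only; the patent does not fix the header format.

```python
import numpy as np

def add_marker_header(img, parity, frame_id, gaze_rows, gaze_cols, n):
    """Prepend a one-row marker information header to a compressed image.

    parity    : 1 for an odd (left-eye) frame, 0 for an even (right-eye) frame
    frame_id  : identifies the display picture this frame belongs to (frame flag)
    gaze_rows, gaze_cols : partition parameters (gazing-area spans)
    n         : compression ratio of the non-gazing areas
    """
    header = np.zeros(img.shape[1], dtype=img.dtype)
    header[0] = parity
    header[1] = frame_id % 256   # frame flag bit, wrapped to one byte
    header[2:4] = gaze_rows      # partition parameters: row span of area A
    header[4:6] = gaze_cols      # partition parameters: column span of area A
    header[6] = n                # compression ratio
    return np.vstack([header[None, :], img])   # header becomes the start line
```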
For the 3D display device 102, after pairing succeeds, it may begin receiving the left and right eye image data transmitted by the target head-mounted display device. In some examples, the process of receiving the left eye image and the right eye image transmitted by the target head-mounted display device may include: receiving a current image frame transmitted by the target head-mounted display device, where the current image frame carries a marker information header and the marker information header includes image frame verification information; extracting the image frame verification information from the marker information header and identifying the view type corresponding to the current image frame based on it; if the view type is a left eye view, parsing the current image frame to obtain a left eye image, and if the view type is a right eye view, parsing the current image frame to obtain a right eye image.
For example, the target head-mounted display device transmits the left and right eye images separately through odd and even frames: if the first image frame corresponding to the left eye image is an odd frame, the second image frame corresponding to the right eye image is an even frame. The image frame verification information may include a parity bit; when the parity bit in the current image frame is identified as representing an odd frame, the view type corresponding to the current image frame is determined to be a left eye view, and when it is identified as representing an even frame, the view type is determined to be a right eye view.
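The parity check amounts to a one-line mapping. The function name is illustrative; the odd-left / even-right convention is the one assumed in the example above.

```python
def identify_view(parity_bit):
    """Map the parity bit in the marker information header to a view type:
    odd frames carry the left eye image, even frames the right eye image."""
    return "left" if parity_bit % 2 == 1 else "right"
```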
When the head-mounted display device alternately transmits the image frames corresponding to the left eye image and the right eye image in left-then-right order, the 3D display device first receives the image frame corresponding to the left eye view of the current frame display picture, i.e. the first image frame, and then receives the image frame corresponding to the right eye view of the current frame display picture, i.e. the second image frame. It then receives the first image frame and the second image frame of the next frame display picture in the same order, and so on, so that 3D display of each frame display picture is realized in sequence.
However, considering that frames may be lost during actual transmission due to network or other reasons, in order to avoid display errors caused by frame loss as much as possible, in some examples, besides identifying the view type, whether a frame was lost during transmission may also be detected based on the image frame verification information; if so, the image of the corresponding view type in the previous frame display picture is multiplexed, so as to avoid erroneous parsing of subsequent frames. For example, if the left eye image of the current frame display picture is lost, that is, the first image frame is lost, the left eye image of the previous frame display picture is multiplexed for display; if the right eye image of the current frame display picture is lost, that is, the second image frame is lost, the right eye image of the previous frame display picture is multiplexed for display, so as to avoid erroneous parsing of the next frame.
In some examples, the above-mentioned image frame verification information may further include a frame flag bit, and it may be identified by the frame flag bit whether the left eye image and the right eye image received before and after belong to the same frame display screen, so as to determine whether a frame loss phenomenon occurs in the transmission process.
Specifically, if the view type corresponding to the received current image frame is a left eye view, the device waits for the next image frame. If the view type corresponding to the current image frame is a right eye view and the view type corresponding to the previous image frame is a left eye view, whether the current image frame and the previous image frame belong to the same frame display picture is judged based on the frame flag bit. If not, it is judged that the right eye image of the current frame display picture and the left eye image of the next frame display picture are lost; the right eye image of the previous frame display picture is multiplexed as the right eye image of the current frame display picture, and the left eye image of the current frame display picture is multiplexed as the left eye image of the next frame display picture. Of course, if it is judged that the current image frame and the previous image frame belong to the same frame display picture, the current frame display picture has no frame loss problem, and the subsequent image parsing and 3D display process continues. It should be noted that the image data are transmitted sequentially frame by frame; here, the "next image frame" is the image frame received immediately after the current image frame, and the "previous image frame" is the image frame received immediately before it.
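The case above (a right-eye frame arriving after a pending left-eye frame) can be sketched as a small decision function. The tuple layout and names are illustrative assumptions; frames are reduced to (frame_flag, image) pairs.

```python
def on_right_after_left(left, right, prev_pair):
    """Complete display pictures when a right-eye frame follows a left-eye frame.

    left      : (frame_flag, image) of the pending left-eye frame
    right     : (frame_flag, image) of the arriving right-eye frame
    prev_pair : (left_img, right_img) of the previous display picture
    Returns a list of completed (left_img, right_img) display pictures.
    """
    lf, limg = left
    rf, rimg = right
    if lf == rf:
        # same frame flag: both eyes of the same display picture, no frame loss
        return [(limg, rimg)]
    # flags differ: the current picture's right eye and the next picture's
    # left eye were lost, so multiplex as described in the text
    current = (limg, prev_pair[1])   # reuse previous picture's right eye
    nxt = (limg, rimg)               # reuse current left eye for the next picture
    return [current, nxt]
```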
Of course, besides the above-mentioned situations, other frame loss situations may exist, and handling may be configured according to actual needs. For example, if the view type corresponding to the current image frame is a left eye view and the view type corresponding to the next image frame is still a left eye view, it is judged that the right eye image of the current frame display picture is lost, and the right eye image of the previous frame display picture needs to be multiplexed as the right eye image of the current frame display picture. If the view type corresponding to the current image frame is a right eye view and no previous image frame is stored, that is, if the current image frame is the first image frame received after the 3D display device is started and successfully paired with the target head-mounted display device, the left eye image of the first frame display picture has been lost; in this case, the current image frame is discarded, the first frame display picture is not displayed, and the next image frame continues to be received and parsed.
In some examples, if the left and right eye images transmitted from the head-mounted display device 101 are images subjected to encoding compression processing, decoding processing is required for the images corresponding to the respective view types when the image frames are analyzed.
For example, corresponding to the partitioning and compression described above, the received image frame includes a gazing area and a compressed non-gazing area. Accordingly, the marker information header further includes: the partition parameters corresponding to the gazing area and the non-gazing area, and the compression ratio of the non-gazing area.
At this time, the above-mentioned process of parsing the image frame may include: performing pixel expansion on the compressed non-gazing area in the image frame based on the partition parameters and the compression ratio, and removing the marker information header from the image frame, so as to avoid image data disorder during rendering.
Taking the image frame corresponding to the left eye image as an example, the positions of the gazing area and the compressed non-gazing areas in the image frame may be determined based on the partition parameters, and pixel expansion is then performed on the compressed non-gazing areas, such as the B1 areas, the C1 areas, and the D1 areas shown in fig. 3. Assuming that the compression ratio of each non-gazing area is n, the pixels of the B1 areas are laterally expanded by a factor of n to generate the B' areas; the pixels of the C1 areas are longitudinally expanded by a factor of n to generate the C' areas; and the pixels of the D1 areas are expanded by a factor of n both laterally and longitudinally to generate the D' areas. The images of the B' areas, the C' areas, the D' areas and the gazing area A1 are then spliced according to their original relative positions, and the marker information header is removed after splicing, thereby completing the parsing of the image frame and obtaining the left eye image.
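The decoding side can be sketched as the inverse of the compression step. Assumptions, as before: a one-row marker header as the start line, nearest-neighbour repetition standing in for the unspecified pixel-expansion method, and margins divisible by n so the expanded blocks tile exactly.

```python
import numpy as np

def expand_foveated(frame, gaze_rows, gaze_cols, n):
    """Parse a compressed image frame back to full resolution.

    frame     : header row followed by the spliced compressed image
    gaze_rows : (r0, r1) row span of area A in the ORIGINAL image
    gaze_cols : (c0, c1) column span of area A
    n         : compression ratio of the non-gazing areas (assumed to
                divide the margins exactly)
    """
    img = frame[1:, :]                 # remove the marker information header
    r0, r1 = gaze_rows
    c0, c1 = gaze_cols
    tr, mr = r0 // n, r1 - r0          # band heights in the compressed image
    lc, mc = c0 // n, c1 - c0          # band widths in the compressed image
    row_cuts = [0, tr, tr + mr, img.shape[0]]
    col_cuts = [0, lc, lc + mc, img.shape[1]]
    bands = []
    for i in range(3):
        row_blocks = []
        for j in range(3):
            block = img[row_cuts[i]:row_cuts[i + 1], col_cuts[j]:col_cuts[j + 1]]
            if i != 1:
                block = np.repeat(block, n, axis=0)   # longitudinal expansion
            if j != 1:
                block = np.repeat(block, n, axis=1)   # lateral expansion
            row_blocks.append(block)
        bands.append(np.hstack(row_blocks))
    # splice B', C', D' and the gazing area by original relative positions
    return np.vstack(bands)
```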
Similarly, for the image frame corresponding to the right eye image, the compressed non-gazing areas B2, C2 and D2 are pixel-expanded according to the partition parameters and the compression ratio in the marker information header and spliced with the gazing area A2, and the marker information header is removed, thereby completing the parsing of the image frame and obtaining the right eye image.
After the analysis of the left-eye image and the right-eye image is completed, the left-eye image and the right-eye image can be subjected to 3D splicing, so that the display of a 3D picture is realized. The specific process is determined by the type of 3D display screen employed by the 3D display device 102.
For example, when the 3D display device 102 employs a raster display screen, the left eye image and the right eye image may be arranged at intervals pixel by pixel in the longitudinal direction, so as to generate a 3D raster mosaic, and then the positions where the gratings are turned on are controlled by eye tracking, so as to finally achieve a 3D effect.
For example, the 3D display device 102 further includes a 3D screen controller and a video processing chip, where after the 3D screen controller is started, the status of the raster is initialized, and the raster is set to be all on. At this time, the system interface is directly displayed on the 3D display screen, and the viewer sees a common 2D display screen. When the 3D screen controller receives the left eye image and the right eye image transmitted by the head-mounted display device 101, the left eye image and the right eye image for 3D display are transmitted to the video processing chip, the chip generates a 3D composite image according to the raster algorithm, and finally the composite image is rendered on the 3D display screen.
Further, the 3D display device 102 may further include an eye tracking module, and the 3D screen controller may turn on the eye tracking module after receiving the left eye image and the right eye image transmitted by the head-mounted display device 101. The eye tracking module captures the viewer's gazing coordinates, calculates the raster switch coordinates through a gazing-point raster algorithm, and sends a raster switch instruction to the 3D display screen. The 3D display screen executes the raster switch instruction while rendering the 3D composite image, and finally the viewer sees, on the 3D display screen, the 3D content shown in the player's head-mounted display device 101.
When the 3D display device 102 adopts a 3D lenticular display screen, for example a multi-view lenticular 3D display screen, the head-mounted display device 101 transmits left and right eye images of only a single view point. To adapt to the display of the multi-view lenticular 3D display screen, the received left eye image and right eye image can therefore be multiplexed multiple times to generate a multi-view mosaic, and a 3D picture is then displayed based on the multi-view mosaic.
For example, after the 3D screen controller in the 3D display device 102 is started, the video processing chip performs multiplexing and splicing processing on the system interface according to the physical parameters of the lenticular lens and displays the result on the 3D display screen; at this time, the viewer sees an ordinary 2D system display picture. After the 3D screen controller receives the left and right eye images transmitted by the head-mounted display device 101, it transmits them to the video processing chip, which executes the corresponding splicing algorithm according to the physical parameters of the lenticular lens. For a 3D display device 102 with a multi-view lenticular 3D display screen, assuming the screen uses an n-view design, the video processing chip multiplexes the received left and right eye images n times, performs splicing and compositing according to the algorithm corresponding to the screen parameters, and generates a multi-view mosaic according to the 3D lenticular algorithm. The spliced and composited image is then displayed on the 3D display screen to realize the 3D display effect. It should be noted that, since the lenticular lens imaging principle differs from that of a raster screen, the eye tracking function is not required, and the viewer can likewise see on the display screen the 3D content shown in the player's head-mounted display device 101.
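The "multiplex the pair n times, then stitch" idea can be sketched as below. Real lenticular interleaving depends on lens pitch, slant and subpixel layout; here each output column simply cycles through n views that alternate between copies of the left and right images, purely for illustration, and all names are assumptions.

```python
import numpy as np

def multiview_mosaic(left, right, n_views):
    """Multiplex a single-view stereo pair into an n-view column-interleaved mosaic.

    left, right : H x W arrays (grayscale for simplicity)
    n_views     : number of views in the lenticular screen design
    """
    h, w = left.shape
    # multiplex the received pair: view v reuses left for even v, right for odd v
    views = [left if v % 2 == 0 else right for v in range(n_views)]
    mosaic = np.empty_like(left)
    for col in range(w):
        # stitch: each column is taken from the view the lens maps it to
        mosaic[:, col] = views[col % n_views][:, col]
    return mosaic
```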
In a second aspect, the embodiments of the present disclosure also provide an image display method applied to the head-mounted display device 101 in the above-described image display system 10. As shown in fig. 4, the method may include the following steps S401 to S403.
Step S401, a left eye image and a right eye image corresponding to the current frame display screen are acquired.
Step S402, 3D display is performed based on the left-eye image and the right-eye image.
Step S403, transmitting the left-eye image and the right-eye image to the 3D display device, so that the 3D display device performs 3D display based on the received left-eye image and right-eye image.
It should be noted that, the specific implementation process of step S401 to step S403 may be referred to the related description in the embodiment of the first aspect, and will not be repeated here.
In some examples, transmitting the left eye image and the right eye image to the 3D display device includes: respectively adding mark information heads at preset positions of the left eye image and the right eye image to obtain a first image frame corresponding to the left eye image and a second image frame corresponding to the right eye image, wherein the mark information heads comprise image frame check information, and the image frame check information is used for marking the transmission sequence of the image frames; and sequentially transmitting the first image frame and the second image frame to the 3D display device according to a preset sequence. The specific implementation process may be referred to the related description in the foregoing embodiments of the first aspect, which is not repeated herein.
In some examples, the left-eye image and the right-eye image include a gazing region and a non-gazing region, the gazing region having a resolution greater than a resolution of the non-gazing region, and the adding of the marker information header at the preset positions of the left-eye image and the right-eye image, respectively, includes: respectively compressing non-gazing areas in the left eye image and the right eye image; splicing the gazing area and the compressed non-gazing area to obtain a compressed left eye image and a compressed right eye image; adding a mark information head at preset positions of the compressed left eye image and the compressed right eye image respectively, wherein the mark information head further comprises: partition parameters corresponding to the gazing area and the non-gazing area and compression ratio of the non-gazing area. The specific implementation process may be referred to the related description in the foregoing embodiments of the first aspect, which is not repeated herein.
In a third aspect, the embodiments of the present disclosure also provide an image display method applied to the 3D display device 102 in the image display system 10 described above. As shown in fig. 5, the method may include the following steps S501 to S502.
In step S501, a left-eye image and a right-eye image transmitted by the target head-mounted display device are received, where the left-eye image and the right-eye image correspond to a 3D picture displayed by the target head-mounted display device.
Step S502 displays a 3D picture based on the received left-eye image and right-eye image.
It should be noted that, the specific implementation process of step S501 to step S502 may be referred to the related description in the embodiment of the first aspect, and will not be repeated here.
In some examples, prior to receiving the left eye image and the right eye image transmitted by the target head mounted display device, further comprising: displaying a list of available devices, the list of available devices including at least one head mounted display device; and determining a target head-mounted display device from the available device list, and performing pairing connection with the determined target head-mounted display device. The specific implementation process may be referred to the related description in the foregoing embodiments of the first aspect, which is not repeated herein.
In some examples, determining the target head mounted display device from the list of available devices includes: taking the head-mounted display device which is paired with the 3D display device for the last time in the available device list as a target head-mounted display device; alternatively, in response to a user operation, a target head mounted display device is selected from the list of available devices. The specific implementation process may be referred to the related description in the foregoing embodiments of the first aspect, which is not repeated herein.
In some examples, the process of receiving the left eye image and the right eye image transmitted by the target head mounted display device may include: receiving a current image frame transmitted by target head-mounted display equipment, wherein the current image frame carries a mark information head, and the mark information head comprises image frame verification information; and identifying the view type corresponding to the current image frame based on the image frame verification information, if the view type is a left eye view, analyzing the current image frame to obtain a left eye image, and if the view type is a right eye view, analyzing the current image frame to obtain a right eye image. The specific implementation process may be referred to the related description in the foregoing embodiments of the first aspect, which is not repeated herein.
In some examples, the image frame check information includes parity bits. The above-mentioned process of identifying the view type corresponding to the current image frame based on the image frame check information may include: if the parity bit indicates an odd frame, the view type corresponding to the current image frame is determined to be a left eye view, and if the parity bit indicates an even frame, the view type corresponding to the current image frame is determined to be a right eye view. The specific implementation process may be referred to the related description in the foregoing embodiments of the first aspect, which is not repeated herein.
Further, the process of receiving the left eye image and the right eye image transmitted by the target head-mounted display device may further include: and detecting whether a frame is lost or not in the transmission process based on the image frame verification information, and multiplexing the image of the corresponding view type in the previous frame display picture if the frame is lost. The specific implementation process may be referred to the related description in the foregoing embodiments of the first aspect, which is not repeated herein.
In some examples, the image frame verification information includes: the frame flag bit. The head-mounted display device alternately transmits image frames corresponding to left and right eye images in the order of left eye and right eye. The above process of detecting whether a frame is lost in the transmission process based on the image frame verification information, if yes, multiplexing an image of a corresponding view type in a previous frame display picture may include: if the view type corresponding to the current image frame is a left eye view, waiting for the next image frame; if the view type corresponding to the current image frame is a right eye view and the view type corresponding to the previous image frame is a left eye view, judging whether the current image frame and the previous image frame belong to the same frame display picture or not based on the frame mark bit, if not, judging that the right eye image of the current frame display picture and the left eye image of the next frame display picture are lost, multiplexing the right eye image of the previous frame display picture as the right eye image of the current frame display picture, and multiplexing the left eye image of the current frame display picture as the left eye image of the next frame display picture. The specific implementation process may be referred to the related description in the foregoing embodiments of the first aspect, which is not repeated herein.
In some examples, the received image frame includes a gazing area and a compressed non-gazing area, the resolution of the gazing area being greater than that of the non-gazing area, and the marker information header further includes: partition parameters corresponding to the gazing area and the non-gazing area, and the compression ratio of the non-gazing area. The above-mentioned parsing of the current image frame may include: performing pixel expansion on the compressed non-gazing area in the current image frame based on the partition parameters and the compression ratio, and removing the marker information header from the current image frame. The specific implementation process may be referred to the related description in the foregoing embodiments of the first aspect, which is not repeated herein.
In some examples, the display screen of the 3D display device is a 3D lenticular display screen. The above-described process of displaying a 3D picture based on the received left-eye image and right-eye image may include: multiplexing the received left eye image and right eye image for multiple times to generate a multi-view mosaic; and displaying the 3D picture based on the multi-view mosaic. The specific implementation process may be referred to the related description in the foregoing embodiments of the first aspect, which is not repeated herein.
In a fourth aspect, embodiments of the present disclosure further provide a head-mounted display device, including: a processor, a memory, and a computer program stored on the memory and executable on the processor. When executed by the processor, the computer program implements the steps of the image display method described in the second aspect above. In addition, to implement the wireless data transmission function, the head-mounted display device may further include a first wireless communication module; for example, the first wireless communication module may include, but is not limited to, a Bluetooth module and/or a WIFI module. The head-mounted display device may transmit device information and left and right eye images to the 3D display device through the first wireless communication module.
For example, the head-mounted display device may be a head-mounted device with a VR function, such as a VR helmet or VR glasses, or another head-mounted device with a 3D display function, which is not limited in this embodiment.
In a fifth aspect, embodiments of the present disclosure further provide a 3D display device, including: a 3D display screen, a processor, a memory, and a computer program stored on the memory and executable on the processor. When executed by the processor, the computer program implements the steps of the image display method described in the third aspect above.
For example, fig. 6 shows a schematic structural diagram of an exemplary 3D display device. As shown in fig. 6, the 3D display device 60 may include: a 3D display screen 603, a 3D screen controller 601, a video processing chip 602, a second wireless communication module 604, and a memory 605.
The 3D screen controller 601 is connected to the memory 605, the video processing chip 602, the second wireless communication module 604, the power management chip 606, and the function keys 607, and is responsible for system logic and software operations. For example, the 3D screen controller 601 may employ a microprocessor (Microcontroller Unit, MCU).
The second wireless communication module 604 is configured corresponding to the first wireless communication module in the head-mounted display device and is used to implement data interaction with the head-mounted display device; for example, it may include, but is not limited to, a Bluetooth module and/or a WIFI module. Specifically, the second wireless communication module 604 may receive the image frames transmitted from the head-mounted display device and transmit the received image frames to the video processing chip 602 for graphics processing. In addition, the second wireless communication module 604 may also be used to transmit currently available head-mounted display device information to the 3D screen controller 601, and the 3D screen controller 601 saves the device information to the memory 605 so as to present a list of available devices to the user.
The memory 605 is responsible for storing a plurality of matched head mounted display device information, device configuration information, etc. data for a long time, while the memory 605 is a carrier of system and software, storing installation packages and a software link library. For example, memory 605 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
The video processing chip 602 is configured to perform decoding processing on the received image frames and transmit the decoded left eye image and right eye image to the 3D display screen 603. For example, the video processing chip 602 may employ a graphics processor (Graphics Processing Unit, GPU).
The 3D display screen 603 is used for 3D displaying the received left eye image and right eye image, and presenting a 3D image effect to the user.
Of course, the 3D display device 60 may include other structures besides the above-described structure, such as the power management chip 606, the function keys 607, and a housing (not shown in the drawings), which is not limited in this embodiment. The power management chip 606 and the function keys 607 are connected to the 3D screen controller 601. Wherein the power management chip 606 is used to manage device power. The function key 607 is used to provide a key response event to the 3D screen controller 601, and the 3D screen controller 601 responds to the user's operation after processing.
In a sixth aspect, embodiments of the present disclosure further provide a computer-readable storage medium storing computer instructions which, when run on a computer, cause the computer to execute the respective processes of the image display method provided in the second aspect or the third aspect and achieve the same technical effects; to avoid repetition, details are not described herein again. The computer-readable storage medium may be, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It will be apparent to those skilled in the art that embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, magnetic disk storage, optical storage, etc.) having computer-usable program code embodied therein.
The present description is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus, and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
While preferred embodiments of the present description have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the disclosure.

Claims (16)

1. An image display method, applied to a head-mounted display device, comprising:
acquiring a left eye image and a right eye image corresponding to a current frame display picture;
performing 3D display based on the left eye image and the right eye image; and
transmitting the left eye image and the right eye image to a 3D display device, so that the 3D display device performs 3D display based on the received left eye image and right eye image.
2. The method of claim 1, wherein transmitting the left eye image and the right eye image to a 3D display device comprises:
adding a marker information header at a preset position of each of the left eye image and the right eye image to obtain a first image frame corresponding to the left eye image and a second image frame corresponding to the right eye image, wherein the marker information header comprises image frame check information used to mark the transmission order of the image frames; and
transmitting the first image frame and the second image frame to the 3D display device sequentially in a preset order.
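To make the framing step of claim 2 concrete, here is a minimal Python sketch. The header layout (a 4-byte magic plus a big-endian 32-bit frame counter whose parity doubles as the check information: odd = left eye, even = right eye, matching claim 8) is purely illustrative; the patent does not fix any byte format.

```python
import struct

# Illustrative marker information header: 4-byte magic + 32-bit counter.
# The counter is the "image frame check information"; its parity marks
# the transmission order (odd counter = left-eye, even = right-eye frame).
MAGIC = b"MKHD"  # hypothetical magic value, not from the patent

def add_marker_header(image_bytes: bytes, frame_counter: int) -> bytes:
    """Prepend the marker information header at a preset position
    (here: the very start of the image frame)."""
    return struct.pack(">4sI", MAGIC, frame_counter) + image_bytes

def transmit_pair(left_img: bytes, right_img: bytes, counter: int) -> list:
    """Build the first (left-eye) and second (right-eye) image frames
    and return them in the preset order: left eye before right eye."""
    assert counter % 2 == 1, "left-eye frames use odd counters in this sketch"
    return [add_marker_header(left_img, counter),
            add_marker_header(right_img, counter + 1)]
```

A real implementation would carry more fields (claim 3 adds partition parameters and a compression ratio to the same header); the counter alone is enough to show the ordering scheme.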
3. The method according to claim 2, wherein the left eye image and the right eye image each include a gaze region and a non-gaze region, the resolution of the gaze region being greater than that of the non-gaze region, and adding a marker information header at a preset position of each of the left eye image and the right eye image comprises:
compressing the non-gaze region in each of the left eye image and the right eye image;
splicing the gaze region and the compressed non-gaze region to obtain a compressed left eye image and a compressed right eye image; and
adding the marker information header at a preset position of each of the compressed left eye image and the compressed right eye image, the marker information header further comprising: partition parameters corresponding to the gaze region and the non-gaze region, and the compression ratio of the non-gaze region.
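As a rough illustration of the compress-and-splice step in claim 3, the sketch below decimates the non-gaze rows of an image (represented as a list of pixel rows) and splices them back around the untouched gaze region. The horizontal-band partition and nearest-neighbour decimation are assumptions; the claim does not specify the partition layout or the codec.

```python
def compress_frame(image, gaze_top, gaze_bottom, ratio):
    """Foveated compression sketch: rows [gaze_top, gaze_bottom) form the
    full-resolution gaze region; all other rows (the non-gaze region) are
    decimated by `ratio` in both directions, then spliced back together."""
    def downsample(rows):
        # Keep every `ratio`-th row, and every `ratio`-th pixel per row.
        return [row[::ratio] for row in rows[::ratio]]

    top = downsample(image[:gaze_top])                      # compressed band
    gaze = [row[:] for row in image[gaze_top:gaze_bottom]]  # untouched band
    bottom = downsample(image[gaze_bottom:])                # compressed band
    return top + gaze + bottom
```

The partition parameters and `ratio` are exactly what the marker information header must carry so the receiver can undo the operation.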
4. An image display method, characterized by being applied to a 3D display device, the method comprising:
receiving a left eye image and a right eye image transmitted by a target head-mounted display device, wherein the left eye image and the right eye image correspond to a 3D picture displayed by the target head-mounted display device;
and displaying a 3D picture based on the received left eye image and right eye image.
5. The method of claim 4, further comprising, prior to receiving the left eye image and the right eye image transmitted by the target head mounted display device:
presenting a list of available devices, the list of available devices including at least one head mounted display device;
and determining a target head-mounted display device from the available device list, and carrying out pairing connection with the determined target head-mounted display device.
6. The method of claim 5, wherein determining a target head mounted display device from the list of available devices comprises:
taking the head-mounted display device in the available device list that was most recently paired with the 3D display device as the target head-mounted display device; or
selecting a target head-mounted display device from the available device list in response to a user operation.
7. The method of claim 4, wherein receiving the left eye image and the right eye image transmitted by the target head mounted display device comprises:
receiving a current image frame transmitted by the target head-mounted display device, wherein the current image frame carries a marker information header comprising image frame check information; and
identifying the view type corresponding to the current image frame based on the image frame check information; if the view type is a left eye view, parsing the current image frame to obtain the left eye image, and if the view type is a right eye view, parsing the current image frame to obtain the right eye image.
8. The method of claim 7, wherein the image frame check information includes a parity bit, and identifying the view type corresponding to the current image frame based on the image frame check information comprises:
if the parity bit indicates an odd frame, determining that the view type corresponding to the current image frame is a left eye view, and if the parity bit indicates an even frame, determining that the view type corresponding to the current image frame is a right eye view.
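The parity check of claim 8 can be sketched as a receive-side routine, assuming the check information is a frame counter in the header (an assumed layout: 4-byte magic plus a big-endian 32-bit counter; odd = left eye view, even = right eye view):

```python
import struct

def identify_view(frame_bytes: bytes) -> str:
    """Read the frame counter from the (assumed) marker header and map
    its parity to a view type: odd frame -> left eye, even -> right eye."""
    magic, counter = struct.unpack(">4sI", frame_bytes[:8])
    if magic != b"MKHD":  # hypothetical magic value
        raise ValueError("missing marker information header")
    return "left" if counter % 2 == 1 else "right"
```

Because the sender alternates views in a fixed order, one parity bit is enough to recover the view type with no per-frame negotiation.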
9. The method of claim 7, wherein receiving the left eye image and the right eye image transmitted by the target head mounted display device further comprises:
and detecting whether a frame is lost or not in the transmission process based on the image frame verification information, and multiplexing an image of a corresponding view type in a previous frame display picture if the frame is lost.
10. The method of claim 9, wherein the image frame check information comprises a frame mark bit, and the head-mounted display device alternately transmits the image frames corresponding to the left eye image and the right eye image in the order of left eye before right eye; detecting, based on the image frame check information, whether a frame is lost during transmission and, if so, multiplexing the image of the corresponding view type in the previous frame display picture comprises:
if the view type corresponding to the current image frame is a left eye view, waiting for the next image frame; and
if the view type corresponding to the current image frame is a right eye view and the view type corresponding to the previous image frame is a left eye view, judging, based on the frame mark bit, whether the current image frame and the previous image frame belong to the same frame display picture; if not, judging that the right eye image of the current frame display picture and the left eye image of the next frame display picture are lost, multiplexing the right eye image of the previous frame display picture as the right eye image of the current frame display picture, and multiplexing the left eye image of the current frame display picture as the left eye image of the next frame display picture.
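The recovery rule in claim 10 is easier to follow as code. In this sketch, a left-eye frame and the right-eye frame that follows it each carry a frame mark (a display-picture id); when the marks disagree, the right image of the earlier picture and the left image of the later one were dropped, and previously shown images are reused in their place. The tuple and dict shapes are illustrative only.

```python
def resolve_right_frame(prev_left, cur_right, last_shown):
    """Decide what to display when a right-eye frame arrives.

    prev_left / cur_right are (frame_mark, image) pairs; last_shown maps
    'left'/'right' to the most recently displayed images. Returns a list
    of (frame_mark, left_image, right_image) display pictures.
    """
    left_mark, left_img = prev_left
    right_mark, right_img = cur_right
    if left_mark == right_mark:
        return [(left_mark, left_img, right_img)]   # intact pair
    # Marks differ: the right eye of picture `left_mark` and the left eye
    # of picture `right_mark` were lost in transit. Reuse the previous
    # picture's right image and the current picture's left image.
    return [(left_mark, left_img, last_shown["right"]),
            (right_mark, left_img, right_img)]
```

The point of the rule is that a single dropped pair never blanks the display: every display picture still gets a plausible left/right pair at the cost of one frame of staleness per eye.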
11. The method of claim 7, wherein the current image frame includes a gaze region and a compressed non-gaze region, the resolution of the gaze region being greater than that of the non-gaze region, and the marker information header further comprises: partition parameters corresponding to the gaze region and the non-gaze region, and the compression ratio of the non-gaze region; parsing the current image frame comprises:
performing pixel expansion on the compressed non-gaze region in the current image frame based on the partition parameters and the compression ratio, and removing the marker information header from the current image frame.
12. The method of claim 4, wherein the display screen of the 3D display device is a 3D lenticular display screen, and displaying the 3D picture based on the received left eye image and right eye image comprises:
multiplexing the received left eye image and right eye image multiple times to generate a multi-view mosaic; and
displaying the 3D picture based on the multi-view mosaic.
13. An image display system, comprising: head-mounted display device and 3D display device, wherein:
the head-mounted display device is used for acquiring a left eye image and a right eye image corresponding to a current frame display picture, performing 3D display based on the left eye image and the right eye image, and transmitting the left eye image and the right eye image to the 3D display device;
the 3D display device is configured to perform 3D display based on the received left-eye image and right-eye image.
14. A head-mounted display device, comprising: a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the image display method according to any one of claims 1-3.
15. A 3D display device, comprising: a 3D display screen, a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the image display method according to any one of claims 4-12.
16. A computer readable storage medium storing computer instructions which, when run on a computer, cause the computer to perform the steps of the image display method according to any one of claims 1 to 12.
CN202310047408.3A 2023-01-31 2023-01-31 Image display method, system, device and storage medium Pending CN116996658A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310047408.3A CN116996658A (en) 2023-01-31 2023-01-31 Image display method, system, device and storage medium


Publications (1)

Publication Number Publication Date
CN116996658A true CN116996658A (en) 2023-11-03

Family

ID=88529037

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310047408.3A Pending CN116996658A (en) 2023-01-31 2023-01-31 Image display method, system, device and storage medium

Country Status (1)

Country Link
CN (1) CN116996658A (en)

Similar Documents

Publication Publication Date Title
US9083963B2 (en) Method and device for the creation of pseudo-holographic images
CN102246529B (en) Image based 3D video format
US8827150B2 (en) 3-D matrix barcode presentation
US20150341614A1 (en) Stereoscopic video encoding device, stereoscopic video decoding device, stereoscopic video encoding method, stereoscopic video decoding method, stereoscopic video encoding program, and stereoscopic video decoding program
US8488869B2 (en) Image processing method and apparatus
US20140376635A1 (en) Stereo scopic video coding device, steroscopic video decoding device, stereoscopic video coding method, stereoscopic video decoding method, stereoscopic video coding program, and stereoscopic video decoding program
US9578305B2 (en) Digital receiver and method for processing caption data in the digital receiver
EP2477410B1 (en) Moving stereo picture encoding method and apparatus, moving stereo picture decoding method and apparatus
US20110157315A1 (en) Interpolation of three-dimensional video content
CN101822049B (en) Apparatus and method for providing stereoscopic three-dimensional image/video contents on terminal based on lightweight application scene representation
JP2003111101A (en) Method, apparatus and system for processing stereoscopic image
US20110292175A1 (en) Broadcast receiver and 3d subtitle data processing method thereof
TWI651960B (en) Method and encoder/decoder of encoding/decoding a video data signal and related video data signal, video data carrier and computer program product
CN102067611B (en) System and method for marking a stereoscopic film
KR100576544B1 (en) Apparatus and Method for Processing of 3D Video using MPEG-4 Object Descriptor Information
KR101314601B1 (en) apparatus for transmitting contents, apparatus for outputting contents, method for transmitting contents and method for outputting contents
US9247240B2 (en) Three-dimensional glasses, three-dimensional image display apparatus, and method for driving the three-dimensional glasses and the three-dimensional image display apparatus
US9106894B1 (en) Detection of 3-D videos
US20120120056A1 (en) Computer, monitor, recording medium, and method for providing 3d image thereof
JP2013516117A (en) 3D video display system with multi-stream transmission / reception operation
KR101228916B1 (en) Apparatus and method for displaying stereoscopic 3 dimensional image in multi vision
CN116996658A (en) Image display method, system, device and storage medium
US20140307051A1 (en) Broadcast receiver and 3d subtitle data processing method thereof
KR100842568B1 (en) Apparatus and method for making compressed image data and apparatus and method for output compressed image data
KR101347744B1 (en) Image processing apparatus and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination