CN114422819A - Video display method, device, equipment, system and medium - Google Patents
- Publication number
- CN114422819A (application number CN202210089820.7A)
- Authority
- CN
- China
- Prior art keywords
- target
- subsystem
- relative position
- candidate
- determining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/21—Server components or server architectures
- H04N21/218—Source of audio or video content, e.g. local disk arrays
- H04N21/2187—Live feed
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/398—Synchronisation thereof; Control thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/4312—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/442—Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
- H04N21/44213—Monitoring of end-user related data
Abstract
The embodiments of the invention disclose a video display method, apparatus, device, system, and medium. The method is executed by a first subsystem and includes the following steps: receiving a target relative position sent by a second subsystem, and determining a target binocular video stream according to the target relative position, wherein the target relative position is determined by a user position determining device in the second subsystem; and sending the target binocular video stream to the second subsystem, so that the second subsystem displays the target binocular video stream through a 3D display device. By executing this scheme, the stereoscopic viewing experience of live viewers can be guaranteed while bandwidth resources are saved, the storage and processing resources of the local system can be saved, and processing efficiency can be improved.
Description
Technical Field
Embodiments of the present invention relate to the field of 3D display technologies, and in particular, to a video display method, apparatus, device, system, and medium.
Background
A 3D display device exploits the principle that a person's two eyes observe an object from slightly different angles, which lets the viewer judge distance and perceive depth. By separating the images seen by the left eye and the right eye, the device lets the user experience a stereoscopic effect either with stereoscopic glasses or without them (i.e., naked-eye 3D). A 3D live-broadcast display service can be implemented with such a display device.
In related-art implementations of 3D live-broadcast display, the multiple channels of video shot by an image collector array at the live site must all be transmitted to the user's host system. The host system then selects the two corresponding channels from the locally stored video streams for 3D playback according to a human-eye tracking result, and the local device must store all of the received video streams. This increases the bandwidth requirement and occupies the local system's storage and processing resources.
Disclosure of Invention
Embodiments of the present invention provide a video display method, apparatus, device, system, and medium, which can save bandwidth resources while guaranteeing the stereoscopic viewing experience of live viewers, save the storage and processing resources of the local system, and improve processing efficiency.
In a first aspect, an embodiment of the present invention provides a video display method, where the method is applied to a first subsystem, and includes: receiving a target relative position sent by a second subsystem, and determining a target binocular video stream according to the target relative position; wherein the target relative position is determined by a user position determination device in the second subsystem;
and sending the target binocular video stream to the second subsystem so that the second subsystem displays the target binocular video stream through 3D display equipment.
In a second aspect, an embodiment of the present invention further provides a video display method, which is performed by a second subsystem, and the method includes: determining a target relative position of a user relative to a screen through user position determining equipment, and sending the target relative position to a first subsystem, so that the first subsystem determines a target binocular video stream according to the target relative position;
and receiving the target binocular video stream sent by the first subsystem, and displaying the target binocular video stream through 3D display equipment.
In a third aspect, an embodiment of the present invention further provides a video display apparatus, configured in a first subsystem, where the apparatus includes: the target binocular video stream determining module is used for receiving the target relative position sent by the second subsystem and determining a target binocular video stream according to the target relative position; wherein the target relative position is determined by a user position determination device in the second subsystem;
and the display module is used for sending the target binocular video stream to the second subsystem so that the second subsystem displays the target binocular video stream through 3D display equipment.
In a fourth aspect, an embodiment of the present invention further provides a video display apparatus, configured in the second subsystem, where the apparatus includes:
the target relative position determining module is used for determining a target relative position of a user relative to a screen through user position determining equipment and sending the target relative position to the first subsystem so that the first subsystem determines a target binocular video stream according to the target relative position;
and the display module is used for receiving the target binocular video stream sent by the first subsystem and displaying the target binocular video stream through 3D display equipment.
In a fifth aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
one or more processors;
- a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the video display method performed by a first subsystem according to any embodiment of the present invention, or the video display method performed by a second subsystem according to any embodiment of the present invention.
In a sixth aspect, an embodiment of the present invention further provides a video display system, where the system includes:
the system comprises a first subsystem and a second subsystem, wherein the first subsystem comprises a server and an image collector, and the second subsystem comprises user position determining equipment and 3D display equipment; the user position determining equipment determines a target relative position of a user relative to a screen and sends the target relative position to the server;
the server determines a target binocular video stream collected by the image collector according to the target relative position and sends the target binocular video stream to the 3D display device;
and the 3D display equipment performs 3D display on the target binocular video stream.
In a seventh aspect, the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the video display method performed by the first subsystem according to any one of the embodiments of the present invention or the video display method performed by the second subsystem according to any one of the embodiments of the present invention.
According to the technical scheme provided by the embodiments of the present invention, when executed by the first subsystem, the target relative position sent by the second subsystem is received, and the target binocular video stream is determined according to the target relative position, wherein the target relative position is determined by a user position determining device in the second subsystem; the target binocular video stream is then sent to the second subsystem so that the second subsystem displays it through the 3D display device. By executing this technical scheme, the stereoscopic viewing experience of live viewers can be guaranteed while bandwidth resources are saved, the storage and processing resources of the local system can be saved, and processing efficiency can be improved.
Drawings
Fig. 1 is a flowchart of a video display method performed by a first subsystem according to an embodiment of the present invention;
FIG. 2 is a flow chart of another video display method performed by the first subsystem according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a small-scale scene with only one viewing zone according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a large-scale scene with multiple viewing zones provided by an embodiment of the present invention;
FIG. 5 is a flow chart of a video display method performed by the second subsystem according to an embodiment of the present invention;
FIG. 6 is a flow chart of another video display method performed by the second subsystem according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a video display apparatus configured in a first subsystem according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a video display apparatus configured in a second subsystem according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of a video display system according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Fig. 1 is a flowchart of a video display method executed by a first subsystem according to an embodiment of the present invention, where the method may be executed by a video display apparatus, the apparatus may be implemented by software and/or hardware, and the apparatus may be configured in an electronic device for video display. The method is applied to a scene of video display. As shown in fig. 1, the technical solution provided by the embodiment of the present invention specifically includes:
and S110, receiving the relative position of the target sent by the second subsystem, and determining the target binocular video stream according to the relative position of the target.
Wherein the target relative position is determined by a user position determination device in the second subsystem.
The first subsystem may be the associated equipment located on the shooting-site side. The first subsystem may include a server and image collectors focused on the shooting area. The second subsystem may be the associated equipment located on the user side. The second subsystem may include a 3D display device and a user position determining device. The target relative position can be defined according to actual needs; for example, it may be the coordinate information of the user's head, both eyes, or a single eye in a screen reference frame whose origin is the center of the screen. The target relative position may also be the user's angle relative to the normal of the screen center.
Two data links may exist between the first subsystem and the second subsystem: a control signal link and a video stream link. The control signal link may be used to transmit the target relative position obtained by the second subsystem to the server of the first subsystem. In this scheme, the target relative position sent by the second subsystem is received, and the target binocular video stream is determined according to the target relative position. Specifically, at least two image collectors corresponding to the target relative position are determined according to the correspondence between the target relative position and the image collector arrangement; the image data collected by these at least two image collectors is then obtained from the server and processed to obtain the target binocular video stream. The server sends the target binocular video stream to the second subsystem through the video stream link.
The user position determining device may be, for example, a user position tracking sensor, which may be a tracking sensor based on a single camera or on at least two cameras, a tracking sensor combining an image sensor with a depth information sensor, or an eye-tracking sensor based on gaze-point tracking. The user position determining device can determine the target relative position of the user with respect to the screen.
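As an illustration of how a target relative position might be derived from tracked head coordinates, the sketch below computes the user's angle relative to the normal of the screen center. This is a minimal Python sketch; the coordinate convention and function name are assumptions for illustration, not part of the patent.

```python
import math

def target_relative_position(head_x_mm: float, head_z_mm: float) -> float:
    """Angle (degrees) of the user relative to the normal of the screen
    center, in a screen reference frame whose origin is the screen center.

    head_x_mm: lateral offset of the tracked head from the screen center
               (positive = user is to the right of the screen).
    head_z_mm: viewing distance from the screen plane along its normal.
    """
    return math.degrees(math.atan2(head_x_mm, head_z_mm))

# A user 500 mm to the left of center at a 1000 mm viewing distance sits
# roughly 26.6 degrees left of the screen-center normal:
angle = target_relative_position(-500.0, 1000.0)
```

A signed angle like this also encodes the offset direction (left or right of the screen), which the angle-matching step described in the embodiments relies on.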
And S120, sending the target binocular video stream to the second subsystem so that the second subsystem displays the target binocular video stream through 3D display equipment.
The 3D display device can display video data with a 3D effect. It may be a naked-eye 3D display or 3D glasses. The naked-eye 3D display may be a lenticular, parallax-barrier, or directional-backlight naked-eye 3D display. The 3D glasses may be polarized glasses or shutter 3D glasses. In this scheme, the target binocular video stream can be sent to the second subsystem, and the second subsystem can display it through the 3D display device.
According to the technical scheme provided by the embodiments of the present invention, when executed by the first subsystem, the target relative position sent by the second subsystem is received, and the target binocular video stream is determined according to the target relative position, wherein the target relative position is determined by a user position determining device in the second subsystem; the target binocular video stream is then sent to the second subsystem so that the second subsystem displays it through the 3D display device. By executing this technical scheme, the stereoscopic viewing experience of live viewers can be guaranteed while bandwidth resources are saved, the storage and processing resources of the local system can be saved, and processing efficiency can be improved.
Fig. 2 is a flowchart of a video display method executed by the first subsystem according to an embodiment of the present invention, which is optimized based on the foregoing embodiments. As shown in fig. 2, the video display method in the embodiment of the present invention may include:
and S210, receiving the target relative position sent by the second subsystem, and determining at least two target image collectors from the candidate image collectors according to the target relative position and the incidence relation between the candidate relative position and the candidate image collectors.
There are at least three candidate image collectors, and they may be arranged in an arc, as shown in fig. 3. In this scheme, the association relation between candidate relative positions and candidate image collectors can be established in advance. Because the user's positions relative to the screen are continuously distributed, while the set of shooting angles formed by the image collectors is generally discrete, the association relation maps multiple candidate relative positions to a single image collector. The video contents output by two adjacent image collectors, or by two image collectors a preset interval apart, exhibit a parallax equivalent to that of human eyes. The server in the first subsystem can receive the multiple channels of video shot by the image collector array and, according to the target relative position, output two of those channels to the second subsystem, either separately or combined.
In this embodiment, optionally, before determining at least two target image collectors from the candidate image collectors according to the target relative position and the association relation between candidate relative positions and candidate image collectors, the method further includes: determining the association relation between a candidate relative position and a candidate image collector according to the user's viewing angle at the candidate relative position and the image collection angle of the candidate image collector.
The user's viewing angle at a candidate relative position may be the user's angle, at that position, relative to the normal of the screen center. In this scheme, if the user's viewing angle at the candidate relative position is determined to be consistent with the image collection angle of the candidate image collector, or is smaller than or equal to it, the association relation between the candidate relative position and the candidate image collector is established. Using this included angle as the target relative position of the user relative to the screen matches the arc arrangement of the image collector array more closely, which can further improve the accuracy with which the server determines the target image collectors and reduce the complexity of the matching algorithm.
Illustratively, the server receives the videos shot by all candidate image collectors. As shown in fig. 3, the server receives, for example, 5 videos shot by 5 candidate image collectors. According to the association relation between candidate relative positions and candidate image collectors, when the user is at position 1 the server determines that the two corresponding video streams are those shot by image collectors No. 3 and No. 2, and sends them, separately or combined, to the 3D display device in the second subsystem for display. When the user moves from position 1 to position 2, this indicates that the user's desired viewing angle has rotated to the left; the server then determines that the two corresponding video streams are those shot by image collectors No. 4 and No. 3, and sends them, separately or combined, to the second subsystem.
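The pair selection in this example can be sketched as a nearest-angle lookup over the arc of collectors. This is a hypothetical Python illustration: the collector numbering, the angle table, and the rule of pairing the nearest collector with its neighbour on the side of the viewing angle are assumptions consistent with the fig. 3 walkthrough, not the patent's definitive algorithm.

```python
def select_collector_pair(viewing_angle_deg, collector_angles_deg):
    """Pick two adjacent collectors whose capture angles surround (or are
    nearest to) the user's viewing angle, forming the binocular pair.

    collector_angles_deg: collector id -> capture angle relative to the
    central normal of the viewing area (arc layout)."""
    ids = sorted(collector_angles_deg, key=lambda i: collector_angles_deg[i])
    # many continuous user positions quantize to one nearest collector
    nearest = min(ids, key=lambda i: abs(collector_angles_deg[i] - viewing_angle_deg))
    k = ids.index(nearest)
    # the partner is the adjacent collector on the side of the viewing angle
    if viewing_angle_deg >= collector_angles_deg[nearest] and k + 1 < len(ids):
        partner = ids[k + 1]
    elif k > 0:
        partner = ids[k - 1]
    else:
        partner = ids[k + 1]
    return tuple(sorted((nearest, partner)))

# Assumed layout: collectors 1-5 spread across -30..30 degrees of the arc.
ANGLES = {1: -30.0, 2: -15.0, 3: 0.0, 4: 15.0, 5: 30.0}
pair_at_position_1 = select_collector_pair(-5.0, ANGLES)   # e.g. (2, 3)
pair_at_position_2 = select_collector_pair(10.0, ANGLES)   # e.g. (3, 4)
```

Moving from position 1 to position 2 shifts the viewing angle, so the selected pair slides one collector along the arc, matching the behaviour described above.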
In this way, the association relation between a candidate relative position and a candidate image collector is determined according to the user's viewing angle at the candidate relative position and the image collection angle of the candidate image collector. The target binocular video stream can thus be located accurately, providing reliable data support for normal display of the live broadcast.
In a possible embodiment, optionally, determining the association relation between the candidate relative position and the candidate image collector according to the user's viewing angle at the candidate relative position and the image collection angle of the candidate image collector includes: if the user's viewing angle at the candidate relative position is consistent with the image collection angle of the candidate image collector, establishing the association relation between the candidate relative position and the candidate image collector.
Illustratively, the user's angle relative to the normal of the screen center may be known; for example, it may be 20 degrees or 30 degrees. The direction of the user's offset relative to the screen center is also known, e.g., to the left or to the right of the screen. Similarly, as shown in fig. 3, 5 candidate image collectors are uniformly arranged in an arc array, and an image collector's included angle relative to the central normal of the viewing area may serve as its image collection angle. The direction of the image collector's offset relative to the viewing area can likewise be determined, e.g., to the left or to the right of the viewing area. Therefore, if the user's viewing angle at the candidate relative position coincides with the image collection angle of a candidate image collector, and the user's offset direction relative to the screen coincides with that collector's offset direction relative to the viewing area, the association relation between the candidate relative position and the candidate image collector can be established.
In this way, if the user's viewing angle at the candidate relative position is consistent with the image collection angle of the candidate image collector, the association relation between them is established. The target binocular video stream can thus be located accurately, providing reliable data support for normal display of the live broadcast.
In another possible embodiment, optionally, before determining at least two target image collectors from the candidate image collectors according to the target relative position and the association relation between candidate relative positions and candidate image collectors, the method further includes: receiving a viewing area selection instruction sent by the second subsystem, and determining a target viewing area according to the viewing area selection instruction. Correspondingly, determining at least two target image collectors from the candidate image collectors according to the target relative position and the association relation between candidate relative positions and candidate image collectors includes: determining the candidate image collectors associated with the target viewing area according to the target viewing area and the association relation between candidate image collectors and candidate viewing areas; and determining at least two target image collectors from the candidate image collectors associated with the target viewing area according to the target relative position and the association relation between candidate image collectors and candidate relative positions.
If the field of view of the shooting site is large, all image data cannot be collected by a single group of candidate image collectors; several groups of candidate image collector arrays may then be needed to jointly collect the image data of all candidate viewing areas, with each group focusing on one candidate viewing area of the shooting scene. The correspondence between candidate image collectors and candidate viewing areas is established and stored in advance. The user selects the viewing area to be watched, i.e., the target viewing area, by sending a viewing area selection instruction. The user can send this instruction through a keyboard shortcut, through a button in a web browser, or to the second subsystem through a live-broadcast application. After receiving the viewing area selection instruction sent by the second subsystem, the first subsystem determines the target viewing area according to it, then determines the candidate image collectors associated with the target viewing area according to the association relation between candidate image collectors and candidate viewing areas, and finally determines at least two target image collectors from those candidates according to the target relative position and the association relation between candidate image collectors and candidate relative positions.
Illustratively, as shown in fig. 4, two groups of arc-shaped image collector arrays consist of image collectors No. 1-5 and No. 6-10, respectively, and focus on shooting candidate viewing area A and candidate viewing area B, respectively. The second subsystem sends a viewing area selection instruction, containing viewing area information, to the server of the first subsystem; it prompts the server to select the video streams from the image collector array of the corresponding candidate viewing area, i.e., the target viewing area. The target relative position sent by the second subsystem then determines which candidate image collectors' video streams within that array are transmitted back to the second subsystem for display.
For example, in state 1, the user is at position 1 and inputs a viewing area selection instruction through the user interaction interface (via a button or the keyboard); the second subsystem transmits the selected candidate viewing area information and the user's target relative position to the server of the first subsystem.
The server determines from the viewing area selection instruction that the user wants to watch the video of viewing area A, further determines from the user's target relative position that the two matched video streams are those shot by image collectors No. 3 and No. 2, and then transmits the two video streams, separately or combined, back to the 3D display device of the second subsystem for display.
In state 2, the user moves to position 2. Assuming the user has not changed the viewing area selection, the server still determines that the user wants to watch viewing area A, determines from the user's latest target relative position that the two matched video streams have changed to those shot by image collectors No. 4 and No. 3, and transmits them, separately or combined, back to the 3D display device of the second subsystem for display.
In state 3, the user's target relative position is unchanged; the user has simply entered a new viewing area selection instruction through the user interface (via a button or the keyboard), attempting to switch from viewing area A to viewing area B. The server determines from the latest instruction that the user wants to watch viewing area B, determines from the user's current position information that the two matched video streams have changed to those shot by image collectors No. 9 and No. 8, and transmits them, separately or combined, back to the 3D display device of the second subsystem for display.
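The three states amount to a two-level lookup: the viewing area selection instruction picks a collector array, and the target relative position picks an adjacent pair inside it. The sketch below is a hypothetical Python rendering of that lookup; the area table and the slot-based quantization of the target relative position are assumptions matching the fig. 4 walkthrough.

```python
# Assumed association of candidate viewing areas with collector arrays,
# following fig. 4: collectors 1-5 cover area A, collectors 6-10 cover B.
AREA_TO_COLLECTORS = {
    "A": [1, 2, 3, 4, 5],
    "B": [6, 7, 8, 9, 10],
}

def resolve_binocular_pair(area: str, slot: int):
    """Return the binocular collector pair for one quantized user position.

    area: target viewing area chosen by the viewing area selection
          instruction.
    slot: 0-based index of the gap between two adjacent collectors that
          the target relative position falls into."""
    collectors = AREA_TO_COLLECTORS[area]
    if not 0 <= slot < len(collectors) - 1:
        raise ValueError("target relative position outside array coverage")
    return collectors[slot], collectors[slot + 1]

# State 1: area A, position 1             -> collectors 2 and 3
# State 2: area A, user moved one slot    -> collectors 3 and 4
# State 3: switch to area B, same position -> collectors 8 and 9
```

Note how switching the viewing area changes only the first-level lookup, so an unchanged target relative position maps to the corresponding pair in the other array, as in state 3 above.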
In this way, the target viewing area is determined according to the viewing area selection instruction, and the candidate image collectors associated with the target viewing area are determined according to the target viewing area and the association relationship between candidate image collectors and candidate viewing areas; at least two target image collectors are then determined from the candidate image collectors associated with the target viewing area according to the target relative position and the association relationship between candidate image collectors and candidate relative positions. The binocular video stream of the target viewing area can thus be accurately located, providing reliable data support for normal display of the live broadcast.
In this embodiment, optionally, the process of determining the association relationship between candidate image collectors and candidate viewing areas includes: determining, according to the viewing angle range from which a user views a candidate viewing area, the candidate image collectors whose image collection angles fall within that viewing angle range; and establishing the association relationship between the candidate viewing area and the candidate image collectors whose image collection angles fall within the viewing angle range.
Illustratively, as shown in fig. 4, if the user views viewing area A, the viewing angle range is the angle α, and if the user views viewing area B, the viewing angle range is β; correspondingly, the shooting angles of image collectors No. 1-5 fall within α, and the shooting angles of image collectors No. 6-10 fall within β. That is, the viewing angle range coincides with the image collection angle range, so an association relationship is established for the image collectors located within the viewing angle α, namely collectors No. 1-5. For example, two arc-shaped image collector arrays, consisting of image collectors No. 1-5 and No. 6-10 respectively, focus on and shoot candidate viewing area A and candidate viewing area B. The association relationship between candidate viewing area A and image collectors No. 1-5, and that between candidate viewing area B and image collectors No. 6-10, are then established respectively.
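Building these associations amounts to filtering the collectors by angle range; the following is an illustrative sketch, not part of the claimed method, and the specific angle values and ranges are assumptions invented for this example.

```python
# Illustrative sketch: associate each candidate viewing area with the
# collectors whose capture angle lies in that area's viewing angle range.
# Angle values (degrees) and ranges below are assumptions.

# Capture angle of each numbered image collector
CAPTURE_ANGLE = {1: 10, 2: 20, 3: 30, 4: 40, 5: 50,
                 6: 100, 7: 110, 8: 120, 9: 130, 10: 140}

# Viewing angle range per area (alpha for area A, beta for area B)
VIEWING_RANGE = {"A": (0, 60), "B": (90, 150)}

def build_associations(capture_angle, viewing_range):
    """Map each candidate viewing area to the collectors in its range."""
    assoc = {}
    for area, (lo, hi) in viewing_range.items():
        assoc[area] = sorted(c for c, ang in capture_angle.items()
                             if lo <= ang <= hi)
    return assoc
```

As the text notes, this correspondence is established and stored in advance, so the server only consults it at selection time.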
In this way, the candidate image collectors whose image collection angles fall within the viewing angle range are determined according to the viewing angle range from which the user views the candidate viewing area, and the association relationship between the candidate viewing area and those candidate image collectors is established. The target viewing area can thus be accurately located, providing reliable data support for normal display of the live broadcast.
And S220, determining the target binocular video stream according to the image data collected by the at least two target image collectors.
If the parallax between the video contents output by the at least two target image collectors is equivalent to the user's binocular parallax, the server can output the video streams composed of the image data acquired by the at least two target image collectors to the second subsystem as needed, either separately or combined. If the parallax is not equivalent to the user's binocular parallax, the server can first merge or otherwise process the acquired image data to obtain video contents whose parallax is equivalent to the user's binocular parallax, and then output the resulting video streams to the second subsystem, either separately or combined.
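The decision in this step can be sketched as follows. This is an illustrative sketch only: the tolerance value, the placeholder merge step, and all function names are assumptions, and the actual view-synthesis processing is not specified by the text.

```python
# Illustrative sketch of the S220 decision: pass the two streams through
# if their parallax already matches the user's binocular parallax,
# otherwise run a merging/synthesis step first. Values are assumptions.

def merge_views(left, right, target_parallax_mm):
    # Placeholder for interpolation/merging of intermediate views
    return (left + "_merged", right + "_merged")

def prepare_binocular_streams(left, right, stream_parallax_mm,
                              user_parallax_mm=63.0, tolerance_mm=5.0):
    """Return (left, right) streams with user-equivalent parallax."""
    if abs(stream_parallax_mm - user_parallax_mm) <= tolerance_mm:
        # Parallax already equivalent: output the streams as captured
        return left, right
    return merge_views(left, right, user_parallax_mm)
```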
And S230, sending the target binocular video stream to the second subsystem so that the second subsystem displays the target binocular video stream through 3D display equipment.
According to the technical scheme provided by the embodiment of the invention, when the first subsystem is executed, the target relative position sent by the second subsystem is received, and at least two target image collectors are determined from the candidate image collectors according to the target relative position and the incidence relation between the candidate relative position and the candidate image collectors; determining a target binocular video stream according to image data acquired by at least two target image collectors; and sending the target binocular video stream to the second subsystem so that the second subsystem displays the target binocular video stream through the 3D display equipment. By executing the technical scheme provided by the embodiment of the invention, the stereoscopic viewing experience of the live audience can be ensured while the bandwidth resource is saved, the storage and processing resources of a local system can be saved, and the processing efficiency is improved.
Fig. 5 is a flowchart of a video display method executed by the second subsystem according to an embodiment of the present invention, where the method may be executed by a video display apparatus, and the apparatus may be implemented by software and/or hardware, and the apparatus may be configured in an electronic device for video display. The method is applied to scenes for carrying out 3D display on live broadcast. As shown in fig. 5, the technical solution provided by the embodiment of the present invention specifically includes:
S310, determining a target relative position of a user relative to a screen through a user position determining device, and sending the target relative position to a first subsystem, so that the first subsystem determines a target binocular video stream according to the target relative position.
In particular, the second subsystem may comprise a 3D display device and a user position determining device. The user position determining device may be, for example, a user position tracking sensor, such as a tracking sensor based on a single camera or at least two cameras, a tracking sensor based on a combination of image and depth information sensors, or an eye tracking sensor based on gaze-point tracking. The user position determining device determines the target relative position of the user with respect to the screen. The target relative position can be set according to actual needs; for example, it may be the coordinate information of the user's head, both eyes, or a single eye in a screen reference frame with the center of the screen as the coordinate origin, or it may be the user's angle relative to the screen-center normal. The second subsystem sends the target relative position to the first subsystem so that the server of the first subsystem can determine the target binocular video stream according to the target relative position; how the server does so is described in detail in the above embodiments.
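When the angle form of the target relative position is used, it can be derived from screen-frame coordinates with a single arctangent. The sketch below is illustrative only; the coordinate frame (x parallel to the screen, z along the screen-center normal) and the millimetre units are assumptions, not specified by the text.

```python
# Illustrative sketch: express the target relative position as the angle
# between the user-to-screen-centre line and the screen-centre normal,
# assuming the sensor reports (x, z) in a screen-centred frame in mm.
import math

def angle_to_screen_normal(x_mm, z_mm):
    """Angle in degrees; 0 means the user is directly in front of the
    screen centre, positive values mean offset to one side."""
    return math.degrees(math.atan2(x_mm, z_mm))
```

For instance, a user offset 500 mm sideways at 1000 mm from the screen sits at roughly 26.6 degrees from the normal.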
And S320, receiving the target binocular video stream sent by the first subsystem, and displaying the target binocular video stream through 3D display equipment.
The 3D display device realizes 3D display of the video data. It may be a naked-eye 3D display or 3D glasses; the naked-eye 3D display may be a lenticular-lens type, a parallax-barrier type, or a directional-backlight type naked-eye 3D display, and the 3D glasses may be polarized 3D glasses or shutter 3D glasses. With this scheme, the target binocular video stream sent by the first subsystem can be received and displayed through the 3D display device.
According to the technical scheme provided by the embodiment of the invention, the target relative position of the user relative to the screen is determined through the user position determining equipment, and the target relative position is sent to the first subsystem, so that the first subsystem determines the target binocular video stream according to the target relative position; and receiving the target binocular video stream sent by the first subsystem, and displaying the target binocular video stream through the 3D display equipment. By executing the technical scheme provided by the embodiment of the invention, the stereoscopic viewing experience of the live audience can be ensured while the bandwidth resource is saved, the storage and processing resources of a local system can be saved, and the processing efficiency is improved.
Fig. 6 is a flowchart of a video display method executed by the second subsystem according to an embodiment of the present invention, which is optimized based on the foregoing embodiments. As shown in fig. 6, the video display method in the embodiment of the present invention may include:
S410, receiving a viewing area selection instruction, and sending the viewing area selection instruction to a first subsystem, so that the first subsystem determines a target viewing area according to the viewing area selection instruction.
If the field range of the shooting site is large, all image data cannot be acquired by a single candidate image collector array; several arrays may be needed to jointly acquire the image data of all candidate viewing areas, each array focusing on one candidate viewing area of the shooting scene. The correspondence between candidate image collectors and candidate viewing areas is established and stored in advance. The user selects the viewing area to be watched, i.e., the target viewing area, by sending a viewing area selection instruction, for example through a keyboard shortcut, a button in a web page, or a live-broadcast application that sends the instruction to the second subsystem. After receiving the viewing area selection instruction sent by the second subsystem, the first subsystem determines the target viewing area according to the instruction.
And S420, determining the target relative position of the user relative to the screen through the user position determining equipment, and sending the target relative position to the first subsystem, so that the first subsystem determines a target binocular video stream according to the target relative position.
In this embodiment, optionally, the target relative position includes: the angle of the user relative to the normal to the center of the screen or the offset of the user relative to the screen.
The target relative position of the user relative to the screen may be a horizontal coordinate parallel to the screen coordinate system, or the angle between the user and the screen-center normal, for example the angle between the line connecting the user's position to the middle of the screen and the screen-center normal. Using this angle as the viewer's position relative to the screen matches the arc-shaped arrangement of the image collector array more closely, which can further improve the accuracy with which the server determines the matched image collectors and further reduce the complexity of the matching algorithm.
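The reduced matching complexity can be illustrated as follows: if the arc array places collectors at a uniform angular pitch, matching an angle against the array is a single quantisation step. The pitch, numbering, and starting angle below are assumptions invented for illustration, not values from the text.

```python
# Illustrative sketch: with collectors on an arc at a uniform angular
# pitch, the nearest collector to a given user angle is found by one
# quantisation step. Pitch/numbering values are assumptions.

def nearest_collector(user_angle_deg, first_angle_deg=-30.0,
                      pitch_deg=15.0, count=5):
    """1-based index of the collector angularly closest to the user."""
    idx = round((user_angle_deg - first_angle_deg) / pitch_deg) + 1
    return min(max(idx, 1), count)  # clamp to the ends of the array
```

A coordinate-based position would instead require a distance or projection computation per collector, which is why the angle form simplifies the server-side matching.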
And S430, receiving the target binocular video stream sent by the first subsystem, and displaying the target binocular video stream through 3D display equipment.
According to the technical scheme provided by the embodiment of the invention, the target relative position of the user relative to the screen is determined through the user position determining equipment, and the target relative position is sent to the first subsystem, so that the first subsystem determines the target binocular video stream according to the target relative position; and receiving the target binocular video stream sent by the first subsystem, and displaying the target binocular video stream through the 3D display equipment. By executing the technical scheme provided by the embodiment of the invention, the stereoscopic viewing experience of the live audience can be ensured while the bandwidth resource is saved, the storage and processing resources of a local system can be saved, and the processing efficiency is improved.
Fig. 7 is a schematic structural diagram of a video display apparatus configured in a first subsystem according to an embodiment of the present invention, where the apparatus may be configured in an electronic device such as a server. As shown in fig. 7, the apparatus includes:
a target binocular video stream determining module 510, configured to receive a target relative position sent by the second subsystem, and determine a target binocular video stream according to the target relative position; wherein the target relative position is determined by a user position determination device in the second subsystem;
a display module 520, configured to send the target binocular video stream to the second subsystem, so that the second subsystem displays the target binocular video stream through a 3D display device.
Optionally, the target binocular video stream determining module 510 includes a target image collector determining unit, configured to determine at least two target image collectors from the candidate image collectors according to the target relative position and an association relationship between the candidate relative position and the candidate image collectors; and the target binocular video stream determining unit is used for determining the target binocular video stream according to the image data acquired by the at least two target image acquirers.
Optionally, the apparatus further includes an association determining module, configured to determine, before determining at least two target image collectors from the candidate image collectors according to the target relative positions and the association between the candidate relative positions and the candidate image collectors, the association between the candidate relative positions and the candidate image collectors according to the viewing angles of the users at the candidate relative positions and the image collection angles of the candidate image collectors.
Optionally, the association relation determining module is specifically configured to establish an association relation between the candidate relative position and the candidate image collector if the viewing angle of the user at the candidate relative position is consistent with the image collecting angle of the candidate image collector.
Optionally, the apparatus further includes a target viewing area determining module, configured to receive a viewing area selection instruction sent by the second subsystem before determining at least two target image collectors from the candidate image collectors according to the target relative position and the association relationship between the candidate relative position and the candidate image collectors, and determine a target viewing area according to the viewing area selection instruction; correspondingly, the target binocular video stream determining module 510 is specifically configured to determine a candidate image collector associated with the target viewing area according to the target viewing area and an association relationship between the candidate image collector and the candidate viewing area; and determine at least two target image collectors from the candidate image collectors associated with the target viewing area according to the target relative position and the association relationship between the candidate image collectors and the candidate relative position.
Optionally, the process of determining the association relationship between the candidate image collector and the candidate viewing area includes: determining a candidate image collector with an image collection angle within a viewing angle range according to the viewing angle range when a user views the candidate viewing area; and establishing the association relationship between the candidate viewing area and the candidate image collector with the image collection angle within the viewing angle range.
The device provided by the above embodiment can execute the video display method executed by the first subsystem provided by any embodiment of the present invention, and has the corresponding functional modules and beneficial effects of the execution method.
Fig. 8 is a schematic structural diagram of a video display apparatus configured in a second subsystem according to an embodiment of the present invention, where the apparatus may be configured in an electronic device such as a server. As shown in fig. 8, the apparatus includes:
the target relative position determining module 610 is configured to determine a target relative position of a user relative to a screen through a user position determining device, and send the target relative position to a first subsystem, so that the first subsystem determines a target binocular video stream according to the target relative position;
and the display module 620 is configured to receive the target binocular video stream sent by the first subsystem, and display the target binocular video stream through a 3D display device.
Optionally, the target relative position includes: the angle of the user relative to the normal to the center of the screen or the offset of the user relative to the screen.
Optionally, the apparatus further includes a target viewing area determining module, configured to receive a viewing area selection instruction before determining a target relative position of the user with respect to the screen by using the user position determining device, and send the viewing area selection instruction to the first subsystem, so that the first subsystem determines the target viewing area according to the viewing area selection instruction.
The device provided by the above embodiment can execute the video display method executed by the second subsystem provided by any embodiment of the present invention, and has the corresponding functional modules and beneficial effects of the execution method.
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, and as shown in fig. 9, the electronic device includes:
one or more processors 710, one processor 710 being illustrated in FIG. 9;
a memory 720;
the apparatus may further include: an input device 730 and an output device 740.
The processor 710, the memory 720, the input device 730 and the output device 740 of the apparatus may be connected by a bus or other means, for example, in fig. 9.
The memory 720, as a non-transitory computer-readable storage medium, may be used for storing software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to a video display method in an embodiment of the present invention. The processor 710 executes various functional applications and data processing of the computer device by running the software programs, instructions and modules stored in the memory 720, thereby implementing the video display method executed by the first subsystem in the above method embodiments, namely:
receiving a target relative position sent by a second subsystem, and determining a target binocular video stream according to the target relative position; wherein the target relative position is determined by a user position determination device in the second subsystem;
and sending the target binocular video stream to the second subsystem so that the second subsystem displays the target binocular video stream through 3D display equipment.
Or implementing a video display method executed by the second subsystem as provided by the embodiment of the present invention, that is:
determining a target relative position of a user relative to a screen through user position determining equipment, and sending the target relative position to a first subsystem, so that the first subsystem determines a target binocular video stream according to the target relative position;
and receiving the target binocular video stream sent by the first subsystem, and displaying the target binocular video stream through 3D display equipment.
The memory 720 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the computer device, and the like. Further, the memory 720 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, memory 720 may optionally include memory located remotely from processor 710, which may be connected to the terminal device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 730 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the computer apparatus. The output device 740 may include a display device such as a display screen.
Fig. 10 is a schematic structural diagram of a video display system according to an embodiment of the present invention, and as shown in fig. 10, the system includes: a first subsystem 81 and a second subsystem 82, the first subsystem 81 including a server 811 and an image collector 812, the second subsystem 82 including a user position determination device 821 and a 3D display device 822;
wherein the user position determination device 821 determines a target relative position of the user with respect to the screen and transmits the target relative position to the server 811;
the server 811 determines a target binocular video stream acquired by the image acquirer 812 according to the target relative position, and transmits the target binocular video stream to the 3D display device 822;
and the 3D display device 822 performs 3D display on the target binocular video stream.
The second subsystem 82 refers to a related device located at the user side. The second subsystem 82 includes a 3D display device 822, and a user position determination device 821 built in or external to the 3D display device 822.
The 3D display device 822 may be a naked-eye 3D display or 3D glasses. The naked-eye 3D display may be a lenticular-lens, parallax-barrier, or directional-backlight naked-eye 3D display. The 3D glasses may be polarized 3D glasses or shutter 3D glasses. The user position determining device 821 is used to track the target relative position of the user's head or eyes with respect to the screen. It may be a user position tracking sensor, for example a tracking sensor based on a single camera or at least two cameras, a tracking sensor based on a combination of image and depth information sensors, or an eye tracking sensor based on gaze-point tracking.
The first subsystem 81 refers to the relevant devices located on the shooting site side. The first subsystem 81 includes a server 811 and an image collector array 812 arranged in an arc converging on the shooting area. The image collector array comprises at least three image collectors, and parallax equivalent to that of human eyes is formed between the video contents output by two adjacent image collectors, or by two image collectors a preset interval apart. The server 811 is configured to receive the multiple video streams shot by the image collector array and, as needed, output two of them to the second subsystem 82, either separately or combined.
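The constraint that adjacent collectors on the arc produce eye-equivalent parallax can be checked with simple chord geometry. The sketch below is illustrative only; the arc radius, angular spacing, and the ~63 mm interpupillary distance are assumed example values, not figures from the text.

```python
# Illustrative sketch of the arc-array design constraint: the chord
# between two collectors `interval` positions apart should approximate
# the human interpupillary distance (~63 mm) so that their outputs form
# eye-equivalent parallax. Radius/spacing values are assumptions.
import math

def chord_between_collectors(radius_mm, angular_spacing_deg, interval=1):
    """Chord length (mm) between two collectors on the arc."""
    theta = math.radians(angular_spacing_deg * interval)
    return 2.0 * radius_mm * math.sin(theta / 2.0)
```

For example, under these assumptions an arc of 1.2 m radius with collectors every 3 degrees gives a chord of about 62.8 mm between adjacent collectors, close to a typical interpupillary distance; a larger radius would instead call for the "preset interval" pairing the text mentions.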
The technical scheme provided by the embodiment of the invention comprises: a first subsystem and a second subsystem, wherein the first subsystem comprises a server and an image collector, and the second subsystem comprises a user position determining device and a 3D display device; the user position determining device determines a target relative position of a user relative to a screen and sends the target relative position to the server; the server determines a target binocular video stream collected by the image collector according to the target relative position and sends the target binocular video stream to the 3D display device; and the 3D display device performs 3D display on the target binocular video stream. By executing this scheme, the viewing angle of the watched content can change with the user's position while only the two matched video streams are transmitted back; the server need not transmit the video streams of all the image collectors to the local end, which reduces the bandwidth requirement.
An embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements a video display method performed by a first subsystem according to an embodiment of the present invention, that is:
receiving a target relative position sent by a second subsystem, and determining a target binocular video stream according to the target relative position; wherein the target relative position is determined by a user position determination device in the second subsystem;
and sending the target binocular video stream to the second subsystem so that the second subsystem displays the target binocular video stream through 3D display equipment.
Alternatively, a video display method executed by the second subsystem as provided by the embodiment of the present invention is implemented, that is:
determining a target relative position of a user relative to a screen through user position determining equipment, and sending the target relative position to a first subsystem, so that the first subsystem determines a target binocular video stream according to the target relative position;
and receiving the target binocular video stream sent by the first subsystem, and displaying the target binocular video stream through 3D display equipment.
Any combination of one or more computer-readable media may be employed. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.
Claims (14)
1. A video display method performed by a first subsystem, comprising:
receiving a target relative position sent by a second subsystem, and determining a target binocular video stream according to the target relative position; wherein the target relative position is determined by a user position determination device in the second subsystem;
and sending the target binocular video stream to the second subsystem so that the second subsystem displays the target binocular video stream through 3D display equipment.
2. The method of claim 1, wherein determining a target binocular video stream according to the target relative position comprises:
determining at least two target image collectors from the candidate image collectors according to the target relative position and the association relationship between the candidate relative positions and the candidate image collectors;
and determining the target binocular video stream according to the image data acquired by at least two target image acquirers.
3. The method of claim 2, wherein before determining at least two target image collectors from the candidate image collectors according to the target relative positions and the association relationship between the candidate relative positions and the candidate image collectors, the method further comprises:
and determining the association relationship between the candidate relative position and the candidate image collector according to the viewing angle of the user at the candidate relative position and the image collection angle of the candidate image collector.
4. The method of claim 3, wherein determining the association relationship between the candidate relative position and the candidate image collector according to the viewing angle of the user at the candidate relative position and the image collection angle of the candidate image collector comprises:
and if the viewing angle of the user at the candidate relative position is consistent with the image collection angle of the candidate image collector, establishing the association relationship between the candidate relative position and the candidate image collector.
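Claims 2-4 above describe building an association between candidate viewing positions and cameras by angle matching, then picking at least two cameras for the binocular stream. The following is only a minimal Python sketch of that idea, assuming positions are expressed as viewing angles in degrees and a hypothetical matching tolerance; none of the names or values come from the patent itself:

```python
def build_association(candidate_angles, collectors, tolerance=7.5):
    """Claim 4 (sketch): associate each candidate viewing angle with every
    collector whose image-collection angle is consistent with it, where
    "consistent" is modeled as an assumed angular tolerance in degrees."""
    return {
        angle: [cam for cam, cam_angle in collectors.items()
                if abs(cam_angle - angle) <= tolerance]
        for angle in candidate_angles
    }

def select_target_collectors(target_angle, association):
    """Claim 2 (sketch): look up the candidate position nearest to the
    measured target position and take two of its associated collectors;
    their image data would form the target binocular video stream."""
    nearest = min(association, key=lambda a: abs(a - target_angle))
    cams = association[nearest]
    if len(cams) < 2:
        raise ValueError("a binocular stream needs at least two collectors")
    return cams[:2]
```

A real system would track head position continuously and re-run the selection as the user moves; the sketch only shows the lookup step.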
5. The method of claim 2, wherein before determining at least two target image collectors from the candidate image collectors according to the target relative positions and the association relationship between the candidate relative positions and the candidate image collectors, the method further comprises:
receiving a viewing area selection instruction sent by a second subsystem, and determining a target viewing area according to the viewing area selection instruction;
correspondingly, determining at least two target image collectors from the candidate image collectors according to the target relative position and the association relationship between the candidate relative positions and the candidate image collectors comprises:
determining a candidate image collector associated with the target viewing area according to the target viewing area and the association relationship between the candidate image collector and the candidate viewing area;
and determining at least two target image collectors from the candidate image collectors associated with the target viewing area according to the target relative position and the association relationship between the candidate image collectors and the candidate relative position.
6. The method of claim 5, wherein the determining of the association relationship between the candidate image collector and the candidate viewing area comprises:
determining, according to a viewing angle range when a user views the candidate viewing area, a candidate image collector whose image collection angle is within the viewing angle range;
and establishing the association relationship between the candidate viewing area and the candidate image collector whose image collection angle is within the viewing angle range.
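Claim 6's area-to-collector association amounts to a range filter: a viewing area is linked to every camera whose collection angle falls inside that area's angular range. A hypothetical sketch, with made-up area names and angles in degrees:

```python
def build_area_association(areas, collectors):
    """Claim 6 (sketch): associate each candidate viewing area, given as
    an (lo, hi) viewing-angle range, with every candidate collector whose
    image-collection angle lies inside that range."""
    return {
        area: [cam for cam, angle in collectors.items() if lo <= angle <= hi]
        for area, (lo, hi) in areas.items()
    }
```

Claim 5 then restricts the binocular selection of claim 2 to the collectors associated with the user-selected target area.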
7. A video display method performed by a second subsystem, comprising:
determining a target relative position of a user relative to a screen through user position determining equipment, and sending the target relative position to a first subsystem, so that the first subsystem determines a target binocular video stream according to the target relative position;
and receiving the target binocular video stream sent by the first subsystem, and displaying the target binocular video stream through 3D display equipment.
8. The method of claim 7, wherein the target relative position comprises:
the angle of the user relative to the normal to the center of the screen or the offset of the user relative to the screen.
9. The method of claim 7, prior to determining the target relative position of the user with respect to the screen by the user position determination device, further comprising:
receiving a viewing area selection instruction, and sending the viewing area selection instruction to the first subsystem, so that the first subsystem determines a target viewing area according to the viewing area selection instruction.
10. A video display apparatus disposed in a first subsystem, comprising:
the target binocular video stream determining module is used for receiving the target relative position sent by the second subsystem and determining a target binocular video stream according to the target relative position; wherein the target relative position is determined by a user position determination device in the second subsystem;
and the display module is used for sending the target binocular video stream to the second subsystem so that the second subsystem displays the target binocular video stream through 3D display equipment.
11. A video display apparatus disposed in a second subsystem, comprising:
the target relative position determining module is used for determining a target relative position of a user relative to a screen through user position determining equipment and sending the target relative position to the first subsystem so that the first subsystem determines a target binocular video stream according to the target relative position;
and the display module is used for receiving the target binocular video stream sent by the first subsystem and displaying the target binocular video stream through 3D display equipment.
12. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the video display method performed by the first subsystem as claimed in any one of claims 1-6, or the video display method performed by the second subsystem as claimed in any one of claims 7-9.
13. A video display system, the system comprising:
the system comprises a first subsystem and a second subsystem, wherein the first subsystem comprises a server and an image collector, and the second subsystem comprises user position determining equipment and 3D display equipment; the user position determining equipment determines a target relative position of a user relative to a screen and sends the target relative position to the server;
the server determines, according to the target relative position, a target binocular video stream collected by the image collector, and sends the target binocular video stream to the 3D display equipment;
and the 3D display equipment performs 3D display on the target binocular video stream.
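The subsystem interaction in claim 13 is a simple request/response loop: the position device reports where the user is, the server picks the matching binocular stream, and the 3D display renders it. A toy sketch under assumed simplifications (streams reduced to string ids, positions to a single angle; all names hypothetical):

```python
class FirstSubsystem:
    """Server plus image collectors (claim 13, sketch).

    Assumes the available binocular streams are pre-keyed by the
    viewing angle they were captured for."""
    def __init__(self, streams_by_angle):
        self.streams = streams_by_angle  # angle (deg) -> stream id

    def get_binocular_stream(self, target_angle):
        # Serve the stream captured for the nearest known viewing angle.
        nearest = min(self.streams, key=lambda a: abs(a - target_angle))
        return self.streams[nearest]

class SecondSubsystem:
    """User position determining equipment plus 3D display (claim 13, sketch)."""
    def __init__(self, first):
        self.first = first

    def refresh(self, measured_angle):
        # Send the measured relative position, receive the matching
        # binocular stream, and hand it to the 3D display.
        stream = self.first.get_binocular_stream(measured_angle)
        return f"3D-display:{stream}"
```

In practice the two subsystems would communicate over a network and the display step would drive a grating or lenticular 3D panel; the sketch only models the control flow.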
14. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the video display method performed by a first subsystem as claimed in any one of claims 1 to 6, or the video display method performed by a second subsystem as claimed in any one of claims 7 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210089820.7A | 2022-01-25 | 2022-01-25 | Video display method, device, equipment, system and medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114422819A (en) | 2022-04-29 |
Family
ID=81278046
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210089820.7A | Video display method, device, equipment, system and medium | 2022-01-25 | 2022-01-25 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114422819A (en) |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103698884A (en) * | 2013-12-12 | 2014-04-02 | 京东方科技集团股份有限公司 | Opening type head-mounted display device and display method thereof |
CN104349155A (en) * | 2014-11-25 | 2015-02-11 | 深圳超多维光电子有限公司 | Method and equipment for displaying simulated three-dimensional image |
CN105120251A (en) * | 2015-08-19 | 2015-12-02 | 京东方科技集团股份有限公司 | 3D scene display method and device |
US20160142703A1 (en) * | 2014-11-19 | 2016-05-19 | Samsung Electronics Co., Ltd. | Display method and electronic device |
CN106060523A (en) * | 2016-06-29 | 2016-10-26 | 北京奇虎科技有限公司 | Methods for collecting and displaying panoramic stereo images, and corresponding devices |
CN106604042A (en) * | 2016-12-22 | 2017-04-26 | Tcl集团股份有限公司 | Panorama webcasting system and panorama webcasting method based on cloud server |
CN107948631A (en) * | 2017-12-25 | 2018-04-20 | 河南新汉普影视技术有限公司 | It is a kind of based on cluster and the bore hole 3D systems that render |
CN110475111A (en) * | 2019-07-11 | 2019-11-19 | 西安万像电子科技有限公司 | Image processing method, apparatus and system |
CN112351265A (en) * | 2020-09-27 | 2021-02-09 | 成都华屏科技有限公司 | Self-adaptive naked eye 3D visual camouflage system |
WO2021083174A1 (en) * | 2019-10-28 | 2021-05-06 | 阿里巴巴集团控股有限公司 | Virtual viewpoint image generation method, system, electronic device, and storage medium |
CN112969061A (en) * | 2021-01-29 | 2021-06-15 | 陕西红星闪闪网络科技有限公司 | Restaurant experience system based on holographic image technology |
CN113141501A (en) * | 2020-01-20 | 2021-07-20 | 北京芯海视界三维科技有限公司 | Method and device for realizing 3D display and 3D display system |
CN113411561A (en) * | 2021-06-17 | 2021-09-17 | 纵深视觉科技(南京)有限责任公司 | Stereoscopic display method, device, medium and system for field performance |
CN113852841A (en) * | 2020-12-23 | 2021-12-28 | 上海飞机制造有限公司 | Visual scene establishing method, device, equipment, medium and system |
Non-Patent Citations (1)
Title |
---|
ZHUO Li (卓力): "User-driven interactive stereoscopic video streaming system", Journal of Beijing University of Technology (《北京工业大学学报》), vol. 39, no. 6, pages 846-850 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103270759B (en) | For zero disparity plane of the 3 D video based on feedback | |
KR101695819B1 (en) | A apparatus and a method for displaying a 3-dimensional image | |
JP5833667B2 (en) | Method and apparatus for providing monovision in a multi-view system | |
JP5259519B2 (en) | Digital broadcast receiver, transmitter and terminal device | |
KR101685981B1 (en) | A system, an apparatus and a method for displaying a 3-dimensional image | |
KR102069930B1 (en) | Immersion communication client and server, and method for obtaining content view | |
JP2006121553A (en) | Video display unit | |
KR20120014433A (en) | A system, an apparatus and a method for displaying a 3-dimensional image and an apparatus for tracking a location | |
CN101636747A (en) | Two dimensional/three dimensional digital information obtains and display device | |
CN111970524B (en) | Control method, device, system, equipment and medium for interactive live broadcast and microphone connection | |
KR101367458B1 (en) | System for providing multi-angle broardcasting service | |
WO2022262839A1 (en) | Stereoscopic display method and apparatus for live performance, medium, and system | |
KR101329057B1 (en) | An apparatus and method for transmitting multi-view stereoscopic video | |
US20200029066A1 (en) | Systems and methods for three-dimensional live streaming | |
JP2013051602A (en) | Electronic device and control method for electronic device | |
KR20120054746A (en) | Method and apparatus for generating three dimensional image in portable communication system | |
CN103200441A (en) | Obtaining method, conforming method and device of television channel information | |
KR20130033815A (en) | Image display apparatus, and method for operating the same | |
CN114422819A (en) | Video display method, device, equipment, system and medium | |
KR101867815B1 (en) | Apparatus for displaying a 3-dimensional image and method for adjusting viewing distance of 3-dimensional image | |
CN110198457B (en) | Video playing method and device, system, storage medium, terminal and server thereof | |
CN113641247A (en) | Sight angle adjusting method and device, electronic equipment and storage medium | |
CN114040184A (en) | Image display method, system, storage medium and computer program product | |
CN114500911A (en) | Method, device, equipment, system and medium for realizing video call | |
KR101758274B1 (en) | A system, a method for displaying a 3-dimensional image and an apparatus for processing a 3-dimensional image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||