CN115550565A - Data processing method, head-mounted device, control device, electronic device, and medium - Google Patents


Info

Publication number
CN115550565A
Authority
CN
China
Prior art keywords
video stream
head-mounted device
viewport
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211167306.7A
Other languages
Chinese (zh)
Inventor
李蕾
崔新宇
Current Assignee
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202211167306.7A priority Critical patent/CN115550565A/en
Publication of CN115550565A publication Critical patent/CN115550565A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628 Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • H04N5/265 Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The application discloses a data processing method, a head-mounted device, a control device, an electronic device, and a medium. The technical solution provided by the embodiments of the application is applicable to video transmission and solves the technical problem of "black borders" appearing in a VR video, caused by changes in the user's viewing angle, when the user wears a head-mounted device to watch the video. The method comprises the following steps: the head-mounted device receives a first video stream sent by the control device, where the first video stream corresponds to a first viewport of the head-mounted device; the head-mounted device receives a second video stream sent by the control device, where the second video stream is a panoramic video stream, the panoramic video includes the video corresponding to the first viewport, and the resolution of the first video stream is greater than that of the second video stream; the head-mounted device synthesizes a first image from the first video stream and the second video stream; according to the position of a second viewport of the head-mounted device, an image corresponding to the second viewport is extracted from the first image, where the second viewport is the current viewport of the head-mounted device; and the image corresponding to the second viewport is displayed.

Description

Data processing method, head-mounted device, control device, electronic device, and medium
Technical Field
The present application belongs to the field of Virtual Reality (VR) technology, and in particular, relates to a data processing method, a head-mounted device, a control device, an electronic device, and a medium.
Background
VR video, also called panoramic video, is produced by recording a real scene with VR capture equipment and then post-processing the footage on a computer, forming a video with a three-dimensional, spatial display. Unlike the single fixed viewing angle of traditional video, panoramic video lets the user look around freely through 360 degrees. Consequently, when a user wears a VR headset to watch a VR video and the user's head posture changes, for example when the user turns or raises their head, the user's viewing angle changes as well. When the viewing angle changes, the VR headset must re-render the picture for the new viewing angle. During this re-rendering, because of the headset's decoding rate and network delay, a "black border" phenomenon is likely to appear in the VR video, degrading the user's experience.
Disclosure of Invention
The embodiments of the application provide an implementation scheme different from the prior art, to solve the technical problem of "black borders" appearing in a VR video, caused by changes in the user's viewing angle, when the user wears a head-mounted device to watch the video.
In a first aspect, the present application provides a method of data processing, the method being applicable to a head-mounted device, comprising: receiving a first video stream sent by a control device, where the first video stream is a video stream corresponding to a first viewport of the head-mounted device; receiving a second video stream sent by the control device, where the second video stream is a panoramic video stream, the panoramic video includes a video corresponding to the first viewport, and a resolution of the first video stream is greater than a resolution of the second video stream; synthesizing a first image from the first video stream and the second video stream; according to the position of a second view port of the head-mounted device, extracting an image corresponding to the second view port from the first image, wherein the second view port is a current view port of the head-mounted device; and displaying an image corresponding to the second viewport.
In a second aspect, the present application provides a data processing method applicable to a control device, comprising: determining a first video stream according to a position of a first viewport of the head-mounted device and a first panoramic image; down-sampling the first panoramic image; determining a second video stream according to the down-sampled first panoramic image, where the resolution of the first video stream is greater than that of the second video stream; and sending the first video stream and the second video stream to the head-mounted device.
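The down-sampling step in the second aspect is not tied to any particular filter in this disclosure. As an illustrative sketch only, the code below halves a single-channel image with a simple 2x2 box filter; the function name and the choice of box filtering are assumptions made for this example, not taken from the embodiments.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Illustrative only: down-sample a single-channel w x h image by a factor
// of two using a 2x2 box filter (averaging each 2x2 neighbourhood).
std::vector<uint8_t> downsample2x(const std::vector<uint8_t>& src, int w, int h) {
    std::vector<uint8_t> dst((w / 2) * (h / 2));
    for (int y = 0; y < h / 2; ++y) {
        for (int x = 0; x < w / 2; ++x) {
            // Average the 2x2 source neighbourhood for one destination pixel.
            int sum = src[(2 * y) * w + 2 * x]
                    + src[(2 * y) * w + 2 * x + 1]
                    + src[(2 * y + 1) * w + 2 * x]
                    + src[(2 * y + 1) * w + 2 * x + 1];
            dst[y * (w / 2) + x] = static_cast<uint8_t>(sum / 4);
        }
    }
    return dst;
}
```

In a real pipeline this step would run on the control device before encoding the second (low-resolution, panoramic) video stream.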
In a third aspect, the present application provides a head-mounted device comprising: a transceiving unit, configured to receive a first video stream sent by a control device, where the first video stream is a video stream corresponding to a first viewport of the head-mounted device; the transceiver unit is further configured to receive a second video stream sent by the control device, where the second video stream is a panoramic video stream, the panoramic video includes a video corresponding to the first viewport, and a resolution of the first video stream is greater than a resolution of the second video stream; a processing unit for synthesizing a first image from the first video stream and the second video stream; the processing unit is further used for extracting an image corresponding to a second view port from the first image according to the position of the second view port of the head-mounted device, wherein the second view port is a current view port of the head-mounted device; and the display unit is used for displaying the image corresponding to the second viewport.
In a fourth aspect, the present application provides a control apparatus comprising: a processing unit for determining a first video stream from a position of a first viewport of the head-mounted device and the first panoramic image; the processing unit is further configured to down-sample the first panoramic image; the processing unit is further configured to determine a second video stream according to the down-sampled first panoramic image, where a resolution of the first video stream is greater than a resolution of the second video stream; a transceiving unit, configured to send the first video stream and the second video stream to a head-mounted device.
In a fifth aspect, the present application provides an electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform any one of the first aspect, the second aspect, the possible implementations of the first aspect, or the possible implementations of the second aspect, via execution of the executable instructions.
In a sixth aspect, this application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements any one of the first aspect, the second aspect, the possible implementations of the first aspect, or the possible implementations of the second aspect.
In a seventh aspect, an embodiment of the present application provides a computer program product, including a computer program, where the computer program, when executed by a processor, implements any one of the first aspect, the second aspect, the possible implementations of the first aspect, or the possible implementations of the second aspect.
According to the data processing method provided by the application, the head-mounted device receives two video streams from the control device. The first video stream corresponds to a first viewport of the head-mounted device, and the head-mounted device can decode it to obtain a first viewport image; the second video stream is a panoramic video stream, the panoramic video includes the video corresponding to the first viewport, and the head-mounted device can decode it to obtain a background image. If rotation of the head-mounted device causes the viewport to change, for example from the first viewport to a second viewport, the image corresponding to the second viewport at its new position can be extracted from the synthesized first image, so that "black borders" are avoided and the user's experience is improved. Moreover, the resolution of the first video stream is greater than that of the second video stream; that is, the first video stream uses a high resolution and the second video stream uses a low resolution, so the pressure on network bandwidth is not increased.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings can be obtained by those skilled in the art without creative efforts. In the drawings:
fig. 1 shows a spherical model of a VR video;
FIG. 2 is a schematic block diagram of a user perspective;
FIG. 3 is a schematic block diagram of a viewport of a headset;
fig. 4 shows an example of partitioning a VR video;
FIG. 5 is a schematic diagram of a system scenario provided in an exemplary embodiment of the present application;
fig. 6 is a schematic flow chart of a data processing method according to an exemplary embodiment of the present application;
FIG. 7 is a composite first image provided by an exemplary embodiment of the present application;
FIG. 8 is a schematic view of a viewport update of a headset;
fig. 9 is a schematic block diagram of extracting an image corresponding to a second viewport from a synthesized first image according to an exemplary embodiment of the present application;
FIG. 10 is a flowchart illustrating a data processing method according to an exemplary embodiment of the present application;
FIG. 11 is a panoramic image coordinate example provided by an exemplary embodiment of the present application;
FIG. 12 is a schematic flow chart diagram of a data processing method according to an exemplary embodiment of the present application;
fig. 13 is a schematic structural diagram of a head-mounted device according to an exemplary embodiment of the present application;
FIG. 14 is a schematic structural diagram of a control device according to an exemplary embodiment of the present application;
fig. 15 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
The terms "first" and "second," and the like in the description and in the claims, and in the drawings of the embodiments of the present application, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged under appropriate circumstances such that embodiments of the application described herein may be implemented, for example, in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some terms in the embodiments of the present application are explained below to facilitate understanding by those skilled in the art.
VR video: also known as panoramic video. VR video can show viewers a 360-degree panoramic shot, giving users a sense of immersion. A typical VR video uses a spherical model, as shown in fig. 1. The left diagram in fig. 1 shows the total footage of the VR video, a 360-degree panoramic view. Watching a VR video is equivalent to standing at the center of the sphere and looking outward; because the human field of view is limited, the user can only see a small part of the 360-degree sphere at any moment, as in the middle diagram of fig. 1, which shows the user's visible footage. When the user rotates the viewing angle, other parts of the spherical surface become visible; for example, the right diagram of fig. 1 shows that when the user rotates the viewing angle, the user's field of view changes and the visible image changes accordingly.
A panoramic video displays on screen only the pixels within the user's field of view, whereas an ordinary video displays all of its pixels on screen. Take 4K video as an example, where 4K denotes a video image resolution of 3840 x 2160. When an ordinary device such as a 4K television plays a 4K video, all 3840 x 2160 pixels are shown on the television screen; when a VR headset plays a 4K panoramic video, only the part within the user's field of view is shown on the headset's screen. As shown in fig. 2, a schematic diagram of the user's viewing angle, the angle α is the horizontal field-of-view angle and the angle β is the vertical field-of-view angle; together they define the user's field of view, and the VR headset displays only that part on the screen.
The aspect ratio of a panoramic video is generally 2:1, so the resolution of a 4K panoramic sphere is 4096 x 2048 or 3840 x 1920. Assuming the field of view of the VR headset is 90 degrees both horizontally and vertically (typical of most current VR headsets), the image a 4K panoramic video actually displays on screen has a resolution of only about 960 x 540, roughly between standard definition and high definition if viewed normally on a mobile phone.
6DoF: six degrees of freedom. An object has six degrees of freedom in space, namely translation along the three orthogonal coordinate axes x, y, and z, and rotation about those three axes.
Viewport: the area of the VR video that the user sees. As shown in fig. 3, a schematic block diagram of a viewport, the solid box is the VR panoramic video and the dashed box is the user's viewport.
When a user watches a VR video, a "black border" phenomenon may occur, typically when the user turns their head (quickly). The reason is that the VR headset warps the base rendered picture to the new viewing angle according to the change in head posture. This rendering process depends on the headset's decoding efficiency, network delay, GPU (graphics processing unit) rendering performance, and so on; if the headset has not yet obtained the picture content for the newly exposed part of the viewing angle, "black borders" appear in that newly exposed region.
Two methods are currently used to address the "black border" phenomenon:
1. Increasing the resolution of the video. In theory, as long as the definition of the whole frame reaches a certain level, the VR viewing experience improves correspondingly. However, simply increasing the resolution multiplies the complexity of encoding, transmission, and decoding. At present, very few mobile devices can decode a full 8K image in real time, let alone resolutions above 8K. The performance of existing hardware therefore quickly hits a bottleneck, and such high-definition streaming media cannot be played. Simply raising the video resolution to solve the "black border" phenomenon thus causes a series of problems.
2. Dividing the original image into multiple regions. In a VR video, the user can only see the region within the field of view, so only the image of that region is decoded for the user to watch; when the user's viewing angle changes, the region corresponding to the new viewing angle is updated.
Since current encoding technology essentially encodes rectangular image blocks, the original image can be divided into 4 x 4 blocks, each encoded independently. If the original is 8K (7680 x 4320), each divided tile is exactly 1920 x 1080. It would then seem sufficient to determine where the user's current viewport is and which tiles it covers, and to decode only those tiles. This does solve part of the problem and reduces some of the decoding burden, but the division is less than ideal. As shown in fig. 4, an example of dividing a VR video, the dashed box is the user's viewport. If the viewport is at the position shown in fig. 4, nine tiles (i.e. 9/16 of the full image) must be decoded simultaneously; in reality, however, the user's main view lies on the single central tile of the nine, and the eight edge tiles, although fully decoded, contribute only small corner slivers to the rendered picture. Much decoding resource is still wasted. Subdividing the picture further would require more decoders, and the number of hardware decoders on a VR headset is generally limited.
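To make the tiling arithmetic above concrete, the following sketch (illustrative only; the function names and the top-left rectangle convention are assumptions for this example) counts how many 1920 x 1080 tiles of an 8K frame a given viewport rectangle overlaps:

```cpp
#include <algorithm>
#include <cassert>

// Viewport rectangle in panorama pixel coordinates (top-left origin).
struct Rect { int x, y, w, h; };

// Illustrative only: an 8K (7680 x 4320) frame split into a 4x4 grid of
// 1920 x 1080 tiles. Return how many tiles the viewport rectangle overlaps.
int tilesCovered(const Rect& viewport,
                 int frameW = 7680, int frameH = 4320, int grid = 4) {
    const int tileW = frameW / grid;   // 1920
    const int tileH = frameH / grid;   // 1080
    int firstCol = std::max(0, viewport.x / tileW);
    int lastCol  = std::min(grid - 1, (viewport.x + viewport.w - 1) / tileW);
    int firstRow = std::max(0, viewport.y / tileH);
    int lastRow  = std::min(grid - 1, (viewport.y + viewport.h - 1) / tileH);
    return (lastCol - firstCol + 1) * (lastRow - firstRow + 1);
}
```

A 3840 x 2160 viewport that is not tile-aligned straddles three rows and three columns, reproducing the nine-tile case described above, while a perfectly aligned 1920 x 1080 viewport covers only one tile.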
The present application provides a data processing method to solve the technical problem of "black borders" appearing in a VR video, caused by changes in the user's viewing angle, when the user wears a head-mounted device to watch the video.
The following describes the technical solutions of the present application and how to solve the above technical problems with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 5 is a schematic structural diagram of a system scenario provided in an exemplary embodiment of the present application; the scenario is a live-broadcast scene. The system comprises multiple groups of ultra-high-definition cameras 110, a control device 120, a cloud server 130, and a head-mounted device 140, which may be connected to one another through wireless or wired communication. The multiple groups of ultra-high-definition cameras 110 capture video images of the live venue and transmit them to the control device 120; the control device 120 synthesizes a local panoramic VR image from the captured video images using an algorithm and extracts the image corresponding to the viewport of the head-mounted device 140. The control device 120 transmits the image corresponding to the viewport of the head-mounted device 140 to the cloud server 130, and the cloud server transmits the VR image to the head-mounted device 140. When the viewport of the head-mounted device 140 changes, the head-mounted device 140 sends viewport-position update information to the cloud server 130; the cloud server 130 receives it and forwards it to the control device 120, and the control device 120 extracts the image corresponding to the updated viewport of the head-mounted device 140 from the panoramic VR image according to the update information.
It should be understood that this system is an example only and does not limit the present application in any way. Other variations of the system are possible. For example, the system may include the multiple groups of ultra-high-definition cameras 110, the cloud server 130, and the head-mounted device 140, with part of the functions of the control device 120 implemented by the cloud server 130; as another example, the system may include the multiple groups of ultra-high-definition cameras 110, the control device 120, and the head-mounted device 140, with part of the functions of the cloud server 130 implemented by the control device 120.
The implementation principle and the interaction process of the components in the embodiment of the system, such as the control device 120, the cloud server 130 and the head-mounted device 140, can be referred to the following description of the method embodiments.
Fig. 6 is a flowchart of a data processing method 200 according to an exemplary embodiment of the present application, where an execution subject of the method may be a head-mounted device. The method at least comprises the following steps:
201, the head-mounted device receives a first video stream sent by the control device, where the first video stream is a video stream corresponding to a first viewport of the head-mounted device.
202, the head-mounted device receives a second video stream sent by the control device, where the second video stream is a panoramic video stream, the panoramic video includes a video corresponding to the first viewport, and the resolution of the first video stream is greater than that of the second video stream.
203, the head-mounted device synthesizes a first image from the first video stream and the second video stream.
204, according to the position of a second viewport of the head-mounted device, the head-mounted device extracts an image corresponding to the second viewport from the first image, where the second viewport is the current viewport of the head-mounted device.
205, the head-mounted device displays the image corresponding to the second viewport.
In this data processing method, the head-mounted device receives two video streams from the control device. The first video stream corresponds to a first viewport of the head-mounted device, and the head-mounted device can decode it to obtain a first viewport image; the second video stream is a panoramic video stream, the panoramic video includes the video corresponding to the first viewport, and the head-mounted device can decode it to obtain a background image. If rotation of the head-mounted device causes the viewport to change, for example from the first viewport to a second viewport, the image corresponding to the second viewport at its new position can be extracted from the synthesized first image, so that "black borders" are avoided and the user's experience is improved. Moreover, the resolution of the first video stream is greater than that of the second video stream; that is, the first video stream uses a high resolution and the second video stream uses a low resolution, so the pressure on network bandwidth is not increased.
In step 203, the head-mounted device synthesizing the first image from the first video stream and the second video stream includes: the head-mounted device decodes the first video stream to obtain a viewport image; the head-mounted device decodes the second video stream to obtain a background image; and the head-mounted device synthesizes the first image from the viewport image and the background image.
Specifically, the head-mounted device synthesizing the first image from the viewport image and the background image may include: the head-mounted device pastes the decoded viewport image into the background image, according to the position of the first viewport and the size of the viewport region, to synthesize the first image. The code for synthesizing the first image is as follows (this piece of code is for example only):
m_d3dContext->CopySubresourceRegion(sharedResource_.Get(), 0, x, y, z, winResource_.Get(), 0, &pRegion);
wherein:
Microsoft::WRL::ComPtr<ID3D11Resource> sharedResource_; // destination: the background image obtained by decoding the second video stream
Microsoft::WRL::ComPtr<ID3D11Texture2D> winResource_; // source: the "viewport" image obtained by decoding the first video stream
pRegion: a D3D11_BOX specifying the region (size) of the "viewport" image to paste;
x, y, z: the destination coordinates at which the "viewport" image is pasted.
Optionally, because the scale of the original video differs from that of the head-mounted device's display screen, the head-mounted device may perform a zoom operation when obtaining the background image.
In order to understand the embodiment more clearly, the synthesized first image is explained below with reference to fig. 7, which shows a synthesized first image. Fig. 7 shows a panoramic image in which the rectangular frame is the first viewport of the head-mounted device. The image inside the first viewport is the viewport image, of size 3840 x 1920; the rest is the background image, of size 7680 x 3840. The first video stream corresponding to the viewport image is encoded at 8K, while the second video stream corresponding to the background image has a lower original resolution, such as 1080p or 2K; the head-mounted device enlarges the image decoded from the second video stream to 8K resolution. In the image the head-mounted device synthesizes for subsequent rendering, the part inside the rectangular frame is sharp, while the rest (outside the frame), limited by the original 1080p or 2K resolution, is not as sharp.
After the first image is synthesized, the head-mounted device needs to determine whether the position of the viewport has changed. If it has not changed, that is, the viewport is still the first viewport, the head-mounted device can extract the corresponding viewport image from the synthesized first image according to the position of the first viewport and display it.
After the first image is synthesized, if the head-mounted device determines that the position of the current viewport has changed, that is, from the first viewport to the second viewport, step 204 is performed: the head-mounted device extracts the image corresponding to the second viewport from the first image according to the position of the second viewport.
Optionally, when the viewport of the head-mounted device changes from the first viewport to the second viewport, the method further includes: sending an update message to the control device, where the update message includes the motion data of the head-mounted device, or the motion data of the head-mounted device together with the resolution of the head-mounted device's display screen.
Specifically, when wearing the head-mounted device the user may turn, raise, or lower their head, causing the viewport of the head-mounted device to change. As shown in fig. 8, a viewport-update diagram of the head-mounted device, the small squares represent different viewports. When the viewport changes, the head-mounted device needs to send an update message to the control device, so that the control device can determine the updated viewport from the message and then transmit the video stream corresponding to the second viewport to the head-mounted device.
Optionally, the motion data of the head-mounted device comprises 6DoF data.
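As a hedged sketch of how the control device might turn the rotational part of the 6DoF data into a viewport position on the panorama: the code below maps a yaw/pitch pose (in degrees) onto an equirectangular image. The function name, the degree convention, and the equirectangular assumption are all illustrative choices, not taken from the embodiments.

```cpp
#include <cassert>

// Centre of the viewport in panorama pixel coordinates.
struct ViewportCentre { int cx, cy; };

// Illustrative only: map a yaw/pitch pose (degrees) onto an equirectangular
// panorama of panoW x panoH pixels. Yaw 0 / pitch 0 faces the panorama centre.
ViewportCentre centreFromPose(float yawDeg, float pitchDeg, int panoW, int panoH) {
    int cx = static_cast<int>((yawDeg / 360.0f + 0.5f) * panoW);
    cx = ((cx % panoW) + panoW) % panoW;   // yaw wraps around 360 degrees
    int cy = static_cast<int>((0.5f - pitchDeg / 180.0f) * panoH);
    if (cy < 0) cy = 0;                    // pitch clamps at the poles
    if (cy >= panoH) cy = panoH - 1;
    return {cx, cy};
}
```

With the resulting centre and the known viewport size, the control device could then crop the high-resolution region for the updated first video stream.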
Optionally, the update message includes a request frame, and the request frame is used to request that the control device send the video stream corresponding to the second viewport.
Specifically, when the viewport changes from the first viewport to the second viewport, the head-mounted device sends an update message to the control device to inform it that the viewport has changed; the control device then needs to determine an updated first video stream according to the position of the second viewport and the panoramic image, and send it to the head-mounted device. However, due to network transmission delay and the decoding efficiency of the head-mounted device, the updated first video stream may not yet have been transmitted to the head-mounted device, or may not yet have been decoded successfully. At this time, the image corresponding to the second viewport may be extracted from the synthesized first image for display, thereby avoiding the "black edge" phenomenon. For example, fig. 7 shows a first image synthesized by the head-mounted device from the first video stream and the second video stream corresponding to the first viewport; when the viewport changes from the first viewport to the second viewport and the head-mounted device has not yet acquired the video stream corresponding to the second viewport, the head-mounted device may extract the image corresponding to the second viewport from the first image shown in fig. 7 for display. Fig. 9 shows a schematic diagram of extracting the image corresponding to the second viewport from the synthesized first image. As shown in fig. 9, the image in frame a corresponds to the second viewport and the image in frame b corresponds to the first viewport. The portion of frame a that does not overlap frame b is not entirely sharp, because it comes from the downsampled, low-resolution background image; however, the human eye is relatively insensitive to image detail at the edge of the viewport, so the slight blur at the edge of the composite image is hard for the user to perceive, and the "black edge" phenomenon is thus resolved.
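The synthesis and fallback extraction described above can be sketched as follows. This is a hedged NumPy sketch: nearest-neighbour upscaling, the integer `scale` factor, and the function names are assumptions standing in for whatever upscaling and compositing the device actually performs:

```python
import numpy as np

def synthesize_first_image(hi_vp, lo_pano, vp_left, vp_top, scale):
    # Upscale the downsampled panorama (nearest neighbour) back to full
    # size and overwrite the first-viewport region with the
    # high-resolution tile decoded from the first video stream.
    background = lo_pano.repeat(scale, axis=0).repeat(scale, axis=1)
    h, w = hi_vp.shape[:2]
    background[vp_top:vp_top + h, vp_left:vp_left + w] = hi_vp
    return background

def crop(image, left, top, w, h):
    # When the viewport moves before the updated streams arrive, the
    # second-viewport region is cropped from this composite image, so the
    # user sees a partly lower-resolution picture instead of black edges.
    return image[top:top + h, left:left + w]
```

Any part of the crop outside the original first viewport comes from the upscaled background, which is exactly the slightly blurred edge region that fig. 9's frame a illustrates.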
Optionally, the center coordinate of the initial viewport of the head-mounted device is the center point of the panoramic video.
Presetting the center coordinate of the initial viewport of the head-mounted device as the center point of the panoramic video reduces message overhead; for example, when the head-mounted device is first started, it does not need to report initial viewport information to the control device. Moreover, presetting the center coordinate of the initial viewport as the center point of the panoramic video provides a better viewing effect and thus a better user experience.
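Under that convention, both sides can derive the initial viewport independently from the panorama and viewport dimensions alone, with no start-up message. A minimal sketch (the rectangle representation is an assumption):

```python
def initial_viewport_box(pano_w, pano_h, vp_w, vp_h):
    # Center the initial viewport on the panorama's center point, so the
    # headset need not report any viewport information at start-up.
    cx, cy = pano_w // 2, pano_h // 2
    left, top = cx - vp_w // 2, cy - vp_h // 2
    return left, top, vp_w, vp_h
```

For an 8K panorama (7680 × 3840) and a 3840 × 2160 viewport this gives a crop starting at (1920, 840).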
Optionally, the method further includes: receiving an updated first video stream sent by the control device, wherein the updated first video stream comprises a video stream corresponding to the second viewport; receiving an updated second video stream sent by the control device, wherein the updated second video stream is a panoramic video stream, the panoramic video comprises a video corresponding to the second viewport, and the resolution of the updated first video stream is greater than that of the updated second video stream; synthesizing a second image from the updated first video stream and the updated second video stream; according to the position of a second viewport of the head-mounted device, extracting an image corresponding to the second viewport from a second image; and displaying the image corresponding to the second viewport.
In step 205, the head-mounted device displaying the image corresponding to the second viewport includes: displaying the image corresponding to the second viewport, rendered in real time, on the display screen of the head-mounted device according to the screen-on signal.
Fig. 10 is a flowchart illustrating a data processing method 300 according to an exemplary embodiment of the present application, where an execution subject of the method may be a control device. The method at least comprises the following steps:
301, the control device determines a first video stream according to the position of the first viewport of the head-mounted device and the first panoramic image.
302, the control device down-samples the first panoramic image.
303, the control device determines a second video stream according to the down-sampled first panoramic image, wherein the resolution of the first video stream is greater than that of the second video stream.
304, the control device transmits the first video stream and the second video stream to the head-mounted device.
In this data processing method, the control device sends two video streams to the head-mounted device. The first video stream is the video stream corresponding to the first viewport of the head-mounted device, and the head-mounted device can decode it to obtain the first viewport image; the second video stream is a panoramic video stream whose panoramic video includes the video corresponding to the first viewport, and the head-mounted device can decode it to obtain a background image. If the head-mounted device rotates and the viewport changes, for example from the first viewport to the second viewport, the image corresponding to the second viewport at its new position can be extracted from the synthesized first image, avoiding the "black edge" phenomenon and improving the user's experience. Moreover, the resolution of the first video stream is greater than that of the second video stream; that is, the first video stream uses a high resolution while the second video stream uses a low resolution, so the pressure on network bandwidth is not increased.
For example, the control device acquires multiple sets of high-definition video images, synthesizes a local ultra-high-definition panoramic image with a resolution of 8K or above using an algorithm, and intercepts an image the same size as the first viewport from the corresponding position of the 8K panoramic image and encodes it; this portion of the video stream may be marked as the first video stream. The control device then downsamples the 8K panoramic image to obtain an image with a resolution of 1080p or 2K, encodes it, marks this portion as the second video stream, and transmits the first video stream and the second video stream to the head-mounted device.
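Per-frame, the two streams in this example can be sketched as below. Simple decimation is used in place of a real resampling filter, and encoding is omitted; the function name and arguments are assumptions:

```python
import numpy as np

def make_streams(pano, vp_left, vp_top, vp_w, vp_h, factor):
    # First stream: the full-resolution crop at the viewport position.
    first = pano[vp_top:vp_top + vp_h, vp_left:vp_left + vp_w]
    # Second stream: the whole panorama downsampled by `factor`; simple
    # decimation stands in for a proper resampling filter here.
    second = pano[::factor, ::factor]
    return first, second
```

For a 7680 × 3840 panorama, a factor of 4 yields a 1920 × 960 background frame, i.e. roughly the 1080p/2K class mentioned above.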
Optionally, the method further includes: receiving an update message sent by the head-mounted device, wherein the update message comprises the motion data of the head-mounted device, or the update message comprises the motion data of the head-mounted device and the resolution of the display screen of the head-mounted device.
Optionally, the center coordinate of the initial viewport of the head-mounted device is the center point of the panoramic video.
Optionally, the method further includes: the control device determines the position of a second view port of the head-mounted device according to the updating message; determining an updated first video stream according to the position of the second viewport and the second panoramic image, wherein the updated first video stream comprises a video stream corresponding to the second viewport; down-sampling the second panoramic image; determining an updated second video stream according to the down-sampled second panoramic image, wherein the resolution of the updated first video stream is greater than that of the updated second video stream; the updated first video stream and the updated second video stream are transmitted to the head-mounted device.
Optionally, the controlling device determines a position of the second viewport of the head-mounted device according to the update message, including: the position of the second viewport is determined based on the motion data of the head-mounted device and a resolution of a display screen of the head-mounted device.
It should be understood that the position of the viewport is generally determined by the center coordinate of the viewport and its resolution. If the resolution is not switched during use of the head-mounted device, the control device only needs to determine the center coordinate of the viewport according to the 6DoF data of the head-mounted device.
For example, taking the synthesized 8K panoramic image, the four vertex positions in the Cartesian coordinate system are: lower left (0, 0), upper left (0, 3840), upper right (7680, 3840), and lower right (7680, 0), as shown in fig. 11, which is an example of panoramic image coordinates.
Generally, only the following information needs to be marked: top: 0, left: 0. This is the coordinate position information of the panoramic image.
Taking the second viewport at the center of the panoramic image as an example, with a second-viewport resolution of 3840 × 2160, the center coordinate position of the second viewport is (3840, 1920).
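Combining the center coordinate and the resolution into a crop rectangle can be sketched as follows, using the lower-left-origin Cartesian convention of fig. 11 (the clamping behaviour and return format are assumptions):

```python
def viewport_from_center(center, vp_w, vp_h, pano_w=7680, pano_h=3840):
    # Derive the viewport rectangle from its center coordinate and its
    # resolution, clamping so the rectangle stays inside the panorama.
    # Coordinates use the Cartesian convention with the origin at the
    # lower-left corner of the panoramic image, as in fig. 11.
    cx, cy = center
    left = min(max(cx - vp_w // 2, 0), pano_w - vp_w)
    bottom = min(max(cy - vp_h // 2, 0), pano_h - vp_h)
    return left, bottom, left + vp_w, bottom + vp_h
```

For the centered 3840 × 2160 second viewport above, this gives the rectangle (1920, 840) to (5760, 3000).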
While the data transmission method provided in the present application has been described above from a single-side perspective with reference to fig. 6 and fig. 10, it should be understood that the control device described above may be a control device providing a cloud service, or a device not providing a cloud service. For a clearer understanding of the present application, the data transmission method provided by the present application is described below in an interactive manner in conjunction with the method 400. In the method 400, the execution subjects involved include a control device, a cloud server, and a head-mounted device. The control device is mainly configured to process the acquired high-definition images to obtain the video streams and send them to the head-mounted device through the cloud server, and the cloud server provides data services such as video forwarding and information forwarding. A detailed description follows.
Fig. 12 is a flowchart illustrating a data processing method 400 according to an exemplary embodiment of the present application. The method at least comprises the following steps:
401, the control device acquires multiple sets of video images, for example from multiple high-definition cameras, and synthesizes a local ultra-high-definition (e.g., 8K resolution or above) first panoramic image using an algorithm.
402, the control device determines a first video stream according to the position of the first viewport of the head-mounted device and the first panoramic image.
403, the control device down-samples the first panoramic image.
And 404, the control device determines a second video stream according to the down-sampled first panoramic image, wherein the resolution of the first video stream is greater than that of the second video stream.
405, the control device sends the first video stream and the second video stream to the cloud server.
406, the cloud server receives the first video stream and the second video stream.
407, the cloud server sends the first video stream and the second video stream to the head-mounted device.
408, the head-mounted device receives the first video stream and the second video stream sent by the cloud server.
409, the head-mounted device synthesizes a first image from the first video stream and the second video stream.
410, the head-mounted device determines whether the position of the first viewport has changed.
411, if the position of the first viewport of the head-mounted device has not changed, the head-mounted device extracts the image corresponding to the first viewport from the first image according to the position of the first viewport.
412, the head-mounted device displays the image corresponding to the first viewport.
413, if the first viewport of the head-mounted device has changed to the second viewport, the head-mounted device extracts the image corresponding to the second viewport from the first image according to the position of the second viewport.
414, the head-mounted device displays the image corresponding to the second viewport.
415, the head-mounted device sends an update message to the cloud server.
It should be understood that the update message may be understood with reference to the update messages in the method 200 and the method 300, and is not described in detail herein.
416, the cloud server receives the update message sent by the head-mounted device and sends it to the control device.
417, the control device determines the position of the second viewport according to the update message.
It should be understood that step 417 further includes the control device determining the updated first video stream and the second video stream according to the position of the second viewport and the second panoramic image, and sending the updated first video stream and the updated second video stream to the head-mounted device via the cloud server. This process may be understood with reference to the corresponding description in method 300.
It should also be understood that the functions of the control device in the method 400 may be implemented by being integrated into a cloud server.
Fig. 13 is a schematic structural diagram of a head-mounted device 500 according to an exemplary embodiment of the present application.
Wherein the head-mounted device 500 comprises: a transceiving unit 501, configured to receive a first video stream sent by a control device, where the first video stream is a video stream corresponding to a first viewport of the head-mounted device; the transceiving unit 501 is further configured to receive a second video stream sent by the control device, where the second video stream is a panoramic video stream, the panoramic video includes a video corresponding to the first viewport, and a resolution of the first video stream is greater than a resolution of the second video stream; a processing unit 502 for synthesizing a first image from the first video stream and the second video stream; the processing unit 502 further extracts an image corresponding to a second viewport of the head-mounted device from the first image according to a position of the second viewport, where the second viewport is a current viewport of the head-mounted device; a display unit 503, configured to display an image corresponding to the second viewport.
Optionally, the transceiver unit 501 is further configured to: sending an update message to the control device, the update message including motion data of the head-mounted device, or the update message including motion data of the head-mounted device and a resolution of a display screen of the head-mounted device.
Optionally, the center coordinate of the initial view port of the head-mounted device is a center point of the panoramic video.
Optionally, the transceiver unit 501 is further configured to: receiving an updated first video stream sent by a control device, wherein the updated first video stream comprises a video stream corresponding to the second viewport; receiving an updated second video stream sent by the control device, where the updated second video stream is a panoramic video stream, the panoramic video includes a video corresponding to the second viewport, and a resolution of the updated first video stream is greater than a resolution of the updated second video stream; the processing unit 502 is further configured to synthesize a second image from the updated first video stream and the updated second video stream; according to the position of a second view port of the head-mounted device, extracting an image corresponding to the second view port from the second image; the display unit 503 is further configured to display an image corresponding to the second viewport.
Fig. 14 is a schematic structural diagram of a control device 600 according to an exemplary embodiment of the present application.
Wherein the control apparatus 600 comprises: a processing unit 601, configured to determine a first video stream according to a position of a first viewport of a head-mounted device and a first panoramic image; the processing unit 601 is further configured to down-sample the first panoramic image; the processing unit 601 is further configured to determine a second video stream according to the down-sampled first panoramic image, where a resolution of the first video stream is greater than a resolution of the second video stream; a transceiving unit 602, configured to send the first video stream and the second video stream to a headset.
Optionally, the transceiver unit 602 is further configured to: and receiving an update message sent by the head-mounted device, wherein the update message comprises the motion data of the head-mounted device, or the update message comprises the motion data of the head-mounted device and the resolution of the display screen of the head-mounted device.
Optionally, the center coordinate of the initial viewport of the head-mounted device is a center point of the panoramic video.
Optionally, the processing unit 601 is further configured to: determining a location of a second viewport of the headset according to the update message; determining an updated first video stream according to the position of the second viewport and the second panoramic image, wherein the updated first video stream comprises a video stream corresponding to the second viewport; down-sampling the second panoramic image; determining an updated second video stream according to the down-sampled second panoramic image, wherein the resolution of the updated first video stream is greater than that of the updated second video stream; the transceiving unit 602 is further configured to transmit the updated first video stream and the updated second video stream to the head-mounted device.
It is to be understood that apparatus embodiments and method embodiments may correspond to one another and that similar descriptions may refer to method embodiments. To avoid repetition, further description is omitted here. Specifically, the apparatus may perform the method embodiment, and the foregoing and other operations and/or functions of each module in the apparatus are respectively corresponding flows in each method in the method embodiment, and for brevity, are not described again here.
The apparatus of the embodiments of the present application is described above in connection with the drawings from the perspective of functional modules. It should be understood that the functional modules may be implemented by hardware, by instructions in software, or by a combination of hardware and software modules. Specifically, the steps of the method embodiments in the present application may be implemented by integrated logic circuits of hardware in a processor and/or instructions in the form of software, and the steps of the method disclosed in conjunction with the embodiments in the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. Alternatively, the software modules may be located in random access memory, flash memory, read only memory, programmable read only memory, electrically erasable programmable memory, registers, and the like, as is well known in the art. The storage medium is located in a memory, and a processor reads information in the memory and completes the steps in the above method embodiments in combination with hardware thereof.
Fig. 15 is a schematic block diagram of an electronic device provided in an embodiment of the present application, where the electronic device 700 includes:
a memory 701 and a processor 702, the memory 701 being adapted to store a computer program and to transfer the program code to the processor 702. In other words, the processor 702 may call and execute a computer program from the memory 701 to implement the method in the embodiment of the present application.
For example, the processor 702 may be configured to perform the above-described method embodiments according to instructions in the computer program.
In some embodiments of the present application, the processor 702 may include, but is not limited to:
general purpose processors, digital Signal Processors (DSPs), application Specific Integrated Circuits (ASICs), field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components, and the like.
In some embodiments of the present application, the memory 701 includes, but is not limited to:
volatile memory and/or non-volatile memory. The non-volatile Memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash Memory. Volatile Memory can be Random Access Memory (RAM), which acts as external cache Memory. By way of example, and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), dynamic Random Access Memory (DRAM), synchronous Dynamic Random Access Memory (SDRAM), double Data Rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), SLDRAM (Synchronous link DRAM), and Direct Rambus RAM (DR RAM).
In some embodiments of the present application, the computer program may be partitioned into one or more modules, which are stored in the memory 701 and executed by the processor 702 to perform the methods provided herein. The one or more modules may be a series of computer program instruction segments capable of performing certain functions, the instruction segments describing the execution of the computer program in the electronic device.
As shown in fig. 15, the electronic device may further include:
a transceiver 703. The transceiver 703 may be connected to the processor 702 or the memory 701.
The processor 702 may control the transceiver 703 to communicate with other devices, and specifically, may transmit information or data to the other devices or receive information or data transmitted by the other devices. The transceiver 703 may include a transmitter and a receiver. The transceiver 703 may further include an antenna, and the number of antennas may be one or more.
It should be understood that the various components in the electronic device are connected by a bus system that includes a power bus, a control bus, and a status signal bus in addition to a data bus.
The present application also provides a computer storage medium having a computer program stored thereon, which, when executed by a computer, enables the computer to perform the method of the above-described method embodiments. Alternatively, the present application also provides a computer program product containing instructions, which when executed by a computer, cause the computer to execute the method of the above method embodiment.
When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The procedures or functions described in accordance with the embodiments of the present application occur, in whole or in part, when the computer program instructions are loaded and executed on a computer. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored on a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via wire (e.g., coaxial cable, fiber optic, digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that includes one or more of the available media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a Digital Video Disc (DVD)), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
According to one or more embodiments of the present application, there is provided a method of data processing, adapted for a head-mounted device, including:
receiving a first video stream sent by a control device, where the first video stream is a video stream corresponding to a first viewport of the head-mounted device;
receiving a second video stream sent by the control device, where the second video stream is a panoramic video stream, the panoramic video includes a video corresponding to the first viewport, and a resolution of the first video stream is greater than a resolution of the second video stream;
synthesizing a first image from the first video stream and the second video stream;
according to the position of a second view port of the head-mounted device, extracting an image corresponding to the second view port from the first image, wherein the second view port is a current view port of the head-mounted device;
and displaying an image corresponding to the second viewport.
According to one or more embodiments of the present application, when the view port of the head mounted device is changed from the first view port to the second view port, the method further comprises:
sending an update message to the control device, the update message including motion data of a head-mounted device, or the update message including motion data of a head-mounted device and a resolution of a display screen of the head-mounted device.
According to one or more embodiments of the present application, the center coordinates of the initial viewport of the headset are the center points of the panoramic video.
According to one or more embodiments of the present application, the method further comprises: receiving an updated first video stream sent by a control device, wherein the updated first video stream comprises a video stream corresponding to the second viewport;
receiving an updated second video stream sent by the control device, where the updated second video stream is a panoramic video stream, the panoramic video includes a video corresponding to the second viewport, and a resolution of the updated first video stream is greater than a resolution of the updated second video stream;
synthesizing a second image from the updated first video stream and the updated second video stream;
according to the position of a second viewport of the head-mounted device, extracting an image corresponding to the second viewport from the second image;
and displaying an image corresponding to the second viewport.
According to one or more embodiments of the present application, there is provided a data processing method, adapted to control a device, including:
determining a first video stream from a location of a first viewport of a headset and a first panoramic image;
down-sampling the first panoramic image;
determining a second video stream according to the first panoramic image after down-sampling, wherein the resolution of the first video stream is greater than that of the second video stream;
transmitting the first video stream and the second video stream to a head-mounted device.
According to one or more embodiments of the present application, the method further comprises:
receiving an update message sent by the head-mounted device, wherein the update message comprises the motion data of the head-mounted device, or the update message comprises the motion data of the head-mounted device and the resolution of the display screen of the head-mounted device.
According to one or more embodiments of the present application, the center coordinate of the initial viewport of the headset is a center point of the panoramic video.
According to one or more embodiments of the present application, the method further comprises:
determining a location of a second viewport of the headset device according to the update message;
determining an updated first video stream according to the position of the second viewport and the second panoramic image, wherein the updated first video stream comprises a video stream corresponding to the second viewport;
down-sampling the second panoramic image;
determining an updated second video stream according to the down-sampled second panoramic image, wherein the resolution of the updated first video stream is greater than that of the updated second video stream;
transmitting the updated first video stream and the updated second video stream to a headset.
According to one or more embodiments of the present application, there is provided a head-mounted device including:
a transceiving unit, configured to receive a first video stream sent by a control device, where the first video stream is a video stream corresponding to a first viewport of the head-mounted device;
the transceiver unit is further configured to receive a second video stream sent by the control device, where the second video stream is a panoramic video stream, the panoramic video includes a video corresponding to the first viewport, and a resolution of the first video stream is greater than a resolution of the second video stream;
a processing unit for synthesizing a first image from the first video stream and the second video stream;
the processing unit is further used for extracting an image corresponding to a second view port from the first image according to the position of the second view port of the head-mounted device, wherein the second view port is a current view port of the head-mounted device;
and the display unit is used for displaying the image corresponding to the second viewport.
According to one or more embodiments of the present application, the transceiver unit is further configured to:
sending an update message to the control device, the update message including motion data of a head-mounted device, or the update message including motion data of a head-mounted device and a resolution of a display screen of the head-mounted device.
According to one or more embodiments of the present application, the center coordinate of the initial viewport of the headset is a center point of the panoramic video.
According to one or more embodiments of the present application, the transceiver unit is further configured to:
receiving an updated first video stream sent by a control device, wherein the updated first video stream comprises a video stream corresponding to the second viewport;
receiving an updated second video stream sent by the control device, where the updated second video stream is a panoramic video stream, the panoramic video includes a video corresponding to the second viewport, and a resolution of the updated first video stream is greater than a resolution of the updated second video stream;
the processing unit is further configured to synthesize a second image from the updated first video stream and the updated second video stream;
according to the position of a second viewport of the head-mounted device, extracting an image corresponding to the second viewport from the second image;
the display unit is further configured to display an image corresponding to the second viewport.
According to one or more embodiments of the present application, there is provided a control apparatus including:
a processing unit for determining a first video stream from a position of a first viewport of the head-mounted device and the first panoramic image;
the processing unit is further configured to down-sample the first panoramic image;
the processing unit is further configured to determine a second video stream according to the down-sampled first panoramic image, where a resolution of the first video stream is greater than a resolution of the second video stream;
a transceiving unit, configured to send the first video stream and the second video stream to the head-mounted device.
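The control-device side described above amounts to cropping the viewport region at full resolution and down-sampling the whole panorama; a minimal per-frame sketch under those assumptions (names hypothetical):

```python
import numpy as np

def make_streams(pano_frame, viewport_origin, viewport_size, downsample=4):
    """Produce one frame of each stream from a full-resolution panoramic frame.

    First stream:  full-resolution crop covering the viewport.
    Second stream: whole panorama down-sampled by `downsample` in each axis,
    so its resolution is necessarily lower than the first stream's.
    """
    r0, c0 = viewport_origin
    h, w = viewport_size
    first = pano_frame[r0:r0 + h, c0:c0 + w]         # high-res viewport tile
    second = pano_frame[::downsample, ::downsample]  # low-res full panorama
    return first, second
```

A real encoder would pad the crop beyond the viewport edges and filter before decimating; plain strided slicing keeps the sketch short.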
According to one or more embodiments of the present application, the transceiver unit is further configured to:
receiving an update message sent by the head-mounted device, wherein the update message comprises the motion data of the head-mounted device, or the update message comprises the motion data of the head-mounted device and the resolution of the display screen of the head-mounted device.
According to one or more embodiments of the present application, the center coordinate of the initial viewport of the headset is a center point of the panoramic video.
According to one or more embodiments of the present application, the processing unit is further configured to:
determining a location of a second viewport of the headset from the update message;
determining an updated first video stream according to the position of the second viewport and the second panoramic image, wherein the updated first video stream comprises a video stream corresponding to the second viewport;
down-sampling the second panoramic image;
determining an updated second video stream according to the down-sampled second panoramic image, wherein the resolution of the updated first video stream is greater than that of the updated second video stream;
the transceiver unit is further configured to send the updated first video stream and the updated second video stream to the head-mounted device.
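How the second viewport's position is derived from the motion data is not spelled out in the application; one plausible mapping, assuming an equirectangular panorama whose centre corresponds to yaw 0 / pitch 0 (consistent with the initial-viewport embodiment), is:

```python
def viewport_from_motion(yaw_deg, pitch_deg, pano_w, pano_h, vp_w, vp_h):
    """Map head orientation to the top-left pixel of the viewport in an
    equirectangular panorama. Convention assumed here, not stated in the
    application: yaw/pitch (0, 0) looks at the panorama centre; yaw wraps
    horizontally, pitch is clamped at the poles.
    """
    # Viewport centre in pixels.
    cx = ((0.5 + yaw_deg / 360.0) % 1.0) * pano_w
    cy = pano_h * min(max(0.5 - pitch_deg / 180.0, 0.0), 1.0)
    # Top-left corner: wrap columns, clamp rows so the crop stays in bounds.
    col0 = int(round(cx - vp_w / 2.0)) % pano_w
    row0 = int(round(min(max(cy - vp_h / 2.0, 0.0), pano_h - vp_h)))
    return row0, col0
```

A crop that wraps past the right edge would additionally need to be stitched from two column ranges; that case is omitted here.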
According to one or more embodiments of the present application, there is provided an electronic device including:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the aforementioned methods via execution of the executable instructions.
According to one or more embodiments of the present application, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the aforementioned methods.
Those of ordinary skill in the art will appreciate that the various illustrative modules and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into modules is only one kind of logical functional division, and other divisions may be used in practice; a plurality of modules or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or modules, and may be electrical, mechanical, or in other forms.
Modules described as separate parts may or may not be physically separate, and parts shown as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. For example, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module.
The above description covers only specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed in the present application, and such changes or substitutions shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (18)

1. A method of data processing, adapted for use with a head-mounted device, comprising:
receiving a first video stream sent by a control device, where the first video stream is a video stream corresponding to a first viewport of the head-mounted device;
receiving a second video stream sent by the control device, where the second video stream is a panoramic video stream, the panoramic video stream includes a video corresponding to the first viewport, and a resolution of the first video stream is greater than a resolution of the second video stream;
synthesizing a first image from the first video stream and the second video stream;
extracting, according to a position of a second viewport of the head-mounted device, an image corresponding to the second viewport from the first image, wherein the second viewport is a current viewport of the head-mounted device;
and displaying an image corresponding to the second viewport.
2. The method of claim 1, wherein when the viewport of the headset is changed from the first viewport to the second viewport, the method further comprises:
sending an update message to the control device, where the update message includes motion data of the head-mounted device, or includes both motion data of the head-mounted device and a resolution of a display screen of the head-mounted device.
3. The method of claim 2, wherein the center coordinate of the initial viewport of the headset is a center point of the panoramic video.
4. The method according to any one of claims 1 to 3, further comprising:
receiving an updated first video stream sent by the control device, where the updated first video stream includes a video stream corresponding to the second viewport;
receiving an updated second video stream sent by the control device, where the updated second video stream is a panoramic video stream, the panoramic video stream includes a video corresponding to the second viewport, and a resolution of the updated first video stream is greater than a resolution of the updated second video stream;
synthesizing a second image from the updated first video stream and the updated second video stream;
according to the position of a second viewport of the head-mounted device, extracting an image corresponding to the second viewport from the second image;
and displaying an image corresponding to the second viewport.
5. A method of data processing adapted for use with a control device, comprising:
determining a first video stream according to a position of a first viewport of a head-mounted device and a first panoramic image;
down-sampling the first panoramic image;
determining a second video stream according to the first panoramic image after down-sampling, wherein the resolution of the first video stream is greater than that of the second video stream;
transmitting the first video stream and the second video stream to a head-mounted device.
6. The method of claim 5, further comprising:
receiving an update message sent by the head-mounted device, wherein the update message comprises the motion data of the head-mounted device, or the update message comprises the motion data of the head-mounted device and the resolution of the display screen of the head-mounted device.
7. The method of claim 6, wherein the center coordinate of the initial viewport of the headset is a center point of the panoramic video.
8. The method according to claim 6 or 7, characterized in that the method further comprises:
determining a location of a second viewport of the headset from the update message;
determining an updated first video stream according to the position of the second viewport and the second panoramic image, wherein the updated first video stream comprises a video stream corresponding to the second viewport;
down-sampling the second panoramic image;
determining an updated second video stream according to the down-sampled second panoramic image, wherein the resolution of the updated first video stream is greater than that of the updated second video stream;
transmitting the updated first video stream and the updated second video stream to the head-mounted device.
9. A head-mounted device, comprising:
a transceiving unit, configured to receive a first video stream sent by a control device, where the first video stream is a video stream corresponding to a first viewport of the head-mounted device;
the transceiver unit is further configured to receive a second video stream sent by the control device, where the second video stream is a panoramic video stream, the panoramic video stream includes a video corresponding to the first viewport, and a resolution of the first video stream is greater than a resolution of the second video stream;
a processing unit for synthesizing a first image from the first video stream and the second video stream;
the processing unit is further configured to extract, according to a position of a second viewport of the head-mounted device, an image corresponding to the second viewport from the first image, wherein the second viewport is a current viewport of the head-mounted device;
and the display unit is used for displaying the image corresponding to the second viewport.
10. The headset of claim 9, wherein the transceiver unit is further configured to:
sending an update message to the control device, where the update message includes motion data of the head-mounted device, or includes both motion data of the head-mounted device and a resolution of a display screen of the head-mounted device.
11. The headset of claim 10, wherein the center coordinates of an initial viewport of the headset are a center point of the panoramic video.
12. The headset of any one of claims 9 to 11, wherein the transceiver unit is further configured to:
receiving an updated first video stream sent by the control device, where the updated first video stream includes a video stream corresponding to the second viewport;
receiving an updated second video stream sent by the control device, where the updated second video stream is a panoramic video stream, the panoramic video stream includes a video corresponding to the second viewport, and a resolution of the updated first video stream is greater than a resolution of the updated second video stream;
the processing unit is further configured to synthesize a second image from the updated first video stream and the updated second video stream;
the processing unit is further configured to extract, according to the position of a second viewport of the head-mounted device, an image corresponding to the second viewport from the second image;
the display unit is further configured to display an image corresponding to the second viewport.
13. A control apparatus, characterized by comprising:
a processing unit, configured to determine a first video stream according to a position of a first viewport of a head-mounted device and a first panoramic image;
the processing unit is further configured to down-sample the first panoramic image;
the processing unit is further configured to determine a second video stream according to the down-sampled first panoramic image, where a resolution of the first video stream is greater than a resolution of the second video stream;
a transceiving unit, configured to send the first video stream and the second video stream to the head-mounted device.
14. The control device according to claim 13, wherein the transceiving unit is further configured to:
receiving an update message sent by the head-mounted device, wherein the update message comprises the motion data of the head-mounted device, or the update message comprises the motion data of the head-mounted device and the resolution of the display screen of the head-mounted device.
15. The control device of claim 14, wherein the center coordinate of the initial viewport of the headset is a center point of the panoramic video.
16. The control apparatus according to claim 14 or 15, wherein the processing unit is further configured to:
determining a location of a second viewport of the head-mounted device according to the update message;
determining an updated first video stream according to the position of the second viewport and the second panoramic image, wherein the updated first video stream comprises a video stream corresponding to the second viewport;
down-sampling the second panoramic image;
determining an updated second video stream according to the down-sampled second panoramic image, wherein the resolution of the updated first video stream is greater than that of the updated second video stream;
the transceiver unit is further configured to send the updated first video stream and the updated second video stream to the head-mounted device.
17. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any one of claims 1-4 via execution of the executable instructions; or to perform the method of any of claims 5-8.
18. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method of any one of claims 1-4; or to implement the method of any one of claims 5-8.
CN202211167306.7A 2022-09-23 2022-09-23 Data processing method, head-mounted device, control device, electronic device, and medium Pending CN115550565A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211167306.7A CN115550565A (en) 2022-09-23 2022-09-23 Data processing method, head-mounted device, control device, electronic device, and medium


Publications (1)

Publication Number Publication Date
CN115550565A true CN115550565A (en) 2022-12-30

Family

ID=84728771


Country Status (1)

Country Link
CN (1) CN115550565A (en)

Similar Documents

Publication Publication Date Title
US10645369B2 (en) Stereo viewing
US11706403B2 (en) Positional zero latency
CN110419224B (en) Method for consuming video content, electronic device and server
EP3542530B1 (en) Suggested viewport indication for panoramic video
CN112204993B (en) Adaptive panoramic video streaming using overlapping partitioned segments
KR20190095430A (en) 360 video processing method and apparatus therefor
CN113243112B (en) Streaming volumetric video and non-volumetric video
US20180295352A1 (en) Adapting video images for wearable devices
EP3434021B1 (en) Method, apparatus and stream of formatting an immersive video for legacy and immersive rendering devices
CN116325769A (en) Panoramic video streaming scenes from multiple viewpoints
CN115550565A (en) Data processing method, head-mounted device, control device, electronic device, and medium
WO2018109266A1 (en) A method and technical equipment for rendering media content
WO2018069215A1 (en) Method, apparatus and stream for coding transparency and shadow information of immersive video format
EP3310052A1 (en) Method, apparatus and stream for immersive video format
US20190052868A1 (en) Wide viewing angle video processing system, wide viewing angle video transmitting and reproducing method, and computer program therefor
EP3598271A1 (en) Method and device for disconnecting user&#39;s attention
JP2023507586A (en) Method and Apparatus for Encoding, Decoding, and Rendering 6DOF Content from 3DOF Components
CN117671198A (en) Image processing method, apparatus, device, storage medium, and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination