WO2018077142A1 - Panoramic video processing method, device and system (全景视频的处理方法、装置及系统) - Google Patents


Info

Publication number
WO2018077142A1
WO2018077142A1 (application PCT/CN2017/107376)
Authority
WO
WIPO (PCT)
Prior art keywords
video
code rate
area
panoramic
picture
Prior art date
Application number
PCT/CN2017/107376
Other languages
English (en)
French (fr)
Inventor
Liu Yang (刘洋)
Original Assignee
深圳市道通智能航空技术有限公司
Application filed by 深圳市道通智能航空技术有限公司
Publication of WO2018077142A1
Priority to US 16/389,556, published as US20190246104A1

Classifications

    • H ELECTRICITY / H04 ELECTRIC COMMUNICATION TECHNIQUE / H04N PICTORIAL COMMUNICATION, e.g. TELEVISION (parent hierarchy common to all of the following classes)
    • H04N19/115 Selection of the code volume for a coding unit prior to coding
    • H04N13/275 Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • H04N13/332 Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N19/146 Data rate or code amount at the encoder output
    • H04N19/164 Feedback from the receiver or from the transmission channel
    • H04N19/167 Position within a video image, e.g. region of interest [ROI]
    • H04N19/597 Predictive coding specially adapted for multi-view video sequence encoding
    • H04N21/21805 Source of audio or video content enabling multiple viewpoints, e.g. using a plurality of cameras
    • H04N21/440236 Reformatting operations of video signals by media transcoding, e.g. video is transformed into a slideshow of still pictures, audio is converted into text

Definitions

  • the present application relates to the field of panoramic image processing technologies, and in particular, to a method, device, and system for processing panoramic video.
  • In the related art, the panoramic camera module carried by an aircraft captures a wide range of image information of a high-altitude scene, and then transmits the image information using a wireless transmission technology such as WiFi, Bluetooth, ZigBee, or mobile communication.
  • Images of various angles captured by the panoramic camera module are usually spliced to form an image frame, and the image frame is then mapped onto the sphere surface of a constructed virtual sphere model to obtain a spherical image presented by the sphere model; viewing the panorama of the spherical image through a VR head-mounted display device enhances the user's experience and immersion.
  • The technical problem to be solved by the embodiments of the present application is to provide a method, a device, and a system for processing a panoramic video, which can ensure real-time image transmission between a display device and a terminal device under limited channel bandwidth and improve the clarity of the video image viewed by the user.
  • The embodiment of the present application provides a method for processing a panoramic video, including:
  • receiving a partition parameter sent by the display device;
  • determining, according to the partition parameter, a first video area and a second video area in the panoramic video picture;
  • processing the video data in the first video area according to a first code rate;
  • processing the video data in the second video area according to a second code rate.
  • the embodiment of the present application provides a processing device for a panoramic video, where the device includes:
  • a parameter receiving module configured to receive a partition parameter sent by the display device
  • An area determining module configured to determine, according to the partitioning parameter, a first video area and a second video area in the panoramic video picture;
  • a first rate processing module configured to process video data in the first video area according to a first code rate
  • the second rate processing module is configured to process the video data in the second video area according to a second code rate.
  • an embodiment of the present application provides a method for processing a panoramic video, where the method includes:
  • sending a partition parameter to the terminal device;
  • receiving the panoramic video picture sent by the terminal device according to the partition parameter, where the panoramic video picture includes a first video area and a second video area, the video data in the first video area is processed according to a first code rate, and the video data in the second video area is processed according to a second code rate.
  • the embodiment of the present application provides a processing device for a panoramic video, where the device includes:
  • a parameter sending module configured to send a partition parameter to the terminal device
  • a picture receiving module configured to receive a panoramic video picture sent by the terminal device according to the partition parameter, where the panoramic video picture includes a first video area and a second video area, the video data in the first video area is processed at a first code rate, and the video data in the second video area is processed at a second code rate.
  • An embodiment of the present application provides a system for processing a panoramic video, where the system includes:
  • a display device configured to send a partition parameter to the terminal device
  • a terminal device configured to determine, according to the partition parameter, a first video area and a second video area in the panoramic video picture, process the video data in the first video area according to a first code rate, and process the video data in the second video area according to a second code rate.
  • an embodiment of the present application provides a computer readable storage medium, where a computer program is stored thereon, and the computer program is executed by a processor to implement the steps of the above-described panoramic video processing method.
  • In the embodiments of the present application, the terminal device receives the partition parameter sent by the display device, determines the first video area and the second video area in the panoramic video picture according to the partition parameter, processes the video data in the first video area according to the first code rate, and processes the video data in the second video area according to the second code rate. In this way, the image corresponding to the area the user observes is processed at a high code rate while the image corresponding to the area the user does not observe is processed at a low code rate, ensuring real-time image transmission between the display device and the terminal device under limited channel bandwidth, improving the clarity of the video image viewed by the user, and enhancing the user experience.
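The scheme summarized above can be sketched in a few lines of Python. The function name, the rate values, and the dictionary layout below are illustrative assumptions and do not appear in the patent:

```python
def process_panoramic_frame(frame_by_partition, partition_param,
                            high_rate=8000, low_rate=1000):
    """Encode partitions inside the user's observed area at a high code
    rate and every other partition at a low code rate (rates in kbps)."""
    first_area = set(partition_param)  # partitions the user observes
    result = {}
    for pid, frame in frame_by_partition.items():
        rate = high_rate if pid in first_area else low_rate
        result[pid] = {"frame": frame, "rate": rate}
    return result
```

A caller would pass the partition parameter received from the display device as `partition_param`; the observed partitions come back tagged with the high rate and the rest with the low rate.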
  • FIG. 1 is a flowchart of a method for processing a panoramic video according to an embodiment of the present application
  • FIGS. 2a and 2b are schematic diagrams of the orientation of the user's viewing angle being switched within the sphere model;
  • FIGS. 3a and 3b are schematic diagrams showing changes in the range of the user's perspective corresponding to images in the panoramic video picture;
  • FIG. 4 is a functional block diagram of a panoramic video processing apparatus according to an embodiment of the present application.
  • FIG. 5 is a functional block diagram of a processing apparatus for panoramic video according to another embodiment of the present application.
  • FIG. 6 is a flowchart of a method for processing a panoramic video according to an embodiment of the present application.
  • FIG. 7 is a functional block diagram of a panoramic video processing apparatus according to an embodiment of the present application.
  • FIG. 8 is a functional block diagram of a processing apparatus for panoramic video according to another embodiment of the present application.
  • FIG. 9 is a schematic diagram of a system for processing panoramic video according to an embodiment of the present application.
  • The processing method of the panoramic video in the embodiments of the present application may be based on an information interaction process between a terminal device communicatively connected to the panoramic camera module and a display device.
  • the panoramic camera module can be composed of one or more cameras.
  • the terminal device may be an aircraft, a camera, a mobile phone, a tablet computer, etc., and the display device may be a VR headset display device, a television, a projection device, or the like.
  • After preset processing, the terminal device transmits the panoramic video captured by the panoramic camera module to the display device via wireless or wired transmission; wireless transmission includes, but is not limited to, technologies such as WiFi, Bluetooth, ZigBee, and mobile data communication.
  • an embodiment of the present application provides a method for processing a panoramic video, where the method may be performed by a terminal device, where the method includes:
  • Step 11 Receive a partition parameter sent by the display device.
  • For example, the display device is a VR head-mounted display device;
  • the panoramic camera module captures a panoramic video picture;
  • the terminal device feeds the panoramic video picture back to the VR head-mounted display device according to the partition parameter sent by the VR head-mounted display device.
  • The VR head-mounted display device can construct a sphere model 21 in virtual three-dimensional space and map the panoramic video picture onto the sphere surface of the sphere model 21, obtaining the spherical video picture displayed by the sphere model 21; the two-dimensional panoramic video picture is thus simulated as a three-dimensional spherical video picture for presentation to the user.
  • the switching of the user's perspective can be implemented to present different areas in the spherical video picture to the user.
  • the implementation of the switching of the user's perspective includes, but is not limited to, the following two methods:
  • In the first manner, the user wearing the VR head-mounted display device rotates the head, and the gyroscope of the VR head-mounted display device detects the rotation of the user's head to determine the orientation of the user's perspective, so as to present to the user the area of the spherical video picture that the user's perspective is oriented toward; for example, the area of the user's viewing angle shown in Fig. 2a is switched to the area of the user's viewing angle shown in Fig. 2b.
  • In the second manner, the user wearing the VR head-mounted display device operates a joystick or button on a remote controller, and the VR head-mounted display device presents different areas of the spherical video picture to the user according to the swing of the joystick or the triggering of the button.
  • the remote control and the VR head mounted display device can communicate in a wireless or wired transmission manner.
  • Either the foregoing first manner or the foregoing second manner may be adopted, and switching between the first manner and the second manner may also be supported so that the user can choose either one.
  • The sphere surface of the sphere model may be divided into a plurality of partitions, and the number of partitions and the area of each partition may be adaptively adjusted according to the viewing angle of the display device.
  • For example, with six partitions, the partitions include: an A partition, a B partition, a C partition, a D partition, an E partition, and an F partition, and each partition may correspond to the area image captured by one camera in the panoramic camera module.
  • The range of the user's perspective can involve one to three partitions; the partition positions involved in the range of the user's perspective can be calculated according to the orientation of the user's perspective, and the partition numbers involved can be determined.
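The partition lookup described here could be sketched as follows, under the assumption of six equal longitudinal partitions A-F and a purely horizontal field of view; this layout and the function name are hypothetical, since the patent does not fix the partition geometry:

```python
# Hypothetical layout: six equal 60-degree longitudinal partitions A-F.
PARTITIONS = ["A", "B", "C", "D", "E", "F"]
SLICE = 360.0 / len(PARTITIONS)

def partitions_in_view(yaw_deg, fov_deg=90.0):
    """Return the partition labels covered by a horizontal field of view
    of fov_deg degrees centred on the view orientation yaw_deg."""
    left = (yaw_deg - fov_deg / 2.0) % 360.0
    right = (yaw_deg + fov_deg / 2.0) % 360.0
    i = int(left // SLICE)   # slice containing the left edge of the view
    j = int(right // SLICE)  # slice containing the right edge of the view
    covered = [PARTITIONS[i % 6]]
    while i % 6 != j % 6:    # walk forward (with wraparound) to the right edge
        i += 1
        covered.append(PARTITIONS[i % 6])
    return covered
```

With a 90-degree field of view and 60-degree partitions, a view covers two or three partitions, consistent with the "one to three partitions" noted above.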
  • Step 12 Determine, according to the partition parameter, a first video area and a second video area in the panoramic video picture.
  • the partition parameter may be a partition position of the panoramic video picture determined according to a user perspective of the display device.
  • the range of the user perspective relates to the two areas of the C partition and the D partition.
  • the range of the user's perspective relates to the two areas of the B partition and the C partition.
  • the partitioning parameter includes first identifier information and second identifier information.
  • The steps for determining the partition parameter specifically include:
  • combining the first identification information and the second identification information into the partition parameter.
  • The image in the spherical video picture corresponding to the first video area of the panoramic video picture carries the first identification information, and the image corresponding to the second video area carries the second identification information, so the first video area and the second video area in the panoramic video picture can be determined by means of the first identification information and the second identification information.
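One possible wire format for combining the two pieces of identification information into a single partition parameter is sketched below; the delimiter characters and function names are assumptions, not part of the patent:

```python
def build_partition_parameter(first_ids, second_ids):
    """Display side: pack the first identification information (observed
    partitions) and second identification information (other partitions)
    into one parameter string, e.g. 'C,D|A,B,E,F'."""
    return ",".join(first_ids) + "|" + ",".join(second_ids)

def parse_partition_parameter(param):
    """Terminal side: recover the first and second video areas."""
    first, second = param.split("|")
    return first.split(","), second.split(",")
```

Any unambiguous encoding would do; the point is only that both identifiers travel together in the one partition parameter the display device sends.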
  • Step 13 The video data in the first video area is processed according to the first code rate, and the video data in the second video area is processed according to the second code rate.
  • The video data in the panoramic video picture corresponding to the different partitions is compression-encoded at both the first code rate and the second code rate; that is, the video data corresponding to each partition is compression-encoded at the first code rate and also compression-encoded at the second code rate, and the video data compression-encoded at the corresponding code rate is transmitted for the different video areas according to the partition parameter.
  • FIG. 3a is a schematic diagram of the range of the user's perspective in FIG. 2a corresponding to the image in the panoramic video screen
  • FIG. 3b is a schematic diagram of the range of the user's perspective in FIG. 2b corresponding to the image in the panoramic video screen.
  • The images in the panoramic video picture include: an a image, a b image, a c image, a d image, an e image, and an f image.
  • the a picture, the b picture, the c picture, the d picture, the e picture, and the f picture are spliced to form a panoramic video picture.
  • the a image is mapped to the A partition
  • the b image is mapped to the B partition
  • the c image is mapped to the C partition
  • the d image is mapped to the D partition
  • the e image is mapped to the E partition
  • the f image is mapped to the F partition.
  • In Fig. 3a, the range of the user's perspective corresponds to the c image and the d image in the panoramic video picture.
  • In Fig. 3b, the range of the user's perspective corresponds to the b image and the c image in the panoramic video picture.
  • In one implementation, the first code rate is greater than the second code rate; the first video area comprises the image(s) in the panoramic video picture corresponding to the range of the user's perspective, and the second video area comprises any one or more of the other images in the panoramic video picture. For example, as shown in Fig. 3a, the first video area includes the c image and the d image, and processing the images in the first video area according to the first code rate means that the video data output for the first video area is compression-encoded at the first code rate; the second video area includes any one or more of the a image, the b image, the e image, and the f image, and processing the images of the second video area according to the second code rate means that the video data output for the second video area is compression-encoded at the second code rate.
  • In another implementation, the second code rate is greater than the first code rate; the second video area comprises the image(s) in the panoramic video picture corresponding to the range of the user's perspective, and the first video area comprises any one or more of the other images in the panoramic video picture. The images in the first video area are processed at the first code rate, and the images of the second video area are processed at the second code rate.
  • In this embodiment, the terminal device receives the partition parameter sent by the display device, determines the first video area and the second video area in the panoramic video picture according to the partition parameter, processes the video data in the first video area according to the first code rate, and processes the video data in the second video area according to the second code rate, so that the image corresponding to the area the user observes is processed at a high code rate and the image corresponding to the area the user does not observe is processed at a low code rate. This ensures real-time image transmission between the display device and the terminal device under limited channel bandwidth, improves the clarity of the video image viewed by the user, and enhances the user experience.
  • In one implementation, processing the first video area according to the first code rate is specifically: compression-encoding the video data in the first video area at the first code rate; processing the second video area according to the second code rate is specifically: compression-encoding the video data in the second video area at the second code rate.
  • In another implementation, processing the first video area according to the first code rate is specifically: determining the first group of cameras that capture the first video area, compression-encoding the video data captured by the first group of cameras at both the first code rate and the second code rate, and selecting the video data compression-encoded at the first code rate as the video data to be transmitted; processing the second video area according to the second code rate is specifically: determining the second group of cameras that capture the second video area, compression-encoding the video data captured by the second group of cameras at both the first code rate and the second code rate, and selecting the video data compression-encoded at the second code rate as the video data to be transmitted. For example, as shown in Fig. 3a:
  • the first group of cameras includes the cameras that capture the c image and the d image, and the video data they capture that is compression-encoded at the first code rate is used as the video data to be transmitted;
  • the second group of cameras includes the cameras that capture any one or more of the a image, the b image, the e image, and the f image, and the video data they capture that is compression-encoded at the second code rate is used as the video data to be transmitted.
  • The terminal device transmits the encoded video code stream or the video data to be transmitted to the display device by wireless transmission; wireless transmission includes, but is not limited to, technologies such as WiFi, Bluetooth, ZigBee, and mobile data communication.
  • Alternatively, the terminal device transmits the encoded video code stream to the display device by wired transmission.
  • In yet another implementation, processing the first video area according to the first code rate is specifically: determining the first group of cameras that capture the first video area and setting the output code rate of the first group of cameras to the first code rate; processing the second video area according to the second code rate is specifically: determining the second group of cameras that capture the second video area and setting the output code rate of the second group of cameras to the second code rate. For example, as shown in Fig. 3a, the first group of cameras includes the cameras that capture the c image and the d image, and their output code rate is set to the first code rate; the second group of cameras includes the cameras that capture any one or more of the a image, the b image, the e image, and the f image, and their output code rate is set to the second code rate.
  • If the range of the user's perspective corresponds to the first video area in the panoramic video picture, the second group of cameras that capture the second video area may be turned off; if the range of the user's perspective corresponds to the second video area, the first group of cameras that capture the first video area may be turned off.
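The per-camera variant (setting output code rates directly, and optionally powering off cameras whose partition is outside the user's view) can be sketched as follows; the `Camera` class is a hypothetical stub, not an API from the patent:

```python
class Camera:
    """Hypothetical stand-in for one camera in the panoramic module."""
    def __init__(self, partition):
        self.partition = partition
        self.output_rate = 0
        self.on = True

def configure_cameras(cameras, first_area, first_rate, second_rate,
                      close_unobserved=False):
    """Set each camera's output code rate by video area; optionally turn
    off cameras that capture the unobserved (second) video area."""
    for cam in cameras:
        if cam.partition in first_area:
            cam.output_rate = first_rate
        else:
            cam.output_rate = second_rate
            if close_unobserved:
                cam.on = False
    return cameras
```

Compared with dual-encoding every partition, this variant saves encoder compute and, when unobserved cameras are closed, power as well, at the cost of a brief switch delay when the view changes.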
  • the embodiment of the present application provides a panoramic video processing device 40.
  • The processing device 40 includes a parameter receiving module 41, an area determining module 42, a first rate processing module 43, and a second rate processing module 44.
  • the parameter receiving module 41 is configured to receive a partition parameter sent by the display device.
  • The partition parameter is a partition position of the panoramic video picture determined according to the user perspective of the display device.
  • the area determining module 42 is configured to determine the first video area and the second video area in the panoramic video picture according to the partitioning parameter.
  • The first rate processing module 43 is configured to process the video data in the first video area according to the first code rate.
  • Specifically, the first rate processing module 43 is configured to compression-encode the video data in the first video area at the first code rate; or the first rate processing module 43 is configured to determine the first group of cameras that capture the first video area and set the output code rate of the first group of cameras to the first code rate.
  • Alternatively, the first rate processing module 43 is configured to compression-encode the video data in each area at the first code rate; further, the first rate processing module 43 is configured to determine the first group of cameras that capture the first video area and take the video data captured by the first group of cameras and compression-encoded at the first code rate as the video data to be transmitted.
  • the second rate processing module 44 is configured to process the video data in the second video region according to the second code rate.
  • Specifically, the second rate processing module 44 is configured to compression-encode the video data in the second video area at the second code rate; or the second rate processing module 44 is configured to determine the second group of cameras that capture the second video area and set the output code rate of the second group of cameras to the second code rate.
  • Alternatively, the second rate processing module 44 is configured to compression-encode the video data in each area at the second code rate; further, the second rate processing module 44 is configured to determine the second group of cameras that capture the second video area and take the video data captured by the second group of cameras and compression-encoded at the second code rate as the video data to be transmitted.
  • the apparatus for processing a panoramic video receives the partition parameter sent by the display device by the parameter receiving module 41, and the area determining module 42 determines the first video area and the second video area in the panoramic video picture according to the partition parameter.
  • the first rate processing module 43 processes the video data in the first video area according to the first code rate
  • the second rate processing module 44 processes the video data in the second video area according to the second code rate, so that the image corresponding to the area the user observes is processed at a high code rate and the image corresponding to the area the user does not observe is processed at a low code rate, ensuring real-time image transmission between the display device and the terminal device under limited channel bandwidth, improving the clarity of the video images viewed by the user, and enhancing the user experience.
  • The processing device 50 includes a parameter receiving module 51, an area determining module 52, a first rate processing module 53, a second rate processing module 54, and a code stream sending module 55.
  • the parameter receiving module 51 is configured to receive a partition parameter sent by the display device.
  • the area determining module 52 is configured to determine the first video area and the second video area in the panoramic video picture according to the partitioning parameter.
  • the first rate processing module 53 is configured to compress and encode the video data in the first video area according to the first code rate.
  • Alternatively, the first rate processing module 53 is configured to compression-encode the video data in each area at the first code rate; the first rate processing module 53 is further configured to determine the first group of cameras that capture the first video area and take their video data compression-encoded at the first code rate as the video data to be transmitted.
  • the second rate processing module 54 is configured to compress and encode the video data in the second video region according to the second code rate.
  • Alternatively, the second rate processing module 54 is configured to compression-encode the video data in each area at the second code rate; the second rate processing module 54 is further configured to determine the second group of cameras that capture the second video area and take the video data captured by the second group of cameras and compression-encoded at the second code rate as the video data to be transmitted.
  • the code stream sending module 55 is configured to send the compressed coded video code stream to the display device by wireless transmission.
  • the code stream sending module 55 may be replaced by a video data sending module, and configured to send the video data to be transmitted to the display device by wireless transmission.
  • For details, refer to the explanations of steps 11, 12, and 13 above.
  • The apparatus for processing a panoramic video receives the partition parameter sent by the display device through the parameter receiving module 51, and the area determining module 52 determines the first video area and the second video area in the panoramic video picture according to the partition parameter. The first rate processing module 53 compression-encodes the video data in the first video area at the first code rate, the second rate processing module 54 compression-encodes the video data in the second video area at the second code rate, and the code stream sending module 55 sends the compression-encoded video code stream to the display device by wireless transmission. Alternatively, the first rate processing module 53 selects the video data compression-encoded at the first code rate in the first video area as the video data to be transmitted, the second rate processing module 54 selects the video data compression-encoded at the second code rate in the second video area as the video data to be transmitted, and the video data sending module transmits the video data to be transmitted to the display device by wireless transmission. This ensures real-time wireless image transmission between the display device and the terminal device under limited channel bandwidth, improves the clarity of the video images viewed by the user, and enhances the user experience.
  • the embodiment of the present application provides a method for processing a panoramic video, where the method is performed by a display device, and the method includes:
  • Step 61 Send a partition parameter to the terminal device.
  • Step 62 Receive a panoramic video picture that is sent by the terminal device according to the partition parameter, where the panoramic video picture includes a first video area and a second video area, and the video data in the first video area is processed according to the first code rate, and the second video area is processed. The video data within is processed at the second code rate.
  • in this embodiment of the application, the terminal device is an aircraft and the display device is a VR head-mounted display device. The panoramic camera module captures a panoramic video picture, and the terminal device feeds back the panoramic video picture to the VR head-mounted display device according to the partition parameter sent by the VR head-mounted display device.
  • when the user wears the VR head-mounted display device, in order to view the three-dimensional effect of the panoramic video picture, the following steps are further included after step 62:
  • construct a sphere model in the virtual three-dimensional space, and map the panoramic video picture onto the sphere surface of the sphere model to obtain a spherical video picture presented by the sphere model.
  • for example, the VR head-mounted display device can construct a sphere model 21 in the virtual three-dimensional space and then map the panoramic video picture onto the sphere surface of the sphere model 21 to obtain a spherical video picture presented by the sphere model 21, thereby simulating the two-dimensional panoramic video picture as a three-dimensional spherical video picture for presentation to the user.
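As a rough illustration of this mapping, an equirectangular panoramic frame can be projected onto a unit sphere by treating the horizontal pixel coordinate as longitude and the vertical one as latitude. This is a minimal sketch; the function name and the exact axis conventions are assumptions, not taken from the patent.

```python
import math

def equirect_to_sphere(u, v, radius=1.0):
    """Map normalized panorama coordinates (u, v) in [0, 1] to a point
    on the sphere surface. u spans longitude, v spans latitude."""
    lon = (u - 0.5) * 2.0 * math.pi   # -pi .. pi
    lat = (0.5 - v) * math.pi         # pi/2 (top) .. -pi/2 (bottom)
    x = radius * math.cos(lat) * math.sin(lon)
    y = radius * math.sin(lat)
    z = radius * math.cos(lat) * math.cos(lon)
    return (x, y, z)

# The panorama center lands on the "forward" point of the sphere.
print(equirect_to_sphere(0.5, 0.5))  # (0.0, 0.0, 1.0)
```

A renderer would evaluate this per vertex of the sphere mesh; the inverse mapping gives the texture lookup for each sphere point.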
  • switching the user's perspective makes it possible to present different areas of the spherical video picture to the user. The switching of the user's perspective can be implemented in ways including, but not limited to, the following two:
  • in the first way, the user wearing the VR head-mounted display device rotates the head, and the gyroscope of the VR head-mounted display device detects the rotation of the user's head to determine the orientation of the user's perspective, so as to present to the user the area of the spherical video picture that the user's perspective faces; for example, the area faced by the user's perspective is switched from that shown in Fig. 2a to that shown in Fig. 2b.
  • in the second way, the user wearing the VR head-mounted display device operates the joystick or buttons on a remote controller, and the VR head-mounted display device presents different areas of the spherical video picture to the user according to the swing of the joystick or the triggering of the buttons. The remote controller and the VR head-mounted display device can communicate by wireless or wired transmission.
  • it should be noted that the switching of the user's perspective may adopt the first way, the second way, or switching between the first and second ways, so that the user can choose either way.
  • in this embodiment of the application, the sphere surface of the sphere model may be divided into multiple partitions, and the number of partitions and the area of each partition may be adaptively adjusted according to the viewing angle of the display screen of the display device.
  • in Figs. 2a and 2b, the sphere surface of the sphere model 21 is divided into six partitions as an example: the A partition, B partition, C partition, D partition, E partition and F partition; one partition can be used to present the regional picture captured by one camera of the panoramic camera module.
  • at a fixed moment, the range of the user's perspective can involve one to three partitions; the partition positions involved in the range of the user's perspective can be calculated from the orientation of the user's perspective, and the partition numbers involved can then be determined.
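One way to compute which partitions a view involves, assuming for simplicity that the side partitions are equal longitude bands, is to test each band's angular overlap with the horizontal field of view. The band layout, function names, and default values here are illustrative assumptions, not the patent's method.

```python
def _ang_dist(a, b):
    """Smallest absolute angular distance between two angles, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def partitions_in_view(yaw_deg, fov_deg=90.0, num_partitions=4):
    """Return indices of the equal longitude bands overlapped by a
    horizontal field of view centered on yaw_deg."""
    width = 360.0 / num_partitions
    visible = []
    for i in range(num_partitions):
        center = (i + 0.5) * width  # band i spans [i*width, (i+1)*width)
        if _ang_dist(yaw_deg, center) < (fov_deg + width) / 2.0:
            visible.append(i)
    return visible

print(partitions_in_view(60.0))  # [0, 1]: the view straddles two bands
```

With a vertical test added for the top and bottom partitions, the same overlap check yields the one-to-three involved partitions described above.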
  • as an optional implementation, the partition parameter may be the partition positions of the panoramic video picture determined according to the user perspective of the display device. As shown in Fig. 2a, the range of the user's perspective involves the two areas of the C partition and the D partition; after the user perspective is switched, as shown in Fig. 2b, the range of the user's perspective involves the two areas of the B partition and the C partition.
  • as an optional implementation, the partition parameter includes first identification information and second identification information. Determining the partition parameter specifically includes: acquiring the first identification information of the image corresponding to the first video area and the second identification information of the image corresponding to the second video area in the spherical video picture, and integrating the first identification information and the second identification information into the partition parameter.
  • in this optional implementation, the image in the spherical video picture corresponding to the first video area in the panoramic video picture has the first identification information, and the image in the spherical video picture corresponding to the second video area has the second identification information, so the first video area and the second video area in the panoramic video picture can be determined from the first identification information and the second identification information.
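Packing the two pieces of identification information into one partition parameter can be as simple as bundling the two identifier lists; the field names below are hypothetical, chosen only for this sketch.

```python
def build_partition_parameter(first_ids, second_ids):
    """Integrate the identifiers of the user-facing images (first) and the
    remaining images (second) into a single partition parameter."""
    return {
        "first_identification": sorted(first_ids),
        "second_identification": sorted(second_ids),
    }

# Perspective covering the c and d images (as in Fig. 3a):
param = build_partition_parameter({"c", "d"}, {"a", "b", "e", "f"})
print(param["first_identification"])  # ['c', 'd']
```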
  • FIG. 3a is a schematic diagram of the range of the user's perspective in FIG. 2a corresponding to the image in the panoramic video screen
  • FIG. 3b is a schematic diagram of the range of the user's perspective in FIG. 2b corresponding to the image in the panoramic video screen.
  • in this embodiment of the application, the images in the panoramic video picture include: the a image, b image, c image, d image, e image and f image.
  • the a image, b image, c image, d image, e image and f image are stitched together to form the panoramic video picture.
  • the a image is mapped to the A partition
  • the b image is mapped to the B partition
  • the c image is mapped to the C partition
  • the d image is mapped to the D partition
  • the e image is mapped to the E partition
  • the f image is mapped to the F partition.
  • the method further includes the step of detecting whether the partition location of the panoramic video picture corresponding to the user perspective changes, and if yes, resending the partition parameter to the terminal device.
  • as shown in Fig. 3a, the range of the user's perspective corresponds to the two images c and d in the panoramic video picture; after the user perspective is switched, as shown in Fig. 3b, the range of the user's perspective corresponds to the two images b and c in the panoramic video picture.
  • at this time, the first video area and the second video area have effectively changed, so the first identification information and the second identification information included in the partition parameter change correspondingly; after the new first identification information and second identification information are integrated into a new partition parameter, the partition parameter is re-sent to the terminal device.
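The detect-and-resend behavior can be sketched as a small tracker that pushes a new partition parameter only when the set of involved partitions changes. The class and callback names are assumptions for illustration.

```python
class PartitionTracker:
    """Re-sends the partition parameter only when the involved
    partitions actually change."""

    def __init__(self, send_fn):
        self._last = None
        self._send = send_fn  # e.g. a wireless-link send routine

    def update(self, visible_ids):
        ids = tuple(sorted(visible_ids))
        if ids == self._last:
            return False      # unchanged: nothing to transmit
        self._last = ids
        self._send(ids)       # changed: push the new partition parameter
        return True

sent = []
tracker = PartitionTracker(sent.append)
tracker.update({"c", "d"})   # first view: sent
tracker.update({"d", "c"})   # same set, different order: not re-sent
tracker.update({"b", "c"})   # perspective switched: sent again
print(sent)                  # [('c', 'd'), ('b', 'c')]
```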
  • as an optional implementation, the first code rate is greater than the second code rate; the first video area includes the images in the panoramic video picture corresponding to the range of the user's perspective, and the second video area includes any one or more of the other images in the panoramic video picture. For example, as shown in Fig. 3a, the first video area includes the c image and the d image, and the images in the first video area are processed according to the first code rate; the second video area includes any one or more of the a image, b image, e image and f image, and the images in the second video area are processed according to the second code rate.
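In effect, each sub-image is assigned an encoding rate by membership in the user-facing set. A minimal sketch (the names and rate values are illustrative, not from the patent):

```python
def assign_rates(all_images, visible_images, high_bps, low_bps):
    """Map each sub-image to an encoding bitrate: high for the images the
    user is facing, low for the rest."""
    return {img: (high_bps if img in visible_images else low_bps)
            for img in all_images}

# Fig. 3a: the user faces the c and d images.
rates = assign_rates("abcdef", {"c", "d"},
                     high_bps=8_000_000, low_bps=1_000_000)
print(rates["c"], rates["a"])  # 8000000 1000000
```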
  • as an optional implementation, the second code rate is greater than the first code rate; the second video area includes the images in the panoramic video picture corresponding to the range of the user's perspective, and the first video area includes any one or more of the other images in the panoramic video picture. The images in the first video area are processed at the first code rate; the images in the second video area are processed at the second code rate.
  • after step 62, the following step is further included: displaying the image corresponding to the first video area or the image corresponding to the second video area in the spherical video picture.
  • the embodiment of the present application provides a panoramic video processing method: a partition parameter is sent to the terminal device, and a panoramic video picture fed back by the terminal device according to the partition parameter is received, where the panoramic video picture includes a first video area and a second video area, the video data in the first video area is processed at the first code rate, and the video data in the second video area is processed at the second code rate. This ensures real-time image transmission between the display device and the terminal device under limited channel bandwidth, improves the clarity of the video images viewed by the user, and enhances the user experience.
  • the processing device 70 includes a parameter sending module 71 and a screen receiving module 72.
  • the parameter sending module 71 is configured to send a partition parameter to the terminal device.
  • the picture receiving module 72 is configured to receive the panoramic video picture fed back by the terminal device according to the partition parameter, where the panoramic video picture includes a first video area and a second video area, the video data in the first video area is processed at the first code rate, and the video data in the second video area is processed at the second code rate.
  • the parameter sending module sends the partition parameter to the terminal device, and the picture receiving module receives the panoramic video picture fed back by the terminal device according to the partition parameter, where the panoramic video picture includes a first video area and a second video area, the video data in the first video area is processed at the first code rate, and the video data in the second video area is processed at the second code rate, ensuring real-time image transmission between the display device and the terminal device under limited channel bandwidth, improving the clarity of the video images viewed by the user, and enhancing the user experience.
  • the processing device 80 includes a parameter sending module 81, a screen receiving module 82, a model building module 83, a screen mapping module 84, an identification information acquiring module 85, an integration module 86, and a display module 87.
  • the parameter sending module 81 is configured to send a partition parameter to the terminal device.
  • the picture receiving module 82 is configured to receive the panoramic video picture fed back by the terminal device according to the partition parameter, where the panoramic video picture includes a first video area and a second video area, the video data in the first video area is processed at the first code rate, and the video data in the second video area is processed at the second code rate.
  • the model building module 83 is used to construct a sphere model within the virtual three dimensional space.
  • the picture mapping module 84 is configured to map the panoramic video picture onto the sphere surface of the sphere model to obtain a spherical video picture presented in a sphere model.
  • the identification information acquiring module 85 is configured to acquire first identification information of an image corresponding to the first video area and second identification information of an image corresponding to the second video area in the spherical video picture.
  • the integration module 86 is configured to integrate the first identification information and the second identification information into the partition parameter.
  • it can be understood that other embodiments further include a detection module configured to detect whether the partition positions of the panoramic video picture corresponding to the user perspective have changed, and if so, the partition parameter is re-sent to the terminal device through the parameter sending module 81.
  • as shown in Fig. 2a, the range of the user's perspective involves the two areas of the C partition and the D partition; after the user perspective is switched, as shown in Fig. 2b, the range of the user's perspective switches to the two areas of the B partition and the C partition, i.e. the first video area and the second video area have changed, so the first identification information and the second identification information included in the partition parameter change correspondingly; after the integration module 86 integrates the new first identification information and second identification information into a new partition parameter, it is re-sent to the terminal device through the parameter sending module 81.
  • the display module 87 is configured to display an image corresponding to the first video area or an image corresponding to the second video area in the spherical video picture.
  • for explanations of the modules above, please refer to the explanations of step 61 and step 62 above.
  • the embodiment of the present application provides a panoramic video processing system 90, which includes a display device 91 and a terminal device 92.
  • the terminal device may be an aircraft, a camera, a mobile phone, a tablet computer, etc.
  • the display device may be a VR headset display device, a television, a projection device, or the like.
  • the display device 91 is configured to send a partition parameter to the terminal device.
  • the partition parameter may be a partition location of the panoramic video picture determined according to a user perspective of the display device.
  • the partitioning parameter includes first identifier information and second identifier information.
  • the steps for determining the partition parameter specifically include: acquiring the first identification information of the image corresponding to the first video area and the second identification information of the image corresponding to the second video area in the spherical video picture, and integrating the first identification information and the second identification information into the partition parameter.
  • the terminal device 92 is configured to determine, according to the partition parameter, the first video area and the second video area in the panoramic video picture, process the video data in the first video area at the first code rate, and process the video data in the second video area at the second code rate.
  • as an optional implementation, the first code rate is greater than the second code rate; the first video area includes the images in the panoramic video picture corresponding to the range of the user's perspective, and the second video area includes any one or more of the other images in the panoramic video picture. For example, as shown in Fig. 3a, the first video area includes the c image and the d image, and the images in the first video area are processed according to the first code rate; the second video area includes any one or more of the a image, b image, e image and f image, and the images in the second video area are processed according to the second code rate.
  • as an optional implementation, the second code rate is greater than the first code rate; the second video area includes the images in the panoramic video picture corresponding to the range of the user's perspective, and the first video area includes any one or more of the other images in the panoramic video picture. The images in the first video area are processed at the first code rate; the images in the second video area are processed at the second code rate.
  • the panoramic video processing system provided by the embodiment of the present application sends a partition parameter to the terminal device through the display device; the terminal device determines the first video area and the second video area in the panoramic video picture according to the partition parameter, processes the video data in the first video area at the first code rate, and processes the video data in the second video area at the second code rate, ensuring real-time image transmission between the display device and the terminal device under limited channel bandwidth, improving the clarity of the video images viewed by the user, and enhancing the user experience.
  • those of ordinary skill in the art can understand that all or part of the steps in the methods of the embodiments can be completed by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium, which may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

Embodiments of the present application disclose a panoramic video processing method, device and system. A terminal device receives a partition parameter sent by a display device, determines a first video area and a second video area in a panoramic video picture according to the partition parameter, processes the video data in the first video area at a first code rate, and processes the video data in the second video area at a second code rate, so that the image corresponding to the area the user is observing is processed at a high code rate while the image corresponding to the area the user is not observing is processed at a low code rate. This ensures, under limited channel bandwidth, real-time image transmission between the display device and the terminal device, improves the clarity of the video images viewed by the user, and enhances the user experience.

Description

Panoramic video processing method, device and system
This application claims priority to Chinese patent application No. 201610952287.7, filed on October 26, 2016 and entitled "Panoramic video processing method, device and system", the entire content of which is incorporated herein by reference.
Technical field
The present application relates to the technical field of panoramic image processing, and in particular to a panoramic video processing method, device and system.
Background
With the popularity of VR head-mounted display devices and aircraft, a panoramic camera module carried by an aircraft captures a wide range of image information of a high-altitude scene, and the image information is then sent to the VR head-mounted display device by means of wireless transmission technologies such as WiFi, Bluetooth, ZigBee and mobile communication. In the prior art, the images captured by the panoramic camera module from various angles are usually stitched to form an image frame, the image frame is mapped onto the sphere surface of a constructed virtual sphere model to obtain a spherical image presented by the sphere model, and the panoramic landscape presented by the spherical image is viewed through the VR head-mounted display device, improving the user's experience and sense of immersion.
In the process of implementing the present application, the inventor found that, given the limited channel bandwidth of current wireless transmission technologies, it is difficult to display the spherical image in real time on the VR head-mounted display device while sending the image frames to it. Although the limited channel bandwidth can be accommodated by lowering the code rate and reducing the frame rate so as to improve real-time performance, this comes at the cost of the video image quality seen on the VR head-mounted display device and degrades the user experience.
Summary
The technical problem mainly solved by the embodiments of the present application is to provide a panoramic video processing method, device and system that can ensure, under limited channel bandwidth, real-time image transmission between a display device and a terminal device and improve the clarity of the video images viewed by the user.
In a first aspect, an embodiment of the present application provides a panoramic video processing method, including:
receiving a partition parameter sent by a display device;
determining, according to the partition parameter, a first video area and a second video area in a panoramic video picture;
processing the video data in the first video area at a first code rate, and processing the video data in the second video area at a second code rate.
In a second aspect, an embodiment of the present application provides a panoramic video processing device, the device including:
a parameter receiving module, configured to receive a partition parameter sent by a display device;
an area determining module, configured to determine, according to the partition parameter, a first video area and a second video area in a panoramic video picture;
a first code rate processing module, configured to process the video data in the first video area at a first code rate;
a second code rate processing module, configured to process the video data in the second video area at a second code rate.
In a third aspect, an embodiment of the present application provides a panoramic video processing method, the method including:
sending a partition parameter to a terminal device;
receiving a panoramic video picture fed back by the terminal device according to the partition parameter, where the panoramic video picture includes a first video area and a second video area, the video data in the first video area is processed at a first code rate, and the video data in the second video area is processed at a second code rate.
In a fourth aspect, an embodiment of the present application provides a panoramic video processing device, the device including:
a parameter sending module, configured to send a partition parameter to a terminal device;
a picture receiving module, configured to receive a panoramic video picture fed back by the terminal device according to the partition parameter, where the panoramic video picture includes a first video area and a second video area, the video data in the first video area is processed at a first code rate, and the video data in the second video area is processed at a second code rate.
In a fifth aspect, an embodiment of the present application provides a panoramic video processing system, the system including:
a display device, configured to send a partition parameter to a terminal device;
the terminal device, configured to determine, according to the partition parameter, a first video area and a second video area in a panoramic video picture, process the video data in the first video area at a first code rate, and process the video data in the second video area at a second code rate.
In a sixth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the above panoramic video processing method are implemented.
According to the panoramic video processing method, device and system provided by the embodiments of the present application, a terminal device receives a partition parameter sent by a display device, determines a first video area and a second video area in a panoramic video picture according to the partition parameter, processes the video data in the first video area at a first code rate, and processes the video data in the second video area at a second code rate, so that the image corresponding to the area the user is observing is processed at a high code rate and the image corresponding to the area the user is not observing is processed at a low code rate. This ensures, under limited channel bandwidth, real-time image transmission between the display device and the terminal device, improves the clarity of the video images viewed by the user, and enhances the user experience.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a panoramic video processing method provided by an embodiment of the present application;
Figs. 2a and 2b are schematic diagrams of the orientation of the user's perspective switching within the sphere model;
Figs. 3a and 3b are schematic diagrams of the changes in the images in the panoramic video picture corresponding to the range of the user's perspective;
Fig. 4 is a functional block diagram of a panoramic video processing device provided by an embodiment of the present application;
Fig. 5 is a functional block diagram of a panoramic video processing device provided by another embodiment of the present application;
Fig. 6 is a flowchart of a panoramic video processing method provided by an embodiment of the present application;
Fig. 7 is a functional block diagram of a panoramic video processing device provided by an embodiment of the present application;
Fig. 8 is a functional block diagram of a panoramic video processing device provided by another embodiment of the present application;
Fig. 9 is a schematic diagram of a panoramic video processing system provided by an embodiment of the present application.
Detailed description
To make the objectives, technical solutions and advantages of the present application clearer, the present application is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present application and are not intended to limit it.
In addition, the technical features involved in the embodiments of the present application described below can be combined with each other as long as they do not conflict with each other.
The panoramic video processing method of the embodiments of the present application may be based on an information interaction process between a display device and a terminal device communicatively connected to a panoramic camera module. The panoramic camera module may consist of one or more cameras. The terminal device may be an aircraft, a camera, a mobile phone, a tablet computer, etc., and the display device may be a VR head-mounted display device, a television, a projection device, etc. The terminal device sends the panoramic video captured by the panoramic camera module, after preset processing, to the display device for display by wireless or wired transmission; wireless transmission includes but is not limited to WiFi, Bluetooth, ZigBee, mobile data communication and other wireless transmission technologies.
The embodiments of the present application are described in detail below with reference to the drawings.
As shown in Fig. 1, an embodiment of the present application provides a panoramic video processing method, which may be performed by a terminal device; the method includes:
Step 11: receive a partition parameter sent by a display device.
In this embodiment of the application, taking the display device being a VR head-mounted display device as an example, the panoramic camera module captures a panoramic video picture, and the terminal device feeds back the panoramic video picture to the VR head-mounted display device according to the partition parameter sent by the VR head-mounted display device.
When the user wears the VR head-mounted display device, in order to view the three-dimensional effect of the panoramic video picture, as shown in Fig. 2a, the VR head-mounted display device can construct a sphere model 21 in the virtual three-dimensional space and then map the panoramic video picture onto the sphere surface of the sphere model 21 to obtain a spherical video picture presented by the sphere model 21, thereby simulating the two-dimensional panoramic video picture as a three-dimensional spherical video picture for presentation to the user.
Switching the user's perspective makes it possible to present different areas of the spherical video picture to the user. The switching of the user's perspective can be implemented in ways including, but not limited to, the following two:
In the first way, the user wearing the VR head-mounted display device rotates the head, and the gyroscope of the VR head-mounted display device detects the rotation of the user's head to determine the orientation of the user's perspective, so as to present to the user the area of the spherical video picture that the user's perspective faces; for example, the area faced by the user's perspective is switched from that shown in Fig. 2a to that shown in Fig. 2b.
In the second way, the user wearing the VR head-mounted display device operates the joystick or buttons on a remote controller, and the VR head-mounted display device presents different areas of the spherical video picture to the user according to the swing of the joystick or the triggering of the buttons. The remote controller and the VR head-mounted display device can communicate by wireless or wired transmission.
It should be noted that the switching of the user's perspective may adopt the first way, the second way, or switching between the first and second ways, so that the user can choose either way.
In this embodiment of the application, the sphere surface of the sphere model may be divided into multiple partitions, and the number of partitions and the area of each partition may be adaptively adjusted according to the viewing angle of the display screen of the display device. In Figs. 2a and 2b, the sphere surface of the sphere model 21 is divided into six partitions as an example: the A partition, B partition, C partition, D partition, E partition and F partition; one partition can be used to present the regional picture captured by one camera of the panoramic camera module. At a fixed moment, the range of the user's perspective can involve one to three partitions; the partition positions involved in the range of the user's perspective can be calculated from the orientation of the user's perspective, and the partition numbers involved can then be determined.
Step 12: determine, according to the partition parameter, a first video area and a second video area in the panoramic video picture.
As an optional implementation, the partition parameter may be the partition positions of the panoramic video picture determined according to the user perspective of the display device. As shown in Fig. 2a, the range of the user's perspective involves the two areas of the C partition and the D partition; after the user perspective is switched, as shown in Fig. 2b, the range of the user's perspective involves the two areas of the B partition and the C partition.
As an optional implementation, the partition parameter includes first identification information and second identification information. Determining the partition parameter specifically includes:
acquiring the first identification information of the image corresponding to the first video area and the second identification information of the image corresponding to the second video area in the spherical video picture;
integrating the first identification information and the second identification information into the partition parameter.
In this optional implementation, the image in the spherical video picture corresponding to the first video area in the panoramic video picture has the first identification information, and the image in the spherical video picture corresponding to the second video area has the second identification information, so the first video area and the second video area in the panoramic video picture can be determined from the first identification information and the second identification information.
Step 13: process the video data in the first video area at a first code rate, and process the video data in the second video area at a second code rate.
In one embodiment, the video data in the panoramic video picture corresponding to each partition is compression-encoded at both the first code rate and the second code rate, i.e. the video data corresponding to each partition is compression-encoded at the first code rate as well as at the second code rate. In the video data sending stage, however, the video data compression-encoded at the corresponding code rate is sent for each video area according to the partition parameter.
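The dual-rate scheme in this embodiment, encoding everything at both rates but transmitting only one stream per region, can be sketched as follows; the data layout and names are assumptions for illustration only.

```python
def select_streams(encoded, visible):
    """encoded: {image_name: {'high': bytes, 'low': bytes}} holds both
    pre-encoded versions of each partition; pick one stream per image
    for transmission according to the partition parameter."""
    return {name: (streams["high"] if name in visible else streams["low"])
            for name, streams in encoded.items()}

# Toy payloads: the high-rate stream is larger than the low-rate one.
encoded = {name: {"high": b"H" * 80, "low": b"L" * 10} for name in "abcdef"}
to_send = select_streams(encoded, visible={"c", "d"})
print(len(to_send["c"]), len(to_send["a"]))  # 80 10
```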
Fig. 3a is a schematic diagram of the images in the panoramic video picture corresponding to the range of the user's perspective in Fig. 2a, and Fig. 3b is a schematic diagram of the images in the panoramic video picture corresponding to the range of the user's perspective in Fig. 2b.
In this embodiment of the application, the images in the panoramic video picture include: the a image, b image, c image, d image, e image and f image. The a image, b image, c image, d image, e image and f image are stitched together to form the panoramic video picture. The a image is mapped to the A partition, the b image to the B partition, the c image to the C partition, the d image to the D partition, the e image to the E partition, and the f image to the F partition.
As shown in Fig. 3a, the range of the user's perspective corresponds to the two images c and d in the panoramic video picture; after the user perspective is switched, as shown in Fig. 3b, the range of the user's perspective corresponds to the two images b and c in the panoramic video picture.
As an optional implementation, the first code rate is greater than the second code rate; the first video area includes the images in the panoramic video picture corresponding to the range of the user's perspective, and the second video area includes any one or more of the other images in the panoramic video picture. For example, as shown in Fig. 3a, the first video area includes the c image and the d image, and the images in the first video area are processed at the first code rate, i.e. the video data output for the first video area is video data compression-encoded at the first code rate; the second video area includes any one or more of the a image, b image, e image and f image, and the images in the second video area are processed at the second code rate, i.e. the video data output for the second video area is video data compression-encoded at the second code rate.
As an optional implementation, the second code rate is greater than the first code rate; the second video area includes the images in the panoramic video picture corresponding to the range of the user's perspective, and the first video area includes any one or more of the other images in the panoramic video picture. The images in the first video area are processed at the first code rate; the images in the second video area are processed at the second code rate.
According to the panoramic video processing method provided by this embodiment of the application, the terminal device receives the partition parameter sent by the display device, determines the first video area and the second video area in the panoramic video picture according to the partition parameter, processes the video data in the first video area at the first code rate, and processes the video data in the second video area at the second code rate, so that the image corresponding to the area the user is observing is processed at a high code rate and the image corresponding to the area the user is not observing is processed at a low code rate. This ensures, under limited channel bandwidth, real-time image transmission between the display device and the terminal device, improves the clarity of the video images viewed by the user, and enhances the user experience.
As an optional implementation, processing the first video area at the first code rate is specifically: compression-encoding the video data in the first video area at the first code rate; processing the second video area at the second code rate is specifically: compression-encoding the video data in the second video area at the second code rate.
As an optional implementation, processing the first video area at the first code rate is specifically: determining the first group of cameras that captures the first video area, compression-encoding the video data captured by the first group of cameras at both the first code rate and the second code rate, and selecting the video data compression-encoded at the first code rate as the video data to be transmitted; processing the second video area at the second code rate is specifically: determining the second group of cameras that captures the second video area, compression-encoding the video data captured by the second group of cameras at both the first code rate and the second code rate, and selecting the video data compression-encoded at the second code rate as the video data to be transmitted. For example, as shown in Fig. 3a, the first group of cameras includes the cameras that capture the c image and the d image, and the video data captured by the first group of cameras and compression-encoded at the first code rate is used as the video data to be transmitted; the second group of cameras includes the cameras that capture any one or more of the a image, b image, e image and f image, and the video data captured by the second group of cameras and compression-encoded at the second code rate is used as the video data to be transmitted.
The terminal device can send the compression-encoded video code stream, or the video data to be transmitted, to the display device by wireless transmission; wireless transmission includes but is not limited to WiFi, Bluetooth, ZigBee, mobile data communication and other wireless transmission technologies. The terminal device can also send the compression-encoded video code stream to the display device by wired transmission.
As an optional implementation, processing the first video area at the first code rate is specifically: determining the first group of cameras that captures the first video area and setting the output code rate of the first group of cameras to the first code rate; processing the second video area at the second code rate is specifically: determining the second group of cameras that captures the second video area and setting the output code rate of the second group of cameras to the second code rate. For example, as shown in Fig. 3a, the first group of cameras includes the cameras that capture the c image and the d image, and their output code rate is set to the first code rate; the second group of cameras includes the cameras that capture any one or more of the a image, b image, e image and f image, and their output code rate is set to the second code rate.
To further reduce the load on the channel bandwidth, assuming the range of the user's perspective corresponds to the first video area in the panoramic video picture, the second group of cameras that captures the second video area can be turned off; assuming the range of the user's perspective corresponds to the second video area, the first group of cameras that captures the first video area can be turned off.
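The per-camera configuration implied here, setting the output rate per group and optionally powering down the non-facing group, could look like the following sketch; the configuration fields and the power_save flag are hypothetical.

```python
def camera_settings(cameras, facing, high_rate, low_rate, power_save=False):
    """Build a per-camera configuration: cameras covering the user-facing
    area output at high_rate; the rest output at low_rate or, in
    power-save mode, are turned off to free channel bandwidth."""
    cfg = {}
    for cam in cameras:
        if cam in facing:
            cfg[cam] = {"enabled": True, "rate": high_rate}
        else:
            cfg[cam] = {"enabled": not power_save, "rate": low_rate}
    return cfg

cfg = camera_settings("abcdef", {"c", "d"}, 8_000_000, 1_000_000,
                      power_save=True)
print(cfg["c"]["rate"], cfg["a"]["enabled"])  # 8000000 False
```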
As shown in Fig. 4, an embodiment of the present application provides a panoramic video processing device 40, which includes: a parameter receiving module 41, an area determining module 42, a first code rate processing module 43 and a second code rate processing module 44.
The parameter receiving module 41 is configured to receive a partition parameter sent by a display device.
The partition parameter is the partition positions of the panoramic video picture determined according to the user perspective of the display device.
The area determining module 42 is configured to determine, according to the partition parameter, a first video area and a second video area in the panoramic video picture.
The first code rate processing module 43 is configured to process the video data in the first video area at a first code rate.
Specifically, the first code rate processing module 43 is configured to compression-encode the video data in the first video area at the first code rate; or, the first code rate processing module 43 is configured to determine the first group of cameras that captures the first video area and set the output code rate of the first group of cameras to the first code rate.
Or, the first code rate processing module 43 is configured to compression-encode the video data in each area at the first code rate; further, the first code rate processing module 43 is also configured to determine the first group of cameras that captures the first video area and use the video data captured by the first group of cameras and compression-encoded at the first code rate as the video data to be transmitted.
The second code rate processing module 44 is configured to process the video data in the second video area at a second code rate.
Specifically, the second code rate processing module 44 is configured to compression-encode the video data in the second video area at the second code rate; or, the second code rate processing module 44 is configured to determine the second group of cameras that captures the second video area and set the output code rate of the second group of cameras to the second code rate.
Or, the second code rate processing module 44 is configured to compression-encode the video data in each area at the second code rate; further, the second code rate processing module 44 is also configured to determine the second group of cameras that captures the second video area and use the video data captured by the second group of cameras and compression-encoded at the second code rate as the video data to be transmitted.
In this embodiment of the application, for explanations of the parameter receiving module 41, the area determining module 42, the first code rate processing module 43 and the second code rate processing module 44, please refer to the explanations of step 11, step 12 and step 13 above.
According to the panoramic video processing device provided by this embodiment of the application, the parameter receiving module 41 receives the partition parameter sent by the display device, the area determining module 42 determines the first video area and the second video area in the panoramic video picture according to the partition parameter, the first code rate processing module 43 processes the video data in the first video area at the first code rate, and the second code rate processing module 44 processes the video data in the second video area at the second code rate, so that the image corresponding to the area the user is observing is processed at a high code rate and the image corresponding to the area the user is not observing is processed at a low code rate. This ensures, under limited channel bandwidth, real-time image transmission between the display device and the terminal device, improves the clarity of the video images viewed by the user, and enhances the user experience.
As shown in Fig. 5, another embodiment of the present application provides a panoramic video processing device 50, which includes: a parameter receiving module 51, an area determining module 52, a first code rate processing module 53, a second code rate processing module 54 and a code stream sending module 55.
The parameter receiving module 51 is configured to receive a partition parameter sent by a display device.
The area determining module 52 is configured to determine, according to the partition parameter, a first video area and a second video area in the panoramic video picture.
The first code rate processing module 53 is configured to compression-encode the video data in the first video area at the first code rate. Or, the first code rate processing module 53 is configured to compression-encode the video data in each area at the first code rate, and is also configured to determine the first group of cameras that captures the first video area and use the video data captured by the first group of cameras and compression-encoded at the first code rate as the video data to be transmitted.
The second code rate processing module 54 is configured to compression-encode the video data in the second video area at the second code rate. Or, the second code rate processing module 54 is configured to compression-encode the video data in each area at the second code rate, and is also configured to determine the second group of cameras that captures the second video area and use the video data captured by the second group of cameras and compression-encoded at the second code rate as the video data to be transmitted.
The code stream sending module 55 is configured to send the compression-encoded video code stream to the display device by wireless transmission. Or, the code stream sending module 55 may be replaced by a video data sending module configured to send the video data to be transmitted to the display device by wireless transmission.
In this embodiment of the application, for explanations of the parameter receiving module 51, the area determining module 52, the first code rate processing module 53, the second code rate processing module 54 and the code stream sending module 55, please refer to the explanations of step 11, step 12 and step 13 above.
According to the panoramic video processing device provided by this embodiment of the application, the parameter receiving module 51 receives the partition parameter sent by the display device, the area determining module 52 determines the first video area and the second video area in the panoramic video picture according to the partition parameter, the first code rate processing module 53 compression-encodes the video data in the first video area at the first code rate, the second code rate processing module 54 compression-encodes the video data in the second video area at the second code rate, and the code stream sending module 55 sends the compression-encoded video code stream to the display device by wireless transmission; or, the first code rate processing module 53 uses the video data in the first video area compression-encoded at the first code rate as the video data to be transmitted, the second code rate processing module 54 uses the video data in the second video area compression-encoded at the second code rate as the video data to be transmitted, and the video data sending module sends the video data to be transmitted to the display device by wireless transmission. This ensures, under limited channel bandwidth, real-time wireless image transmission between the display device and the terminal device, improves the clarity of the video images viewed by the user, and enhances the user experience.
As shown in Fig. 6, an embodiment of the present application provides a panoramic video processing method, which is performed by a display device; the method includes:
Step 61: send a partition parameter to a terminal device.
Step 62: receive a panoramic video picture fed back by the terminal device according to the partition parameter, where the panoramic video picture includes a first video area and a second video area, the video data in the first video area is processed at a first code rate, and the video data in the second video area is processed at a second code rate.
In this embodiment of the application, taking the terminal device being an aircraft and the display device being a VR head-mounted display device as an example, the panoramic camera module captures a panoramic video picture, and the terminal device feeds back the panoramic video picture to the VR head-mounted display device according to the partition parameter sent by the VR head-mounted display device.
When the user wears the VR head-mounted display device, in order to view the three-dimensional effect of the panoramic video picture, the following steps are further included after step 62:
constructing a sphere model in the virtual three-dimensional space;
mapping the panoramic video picture onto the sphere surface of the sphere model to obtain a spherical video picture presented by the sphere model.
As shown in Fig. 2a, the VR head-mounted display device can construct a sphere model 21 in the virtual three-dimensional space and then map the panoramic video picture onto the sphere surface of the sphere model 21 to obtain a spherical video picture presented by the sphere model 21, thereby simulating the two-dimensional panoramic video picture as a three-dimensional spherical video picture for presentation to the user.
Switching the user's perspective makes it possible to present different areas of the spherical video picture to the user. The switching of the user's perspective can be implemented in ways including, but not limited to, the following two:
In the first way, the user wearing the VR head-mounted display device rotates the head, and the gyroscope of the VR head-mounted display device detects the rotation of the user's head to determine the orientation of the user's perspective, so as to present to the user the area of the spherical video picture that the user's perspective faces; for example, the area faced by the user's perspective is switched from that shown in Fig. 2a to that shown in Fig. 2b.
In the second way, the user wearing the VR head-mounted display device operates the joystick or buttons on a remote controller, and the VR head-mounted display device presents different areas of the spherical video picture to the user according to the swing of the joystick or the triggering of the buttons. The remote controller and the VR head-mounted display device can communicate by wireless or wired transmission.
It should be noted that the switching of the user's perspective may adopt the first way, the second way, or switching between the first and second ways, so that the user can choose either way.
In this embodiment of the application, the sphere surface of the sphere model may be divided into multiple partitions, and the number of partitions and the area of each partition may be adaptively adjusted according to the viewing angle of the display screen of the display device. In Figs. 2a and 2b, the sphere surface of the sphere model 21 is divided into six partitions as an example: the A partition, B partition, C partition, D partition, E partition and F partition; one partition can be used to present the regional picture captured by one camera of the panoramic camera module. At a fixed moment, the range of the user's perspective can involve one to three partitions; the partition positions involved in the range of the user's perspective can be calculated from the orientation of the user's perspective, and the partition numbers involved can then be determined.
As an optional implementation, the partition parameter may be the partition positions of the panoramic video picture determined according to the user perspective of the display device. As shown in Fig. 2a, the range of the user's perspective involves the two areas of the C partition and the D partition; after the user perspective is switched, as shown in Fig. 2b, the range of the user's perspective involves the two areas of the B partition and the C partition.
As an optional implementation, the partition parameter includes first identification information and second identification information. After step 62, the following steps are further included:
acquiring the first identification information of the image corresponding to the first video area and the second identification information of the image corresponding to the second video area in the spherical video picture;
integrating the first identification information and the second identification information into the partition parameter.
In this optional implementation, the image in the spherical video picture corresponding to the first video area in the panoramic video picture has the first identification information, and the image in the spherical video picture corresponding to the second video area has the second identification information, so the first video area and the second video area in the panoramic video picture can be determined from the first identification information and the second identification information.
Fig. 3a is a schematic diagram of the images in the panoramic video picture corresponding to the range of the user's perspective in Fig. 2a, and Fig. 3b is a schematic diagram of the images in the panoramic video picture corresponding to the range of the user's perspective in Fig. 2b.
In this embodiment of the application, the images in the panoramic video picture include: the a image, b image, c image, d image, e image and f image. The a image, b image, c image, d image, e image and f image are stitched together to form the panoramic video picture. The a image is mapped to the A partition, the b image to the B partition, the c image to the C partition, the d image to the D partition, the e image to the E partition, and the f image to the F partition.
Further, the method also includes the step of detecting whether the partition positions of the panoramic video picture corresponding to the user perspective have changed, and if so, re-sending the partition parameter to the terminal device.
As shown in Fig. 3a, the range of the user's perspective corresponds to the two images c and d in the panoramic video picture; after the user perspective is switched, as shown in Fig. 3b, the range of the user's perspective corresponds to the two images b and c in the panoramic video picture. At this time, the first video area and the second video area have effectively changed, so the first identification information and the second identification information included in the partition parameter change correspondingly; after the new first identification information and second identification information are integrated into a new partition parameter, it is re-sent to the terminal device.
As an optional implementation, the first code rate is greater than the second code rate; the first video area includes the images in the panoramic video picture corresponding to the range of the user's perspective, and the second video area includes any one or more of the other images in the panoramic video picture. For example, as shown in Fig. 3a, the first video area includes the c image and the d image, and the images in the first video area are processed at the first code rate; the second video area includes any one or more of the a image, b image, e image and f image, and the images in the second video area are processed at the second code rate.
As an optional implementation, the second code rate is greater than the first code rate; the second video area includes the images in the panoramic video picture corresponding to the range of the user's perspective, and the first video area includes any one or more of the other images in the panoramic video picture. The images in the first video area are processed at the first code rate; the images in the second video area are processed at the second code rate.
After step 62, the following step is further included:
displaying the image corresponding to the first video area or the image corresponding to the second video area in the spherical video picture.
According to the panoramic video processing method provided by this embodiment of the application, a partition parameter is sent to the terminal device, and a panoramic video picture fed back by the terminal device according to the partition parameter is received, where the panoramic video picture includes a first video area and a second video area, the video data in the first video area is processed at the first code rate, and the video data in the second video area is processed at the second code rate. This ensures, under limited channel bandwidth, real-time image transmission between the display device and the terminal device, improves the clarity of the video images viewed by the user, and enhances the user experience.
As shown in Fig. 7, an embodiment of the present application provides a panoramic video processing device 70, which includes: a parameter sending module 71 and a picture receiving module 72.
The parameter sending module 71 is configured to send a partition parameter to a terminal device.
The picture receiving module 72 is configured to receive a panoramic video picture fed back by the terminal device according to the partition parameter, where the panoramic video picture includes a first video area and a second video area, the video data in the first video area is processed at a first code rate, and the video data in the second video area is processed at a second code rate.
In this embodiment of the application, for explanations of the parameter sending module 71 and the picture receiving module 72, please refer to the explanations of step 61 and step 62 above.
According to the panoramic video processing device provided by this embodiment of the application, the parameter sending module sends the partition parameter to the terminal device, and the picture receiving module receives the panoramic video picture fed back by the terminal device according to the partition parameter, where the panoramic video picture includes a first video area and a second video area, the video data in the first video area is processed at the first code rate, and the video data in the second video area is processed at the second code rate. This ensures, under limited channel bandwidth, real-time image transmission between the display device and the terminal device, improves the clarity of the video images viewed by the user, and enhances the user experience.
As shown in Fig. 8, another embodiment of the present application provides a panoramic video processing device 80, which includes: a parameter sending module 81, a picture receiving module 82, a model building module 83, a picture mapping module 84, an identification information acquiring module 85, an integration module 86 and a display module 87.
The parameter sending module 81 is configured to send a partition parameter to a terminal device.
The picture receiving module 82 is configured to receive a panoramic video picture fed back by the terminal device according to the partition parameter, where the panoramic video picture includes a first video area and a second video area, the video data in the first video area is processed at a first code rate, and the video data in the second video area is processed at a second code rate.
The model building module 83 is configured to construct a sphere model in the virtual three-dimensional space.
The picture mapping module 84 is configured to map the panoramic video picture onto the sphere surface of the sphere model to obtain a spherical video picture presented by the sphere model.
The identification information acquiring module 85 is configured to acquire the first identification information of the image corresponding to the first video area and the second identification information of the image corresponding to the second video area in the spherical video picture.
The integration module 86 is configured to integrate the first identification information and the second identification information into the partition parameter.
It can be understood that other embodiments further include a detection module configured to detect whether the partition positions of the panoramic video picture corresponding to the user perspective have changed, and if so, the partition parameter is re-sent to the terminal device through the parameter sending module 81.
As shown in Fig. 2a, the range of the user's perspective involves the two areas of the C partition and the D partition; after the user perspective is switched, as shown in Fig. 2b, the range of the user's perspective switches to the two areas of the B partition and the C partition, i.e. the first video area and the second video area have changed, so the first identification information and the second identification information included in the partition parameter change correspondingly; after the integration module 86 integrates the new first identification information and second identification information into a new partition parameter, it is re-sent to the terminal device through the parameter sending module 81.
The display module 87 is configured to display the image corresponding to the first video area or the image corresponding to the second video area in the spherical video picture.
In this embodiment of the application, for explanations of the parameter sending module 81, the picture receiving module 82, the model building module 83, the picture mapping module 84, the identification information acquiring module 85, the integration module 86 and the display module 87, please refer to the explanations of step 61 and step 62 above.
As shown in Fig. 9, an embodiment of the present application provides a panoramic video processing system 90, which includes: a display device 91 and a terminal device 92. The terminal device may be an aircraft, a camera, a mobile phone, a tablet computer, etc., and the display device may be a VR head-mounted display device, a television, a projection device, etc.
The display device 91 is configured to send a partition parameter to the terminal device.
As an optional implementation, the partition parameter may be the partition positions of the panoramic video picture determined according to the user perspective of the display device.
As an optional implementation, the partition parameter includes first identification information and second identification information. Determining the partition parameter specifically includes:
acquiring the first identification information of the image corresponding to the first video area and the second identification information of the image corresponding to the second video area in the spherical video picture;
integrating the first identification information and the second identification information into the partition parameter.
The terminal device 92 is configured to determine, according to the partition parameter, a first video area and a second video area in the panoramic video picture, process the video data in the first video area at a first code rate, and process the video data in the second video area at a second code rate.
As an optional implementation, the first code rate is greater than the second code rate; the first video area includes the images in the panoramic video picture corresponding to the range of the user's perspective, and the second video area includes any one or more of the other images in the panoramic video picture. For example, as shown in Fig. 3a, the first video area includes the c image and the d image, and the images in the first video area are processed at the first code rate; the second video area includes any one or more of the a image, b image, e image and f image, and the images in the second video area are processed at the second code rate.
As an optional implementation, the second code rate is greater than the first code rate; the second video area includes the images in the panoramic video picture corresponding to the range of the user's perspective, and the first video area includes any one or more of the other images in the panoramic video picture. The images in the first video area are processed at the first code rate; the images in the second video area are processed at the second code rate.
According to the panoramic video processing system provided by this embodiment of the application, the display device sends a partition parameter to the terminal device; the terminal device determines the first video area and the second video area in the panoramic video picture according to the partition parameter, processes the video data in the first video area at the first code rate, and processes the video data in the second video area at the second code rate. This ensures, under limited channel bandwidth, real-time image transmission between the display device and the terminal device, improves the clarity of the video images viewed by the user, and enhances the user experience.
Those of ordinary skill in the art can understand that all or part of the steps in the methods of the embodiments can be completed by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium, and the storage medium may include: a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, etc.
The above are only preferred embodiments of the present application and are not intended to limit it; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application shall be included within its scope of protection.

Claims (28)

  1. 一种全景视频的处理方法,其特征在于,包括:
    接收显示设备发送的分区参数;
    根据所述分区参数,确定全景视频画面中第一视频区域和第二视频区域;
    对所述第一视频区域内的视频数据按照第一码率处理,对所述第二视频区域内的视频数据按照第二码率处理。
  2. 根据权利要求1所述的全景视频的处理方法,其特征在于,所述分区参数为根据所述显示设备的用户视角所确定的全景视频画面的分区位置。
  3. 根据权利要求1或2所述的全景视频的处理方法,其特征在于,所述对所述第一视频区域按照第一码率处理,对所述第二视频区域按照第二码率处理,具体为:
    对所述第一视频区域内的视频数据按照第一码率进行压缩编码;
    对所述第二视频区域内的视频数据按照第二码率进行压缩编码。
  4. 根据权利要求3所述的全景视频的处理方法,其特征在于,还包括:
    将压缩编码后的视频码流通过无线传输的方式发送至所述显示设备。
  5. 根据权利要求1或2所述的全景视频的处理方法,其特征在于,所述对所述第一视频区域按照第一码率处理,对所述第二视频区域按照第二码率处理,具体为:
    确定拍摄第一视频区域的第一组摄像头,将所述第一组摄像头的输出码率设置为第一码率;以及,
    确定拍摄第二视频区域的第二组摄像头,将所述第二组摄像头的输出码率设置为第二码率。
  6. 根据权利要求1或2所述的全景视频的处理方法,其特征在于,所述对所述第一视频区域按照第一码率处理,对所述第二视频区域按照 第二码率处理,具体为:
    确定拍摄第一视频区域的第一组摄像头,将第一组摄像头拍摄的视频数据分别按照第一码率和第二码率进行压缩编码后,选择以第一码率压缩编码的视频数据作为需传输的视频数据;以及,
    确定拍摄第二视频区域的第二组摄像头,将第二组摄像头拍摄的视频数据分别按照第一码率和第二码率进行压缩编码后,选择以第二码率压缩编码的视频数据作为需传输的视频数据。
  7. 根据权利要求6所述的全景视频的处理方法,其特征在于,还包括:
    将需传输的视频数据通过无线传输的方式发送至所述显示设备。
  8. 一种全景视频的处理装置,其特征在于,所述装置包括:
    参数接收模块,用于接收显示设备发送的分区参数;
    区域确定模块,用于根据所述分区参数,确定全景视频画面中第一视频区域和第二视频区域;
    第一码率处理模块,用于对所述第一视频区域内的视频数据按照第一码率处理;
    第二码率处理模块,用于对所述第二视频区域内的视频数据按照第二码率处理。
  9. 根据权利要求8所述的装置,其特征在于,所述分区参数为根据所述显示设备的用户视角所确定的全景视频图像画面的分区位置。
  10. 根据权利要求8或9所述的装置,其特征在于,
    所述第一码率处理模块用于对所述第一视频区域内的视频数据按照第一码率进行压缩编码;
    所述第二码率处理模块用于对所述第二视频区域内的视频数据按照第二码率进行压缩编码。
  11. 根据权利要求10所述的装置,其特征在于,所述装置还包括:
    码流发送模块,用于将压缩编码后的视频码流通过无线传输的方式发送至所述显示设备。
  12. 根据权利要求8或9所述的装置,其特征在于,
    所述第一码率处理模块用于确定拍摄第一视频区域的第一组摄像头,将所述第一组摄像头的输出码率设置为第一码率;
    所述第二码率处理模块用于确定拍摄第二视频区域的第二组摄像头,将所述第二组摄像头的输出码率设置为第二码率。
  13. 根据权利要求8或9所述的装置,其特征在于,
    所述第一码率处理模块用于对各个区域内的视频数据按照第一码率进行压缩编码,所述第一码率处理模块还用于确定拍摄第一视频区域的第一组摄像头,将第一组摄像头拍摄并以第一码率压缩编码的视频数据作为需传输的视频数据;
    所述第二码率处理模块用于对各个区域内的视频数据按照第二码率进行压缩编码,第二码率处理模块还用于确定拍摄第二视频区域的第二组摄像头,将第二组摄像头拍摄并以第二码率压缩编码的视频数据作为需传输的视频数据。
  14. 根据权利要求13所述的装置,其特征在于,所述装置还包括:
    视频数据发送模块,用于将需传输的视频数据通过无线传输的方式发送至所述显示设备。
  15. A panoramic video processing method, comprising:
    sending a partition parameter to a terminal device; and
    receiving a panoramic video picture fed back by the terminal device according to the partition parameter, wherein the panoramic video picture comprises a first video region and a second video region, video data in the first video region is processed at a first code rate, and video data in the second video region is processed at a second code rate.
  16. The method according to claim 15, wherein after the step of receiving the panoramic video picture fed back by the terminal device according to the partition parameter, the method further comprises:
    constructing a sphere model in a virtual three-dimensional space; and
    mapping the panoramic video picture onto the spherical surface of the sphere model to obtain a spherical video picture presented by the sphere model.
  17. The method according to claim 16, wherein the partition parameter is a partition position of the panoramic video picture determined according to a user view angle.
  18. The method according to claim 16, wherein after the step of mapping the panoramic video picture onto the spherical surface of the sphere model to obtain the spherical video picture presented by the sphere model, the method further comprises:
    acquiring first identification information of an image corresponding to the first video region in the spherical video picture and second identification information of an image corresponding to the second video region; and
    integrating the first identification information and the second identification information into the partition parameter.
  19. The method according to any one of claims 16 to 18, wherein after the step of receiving the panoramic video picture fed back by the terminal device according to the partition parameter, the method further comprises:
    displaying the image corresponding to the first video region or the image corresponding to the second video region in the spherical video picture.
  20. The method according to any one of claims 16 to 19, further comprising:
    detecting whether the partition position of the panoramic video picture corresponding to the user view angle changes, and if so, re-sending the partition parameter to the terminal device.
  21. A panoramic video processing apparatus, comprising:
    a parameter sending module, configured to send a partition parameter to a terminal device; and
    a picture receiving module, configured to receive a panoramic video picture fed back by the terminal device according to the partition parameter, wherein the panoramic video picture comprises a first video region and a second video region, video data in the first video region is processed at a first code rate, and video data in the second video region is processed at a second code rate.
  22. The apparatus according to claim 21, further comprising:
    a model constructing module, configured to construct a sphere model in a virtual three-dimensional space; and
    a picture mapping module, configured to map the panoramic video picture onto the spherical surface of the sphere model to obtain a spherical video picture presented by the sphere model.
  23. The apparatus according to claim 22, wherein the partition parameter is a partition position of the panoramic video picture determined according to a user view angle.
  24. The apparatus according to claim 22, further comprising:
    an identification information acquiring module, configured to acquire first identification information of an image corresponding to the first video region in the spherical video picture and second identification information of an image corresponding to the second video region; and
    an integrating module, configured to integrate the first identification information and the second identification information into the partition parameter.
  25. The apparatus according to any one of claims 21 to 24, further comprising:
    a detecting module, configured to detect whether the partition position of the panoramic video picture corresponding to the user view angle changes, and if so, re-send the partition parameter to the terminal device through the parameter sending module.
  26. The apparatus according to any one of claims 22 to 24, further comprising:
    a display module, configured to display the image corresponding to the first video region or the image corresponding to the second video region in the spherical video picture.
  27. A panoramic video processing system, comprising:
    a display device, configured to send a partition parameter to a terminal device; and
    the terminal device, configured to determine, according to the partition parameter, a first video region and a second video region in a panoramic video picture, process video data in the first video region at a first code rate, and process video data in the second video region at a second code rate.
  28. A computer-readable storage medium storing a computer program, wherein when the computer program is executed by a processor, the steps of the panoramic video processing method according to any one of claims 1 to 7 or claims 15 to 20 are implemented.
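The two-rate scheme of claims 1–6 and 15–20 — a display device reports a partition parameter derived from the user's view angle, and the terminal processes the viewed (first) region at a high code rate and the remaining (second) region at a low one — can be illustrated with a small sketch. This assumes an equirectangular panorama split into a tile grid; the grid size, function names, and bit-rate figures are illustrative assumptions, not taken from the claims.

```python
def viewport_to_tile(yaw_deg, pitch_deg, grid_cols, grid_rows):
    """Map a view direction (yaw in [-180, 180), pitch in [-90, 90]) to the
    (row, col) tile index of an equirectangular panorama split into a
    grid_rows x grid_cols grid of tiles."""
    col = int((yaw_deg + 180.0) / 360.0 * grid_cols) % grid_cols
    row = min(int((90.0 - pitch_deg) / 180.0 * grid_rows), grid_rows - 1)
    return row, col

def partition_regions(viewport_tiles, grid_cols, grid_rows):
    """Split the full tile grid into the first video region (tiles named by
    the partition parameter, i.e. covered by the viewport) and the second
    video region (everything else)."""
    all_tiles = {(r, c) for r in range(grid_rows) for c in range(grid_cols)}
    first = set(viewport_tiles) & all_tiles
    return first, all_tiles - first

def assign_rates(first_region, second_region, first_rate_kbps, second_rate_kbps):
    """Build a per-tile rate map: first region at the first (high) code rate,
    second region at the second (low) code rate."""
    rates = {tile: first_rate_kbps for tile in first_region}
    rates.update({tile: second_rate_kbps for tile in second_region})
    return rates
```

In a real pipeline the resulting rate map would drive per-region encoder settings (as in claim 3) or select among streams pre-encoded at both rates (as in claim 6); here it only shows how the partition parameter separates the two regions so that bandwidth concentrates on what the user is looking at.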
PCT/CN2017/107376 2016-10-26 2017-10-23 Panoramic video processing method, device and system WO2018077142A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/389,556 US20190246104A1 (en) 2016-10-26 2019-04-19 Panoramic video processing method, device and system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610952287.7 2016-10-26
CN201610952287.7A CN106454321A (zh) 2016-10-26 2016-10-26 Panoramic video processing method, device and system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/389,556 Continuation US20190246104A1 (en) 2016-10-26 2019-04-19 Panoramic video processing method, device and system

Publications (1)

Publication Number Publication Date
WO2018077142A1 (zh) 2018-05-03

Family

ID=58179315

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/107376 WO2018077142A1 (zh) 2016-10-26 2017-10-23 Panoramic video processing method, device and system

Country Status (3)

Country Link
US (1) US20190246104A1 (zh)
CN (1) CN106454321A (zh)
WO (1) WO2018077142A1 (zh)

Families Citing this family (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106454321A (zh) * 2016-10-26 2017-02-22 深圳市道通智能航空技术有限公司 Panoramic video processing method, device and system
KR102598082B1 * 2016-10-28 2023-11-03 삼성전자주식회사 Image display apparatus, mobile apparatus and operating method thereof
CN108513119A (zh) * 2017-02-27 2018-09-07 阿里巴巴集团控股有限公司 Image mapping and processing method, apparatus and machine-readable medium
CN106954093B (zh) * 2017-03-15 2020-12-04 北京小米移动软件有限公司 Panoramic video processing method, apparatus and system
CN106911902B (zh) * 2017-03-15 2020-01-07 微鲸科技有限公司 Video image transmission method, receiving method and apparatus
CN108668138B (zh) * 2017-03-28 2021-01-29 华为技术有限公司 Video downloading method and user terminal
CN107123080A (zh) * 2017-03-29 2017-09-01 北京疯景科技有限公司 Method and apparatus for displaying panoramic content
CN106961622B (zh) * 2017-03-30 2020-09-25 联想(北京)有限公司 Display processing method and apparatus
CN107087145A (zh) * 2017-06-02 2017-08-22 深圳市本道科技有限公司 Method and apparatus for 360-degree panoramic video display from multiple video channels
US10477105B2 (en) 2017-06-08 2019-11-12 Futurewei Technologies, Inc. Method and system for transmitting virtual reality (VR) content
CN109218836B (zh) * 2017-06-30 2021-02-26 华为技术有限公司 Video processing method and device thereof
CN109429062B (zh) * 2017-08-22 2023-04-11 阿里巴巴集团控股有限公司 Pyramid model processing method and apparatus, and image encoding method and apparatus
CN107396077B (zh) * 2017-08-23 2022-04-08 深圳看到科技有限公司 Virtual reality panoramic video stream projection method and device
CN107395984A (zh) * 2017-08-25 2017-11-24 北京佰才邦技术有限公司 Video transmission method and apparatus
CN107529064A (zh) * 2017-09-04 2017-12-29 北京理工大学 Adaptive encoding method based on VR terminal feedback
CN109698952B (zh) * 2017-10-23 2020-09-29 腾讯科技(深圳)有限公司 Panoramic video image playing method and apparatus, storage medium and electronic apparatus
CN109756540B (zh) * 2017-11-06 2021-09-14 中国移动通信有限公司研究院 Panoramic video transmission method, apparatus and computer-readable storage medium
CN108401183A (zh) * 2018-03-06 2018-08-14 深圳市赛亿科技开发有限公司 Implementation method and system for VR panoramic video display, and VR server
CN108833929A (zh) * 2018-06-26 2018-11-16 曜宇航空科技(上海)有限公司 Panoramic video playing method and playing system
CN109634427B (zh) * 2018-12-24 2022-06-14 陕西圆周率文教科技有限公司 Head-tracking-based AR glasses control system and control method
CN110266714B (zh) * 2019-06-28 2020-04-21 合肥工业大学 QoE-driven adaptive VR video acquisition and transmission method
CN112399187A (zh) * 2019-08-13 2021-02-23 华为技术有限公司 Data transmission method and apparatus
CN112541858A (zh) * 2019-09-20 2021-03-23 华为技术有限公司 Video image enhancement method, apparatus, device, chip and storage medium
CN112752032B (zh) * 2019-10-31 2023-01-06 华为技术有限公司 Panoramic video generation method, video capture method and related apparatus
CN111447457A (zh) * 2020-03-25 2020-07-24 咪咕文化科技有限公司 Live video processing method and apparatus, and storage medium
CN113518249B (zh) * 2020-04-10 2023-03-10 华为技术有限公司 Remote image processing method and apparatus
CN115437390A (zh) * 2021-06-02 2022-12-06 影石创新科技股份有限公司 Control method and control system for unmanned aerial vehicle
CN117768669A (zh) * 2022-09-19 2024-03-26 腾讯科技(深圳)有限公司 Data transmission method and apparatus, electronic device and storage medium

Citations (5)

Publication number Priority date Publication date Assignee Title
CN103458238A (zh) * 2012-11-14 2013-12-18 深圳信息职业技术学院 Scalable video code rate control method and apparatus incorporating visual perception
CN104980740A (zh) * 2014-04-08 2015-10-14 富士通株式会社 Image processing method, apparatus and electronic device
US20150312575A1 (en) * 2012-04-16 2015-10-29 New Cinema, LLC Advanced video coding method, system, apparatus, and storage medium
CN105635624A (zh) * 2014-10-27 2016-06-01 华为技术有限公司 Video image processing method, device and system
CN106454321A (zh) * 2016-10-26 2017-02-22 深圳市道通智能航空技术有限公司 Panoramic video processing method, device and system

Family Cites Families (10)

Publication number Priority date Publication date Assignee Title
CN101534436B (zh) * 2008-03-11 2011-02-02 深圳市融创天下科技发展有限公司 Macroblock-level adaptive code rate allocation method for video images
CN102984495A (zh) * 2012-12-06 2013-03-20 北京小米科技有限责任公司 Video image processing method and apparatus
US9804669B2 (en) * 2014-11-07 2017-10-31 Eye Labs, Inc. High resolution perception of content in a wide field of view of a head-mounted display
US10362290B2 (en) * 2015-02-17 2019-07-23 Nextvr Inc. Methods and apparatus for processing content based on viewing information and/or communicating content
CN104767992A (zh) * 2015-04-13 2015-07-08 北京集创北方科技有限公司 Head-mounted display system and low-bandwidth image transmission method
EP3354007B1 (en) * 2015-09-23 2023-03-01 Nokia Technologies Oy Video content selection
US10291910B2 (en) * 2016-02-12 2019-05-14 Gopro, Inc. Systems and methods for spatially adaptive video encoding
US20180027241A1 (en) * 2016-07-20 2018-01-25 Mediatek Inc. Method and Apparatus for Multi-Level Region-of-Interest Video Coding
US10142540B1 (en) * 2016-07-26 2018-11-27 360fly, Inc. Panoramic video cameras, camera systems, and methods that provide data stream management for control and image streams in multi-camera environment with object tracking
US10623634B2 (en) * 2017-04-17 2020-04-14 Intel Corporation Systems and methods for 360 video capture and display based on eye tracking including gaze based warnings and eye accommodation matching


Also Published As

Publication number Publication date
CN106454321A (zh) 2017-02-22
US20190246104A1 (en) 2019-08-08

Similar Documents

Publication Publication Date Title
WO2018077142A1 (zh) Panoramic video processing method, device and system
US10827176B2 (en) Systems and methods for spatially adaptive video encoding
US10841532B2 (en) Rectilinear viewport extraction from a region of a wide field of view using messaging in video transmission
US10757423B2 (en) Apparatus and methods for compressing video content using adaptive projection selection
US11671712B2 (en) Apparatus and methods for image encoding using spatially weighted encoding quality parameters
US20230276054A1 (en) Systems and methods for spatially selective video coding
WO2018014495A1 (zh) Real-time panoramic live-streaming network camera, system and method
US20170272698A1 (en) Portable device capable of generating panoramic file
WO2017219652A1 (zh) Head-mounted display, video output device, video processing method and system
WO2018133589A1 (zh) Aerial photography method, apparatus and unmanned aerial vehicle
CN116134809A (zh) Method and device for transmitting 3D XR media data
US20170201689A1 (en) Remotely controlled communicated image resolution
EP3434021B1 (en) Method, apparatus and stream of formatting an immersive video for legacy and immersive rendering devices
CA3015189A1 (en) Systems and methods for transmitting a high quality video image from a low power sensor
US10565679B2 (en) Imaging device and method
WO2021164082A1 (zh) Method for controlling video data collection, transmission control apparatus and wireless transmission system
JP2020115299A (ja) Virtual space information processing apparatus, method, and program
KR20220001312A (ko) Method and apparatus for controlling transmission and reception of data in a wireless communication system
CN109479147B (zh) Method and technical apparatus for temporal inter-view prediction
KR20200076529A (ko) Region-of-interest tile indexing in virtual reality video streaming
CN117440176A (zh) Method, apparatus, device and medium for video transmission
JP2007249335A (ja) Image transfer using contour data

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 17863394

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17863394

Country of ref document: EP

Kind code of ref document: A1