CN106231317A - Video processing, encoding/decoding method and device, VR terminal, audio/video playback system


Info

Publication number: CN106231317A
Application number: CN201610865440.2A
Authority: CN (China)
Prior art keywords: video data, video, layer video, enhancement layer, terminal
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 李宗辰, 王成军, 陈有鑫, 刘明, 熊张亮, 马权, 邱男
Current assignee: Samsung Electronics China R&D Center; Samsung Electronics Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Samsung Electronics China R&D Center; Samsung Electronics Co Ltd
Application filed by Samsung Electronics China R&D Center, Samsung Electronics Co Ltd
Priority application: CN201610865440.2A

Classifications

    • H04N 19/30: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • G06F 3/012: Head tracking input arrangements
    • H04N 13/332: Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N 19/122: Selection of transform size, e.g. 8x8 or 2x4x8 DCT; selection of sub-band transforms of varying structure or type
    • H04N 21/234327: Processing of video elementary streams involving reformatting operations by decomposing into layers, e.g. base layer and one or more enhancement layers
    • H04N 5/2624: Studio circuits for obtaining an image which is composed of whole input images, e.g. splitscreen
    • H04N 5/265: Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Discrete Mathematics (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

This application discloses a video processing method, an encoding/decoding method and apparatus, a VR terminal, and an audio/video playback system. One embodiment of the video processing method includes: obtaining multiple streams of video data captured by a panoramic video capture device; synthesizing the multiple streams of video data to obtain a panoramic video; encoding the panoramic video to obtain base-layer video data and enhancement-layer video data; and transmitting the base-layer video data and the enhancement-layer video data to a terminal through different transmission channels. This embodiment can provide video data at multiple resolutions, giving users more choice, and, because videos of different resolutions place different demands on the network, it reduces the bandwidth requirement of traditional high-resolution video.

Description

Video processing, encoding/decoding method and device, VR terminal, audio/video playback system
Technical field
The present application relates to the field of video processing, in particular to the field of panoramic video processing, and more particularly to a video processing method, an encoding/decoding method and apparatus, a VR terminal, and an audio/video playback system.
Background
With the popularization of VR (Virtual Reality) in recent years, more and more content providers and VR device vendors are presenting entertainment content to users in VR form. Compared with traditional television and computers, VR devices present scenes to users more realistically, bringing an unprecedented interactive experience.
8K is the highest resolution currently applied in the television and film industry, generally referring to content with a resolution of 7680 × 4320. Compared with the 4K video (3840 × 2160) now actively promoted by major companies, 8K doubles the number of pixels in both the horizontal and vertical directions. However, due to limitations in coding and transmission performance, 8K video has not yet become widespread and has never been used in VR terminals. How to improve the playback resolution of VR terminals has therefore become an urgent problem to be solved.
Summary of the invention
The purpose of the present application is to propose a video processing method, an encoding/decoding method and apparatus, a VR terminal, and an audio/video playback system that solve the technical problem mentioned in the Background section above.
In a first aspect, the present application provides a video processing method, the method comprising: obtaining multiple streams of video data captured by a panoramic video capture device; synthesizing the multiple streams of video data to obtain a panoramic video; encoding the panoramic video to obtain base-layer video data and enhancement-layer video data; and transmitting the base-layer video data and the enhancement-layer video data to a terminal through different transmission channels.
In some embodiments, synthesizing the multiple streams of video data comprises: applying a projective transformation to each frame of each stream to obtain the projection image corresponding to that frame; stitching the projection images that belong to the same video frame to obtain each frame of the panoramic video; and arranging the frames of the panoramic video in frame order to obtain the panoramic video.
In some embodiments, encoding the panoramic video to obtain base-layer video data and enhancement-layer video data comprises: encoding the panoramic video with scalable high-efficiency video coding (SHVC) to obtain the base-layer video data and the enhancement-layer video data, wherein the resolution of the base-layer video data is one quarter of the resolution of the panoramic video, and the resolution of the enhancement-layer video data is the same as the resolution of the panoramic video.
In some embodiments, transmitting the base-layer video data and the enhancement-layer video data to the terminal through different transmission channels comprises: transmitting the base-layer video data to the terminal by broadcast, and transmitting the enhancement-layer video data to the terminal over broadband.
In a second aspect, the present application provides a video decoding method for a virtual reality terminal, the method comprising: obtaining base-layer video data and enhancement-layer video data of a video to be decoded; detecting whether the following conditions are met: the size of the enhancement-layer video data is greater than a first preset value, and the time difference between receiving the base-layer video data and receiving the enhancement-layer video data is less than a second preset value; and, in response to both conditions being met, determining the user's viewport in the virtual reality terminal and decoding the base-layer video data together with the enhancement-layer video data covered by the viewport.
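The two-part gating condition above (enhancement-layer data large enough to be considered complete, and arriving close enough in time to the base layer) can be sketched as follows; the function name and the default threshold values are illustrative assumptions, not values taken from the patent:

```python
def should_decode_enhancement(enh_size_bytes: int,
                              base_arrival_s: float,
                              enh_arrival_s: float,
                              min_enh_size: int = 1024,
                              max_skew_s: float = 0.1) -> bool:
    """Decode the enhancement layer only if it arrived (size above the
    first preset value) and is in sync with the base layer (arrival-time
    difference below the second preset value)."""
    complete = enh_size_bytes > min_enh_size
    in_sync = abs(enh_arrival_s - base_arrival_s) < max_skew_s
    return complete and in_sync
```

If either check fails, the terminal can still play the base layer alone, which is why the two layers are gated rather than decoded unconditionally.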
In some embodiments, the virtual reality terminal comprises a gravity sensor, and determining the user's viewport in the virtual reality terminal comprises: obtaining, from the gravity sensor, a first rotation angle of the virtual reality terminal in the horizontal direction and a second rotation angle in the vertical direction; rotating the center point of a default viewport by the first rotation angle along the horizontal direction and by the second rotation angle along the vertical direction; and taking the rotated viewport as the user's viewport in the virtual reality terminal.
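The viewport determination can be sketched as a simple shift of the default viewport center by the two sensor angles. The coordinate convention (degrees on an equirectangular panorama, horizontal wrap-around, vertical clamping) is an assumption made for illustration; the patent does not fix one:

```python
def viewport_center(default_center=(180.0, 90.0),
                    yaw_deg=0.0, pitch_deg=0.0):
    """Shift the default viewport center by the head-rotation angles
    reported by the gravity sensor: wrap horizontally over the 360-degree
    panorama, clamp vertically to the 0..180-degree range."""
    cx = (default_center[0] + yaw_deg) % 360.0
    cy = min(max(default_center[1] + pitch_deg, 0.0), 180.0)
    return cx, cy
```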
In some embodiments, decoding the base-layer video data together with the enhancement-layer video data covered by the viewport comprises: determining, within the enhancement-layer video data, the sub-enhancement-layer video data covered by the viewport; and decoding the base-layer video data and the sub-enhancement-layer video data.
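Assuming the enhancement layer is divided into a rectangular grid of independently decodable sub-streams (the patent only states that the viewport selects sub-enhancement-layer data; the tiling scheme is an illustrative assumption), the selection step might look like:

```python
def tiles_in_viewport(viewport, tile_grid=(8, 4), frame=(7680, 4320)):
    """Return (row, col) indices of enhancement-layer tiles that a
    rectangular viewport (x, y, w, h in pixels) overlaps; only these
    tiles need full-resolution decoding."""
    vx, vy, vw, vh = viewport
    tw, th = frame[0] / tile_grid[0], frame[1] / tile_grid[1]
    hit = []
    for r in range(tile_grid[1]):
        for c in range(tile_grid[0]):
            x0, y0 = c * tw, r * th
            # standard axis-aligned rectangle overlap test
            if vx < x0 + tw and x0 < vx + vw and vy < y0 + th and y0 < vy + vh:
                hit.append((r, c))
    return hit
```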
In some embodiments, the method further comprises: combining the decoded base-layer video data with the decoded enhancement-layer video data covered by the viewport; rendering the combined video data; and outputting the rendered video.
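A minimal sketch of the combining step, under the assumption that the base layer is upsampled by pixel doubling and the decoded viewport region from the enhancement layer is overlaid on it; a real player would do this on GPU textures rather than Python lists:

```python
def combine(base, enh_region, origin):
    """Upsample the quarter-resolution base layer by 2x pixel doubling,
    then overlay the decoded enhancement-layer region at `origin`
    (row, col). Frames are plain nested lists of pixel values."""
    up = [[px for px in row for _ in (0, 1)] for row in base for _ in (0, 1)]
    r0, c0 = origin
    for i, row in enumerate(enh_region):
        for j, px in enumerate(row):
            up[r0 + i][c0 + j] = px
    return up
```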
In a third aspect, the present application provides a video processing apparatus, the apparatus comprising: a first obtaining unit for obtaining multiple streams of video data captured by a panoramic video capture device; a synthesis unit for synthesizing the multiple streams of video data to obtain a panoramic video; an encoding unit for encoding the panoramic video to obtain base-layer video data and enhancement-layer video data; and a transmission unit for transmitting the base-layer video data and the enhancement-layer video data to a terminal through different transmission channels.
In some embodiments, the synthesis unit comprises: a projection module for applying a projective transformation to each frame of each stream to obtain the corresponding projection image; a stitching module for stitching the projection images that belong to the same video frame to obtain each frame of the panoramic video; and an arrangement module for arranging the frames of the panoramic video in frame order to obtain the panoramic video.
In some embodiments, the encoding unit is further configured to encode the panoramic video with scalable high-efficiency video coding, obtaining the base-layer video data and the enhancement-layer video data, wherein the resolution of the base-layer video data is one quarter of the resolution of the panoramic video, and the resolution of the enhancement-layer video data is the same as the resolution of the panoramic video.
In some embodiments, the transmission unit comprises: a first transmission module for transmitting the base-layer video data to the terminal by broadcast; and a second transmission module for transmitting the enhancement-layer video data to the terminal over broadband.
In a fourth aspect, the present application provides a video decoding apparatus for a virtual reality terminal, the apparatus comprising: a second obtaining unit for obtaining base-layer video data and enhancement-layer video data of a video to be decoded; a detection unit for detecting whether the following conditions are met: the size of the enhancement-layer video data is greater than a first preset value, and the time difference between receiving the base-layer video data and receiving the enhancement-layer video data is less than a second preset value; and a decoding unit for, in response to both conditions being met, determining the user's viewport in the virtual reality terminal and decoding the base-layer video data together with the enhancement-layer video data covered by the viewport.
In some embodiments, the virtual reality terminal comprises a gravity sensor, and the decoding unit comprises: a rotation-angle obtaining module for obtaining, from the gravity sensor, the first rotation angle of the virtual reality terminal in the horizontal direction and the second rotation angle in the vertical direction; a rotation module for rotating the center point of a default viewport by the first rotation angle along the horizontal direction and by the second rotation angle along the vertical direction; and a first determination module for taking the rotated viewport as the user's viewport in the virtual reality terminal.
In some embodiments, the decoding unit comprises: a second determination module for determining, within the enhancement-layer video data, the sub-enhancement-layer video data covered by the viewport; and a decoding module for decoding the base-layer video data and the sub-enhancement-layer video data.
In some embodiments, the apparatus further comprises: a combining unit for combining the decoded base-layer video data with the decoded enhancement-layer video data covered by the viewport; a rendering unit for rendering the combined video data; and an output unit for outputting the rendered video.
In a fifth aspect, the present application provides a virtual reality terminal comprising the video decoding apparatus for a virtual reality terminal described in any of the embodiments above.
In a sixth aspect, the present application provides an audio/video playback system comprising, connected in communication sequence, a panoramic video capture device, a server, and a virtual reality terminal; the server comprises the video processing apparatus described in any of the embodiments above, and the virtual reality terminal comprises the video decoding apparatus for a virtual reality terminal described in any of the embodiments above.
With the video processing and encoding/decoding methods and apparatuses, VR terminal, and audio/video playback system provided by the present application, the panoramic video is encoded into a layered base layer and enhancement layer, which are then transmitted to the terminal through different channels. Video data can thus be provided at multiple resolutions, giving users more choice, and, because videos of different resolutions place different demands on the network, the bandwidth requirement of traditional high-resolution video is reduced. After receiving the base-layer and enhancement-layer video data, the virtual reality terminal first checks whether the enhancement-layer data is complete and whether the time difference between the base-layer and enhancement-layer data is below a preset value; only when both conditions are met does it determine the user's viewport and decode the enhancement-layer video data corresponding to that viewport. This reduces the amount of high-resolution decoding work and shortens the delay when the virtual reality terminal plays high-resolution video.
Brief description of the drawings
Other features, objects, and advantages of the present application will become more apparent from the following detailed description of non-limiting embodiments, made with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which the present application can be applied;
Fig. 2 is a flowchart of an embodiment of the video processing method according to the present application;
Fig. 3 is a flowchart of an embodiment of the video decoding method for a virtual reality terminal according to the present application;
Fig. 4 is a schematic structural diagram of an embodiment of the video processing apparatus according to the present application;
Fig. 5 is a schematic structural diagram of a computer system suitable for implementing the video processing apparatus of the embodiments of the present application;
Fig. 6 is a schematic structural diagram of an embodiment of the video decoding apparatus for a virtual reality terminal according to the present application;
Fig. 7 is a schematic structural diagram of an embodiment of the virtual reality terminal according to the present application;
Fig. 8 is a schematic structural diagram of a computer system suitable for implementing the virtual reality terminal of the embodiments of the present application;
Fig. 9 is a schematic structural diagram of an embodiment of the audio/video playback system according to the present application.
Detailed description of the embodiments
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the related invention and do not limit it. It should also be noted that, for ease of description, the drawings show only the parts relevant to the invention.
It should be noted that, provided they do not conflict, the embodiments of the present application and the features in the embodiments may be combined with each other. The present application is described in detail below with reference to the drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 in which embodiments of the video processing method, the video processing apparatus, the video decoding method for a virtual reality terminal, or the video decoding apparatus for a virtual reality terminal of the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include a panoramic video capture device 101, networks 102 and 102', a server 103, and a virtual reality terminal 104. Networks 102 and 102' provide the media for the communication links between the panoramic video capture device 101 and the server 103, and between the server 103 and the virtual reality terminal 104, respectively. Network 102 may include various connection types, such as wired or wireless communication links or fiber optic cables; network 102' may include all the functions of network 102 and may additionally include all the functions of a digital video broadcasting (DVB) network.
A panoramic video provider may use the panoramic video capture device 101 to capture panoramic video in various scenes, for example a football match or a concert, and interact with the server 103 through network 102 to send the captured panoramic video to the server 103.
The panoramic video capture device 101 may be any of various electronic devices that have cameras at multiple different angles and support video file transfer, for example a smart terminal with multiple wide-angle cameras, a camera rig with multiple cameras evenly distributed on a plane, or a camera rig with multiple cameras evenly distributed on a virtual sphere.
The server 103 may be a server providing various services, for example a background video server that processes the multiple videos captured by the panoramic video capture device 101. The background video server may encode and otherwise process the received videos and send the results (for example the video data obtained by video segmentation) to the virtual reality terminal 104 through various communication channels.
Various communication client applications may be installed on the virtual reality terminal 104, such as panoramic video playback applications and virtual reality game applications. The virtual reality terminal 104 may be an electronic device with a display screen that supports panoramic video viewing, including but not limited to smart helmets and smart glasses. It will be appreciated that, in practice, the virtual reality terminal 104 may also be used together with supporting virtual reality devices, such as virtual reality treadmills, virtual reality pistols, virtual reality gloves, and virtual reality suits.
It should be noted that the video processing method provided by the embodiments of the present application is generally performed by the server 103, and the video processing apparatus is accordingly generally located in the server 103; the video decoding method for a virtual reality terminal is generally performed by the virtual reality terminal 104, and the video decoding apparatus for a virtual reality terminal is accordingly generally located in the virtual reality terminal 104.
It should be understood that the numbers of panoramic video capture devices, networks, servers, and virtual reality terminals in Fig. 1 are merely illustrative. Any number of panoramic video capture devices, networks, servers, and virtual reality terminals may be provided as needed.
Continuing with Fig. 2, a flow 200 of an embodiment of the video processing method according to the present application is shown. The video processing method of this embodiment comprises the following steps:
Step 201: obtain multiple streams of video data captured by the panoramic video capture device.
In this embodiment, panoramic video generally refers to video with a 360° horizontal and 180° vertical field of view, ensuring that the user can see the image in any direction and at any angle while watching. The electronic device on which the video processing method runs (for example the server shown in Fig. 1) may obtain, through a wired or wireless connection, the multiple streams of video data captured by the panoramic video capture device. The panoramic video capture device may use an existing spherical or disc-shaped VR camera, which can capture video at multiple angles in multiple directions from one position; the multiple streams it captures can be processed to obtain a panoramic video.
It should be pointed out that the wireless connection may include, but is not limited to, 3G/4G, WiFi, Bluetooth, WiMAX, Zigbee, UWB (ultra-wideband), and other connections now known or developed in the future.
Step 202: synthesize the multiple streams of video data to obtain the panoramic video.
After obtaining the multiple streams of video data captured by the panoramic video capture device, the server may stitch and synthesize the video frames in the multiple streams to obtain the panoramic video. Various tools may be used for the synthesis; for example, video synthesis software may be used, or the multiple videos to be synthesized may be imported into a video synthesizer.
In some optional implementations of this embodiment, the stitching process takes into account that the multiple streams were shot by the panoramic video capture device from different angles, so their projection planes do not coincide; stitching them directly would destroy the visual consistency of the scenery in the video. Step 202 may therefore further include the following sub-steps, not shown in Fig. 2:
Apply a projective transformation to each frame of each of the multiple streams; stitch the multiple images that belong to the same video frame to obtain each frame of the panoramic video; and arrange the frames of the panoramic video in frame order to obtain the panoramic video.
In this implementation, various projection methods may be used to project each frame of the multiple streams captured by the panoramic video capture device, for example planar, cylindrical, spherical, or fisheye projection. After the projective transformation, the resulting images can be stitched: the overlapping regions of the images are matched, superimposed, and angle-adjusted, yielding each frame of the panoramic video. Finally, the frames of the panoramic video are assembled according to the playback order of the frames in the multiple streams, giving the panoramic video.
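As a toy illustration of the stitching sub-step, the sketch below merges two projection images with a known horizontal overlap by averaging the overlapping columns. Real pipelines find the overlap by feature matching and also warp the images, both of which are omitted here:

```python
def stitch_pair(left, right, overlap):
    """Stitch two horizontally adjacent projection images (lists of
    pixel rows) whose last/first `overlap` columns cover the same
    scenery; the overlapping columns are blended by averaging."""
    out = []
    for lrow, rrow in zip(left, right):
        blend = [(a + b) / 2 for a, b in zip(lrow[-overlap:], rrow[:overlap])]
        out.append(lrow[:-overlap] + blend + rrow[overlap:])
    return out
```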
Step 203: encode the panoramic video to obtain base-layer video data and enhancement-layer video data.
In this embodiment, the server may encode the panoramic video in various ways, for example with SVC (Scalable Video Coding) or SHVC (Scalable HEVC, scalable high-efficiency video coding). What SVC and SHVC have in common is that the encoding divides the video stream into multiple layers of different resolutions, qualities, and frame rates: the input video stream is split into a base layer with lower resolution, quality, or frame rate and at least one enhancement layer with higher resolution, quality, or frame rate.
In this embodiment, the above encoding can generate the base-layer video data and the enhancement-layer video data without degrading the rate-distortion performance of the video.
In some optional implementations of this embodiment, SHVC encoding may be used in step 203 to obtain the base-layer video data and the enhancement-layer video data, where the resolution of the base-layer video data is one quarter of the resolution of the panoramic video and the resolution of the enhancement-layer video data equals that of the panoramic video. For example, if the resolution of the panoramic video is 7680 × 4320, the resolution of the base-layer video data is 3840 × 2160 and the resolution of the enhancement-layer video data is 7680 × 4320.
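The layer geometry described above (a quarter of the pixel count in the base layer, i.e. half the width and half the height, plus a full-resolution enhancement layer) can be computed as a small helper; the function name is an illustrative assumption:

```python
def layer_resolutions(width, height, ratio=2):
    """Quarter-pixel-count base layer (half width, half height) plus a
    full-resolution enhancement layer, as in the SHVC configuration
    described above."""
    return {
        "base": (width // ratio, height // ratio),
        "enhancement": (width, height),
    }
```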
In this implementation, the resolution of the base-layer video data is sufficient to give the user reasonably high definition when watching video on the virtual reality terminal, so the viewing experience is not degraded.
Step 204: transmit the base-layer video data and the enhancement-layer video data to the terminal through different transmission channels.
In this embodiment, after encoding the panoramic video, the server may send the resulting base-layer and enhancement-layer video data to the terminal. Considering the network conditions of the user of the terminal, different transmission channels may be used for the base-layer and the enhancement-layer video data. Here, the terminal may be any of various electronic devices capable of playing panoramic video, for example a virtual reality terminal, a computer, a portable notebook computer, a tablet computer, or a smartphone.
In some optional implementations of the present embodiment, Primary layer video data can be passed through broadcaster by server Formula is transferred to terminal, by wide band system, enhancement layer video data can be transferred to terminal.In this implementation, terminal is both It is able to receive that the data by broadcast transmission, is able to receive that again the electronic equipment of the data transmitted by network, such as may be used Think that be connectable to DVB radio network is received the audio-video signal of standard broadcasting, can be connected by broadband interface again simultaneously VR terminal to the Internet, it is also possible to for being provided with HBBTV (Hybrid Broadcast Broadband TV, mixing broadcast width Band TV) the VR terminal of application program.
In existing video delivery schemes, the user typically opens a video website or a video application on the terminal and connects, over a wired or wireless network, to the server backing that website or application. When the network environment is poor, lower-resolution video data is obtained; when the network environment is good, higher-resolution video data is obtained. For example, the user may select the playback mode "fast", "standard", "HD" or "ultra HD" in a video application. The precondition of this approach, however, is that the terminal must be connected to the Internet.
In this implementation, by contrast, the server transmits the base-layer video data to the terminal by radio broadcast. Because broadcast transmission has low latency and wide coverage and does not depend on the network environment, the terminal can still receive the base-layer video data, and the user can still watch the panoramic video, even when the terminal has no Internet access.
When the terminal is also connected to the Internet and the network environment is good, it can simultaneously receive the enhancement-layer video data over the Internet, so the user can watch the panoramic video at high resolution.
Therefore, in the above implementation the user can continue to watch panoramic video even without Internet access, which greatly improves the universality of the terminal and broadens the environments in which a virtual reality terminal can be used.
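The hybrid reception logic just described can be sketched as follows. This is a minimal sketch under stated assumptions: the receiver objects, their `read` method and the `network_available` flag are all hypothetical stand-ins, not APIs from the application.

```python
def receive_panorama(broadcast_rx, broadband_rx, network_available):
    """Hybrid reception: the base layer always arrives over the
    broadcast channel; the enhancement layer is fetched over broadband
    only when the terminal is connected to the Internet."""
    base_layer = broadcast_rx.read()             # always available
    enhancement_layer = None
    if network_available:
        enhancement_layer = broadband_rx.read()  # optional upgrade
    return base_layer, enhancement_layer
```

With no network connection the terminal still obtains a playable (quarter-resolution) base layer, which is the key robustness property claimed above.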
In the video processing method provided by the above embodiment of the application, the panoramic video is encoded into a base layer and an enhancement layer, and the two layers are then transmitted to the terminal via different transmission schemes. Video data at multiple resolutions can thus be provided, giving the user more choice; and because videos of different resolutions place different demands on the network, the bandwidth requirement of conventional high-resolution video delivery is reduced.
With continued reference to Fig. 3, Fig. 3 is a flow chart 300 of one embodiment of the video decoding method for a virtual reality terminal according to the application. The video decoding method for a virtual reality terminal of the present embodiment comprises the following steps:
Step 301: obtain the base-layer video data and the enhancement-layer video data of the video to be decoded.
In the present embodiment, the VR terminal may simultaneously obtain the base-layer video data and the enhancement-layer video data of the video to be decoded. The video to be decoded may be a high-resolution panoramic video. It will be understood that the video to be decoded has been separated into base-layer video data and enhancement-layer video data by encoding it with a particular coding scheme.
In some optional implementations of the present embodiment, the VR terminal may obtain the base-layer video data by receiving the broadcast video signal sent by the server and receive the enhancement-layer video data over the Internet; alternatively, it may receive both the base-layer video data and the enhancement-layer video data over the Internet.
Step 302: detect whether the following conditions are satisfied: the size of the enhancement-layer video data exceeds a first preset value, and the time difference between obtaining the base-layer video data and obtaining the enhancement-layer video data is less than a second preset value.
After receiving the base-layer video data and the enhancement-layer video data, the VR terminal may first detect whether the enhancement-layer video data is complete, by measuring its size and comparing the measured size with the expected size of the enhancement-layer video data. It will be understood that when the user requests a given video through the VR terminal, the VR terminal can determine the size of the complete enhancement-layer video data; after obtaining enhancement-layer video data over the network, comparing the obtained data size with that complete size determines whether the enhancement-layer video data is complete. At the same time, to prevent the VR terminal from introducing a long delay when playing high-resolution video, which would hurt the user experience, the VR terminal may, after obtaining the base-layer and enhancement-layer video data, evaluate the time difference between obtaining the two; only when this time difference is less than a preset delay is step 303 performed. The preset delay may be any delay acceptable to the user, for example 10 seconds, 5 seconds or shorter.
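The two preconditions of step 302 can be sketched as a single predicate. This is an illustrative helper, not the application's implementation; the parameter names and the default 5-second delay are assumptions drawn from the examples in the text.

```python
def ready_to_decode(enh_size, expected_enh_size, t_base, t_enh, max_delay=5.0):
    """Check the two conditions from step 302: the enhancement-layer
    data is complete (its size reaches the expected complete size,
    playing the role of the 'first preset value'), and the difference
    between the arrival times of the two layers is below the preset
    delay (the 'second preset value')."""
    complete = enh_size >= expected_enh_size
    timely = abs(t_enh - t_base) < max_delay
    return complete and timely
```

Only when both checks pass does the terminal proceed to determine the viewing-angle window and decode.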
Step 303: in response to all of the above conditions being satisfied, determine the user's viewing-angle window in the virtual reality terminal, and decode the base-layer video data together with the enhancement-layer video data covered by that viewing-angle window.
When the VR terminal has determined that the received enhancement-layer video data is complete and that the delay between the base-layer and enhancement-layer video data is less than the preset value, it determines the user's viewing-angle window in the VR terminal. The size of the viewing-angle window may be set in advance according to the configuration of the VR terminal, for example 120° horizontally and 50° vertically. Because the user may turn his or her head or move while using the VR terminal, the user's viewing-angle window in the VR terminal changes accordingly. After determining the user's viewing-angle window, the VR terminal may decode the base-layer video data and the enhancement-layer video data corresponding to the viewing-angle window, ensuring that the panoramic video the user sees through the VR terminal is of high resolution. It will be understood that, in the present embodiment, the decoding scheme used by the VR terminal corresponds to the coding scheme used by the server.
In some optional implementations of the present embodiment, the VR terminal may include a gravity sensor, and the user's viewing angle in the VR terminal may be determined by the following sub-steps, not shown in Fig. 3:
obtain from the gravity sensor a first rotation angle of the virtual reality terminal in the horizontal direction and a second rotation angle in the vertical direction; rotate the centre point of the default viewing-angle window by the first rotation angle in the horizontal direction and by the second rotation angle in the vertical direction; and take the rotated viewing-angle window as the user's viewing-angle window in the virtual reality terminal.
When the user's head turns, the gravity sensor in the VR terminal can obtain the angle through which the head has rotated in the horizontal direction and the angle through which it has rotated in the vertical direction. The centre of the default viewing-angle window is rotated by those angles, and the rotated viewing-angle window is taken as the user's viewing-angle window in the VR terminal.
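The sub-steps above can be sketched as follows. The yaw/pitch representation, the wrap-around and clamping behaviour, and the dictionary return value are assumptions for illustration; the application itself only specifies rotating the window centre by the two sensor angles.

```python
def rotate_viewport(center_yaw, center_pitch, d_yaw, d_pitch,
                    fov_h=120.0, fov_v=50.0):
    """Apply the first (horizontal) and second (vertical) rotation
    angles reported by the gravity sensor to the centre of the default
    viewing-angle window (120 x 50 degrees in the example above).
    Yaw wraps around the 360-degree panorama; pitch is clamped."""
    yaw = (center_yaw + d_yaw) % 360.0
    pitch = max(-90.0, min(90.0, center_pitch + d_pitch))
    return {"yaw": yaw, "pitch": pitch, "fov_h": fov_h, "fov_v": fov_v}
```

For example, a 30° head turn to the right with a 10° upward tilt moves the window centre from (0°, 0°) to (30°, 10°) while the window size stays fixed.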
In some optional implementations of the present embodiment, when decoding, the VR terminal may first determine the sub-enhancement-layer video data covered by the viewing-angle window and then decode the base-layer video data together with that sub-enhancement-layer video data.
In some optional implementations of the present embodiment, the method may further include the following steps, not shown in Fig. 3:
combine the decoded base-layer video data with the decoded enhancement-layer video data covered by the viewing angle; render the combined video data; and output the rendered video.
After decoding, the decoded video streams are combined and then rendered; rendering can enhance the realism of the video. The rendered video is then output in the VR terminal, and the user can watch the high-resolution panoramic video through the VR terminal.
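The combine step just described can be sketched as below. The frame representation (a mapping from panorama region to pixel data) and the region keys are purely illustrative; rendering and display output would follow this step.

```python
def combine_frames(base_frame, enh_region, viewport):
    """Combine the decoded full-panorama base layer with the decoded
    enhancement-layer region: every region keeps its low-resolution
    base-layer content except the viewport, which is upgraded to the
    high-resolution enhancement-layer content."""
    combined = dict(base_frame)      # copy; keep the original intact
    combined[viewport] = enh_region  # overwrite only the viewport region
    return combined
```

The point of the design is that only the viewport region ever needs high-resolution data, while the rest of the sphere stays at base-layer quality.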
In the video decoding method for a virtual reality terminal provided by the above embodiment of the application, after the base-layer video data and the enhancement-layer video data have been obtained, the terminal first detects whether the enhancement-layer data is complete and whether the time difference between the base-layer and enhancement-layer video data is less than a preset value. Once both conditions are satisfied, it determines the user's viewing-angle window in the virtual reality terminal and then decodes only the enhancement-layer video data corresponding to that window. This reduces the workload of high-resolution video decoding and thus shortens the delay of the virtual reality terminal when playing high-resolution video.
With further reference to Fig. 4, as an implementation of the method shown in Fig. 2 above, the application provides an embodiment of a video processing apparatus. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus can be applied in various electronic devices.
As shown in Fig. 4, the video processing apparatus 400 of the present embodiment includes a first acquiring unit 401, a synthesis unit 402, a coding unit 403 and a transmission unit 404.
The first acquiring unit 401 is configured to obtain the multiple streams of video data collected by a panoramic video capture apparatus.
The synthesis unit 402 is configured to synthesize the multiple streams of video data obtained by the first acquiring unit 401 to obtain the panoramic video.
In some optional implementations of the present embodiment, the synthesis unit 402 may further include a projection module, a concatenation module and an arrangement module, none of which is shown in Fig. 4.
The projection module is configured to apply a projective transformation to each video frame in the multiple streams of video data obtained by the first acquiring unit 401, obtaining the projection picture corresponding to each frame.
The concatenation module is configured to stitch together the projection pictures belonging to the same video frame, obtaining each video frame of the panoramic video.
The arrangement module is configured to arrange the video frames of the panoramic video obtained by the concatenation module in frame order, obtaining the panoramic video.
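The three modules of the synthesis unit can be sketched as a small pipeline. The `project` and `stitch` callables are hypothetical stand-ins for the projection and concatenation modules; the actual transforms are not specified at this level of the description.

```python
def synthesize_panorama(camera_streams, project, stitch):
    """Project every frame of every camera stream, stitch the
    projections that belong to the same video frame, and return the
    stitched panorama frames already arranged in temporal order."""
    projected = [[project(f) for f in stream] for stream in camera_streams]
    # zip groups the i-th frame of every camera: one panorama frame each
    return [stitch(group) for group in zip(*projected)]
```

For instance, with two camera streams of two frames each, the result is two panorama frames, each stitched from one projected frame per camera.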
The coding unit 403 is configured to encode the panoramic video obtained by the synthesis unit 402, obtaining base-layer video data and enhancement-layer video data.
In some optional implementations of the present embodiment, the coding unit 403 may further be configured to encode the panoramic video obtained by the synthesis unit 402 using SHVC, obtaining base-layer video data and enhancement-layer video data, wherein the resolution of the base-layer video data is one quarter of the resolution of the panoramic video and the resolution of the enhancement-layer video data is identical to that of the panoramic video.
The transmission unit 404 is configured to transmit the base-layer video data and the enhancement-layer video data obtained by the coding unit 403 to the terminal via different transmission schemes.
In some optional implementations of the present embodiment, the transmission unit 404 may further include a first transport module and a second transport module, neither of which is shown in Fig. 4.
The first transport module is configured to transmit the base-layer video data to the terminal by broadcast.
The second transport module is configured to transmit the enhancement-layer video data to the terminal by broadband.
Fig. 5 is a schematic structural diagram of a computer system suitable for implementing the video processing apparatus of the embodiments of the application. As shown in Fig. 5, the computer system 500 includes a central processing unit (CPU) 501, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded into a random access memory (RAM) 503 from a storage portion 508. The RAM 503 also stores the various programs and data required for the operation of the system 500. The CPU 501, the ROM 502 and the RAM 503 are connected to one another through a bus 504, to which an input/output (I/O) interface 505 is also connected.
The following components are connected to the I/O interface 505: an input portion 506 including a keyboard, a mouse and the like; an output portion 507 including, for example, a cathode-ray tube (CRT) or liquid crystal display (LCD) and a speaker; a storage portion 508 including a hard disk and the like; and a communications portion 509 including a network interface card such as a LAN card or a modem. The communications portion 509 performs communication over a network such as the Internet. A drive 510 is also connected to the I/O interface 505 as needed. A removable medium 511, such as a magnetic disk, an optical disc, a magneto-optical disc or a semiconductor memory, is mounted in the drive 510 as needed, so that the computer programs read from it can be installed into the storage portion 508 as required.
In particular, according to embodiments of the disclosure, the processes described above with reference to the flow charts may be implemented as computer software programs. For example, an embodiment of the disclosure includes a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program containing program code for performing the methods shown in the flow charts. In such an embodiment, the computer program may be downloaded and installed from a network through the communications portion 509 and/or installed from the removable medium 511. When executed by the central processing unit (CPU) 501, the computer program performs the functions defined in the methods of the application.
In the video processing apparatus provided by the above embodiment of the application, the coding unit encodes the panoramic video obtained by the first acquiring unit into a base layer and an enhancement layer, and the transmission unit then transmits the two layers to the terminal via different transmission schemes. Video data at multiple resolutions can thus be provided, giving the user more choice; and because videos of different resolutions place different demands on the network, the bandwidth requirement of conventional high-resolution video delivery is reduced.
With further reference to Fig. 6, as an implementation of the method shown in Fig. 3 above, the application provides an embodiment of a video decoding apparatus for a virtual reality terminal. This apparatus embodiment corresponds to the method embodiment shown in Fig. 3, and the apparatus can be applied in a virtual reality terminal.
As shown in Fig. 6, the video decoding apparatus 600 for a virtual reality terminal of the present embodiment includes a second acquisition unit 601, a detection unit 602 and a decoding unit 603.
The second acquisition unit 601 is configured to obtain the base-layer video data and the enhancement-layer video data of the video to be decoded.
The detection unit 602 is configured to detect whether the following conditions are satisfied: the size of the enhancement-layer video data obtained by the second acquisition unit 601 exceeds a first preset value, and the time difference between obtaining the base-layer video data and obtaining the enhancement-layer video data is less than a second preset value.
The decoding unit 603 is configured, in response to the detection unit 602 detecting that all of the above conditions are satisfied, to determine the user's viewing-angle window in the virtual reality terminal and to decode the base-layer video data together with the enhancement-layer video data covered by the viewing-angle window.
In some optional implementations of the present embodiment, the virtual reality terminal includes a gravity sensor; correspondingly, when determining the user's viewing-angle window in the virtual reality terminal, the decoding unit 603 may rely on a rotation-angle acquisition module, a rotating module and a first determining module, none of which is shown in Fig. 6.
The rotation-angle acquisition module is configured to obtain from the gravity sensor a first rotation angle of the virtual reality terminal in the horizontal direction and a second rotation angle in the vertical direction.
The rotating module is configured to rotate the centre point of the default viewing-angle window by the first rotation angle in the horizontal direction and by the second rotation angle in the vertical direction.
The first determining module is configured to take the rotated viewing-angle window as the user's viewing-angle window in the virtual reality terminal.
In some optional implementations of the present embodiment, when decoding the base-layer video data and the enhancement-layer video data covered by the viewing-angle window, the decoding unit 603 may rely on a second determining module and a decoding module.
The second determining module is configured to determine, within the enhancement-layer video data, the sub-enhancement-layer video data covered by the viewing-angle window.
The decoding module is configured to decode the base-layer video data and the sub-enhancement-layer video data.
In some optional implementations of the present embodiment, the video decoding apparatus 600 for a virtual reality terminal may further include a combining unit, a rendering unit and an output unit, none of which is shown in Fig. 6.
The combining unit is configured to combine the base-layer video data decoded by the decoding unit 603 with the decoded enhancement-layer video data covered by the viewing angle.
The rendering unit is configured to render the combined video data obtained by the combining unit.
The output unit is configured to output the video rendered by the rendering unit.
In the video decoding apparatus for a virtual reality terminal provided by the above embodiment of the application, after the second acquisition unit has obtained the base-layer video data and the enhancement-layer video data, the detection unit first detects whether the enhancement-layer data is complete and whether the time difference between the base-layer and enhancement-layer video data is less than a preset value. Once both conditions are satisfied, the decoding unit determines the user's viewing-angle window in the virtual reality terminal and then decodes the enhancement-layer video data corresponding to that window. This reduces the workload of high-resolution video decoding and thus shortens the delay of the virtual reality terminal when playing high-resolution video.
Fig. 7 shows a schematic structural diagram of one embodiment of the virtual reality terminal according to the application. As shown in Fig. 7, the virtual reality terminal 700 of the present embodiment includes a video decoding apparatus 701 for a virtual reality terminal, whose structure and principle are identical to those of the video decoding apparatus 600 for a virtual reality terminal of the embodiment shown in Fig. 6.
Fig. 8 is a schematic structural diagram of a computer system suitable for implementing the video decoding apparatus for a virtual reality terminal of the embodiments of the application. As shown in Fig. 8, the virtual reality terminal 800 includes a central processing unit (CPU) 801, a memory 802, an input unit 803 and an output unit 804, which are connected to one another through a bus 805. Here, the video decoding method for a virtual reality terminal according to the application may be implemented as a computer program stored in the memory 802. By calling the computer program stored in the memory 802, the CPU 801 of the virtual reality terminal 800 implements the view display function defined in the video decoding method for a virtual reality terminal of the application. In some implementations, the input unit 803 may be a device, such as a wireless receiver, capable of obtaining the base-layer video data by broadcast and the enhancement-layer video data by broadband, and the output unit 804 may be a device, such as a display screen, capable of displaying the panoramic video. Thus, when the CPU 801 calls the computer program to perform the view display function, it can control the input unit 803 to fetch the base-layer and enhancement-layer video data of the video to be decoded from the server and control the output unit 804 to display the panoramic video.
Fig. 9 shows a schematic structural diagram of one embodiment of the audio/video playback system according to the application. The audio/video playback system 900 of the present embodiment includes a panoramic video capture apparatus 901, a server 902 and a virtual reality terminal 903, which are communicatively connected in sequence.
The panoramic video capture apparatus 901 is used to collect the multiple streams of video data that make up the panoramic video. The server 902 includes the video processing apparatus 400 of the embodiment shown in Fig. 4, and the virtual reality terminal 903 includes the video decoding apparatus 600 for a virtual reality terminal of the embodiment shown in Fig. 6.
The audio/video playback system of the present embodiment can provide high-resolution panoramic video to the user of the virtual reality terminal, improving the user experience.
It should be appreciated that the units 401 to 404 described in the video processing apparatus 400 correspond respectively to the steps of the method described with reference to Fig. 2, and the units 601 to 603 described in the video decoding apparatus 600 for a virtual reality terminal correspond respectively to the steps of the method described with reference to Fig. 3. The operations and features described above for the video processing method therefore apply equally to the apparatus 400 and the units contained in it, and the operations and features described for the video decoding method for a virtual reality terminal apply equally to the apparatus 600 and the units contained in it; they are not repeated here. The corresponding units of the apparatus 400 may cooperate with units in the server, and the corresponding units of the apparatus 600 may cooperate with the virtual reality terminal, to implement the schemes of the embodiments of the application.
In the above embodiments of the application, the first rotation angle and the second rotation angle serve only to distinguish two different rotation angles; the first preset value and the second preset value serve only to distinguish two different preset values; the first acquiring unit and the second acquisition unit serve only to distinguish two different acquisition units; the first transport module and the second transport module serve only to distinguish two different transport modules; and the first determining module and the second determining module serve only to distinguish two different determining modules. Those skilled in the art will appreciate that "first" and "second" here place no particular limitation on the rotation angles, preset values, acquisition units, transport modules or determining modules themselves.
The flow charts and block diagrams in the accompanying drawings illustrate the possible architecture, functions and operations of the systems, methods and computer program products according to the various embodiments of the application. In this regard, each block in a flow chart or block diagram may represent a module, a program segment or a portion of code, which contains one or more executable instructions for implementing the specified logic function. It should also be noted that in some alternative implementations the functions marked in the blocks may occur in an order different from that marked in the drawings; for example, two successively shown blocks may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should further be noted that each block of the block diagrams and/or flow charts, and combinations of blocks in the block diagrams and/or flow charts, may be implemented by a special-purpose hardware-based system that performs the specified functions or operations, or by a combination of special-purpose hardware and computer instructions.
The units involved in the embodiments of the application may be implemented in software or in hardware. The described units may also be arranged in a processor; for example, a processor may be described as including a first acquiring unit, a synthesis unit, a coding unit and a transmission unit, or as including a second acquisition unit, a detection unit and a decoding unit. The names of these units do not, in certain circumstances, limit the units themselves; for example, the first acquiring unit may also be described as "a unit for obtaining the multiple streams of video data collected by a panoramic video capture apparatus".
As another aspect, the application also provides a non-volatile computer storage medium, which may be the non-volatile computer storage medium included in the apparatus of the above embodiments, or a non-volatile computer storage medium that exists separately and is not assembled into a terminal. The non-volatile computer storage medium stores one or more programs which, when executed by a device, cause the device to: obtain the multiple streams of video data collected by a panoramic video capture apparatus; synthesize the multiple streams of video data to obtain a panoramic video; encode the panoramic video to obtain base-layer video data and enhancement-layer video data; and transmit the base-layer video data and the enhancement-layer video data to a terminal via different transmission schemes. Alternatively, the programs cause the device to: obtain the base-layer video data and the enhancement-layer video data of a video to be decoded; detect whether the following conditions are satisfied: the size of the enhancement-layer video data exceeds a first preset value, and the time difference between obtaining the base-layer video data and obtaining the enhancement-layer video data is less than a second preset value; and, in response to all of the above conditions being satisfied, determine the user's viewing-angle window in the virtual reality terminal and decode the base-layer video data together with the enhancement-layer video data covered by the viewing-angle window.
The above description is only a preferred embodiment of the application and an explanation of the technical principles employed. Those skilled in the art should understand that the scope of the invention involved in the application is not limited to technical solutions formed by the particular combination of the above technical features; it also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example technical solutions in which the above features are replaced by (but not limited to) technical features with similar functions disclosed in the application.

Claims (18)

1. A video processing method, characterised in that the method comprises:
obtaining multiple streams of video data collected by a panoramic video capture apparatus;
synthesizing the multiple streams of video data to obtain a panoramic video;
encoding the panoramic video to obtain base-layer video data and enhancement-layer video data; and
transmitting the base-layer video data and the enhancement-layer video data to a terminal via different transmission schemes.
2. The method according to claim 1, characterised in that synthesizing the multiple streams of video data comprises:
applying a projective transformation to each video frame in the multiple streams of video data to obtain the projection picture corresponding to each frame;
stitching together the projection pictures belonging to the same video frame to obtain each video frame of the panoramic video; and
arranging the video frames of the panoramic video in frame order to obtain the panoramic video.
3. The method according to claim 1, characterised in that encoding the panoramic video to obtain base-layer video data and enhancement-layer video data comprises:
encoding the panoramic video using scalable high-efficiency video coding to obtain the base-layer video data and the enhancement-layer video data, wherein the resolution of the base-layer video data is one quarter of the resolution of the panoramic video and the resolution of the enhancement-layer video data is identical to the resolution of the panoramic video.
4. The method according to claim 1 or 3, characterised in that transmitting the base-layer video data and the enhancement-layer video data to a terminal via different network transmission schemes comprises:
transmitting the base-layer video data to the terminal by broadcast; and
transmitting the enhancement-layer video data to the terminal by broadband.
5. A video decoding method for a virtual reality terminal, characterised in that the method comprises:
obtaining base-layer video data and enhancement-layer video data of a video to be decoded;
detecting whether the following conditions are satisfied: the size of the enhancement-layer video data exceeds a first preset value, and the time difference between obtaining the base-layer video data and obtaining the enhancement-layer video data is less than a second preset value; and
in response to all of the above conditions being satisfied, determining a user's viewing-angle window in the virtual reality terminal, and decoding the base-layer video data and the enhancement-layer video data covered by the viewing-angle window.
6. The method according to claim 5, characterised in that the virtual reality terminal includes a gravity sensor; and
determining the user's viewing angle in the virtual reality terminal comprises:
obtaining, from the gravity sensor, a first rotation angle of the virtual reality terminal in a horizontal direction and a second rotation angle in a vertical direction;
rotating the centre point of a default viewing-angle window by the first rotation angle in the horizontal direction and by the second rotation angle in the vertical direction; and
taking the rotated viewing-angle window as the user's viewing-angle window in the virtual reality terminal.
7. The method according to claim 5, characterised in that decoding the base-layer video data and the enhancement-layer video data covered by the viewing-angle window comprises:
determining, within the enhancement-layer video data, the sub-enhancement-layer video data covered by the viewing-angle window; and
decoding the base-layer video data and the sub-enhancement-layer video data.
8. The method according to any one of claims 5-7, characterized in that the method further includes:
combining the decoded base layer video data with the decoded enhancement layer video data included in the viewing-angle range;
rendering the combined video data;
outputting the rendered video.
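Reading "one quarter of the resolution" in claim 11 as a quarter of the pixel count (half the width and half the height, an assumption), the combination step amounts to upsampling the base layer and overlaying the full-resolution region decoded for the viewport. A minimal NumPy sketch:

```python
import numpy as np

def combine_layers(base, enh_region, region_origin):
    """Upsample the quarter-resolution base layer to full size, then overlay
    the full-resolution enhancement region decoded for the viewing-angle range.

    base:          (H/2, W/2) base-layer plane (quarter the pixel count)
    enh_region:    (h, w) full-resolution region covering the viewport
    region_origin: (row, col) of the region's top-left corner in
                   full-resolution coordinates
    """
    # Nearest-neighbour 2x upsample per axis. A real decoder would use the
    # scalable codec's inter-layer upsampling filter; this is illustrative.
    full = np.repeat(np.repeat(base, 2, axis=0), 2, axis=1)
    r, c = region_origin
    h, w = enh_region.shape
    full[r:r + h, c:c + w] = enh_region  # viewport area gets full quality
    return full
```

Outside the viewport the viewer sees the upscaled base layer; inside it, the full-quality enhancement data.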
9. A video processing apparatus, characterized in that the apparatus includes:
a first acquiring unit, configured to acquire multiple video data streams collected by a panoramic video collection apparatus;
a synthesis unit, configured to synthesize the multiple video data streams to obtain a panoramic video;
a coding unit, configured to encode the panoramic video to obtain base layer video data and enhancement layer video data;
a transmission unit, configured to transmit the base layer video data and the enhancement layer video data to a terminal by different transmission means.
10. The apparatus according to claim 9, characterized in that the synthesis unit includes:
a projection module, configured to perform a projective transformation on each video frame in the multiple video data streams to obtain the projection picture corresponding to each frame;
a splicing module, configured to splice the multiple projection pictures belonging to the same video frame to obtain each video frame of the panoramic video;
an arrangement module, configured to arrange the video frames of the panoramic video in frame order to obtain the panoramic video.
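The three modules of claim 10 map naturally onto a small pipeline. In this sketch, `project` and `stitch` are stand-ins for the real projective transform and image-splicing steps, which the claim leaves unspecified:

```python
def synthesize_panorama(camera_streams, project, stitch):
    """Build a panoramic video from several synchronized camera streams.

    camera_streams: list of per-camera frame sequences, all the same length
    project: callable mapping one camera frame to its projection picture
    stitch:  callable splicing the projection pictures captured at one time
             instant into a single panoramic frame
    """
    pano_frames = []
    # Group frames by time instant, i.e. take the i-th frame of every camera.
    for frames_at_t in zip(*camera_streams):
        projections = [project(f) for f in frames_at_t]  # projective transform
        pano_frames.append(stitch(projections))          # splice same-time pictures
    return pano_frames  # frames come out already in frame order
```

Iterating over `zip(*camera_streams)` is what gives the arrangement module's frame ordering for free: the i-th panoramic frame is built from the i-th frame of every camera.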
11. The apparatus according to claim 9, characterized in that the coding unit is further configured to:
encode the panoramic video using scalable high efficiency video coding to obtain the base layer video data and the enhancement layer video data, wherein the resolution of the base layer video data is one quarter of the resolution of the panoramic video, and the resolution of the enhancement layer video data is the same as the resolution of the panoramic video.
12. The apparatus according to claim 9 or 10, characterized in that the transmission unit includes:
a first transmission module, configured to transmit the base layer video data to the terminal by broadcast;
a second transmission module, configured to transmit the enhancement layer video data to the terminal by broadband.
13. A video decoding apparatus for a virtual reality terminal, characterized in that the apparatus includes:
a second acquiring unit, configured to acquire base layer video data and enhancement layer video data of a video to be decoded;
a detection unit, configured to detect whether the following conditions are met: the size of the enhancement layer video data is greater than a first preset value, and the time difference between acquiring the base layer video data and acquiring the enhancement layer video data is less than a second preset value;
a decoding unit, configured to, in response to both conditions being met, determine a user viewing-angle window in the virtual reality terminal, and decode the base layer video data and the enhancement layer video data included in the viewing-angle window.
14. The apparatus according to claim 13, characterized in that the virtual reality terminal includes a gravity sensor; and
the decoding unit includes:
a rotation angle acquisition module, configured to acquire, from the gravity sensor, a first rotation angle of the virtual reality terminal in the horizontal direction and a second rotation angle in the vertical direction;
a rotation module, configured to rotate the center point of a default viewing-angle window by the first rotation angle along the horizontal direction and by the second rotation angle along the vertical direction;
a first determining module, configured to determine the rotated viewing-angle window as the user viewing-angle window in the virtual reality terminal.
15. The apparatus according to claim 13, characterized in that the decoding unit includes:
a second determining module, configured to determine, within the enhancement layer video data, the sub-enhancement-layer video data included in the viewing-angle window;
a decoding module, configured to decode the base layer video data and the sub-enhancement-layer video data.
16. The apparatus according to any one of claims 13-15, characterized in that the apparatus further includes:
a combining unit, configured to combine the decoded base layer video data with the decoded enhancement layer video data included in the viewing-angle range;
a rendering unit, configured to render the combined video data;
an output unit, configured to output the rendered video.
17. A virtual reality terminal, characterized in that the virtual reality terminal includes the video decoding apparatus for a virtual reality terminal according to any one of claims 13-16.
18. A video playback system, characterized in that the video playback system includes a panoramic video collection apparatus, a server, and a virtual reality terminal communicatively connected in sequence;
the server includes the video processing apparatus according to any one of claims 9-12;
the virtual reality terminal includes the video decoding apparatus for a virtual reality terminal according to any one of claims 13-16.
CN201610865440.2A 2016-09-29 2016-09-29 Video processing, coding/decoding method and device, VR terminal, audio/video player system Pending CN106231317A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610865440.2A CN106231317A (en) 2016-09-29 2016-09-29 Video processing, coding/decoding method and device, VR terminal, audio/video player system


Publications (1)

Publication Number Publication Date
CN106231317A true CN106231317A (en) 2016-12-14

Family

ID=58076181

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610865440.2A Pending CN106231317A (en) 2016-09-29 2016-09-29 Video processing, coding/decoding method and device, VR terminal, audio/video player system

Country Status (1)

Country Link
CN (1) CN106231317A (en)



Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101951506A (en) * 2010-09-17 2011-01-19 中兴通讯股份有限公司 System and method for realizing synchronous transmitting and receiving of scalable video coding service
CN102307309A (en) * 2011-07-29 2012-01-04 杭州电子科技大学 Somatosensory interactive broadcasting guide system and method based on free viewpoints
CN104054346A (en) * 2012-01-19 2014-09-17 索尼公司 Image processing device and method
US20140050264A1 (en) * 2012-08-16 2014-02-20 Vid Scale, Inc. Slice base skip mode signaling for multiple layer video coding
CN103716278A (en) * 2012-09-28 2014-04-09 上海贝尔股份有限公司 Layered transmission in relay communication system
CN103108160A (en) * 2013-01-24 2013-05-15 中国联合网络通信集团有限公司 Surveillance video data acquisition method, server and terminal
CN105847850A (en) * 2016-03-28 2016-08-10 乐视控股(北京)有限公司 Panoramic video real-time playing method and apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Hu Xiaoqiang: "Fundamentals of Virtual Reality Technology Applications" (《虚拟现实技术应用基础》), 30 April 2007, China Central Radio and TV University Press *

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109600597A (en) * 2016-10-04 2019-04-09 联发科技股份有限公司 Method and apparatus for processing 360° VR frame sequences
CN108419142A (en) * 2017-01-03 2018-08-17 黑帆科技有限公司 VR video playback method and device
CN106803958A (en) * 2017-01-12 2017-06-06 同济大学 Digital-analog hybrid video transmission method based on superposition modulation coding
CN106803958B (en) * 2017-01-12 2019-12-27 同济大学 Digital-analog hybrid video transmission method based on superposition modulation coding
CN108574881B (en) * 2017-03-07 2020-08-25 华为技术有限公司 Projection type recommendation method, server and client
CN108574881A (en) * 2017-03-07 2018-09-25 华为技术有限公司 Projection type recommendation method, server and client
WO2018161789A1 (en) * 2017-03-07 2018-09-13 华为技术有限公司 Projection type recommendation method, server and client
US11159848B2 (en) 2017-04-28 2021-10-26 Huawei Technologies Co., Ltd. Video playing method, device, and system
WO2018196790A1 (en) * 2017-04-28 2018-11-01 华为技术有限公司 Video playing method, device and system
CN108965847A (en) * 2017-05-27 2018-12-07 华为技术有限公司 Method and device for processing panoramic video data
CN108965847B (en) * 2017-05-27 2020-04-14 华为技术有限公司 Method and device for processing panoramic video data
CN109302636A (en) * 2017-07-24 2019-02-01 阿里巴巴集团控股有限公司 Method and device for providing panoramic image information of data object
CN109302636B (en) * 2017-07-24 2022-05-27 阿里巴巴集团控股有限公司 Method and device for providing panoramic image information of data object
CN107205122A (en) * 2017-08-03 2017-09-26 哈尔滨市舍科技有限公司 Multi-resolution panoramic video live camera system and method
CN111133763A (en) * 2017-09-26 2020-05-08 Lg 电子株式会社 Superposition processing method and device in 360 video system
US11575869B2 (en) 2017-09-26 2023-02-07 Lg Electronics Inc. Overlay processing method in 360 video system, and device thereof
CN112639870B (en) * 2018-08-24 2024-04-12 索尼公司 Image processing device, image processing method, and image processing program
CN112639870A (en) * 2018-08-24 2021-04-09 索尼公司 Image processing apparatus, image processing method, and image processing program
US12062110B2 (en) 2018-08-24 2024-08-13 Sony Corporation Image processing apparatus and image processing method
CN113228683B (en) * 2018-12-21 2024-08-02 交互数字Vc控股公司 Method and device for encoding and decoding an image of points of a sphere
CN113228683A (en) * 2018-12-21 2021-08-06 交互数字Vc控股公司 Method and apparatus for encoding and decoding an image of a point of a sphere
CN109672897B (en) * 2018-12-26 2021-03-16 北京数码视讯软件技术发展有限公司 Panoramic video coding method and device
CN109819272A (en) * 2018-12-26 2019-05-28 平安科技(深圳)有限公司 Video transmission method, device, computer readable storage medium and electronic equipment
CN109672897A (en) * 2018-12-26 2019-04-23 北京数码视讯软件技术发展有限公司 Panoramic video coding method and device
CN109819272B (en) * 2018-12-26 2022-09-16 平安科技(深圳)有限公司 Video sending method, video sending device, computer readable storage medium and electronic equipment
CN110290409A (en) * 2019-07-26 2019-09-27 浙江开奇科技有限公司 Data processing method, VR equipment and system
CN110347163A (en) * 2019-08-07 2019-10-18 京东方科技集团股份有限公司 Control method and device of unmanned equipment and unmanned control system
CN110347163B (en) * 2019-08-07 2022-11-18 京东方科技集团股份有限公司 Control method and device of unmanned equipment and unmanned control system
CN113035226B (en) * 2019-12-24 2024-04-23 中兴通讯股份有限公司 Voice communication method, communication terminal and computer readable medium
CN113035226A (en) * 2019-12-24 2021-06-25 中兴通讯股份有限公司 Voice call method, communication terminal, and computer-readable medium
CN112383816A (en) * 2020-11-03 2021-02-19 广州长嘉电子有限公司 ATSC system signal analysis method and system based on android system intervention
CN114466202B (en) * 2020-11-06 2023-12-12 中移物联网有限公司 Mixed reality live broadcast method, apparatus, electronic device and readable storage medium
CN114466202A (en) * 2020-11-06 2022-05-10 中移物联网有限公司 Mixed reality live broadcast method and device, electronic equipment and readable storage medium

Similar Documents

Publication Publication Date Title
CN106231317A (en) Video processing, coding/decoding method and device, VR terminal, audio/video player system
KR102241082B1 (en) Method and apparatus for transceiving metadata for multiple viewpoints
Fan et al. A survey on 360 video streaming: Acquisition, transmission, and display
CN106416239B (en) Method and apparatus for delivering content and/or playing back content
US10523980B2 (en) Method, apparatus and stream of formatting an immersive video for legacy and immersive rendering devices
US10880346B2 (en) Streaming spherical video
KR102262727B1 (en) 360 video processing method and device
CN106131591B (en) Live broadcasting method, device and terminal
US20020147991A1 (en) Transmission of panoramic video via existing video infrastructure
JP6151355B2 (en) Panorama picture processing
KR102214085B1 (en) Method and apparatus for transmitting and receiving metadata for a plurality of viewpoints
JP7177034B2 (en) Method, apparatus and stream for formatting immersive video for legacy and immersive rendering devices
CN106993177A Binocular 720-degree panorama acquisition system
CN106210525A Camera and method for realizing live video broadcast
CN105635675A (en) Panorama playing method and device
CN110637463B (en) 360-degree video processing method
CN110199519A Method for a multi-camera device
US20240119660A1 (en) Methods for transmitting and rendering a 3d scene, method for generating patches, and corresponding devices and computer programs
Hu et al. Mobile edge assisted live streaming system for omnidirectional video
CN111726598B (en) Image processing method and device
CN206117889U Binocular 720-degree panorama acquisition system
Niamut et al. Live event experiences-interactive UHDTV on mobile devices
US12081720B2 (en) Devices and methods for generating and rendering immersive video
Priyadharshini et al. 360 user-generated videos: Current research and future trends
CN112203101B (en) Remote video live broadcast method and device and electronic equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20161214)