CN108322727A - Panoramic video transmission method and device - Google Patents

Panoramic video transmission method and device

Info

Publication number
CN108322727A
Authority
CN
China
Prior art keywords: view, field, main, frame, visual angle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810165985.1A
Other languages
Chinese (zh)
Inventor
马茜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sohu New Media Information Technology Co Ltd
Original Assignee
Beijing Sohu New Media Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sohu New Media Information Technology Co Ltd filed Critical Beijing Sohu New Media Information Technology Co Ltd
Priority to CN201810165985.1A
Publication of CN108322727A
Legal status: Pending


Abstract

The present application provides a panoramic video transmission method and device. A panoramic video image frame is mapped onto a cube image frame, and the cube image frame is rearranged into a rectangular panoramic image frame. The rectangular panoramic image frame is divided into N spatially continuous view regions, from which a main-view region and non-main-view regions are determined. The images corresponding to the non-main-view regions are down-sampled to obtain non-main-view sampled images; the image corresponding to the main-view region and each non-main-view sampled image are then encoded to obtain video coded frames. The target viewing angle of a VR terminal is obtained, and the target video coded frame whose main-view region corresponds to the target viewing angle is sent to the VR terminal. By down-sampling the images of the non-main-view regions before encoding, the method greatly reduces the data volume of the image outside the viewing angle, thereby saving the network resources required for transmission.

Description

Panoramic video transmission method and device
Technical field
The present invention belongs to the technical field of virtual reality (VR), and in particular relates to a panoramic video transmission method and device.
Background technology
As VR technology continues to develop, VR devices have entered the lives of ordinary users. With the popularization of VR devices, people's demand for panoramic video keeps growing. The field of view of a panoramic video is 360° × 180°, where 360° is the horizontal viewing angle and 180° is the vertical viewing angle. Because the field of view covered by a panoramic video is very large, the data volume of a panoramic video is also very large.
However, the massive data volume of panoramic video conflicts with limited bandwidth. Processing panoramic video with traditional video encoding and decoding methods leads to poor video image quality and poor user experience, which severely affects the dissemination of panoramic video over the Internet.
Summary of the invention
In view of this, the purpose of the present invention is to provide a panoramic video transmission method and device, so as to solve the technical problem that processing panoramic video with traditional encoding and decoding methods leads to poor image quality and poor user experience. The specific implementation is as follows:
In a first aspect, the present application provides a panoramic video transmission method, including:
obtaining a panoramic video image frame;
converting the panoramic video image frame into a cube panoramic image frame;
rearranging the cube panoramic image frame into a rectangular panoramic image frame;
dividing the rectangular panoramic image frame into N spatially continuous view regions, and determining a main-view region and non-main-view regions from the N view regions, where N is a positive integer;
down-sampling the images corresponding to the non-main-view regions to obtain non-main-view sampled images;
encoding the image corresponding to the main-view region and each non-main-view sampled image to obtain a video coded frame;
determining a target viewing angle of a virtual reality terminal, and obtaining the target video coded frame whose main-view region corresponds to the target viewing angle;
sending the target video coded frame.
Optionally, the type of the main-view region includes a horizontal main-view region and a vertical main-view region;
determining the main-view region and the non-main-view regions from the N view regions includes:
extending any one of the N view regions leftward and rightward within the horizontal viewing-angle range by a preset angle range to obtain the horizontal main-view region, and determining the other view regions of the N view regions, other than the main-view region, as the non-main-view regions;
alternatively,
extending any one of the N view regions upward and downward within the vertical viewing-angle range by the preset angle range to obtain the vertical main-view region, and determining the other view regions of the N view regions, other than the main-view region, as the non-main-view regions.
Optionally, encoding the image corresponding to the main-view region and each non-main-view sampled image to obtain a video coded frame includes:
splicing, for the case where each of the N view regions serves as the main-view region, the corresponding main-view image and all the non-main-view sampled images, to obtain the image frames to be encoded;
encoding each image frame to be encoded separately to obtain the video coded frames.
Optionally, obtaining the target video coded frame whose main-view region corresponds to the target viewing angle includes:
searching, from the 2N video coded frames, for the video coded frame whose main-view region corresponds to the target viewing angle, and determining it as the target video coded frame.
Optionally, determining the target viewing angle of the virtual reality terminal includes:
receiving viewing-angle information sent by the virtual reality terminal, and determining the target viewing angle according to the viewing-angle information, where the viewing-angle information is detected by the virtual reality terminal through a sensor.
Optionally, the method further includes:
obtaining the latest viewing-angle range of the virtual reality terminal;
obtaining the overlap ratio between the latest viewing-angle range and the current viewing-angle range;
when the overlap ratio is smaller than a preset threshold, determining the latest viewing-angle range as the latest target viewing-angle range;
switching from the target video coded frame whose main-view region corresponds to the current viewing-angle range to the target video coded frame whose main-view region corresponds to the latest target viewing-angle range.
Optionally, obtaining the overlap ratio between the latest viewing-angle range and the current viewing-angle range includes:
obtaining a first main-view region corresponding to the latest viewing-angle range and a second main-view region corresponding to the current viewing-angle range;
obtaining the overlapping area of the first main-view region and the second main-view region;
calculating the ratio between the overlapping area and the area of the second main-view region to obtain the overlap ratio.
In a second aspect, the present application further provides another panoramic video transmission method, including:
receiving a target video coded frame, and decoding the target video coded frame to obtain a decoded video image frame;
determining a main-view region and non-main-view regions in the decoded video image frame;
up-sampling the video image corresponding to each non-main-view region to obtain up-sampled video images;
splicing the video image corresponding to the main-view region and the up-sampled video image corresponding to each non-main-view region to obtain a target image frame;
converting the target image frame into a target three-dimensional model image;
displaying the target three-dimensional model image.
In a third aspect, the present application further provides a panoramic video transmission device, including:
a first obtaining unit, configured to obtain a panoramic video image frame;
a converting unit, configured to convert the panoramic video image frame into a cube panoramic image frame;
an arranging unit, configured to rearrange the cube panoramic image frame into a rectangular panoramic image frame;
a dividing unit, configured to divide the rectangular panoramic image frame into N spatially continuous view regions, where N is a positive integer;
a first determining unit, configured to determine a main-view region and non-main-view regions from the N view regions;
a down-sampling unit, configured to down-sample the images corresponding to the non-main-view regions to obtain non-main-view sampled images;
an encoding unit, configured to encode the image corresponding to the main-view region and each non-main-view sampled image to obtain a video coded frame;
a second determining unit, configured to determine a target viewing angle of a virtual reality terminal and obtain the target video coded frame whose main-view region corresponds to the target viewing angle;
a sending unit, configured to send the target video coded frame.
Optionally, the type of the main-view region includes a horizontal main-view region and a vertical main-view region;
the first determining unit includes:
a first determining subunit, configured to extend any one of the N view regions leftward and rightward within the horizontal viewing-angle range by a preset angle range to obtain the horizontal main-view region, and a second determining subunit, configured to determine the other view regions of the N view regions, other than the main-view region, as the non-main-view regions;
alternatively,
a third determining subunit, configured to extend any one of the N view regions upward and downward within the vertical viewing-angle range by the preset angle range to obtain the vertical main-view region, and a fourth determining subunit, configured to determine the other view regions of the N view regions, other than the main-view region, as the non-main-view regions.
Optionally, the second determining unit is specifically configured to:
receive viewing-angle information sent by the virtual reality terminal, and determine the target viewing angle according to the viewing-angle information, where the viewing-angle information is detected by the virtual reality terminal through a sensor.
Optionally, the device further includes:
a second obtaining unit, configured to obtain the latest viewing-angle range of the virtual reality terminal;
a third obtaining unit, configured to obtain the overlap ratio between the latest viewing-angle range and the current viewing-angle range;
a third determining unit, configured to determine, when the overlap ratio is smaller than a preset threshold, the latest viewing-angle range as the latest target viewing-angle range;
a switching unit, configured to switch from the target video coded frame whose main-view region corresponds to the current viewing-angle range to the target video coded frame whose main-view region corresponds to the latest target viewing-angle range.
Optionally, the third obtaining unit includes:
a first obtaining subunit, configured to obtain a first main-view region corresponding to the latest viewing-angle range and a second main-view region corresponding to the current viewing-angle range;
a second obtaining subunit, configured to obtain the overlapping area of the first main-view region and the second main-view region;
a calculating subunit, configured to calculate the ratio between the overlapping area and the area of the second main-view region to obtain the overlap ratio.
In a fourth aspect, the present application further provides another panoramic video transmission device, including:
a receiving unit, configured to receive a target video coded frame and decode the target video coded frame to obtain a decoded video image frame;
a determining unit, configured to determine a main-view region and non-main-view regions in the decoded video image frame;
an up-sampling unit, configured to up-sample the video image corresponding to each non-main-view region to obtain up-sampled video images;
a splicing unit, configured to splice the video image corresponding to the main-view region and the up-sampled video image corresponding to each non-main-view region to obtain a target image frame;
a converting unit, configured to convert the target image frame into a target three-dimensional model image;
a display unit, configured to display the target three-dimensional model image.
In the panoramic video transmission method provided by this embodiment, the obtained panoramic video image frame is mapped onto a cube image frame, and the cube image frame is rearranged into a rectangular panoramic image frame. The rectangular panoramic image frame is divided into N spatially continuous view regions, from which a main-view region and non-main-view regions are determined. The images corresponding to the non-main-view regions are down-sampled to obtain non-main-view sampled images; the image corresponding to the main-view region and each non-main-view sampled image are then encoded to obtain video coded frames. The target viewing angle of the virtual reality terminal is obtained, and the target video coded frame whose main-view region corresponds to the target viewing angle is sent to the virtual reality terminal. In this method, the image corresponding to the main-view region is encoded and displayed at full resolution, while the images corresponding to the non-main-view regions are down-sampled before being encoded and displayed. Down-sampling an image greatly reduces its data volume, thereby reducing the data volume of the image outside the viewing angle and saving the network resources required for transmission.
Description of the drawings
In order to describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a structural schematic diagram of a panoramic video transmission system according to an embodiment of the present application;
Fig. 2 is a flowchart of a panoramic video transmission method according to an embodiment of the present application;
Fig. 3 is a structural schematic diagram of a cube panoramic image frame according to an embodiment of the present application;
Fig. 4 is a schematic diagram of rectangular panoramic image frames rearranged from a cube panoramic image frame according to an embodiment of the present application;
Fig. 5 is a schematic diagram of dividing a main-view region and non-main-view regions according to an embodiment of the present application;
Fig. 6 is a schematic diagram of an image to be encoded according to an embodiment of the present application;
Fig. 7 is a flowchart of another panoramic video transmission method according to an embodiment of the present application;
Fig. 8 is a flowchart of yet another panoramic video transmission method according to an embodiment of the present application;
Fig. 9 is a block diagram of a panoramic video transmission device according to an embodiment of the present application;
Fig. 10 is a block diagram of another panoramic video transmission device according to an embodiment of the present application;
Fig. 11 is a block diagram of yet another panoramic video transmission device according to an embodiment of the present application.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Referring to Fig. 1, which shows a structural schematic diagram of a panoramic video transmission system according to an embodiment of the present application, the panoramic video transmission system includes a server 1 and at least one VR terminal 2.
The server 1 is configured to obtain a panoramic video and encode the panoramic video to obtain video coded frames.
When a VR terminal 2 wants to watch a panoramic video, it can request the panoramic video from the server 1; the server 1 then sends the video coded frames corresponding to the panoramic video to the VR terminal 2.
Referring to Fig. 2, which shows a flowchart of a panoramic video transmission method according to an embodiment of the present application, the method is applied to the server 1. As shown in Fig. 2, the method may include the following steps:
S110, obtaining a panoramic video image frame.
A panoramic video consists of panoramic video image frames, and a panoramic video image frame is an image frame that records full-view content information. Processing a panoramic video therefore amounts to processing a series of panoramic video image frames.
A panoramic video can be shot by a professional panoramic camera, or obtained by stitching content shot by other devices capable of shooting video.
A database storing panoramic videos can be established on the server; obtaining a panoramic video image frame then amounts to reading the corresponding panoramic video image frame from the database.
In the embodiments of the present application, the panoramic video image frame can be a 360° × 180° cylindrical projection, or a 360° × 180° spherical projection.
S120, converting the panoramic video image frame into a cube panoramic image frame.
Taking a 360° × 180° cylindrical projection as an example, the cylindrically projected panoramic video is mapped onto a cube. As shown in Fig. 3, the cube has six faces, each covering a 90° × 90° viewing angle. The four middle faces carry the front, back, left, and right images (that is, the 360° horizontal direction), and the upper and lower faces carry the top and bottom images (that is, the 180° vertical direction). The cube panoramic image frame is referred to herein as CubePanoVideo.
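For illustration only (not part of the claimed method), the following is a minimal Python sketch of mapping an equirectangular frame onto six 90° × 90° cube faces; the face orientations, the nearest-neighbor sampling, and the face names are assumptions made for this sketch.

```python
import numpy as np

# Unit direction for each cube face, given face-plane coords (a, b) in [-1, 1].
# Face order and orientation are assumptions for illustration.
FACE_DIRS = {
    "front":  lambda a, b: ( a, -b,  np.ones_like(a)),
    "right":  lambda a, b: ( np.ones_like(a), -b, -a),
    "back":   lambda a, b: (-a, -b, -np.ones_like(a)),
    "left":   lambda a, b: (-np.ones_like(a), -b,  a),
    "top":    lambda a, b: ( a,  np.ones_like(a),  b),
    "bottom": lambda a, b: ( a, -np.ones_like(a), -b),
}

def equirect_to_cube_faces(pano, face_size=512):
    """Map an equirectangular panorama (H x W x 3) to six cube faces."""
    h, w = pano.shape[:2]
    coords = (np.arange(face_size) + 0.5) / face_size * 2.0 - 1.0
    a, b = np.meshgrid(coords, coords)          # face-plane sample grid
    faces = {}
    for name, dir_fn in FACE_DIRS.items():
        x, y, z = dir_fn(a, b)
        lon = np.arctan2(x, z)                                   # [-pi, pi]
        lat = np.arcsin(y / np.sqrt(x * x + y * y + z * z))      # [-pi/2, pi/2]
        # Equirectangular pixel coordinates, nearest-neighbor sampling.
        u = ((lon / (2 * np.pi) + 0.5) * w).astype(int) % w
        v = ((0.5 - lat / np.pi) * h).astype(int).clip(0, h - 1)
        faces[name] = pano[v, u]
    return faces
```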
S130, rearranging the cube panoramic image frame into a rectangular panoramic image frame.
To facilitate encoding of the cube panoramic image frame, the six faces of the cube are unfolded and rearranged into a rectangular panoramic image frame.
For example, as shown in Fig. 3, the cube panoramic image frame is unfolded into a sequence of face images, and this sequence is then rearranged to obtain a rectangular panoramic image frame. The arrangement includes, but is not limited to, the four arrangements shown in Fig. 4.
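A minimal sketch (same assumptions as the sketch above) of packing the six faces into one 3 × 2 rectangular frame; the actual layouts of Fig. 4 may use a different face order, so the ordering here is an assumption.

```python
import numpy as np

def faces_to_rect(faces, order=("front", "right", "back", "left", "top", "bottom")):
    """Pack six equally sized cube faces into a 3 x 2 rectangular panoramic frame."""
    top_row = np.hstack([faces[name] for name in order[:3]])
    bottom_row = np.hstack([faces[name] for name in order[3:]])
    return np.vstack([top_row, bottom_row])
```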
S140, dividing the rectangular panoramic image frame into N spatially continuous view regions, and determining a main-view region and non-main-view regions from the N view regions, where N is a positive integer.
Taking rectangular sequence 1 in Fig. 4 as an example, the rectangular panoramic image frame obtained in the previous step (that is, rectangular sequence 1) is divided into N view regions, where N is a positive integer.
The N divided view regions together cover the entire rectangular video frame. For example, if the total horizontal viewing angle of the panoramic video image frame is 360° and the total vertical viewing angle is 180°, then the horizontal viewing angles of the N view regions sum to 360° and their vertical viewing angles sum to 180°.
Within one rectangular panoramic image frame, the horizontal or vertical viewing-angle intervals of the individual view regions can be the same or different; the present application does not limit this.
In one embodiment of the present application, any one of the N view regions is taken as the main-view direction and is extended leftward and rightward within the horizontal viewing-angle range by a preset angle range to obtain the main-view region; the other view regions of the N view regions, outside the main-view region, are the non-main-view regions.
The preset angle range can be determined according to the main-view range of the VR terminal. For example, if the main-view range is 120° × 90° and the viewing-angle range of each view region is 90° × 90°, the preset angle range can be 15°. As another example, if the main-view range of the VR terminal is 180° × 90° and the viewing-angle range of each view region divided from the rectangular image frame is 90° × 90°, the preset angle range can be 45°.
As shown in Fig. 5, the rectangular panoramic image frame currently to be encoded is divided into 6 view regions, that is, N = 6, with each face of the cube panoramic image frame serving as one view region. Of course, in other embodiments N can be a positive integer smaller or larger than 6.
For example, the front face (that is, the face labeled 0) is taken as the current main-view direction and is extended by half of a view region's angular range (that is, 45°) on each side within the horizontal viewing-angle range, forming the current 180° × 90° main-view region (the region labeled ViewMainArea_00 in Fig. 5). The remaining view regions RestArea0 to RestArea4 (that is, the regions not covered by the main-view region) are the non-main-view regions; in other words, the remaining area is divided into several non-main-view regions. For example, the remaining part of face 2, the remaining part of face 3, and faces 1, 4, and 5 can each serve as one non-main-view region, giving 5 non-main-view regions.
In another embodiment of the present application, any one of the N view regions is extended upward and downward within the vertical viewing-angle range by the preset angle range to obtain the main-view region; the other view regions are the non-main-view regions.
For example, the front face (that is, face 0) is still taken as the main-view direction and is extended by 45° upward and downward within the vertical viewing-angle range (that is, into faces 4 and 5, which are adjacent to face 0 on the cube), yielding a 180° × 90° main-view region, that is, a vertical viewing-angle range of 180° and a horizontal viewing-angle range of 90°. This main-view region can be denoted ViewMainArea_01.
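For illustration only, a sketch of cutting a 180° × 90° horizontal main-view region out of the four faces of the horizontal ring: the chosen face plus half a face width on each side, with wrap-around. The ring order and the use of pixel columns as a proxy for viewing angle are assumptions of this sketch.

```python
import numpy as np

def horizontal_main_region(faces, main_face="front", extend=0.5,
                           ring=("front", "right", "back", "left")):
    """Main face extended by `extend` of a face width on each side along the
    horizontal 360-degree ring, with wrap-around at the seam."""
    face_w = faces[main_face].shape[1]
    strip = np.hstack([faces[name] for name in ring])       # 360-degree strip
    ext = int(extend * face_w)
    start = ring.index(main_face) * face_w - ext
    cols = (np.arange(face_w + 2 * ext) + start) % strip.shape[1]
    return strip[:, cols]
```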
S150, down-sampling the images corresponding to the non-main-view regions to obtain non-main-view sampled images.
Down-sampling is then performed on the images of the non-main-view regions determined above to obtain the non-main-view sampled images.
The down-sampling method includes, but is not limited to, 1/4-pixel down-sampling, for example, keeping only the odd columns in the vertical direction and the odd rows in the horizontal direction of a non-main-view region to obtain a non-main-view sampled image. The non-main-view sampled image then has 1/4 of the pixels of the original image.
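A minimal sketch of this 1/4 down-sampling, keeping every other row and column of a non-main-view region; purely illustrative.

```python
def downsample_quarter(region):
    """Keep every other row and column, leaving 1/4 of the original pixels."""
    return region[::2, ::2]
```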
Still using the main-view region and non-main-view regions shown in Fig. 5, the 5 non-main-view regions RestArea0 to RestArea4 are down-sampled to obtain DownSample_i (i = 0, 1, 2, 3, 4). To facilitate encoding, the down-sampled DownSample_i are spliced to obtain ViewDownArea_00, whose pixel count is 1/4 of that of the images corresponding to the original non-main-view regions.
As another example, in the application scenario where the vertical viewing-angle range is 180° and the horizontal viewing-angle range is 90°, the main-view region is ViewMainArea_01. The remaining area not covered by ViewMainArea_01 is down-sampled to obtain DownSample_i (i = 0, 1, 2, 3, 4), and these DownSample_i are spliced to obtain ViewDownArea_01.
S160, encoding the image corresponding to the main-view region and each non-main-view sampled image to obtain a video coded frame.
To facilitate encoding, the main-view region ViewMainArea_00 and the spliced down-sampled image ViewDownArea_00 are spliced to obtain the image to be encoded, PostFrame_00, as shown in Fig. 6.
As another example, the main-view region ViewMainArea_01 and the spliced down-sampled image ViewDownArea_01 are spliced to obtain the image to be encoded, PostFrame_01.
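A sketch of assembling one image frame to be encoded: the full-resolution main-view region stacked above the spliced quarter-resolution non-main-view regions. The vertical stacking and the zero-padding used to equalize widths are assumptions; Fig. 6 may arrange the parts differently.

```python
import numpy as np

def build_postframe(main_region, downsampled_regions):
    """Splice the full-resolution main-view region (ViewMainArea) with the spliced
    quarter-resolution non-main-view regions (ViewDownArea) into one frame to encode."""
    down_strip = np.hstack(downsampled_regions)              # ViewDownArea
    width = max(main_region.shape[1], down_strip.shape[1])
    def pad_to(img):                                         # pad narrower part with zeros
        pad = [(0, 0), (0, width - img.shape[1])] + [(0, 0)] * (img.ndim - 2)
        return np.pad(img, pad)
    return np.vstack([pad_to(main_region), pad_to(down_strip)])   # PostFrame
```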
When each of the N view regions in turn serves as the main-view region, the frames PostFrame_ij are obtained, where i indexes which of the N view regions serves as the main-view region and j = 0, 1.
Here, i indicates that the i-th of the N view regions serves as the main-view region; j = 0 indicates that the main-view region spans 180° horizontally and 90° vertically, and j = 1 indicates that the main-view region spans 180° vertically and 90° horizontally.
Each image to be encoded, PostFrame_ij, is then encoded to obtain the corresponding video coded frame.
In one embodiment of the present application, if the server needs to provide encoded data of the panoramic video for multiple different users, the server can encode all the PostFrame_ij in advance to obtain the corresponding encoded data. Then, according to the viewing angle of each user, the video coded frame corresponding to that user's viewing angle is obtained from the encoded data of the PostFrame_ij and sent to the VR terminal.
Encoding the video sequence corresponding to each viewing angle as the main view in advance in this way reduces the waiting time between the VR terminal user's request and the server sending the panoramic video.
In another embodiment of the present application, if the server knows the user's viewing angle in advance, the image to be encoded whose main-view region corresponds to that viewing angle is encoded on demand to obtain the video coded frame.
S170, determining the target viewing angle of the VR terminal, and obtaining the target video coded frame whose main-view region corresponds to the target viewing angle.
In one embodiment of the present application, the target viewing angle of the VR terminal can be the default viewing angle corresponding to the VR terminal that is stored on the server side.
In another embodiment of the present application, the VR terminal can obtain the user's current viewing angle through an attitude sensor and feed it back to the server; the server then uses the current viewing angle fed back by the VR terminal as the target viewing angle.
In one embodiment of the present application, after the target viewing angle of the VR terminal is determined, the target video coded frame whose main-view region corresponds to the target viewing angle can be obtained.
S180, sending the target video coded frame.
The server sends the target video coded frame to the VR terminal so that the VR terminal can display the panoramic video.
The target video coded frames corresponding to each video image frame in a panoramic video are generated according to the above process.
In the panoramic video transmission method provided by this embodiment, the obtained panoramic video image frame is mapped onto a cube image frame, and the cube image frame is rearranged into a rectangular panoramic image frame. The rectangular panoramic image frame is divided into N spatially continuous view regions, from which a main-view region and non-main-view regions are determined. The images corresponding to the non-main-view regions are down-sampled to obtain non-main-view sampled images; the image corresponding to the main-view region and each non-main-view sampled image are then encoded to obtain video coded frames. The target viewing angle of the virtual reality terminal is obtained, and the target video coded frame whose main-view region corresponds to the target viewing angle is sent to the virtual reality terminal. In this method, the image corresponding to the main-view region is encoded and displayed at full resolution, while the images corresponding to the non-main-view regions are down-sampled before being encoded and displayed. Down-sampling an image greatly reduces its data volume, thereby reducing the data volume of the image outside the viewing angle and saving the network resources required for transmission.
Referring to Fig. 7, which shows a flowchart of another panoramic video transmission method according to an embodiment of the present application, this method is applied to the server.
In this embodiment, the server can obtain, at preset time intervals, the latest viewing angle fed back by the VR terminal, and further determine whether the video coded frames sent to the VR terminal need to be switched to the video coded frames whose main-view region corresponds to the latest viewing angle.
As shown in Fig. 7, on the basis of the method embodiment shown in Fig. 2, the method can further include the following steps:
S210, obtaining the latest viewing-angle range of the VR terminal.
The server receives the latest viewing-angle range fed back by the VR terminal, which can be detected by the VR terminal through an attitude sensor. For example, when the attitude sensor in the VR terminal detects that the user's head has moved, the current posture is recorded and the viewing angle corresponding to the current posture is then calculated.
S220, obtaining the overlap ratio between the latest viewing-angle range and the current viewing-angle range.
In one embodiment of the present application, the overlap ratio can be calculated as follows:
obtain the first main-view region corresponding to the latest viewing-angle range and the second main-view region corresponding to the current viewing-angle range; then, obtain the overlapping area of the first main-view region and the second main-view region; finally, calculate the ratio between the overlapping area and the area of the second main-view region to obtain the overlap ratio.
Here, the current viewing angle is the user's viewing angle that the server obtained before obtaining the latest viewing angle.
For example, the current viewing angle is FOV_k, the main-view region corresponding to the current viewing angle in the rectangular panoramic image frame is ViewMainArea_k, and the area of this main-view region is Area_k; the latest viewing angle is FOV_k+1, the main-view region corresponding to the latest viewing angle FOV_k+1 in the rectangular panoramic image frame is ViewMainArea_k+1, and its area is Area_k+1. In addition, the area of the overlapping region of ViewMainArea_k and ViewMainArea_k+1 is AreaOverlap_k.
The overlap ratio P_k can then be calculated according to formula 1:
P_k = AreaOverlap_k / Area_k  (formula 1)
where P_k is greater than 0 and less than or equal to 1.
In another embodiment of the present application, the area AreaOverlap_k of the overlapping region can be used directly as the overlap ratio.
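A sketch of formula 1 for main-view regions represented as axis-aligned rectangles (left, top, width, height); the rectangle representation is an assumption, and horizontal wrap-around at the 360° seam is ignored here.

```python
def overlap_ratio(current_region, latest_region):
    """P_k = overlap area of the two main-view regions / area of the current one."""
    cl, ct, cw, ch = current_region
    ll, lt, lw, lh = latest_region
    ox = max(0, min(cl + cw, ll + lw) - max(cl, ll))   # horizontal overlap
    oy = max(0, min(ct + ch, lt + lh) - max(ct, lt))   # vertical overlap
    return (ox * oy) / (cw * ch)
```

A caller would compare the returned value against the preset threshold of S230 to decide whether to switch the main-view region.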
S230, when the overlap ratio is smaller than a preset threshold, determining the latest viewing-angle range as the latest target viewing-angle range.
If the calculated overlap ratio is smaller than or equal to the preset threshold, the latest viewing-angle range is determined as the latest target viewing-angle range, and S240 is then executed.
If the overlap ratio is greater than the preset threshold, it is determined that the user's viewing angle has not changed; the latest viewing-angle range is ignored, the main-view region does not need to be switched, and the view video sent to the VR terminal is not switched.
S240, switching from the target video coded frame whose main-view region corresponds to the current viewing-angle range to the target video coded frame whose main-view region corresponds to the latest target viewing-angle range.
That is, the view video that uses the former viewing angle as the main view is switched to the view video that uses the latest viewing angle as the main view.
In the panoramic video transmission method provided by this embodiment, while sending the target encoded video to the VR terminal, the server obtains the user's latest viewing-angle range. If the overlap ratio between the latest viewing-angle range and the former viewing-angle range is smaller than the preset threshold, it is determined that the user's viewing angle has changed, and the video coded frame whose main-view region corresponds to the latest viewing angle is sent to the VR terminal, ensuring that the image within the user's main-view region is the clearest.
Corresponding to the above panoramic video transmission method embodiments applied to the server, the present application further provides a panoramic video transmission method embodiment applied to a VR terminal.
Referring to Fig. 8, which shows a flowchart of yet another panoramic video transmission method according to an embodiment of the present application, this method is applied to a VR terminal. As shown in Fig. 8, the method may include the following steps:
S310, receiving a target video coded frame, and decoding the target video coded frame to obtain a decoded video image frame.
The VR terminal receives the target video coded frame sent by the server side; this target video coded frame is the video coded frame, obtained by the server side according to the user's viewing angle, whose main-view region corresponds to the user's viewing angle.
After receiving the target video coded frame sent by the server, the VR terminal decodes the target video coded frame with a decoder to obtain the decoded video image frame.
In one embodiment of the present application, what the VR terminal receives is a series of video coded frames sent by the server, and this series of video coded frames constitutes a video.
S320, determining the main-view region and the non-main-view regions in the decoded video image frame.
The server and the VR terminal can agree in advance on which part of each video image frame is the main-view region and which part is the non-main-view region.
S330, up-sampling the video image corresponding to each non-main-view region to obtain up-sampled video images.
The non-main-view regions in the decoded video image frame are up-sampled to obtain the up-sampled video images.
The up-sampling process is the inverse of the down-sampling process; the up-sampling method includes, but is not limited to, sampling the rows and columns of the image in a non-main-view region by a factor of 2 each.
For example, if the pixel resolution of the DownSample_i obtained by the server's down-sampling is w_i × h_i, the pixel resolution of the up-sampled video image obtained by the VR terminal is 2w_i × 2h_i.
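A minimal sketch of such a 2× up-sampling by pixel replication (nearest neighbor); an interpolating filter could be used instead, so the replication choice is an assumption.

```python
import numpy as np

def upsample_double(region):
    """Repeat every row and column once, restoring a w x h region to 2w x 2h."""
    return np.repeat(np.repeat(region, 2, axis=0), 2, axis=1)
```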
S340, splicing the decoded video image corresponding to the main-view region and the up-sampled video image corresponding to each non-main-view region to obtain a target image frame.
The decoded video image corresponding to the main-view region and the up-sampled video images are spliced according to the arrangement of the view regions within the rectangular panoramic image frame used on the server side, to obtain the target image frame.
S350, converting the target image frame into a target three-dimensional model image.
In one embodiment of the present application, the three-dimensional model of the VR terminal is a cube; the obtained target image frame is then mapped onto a cube panoramic image frame, and the cube panoramic image frame is displayed on the display module of the VR terminal.
In another embodiment of the present application, the three-dimensional model of the VR terminal is not a cube but another three-dimensional model, for example, a cylinder; the target image frame is then first mapped onto a cube panoramic image, and the cube panoramic image frame is in turn mapped onto the target three-dimensional model image. Finally, the target three-dimensional model image is displayed through the display module.
S360, displaying the target three-dimensional model image.
In the panoramic video transmission method provided by this embodiment, after the target video coded frame is received, it is decoded to obtain the decoded video image frame. The main-view region and the non-main-view regions in the decoded video image frame are then determined. The non-main-view regions are up-sampled, and the decoded image corresponding to the main-view region and the up-sampled image corresponding to each non-main-view region are spliced to obtain the target image frame. The target image frame is converted into the target three-dimensional model image, which is displayed to the user. Because the server's down-sampling greatly reduces the data volume of the image outside the viewing angle, the network resources required by the VR terminal to receive the panoramic video are saved; moreover, the workload of decoding the panoramic video on the VR terminal is reduced and the decoding speed is improved.
Corresponding to the above panoramic video transmission method embodiments applied to the server side, the present application further provides a panoramic video transmission device embodiment applied to a server.
Referring to Fig. 9, which shows a block diagram of a panoramic video transmission device according to an embodiment of the present application, the device is applied to a server. As shown in Fig. 9, the device may include:
a first obtaining unit 110, configured to obtain a panoramic video image frame.
A panoramic video consists of panoramic video image frames, and a panoramic video image frame is an image frame that records full-view content information. Processing a panoramic video therefore amounts to processing a series of panoramic video image frames.
a converting unit 120, configured to convert the panoramic video image frame into a cube panoramic image frame.
The panoramic video image frame is mapped onto the cube shown in Fig. 3 to obtain the cube panoramic image frame.
an arranging unit 130, configured to rearrange the cube panoramic image frame into a rectangular panoramic image frame.
The cube panoramic image frame shown in Fig. 3 is rearranged into a rectangular panoramic image frame (any one of the arrangements in Fig. 4).
a dividing unit 140, configured to divide the rectangular panoramic image frame into N spatially continuous view regions, where N is a positive integer.
The N divided view regions together cover the entire rectangular video frame. For example, if the total horizontal viewing angle of the panoramic video image frame is 360° and the total vertical viewing angle is 180°, then the horizontal viewing angles of the N view regions sum to 360° and their vertical viewing angles sum to 180°.
In one embodiment of the present application, taking rectangular sequence 1 in Fig. 4 as an example, the rectangular panoramic image frame is divided into 6 view regions (that is, N = 6), with each face of the cube panoramic image frame serving as one view region; in other embodiments of the present application, N can be a positive integer smaller or larger than 6, which is not described in detail one by one here.
a first determining unit 150, configured to determine a main-view region and non-main-view regions from the N view regions.
In one embodiment of the present application, the first determining unit includes a first determining subunit and a second determining subunit;
the first determining subunit is configured to determine any one of the N view regions as the main-view direction and extend it leftward and rightward within the horizontal viewing-angle range by a preset angle range to obtain the main-view region; the second determining subunit is configured to determine the other view regions of the N view regions, other than the main-view region, as the non-main-view regions.
For example, the front face (that is, the face labeled 0) is taken as the current main-view direction and is extended by half of a view region's angular range (that is, 45°) on each side within the horizontal viewing-angle range, forming the current 180° × 90° main-view region (the region labeled ViewMainArea_00 in Fig. 5). The remaining view regions RestArea0 to RestArea4 (that is, the regions not covered by the main-view region) are the non-main-view regions; in other words, the remaining area is divided into several non-main-view regions. For example, the remaining part of face 2, the remaining part of face 3, and faces 1, 4, and 5 can each serve as one non-main-view region, giving 5 non-main-view regions.
In another embodiment of the present application, the first determining unit includes a third determining subunit and a fourth determining subunit;
the third determining subunit is configured to extend any one of the N view regions upward and downward within the vertical viewing-angle range by the preset angle range to obtain the main-view region; the fourth determining subunit is configured to determine the other view regions as the non-main-view regions.
For example, the front face (that is, face 0) is still taken as the main-view direction and is extended by 45° upward and downward within the vertical viewing-angle range (that is, into faces 4 and 5, which are adjacent to face 0 on the cube), yielding a 180° × 90° main-view region, that is, a vertical viewing-angle range of 180° and a horizontal viewing-angle range of 90°. This main-view region can be denoted ViewMainArea_01.
a down-sampling unit 160, configured to down-sample the images corresponding to the non-main-view regions to obtain non-main-view sampled images.
The resolution of the image corresponding to the main-view region remains unchanged, while the images of the non-main-view regions are down-sampled to obtain the non-main-view sampled images.
The down-sampling method includes, but is not limited to, 1/4-pixel down-sampling, for example, keeping only the odd columns in the vertical direction and the odd rows in the horizontal direction of a non-main-view region to obtain a non-main-view sampled image. The non-main-view sampled image then has 1/4 of the pixels of the original image.
an encoding unit 170, configured to encode the image corresponding to the main-view region and each non-main-view sampled image to obtain a video coded frame.
The non-main-view sampled images obtained by down-sampling are spliced to obtain ViewDownArea_00.
To facilitate encoding, the main-view region ViewMainArea_00 and the spliced down-sampled image ViewDownArea_00 are then spliced to obtain the image to be encoded, PostFrame_00, as shown in Fig. 6.
When each of the N view regions in turn serves as the main-view region, the frames PostFrame_ij are obtained, where i indexes which of the N view regions serves as the main-view region and j = 0, 1.
Here, i indicates that the i-th of the N view regions serves as the main-view region; j = 0 indicates that the main-view region spans 180° horizontally and 90° vertically, and j = 1 indicates that the main-view region spans 180° vertically and 90° horizontally.
Each image to be encoded, PostFrame_ij, is then encoded to obtain the corresponding video coded frame.
In one embodiment of the present application, if the server needs to provide encoded data of the panoramic video for multiple different users, the server can encode all the PostFrame_ij in advance to obtain the corresponding encoded data. Then, according to the viewing angle of each user, the video coded frame corresponding to that user's viewing angle is obtained from the encoded data of the PostFrame_ij and sent to the VR terminal.
Encoding the video sequence corresponding to each viewing angle as the main view in advance in this way reduces the waiting time between the VR terminal user's request and the server sending the panoramic video.
In another embodiment of the present application, if the server knows the user's viewing angle in advance, the image to be encoded whose main-view region corresponds to that viewing angle is encoded on demand to obtain the video coded frame.
a second determining unit 180, configured to determine the target viewing angle of the VR terminal and obtain the target video coded frame whose main-view region corresponds to the target viewing angle.
In one embodiment of the present application, the target viewing angle of the VR terminal can be the default viewing angle corresponding to the VR terminal that is stored on the server side.
In another embodiment of the present application, the VR terminal can obtain the user's current viewing angle through an attitude sensor and feed it back to the server; the server then uses the current viewing angle fed back by the VR terminal as the target viewing angle.
a sending unit 190, configured to send the target video coded frame.
The server sends the target video coded frame to the VR terminal so that the VR terminal can display the panoramic video.
The target video coded frames corresponding to each video image frame in a panoramic video are generated according to the above process.
In the panoramic video transmission device provided by this embodiment, the obtained panoramic video image frame is mapped onto a cube image frame, and the cube image frame is rearranged into a rectangular panoramic image frame. The rectangular panoramic image frame is divided into N spatially continuous view regions, from which a main-view region and non-main-view regions are determined. The images corresponding to the non-main-view regions are down-sampled to obtain non-main-view sampled images; the image corresponding to the main-view region and each non-main-view sampled image are then encoded to obtain video coded frames. The target viewing angle of the virtual reality terminal is obtained, and the target video coded frame whose main-view region corresponds to the target viewing angle is sent to the virtual reality terminal. In this device, the image corresponding to the main-view region is encoded and displayed at full resolution, while the images corresponding to the non-main-view regions are down-sampled before being encoded and displayed. Down-sampling an image greatly reduces its data volume, thereby reducing the data volume of the image outside the viewing angle and saving the network resources required for transmission.
Referring to Fig. 10, which shows a block diagram of another panoramic video transmission device according to an embodiment of the present application, the device further includes, on the basis of the embodiment shown in Fig. 9:
a second obtaining unit 210, configured to obtain the latest viewing-angle range of the VR terminal.
The latest viewing-angle range can be detected by the VR terminal through an attitude sensor. For example, when the attitude sensor in the VR terminal detects that the user's head has moved, the current posture is recorded and the viewing angle corresponding to the current posture is then calculated.
a third obtaining unit 220, configured to obtain the overlap ratio between the latest viewing-angle range and the current viewing-angle range.
In one embodiment of the present application, the third obtaining unit 220 may include a first obtaining subunit, a second obtaining subunit, and a calculating subunit:
the first obtaining subunit is configured to obtain the first main-view region corresponding to the latest viewing-angle range and the second main-view region corresponding to the current viewing-angle range;
the second obtaining subunit is configured to obtain the overlapping area of the first main-view region and the second main-view region;
the calculating subunit is configured to calculate the ratio between the overlapping area and the area of the second main-view region to obtain the overlap ratio.
a third determining unit 230, configured to determine, when the overlap ratio is smaller than a preset threshold, the latest viewing-angle range as the latest target viewing-angle range.
If the calculated overlap ratio is smaller than or equal to the preset threshold, the latest viewing-angle range is determined as the latest target viewing-angle range, and the switching unit 240 is then triggered to perform the corresponding operation.
If the overlap ratio is greater than the preset threshold, it is determined that the user's viewing angle has not changed; the latest viewing-angle range is ignored, the main-view region does not need to be switched, and the view video sent to the VR terminal is not switched.
a switching unit 240, configured to switch from the target video coded frame whose main-view region corresponds to the current viewing-angle range to the target video coded frame whose main-view region corresponds to the latest target viewing-angle range.
That is, the view video that uses the former viewing angle as the main view is switched to the view video that uses the latest viewing angle as the main view.
In the panoramic video transmission device provided by this embodiment, while sending the target encoded video to the VR terminal, the server obtains the user's latest viewing-angle range. If the overlap ratio between the latest viewing-angle range and the former viewing-angle range is smaller than the preset threshold, it is determined that the user's viewing angle has changed, and the video coded frame whose main-view region corresponds to the latest viewing angle is sent to the VR terminal, ensuring that the image within the user's main-view region is the clearest.
Corresponding to the above panoramic video transmission method embodiment applied to a VR terminal, the present application further provides a panoramic video transmission device embodiment applied to a VR terminal.
Referring to Fig. 11, which shows a block diagram of a panoramic video transmission device according to an embodiment of the present application, the device is applied to a VR terminal. As shown in Fig. 11, the device may include: a receiving unit 310, a determining unit 320, an up-sampling unit 330, a splicing unit 340, a converting unit 350, and a display unit 360.
The receiving unit 310 is configured to receive a target video coded frame and decode the target video coded frame to obtain a decoded video image frame.
The VR terminal receives the target video coded frame sent by the server side; this target video coded frame is the video coded frame, obtained by the server side according to the user's viewing angle, whose main-view region corresponds to the user's viewing angle.
After receiving the target video coded frame sent by the server, the VR terminal decodes the target video coded frame with a decoder to obtain the decoded video image frame.
In one embodiment of the present application, what the VR terminal receives is a series of video coded frames sent by the server, and this series of video coded frames constitutes a video.
The determining unit 320 is configured to determine the main-view region and the non-main-view regions in the decoded video image frame.
The server and the VR terminal can agree in advance on which part of each video image frame is the main-view region and which part is the non-main-view region.
The up-sampling unit 330 is configured to up-sample the video image corresponding to each non-main-view region to obtain up-sampled video images.
The non-main-view regions in the decoded video image frame are up-sampled to obtain the up-sampled video images.
The up-sampling process is the inverse of the down-sampling process; the up-sampling method includes, but is not limited to, sampling the rows and columns of the image in a non-main-view region by a factor of 2 each.
For example, if the pixel resolution of the DownSample_i obtained by the server's down-sampling is w_i × h_i, the pixel resolution of the up-sampled video image obtained by the VR terminal is 2w_i × 2h_i.
The splicing unit 340 is configured to splice the video image corresponding to the main-view region and the up-sampled video image corresponding to each non-main-view region to obtain a target image frame.
The decoded video image corresponding to the main-view region and the up-sampled video images are spliced according to the arrangement of the view regions within the rectangular panoramic image frame used on the server side, to obtain the target image frame.
The converting unit 350 is configured to convert the target image frame into a target three-dimensional model image.
In one embodiment of the present application, the three-dimensional model of the VR terminal is a cube; the obtained target image frame is then mapped onto a cube panoramic image frame, and the cube panoramic image frame is displayed on the display module of the VR terminal.
In another embodiment of the present application, the three-dimensional model of the VR terminal is not a cube but another three-dimensional model, for example, a cylinder; the target image frame is then first mapped onto a cube panoramic image, and the cube panoramic image frame is in turn mapped onto the target three-dimensional model image. Finally, the target three-dimensional model image is displayed through the display module.
The display unit 360 is configured to display the target three-dimensional model image.
In the panoramic video transmission device provided by this embodiment, after the target video coded frame is received, it is decoded to obtain the decoded video image frame. The main-view region and the non-main-view regions in the decoded video image frame are then determined. The non-main-view regions are up-sampled, and the decoded image corresponding to the main-view region and the up-sampled image corresponding to each non-main-view region are spliced to obtain the target image frame. The target image frame is converted into the target three-dimensional model image, which is displayed to the user. Because the server's down-sampling greatly reduces the data volume of the image outside the viewing angle, the network resources required by the VR terminal to receive the panoramic video are saved; moreover, the workload of decoding the panoramic video on the VR terminal is reduced and the decoding speed is improved.
For the foregoing method embodiments, for simplicity of description, they are expressed as a series of action combinations. However, those skilled in the art should understand that the present invention is not limited by the described order of actions, because according to the present invention certain steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in the specification are preferred embodiments, and the actions and modules involved are not necessarily essential to the invention.
It should be noted that the embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts between the embodiments may be referred to one another. Since the device embodiments are basically similar to the method embodiments, their description is relatively brief; for relevant details, refer to the description of the method embodiments.
The steps in the methods of the embodiments of the application may be reordered, combined, and deleted according to actual needs.
The modules and submodules in the devices and terminals of the embodiments of the application may be combined, divided, and deleted according to actual needs.
In the several embodiments provided in this application, it should be understood that the disclosed terminals, devices, and methods may be implemented in other ways. For example, the terminal embodiments described above are merely illustrative. The division into modules or submodules is only a division by logical function; in actual implementation there may be other divisions, for example, multiple submodules or modules may be combined or integrated into another module, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or modules, and may be electrical, mechanical, or of other forms.
The modules or submodules described as separate components may or may not be physically separated, and the components shown as modules or submodules may or may not be physical modules or submodules; that is, they may be located in one place or distributed over multiple network modules or submodules. Some or all of the modules or submodules may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional modules or submodules in the embodiments of the application may be integrated into one processing module, or each module or submodule may exist alone physically, or two or more modules or submodules may be integrated into one module. The above integrated modules or submodules may be implemented in the form of hardware, or in the form of software functional modules or submodules.
Finally, it should be noted that, in this document, relational terms such as first and second are only used to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes that element.
The foregoing description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The above is only a preferred embodiment of the present invention. It should be noted that those of ordinary skill in the art may make various improvements and modifications without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (14)

1. A panoramic video transmission method, characterized by comprising:
obtaining a panoramic video image frame;
converting the panoramic video image frame into a cube panoramic image frame;
rearranging the cube panoramic image frame into a rectangular panoramic image frame;
dividing the rectangular panoramic image frame into N spatially continuous view regions, and determining a main view region and non-main view regions from the N view regions, wherein N is a positive integer;
down-sampling the images corresponding to the non-main view regions to obtain non-main-view sampled images;
encoding the image corresponding to the main view region and each non-main-view sampled image to obtain a video coded frame;
determining a target viewing angle of a virtual reality terminal, and obtaining a target video coded frame whose main view region corresponds to the target viewing angle; and
sending the target video coded frame.
2. The method according to claim 1, characterized in that the type of the main view region comprises a horizontal main view region and a vertical main view region;
and determining the main view region and the non-main view regions from the N view regions comprises:
extending any one of the N view regions, within the horizontal view angle range, to the left and to the right by a preset view angle range respectively, to obtain a horizontal main view region, and determining the view regions among the N view regions other than the main view region as the non-main view regions;
or,
extending any one of the N view regions, within the vertical view angle range, upwards and downwards by the preset view angle range respectively, to obtain a vertical main view region, and determining the view regions among the N view regions other than the main view region as the non-main view regions.
3. The method according to claim 1 or 2, characterized in that encoding the image corresponding to the main view region and each non-main-view sampled image to obtain a video coded frame comprises:
splicing, for each of the N view regions taken in turn as the main view region, the corresponding main view image and all of the non-main-view sampled images to obtain an image frame to be encoded; and
encoding each image frame to be encoded respectively to obtain the video coded frames.
4. The method according to claim 3, characterized in that obtaining the target video coded frame whose main view region corresponds to the target viewing angle comprises:
searching among the 2N video coded frames for the video coded frame whose main view region corresponds to the target viewing angle, and determining it as the target video coded frame.
5. The method according to claim 1, characterized in that determining the target viewing angle of the virtual reality terminal comprises:
receiving viewing angle information sent by the virtual reality terminal, and determining the target viewing angle according to the viewing angle information, wherein the viewing angle information is obtained by the virtual reality terminal through detection by a sensor.
6. The method according to claim 1, characterized in that the method further comprises:
obtaining a latest viewing angle range of the virtual reality terminal;
obtaining an overlap proportion of the latest viewing angle range and a current viewing angle range;
when the overlap proportion is less than a preset threshold, determining the latest viewing angle range as a latest target viewing angle range; and
switching from the target video coded frame whose main view region corresponds to the current viewing angle range to the target video coded frame whose main view region corresponds to the latest target viewing angle range.
7. The method according to claim 6, characterized in that obtaining the overlap proportion of the latest viewing angle range and the current viewing angle range comprises:
obtaining a first main view region corresponding to the latest viewing angle range and a second main view region corresponding to the current viewing angle range;
obtaining an overlapping area of the first main view region and the second main view region; and
calculating the ratio of the overlapping area to the area of the second main view region to obtain the overlap proportion (an illustrative sketch of this computation is given after the claims).
8. A panoramic video transmission method, characterized by comprising:
receiving a target video coded frame, and decoding the target video coded frame to obtain a decoded video image frame;
determining a main view region and non-main view regions in the decoded video image frame;
up-sampling the video images corresponding to the non-main view regions respectively to obtain up-sampled video images;
splicing the video image corresponding to the main view region and the up-sampled video image corresponding to each non-main view region to obtain a target image frame;
converting the target image frame into a target three-dimensional model image; and
displaying the target three-dimensional model image.
9. A panoramic video transmission device, characterized by comprising:
a first acquisition unit, configured to obtain a panoramic video image frame;
a converting unit, configured to convert the panoramic video image frame into a cube panoramic image frame;
an arrangement unit, configured to rearrange the cube panoramic image frame into a rectangular panoramic image frame;
a division unit, configured to divide the rectangular panoramic image frame into N spatially continuous view regions, wherein N is a positive integer;
a first determination unit, configured to determine a main view region and non-main view regions from the N view regions;
a down-sampling unit, configured to down-sample the images corresponding to the non-main view regions to obtain non-main-view sampled images;
a coding unit, configured to encode the image corresponding to the main view region and each non-main-view sampled image to obtain a video coded frame;
a second determination unit, configured to determine a target viewing angle of a virtual reality terminal and obtain a target video coded frame whose main view region corresponds to the target viewing angle; and
a transmission unit, configured to send the target video coded frame.
10. The device according to claim 9, characterized in that the type of the main view region comprises a horizontal main view region and a vertical main view region;
and the first determination unit comprises:
a first determination subunit, configured to extend any one of the N view regions, within the horizontal view angle range, to the left and to the right by a preset view angle range respectively, to obtain a horizontal main view region, and a second determination subunit, configured to determine the view regions among the N view regions other than the main view region as the non-main view regions;
or,
a third determination subunit, configured to extend any one of the N view regions, within the vertical view angle range, upwards and downwards by the preset view angle range respectively, to obtain a vertical main view region, and a fourth determination subunit, configured to determine the view regions among the N view regions other than the main view region as the non-main view regions.
11. The device according to claim 9, characterized in that the second determination unit is specifically configured to:
receive viewing angle information sent by the virtual reality terminal, and determine the target viewing angle according to the viewing angle information, wherein the viewing angle information is obtained by the virtual reality terminal through detection by a sensor.
12. The device according to claim 9, characterized in that the device further comprises:
a second acquisition unit, configured to obtain a latest viewing angle range of the virtual reality terminal;
a third acquisition unit, configured to obtain an overlap proportion of the latest viewing angle range and a current viewing angle range;
a third determination unit, configured to determine, when the overlap proportion is less than a preset threshold, the latest viewing angle range as a latest target viewing angle range; and
a switching unit, configured to switch from the target video coded frame whose main view region corresponds to the current viewing angle range to the target video coded frame whose main view region corresponds to the latest target viewing angle range.
13. The device according to claim 12, characterized in that the third acquisition unit comprises:
a first obtaining subunit, configured to obtain a first main view region corresponding to the latest viewing angle range and a second main view region corresponding to the current viewing angle range;
a second obtaining subunit, configured to obtain an overlapping area of the first main view region and the second main view region; and
a computation subunit, configured to calculate the ratio of the overlapping area to the area of the second main view region to obtain the overlap proportion.
14. A panoramic video transmission device, characterized by comprising:
a receiving unit, configured to receive a target video coded frame and decode the target video coded frame to obtain a decoded video image frame;
a determination unit, configured to determine a main view region and non-main view regions in the decoded video image frame;
an up-sampling unit, configured to up-sample the video images corresponding to the non-main view regions respectively to obtain up-sampled video images;
a concatenation unit, configured to splice the video image corresponding to the main view region and the up-sampled video image corresponding to each non-main view region to obtain a target image frame;
a converting unit, configured to convert the target image frame into a target three-dimensional model image; and
a display unit, configured to display the target three-dimensional model image.
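The following sketch is an editorial illustration, not part of the claims. It shows one way the overlap proportion of claims 6, 7, and 13 and the switching condition of claim 6 might be computed, under the assumption that main view regions can be treated as axis-aligned rectangles in view-angle coordinates, and with a made-up threshold value of 0.5 standing in for the preset threshold:

    from dataclasses import dataclass

    @dataclass
    class Region:
        """Axis-aligned main view region in view-angle coordinates (degrees)."""
        left: float
        bottom: float
        right: float
        top: float

        def area(self) -> float:
            return max(0.0, self.right - self.left) * max(0.0, self.top - self.bottom)

    def overlap_proportion(first: Region, second: Region) -> float:
        """Ratio of the overlapping area to the area of the second (current) region."""
        overlap = Region(max(first.left, second.left), max(first.bottom, second.bottom),
                         min(first.right, second.right), min(first.top, second.top))
        return overlap.area() / second.area() if second.area() else 0.0

    def should_switch(latest: Region, current: Region, threshold: float = 0.5) -> bool:
        """Switch to the coded frame of the latest viewing angle range when the
        overlap proportion falls below the preset threshold (0.5 is a placeholder)."""
        return overlap_proportion(latest, current) < threshold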
Priority Applications (1)

Application Number: CN201810165985.1A
Priority Date / Filing Date: 2018-02-28
Title: A kind of panoramic video transmission method and device
Legal Status: Pending

Publications (1)

Publication Number: CN108322727A
Publication Date: 2018-07-24

Family

ID=62900677

Country Status (1)

CN: CN108322727A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1479462A (en) * 2003-06-04 2004-03-03 威海克劳斯数码通迅有限公司 Full view remote network safety monitoring system
CN105933688A (en) * 2015-10-26 2016-09-07 北京蚁视科技有限公司 Image storage method based on panoramic image display
CN107396077A (en) * 2017-08-23 2017-11-24 深圳看到科技有限公司 Virtual reality panoramic video stream projecting method and equipment
CN107622474A (en) * 2017-09-26 2018-01-23 北京大学深圳研究生院 Panoramic video mapping method based on main view point

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109040601A (en) * 2018-09-05 2018-12-18 清华-伯克利深圳学院筹备办公室 A kind of multiple dimensioned non-structured 1,000,000,000 pixel VR panoramic shooting system
CN110913198A (en) * 2018-09-14 2020-03-24 北京恒信彩虹信息技术有限公司 VR image transmission method
CN110913198B (en) * 2018-09-14 2021-04-27 东方梦幻虚拟现实科技有限公司 VR image transmission method
WO2020057249A1 (en) * 2018-09-19 2020-03-26 中兴通讯股份有限公司 Image processing method, apparatus and system, and network device, terminal and storage medium
CN109587516A (en) * 2018-11-01 2019-04-05 深圳威尔视觉传媒有限公司 Method for processing video frequency, device and storage medium
CN109815409A (en) * 2019-02-02 2019-05-28 北京七鑫易维信息技术有限公司 A kind of method for pushing of information, device, wearable device and storage medium
CN109815409B (en) * 2019-02-02 2021-01-01 北京七鑫易维信息技术有限公司 Information pushing method and device, wearable device and storage medium
CN112449171B (en) * 2019-09-03 2021-10-29 上海交通大学 Encoding method, system and medium for point cloud view-division transmission
CN112449171A (en) * 2019-09-03 2021-03-05 上海交通大学 Encoding method, system and medium for point cloud view-division transmission
CN113014924A (en) * 2019-12-20 2021-06-22 中国电信股份有限公司 Video encoding method, server, apparatus, and computer-readable storage medium
CN111131879A (en) * 2019-12-30 2020-05-08 歌尔股份有限公司 Video data playing method and device and computer readable storage medium
CN111246237A (en) * 2020-01-22 2020-06-05 视联动力信息技术股份有限公司 Panoramic video live broadcast method and device
CN111314729A (en) * 2020-02-25 2020-06-19 广州华多网络科技有限公司 Panoramic image generation method, device, equipment and storage medium
CN113542209B (en) * 2020-03-30 2024-06-07 腾讯美国有限责任公司 Method, apparatus and readable storage medium for video signaling
CN113542209A (en) * 2020-03-30 2021-10-22 腾讯美国有限责任公司 Method, apparatus and readable storage medium for video signaling
WO2021259175A1 (en) * 2020-06-23 2021-12-30 华为技术有限公司 Video transmission method, apparatus, and system
CN111726640A (en) * 2020-07-03 2020-09-29 中图云创智能科技(北京)有限公司 Live broadcast method with 0-360 degree dynamic viewing angle
CN113630622A (en) * 2021-06-18 2021-11-09 中图云创智能科技(北京)有限公司 Panoramic video image processing method, server, target device, apparatus and system
CN113630622B (en) * 2021-06-18 2024-04-26 中图云创智能科技(北京)有限公司 Panoramic video image processing method, server, target equipment, device and system
US11748915B2 (en) 2021-06-22 2023-09-05 Qingdao Pico Technology Co., Ltd. VR image compression transmission method and system
WO2022267256A1 (en) * 2021-06-22 2022-12-29 青岛小鸟看看科技有限公司 Method and system for vr image compression and transmission
CN115529451A (en) * 2021-06-25 2022-12-27 北京金山云网络技术有限公司 Data transmission method and device, storage medium and electronic equipment
WO2022268008A1 (en) * 2021-06-26 2022-12-29 华为技术有限公司 Virtual reality video transmission method and apparatus
GB2609064A (en) * 2021-07-16 2023-01-25 Sony Interactive Entertainment Inc Video processing and playback systems and methods
US12022231B2 (en) 2021-07-16 2024-06-25 Sony Interactive Entertainment Inc. Video recording and playback systems and methods
CN114143602A (en) * 2021-12-06 2022-03-04 深圳创维数字技术有限公司 Panoramic picture playing method and device, panoramic playing server and storage medium
WO2023125353A1 (en) * 2021-12-27 2023-07-06 影石创新科技股份有限公司 Panoramic video transmission method and apparatus, and storage medium
WO2024055925A1 (en) * 2022-09-13 2024-03-21 影石创新科技股份有限公司 Image transmission method and apparatus, image display method and apparatus, and computer device

Similar Documents

Publication Publication Date Title
CN108322727A (en) A kind of panoramic video transmission method and device
US11341715B2 (en) Video reconstruction method, system, device, and computer readable storage medium
US8736659B2 (en) Method, apparatus, and system for 3D video communication
US11075974B2 (en) Video data processing method and apparatus
CN113347405B (en) Scaling related method and apparatus
CN101002471B (en) Method and apparatus to encode image, and method and apparatus to decode image data
CN101459857B (en) Communication terminal
JP2003111101A (en) Method, apparatus and system for processing stereoscopic image
US20130100123A1 (en) Image processing apparatus, image processing method, program and integrated circuit
US9654762B2 (en) Apparatus and method for stereoscopic video with motion sensors
CN109698949B (en) Video processing method, device and system based on virtual reality scene
CN111669564B (en) Image reconstruction method, system, device and computer readable storage medium
KR102327972B1 (en) Projection image construction method and device
US11202099B2 (en) Apparatus and method for decoding a panoramic video
CN103795961A (en) Video conference telepresence system and image processing method thereof
CN101743750A (en) Method and apparatus for encoding and decoding multi-view image
CN107707830B (en) Panoramic video playing and photographing system based on one-way communication
KR102505130B1 (en) A method and a device for encoding a signal representative of a light-field content
US20120120185A1 (en) Video communication method, apparatus, and system
CN115761190A (en) Multi-user augmented reality photo browsing method and system based on scene mapping
CN115314658A (en) Video communication method and system based on three-dimensional display
CN114040184A (en) Image display method, system, storage medium and computer program product
Valli et al. Advances in spatially faithful (3d) telepresence
JP2887272B2 (en) 3D image device
KR20100097868A (en) System and method for 3-dimensional image acquisition using camera terminal for shooting multi angle pictures

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180724