CN105828104A - Video data processing method and device - Google Patents


Info

Publication number
CN105828104A
CN105828104A
Authority
CN
China
Prior art keywords
video data
frame
spatial axes
space
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610312118.7A
Other languages
Chinese (zh)
Inventor
张荣辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
LeTV Holding Beijing Co Ltd
LeTV Cloud Computing Co Ltd
Original Assignee
LeTV Holding Beijing Co Ltd
LeTV Cloud Computing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LeTV Holding Beijing Co Ltd, LeTV Cloud Computing Co Ltd filed Critical LeTV Holding Beijing Co Ltd
Priority to CN201610312118.7A priority Critical patent/CN105828104A/en
Publication of CN105828104A publication Critical patent/CN105828104A/en
Pending legal-status Critical Current


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/23412 Processing of video elementary streams for generating or manipulating the scene composition of objects, e.g. MPEG-4 objects
    • H04N21/23418 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics
    • H04N21/236 Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
    • H04N21/2365 Multiplexing of several video streams
    • H04N21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/262 Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists
    • H04N21/26275 Content or additional data distribution scheduling for distributing content or additional data in a staggered manner, e.g. repeating movies on different channels in a time-staggered manner in a near video on demand system
    • H04N21/266 Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
    • H04N21/26603 Channel or content management for automatically generating descriptors from content, e.g. when it is not made available by its provider, using content analysis techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a video data processing method and device. The method comprises the following steps: establishing a spatial axis in advance, the spatial axis being used to configure a spatial characteristic for video data; obtaining the video data frames in the video data; binding the video data frames with the spatial axis to determine spatial video data frames; and generating a live video data stream from the spatial video data frames. The method generates video data with a spatial characteristic and makes it convenient for a user to adjust the video data.

Description

Video data processing method and device
Technical field
The present disclosure relates to the technical field of video data processing, and in particular to a video data processing method and device.
Background technology
Video data played on a mobile terminal, whether on-demand or live, is two-dimensional. Such video data is played along a time axis, with one video picture corresponding to each point in time. While watching a video, a user cannot adjust the apparent distance of an object it contains through a zoom operation in order to see that object clearly: when any object in the video is zoomed, the whole video picture is scaled by the same proportion on the two-dimensional plane.
Summary of the invention
Embodiments of the present disclosure provide a video data processing method and device to address a defect of the prior art, namely that a user cannot clearly see an arbitrary object contained in a video picture by adjusting a two-dimensional video, and to enable the user to operate on the video data at will so as to view any object the video picture contains in detail.
To solve the above problem, the present disclosure discloses a video data processing method, comprising the following steps: establishing a spatial axis in advance, the spatial axis being used to configure a spatial characteristic for video data; obtaining each video data frame in the video data; binding each video data frame with the spatial axis to determine spatial video data frames; and using the spatial video data frames to generate a live video data stream.
The present disclosure also discloses a video data processing device, comprising:
a spatial axis establishing module, configured to establish a spatial axis in advance, the spatial axis being used to configure a spatial characteristic for video data;
a data frame obtaining module, configured to obtain each video data frame in the video data;
a binding processing module, configured to bind each video data frame with the spatial axis to determine spatial video data frames;
a data stream generating module, configured to use the spatial video data frames to generate a live video data stream.
Compared with the prior art, the embodiments of the disclosure have the following advantages:
In the video data processing method and device provided by the embodiments, a spatial axis is established in advance, the obtained video data frames are bound with the spatial axis to generate spatial video data frames, and the spatial video data frames are then used to generate a live video data stream. The live video data stream has a spatial characteristic, so the user can operate on the played video picture and adjust it to view any object the picture contains in detail, which improves the user's viewing experience.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present disclosure or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Evidently, the drawings described below show some embodiments of the disclosure, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flow chart of the steps of an embodiment of a video data processing method of the disclosure;
Fig. 2 is a flow chart of the steps of another embodiment of a video data processing method of the disclosure;
Fig. 3 is a structural block diagram of an embodiment of a video data processing device of the disclosure;
Fig. 4 is a structural block diagram of another embodiment of a video data processing device of the disclosure.
Detailed description of the embodiments
To make the objectives, technical solutions and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. Evidently, the described embodiments are some rather than all of the embodiments of the disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the disclosure without creative effort fall within the protection scope of the disclosure.
A core idea of the embodiments of the disclosure is to generate live video data with a spatial characteristic by establishing a spatial axis in advance, so that the user can perform operations such as zooming on video data played on a mobile terminal and adjust the playback picture to watch any object it contains clearly.
Referring to Fig. 1, a flow chart of the steps of an embodiment of a video data processing method of the disclosure is shown. The method may include the following steps:
Step S101: establish a spatial axis in advance, the spatial axis being used to configure a spatial characteristic for video data.
Before the video data goes live, a spatial axis is established for it in advance; its purpose is to configure a spatial characteristic for the video data. The spatial characteristic adds, on the basis of the original flat two-dimensional image, a dimension perpendicular to the two-dimensional plane, giving the image a sense of spatial depth. The spatial axis is perpendicular to the two-dimensional plane; once it is established, a three-dimensional model is obtained, for example a partial sphere whose radii span the two dimensions of the two-dimensional plane and the dimension perpendicular to it.
Step S102: obtain each video data frame in the video data.
Video data consists of video data frames. The video data can be obtained at the live broadcasting end, all the video data frames corresponding to it determined, and the related information corresponding to each frame obtained.
Step S103: bind each video data frame with the spatial axis to determine spatial video data frames.
According to the related information of the obtained video data frames, the mapping relation between each frame and the pre-established spatial axis is determined. According to this mapping relation, every object contained in each video data frame is mapped onto the spatial axis one by one, and each frame is bound with the spatial axis to determine video data frames with the spatial characteristic. Such a frame has three dimensions: where an existing video data frame is a two-dimensional video with only length and width, a spatial video data frame additionally has depth, i.e. the spatial axis perpendicular to the length-width plane.
Step S104: use the spatial video data frames to generate a live video data stream.
After a corresponding spatial video data frame has been generated for each video data frame, a spatial video data stream can be generated accordingly, forming a live video data stream. The generated stream has a spatial characteristic; the user can operate on it to inspect details in the video data, for example zooming in on an object in the video to view its details as if walking up to it.
In this embodiment, a spatial axis is established in advance, the obtained video data frames are bound with it to generate spatial video data frames, and the spatial video data frames are then used to generate a live video data stream. The stream has a spatial characteristic, so the user can operate on the played video picture and adjust it to view any object the picture contains in detail, improving the user's viewing experience.
Referring to Fig. 2, a flow chart of the steps of another embodiment of a video data processing method of the disclosure is shown. The method may include the following steps:
Step S201: establish a spatial axis in advance, the spatial axis being used to configure a spatial characteristic for video data.
Before the video data goes live, a spatial axis is established for it in advance; its purpose is to configure a spatial characteristic for the video data. The spatial characteristic adds, on the basis of the original flat two-dimensional image, a dimension perpendicular to the two-dimensional plane, giving the image a sense of spatial depth. The spatial axis is perpendicular to the two-dimensional plane; once it is established, a three-dimensional model is obtained, for example a quarter of a spheroid whose radii span the two dimensions of the two-dimensional plane and the dimension perpendicular to it.
For example, let the length and width of the two-dimensional plane be the positive halves of the x-axis and the y-axis respectively, and establish a z-axis perpendicular to the x-y plane. The spatial axis then defines a spheroid with the x-axis as diameter and the positive y-axis and positive z-axis as radii. The user's viewing angle covers a range of 180° around the x-axis of the spheroid and 90° around the positive y-axis; the viewing depth is the length of the z-axis, and the objects within the viewing angle are all three-dimensional.
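The spherical viewing surface just described can be sketched as a coordinate mapping. This is only an illustrative interpretation under stated assumptions (the function name and the equirectangular-style parameterization are not from the patent): a normalized frame coordinate (u, v) is mapped onto a sphere where u spans the 180° range around the x-axis and v spans the 90° range up the positive y-axis.

```python
import math

def plane_to_sphere(u, v, radius=1.0):
    """Map normalized frame coordinates (u, v) in [0, 1] onto the
    viewing sphere: u covers 180 degrees around the x-axis, v covers
    90 degrees toward the positive y-axis; z supplies the depth
    dimension. Illustrative mapping, not the patent's exact formula."""
    theta = u * math.pi           # longitude, 0..180 degrees
    phi = v * (math.pi / 2)       # latitude, 0..90 degrees
    x = radius * math.cos(phi) * math.cos(theta)
    y = radius * math.sin(phi)
    z = radius * math.cos(phi) * math.sin(theta)
    return (x, y, z)
```

Every mapped point lies at distance `radius` from the origin, which matches the idea of a spheroid whose radius bounds the viewing depth along the z-axis.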
Step S202: obtain each video data frame in the video data.
Video data consists of video data frames. The video data can be obtained at the live broadcasting end, all the video data frames corresponding to it determined, and the related information corresponding to each frame obtained.
Step S203: analyze each video data frame to determine its texture information.
Each obtained video data frame and its related information can be analyzed to determine the image texture information of the frame. Texture is an important visual cue that exists widely in images; image texture information includes the tone primitives that form the texture and the interrelations between them, for example a texture ID (identity). The texture information is the key to binding a video data frame with the spatial axis.
Step S204: bind the video data with the spatial axis according to the texture information to determine spatial video data frames.
According to the texture information, the mapping relation between the texture information and the video data is determined; the texture information is put into correspondence with the spatial axis and then bound with the video data. In this way each video data frame is bound with the spatial axis and spatial video data frames, which have the spatial characteristic, are determined.
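The texture-based binding of steps S203 and S204 can be sketched as a small pipeline. The patent leaves the analysis and mapping abstract, so `extract_texture_id` and `depth_for_texture` below are hypothetical hooks standing in for those steps, and the `SpatialFrame` structure is an assumed representation:

```python
from dataclasses import dataclass

@dataclass
class SpatialFrame:
    """A video frame bound to the spatial axis (illustrative structure)."""
    frame_index: int
    texture_id: int
    depth: float  # position along the spatial (z) axis

def bind_frames(frames, extract_texture_id, depth_for_texture):
    """Sketch of steps S203-S204: analyze each frame to obtain its
    texture information, then use that information to bind the frame
    to the spatial axis as a SpatialFrame with a depth value."""
    bound = []
    for i, frame in enumerate(frames):
        tex = extract_texture_id(frame)          # S203: texture analysis
        bound.append(SpatialFrame(frame_index=i,
                                  texture_id=tex,
                                  depth=depth_for_texture(tex)))  # S204
    return bound
```

The key design point mirrored here is that the texture information is the intermediary: the frame is never positioned on the spatial axis directly, only via its texture ID.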
Step S205: use the spatial video data frames to generate a live video data stream.
After a corresponding spatial video data frame has been generated for each video data frame, the frames are combined in order to generate a corresponding spatial video data stream, forming a live video data stream. The server sends the generated video data stream to the user client so that the user can play the live video data through the client.
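Step S205's ordered combination of spatial frames into a stream could look like the following minimal sketch. A real system would mux the frames into a container format; here the "stream" is just an ordered list, and the `pts` (presentation timestamp) key used for ordering is an assumption:

```python
def assemble_stream(spatial_frames):
    """Sketch of step S205: combine spatial frames in presentation
    order into a live-stream payload. Each input is a dict with an
    assumed 'pts' timestamp; the output tags each frame with its
    sequence number in the stream."""
    ordered = sorted(spatial_frames, key=lambda f: f["pts"])
    return [{"seq": i, "frame": f} for i, f in enumerate(ordered)]
```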
Step S206: determine adjustment information according to an adjustment instruction, and adjust the live video data stream according to the adjustment information.
While watching the live video data stream on the client, the user can operate on any playback picture of the stream. The corresponding adjustment information is determined from the user's adjustment instruction, and the stream is then adjusted accordingly to meet the user's need. For example, during the live broadcast the user performs a zoom-in operation, such as slowly separating two fingers that are close together at a certain position on the mobile device; the adjustment information corresponding to this operation is magnification adjustment information, and the object at that position accordingly needs to be magnified.
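The pinch gesture described above can be turned into adjustment information roughly as follows. This is a sketch under stated assumptions: the patent does not specify the mapping, and the `sensitivity` constant and the dict shape of the adjustment information are invented for illustration.

```python
def pinch_to_zoom(prev_distance, cur_distance, sensitivity=0.01):
    """Interpret a pinch gesture: fingers moving apart produce
    magnification (zoom-in) adjustment information, fingers moving
    together produce zoom-out. Sensitivity scaling is an assumption."""
    delta = cur_distance - prev_distance
    return {"type": "zoom_in" if delta > 0 else "zoom_out",
            "factor": 1.0 + abs(delta) * sensitivity}
```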
Step S2061: convert the adjustment instruction into adjustment information corresponding to the spatial axis.
Besides adjusting the spatial video data on the two axes of the video plane, the adjustment instruction also adjusts the video data on the spatial axis, so the instruction needs to be converted into adjustment information corresponding to the spatial axis. For example, a viewpoint matrix can be calculated through a viewpoint transformation, and the adjustment instruction then converted, according to the viewpoint matrix, into the adjustment information corresponding to the spatial axis. Here the viewpoint transformation is the process of transforming an object's coordinates from the world coordinate system to viewpoint coordinates; the viewpoint matrix includes a current transformation matrix, a projection matrix, an orientation matrix and a final transformation matrix. First the current transformation matrix of the video data frame is obtained; then the orientation matrix is calculated from the rotation information of the gyroscope and the motion direction information of the touch screen; the projection matrix is calculated according to the adjustment instruction; finally the final transformation matrix is obtained, thereby determining the adjustment information of the adjustment instruction on the spatial axis.
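The matrix pipeline described here is the familiar model-view-projection composition from 3D graphics, and can be sketched in pure Python. The column-vector convention and the single-yaw orientation matrix are assumptions; a real implementation would fuse all three gyroscope axes and the touch-screen motion.

```python
import math

def matmul(a, b):
    """4x4 matrix product for row-major nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def identity():
    return [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]

def orientation_from_yaw(yaw):
    """Toy orientation matrix from a single gyroscope yaw angle
    (rotation about the y-axis); illustrative only."""
    c, s = math.cos(yaw), math.sin(yaw)
    m = identity()
    m[0][0], m[0][2], m[2][0], m[2][2] = c, s, -s, c
    return m

def final_transform(model, orientation, projection):
    """Final transformation matrix = projection * orientation * model
    (column-vector convention assumed), mirroring the composition of
    the current, orientation and projection matrices described above."""
    return matmul(projection, matmul(orientation, model))
```

With all three inputs set to the identity the final transform is also the identity, which is a quick sanity check on the composition order.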
Step S2062: adjust the spatial video data frames of the live video data stream according to the adjustment information to generate adjusted live video data.
After the adjustment information on the spatial axis has been determined, the spatial data frames of the video data stream are adjusted according to the user's adjustment instruction, generating the correspondingly adjusted live video data.
By establishing a spatial axis for the video data in advance, the present disclosure gives existing two-dimensional video data a spatial characteristic. The user can inspect details in the video data by adjusting it and experiences a sense of scene depth; when watching each video picture, an effect similar to a VR (virtual reality) panorama can be experienced.
It should be noted that the method embodiments are expressed as a series of action combinations for simplicity of description, but a person skilled in the art should know that the embodiments of the disclosure are not limited by the described order of actions, because according to the embodiments some steps may be performed in another order or simultaneously. Furthermore, a person skilled in the art should also know that the embodiments described in this specification are preferred embodiments, and the actions involved are not necessarily required by the embodiments of the disclosure.
Referring to Fig. 3, a structural block diagram of an embodiment of a video data processing device of the disclosure is shown. The device may include the following modules: a spatial axis establishing module 301, a data frame obtaining module 302, a binding processing module 303 and a data stream generating module 304, wherein:
The spatial axis establishing module 301 is configured to establish a spatial axis in advance, the spatial axis being used to configure a spatial characteristic for video data.
The data frame obtaining module 302 is configured to obtain each video data frame in the video data.
The binding processing module 303 is configured to bind each video data frame with the spatial axis to determine spatial video data frames.
The data stream generating module 304 is configured to use the spatial video data frames to generate a live video data stream.
Before the video data goes live, the spatial axis establishing module 301 establishes a spatial axis for it in advance, the purpose being to configure a spatial characteristic for the video data: on the basis of the original flat two-dimensional image, a dimension perpendicular to the two-dimensional plane is added so that the image has a sense of spatial depth. Since video data consists of video data frames, the data frame obtaining module 302 obtains the video data at the live broadcasting end, determines all the video data frames corresponding to it, and obtains the related information corresponding to each frame. The binding processing module 303 determines, from the related information of the obtained frames, the mapping relation between each frame and the pre-established spatial axis, maps every object in each frame onto the spatial axis one by one according to that relation, and binds each frame with the spatial axis to determine video data frames with the spatial characteristic. After a corresponding spatial video data frame has been generated for each frame, the data stream generating module 304 generates a corresponding spatial video data stream, forming a live video data stream. The generated stream has a spatial characteristic; the user can operate on it to inspect details in the video data, for example zooming in on an object in the video to view its details as if walking up to it.
Referring to Fig. 4, a structural block diagram of another embodiment of a video data processing device of the disclosure is shown.
In this embodiment, the binding processing module 403 includes a data frame analysis submodule 4031 and a spatial video data frame determining submodule 4032, wherein:
The data frame analysis submodule 4031 is configured to analyze each video data frame to determine its texture information. Each obtained frame and its related information can be analyzed to determine the image texture information of the frame. Texture is an important visual cue that exists widely in images; image texture information includes the tone primitives that form the texture and the interrelations between them, for example a texture ID (identity). The texture information is the key to binding a video data frame with the spatial axis.
The data frame determining submodule 4032 is configured to bind the video data with the spatial axis according to the texture information to determine spatial video data frames. According to the texture information, the mapping relation between the texture information and the video data is determined; the texture information is put into correspondence with the spatial axis and then bound with the video data, so that each frame is bound with the spatial axis and spatial video data frames, which have the spatial characteristic, are determined.
In this embodiment, the device further includes an adjusting module 405, configured to determine adjustment information according to an adjustment instruction and adjust the live video data stream according to the adjustment information. While watching the stream on the client, the user can operate on any playback picture of it; the corresponding adjustment information is determined from the user's adjustment instruction, and the stream is then adjusted accordingly to meet the user's need. For example, during the live broadcast the user performs a zoom-in operation, such as slowly separating two fingers that are close together at a certain position on the mobile device; the adjustment information corresponding to this operation is magnification adjustment information, and the object at that position accordingly needs to be magnified.
In this embodiment, the adjusting module 405 includes an information conversion submodule 4051 and a data generation submodule 4052, wherein:
The information conversion submodule 4051 is configured to convert the adjustment instruction into adjustment information corresponding to the spatial axis. Besides adjusting the spatial video data on the two axes of the video plane, the adjustment instruction also adjusts the video data on the spatial axis, so the instruction needs to be converted into adjustment information corresponding to the spatial axis. For example, a viewpoint matrix can be calculated, and the adjustment instruction then converted, according to the viewpoint matrix, into the adjustment information corresponding to the spatial axis. The viewpoint transformation is the process of transforming an object's coordinates from the world coordinate system to viewpoint coordinates; the viewpoint matrix includes a current transformation matrix, a projection matrix, an orientation matrix and a final transformation matrix. First the current transformation matrix of the video data frame is obtained; then the orientation matrix is calculated from the rotation information of the gyroscope and the motion direction information of the touch screen; the projection matrix is calculated according to the adjustment instruction; finally the final transformation matrix is obtained, thereby determining the adjustment information of the adjustment instruction on the spatial axis.
The data generation submodule 4052 is configured to adjust the spatial video data frames of the live video data stream according to the adjustment information to generate adjusted live video data. After the adjustment information on the spatial axis has been determined, the spatial data frames of the video data stream are adjusted according to the user's adjustment instruction, generating the correspondingly adjusted live video data.
For device embodiment, due to itself and embodiment of the method basic simlarity, so describe is fairly simple, relevant part sees the part of embodiment of the method and illustrates.
Each embodiment in this specification all uses the mode gone forward one by one to describe, and what each embodiment stressed is the difference with other embodiments, and between each embodiment, identical similar part sees mutually.
Those skilled in the art are it should be appreciated that the embodiment of disclosure embodiment can be provided as method, device or computer program.Therefore, the form of the embodiment in terms of disclosure embodiment can use complete hardware embodiment, complete software implementation or combine software and hardware.And, disclosure embodiment can use the form at one or more upper computer programs implemented of computer-usable storage medium (including but not limited to disk memory, CD-ROM, optical memory etc.) wherein including computer usable program code.
Disclosure embodiment is with reference to describing according to method, terminal unit (system) and the flow chart of computer program and/or the block diagram of disclosure embodiment.It should be understood that can be by the flow process in each flow process in computer program instructions flowchart and/or block diagram and/or square frame and flow chart and/or block diagram and/or the combination of square frame.These computer program instructions can be provided to produce a machine to the processor of general purpose computer, special-purpose computer, Embedded Processor or other programmable data processing terminal equipment so that the instruction performed by the processor of computer or other programmable data processing terminal equipment is produced for realizing the device of function specified in one flow process of flow chart or multiple flow process and/or one square frame of block diagram or multiple square frame.
These computer program instructions may be alternatively stored in and can guide in the computer-readable memory that computer or other programmable data processing terminal equipment work in a specific way, the instruction making to be stored in this computer-readable memory produces the manufacture including command device, and this command device realizes the function specified in one flow process of flow chart or multiple flow process and/or one square frame of block diagram or multiple square frame.
These computer program instructions also can be loaded on computer or other programmable data processing terminal equipment, make to perform sequence of operations step on computer or other programmable terminal equipment to produce computer implemented process, thus the instruction performed on computer or other programmable terminal equipment provides the step of the function specified in one flow process of flow chart or multiple flow process and/or one square frame of block diagram or multiple square frame for realization.
Although preferred embodiments of the present disclosure have been described, those skilled in the art, once aware of the basic inventive concept, may make further changes and modifications to these embodiments. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications that fall within the scope of the embodiments of the present disclosure.
Finally, it should also be noted that, in this document, relational terms such as "first" and "second" are used merely to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include," "comprise," or any other variants thereof are intended to cover non-exclusive inclusion, such that a process, method, article, or terminal device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or terminal device. In the absence of further limitation, an element defined by the phrase "including a..." does not exclude the presence of additional identical elements in the process, method, article, or terminal device that includes the element.
The video data processing method and the video data processing device provided by the present disclosure have been described above in detail. Specific examples are used herein to illustrate the principles and embodiments of the disclosure, and the description of the above embodiments is only intended to help in understanding the disclosed method and its core concept. Meanwhile, those of ordinary skill in the art may, according to the idea of the disclosure, make changes to the specific implementation and the scope of application. In summary, the content of this specification should not be construed as limiting the present disclosure.

Claims (8)

1. A video data processing method, comprising:
pre-establishing a spatial axis, wherein the spatial axis is used to configure spatial features for video data;
obtaining each video data frame in the video data;
binding each video data frame with the spatial axis to determine spatial video data frames;
generating a live video data stream using the spatial video data frames.
2. The method according to claim 1, wherein binding each video data frame with the spatial axis to determine spatial video data frames comprises:
analyzing each video data frame to determine texture information of the video data frame;
binding the video data frame with the spatial axis according to the texture information to determine spatial video data frames.
3. The method according to claim 1, further comprising:
determining adjustment information according to an adjustment instruction, and adjusting the live video data stream according to the adjustment information.
4. The method according to claim 3, wherein determining adjustment information according to an adjustment instruction and adjusting the live video data stream according to the adjustment information comprises:
converting the adjustment instruction into adjustment information corresponding to the spatial axis;
adjusting the spatial video data frames of the live video data stream according to the adjustment information to generate adjusted live video data.
5. A video data processing device, comprising:
a spatial axis establishing module, configured to pre-establish a spatial axis, wherein the spatial axis is used to configure spatial features for video data;
a data frame obtaining module, configured to obtain each video data frame in the video data;
a binding processing module, configured to bind each video data frame with the spatial axis to determine spatial video data frames;
a data stream generating module, configured to generate a live video data stream using the spatial video data frames.
6. The device according to claim 5, wherein the binding processing module comprises:
a data frame analyzing submodule, configured to analyze each video data frame to determine texture information of the video data frame;
a data frame determining submodule, configured to bind the video data frame with the spatial axis according to the texture information to determine spatial video data frames.
7. The device according to claim 5, further comprising:
an adjusting module, configured to determine adjustment information according to an adjustment instruction, and to adjust the live video data stream according to the adjustment information.
8. The device according to claim 7, wherein the adjusting module comprises:
an information converting submodule, configured to convert the adjustment instruction into adjustment information corresponding to the spatial axis;
a data generating submodule, configured to adjust the spatial video data frames of the live video data stream according to the adjustment information to generate adjusted live video data.
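The claims above do not fix a concrete representation for the spatial axis, the binding operation, or the adjustment information. As an illustration only, the following Python sketch assumes the spatial axis is a yaw/pitch coordinate frame (as is common for panoramic video, one of the cited applications' domains) and that "binding" means tagging each decoded frame with coordinates on that axis. All names (`SpatialAxis`, `SpatialVideoFrame`, `bind_frames`, `apply_adjustment`) are hypothetical and not taken from the patent.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class SpatialAxis:
    """Pre-established spatial axis (claim 1). Assumed here to be a
    yaw/pitch coordinate frame; the patent does not specify its form."""
    yaw_range: tuple = (-180.0, 180.0)
    pitch_range: tuple = (-90.0, 90.0)


@dataclass
class SpatialVideoFrame:
    """A video data frame bound to the spatial axis (a "space video frame")."""
    pixels: bytes
    texture_info: str   # stand-in for the texture analysis of claim 2
    yaw: float = 0.0
    pitch: float = 0.0


def bind_frames(frames: List[bytes], axis: SpatialAxis) -> List[SpatialVideoFrame]:
    """Claims 1-2: analyze each frame's texture and bind the frame
    to the spatial axis, producing spatial video data frames."""
    bound = []
    for pixels in frames:
        # Placeholder "texture information"; a real implementation
        # would run actual texture analysis on the decoded frame.
        texture = f"{len(pixels)} bytes"
        bound.append(SpatialVideoFrame(pixels=pixels, texture_info=texture))
    return bound


def apply_adjustment(stream: List[SpatialVideoFrame],
                     instruction: Dict[str, float]) -> List[SpatialVideoFrame]:
    """Claims 3-4: convert an adjustment instruction into adjustment
    information expressed on the spatial axis, then adjust every
    spatial video frame of the live stream accordingly."""
    d_yaw = float(instruction.get("yaw", 0.0))
    d_pitch = float(instruction.get("pitch", 0.0))
    for frame in stream:
        frame.yaw += d_yaw
        frame.pitch += d_pitch
    return stream
```

A caller would bind decoded frames once (`bind_frames(raw_frames, SpatialAxis())`) and then, whenever a viewer issues an adjustment instruction, call `apply_adjustment` on the live stream; the generated stream corresponds to the "adjusted live video data" of claims 4 and 8.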
CN201610312118.7A 2016-05-11 2016-05-11 Video data processing method and device Pending CN105828104A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610312118.7A CN105828104A (en) 2016-05-11 2016-05-11 Video data processing method and device


Publications (1)

Publication Number Publication Date
CN105828104A true CN105828104A (en) 2016-08-03

Family

ID=56528586

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610312118.7A Pending CN105828104A (en) 2016-05-11 2016-05-11 Video data processing method and device

Country Status (1)

Country Link
CN (1) CN105828104A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111031398A (en) * 2019-12-10 2020-04-17 维沃移动通信有限公司 Video control method and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2560388A1 (en) * 2010-04-12 2013-02-20 Panasonic Corporation Spatial prediction method, image decoding method, and image encoding method
CN102945563A (en) * 2012-09-26 2013-02-27 天津游奕科技有限公司 Showing and interacting system and method for panoramic videos
CN103456035A (en) * 2013-09-06 2013-12-18 福建星网视易信息系统有限公司 Device and method for achieving video three-dimensional display
CN104219584A (en) * 2014-09-25 2014-12-17 广州市联文信息科技有限公司 Reality augmenting based panoramic video interaction method and system



Similar Documents

Publication Publication Date Title
US10659685B2 (en) Control of viewing angles for 360-degree video playback
US11521347B2 (en) Method, apparatus, medium, and device for generating multi-angle free-respective image data
US10595004B2 (en) Electronic device for generating 360-degree three-dimensional image and method therefor
KR102208773B1 (en) Panoramic image compression method and apparatus
DE112016004216T5 (en) General Spherical Observation Techniques
CN110956583B (en) Spherical image processing method and device and server
TWI637355B (en) Methods of compressing a texture image and image data processing system and methods of generating a 360-degree panoramic video thereof
WO2018108104A1 (en) Method and device for transmitting panoramic videos, terminal, server and system
WO2017088491A1 (en) Video playing method and device
CN110728755B (en) Method and system for roaming among scenes, model topology creation and scene switching
JP2017532847A (en) 3D recording and playback
WO2017128887A1 (en) Method and system for corrected 3d display of panoramic image and device
TW201803358A (en) Method, apparatus and stream of formatting an immersive video for legacy and immersive rendering devices
CN114998559A (en) Real-time remote rendering method for mixed reality binocular stereoscopic vision image
CN108769648A (en) A kind of 3D scene rendering methods based on 720 degree of panorama VR
CN114926612A (en) Aerial panoramic image processing and immersive display system
CN105828104A (en) Video data processing method and device
CN108022204A (en) A kind of method that cylinder panorama video is converted to spherical panoramic video
Dunn et al. Resolution-defined projections for virtual reality video compression
CN112130667A (en) Interaction method and system for ultra-high definition VR (virtual reality) video
CN103336678B (en) A kind of resource exhibition method, device and terminal
Jin et al. An efficient spherical video sampling scheme based on Cube model
WO2022191070A1 (en) 3d object streaming method, device, and program
CN107040792A (en) Panoramic video player method, panoramic video playing device and player
Ramachandrappa Panoramic 360◦ videos in virtual reality using two lenses and a mobile phone

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20160803