CN104185008A - Method and device for generating 3D media data - Google Patents

Method and device for generating 3D media data

Info

Publication number
CN104185008A
CN104185008A · Application CN201410350305.5A · Granted as CN104185008B
Authority
CN
China
Prior art keywords
media data
data
information
model
initial media
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410350305.5A
Other languages
Chinese (zh)
Other versions
CN104185008B (en)
Inventor
李渊
王文杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Tong view Thai Digital Technology Co., Ltd.
Original Assignee
Shanghai Synacast Media Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Synacast Media Technology Co Ltd filed Critical Shanghai Synacast Media Technology Co Ltd
Priority to CN201410350305.5A priority Critical patent/CN104185008B/en
Publication of CN104185008A publication Critical patent/CN104185008A/en
Application granted granted Critical
Publication of CN104185008B publication Critical patent/CN104185008B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention aims to provide a method and device for generating 3D media data. The method comprises the following steps: determining the content type of initial media data; determining, according to the content type, a 3D scene model corresponding to the initial media data; and generating, according to the image data corresponding to the initial media data and the 3D scene model, 3D media data corresponding to the initial media data, for playback.

Description

Method and apparatus for generating 3D media data
Technical field
The present invention relates to the field of computer technology, and in particular to a method and apparatus for generating 3D media data.
Background technology
In the prior art, 3D video can generally be obtained only from a 3D data source, yet most mainstream video content is still 2D. Because the volume of video data is large and user demand is varied and unpredictable, converting every video to 3D is impractical. This is especially true for live network broadcasting, which must serve users with many different requirements: uniformly processing a single video cannot satisfy such diverse demands.
Summary of the invention
The object of the present invention is to provide a method and apparatus for generating 3D media data.
According to one aspect of the present invention, a method for generating 3D media data is provided, the method comprising the following steps:
a. determining the content type of initial media data;
b. determining, according to the content type, a 3D scene model corresponding to the initial media data;
c. generating, according to the image data corresponding to the initial media data and the 3D scene model, 3D media data corresponding to the initial media data, for playback.
According to another aspect of the present invention, a playback apparatus for generating 3D media data is also provided, the apparatus comprising:
a content determining device, for determining the content type of the initial media data;
a model determining device, for determining, according to the content type, the 3D scene model corresponding to the initial media data;
a generating device, for generating, according to the image data corresponding to the initial media data and the 3D scene model, the 3D media data corresponding to the initial media data, for playback.
Compared with the prior art, the present invention has the following advantages: the corresponding 3D scene model is determined according to the content type of the media data, and the corresponding 3D media data is generated based on that model, which improves the efficiency of generating 3D media data; moreover, the motion-related information of the media data can be combined with the determined 3D scene model to generate and play the corresponding 3D media data, which further improves the accuracy of the generated 3D media data.
Brief description of the drawings
Other features, objects and advantages of the present invention will become more apparent by reading the following detailed description of non-limiting embodiments, made with reference to the accompanying drawings:
Fig. 1 illustrates a flow chart of a method for generating 3D media data according to the present invention;
Fig. 2 illustrates a structural diagram of a playback apparatus for generating 3D media data according to the present invention.
In the drawings, the same or similar reference numerals denote the same or similar components.
Embodiment
The present invention is described in further detail below with reference to the accompanying drawings.
Fig. 1 illustrates a flow chart of a method for generating 3D media data according to the present invention. The method according to the present invention comprises step S1, step S2 and step S3.
The 3D media data includes, but is not limited to, any of the following:
1) a left-eye/right-eye image pair with parallax;
2) a binocular stereoscopic video.
The method according to the present invention is implemented by a playback apparatus contained in a computer device. The computer device is an electronic device capable of automatically performing numerical computation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like. The computer device includes a network device and/or a user device. The network device includes, but is not limited to, a single network server, a server group composed of multiple network servers, or a cloud composed of a large number of hosts or network servers based on cloud computing, where cloud computing is a form of distributed computing: a virtual supercomputer composed of a group of loosely coupled computers. The user device includes, but is not limited to, any electronic product capable of human-machine interaction with a user via a keyboard, mouse, remote control, touch pad or voice control device, such as a personal computer, tablet, smartphone, PDA, game console or IPTV. The network in which the user device and the network device reside includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN, and the like.
Preferably, the playback apparatus is contained in the user device.
It should be noted that the above user devices, network devices and networks are merely examples; other user devices, network devices and networks, whether existing or arising in the future, should also be included within the scope of protection of the present invention insofar as they are applicable to it, and are incorporated herein by reference.
Referring to Fig. 1, in step S1 the playback apparatus determines the content type of the initial media data.
The initial media data includes video data, for example a segment of a live television program or of a film.
The initial media data may correspond to different content types. For example, a television program video may be classified into content types such as "news", "sports" or "variety".
Preferably, the content type is determined based on scene information of the content played in the initial media data. For example, initial media data corresponding to sports events may be classified into football match, baseball match, tennis match and other types; initial media data corresponding to variety shows may be classified into talk-show, talent-show and other types.
The manner in which the playback apparatus determines the content type of the initial media data includes, but is not limited to, any of the following:
1) directly obtaining predetermined content-type information of the initial media data;
2) matching related information of the initial media data against predetermined content types to determine the corresponding content type. For example, when the initial media data is a live television program video, the title of the program is matched against the predetermined content types to obtain the content type of the video.
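The title-matching manner above can be sketched as a simple keyword lookup. The keyword table, titles and the fallback type below are illustrative assumptions, not taken from the patent:

```python
# A minimal sketch of step S1, option 2: matching a program title against
# predetermined content types. The keyword table is a hypothetical example.

CONTENT_TYPE_KEYWORDS = {
    "baseball game": ["baseball", "MLB"],
    "football match": ["football", "soccer", "league"],
    "news": ["news", "report"],
}

def determine_content_type(title: str, default: str = "unknown") -> str:
    """Return the first predetermined content type whose keywords appear in the title."""
    lowered = title.lower()
    for content_type, keywords in CONTENT_TYPE_KEYWORDS.items():
        if any(k.lower() in lowered for k in keywords):
            return content_type
    return default

print(determine_content_type("Evening Baseball Live"))  # baseball game
```

A production system would likely match richer metadata (program profile, EPG category) rather than the title alone, as the patent's first option suggests.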
According to a first example of the present invention, the initial media data is a one-minute live video stream_1; the playback apparatus obtains the profile of this video and determines that its content type is "baseball game".
It should be noted that the above examples serve merely to better illustrate the technical solution of the present invention and do not limit it; those skilled in the art should understand that any implementation for determining the content type of the initial media data falls within the scope of the present invention.
Then, in step S2, the playback apparatus determines, according to the content type, the 3D scene model corresponding to the initial media data.
Specifically, the playback apparatus queries for and obtains, according to the content type, at least one 3D scene model corresponding to that content type, and selects from among them the 3D scene model corresponding to the initial media data.
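The query-and-select step can be sketched as a lookup in a model registry. The registry contents and the selection rule (newest model wins) are illustrative assumptions:

```python
# A minimal sketch of step S2: querying candidate 3D scene models by content
# type and selecting one for the initial media data. The database and the
# "pick the most recently trained model" rule are hypothetical.

MODEL_DB = {
    "baseball game": [("model_1", 2014), ("model_0", 2013)],
    "football match": [("model_f", 2014)],
}

def select_scene_model(content_type: str):
    """Return the name of a 3D scene model for the content type, or None."""
    candidates = MODEL_DB.get(content_type, [])
    if not candidates:
        return None
    # choose the newest candidate among the models for this content type
    return max(candidates, key=lambda m: m[1])[0]

print(select_scene_model("baseball game"))  # model_1
```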
The 3D scene model includes a model for predicting the depth information corresponding to the image data of the initial media data.
The 3D scene model may be obtained by running a machine-learning process over multiple media data. For example, a 3D scene model corresponding to the content type "football match" may be built by collecting the image data of "football match" videos together with their determined depth information and performing a corresponding machine-learning process, so that, given media data information, the model can output what is needed to generate the corresponding 3D media data.
Continuing the first example, the playback apparatus queries a 3D scene model database and obtains the 3D scene model model_1 corresponding to the content type "baseball game" of the initial media data stream_1.
It should be noted that the above examples serve merely to better illustrate the technical solution of the present invention and do not limit it; any implementation for determining the 3D scene model corresponding to the initial media data according to its content type falls within the scope of the present invention.
Then, in step S3, the playback apparatus generates, according to the image data corresponding to the initial media data and the 3D scene model, the 3D media data corresponding to the initial media data, for playback.
Preferably, step S3 further comprises step S301 (not shown) and step S302 (not shown).
In step S301, the playback apparatus obtains corresponding motion-related information according to the image data corresponding to the initial media data.
The image data includes, but is not limited to, any of the following:
1) each frame of the initial media data;
2) one or more images obtained by processing the frames of the initial media data, for example by automatically performing block matching between adjacent frames and treating one or more matched frames with similar pictures as a single item of image data.
Preferably, the motion-related information includes, but is not limited to, at least any of the following:
1) scene motion information, where the scene comprises one or more identifiable segmented blocks in the image data; for example, by comparing multiple images and the changes of their segmented blocks, the motion information of each block is obtained;
2) object motion information corresponding to at least one object in the image data; for example, by recognizing one or more objects in the image data and comparing their positions across multiple images, the motion information of each object is determined.
Continuing the first example, the playback apparatus extracts the frames of this video as image data and, based on the picture in each frame, divides each frame into several segments; by comparing the positional changes of each segment across the frames, it separates the static regions from the moving regions and determines the motion-related information of the moving regions.
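The block comparison in step S301 can be sketched as a per-block frame difference. Frames are plain 2D lists of grey levels here, and the block size and motion threshold are illustrative assumptions:

```python
# A minimal sketch of step S301: split each frame into fixed-size blocks and
# compare blocks across consecutive frames to tell static regions from
# moving regions. Real systems would use block matching or optical flow.

def block_motion(prev, curr, block=2, threshold=10.0):
    """Return {(row, col): mean_abs_diff} per block; values above
    `threshold` mark a moving block."""
    h, w = len(curr), len(curr[0])
    motion = {}
    for by in range(0, h, block):
        for bx in range(0, w, block):
            diffs = [abs(curr[y][x] - prev[y][x])
                     for y in range(by, min(by + block, h))
                     for x in range(bx, min(bx + block, w))]
            motion[(by // block, bx // block)] = sum(diffs) / len(diffs)
    return motion

prev = [[0, 0, 0, 0]] * 4
curr = [[0, 0, 50, 50],
        [0, 0, 50, 50],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
m = block_motion(prev, curr)
moving = {b for b, d in m.items() if d > 10.0}
print(moving)  # {(0, 1)}: only the top-right block moved
```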
Then, in step S302, the playback apparatus generates, according to the motion-related information and the 3D scene model, the 3D media data corresponding to the initial media data, for playback.
Preferably, step S302 further comprises step S3021 (not shown) and step S3022 (not shown).
In step S3021, the playback apparatus obtains, according to the motion-related information and the 3D scene model, the depth information corresponding to the image data.
Preferably, for each image, the playback apparatus processes the motion-related information of the image data using the 3D scene model to obtain the depth information corresponding to that image data.
The playback apparatus may use the 3D scene model with any of various techniques, such as depth-from-motion (DFM) estimation based on motion features, to obtain the depth information corresponding to the input image data from that image data and its motion-related information.
It should be noted that those skilled in the art may, according to actual conditions and needs, select other suitable methods for obtaining the depth information, not limited to those mentioned in this specification.
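The depth-from-motion idea in step S3021 can be sketched with the motion-parallax rule: under a translating camera, closer regions show larger apparent motion, so relative depth can be taken as inversely proportional to motion magnitude. The scale constant and the per-block motion values are illustrative assumptions; the patent's learned 3D scene model would refine or replace this simple rule:

```python
# A minimal depth-from-motion sketch: map each block's motion magnitude
# (pixels/frame) to a relative depth, larger motion meaning nearer.

def depth_from_motion(block_motion, scale=100.0, eps=1e-3):
    """Return {block: relative_depth}, depth inversely proportional to motion."""
    return {blk: scale / (mag + eps) for blk, mag in block_motion.items()}

motion = {(0, 0): 0.5, (0, 1): 5.0}   # slow background block vs fast foreground block
depth = depth_from_motion(motion)
assert depth[(0, 0)] > depth[(0, 1)]  # the slower block is estimated farther away
```

This rule only holds under camera translation with roughly static scene structure, which is why the patent layers a content-type-specific scene model on top of it.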
Then, in step S3022, the playback apparatus generates, according to the obtained depth information, 3D media data that comprises the image data with the depth information and corresponds to the initial media data.
Specifically, the playback apparatus either directly takes the image data with the depth information as the 3D media data, or synchronizes the image data with the depth information with the audio data of the initial media data to generate the 3D media data.
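One way to turn an image plus depth into the left-eye/right-eye pair with parallax mentioned earlier is a simple depth-image-based rendering shift, sketched below for a single scanline. The baseline constant and the hole-filling choice (holes stay 0) are illustrative assumptions, not the patent's method:

```python
# A minimal sketch of producing a stereo pair from image + depth: shift each
# pixel left/right by half its disparity, where disparity is inversely
# proportional to depth. Real renderers also fill occlusion holes.

def render_stereo_row(row, depth_row, baseline=8.0):
    """Return (left, right) scanlines; unfilled positions keep 0."""
    w = len(row)
    left, right = [0] * w, [0] * w
    for x, (pixel, depth) in enumerate(zip(row, depth_row)):
        disparity = int(baseline / depth)
        xl, xr = x + disparity // 2, x - disparity // 2
        if 0 <= xl < w:
            left[xl] = pixel
        if 0 <= xr < w:
            right[xr] = pixel
    return left, right

row = [10, 20, 30, 40]
depth_row = [4.0, 4.0, 4.0, 4.0]     # uniform depth -> uniform 1-pixel shift
left, right = render_stereo_row(row, depth_row)
print(left, right)                    # [0, 10, 20, 30] [20, 30, 40, 0]
```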
Continuing the first example, in step S3021 the playback apparatus feeds this image data into the 3D scene model model_1 and obtains the depth information corresponding to static regions such as the sky and the ground in each image. It also uses model_1 with the DFM technique and the motion-related information of the moving regions in the image data to obtain the depth information corresponding to moving regions such as the baseball players and the baseball in each image. Then, in step S3022, it generates, according to the obtained depth information, 3D media data that comprises the image data with the depth information and corresponds to this video.
It should be noted that the above examples serve merely to better illustrate the technical solution of the present invention and do not limit it; any implementation that generates the 3D media data corresponding to the initial media data according to the motion-related information and the 3D scene model, for playback, falls within the scope of the present invention.
Preferably, the method further comprises step S4 (not shown) and step S5 (not shown).
In step S4, when live media data is being played, the playback apparatus takes the portion of the live media data within a specified past time period as the initial media data.
After steps S1 to S3 have been performed, in step S5 the playback apparatus plays the 3D media data corresponding to the initial media data simultaneously with the live media data.
For example, while playing live media data, in step S4 the playback apparatus takes the past 5 minutes of the live media data as the initial media data. It then performs steps S1 to S3 to generate the corresponding 3D media data. Then, in step S5, it plays the generated 3D media data and the live media data simultaneously.
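The sliding window over the live stream in step S4 can be sketched with a timestamped buffer. The deque-based buffer and chunk naming are illustrative assumptions; the 5-minute window follows the example above:

```python
# A minimal sketch of step S4: keep only the most recent window of live
# media data as the initial media data for 3D conversion.

from collections import deque

class LiveWindow:
    def __init__(self, window_seconds=300):
        self.window = window_seconds
        self.buffer = deque()          # (timestamp, chunk) pairs

    def push(self, timestamp, chunk):
        self.buffer.append((timestamp, chunk))
        while self.buffer and timestamp - self.buffer[0][0] > self.window:
            self.buffer.popleft()      # drop chunks older than the window

    def initial_media_data(self):
        return [chunk for _, chunk in self.buffer]

w = LiveWindow(window_seconds=300)
for t in range(0, 601, 60):           # ten minutes of one-chunk-per-minute data
    w.push(t, f"chunk@{t}")
print(len(w.initial_media_data()))    # 6: only the last 5 minutes remain
```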
According to the method of the present invention, the corresponding 3D scene model is determined according to the content type of the media data, and the corresponding 3D media data is generated based on that model, which improves the efficiency of generating 3D media data; moreover, the motion-related information of the media data can be combined with the determined 3D scene model to generate and play the corresponding 3D media data, which further improves the accuracy of the generated 3D media data.
Fig. 2 illustrates a structural diagram of a playback apparatus for generating 3D media data according to the present invention. The playback apparatus according to the present invention comprises a content determining device 1, a model determining device 2 and a generating device 3.
Referring to Fig. 2, the content determining device 1 determines the content type of the initial media data.
The initial media data includes video data, for example a segment of a live television program or of a film.
The initial media data may correspond to different content types. For example, a television program video may be classified into content types such as "news", "sports" or "variety".
Preferably, the content type is determined based on scene information of the content played in the initial media data. For example, initial media data corresponding to sports events may be classified into football match, baseball match, tennis match and other types; initial media data corresponding to variety shows may be classified into talk-show, talent-show and other types.
The manner in which the content determining device 1 determines the content type of the initial media data includes, but is not limited to, any of the following:
1) directly obtaining predetermined content-type information of the initial media data;
2) matching related information of the initial media data against predetermined content types to determine the corresponding content type. For example, when the initial media data is a live television program video, the title of the program is matched against the predetermined content types to obtain the content type of the video.
According to the first example of the present invention, the initial media data is a one-minute live video stream_1; the content determining device 1 obtains the profile of this video and determines that its content type is "baseball game".
It should be noted that the above examples serve merely to better illustrate the technical solution of the present invention and do not limit it; those skilled in the art should understand that any implementation for determining the content type of the initial media data falls within the scope of the present invention.
Then, the model determining device 2 determines, according to the content type, the 3D scene model corresponding to the initial media data.
Specifically, the model determining device 2 queries for and obtains, according to the content type, at least one 3D scene model corresponding to that content type, and selects from among them the 3D scene model corresponding to the initial media data.
The 3D scene model includes a model for predicting the depth information corresponding to the image data of the initial media data.
The 3D scene model may be obtained by running a machine-learning process over multiple media data. For example, a 3D scene model corresponding to the content type "football match" may be built by collecting the image data of "football match" videos together with their determined depth information and performing a corresponding machine-learning process, so that, given media data information, the model can output what is needed to generate the corresponding 3D media data.
Continuing the first example, the model determining device 2 queries a 3D scene model database and obtains the 3D scene model model_1 corresponding to the content type "baseball game" of the initial media data stream_1.
It should be noted that the above examples serve merely to better illustrate the technical solution of the present invention and do not limit it; any implementation for determining the 3D scene model corresponding to the initial media data according to its content type falls within the scope of the present invention.
Then, the generating device 3 generates, according to the image data corresponding to the initial media data and the 3D scene model, the 3D media data corresponding to the initial media data, for playback.
Preferably, the generating device 3 further comprises a motion acquisition device (not shown) and a stereo generating device (not shown).
The motion acquisition device obtains corresponding motion-related information according to the image data corresponding to the initial media data.
The image data includes, but is not limited to, any of the following:
1) each frame of the initial media data;
2) one or more images obtained by processing the frames of the initial media data, for example by automatically performing block matching between adjacent frames and treating one or more matched frames with similar pictures as a single item of image data.
Preferably, the motion-related information includes, but is not limited to, at least any of the following:
1) scene motion information, where the scene comprises one or more identifiable segmented blocks in the image data; for example, by comparing multiple images and the changes of their segmented blocks, the motion information of each block is obtained;
2) object motion information corresponding to at least one object in the image data; for example, by recognizing one or more objects in the image data and comparing their positions across multiple images, the motion information of each object is determined.
Continuing the first example, the playback apparatus extracts the frames of this video as image data and, based on the picture in each frame, divides each frame into several segments; the motion acquisition device then compares the positional changes of each segment across the frames, separates the static regions from the moving regions, and determines the motion-related information of the moving regions.
Then, the stereo generating device generates, according to the motion-related information and the 3D scene model, the 3D media data corresponding to the initial media data, for playback.
Preferably, the stereo generating device further comprises a depth acquisition device (not shown) and a sub-generating device (not shown).
The depth acquisition device obtains, according to the motion-related information and the 3D scene model, the depth information corresponding to the image data.
Preferably, for each image, the depth acquisition device processes the motion-related information of the image data using the 3D scene model to obtain the depth information corresponding to that image data.
The depth acquisition device may use the 3D scene model with any of various techniques, such as depth-from-motion (DFM) estimation based on motion features, to obtain the depth information corresponding to the input image data from that image data and its motion-related information.
It should be noted that those skilled in the art may, according to actual conditions and needs, select other suitable methods for obtaining the depth information, not limited to those mentioned in this specification.
Then, the sub-generating device generates, according to the obtained depth information, 3D media data that comprises the image data with the depth information and corresponds to the initial media data.
Specifically, the sub-generating device either directly takes the image data with the depth information as the 3D media data, or synchronizes the image data with the depth information with the audio data of the initial media data to generate the 3D media data.
Continuing the first example, the depth acquisition device feeds this image data into the 3D scene model model_1 and obtains the depth information corresponding to static regions such as the sky and the ground in each image. It also uses model_1 with the DFM technique and the motion-related information of the moving regions in the image data to obtain the depth information corresponding to moving regions such as the baseball players and the baseball in each image. Then, the sub-generating device generates, according to the obtained depth information, 3D media data that comprises the image data with the depth information and corresponds to this video.
It should be noted that the above examples serve merely to better illustrate the technical solution of the present invention and do not limit it; any implementation that generates the 3D media data corresponding to the initial media data according to the motion-related information and the 3D scene model, for playback, falls within the scope of the present invention.
Preferably, the playback apparatus further comprises a data acquisition device (not shown) and a simultaneous playback device (not shown).
When live media data is being played, the data acquisition device takes the portion of the live media data within a specified past time period as the initial media data.
After the playback apparatus has completed the operations from determining the content type of the initial media data through generating, according to the image data corresponding to the initial media data and the 3D scene model, the 3D media data corresponding to the initial media data, the simultaneous playback device plays that 3D media data simultaneously with the live media data.
For example, while playing live media data, the data acquisition device takes the past 5 minutes of the live media data as the initial media data. The playback apparatus then performs the operations from determining the content type through generating the corresponding 3D media data. The simultaneous playback device then plays the generated 3D media data and the live media data simultaneously.
According to the solution of the present invention, the corresponding 3D scene model is determined according to the content type of the media data, and the corresponding 3D media data is generated based on that model, which improves the efficiency of generating 3D media data; moreover, the motion-related information of the media data can be combined with the determined 3D scene model to generate and play the corresponding 3D media data, which further improves the accuracy of the generated 3D media data.
The software program of the present invention may be executed by a processor to realize the steps or functions described above. Similarly, the software program (including related data structures) may be stored in a computer-readable recording medium, for example RAM, a magnetic or optical drive, a floppy disk or similar devices. In addition, some steps or functions of the present invention may be implemented in hardware, for example as circuits that cooperate with a processor to perform the respective functions or steps.
In addition, part of the present invention may be embodied as a computer program product, for example computer program instructions which, when executed by a computer, can invoke or provide the method and/or technical solution according to the present invention through the operation of that computer. The program instructions invoking the method of the present invention may be stored in fixed or removable recording media, transmitted via a data stream in broadcast or other signal-bearing media, and/or stored in the working memory of a computer device operating according to those instructions. One embodiment of the present invention accordingly comprises an apparatus including a memory for storing computer program instructions and a processor for executing them, wherein the instructions, when executed by the processor, trigger the apparatus to perform the methods and/or technical solutions of the foregoing embodiments.
It will be evident to those skilled in the art that the invention is not limited to the details of the above exemplary embodiments and may be embodied in other specific forms without departing from its spirit or essential characteristics. The embodiments should therefore be regarded in all respects as illustrative and not restrictive; the scope of the invention is defined by the appended claims rather than by the foregoing description, and all changes falling within the meaning and range of equivalency of the claims are intended to be embraced therein. No reference numeral in a claim should be construed as limiting that claim. Moreover, the word "comprising" does not exclude other units or steps, and the singular does not exclude the plural. Multiple units or devices recited in a system claim may also be implemented by a single unit or device through software or hardware. Words such as "first" and "second" denote names and do not denote any particular order.

Claims (16)

1. A method for generating 3D media data, wherein the method comprises the following steps:
a. determining a content type of initial media data;
b. determining, according to the content type, a 3D scene model corresponding to the initial media data;
c. generating, according to image data corresponding to the initial media data and the 3D scene model, 3D media data corresponding to the initial media data, so as to play the 3D media data.
2. The method according to claim 1, wherein step c comprises the following steps:
c1. obtaining corresponding motion-related information according to the image data corresponding to the initial media data;
c2. generating, according to the motion-related information and the 3D scene model, the 3D media data corresponding to the initial media data, so as to play the 3D media data.
3. The method according to claim 2, wherein the motion-related information comprises at least any one of the following:
- scene motion information;
- object motion information corresponding to at least one object in the image data.
4. The method according to claim 2 or 3, wherein step c2 comprises the following steps:
c21. obtaining, according to the motion-related information and the 3D scene model, depth information corresponding to the image data;
c22. generating, according to the obtained depth information, 3D media data that corresponds to the initial media data and comprises the image data with the depth information.
5. The method according to claim 4, wherein step c22 comprises the following step:
- synchronizing the image data having the depth information with the audio data of the initial media data, so as to generate the 3D media data.
6. The method according to any one of claims 1 to 5, wherein the method further comprises the following step before step a:
- when live media data is being played, taking a part of the live media data within a predetermined historical time period as the initial media data;
wherein the method further comprises the following step:
- playing the 3D media data corresponding to the initial media data and the live media data simultaneously.
7. The method according to any one of claims 1 to 6, wherein the 3D media data comprises any of the following:
- a left-eye/right-eye image pair with parallax;
- a binocular stereoscopic video.
8. The method according to any one of claims 1 to 7, wherein the method is performed by a user equipment.
9. A playing device for generating 3D media data, wherein the playing device comprises:
a content determining device for determining a content type of initial media data;
a model determining device for determining, according to the content type, a 3D scene model corresponding to the initial media data;
a generating device for generating, according to image data corresponding to the initial media data and the 3D scene model, 3D media data corresponding to the initial media data, so as to play the 3D media data.
10. The playing device according to claim 9, wherein the generating device comprises:
a motion obtaining device for obtaining corresponding motion-related information according to the image data corresponding to the initial media data;
a stereo generating device for generating, according to the motion-related information and the 3D scene model, the 3D media data corresponding to the initial media data, so as to play the 3D media data.
11. The playing device according to claim 10, wherein the motion-related information comprises at least any one of the following:
- scene motion information;
- object motion information corresponding to at least one object in the image data.
12. The playing device according to claim 10 or 11, wherein the stereo generating device comprises:
a depth obtaining device for obtaining, according to the motion-related information and the 3D scene model, depth information corresponding to the image data;
a sub-generating device for generating, according to the obtained depth information, 3D media data that corresponds to the initial media data and comprises the image data with the depth information.
13. The playing device according to claim 12, wherein the sub-generating device is further configured to:
synchronize the image data having the depth information with the audio data of the initial media data, so as to generate the 3D media data.
14. The playing device according to any one of claims 9 to 13, wherein the playing device further comprises:
a data obtaining device for, when live media data is being played, taking a part of the live media data within a predetermined historical time period as the initial media data;
wherein the playing device further comprises:
a simultaneous playing device for playing the 3D media data corresponding to the initial media data and the live media data simultaneously.
15. The playing device according to any one of claims 9 to 14, wherein the 3D media data comprises any of the following:
- a left-eye/right-eye image pair with parallax;
- a binocular stereoscopic video.
16. The playing device according to any one of claims 9 to 15, wherein the playing device is contained in a user equipment.
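For illustration only (not part of the claims), the method of claims 1 to 5 can be read as a simple processing pipeline: look up a prebuilt 3D scene model by content type, estimate motion from the image data, derive per-frame depth from motion and scene model, and emit depth-augmented frames synchronized with the original audio. The sketch below follows that reading; every name, type, and the scene-model table are hypothetical stand-ins, and the depth step is a trivial placeholder rather than the patent's actual algorithm:

```python
from dataclasses import dataclass

# Hypothetical lookup table (step b): each content type maps to a prebuilt
# 3D scene model with a characteristic depth layout.
SCENE_MODELS = {
    "sports_field": {"background_depth": 50.0, "ground_plane": True},
    "indoor_studio": {"background_depth": 8.0, "ground_plane": False},
}

@dataclass
class Frame:
    image: list         # 2D image data from the initial media data
    depth: list = None  # per-pixel depth, filled in by assign_depth

def determine_content_type(media):        # step a
    return media.get("type", "indoor_studio")

def determine_scene_model(content_type):  # step b
    return SCENE_MODELS[content_type]

def estimate_motion(frames):              # step c1: motion-related info
    # Placeholder: a real system would use optical flow or block matching
    # to recover scene motion and per-object motion.
    return [{"scene_shift": 0.0, "objects": []} for _ in frames]

def assign_depth(frames, motion, model):  # step c21
    # Simplest placeholder: every pixel takes the model's background depth.
    # A real implementation would refine per-object depth using the motion
    # information (e.g. treating faster-moving objects as nearer).
    for frame, _m in zip(frames, motion):
        frame.depth = [model["background_depth"]] * len(frame.image)
    return frames

def generate_3d(media):                   # steps c / c2 / c22 combined
    model = determine_scene_model(determine_content_type(media))
    frames = [Frame(image=img) for img in media["frames"]]
    frames = assign_depth(frames, estimate_motion(frames), model)
    # Step c22 + claim 5: keep the audio alongside the depth-augmented
    # frames so playback stays synchronized.
    return {"frames": frames, "audio": media.get("audio")}
```

From the depth-augmented frames, a renderer would then shift pixels left/right in proportion to depth to obtain the parallax image pair of claim 7; that rendering step is omitted here.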
CN201410350305.5A 2014-07-22 2014-07-22 Method and apparatus for generating 3D media data Expired - Fee Related CN104185008B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410350305.5A CN104185008B (en) 2014-07-22 2014-07-22 Method and apparatus for generating 3D media data

Publications (2)

Publication Number Publication Date
CN104185008A true CN104185008A (en) 2014-12-03
CN104185008B CN104185008B (en) 2017-07-25

Family

ID=51965704

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410350305.5A Expired - Fee Related CN104185008B (en) 2014-07-22 2014-07-22 A kind of method and apparatus of generation 3D media datas

Country Status (1)

Country Link
CN (1) CN104185008B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11062183B2 (en) 2019-05-21 2021-07-13 Wipro Limited System and method for automated 3D training content generation
CN115525181A (en) * 2022-11-28 2022-12-27 深圳飞蝶虚拟现实科技有限公司 Method and device for manufacturing 3D content, electronic device and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130070050A1 (en) * 2011-09-15 2013-03-21 Broadcom Corporation System and method for converting two dimensional to three dimensional video
CN103002297A (en) * 2011-09-16 2013-03-27 联咏科技股份有限公司 Method and device for generating dynamic depth values
EP2733670A1 (en) * 2011-09-08 2014-05-21 Samsung Electronics Co., Ltd Apparatus and method for generating depth information

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20180425

Address after: 201203 China (Shanghai) free trade pilot area 501-2, room 5, 5 Po Bo Road.

Patentee after: Shanghai Tong view Thai Digital Technology Co., Ltd.

Address before: 201204 Room 102, 4 Lane 299, Bi Sheng Road, Zhangjiang hi tech park, Pudong New Area, Shanghai.

Patentee before: Shanghai Synacast Media Tech. Co., Ltd.

TR01 Transfer of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170725

Termination date: 20200722

CF01 Termination of patent right due to non-payment of annual fee