CN109195020A - AR-enhanced game live-streaming method and system - Google Patents
AR-enhanced game live-streaming method and system
- Publication number
- CN109195020A (application number CN201811181572.9A)
- Authority
- CN
- China
- Prior art keywords
- data
- module
- game
- audio
- client
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/478—Supplemental services, e.g. displaying phone caller identification, shopping application
- H04N21/4781—Games
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/21—Server components or server architectures
- H04N21/218—Source of audio or video content, e.g. local disk arrays
- H04N21/2187—Live feed
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
- H04N21/2343—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
- H04N21/234309—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4 or from Quicktime to Realvideo
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/262—Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/27—Server based end-user applications
- H04N21/274—Storing end-user multimedia data in response to end-user request, e.g. network recorder
- H04N21/2743—Video hosting of uploaded data from client
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/816—Monomedia components thereof involving special video data, e.g. 3D video
Abstract
This application discloses an AR-enhanced game live-streaming method, comprising: before the broadcast, the client receives and stores the game's static model data in advance; a capture module collects and encodes the audio and video data of the live game session, and at the same time collects the dynamic model data of the session; the capture module packs and compresses the dynamic model data together with the audio/video data and transmits them to the client over the network; the client loads the pre-stored static model data and, combined with the received dynamic model data, performs AR rendering, displays the game video in real time, and plays the corresponding audio. A corresponding AR-enhanced game live-streaming system is also disclosed. The technical solution disclosed herein improves existing live-streaming technology, especially 3D game live streaming: by introducing cutting-edge AR technology, viewers get a newer experience and a better impression when watching a broadcast, while the development of AR applications gains new ideas and approaches.
Description
Technical field
This application relates to the field of intelligent terminal application technology, and in particular to an AR-enhanced game live-streaming method and system.
Background art
Network live-streaming technology has matured alongside the broad and deep spread of self-media, and live-streaming platforms have become a brand-new kind of social medium. Network live streaming mainly covers real-time broadcasts of games, films, television dramas, and so on. In China there are no fewer than a hundred live-streaming platforms, of which dozens are widely known, and the field is flourishing. Live-streaming platforms have also given rise to the "internet celebrity" economy: relying on celebrities' huge fan bases, they have created one marketing miracle after another. Network live streaming mainly involves streaming-media technology, which breaks down into three parts: audio/video capture and processing, video network transport protocols, and the client player. This application is chiefly concerned with 3D game live streaming, because game streaming is the most important segment of network live streaming, accounting for more than 50% of it.
Augmented reality (AR) is a technology that computes the position and angle of the camera image in real time and adds corresponding imagery; its goal is to overlay the virtual world onto the real world on screen and enable interaction between the two. Augmented reality presents not only real-world information but virtual information at the same time; the two kinds of information complement and superimpose on each other. When the real world is composited with computer graphics, the real environment can still be seen around the virtual content. AR technology mainly comprises computer vision, three-dimensional registration, and recognition and tracking. This application mainly concerns 3D game model reorganization, three-dimensional registration, and recognition and tracking.
Network live streaming has developed over roughly a decade: from the rise of the earliest video websites (Tudou, Youku), to the major network media rushing to produce their own content (web dramas), to the emergence of self-media (with the birth of platforms such as Douyu and YY), ultimately forming a complete ecological chain from content to distribution to profit. The birth of live-streaming platforms spawned the internet celebrity economy, and that economy in turn pushed live streaming another step forward.
Live streaming requires corresponding equipment and technology. On the content side, the devices are usually a webcam and a microphone, and the relevant technology covers the capture and encoding of video data. On the network side, transmission mainly runs over the internet, involving transport protocols such as RTP with RTCP control, SIP, and SDP. On the playback side are the network terminals of different platforms, which mainly involve video decoding and playback technology. These technologies are relatively mature and stable.
AR (augmented reality) technology was first proposed around 1990. As the computing power of electronic products has grown, the uses of augmented reality have widened. AR is a new technology that "seamlessly" integrates real-world information with virtual-world information: entity information that is otherwise hard to experience within a certain time and space of the real world (visual information, sound, taste, touch, and so on) is simulated by computers and then superimposed, applying virtual information to the real world where it is perceived by the human senses, achieving a sensory experience beyond reality. The real environment and virtual objects are superimposed in real time into the same picture or space, where they exist simultaneously.
An AR system has three prominent features: 1. it integrates information from the real world and the virtual world; 2. it is interactive in real time; 3. it positions virtual objects in three-dimensional space. AR technology can be widely applied in fields such as the military, medicine, architecture, education, engineering, film and television, and entertainment.
A good live broadcast needs good content first of all. In existing broadcasts, especially game broadcasts, content capture amounts to nothing more than camera capture or direct transmission of the game picture, which no longer satisfies viewers. Camera footage often loses quality after video compression coding, while a directly transmitted game picture lacks immersion and richness and needs plenty of commentary to hold viewers' interest. So current live-content capture also has its shortcomings.
Meanwhile for AR technology, the maximum limitation of the technology is application scenarios, although AR has edged close to masses
Sight, but the practicality product is in fact very deficient, and AR maximum application for masses is game class.However to complete one
The AR game that different scenes can be matched also is filled with challenge.Another limitation of AR technology is that real-time is poor, because
A large amount of operation is needed when rendering, so AR just seems not fully up to expectations when doing large-scale real-time game.
Summary of the invention
This application provides an AR-enhanced game live-streaming method and system, intended to improve existing live-streaming technology, especially 3D game live streaming, by introducing cutting-edge AR technology, so that viewers get a newer experience and a better impression when watching a broadcast, while also providing new ideas and approaches for the development of AR applications.
This application discloses an AR-enhanced game live-streaming method, comprising:
Before the broadcast, the client receives and stores the game's static model data in advance;
A capture module collects and encodes the audio and video data of the live game session, and at the same time collects the dynamic model data of the session;
The capture module packs and compresses the dynamic model data together with the audio/video data, and transmits them to the client over the network;
The client loads the pre-stored static model data and, combined with the received dynamic model data, performs AR rendering, displays the game video in real time, and plays the corresponding audio.
Preferably, the method further includes:
Performing model reconstruction on the live game's 3D scene in advance, turning it into a model convenient for AR display; the static model data and the dynamic model data together constitute the AR model data.
Preferably, the dynamic model data includes model dynamic-behavior data, coordinate data, and state data; the static model data includes 3D scene data and character model data.
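To make the static/dynamic split concrete, the division above can be sketched as plain data structures. This is an illustrative layout under assumed field names; the patent does not prescribe any concrete schema:

```python
from dataclasses import dataclass, field

@dataclass
class StaticModelData:
    """Transferred to the client once, before the broadcast starts."""
    scene_3d: bytes          # 3D scene/map geometry (large, rarely changes)
    character_models: dict   # character id -> mesh/skeleton data

@dataclass
class DynamicModelData:
    """Small per-frame updates, streamed in real time during the broadcast."""
    behaviors: dict = field(default_factory=dict)    # model id -> current action
    coordinates: dict = field(default_factory=dict)  # model id -> (x, y, z)
    states: dict = field(default_factory=dict)       # model id -> status flags

# The two parts together constitute the AR model data of a session.
static = StaticModelData(scene_3d=b"<map mesh>",
                         character_models={"hero1": b"<mesh>"})
frame = DynamicModelData(coordinates={"hero1": (10.0, 2.0, 5.0)},
                         behaviors={"hero1": "attack"})
```

The point of the split is that only `DynamicModelData` travels during the broadcast; everything in `StaticModelData` is already on the client.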
Preferably, the capture module packing and compressing the dynamic model data and audio/video data and transmitting them to the client over the network includes:
After packing and compressing the dynamic model data and audio/video data, the capture module pushes the information to a server;
After receiving the push from the capture module, the server pushes it to CDN nodes, which continue pushing to each client; alternatively, the server dynamically encodes the received information to suit different client needs, while also storing the data in a cloud database, to be retrieved and played back when a client requests a replay.
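One minimal way to realize the "pack and compress, then push" step is to serialize each frame's small dynamic model update alongside the already-encoded audio/video payload and deflate the bundle before it goes on the wire. This is a sketch only, using JSON plus zlib for clarity; a real system would more likely use a streaming container (e.g. FLV over RTMP, as the transmission section suggests):

```python
import json
import zlib

def pack_frame(av_payload: bytes, dynamic_model: dict) -> bytes:
    """Bundle one frame's encoded A/V data with its dynamic model update."""
    envelope = {
        "model": dynamic_model,   # small: poses, coordinates, states
        "av": av_payload.hex(),   # already-encoded audio/video bytes
    }
    return zlib.compress(json.dumps(envelope).encode("utf-8"))

def unpack_frame(packet: bytes):
    """Client-side inverse of pack_frame."""
    envelope = json.loads(zlib.decompress(packet).decode("utf-8"))
    return bytes.fromhex(envelope["av"]), envelope["model"]

packet = pack_frame(b"\x00\x01video",
                    {"hero1": {"pos": [1, 2, 0], "anim": "run"}})
av, model = unpack_frame(packet)
```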
Preferably, the client performing AR rendering includes:
The client decodes the information received from the server while capturing the real scene with its camera; it analyzes the real scene, determines the position of the AR display area, and establishes a three-dimensional coordinate system; the decoded 3D scene information is then shown in the designated AR display area while the live video and audio play.
Preferably, the decoding of information is divided into decoding the audio/video information and decoding the 3D scene information, where decoding the 3D scene information is further divided into decoding the static model data and decoding the dynamic model data; when performing the AR display, the static model data is rendered first, and the dynamic model data is then superimposed on the static model data, rendered, and displayed.
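The two-stage rendering order just described, static base first and dynamic data superimposed on top, can be sketched as a simple draw-list compositor. Names here are illustrative; no real graphics API is involved:

```python
def render_ar_frame(static_models: dict, dynamic_update: dict) -> list:
    """Render static model data first, then overlay dynamic model data.

    Returns an ordered draw list: static geometry entries are emitted
    before the dynamic entities that are superimposed on them.
    """
    draw_list = [("static", name) for name in sorted(static_models)]
    for entity, state in sorted(dynamic_update.items()):
        draw_list.append(("dynamic", entity, tuple(state["pos"])))
    return draw_list

frame = render_ar_frame(
    {"map": b"<mesh>", "towers": b"<mesh>"},   # loaded once, pre-stored
    {"hero1": {"pos": [3, 0, 1]}},             # arrives per frame
)
```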
Preferably, the method further includes:
Performing interactive user operations, specifically including: adjusting the shooting position of the client camera to adjust the display position and angle of the AR model; adjusting the camera focal length to adjust the size of the AR model; and sending a replay request from the client to the server to retrieve the data stored in the cloud database for replay.
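The interactive operations listed here amount to updating a small view state: moving the camera changes the model's displayed position and angle, while changing the focal length scales it. A minimal sketch, with a hypothetical state layout of my own choosing:

```python
class ARViewState:
    """Tracks how user camera operations affect the displayed AR model."""

    def __init__(self):
        self.position = [0.0, 0.0, 0.0]  # display position of the AR model
        self.angle = 0.0                 # viewing angle in degrees
        self.scale = 1.0                 # model size (driven by focal length)

    def move_camera(self, dx, dy, dz):
        # Moving the client camera shifts the model's apparent position.
        self.position = [p + d for p, d in zip(self.position, (dx, dy, dz))]

    def rotate_camera(self, degrees):
        self.angle = (self.angle + degrees) % 360.0

    def zoom(self, focal_ratio):
        # Adjusting the camera focal length scales the AR model.
        self.scale *= focal_ratio

view = ARViewState()
view.move_camera(1.0, 0.0, -2.0)
view.rotate_camera(450.0)
view.zoom(2.0)
```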
This application also discloses an AR-enhanced game live-streaming system, comprising a capture module, a transmission module, and a client module, in which:
The capture module collects and encodes the audio and video data of the live game session while collecting the session's dynamic model data, and, after packing and compressing the dynamic model data and audio/video data, pushes them to the transmission module;
The transmission module transfers the data collected by the capture module to the client over the network;
The client receives and stores the game's static model data in advance, before the broadcast; after receiving the transmission module's data, it loads the pre-stored static model data, performs AR rendering in combination with the received dynamic model data, displays the game video in real time, and plays the corresponding audio.
Preferably, the capture module includes an audio/video capture module, an AR scene-modeling module, and a data compression-coding module, in which:
The audio/video capture module captures the broadcaster's audio and video through a microphone and camera;
The AR scene-modeling module performs model reconstruction on the live game's 3D scene, turning it into a model convenient for AR display; model reconstruction includes separating static models from dynamic models, framing the static models' extent within a suitable size, and rendering static and dynamic models separately;
The data compression-coding module compresses and encodes the audio/video data into a streaming-media form that can be transmitted over the network, compresses and encodes the AR static model data and dynamic model data separately, and finally delivers all the data to the transmission module.
Preferably, the transmission module includes a live-server stream-push module, a real-time transcoding service module, a data storage service module, and a video distribution service module, in which:
The live-server stream-push module's service includes pushing the data collected by the capture module to the server side, and pushing the data on the server to each video distribution service module;
The real-time transcoding service module serves the client needs of different platforms, converting the data into encodings that the respective platforms can parse;
The data storage service module provides the replay function, synchronously storing the live data in a database so that users can retrieve and replay the corresponding live data as needed;
The video distribution service module distributes cache servers into the regions or networks where user access is relatively concentrated; when a user visits the site, global load balancing directs the user's access to the nearest properly working cache server, which responds to the user's request directly.
Preferably, the client module includes a data decoding module, a scene calibration module, a display module, and an AR interaction module, in which:
The data decoding module decodes the data received from the transmission module into audio/video data and AR model data, where the AR model data divides into dynamic model data and static model data; the audio/video data can be shown directly on the display module, while the AR model data is sent to the scene calibration module for reconstruction;
The scene calibration module obtains the real scene through the client's camera, then runs environment-understanding techniques to perform plane calibration on the real scene; if no plane appears in the scene, the middle of the scene is calibrated or the user designates the display position; the calibrated region is used to display the AR model, the model is superimposed on the real scene, and the result is finally handed to the display module to show;
The AR interaction module receives user operations, including moving the camera to view different directions and adjusting the camera distance to zoom the AR model; while the user operates, rendering must be performed in real time so that the AR display remains stable.
As the above technical solution shows, the present invention improves the existing live-streaming approach with AR technology and is mainly applied to 3D game live streaming. The map and characters in the broadcast are rendered as a real-time three-dimensional model, which is transmitted over the network to the viewers' intelligent terminals and displayed in AR. By moving, rotating, and scaling their terminals, viewers can watch any local or global view of the live match; combined with voice commentary and the player's video feed, viewers can experience the game broadcast more fully, as if standing on the game battlefield, with a strong sense of immersion.
The present invention is a technical solution for enhancing existing live-streaming technology: it improves and strengthens existing live-streaming methods, broadens the range of uses of AR, and offers a brand-new line of thought for the future live-streaming and AR industries.
Description of the drawings
Fig. 1 is a structural diagram of the AR-enhanced game live-streaming system of the present invention;
Fig. 2 is a flowchart of the AR-enhanced game live-streaming method of the present invention;
Fig. 3 is a schematic diagram of the live-server stream-push service of the present invention;
Fig. 4 is a schematic diagram of the specific client flow of the present invention;
Fig. 5 is a schematic diagram of display calibration;
Fig. 6 is the display picture of the ordinary-player game live-streaming embodiment of the present invention;
Fig. 7 is a flowchart of the ordinary-player game live-streaming embodiment of the present invention;
Fig. 8 is the display picture of the professional game tournament live-streaming embodiment of the present invention;
Fig. 9 is a flowchart of the professional game tournament live-streaming embodiment of the present invention.
Specific embodiment
Hereinafter, the application is described in further detail with reference to the drawings and embodiments, so that its objects, technical solutions, and advantages may be more clearly understood.
In view of the deficiencies of the prior art, this application provides two solutions:
1. Addressing the drawbacks of current live-content capture, and in particular the fact that game broadcasts do not let viewers experience the content well, this application makes an improvement. Specifically, it provides a method of presenting the live game picture in AR: the content-capture end renders the game picture with AR, packs the data, and transmits it over the network to the terminal (also called the client), and the terminal displays the game picture in AR in real time while playing the audio. This method lets viewers experience the game broadcast immersively, with a better live-streaming effect.
2. Addressing the limited application scenarios of current AR, this application combines today's popular live streaming with cutting-edge AR technology: live streaming gains a new mode, AR gains wider application scenarios, and AR's timeliness problem is also mitigated. The content-capture end divides the models into static model data and dynamic model data. The static model data (which is comparatively large) can be transferred to the client in advance, before the broadcast, and stored locally there. The dynamic data is mostly user data generated during play, such as character actions and movement; these data are small and can be transmitted over the network in real time with very low latency. When the client performs AR rendering, it loads the pre-stored static model data and combines it with the dynamic model data arriving over the network, achieving real-time rendering at low technical difficulty. This improves on the problem of slow real-time AR rendering.
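The real-time claim can be supported by rough arithmetic: once the large static assets have been transferred before the broadcast, the per-frame traffic is only the dynamic update. The figures below are illustrative assumptions, not measurements from the application:

```python
# Illustrative sizes (assumptions made for the sake of the estimate).
STATIC_MODEL_BYTES = 200 * 1024 * 1024   # map + character meshes, sent once
DYNAMIC_UPDATE_BYTES = 2 * 1024          # poses/coords/states, per frame
FPS = 30

# Streaming only the dynamic updates needs well under 1 Mbit/s here...
dynamic_bitrate_kbps = DYNAMIC_UPDATE_BYTES * FPS * 8 / 1000

# ...whereas re-sending the full model state every frame would be infeasible.
full_state_bitrate_mbps = STATIC_MODEL_BYTES * FPS * 8 / 1e6
```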
To solve the above problems, the present invention proposes an AR-enhanced game live-streaming system, which, as shown in Fig. 1, consists mainly of the following modules:
1. Capture module: its main functional parts include audio/video capture, AR scene modeling, and data compression, packing, and coding;
2. Transmission module: its main functional parts include the live-server stream push, real-time transcoding service, data storage service, and video distribution service;
3. Client module: its main functional parts include data decoding, scene calibration, AR interaction, and display.
Based on the structure shown in Fig. 1, the flow of the present invention is as shown in Fig. 2. Specifically:
I. Capture module:
The capture module mainly captures the broadcaster's audio and video, performs AR modeling of the 3D game, preprocesses the data, and publishes it after final compression coding. In particular:
1) The audio/video capture module captures the broadcaster's audio and video through the microphone and camera; at this point the data is raw.
2) The AR scene-modeling module performs model reconstruction on the live game's 3D scene, turning it into a model convenient for AR display. The main work of model reconstruction is: separating static models from dynamic models, framing the static models' extent within a suitable size, and rendering static and dynamic models separately.
3) The data compression-coding module compresses and encodes the audio/video data into a streaming-media form that can be transmitted over the network, compresses and encodes the AR static model data and dynamic model data separately, and finally delivers all the data to the live-server side.
II. Transmission module
The transmission module mainly handles the transmission of live data: distributing, transcoding, and storing the data collected by the capture module, ensuring the accuracy and timeliness of data transmission. In particular:
1) The live-server stream-push module's service divides into two parts: the first pushes the data collected by the capture end to the server side, and the second pushes the data on the server to each CDN node (content delivery network node). The main protocols involved are RTMP (Real-Time Messaging Protocol) and RTSP (Real Time Streaming Protocol). Notably, the data in this application divides into two kinds: traditional audio/video data and AR model data. Within the model data, the static model data is often large but changes rarely, so it can be transmitted when it changes; the dynamic model data is small and changes frequently, so it can be transmitted in real time. The dynamic model data may include model dynamic-behavior data, coordinate data, state data, and so on; the static model data may include 3D scene data, character model data, and so on.
2) The real-time transcoding service module is provided to adapt to different clients: to meet the client needs of different platforms, it converts the data into encodings that the respective platforms can parse. See the live-server stream-push service shown in Fig. 3.
3) The data storage service module provides the replay function, synchronously storing the live data in a database so that users can retrieve and replay the corresponding live data as needed.
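The replay function can be sketched as a time-indexed store that the server writes synchronously during the broadcast and the client later queries by time range. The interface below is hypothetical; a production system would use a real database rather than in-memory lists:

```python
import bisect

class ReplayStore:
    """Synchronously stores live data so clients can request replays."""

    def __init__(self):
        self._timestamps = []   # kept sorted: frames arrive in broadcast order
        self._frames = []

    def append(self, timestamp: float, frame: bytes):
        # Called by the server as each live frame is stored.
        self._timestamps.append(timestamp)
        self._frames.append(frame)

    def replay(self, start: float, end: float):
        # Called on a client replay request: all frames within [start, end].
        lo = bisect.bisect_left(self._timestamps, start)
        hi = bisect.bisect_right(self._timestamps, end)
        return self._frames[lo:hi]

store = ReplayStore()
for t in range(5):
    store.append(float(t), f"frame{t}".encode())
clip = store.replay(1.0, 3.0)
```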
4) The video distribution service module is the CDN node layer. Using CDN nodes avoids, as far as possible, the bottlenecks and links on the internet that may affect data transmission speed and stability, making content delivery faster and more stable. The basic principle is to deploy cache servers widely, distributing them into the regions or networks where user access is relatively concentrated; when a user visits the site, global load balancing directs the user's access to the nearest properly working cache server, which responds to the user's request directly.
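The global-load step, directing each user to the nearest cache server that is working properly, can be sketched as a minimum-distance selection over the healthy nodes. The coordinates and health flags below are made up for illustration:

```python
import math

def pick_cache_server(user_pos, servers):
    """Return the nearest healthy cache server, per the global-load idea.

    servers: list of (name, (x, y), healthy) tuples.
    """
    healthy = [(name, pos) for name, pos, ok in servers if ok]
    if not healthy:
        raise RuntimeError("no cache server available")
    return min(healthy, key=lambda s: math.dist(user_pos, s[1]))[0]

servers = [
    ("beijing", (0.0, 0.0), True),
    ("shanghai", (3.0, 4.0), False),   # down: skipped even though nearest
    ("guangzhou", (5.0, 5.0), True),
]
chosen = pick_cache_server((3.0, 4.0), servers)
```

In practice this decision is made by DNS-based or anycast global load balancing rather than explicit coordinates, but the selection criterion is the same.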
III. Client module
The client module mainly handles the parsing and AR presentation of live data; the user can also change the viewing angle, size, and so on through commands, finally achieving the effect of an AR broadcast. In particular:
1) The data decoding module decodes the data delivered by the server side, parsing it into two parts: one part is audio/video data and the other is AR model data, and the AR model data further divides into dynamic model data and static model data. The audio/video data can be shown directly on the display end, while the AR model data must be handed to the scene calibration module for reconstruction.
2) The scene calibration module obtains the real scene through the terminal's camera, then runs environment-understanding techniques to perform plane calibration on the real scene; if no plane appears in the scene, the middle of the scene is calibrated or the user designates the display position. The calibrated region is used to display the AR model, the model is superimposed on the real scene, and the result is finally handed to the display module to show.
3) The AR interaction module receives user operations, such as moving the camera to view different directions and adjusting the camera distance to zoom the AR model. While the user operates, rendering must be performed in real time so that the AR display remains stable. The interactive user operations performed through the AR interaction module may specifically include: adjusting the shooting position of the client camera to adjust the AR model's display position and angle; adjusting the camera focal length to adjust the AR model's size; and sending a replay request from the client to the server to retrieve the data stored in the cloud database for replay.
The specific client flow, referring to Fig. 4, includes: shooting the real scene with the terminal's camera; having the scene calibration module run environment-understanding techniques to perform plane calibration on the real scene, or letting the user designate the display position; performing three-dimensional reconstruction; then, based on the AR data and video data parsed by the data decoding module, the plane-calibration result, and the reconstructed model, superimposing the model on the real scene for render compositing; and finally handing the result to the display module to show.
Some supplementary notes on the specific AR techniques involved:
1. Camera parameter calibration. The client's camera is a crucial component of a vision-based AR system, so its intrinsic parameters must be calibrated before use. An ordinary camera can be calibrated with the camera calibration toolbox that ships with MATLAB, which can calibrate not only the camera intrinsics but also the lens distortion; the toolbox uses the checkerboard calibration method. Calibration must be completed in advance, otherwise three-dimensional reconstruction cannot proceed.
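The intrinsics and lens distortion being calibrated here fit the standard pinhole-plus-radial-distortion model. The sketch below applies assumed (not calibrated) parameters to a normalized image point; checkerboard calibration is precisely the process of inverting this forward model from many corner observations:

```python
def project(point_norm, fx, fy, cx, cy, k1=0.0, k2=0.0):
    """Pinhole projection with two radial distortion coefficients.

    point_norm: (x, y) in normalized camera coordinates (on the z = 1 plane).
    fx, fy, cx, cy: intrinsics of the kind found by checkerboard calibration.
    """
    x, y = point_norm
    r2 = x * x + y * y
    d = 1.0 + k1 * r2 + k2 * r2 * r2   # radial distortion factor
    u = fx * (x * d) + cx
    v = fy * (y * d) + cy
    return u, v

# With zero distortion, projection is a pure affine map of the normalized point.
u, v = project((0.1, -0.2), fx=800.0, fy=800.0, cx=320.0, cy=240.0)
# With k1 > 0 the same point is pushed outward from the principal point.
u2, v2 = project((0.1, -0.2), fx=800.0, fy=800.0, cx=320.0, cy=240.0, k1=0.1)
```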
2. Display calibration is a way of converting between the position of the displayed object and the position of the actual object, and it is also an important part of three-dimensional reconstruction. The most common method is the single-point active alignment method (SPAAM): a crosshair cursor on the screen is repeatedly aligned with an object in the real world, where the repeated alignments must be completed by hand. After the data has been collected, the projection matrix is found by constructing and solving a system of equations with the DLT method, as shown in Fig. 5.
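The DLT step, building a linear system from 3D-2D correspondences and solving for the projection matrix, can be sketched with numpy. The correspondences below are synthetic; in SPAAM they would come from the repeated cursor alignments:

```python
import numpy as np

def dlt_projection_matrix(points_3d, points_2d):
    """Solve for the 3x4 projection matrix P (up to scale) with the DLT.

    Each correspondence (X, x) contributes two rows of the homogeneous
    system A p = 0; the solution is the right singular vector of A with
    the smallest singular value. Needs at least 6 correspondences.
    """
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        P = [X, Y, Z, 1.0]
        rows.append([*P, 0, 0, 0, 0, *[-u * c for c in P]])
        rows.append([0, 0, 0, 0, *P, *[-v * c for c in P]])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    return vt[-1].reshape(3, 4)

# Synthetic check: project points with a known P, then recover it.
P_true = np.array([[800.0, 0.0, 320.0, 10.0],
                   [0.0, 800.0, 240.0, 20.0],
                   [0.0, 0.0, 1.0, 0.5]])
pts3d = [(0.1, 0.2, 2.0), (1.0, -0.5, 3.0), (-0.4, 0.3, 2.5),
         (0.7, 0.9, 4.0), (-1.2, 0.1, 3.5), (0.3, -0.8, 2.2)]
pts2d = []
for X in pts3d:
    h = P_true @ np.array([*X, 1.0])
    pts2d.append((h[0] / h[2], h[1] / h[2]))

P_est = dlt_projection_matrix(pts3d, pts2d)
P_est *= P_true[2, 3] / P_est[2, 3]   # fix the arbitrary scale for comparison
```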
3. Visual consistency, which includes geometric consistency and illumination consistency. Geometric consistency requires that, once a virtual object is placed at the correct position, its position does not change no matter how the camera moves; in addition, occlusion must remain consistent, since differences in spatial position cause occlusion relationships to change. Illumination consistency concerns how the rendered virtual object keeps lighting effects consistent with the real environment; the usual approach is to obtain the distribution of light sources in the real environment and simulate that lighting in the virtual world.
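The illumination-consistency approach — estimate the real environment's light sources, then reuse them when shading the virtual object — can be sketched with a Lambertian model. The shading model is an illustrative assumption; the patent does not prescribe one:

```python
import numpy as np

def lambert_shade(albedo, normal, light_dir, light_intensity):
    """Shade a virtual surface using a light direction and intensity
    estimated from the real environment (Lambertian assumption)."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    # Diffuse term: brightness falls off with the cosine of the angle
    # between surface normal and light, clamped at zero.
    return albedo * light_intensity * max(float(n @ l), 0.0)

# A surface facing the estimated light receives full intensity;
# a surface facing away from it receives none.
lit = lambert_shade(0.8, np.array([0.0, 0.0, 1.0]),
                    np.array([0.0, 0.0, 1.0]), 1.0)
unlit = lambert_shade(0.8, np.array([0.0, 0.0, 1.0]),
                      np.array([0.0, 0.0, -1.0]), 1.0)
```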
The ultimate purpose of the invention is to add AR presentation to existing live-streaming technology and make live streaming more vivid. Next, the implementation of each part of the invention is analysed in detail through two preferred embodiments.
This analysis takes the live streaming of a MOBA-style 3D game such as "Honor of Kings" as an example. MOBA-style 3D games suit the technique proposed in this application because such games have a scene of fixed size, and all characters move within that scene, so static models and dynamic models can easily be distinguished. Similarly, any 3D game with a fixed-size scene (such as StarCraft, CS, etc.) is applicable.
Embodiment one: regular player game live streaming
This embodiment takes a regular player's game live stream as an example and comprises the following steps:
Step 1. The camera captures the player's current video and the microphone captures the player's audio; this part constitutes the raw audio-video data.
Step 2. Acquisition of the 3D game AR model is divided into two parts, taking "Honor of Kings" as an example:
One part is the acquisition of the overall game map. Because the map is the same in every match, the map data can be saved locally and need not be collected repeatedly; such data are static data.
The other part comprises the characters controlled by the players, the minions, the jungle monsters and so on; these belong to the dynamic data. However, these character models are mostly fixed; the only uncertain quantities are the characters' postures, coordinates, facing directions and the like.
To reduce the data volume during collection, acquisition focuses on these uncertain data, while data that can be determined in advance are saved statically as far as possible. Static data such as the map and character models are provided by the game manufacturer, while dynamic data are collected from the player's game.
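The static/dynamic split in step 2 can be sketched as follows. Every name and field here is an illustrative assumption, not taken from the patent or from any game's actual data format:

```python
# Static data (map, character meshes) ship with the client and are
# cached locally, so they are never re-collected per frame.
STATIC_CACHE = {
    "map": "overall_game_map_mesh",
    "character_models": {"hero_07": "hero_07_mesh"},
}

def collect_dynamic_update(model_id, posture, position, facing):
    """Collect only the uncertain per-frame quantities: the posture,
    coordinates and facing direction of a character model."""
    return {"id": model_id, "posture": posture,
            "pos": position, "facing": facing}

# One frame's worth of dynamic data for a cached character model.
update = collect_dynamic_update("hero_07", "attack", (132.5, 48.0), 90.0)
```

Because the per-frame payload carries only poses and coordinates, it is far smaller than retransmitting the models themselves, which is the point of the split.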
Step 3. The collected raw audio-video data are encoded. Common bit-rate control modes include CBR, VBR, etc.; common video coding standards include H.265, H.264, MPEG-4, etc., with container formats such as TS, MKV, AVI and MP4; audio codecs include G.711μ, AAC, Opus, etc., packaged as MP3, OGG, AAC, etc.
Step 4. The encoded audio-video data and the AR model data are integrated and packaged, and a push request is sent to the server. After the server receives the request command and permits pushing, the broadcaster pushes the corresponding data to the server; protocols such as RTMP or RTSP can be used. The RTMP protocol is used by Flash for the transmission of objects, video and audio; it is built on top of the TCP protocol or on polled HTTP. RTMP acts like a container for data packets, which may be data in AMF format or video/audio data as in FLV. A single connection can carry multiple network streams over different channels, and the packets in these channels are all transmitted as packets of a fixed size.
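The fixed-size transmission described here can be sketched as simple chunking: each message is cut into fixed-size pieces so that audio, video and data messages can be interleaved over one connection. 128 bytes is RTMP's default chunk size; the packing itself is only illustrative:

```python
def chunk_message(payload: bytes, chunk_size: int = 128):
    """Split one message into fixed-size chunks (the last chunk may be
    shorter), so several streams can interleave on one connection."""
    return [payload[i:i + chunk_size]
            for i in range(0, len(payload), chunk_size)]

# A 300-byte message becomes chunks of 128, 128 and 44 bytes.
chunks = chunk_message(b"\x00" * 300)
```

The receiver reassembles the chunks of each channel in order, which is why fixed-size chunks let one TCP connection multiplex many logical streams.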
Step 5. To adapt to different viewing terminals, the audio-video data are transcoded in real time, and the transcoded data are distributed to the CDN nodes.
Step 6. Each CDN node pushes the corresponding data to the user according to the live content the viewer has selected. After receiving the data, the user decodes them in two steps: the first step distinguishes the audio-video data from the AR model data; the second step parses the two parts separately. The audio-video data can be decoded by suitable hardware or software to obtain images and sound that can be displayed directly; in general, every encoder has a corresponding decoder, and third-party decoding plug-ins also exist. The AR model data must be further separated into static and dynamic parts.
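The first decoding step — separating audio-video packets from AR model packets, and the latter into static and dynamic parts — can be sketched as a demultiplexer. The tag names are assumptions made for illustration:

```python
def demultiplex(packets):
    """Route each tagged packet either to the audio-video decoder or
    to the AR model parser, split into static and dynamic parts."""
    av, ar_static, ar_dynamic = [], [], []
    for tag, data in packets:
        if tag in ("audio", "video"):
            av.append(data)
        elif tag == "ar_static":
            ar_static.append(data)
        elif tag == "ar_dynamic":
            ar_dynamic.append(data)
        else:
            raise ValueError(f"unknown packet tag: {tag}")
    return av, ar_static, ar_dynamic

streams = demultiplex([("video", b"v0"), ("audio", b"a0"),
                       ("ar_static", b"map"), ("ar_dynamic", b"pose")])
```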
Step 7. Meanwhile, the client uses the camera of the terminal to obtain the current real scene and, using computer vision and related techniques, understands the current environment and selects a suitable rendering position (such as a desktop or the ground) at which to establish the corresponding three-dimensional coordinate system. If no plane appears in the scene, the coordinate system is established at the middle of the scene or at a position specified by the user.
Step 8. Rendering of the AR model begins at the specified rendering position: the static models are constructed first, then the dynamic models are superimposed on them, finally forming the complete AR scene.
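Steps 7 and 8 together can be sketched as: pick an anchor (a detected plane if any, otherwise a user-chosen point or the middle of the scene), then draw static models before dynamic ones so the latter sit on top. A draw-order list stands in for a real renderer here:

```python
def choose_anchor(detected_planes, scene_center, user_pick=None):
    """Step 7: prefer a detected plane (desktop, ground); fall back to
    a user-specified position, then to the middle of the scene."""
    if detected_planes:
        return detected_planes[0]
    return user_pick if user_pick is not None else scene_center

def compose_scene(static_models, dynamic_models):
    """Step 8: static models are rendered first, dynamic models are
    superimposed afterwards, forming the complete AR scene."""
    return ([("static", m) for m in static_models] +
            [("dynamic", m) for m in dynamic_models])

anchor = choose_anchor([], scene_center=(0.0, 0.0, 0.0))
scene = compose_scene(["map"], ["hero_07"])
```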
Step 9. The rendered image is transmitted to the display terminal for display; at the same time, a region of the display terminal is set aside to show the video data, and the audio data are played through the terminal's audio player. The final presentation is shown in Fig. 6.
Step 10. The user can adjust the terminal's pose to adjust the AR display effect, for example enlarging or shrinking the AR model. For instance, to observe a particular part of the game, the user aims the terminal at that position and moves it forward, and the local area is magnified as the terminal advances; similarly, to observe the whole scene, the user moves the terminal backward. To adjust the viewing angle, the user can move the terminal left or right.
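The zoom behaviour of step 10 amounts to scaling the model inversely with the distance between terminal and anchor. The inverse-distance law below is an illustrative assumption, not mandated by the patent:

```python
def apparent_scale(base_scale, distance, ref_distance=1.0):
    """Moving the terminal toward the anchor (smaller distance)
    enlarges the model; moving it back shrinks the model."""
    return base_scale * ref_distance / max(distance, 1e-6)

# Halving the distance doubles the apparent size; doubling it halves it.
zoomed_in = apparent_scale(1.0, 0.5)
zoomed_out = apparent_scale(1.0, 2.0)
```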
Fig. 7 is the flow chart of this embodiment, corresponding to steps 1-10 above.
Embodiment two: professional game contest live streaming
This embodiment takes the live streaming of a professional game contest as an example. A contest live stream differs from a regular player's live stream: it must provide additional game data such as scoreboards and player statistics, and it must support a playback function, so cloud storage is required. Steps identical to those of Embodiment one are not repeated. The specific steps are as follows:
Step 1. The camera captures the players' video, the microphone captures the commentary audio, and the current game data (score, player statistics, etc.) are also recorded.
Step 2. Identical to step 2 of Embodiment one.
Step 3. Identical to step 3 of Embodiment one.
Step 4. Identical to step 4 of Embodiment one, with the game data packaged as well.
Step 5. Identical to step 5 of Embodiment one, with all data also backed up in a cloud database.
Step 6. Identical to step 6 of Embodiment one, with the game data also decompressed.
Steps 7-9. Identical to steps 7-9 of Embodiment one, with the display of the game data added to the rendering; the specific presentation is shown in Fig. 8.
Step 10. Identical to step 10 of Embodiment one; in addition, the user can choose the live content to watch, retrieving the corresponding data from the cloud database for local viewing.
Fig. 9 is the flow chart of this embodiment, corresponding to steps 1-10 above.
Compared with the prior art, the invention provides a completely new live-streaming mode for existing live-streaming systems by introducing AR technology. Its advantage is that viewers can experience the game live stream better and more comprehensively, as if standing on the game battlefield themselves, with a full sense of immersion.
At the same time, it provides a new mode for AR application scenarios and a new direction for the future development of AR. Through the major live-streaming platforms, it facilitates the popularization of AR technology and allows more people to embrace it.
The above are merely preferred embodiments of the application and are not intended to limit it; any modification, equivalent substitution or improvement made within the spirit and principles of the application shall fall within its scope of protection.
Claims (11)
1. An AR-enhanced game live broadcasting method, characterized by comprising:
before the live stream, a client receives and stores the static model data of the game in advance;
an acquisition module collects the audio-video data of the game live stream and encodes them, while also acquiring the dynamic model data of the game live stream;
after the acquisition module packages and compresses the dynamic model data and the audio-video data, they are transmitted to the client over the network;
the client loads the pre-stored static model data, combines them with the received dynamic model data to perform AR rendering, displays the game video in real time, and plays the corresponding audio.
2. The method according to claim 1, characterized in that the method further comprises:
performing model reconstruction on the 3D game scene to be live-streamed in advance, converting it into models suitable for AR display, the static model data and the dynamic model data together constituting the AR model data.
3. The method according to claim 2, characterized in that:
the dynamic model data comprise model dynamic behaviour data, coordinate data and state data;
the static model data comprise 3D scene data and character model data.
4. The method according to any one of claims 1 to 3, characterized in that the acquisition module packaging and compressing the dynamic model data and the audio-video data and transmitting them to the client over the network comprises:
after the acquisition module packages and compresses the dynamic model data and the audio-video data, it pushes the information to a server;
after the server receives the information pushed by the acquisition module, it pushes the information to CDN nodes, which continue to push it to each client; alternatively,
the server dynamically encodes the received information to suit the needs of different clients, while also storing the data in a cloud database to be retrieved and played back upon client request for review.
5. The method according to any one of claims 1 to 3, characterized in that the client performing AR rendering comprises:
the client decodes the information received from the server while obtaining the real scene through the client's camera; the real scene is analysed, the orientation of the AR display area is determined, and a three-dimensional coordinate system is established; the decoded 3D scene information is displayed in the specified AR display area while the live video and audio are played.
6. The method according to claim 5, characterized in that:
the decoding of information is divided into the decoding of audio-video information and the decoding of 3D scene information, wherein the decoding of 3D scene information is divided into the decoding of static model data and of dynamic model data; during AR display, the static model data are rendered first, and the dynamic model data are then superimposed on the static model data, rendered and displayed.
7. The method according to any one of claims 1 to 3, characterized in that the method further comprises:
performing user interaction, specifically including: adjusting the shooting position of the client camera to adjust the display position and angle of the AR model; adjusting the camera focal length to adjust the size of the AR model; and sending a playback request from the client to the server to retrieve and review the data stored in the cloud database.
8. An AR-enhanced game live broadcasting system, characterized by comprising an acquisition module, a transmission module and a client module, wherein:
the acquisition module collects the audio-video data of the game live stream and encodes them, while also acquiring the dynamic model data of the game live stream, and pushes the dynamic model data and the audio-video data to the transmission module after packaging and compression;
the transmission module transmits the data collected by the acquisition module to the client over the network;
the client receives and stores the static model data of the game in advance before the live stream; after receiving the data from the transmission module, it loads the pre-stored static model data, combines them with the received dynamic model data to perform AR rendering, displays the game video in real time, and plays the corresponding audio.
9. The system according to claim 8, characterized in that:
the acquisition module comprises an audio-video acquisition module, an AR scene modeling module and a data compression-encoding module, wherein:
the audio-video acquisition module acquires the broadcaster's audio-video information through a microphone and a camera;
the AR scene modeling module performs model reconstruction on the 3D game scene to be live-streamed, converting it into models suitable for AR display, the model reconstruction including separating static and dynamic models, framing the static models within a suitably sized model scope, and rendering the static and dynamic models separately;
the data compression-encoding module compresses and encodes the audio-video data into a streaming-media form that can be transmitted over the network, compresses and encodes the static and dynamic AR model data separately, and finally passes all data to the transmission module.
10. The system according to claim 8, characterized in that:
the transmission module comprises a live-streaming server push module, a real-time transcoding service module, a data storage service module and video distribution service modules, wherein:
the live-streaming server push module pushes the data collected by the acquisition module to the server, and pushes the data on the server to each video distribution service module;
the real-time transcoding service module serves the client needs of different platforms, converting the data into encodings that the corresponding platforms can parse;
the data storage service module stores the live data synchronously in a database to provide a playback function, so that users can retrieve the corresponding live data for review as needed;
the video distribution service modules distribute cache servers to regions or networks where user access is relatively concentrated; when a user accesses the site, global load-balancing technology directs the user's access to the nearest properly functioning cache server, which responds directly to the user's request.
11. The system according to claim 8, characterized in that:
the client module comprises a data decoding module, a scene calibration module, a display module and an AR interaction module, wherein:
the data decoding module decodes the data received from the transmission module to obtain the audio-video data and the AR model data, the AR model data being divided into dynamic model data and static model data; the audio-video data can be displayed directly on the display module, while the AR model data are sent to the scene calibration module for reconstruction;
the scene calibration module obtains the real scene through the camera of the client module and then runs environment-understanding techniques to perform plane calibration on the real scene; if there is no plane in the displayed scene, calibration uses the middle of the scene or a display position specified by the user; the calibrated region is used to display the AR model, the model being superimposed on the real scene and finally handed to the display module for display;
the AR interaction module receives user operations, including moving the camera to view different directions and adjusting the camera distance to zoom the AR model; rendering is performed in real time during user operation to maintain the stability of the AR display.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811181572.9A CN109195020B (en) | 2018-10-11 | 2018-10-11 | AR enhanced game live broadcast method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109195020A true CN109195020A (en) | 2019-01-11 |
CN109195020B CN109195020B (en) | 2021-07-02 |
Family
ID=64948102
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811181572.9A Active CN109195020B (en) | 2018-10-11 | 2018-10-11 | AR enhanced game live broadcast method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109195020B (en) |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109842811A (en) * | 2019-04-03 | 2019-06-04 | 腾讯科技(深圳)有限公司 | A kind of method, apparatus and electronic equipment being implanted into pushed information in video |
CN110288658A (en) * | 2019-05-24 | 2019-09-27 | 联想(上海)信息技术有限公司 | A kind of information processing method, device and computer storage medium |
CN110536146A (en) * | 2019-08-19 | 2019-12-03 | 广州点云科技有限公司 | A kind of live streaming based on cloud game is started broadcasting method, apparatus and storage medium |
CN110689570A (en) * | 2019-09-29 | 2020-01-14 | 北京达佳互联信息技术有限公司 | Live virtual image broadcasting method and device, electronic equipment and storage medium |
CN111447458A (en) * | 2020-04-01 | 2020-07-24 | 广州市百果园信息技术有限公司 | Live broadcast system, method and device based on content explanation and live broadcast server |
CN111447485A (en) * | 2020-03-31 | 2020-07-24 | 广州微算互联信息技术有限公司 | Real-time cloud game video recording method, system, device and storage medium |
CN111935495A (en) * | 2020-08-13 | 2020-11-13 | 上海识装信息科技有限公司 | AR technology-based live video commodity display method and system |
CN111970522A (en) * | 2020-07-31 | 2020-11-20 | 北京琳云信息科技有限责任公司 | Processing method and device of virtual live broadcast data and storage medium |
CN112118212A (en) * | 2019-06-21 | 2020-12-22 | 广州虎牙科技有限公司 | Video data output method and system based on cloud platform and cloud platform |
CN112118213A (en) * | 2019-06-21 | 2020-12-22 | 广州虎牙科技有限公司 | Online video data output method and system and cloud platform |
CN112702611A (en) * | 2019-10-22 | 2021-04-23 | 上海华为技术有限公司 | Playing method and playing system |
WO2021088973A1 (en) * | 2019-11-07 | 2021-05-14 | 广州虎牙科技有限公司 | Live stream display method and apparatus, electronic device, and readable storage medium |
CN113382305A (en) * | 2021-05-27 | 2021-09-10 | 北京工业大学 | Online video live broadcast system based on three-dimensional scene |
CN113490048A (en) * | 2021-07-28 | 2021-10-08 | 广东金马游乐股份有限公司 | Dynamic movie and television system |
WO2021218547A1 (en) * | 2020-04-26 | 2021-11-04 | 北京外号信息技术有限公司 | Method for superimposing live image of person onto real scene, and electronic device |
CN114390048A (en) * | 2021-12-31 | 2022-04-22 | 凌宇科技(北京)有限公司 | Cloud VR screen projection system and method |
CN114629812A (en) * | 2022-03-28 | 2022-06-14 | 中国电子科技集团公司第三十八研究所 | Cluster visualization system and method based on autonomous controllable platform |
US11880956B2 (en) | 2019-08-28 | 2024-01-23 | Shenzhen Sensetime Technology Co., Ltd. | Image processing method and apparatus, and computer storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8012023B2 (en) * | 2006-09-28 | 2011-09-06 | Microsoft Corporation | Virtual entertainment |
CN106101735A (en) * | 2016-06-23 | 2016-11-09 | 赵涛 | A kind of billiard ball live broadcasting method based on virtual reality technology |
CN107174825A (en) * | 2017-04-28 | 2017-09-19 | 苏州蜗牛数字科技股份有限公司 | A kind of remote image method for reconstructing and system based on model |
CN107376360A (en) * | 2017-06-19 | 2017-11-24 | 深圳市铂岩科技有限公司 | game live broadcasting method and game live broadcast system |
CN107454434A (en) * | 2017-08-14 | 2017-12-08 | 姜汉龙 | Virtual reality net cast method and video playing terminal |
CN108364353A (en) * | 2017-12-27 | 2018-08-03 | 广东鸿威国际会展集团有限公司 | The system and method for guiding viewer to watch the three-dimensional live TV stream of scene |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109842811A (en) * | 2019-04-03 | 2019-06-04 | 腾讯科技(深圳)有限公司 | A kind of method, apparatus and electronic equipment being implanted into pushed information in video |
CN109842811B (en) * | 2019-04-03 | 2021-01-19 | 腾讯科技(深圳)有限公司 | Method and device for implanting push information into video and electronic equipment |
CN110288658A (en) * | 2019-05-24 | 2019-09-27 | 联想(上海)信息技术有限公司 | A kind of information processing method, device and computer storage medium |
CN112118212B (en) * | 2019-06-21 | 2021-09-24 | 广州虎牙科技有限公司 | Video data output method and system based on cloud platform and cloud platform |
CN112118212A (en) * | 2019-06-21 | 2020-12-22 | 广州虎牙科技有限公司 | Video data output method and system based on cloud platform and cloud platform |
CN112118213A (en) * | 2019-06-21 | 2020-12-22 | 广州虎牙科技有限公司 | Online video data output method and system and cloud platform |
CN110536146A (en) * | 2019-08-19 | 2019-12-03 | 广州点云科技有限公司 | A kind of live streaming based on cloud game is started broadcasting method, apparatus and storage medium |
CN110536146B (en) * | 2019-08-19 | 2021-12-31 | 广州点云科技有限公司 | Live broadcast method and device based on cloud game and storage medium |
US11880956B2 (en) | 2019-08-28 | 2024-01-23 | Shenzhen Sensetime Technology Co., Ltd. | Image processing method and apparatus, and computer storage medium |
CN110689570A (en) * | 2019-09-29 | 2020-01-14 | 北京达佳互联信息技术有限公司 | Live virtual image broadcasting method and device, electronic equipment and storage medium |
US11166002B2 (en) | 2019-09-29 | 2021-11-02 | Beijing Dajia Internet Information Technology Co., Ltd. | Method and device for live broadcasting virtual avatar |
CN112702611A (en) * | 2019-10-22 | 2021-04-23 | 上海华为技术有限公司 | Playing method and playing system |
WO2021088973A1 (en) * | 2019-11-07 | 2021-05-14 | 广州虎牙科技有限公司 | Live stream display method and apparatus, electronic device, and readable storage medium |
CN111447485A (en) * | 2020-03-31 | 2020-07-24 | 广州微算互联信息技术有限公司 | Real-time cloud game video recording method, system, device and storage medium |
CN111447458A (en) * | 2020-04-01 | 2020-07-24 | 广州市百果园信息技术有限公司 | Live broadcast system, method and device based on content explanation and live broadcast server |
WO2021218547A1 (en) * | 2020-04-26 | 2021-11-04 | 北京外号信息技术有限公司 | Method for superimposing live image of person onto real scene, and electronic device |
TWI795762B (en) * | 2020-04-26 | 2023-03-11 | 大陸商北京外號信息技術有限公司 | Method and electronic equipment for superimposing live broadcast character images in real scenes |
CN111970522A (en) * | 2020-07-31 | 2020-11-20 | 北京琳云信息科技有限责任公司 | Processing method and device of virtual live broadcast data and storage medium |
CN111935495A (en) * | 2020-08-13 | 2020-11-13 | 上海识装信息科技有限公司 | AR technology-based live video commodity display method and system |
CN113382305A (en) * | 2021-05-27 | 2021-09-10 | 北京工业大学 | Online video live broadcast system based on three-dimensional scene |
CN113382305B (en) * | 2021-05-27 | 2023-05-23 | 北京工业大学 | Online video live broadcast system based on three-dimensional scene |
CN113490048A (en) * | 2021-07-28 | 2021-10-08 | 广东金马游乐股份有限公司 | Dynamic movie and television system |
CN113490048B (en) * | 2021-07-28 | 2023-06-27 | 广东金马游乐股份有限公司 | Dynamic video system |
CN114390048A (en) * | 2021-12-31 | 2022-04-22 | 凌宇科技(北京)有限公司 | Cloud VR screen projection system and method |
CN114629812A (en) * | 2022-03-28 | 2022-06-14 | 中国电子科技集团公司第三十八研究所 | Cluster visualization system and method based on autonomous controllable platform |
Also Published As
Publication number | Publication date |
---|---|
CN109195020B (en) | 2021-07-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109195020A (en) | A kind of the game live broadcasting method and system of AR enhancing | |
US9704298B2 (en) | Systems and methods for generating 360 degree mixed reality environments | |
CN106713895A (en) | Method and device for processing content | |
CN107801083A (en) | A kind of network real-time interactive live broadcasting method and device based on three dimensional virtual technique | |
CN102340690A (en) | Interactive television program system and realization method | |
Doumanoglou et al. | Quality of experience for 3-D immersive media streaming | |
CN107438183A (en) | A kind of virtual portrait live broadcasting method, apparatus and system | |
CN105379302B (en) | Information processing equipment and information processing method | |
CN107846633A (en) | A kind of live broadcasting method and system | |
CN106792214A (en) | A kind of living broadcast interactive method and system based on digital audio-video place | |
US20220044704A1 (en) | Systems and methods for generating and presenting virtual experiences | |
CN110178158A (en) | Information processing unit, information processing method and program | |
CN113382275B (en) | Live broadcast data generation method and device, storage medium and electronic equipment | |
KR20150105058A (en) | Mixed reality type virtual performance system using online | |
CN113891117B (en) | Immersion medium data processing method, device, equipment and readable storage medium | |
CN106534618A (en) | Method, device and system for realizing pseudo field interpretation | |
CN109120990A (en) | Live broadcasting method, device and storage medium | |
van der Hooft et al. | A tutorial on immersive video delivery: From omnidirectional video to holography | |
KR20210084248A (en) | Method and apparatus for providing a platform for transmitting vr contents | |
EP3606085A1 (en) | Information processing device, information processing method, and program | |
Zhang et al. | Design and Implementation of Two Immersive Audio and Video Communication Systems Based on Virtual Reality | |
US20210049824A1 (en) | Generating a mixed reality | |
JP7054351B2 (en) | System to play replay video of free viewpoint video | |
Duncan et al. | Voxel-based immersive mixed reality: A framework for ad hoc immersive storytelling | |
WO2022024780A1 (en) | Information processing device, information processing method, video distribution method, and information processing system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||