CN105635712B - Video real time recording method and recording arrangement based on augmented reality - Google Patents


Info

Publication number
CN105635712B
CN105635712B (application CN201511020454.6A)
Authority
CN
China
Prior art keywords
target object
engines
content
image
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201511020454.6A
Other languages
Chinese (zh)
Other versions
CN105635712A (en)
Inventor
张小军
王凤伟
王伟楠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
EASYAR INFORMATION TECHNOLOGY (SHANGHAI) Co Ltd
Original Assignee
EASYAR INFORMATION TECHNOLOGY (SHANGHAI) Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by EASYAR INFORMATION TECHNOLOGY (SHANGHAI) Co Ltd
Priority to CN201710986097.1A (published as CN107682688B)
Priority to CN201511020454.6A (published as CN105635712B)
Publication of CN105635712A
Application granted
Publication of CN105635712B
Legal status: Active
Anticipated expiration


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/275 Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/161 Encoding, multiplexing or demultiplexing different image signal components
    • H04N13/189 Recording image signals; Reproducing recorded image signals

Abstract

Embodiments of the invention provide an augmented-reality-based real-time video recording method and recording device. The recording device includes an AR engine and a 3D engine. The AR engine is configured to receive image information, identify a target object in the image information, extract attribute information of the target object, and send the attribute information of the target object together with the environment content in which the target object is situated to the 3D engine, the environment content including the environmental information surrounding the target object. The 3D engine is configured to receive the attribute information and the environment content sent by the AR engine, determine the virtual content corresponding to the target object according to the attribute information of the target object, and superimpose the virtual content on the environment content to form interactive content, so that the interactive content can be synthesized into an image and/or displayed on a display unit.

Description

Video real time recording method and recording arrangement based on augmented reality
Technical field
The present invention relates to the field of augmented reality, and more particularly to an augmented-reality-based real-time video recording method and recording device.
Background technology
AR (Augmented Reality) technology is a human-computer interaction technology that applies virtual content to the real world through an intelligent terminal device and visualization techniques, so that the virtual content and the real world are superimposed into the same picture or space and presented to the user simultaneously. With the popularization of intelligent terminals, AR technology is increasingly widely applied; it can be experienced by installing an AR application on an intelligent terminal. Specifically, the workflow of an AR application is as follows: the intelligent terminal captures image frames through a camera and identifies the image frames to determine an AR target object; the AR target object in the image frames is tracked to determine its position; the AR virtual content associated with the AR target object is obtained; the image frames are rendered, the AR virtual content is superimposed on the AR target object, and the AR target object and the AR virtual content are displayed on the terminal screen at the same time so that the user can interact with them.
In current AR processing technology, virtual content can be superimposed not only on a static frame or picture, but also on a recorded video.
In the prior art, superimposing virtual content on a recorded video usually requires first recording a segment of video and then adding the required virtual content to the recorded video through post-production, after which the result is presented to the user; real-time superposition of virtual content during recording cannot be achieved.
Therefore, in current AR video processing technology, the person recording the video does not know the appearance, timing, or rhythm of action of the virtual content, so the recorded video is difficult to combine organically with the virtual content to achieve interaction.
Summary of the invention
The invention provides an augmented-reality-based real-time video recording method and recording device. After the environment content in which the target object is situated is superimposed with virtual content, not only can real-time recording be achieved, but the superimposed content can also be displayed on a display unit in real time. The invention thus combines virtual content with recorded video and also enables interaction with the virtual content.
An embodiment of the invention provides a recording device, including an AR engine and a 3D engine, wherein:
the AR engine is configured to receive image information, identify a target object in the image information, extract attribute information of the target object, and send the attribute information of the target object and the environment content in which the target object is situated to the 3D engine, the environment content including the environmental information surrounding the target object;
the 3D engine is configured to receive the attribute information and the environment content sent by the AR engine, determine the virtual content corresponding to the target object according to the attribute information of the target object, and superimpose the part of the virtual content that needs to be recorded on the environment content to form interactive content, so that the interactive content can be synthesized into an image, and so that the interactive content is displayed on a display unit after being superimposed with the part of the virtual content that does not need to be recorded.
Preferably, the 3D engine is specifically configured to use a blank intermediate-state off-screen image as a render target, superimpose the image information of the environment content and the image information of the virtual content on the render target, and complete the rendering of the intermediate-state off-screen image to form the interactive content.
Preferably, the 3D engine is specifically configured to:
draw the image information of the environment content on a first render target and draw the image information of the part of the virtual content that needs to be recorded on a second render target, then superimpose the first render target and the second render target to complete the rendering of the intermediate-state off-screen image; or
first draw the image information of the environment content on a single render target and then draw the image information of the part of the virtual content that needs to be recorded, to complete the rendering of the intermediate-state off-screen image; or
first draw the image information of the part of the virtual content that needs to be recorded on a single render target and then draw the image information of the environment content, to complete the rendering of the intermediate-state off-screen image.
Preferably, the recording device further includes:
a camera, configured to send captured image information to the AR engine.
Preferably, the 3D engine sends the interactive content to a coding unit for encoding, so that the encoded interactive content can be synthesized into an image.
Preferably, when the coding unit includes a video encoding unit:
the video encoding unit is configured to encode the received intermediate-state off-screen image and send the image coding information to a media file processing unit;
the media file processing unit is configured to synthesize the received image coding information into an image.
Preferably, when the coding unit further includes an audio encoding unit, in the recording device:
the 3D engine is further configured to send the audio information in the virtual content corresponding to the target object to an audio synthesis unit;
the audio synthesis unit is configured to receive the audio information of the virtual content sent by the 3D engine, mix it with the audio information of the environment content, and send the mixed audio information to the audio encoding unit;
the audio encoding unit is configured to receive the audio information sent by the audio synthesis unit, encode the audio information, and send the audio coding result to the media file processing unit for processing.
Preferably, the media file processing unit is further configured to merge the received image coding data and audio coding data into an image.
Preferably, the recording device further includes:
a storage unit, configured to store virtual content and the correspondence between the attribute information of target objects and virtual content.
An embodiment of the invention further provides an augmented-reality-based real-time video recording method, the method including:
a 3D engine receiving attribute information of a target object and environment content of the target object sent by an AR engine, the environment content including the environmental information surrounding the target object;
the 3D engine determining the virtual content corresponding to the target object according to the attribute information of the target object, and superimposing the part of the virtual content of the target object that needs to be recorded on the environment content of the target object to form interactive content, so that the interactive content can be synthesized into an image, and displaying the interactive content on a display unit after superimposing it with the part of the virtual content that does not need to be recorded.
Preferably, the method further includes:
the AR engine receiving the image information sent by a camera, analyzing the received image information, determining the target object in the image information, and obtaining the attribute information of the target object;
the AR engine sending the attribute information of the target object and the environment content of the target object to the 3D engine.
Preferably, the 3D engine determining the virtual content corresponding to the target object according to the attribute information of the target object includes:
the 3D engine determining the virtual content corresponding to the target object according to the attribute information of the target object and a prestored correspondence between virtual content and target object attributes.
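As a purely illustrative sketch of the lookup described above (every name, key, and table entry below is an assumption for illustration, not the patented implementation), the prestored correspondence can be modeled as a mapping from a recognized object identifier to its virtual content:

```python
# Minimal sketch of the attribute-to-virtual-content lookup. All names and
# entries are illustrative assumptions, not the patented implementation.

# Prestored correspondence between target-object attributes and virtual
# content, e.g. held in the storage unit (or fetched from a server).
VIRTUAL_CONTENT_TABLE = {
    "poster:movie_a": {"model": "dragon.obj", "audio": "roar.wav", "record": True},
    "poster:movie_b": {"model": "spark.obj", "audio": None, "record": False},
}

def lookup_virtual_content(attributes: dict):
    """Return the virtual content for a recognized target object, or None."""
    key = attributes.get("object_id")       # identity extracted by the AR engine
    return VIRTUAL_CONTENT_TABLE.get(key)   # None when no content is registered

content = lookup_virtual_content({"object_id": "poster:movie_a", "position": (0, 0, 1)})
```

The `record` flag models the later passage in which part of the virtual content is recorded while part is only displayed.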
Preferably, the 3D engine superimposing the part of the virtual content of the target object that needs to be recorded on the environment content of the target object to form interactive content specifically includes:
the 3D engine using a blank off-screen image as a render target, superimposing the image information of the environment content on the render target, also superimposing on the same render target the image information of the part of the virtual content of the target object that needs to be recorded, forming an intermediate-state off-screen image, and completing the rendering of the intermediate-state off-screen image.
Preferably, the 3D engine superimposing the image information of the environment content and the image information of the virtual content on the render target and completing the rendering of the intermediate-state off-screen image specifically includes:
the 3D engine drawing the image information of the environment on a first render target and drawing the image information of the part of the virtual content that needs to be recorded on a second render target, then superimposing the first render target and the second render target to complete the rendering of the intermediate-state off-screen image; or
the 3D engine first drawing the image information of the environment content on a single render target and then drawing the image information of the part of the virtual content that needs to be recorded, to complete the rendering of the intermediate-state off-screen image; or
the 3D engine first drawing the image information of the part of the virtual content that needs to be recorded on a single render target and then drawing the image information of the environment content, to complete the rendering of the intermediate-state off-screen image.
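The three drawing orders above can be sketched with a trivial pixel-buffer model. This is a hedged illustration only: real implementations draw into GPU render targets, and the compositing rules below (source-over for the first two strategies, destination-over for the third) are assumptions, not the patent's specification.

```python
# Sketch of the three render-target strategies; all details are illustrative.
# A "render target" is modeled as a list of pixels; None means blank/transparent.

def blank(size):
    return [None] * size

def draw(target, layer):
    """Composite a layer onto a render target, source-over."""
    for i, px in enumerate(layer):
        if px is not None:
            target[i] = px
    return target

env     = ["E", "E", "E", "E"]      # environment-content image information
virtual = [None, "V", "V", None]    # to-be-recorded virtual content (partly transparent)

# Strategy 1: two render targets, superimposed afterwards.
t1, t2 = draw(blank(4), env), draw(blank(4), virtual)
off_screen_a = draw(t1, t2)

# Strategy 2: one render target, environment first, then virtual content.
off_screen_b = draw(draw(blank(4), env), virtual)

# Strategy 3: virtual content first, then the environment as the backdrop
# (destination-over: environment only fills pixels that are still blank).
t3 = draw(blank(4), virtual)
off_screen_c = [v if v is not None else e for v, e in zip(t3, env)]
```

All three strategies produce the same intermediate-state image, which matches the patent's treatment of them as interchangeable alternatives.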
Preferably, after superimposing the intermediate-state off-screen image with the image information of the part of the virtual content that does not need to be recorded, the 3D engine sends the result to the display unit for display.
Preferably, the method further includes:
the 3D engine sending the interactive content to a coding unit, so that the coding unit encodes the interactive content and sends it to a media file processing unit for image synthesis.
Preferably, the method further includes:
the 3D engine sending the audio information of the virtual content of the target object to an audio synthesis unit, so that the audio synthesis unit mixes the audio part of the virtual content with the audio part of the environment content and then sends the mixture to an audio encoding unit for encoding.
Preferably, the method further includes:
the audio synthesis unit receiving the audio information of the virtual content of the target object sent by the 3D engine, mixing it with the audio information of the environment content of the target object, and sending the mixture to the audio encoding unit;
the audio encoding unit receiving the mixed audio information sent by the audio synthesis unit, encoding the audio information, and sending the coding result to the media file processing unit;
the 3D engine also sending the intermediate-state off-screen image to a video encoding unit;
the video encoding unit encoding the received intermediate-state off-screen image and sending the image coding information to the media file processing unit;
the media file processing unit synthesizing the received image coding data and audio coding data.
The augmented-reality-based real-time video recording method and recording device provided by the embodiments of the invention have the following beneficial effects:
1. the video formed by superimposing the virtual content of the target object on the environment content can be displayed in real time on the display unit, and the superimposed video can also be recorded synchronously;
2. while the superimposed content is being recorded or displayed, interaction between the virtual content and the actual scene can also be achieved, realizing real-time interaction.
Brief description of the drawings
To illustrate the technical solutions of the embodiments of the invention more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a first structural schematic diagram of the recording device provided by an embodiment of the invention;
Fig. 2 is a schematic diagram of the augmented-reality-based real-time video recording method provided by an embodiment of the invention;
Fig. 3 is a second structural schematic diagram of the recording device provided by an embodiment of the invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the invention rather than all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the invention without creative effort fall within the protection scope of the invention.
Embodiment one:
An embodiment of the invention provides a recording device that can implement an AR (Augmented Reality) based real-time video recording method. As shown in Fig. 1, the recording device includes a camera 11, an AR engine 12, a 3D engine 13, a video encoding unit 14, an audio synthesis unit 15, an audio encoding unit 16, a media file processing unit 17, and a display unit 18, wherein:
the camera 11 sends the captured image information to the AR engine 12; after receiving the image information, the AR engine 12 identifies the target object in it, extracts the attribute information of the target object, and sends the attribute information of the target object and the environment content to the 3D engine 13, where the environment content is the actual scene captured by the camera, that is, the environment in which the target object is situated, and may include image information as well as audio information from the environment; after receiving the attribute information and the environment content of the target object, the 3D engine 13 determines the virtual content corresponding to the target object according to the attribute information and superimposes the virtual content on the environment content to form interactive content, where the interactive content may include image information and audio information; specifically, the 3D engine 13 superimposes the image information of the environment content and the image information of the virtual content, completes the rendering of an intermediate-state Off-Screen image, and shares the rendered Off-Screen image with the video encoding unit 14; after receiving the Off-Screen image, the video encoding unit 14 encodes it and sends the video coding data of the Off-Screen image to the media file processing unit 17; the 3D engine 13 also sends the audio information of the virtual content of the target object to the audio synthesis unit 15; the audio synthesis unit 15 mixes the audio information of the virtual content of the target object with the audio information of the environment content and sends the mixed audio information to the audio encoding unit 16; after receiving the audio information, the audio encoding unit 16 encodes it and sends the encoded audio data to the media file processing unit 17; the media file processing unit 17 synthesizes the received video coding data and audio coding data to form the final image.
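The dataflow of Fig. 1 just described can be summarized with a minimal sketch. Every function, data format, and codec name below is an assumption introduced for illustration; the real units are engine and codec modules, not these stubs.

```python
# Illustrative dataflow: camera -> AR engine -> 3D engine -> video/audio
# encoders -> media file processing unit. Stub logic only.

def ar_engine(image_info):
    """Identify the target object; return its attributes plus environment content."""
    return {"object_id": "target"}, {"image": image_info, "audio": b"env-audio"}

def engine_3d(attributes, environment):
    """Look up virtual content and superimpose it on the environment content."""
    virtual = {"image": f"virtual-for-{attributes['object_id']}", "audio": b"fx"}
    off_screen = (environment["image"], virtual["image"])  # intermediate-state image
    return off_screen, virtual["audio"]

def record(image_info):
    off_screen, fx_audio = engine_3d(*ar_engine(image_info))
    video_data = f"h264({off_screen})"       # video encoding unit (codec assumed)
    mixed = b"env-audio" + fx_audio          # audio synthesis unit mixes streams
    audio_data = f"aac({mixed!r})"           # audio encoding unit (codec assumed)
    return (video_data, audio_data)          # media file processing unit muxes these

result = record("frame-0")
```

Running `record` once per camera frame mirrors the per-frame loop implied by the real-time recording described in the text.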
In an embodiment of the invention, among the virtual content of the target object obtained by the 3D engine, according to the system configuration, part of the virtual content needs to be recorded while part of it does not need to be recorded and should be displayed directly. Accordingly, the virtual content that needs to be recorded is superimposed with the environment content to generate the interactive content, which is sent to the video encoding unit; the content that does not need to be recorded is sent to the display unit together with the interactive content for display.
In an embodiment of the invention, the 3D engine may also superimpose the rendered intermediate-state Off-Screen image with the image information of the virtual content that does not need to be recorded, draw the result onto the render target used for on-screen display, and finally display it through the display unit 18.
In an embodiment of the invention, the image information may include pictures or photos captured by the camera, or video captured by the camera; in some scenarios, image information may therefore also refer to video information (the frame-by-frame images of a video). The invention does not limit this.
In an embodiment of the invention, the environment content refers to the environment in which the target object is situated. If the target object is a portrait of a person, then when the camera shoots the portrait, the background of the portrait, the real environment, and the like can be regarded as the environmental information.
In an embodiment of the invention, the superimposed virtual content and environment content are collectively referred to as interactive content, or environmental interactive content. The interactive content can present scenes in which the virtual content interacts with the environment content, such as a game environment, a battle environment, an animation environment, a magic environment, 3D effects, and other environmental images, and can finally be displayed through the display unit.
The recording device provided by the embodiment of the invention may further include a storage unit, configured to store virtual content and the correspondence between the attributes of target objects and virtual content. Further, the storage unit may also store template information of target objects.
In an embodiment of the invention, the virtual content and the correspondence between target object attributes and virtual content may also be stored in a server; in that case, the recording device needs to send the attributes of the target object to the server to obtain the virtual content corresponding to the target object.
The recording device provided by the embodiment of the invention may further include an arithmetic unit, which can be configured to calculate the attribute information of the target object according to the GIS (Geographic Information System) information of the target object, in combination with image tracking and spatial reconstruction methods.
In an embodiment of the invention, the recording device may be a user terminal, such as a smartphone, a tablet computer, or another intelligent terminal, or another smart device possessing the functions described in the embodiments of the invention.
In an embodiment of the invention, the display unit may be integrated in the recording device as a part of it, or may exist as a separate device, such as a smart band, smart glasses, or another display device.
In an embodiment of the invention, the audio synthesis unit 15 may be a microphone or another audio collection device integrated in the recording device. The audio synthesis unit 15 can automatically collect the audio information in the environment content and automatically mix the audio information in the environment content with the received audio information of the virtual content.
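The mixing step can be illustrated as follows. This is a hedged sketch: sample-wise addition with clipping is one common mixing rule, assumed here, and is not specified by the patent.

```python
# Illustrative mix of environment audio with virtual-content audio.
# Samples are 16-bit signed PCM values; the sum-and-clip rule is an assumption.

def mix(env_samples, virtual_samples):
    """Sample-wise mix of two equal-rate PCM streams, clipped to 16-bit range."""
    n = max(len(env_samples), len(virtual_samples))
    env = env_samples + [0] * (n - len(env_samples))          # pad shorter stream
    vir = virtual_samples + [0] * (n - len(virtual_samples))
    return [max(-32768, min(32767, e + v)) for e, v in zip(env, vir)]

mixed = mix([1000, -2000, 30000], [500, 500, 5000])
# the third sample pair sums past the 16-bit range and is clipped
```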
In an embodiment of the invention, synthesizing the interactive content into an image specifically means that, after the interactive content is re-encoded, it can be synthesized into an image by the media file processing unit. In another scheme of this embodiment, the interactive content can also be synthesized into a video, that is, the images can be synthesized one by one into a video, namely the screen recording function in AR technology. In yet another scheme of this embodiment, the synthesized image or video may also include audio content.
In an embodiment of the invention, the AR engine and the 3D engine are each a module in the recording device; in another scheme, the functions of the AR engine and the 3D engine may also be integrated and implemented in one module.
It should be emphasized that, in the recording device 1 provided by the embodiment of the invention, the specific functions of the main modules are as follows:
AR engine 12, also referred to as an augmented reality engine: after receiving the image information sent by the camera, it analyzes the image information, identifies the target object in the video or image, and, combining the GIS information of the image with spatial reconstruction and image tracking algorithms, parses the attribute information of the target object, such as position, scene depth, orientation, and intensity, and sends the attribute information of the target object and the environment content of the image information to the 3D engine 13;
3D engine 13, also referred to as a game engine or rendering engine: it receives the attribute information of the target object and the environment content of the image information sent by the AR engine, looks up the virtual content corresponding to the target object according to the attribute information, superimposes the virtual content on the environment content to form the interactive content, and shares the image information in the interactive content with the video encoding unit 14. It should be noted that the interactive content may be the mutually superimposed virtual content and environment content, and may contain both image information and audio information.
Video encoding unit 14, also referred to as an image encoding unit: it encodes the interactive content sent by the 3D engine 13 and outputs it to the media file processing unit 17. It should be emphasized that, because the image information in the interactive content occupies a large amount of memory, image information is transmitted between the 3D engine and the video encoding unit in a shared mode: the images transmitted directly between the 3D engine and the video encoding unit use the same data format, and during data transmission the 3D engine packages the information into a file package format that the video encoding unit can read directly, so that the video encoding unit can use a data packet sent by the 3D engine immediately after receiving it. This reduces file copies between memory areas and improves transmission efficiency and fluency.
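The shared mode can be sketched as a preallocated buffer that the producer writes in place and the consumer reads directly, with no intermediate copy. This is purely illustrative: a real implementation would use shared GPU or OS memory, and every name and size below is an assumption.

```python
# Sketch of the zero-copy "shared mode" between the 3D engine and the encoder:
# one preallocated buffer, written in place, read without duplication.

FRAME_BYTES = 16                          # illustrative frame size
shared_buffer = bytearray(FRAME_BYTES)    # allocated once, reused every frame

def engine_write_frame(buf, frame_index):
    """3D engine fills the shared buffer in place (no new allocation)."""
    for i in range(len(buf)):
        buf[i] = (frame_index + i) % 256

def encoder_read_frame(buf):
    """Encoder reads the same memory through a zero-copy view."""
    view = memoryview(buf)                # no byte copy happens here
    return sum(view)                      # stand-in for actual encoding work

engine_write_frame(shared_buffer, 1)
checksum = encoder_read_frame(shared_buffer)
```

The point of the design is the absence of a copy step between producer and consumer, which is what the patent credits for the improved efficiency and fluency.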
Audio synthesis unit 15: it receives the audio information of the virtual content of the target object sent by the 3D engine, mixes it with the collected audio information of the environment content, and sends the mixed audio information to the audio encoding unit 16;
Audio encoding unit 16: it receives the mixed audio information sent by the audio synthesis unit 15, encodes the mixed audio information, and outputs it to the media file processing unit 17;
Media file processing unit 17: it receives the video coding data sent by the video encoding unit 14 and the audio coding data sent by the audio encoding unit 16, and merges the received video coding data and audio coding data into the final image.
The recording device provided by the embodiment of the invention can render the environment content of the target object and the virtual content into an intermediate-state Off-Screen image, then re-encode the rendered Off-Screen image and synthesize it into a new video or image; it can also superimpose the rendered Off-Screen image with other virtual content and system content that does not need to be recorded and send the result to the display unit for display. The recording device provided by the embodiment of the invention can thus not only realize the interaction and display of virtual content and the target object, but also record and save the interactive content.
Embodiment two:
As shown in Fig. 2, an embodiment of the invention provides an augmented-reality-based real-time video recording method, using a recording device such as the one in Embodiment one, specifically including:
S1. The camera collects image information of the real scene and sends the collected image information to the AR engine;
Specifically, the camera in the recording device is opened and the part of the real scene that requires interaction is photographed.
S2. The AR engine analyzes the received image information, determines the target object in the image information, obtains the attribute information of the target object, and sends the attribute information of the target object and the environment content to the 3D engine;
In step S2, the AR engine receives the image information sent by the camera and determines the target object in the image by methods such as image retrieval. When there are two or more candidate objects in the image, one or more of them can be selected as the target object as needed; the invention does not limit the selection method, which may be selection by the user according to preference, automatic matching by the system according to settings, and so on.
In step S2, the AR engine obtains the attribute information of the target object, specifically by combining the target object itself and the GIS information of the received image with image tracking and three-dimensional reconstruction. The attribute information may be the three-dimensional space parameter information identifying the target object and the interaction field parameter information of the target object, specifically information such as the position of the target object in three-dimensional space, its posture (scene depth/orientation), and its intensity.
In step S2, the environment content may be the real scene captured by the camera, embodied as the environment in which the captured image is situated, and may also include audio information in the environment of the captured image, and so on.
S3: The 3D engine obtains the virtual content corresponding to the target object according to the received attribute information of the target object, and superimposes the virtual content on the environment content to form interaction content, wherein the 3D engine superimposes the image information of the virtual content on the image information of the environment content, sends the superimposed image information to the video encoding unit, and also sends the superimposed image information to the display unit for display; the 3D engine sends the audio information of the virtual content to the audio synthesis unit;
In step S3, to make the transfer of image information smoother and to reduce the occupation of memory space during transfer, the image may be passed between the 3D engine and the video encoding unit in a shared manner.
S4: The video encoding unit encodes the received image information and then sends it to the media file processing unit;
In step S4, likewise, to make the transfer of image information smoother and to reduce the occupation of memory space during transfer, the image may be passed between the video encoding unit and the media file processing unit in a shared manner.
S5: The audio synthesis unit receives the audio information of the virtual content, mixes it with the captured audio information of the environment content, and sends the mixed audio information to the audio encoding module;
S6: The audio encoding module encodes the received mixed audio information and sends it to the media file processing unit;
S7: The media file processing unit synthesizes the received encoded audio data and encoded video data into complete image information.
In an embodiment of the present invention, the 3D engine obtains the virtual content corresponding to the target object according to the received attribute information of the target object, superimposes the image information of the virtual content on the image information of the environment content, completes the rendering of an intermediate-state image, and sends the rendered intermediate-state image to the video encoding unit. This specifically includes:
S21: After the 3D engine receives the attribute information of the target object, it looks up the virtual content corresponding to the target object according to the attribute information of the target object. The virtual content may include image information and, further, may also include audio information. Specifically:
S211: The virtual content corresponding to different target objects is stored in advance on the user terminal or on a server, and an index table of target-object attribute information and virtual content may be built using the attribute information of the target object as the index;
S212: The index table of target-object attribute information and virtual content is searched according to the attribute information of the target object, and the virtual content corresponding to the target object is determined;
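Steps S211 and S212 amount to a keyed lookup. A minimal Python sketch, assuming a plain dictionary as the index table; the identifiers 001/002/003 follow the example given later in Embodiment Three, and the content strings are placeholders:

```python
# Hypothetical index table: attribute identifier -> virtual content.
VIRTUAL_CONTENT_INDEX = {
    "001": "battle game",    # portrait of a person
    "002": "flower effect",  # flowers
    "003": "building info",  # building
}

def lookup_virtual_content(attribute_id: str):
    """Step S212: search the index table by the target object's attribute."""
    return VIRTUAL_CONTENT_INDEX.get(attribute_id)

print(lookup_virtual_content("001"))  # battle game
```

In a real system the table could live on the device or on a server, as S211 allows; the lookup itself is the same either way.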
S22: The 3D engine generates the augmented-reality content of the interactive environment according to the determined virtual content and the environment content of the target object. Specifically:
S221: The 3D engine draws an intermediate-state Off-Screen render target and renders it to an image; that is, the virtual content of the target object is content-superimposed with the environment content of the target object. Specifically, using a blank Off-Screen as the draw target, the image information of the environment content captured by the camera is superimposed onto the draw target, and the image information of the found virtual content is superimposed onto the same draw target, forming the intermediate-state Off-Screen image; the rendering of the intermediate-state Off-Screen image is completed, and the rendered intermediate-state off-Screen image is shared to the video encoding unit.
It should be noted that the blank Off-Screen may be understood as an OpenGL (Open Graphics Library) draw target, or as a D3D (Direct3D) draw target. The aforementioned draw target (Target) may be a Texture, a renderBuffer (render buffer), etc. The draw target contains a drawing area backed by video memory or main memory; its drawing results are not shown on screen directly, i.e., the drawing result is not directly displayed on the display device.
S222: Following the method of step S221, each subsequent frame is rendered and then shared to the video encoding unit; in principle, one frame of environment-interaction image information and one frame of the virtual-content portion are drawn, i.e. superimposed, on each blank off-Screen.
In an embodiment of the present invention, the rendered intermediate-state off-Screen image may be superimposed with the portion of virtual content that does not need to be recorded, such as UI elements, and even with other system content that needs to be displayed, drawn onto an on-screen render target, and finally presented on the display screen.
In an embodiment of the present invention, at least the following three schemes may be used to render the intermediate-state Off-Screen image:
Scheme one: draw the environment content on one draw target (e.g. TextureA) and the portion of virtual content that needs to be recorded on another draw target (e.g. TextureB), then superimpose the two draw targets, completing the rendering of the intermediate-state Off-Screen image;
Scheme two: on one and the same draw target Texture, first draw the environment content, then draw the portion of virtual content that needs to be recorded, completing the rendering of the intermediate-state Off-Screen image;
Scheme three: on one and the same draw target Texture, first draw the portion of virtual content that needs to be recorded, then draw the environment content, completing the rendering of the intermediate-state Off-Screen image.
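The three schemes differ only in where and in what order the two layers are drawn. Below is a minimal per-pixel Python sketch of alpha-"over" compositing, assuming straight (non-premultiplied) RGBA values in the range 0..1; the actual implementation would do this on the GPU. For this example, scheme one (separate targets, then superimposed) and scheme two (sequential draws on one target) produce the same pixel:

```python
def over(dst, src):
    """Alpha-'over' composite one RGBA pixel onto another (values 0..1)."""
    sr, sg, sb, sa = src
    dr, dg, db, da = dst
    a = sa + da * (1 - sa)
    if a == 0:
        return (0.0, 0.0, 0.0, 0.0)
    blend = lambda s, d: (s * sa + d * da * (1 - sa)) / a
    return (blend(sr, dr), blend(sg, dg), blend(sb, db), a)

env     = (0.2, 0.6, 0.3, 1.0)   # opaque camera pixel (environment content)
virtual = (0.9, 0.1, 0.1, 0.5)   # semi-transparent virtual-content pixel
blank   = (0.0, 0.0, 0.0, 0.0)   # blank off-screen draw target

# Scheme one: env and virtual drawn to separate targets, then superimposed.
tex_a = over(blank, env)
tex_b = over(blank, virtual)
scheme1 = over(tex_a, tex_b)

# Scheme two: env first, then virtual, on the same target.
scheme2 = over(over(blank, env), virtual)

assert all(abs(x - y) < 1e-9 for x, y in zip(scheme1, scheme2))
```

Scheme three reverses the draw order, so the environment content ends up on top; it is therefore suited to cases where the environment layer is partially transparent or drawn only where no virtual content exists.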
With the augmented-reality-based real-time video recording method provided by the embodiment of the present invention, the 3D engine can superimpose the environment content and the virtual content in real time, generate the intermediate-state off-screen image, and, after re-encoding, combine it into new image information; in this way not only real-time interaction with the augmented-reality image but also real-time recording can be accomplished.
In an embodiment of the present invention, sharing the intermediate result ensures the fluency of the recording path (from the augmented-reality engine, through the 3D engine's recording path, to the encoding module) and of the display-module path, and reduces copying between memory blocks as much as possible. If the Off-Screen intermediate drawing result were delivered stage by stage to the input memory area of the video encoder, there would be 2 to 3 potential copies from GPU memory to CPU memory, or within GPU memory, in order to turn the GPU drawing result into memory the encoder can recognize. By using a sharing scheme such as ShareContext to share the draw targets at both ends (e.g. OpenGL Texture and other draw targets), or EGLImage to share the draw targets at both ends (e.g. OpenGL Texture and other draw targets), and by making full use of these technical features of the system, the number of copies is reduced to 0 to 1, achieving product-grade simultaneous recording and display of complex augmented-reality content.
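The zero-copy idea can be illustrated, very loosely, in plain Python with `memoryview`: the consumer reads through a view of the producer's buffer instead of through a duplicate. This is only an analogy for ShareContext/EGLImage texture sharing, not the actual GPU mechanism:

```python
frame = bytearray(8)      # stands in for the off-screen drawing result

# Copy path: the encoder gets its own duplicate of the pixels.
copied = bytes(frame)

# Shared path: the encoder reads through a view of the same buffer.
shared = memoryview(frame)

frame[0] = 255            # the 3D engine draws the next result in place

assert copied[0] == 0     # the copy is stale: it missed the update
assert shared[0] == 255   # the view sees the update, with zero copies made
```

The same trade-off drives the choice described above: a copy is isolated but costs memory bandwidth per frame, while a shared target is free to read but requires both ends to coordinate on when the buffer may be overwritten.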
In an embodiment of the present invention, to limit the potential cost of encoding and the size of the media file, the scheme uses encoding with configurable bit rate and resolution, so that a recorded file of a particular file size and resolution can be output in real time; and the scheme writes timestamps to the audio and video encoding units in real time, which guarantees the audio-video synchronization of the packaged file and avoids the audio-video desynchronization problems that post-processing schemes may have.
Embodiment three:
The embodiment of the present invention further provides an augmented-reality-based real-time video recording method, using the recording device described in Embodiment One. The method includes:
301: The smartphone captures an image of the real scene; for example, the smartphone opens the camera, starts video capture mode, and records the real scene, e.g. recording an image containing at least one of a person, a building, flowers and trees, and an indoor space;
In this embodiment, the description takes a smartphone as the recording device, with all processing and storage completed on the smartphone.
302: The smartphone sends the captured image information to the AR engine for processing;
303: After the AR engine receives the image information, it first analyzes the image and identifies the target objects present in it using an image-retrieval method; that is, the received image content is retrieved and the target objects contained in the image, such as persons, buildings, and flowers, are found. Next, image tracking and three-dimensional reconstruction are performed according to the determined target object combined with the GIS information of the image, so as to compute attribute information such as the position of the target object in three-dimensional space, its pose (scene depth/orientation), and its intensity.
Specifically, suppose that after the AR engine analyzes the image it determines that the contained target object is a portrait of a person. The GIS information of the portrait may be determined by combining the person's relative position in the image, the relative orientation of the portrait (its direction relative to the camera), the absolute GPS position of the portrait, its size, and other information. After image tracking and three-dimensional reconstruction of the person, the spatial position information of the person and predefined interaction information are obtained, such as feedback to certain actions and special treatment of certain objects.
304: The AR engine sends the obtained attribute information of the target object to the 3D engine for processing;
Specifically, the AR engine sends the attribute information of the portrait to the 3D engine, and at the same time also sends the environment content, i.e. the portrait and the real environment the portrait is currently in, to the 3D engine;
305: The 3D engine receives the attribute information of the target object and looks up the corresponding virtual content according to the attribute information; it then generates the augmented-reality content of the interactive environment by combining the determined virtual content with the environment content. Specifically:
3051: The virtual content corresponding to different target objects is stored in advance on the smartphone or on a server, and an index table of target-object attribute information and virtual content may be built using the attribute information of the target object as the index. For example, the correspondence between target-object identifiers and virtual content is configured in the storage module of the smartphone: identifier 001 (identifying the attribute "portrait of a person"), 002 (identifying the attribute "flowers"), 003 (identifying the attribute "building"), and so on. In this embodiment the target object is a portrait of a person, so the virtual content corresponding to the portrait attribute (001) is looked up. In this embodiment, the description takes as an example that the virtual content corresponding to 001 is a "battle game".
3052: The index table of target-object attribute information and virtual content is searched according to the attribute information of the target object, and the virtual content corresponding to the target object is determined. Following the description above, the virtual content corresponding to the portrait attribute is found to be the "battle game", and the virtual content of this "battle game" is obtained.
3053: The virtual content of the target object is content-superimposed with the environment content of the target object; that is, the intermediate-state Off-Screen render target is drawn and rendered to an image. Specifically, a blank Off-Screen is used as the draw target; the blank Off-Screen may be an OpenGL/D3D draw target (Target), such as a Texture or renderBuffer, which contains a drawing area backed by video memory or main memory and whose drawing results are not shown on screen directly. The image information captured by the user terminal, namely the environment-interaction content, is superimposed onto the draw target, and the found virtual content is superimposed onto the same draw target, forming the intermediate-state Off-Screen image; the rendering of the intermediate-state Off-Screen image is completed, and the rendered intermediate-state off-Screen image is shared to the video encoding unit. For example, the portrait and the "battle game" may be drawn on the blank Off-Screen image; the real environment the portrait is in may be drawn first on the blank Off-Screen image and the "battle game" drawn afterwards; or the "battle game" may be drawn first on the blank Off-Screen image and the real environment the portrait is in drawn afterwards.
3054: Following the method of step 3053, each subsequent frame is rendered and then shared to the video encoding unit; in principle, one frame of environment-content image information and one frame of virtual-content image information are drawn, i.e. superimposed, on each blank off-Screen.
It should be noted that, when rendering the image, if some of the virtual content is deemed not to need recording, such as UI (user interface) elements, the intermediate-state Off-Screen image may be superimposed with the virtual content that does not need to be recorded and drawn onto an on-screen render target, then sent to the display screen of the user terminal for display; the so-called on-screen render target may refer to the target selected for display on the display screen of the user terminal.
306: After the video encoding unit receives the shared, rendered intermediate-state off-Screen image, the content required by the AR interaction is synchronized: the draw target identified by the intermediate-state image is synchronized between the video encoding module and the 3D engine module to the encoding module, and is drawn or copied by the encoding module into the encoder input unit (supporting an input Buffer queue); the encoding module completes encoding and outputs encoded video frame data to the media file processing unit.
In step 306, the sharing schemes that may be used are: 1) using ShareContext (context sharing) to share the draw targets at both ends (e.g. OpenGL Texture or other draw targets); 2) using EGLImage to share the draw targets at both ends (e.g. OpenGL Texture or other draw targets).
307: The 3D engine sends the audio information of the virtual content of the target object to the audio synthesis unit; in this embodiment, the audio synthesis unit is described taking a microphone as an example.
308: After the audio synthesis unit receives the audio information of the virtual content, it mixes it with the audio portion of the captured environment content and sends the mixed audio information to the audio encoding module;
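The mixing in step 308 can be sketched as a per-sample sum with clipping, assuming (as an illustration, not a requirement of the patent) that both streams are 16-bit signed PCM at the same sample rate; a real mixer would also resample and apply gain:

```python
def mix(env_samples, virtual_samples):
    """Sum two PCM sample streams and clip to the 16-bit signed range."""
    return [max(-32768, min(32767, e + v))
            for e, v in zip(env_samples, virtual_samples)]

mic   = [1000, -2000, 30000]   # environment audio from the microphone
game  = [500,   500,  5000]    # audio of the virtual content
mixed = mix(mic, game)
print(mixed)  # [1500, -1500, 32767]  (last sample clipped)
```

The clipping step matters: without it, loud overlapping sources would overflow the sample range and wrap into audible distortion.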
309: The audio encoding module encodes the received audio information and sends the encoded audio data to the media file processing unit;
310: The media file processing unit synthesizes the received encoded audio data and encoded video frame data, and outputs complete media data.
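The synthesis in step 310 is essentially interleaving the encoded packets by their timestamps; the timestamps written in real time, as noted in Embodiment Two, are what make this ordering possible. A minimal Python sketch with made-up packet payloads and millisecond timestamps:

```python
import heapq

def mux(video_packets, audio_packets):
    """Interleave two timestamp-sorted packet streams into one media stream."""
    return list(heapq.merge(video_packets, audio_packets))

video = [(0, "V0"), (40, "V1"), (80, "V2")]              # (ms, payload)
audio = [(10, "A0"), (33, "A1"), (56, "A2"), (79, "A3")]
stream = mux(video, audio)
print([p[1] for p in stream])
# ['V0', 'A0', 'A1', 'V1', 'A2', 'A3', 'V2']
```

Keeping the two streams interleaved in timestamp order at write time is what lets a player keep audio and video in sync without any post-processing pass.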
In an embodiment of the present invention, the media file processing unit synthesizes the media data; further, it may also save the synthesized media data, realizing the so-called screen-recording function.
Embodiment four:
As shown in Fig. 3, the embodiment of the present invention further provides a recording device, including an augmented reality (AR) engine 401 and a 3D engine 403, wherein:
the AR engine 401 is configured to receive image information, identify the target object in the image information, extract the attribute information of the target object, and send the attribute information of the target object and the environment content in which the target object is situated to the 3D engine 403, the environment content being the audio information or image information included in the image information;
the 3D engine 403 is configured to receive the attribute information of the target object and the environment content sent by the AR engine 401, determine the virtual content corresponding to the target object according to the attribute information of the target object, and superimpose the virtual content on the environment content to form interaction content, so that the interaction content can be recorded as a video and/or displayed by a display unit.
In an embodiment of the present invention, the 3D engine 403 is specifically configured to use a blank intermediate-state Off-Screen image as the draw target, superimpose onto the draw target the image information of the environment content and the image information of the virtual content, and complete the rendering of the intermediate-state Off-Screen image, forming the interaction content.
In an embodiment of the present invention, the 3D engine 403 is further configured to send the interaction content to the display unit for display.
In an embodiment of the present invention, the 3D engine 403 sends the interaction content to the encoding unit 405 for re-encoding, so that the re-encoded interaction content can be synthesized again into a video or image.
In an embodiment of the present invention, when the encoding unit 405 includes a video encoding unit 4051:
the video encoding unit 4051 is configured to encode the received intermediate-state Off-Screen image and send the image coding information to the media file processing unit 407;
the media file processing unit 407 is configured to synthesize the received image coding information into a video or image, and to display the synthesized video or image through the display unit.
In an embodiment of the present invention, when the encoding unit 405 also includes an audio encoding unit 4052, the recording device 4 further includes:
the 3D engine 403, further configured to send the audio information in the virtual content corresponding to the target object to an audio synthesis unit 409;
the audio synthesis unit 409, configured to receive the audio information of the virtual content sent by the 3D engine 403, mix it with the audio information of the environment content, and send the mixed audio information to the audio encoding unit 4052;
the audio encoding unit 4052, configured to receive the audio information sent by the audio synthesis unit, encode the audio information, and send the audio encoding result to the media file processing unit 407 for processing.
It should be noted that, for details not described for the recording device in Embodiment One, reference may be made to the descriptions in Embodiment Two and Embodiment Three; and the methods described in Embodiment Two and Embodiment Three may also use the recording device described in Embodiment One. For details not described for the recording device in Embodiment Four, reference may be made to the descriptions in Embodiments One, Two, and Three.
It should be understood that, in the various embodiments of the present invention, the magnitude of the sequence numbers of the above processes does not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
In addition, the terms "system" and "network" are often used interchangeably herein. The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may indicate the three cases that A exists alone, that both A and B exist, and that B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, in computer software, or in a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are performed in hardware or in software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered beyond the scope of the present invention.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the systems, devices, and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative; for instance, the division of the units is only a division by logical function, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, or electrical, mechanical, or other forms of connection.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purposes of the solutions of the embodiments of the present invention.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
Through the description of the above embodiments, it will be clear to those skilled in the art that the present invention can be implemented in hardware, in software, or in a combination of the two. When implemented in software, the above functions may be stored in a computer-readable medium or transmitted as one or more instructions or code on a computer-readable medium. Computer-readable media include computer storage media and communication media, where communication media include any media that facilitate transferring a computer program from one place to another. A storage medium may be any medium that a computer can access. By way of example and not limitation, computer-readable media may include RAM, ROM, EEPROM, CD-ROM or other optical disc storage, magnetic disk media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. In addition, any connection may appropriately become a computer-readable medium; for example, if software is transmitted using coaxial cable, fiber-optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, these are included in the definition of medium. Disk and disc, as used in the present invention, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of protection of computer-readable media.
In summary, the foregoing describes merely preferred embodiments of the technical solution of the present invention and is not intended to limit the scope of protection of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (18)

  1. A recording device, characterized in that it comprises an AR engine and a 3D engine, wherein:
    the AR engine is configured to receive image information, identify a target object in the image information, extract attribute information of the target object, and send the attribute information of the target object and environment content in which the target object is situated to the 3D engine, the environment content including environment information in which the target object is situated;
    the 3D engine is configured to receive the attribute information of the target object and the environment content sent by the AR engine, determine, according to the attribute information of the target object, virtual content corresponding to the target object, and superimpose the content in the virtual content that needs to be recorded on the environment content to form interaction content, so that the interaction content can be synthesized into an image, and display the interaction content through a display unit after it is superimposed with the content in the virtual content that does not need to be recorded.
  2. The recording device according to claim 1, characterized in that: the 3D engine is specifically configured to use a blank intermediate-state off-screen image as a draw target, superimpose onto the draw target the image information of the environment content and the image information of the content in the virtual content that needs to be recorded, and complete the rendering of the intermediate-state off-screen image to form the interaction content.
  3. The recording device according to claim 2, characterized in that the 3D engine is specifically configured to:
    draw the image information of the environment content in a first draw target and the image information of the content in the virtual content that needs to be recorded in a second draw target, and then superimpose the first draw target and the second draw target, completing the rendering of the intermediate-state off-screen image; or
    first draw the image information of the environment content in one and the same draw target and then draw the image information of the content in the virtual content that needs to be recorded, completing the rendering of the intermediate-state off-screen image; or
    first draw the image information of the content in the virtual content that needs to be recorded in one and the same draw target and then draw the image information of the environment content, completing the rendering of the intermediate-state off-screen image.
  4. The recording device according to any one of claims 1 to 3, characterized in that the recording device further includes:
    a camera, configured to send captured image information to the AR engine.
  5. The recording device according to claim 2 or 3, characterized in that: the 3D engine sends the interaction content to an encoding unit for encoding, so that the encoded interaction content can be synthesized into an image.
  6. The recording device according to claim 5, characterized in that, when the encoding unit includes a video encoding unit:
    the video encoding unit is configured to encode the received intermediate-state off-screen image and send the image coding information to a media file processing unit;
    the media file processing unit is configured to synthesize the received image coding information into an image.
  7. The recording device according to claim 6, characterized in that, when the encoding unit also includes an audio encoding unit, the recording device further includes:
    the 3D engine, further configured to send audio information in the virtual content corresponding to the target object to an audio synthesis unit;
    the audio synthesis unit, configured to receive the audio information of the virtual content sent by the 3D engine, mix it with audio information of the environment content, and send the mixed audio information to the audio encoding unit;
    the audio encoding unit, configured to receive the audio information sent by the audio synthesis unit, encode the audio information, and send the audio encoding result to the media file processing unit for processing.
  8. The recording device according to claim 7, characterized in that:
    the media file processing unit is further configured to merge the received image coded data and audio coded data into an image.
  9. The recording device according to any one of claims 1 to 3, characterized in that the recording device further includes:
    a storage unit, configured to store virtual content and the correspondence between target-object attribute information and virtual content.
  10. An augmented-reality-based real-time video recording method, characterized in that the method includes:
    a 3D engine receiving attribute information of a target object and environment content of the target object sent by an AR engine, the environment content including environment information in which the target object is situated;
    the 3D engine determining, according to the attribute information of the target object, virtual content corresponding to the target object, and superimposing the content in the virtual content of the target object that needs to be recorded on the environment content of the target object to form interaction content, so that the interaction content can be synthesized into an image, and displaying the interaction content through a display unit after it is superimposed with the content in the virtual content that does not need to be recorded.
  11. The method according to claim 10, characterized in that the method further includes:
    the AR engine receiving the image information sent by a camera, analyzing the received image information, determining the target object in the image information, and obtaining the attribute information of the target object;
    the AR engine sending the attribute information of the target object and the environment content of the target object to the 3D engine.
  12. The method according to claim 10, characterized in that the 3D engine determining, according to the attribute information of the target object, the virtual content corresponding to the target object comprises:
    the 3D engine determining the virtual content corresponding to the target object according to the attribute information of the target object and a prestored correspondence between the virtual content and target object attributes.
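The prestored correspondence of claim 12 is, in essence, a lookup table keyed by the target object's attribute information. A minimal sketch, with invented keys and values (a real system might back this with a database or a cloud service):

```python
# Hypothetical prestored correspondence between target-object attributes
# and virtual-content identifiers; keys and values are invented examples.
CORRESPONDENCE = {
    "image:movie_poster": "trailer_animation",
    "marker:product_box": "3d_product_model",
}

def lookup_virtual_content(attribute):
    # Returns None when no virtual content is registered for this attribute,
    # so the caller can fall back to showing the plain camera feed.
    return CORRESPONDENCE.get(attribute)
```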
  13. The method according to any one of claims 10 to 12, characterized in that the 3D engine superimposing the content to be recorded in the virtual content of the target object and the environment content of the target object to form the interaction content specifically comprises:
    the 3D engine using a blank off-screen image as a drawing target, superimposing the image information of the environment content on the drawing target while also superimposing on the drawing target the image information of the content to be recorded in the virtual content of the target object, thereby forming an intermediate-state off-screen image and completing the rendering of the intermediate-state off-screen image.
  14. The method according to claim 13, characterized in that the 3D engine superimposing the image information of the environment content on the drawing target while also superimposing the image information of the virtual content on the drawing target, to complete the rendering of the intermediate-state off-screen image, specifically comprises:
    the 3D engine drawing the image information of the environment in a first drawing target and drawing the image information of the content to be recorded in the virtual content in a second drawing target, then superimposing the first drawing target and the second drawing target, to complete the rendering of the intermediate-state off-screen image; or
    the 3D engine first drawing the image information of the environment content in a single drawing target and then drawing the image information of the content to be recorded in the virtual content, to complete the rendering of the intermediate-state off-screen image; or
    the 3D engine first drawing the image information of the content to be recorded in the virtual content in a single drawing target and then drawing the image information of the environment content, to complete the rendering of the intermediate-state off-screen image.
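The three drawing orders of claim 14 can be illustrated with a toy model in which lists stand in for drawing targets (framebuffers). Note that order (c), virtual content first and environment second, only works if the environment is painted where the target is still empty; in a real renderer that role is typically played by depth or stencil testing, which is an assumption of this sketch, not something the claim specifies.

```python
# Toy illustration of the three drawing orders; None marks a transparent pixel.

def draw_over(target, layer):
    # Paint the layer on top of the current target contents.
    return [l if l is not None else t for t, l in zip(target, layer)]

def draw_under(target, layer):
    # Paint the layer only where the target is still transparent
    # (standing in for a depth/stencil test in a real renderer).
    return [t if t is not None else l for t, l in zip(target, layer)]

env     = [5, 5, 5]          # environment image (fully opaque)
virtual = [None, 7, None]    # recordable virtual content
blank   = [None, None, None] # blank off-screen image

# (a) two separate drawing targets, superimposed afterwards
a = draw_over(draw_over(blank, env), draw_over(blank, virtual))

# (b) one target: environment first, then virtual content on top
b = draw_over(draw_over(blank, env), virtual)

# (c) one target: virtual content first, then the environment underneath
c = draw_under(draw_over(blank, virtual), env)
```

All three orders produce the same intermediate-state off-screen image, which is why the claim can list them as interchangeable alternatives.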
  15. The method according to claim 14, characterized in that the 3D engine superimposes the intermediate-state off-screen image with the image information of the content in the virtual content that does not need to be recorded, and then sends the result to the display unit for display.
  16. The method according to claim 13, characterized in that the method further comprises:
    the 3D engine sending the interaction content to a coding unit, so that the coding unit encodes the interaction content and then sends it to a media file processing unit for video synthesis.
  17. The method according to claim 16, characterized in that the method further comprises:
    the 3D engine sending the audio information of the virtual content of the target object to an audio synthesis unit, so that the audio synthesis unit mixes the audio portion of the virtual content with the audio portion of the environment content and then sends the result to an audio coding unit for encoding.
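The mixing step can be pictured as a sample-wise combination of the virtual-content audio with the environment (microphone) audio before encoding. Averaging the two streams is one simple choice and purely an assumption here; the claim does not specify the mixing algorithm.

```python
def mix(virtual_samples, environment_samples):
    # Average the two streams sample by sample; a real mixer would also
    # handle resampling, channel layout, gain, and clipping.
    return [(v + e) / 2 for v, e in zip(virtual_samples, environment_samples)]

mixed = mix([0.2, -0.4], [0.6, 0.0])
```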
  18. The method according to claim 17, characterized in that the method further comprises:
    the audio synthesis unit receiving the audio information of the target object sent by the 3D engine, mixing it with the audio information of the environment in which the target object is located, and then sending the result to the audio coding unit;
    the audio coding unit receiving the mixed audio information sent by the audio synthesis unit, encoding the audio information, and sending the coding result to the media file processing unit;
    the 3D engine also sending the intermediate-state off-screen image to a video encoding unit;
    the video encoding unit encoding the received intermediate-state off-screen image and sending the image coding information to the media file processing unit;
    the media file processing unit synthesizing the received image coded data and audio coded data into a video.
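The media file processing step corresponds to what a container muxer does: interleaving already-encoded video and audio packets by timestamp into one stream, as an MP4 writer would. A toy sketch with an invented (timestamp_ms, kind, payload) packet format:

```python
# Toy sketch of the media-file processing step; the packet format and the
# sample payloads are hypothetical, not taken from the patent.

def mux(video_packets, audio_packets):
    # A stable sort by timestamp interleaves the two elementary streams.
    return sorted(video_packets + audio_packets, key=lambda p: p[0])

stream = mux(
    [(0, "video", b"keyframe"), (40, "video", b"delta")],
    [(0, "audio", b"aac0"), (23, "audio", b"aac1")],
)
```

Real muxers additionally write container headers, track metadata, and seek indexes; only the interleaving idea is shown here.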
CN201511020454.6A 2015-12-30 2015-12-30 Video real time recording method and recording arrangement based on augmented reality Active CN105635712B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201710986097.1A CN107682688B (en) 2015-12-30 2015-12-30 Video real-time recording method and recording equipment based on augmented reality
CN201511020454.6A CN105635712B (en) 2015-12-30 2015-12-30 Video real time recording method and recording arrangement based on augmented reality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201511020454.6A CN105635712B (en) 2015-12-30 2015-12-30 Video real time recording method and recording arrangement based on augmented reality

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN201710986097.1A Division CN107682688B (en) 2015-12-30 2015-12-30 Video real-time recording method and recording equipment based on augmented reality

Publications (2)

Publication Number Publication Date
CN105635712A CN105635712A (en) 2016-06-01
CN105635712B true CN105635712B (en) 2018-01-19

Family

ID=56050146

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201710986097.1A Active CN107682688B (en) 2015-12-30 2015-12-30 Video real-time recording method and recording equipment based on augmented reality
CN201511020454.6A Active CN105635712B (en) 2015-12-30 2015-12-30 Video real time recording method and recording arrangement based on augmented reality

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201710986097.1A Active CN107682688B (en) 2015-12-30 2015-12-30 Video real-time recording method and recording equipment based on augmented reality

Country Status (1)

Country Link
CN (2) CN107682688B (en)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106130886A (en) * 2016-07-22 2016-11-16 聂迪 The methods of exhibiting of extension information and device
CN106295504A (en) * 2016-07-26 2017-01-04 车广为 Enhancing display packing on the basis of recognition of face
CN106373198A (en) * 2016-09-18 2017-02-01 福州大学 Method for realizing augmented reality
US10110871B2 (en) * 2016-10-31 2018-10-23 Disney Enterprises, Inc. Recording high fidelity digital immersive experiences through off-device computation
CN106791620A (en) * 2016-12-05 2017-05-31 西南石油大学 Buried pipeline method for inspecting and device based on AR technologies and geographical information technology
CN107066975B (en) * 2017-04-17 2019-09-13 合肥工业大学 Video identification and tracking system and its method based on depth transducer
CN108875460B (en) * 2017-05-15 2023-06-20 腾讯科技(深圳)有限公司 Augmented reality processing method and device, display terminal and computer storage medium
CN107441714A (en) * 2017-06-01 2017-12-08 杨玉苹 A kind of image processing method and its device, shooting game fighting system and its method of work for realizing AR first person shooting games
CN107277494A (en) * 2017-08-11 2017-10-20 北京铂石空间科技有限公司 three-dimensional display system and method
CN108111832A (en) * 2017-12-25 2018-06-01 北京麒麟合盛网络技术有限公司 The asynchronous interactive method and system of augmented reality AR videos
KR102549932B1 (en) * 2018-01-22 2023-07-03 애플 인크. Method and device for presenting synthesized reality companion content
CN108600858B (en) * 2018-05-18 2020-08-04 高新兴科技集团股份有限公司 Video playing method for synchronously displaying AR information
CN109035420A (en) * 2018-08-21 2018-12-18 维沃移动通信有限公司 A kind of processing method and mobile terminal of augmented reality AR image
CN109040619A (en) * 2018-08-24 2018-12-18 合肥景彰科技有限公司 A kind of video fusion method and apparatus
CN109302617B (en) * 2018-10-19 2020-12-15 武汉斗鱼网络科技有限公司 Multi-element-designated video microphone connecting method, device, equipment and storage medium
CN109408128B (en) * 2018-11-10 2022-10-11 歌尔光学科技有限公司 Split AR (augmented reality) device communication method and AR device
CN110300322B (en) * 2019-04-24 2021-07-13 网宿科技股份有限公司 Screen recording method, client and terminal equipment
CN110738737A (en) * 2019-10-15 2020-01-31 北京市商汤科技开发有限公司 AR scene image processing method and device, electronic equipment and storage medium
CN111131776A (en) * 2019-12-20 2020-05-08 中译语通文娱科技(青岛)有限公司 Intelligent video object replacement system based on Internet of things
CN111815782A (en) * 2020-06-30 2020-10-23 北京市商汤科技开发有限公司 Display method, device and equipment of AR scene content and computer storage medium

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100470452C (en) * 2006-07-07 2009-03-18 华为技术有限公司 Method and system for implementing three-dimensional enhanced reality
EP1887526A1 (en) * 2006-08-11 2008-02-13 Seac02 S.r.l. A digitally-augmented reality video system
CN101520904B (en) * 2009-03-24 2011-12-28 上海水晶石信息技术有限公司 Reality augmenting method with real environment estimation and reality augmenting system
CN103136793A (en) * 2011-12-02 2013-06-05 中国科学院沈阳自动化研究所 Live-action fusion method based on augmented reality and device using the same
CA2876130A1 (en) * 2012-06-14 2013-12-19 Bally Gaming, Inc. System and method for augmented reality gaming
CN102799456B (en) * 2012-07-24 2015-11-25 上海晨思电子科技有限公司 A kind of game engine loads the method for resource file, device and computing machine
CN102831401B (en) * 2012-08-03 2016-01-13 樊晓东 To following the tracks of without specific markers target object, three-dimensional overlay and mutual method and system
CN102903144B (en) * 2012-08-03 2015-05-27 樊晓东 Cloud computing based interactive augmented reality system implementation method
CN102902710B (en) * 2012-08-08 2015-08-26 成都理想境界科技有限公司 Based on the augmented reality method of bar code, system and mobile terminal
CN104704535A (en) * 2012-10-02 2015-06-10 索尼公司 Augmented reality system
CN103677211B (en) * 2013-12-09 2016-07-06 华为软件技术有限公司 Realize the device and method of augmented reality application
CN103996314A (en) * 2014-05-22 2014-08-20 南京奥格曼提软件科技有限公司 Teaching system based on augmented reality
CN104394324B (en) * 2014-12-09 2018-01-09 成都理想境界科技有限公司 Special efficacy video generation method and device
CN104616243B (en) * 2015-01-20 2018-02-27 北京道和汇通科技发展有限公司 A kind of efficient GPU 3 D videos fusion method for drafting
CN104834897A (en) * 2015-04-09 2015-08-12 东南大学 System and method for enhancing reality based on mobile platform
CN105120191A (en) * 2015-07-31 2015-12-02 小米科技有限责任公司 Video recording method and device
CN105184858A (en) * 2015-09-18 2015-12-23 上海历影数字科技有限公司 Method for augmented reality mobile terminal

Also Published As

Publication number Publication date
CN107682688A (en) 2018-02-09
CN105635712A (en) 2016-06-01
CN107682688B (en) 2020-02-07

Similar Documents

Publication Publication Date Title
CN105635712B (en) Video real time recording method and recording arrangement based on augmented reality
CN106816077B (en) Interactive sandbox methods of exhibiting based on two dimensional code and augmented reality
CN106710003B (en) OpenG L ES-based three-dimensional photographing method and system
CN104392045B (en) A kind of real time enhancing virtual reality system and method based on intelligent mobile terminal
CN105323252A (en) Method and system for realizing interaction based on augmented reality technology and terminal
CN107018336A (en) The method and apparatus of image procossing and the method and apparatus of Video processing
CN106157354B (en) A kind of three-dimensional scenic switching method and system
US20090179892A1 (en) Image viewer, image displaying method and information storage medium
CN107798932A (en) A kind of early education training system based on AR technologies
CN108986190A (en) A kind of method and system of the virtual newscaster based on human-like persona non-in three-dimensional animation
CN109078327A (en) Game implementation method and equipment based on AR
CN108961367A (en) The method, system and device of role image deformation in the live streaming of three-dimensional idol
CN108668168A (en) Android VR video players and its design method based on Unity 3D
CN108961375A (en) A kind of method and device generating 3-D image according to two dimensional image
CN108416832A (en) Display methods, device and the storage medium of media information
CN111833458A (en) Image display method and device, equipment and computer readable storage medium
CN110120087A (en) The label for labelling method, apparatus and terminal device of three-dimensional sand table
CN110503707A (en) A kind of true man's motion capture real-time animation system and method
CN110290291A (en) Picture synthesis method and device and interactive approach
Wang et al. The intangible cultural heritage show mode based on AR technology in museums-take the Li nationality non-material cultural heritage as an example
CN107564084A (en) A kind of cardon synthetic method, device and storage device
CN108320331A (en) A kind of method and apparatus for the augmented reality video information generating user's scene
CN203825856U (en) Power distribution simulation training system
CN116152416A (en) Picture rendering method and device based on augmented reality and storage medium
CN102855652B (en) Method for redirecting and cartooning face expression on basis of radial basis function for geodesic distance

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Augmented-reality-based real-time video recording method and recording equipment

Effective date of registration: 20190103

Granted publication date: 20180119

Pledgee: Zhejiang Tailong Commercial Bank Co., Ltd. Shanghai Branch

Pledgor: EASYAR INFORMATION TECHNOLOGY (SHANGHAI) CO., LTD.

Registration number: 2019310000001