CN108074278A - Video presentation method, device and equipment - Google Patents

Video presentation method, device and equipment

Info

Publication number
CN108074278A
CN108074278A CN201611024592.6A
Authority
CN
China
Prior art keywords
user
scenic spot
information
dynamic
virtual tour
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611024592.6A
Other languages
Chinese (zh)
Inventor
陈鹏 (Chen Peng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baidu Online Network Technology Beijing Co Ltd
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201611024592.6A priority Critical patent/CN108074278A/en
Publication of CN108074278A publication Critical patent/CN108074278A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

This application discloses a video presentation method, device and equipment. One specific embodiment of the method includes: receiving a virtual tour request sent, through a virtual reality device, by a user wearing the virtual reality device; acquiring a pre-established three-dimensional scene model of the virtual tour scenic spot corresponding to the virtual tour request; receiving, in real time, characteristic information of the user collected by the virtual reality device, and establishing a three-dimensional character model of the user according to the characteristic information of the user; adding the three-dimensional character model of the user to the three-dimensional scene model in real time to form a dynamic three-dimensional model of the user at the virtual tour scenic spot, and rendering the dynamic three-dimensional model to form a dynamic three-dimensional video; and sending the dynamic three-dimensional video to the virtual reality device in real time so that the virtual reality device presents the dynamic three-dimensional video. This embodiment displays, in real time in the virtual reality device worn by the user, a dynamic three-dimensional video combining the three-dimensional video of the virtual tour scenic spot and the user.

Description

Video presentation method, device and equipment
Technical field
This application relates to the field of computer technology, specifically to the field of virtual reality technology, and particularly to a video presentation method, device and equipment.
Background
With the development of the economy and the accelerating pace of life, many people who love travel cannot visit the scenic spots they like in person, due to constraints of money or time. Virtual tourism uses computer technology so that users can tour the landscape of a travel scenic spot without having to reach its real environment.
However, most existing virtual tour methods are implemented by presenting images (two-dimensional or three-dimensional) of travel scenic spots to users on computers or smart mobile devices. Users must use a mouse, keyboard or touch screen to move within the images of a scenic spot in order to view images of different spots. Users cannot move within the virtual scenic spot through their own body movements, and therefore cannot interact with the scenic spot.
Summary of the invention
The purpose of this application is to propose an improved video presentation method, device and equipment, to solve the technical problems mentioned in the background section above.
In a first aspect, this application provides a video presentation method, including: receiving a virtual tour request sent, through a virtual reality device, by a user wearing the virtual reality device, where the virtual tour request includes virtual tour scenic spot information; acquiring, according to the virtual tour scenic spot information, a pre-established three-dimensional scene model of the virtual tour scenic spot corresponding to the virtual tour scenic spot information; receiving, in real time, characteristic information of the user collected by the virtual reality device, and establishing a three-dimensional character model of the user according to the characteristic information of the user; adding the three-dimensional character model of the user to the three-dimensional scene model in real time to form a dynamic three-dimensional model of the user at the virtual tour scenic spot, and rendering the dynamic three-dimensional model to form a dynamic three-dimensional video; and sending the dynamic three-dimensional video to the virtual reality device in real time so that the virtual reality device presents the dynamic three-dimensional video.
In some embodiments, the method further includes: detecting in real time whether there is, at the virtual tour scenic spot, a first user who is different from the user and is in the same visual scene as the user; and, in response to detecting at least one first user: adding the three-dimensional character model of each first user to the dynamic three-dimensional model to form a multi-user dynamic three-dimensional model of the virtual tour scenic spot, and rendering the multi-user dynamic three-dimensional model to form a multi-user dynamic three-dimensional video; and sending the multi-user dynamic three-dimensional video to the virtual reality device, so that the virtual reality device presents the multi-user dynamic three-dimensional video.
In some embodiments, the characteristic information of the user includes image information of the user and at least one of the following: sound information and action information of the user.
In some embodiments, the virtual tour scenic spot information includes a virtual tour scenic spot identifier and at least one of the following about the virtual tour scenic spot: location information, season information, weather information, time information and light information.
In some embodiments, the method further includes a step of establishing the three-dimensional scene model of the virtual tour scenic spot, which includes: acquiring at least one image collected by at least one camera set up at the real travel scenic spot corresponding to the virtual tour scenic spot information; and establishing the three-dimensional scene model of the virtual tour scenic spot according to the at least one image.
In some embodiments, before the three-dimensional scene model of the virtual tour scenic spot is established according to the at least one image, the step of establishing the three-dimensional scene model further includes: acquiring laser point cloud data collected by laser radars set at the positions of the cameras in the real travel scenic spot. Establishing the three-dimensional scene model of the virtual tour scenic spot according to the at least one image then includes: determining, according to the at least one image and the laser point cloud data, position information of the physical objects indicated by the pixels in the at least one image relative to the cameras; and establishing the three-dimensional scene model of the virtual tour scenic spot according to the at least one image and the relative position information of the physical objects indicated by the pixels in the at least one image.
In some embodiments, before the three-dimensional scene model of the virtual tour scenic spot is established according to the at least one image and the relative position information of the physical objects indicated by the pixels in the at least one image, the step of establishing the three-dimensional scene model further includes: acquiring first absolute position information collected by positioning receivers set at the positions of the cameras in the real travel scenic spot. Establishing the three-dimensional scene model then includes: determining, according to the first absolute position information and the relative position information of the physical objects indicated by the pixels in the at least one image, second absolute position information of the physical objects indicated by those pixels; and establishing the three-dimensional scene model of the virtual tour scenic spot according to the at least one image and the second absolute position information of the physical objects indicated by the pixels in the at least one image.
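The position composition in the embodiment above (the camera's first absolute position combined with an object's position relative to the camera yields the object's second absolute position) can be sketched as follows. This is a minimal illustration under the assumption that the camera's orientation is known; the function name and the use of a rotation matrix are assumptions, not details from the patent.

```python
def to_absolute(camera_abs, rel_camera_frame, rotation):
    """Second absolute position = camera's (first) absolute position
    plus the camera-frame offset rotated into the world frame."""
    rotated = tuple(sum(rotation[i][j] * rel_camera_frame[j] for j in range(3))
                    for i in range(3))
    return tuple(camera_abs[i] + rotated[i] for i in range(3))

# Example: camera at world position (100, 50, 10); an object 3 m straight
# ahead of a camera whose axes happen to align with the world axes.
IDENTITY = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
abs_pos = to_absolute((100.0, 50.0, 10.0), (0.0, 0.0, 3.0), IDENTITY)
# abs_pos == (100.0, 50.0, 13.0)
```

In a real pipeline the rotation would come from the camera's calibrated pose rather than being the identity.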
In a second aspect, this application provides a video presentation device, including: a first receiving unit, configured to receive a virtual tour request sent, through a virtual reality device, by a user wearing the virtual reality device, where the virtual tour request includes virtual tour scenic spot information; an acquiring unit, configured to acquire, according to the virtual tour scenic spot information, a pre-established three-dimensional scene model of the virtual tour scenic spot corresponding to the virtual tour scenic spot information; a second receiving unit, configured to receive, in real time, characteristic information of the user collected by the virtual reality device, and establish a three-dimensional character model of the user according to the characteristic information of the user; a rendering unit, configured to add the three-dimensional character model of the user to the three-dimensional scene model in real time to form a dynamic three-dimensional model of the user at the virtual tour scenic spot, and render the dynamic three-dimensional model to form a dynamic three-dimensional video; and a first sending unit, configured to send the dynamic three-dimensional video to the virtual reality device in real time so that the virtual reality device presents the dynamic three-dimensional video.
In some embodiments, the device further includes: a detection unit, configured to detect in real time whether there is, at the virtual tour scenic spot, a first user who is different from the user and is in the same visual scene as the user; and a second sending unit, configured to, if the detection unit detects at least one such first user: add the three-dimensional character model of each first user to the dynamic three-dimensional model to form a multi-user dynamic three-dimensional model of the virtual tour scenic spot, render the multi-user dynamic three-dimensional model to form a multi-user dynamic three-dimensional video, and send the multi-user dynamic three-dimensional video to the virtual reality device so that the virtual reality device presents the multi-user dynamic three-dimensional video.
In some embodiments, the characteristic information of the user includes image information of the user and at least one of the following: sound information and action information of the user.
In some embodiments, the virtual tour scenic spot information includes a virtual tour scenic spot identifier and at least one of the following about the virtual tour scenic spot: location information, season information, weather information, time information and light information.
In some embodiments, the device further includes a three-dimensional scene model establishing unit, which includes: a first acquiring module, configured to acquire at least one image collected by at least one camera set up at the real travel scenic spot corresponding to the virtual tour scenic spot information; and an establishing module, configured to establish the three-dimensional scene model of the virtual tour scenic spot according to the at least one image.
In some embodiments, the three-dimensional scene model establishing unit further includes a second acquiring module, configured to acquire laser point cloud data collected by laser radars set at the positions of the cameras in the real travel scenic spot. The establishing module is further configured to: determine, according to the at least one image and the laser point cloud data, position information of the physical objects indicated by the pixels in the at least one image relative to the cameras; and establish the three-dimensional scene model of the virtual tour scenic spot according to the at least one image and the relative position information of the physical objects indicated by the pixels in the at least one image.
In some embodiments, the three-dimensional scene model establishing unit further includes a third acquiring module, configured to acquire first absolute position information collected by positioning receivers set at the positions of the cameras in the real travel scenic spot. The establishing module is further configured to: determine, according to the first absolute position information and the relative position information of the physical objects indicated by the pixels in the at least one image, second absolute position information of those physical objects; and establish the three-dimensional scene model of the virtual tour scenic spot according to the at least one image and the second absolute position information of the physical objects indicated by the pixels in the at least one image.
In a third aspect, this application provides an equipment, including one or more processors and a memory storing one or more programs which, when executed by the one or more processors, cause the one or more processors to perform the video presentation method described above.
With the video presentation method, device and equipment provided by this application, a virtual tour request sent through a virtual reality device by a user wearing the device is received; then, according to the virtual tour scenic spot information, the pre-established three-dimensional scene model of the virtual tour scenic spot corresponding to the virtual tour scenic spot information is acquired; the characteristic information of the user collected by the virtual reality device is received in real time, and a three-dimensional character model of the user is established according to it; the three-dimensional character model of the user is then added to the three-dimensional scene model in real time to form a dynamic three-dimensional model of the user at the virtual tour scenic spot, and the dynamic three-dimensional model is rendered to form a dynamic three-dimensional video; finally, the dynamic three-dimensional video is sent to the virtual reality device in real time so that the virtual reality device presents it. The three-dimensional video of the virtual tour scenic spot and the dynamic three-dimensional video of the user are thereby displayed in the virtual reality device worn by the user; that is, the user can experience touring the virtual scenic spot in real time without actually reaching the real location of the travel scenic spot.
Description of the drawings
Other features, objects and advantages of this application will become more apparent by reading the following detailed description of non-limiting embodiments with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which this application can be applied;
Fig. 2 is a flow chart of one embodiment of the video presentation method according to this application;
Fig. 3 is a schematic diagram of an application scenario of the video presentation method according to this application;
Fig. 4 is a flow chart of another embodiment of the video presentation method according to this application;
Fig. 5 is a schematic diagram of another application scenario of the video presentation method according to this application;
Fig. 6 is a structural diagram of one embodiment of the video presentation device according to this application;
Fig. 7 is a structural diagram of a computer system suitable for implementing a server of the embodiments of this application.
Specific embodiment
This application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the related invention, rather than to limit the invention. It should also be noted that, for convenience of description, only the parts related to the invention are shown in the accompanying drawings.
It should be noted that, where there is no conflict, the embodiments of this application and the features in the embodiments can be combined with each other. This application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the video presentation method or video presentation device of this application can be applied.
As shown in Fig. 1, the system architecture 100 may include virtual reality devices 101, 102 and 103, a network 104 and a server 105. The network 104 serves as the medium providing communication links between the virtual reality devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, or fiber optic cables.
A user may wear the virtual reality devices 101, 102, 103 and interact with the server 105 through the network 104, to receive or send messages. Various applications for presenting three-dimensional videos and collecting characteristic information of the user may be installed on the virtual reality devices 101, 102, 103.
The server 105 may be a server providing various services, for example, a background server supporting the virtual tour requests sent by the virtual reality devices 101, 102, 103. The background server may analyze and otherwise process data such as a received virtual tour request, and feed the processing result (for example, the dynamic three-dimensional video of the user at the virtual tour scenic spot) back to the virtual reality device.
It should be noted that the video presentation method provided by the embodiments of this application is generally performed by the server 105, although some steps may also be performed by the virtual reality devices 101, 102, 103; correspondingly, the video presentation device is generally set in the server 105, although some of its units may also be set in the virtual reality devices 101, 102, 103.
It should be understood that the numbers of virtual reality devices, networks and servers in Fig. 1 are merely illustrative. There may be any number of virtual reality devices, networks and servers according to implementation needs.
With continued reference to Fig. 2, a flow 200 of one embodiment of the video presentation method according to this application is shown. The video presentation method comprises the following steps:
Step 201: receive the virtual tour request sent, through a virtual reality device, by a user wearing the virtual reality device.
In this embodiment, the electronic equipment on which the video presentation method runs (for example, the server shown in Fig. 1) may receive, through a wired or wireless connection, the virtual tour request sent through the virtual reality device by the user wearing the virtual reality device, where the virtual tour request includes virtual tour scenic spot information. It should be pointed out that the wireless connection may include, but is not limited to, 3G/4G, WiFi, Bluetooth, WiMAX, Zigbee and UWB (ultra wideband) connections, as well as other wireless connections that are currently known or developed in the future.
In this embodiment, the virtual reality device may include virtual reality glasses for presenting three-dimensional videos to the user.
In some optional implementations of this embodiment, the virtual reality device may also include a loudspeaker for presenting sound.
In some optional implementations of this embodiment, the virtual reality device may also include a microphone for collecting the user's voice.
In some optional implementations of this embodiment, the virtual reality device may also include sensors for collecting the user's action information.
In some optional implementations of this embodiment, the virtual reality device may also include a camera for collecting images of the user.
In this embodiment, the virtual reality device may initiate the virtual tour request to the electronic equipment in various ways. For example, the request may be initiated when it is detected that the user has pressed a button preset to trigger the virtual tour request; the request may also be initiated when it is detected that the user has made a preset action that triggers the virtual tour request (for example, detecting that the user nods twice in succession).
Step 202: according to the virtual tour scenic spot information, acquire the pre-established three-dimensional scene model of the virtual tour scenic spot corresponding to the virtual tour scenic spot information.
In this embodiment, based on the virtual tour scenic spot information in the virtual tour request obtained in step 201, the electronic equipment (for example, the server shown in Fig. 1) may acquire the pre-established three-dimensional scene model of the virtual tour scenic spot corresponding to that information. The three-dimensional scene model of the virtual tour scenic spot may be stored locally on the electronic equipment, in which case the electronic equipment acquires the model locally; the model may also be stored on other electronic equipment connected to the electronic equipment over a network, in which case the electronic equipment acquires the model remotely from that other equipment.
In some optional implementations of this embodiment, the virtual tour scenic spot information may include a virtual tour scenic spot identifier and at least one of the following: location information, season information, weather information, time information and light information. The virtual tour scenic spot identifier is a unique identifier distinguishing each virtual tour scenic spot. The location information characterizes the concrete scene of the virtual tour scenic spot the user wishes to experience (for example, the concrete scene "Garden of Harmonious Interests" in the scenic spot "Summer Palace"). The season information characterizes in which season the user wishes to experience the landscape of the virtual tour scenic spot, for example spring, summer, autumn or winter. The weather information characterizes under what weather the user wishes to experience the landscape, for example sunny, cloudy, rainy, foggy or snowy. The time information characterizes at what time of day the user wishes to experience the landscape, for example sunrise, dawn, morning, forenoon, noon, afternoon, dusk, evening or night. The light information characterizes the landscape of the scenic spot under different lighting environments, for example strong illumination, weak illumination, cold light source illumination or warm light source illumination.
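As a sketch, the scenic spot information carried in a virtual tour request could be grouped in a structure like the following; the field names and string values are illustrative assumptions, not terms from the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VirtualTourRequest:
    spot_id: str                      # unique virtual tour scenic spot identifier
    location: Optional[str] = None    # concrete scene, e.g. "Garden of Harmonious Interests"
    season: Optional[str] = None      # "spring" | "summer" | "autumn" | "winter"
    weather: Optional[str] = None     # "sunny" | "cloudy" | "rainy" | "foggy" | "snowy"
    time_of_day: Optional[str] = None # "dawn", "noon", "dusk", ...
    light: Optional[str] = None       # e.g. "warm light source"

req = VirtualTourRequest(spot_id="summer_palace",
                         location="Garden of Harmonious Interests",
                         season="spring")
```

Optional fields default to `None`, matching the "at least one of the following" phrasing: only the identifier is mandatory.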
In this embodiment, the three-dimensional scene model of the virtual tour scenic spot is a model pre-established using computer three-dimensional modeling technology to characterize the three-dimensional information of the virtual tour scenic spot.
In some optional implementations of this embodiment, a polygon mesh (Polygon Mesh) may be used to represent the three-dimensional scene model of the virtual tour scenic spot. Representing a three-dimensional model with a polygon mesh means approximating the object with a mesh composed of polygonal faces. The basic primitive is the vertex in three-dimensional space; a line segment connecting two vertices is called an edge; three vertices connected by three edges form a triangle; multiple triangles can compose more complex polygons, or generate a single object with more than three vertices. Quadrilaterals and triangles are the most common shapes in polygon representation. A group of polygons connected through shared edges is called a mesh. Objects of any shape can be approximated with polygon meshes to the required accuracy; the number of polygons representing an object can range from a few dozen to hundreds of thousands. This is a surface representation.
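A minimal sketch of the indexed triangle mesh described above: vertices shared through integer indices, triangular faces, and a quantity (here, surface area) computed face by face. The concrete geometry is an illustrative example, not from the patent.

```python
# Minimal indexed triangle mesh: shared vertices plus triangular faces.
vertices = [
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (0.0, 1.0, 0.0),
    (1.0, 1.0, 0.0),
]
# Two triangles sharing the edge (1, 2) form a unit square in the z=0 plane.
faces = [(0, 1, 2), (1, 3, 2)]

def triangle_area(a, b, c):
    # Half the magnitude of the cross product of two edge vectors.
    ux, uy, uz = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
    vx, vy, vz = (c[0] - a[0], c[1] - a[1], c[2] - a[2])
    cx, cy, cz = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    return 0.5 * (cx * cx + cy * cy + cz * cz) ** 0.5

surface_area = sum(triangle_area(*(vertices[i] for i in f)) for f in faces)
# surface_area == 1.0 for the unit square
```

Sharing vertices through indices, rather than repeating coordinates per face, is what keeps large meshes (hundreds of thousands of polygons) compact.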
In some optional implementations of this embodiment, parametric surface patches (Parametric Patches) may also be used to represent the three-dimensional scene model of the virtual tour scenic spot. Representing a three-dimensional model with parametric surface patches is much like representing it with a polygon mesh, except that the surface of each polygon becomes curved. Each patch is defined by a mathematical formula, from which every point on the patch surface can be generated. Changing the mathematical definition of a patch changes its position and shape, which gives this representation strong interactive capabilities. This is also a surface representation.
In some optional implementations of this embodiment, subdivision surfaces (Subdivision Surfaces) may also be used to represent the three-dimensional scene model of the virtual tour scenic spot. A subdivision surface represents a smooth surface with a low-resolution control mesh and subdivision rules defined on that control mesh. This is also a surface representation.
In some optional implementations of this embodiment, constructive solid geometry (Constructive Solid Geometry, CSG) may also be used to represent the three-dimensional scene model of the virtual tour scenic spot. Constructive solid geometry expresses an object as the result of boolean operations on basic shapes, that is, it combines simple objects into a model of a complex object. This is a volume representation.
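The boolean composition described above can be sketched by modeling each basic solid as a point-membership test. This is an illustrative simplification (real CSG systems operate on boundary or volume representations, not bare predicates); the shapes are assumed examples.

```python
# CSG composes solids with boolean operations; here a solid is a predicate
# answering "is this point inside?".
def sphere(cx, cy, cz, r):
    return lambda x, y, z: (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= r * r

def box(x0, x1, y0, y1, z0, z1):
    return lambda x, y, z: x0 <= x <= x1 and y0 <= y <= y1 and z0 <= z <= z1

def union(a, b):        return lambda x, y, z: a(x, y, z) or b(x, y, z)
def intersection(a, b): return lambda x, y, z: a(x, y, z) and b(x, y, z)
def difference(a, b):   return lambda x, y, z: a(x, y, z) and not b(x, y, z)

# A cube with a spherical bite taken out of one corner.
solid = difference(box(0, 2, 0, 2, 0, 2), sphere(2, 2, 2, 1))
inside = solid(0.5, 0.5, 0.5)   # deep inside the cube: in the solid
carved = solid(1.9, 1.9, 1.9)   # inside the subtracted sphere: not in the solid
```

The same three operators (union, intersection, difference) suffice to build arbitrarily nested shape trees.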
In some optional implementations of this embodiment, spatial subdivision (Spatial Subdivision) may also be used to represent the three-dimensional scene model of the virtual tour scenic spot. Spatial subdivision decomposes the space occupied by an object into basic units, such as cubes, called voxels; each voxel is marked either as empty or as containing a certain part of the object. This representation describes the three-dimensional space the object occupies and is also a volume representation. The smaller the unit of division, the more storage space is needed.
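A sketch of the voxel decomposition described above, assuming a uniform grid over a unit cube; the resolution and the example sphere are illustrative. Note how the occupied/empty marking doubles as a volume estimate, and how halving the cell size would multiply the number of cells (and thus storage) by eight.

```python
# Spatial subdivision: decompose the space an object occupies into voxels,
# each marked as empty or as containing part of the object.
RES = 16  # cells per axis; halving the cell size multiplies cell count by 8

def cell_center(i):
    return (i + 0.5) / RES  # center coordinate of cell i in a unit cube

# Voxelize a sphere of radius 0.4 centered at (0.5, 0.5, 0.5).
grid = {}
for i in range(RES):
    for j in range(RES):
        for k in range(RES):
            x, y, z = cell_center(i), cell_center(j), cell_center(k)
            grid[(i, j, k)] = ((x - 0.5) ** 2 + (y - 0.5) ** 2
                               + (z - 0.5) ** 2 <= 0.4 ** 2)

occupied = sum(grid.values())
approx_volume = occupied / RES ** 3  # approaches 4/3 * pi * 0.4**3 ≈ 0.268
```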
In some optional implementations of this embodiment, an implicit representation (Implicit Representation) may also be used to represent the three-dimensional scene model of the virtual tour scenic spot; that is, an implicit function serves as the representation form of the object. An implicit function is defined as:
F(x, y, z) = 0
The points satisfying this equation lie on the surface of the object. This representation can be seen as a kind of "test": the coordinates (x, y, z) of a point are substituted into the function, and whether the function value is 0 is checked. If the function value equals 0, the point lies on the surface of the object; otherwise it does not belong to the surface. Here x, y and z are the three coordinates of a point in three-dimensional space.
In some optional implementations of this embodiment, the three-dimensional scene model of the virtual tour scenic spot may include at least one component. A hierarchical model (Hierarchical Model), such as a tree structure, may be used to represent the components of the three-dimensional scene model and the relations between them. A scene graph (Scene Graph) may also be used for this purpose: a scene graph is a directed acyclic graph (Directed Acyclic Graph, DAG) whose nodes maintain the information they contain. In a scene graph, besides a three-dimensional model, a node can also be a sound, a light, fog or another environmental effect.
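A minimal sketch of the scene graph idea described above: each node carries its own information (a model, a light, a sound, ...) plus children, and the structure is traversed from the root. The node names and kinds are illustrative assumptions, not from the patent.

```python
# A scene graph node: its own information plus a list of child nodes.
class Node:
    def __init__(self, name, kind="model"):
        self.name = name
        self.kind = kind          # "model", "light", "sound", "fog", ...
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def walk(self):
        # Depth-first traversal, as a renderer would visit the graph.
        yield self
        for c in self.children:
            yield from c.walk()

scene = Node("summer_palace_scene", kind="group")
garden = scene.add(Node("garden"))
garden.add(Node("pavilion"))
garden.add(Node("sunlight", kind="light"))
scene.add(Node("birdsong", kind="sound"))

names = [n.name for n in scene.walk()]
```

Grouping environmental effects (lights, sounds) under the same component they belong to is what lets one subtree be moved, hidden or swapped as a unit.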
It should be noted that the various representation methods of three-dimensional scene models described above are well-known technologies that are widely studied and applied at present, and are therefore not described in detail here.
Step 203: receive in real time the characteristic information of the user collected by the virtual reality device, and establish a three-dimensional character model of the user according to the characteristic information of the user.
In the present embodiment, the above electronic device may receive in real time the characteristic information of the user collected by the virtual reality device, and establish the three-dimensional character model of the user according to that information using various three-dimensional modeling methods, for example the polygon mesh, parametric surface patch, subdivision surface, constructive solid geometry, spatial subdivision and implicit representation methods described in step 201.
In some optional implementations of the present embodiment, the characteristic information of the user may include image information of the user and at least one of the following: sound information and motion information.
In some optional implementations of the present embodiment, a camera provided in the virtual reality device for capturing images of the user may be used to collect the image information of the user, and the three-dimensional character model of the user may be established according to that image information.
In some optional implementations of the present embodiment, in addition to collecting the image information of the user with the camera provided in the virtual reality device, a sensor provided in the virtual reality device for collecting user motion information may be used to collect the motion information of the user, and the three-dimensional character model of the user may be established according to both the image information and the motion information.
In some optional implementations of the present embodiment, in addition to collecting the image information of the user with the camera provided in the virtual reality device, a microphone provided in the virtual reality device for collecting the user's voice may be used to collect the sound data of the user, and that sound data may be added to the three-dimensional character model of the user.
Step 204: add the three-dimensional character model of the user to the three-dimensional scene model in real time to form a dynamic three-dimensional model of the user in the virtual tour attraction, and render the dynamic three-dimensional model to form a dynamic three-dimensional video.
In the present embodiment, the three-dimensional character model of the user established in step 203 may be added in real time to the three-dimensional scene model obtained in step 202 to form a dynamic three-dimensional model of the user in the virtual tour attraction, and the dynamic three-dimensional model is rendered to form a dynamic three-dimensional video.
In some optional implementations of the present embodiment, a hierarchical model, for example a tree structure, may be employed to represent the components of the three-dimensional scene model and the relations between them. In that case, adding the three-dimensional character model of the user to the three-dimensional scene model in real time in this step can be achieved by adding a node to the tree structure corresponding to the three-dimensional scene model, the added node corresponding to the three-dimensional character model of the user.
In some optional implementations of the present embodiment, a scene graph (Scene Graph) may also be used to represent the components of the three-dimensional scene model and the relations between them; a scene graph is a directed acyclic graph (Directed Acyclic Graph, DAG) in which each node maintains the information it contains. In that case, adding the three-dimensional character model of the user to the three-dimensional scene model in real time in this step can be achieved by adding a node to the directed acyclic graph corresponding to the three-dimensional scene model and associating the three-dimensional character model of the user with that node; directed edges between that node and other nodes in the scene model may also be established to form the relations between the three-dimensional character model of the user and the other components of the three-dimensional scene model.
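The node-insertion operation described above can be sketched with a minimal node class (the class and names such as `attach_character` and `user_301` are hypothetical, chosen only for illustration): adding the user amounts to creating one node and one directed edge from an anchor node in the existing graph.

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.children = []

def attach_character(scene_root, character_name, anchor=None):
    """Insert a node for the user's 3-D character model into the scene
    graph, linking it under the given anchor node (default: the root)."""
    character = Node(character_name)
    (anchor or scene_root).children.append(character)
    return character

scene = Node("virtual_attraction")
scene.children.append(Node("terrain"))
user_node = attach_character(scene, "user_301")
print([c.name for c in scene.children])  # ['terrain', 'user_301']
```

Removing the user when the tour ends would be the symmetric operation: deleting the node and its incoming edges.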
In the present embodiment, different rendering methods may be adopted to generate the dynamic three-dimensional video depending on the specific representation of the dynamic three-dimensional model of the user in the virtual tour attraction; the dynamic three-dimensional video may include image data and may also include sound data. Those skilled in the art will appreciate that how to render a dynamic three-dimensional model to form a dynamic three-dimensional video is well known to them. The key point of the present embodiment compared with the prior art is that the three-dimensional character model of the user is added to the three-dimensional scene model, so how to render the dynamic three-dimensional model to form the dynamic three-dimensional video is not described in detail here.
Step 205: send the dynamic three-dimensional video to the virtual reality device in real time, so that the virtual reality device presents the dynamic three-dimensional video.
In the present embodiment, the above electronic device may send the dynamic three-dimensional video generated in step 204 to the virtual reality device in real time, and the virtual reality device can then present the dynamic three-dimensional video after receiving it. The dynamic three-dimensional video includes both the three-dimensional video corresponding to the three-dimensional scene model of the virtual tour attraction and the three-dimensional character model of the user. In this way, by wearing the virtual reality device, the user can experience touring inside a simulated virtual tour attraction, rather than merely viewing images or videos of the attraction.
In some optional implementations of the present embodiment, the above method may further include a step of establishing the three-dimensional scene model of the virtual tour attraction. This step may include: obtaining at least one image collected by at least one camera provided in the real tour attraction corresponding to the virtual tour attraction information; and establishing the three-dimensional scene model of the virtual tour attraction according to the at least one image. Various three-dimensional modeling methods may be used to establish the model from the at least one image, for example the polygon mesh, parametric surface patch, subdivision surface, constructive solid geometry, spatial subdivision and implicit representation methods described in step 201.
In some optional implementations of the present embodiment, before the three-dimensional scene model of the virtual tour attraction is established according to the at least one image, the step of establishing the model may further include: obtaining laser point cloud data collected by a lidar provided at the position where the camera is located in the real tour attraction. Establishing the three-dimensional scene model of the virtual tour attraction according to the at least one image may then include: first, determining, according to the at least one image and the laser point cloud data, the relative position information of the physical objects indicated by the pixels in the at least one image with respect to the camera; then, establishing the three-dimensional scene model of the virtual tour attraction according to the at least one image and the relative position information of the physical objects indicated by the pixels in the at least one image.
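One common way to obtain the relative position of the object behind a pixel, consistent with the step above though not prescribed by the application, is to back-project the pixel through a pinhole camera model using the lidar-measured depth. The intrinsic parameters below (focal length, principal point) are made-up example values:

```python
import numpy as np

def pixel_to_camera_frame(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with a lidar depth (metres) into the
    camera coordinate frame: X right, Y down, Z along the optical axis."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Assumed intrinsics: 640x480 image, focal length 500 px, centered principal point.
p = pixel_to_camera_frame(u=320, v=240, depth=5.0, fx=500, fy=500, cx=320, cy=240)
print(p)  # [0. 0. 5.] -- the principal point maps straight down the optical axis
```

Associating each image pixel with a lidar return (the `depth` argument) is itself a calibration problem, which the application leaves to known techniques.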
In some optional implementations of the present embodiment, before the three-dimensional scene model of the virtual tour attraction is established according to the at least one image and the relative position information of the physical objects indicated by the pixels in the at least one image, the step of establishing the model may further include: obtaining first absolute position information collected by a positioning receiver provided at the position where the camera is located in the real tour attraction. Establishing the three-dimensional scene model of the virtual tour attraction according to the at least one image and the relative position information may then proceed as follows: first, determining second absolute position information of the physical objects indicated by the pixels in the at least one image according to the first absolute position information and the relative position information of the physical objects indicated by the pixels; then, establishing the three-dimensional scene model of the virtual tour attraction according to the at least one image and the second absolute position information of the physical objects indicated by the pixels in the at least one image.
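Under the usual rigid-transform assumption, the second absolute position described above is the camera's first absolute position plus the camera-frame offset rotated into the world frame. The identity rotation below is a stand-in for whatever camera orientation a deployment would record; it is an assumption, not a detail given by the application:

```python
import numpy as np

def to_absolute(camera_abs, camera_R, rel):
    """world position = camera position + R @ camera-frame offset."""
    return np.asarray(camera_abs) + np.asarray(camera_R) @ np.asarray(rel)

camera_abs = [100.0, 50.0, 2.0]  # first absolute position (e.g. from GNSS)
R = np.eye(3)                    # assumed: camera axes aligned with world axes
rel = [0.0, 0.0, 5.0]            # object 5 m along the camera's optical axis
print(to_absolute(camera_abs, R, rel))  # world position (100, 50, 7)
```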
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the video presentation method according to the present embodiment. In the application scenario of Fig. 3, a user 301 wearing a virtual reality device 302 initiates, through a network 303, a first virtual tour request to a server 304 for touring a virtual tour attraction. After receiving the first virtual tour request sent by the virtual reality device 302 through the network 303, the server 304 first obtains the pre-established three-dimensional scene model of the virtual tour attraction; then it receives in real time the image information and motion information of the user 301 collected by the virtual reality device 302, and establishes the three-dimensional character model of the user according to the collected image information and motion information; next, it adds the three-dimensional character model of the user to the three-dimensional scene model in real time to form a dynamic three-dimensional model of the user 301 in the virtual tour attraction, and renders the dynamic three-dimensional model to form a dynamic three-dimensional video; finally, it sends the dynamic three-dimensional video to the virtual reality device 302 through the network 303 in real time. After receiving the dynamic three-dimensional video, the virtual reality device 302 presents it to the user 301, as shown by icon 305 in the figure.
Likewise, a user 306 wearing another virtual reality device 307 connected to the server 304 through the network initiates, through the network 303, a second virtual tour request to the server 304 for touring the virtual tour attraction. After receiving the second virtual tour request sent by the virtual reality device 307 through the network 303, the server 304 first obtains the pre-established three-dimensional scene model of the virtual tour attraction; then it receives in real time the image information and motion information of the user 306 collected by the virtual reality device 307, and establishes the three-dimensional character model of the user according to the collected image information and motion information; next, it adds the three-dimensional character model of the user to the three-dimensional scene model in real time to form a dynamic three-dimensional model of the user 306 in the virtual tour attraction, and renders the dynamic three-dimensional model to form a dynamic three-dimensional video; finally, it sends the dynamic three-dimensional video to the virtual reality device 307 through the network 303 in real time. After receiving the dynamic three-dimensional video, the virtual reality device 307 presents it to the user 306, as shown by icon 308 in the figure.
The method provided by the above embodiment of the application establishes a three-dimensional character model according to the characteristic information of the user collected in real time, and adds the three-dimensional character model of the user to the three-dimensional scene model of the virtual tour attraction, thereby presenting both the three-dimensional video of the virtual tour attraction and the dynamic three-dimensional video of the user in the virtual reality device worn by the user. In other words, the user can experience a tour of a virtual tour attraction in real time without having to travel to the physical location of the attraction.
With further reference to Fig. 4, it illustrates a flow 400 of another embodiment of the video presentation method. The flow 400 of the video presentation method comprises the following steps:
Step 401: receive a virtual tour request sent, through the virtual reality device, by a user wearing the virtual reality device.
In the present embodiment, the specific processing of step 401 is substantially the same as that of step 201 in the embodiment corresponding to Fig. 2, and is not repeated here.
Step 402: according to the virtual tour attraction information, obtain the pre-established three-dimensional scene model of the virtual tour attraction corresponding to the virtual tour attraction information.
In the present embodiment, the specific processing of step 402 is substantially the same as that of step 202 in the embodiment corresponding to Fig. 2, and is not repeated here.
Step 403: receive in real time the characteristic information of the user collected by the virtual reality device, and establish a three-dimensional character model of the user according to the characteristic information of the user.
In the present embodiment, the specific processing of step 403 is substantially the same as that of step 203 in the embodiment corresponding to Fig. 2, and is not repeated here.
Step 404: add the three-dimensional character model of the user to the three-dimensional scene model in real time to form a dynamic three-dimensional model of the user in the virtual tour attraction, and render the dynamic three-dimensional model to form a dynamic three-dimensional video.
In the present embodiment, the specific processing of step 404 is substantially the same as that of step 204 in the embodiment corresponding to Fig. 2, and is not repeated here.
Step 405: send the dynamic three-dimensional video to the virtual reality device in real time, so that the virtual reality device presents the dynamic three-dimensional video.
In the present embodiment, the specific processing of step 405 is substantially the same as that of step 205 in the embodiment corresponding to Fig. 2, and is not repeated here.
Step 406: detect in real time whether there is, in the virtual tour attraction, a first user who is different from the above user and is in the same visual scene as the above user; if so, go to step 407; if not, continue executing this step.
In the present embodiment, the above electronic device may detect in real time whether there is, in the virtual tour attraction, a first user who is different from the above user and is in the same visual scene as the above user, and go to step 407 if so; otherwise this step continues to be executed.
In the present embodiment, the above electronic device may adopt different detection methods according to the specific construction method of the three-dimensional scene model of the virtual tour attraction, in order to determine whether there is, in the virtual tour attraction, a first user different from the above user who is in the same visual scene as the above user. Those skilled in the art will appreciate that how to perform such real-time detection according to the specific construction method of the three-dimensional scene model is well known to them. The key point of the present embodiment compared with the prior art lies in the operations of steps 407, 408 and 409 performed after it is determined that at least one first user exists, so the details of the detection are not described here.
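One simple, hypothetical realization of the detection, assuming each user has a tracked position inside the attraction's scene model (an assumption on top of the application, which deliberately leaves the detection method open): treat two users as being in the same visual scene when their positions lie within a visibility radius of one another.

```python
import math

def same_visual_scene(pos_a, pos_b, radius=50.0):
    """True when two users are within the visibility radius of each other."""
    return math.dist(pos_a, pos_b) <= radius

def first_users_in_scene(user_pos, others):
    """Return the ids of all other users sharing the user's visual scene."""
    return [uid for uid, pos in others.items()
            if same_visual_scene(user_pos, pos)]

others = {"u306": (10.0, 0.0, 0.0), "u309": (500.0, 0.0, 0.0)}
print(first_users_in_scene((0.0, 0.0, 0.0), others))  # ['u306']
```

A production system would more likely test against the view frustum or a spatial index of the scene model, but the radius check captures the control flow of step 406: an empty result means the step keeps polling, a non-empty one triggers step 407.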
Step 407: add the three-dimensional character model of each first user to the dynamic three-dimensional model to form a multi-user dynamic three-dimensional model of the virtual tour attraction.
In the present embodiment, after detecting that at least one first user different from the above user is in the same visual scene as the above user in the virtual tour attraction, the above electronic device may add the three-dimensional character model of each first user to the dynamic three-dimensional model to form a multi-user dynamic three-dimensional model of the virtual tour attraction. For how to perform the addition, reference may be made to the related description of step 204 in the embodiment corresponding to Fig. 2, which is not repeated here.
Step 408: render the multi-user dynamic three-dimensional model to form a multi-user dynamic three-dimensional video.
In the present embodiment, the above electronic device may render the multi-user dynamic three-dimensional model of the virtual tour attraction formed in step 407 to form a multi-user dynamic three-dimensional video. For how to perform the rendering, reference may be made to the related description of step 204 in the embodiment corresponding to Fig. 2, which is not repeated here.
Step 409: send the multi-user dynamic three-dimensional video to the virtual reality device, so that the virtual reality device presents the multi-user dynamic three-dimensional video.
In the present embodiment, after generating the multi-user dynamic three-dimensional video, the above electronic device may send the multi-user dynamic three-dimensional video to the virtual reality device, so that the virtual reality device presents it. In this way, the above user can interact through motion and/or sound with the other users in the same visual scene.
As can be seen from Fig. 4, compared with the embodiment corresponding to Fig. 2, the flow 400 of the video presentation method in the present embodiment additionally renders the three-dimensional character models of the other users in the same visual scene as the user into the multi-user dynamic three-dimensional model, and sends the result to the virtual reality device worn by the user, thereby realizing interaction among the multiple users in the same visual scene.
With continued reference to Fig. 5, Fig. 5 is a schematic diagram of another application scenario of the video presentation method according to the present embodiment. In the application scenario of Fig. 5, users 501, 502 and 503, wearing virtual reality devices 504, 505 and 506 respectively, each initiate, through a network 507, a virtual tour request to a server 508 for touring a virtual tour attraction. The server 508 receives, through the network 507, the virtual tour requests sent by the virtual reality devices 504, 505 and 506, and then sends the dynamic three-dimensional videos of users 501, 502 and 503 in the virtual tour attraction to the virtual reality devices 504, 505 and 506 respectively, so that the virtual reality devices 504, 505 and 506 present the dynamic three-dimensional videos to users 501, 502 and 503. While users 501, 502 and 503 are touring the virtual tour attraction through the virtual reality devices 504, 505 and 506, the server 508 detects that users 501, 502 and 503 have entered the same visual scene. It therefore adds the three-dimensional character models of the users in the same visual scene to the three-dimensional scene model of the virtual tour attraction corresponding to that visual scene to form a multi-user dynamic three-dimensional model, renders the multi-user dynamic three-dimensional model to form a multi-user dynamic three-dimensional video, and then sends the multi-user dynamic three-dimensional video in real time to the virtual reality devices 504, 505 and 506, so that the virtual reality devices 504, 505 and 506 present the multi-user dynamic three-dimensional video to users 501, 502 and 503 as shown by icons 509, 510 and 511 respectively. Interaction among the multiple users in the same visual scene is thereby realized.
With further reference to Fig. 6, as an implementation of the methods shown in the above figures, the application provides an embodiment of a video presentation apparatus. The apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus can be applied in various electronic devices.
As shown in Fig. 6, the video presentation apparatus 600 in the present embodiment includes: a first receiving unit 601, an acquiring unit 602, a second receiving unit 603, a rendering unit 604 and a first sending unit 605. The first receiving unit 601 is configured to receive a virtual tour request sent, through the virtual reality device, by a user wearing the virtual reality device, the virtual tour request including virtual tour attraction information. The acquiring unit 602 is configured to obtain, according to the virtual tour attraction information, the pre-established three-dimensional scene model of the virtual tour attraction corresponding to the virtual tour attraction information. The second receiving unit 603 is configured to receive in real time the characteristic information of the user collected by the virtual reality device, and establish a three-dimensional character model of the user according to the characteristic information of the user. The rendering unit 604 is configured to add the three-dimensional character model of the user to the three-dimensional scene model in real time to form a dynamic three-dimensional model of the user in the virtual tour attraction, and render the dynamic three-dimensional model to form a dynamic three-dimensional video. The first sending unit 605 is configured to send the dynamic three-dimensional video to the virtual reality device in real time, so that the virtual reality device presents the dynamic three-dimensional video.
In the present embodiment, for the specific processing of the first receiving unit 601, the acquiring unit 602, the second receiving unit 603, the rendering unit 604 and the first sending unit 605 of the video presentation apparatus 600 and the technical effects brought thereby, reference may be made respectively to the related descriptions of steps 201, 202, 203, 204 and 205 in the embodiment corresponding to Fig. 2, which are not repeated here.
In some optional implementations of the present embodiment, the video presentation apparatus 600 may further include: a detection unit 606, configured to detect in real time whether there is, in the virtual tour attraction, a first user who is different from the above user and is in the same visual scene as the above user; and a second sending unit 607, configured to, if the detection unit 606 detects that at least one first user different from the above user is in the same visual scene as the above user in the virtual tour attraction: add the three-dimensional character model of each first user to the dynamic three-dimensional model to form a multi-user dynamic three-dimensional model of the virtual tour attraction, render the multi-user dynamic three-dimensional model to form a multi-user dynamic three-dimensional video, and send the multi-user dynamic three-dimensional video to the virtual reality device, so that the virtual reality device presents the multi-user dynamic three-dimensional video. For the specific processing of the detection unit 606 and the second sending unit 607 and the technical effects brought thereby, reference may be made respectively to the related descriptions of steps 406 to 409 in the embodiment corresponding to Fig. 4, which are not repeated here.
In some optional implementations of the present embodiment, the characteristic information of the user may include image information of the user and at least one of the following: sound information and motion information.
In some optional implementations of the present embodiment, the virtual tour attraction information may include a virtual tour attraction identifier and at least one of the following concerning the virtual tour attraction: position information, season information, weather information, time information and light information.
In some optional implementations of the present embodiment, the video presentation apparatus 600 may further include a three-dimensional scene model establishing unit 608, which may include: a first acquiring module 6081, configured to obtain at least one image collected by at least one camera provided in the real tour attraction corresponding to the virtual tour attraction information; and an establishing module 6082, configured to establish the three-dimensional scene model of the virtual tour attraction according to the at least one image. For the specific processing of the three-dimensional scene model establishing unit 608 and the technical effects brought thereby, reference may be made to the related description in the embodiment corresponding to Fig. 2, which is not repeated here.
In some optional implementations of the present embodiment, the three-dimensional scene model establishing unit 608 may further include: a second acquiring module 6083, configured to obtain laser point cloud data collected by a lidar provided at the position where the camera is located in the real tour attraction. The establishing module 6082 may be further configured to: determine, according to the at least one image and the laser point cloud data, the relative position information of the physical objects indicated by the pixels in the at least one image with respect to the camera; and establish the three-dimensional scene model of the virtual tour attraction according to the at least one image and the relative position information of the physical objects indicated by the pixels in the at least one image. For the specific processing of the second acquiring module 6083 and the establishing module 6082 and the technical effects brought thereby, reference may be made to the related description in the embodiment corresponding to Fig. 2, which is not repeated here.
In some optional implementations of the present embodiment, the three-dimensional scene model establishing unit 608 may further include: a third acquiring module 6084, configured to obtain first absolute position information collected by a positioning receiver provided at the position where the camera is located in the real tour attraction. The establishing module 6082 may be further configured to: determine second absolute position information of the physical objects indicated by the pixels in the at least one image according to the first absolute position information and the relative position information of the physical objects indicated by the pixels in the at least one image; and establish the three-dimensional scene model of the virtual tour attraction according to the at least one image and the second absolute position information of the physical objects indicated by the pixels in the at least one image. For the specific processing of the third acquiring module 6084 and the establishing module 6082 and the technical effects brought thereby, reference may be made to the related description in the embodiment corresponding to Fig. 2, which is not repeated here.
Referring now to Fig. 7, it illustrates a structural schematic diagram of a computer system 700 suitable for implementing the server of the embodiments of the application.
As shown in Fig. 7, the computer system 700 includes one or more processors 701 (only one processor is shown in Fig. 7 as an example), which can perform various appropriate actions and processing according to one or more programs stored in a read-only memory (ROM) 702 or loaded from a storage portion 706 into a random access memory (RAM) 703. When the one or more programs are executed by the one or more processors, the one or more processors can perform the method described in the embodiment shown in Fig. 2 or Fig. 4. The RAM 703 also stores the various programs and data required for the operation of the system 700. The processor 701, the ROM 702 and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
The following components are connected to the I/O interface 705: a storage portion 706 including a hard disk and the like; and a communication portion 707 including a network interface card such as a LAN card or a modem. The communication portion 707 performs communication processing via a network such as the Internet. A driver 708 is also connected to the I/O interface 705 as needed. A removable medium 709, such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory, is mounted on the driver 708 as needed, so that a computer program read therefrom can be installed into the storage portion 706 as needed.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flow chart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program including program code for executing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 707 and/or installed from the removable medium 709. When the computer program is executed by the processor 701, the above functions defined in the methods of the application are executed.
The flow charts and block diagrams in the accompanying drawings illustrate the architectures, functions and operations that may be implemented by the systems, methods and computer program products according to the various embodiments of the application. In this regard, each box in a flow chart or block diagram may represent a module, a program segment or a part of code, and the module, program segment or part of code contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in an order different from that marked in the drawings. For example, two boxes shown in succession may actually be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each box in the block diagrams and/or flow charts, and combinations of boxes in the block diagrams and/or flow charts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present application may be implemented in software or in hardware. The described units may also be provided in a processor; for example, a processor may be described as comprising a first receiving unit, an acquiring unit, a second receiving unit, a rendering unit, and a first sending unit. The names of these units do not, in some cases, limit the units themselves; for example, the first receiving unit may also be described as "a unit for receiving a user's virtual tour request".
In another aspect, the present application further provides a non-volatile computer storage medium, which may be the non-volatile computer storage medium included in the apparatus of the above embodiments, or a stand-alone non-volatile computer storage medium not assembled into a terminal. The non-volatile computer storage medium stores one or more programs that, when executed by a device, cause the device to: receive a virtual tour request sent, through a virtual reality device, by a user wearing the virtual reality device, the virtual tour request comprising virtual tour attraction information; obtain, according to the virtual tour attraction information, a pre-established three-dimensional scene model of the virtual tour attraction corresponding to the virtual tour attraction information; receive, in real time, characteristic information of the user collected by the virtual reality device, and establish a three-dimensional character model of the user according to the characteristic information; add, in real time, the three-dimensional character model of the user into the three-dimensional scene model to form a dynamic three-dimensional model of the user at the virtual tour attraction, and render the dynamic three-dimensional model to form a dynamic three-dimensional video; and send the dynamic three-dimensional video to the virtual reality device in real time, so that the virtual reality device presents the dynamic three-dimensional video.
The above description is merely a preferred embodiment of the present application and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the particular combination of the above technical features, but also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept — for example, technical solutions in which the above features are replaced with (but not limited to) technical features with similar functions disclosed in the present application.

Claims (15)

1. A video presentation method, characterized in that the method comprises:
receiving a virtual tour request sent, through a virtual reality device, by a user wearing the virtual reality device, the virtual tour request comprising virtual tour attraction information;
obtaining, according to the virtual tour attraction information, a pre-established three-dimensional scene model of the virtual tour attraction corresponding to the virtual tour attraction information;
receiving, in real time, characteristic information of the user collected by the virtual reality device, and establishing a three-dimensional character model of the user according to the characteristic information of the user;
adding, in real time, the three-dimensional character model of the user into the three-dimensional scene model to form a dynamic three-dimensional model of the user at the virtual tour attraction, and rendering the dynamic three-dimensional model to form a dynamic three-dimensional video;
sending the dynamic three-dimensional video to the virtual reality device in real time, so that the virtual reality device presents the dynamic three-dimensional video.
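The five steps of claim 1 can be sketched as plain functions. This is a minimal illustration, not the patent's implementation: every name (`handle_tour_request`, `scene_store`, and so on) is hypothetical, and the renderer and transport are reduced to string placeholders.

```python
def handle_tour_request(request, scene_store):
    # (1) the VR headset's request carries virtual tour attraction info;
    # (2) look up the pre-established 3D scene model for that attraction
    attraction_id = request["attraction_id"]
    return scene_store[attraction_id]

def build_avatar(features):
    # (3) build the user's 3D character model from the captured
    # characteristic information (image, plus voice/motion per claim 3)
    return {"mesh_from": features["image"], "pose": features.get("motion", "idle")}

def compose_and_render(scene, avatar):
    # (4) add the avatar to the scene to form the dynamic 3D model,
    # then "render" it into one frame of the dynamic 3D video
    dynamic_model = {"scene": scene, "actors": [avatar]}
    return f"frame({dynamic_model['scene']['name']}, actors={len(dynamic_model['actors'])})"

def stream_to_headset(frame, send):
    # (5) send the rendered video back to the VR device in real time
    send(frame)

# One pass through the pipeline:
scene_store = {"west-lake": {"name": "west-lake", "mesh": "..."}}
scene = handle_tour_request({"attraction_id": "west-lake"}, scene_store)
avatar = build_avatar({"image": "rgb-frame", "motion": "walking"})
frame = compose_and_render(scene, avatar)
sent = []
stream_to_headset(frame, sent.append)
print(sent[0])  # frame(west-lake, actors=1)
```

In a real system steps (3)–(5) would run in a per-frame loop, since the claim requires the avatar, composition, and transmission to be updated in real time.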
2. The method according to claim 1, characterized in that the method further comprises:
detecting, in real time, whether there is, at the virtual tour attraction, a first user who is different from the user and is in the same visual scene as the user;
in response to detecting at least one first user: adding the three-dimensional character model of each first user into the dynamic three-dimensional model to form a multi-user dynamic three-dimensional model at the virtual tour attraction, and rendering the multi-user dynamic three-dimensional model to form a multi-user dynamic three-dimensional video; and sending the multi-user dynamic three-dimensional video to the virtual reality device, so that the virtual reality device presents the multi-user dynamic three-dimensional video.
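Claim 2's multi-user branch amounts to a co-presence check plus a larger composite. A sketch under the assumption that the server tracks one session record per user (the session structure and all names are illustrative, not from the patent):

```python
def users_in_same_scene(current_user, sessions):
    # find the "first users": everyone other than the current user whose
    # session points at the same attraction, i.e. who shares the visual scene
    scene_id = sessions[current_user]["attraction_id"]
    return [u for u, s in sessions.items()
            if u != current_user and s["attraction_id"] == scene_id]

def compose_multi_user(scene, avatars):
    # add every co-present user's character model into the dynamic 3D model
    return {"scene": scene, "actors": list(avatars)}

sessions = {
    "alice": {"attraction_id": "west-lake"},
    "bob":   {"attraction_id": "west-lake"},
    "carol": {"attraction_id": "forbidden-city"},
}
others = users_in_same_scene("alice", sessions)
print(others)  # ['bob']
model = compose_multi_user({"name": "west-lake"},
                           ["alice-avatar"] + [f"{u}-avatar" for u in others])
print(len(model["actors"]))  # 2
```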
3. The method according to claim 1 or 2, characterized in that the characteristic information of the user comprises image information of the user and at least one of the following of the user: acoustic information, motion information.
4. The method according to claim 3, characterized in that the virtual tour attraction information comprises an identifier of the virtual tour attraction and at least one of the following of the virtual tour attraction: location information, season information, weather information, time information, lighting information.
5. The method according to any one of claims 1-4, characterized in that the method further comprises a step of establishing the three-dimensional scene model of the virtual tour attraction, the step comprising:
obtaining at least one image collected by at least one camera installed at a real attraction corresponding to the virtual tour attraction information;
establishing the three-dimensional scene model of the virtual tour attraction according to the at least one image.
6. The method according to claim 5, characterized in that, before the establishing the three-dimensional scene model of the virtual tour attraction according to the at least one image, the step of establishing the three-dimensional scene model of the virtual tour attraction further comprises:
obtaining laser point cloud data collected by a lidar installed at the position of the camera at the real attraction; and
the establishing the three-dimensional scene model of the virtual tour attraction according to the at least one image comprises:
determining, according to the at least one image and the laser point cloud data, relative position information, with respect to the camera, of the physical objects indicated by the pixels in the at least one image;
establishing the three-dimensional scene model of the virtual tour attraction according to the at least one image and the relative position information of the physical objects indicated by the pixels in the at least one image.
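Once the lidar supplies a depth for a pixel, the camera-relative position that claim 6 refers to can be recovered with the standard pinhole back-projection model. This is a textbook formula offered as an illustration, not text from the patent; the intrinsics `fx, fy, cx, cy` are assumed to come from a prior camera calibration:

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    # pinhole model: a pixel (u, v) with depth z maps to the
    # camera-relative 3D point (x, y, z)
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A pixel at the image centre with 5 m of lidar depth lies on the optical axis:
print(backproject(u=320, v=240, depth=5.0, fx=800.0, fy=800.0, cx=320.0, cy=240.0))
# (0.0, 0.0, 5.0)
```

A full pipeline would first align the lidar point cloud with the camera frame (extrinsic calibration) to obtain the per-pixel depths this function consumes.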
7. The method according to claim 6, characterized in that, before the establishing the three-dimensional scene model of the virtual tour attraction according to the at least one image and the relative position information of the physical objects indicated by the pixels in the at least one image, the step of establishing the three-dimensional scene model of the virtual tour attraction further comprises:
obtaining first absolute position information collected by a positioning receiver installed at the position of the camera at the real attraction; and
the establishing the three-dimensional scene model of the virtual tour attraction according to the at least one image and the relative position information of the physical objects indicated by the pixels in the at least one image comprises:
determining, according to the first absolute position information and the relative position information of the physical objects indicated by the pixels in the at least one image, second absolute position information of the physical objects indicated by the pixels in the at least one image;
establishing the three-dimensional scene model of the virtual tour attraction according to the at least one image and the second absolute position information of the physical objects indicated by the pixels in the at least one image.
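The coordinate shift of claim 7 — from camera-relative positions to absolute ones — reduces, in the simplest case, to adding each object's offset to the absolute position the receiver reports for the camera. This sketch assumes a shared Cartesian frame and deliberately ignores camera orientation, which a full implementation would also have to apply (e.g. as a rotation before the translation):

```python
def to_absolute(camera_abs, relative):
    # "first absolute position" of the camera + camera-relative offset
    # of the object = "second absolute position" of the object
    cx, cy, cz = camera_abs
    rx, ry, rz = relative
    return (cx + rx, cy + ry, cz + rz)

print(to_absolute((100.0, 200.0, 10.0), (1.5, -2.0, 5.0)))
# (101.5, 198.0, 15.0)
```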
8. A video presentation apparatus, characterized in that the apparatus comprises:
a first receiving unit, configured to receive a virtual tour request sent, through a virtual reality device, by a user wearing the virtual reality device, the virtual tour request comprising virtual tour attraction information;
an acquiring unit, configured to obtain, according to the virtual tour attraction information, a pre-established three-dimensional scene model of the virtual tour attraction corresponding to the virtual tour attraction information;
a second receiving unit, configured to receive, in real time, characteristic information of the user collected by the virtual reality device, and establish a three-dimensional character model of the user according to the characteristic information of the user;
a rendering unit, configured to add, in real time, the three-dimensional character model of the user into the three-dimensional scene model to form a dynamic three-dimensional model of the user at the virtual tour attraction, and render the dynamic three-dimensional model to form a dynamic three-dimensional video;
a first sending unit, configured to send the dynamic three-dimensional video to the virtual reality device in real time, so that the virtual reality device presents the dynamic three-dimensional video.
9. The apparatus according to claim 8, characterized in that the apparatus further comprises:
a detecting unit, configured to detect, in real time, whether there is, at the virtual tour attraction, a first user who is different from the user and is in the same visual scene as the user;
a second sending unit, configured to, if the detecting unit detects at least one first user who is different from the user and is in the same visual scene as the user at the virtual tour attraction: add the three-dimensional character model of each first user into the dynamic three-dimensional model to form a multi-user dynamic three-dimensional model at the virtual tour attraction, and render the multi-user dynamic three-dimensional model to form a multi-user dynamic three-dimensional video; and send the multi-user dynamic three-dimensional video to the virtual reality device, so that the virtual reality device presents the multi-user dynamic three-dimensional video.
10. The apparatus according to claim 8 or 9, characterized in that the characteristic information of the user comprises image information of the user and at least one of the following of the user: acoustic information, motion information.
11. The apparatus according to claim 10, characterized in that the virtual tour attraction information comprises an identifier of the virtual tour attraction and at least one of the following of the virtual tour attraction: location information, season information, weather information, time information, lighting information.
12. The apparatus according to any one of claims 8-11, characterized in that the apparatus further comprises a three-dimensional scene model establishing unit, the three-dimensional scene model establishing unit comprising:
a first acquiring module, configured to obtain at least one image collected by at least one camera installed at a real attraction corresponding to the virtual tour attraction information;
an establishing module, configured to establish the three-dimensional scene model of the virtual tour attraction according to the at least one image.
13. The apparatus according to claim 12, characterized in that the three-dimensional scene model establishing unit further comprises:
a second acquiring module, configured to obtain laser point cloud data collected by a lidar installed at the position of the camera at the real attraction; and
the establishing module is further configured to:
determine, according to the at least one image and the laser point cloud data, relative position information, with respect to the camera, of the physical objects indicated by the pixels in the at least one image;
establish the three-dimensional scene model of the virtual tour attraction according to the at least one image and the relative position information of the physical objects indicated by the pixels in the at least one image.
14. The apparatus according to claim 13, characterized in that the three-dimensional scene model establishing unit further comprises:
a third acquiring module, configured to obtain first absolute position information collected by a positioning receiver installed at the position of the camera at the real attraction; and
the establishing module is further configured to:
determine, according to the first absolute position information and the relative position information of the physical objects indicated by the pixels in the at least one image, second absolute position information of the physical objects indicated by the pixels in the at least one image;
establish the three-dimensional scene model of the virtual tour attraction according to the at least one image and the second absolute position information of the physical objects indicated by the pixels in the at least one image.
15. A device, comprising:
one or more processors; and
a memory for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to perform the method according to any one of claims 1 to 7.
CN201611024592.6A 2016-11-17 2016-11-17 Video presentation method, device and equipment Pending CN108074278A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611024592.6A CN108074278A (en) 2016-11-17 2016-11-17 Video presentation method, device and equipment

Publications (1)

Publication Number Publication Date
CN108074278A true CN108074278A (en) 2018-05-25

Family

ID=62160742

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611024592.6A Pending CN108074278A (en) 2016-11-17 2016-11-17 Video presentation method, device and equipment

Country Status (1)

Country Link
CN (1) CN108074278A (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109189224A (en) * 2018-08-29 2019-01-11 合肥市徽马信息科技有限公司 A kind of guide system that the virtual scenic spot based on VR technology is visited
CN110322377A (en) * 2019-06-28 2019-10-11 德普信(天津)软件技术有限责任公司 Teaching method and system based on virtual reality
CN111241426A (en) * 2018-11-29 2020-06-05 本田技研工业株式会社 Content providing device, content providing method, and storage medium
CN111563357A (en) * 2020-04-28 2020-08-21 众妙之门(深圳)科技有限公司 Three-dimensional visual display method and system for electronic device
CN111741285A (en) * 2020-06-08 2020-10-02 上海龙旗科技股份有限公司 Real-time 3D scene implementation method and device
CN111862348A (en) * 2020-07-30 2020-10-30 腾讯科技(深圳)有限公司 Video display method, video generation method, video display device, video generation device, video display equipment and storage medium
CN112150603A (en) * 2019-06-28 2020-12-29 上海交通大学 Initial visual angle control and presentation method and system based on three-dimensional point cloud
CN112395518A (en) * 2020-11-30 2021-02-23 浙江神韵文化科技有限公司 Intelligent virtual tourism system based on Internet
CN113256815A (en) * 2021-02-24 2021-08-13 北京华清易通科技有限公司 Virtual reality scene fusion and playing method and virtual reality equipment
CN113593351A (en) * 2021-09-27 2021-11-02 华中师范大学 Three-dimensional comprehensive teaching field system and working method thereof
US11410570B1 (en) 2021-09-27 2022-08-09 Central China Normal University Comprehensive three-dimensional teaching field system and method for operating same
CN117455578A (en) * 2023-11-30 2024-01-26 北京英政科技有限公司 Travel destination popularization system based on virtual reality technology

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1932799A (en) * 2006-09-04 2007-03-21 罗中根 System and method for simulating real three-dimensional virtual network travel
CN102122393A (en) * 2010-01-09 2011-07-13 鸿富锦精密工业(深圳)有限公司 Method and system for building three-dimensional model and modeling device with system
CN102867280A (en) * 2012-08-23 2013-01-09 上海创图网络科技发展有限公司 Virtual tourism platform construction device and application thereof
CN103136784A (en) * 2011-11-29 2013-06-05 鸿富锦精密工业(深圳)有限公司 Street view establishing system and street view establishing method
US20140168416A1 (en) * 2008-02-05 2014-06-19 Olympus Imaging Corp. Virtual image generating apparatus, virtual image generating method, and recording medium storing virtual image generating program
CN104793740A (en) * 2015-04-02 2015-07-22 福建省纳金网信息技术有限公司 Method for achieving exercise based on virtual travel
CN105955455A (en) * 2016-04-15 2016-09-21 北京小鸟看看科技有限公司 Device and method for adding object in virtual scene
CN105955483A (en) * 2016-05-06 2016-09-21 乐视控股(北京)有限公司 Virtual reality terminal and visual virtualization method and device thereof


Similar Documents

Publication Publication Date Title
CN108074278A (en) Video presentation method, device and equipment
CN106887183B (en) A kind of interactive demonstration method and system of BIM augmented reality in building sand table
CA3090747C (en) Automatic rig creation process
CN110392902A (en) Use the operation of sparse volume data
JP2016218999A (en) Method for training classifier to detect object represented in image of target environment
WO2016114930A2 (en) Systems and methods for augmented reality art creation
CN109523345A (en) WebGL virtual fitting system and method based on virtual reality technology
CN106327589A (en) Kinect-based 3D virtual dressing mirror realization method and system
CN106575158A (en) Environmentally mapped virtualization mechanism
JP2022539160A (en) Simple environment solver with plane extraction
Li et al. Key technology of virtual roaming system in the museum of ancient high-imitative calligraphy and paintings
CN106447786A (en) Parallel space establishing and sharing system based on virtual reality technologies
CN110378947A (en) 3D model reconstruction method, device and electronic equipment
CN111815785A (en) Method and device for presenting reality model, electronic equipment and storage medium
CN112530005A (en) Three-dimensional model linear structure recognition and automatic restoration method
CN112053440A (en) Method for determining individualized model and communication device
Zhang et al. [Retracted] Virtual Reality Design and Realization of Interactive Garden Landscape
WO2021106855A1 (en) Data generation method, data generation device, model generation method, model generation device, and program
CN113144613A (en) Model-based volume cloud generation method
US20100134500A1 (en) Apparatus and method for producing crowd animation
Dong et al. A time-critical adaptive approach for visualizing natural scenes on different devices
CN116486018A (en) Three-dimensional reconstruction method, apparatus and storage medium
Bao et al. [Retracted] Artificial Intelligence and VR Environment Design of Digital Museum Based on Embedded Image Processing
Hempe Bridging the gap between rendering and simulation frameworks: concepts, approaches and applications for modern multi-domain VR simulation systems
JP7232552B1 (en) Information processing apparatus, method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180525