WO2018049682A1 - Virtual 3D scene production method and related device - Google Patents

Virtual 3D scene production method and related device

Info

Publication number
WO2018049682A1
WO2018049682A1, PCT/CN2016/099341, CN2016099341W
Authority
WO
WIPO (PCT)
Prior art keywords
scene
objects
operation object
time axis
virtual
Prior art date
Application number
PCT/CN2016/099341
Other languages
English (en)
French (fr)
Inventor
李西峙
Original Assignee
深圳市大富网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大富网络技术有限公司
Priority to CN201680039121.4A (CN107820622A)
Priority to PCT/CN2016/099341 (WO2018049682A1)
Publication of WO2018049682A1

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50 Controlling the output signals based on the game progress
    • A63F 13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/60 Methods for processing data by generating or executing the game program
    • A63F 2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images

Definitions

  • The present invention relates to the field of multimedia technologies, and in particular to a virtual 3D scene production method and related device.
  • Role-playing means that a user, from a first-person or third-person perspective, operates a virtual character with input devices such as a mouse and keyboard so that the character performs in a 3D movie or game.
  • As a way of operating virtual characters, role-playing is widely used in the field of 3D computer games, especially in FPS (first-person shooter) and RPG (role-playing) games.
  • A traditional 3D movie is a non-real-time movie: it is generally produced by rendering for a long time to output a compressed sequence of still images, which is then decompressed and played back for viewing.
  • The traditional 3D movie production process is: separately create independent 3D scenes, models, characters and actions; in wireframe mode, add the 3D models, characters and animation to the scene from the camera's viewpoint; after lengthy rendering, output a rendering or sequence of renderings; based on the rendered output, return to wireframe mode, adjust the original scene, and output the renderings again, repeating until a satisfactory rendering or sequence is obtained.
  • For the production of traditional 3D movie clips, refer to well-known 3D movie production software such as Autodesk 3ds Max and Maya.
  • The production of traditional 3D games is generally divided into a production and development environment (the game editor) and a game running environment.
  • The production and development environment of a game is similar to the production environment of a 3D movie.
  • The production process is: separately create independent 3D scenes, models, characters and actions; in the game editor, add 3D models, characters and animation to the scene from the camera's viewpoint; in the game editor, select objects in the scene and change their properties in the properties window; after editing the game, switch to the game running environment to view the result, and repeat until a satisfactory and correct game experience is obtained. For the production of traditional 3D games, refer to well-known 3D game development software such as Unreal Engine.
  • The above 3D video or game production methods in the prior art place high technical demands on production staff, who must master professional 3D development software, which is very difficult for ordinary users; moreover, the videos or games produced with existing 3D development software generally involve very large amounts of data, which is unfavourable for transmission and storage.
  • In view of this, the present invention provides a virtual 3D scene production method and related device.
  • In a first aspect, an embodiment of the present invention provides a virtual 3D scene production method, where the method includes: generating a first scene corresponding to the background required by the virtual 3D scene; separately acquiring operation instructions for a plurality of operation objects and controlling the plurality of operation objects to perform corresponding operations in the first scene according to the operation instructions; acquiring a time axis on which each of the plurality of operation objects performs its operations in the first scene; and merging each operation object and the operations it performs into the first scene according to the time axis to obtain a second scene.
  • Optionally, the method further includes: editing a plurality of the second scenes to obtain a virtual 3D movie.
  • Optionally, the operation includes at least one of the following: remaining still, moving, performing an action, interacting with other operation objects, and interacting with the first scene.
  • Optionally, generating the first scene corresponding to the background required by the virtual 3D scene specifically includes: retrieving various pre-generated models and objects, setting attributes of the models or objects, and placing them in a base scene to form the first scene.
  • Optionally, separately acquiring the operation instructions for the plurality of operation objects and controlling them to perform corresponding operations in the first scene according to the operation instructions specifically includes: acquiring a first user's operation instructions for a first operation object, controlling the first operation object to perform the corresponding operations in the first scene, and recording the time axis of those operations; and acquiring a second user's operation instructions for a second operation object, controlling the second operation object to perform the corresponding operations in the first scene, and recording the time axis of those operations.
  • Acquiring the time axis on which each of the plurality of operation objects performs its operations in the first scene specifically includes: acquiring the time axis of the first operation object and acquiring the time axis of the second operation object in the first scene.
  • Merging each operation object and the operations it performs into the first scene according to the time axis to obtain the second scene specifically includes: merging the first operation object and the operations it performs, and the second operation object and the operations it performs, into the first scene according to the time axis to obtain the second scene.
  • Optionally, the first user and the second user are the same user or different users.
  • Optionally, editing the plurality of second scenes to obtain a virtual 3D movie specifically includes: combining the plurality of second scenes in a serial, parallel, or series-parallel manner to obtain a virtual 3D video.
  • Optionally, the operation objects include one or more camera objects; the method further includes outputting an image or video captured by the camera object in the second scene.
  • Optionally, the operation objects include a plurality of camera objects, and separately acquiring the operation instructions for the plurality of operation objects and controlling them to perform corresponding operations in the first scene according to the operation instructions specifically includes: within the same time period, acquiring the operation instruction for the first operation object and controlling the first operation object according to it, and acquiring the operation instruction for the second operation object and controlling the second operation object according to it.
  • the present invention provides a virtual 3D scene making apparatus, where the apparatus includes:
  • a generating unit configured to generate a first scene corresponding to a background required by the virtual 3D scene
  • An operation unit configured to respectively acquire operation instructions for the plurality of operation objects, and control the plurality of operation objects to perform corresponding operations in the first scene according to the operation instructions;
  • An acquiring unit configured to acquire a time axis in which each of the plurality of operation objects performs an operation in the first scenario
  • a merging unit configured to merge each of the operation objects and operations performed by the operation into the first scene according to the time axis to obtain a second scenario.
  • the device further includes:
  • a clipping unit configured to edit a plurality of the second scenes to obtain a virtual 3D video.
  • Optionally, the operation includes at least one of the following: remaining still, moving, performing an action, interacting with other operation objects, and interacting with the first scene.
  • the generating unit is further configured to:
  • the operating unit is further configured to:
  • The acquiring unit is configured to: acquire the time axis of the first operation object and the time axis of the second operation object in the first scene.
  • The merging unit is configured to: merge the first operation object and the operations it performs, and the second operation object and the operations it performs, into the first scene according to the time axis to obtain the second scene.
  • The editing unit is further configured to: combine the plurality of second scenes in a serial, parallel, or series-parallel manner to obtain a virtual 3D video.
  • the operation object includes one or more camera objects; and the generating unit is further configured to:
  • An image or video captured by the camera object in the second scene is output.
  • the first user and the second user are the same user or different users.
  • Optionally, the operation objects include a plurality of camera objects, and the operation unit is configured to: within the same time period, acquire the operation instruction for the first operation object and control the first operation object according to it, and acquire the operation instruction for the second operation object and control the second operation object according to it.
  • the present invention provides a virtual 3D scene making device, where the device includes:
  • a computer readable program is stored in the memory
  • The processor runs the program in the memory and is configured at least to: generate a first scene corresponding to the background required by the virtual 3D scene; separately acquire operation instructions for a plurality of operation objects and control them to perform corresponding operations in the first scene according to the operation instructions; acquire the time axis on which each operation object performs its operations in the first scene; and merge each operation object and the operations it performs into the first scene according to the time axis to obtain a second scene.
  • The virtual 3D scene production method and device provided by the present invention simplify the production of 3D videos and games: operation is simple and does not require the user to master professional 3D production techniques, which lowers the technical barrier for users.
  • FIG. 1a is a flowchart of an embodiment of the virtual 3D scene production method provided by the present invention;
  • FIG. 1b is a flowchart of an embodiment of the virtual 3D scene production method provided by the present invention;
  • FIG. 1c is a structural diagram of an embodiment of the virtual 3D scene production apparatus provided by the present invention;
  • FIG. 2 is a hardware structural diagram of an embodiment of the virtual 3D scene production device provided by the present invention.
  • With reference to FIG. 1a, an embodiment of the method for producing a virtual 3D scene provided in the embodiments of the present invention includes the following steps.
  • S1 Generate a first scene corresponding to a background required by the virtual 3D scene.
  • The user can retrieve various pre-generated models and objects from the software system, place them in a base scene (which may be an optional base background provided by the software system, such as a blank background), and set attributes of the models or objects to complete generation of the first scene; it can be understood that the user arranges the required scene according to the needs of the virtual 3D scene, which is not limited here.
  • S2 Obtain an operation instruction for the plurality of operation objects respectively, and control the plurality of operation objects to perform corresponding operations in the first scene according to the operation instruction.
  • For example, a corresponding operation is performed in the first scene according to an operation instruction for the first operation object; the operation may include at least one of the following: remaining still, moving, performing an action, interacting with other operation objects, and interacting with the first scene.
  • The operation object mentioned here may be a user-controlled character in a 3D movie or 3D game, viewed from a first-person or third-person perspective; for example, a character in the scene may be controlled to walk forward or turn, or to pick up props in the scene and complete the tasks of the game character in the scene, which is not limited here.
  • The time taken by an operation object to complete an operation in the first scene according to an operation instruction may differ from object to object.
  • Each operation object has its own operations within the time period covered by its time axis; by using a unified time axis, multiple operation objects can be merged into the first scene, that is, the operations of the multiple operation objects can be presented or recorded together in the first scene under that time axis (a minimal data-model sketch of such an operation record is given at the end of this section).
  • The merged scene containing the multiple operation objects is taken as the second scene, so that operations performed on the multiple operation objects at the same moment are recorded or presented simultaneously in the second scene.
  • The virtual 3D scene production method provided by the present invention simplifies the production of 3D videos and games: operation is simple, no professional 3D production skills are required of the user, and the technical barrier for users is lowered.
  • With reference to FIG. 1b, another embodiment of the method for producing a virtual 3D scene provided in the embodiments of the present invention includes the following steps.
  • the user can retrieve various pre-generated models, objects (including operating objects) from the software system, place them in the base scene, and set properties on the model or objects to complete the generation of the first scene. It is understood that the user needs to arrange the required scene according to the needs of the virtual 3D scene for use, which is not limited herein.
  • The operation may include at least one of the following: remaining still, moving, performing an action, interacting with other operation objects, and interacting with the first scene.
  • The operation object mentioned here may be a user-controlled character in a 3D movie or 3D game, viewed from a first-person or third-person perspective; for example, a character in the scene may be controlled to walk forward or turn, or to pick up props in the scene and complete the tasks of the game character in the scene, which is not limited here.
  • There may be multiple operation objects in one scene, and each operation object is controlled to perform operations in the first scene according to its own operation instructions; for example, the first operation object is controlled to walk forward in the first scene according to its operation instruction, and the second operation object is controlled to walk to the right according to its operation instruction.
  • The operations on different operation objects may be the same or different, depending on what each operation object needs; the operation objects in the first scene are simply operated separately.
  • The time taken by an operation object to complete an operation in the first scene according to an operation instruction may differ; for example, if the first operation object takes 5 seconds to complete an operation in the first scene, those 5 seconds can serve as its time axis, and the operation of the first operation object is decomposed into frames, with 24 frames per second, so that the images of the first operation object join up smoothly.
  • Likewise, if the second operation object takes 10 seconds to complete its operation in the first scene, it can also be set to 24 frames per second, so that the operation of each operation object in the first scene is known at every moment.
  • For example, suppose the first scene contains only the time axis of the first operation object and the time axis of the second operation object and is played for 10 seconds at 24 frames per second: the operations of both the first and the second operation object appear in the first 5 seconds of the picture, while only the operation of the second operation object appears in the last 5 seconds.
  • The time axis of an operation object in the first scene can therefore be understood as the operation record of that object, and the operation of the object at every moment can be obtained from the time axis.
  • For example, separately acquiring the operation instructions for the plurality of operation objects and controlling them to perform corresponding operations in the scene according to the operation instructions may specifically include: acquiring the user's operation instructions for the first operation object, controlling the first operation object to move, perform actions and interact with the scene, and recording the time axis of the first operation object; and acquiring the user's operation instructions for the second operation object, controlling the second operation object to move, perform actions and interact with the scene and other objects, and recording the time axis of the second operation object.
  • Each operation object has its own operations within the time period covered by its time axis; by using a unified time axis, multiple operation objects can be merged into the first scene, that is, their operations can be presented or recorded together in the first scene under that time axis.
  • The merged scene containing the multiple operation objects is taken as the second scene; since the second scene contains the required operation objects and their operations, it can be understood as one completed fragment of a 3D movie, and a 3D movie is formed by editing multiple such fragments.
  • Merging each operation object into the first scene according to the time axis to obtain the second scene may specifically be: merging the first operation object and the operations it performs, and the second operation object and the operations it performs, into the first scene according to the time axis to obtain the second scene.
  • the second scene is the virtual 3D scene that we need to make.
  • the virtual 3D scene can be a 3D video scene or a 3D game scene.
  • the method may further include:
  • The required second scenes are produced as described in the preceding steps, and the resulting second scenes are edited to obtain a video; the video can be used in a 3D movie or a 3D game, and dubbing, sound effects and the like can of course be added as needed, which is not specifically limited here.
  • Specifically, separately acquiring the operation instructions for the plurality of operation objects and controlling them to perform corresponding operations in the first scene according to the operation instructions includes: acquiring the user's operation instructions for the first operation object, controlling it to perform the corresponding operations in the first scene, and recording its time axis; and acquiring the user's operation instructions for the second operation object, controlling it to perform the corresponding operations, and recording its time axis.
  • The first operation object and the operations it performs, and the second operation object and the operations it performs, are then merged into the first scene according to the time axis to obtain the second scene.
  • a plurality of the second scenes are clipped to obtain a virtual 3D video.
  • Editing the plurality of second scenes to obtain the virtual 3D video specifically includes: combining the plurality of second scenes in a serial, parallel, or series-parallel manner to obtain the virtual 3D video.
  • The series and parallel connections referred to in the embodiments of the present invention refer to virtual wires and movie blocks in the 3D world.
  • the method further includes:
  • In a 3D scene, actions are recorded one by one through physically simulated role-playing, and a group of characters can be recorded into one movie block.
  • Movie blocks can be placed in the scene and edited visually by connecting them with virtual wires, in parallel or in series.
  • Specifically, multiple second scenes are connected with wires and connectors to edit them into a segment of 3D video; the wires and connectors are editing tools provided by the software system, and the second scenes obtained by merging in the software system are used for this editing.
  • The operation objects may include one camera object or multiple camera objects; with one camera object, the user controls the operation objects one after another to complete the operations, while with multiple camera objects, multiple users can control and complete the operations simultaneously.
  • When the operation objects include a camera object, the above method may further include: outputting the image or video captured by the camera object in the second scene, that is, recording, from the camera object's viewpoint, each operation object in the second scene and the operations it performs, in the way a camera films video in the real world.
  • When the operation objects include multiple camera objects, the operation instruction for the first operation object is acquired and the first operation object is controlled according to it at the same time as the operation instruction for the second operation object is acquired and the second operation object is controlled according to it.
  • With multiple camera objects, multiple users can operate together; for example, in a networked game, multiple users can each perform their own operations.
  • The virtual 3D scene production method provided by the present invention simplifies the production of 3D videos and games: operation is simple, no professional 3D production skills are required of the user, the technical barrier for users is lowered, and the amount of data in the produced 3D video or game is relatively small, making it easy to transmit and store.
  • The foregoing describes the virtual 3D scene production method; correspondingly, an embodiment of the present invention provides a virtual 3D scene production apparatus (see FIG. 1c), where the apparatus includes:
  • a generating unit 101 configured to generate a first scene corresponding to a background required by the virtual 3D scene
  • the operation unit 102 is configured to respectively acquire operation instructions for the plurality of operation objects, and control the plurality of operation objects to perform corresponding operations in the first scene according to the operation instructions;
  • the obtaining unit 103 is configured to acquire a time axis in which each of the plurality of operation objects performs an operation in the first scene;
  • the merging unit 104 is configured to merge each of the operation objects and the operations performed by the operation object into the first scene according to the time axis to obtain a second scenario.
  • the device further includes:
  • the clipping unit 105 is configured to edit a plurality of the second scenes to obtain a virtual 3D video.
  • the generating unit 101 is further configured to:
  • the operating unit 102 is further configured to:
  • the obtaining unit 103 is configured to:
  • The merging unit is configured to: merge the first operation object and the operations it performs, and the second operation object and the operations it performs, into the first scene according to the time axis to obtain the second scene.
  • the editing unit 105 is further configured to:
  • The plurality of second scenes are combined in a serial, parallel, or series-parallel manner to obtain a virtual 3D video.
  • the operation object includes one or more camera objects; and the generating unit is further configured to:
  • An image or video captured by the camera object in the second scene is output.
  • the first user and the second user are the same user or different users.
  • the operation object includes a plurality of camera objects
  • the operation unit 102 is further configured to:
  • the present invention further provides a virtual 3D scene making device, where the device includes:
  • a memory 203 and a processor 201;
  • the memory 203 stores a computer readable program
  • the processor 201 is configured to execute a virtual 3D scene making method by running a program in the memory.
  • the processor 201 is configured to:
  • The virtual 3D scene production device provided by the present invention simplifies the production of 3D videos and games: operation is simple, no professional 3D production skills are required of the user, the technical barrier for users is lowered, and the amount of data in the produced 3D video or game is relatively small, making it easy to transmit and store.
  • the virtual 3D scene production device can be implemented by means of the computer device (or system) in FIG. 2.
  • FIG. 2 is a schematic diagram of a computer device according to an embodiment of the present invention.
  • the computer device 200 includes at least one processor 201, a communication bus 202, a memory 203, and at least one communication interface 204.
  • the processor 201 can be a general purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the program of the present invention.
  • Communication bus 202 can include a path for communicating information between the components described above.
  • The communication interface 204 uses any transceiver-like device for communicating with other devices or communication networks, such as Ethernet, a radio access network (RAN), or a wireless local area network (WLAN).
  • The memory 203 may be a read-only memory (ROM) or another type of static storage device capable of storing static information and instructions, a random access memory (RAM) or another type of dynamic storage device capable of storing information and instructions, or an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, and the like), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
  • The memory may exist independently and be connected to the processor via the bus, or the memory may be integrated with the processor.
  • The memory 203 is configured to store the program code for executing the solution of the present invention, and execution is controlled by the processor 201; the processor 201 is configured to execute the program code stored in the memory 203.
  • In a specific implementation, as an embodiment, the processor 201 may include one or more CPUs, such as CPU0 and CPU1 in FIG. 2.
  • In a specific implementation, as an embodiment, the computer device 200 may include multiple processors, such as the processor 201 and the processor 208 in FIG. 2; each of these processors may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor.
  • a processor herein may refer to one or more devices, circuits, and/or processing cores for processing data, such as computer program instructions.
  • computer device 200 may also include an output device 205 and an input device 206.
  • Output device 205 is in communication with processor 201 and can display information in a variety of ways.
  • the output device 205 can be a liquid crystal display (LCD), a light emitting diode (LED) display device, a cathode ray tube (CRT) display device, or a projector.
  • Input device 206 is in communication with processor 201 and can accept user input in a variety of ways.
  • input device 206 can be a mouse, keyboard, touch screen device or sensing device, and the like.
  • the computer device 200 described above can be a general purpose computer device or a special purpose computer device.
  • The computer device 200 may be a desktop computer, a portable computer, a network server, a personal digital assistant (PDA), a mobile phone, a tablet computer, a wireless terminal device, a communication device, an embedded device, or a device having a structure similar to that in FIG. 5.
  • Embodiments of the invention do not limit the type of computer device 200.
  • One or more software modules are stored in the memory of the virtual 3D scene production device.
  • the virtual 3D scene making device can realize the virtual 3D scene making function by implementing the software module through the processor and the program code in the memory.
  • the disclosed system, apparatus, and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • The division of the units is only a division by logical function, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the program may be stored in a computer readable storage medium, and the storage medium may include: Read Only Memory (ROM), Random Access Memory (RAM), disk or optical disk.
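As referenced earlier in this section, the per-object time axis is essentially an operation record indexed by time. The sketch below is only an illustration of that idea; the classes and field names are invented for the example and are not part of the patent.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Operation:
    """One recorded operation of an operation object (e.g. move, act, interact)."""
    start: float          # seconds from the start of the recording
    duration: float       # how long the operation takes in the first scene
    kind: str             # "stay_still", "move", "action", "interact_object", "interact_scene"
    params: dict = field(default_factory=dict)

@dataclass
class TimeAxis:
    """Operation record ("time axis") of a single operation object in the first scene."""
    object_id: str
    operations: List[Operation] = field(default_factory=list)

    def record(self, op: Operation) -> None:
        self.operations.append(op)

    def length(self) -> float:
        """Total time covered by this object's recorded operations."""
        return max((op.start + op.duration for op in self.operations), default=0.0)

class FirstScene:
    """Background scene in which operation objects are controlled and recorded."""
    def __init__(self, name: str):
        self.name = name
        self.time_axes: Dict[str, TimeAxis] = {}

    def control(self, object_id: str, op: Operation) -> None:
        """Apply an operation instruction to an operation object and record it."""
        axis = self.time_axes.setdefault(object_id, TimeAxis(object_id))
        axis.record(op)

# Example: two users each drive one operation object in the same first scene.
scene = FirstScene("street_corner")
scene.control("character_A", Operation(0.0, 5.0, "move", {"direction": "forward"}))
scene.control("character_B", Operation(0.0, 10.0, "move", {"direction": "right"}))
print({oid: axis.length() for oid, axis in scene.time_axes.items()})
# {'character_A': 5.0, 'character_B': 10.0}
```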

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Signal Processing (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A virtual 3D scene production method and related device simplify the production of 3D videos and games: operation is simple, the user is not required to master professional 3D production techniques, the technical barrier for users is lowered, and the amount of data in the produced 3D video or game is relatively small, making it easy to transmit and store.

Description

Virtual 3D scene production method and related device
Technical Field
The present invention relates to the field of multimedia technologies, and in particular to a virtual 3D scene production method and related device.
Background
Role-playing means that a user, from a first-person or third-person perspective, operates a virtual character with input devices such as a mouse and keyboard so that the character performs in a 3D movie or game. As a way of operating virtual characters, role-playing is widely used in the field of 3D computer games, especially in FPS (first-person shooter) and RPG (role-playing) games.
A traditional 3D movie is a non-real-time movie: it is generally produced by rendering for a long time to output a compressed sequence of still images, which is then decompressed and played back for viewing. The traditional 3D movie production process is: separately create independent 3D scenes, models, characters and actions; in wireframe mode, add the 3D models, characters and animation to the scene from the camera's viewpoint; after lengthy rendering, output a rendering or sequence of renderings; based on the rendered output, return to wireframe mode, adjust the original scene, and output the renderings again, repeating until a satisfactory rendering or sequence is obtained. For the production of traditional 3D movie clips, refer to well-known 3D movie production software such as Autodesk 3dsmax and Maya.
The production of traditional 3D games is generally divided into a production and development environment (the game editor) and a game running environment. The production and development environment of a game is similar to the production environment of a 3D movie. The process is: separately create independent 3D scenes, models, characters and actions; in the game editor, add 3D models, characters and animation to the scene from the camera's viewpoint; in the game editor, select objects in the scene and change their properties in the properties window; after editing the game, switch to the game running environment to view the result, repeating until a satisfactory and correct game experience is obtained. For the production of traditional 3D games, refer to well-known 3D game development software such as Unreal Engine.
The above 3D video or game production methods in the prior art place high technical demands on production staff, who must master professional 3D development software, which is very difficult for ordinary users; moreover, the videos or games produced with existing 3D development software generally involve very large amounts of data, which is unfavourable for transmission and storage.
Summary
In view of this, the present invention provides a virtual 3D scene production method and related device.
In a first aspect, an embodiment of the present invention provides a virtual 3D scene production method, the method including:
generating a first scene corresponding to the background required by the virtual 3D scene;
separately acquiring operation instructions for a plurality of operation objects and, according to the operation instructions, controlling the plurality of operation objects to perform corresponding operations in the first scene one by one;
acquiring a time axis on which each of the plurality of operation objects performs its operations in the first scene;
merging each operation object and the operations it performs into the first scene according to the time axis to obtain a second scene.
Optionally, the method further includes:
editing a plurality of the second scenes to obtain a virtual 3D movie.
Optionally, the operation includes at least one of the following: remaining still, moving, performing an action, interacting with other operation objects, and interacting with the first scene.
Optionally, generating the first scene corresponding to the background required by the virtual 3D scene specifically includes:
retrieving various pre-generated models and objects, setting attributes of the models or objects, and placing them in a base scene to form the first scene.
Optionally, separately acquiring the operation instructions for the plurality of operation objects and, according to the operation instructions, controlling the plurality of operation objects to perform corresponding operations in the first scene one by one specifically includes:
acquiring a first user's operation instructions for a first operation object, controlling the first operation object to perform the corresponding operations in the first scene, and recording the time axis on which the first operation object performs the operations;
acquiring a second user's operation instructions for a second operation object, controlling the second operation object to perform the corresponding operations in the first scene, and recording the time axis on which the second operation object performs the operations;
acquiring the time axis on which each of the plurality of operation objects performs its operations in the first scene specifically includes:
acquiring the time axis on which the first operation object performs its operations in the first scene;
acquiring the time axis on which the second operation object performs its operations in the first scene;
merging each operation object and the operations it performs into the first scene according to the time axis to obtain the second scene specifically includes:
merging the first operation object and the operations it performs, and the second operation object and the operations it performs, into the first scene according to the time axis to obtain the second scene.
Optionally, the first user and the second user are the same user or different users.
Optionally, editing a plurality of the second scenes to obtain a virtual 3D movie specifically includes:
combining the plurality of second scenes in a serial, parallel, or series-parallel manner to obtain a virtual 3D video.
Optionally, the operation objects include one or more camera objects;
the method further includes: outputting the image or video captured by the camera object in the second scene.
Optionally, the operation objects include a plurality of camera objects, and separately acquiring the operation instructions for the plurality of operation objects and, according to the operation instructions, controlling the plurality of operation objects to perform corresponding operations in the first scene one by one specifically includes:
within the same time period, acquiring the operation instruction for the first operation object and controlling the first operation object according to the operation instruction, and acquiring the operation instruction for the second operation object and controlling the second operation object according to the operation instruction.
In a second aspect, the present invention provides a virtual 3D scene production apparatus, where the apparatus includes:
a generating unit, configured to generate a first scene corresponding to the background required by the virtual 3D scene;
an operation unit, configured to separately acquire operation instructions for a plurality of operation objects and, according to the operation instructions, control the plurality of operation objects to perform corresponding operations in the first scene one by one;
an acquiring unit, configured to acquire a time axis on which each of the plurality of operation objects performs its operations in the first scene;
a merging unit, configured to merge each operation object and the operations it performs into the first scene according to the time axis to obtain a second scene.
Optionally, the apparatus further includes:
an editing unit, configured to edit a plurality of the second scenes to obtain a virtual 3D video.
Optionally, the operation includes at least one of the following: remaining still, moving, performing an action, interacting with other operation objects, and interacting with the first scene.
Optionally, the generating unit is further configured to:
retrieve various pre-generated models and objects, set attributes of the models or objects, and place them in a base scene to form the first scene.
Optionally, the operation unit is further configured to:
acquire a first user's operation instructions for a first operation object, control the first operation object to perform the corresponding operations in the first scene, and record the time axis on which the first operation object performs the operations;
acquire a second user's operation instructions for a second operation object, control the second operation object to perform the corresponding operations in the second scene, and record the time axis on which the second operation object performs the operations;
the acquiring unit is configured to:
acquire the time axis on which the first operation object performs its operations in the first scene;
acquire the time axis on which the second operation object performs its operations in the first scene;
the merging unit is configured to:
merge the first operation object and the operations it performs, and the second operation object and the operations it performs, into the first scene according to the time axis to obtain the second scene.
Optionally, the editing unit is further configured to:
combine a plurality of second scenes in a serial, parallel, or series-parallel manner to obtain a virtual 3D video.
Optionally, the operation objects include one or more camera objects; the generating unit is further configured to:
output the image or video captured by the camera object in the second scene.
Optionally, the first user and the second user are the same user or different users.
Optionally, the operation objects include a plurality of camera objects, and the operation unit is configured to:
within the same time period, acquire the operation instruction for the first operation object and control the first operation object according to the operation instruction, and acquire the operation instruction for the second operation object and control the second operation object according to the operation instruction.
In a third aspect, the present invention provides a virtual 3D scene production device, where the device includes:
a memory and a processor;
wherein
a computer-readable program is stored in the memory;
the processor, by running the program in the memory, is configured at least to:
generate a first scene corresponding to the background required by the virtual 3D scene;
separately acquire operation instructions for a plurality of operation objects and, according to the operation instructions, control the plurality of operation objects to perform corresponding operations in the first scene one by one;
acquire a time axis on which each of the plurality of operation objects performs its operations in the first scene;
merge each operation object and the operations it performs into the first scene according to the time axis to obtain a second scene.
It can be seen from the above technical solutions that the embodiments of the present invention have the following advantages:
The virtual 3D scene production method and device provided by the present invention simplify the production of 3D videos and games: operation is simple, the user is not required to master professional 3D production techniques, and the technical barrier for users is lowered.
Brief Description of the Drawings
FIG. 1a is a flowchart of an embodiment of the virtual 3D scene production method provided by the present invention;
FIG. 1b is a flowchart of an embodiment of the virtual 3D scene production method provided by the present invention;
FIG. 1c is a structural diagram of an embodiment of the virtual 3D scene production apparatus provided by the present invention;
FIG. 2 is a hardware structural diagram of an embodiment of the virtual 3D scene production device provided by the present invention.
Detailed Description
In order to enable those skilled in the art to better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The terms "first", "second", "third", "fourth" and the like in the specification, the claims and the above drawings are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data so used may be interchanged where appropriate, so that the embodiments described here can be implemented in an order other than that illustrated or described. In addition, the terms "include" and "have" and any variants thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device that includes a series of steps or units is not necessarily limited to the steps or units clearly listed, but may include other steps or units that are not clearly listed or that are inherent to the process, method, product or device.
With reference to FIG. 1a, an embodiment of the virtual 3D scene production method provided in the embodiments of the present invention includes:
S1. Generate a first scene corresponding to the background required by the virtual 3D scene.
The user can retrieve various pre-generated models and objects from the software system and place them in a base scene (which may be an optional base background provided by the software system, such as a blank background), and may also set attributes of the models or objects, to complete generation of the first scene. This can be understood as the user arranging the required scene according to the needs of the virtual 3D scene, which is not limited here.
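As a rough illustration of this step only (the patent does not specify an implementation, and every name below is invented for the example), a first scene could be assembled from a library of pre-generated models as follows:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Model:
    """A pre-generated model or object retrieved from the software system's library."""
    name: str
    attributes: Dict[str, object] = field(default_factory=dict)

@dataclass
class BaseScene:
    """Optional base background provided by the system (e.g. a blank background)."""
    background: str = "blank"
    placed: List[Model] = field(default_factory=list)

def build_first_scene(library: Dict[str, Model],
                      wanted: List[str],
                      overrides: Dict[str, Dict[str, object]]) -> BaseScene:
    """Form the first scene: fetch models, apply attribute settings, place them."""
    scene = BaseScene()
    for name in wanted:
        model = library[name]
        model.attributes.update(overrides.get(name, {}))
        scene.placed.append(model)
    return scene

library = {"house": Model("house"), "tree": Model("tree"), "character_A": Model("character_A")}
first_scene = build_first_scene(
    library,
    wanted=["house", "tree", "character_A"],
    overrides={"house": {"color": "red"}, "character_A": {"controllable": True}},
)
print(first_scene.background, [m.name for m in first_scene.placed])
```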
S2. Separately acquire operation instructions for a plurality of operation objects and, according to the operation instructions, control the plurality of operation objects to perform corresponding operations in the first scene one by one.
For example, a corresponding operation is performed in the first scene according to an operation instruction for the first operation object. The operation may include at least one of the following: remaining still, moving, performing an action, interacting with other operation objects, and interacting with the first scene. The operation object mentioned here may be a user-controlled character in a 3D movie or 3D game, viewed from a first-person or third-person perspective; for example, a character in the scene may be controlled to walk forward or turn, or to pick up props in the scene and complete the tasks of the game character in the scene, which is not limited here.
S3. Acquire a time axis on which each of the plurality of operation objects performs its operations in the first scene.
The time taken by an operation object to complete an operation in the first scene according to an operation instruction may differ from object to object.
S4. Merge each operation object and the operations it performs into the first scene according to the time axis to obtain a second scene.
Each operation object has its own operations within the time period covered by its time axis. By using a unified time axis, multiple operation objects can be merged into the first scene, that is, the operations of the multiple operation objects under that time axis can be presented or recorded together in the first scene, and the merged scene containing the multiple operation objects is taken as the second scene, so that operations performed on the multiple operation objects at the same moment can be recorded or presented simultaneously in the second scene.
The virtual 3D scene production method provided by the present invention simplifies the production of 3D videos and games: operation is simple, no professional 3D production skills are required of the user, and the technical barrier for users is lowered.
With reference to FIG. 1b, another embodiment of the virtual 3D scene production method provided in the embodiments of the present invention includes:
S101. Generate a first scene corresponding to the background required by the virtual 3D scene.
The user can retrieve various pre-generated models and objects (which may include operation objects) from the software system and place them in a base scene, and may also set attributes of the models or objects, to complete generation of the first scene. This can be understood as the user arranging the required scene according to the needs of the virtual 3D scene, which is not limited here.
S102. Separately acquire operation instructions for a plurality of operation objects and, according to the operation instructions, control the plurality of operation objects to perform corresponding operations in the first scene one by one.
A corresponding operation is performed in the first scene according to an operation instruction for the first operation object. The operation may include at least one of the following: remaining still, moving, performing an action, interacting with other operation objects, and interacting with the first scene. The operation object mentioned here may be a user-controlled character in a 3D movie or 3D game, viewed from a first-person or third-person perspective; for example, a character in the scene may be controlled to walk forward or turn, or to pick up props in the scene and complete the tasks of the game character in the scene, which is not limited here.
There may be multiple operation objects in one scene, and each operation object is controlled to perform operations in the first scene according to its own operation instructions. For example, the first operation object is controlled to walk forward in the first scene according to its operation instruction, and the second operation object is controlled to walk to the right in the first scene according to its operation instruction. The operations on different operation objects may be the same or different, depending on what each operation object needs; the operation objects in the first scene are simply operated separately.
S103. Acquire a time axis on which each of the plurality of operation objects performs its operations in the first scene.
The time taken by an operation object to complete an operation in the first scene according to an operation instruction may differ. For example, if the first operation object takes 5 seconds to complete an operation in the first scene, those 5 seconds can serve as the time axis, and the operation of the first operation object is decomposed into frames, with 24 frames per second, so that the images of the first operation object join up smoothly. Likewise, if the second operation object takes 10 seconds to complete its operation in the first scene, it can also be set to 24 frames per second, so that the operation of each operation object in the first scene is known at every moment. For example, suppose the first scene contains only the time axis of the first operation object and the time axis of the second operation object and is played in the first scene for 10 seconds at 24 frames per second: the operations of the first operation object and the second operation object appear in the first 5 seconds of the picture, while only the operation of the second operation object appears in the last 5 seconds. The time axis of an operation object in the first scene can therefore be understood as the operation record of that object, and the operation of the object at every moment can be obtained from the time axis.
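The 5-second / 10-second example above can be reproduced in a few lines. This sketch is illustrative only; the per-object durations and the 24 fps sampling follow the text, but the data layout is an assumption:

```python
# Reproduce the worked example: two time axes of 5 s and 10 s, sampled at 24 fps,
# and ask which operation objects are active in each frame of the merged second scene.
FPS = 24

timelines = {
    "first_object": 5.0,    # finishes its operation after 5 seconds
    "second_object": 10.0,  # finishes its operation after 10 seconds
}

total_frames = int(max(timelines.values()) * FPS)   # 240 frames for 10 seconds

def active_objects(frame_index: int) -> list:
    """Operation objects whose recorded operations cover this frame."""
    t = frame_index / FPS
    return [name for name, length in timelines.items() if t < length]

print(active_objects(0))        # ['first_object', 'second_object']  (within the first 5 s)
print(active_objects(5 * FPS))  # ['second_object']                  (after 5 s)
print(total_frames)             # 240
```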
For example, separately acquiring the operation instructions for the plurality of operation objects and, according to the operation instructions, controlling the plurality of operation objects to perform corresponding operations in the scene one by one may specifically include:
acquiring the user's operation instructions for the first operation object, controlling the first operation object to move, perform actions and interact with the scene, and recording the time axis of the first operation object;
acquiring the user's operation instructions for the second operation object, controlling the second operation object to move, perform actions and interact with the scene and other objects, and recording the time axis of the second operation object.
The above uses two operation objects as an example; of course, the number of operation objects may be arbitrary, and no specific limit is imposed here.
S104. Merge each operation object and the operations it performs into the first scene according to the time axis to obtain a second scene.
In this way, operations performed on the multiple operation objects at the same moment can be recorded or presented simultaneously in the second scene.
Each operation object has its own operations within the time period covered by its time axis. By using a unified time axis, multiple operation objects can be merged into the first scene, that is, the operations of the multiple operation objects under that time axis can be presented or recorded together in the first scene, and the merged scene containing the multiple operation objects is taken as the second scene. Since the second scene contains the required operation objects and their operations, it can be understood as one completed fragment of a 3D movie; a 3D movie is formed by editing multiple such fragments.
Merging each operation object into the first scene according to the time axis to obtain the second scene may specifically be:
merging the first operation object and the operations it performs, and the second operation object and the operations it performs, into the first scene according to the time axis to obtain the second scene.
The second scene is the virtual 3D scene that we need to produce, and it may be a 3D video scene or a 3D game scene.
Optionally, the method may further include:
S105. Edit a plurality of the second scenes to obtain a virtual 3D video.
The required second scenes are produced as described in the preceding steps, and the resulting second scenes are edited to obtain a video. The video can be used in a 3D movie or a 3D game, and dubbing, sound effects and the like can of course be added as needed, which is not specifically limited here.
Specifically, separately acquiring the operation instructions for the plurality of operation objects and, according to the operation instructions, controlling the plurality of operation objects to perform corresponding operations in the first scene one by one specifically includes:
acquiring the user's operation instructions for the first operation object, controlling the first operation object to perform the corresponding operations in the first scene, and recording the time axis on which the first operation object performs the operations;
acquiring the user's operation instructions for the second operation object, controlling the second operation object to perform the corresponding operations in the second scene, and recording the time axis on which the second operation object performs the operations.
The first operation object and the operations it performs, and the second operation object and the operations it performs, are merged into the first scene according to the time axis to obtain the second scene.
A plurality of the second scenes are edited to obtain a virtual 3D video.
Optionally, editing the plurality of second scenes to obtain the virtual 3D video specifically includes:
combining the plurality of scenes in a serial, parallel, or series-parallel manner to obtain the virtual 3D video. The series and parallel connections mentioned in the embodiments of the present invention refer to virtual wires and movie blocks in the 3D world.
Optionally, after the plurality of scenes are combined in a serial, parallel, or series-parallel manner, the method further includes:
In a 3D scene, actions are recorded one by one through physically simulated role-playing, and a group of characters can be recorded into one movie block. Movie blocks can be placed in the scene and edited visually by connecting them with virtual wires, in parallel or in series. Specifically, multiple second scenes are connected with wires and connectors to edit the scenes into a segment of 3D video; the wires and connectors are editing tools provided by the software system, and the second scenes obtained by merging in the software system are used for this editing.
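As an informal sketch of this editing step (the virtual wires and connectors of the software system are only modelled abstractly here, and all names are hypothetical), second scenes can be treated as movie blocks whose durations combine serially or in parallel:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MovieBlock:
    """One second scene, treated as a clip with a fixed duration in seconds."""
    name: str
    duration: float

def combine_serial(blocks: List[MovieBlock]) -> float:
    """Blocks wired one after another: durations add up."""
    return sum(b.duration for b in blocks)

def combine_parallel(blocks: List[MovieBlock]) -> float:
    """Blocks wired side by side (played together): the longest one sets the length."""
    return max(b.duration for b in blocks)

intro = MovieBlock("intro", 8.0)
chase = MovieBlock("chase", 12.0)
crowd = MovieBlock("crowd_background", 12.0)

# Series-parallel combination: intro first, then chase and crowd played in parallel.
video_length = combine_serial([intro]) + combine_parallel([chase, crowd])
print(video_length)  # 20.0
```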
Optionally, the operation objects include one camera object or multiple camera objects. With one camera object, the user controls the operation objects one after another to complete the operations; with multiple camera objects, multiple users can control and complete the operations simultaneously.
When the operation objects include a camera object, the above method may further include: outputting the image or video captured by the camera object in the second scene, that is, recording, from the camera object's viewpoint, each operation object in the second scene and the operations it performs, in the way a camera films video in the real world.
Optionally, the operation objects include multiple camera objects, and within the same time period the operation instruction for the first operation object is acquired and the first operation object is controlled according to it, and the operation instruction for the second operation object is acquired and the second operation object is controlled according to it. With multiple camera objects, multiple users can operate together; for example, in a networked game, multiple users can each perform their own operations.
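The following sketch illustrates, under assumed data structures, how two users could each drive a camera object over the frames of a second scene and obtain that camera's output; render_view() is a placeholder, not an API of any real renderer:

```python
from typing import Dict, List, Tuple

def render_view(scene_frame: Dict[str, str], camera_pose: Tuple[float, float, float]) -> str:
    # Placeholder: a real implementation would rasterise the 3D scene from this pose.
    return f"view{camera_pose}:{sorted(scene_frame)}"

def capture(second_scene: List[Dict[str, str]],
            camera_tracks: Dict[str, List[Tuple[float, float, float]]]) -> Dict[str, List[str]]:
    """For every camera object, record one image per frame of the second scene."""
    output: Dict[str, List[str]] = {name: [] for name in camera_tracks}
    for i, frame in enumerate(second_scene):
        for name, track in camera_tracks.items():
            pose = track[min(i, len(track) - 1)]   # hold the last pose if the track is shorter
            output[name].append(render_view(frame, pose))
    return output

# Two frames of a merged second scene and two user-controlled camera objects.
second_scene = [{"character_A": "walk", "character_B": "turn"}, {"character_B": "wave"}]
cameras = {"camera_user1": [(0.0, 1.7, -5.0)], "camera_user2": [(3.0, 1.7, 0.0), (3.0, 1.7, 1.0)]}
print(capture(second_scene, cameras)["camera_user2"])
```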
The virtual 3D scene production method provided by the present invention simplifies the production of 3D videos and games: operation is simple, no professional 3D production skills are required of the user, the technical barrier for users is lowered, and the amount of data in the produced 3D video or game is relatively small, making it easy to transmit and store.
With reference to FIG. 1c, the foregoing has described the virtual 3D scene production method; correspondingly, an embodiment of the present invention provides a virtual 3D scene production apparatus, where the apparatus includes:
a generating unit 101, configured to generate a first scene corresponding to the background required by the virtual 3D scene;
an operation unit 102, configured to separately acquire operation instructions for a plurality of operation objects and, according to the operation instructions, control the plurality of operation objects to perform corresponding operations in the first scene one by one;
an acquiring unit 103, configured to acquire a time axis on which each of the plurality of operation objects performs its operations in the first scene;
a merging unit 104, configured to merge each operation object and the operations it performs into the first scene according to the time axis to obtain a second scene.
Optionally, the apparatus further includes:
an editing unit 105, configured to edit a plurality of the second scenes to obtain a virtual 3D video.
Optionally, the generating unit 101 is further configured to:
retrieve various pre-generated models and objects, set attributes of the models or objects, and place them in a base scene to form the first scene.
Optionally, the operation unit 102 is further configured to:
acquire a first user's operation instructions for a first operation object, control the first operation object to perform the corresponding operations in the first scene, and record the time axis on which the first operation object performs the operations;
acquire a second user's operation instructions for a second operation object, control the second operation object to perform the corresponding operations in the second scene, and record the time axis on which the second operation object performs the operations;
the acquiring unit 103 is configured to:
acquire the time axis on which the first operation object performs its operations in the first scene;
acquire the time axis on which the second operation object performs its operations in the first scene;
the merging unit is configured to:
merge the first operation object and the operations it performs, and the second operation object and the operations it performs, into the first scene according to the time axis to obtain the second scene.
Optionally, the editing unit 105 is further configured to:
combine a plurality of second scenes in a serial, parallel, or series-parallel manner to obtain a virtual 3D video.
Optionally, the operation objects include one or more camera objects, and the generating unit is further configured to:
output the image or video captured by the camera object in the second scene.
Optionally, the first user and the second user are the same user or different users.
Optionally, the operation objects include a plurality of camera objects, and the operation unit 102 is further configured to:
within the same time period, acquire the operation instruction for the first operation object and control the first operation object according to it, and acquire the operation instruction for the second operation object and control the second operation object according to it.
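Purely as an assumed structural sketch (not the actual apparatus), the units above could be mirrored by a small class whose methods correspond to units 101 to 104:

```python
class VirtualSceneApparatus:
    def __init__(self):
        self.first_scene = None
        self.time_axes = {}

    def generating_unit(self, background: str) -> None:
        """Unit 101: generate the first scene for the required background."""
        self.first_scene = {"background": background, "objects": {}}

    def operation_unit(self, object_id: str, instructions: list) -> None:
        """Unit 102: control one operation object according to its instructions."""
        self.first_scene["objects"][object_id] = instructions

    def acquiring_unit(self, object_id: str) -> list:
        """Unit 103: return the time axis (operation record) of one operation object."""
        return self.time_axes.setdefault(object_id, self.first_scene["objects"][object_id])

    def merging_unit(self) -> dict:
        """Unit 104: merge every object and its operations into the first scene."""
        return {"background": self.first_scene["background"],
                "merged": {oid: self.acquiring_unit(oid) for oid in self.first_scene["objects"]}}

apparatus = VirtualSceneApparatus()
apparatus.generating_unit("street_corner")
apparatus.operation_unit("character_A", [("move", 0.0, 5.0)])
apparatus.operation_unit("character_B", [("move", 0.0, 10.0)])
print(apparatus.merging_unit()["merged"].keys())
```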
With reference to FIG. 2, the present invention further provides a virtual 3D scene production device, where the device includes:
a memory 203 and a processor 201;
wherein
a computer-readable program is stored in the memory 203;
the processor 201 runs the program in the memory to carry out the virtual 3D scene production method.
Specifically, the processor 201 is configured to:
generate a first scene corresponding to the background required by the virtual 3D scene;
separately acquire operation instructions for a plurality of operation objects and, according to the operation instructions, control the plurality of operation objects to perform corresponding operations in the first scene one by one, where the operations include moving, performing an action, interacting with other operation objects, and interacting with the first scene;
acquire a time axis on which each of the plurality of operation objects performs its operations in the first scene;
merge each operation object and the operations it performs into the first scene according to the time axis to obtain a second scene.
The virtual 3D scene production device provided by the present invention simplifies the production of 3D videos and games: operation is simple, no professional 3D production skills are required of the user, the technical barrier for users is lowered, and the amount of data in the produced 3D video or game is relatively small, making it easy to transmit and store.
As shown in FIG. 2, the virtual 3D scene production device may be implemented as the computer device (or system) in FIG. 2.
FIG. 2 is a schematic diagram of a computer device according to an embodiment of the present invention. The computer device 200 includes at least one processor 201, a communication bus 202, a memory 203, and at least one communication interface 204.
The processor 201 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling execution of the program of the solution of the present invention.
The communication bus 202 may include a path for transferring information between the above components. The communication interface 204 uses any transceiver-like device for communicating with other devices or communication networks, such as Ethernet, a radio access network (RAN), or a wireless local area network (WLAN).
The memory 203 may be a read-only memory (ROM) or another type of static storage device capable of storing static information and instructions, a random access memory (RAM) or another type of dynamic storage device capable of storing information and instructions, or an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, and the like), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory may exist independently and be connected to the processor via the bus, or the memory may be integrated with the processor.
The memory 203 is configured to store the program code for executing the solution of the present invention, and execution is controlled by the processor 201. The processor 201 is configured to execute the program code stored in the memory 203.
In a specific implementation, as an embodiment, the processor 201 may include one or more CPUs, such as CPU0 and CPU1 in FIG. 2.
In a specific implementation, as an embodiment, the computer device 200 may include multiple processors, such as the processor 201 and the processor 208 in FIG. 2. Each of these processors may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor. A processor here may refer to one or more devices, circuits and/or processing cores for processing data (for example, computer program instructions).
In a specific implementation, as an embodiment, the computer device 200 may further include an output device 205 and an input device 206. The output device 205 communicates with the processor 201 and can display information in a variety of ways; for example, the output device 205 may be a liquid crystal display (LCD), a light-emitting diode (LED) display device, a cathode ray tube (CRT) display device, or a projector. The input device 206 communicates with the processor 201 and can accept user input in a variety of ways; for example, the input device 206 may be a mouse, a keyboard, a touch-screen device, or a sensing device.
The computer device 200 described above may be a general-purpose computer device or a special-purpose computer device. In a specific implementation, the computer device 200 may be a desktop computer, a portable computer, a network server, a personal digital assistant (PDA), a mobile phone, a tablet computer, a wireless terminal device, a communication device, an embedded device, or a device having a structure similar to that in FIG. 5. The embodiments of the present invention do not limit the type of the computer device 200.
The memory of the virtual 3D scene production device stores one or more software modules (for example, an interaction module and a processing module). The virtual 3D scene production device can implement the software modules through the processor and the program code in the memory to realize the virtual 3D scene production function.
A person skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for example, the division of the units is only a division by logical function, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses or units, and may be in electrical, mechanical or other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of a software functional unit.
A person of ordinary skill in the art can understand that all or some of the steps of the methods in the above embodiments may be completed by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and the like.
A person of ordinary skill in the art can understand that all or some of the steps for implementing the methods of the above embodiments may be completed by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium, and the aforementioned storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like.
The virtual 3D scene production method and device provided by the present invention have been described in detail above. For a person of ordinary skill in the art, in accordance with the ideas of the embodiments of the present invention, there will be changes in the specific implementations and the scope of application. In summary, the content of this specification should not be construed as a limitation on the present invention.

Claims (19)

  1. A virtual 3D scene production method, characterized in that the method comprises:
    generating a first scene corresponding to the background required by the virtual 3D scene;
    separately acquiring operation instructions for a plurality of operation objects and, according to the operation instructions, controlling the plurality of operation objects to perform corresponding operations in the first scene one by one;
    acquiring a time axis on which each of the plurality of operation objects performs its operations in the first scene;
    merging each of the operation objects and the operations it performs into the first scene according to the time axis to obtain a second scene.
  2. The method according to claim 1, characterized in that the method further comprises:
    editing a plurality of the second scenes to obtain a virtual 3D video.
  3. The method according to claim 1, characterized in that the operation comprises at least one of the following: remaining still, moving, performing an action, interacting with other operation objects, and interacting with the first scene.
  4. The method according to claim 3, characterized in that generating the first scene corresponding to the background required by the virtual 3D scene specifically comprises:
    retrieving various pre-generated models and objects, setting attributes of the models or objects, and placing them in a base scene to form the first scene.
  5. The method according to claim 4, characterized in that separately acquiring the operation instructions for the plurality of operation objects and, according to the operation instructions, controlling the plurality of operation objects to perform corresponding operations in the first scene one by one specifically comprises:
    acquiring a first user's operation instructions for a first operation object, controlling the first operation object to perform the corresponding operations in the first scene, and recording the time axis on which the first operation object performs the operations;
    acquiring a second user's operation instructions for a second operation object, controlling the second operation object to perform the corresponding operations in the second scene, and recording the time axis on which the second operation object performs the operations;
    acquiring the time axis on which each of the plurality of operation objects performs its operations in the first scene specifically comprises:
    acquiring the time axis on which the first operation object performs its operations in the first scene;
    acquiring the time axis on which the second operation object performs its operations in the first scene;
    merging each of the operation objects and the operations it performs into the first scene according to the time axis to obtain the second scene specifically comprises:
    merging the first operation object and the operations it performs, and the second operation object and the operations it performs, into the first scene according to the time axis to obtain the second scene.
  6. The method according to claim 5, characterized in that the first user and the second user are the same user or different users.
  7. The method according to claim 2, characterized in that editing the plurality of second scenes to obtain a virtual 3D video specifically comprises:
    combining the plurality of second scenes in a serial, parallel, or series-parallel manner to obtain the virtual 3D video.
  8. The method according to claim 1, characterized in that the operation objects comprise one or more camera objects;
    the method further comprises: outputting the image or video captured by the camera object in the second scene.
  9. The method according to claim 8, characterized in that the operation objects comprise a plurality of camera objects, and separately acquiring the operation instructions for the plurality of operation objects and, according to the operation instructions, controlling the plurality of operation objects to perform corresponding operations in the first scene one by one specifically comprises:
    within the same time period, acquiring the operation instruction for the first operation object and controlling the first operation object according to the operation instruction, and acquiring the operation instruction for the second operation object and controlling the second operation object according to the operation instruction.
  10. A virtual 3D scene production apparatus, characterized in that the apparatus comprises:
    a generating unit, configured to generate a first scene corresponding to the background required by the virtual 3D scene;
    an operation unit, configured to separately acquire operation instructions for a plurality of operation objects and, according to the operation instructions, control the plurality of operation objects to perform corresponding operations in the first scene one by one;
    an acquiring unit, configured to acquire a time axis on which each of the plurality of operation objects performs its operations in the first scene;
    a merging unit, configured to merge each of the operation objects and the operations it performs into the first scene according to the time axis to obtain a second scene.
  11. The apparatus according to claim 10, characterized in that the apparatus further comprises:
    an editing unit, configured to edit a plurality of the second scenes to obtain a virtual 3D video.
  12. The apparatus according to claim 10, characterized in that the operation comprises at least one of the following: remaining still, moving, performing an action, interacting with other operation objects, and interacting with the first scene.
  13. The apparatus according to claim 12, characterized in that the generating unit is further configured to:
    retrieve various pre-generated models and objects, set attributes of the models or objects, and place them in a base scene to form the first scene.
  14. The apparatus according to claim 13, characterized in that the operation unit is further configured to:
    acquire a first user's operation instructions for a first operation object, control the first operation object to perform the corresponding operations in the first scene, and record the time axis on which the first operation object performs the operations;
    acquire a second user's operation instructions for a second operation object, control the second operation object to perform the corresponding operations in the second scene, and record the time axis on which the second operation object performs the operations;
    the acquiring unit is configured to:
    acquire the time axis on which the first operation object performs its operations in the first scene;
    acquire the time axis on which the second operation object performs its operations in the first scene;
    the merging unit is configured to:
    merge the first operation object and the operations it performs, and the second operation object and the operations it performs, into the first scene according to the time axis to obtain the second scene.
  15. The apparatus according to claim 14, characterized in that the first user and the second user are the same user or different users.
  16. The apparatus according to claim 11, characterized in that the editing unit is further configured to:
    combine a plurality of second scenes in a serial, parallel, or series-parallel manner to obtain a virtual 3D video.
  17. The apparatus according to claim 10, characterized in that the operation objects comprise one or more camera objects; and the generating unit is further configured to:
    output the image or video captured by the camera object in the second scene.
  18. The apparatus according to claim 17, characterized in that the operation objects comprise a plurality of camera objects, and the operation unit is configured to:
    within the same time period, acquire the operation instruction for the first operation object and control the first operation object according to the operation instruction, and acquire the operation instruction for the second operation object and control the second operation object according to the operation instruction.
  19. A virtual 3D scene production device, characterized in that the device comprises:
    a memory and a processor;
    wherein
    a computer-readable program is stored in the memory;
    the processor, by running the program in the memory, is configured at least to:
    generate a first scene corresponding to the background required by the virtual 3D scene;
    separately acquire operation instructions for a plurality of operation objects and, according to the operation instructions, control the plurality of operation objects to perform corresponding operations in the first scene one by one;
    acquire a time axis on which each of the plurality of operation objects performs its operations in the first scene;
    merge each of the operation objects and the operations it performs into the first scene according to the time axis to obtain a second scene.
PCT/CN2016/099341 2016-09-19 2016-09-19 Virtual 3D scene production method and related device WO2018049682A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201680039121.4A 2016-09-19 2016-09-19 Virtual 3D scene production method and related device
PCT/CN2016/099341 2016-09-19 2016-09-19 Virtual 3D scene production method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/099341 WO2018049682A1 (zh) 2016-09-19 2016-09-19 一种虚拟3d场景制作方法及相关设备

Publications (1)

Publication Number Publication Date
WO2018049682A1 (zh)

Family

ID=61601120

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/099341 WO2018049682A1 (zh) 2016-09-19 2016-09-19 Virtual 3D scene production method and related device

Country Status (2)

Country Link
CN (1) CN107820622A (zh)
WO (1) WO2018049682A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115525181A (zh) * 2022-11-28 2022-12-27 深圳飞蝶虚拟现实科技有限公司 3D content production method and apparatus, electronic device, and storage medium

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109847347B (zh) * 2019-03-13 2022-11-04 网易(杭州)网络有限公司 Method, apparatus, medium and electronic device for controlling virtual operations in a game
CN110090437A (zh) * 2019-04-19 2019-08-06 腾讯科技(深圳)有限公司 Video acquisition method and apparatus, electronic device, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101247481A (zh) * 2007-02-16 2008-08-20 李西峙 System and method for producing and playing a real-time three-dimensional movie/game based on role-playing
EP2469474A1 (en) * 2010-12-24 2012-06-27 Dassault Systèmes Creation of a playable scene with an authoring system
CN102915553A (zh) * 2012-09-12 2013-02-06 珠海金山网络游戏科技有限公司 3D game video shooting system and method
CN102930582A (zh) * 2012-10-16 2013-02-13 郅刚锁 Animation production method based on a game engine
CN104240293A (zh) * 2014-09-26 2014-12-24 上海水晶石视觉展示有限公司 Method for realizing a highly realistic virtual stage
CN105354872A (zh) * 2015-11-04 2016-02-24 深圳墨麟科技股份有限公司 Rendering engine, implementation method and production tool for 3D web games

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001149640A (ja) * 1999-09-16 2001-06-05 Sega Corp Game machine, game processing method, and recording medium on which a program is recorded
CN1801899A (zh) * 2005-08-31 2006-07-12 珠海市西山居软件有限公司 Game recording playback system
CN101261660A (zh) * 2008-04-23 2008-09-10 广州国信达计算机网络通讯有限公司 Method for realizing game character actions with real-person images in an online game
CN103345395B (zh) * 2013-07-01 2016-06-29 绵阳市武道数码科技有限公司 3D game engine for massively multiplayer online role-playing


Also Published As

Publication number Publication date
CN107820622A (zh) 2018-03-20

Similar Documents

Publication Publication Date Title
US8405662B2 (en) Generation of video
US11638871B2 (en) Method, system and apparatus of recording and playing back an experience in a virtual worlds system
US10419510B2 (en) Selective capture with rapid sharing of user or mixed reality actions and states using interactive virtual streaming
  • KR102441514B1 (ko) Hybrid streaming
US20180143741A1 (en) Intelligent graphical feature generation for user content
  • WO2018049682A1 (zh) Virtual 3D scene production method and related device
  • KR20220086648A (ko) System and method for creating 2D movies from immersive content
US20240004529A1 (en) Metaverse event sequencing
US11165842B2 (en) Selective capture with rapid sharing of user or mixed reality actions and states using interactive virtual streaming
  • CN116017082A (zh) Information processing method and electronic device
Chu et al. Navigable videos for presenting scientific data on affordable head-mounted displays
  • JP2017167619A (ja) Three-dimensional content generation method, program, and client device
US11842190B2 (en) Synchronizing multiple instances of projects
US12002491B2 (en) Visual effect design using multiple preview windows
  • KR102396060B1 (ko) Changing the camera view in an electronic game
  • KR102158676B1 (ko) Scenario player system for scenarios with branches
US20230215465A1 (en) Visual effect design using multiple preview windows
Hillmann et al. VR Production Tools, Workflow, and Pipeline
  • JP2022167888A (ja) Generating interactive digital experiences using a real-time 3D rendering platform
EP2070570A1 (en) Generation of video
  • CN116342757A (zh) Shot speed-change processing method and apparatus, storage medium, and electronic apparatus
  • CN114205668A (zh) Video playback method and apparatus, electronic device, and computer-readable medium
  • CN114627266A (zh) Simulation-style video production method, system, apparatus, and storage medium
  • CN116528005A (zh) Method and apparatus for editing virtual model animation, electronic device, and storage medium
  • CN116437153A (zh) Method and apparatus for previewing a virtual model, electronic device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16916062

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16916062

Country of ref document: EP

Kind code of ref document: A1