WO2022013950A1 - Three-dimensional video image provision device, three-dimensional video image provision method, and program - Google Patents

Three-dimensional video image provision device, three-dimensional video image provision method, and program Download PDF

Info

Publication number
WO2022013950A1
Authority
WO
WIPO (PCT)
Prior art keywords
moving image
virtual camera
dimensional
unit
time
Prior art date
Application number
PCT/JP2020/027401
Other languages
French (fr)
Japanese (ja)
Inventor
Yusuke Yokosuka (横須賀 佑介)
Original Assignee
Mitsubishi Electric Corporation (三菱電機株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mitsubishi Electric Corporation
Priority to PCT/JP2020/027401, published as WO2022013950A1 (en)
Priority to JP2021505942A, published as JPWO2022013950A1 (ja)
Publication of WO2022013950A1 (en)

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/203D [Three Dimensional] animation

Definitions

  • This disclosure relates to a 3D moving image providing device, a 3D moving image providing method, and a program.
  • A 3D moving image is a moving image of a 3D object captured by a virtual camera arranged in a 3D virtual space.
  • The moving image generation method described in Patent Document 1 generates motion information that defines the motion of a three-dimensional model (three-dimensional object) in a three-dimensional virtual space and the movement path of a viewpoint position, and from that motion information determines shooting conditions under which the movement of the 3D model appears smooth to the viewer and the shooting interval of the subject is maximized.
  • A three-dimensional model is generated from a plurality of two-dimensional still images acquired under the determined shooting conditions, and a three-dimensional moving image is generated using the generated three-dimensional model and the motion information.
  • Hereinafter, "three-dimensional" is abbreviated as 3D.
  • The 3D moving image providing device according to this disclosure provides a 3D moving image virtually captured by a virtual camera arranged in a 3D virtual space. It includes an editing unit that, in the editing state of the 3D moving image, determines motion information over time for a 3D object existing in the 3D virtual space, and a reproduction unit that, in the reproduction state of the 3D moving image, reproduces a 3D moving image of the 3D object moving according to that motion information. The reproduction unit reproduces the 3D moving image while changing the orientation of the virtual camera so that the camera's line of sight follows the movement of the 3D object.
  • In other words, a 3D moving image is reproduced in which the orientation of the virtual camera is changed so that its line of sight follows the movement of the 3D object.
  • Thereby, the 3D moving image providing device can prevent the 3D object from leaving the virtual camera's field of view.
  • FIG. 1 is a block diagram showing the configuration of the 3D moving image providing device according to Embodiment 1.
  • FIG. 2 is a flowchart showing the 3D moving image providing method according to Embodiment 1.
  • FIG. 3 is an image diagram showing an outline of a 3D moving image.
  • FIG. 4 is an image diagram showing an outline of the editing state and the reproduction state.
  • FIG. 5 is an image diagram showing Example 1 of the process of determining motion information for a 3D object.
  • FIG. 6 is an image diagram showing Example 2 of the process of determining motion information for a 3D object.
  • FIG. 7 is an image diagram showing an outline of reproduction of a 3D moving image in Embodiment 1.
  • FIG. 8 is an image diagram showing an outline of reproduction of a conventional 3D moving image.
  • FIG. 9A is a block diagram showing a hardware configuration that realizes the functions of the 3D moving image providing device according to Embodiment 1, and FIG. 9B is a block diagram showing a hardware configuration that executes software realizing those functions.
  • FIG. 10 is a block diagram showing the configuration of the 3D moving image providing device according to Embodiment 2.
  • FIG. 11 is an image diagram showing an outline of the process of widening the virtual camera's field of view.
  • The 3D moving image providing device 1 includes a 3D space construction unit 11, an editing unit 12, and a reproduction unit 13. The editing unit 12 includes a motion determination unit 121.
  • The reproduction unit 13 includes a gazing point determination unit 131 and a 3D object tracking unit 132.
  • When the 3D moving image providing device 1 uses already-constructed 3D virtual space information, the 3D space construction unit 11 is omitted from its components.
  • In that case, the 3D moving image providing device 1 includes only the editing unit 12 and the reproduction unit 13, which read the 3D virtual space information to be edited or reproduced from an external storage device storing the constructed information, and edit or reproduce the 3D moving image.
  • In the editing state of the 3D moving image, the motion determination unit 121 determines motion information for the 3D object of the 3D moving image.
  • The motion information of the 3D object is, for example, information including the movement locus of the 3D object over time and the posture of the 3D object at each position along that locus.
  • The motion information also includes information indicating the orientation of the virtual camera over time.
  • The reproduction unit 13 reproduces a 3D moving image and displays it on the display device 2. For example, when the device shifts to the reproduction state of the 3D moving image according to instruction information received by the input device 3, the reproduction unit 13 reproduces a 3D moving image of the 3D object moving according to the motion information determined by the motion determination unit 121. In the reproduction state, the reproduction unit 13 reproduces the 3D moving image while changing the orientation of the virtual camera so that the camera's line of sight follows the movement of the 3D object.
  • The gazing point determination unit 131 determines, in the 3D moving image being reproduced, the intersection between the virtual camera's line of sight and the 3D object lying in the line-of-sight direction as the gazing point. For example, among the intersections between the line of sight and the 3D object, the gazing point determination unit 131 identifies the intersection closest to the virtual camera and determines that intersection as the gazing point.
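The patent does not specify how this intersection is computed. The following sketch, under the hypothetical simplification that each 3D object is represented by a bounding sphere (a real implementation would intersect the ray with the object's mesh), returns the intersection nearest the camera as the gazing point, mirroring the rule above. All function names are illustrative, not from the patent.

```python
import math

def ray_sphere_intersections(origin, direction, center, radius):
    """Return the distances t >= 0 at which the ray origin + t*direction hits the sphere."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx*dx + dy*dy + dz*dz
    b = 2 * (ox*dx + oy*dy + oz*dz)
    c = ox*ox + oy*oy + oz*oz - radius*radius
    disc = b*b - 4*a*c
    if disc < 0:
        return []                      # ray misses the sphere
    s = math.sqrt(disc)
    return [t for t in ((-b - s) / (2*a), (-b + s) / (2*a)) if t >= 0]

def gaze_point(camera_pos, line_of_sight, spheres):
    """Closest intersection of the camera's line of sight with any object, or None."""
    best_t = None
    for center, radius in spheres:
        for t in ray_sphere_intersections(camera_pos, line_of_sight, center, radius):
            if best_t is None or t < best_t:
                best_t = t
    if best_t is None:
        return None
    return tuple(camera_pos[i] + best_t * line_of_sight[i] for i in range(3))
```

With several objects along the line of sight, the nearest hit wins, matching the "intersection closest to the virtual camera" rule described above.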
  • The 3D object tracking unit 132 reproduces the 3D moving image being reproduced while changing the orientation of the virtual camera so that its line of sight follows the moving 3D object. For example, the 3D object tracking unit 132 changes the orientation of the virtual camera so that the line of sight follows the gazing point on the 3D object, and reproduces the 3D moving image virtually captured by the camera in its changed orientation.
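A minimal sketch of the tracking step, under the assumption that the camera stays in place and only its orientation (reduced here to a unit line-of-sight vector) is re-aimed at the gazing point before each frame is rendered; the function names are illustrative:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c*c for c in v))
    return tuple(c / n for c in v)

def update_camera_direction(camera_pos, target_point):
    """Re-aim the virtual camera so its line of sight passes through the target point."""
    return normalize(tuple(t - c for t, c in zip(target_point, camera_pos)))

def track_object(camera_pos, object_positions):
    """Per-frame tracking loop (sketch): one re-aimed line of sight per frame,
    so the moving object never leaves the center of the view."""
    return [update_camera_direction(camera_pos, p) for p in object_positions]
```

A full implementation would convert the new line-of-sight vector into the camera's orientation quaternion before rendering the frame.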
  • FIG. 2 is a flowchart showing a 3D moving image providing method according to the first embodiment.
  • First, the 3D space construction unit 11 constructs a 3D virtual space using the 3D space display data (step ST1).
  • FIG. 3 is an image diagram showing an outline of a 3D moving image.
  • The 3D virtual space illustrated in FIG. 3 is a 3D space in which 3D objects 31a and 31b are virtually arranged.
  • The direction of the line of sight A of the virtual camera 21 shown in FIG. 3 is the z direction of the xyz coordinate system set in the 3D virtual space.
  • The 3D moving image is a moving image of the 3D objects 31a and 31b in the 3D virtual space, virtually captured by the virtual camera 21.
  • Reproducing a 3D moving image is a process of generating a 2D image 2A containing 2D objects 32a and 32b, which are the projections of the 3D objects 31a and 31b onto a projection plane in the xy plane; the display device 2 displays the 2D image 2A.
  • A timeline image 33a showing the timeline of the 3D object 32 and a timeline image 33b showing the timeline of the virtual camera 21 are displayed.
  • The timeline images 33a and 33b are band-shaped images; by moving the slide bar 33c along their longitudinal direction using the input device 3, the time of the 3D object 32 and of the virtual camera 21 in the 3D moving image can be changed.
  • Switching between the editing state and the reproduction state with the play button 34 and the stop button 35 is only an example, and the method is not limited to this.
  • The editing unit 12 and the reproduction unit 13 may instead switch between the editing state and the reproduction state with a button dedicated to switching.
  • FIG. 5 is an image diagram showing Example 1 of the process of determining motion information for the 3D object 32.
  • The motion determination unit 121 interpolates the posture of the virtual camera 21 and the posture of the 3D object 32 over the period between the previous time (the time in the upper diagram of FIG. 5) and the changed time (the time in the lower diagram of FIG. 5). For example, by expressing each posture of the virtual camera 21 and the 3D object 32 as a quaternion, the motion determination unit 121 can interpolate both postures over that period.
  • The motion determination unit 121 also interpolates the movement locus of the 3D object 32 over the period between the previous time and the changed time in accordance with the motion constraint described above.
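Quaternion-based posture interpolation of this kind is conventionally done with spherical linear interpolation (slerp). The patent does not name a formula, so the following standard slerp sketch (quaternions as (w, x, y, z) tuples) is only one plausible realization:

```python
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions (w, x, y, z), t in [0, 1]."""
    dot = sum(a*b for a, b in zip(q0, q1))
    if dot < 0:                        # take the shorter arc between the two rotations
        q1, dot = tuple(-c for c in q1), -dot
    dot = min(1.0, dot)                # guard acos against rounding above 1
    theta = math.acos(dot)
    if theta < 1e-6:                   # nearly identical postures: fall back to lerp
        return tuple((1 - t)*a + t*b for a, b in zip(q0, q1))
    s = math.sin(theta)
    w0 = math.sin((1 - t) * theta) / s
    w1 = math.sin(t * theta) / s
    return tuple(w0*a + w1*b for a, b in zip(q0, q1))
```

Sampling `t` over the period between the previous time and the changed time yields a constant-angular-velocity sweep between the two postures, which is why quaternion representation is attractive for this interpolation.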
  • For example, when the moving object represented by the 3D object 32 is an automobile, its motion is limited to movements realizable by wheels, such as going straight, reversing, and turning right or left; sliding sideways is impossible.
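One common way to encode such a wheel constraint is a kinematic unicycle/bicycle-style model, in which displacement is always along the vehicle's current heading, so sideways sliding can never occur. This is a hedged illustration of the constraint, not the patent's stated method:

```python
import math

def step_car(x, y, heading, speed, yaw_rate, dt):
    """One Euler step of a wheeled vehicle: it may drive forward or backward and
    turn, but its displacement is always along the current heading (no side slide)."""
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    heading += yaw_rate * dt
    return x, y, heading

def interpolate_locus(start, speed, yaw_rate, duration, steps):
    """Sample a constraint-respecting movement locus between two key times."""
    x, y, heading = start
    dt = duration / steps
    locus = [(x, y)]
    for _ in range(steps):
        x, y, heading = step_car(x, y, heading, speed, yaw_rate, dt)
        locus.append((x, y))
    return locus
```

Interpolating the locus through such a model, rather than linearly between key positions, keeps every intermediate pose physically reachable by the wheels.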
  • FIG. 6 is an image diagram showing Example 2 of the process of determining motion information for the 3D object 32.
  • The position of the 3D object 32 on the display screen 2B shown in the upper part of FIG. 6 is its position in the frame image corresponding to the time indicated by the index 33d.
  • The frame image corresponding to the time indicated by the index 33d is referred to as a key frame.
  • When the time is changed, the motion determination unit 121 changes the position of the 3D object 32 in the key frame (the position shown in the upper part of FIG. 6) to the position of the 3D object 32 at the changed time.
  • The time at which the key frame is reproduced thus becomes the changed time indicated by the index 33d.
  • In other words, the motion determination unit 121 determines the motion information by mapping the position of the 3D object 32 at the previous time to the accepted changed time.
  • The process shown in FIG. 6 can be performed in both the editing state and the reproduction state. For example, when the time of the index 33d is changed, the reproduction unit 13 reproduces the key frame corresponding to the previous time at the changed time.
  • The images 2C(0) to 2C(2) are displayed on the display device 2.
  • The 3D object tracking unit 132 reproduces the 3D moving image while changing the direction of the virtual camera 21 so that the line of sight A of the virtual camera 21 follows the movement of the 3D object 32.
  • As a result, the image of the 3D object 32 is included in each of the images 2C(0), 2C(1), and 2C(2) displayed on the display device 2.
  • Conventionally, the movement of the 3D object 32 and the orientation of the virtual camera 21 are determined by separate controls. Therefore, as shown in FIG. 8, a state can occur in which the line of sight A of the virtual camera 21 does not follow the movement of the 3D object 32.
  • In that state, the 3D object 32 may leave the field of view of the virtual camera 21, and an image in which the 3D object 32 does not appear may be displayed, as in the images 2C(1) and 2C(2), for example.
  • FIG. 9A is a block diagram showing a hardware configuration that realizes the functions of the 3D moving image providing device 1.
  • FIG. 9B is a block diagram showing a hardware configuration for executing software that realizes the functions of the 3D moving image providing device 1.
  • The input interface 100 is an interface that relays the input information or the 3D space display data received by the input device 3.
  • The output interface 101 is an interface that relays the display information output to the display device 2.
  • The 3D moving image providing device 1 includes a processing circuit for executing the processing from step ST1 to step ST3 shown in FIG. 2.
  • The processing circuit may be dedicated hardware, or may be a CPU (Central Processing Unit) that executes a program stored in the memory.
  • The processing circuit 102 may be, for example, a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a combination thereof.
  • The functions of the 3D space construction unit 11, the editing unit 12, and the reproduction unit 13 included in the 3D moving image providing device 1 may be realized by separate processing circuits, or these functions may be realized collectively by a single processing circuit.
  • The processor 103 realizes the functions of the 3D space construction unit 11, the editing unit 12, and the reproduction unit 13 included in the 3D moving image providing device 1 by reading and executing the program stored in the memory 104.
  • The 3D moving image providing device 1 includes the memory 104, which stores a program that, when executed by the processor 103, results in the execution of the processes from step ST1 to step ST3 shown in FIG. 2.
  • These programs cause a computer to execute the procedures or methods of the 3D space construction unit 11, the editing unit 12, and the reproduction unit 13.
  • The memory 104 may be a computer-readable storage medium storing a program for causing the computer to function as the 3D space construction unit 11, the editing unit 12, and the reproduction unit 13.
  • The memory 104 may be, for example, a RAM (Random Access Memory), a ROM (Read Only Memory), a flash memory, an EPROM (Erasable Programmable Read Only Memory), or an EEPROM (Electrically Erasable Programmable Read Only Memory).
  • A part of the functions of the 3D space construction unit 11, the editing unit 12, and the reproduction unit 13 included in the 3D moving image providing device 1 may be realized by dedicated hardware, and the remaining part by software or firmware.
  • For example, the function of the 3D space construction unit 11 is realized by the processing circuit 102, which is dedicated hardware, while the functions of the editing unit 12 and the reproduction unit 13 are realized by the processor 103 reading and executing the program stored in the memory 104.
  • In this way, the processing circuit can realize the above functions by hardware, software, firmware, or a combination thereof.
  • The 3D moving image providing device 1 reproduces the 3D moving image while changing the direction of the virtual camera 21 so that the line of sight A of the virtual camera 21 follows the movement of the 3D object 32.
  • Thereby, the 3D moving image providing device 1 can prevent the 3D object 32 from leaving the field of view of the virtual camera 21. Even if the watched 3D object 32 moves within the 3D virtual space, the image displayed on the display device 2 always includes it, so the user can recognize how the position of the watched 3D object 32 changes over time by looking at the displayed image.
  • FIG. 11 is an image diagram showing an outline of the process of widening the field of view of the virtual camera 21.
  • In this case, the display device 2 displays the display screen 2C, which includes only a part of the 3D object 32.
  • The 3D object tracking unit 132A moves the virtual camera 21 in the direction D, away from the 3D object 32, along the extension line of the line of sight A. As a result, the field of view of the virtual camera 21 expands from the field of view C(1) to the field of view C(2).
  • The 3D object tracking unit 132A reproduces the 3D moving image virtually captured from the position closest to the original position of the virtual camera 21 among the positions at which the whole of the 3D object 32 fits within the field of view C(2).
  • The 3D object tracking unit 132A performs this field-of-view widening process on each frame image of the 3D moving image. As a result, the display device 2 displays the display screen 2D, which includes the whole of the 3D object 32.
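One possible reading of this pull-back step in code, under assumed simplifications (the 3D object is reduced to a bounding sphere, the line of sight is a unit vector, and the camera's view cone is described by a half-angle): move the camera backward along the extension of its line of sight just far enough that the sphere fits, leaving it in place if the object already fits.

```python
import math

def pull_back_camera(camera_pos, line_of_sight, obj_center, obj_radius, half_fov_rad):
    """Move the camera backward along its line of sight until the object's
    bounding sphere fits the view cone, stopping at the closest such position."""
    # distance from camera to sphere center needed for the sphere to fit the cone
    required = obj_radius / math.sin(half_fov_rad)
    to_obj = tuple(o - c for o, c in zip(obj_center, camera_pos))
    current = math.sqrt(sum(c*c for c in to_obj))
    if current >= required:
        return camera_pos            # already fits: do not move the camera
    back = required - current        # pull back along the extension of the line of sight
    return tuple(c - back * d for c, d in zip(camera_pos, line_of_sight))
```

Because the camera stops at the first distance where the whole sphere fits, this matches the idea of choosing the closest position at which the entire object is visible.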
  • The 3D moving image providing device 1A includes a processing circuit for executing the above processing.
  • The processing circuit may be the dedicated hardware processing circuit 102 shown in FIG. 9A, or the processor 103 that executes the program stored in the memory 104 shown in FIG. 9B.
  • The reproduction unit 13A widens the field of view of the virtual camera 21 by moving the camera away from the 3D object 32 along the extension line of its line of sight A.
  • Thereby, the 3D moving image providing device 1A can provide a 3D moving image in which not only the portion of the 3D object 32 containing the gazing point but the entire movement of the 3D object 32 can be visually recognized.
  • 1, 1A 3D moving image providing device, 2 display device, 2A 2D image, 2B, 2C, 2D display screen, 2C(0) to 2C(2) images, 3 input device, 11 3D space construction unit, 12 editing unit, 13, 13A reproduction unit, 21 virtual camera, 31a, 31b, 32 3D object, 32a, 32b 2D object, 33a, 33b timeline image, 33c slide bar, 33d index, 34 play button, 35 stop button, 36 gazing point, 100 input interface, 101 output interface, 102 processing circuit, 103 processor, 104 memory, 121 motion determination unit, 131 gazing point determination unit, 132, 132A 3D object tracking unit.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A 3D video image provision device (1) is provided with: an editing unit (12) which, in a 3D video image editing state, determines motion information for a 3D object (32) according to the passage of time; and a reproduction unit (13) which, in a 3D video image reproduction state, reproduces a 3D video image of the 3D object (32), which moves according to the motion information. The reproduction unit (13) reproduces the 3D video image by changing the direction of a virtual camera (21) so that the line of sight of the virtual camera (21) follows the movement of the 3D object.

Description

3D Moving Image Providing Device, 3D Moving Image Providing Method, and Program
This disclosure relates to a 3D moving image providing device, a 3D moving image providing method, and a program.
A 3D moving image is a moving image of a 3D object captured by a virtual camera arranged in a 3D virtual space. A conventional technique for generating a moving image of a 3D virtual space is, for example, the moving image generation method described in Patent Document 1. That method generates motion information defining the motion of a three-dimensional model (3D object) in a three-dimensional virtual space and the movement path of a viewpoint position, and from that motion information determines shooting conditions under which the movement of the 3D model appears smooth to the viewer and the shooting interval of the subject is maximized. A three-dimensional model is generated from a plurality of two-dimensional still images acquired under the determined shooting conditions, and a three-dimensional moving image is generated using the generated model and the motion information. Hereinafter, "three-dimensional" is written as 3D.
Japanese Unexamined Patent Application Publication No. 2008-140297 (JP 2008-140297 A)
In the moving image generation method described in Patent Document 1, the shooting of the 3D object by the virtual camera and the movement of the 3D object are not linked, so a 3D moving image in which the 3D object falls outside the virtual camera's field of view may be generated.
The present disclosure solves the above problem, and its object is to provide a 3D moving image providing device, a 3D moving image providing method, and a program capable of preventing a 3D object from leaving the virtual camera's field of view.
The 3D moving image providing device according to the present disclosure provides a 3D moving image virtually captured by a virtual camera arranged in a 3D virtual space, and includes an editing unit that, in the editing state of the 3D moving image, determines motion information over time for a 3D object existing in the 3D virtual space, and a reproduction unit that, in the reproduction state of the 3D moving image, reproduces a 3D moving image of the 3D object moving according to that motion information; the reproduction unit reproduces the 3D moving image while changing the orientation of the virtual camera so that the camera's line of sight follows the movement of the 3D object.
According to the present disclosure, a 3D moving image is reproduced in which the orientation of the virtual camera is changed so that its line of sight follows the movement of the 3D object. Thereby, the 3D moving image providing device according to the present disclosure can prevent the 3D object from leaving the virtual camera's field of view.
FIG. 1 is a block diagram showing the configuration of the 3D moving image providing device according to Embodiment 1.
FIG. 2 is a flowchart showing the 3D moving image providing method according to Embodiment 1.
FIG. 3 is an image diagram showing an outline of a 3D moving image.
FIG. 4 is an image diagram showing an outline of the editing state and the reproduction state.
FIG. 5 is an image diagram showing Example 1 of the process of determining motion information for a 3D object.
FIG. 6 is an image diagram showing Example 2 of the process of determining motion information for a 3D object.
FIG. 7 is an image diagram showing an outline of reproduction of a 3D moving image in Embodiment 1.
FIG. 8 is an image diagram showing an outline of reproduction of a conventional 3D moving image.
FIG. 9A is a block diagram showing a hardware configuration that realizes the functions of the 3D moving image providing device according to Embodiment 1, and FIG. 9B is a block diagram showing a hardware configuration that executes software realizing those functions.
FIG. 10 is a block diagram showing the configuration of the 3D moving image providing device according to Embodiment 2.
FIG. 11 is an image diagram showing an outline of the process of widening the virtual camera's field of view.
Embodiment 1.
FIG. 1 is a block diagram showing the configuration of the 3D moving image providing device 1 according to Embodiment 1. The 3D moving image providing device 1 illustrated in FIG. 1 reproduces a 3D moving image captured by a virtual camera arranged in a 3D virtual space, and displays the reproduced 3D moving image on the display device 2. The 3D moving image providing device 1 also edits the 3D moving image based on input information received by the input device 3.
The display device 2 is a device that displays the display information output from the 3D moving image providing device 1, and is, for example, a display of a computer functioning as the 3D moving image providing device 1. The input device 3 is a device that accepts input of information from the user, and is, for example, a touch panel provided on the screen of the display device 2, a mouse or hardware keys of the computer, or a voice recognition unit that recognizes speech collected by a microphone.
As shown in FIG. 1, the 3D moving image providing device 1 includes a 3D space construction unit 11, an editing unit 12, and a reproduction unit 13. The editing unit 12 includes a motion determination unit 121. The reproduction unit 13 includes a gazing point determination unit 131 and a 3D object tracking unit 132.
The 3D space construction unit 11 generates 3D virtual space information using the 3D space display data. The 3D space display data is data on the 3D objects arranged in the 3D virtual space and includes, for example, the various 3D objects, the motion of each 3D object, and the positional relationships between the 3D objects. The 3D virtual space information represents the 3D space in which the 3D objects indicated by the 3D space display data are virtually arranged.
When the 3D moving image providing device 1 uses already-constructed 3D virtual space information, the 3D space construction unit 11 is omitted from its components. For example, the 3D moving image providing device 1 then includes only the editing unit 12 and the reproduction unit 13, which read the 3D virtual space information to be edited or reproduced from an external storage device storing the constructed information, and edit or reproduce the 3D moving image.
The editing unit 12 edits the 3D moving image captured by the virtual camera arranged in the 3D virtual space. The editing unit 12 can also edit the 3D moving image based on input information received by the input device 3 while displaying the 3D moving image on the display device 2.
In the editing state of the 3D moving image, the motion determination unit 121 determines motion information for the 3D object of the 3D moving image. The motion information of the 3D object is, for example, information including the movement locus of the 3D object over time and the posture of the 3D object at each position along that locus. The motion information also includes information indicating the orientation of the virtual camera over time.
The reproduction unit 13 reproduces the 3D moving image and displays it on the display device 2. For example, when the device shifts to the reproduction state of the 3D moving image according to instruction information received by the input device 3, the reproduction unit 13 reproduces a 3D moving image of the 3D object moving according to the motion information determined by the motion determination unit 121. In the reproduction state, the reproduction unit 13 reproduces the 3D moving image while changing the orientation of the virtual camera so that the camera's line of sight follows the movement of the 3D object.
The gazing point determination unit 131 determines, in the 3D moving image being reproduced, the intersection between the virtual camera's line of sight and the 3D object lying in the line-of-sight direction as the gazing point. For example, among the intersections between the line of sight and the 3D object, the gazing point determination unit 131 identifies the intersection closest to the virtual camera and determines that intersection as the gazing point.
 The 3D object tracking unit 132 plays back the 3D moving image with the orientation of the virtual camera changed so that the line of sight of the virtual camera follows the moving 3D object. For example, the 3D object tracking unit 132 changes the orientation of the virtual camera so that the line of sight of the virtual camera follows the gaze point on the 3D object, and plays back the 3D moving image virtually captured by the virtual camera in the changed orientation.
 The details of the 3D moving image providing method according to the first embodiment are as follows.
 FIG. 2 is a flowchart showing the 3D moving image providing method according to the first embodiment.
 First, the 3D space construction unit 11 constructs a 3D virtual space using the 3D space display data (step ST1). FIG. 3 is a conceptual diagram showing an outline of a 3D moving image. The 3D virtual space illustrated in FIG. 3 is a 3D space in which 3D objects 31a and 31b are virtually arranged.
 The direction of the line of sight A of the virtual camera 21 shown in FIG. 3 is the z direction of the xyz coordinate system set in the 3D virtual space. The 3D moving image is a moving image in which the 3D objects 31a and 31b in the 3D virtual space are virtually captured by the virtual camera 21. Playing back the 3D moving image is a process of generating a 2D image 2A containing 2D objects 32a and 32b, which are the projections of the 3D objects 31a and 31b onto a projection plane in the xy plane; the 2D image 2A is displayed on the display device 2.
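The projection step described above can be sketched as follows. This is a minimal pinhole-camera illustration, assuming a camera at the origin looking along +z with focal length f; the disclosure does not specify the projection actually used by the device.

```python
def project_point(p, f=1.0):
    """Perspective-project a 3D point p = (x, y, z), given in camera
    coordinates, onto the image plane z = f, returning 2D (u, v)."""
    x, y, z = p
    if z <= 0:
        raise ValueError("point is behind the camera")
    return (f * x / z, f * y / z)

# Two vertices of a 3D object in front of the camera:
print(project_point((2.0, 1.0, 4.0)))  # (0.5, 0.25)
print(project_point((2.0, 1.0, 8.0)))  # (0.25, 0.125): farther away, smaller
```

Each frame of the played-back 3D moving image is, in effect, the result of applying such a projection to every visible point of the 3D objects.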
 For example, when the input device 3 accepts an instruction to edit the 3D moving image, the 3D moving image providing device 1 shifts to the editing state of the 3D moving image. In the editing state, the editing unit 12 determines the motion information of the 3D object (step ST2). When the input device 3 accepts an instruction to play back the 3D moving image, the 3D moving image providing device 1 shifts to the playback state of the 3D moving image. In the playback state, the playback unit 13 plays back the 3D moving image with the orientation of the virtual camera 21 changed so that the line of sight of the virtual camera 21 follows the movement of the 3D object (step ST3).
 FIG. 4 is a conceptual diagram showing an outline of the editing state and the playback state. Editing and playback of the 3D moving image can be switched while the image is displayed on the display device 2. For example, the display screen 2B displays an image in the editing state of the 3D moving image, and the display screen 2C displays an image in the playback state. The display screen 2B shows the virtual camera 21 and a 3D object 32 representing an automobile moving along the trajectories indicated by the broken-line arrows in accordance with the motion information. The display screen 2C shows the view seen by the virtual camera 21.
 The display screen 2B and the display screen 2C each display a timeline image 33a showing the timeline of the 3D object 32 and a timeline image 33b showing the timeline of the virtual camera 21. By using the input device 3 to move a slide bar 33c along the longitudinal direction of the band-shaped timeline images 33a and 33b, the time of the 3D object 32 and the time of the virtual camera 21 in the 3D moving image can be changed.
 The play button 34 displayed on the display screen 2B is a button for shifting to the playback state of the 3D moving image. For example, pressing the play button 34 using the input device 3 switches the display from the display screen 2B to the display screen 2C, and the playback unit 13 starts playing back the 3D moving image. The stop button 35 displayed on the display screen 2C is a button for stopping playback of the 3D moving image and shifting to the editing state. For example, pressing the stop button 35 using the input device 3 switches the display from the display screen 2C to the display screen 2B, and the editing unit 12 starts editing the 3D moving image.
 Note that switching between the editing state and the playback state with the play button 34 and the stop button 35 is merely an example, and the switching is not limited to this. Instead of the play button 34 and the stop button 35, the editing unit 12 and the playback unit 13 may switch between the editing state and the playback state with a dedicated switching button.
 FIG. 5 is a conceptual diagram showing a first example of the process of determining the motion information of the 3D object 32. In the editing state of the 3D moving image, when the positions (times) of the virtual camera 21 and the 3D object 32 on the timeline are changed from the state of the display screen 2B shown in the upper part of FIG. 5, the display switches to the display screen 2B shown in the lower part of FIG. 5.
 The motion determination unit 121 interpolates the postures of the virtual camera 21 and the 3D object 32 over the period between the previous time (the time in the upper part of FIG. 5) and the changed time (the time in the lower part of FIG. 5). For example, the motion determination unit 121 represents each pose of the virtual camera 21 and the 3D object 32 as a quaternion, and thereby interpolates the postures of the virtual camera 21 and the 3D object 32 over the period between the previous time and the changed time.
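Quaternion interpolation of the kind described above is commonly done with spherical linear interpolation (slerp). The following is an assumed minimal sketch of that standard routine, not code from this disclosure:

```python
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0 and q1
    (each a tuple (w, x, y, z)), for t in [0, 1]."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:            # flip one quaternion to take the shorter arc
        q1 = tuple(-c for c in q1)
        dot = -dot
    dot = min(dot, 1.0)
    theta = math.acos(dot)   # angle between the two orientations
    if theta < 1e-6:         # nearly identical: a linear blend suffices
        return tuple((1 - t) * a + t * b for a, b in zip(q0, q1))
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))

# Interpolate halfway between no rotation and a 90-degree rotation about z:
q_identity = (1.0, 0.0, 0.0, 0.0)
q_90z = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
q_half = slerp(q_identity, q_90z, 0.5)   # a 45-degree rotation about z
```

Evaluating such an interpolation at every frame time between the previous time and the changed time yields a smooth sequence of poses for both the virtual camera 21 and the 3D object 32.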
 Further, when the 3D object 32 represents a moving body subject to motion constraints, the motion determination unit 121 interpolates the movement locus of the 3D object 32 over the period between the previous time and the changed time in accordance with those motion constraints. For example, when the moving body represented by the 3D object 32 is an automobile, the automobile is limited to motions achievable with wheels, such as going straight, reversing, and turning right or left; sliding directly sideways is impossible.
 The motion determination unit 121 interpolates the movement locus of the 3D object 32 in accordance with the motion constraints of the automobile. For example, if, relative to the posture of the automobile at the previous time shown in the upper part of FIG. 5, the posture of the automobile at the changed time shown in the lower part of FIG. 5 has turned to the right, the motion determination unit 121 determines that, during the period between the previous time and the changed time, the automobile moved along a trajectory that curves to the right while reversing, as indicated by the broken-line arrow B on the display screen 2B in the lower part of FIG. 5.
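One common way to honor such wheeled-vehicle constraints is a kinematic bicycle model, in which the car can translate only along its heading (forward or in reverse) while the heading changes with the steering angle. The sketch below is an assumed illustration of that general idea; it is not the interpolation method actually used by the motion determination unit 121, and the parameter values are hypothetical.

```python
import math

def step_car(x, y, heading, v, steer, wheelbase=2.5, dt=0.1):
    """Advance a kinematic bicycle model by one time step.
    v < 0 means reversing; the car can never translate sideways."""
    x += v * math.cos(heading) * dt
    y += v * math.sin(heading) * dt
    heading += (v / wheelbase) * math.tan(steer) * dt
    return x, y, heading

# Reverse while steering: the car backs up along a curved path,
# as in the trajectory indicated by the broken-line arrow B.
x, y, heading = 0.0, 0.0, 0.0
for _ in range(20):
    x, y, heading = step_car(x, y, heading, v=-2.0, steer=0.3)
```

Integrating such a model between the two keyed poses produces only trajectories an automobile could physically follow.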
 FIG. 6 is a conceptual diagram showing a second example of the process of determining the motion information of the 3D object 32. The position of the 3D object 32 on the display screen 2B shown in the upper part of FIG. 6 is the position of the 3D object 32 in the frame image corresponding to the time indicated by an index 33d. Hereinafter, the frame image corresponding to the time indicated by the index 33d is referred to as a key frame.
 As shown in the lower part of FIG. 6, when the time indicated by the index 33d on the timeline image 33a is changed, the motion determination unit 121 changes the position of the 3D object 32 in the key frame (the position shown in the upper part of FIG. 6) to the position of the 3D object 32 at the changed time. The time at which the key frame is played back becomes the changed time indicated by the index 33d.
 In this way, when the input device 3 accepts a time change on the timeline image 33a, the motion determination unit 121 determines motion information in which the position of the 3D object 32 at the previous time is changed to its position at the accepted changed time. Note that the process shown in FIG. 6 can be performed in both the editing state and the playback state. For example, when the time of the index 33d is changed, the playback unit 13 plays back the key frame corresponding to the previous time at the changed time.
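The key-frame retiming described above can be sketched as moving an entry in a time-indexed table of keyed positions. The data structures and names below are assumptions for illustration, not those of the device:

```python
def retime_keyframe(keyframes, old_time, new_time):
    """Move the key frame stored at old_time to new_time, keeping the
    key frames ordered by time. keyframes maps time -> object position."""
    if old_time not in keyframes:
        raise KeyError(old_time)
    keyframes[new_time] = keyframes.pop(old_time)
    return dict(sorted(keyframes.items()))

# The key frame holding the object's position at t=2 is dragged to t=4:
kf = {0: (0, 0), 2: (5, 0), 6: (10, 0)}
kf = retime_keyframe(kf, 2, 4)   # {0: (0, 0), 4: (5, 0), 6: (10, 0)}
```

The stored position is unchanged; only the time at which that key frame is played back moves.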
 FIG. 7 is a conceptual diagram showing an outline of playback of the 3D moving image in the first embodiment, showing the position of the 3D object 32 and the orientation of the virtual camera 21 in the 3D virtual space at times T = 0, 1, and 2, and the played-back 3D moving image at each of those times. Time elapses in the order T = 0, T = 1, T = 2. The image 2C(0) is the frame image of the played-back 3D moving image at time T = 0, the image 2C(1) is the frame image at time T = 1, and the image 2C(2) is the frame image at time T = 2. The images 2C(0) to 2C(2) are displayed on the display device 2.
 As shown in FIG. 7, the gaze point determination unit 131 determines, among the intersections between the line of sight A of the virtual camera 21 and the 3D object 32 lying in the direction of the line of sight A, the intersection closest to the virtual camera 21 as the gaze point 36(0), 36(1), or 36(2) at times T = 0, 1, and 2, respectively.
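Selecting the intersection nearest the camera is a standard closest-hit ray query. A minimal sketch for a sphere-shaped object follows; the sphere shape and the function names are assumptions for illustration, since the disclosure does not restrict the object's geometry.

```python
import math

def ray_sphere_hits(origin, direction, center, radius):
    """Return the parameters t >= 0 where the ray origin + t*direction
    intersects the sphere; direction is assumed to be unit length."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * e for d, e in zip(direction, oc))
    c = sum(e * e for e in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return []                       # the line of sight misses the object
    root = math.sqrt(disc)
    return [t for t in ((-b - root) / 2.0, (-b + root) / 2.0) if t >= 0]

def gaze_point(origin, direction, center, radius):
    """Closest intersection of the camera's line of sight with the object."""
    hits = ray_sphere_hits(origin, direction, center, radius)
    if not hits:
        return None
    t = min(hits)                       # intersection nearest the camera
    return tuple(o + t * d for o, d in zip(origin, direction))

# Camera at the origin looking along +z toward a unit sphere at z = 5:
# the near surface point (0, 0, 4) becomes the gaze point.
print(gaze_point((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))   # (0.0, 0.0, 4.0)
```

For a mesh object, the same "smallest non-negative t" rule would be applied over the ray's intersections with the mesh triangles.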
 Next, as shown in FIG. 7, the 3D object tracking unit 132 plays back the 3D moving image with the orientation of the virtual camera 21 changed so that the line of sight A of the virtual camera 21 follows the movement of the 3D object 32. As a result, each of the images 2C(0), 2C(1), and 2C(2) displayed on the display device 2 contains the image of the 3D object 32.
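Keeping the line of sight A on the moving object amounts to recomputing the camera's forward vector toward the current gaze point at each frame, i.e. a look-at update. The following is a minimal sketch under assumed names; the disclosure does not specify how the orientation change is computed internally.

```python
import math

def look_at_direction(camera_pos, target):
    """Unit forward vector that aims the camera at target."""
    d = tuple(t - c for t, c in zip(target, camera_pos))
    norm = math.sqrt(sum(v * v for v in d))
    if norm == 0:
        raise ValueError("camera is at the target")
    return tuple(v / norm for v in d)

# As the object moves between T = 0, 1, 2, the camera's forward vector
# is re-aimed at the gaze point of each frame:
camera = (0.0, 0.0, 0.0)
gaze_points = [(0.0, 0.0, 4.0), (2.0, 0.0, 4.0), (4.0, 0.0, 4.0)]
forwards = [look_at_direction(camera, g) for g in gaze_points]
```

Because the forward vector always passes through the gaze point, the object cannot leave the center of the view as long as the update runs every frame.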
 FIG. 8 is a conceptual diagram showing an outline of playback of a conventional 3D moving image; like FIG. 7, it shows the position of the 3D object 32 and the orientation of the virtual camera 21 in the 3D virtual space at times T = 0, 1, and 2, and the played-back 3D moving image at each of those times.
 In the conventional technique, the movement of the 3D object 32 and the orientation of the virtual camera 21 are determined by separate controls, so, as shown in FIG. 8, a state can occur in which the line of sight A of the virtual camera 21 does not follow the movement of the 3D object 32. In this case, the 3D object 32 may go out of the field of view of the virtual camera 21, and images in which the 3D object 32 does not appear, such as the images 2C(1) and 2C(2), may be displayed.
 The hardware configuration that realizes the functions of the 3D moving image providing device 1 is as follows.
 FIG. 9A is a block diagram showing a hardware configuration that realizes the functions of the 3D moving image providing device 1, and FIG. 9B is a block diagram showing a hardware configuration that executes software realizing those functions. In FIGS. 9A and 9B, the input interface 100 is an interface that relays the input information accepted by the input device 3 or the 3D space display data. The output interface 101 is an interface that relays the display information output to the display device 2.
 The functions of the 3D space construction unit 11, the editing unit 12, and the playback unit 13 included in the 3D moving image providing device 1 are realized by a processing circuit. That is, the 3D moving image providing device 1 includes a processing circuit for executing the processing from step ST1 to step ST3 shown in FIG. 2. The processing circuit may be dedicated hardware, or may be a CPU (Central Processing Unit) that executes a program stored in a memory.
 When the processing circuit is the dedicated hardware processing circuit 102 shown in FIG. 9A, the processing circuit 102 corresponds to, for example, a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a combination thereof. The functions of the 3D space construction unit 11, the editing unit 12, and the playback unit 13 included in the 3D moving image providing device 1 may be realized by separate processing circuits, or may be realized collectively by a single processing circuit.
 When the processing circuit is the processor 103 shown in FIG. 9B, the functions of the 3D space construction unit 11, the editing unit 12, and the playback unit 13 included in the 3D moving image providing device 1 are realized by software, firmware, or a combination of software and firmware. The software or firmware is described as a program and stored in the memory 104.
 The processor 103 realizes the functions of the 3D space construction unit 11, the editing unit 12, and the playback unit 13 by reading and executing the program stored in the memory 104. For example, the 3D moving image providing device 1 includes the memory 104 storing a program which, when executed by the processor 103, results in the execution of the processing from step ST1 to step ST3 shown in FIG. 2. These programs cause a computer to execute the procedures or methods of the 3D space construction unit 11, the editing unit 12, and the playback unit 13. The memory 104 may be a computer-readable storage medium storing a program for causing a computer to function as the 3D space construction unit 11, the editing unit 12, and the playback unit 13.
 The memory 104 corresponds to, for example, a nonvolatile or volatile semiconductor memory such as a RAM (Random Access Memory), a ROM (Read Only Memory), a flash memory, an EPROM (Erasable Programmable Read Only Memory), or an EEPROM (Electrically Erasable Programmable Read Only Memory), or to a magnetic disk, a flexible disk, an optical disc, a compact disc, a mini disc, a DVD, or the like.
 Some of the functions of the 3D space construction unit 11, the editing unit 12, and the playback unit 13 included in the 3D moving image providing device 1 may be realized by dedicated hardware, and the rest may be realized by software or firmware. For example, the function of the 3D space construction unit 11 may be realized by the processing circuit 102, which is dedicated hardware, while the functions of the editing unit 12 and the playback unit 13 are realized by the processor 103 reading and executing the program stored in the memory 104. In this way, the processing circuit can realize the above functions by hardware, software, firmware, or a combination thereof.
 As described above, the 3D moving image providing device 1 plays back the 3D moving image with the orientation of the virtual camera 21 changed so that the line of sight A of the virtual camera 21 follows the movement of the 3D object 32. The 3D moving image providing device 1 can thereby prevent the 3D object 32 from going out of the field of view of the virtual camera 21. Even if the watched 3D object 32 moves within the 3D virtual space, the image displayed on the display device 2 always contains the 3D object 32. Therefore, by viewing the images displayed on the display device 2, the user can recognize the change in position of the watched 3D object 32 over time.
Embodiment 2.
 FIG. 10 is a block diagram showing the configuration of a 3D moving image providing device 1A according to the second embodiment. The 3D moving image providing device 1A illustrated in FIG. 10 includes the 3D space construction unit 11, the editing unit 12, and a playback unit 13A. The playback unit 13A includes the gaze point determination unit 131 and a 3D object tracking unit 132A. The 3D object tracking unit 132A widens the field of view of the virtual camera by moving the virtual camera away from the 3D object along an extension of the line of sight of the virtual camera.
 FIG. 11 is a conceptual diagram showing an outline of the process of widening the field of view of the virtual camera 21. As shown in FIG. 11, when the watched 3D object 32 does not completely fit in the field of view C(1) of the virtual camera 21, the display device 2 displays the display screen 2C containing only a part of the 3D object 32. The 3D object tracking unit 132A moves the virtual camera 21 in a direction D away from the 3D object 32 along the extension of the line of sight A. The field of view of the virtual camera 21 thereby expands from the field of view C(1) to the field of view C(2).
 The 3D object tracking unit 132A plays back a 3D moving image virtually captured from the nearest position at which the entire 3D object 32 fits within the field of view C(2) of the virtual camera 21. The 3D object tracking unit 132A performs this field-of-view widening process for each frame image of the 3D moving image. As a result, the display device 2 displays the display screen 2D containing the entire 3D object 32.
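If the object is approximated by a bounding sphere, the required pull-back has a closed form: the camera must be at least r / sin(θ/2) from the sphere's center, where r is the sphere radius and θ is the smaller of the horizontal and vertical field-of-view angles. The sketch below is an assumed geometric illustration of this idea, not the procedure specified in the disclosure; it also assumes the camera's forward vector already points at the object's center.

```python
import math

def min_camera_distance(radius, fov_rad):
    """Smallest distance from the camera to the center of the object's
    bounding sphere at which the whole sphere fits in the view cone."""
    return radius / math.sin(fov_rad / 2.0)

def pull_back(camera_pos, forward, obj_center, radius, fov_rad):
    """Slide the camera backward along its own line of sight until the
    object's bounding sphere fits in the field of view."""
    to_obj = tuple(o - c for o, c in zip(obj_center, camera_pos))
    dist = math.sqrt(sum(v * v for v in to_obj))
    need = min_camera_distance(radius, fov_rad)
    if dist >= need:
        return camera_pos                    # the object already fits
    shift = need - dist                      # move opposite to forward
    return tuple(c - shift * f for c, f in zip(camera_pos, forward))

# 60-degree field of view, unit-radius object 1.0 in front of the camera:
# the camera backs up along -forward so the whole object becomes visible.
new_pos = pull_back((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.0, 1.0),
                    1.0, math.radians(60.0))
```

Running this check per frame gives the per-frame widening behavior described above, with the minimal pull-back keeping the camera as close to the object as possible.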
 The functions of the 3D space construction unit 11, the editing unit 12, and the playback unit 13A included in the 3D moving image providing device 1A are realized by a processing circuit, as in the 3D moving image providing device 1. That is, the 3D moving image providing device 1A includes a processing circuit for executing the processing described above. The processing circuit may be the hardware processing circuit 102 shown in FIG. 9A, or may be the processor 103 that executes the program stored in the memory 104 shown in FIG. 9B.
 In the above description, the 3D object tracking unit 132A plays back a 3D moving image virtually captured from the nearest position at which the entire 3D object 32 fits within the field of view C(2) of the virtual camera 21. However, the virtual camera 21 may be moved in the direction D, further away from the 3D object 32 along the extension of the line of sight A, to the extent that a margin appears around the 3D object 32 on the screen.
 As described above, in the 3D moving image providing device 1A, the playback unit 13A widens the field of view of the virtual camera 21 by moving the virtual camera 21 away from the 3D object 32 along the extension of the line of sight A. The 3D moving image providing device 1A can thereby provide a 3D moving image in which not only the portion of the 3D object 32 containing the gaze point but also the movement of the 3D object 32 as a whole is visible.
 It is possible to combine the embodiments, to modify any component of each embodiment, or to omit any component in each embodiment.
 The 3D moving image providing device according to the present disclosure can be used, for example, in the design of automotive lighting technology in which a figure is displayed on a road surface by light emitted from lights mounted on an automobile.
 1, 1A: 3D moving image providing device; 2: display device; 2A: 2D image; 2B, 2C, 2D: display screens; 2C(0) to 2C(2): images; 3: input device; 11: 3D space construction unit; 12: editing unit; 13, 13A: playback unit; 21: virtual camera; 31a, 31b, 32: 3D objects; 32a, 32b: 2D objects; 33a, 33b: timeline images; 33c: slide bar; 33d: index; 34: play button; 35: stop button; 36: gaze point; 100: input interface; 101: output interface; 102: processing circuit; 103: processor; 104: memory; 121: motion determination unit; 131: gaze point determination unit; 132, 132A: 3D object tracking unit.

Claims (8)

  1.  3次元仮想空間に配置された仮想カメラで仮想的に撮影された3次元動画像を提供する3次元動画像提供装置であって、
     前記3次元動画像の編集状態において、前記3次元仮想空間に存在する3次元オブジェクトの時間経過に応じた動作情報を決定する編集部と、
     前記3次元動画像の再生状態において、前記動作情報に従って動作する前記3次元オブジェクトの前記3次元動画像を再生する再生部と、
     を備え、
     前記再生部は、前記仮想カメラの視線が前記3次元オブジェクトの動きに追従するように当該仮想カメラの向きを変更した前記3次元動画像を再生すること
     を特徴とする3次元動画像提供装置。
    It is a 3D moving image providing device that provides a 3D moving image virtually taken by a virtual camera arranged in a 3D virtual space.
    In the editing state of the 3D moving image, an editorial unit that determines operation information according to the passage of time of a 3D object existing in the 3D virtual space, and an editorial unit.
    In the reproduction state of the three-dimensional moving image, a reproduction unit that reproduces the three-dimensional moving image of the three-dimensional object that operates according to the operation information, and a reproduction unit.
    Equipped with
    The reproduction unit is a three-dimensional moving image providing device, which reproduces the three-dimensional moving image in which the direction of the virtual camera is changed so that the line of sight of the virtual camera follows the movement of the three-dimensional object.
  2.  前記再生部は、前記仮想カメラの視線方向にある前記3次元オブジェクトと、前記仮想カメラの視線との交点が注視点となるように、前記仮想カメラの向きを変更すること
     を特徴とする請求項1に記載の3次元動画像提供装置。
    The reproduction unit is characterized in that the direction of the virtual camera is changed so that the intersection of the three-dimensional object in the line-of-sight direction of the virtual camera and the line-of-sight of the virtual camera is the gaze point. The three-dimensional moving image providing device according to 1.
  3.  前記編集部は、前記3次元動画像のタイムラインを示すタイムライン画像を表示装置に表示して、入力装置を用いた情報の入力を受け付け、前記タイムライン画像の時間を変更する情報が受け付けられた場合、従前の時間における前記3次元オブジェクトの位置を、受け付けられた変更後の時間における位置に変更した前記動作情報を決定すること
     を特徴とする請求項1または請求項2に記載の3次元動画像提供装置。
    The editorial unit displays a timeline image showing a timeline of the three-dimensional moving image on a display device, accepts input of information using the input device, and receives information for changing the time of the timeline image. If so, the three-dimensional aspect according to claim 1 or 2, wherein the operation information in which the position of the three-dimensional object in the previous time is changed to the position in the accepted changed time is determined. Video providing device.
  4.  前記再生部は、前記3次元動画像のタイムラインを示すタイムライン画像を表示装置に表示して、入力装置を用いた情報の入力を受け付け、前記タイムライン画像の時間を変更する情報が受け付けられた場合、従前の時間における前記3次元オブジェクトの位置を、受け付けられた変更後の時間における位置に変更すること
     を特徴とする請求項1または請求項2に記載の3次元動画像提供装置。
    The reproduction unit displays a timeline image showing a timeline of the three-dimensional moving image on a display device, accepts input of information using the input device, and receives information for changing the time of the timeline image. If so, the 3D moving image providing device according to claim 1 or 2, wherein the position of the 3D object in the previous time is changed to the position in the accepted changed time.
  5.  前記編集部は、前記3次元動画像のタイムラインを示すタイムライン画像を表示装置に表示して、入力装置を用いた情報の入力を受け付け、前記3次元オブジェクトが動作制約を有した移動体を表す場合、前記タイムライン画像の時間を変更する情報が受け付けられると、従前の時間と、受け付けられた変更後の時間との間での前記3次元オブジェクトの移動軌跡を、前記動作制約に応じて補間した前記動作情報を決定すること
     を特徴とする請求項1または請求項2に記載の3次元動画像提供装置。
    The editorial unit displays a timeline image showing the timeline of the three-dimensional moving image on the display device, accepts input of information using the input device, and causes the three-dimensional object to move a moving body having an operation constraint. In the case of expressing, when the information for changing the time of the timeline image is received, the movement locus of the three-dimensional object between the previous time and the accepted time after the change is set according to the operation constraint. The three-dimensional moving image providing device according to claim 1 or 2, wherein the interpolated motion information is determined.
  6.  前記再生部は、前記仮想カメラの視線の延長線に沿って当該仮想カメラが前記3次元オブジェクトから離れる方向に移動させることにより、前記仮想カメラの視界を広げること
     を特徴とする請求項1または請求項2に記載の3次元動画像提供装置。
    1. Item 2. The three-dimensional moving image providing device according to item 2.
  7.  3次元仮想空間に配置された仮想カメラで仮想的に撮影された3次元動画像を提供する3次元動画像提供方法であって、
     編集部が、前記3次元動画像の編集状態において、前記3次元仮想空間に存在する3次元オブジェクトの時間経過に応じた動作情報を決定するステップと、
     再生部が、前記3次元動画像の再生状態において、前記動作情報に従って動作する前記3次元オブジェクトの前記3次元動画像を再生するステップと、
     を備え、
     前記再生部は、前記仮想カメラの視線が前記3次元オブジェクトの動きに追従するように当該仮想カメラの向きを変更した前記3次元動画像を再生すること
     を特徴とする3次元動画像提供方法。
    It is a 3D moving image providing method that provides a 3D moving image virtually taken by a virtual camera arranged in a 3D virtual space.
    A step in which the editorial unit determines operation information according to the passage of time of a 3D object existing in the 3D virtual space in the editing state of the 3D moving image.
    A step in which the reproduction unit reproduces the 3D moving image of the 3D object that operates according to the operation information in the reproduction state of the 3D moving image.
    Equipped with
    The reproduction unit is a method for providing a three-dimensional moving image, which reproduces the three-dimensional moving image in which the direction of the virtual camera is changed so that the line of sight of the virtual camera follows the movement of the three-dimensional object.
  8.  A program for causing a computer to function as a three-dimensional moving image providing device comprising:
     an editing unit that, in an editing state of a three-dimensional moving image virtually captured by a virtual camera arranged in a three-dimensional virtual space, determines motion information according to the passage of time for a three-dimensional object existing in the three-dimensional virtual space; and
     a reproduction unit that, in a reproduction state of the three-dimensional moving image, reproduces the three-dimensional moving image of the three-dimensional object operating according to the motion information,
     wherein the reproduction unit reproduces the three-dimensional moving image with the orientation of the virtual camera changed so that the line of sight of the virtual camera follows the movement of the three-dimensional object.
PCT/JP2020/027401 2020-07-14 2020-07-14 Three-dimensional video image provision device, three-dimensional video image provision method, and program WO2022013950A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2020/027401 WO2022013950A1 (en) 2020-07-14 2020-07-14 Three-dimensional video image provision device, three-dimensional video image provision method, and program
JP2021505942A JPWO2022013950A1 (en) 2020-07-14 2020-07-14

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2020/027401 WO2022013950A1 (en) 2020-07-14 2020-07-14 Three-dimensional video image provision device, three-dimensional video image provision method, and program

Publications (1)

Publication Number Publication Date
WO2022013950A1 true WO2022013950A1 (en) 2022-01-20

Family

ID=79555513

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/027401 WO2022013950A1 (en) 2020-07-14 2020-07-14 Three-dimensional video image provision device, three-dimensional video image provision method, and program

Country Status (2)

Country Link
JP (1) JPWO2022013950A1 (en)
WO (1) WO2022013950A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04127279A (en) * 1990-06-11 1992-04-28 Hitachi Ltd Device and method for kinetic path generation of object
JP2003150980A (en) * 2001-11-15 2003-05-23 Square Co Ltd Video game device, display method of character in video game, program, and recording medium
US9734615B1 (en) * 2013-03-14 2017-08-15 Lucasfilm Entertainment Company Ltd. Adaptive temporal sampling


Also Published As

Publication number Publication date
JPWO2022013950A1 (en) 2022-01-20

Similar Documents

Publication Publication Date Title
US9070402B2 (en) 3D model presentation system with motion and transitions at each camera view point of interest (POI) with imageless jumps to each POI
JP5293587B2 (en) Display control apparatus, display control method, and program
US7589732B2 (en) System and method of integrated spatial and temporal navigation
KR101860313B1 (en) Method and system for editing scene in three-dimensional space
EP1441541A2 (en) Apparatus and method for displaying three-dimensional image
JP7458889B2 (en) Image display device, control method, and program
WO2018173791A1 (en) Image processing device and method, and program
JP2011126473A (en) Parking navigation system
US20110200303A1 (en) Method of Video Playback
JP2019512177A (en) Device and related method
JP5312505B2 (en) GAME DEVICE, GAME DEVICE CONTROL METHOD, AND PROGRAM
WO2022013950A1 (en) Three-dimensional video image provision device, three-dimensional video image provision method, and program
US20200168252A1 (en) System and method of determining a virtual camera path
US11070734B2 (en) Image pickup apparatus having grip, and image pickup lens therefor
US10200606B2 (en) Image processing apparatus and control method of the same
US20200296316A1 (en) Media content presentation
JP7070547B2 (en) Image processing equipment and methods, as well as programs
US20100225648A1 (en) Story development in motion picture
CN109792554B (en) Reproducing apparatus, reproducing method, and computer-readable storage medium
JP2021197082A (en) Information processing apparatus, method for controlling information processing apparatus, and program
JP2021531587A (en) How and devices to add interactive objects to a virtual reality environment
US11997413B2 (en) Media content presentation
JP5460277B2 (en) Monochrome moving image coloring apparatus and monochrome moving image coloring method
JP7091073B2 (en) Electronic devices and their control methods
KR20070089503A (en) Method of inserting a transition movement between two different movements for making 3-d video

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2021505942

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20945601

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20945601

Country of ref document: EP

Kind code of ref document: A1