WO2022176720A1 - Information processing apparatus, information processing method, and program - Google Patents
Information processing apparatus, information processing method, and program
- Publication number: WO2022176720A1
- Application: PCT/JP2022/004992
- Authority: WIPO (PCT)
Classifications
- G06T7/70 — Determining position or orientation of objects or cameras
- G06T7/11 — Region-based segmentation
- G06T7/50 — Depth or shape recovery
- G06T15/20 — Perspective computation
- G06T19/00 — Manipulating 3D models or images for computer graphics
- G06T2207/10028 — Range image; depth image; 3D point clouds
- G06T2207/30244 — Camera pose
- H04N13/111 — Transformation of image signals corresponding to virtual viewpoints
- H04N13/189 — Recording image signals; reproducing recorded image signals
- H04N13/194 — Transmission of image signals
- H04N13/279 — Image signal generators from 3D object models, with the virtual viewpoint locations selected by the viewers or determined by tracking
- H04N13/282 — Image signal generators for three or more geometrical viewpoints (multi-view systems)
- H04N21/21805 — Source of audio or video content enabling multiple viewpoints
- H04N21/234 — Processing of video elementary streams
- H04N21/24 — Monitoring of processes or resources, e.g. server load or available bandwidth
Definitions
- The present disclosure relates to technology for transmitting three-dimensional shape data.
- Patent Document 1 discloses a system that generates a virtual viewpoint image from a plurality of images. Specifically, three-dimensional shape data representing the three-dimensional shape of the object is generated from a plurality of images. Using this three-dimensional shape data, a virtual viewpoint image representing the view from the virtual viewpoint is generated.
- An object of the present disclosure is to reduce the transmission load of three-dimensional shape data.
- An information processing apparatus according to the present disclosure includes: a first acquisition unit that acquires virtual viewpoint information specifying the position of a virtual viewpoint and the line-of-sight direction from the virtual viewpoint; a second acquisition unit that acquires three-dimensional shape data of an object; a specifying unit that, based on the virtual viewpoint information acquired by the first acquisition unit, identifies a partial area of the object displayed in a virtual viewpoint image representing the appearance from the virtual viewpoint; and an output unit that outputs, from the three-dimensional shape data acquired by the second acquisition unit, partial data corresponding to the partial area identified by the specifying unit.
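As a rough illustration of these units, the following sketch selects partial data visible from a virtual viewpoint. The cone-shaped visibility test, the region structure, and all names are assumptions for illustration, not the claimed implementation.

```python
# Minimal sketch: acquire a virtual viewpoint, identify which divided
# regions of the object face it, and output only that partial data.
import math

def is_visible(region_center, viewpoint, look_dir, fov_deg=90.0):
    """Return True if the region center lies within the viewing cone.
    look_dir is assumed to be a unit vector."""
    to_region = tuple(c - v for c, v in zip(region_center, viewpoint))
    norm = math.sqrt(sum(t * t for t in to_region)) or 1.0
    cos_angle = sum(t * d for t, d in zip(to_region, look_dir)) / norm
    return cos_angle >= math.cos(math.radians(fov_deg / 2))

def select_partial_data(regions, viewpoint, look_dir):
    """regions: {region_id: {"center": (x, y, z), "data": ...}}"""
    return {rid: r["data"] for rid, r in regions.items()
            if is_visible(r["center"], viewpoint, look_dir)}

regions = {
    1: {"center": (10.0, 0.0, 0.0), "data": "front half"},
    2: {"center": (-10.0, 0.0, 0.0), "data": "back half"},
}
# Viewpoint at the origin looking along +x: only region 1 is output.
visible = select_partial_data(regions, (0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
```

Transmitting only the selected regions is what reduces the load compared with sending the full model every frame.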
- FIG. 1 is a diagram showing an example of the configuration of a virtual viewpoint image generation system including a three-dimensional information processing apparatus according to Embodiment 1;
- a diagram showing an example of camera arrangement;
- diagrams showing examples of methods of dividing a foreground model;
- a diagram showing an example of the division of a background model;
- diagrams showing examples of the data structure of a stored foreground model;
- diagrams showing examples of the data structure of a stored background model;
- a flowchart showing processing of the virtual viewpoint image generation system according to Embodiment 1;
- a diagram showing the communication status of each part of the virtual viewpoint image generation system according to Embodiment 1;
- a diagram showing an example of the configuration of a virtual viewpoint image generation system including a three-dimensional information processing apparatus according to Embodiment 2;
- a diagram showing an example of a method of dividing a foreground model according to Embodiment 2;
- a flowchart showing processing of the virtual viewpoint image generation system according to Embodiment 2;
- a diagram showing an example of the data structure of a stored foreground model;
- a block diagram showing an example of the hardware configuration of the three-dimensional information processing apparatus.
- A virtual viewpoint image is an image representing the view from a virtual viewpoint, generated by a user and/or a dedicated operator or the like freely manipulating the position and orientation of a virtual camera.
- a virtual viewpoint image is also called a free viewpoint image, an arbitrary viewpoint image, or the like.
- the case where the virtual viewpoint is specified by user operation will be mainly described, but the virtual viewpoint may be specified automatically based on the result of image analysis or the like.
- the term "image" is assumed to include the concepts of both moving images and still images.
- A virtual camera is a camera, distinct from the plurality of imaging devices actually installed around the imaging area, that serves as a concept for conveniently explaining the virtual viewpoint related to generation of a virtual viewpoint image.
- the virtual viewpoint image can be regarded as an image captured from a virtual viewpoint set within the virtual space associated with the imaging region.
- the position and orientation of the viewpoint in the virtual imaging can be expressed as the position and orientation of the virtual camera.
- the virtual viewpoint image is an image simulating the captured image obtained by the camera when it is assumed that the camera exists at the position of the virtual viewpoint set in the space.
- the content of temporal transition of the virtual viewpoint is referred to as a virtual camera path.
- the imaging device only needs to have a physical camera (real camera). Also, the imaging device may have a function of performing various image processing in addition to the physical camera. For example, the imaging device may have a processing unit that performs foreground/background separation processing. Further, the imaging device may have a control unit that performs transmission control for transmitting an image of a part of the captured image. Also, the imaging device may have a plurality of physical cameras.
- FIG. 1 is a configuration diagram of a virtual viewpoint image generation system including a three-dimensional information processing apparatus 100 that processes three-dimensional shape data generated from images captured by a plurality of cameras installed in a facility such as a stadium or a concert hall.
- The virtual viewpoint image generation system includes cameras 101a to 101t, an input unit 102, a foreground model generation unit 103, a background model generation unit 104, a model acquisition unit 105, a model division unit 106, a management unit 107, a storage unit 108, a transmission/reception unit 109, a selection unit 110, and terminals 111a to 111d. Note that the cameras 101a to 101t are referred to as the camera 101 unless otherwise specified.
- Similarly, the terminals 111a to 111d are referred to as the terminal 111 unless otherwise specified.
- the three-dimensional shape data may be referred to as a model below.
- the model may refer to three-dimensional shape data indicating the three-dimensional shape of the foreground or background, or may refer to data having color information of the foreground or background in addition to the three-dimensional shape data.
- The cameras 101 are arranged so as to surround a subject (object) and capture images in synchronization. Synchronization refers to a state in which the imaging timings are controlled to be substantially the same.
- FIG. 2 shows an example of camera arrangement. However, the number and arrangement of cameras are not limited to this.
- each of the cameras 101a-t is aimed at one of three points of regard 150-152. In order to simplify the explanation, the case where there is one subject 210 will be described, but the same processing can be performed for a plurality of subjects as well.
- The cameras 101a to 101t are connected to the input unit 102 via a wired network, and each frame is captured at the same time.
- The captured image data is given a time code and a frame number and then transmitted.
- Each camera is assigned a camera ID.
- A gaze point may be the intersection of the optical axes of a plurality of cameras directed at the same point; the optical axes of cameras directed at the same gaze point do not have to pass through it. The number of gaze points may be one, two, or three or more, and each camera may be directed at a different gaze point.
- the input unit 102 inputs image data captured by the camera 101 and outputs it to the foreground model generation unit 103 and background model generation unit 104 .
- the image data may be captured image data, or may be image data obtained by extracting a partial area from the captured image. In the latter case, for example, the input unit 102 may output to the foreground model generation unit 103 foreground image data obtained by extracting the foreground object area from the captured image.
- the input unit 102 may output background image data obtained by extracting a background object area from the captured image to the background model generation unit 104 .
- In this case, the processes of extracting the subject portion, generating the silhouette image, and generating the foreground image can be omitted in the foreground model generation unit 103, which will be described later.
- these processes may be performed in an imaging device having a camera.
- the foreground model generation unit 103 generates one or more types of three-dimensional shape data of the subject from the input image data.
- For example, a point cloud model, a foreground image, and a mesh model of the subject are generated.
- Alternatively, a distance image from a camera, or a colored point cloud in which color information is attached to each point of the point cloud, may be used.
- the foreground model generation unit 103 extracts the image of the subject from the synchronously captured image data.
- The method of extracting the image of the subject is not particularly limited; for example, an image in which the subject is not shown may be captured as a reference image, and the subject may be extracted from the difference between it and the input image.
- the method for shape estimation is not particularly limited, but for example, the foreground model generation unit 103 may generate three-dimensional shape data using a visual volume intersection method (shape from silhouette method). More specifically, the foreground model generation unit 103 generates a silhouette image with pixel values of 1 at pixel positions of the subject portion and 0 at pixel positions of other portions.
- the foreground model generation unit 103 uses the generated silhouette image to generate point cloud model data, which is three-dimensional shape data of the subject, using the visual volume intersection method.
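The visual volume intersection step described above can be illustrated with a toy voxel-carving sketch: a voxel is kept only if it falls inside the silhouette of every camera. The orthographic projections and tiny grid here are assumptions for illustration, not the actual shape-estimation code.

```python
# Toy visual hull (shape-from-silhouette): a voxel survives only if it
# projects inside every camera's silhouette mask (pixel value 1).
def carve(voxels, silhouettes):
    """voxels: iterable of (x, y, z); silhouettes: list of
    (project_fn, mask) where mask is a set of 2D silhouette pixels."""
    hull = []
    for v in voxels:
        if all(project(v) in mask for project, mask in silhouettes):
            hull.append(v)
    return hull

# Two orthographic "cameras": one looking along z, one along y.
top_mask = {(0, 0), (1, 0)}    # silhouette seen from above, as (x, y)
side_mask = {(0, 0), (1, 0)}   # silhouette seen from the side, as (x, z)
silhouettes = [
    (lambda p: (p[0], p[1]), top_mask),
    (lambda p: (p[0], p[2]), side_mask),
]
voxels = [(x, y, z) for x in range(2) for y in range(2) for z in range(2)]
hull = carve(voxels, silhouettes)  # voxels consistent with both silhouettes
```

A real system would use calibrated perspective projections per camera; the intersection logic is the same.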
- the foreground model generation unit 103 obtains a circumscribing rectangle of the subject from the silhouette image, cuts out the subject image of the input image using the circumscribing rectangle, and extracts this as a foreground image.
- the foreground model generating unit 103 obtains parallax images of a plurality of cameras, creates distance images, and generates a mesh model.
- the method of generating the mesh model is not particularly limited. However, in the present embodiment, a plurality of types of three-dimensional shape data are generated, but the present disclosure can also be applied to a form in which one type of three-dimensional shape data is generated.
- the background model generation unit 104 generates a background model.
- The background is, for example, a stadium, a concert hall, or a theater stage.
- the background model generation method is not limited.
- three-dimensional shape data of a stadium or the like having a background field may be generated.
- the three-dimensional shape data of the stadium may be generated using blueprints of the stadium.
- the CAD data may be used as the three-dimensional shape data of the stadium.
- three-dimensional shape data may be generated by laser scanning the stadium.
- the entire stadium is generated as one piece of three-dimensional shape data.
- the background image of the audience or the like may be acquired each time the image is captured.
- the model acquisition unit 105 acquires the three-dimensional shape data regarding the subject and the three-dimensional shape data regarding the background generated by the foreground model generation unit 103 and the background model generation unit 104 .
- the model dividing unit 106 divides the input three-dimensional shape data into multiple pieces of three-dimensional shape data. The division method will be described later.
- the management unit 107 acquires the three-dimensional shape data acquired by the foreground model generation unit 103 and the three-dimensional shape data generated by dividing by the model division unit 106 and stores them in the storage unit 108 . At the time of saving, a table for data access for reading each data is generated, and managed so that the data can be read and written in association with the time code, frame number, and the like. In addition, data is output based on an instruction from the selection unit 110, which will be described later.
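The access-table management described above can be sketched as a simple lookup keyed by time code and frame number; the class and method names are illustrative assumptions, not the apparatus's actual interface.

```python
# Sketch of the management unit's data-access table: datasets are
# registered and read back by (time_code, frame_number).
class ModelTable:
    def __init__(self):
        self._table = {}

    def put(self, time_code, frame_number, dataset):
        self._table[(time_code, frame_number)] = dataset

    def get(self, time_code, frame_number):
        return self._table.get((time_code, frame_number))

table = ModelTable()
table.put("12:00:00:00", 0, {"foreground": b"...", "background": b"..."})
entry = table.get("12:00:00:00", 0)
```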
- the storage unit 108 stores the input data.
- it is composed of a semiconductor memory, a magnetic recording device, or the like. The saving format will be described later.
- Data is written and read based on instructions from the management unit 107, and the written data is output to the transmission/reception unit 109 according to the read instructions.
- the transmitting/receiving unit 109 communicates with a terminal 111, which will be described later, and transmits/receives requests from the terminal and data.
- The selection unit 110 selects the three-dimensional shape data to be transmitted to the terminal; its operation will be described later. It selects which parts of the divided three-dimensional shape data are to be output, and outputs that information to the management unit 107.
- Based on the three-dimensional shape data acquired from the three-dimensional information processing apparatus 100, the terminal 111 generates virtual viewpoint information from a virtual viewpoint set by the user, and displays a virtual viewpoint image generated based on that information.
- However, the number of terminals is not limited to this; for example, the number of terminals 111 may be one.
- FIG. 17 is a block diagram showing a configuration example of computer hardware applicable to the three-dimensional information processing apparatus 100 according to the present embodiment.
- the CPU 1701 controls the entire computer using the computer programs and data stored in the RAM 1702 and ROM 1703, and executes the processes described later as those performed by the 3D information processing apparatus 100 according to this embodiment. That is, the CPU 1701 functions as each processing unit within the three-dimensional information processing apparatus 100 shown in FIG.
- the RAM 1702 has an area for temporarily storing computer programs and data loaded from an external storage device 1706, data acquired from the outside via an I/F (interface) 1707, and the like. Furthermore, the RAM 1702 has a work area used when the CPU 1701 executes various processes. That is, the RAM 1702 can be allocated, for example, as frame memory, or can provide other various areas as appropriate.
- the ROM 1703 stores setting data for this computer, a boot program, and the like.
- An operation unit 1704 includes a keyboard, a mouse, and the like, and can be operated by the user of the computer to input various instructions to the CPU 1701 .
- An output unit 1705, configured by, for example, a liquid crystal display, displays the results of processing by the CPU 1701.
- the external storage device 1706 is a large-capacity information storage device typified by a hard disk drive.
- The external storage device 1706 stores an OS (operating system) and computer programs for causing the CPU 1701 to implement the functions of the units of the three-dimensional information processing apparatus 100. Furthermore, each image data to be processed may be stored in the external storage device 1706.
- the computer programs and data stored in the external storage device 1706 are appropriately loaded into the RAM 1702 under the control of the CPU 1701 and processed by the CPU 1701 .
- The I/F 1707 can be connected to a network such as a LAN or the Internet, and to other equipment such as a projection device or a display device.
- a bus 1708 connects the above-described units.
- FIG. 5(a) shows an example of the format of the three-dimensional shape data stored in the storage unit 108.
- the three-dimensional shape data is saved as sequence data representing a series of shots. For example, sequences correspond to events and cuts.
- the management unit 107 manages data in units of sequences.
- the sequence data includes a sequence header, and the sequence header stores a sequence header start code indicating the beginning of the sequence.
- This data also stores information about the entire sequence.
- the information about the entire sequence includes the name of the sequence, the shooting location, the date and time when shooting was started, the time code representing the time, the frame rate, and the image size.
- the information about the entire sequence also includes the ID and parameter information of each camera.
- various three-dimensional shape data are saved in units called data sets. The number M of the data sets is described in the sequence header. Information for each data set is stored below. In this embodiment, two data sets, a data set of foreground model data and a data set of background model data, are included.
- A data set identification ID is given first; a unique ID within the storage unit 108, or across all data sets, is assigned as the identification ID.
- The data set type is then saved.
- The data set types include point cloud model data, foreground images, colored point cloud data, distance image data, and mesh model data.
- Each type is expressed as a 2-byte data set class code, shown in FIG. 5(e).
- data types and codes are not limited to these. Data representing other three-dimensional shape data may also be used.
- Next, a pointer to the data set is saved. However, any information for accessing each data set may be used instead of a pointer; for example, a file system may be constructed in the storage unit and file names may be used.
- point cloud model data and foreground images will be described as examples of the types of foreground model datasets.
- FIG. 6(a) shows an example of the configuration of the foreground model data set.
- the foreground model data set is saved in units of frames, but it is not limited to this.
- a foreground model data header is stored at the head of the dataset, and the header stores information such as the fact that this dataset is a foreground model dataset and the number of frames.
- the time code representing the time of the first frame of the foreground model data and the data size of the frame are stored in this order.
- the data size is for referring to the data of the next frame, and may be collectively stored in the header. Subsequently, the number P of subjects for generating a virtual viewpoint image at the time indicated by the time code is saved.
- the number C of cameras used for photographing at that time is stored. It should be noted that instead of the number of cameras used for photographing, the number of cameras in which the object appears in the photographed image may be used. Subsequently, the camera ID of the camera used is saved.
- Next, a method of dividing the foreground model will be described. In this embodiment, a method of setting an x-axis, a y-axis, and a z-axis and dividing equally along each axis is described.
- The longitudinal direction of the stadium is defined as the x-axis, the lateral direction as the y-axis, and the height as the z-axis; these are the reference coordinate axes. However, the axes are not limited to this.
- dx be the number of divisions in the x-axis direction
- dy be the number of divisions in the y-axis direction
- dz be the number of divisions in the z-axis direction.
- An example of division is shown in FIG. 3A.
- FIG. 3A shows a situation where dx is 2, dy is 2, and dz is 2.
- This divides the model into eight parts, that is, into divisions 300-1 to 300-8.
- The center of the division is the center (center of gravity) of the model, and the foreground model is divided into eight parts.
- The left side of FIG. 3A shows the divided model 300-1.
- FIG. 3B shows the case where dx is 2, dy is 2, and dz is 1.
- the division method is not limited to this.
- the lateral direction of the stadium may be the x-axis
- the longitudinal direction may be the y-axis
- the height may be the z-axis.
- In this embodiment, the division is performed by defining mutually orthogonal x-, y-, and z-axes, but the division axes are not limited to these.
- a division method other than the coordinate system may be used.
- the subject may be divided into parts of the body of a person or animal, such as the face, body, limbs, and the like.
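The equal axis division described above (here fixed at dx = dy = dz = 2, i.e., the eight divisions 300-1 to 300-8 centered on the model's center of gravity) can be sketched as follows; the function name and data layout are assumptions for illustration.

```python
# Split a foreground point cloud into 8 octants (dx = dy = dz = 2)
# around its centroid, in the spirit of divisions 300-1..300-8.
def divide_into_octants(points):
    n = len(points)
    c = tuple(sum(p[i] for p in points) / n for i in range(3))  # centroid
    parts = {}
    for p in points:
        # Key each point by which side of the centroid it lies on per axis.
        key = tuple(int(p[i] >= c[i]) for i in range(3))
        parts.setdefault(key, []).append(p)
    return parts

cloud = [(0, 0, 0), (2, 0, 0), (0, 2, 0), (2, 2, 2)]
parts = divide_into_octants(cloud)
```

Generalizing to arbitrary dx, dy, dz would replace the binary comparison with a per-axis bin index; non-axis divisions (e.g., by body part) need a semantic segmentation step instead.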
- the data of the foreground model of each division follows.
- First, the data size of the foreground model data of the first subject is saved.
- the point cloud model data included in the division 300-1 of the point cloud data of the first object is saved.
- For the divided point cloud data, as shown in FIG. 6C, the data size of the included point cloud is saved, followed by the number R of points forming the point cloud model.
- The point cloud data of each division are stored in order below. First, the number of coordinate points that make up the point cloud of the first subject is saved; then the coordinates of that number of points are saved.
- The coordinates are stored as three-axis data, but the present disclosure is not limited to this; polar coordinates or other coordinate systems may be used.
- In this way, point cloud data is saved for each divided portion of the first subject. The divided point cloud data of the second and subsequent subjects are then stored in order, up to the P-th subject.
- the foreground image data is saved for each camera ID.
- data size of each foreground image data, image size, bit depth of pixel value, pixel value, etc. are stored.
- the image data may be encoded by JPEG, for example.
- the foreground image data from each camera is successively stored for each subject. If the subject is not captured by the camera, NULL data may be written, or the number of cameras capturing the subject and the corresponding camera ID may be stored.
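The frame layout described above for the point cloud portion of the foreground data set (time code, data size, subject count P, camera count C and IDs, then per-division point counts and coordinates) can be sketched with a packer. The field widths, byte order, and the omission of the foreground-image section are assumptions for illustration, not the actual format.

```python
# Illustrative packer for one frame of the foreground point cloud dataset.
import struct

def pack_frame(time_code, camera_ids, subjects):
    """subjects: list of subjects; each subject is a list of divisions;
    each division is a list of (x, y, z) points."""
    buf = struct.pack("<I", time_code)
    buf += struct.pack("<I", len(subjects))        # P: number of subjects
    buf += struct.pack("<I", len(camera_ids))      # C: number of cameras
    buf += struct.pack("<%dI" % len(camera_ids), *camera_ids)
    for divisions in subjects:
        for points in divisions:
            buf += struct.pack("<I", len(points))  # R: points in division
            for x, y, z in points:
                buf += struct.pack("<3f", x, y, z)
    # Prefix the frame with its data size so a reader can skip to the
    # next frame, as the document describes.
    return struct.pack("<I", len(buf)) + buf

# One subject with two divisions: one point in the first, none in the second.
frame = pack_frame(100, [1, 2], [[[(0.0, 0.0, 0.0)], []]])
```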
- FIG. 9(a) shows an example of the configuration of the background model data set.
- A background model data header is stored at the head of the dataset, and the header stores information such as the fact that this dataset is a background model dataset.
- Next, the format of the background model data is described. The data set class code of the background model data format is 0x0006.
- the number of divisions of the background model data is described. In the present embodiment, an example in which a plane is divided into B pieces will be described.
- Since the main viewpoint of a virtual viewpoint image in a stadium is directed toward the field, the division of the background can easily be specified centering on divisions along the x-axis and the y-axis.
- a method of setting the x-axis, y-axis, and z-axis and dividing each may be used in the same manner as the division of the foreground model data.
- As for the background, since the structure of the stadium does not change during the shooting period, one background model is saved per sequence. If the background model data changes during the shooting period, it may be generated for each frame in the same manner as images, or may be stored for each period during which it does not change.
- division by the content of each background may also be performed.
- the field surface may be divided differently.
- The number of divisions is not limited to this, and different division methods and numbers of divisions may be used for the foreground and the background. For example, increasing the number of divisions reduces the data per division and increases the effect of improving processing speed; finely dividing a large amount of data allows the amount of transmitted data to be optimized.
- each division can indicate the range of data included in that division.
- The method of description is not limited; for example, divisions depending on the structure may be by seat class (reserved seats, non-reserved seats, etc.) or by area units such as the back-screen direction, the main stand, and the back stand. Any description may be used as long as it appropriately describes the range of the divided background model data.
- an example in which the background model data is divided into four as shown in FIG. 4 will be described. Lines at an angle of 45 degrees to the x-axis and the y-axis, centered on the center of the field, form the division boundaries.
- the back stand side is divided 1300-1, the right side is divided 1300-2, the main stand side is divided 1300-3, and the left side facing the back stand is divided 1300-4.
- the description of how the stadium is divided into four parts includes the central coordinates of the divisions and the positions of the division boundary lines. Dividing in this way suits competitions where the athletes' movement is centered in the longitudinal direction: a camera following the athletes moves mainly along the x-axis, and the directions toward the left and right stands, which have large vision monitors, are the main shooting directions. In addition, since the cameras on the main stand and back stand mainly focus on the players moving left and right, the background of the stand on the opposite side is often used. Dividing in this way reduces the number of updates of the background model data.
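The four-way split by 45-degree boundary lines through the field center can be sketched as a simple angle test; the division numbering below (1 = back-stand side, 2 = right, 3 = main-stand side, 4 = left) is an illustrative assumption, not the patent's actual encoding:

```python
import math

def classify_division(x, y, cx=0.0, cy=0.0):
    """Classify a background point into one of the four divisions
    separated by 45-degree boundary lines through the field centre
    (cx, cy).  Numbering is an assumption for illustration:
    1 = back-stand side (+y), 2 = right (+x),
    3 = main-stand side (-y), 4 = left (-x)."""
    angle = math.degrees(math.atan2(y - cy, x - cx))  # range -180..180
    if -45 <= angle < 45:
        return 2   # right side
    if 45 <= angle < 135:
        return 1   # back-stand side
    if -135 <= angle < -45:
        return 3   # main-stand side
    return 4       # left side
```

A point on a boundary line falls into the lower-angle division here; as the text notes for the foreground split, boundary points could equally be assigned to either or both divisions.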
- the background model data is saved.
- the data size of the background model data is saved.
- the data for each division is then saved.
- the data size of the background model data of the first division, here division 1300-1 is saved.
- point cloud data which is the background model data of the division 1300-1, is saved.
- the size of the point cloud data is first indicated, the number of points of the point cloud data is stored, and the coordinates of each point are stored.
- the pointer to the background image data of division 1300-1 is saved. The background image data to be pasted onto the model of division 1300-1 is stored at the destination of the pointer.
- at the destination of the pointer, in addition to descriptions such as the image size and bit depth of the background image, the time code, data size, and image data of each frame are saved. After that, the background image data of each frame is saved. Data is stored similarly in the order of division 1300-2, division 1300-3, and division 1300-4.
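The per-division layout described above (point-cloud size, point count, coordinates, then per-frame time code, data size, and image bytes) can be sketched as a packer; the 4-byte little-endian field widths are illustrative assumptions, not the patent's actual byte layout:

```python
import struct

def pack_division(points, frames):
    """Pack one background division in the order the text describes:
    point-cloud data size, number of points, xyz coordinates, then
    for each frame a time code, a data size, and the image bytes.
    Field widths (4-byte little-endian) are assumptions."""
    body = struct.pack("<I", len(points))          # number of points
    for x, y, z in points:
        body += struct.pack("<3f", x, y, z)        # point coordinates
    blob = struct.pack("<I", len(body)) + body     # size of point cloud data
    for timecode, image in frames:
        blob += struct.pack("<II", timecode, len(image)) + image
    return blob
```

The size-prefix-first ordering lets a reader skip an entire division, or an entire frame, without parsing its contents.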
- the processing shown in FIG. 11 is started when image data is received by the input unit 102 .
- step S1100 the management unit 107 generates a sequence header for sequence data.
- the management unit 107 determines whether to generate a data set to be saved.
- step S1101 the model acquisition unit 105 acquires background model data.
- step S1102 the model dividing unit 106 divides the background model data based on a predetermined dividing method.
- step S1103 the management unit 107 stores the divided background model data in the storage unit 108 according to a predetermined format.
- step S1104 input is repeated frame by frame from the start of shooting.
- step S1105 image frame data is obtained from the cameras 101a to 101t.
- step S1106 the foreground model generation unit 103 generates a foreground image and a silhouette image.
- step S1107 the foreground model generation unit 103 generates point cloud model data of the subject using the silhouette image.
- step S1108 the model dividing unit 106 divides the generated point cloud model data of the object according to a predetermined method.
- as the predetermined method in this embodiment, as shown in FIG. 3A, the point cloud model is divided into eight, so each point is assigned to a division determined from its coordinates. If a point lies exactly on a boundary, it may belong to either division or to both divisions.
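Determining a point's division from its coordinates can be sketched as an octant test around a center point; splitting on the three axis midplanes is a common interpretation of an eight-way division, and the bit-packed numbering is an assumption for illustration:

```python
def octant(point, center):
    """Assign a point to one of eight divisions by which side of the
    centre it lies on along each axis.  Returns an index 0..7 packed
    as bits (x, y, z).  A point exactly on a boundary is placed in
    the lower octant here, though the text allows either or both."""
    x, y, z = point
    cx, cy, cz = center
    return (x > cx) | ((y > cy) << 1) | ((z > cz) << 2)
```

Grouping a subject's points by this index yields the eight divided point clouds stored per subject.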
- the management unit 107 stores the divided foreground model data in the storage unit 108 according to a predetermined format.
- step S1110 the management unit 107 stores the foreground image generated in step S1106 in the storage unit 108 according to a predetermined format.
- step S1111 the model dividing unit 106 integrates regions other than the foreground image from the input image and the foreground image generated by the foreground model generation unit 103 to generate a background image.
- the method of generating the background image is not particularly limited.
- the background image is generated by existing techniques such as stitching of a plurality of images, or interpolation using an image from another camera of a part of the subject, surrounding pixels, or an image of another frame.
- step S1112 the model dividing unit 106 divides the generated background image according to a predetermined method. In this embodiment, since the image is divided into four parts as shown in FIG. 4, divided background image data is generated by determining which division each pixel belongs to.
- the management unit 107 stores the divided background image data in the storage unit 108 according to a predetermined format.
- steps S1104 to S1113 are repeated until shooting is completed or input in frame units is completed.
- step S1115 the transmitting/receiving unit 109 receives from the terminal 111 information necessary for the terminal 111 to generate a virtual viewpoint image. This includes at least information about the sequence to use. A sequence may be specified directly, or a search may be performed based on the shooting location, date and time, and event details.
- the selection unit 110 selects corresponding sequence data according to the input information.
- step S1116 input is repeated frame by frame from the start of virtual viewpoint image generation.
- step S1117 the transmission/reception unit 109 receives the virtual viewpoint information from the terminal 111 and inputs it to the selection unit 110 .
- the virtual viewpoint information is information including the position, posture, angle of view, etc. of the virtual camera when the virtual viewpoint is likened to a virtual camera.
- the virtual viewpoint information is information for specifying the position of the virtual viewpoint, the line-of-sight direction from the virtual viewpoint, and the like.
- the selection unit 110 selects a division model of the background model data included in the virtual viewpoint image from the acquired virtual viewpoint information.
- the area 201 is within the field of view of the virtual camera.
- FIG. 4 shows the situation of the virtual camera 200 and the area 201. It is determined that the area 201 includes division 1300-2 and division 1300-3 of the background model data, and these divided background model data are selected.
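Deciding which divisions fall inside the virtual camera's field of view can be sketched by sampling points within the horizontal view sector and collecting the quadrant divisions they land in; the quadrant numbering, sampling density, and range parameter are illustrative assumptions, not the patent's actual selection logic:

```python
import math

def divisions_in_view(cam_pos, yaw_deg, fov_deg, max_range=100.0, samples=32):
    """Return the set of four-way background divisions (numbered as in
    the 45-degree split: 1 back stand, 2 right, 3 main stand, 4 left)
    hit by sample points inside the camera's horizontal field of view.
    A coarse ray-sampling sketch."""
    hit = set()
    for i in range(samples):
        ang = math.radians(yaw_deg - fov_deg / 2 + fov_deg * i / (samples - 1))
        for t in (0.25, 0.5, 1.0):       # sample along each ray
            x = cam_pos[0] + max_range * t * math.cos(ang)
            y = cam_pos[1] + max_range * t * math.sin(ang)
            a = math.degrees(math.atan2(y, x))
            if -45 <= a < 45:
                hit.add(2)
            elif 45 <= a < 135:
                hit.add(1)
            elif -135 <= a < -45:
                hit.add(3)
            else:
                hit.add(4)
    return hit
```

In the situation of FIG. 4, such a test would return the divisions corresponding to 1300-2 and 1300-3 for the camera 200.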
- the background data included in division 1300-2 is the second division data.
- the background model data included in division 1300-3 is the third division data.
- the second divided data includes the size, Data size of 2nd Sub Background model data, of the divided data of the background model data.
- the second divided data includes a data set, Data set of 2nd Sub Background model data.
- the third divided data includes the size, Data size of 3rd Sub Background model data, of the divided data of the background model data.
- the third divided data includes a data set, Data set of 3rd Sub Background model data.
- the divided data correspond to partial areas of the background displayed in the virtual viewpoint image, and are partial data of the background model data.
- the information selected by the selection unit 110 is input to the management unit 107 in step S1119.
- the management unit 107 then outputs the divided model data (the second divided model data and the third divided model data) of the background model data selected from the storage unit 108 to the transmission/reception unit 109 .
- the transmitting/receiving unit 109 transmits divided model data of the selected background model data to the terminal 111 .
- the first divided model data and the fourth divided model data, which are not selected among the background model data, are not output to the terminal 111 . Therefore, the amount of data output to the terminal 111 can be reduced. Since the first divided model data and the fourth divided model data do not contribute to the generation of the virtual viewpoint image, not outputting them does not affect the image quality of the virtual viewpoint image generated by the terminal 111 .
- step S1120 the selection unit 110 selects the frame of the specified time code from the time codes for generating the virtual viewpoint image input via the transmission/reception unit 109 .
- step S1121 the selection unit 110 selects background image data included in the virtual viewpoint image from the virtual viewpoint information. Similar to the selection of the divided data of the background model data, it is determined that the region 201 includes division 1300-2 and division 1300-3 in the background image data, and these divided background image data are selected. Specifically, in FIG. 9, the background image data included in division 1300-2 is the second division data.
- the second divided data of the time code is obtained as image data by reading the information on the image specification from the data indicated by the Pointer of 2nd Sub Background Image and tracing to the frame of the corresponding time code based on the data sizes.
- the background image data included in division 1300-3 is the third division data.
- the third divided data of the time code is obtained as image data by reading the information on the image specification from the data indicated by the Pointer of 3rd Sub Background Image and tracing to the frame of the corresponding time code based on the data sizes.
- step S1122 the information selected by the selection unit 110 is input to the management unit 107.
- Management unit 107 then outputs the divided data (the second divided data and the third divided data) of the background image data selected from storage unit 108 to transmission/reception unit 109 .
- the transmitting/receiving unit 109 transmits divided data of the selected background image data to the terminal 111 .
- the first divided data and the fourth divided data that have not been selected are not output to the terminal 111 . Therefore, the amount of data output to the terminal 111 can be reduced.
- since the first divided data and the fourth divided data do not contribute to the generation of the virtual viewpoint image, not outputting them does not affect the image quality of the virtual viewpoint image generated by the terminal 111 .
- step S1123 the subsequent processing is repeated for all subjects included in the field of view of the virtual camera 200 in the frame at the time of the time code.
- step S1124 the selection unit 110 selects foreground model data included in the virtual viewpoint image from the virtual viewpoint information. For example, foreground model data for object 210 in FIG. 2 is selected.
- step S1125 the subject 210, viewed from above, is divided as indicated by the thin lines in the figure. The selection unit 110 determines that division 300-1, division 300-2, division 300-3, division 300-5, division 300-6, and division 300-7 are visible from the virtual camera 200, and therefore selects the data belonging to these divisions.
- step S1126 first, the selection unit 110 selects the frame to be processed from the input time code. The frame data of the time code can be selected by comparing the time code at the beginning of each frame's data with the input time code and skipping data in data-size units. Alternatively, the time codes and pointers to the frame data of each time code may be stored in a table and searched. In the frame data of the time code, the data size, the number of subjects, the number of cameras, and each camera ID are read, and the necessary divided data is selected. Subsequently, foreground model data is selected from the position of the subject 210 . For example, assume that it is the first subject. For the first subject, the foreground model data of division 300-1 is selected first.
- the foreground data included in division 300-1 is the first divided data.
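The frame lookup described above, comparing each frame's leading time code and skipping by data size, can be sketched as a linear seek; the 4-byte little-endian header fields are an illustrative assumption about the record layout:

```python
import struct

def seek_frame(buf, target_timecode):
    """Walk frame records laid out as [time code][data size][payload]
    and return the payload whose time code matches, skipping the other
    frames by their data size, as the text describes.  Returns None
    when no frame carries the requested time code."""
    off = 0
    while off + 8 <= len(buf):
        tc, size = struct.unpack_from("<II", buf, off)
        if tc == target_timecode:
            return buf[off + 8: off + 8 + size]
        off += 8 + size       # skip this frame in data-size units
    return None
```

The table-of-pointers alternative mentioned in the text trades this linear scan for O(1) lookup at the cost of storing an index.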
- This divided data corresponds to the partial area of the subject displayed in the virtual viewpoint image, and is partial data of the foreground object.
- upon receiving the information from the selection unit 110, the management unit 107 reads the first divided data from the storage unit 108 and outputs it.
- the first divided data is a data set Data set of 1st sub point cloud in 1st Object .
- the selection unit 110 selects the foreground model data of the division 300-2.
- the foreground data included in division 300-2 is the second division data.
- management unit 107 reads the second divided data from storage unit 108 and outputs it.
- the second divided data is the divided data set Data set of 2nd sub point cloud in 1st Object of the foreground model data.
- foreground model data corresponding to division 300-3, division 300-5, division 300-6, and division 300-7 are similarly output.
- the foreground model data corresponding to the divisions 300-4 and 300-8 are not output. Therefore, the amount of data output to the terminal 111 can be reduced.
- since the foreground model data corresponding to division 300-4 and division 300-8 do not contribute to the generation of the virtual viewpoint image, not outputting them does not affect the image quality of the virtual viewpoint image.
- a foreground image is selected to determine the color of the object viewed by the virtual camera.
- the foreground image of a camera close to virtual camera 200 is selected.
- cameras 101-b, 101-o, 101-p, 101-q, and 101-r are photographing the visible side of the subject 210.
- for example, all cameras closer to the virtual camera 200 than a plane 212 that traverses the subject and faces the virtual camera 200 include the subject 210 in their angle of view. The foreground images taken by those cameras can be selected based on their camera IDs. Based on the camera IDs, the foreground image data of each camera below the Foreground Image of 2nd Camera is selected.
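The plane test for picking cameras on the virtual-camera side of the subject can be sketched as a dot-product sign check in 2D; the data shapes (positions as coordinate pairs, cameras as an id-to-position map) are assumptions for illustration:

```python
def select_cameras(virtual_cam, subject, cameras):
    """Pick the cameras on the virtual-camera side of a plane through
    the subject and perpendicular to the viewing direction, i.e. the
    cameras likely to see the same face of the subject.  A geometric
    sketch of the selection described in the text."""
    vx = virtual_cam[0] - subject[0]
    vy = virtual_cam[1] - subject[1]
    selected = []
    for cam_id, (cx, cy) in cameras.items():
        # positive dot product -> same side of the plane as the virtual camera
        if (cx - subject[0]) * vx + (cy - subject[1]) * vy > 0:
            selected.append(cam_id)
    return selected
```

The selected IDs then index into the per-camera foreground image records of the stored frame.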
- step S1128 the selected foreground image data is read from the storage unit 108 and output to the terminal 111 via the transmission/reception unit 109.
- steps S1123 to S1128 are repeated until the output of foreground model data and foreground image data for all subjects within the field of view is completed.
- step S1130 the terminal 111 generates a virtual viewpoint image based on each acquired data.
- steps S1116 to S1130 are repeated until the generation of the virtual viewpoint image ends or until the input per frame ends. When the repetition ends, the three-dimensional information processing and the virtual viewpoint image generation processing end.
- FIG. 12 is a diagram showing the communication status of each part.
- the terminal 111 is activated.
- a request to start generating a virtual viewpoint image is transmitted to the transmitting/receiving unit 109 of the three-dimensional information processing apparatus.
- the transmitting/receiving unit 109 notifies each unit to start generating a virtual viewpoint image, and each unit prepares for it.
- the terminal 111 transmits information on the sequence data for generating virtual viewpoint images to the transmitting/receiving unit 109 . This can be determined by the user searching for or specifying, via the terminal 111, sequence data stored in the storage unit 108.
- information about the sequence data transmitted from the terminal 111 is input to the selection section 110 via the transmission/reception section 109 .
- the selection unit 110 instructs the management unit 107 to read the selected sequence.
- the terminal 111 transmits to the transmitting/receiving unit 109 the time to start generating the virtual viewpoint image, the time code, and the virtual viewpoint information.
- the transmitting/receiving section 109 sends these pieces of information to the selecting section 110 .
- a selection unit 110 selects a frame for generating a virtual viewpoint image from the input time code.
- the selection unit 110 selects divided background model data, divided background image data, divided foreground model data, and divided foreground image data based on the virtual viewpoint information.
- Information on the data selected by the selection unit 110 is sent to the management unit 107 , and based on this, the necessary data of the frame for generating the virtual viewpoint image is read from the storage unit 108 and sent to the transmission/reception unit 109 .
- the transmitting/receiving unit 109 transmits these data to the terminal that has made the request.
- the terminal 111 performs rendering based on these to generate a virtual viewpoint image. Thereafter, transmission of virtual viewpoint information, selection of divided data, and generation of virtual viewpoint images are repeated in order to process the next frame.
- when an end-of-transmission notification is sent from the terminal 111 to the transmitting/receiving unit 109, all processing ends.
- the processing is shown in the flowchart as a sequential flow, but it is not limited to this.
- foreground model data and background model data may be selected and output in parallel.
- the terminal 111 can continue to use the divided data of the previous frame as it is to generate the background. In addition, repeated transmission of the same background model data is reduced, and the amount of data transmission is reduced.
- the three-dimensional information processing apparatus 100 may generate virtual viewpoint information.
- the virtual viewpoint information is input to the selection unit 110, and the subsequent processing may be the same as the processing described above.
- the data transmitted to the terminal 111 also includes virtual viewpoint information. This virtual viewpoint information may be automatically generated by the three-dimensional information processing apparatus 100 or may be input by a user other than the user operating the terminal 111 .
- although the foreground model generation unit 103 and the background model generation unit 104 generate three-dimensional shape data from images captured by a plurality of cameras, the present invention is not limited to this; the three-dimensional shape data may be generated artificially. Also, although point cloud model data and foreground image data have been used as the three-dimensional shape data to be stored in the storage unit 108, the present invention is not limited to this.
- FIG. 7A is an example of a data set configuration of colored point cloud model data in which color information is added to each point of the point cloud.
- the colored point cloud model data is divided in the same manner as the foreground model data described above. Specifically, as shown in FIG. 7(b), the data is composed of frames in the same manner as the foreground model data, and the number of cameras used and the camera IDs are saved. Subsequently, the number of divisions of the colored point cloud model data is described, and the data size of the colored point cloud model data of each subject is followed by the data of each divided colored point cloud model data.
- the divided colored point cloud model data holds the data size, the number of points of the divided colored point cloud model data, and the coordinates and color information of each point.
- the colored point cloud model is used instead of the foreground model data described above. Specifically, in generating a virtual viewpoint image, colored point cloud model data is selected and transmitted to the terminal 111 .
- the terminal 111 colors the pixels at the point positions of the point cloud model data using the color information.
- FIG. 8(a) is an example of a data set configuration of mesh model data that constitutes a mesh or the like.
- the mesh model is divided in the same manner as the foreground model data and the colored point cloud model data.
- the data is composed of frames in the same manner as the foreground model data, and from the beginning, the time code, the data size of the frame, and the number of subjects are saved.
- the number of divisions of the mesh model data is described, and the data size of the mesh model data of each subject is followed by the data of each divided mesh model data.
- the divided mesh model data includes the data size, the number of polygons in the divided mesh model data, and the data for each polygon, that is, the coordinates of the vertices of each polygon and the color information of each polygon, stored in order.
- the coordinate system describing the vertices is 3-axis data, and the color information is saved as the values of the three primary colors of RGB, but it is not limited to this.
- the coordinate system may be a polar coordinate system or another coordinate system.
- Color information may also be represented by information such as a uniform color space, luminance, and chromaticity.
- mesh model data is selected instead of the foreground model data described above and transmitted to the terminal 111 . The terminal 111 generates the image by coloring the areas surrounded by the vertices of the mesh model data with the color information.
- with this 3D shape data, data can be selected and specified as easily as with the colored point cloud model data, and the amount of data can be reduced compared to the colored point cloud model data, thereby reducing the cost of terminals. As a result, more terminals can be connected.
- the mesh model data may be generated without color as data for texture mapping the foreground image data in the same manner as the foreground model data. That is, the data structure of the mesh model data may be described in a format of only shape information without color information.
- FIGS. 10A to 10D show examples in which the background model data is composed of mesh model data.
- the content of the header is the header itself of the background model data.
- the data set class code of the background model data format is 0x0007.
- the data size of the first division is saved following the data size of the background image model data. Subsequently, the polygon data of the first division is saved.
- the divided mesh model data is first stored with a time code.
- the size of the data, the number of polygons in the divided mesh model data, and then the data for each polygon, that is, the coordinates of the vertices of the polygons and the color information of the polygons are stored in this order.
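The divided mesh record order just described (time code, data size, polygon count, then per polygon the vertex coordinates followed by the polygon's color) can be sketched as a packer; the triangle assumption, RGB encoding, and 4-byte little-endian field widths are illustrative assumptions:

```python
import struct

def pack_mesh_division(timecode, polygons):
    """Pack one division of mesh model data in the stated order:
    time code, data size, number of polygons, then per polygon the
    three vertex coordinates followed by an RGB colour.  Polygons are
    assumed triangular; field widths are assumptions."""
    body = struct.pack("<I", len(polygons))        # number of polygons
    for verts, (r, g, b) in polygons:
        for x, y, z in verts:                      # vertex coordinates
            body += struct.pack("<3f", x, y, z)
        body += struct.pack("<3B", r, g, b)        # polygon colour
    return struct.pack("<II", timecode, len(body)) + body
```

As with the point-cloud layout, the leading data size lets a reader skip a whole division without parsing its polygons.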
- a polygon may belong to either division or both divisions.
- the polygon may be divided by the boundary line and belong to each division.
- a three-dimensional information processing apparatus 1300, which is a second embodiment for processing three-dimensional shape data, will be described using the configuration diagram of the virtual viewpoint image generation system. In the figure, the same reference numbers are attached to the same configurations as those in FIG. 1, and their descriptions are omitted.
- This embodiment differs from the first embodiment in that a three-dimensional information processing apparatus 1300 has a virtual viewpoint image generation unit 1301 .
- this embodiment differs from the first embodiment in the division method.
- the model generation unit 1303 has the functions of the foreground model generation unit 103 and the background model generation unit 104 of the first embodiment.
- a configuration example of computer hardware applicable to the three-dimensional information processing apparatus 1300 according to the present embodiment is the same as that of the first embodiment, so description thereof will be omitted.
- the terminals 1310a to 1310d transmit to the three-dimensional information processing apparatus 1300 virtual viewpoint information in which the user has set the virtual viewpoint.
- the terminals 1310a to 1310d do not have a renderer, and only set virtual viewpoints and display virtual viewpoint images.
- the transmission/reception unit 1308 receives virtual viewpoint information from the terminal 1310 and transmits it to the selection unit 1309 and the virtual viewpoint image generation unit 1301 . Further, the transmitting/receiving unit 1308 has a function of transmitting the generated virtual viewpoint image to the terminals 1310a to 1310d that transmitted the virtual viewpoint information.
- a virtual viewpoint image generation unit 1301 includes a renderer, and generates a virtual viewpoint image based on the input virtual viewpoint information and the three-dimensional shape data read from the storage unit 108 .
- a selection unit 1309 selects a data set necessary for the virtual viewpoint image generation unit 1301 to generate a virtual viewpoint image. Note that the terminals 1310a to 1310d are explained as the terminal 1310 unless otherwise specified. Also, the number of terminals 1310 is not limited to this, and may be one.
- FIGS. 16(a) to (c) show an example of the configuration of the foreground model data of the second embodiment.
- the foreground model data set is saved in units of frames, but it is not limited to this.
- it may be managed on an object-by-object basis.
- the foreground model data header is the same as in the first embodiment.
- the three-dimensional shape data is composed of point cloud model data and foreground image data will be described.
- the time code representing the time of the first frame of the foreground model data and the data size of the frame are stored in the following order. Subsequently, the number P of subjects for generating a virtual viewpoint image at the time indicated by the time code is saved. Furthermore, the number C of cameras used for photographing at that time is stored. Subsequently, the camera ID of the camera used is saved. Subsequently, the foreground model data of each subject is stored. First, a data size is saved to represent the foreground model data of the subject. Furthermore, the division number D of the foreground model data of the subject is stored.
- the divided foreground model data of the subject is saved.
- a data size of the split foreground model data is saved, followed by a split foreground model data description.
- the stored description includes the data size of the divided foreground model, followed by the number C of cameras that capture the subject and the C camera IDs.
- the divided foreground model data is saved.
- the divided foreground model data has the same configuration as in FIG. 6(b).
- the foreground image data is also the same as in FIG. 6(b).
- FIG. 14 shows an example of the state of division in this embodiment.
- FIG. 14 shows an example of implementing 12 divisions.
- the division method and number are not limited to this.
- the imaging range of the camera 101-b with respect to the subject 210, that is, the range in which the subject 210 can be seen from that camera, is represented as the concentric-circle area 1401-b. Similar relationships hold between area 1401-d and camera 101-d, area 1401-h and camera 101-h, and area 1401-j and camera 101-j. Furthermore, the same relationships hold for area 1401-o and camera 101-o, area 1401-p and camera 101-p, area 1401-q and camera 101-q, and area 1401-r and camera 101-r.
- the boundaries of the range where the respective regions overlap each other are defined as division boundaries.
- Division 1402-1 includes area 1401-b and area 1401-r, and the number of cameras C is two.
- the point data of the point cloud model data of the object 210 is included in the Data set of 1st sub point cloud in 1st Object .
- the Number of Camera is 2, and the color of the point cloud of the divided image can be determined only from the images of the cameras 101-b and 101-r with camera IDs.
- division 1402-2 includes area 1401-b, and the number of cameras C is one.
- a division 1402-3 includes areas 1401-d and 1401-h, and the number of cameras C is two.
- Division 1402-4 includes area 1401-d and the number of cameras C is one.
- Division 1402-5 includes area 1401-j, and the number of cameras C is one.
- a division 1402-6 includes areas 1401-j and 1401-q, and the number of cameras C is two.
- Division 1402-7 includes region 1401-q and the number of cameras C is one.
- Division 1402-8 includes area 1401-p and area 1401-q, and the number of cameras C is two.
- Division 1402-9 includes area 1401-o, area 1401-p, and area 1401-q, and the number of cameras C is three.
- division 1402-10 includes area 1401-p and area 1401-q, and the number of cameras C is two.
- Division 1402-11 includes area 1401-b, area 1401-p, area 1401-q, and area 1401-r, and the number of cameras C is four.
- Division 1402-12 includes area 1401-b, area 1401-q, and area 1401-r, and the number of cameras C is three. Division of these regions is uniquely determined from the position of the subject, the camera that is taking the picture, and its position.
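Since each division is uniquely determined by the set of cameras whose imaging range contains it, the assignment can be sketched as a point-in-circle coverage test; modelling each camera's range as a center and radius is a simplification of the concentric-circle areas in FIG. 14, and the data shapes are assumptions:

```python
def covering_cameras(point, camera_ranges):
    """Return the sorted IDs of the cameras whose imaging range
    (modelled as a circle: centre plus radius) contains the point.
    The set of covering cameras identifies the division the point
    belongs to, as in the FIG. 14 split."""
    px, py = point
    ids = []
    for cam_id, (cx, cy, radius) in sorted(camera_ranges.items()):
        if (px - cx) ** 2 + (py - cy) ** 2 <= radius ** 2:
            ids.append(cam_id)
    return ids
```

Two points with the same covering set fall in the same division, so the division's foreground image needs only those cameras' IDs, which is the data-management benefit the text notes.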
- the camera ID of the foreground image of each division becomes the same within the division, which has the effect of facilitating data management.
- the information processing method of the virtual viewpoint image generation system of Embodiment 2 with the above configuration will be described using the flowchart of FIG. 15.
- the same numbers are attached to the steps in which the operation of each unit is the same as the processing operation (FIG. 11) of the first embodiment, and the description thereof is omitted.
- the processing shown in FIG. 15 is started when image data is received by the input unit 102 .
- after the sequence header is generated in step S1100, the background model data is processed in steps S1101 to S1103. The process then advances to step S1104 to repeat input in units of frames from the start of shooting. Point cloud model data of each subject is generated in step S1107.
- step S1501 the division of the foreground model data for each subject is repeated.
- step S1508, as described with reference to FIG. 14, divides the foreground model data into regions visible from one or more cameras.
- step S1502 when the foreground model data of all the objects have been divided, the repetition ends.
- a background image is generated and divided in steps S1111 to S1113 in the same manner as in the first embodiment, and saved.
- the transmitting/receiving unit 1308 receives information necessary for the terminal 1310 to generate a virtual viewpoint image from the terminal 1310 .
- the selection unit 1309 selects corresponding sequence data according to the input information.
- input is repeated frame by frame from the start of virtual viewpoint image generation.
- the background model data and background image data necessary to generate the background are selected and output through steps S1117 to S1122.
- step S1123 the subsequent processing is repeated for all subjects included in the field of view of virtual camera 200 in the frame of the time of the time code.
- step S1124 the selection unit 110 selects foreground model data included in the virtual viewpoint image from the virtual viewpoint information. For example, foreground model data relating to the subject 260 shown in FIG. 14 is selected.
- the selection unit 1309 selects the divided foreground model data with reference to FIG. 14. As shown in FIG. 14, cameras 101-q and 101-r are near the virtual camera 250. The selection unit 1309 selects the divided data of the divided foreground model data that includes these camera IDs. Since these camera IDs are included in division 1402-1 and division 1402-3, these divided data are selected.
- step S1126 the management unit 107 acquires the selection information from the selection unit 1309 and outputs these pieces of divided data from the storage unit 108 to the virtual viewpoint image generation unit 1301 . That is, the subject 260 in FIG. 14 is the first subject. The Data set of 1st sub point cloud of 1st Object is output as the divided data of the foreground model data of division 1402-1 . Further, the Data set of 3rd sub point cloud of 1st Object is output as the divided data of the foreground model data of division 1402-3 .
- in step S1527, the selection unit 1309 selects the foreground image data of the camera IDs included in all of the divided data selected in step S1525.
- in step S1128, the management unit 107 acquires information on the selected data, reads the selected data from the storage unit 108, and outputs it to the virtual viewpoint image generation unit 1301.
- in step S1130, the virtual viewpoint image generation unit 1301 generates a virtual viewpoint image based on the acquired data and the virtual viewpoint information, and outputs the generated virtual viewpoint image to the transmitting/receiving unit 1308. The transmitting/receiving unit 1308 transmits the generated virtual viewpoint image to the terminal 1310 that requested generation of the virtual viewpoint image.
- the transmission channel here is the communication channel between the storage unit 108 and the virtual viewpoint image generation unit 1301. Since the generated virtual viewpoint image is transmitted to the terminal 1310, the amount of data transmitted from the transmitting/receiving unit 1308 to the terminal 1310 can be made smaller than when the material data for generating the virtual viewpoint image is transmitted to the terminal 1310.
- visibility information may be used to generate divided data.
- the visibility information is information indicating from which camera the constituent elements (for example, points in the case of point cloud model data) making up the three-dimensional shape data can be seen.
- the points of the point cloud visible from a camera close to the position of the virtual camera 250 may be selected using the visibility information, and only the visible points may be output. As a result, only the points visible from the virtual camera are transmitted, so the amount of information can be further reduced.
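The visibility-based filtering described above can be sketched as a boolean per-point, per-camera lookup. This is a minimal sketch under the assumption that the visibility information is held as an N-points-by-C-cameras boolean array; the function name `visible_points` is illustrative.

```python
import numpy as np

def visible_points(points, visibility, camera_index):
    """Keep only the points of a point cloud visible from the given camera.

    points:     (N, 3) array of point coordinates
    visibility: (N, C) boolean array; visibility[i, j] is True when point i
                can be seen from camera j (the per-point visibility information)
    """
    return points[visibility[:, camera_index]]

points = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
visibility = np.array([[True, False],
                       [True, True],
                       [False, True]])
# Output only the points visible from camera 0 (the camera closest to the virtual camera).
print(visible_points(points, visibility, 0))
```

Transmitting only the rows that survive this mask is what reduces the amount of information further, since invisible points contribute nothing to the rendered image.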
- the division is performed after the entire foreground model data is generated, but the present invention is not limited to this.
- data may be divided while creating foreground model data by shape estimation.
- shape estimation may be performed separately for each division, or the division to which each point or polygon belongs may be determined while the visibility determination result is being calculated.
- the divided data to be transmitted may be transmitted with priority.
- for example, division 1402-3, which contains the area 1401-p in front of the virtual camera 200, is transmitted first.
- the transmission amount and image quality can be controlled by thinning out the points of the divided point cloud with low priority or by thinning out the cameras that send the foreground images. It is also possible to give a higher priority to specific divisions such as faces.
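The priority-based transmission described above can be sketched as follows; a minimal sketch assuming point-cloud divisions keyed by ID, where low-priority divisions are thinned by simple subsampling (the function `order_and_thin` and the 50% keep ratio are illustrative assumptions, not the embodiment's actual method).

```python
def order_and_thin(divisions, priorities, keep_ratio_low=0.5):
    """Sort divided data by priority (highest first) and thin low-priority point sets.

    divisions:  {division_id: list of points}
    priorities: {division_id: int}, larger means more important
    """
    ordered = sorted(divisions, key=lambda d: priorities[d], reverse=True)
    out = []
    for rank, div_id in enumerate(ordered):
        pts = divisions[div_id]
        if rank > 0:  # thin every division except the highest-priority one
            step = max(1, int(1 / keep_ratio_low))
            pts = pts[::step]
        out.append((div_id, pts))
    return out

# The division in front of the virtual camera (1402-3) gets the higher priority,
# so it is sent first and at full resolution; the other division is subsampled.
divisions = {"1402-3": list(range(10)), "1402-1": list(range(10))}
priorities = {"1402-3": 2, "1402-1": 1}
out = order_and_thin(divisions, priorities)
print([(d, len(p)) for d, p in out])  # [('1402-3', 10), ('1402-1', 5)]
```

Giving specific divisions such as faces a higher priority would simply mean assigning them a larger value in `priorities`.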
- divisions need not be determined only according to the overlap of the imaging ranges of the cameras; they may also be selected so that the number of points is almost uniform across divisions, or so that each division has the same size. Basically, divisions should not overlap, but some overlap is permissible.
- region 1402-7 may be included in both division 1402-6 and division 1402-8. The foreground image of the points in this region will be used to color the points at the boundary of both regions, which has the effect of improving the quality of the boundary of the segmentation.
- the dividing method may be the following method. That is, the foreground model or the like may be divided based on the virtual viewpoint information. In this case, the foreground model and the like are not split until the virtual viewpoint information is specified.
- in this case, the storage unit 108 holds a foreground model for each subject rather than divided model data. That is, in FIG. 16, the data that was divided into "subs" is integrated into one. Specifically, in FIG. 16(b), "Data size of 1st sub point cloud of 1st Object" is read as "Data size of point cloud of 1st Object", and the data size of the point cloud of the 1st Object itself is written there.
- "Description of 1st sub point cloud of 1st Object" is read as "Description of point cloud of 1st Object", and "Data set of 1st sub point cloud in 1st Object" is read as "Data set of point cloud in 1st Object".
- the entries from "Data size of 2nd sub point cloud of 1st Object" through "Data set of Dth sub point cloud in 1st Object" are removed.
- although the foreground model is used as an example here, the same applies to the background model.
- upon receiving an instruction to generate a virtual viewpoint image from the terminal 1310, the selection unit 1309, based on the virtual viewpoint information acquired via the transmission/reception unit 1308, identifies the foreground models included in the field of view of the virtual viewpoint specified by the virtual viewpoint information. Further, the selection unit 1309 identifies the portion of each identified foreground model to be displayed in the virtual viewpoint image, and outputs information on the identified portion to the management unit 107. Based on the acquired information, the management unit 107 divides each foreground model stored in the storage unit 108 into the portion displayed in the virtual viewpoint image and the other portions.
- the management unit 107 outputs, to the virtual viewpoint image generation unit 1301, the model of the portion displayed in the virtual viewpoint image among the divided models. Thus, only the part of the foreground model necessary for the virtual viewpoint image is output, and the amount of data to be transmitted can be reduced. In addition, since the foreground model is divided after the virtual viewpoint information is obtained, necessary and sufficient divided models can be generated efficiently. The data stored in the storage unit 108 is also simplified.
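The on-demand division described above can be sketched as a simple field-of-view test: points of the foreground model are split into the portion inside the virtual camera's view and the rest. This is a minimal sketch assuming a conical field of view; the function `split_by_view` and the angular test are illustrative, not the embodiment's actual method.

```python
import numpy as np

def split_by_view(points, cam_pos, cam_dir, half_angle_rad):
    """Divide a foreground model's points into the portion inside the virtual
    camera's field of view and the remaining portion.

    A point counts as displayed when the angle between the viewing direction and
    the vector from the camera to the point is within the half field-of-view angle.
    """
    cam_dir = cam_dir / np.linalg.norm(cam_dir)
    vecs = points - cam_pos
    dists = np.linalg.norm(vecs, axis=1)
    cos_angle = (vecs @ cam_dir) / np.maximum(dists, 1e-9)
    inside = cos_angle >= np.cos(half_angle_rad)
    return points[inside], points[~inside]

# Virtual camera at the origin looking along +x with a 45-degree half angle.
points = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [-1.0, 0.0, 0.0]])
inside, outside = split_by_view(points, np.zeros(3), np.array([1.0, 0.0, 0.0]),
                                np.deg2rad(45))
print(len(inside), len(outside))  # 1 2
```

Only the `inside` portion would be output to the virtual viewpoint image generation unit; the `outside` portion stays in storage.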
- alternatively, the management unit 107 may extract the model of the portion displayed in the virtual viewpoint image and output that partial model to the virtual viewpoint image generation unit 1301. In this case, the model dividing unit 1305 need not be included in the 3D information processing apparatus.
- the partial model to be output may be specified by the terminal 1310.
- the user may specify which partial model is to be output via the terminal 1310 that the user operates, or the terminal 1310 may specify the partial model to be output based on the virtual viewpoint information specified by the user.
- this partial model may be a partial model divided in advance as in the first and second embodiments, or may be a partial model divided or specified based on virtual viewpoint information.
- multiple partial models may be displayed on the terminal 1310 for the user to specify.
- all of the multiple partial models included in the foreground model may be output.
- all of a plurality of partial models may be output according to a user's instruction.
- when the terminals 1310a to 1310d input different pieces of virtual viewpoint information for the same frame of the same sequence at the same timing, the following configuration may be used. That is, the fields of view of a plurality of virtual cameras corresponding to the plurality of pieces of virtual viewpoint information input by the terminals 1310a to 1310d are specified, the foreground models included in any of those fields of view are identified, and the portion of each foreground model to be displayed in any one of the virtual viewpoint images is specified. The identified portions to be displayed in any of the virtual viewpoint images may then be output to the virtual viewpoint image generation unit 1301.
- the virtual viewpoint image generation unit 1301 may generate a plurality of virtual viewpoint images at the same time, or may generate the virtual viewpoint images one at a time in sequence. In the latter case, the virtual viewpoint image generation unit 1301 temporarily stores the output data in a buffer and uses it at the required timing.
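The multi-terminal case above amounts to reading each partial model from storage once, even when several virtual viewpoints need it. A minimal sketch, assuming each terminal's viewpoint yields a set of partial-model IDs (the function name `parts_for_terminals` is hypothetical):

```python
def parts_for_terminals(part_sets):
    """Given, per terminal, the set of partial-model IDs visible from that
    terminal's virtual viewpoint, return the union to read out once from storage."""
    union = set()
    for s in part_sets:
        union |= s
    return union

# Four terminals with overlapping fields of view: each partial model is
# output only once, regardless of how many viewpoints display it.
needed = parts_for_terminals([{1, 2}, {2, 3}, {3, 4}, {1, 4}])
print(sorted(needed))  # [1, 2, 3, 4]
```

The generation unit can then buffer these parts and render each terminal's image at its required timing.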
- the present invention is not limited to this.
- an external device having a virtual viewpoint image generation unit 1301 may be provided separately from the 3D information processing device 1300 .
- the material data (foreground model, etc.) necessary for the virtual viewpoint image is output to the external device, and the virtual viewpoint image generated by the external device is output to the transmitting/receiving unit 1308 .
- the present disclosure can also be realized by processing in which a program that implements one or more functions of the above-described embodiments is supplied to a system or apparatus via a network or a storage medium, and one or more processors in a computer of the system or apparatus read and execute the program. It can also be implemented by a circuit (for example, an ASIC) that implements one or more functions.
- the computer program code itself read from the storage medium implements the functions of the above-described embodiments, and the storage medium storing the computer program code constitutes part of the present disclosure. The present disclosure also includes the case where the operating system (OS) running on the computer performs part or all of the actual processing based on the instructions of the program code, and the above functions are realized by that processing. The present disclosure may also be realized with the following configuration.
- the computer program code read from the storage medium is written into a memory provided in a function expansion card inserted into the computer or a function expansion unit connected to the computer.
- the storage medium stores computer program code corresponding to the processing described above.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- Computer Hardware Design (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- Processing Or Creating Images (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Image Generation (AREA)
Abstract
Description
A three-dimensional information processing apparatus 100 that processes three-dimensional shape data generated by installing a plurality of cameras in a facility such as a stadium or a concert hall and capturing images will be described with reference to the virtual viewpoint image generation system configuration diagram of FIG. 1. The virtual viewpoint image generation system includes cameras 101a to 101t, an input unit 102, a foreground model generation unit 103, a background model generation unit 104, a model acquisition unit 105, a model division unit 106, a management unit 107, a storage unit 108, a transmission/reception unit 109, a selection unit 110, and terminals 111a to 111d. Unless otherwise noted, the cameras 101a to 101t are described as the camera 101. When simply called a camera, it refers to a real camera or a physical camera. Likewise, unless otherwise noted, the terminals 111a to 111d are described as the terminal 111. The three-dimensional shape data may also be called a model below. A model may refer to three-dimensional shape data representing the three-dimensional shape of a foreground or background, or to data that further includes color information of the foreground or background in addition to the three-dimensional shape data.
Another example of the data saved in the storage unit 108 is described below.
FIG. 7(a) shows an example of the configuration of a data set of colored point cloud model data, in which color information is attached to each point of the point cloud. The colored point cloud model data is divided in the same manner as the foreground model data shown in FIG. 6. Specifically, as shown in FIG. 7(b), the data is organized per frame in the same manner as the foreground model data; from the top, the time code, the data size of the frame, the number of subjects, the number of cameras used for imaging, and the camera IDs are stored. Subsequently, the number of divisions of the colored point cloud model data is written, and the data size of the colored point cloud model data of each subject is followed by the data of each divided colored point cloud model. Also, as shown in FIG. 7(c), each piece of divided colored point cloud model data stores its data size and the number of points in the divided colored point cloud model data, followed by the coordinates and color information of each point.
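The per-division layout in the spirit of FIG. 7(c) (data size, point count, then per-point coordinates and color) can be sketched with a fixed binary packing. The exact byte layout below (little-endian, float32 coordinates, one byte per color channel) is an assumption for illustration, not the normative format.

```python
import struct

def pack_colored_subcloud(points):
    """Pack one divided colored point cloud: a header with the data size and the
    point count, then per point x, y, z as float32 followed by r, g, b as bytes.
    This byte layout is hypothetical, mirroring the order described for FIG. 7(c)."""
    body = b"".join(struct.pack("<fffBBB", x, y, z, r, g, b)
                    for (x, y, z, r, g, b) in points)
    header = struct.pack("<II", len(body) + 8, len(points))  # data size, point count
    return header + body

blob = pack_colored_subcloud([(0.0, 0.0, 0.0, 255, 0, 0)])  # one red point
size, count = struct.unpack("<II", blob[:8])
print(count)  # 1
```

A reader would parse the header first, then iterate `count` times over fixed-size per-point records, which is what makes each division independently transmittable.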
FIG. 8(a) shows an example of the configuration of a data set of mesh model data constituting meshes and the like. The mesh model is divided in the same manner as the foreground model data and the colored point cloud model data. Specifically, as shown in FIG. 8(b), the data is organized per frame in the same manner as the foreground model data; from the top, the time code, the data size of the frame, and the number of subjects are stored. Subsequently, the number of divisions of the mesh model data is written, and the data size of the mesh model data of each subject is followed by the data of each divided mesh model. Also, as shown in FIG. 8(c), each piece of divided mesh model data stores its data size and the number of polygons in the divided mesh model data, followed by per-polygon data, that is, the coordinates of the polygon vertices and the polygon color information in order.
The background model data can also be managed as mesh model data. FIGS. 10(a) to 10(d) show an example in which the background model data is configured as mesh model data. As shown in FIG. 10(b), the contents of the header are those of the background model data header itself. However, in this embodiment, the data set class code of the background model data format is 0x0007. As shown in FIG. 10(c), when the background model data is a mesh model, the data size of the first division is stored following the data size of the background model data, and the polygon data of the first division follows. As shown in FIG. 10(d), each piece of divided mesh model data first stores a time code. This is followed by the data size and the number of polygons in the divided mesh model data, and then the per-polygon data, that is, the coordinates of the polygon vertices and the polygon color information in order.
A three-dimensional information processing apparatus 1300 according to a second embodiment that processes three-dimensional shape data will be described with reference to the virtual viewpoint image generation system configuration diagram of FIG. 13. In that figure, components whose operation is the same as in FIG. 1 are given the same numbers, and their description is omitted. This embodiment differs from the first embodiment in that the three-dimensional information processing apparatus 1300 has a virtual viewpoint image generation unit 1301. The division method also differs from that of the first embodiment. The model generation unit 1303 has the functions of the foreground model generation unit 103 and the background model generation unit 104 of the first embodiment. An example of the hardware configuration of a computer applicable to the three-dimensional information processing apparatus 1300 according to this embodiment is the same as in the first embodiment, and its description is omitted.
The division method may also be as follows. That is, the foreground model and the like may be divided based on the virtual viewpoint information. In this case, the foreground model and the like are not divided until the virtual viewpoint information is specified. In other words, the storage unit 108 holds a foreground model for each subject rather than divided model data. That is, in FIG. 16, the data that was divided into "subs" is integrated into one. Specifically, in FIG. 16(b), "Data size of 1st sub point cloud of 1st Object" is read as "Data size of point cloud of 1st Object", and the data size of the point cloud of the 1st Object itself is written there. Likewise, "Description of 1st sub point cloud of 1st Object" is read as "Description of point cloud of 1st Object", and "Data set of 1st sub point cloud in 1st Object" is read as "Data set of point cloud in 1st Object". The entries from "Data size of 2nd sub point cloud of 1st Object" through "Data set of Dth sub point cloud in 1st Object" are removed. Although the foreground model is used as an example, the same applies to the background model.
The present disclosure can also be realized by processing in which a program that implements one or more functions of the above-described embodiments is supplied to a system or apparatus via a network or a storage medium, and one or more processors in a computer of the system or apparatus read and execute the program. It can also be realized by a circuit (for example, an ASIC) that implements one or more functions.
Claims (13)
- 1. An information processing apparatus comprising: a first acquisition means for acquiring virtual viewpoint information for specifying the position of a virtual viewpoint and a view direction from the virtual viewpoint; a second acquisition means for acquiring three-dimensional shape data of an object; a specifying means for specifying, based on the virtual viewpoint information acquired by the first acquisition means, a partial region of the object that is displayed in a virtual viewpoint image representing the view from the virtual viewpoint; and an output means for outputting, of the three-dimensional shape data acquired by the second acquisition means, partial data corresponding to the partial region specified by the specifying means.
- 2. The information processing apparatus according to claim 1, wherein the three-dimensional shape data has a plurality of pieces of partial data, and the output means outputs, among the plurality of pieces of partial data, partial data that includes constituent elements of the three-dimensional shape data corresponding to the partial region specified by the specifying means.
- 3. The information processing apparatus according to claim 2, wherein the plurality of pieces of partial data are generated by division according to the position of the three-dimensional shape data.
- 4. The information processing apparatus according to claim 2 or 3, wherein the plurality of pieces of partial data are generated by division based on reference coordinate axes.
- 5. The information processing apparatus according to any one of claims 2 to 4, wherein the plurality of pieces of partial data are generated by division based on the positions of imaging devices used to generate the three-dimensional shape data.
- 6. The information processing apparatus according to claim 1, further comprising a dividing means for dividing the three-dimensional shape data acquired by the second acquisition means into a plurality of pieces of partial data based on the partial region specified by the specifying means, wherein the output means outputs, among the plurality of pieces of partial data divided by the dividing means, partial data corresponding to the partial region specified by the specifying means.
- 7. The information processing apparatus according to claim 1, wherein the first acquisition means acquires a plurality of pieces of virtual viewpoint information, and the specifying means specifies a partial region of the object that is displayed in any one of a plurality of virtual viewpoint images representing views from a plurality of virtual viewpoints specified by the plurality of pieces of virtual viewpoint information.
- 8. The information processing apparatus according to any one of claims 1 to 7, further comprising a control means for performing control such that, of the three-dimensional shape data acquired by the second acquisition means, partial data different from the partial data corresponding to the partial region specified by the specifying means is not output.
- 9. An information processing method comprising: a first acquisition step of acquiring virtual viewpoint information for specifying the position of a virtual viewpoint and a view direction from the virtual viewpoint; a second acquisition step of acquiring three-dimensional shape data of an object; a specifying step of specifying, based on the virtual viewpoint information acquired in the first acquisition step, a partial region of the object that is displayed in a virtual viewpoint image representing the view from the virtual viewpoint; and an output step of outputting, of the three-dimensional shape data acquired in the second acquisition step, partial data corresponding to the partial region specified in the specifying step.
- 10. The information processing method according to claim 9, wherein the three-dimensional shape data has a plurality of pieces of partial data, and the output step outputs, among the plurality of pieces of partial data, partial data that includes constituent elements of the three-dimensional shape data corresponding to the partial region specified in the specifying step.
- 11. The information processing method according to claim 9, further comprising a dividing step of dividing the three-dimensional shape data acquired in the second acquisition step into a plurality of pieces of partial data based on the partial region specified in the specifying step, wherein the output step outputs, among the plurality of pieces of partial data divided in the dividing step, partial data corresponding to the partial region specified in the specifying step.
- 12. The information processing method according to any one of claims 9 to 11, wherein control is performed such that, of the three-dimensional shape data acquired in the second acquisition step, partial data different from the partial data corresponding to the partial region specified in the specifying step is not output.
- 13. A program for causing a computer to function as the information processing apparatus according to any one of claims 1 to 8.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020237027348A KR20230130709A (ko) | 2021-02-18 | 2022-02-09 | 정보처리 장치, 정보처리 방법, 및 기억 매체 |
CN202280015742.4A CN116940964A (zh) | 2021-02-18 | 2022-02-09 | 信息处理设备、信息处理方法和程序 |
EP22756041.4A EP4296958A1 (en) | 2021-02-18 | 2022-02-09 | Information processing device, information processing method, and program |
US18/450,844 US20230394701A1 (en) | 2021-02-18 | 2023-08-16 | Information processing apparatus, information processing method, and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021024134A JP2022126205A (ja) | 2021-02-18 | 2021-02-18 | 情報処理装置、情報処理方法、およびプログラム |
JP2021-024134 | 2021-02-18 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/450,844 Continuation US20230394701A1 (en) | 2021-02-18 | 2023-08-16 | Information processing apparatus, information processing method, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022176720A1 true WO2022176720A1 (ja) | 2022-08-25 |
Family
ID=82931618
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2022/004992 WO2022176720A1 (ja) | 2021-02-18 | 2022-02-09 | 情報処理装置、情報処理方法、およびプログラム |
Country Status (6)
Country | Link |
---|---|
US (1) | US20230394701A1 (ja) |
EP (1) | EP4296958A1 (ja) |
JP (1) | JP2022126205A (ja) |
KR (1) | KR20230130709A (ja) |
CN (1) | CN116940964A (ja) |
WO (1) | WO2022176720A1 (ja) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7459199B1 (ja) | 2022-09-20 | 2024-04-01 | キヤノン株式会社 | 画像処理システム |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08194841A (ja) * | 1995-01-17 | 1996-07-30 | Hitachi Ltd | 有限要素領域分割方法 |
JP2007255989A (ja) * | 2006-03-22 | 2007-10-04 | Navitime Japan Co Ltd | ナビゲーションシステム、経路探索サーバ、端末装置および地図表示方法 |
WO2018025660A1 (ja) * | 2016-08-05 | 2018-02-08 | ソニー株式会社 | 画像処理装置および画像処理方法 |
JP2021024134A (ja) | 2019-07-31 | 2021-02-22 | 株式会社パイロットコーポレーション | シャープペンシル |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018147329A1 (ja) | 2017-02-10 | 2018-08-16 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | 自由視点映像生成方法及び自由視点映像生成システム |
-
2021
- 2021-02-18 JP JP2021024134A patent/JP2022126205A/ja active Pending
-
2022
- 2022-02-09 KR KR1020237027348A patent/KR20230130709A/ko unknown
- 2022-02-09 EP EP22756041.4A patent/EP4296958A1/en active Pending
- 2022-02-09 WO PCT/JP2022/004992 patent/WO2022176720A1/ja active Application Filing
- 2022-02-09 CN CN202280015742.4A patent/CN116940964A/zh active Pending
-
2023
- 2023-08-16 US US18/450,844 patent/US20230394701A1/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08194841A (ja) * | 1995-01-17 | 1996-07-30 | Hitachi Ltd | 有限要素領域分割方法 |
JP2007255989A (ja) * | 2006-03-22 | 2007-10-04 | Navitime Japan Co Ltd | ナビゲーションシステム、経路探索サーバ、端末装置および地図表示方法 |
WO2018025660A1 (ja) * | 2016-08-05 | 2018-02-08 | ソニー株式会社 | 画像処理装置および画像処理方法 |
JP2021024134A (ja) | 2019-07-31 | 2021-02-22 | 株式会社パイロットコーポレーション | シャープペンシル |
Also Published As
Publication number | Publication date |
---|---|
CN116940964A (zh) | 2023-10-24 |
EP4296958A1 (en) | 2023-12-27 |
JP2022126205A (ja) | 2022-08-30 |
KR20230130709A (ko) | 2023-09-12 |
US20230394701A1 (en) | 2023-12-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11217006B2 (en) | Methods and systems for performing 3D simulation based on a 2D video image | |
JP6425780B1 (ja) | 画像処理システム、画像処理装置、画像処理方法及びプログラム | |
US10417829B2 (en) | Method and apparatus for providing realistic 2D/3D AR experience service based on video image | |
US10917622B2 (en) | Information processing apparatus, display control method, and storage medium | |
US8933965B2 (en) | Method for calculating light source information and generating images combining real and virtual images | |
EP3321889A1 (en) | Device and method for generating and displaying 3d map | |
WO2019117264A1 (ja) | 仮想視点画像を生成するシステム、方法及びプログラム | |
KR20140100656A (ko) | 전방향 영상 및 3차원 데이터를 이용한 시점 영상 제공 장치 및 방법 | |
KR20140082610A (ko) | 휴대용 단말을 이용한 증강현실 전시 콘텐츠 재생 방법 및 장치 | |
WO2022002181A1 (zh) | 自由视点视频重建方法及播放处理方法、设备及存储介质 | |
KR102382247B1 (ko) | 화상 처리 장치, 화상 처리 방법 및 컴퓨터 프로그램 | |
CN113170213A (zh) | 图像合成 | |
CN109920043A (zh) | 虚拟3d对象的立体渲染 | |
KR20180120456A (ko) | 파노라마 영상을 기반으로 가상현실 콘텐츠를 제공하는 장치 및 그 방법 | |
JP2022105590A (ja) | 情報処理装置、情報処理方法、及び、プログラム | |
EP3616402A1 (en) | Methods, systems, and media for generating and rendering immersive video content | |
WO2022176720A1 (ja) | 情報処理装置、情報処理方法、およびプログラム | |
JP4892405B2 (ja) | 画像処理装置および方法 | |
KR20210055381A (ko) | 스마트 디스플레이를 통해 증강 현실 컨텐츠를 제공하는 장치, 방법 및 컴퓨터 프로그램 | |
US20190052868A1 (en) | Wide viewing angle video processing system, wide viewing angle video transmitting and reproducing method, and computer program therefor | |
US11830140B2 (en) | Methods and systems for 3D modeling of an object by merging voxelized representations of the object | |
JP2019114269A (ja) | 仮想視点画像を生成するシステム、方法及びプログラム | |
WO2023166794A1 (ja) | 情報処理装置、情報処理方法、画像生成装置、画像生成方法及びプログラム | |
JP7378960B2 (ja) | 画像処理装置、画像処理システム、画像生成方法、および、プログラム | |
JP7417827B2 (ja) | 画像編集方法、画像表示方法、画像編集システム、及び画像編集プログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22756041 Country of ref document: EP Kind code of ref document: A1 |
ENP | Entry into the national phase |
Ref document number: 20237027348 Country of ref document: KR Kind code of ref document: A |
WWE | Wipo information: entry into national phase |
Ref document number: 1020237027348 Country of ref document: KR |
WWE | Wipo information: entry into national phase |
Ref document number: 202280015742.4 Country of ref document: CN |
WWE | Wipo information: entry into national phase |
Ref document number: 2022756041 Country of ref document: EP |
NENP | Non-entry into the national phase |
Ref country code: DE |
ENP | Entry into the national phase |
Ref document number: 2022756041 Country of ref document: EP Effective date: 20230918 |