WO2022191010A1 - Information processing device and information processing method - Google Patents

Information processing device and information processing method

Info

Publication number
WO2022191010A1
Authority
WO
WIPO (PCT)
Prior art keywords
unit
information
dimensional
imaging
information processing
Prior art date
Application number
PCT/JP2022/008967
Other languages
English (en)
Japanese (ja)
Inventor
剛也 小林
Original Assignee
ソニーグループ株式会社 (Sony Group Corporation)
Priority date
Filing date
Publication date
Application filed by ソニーグループ株式会社 (Sony Group Corporation)
Publication of WO2022191010A1

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/04 - Texture mapping
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 - Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 - Processing image signals
    • H04N13/111 - Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 - Image signal generators
    • H04N13/271 - Image signal generators wherein the generated image signals comprise depth maps or disparity maps
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 - Image signal generators
    • H04N13/275 - Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals

Definitions

  • the present disclosure relates to an information processing device and an information processing method.
  • volumetric capture that generates a 3D model of a subject using captured images of an existing subject, and generates a high-quality 3D image of the subject based on the generated 3D model and the captured image of the subject.
  • An object of the present disclosure is to provide an information processing apparatus and an information processing method capable of generating a three-dimensional image of higher quality in volumetric capture.
  • An information processing apparatus includes a generation unit that generates an image by applying a texture image to a three-dimensional model included in three-dimensional data, a first position of a virtual camera that acquires an image of a virtual space, Based on a second position of the three-dimensional model and a third position of one or more imaging cameras that capture an image of the subject in real space, a captured image of the subject to be used as a texture image is obtained from one or more imaging cameras. and a selection unit that selects an imaging camera.
  • the information processing apparatus includes a generation unit that generates three-dimensional data based on captured images captured by one or more imaging cameras, and a three-dimensional data corresponding to a subject included in the captured image from the three-dimensional data. a separating unit that separates the model and generates position information indicating the position of the separated three-dimensional model.
  • FIG. 1 is a diagram showing basic processing of volumetric capture based on captured images of a real subject, which is applicable to the embodiment;
  • FIG. 2 is a diagram for explaining a problem of an example of existing technology;
  • FIG. 3 is a diagram for explaining a problem of another example of the existing technology;
  • FIG. 4 is a diagram for explaining an example of a method of selecting an imaging camera that acquires a texture image to be applied to a three-dimensional model, according to existing technology;
  • FIG. 5A is a schematic diagram for explaining a first example of imaging camera selection according to existing technology;
  • FIG. 5B is a schematic diagram for explaining the first example of imaging camera selection according to existing technology;
  • FIG. 6A is a schematic diagram for explaining a second example of imaging camera selection according to existing technology;
  • FIG. 6B is a schematic diagram for explaining the second example of imaging camera selection according to existing technology;
  • FIG. 7 is a functional block diagram showing an example of functions of an information processing system according to the embodiment;
  • FIG. 8 is a schematic diagram showing an example configuration for acquiring image data of a subject, which is applicable to the embodiment;
  • FIG. 9 is a block diagram showing a hardware configuration of an example of an information processing device applicable to the information processing system according to the embodiment;
  • FIG. 10 is an exemplary flowchart schematically showing processing in the information processing system according to the embodiment;
  • FIG. 11 is a schematic diagram schematically showing a three-dimensional model generation process applicable to the embodiment;
  • FIG. 12 is a block diagram showing an example of the configuration of a 3D model generation unit according to the embodiment;
  • FIG. 13 is a schematic diagram for explaining subject separation processing according to the embodiment;
  • FIG. 14 is an exemplary flowchart illustrating subject separation processing according to the embodiment;
  • FIG. 15 is a schematic diagram for explaining selection of an imaging camera according to the embodiment;
  • FIG. 16 is a block diagram showing an example configuration of a rendering unit according to the embodiment;
  • FIG. 17 is an exemplary flowchart illustrating a first example of imaging camera selection processing in rendering processing according to the embodiment;
  • FIG. 18 is a schematic diagram for explaining the relationship between an object and a virtual camera according to the embodiment;
  • FIG. 19 is a schematic diagram for explaining processing for calculating an average value of reference positions of objects according to the embodiment;
  • FIG. 20 is an exemplary flowchart illustrating a second example of imaging camera selection processing in rendering processing according to the embodiment;
  • FIG. 21 is an exemplary flowchart illustrating rendering processing according to the embodiment;
  • FIG. 22 is a schematic diagram for explaining post-effect processing according to the embodiment;
  • FIG. 23 is a schematic diagram showing post-effect processing according to the embodiment more specifically.
  • FIG. 1 is a diagram showing basic processing of volumetric capture based on captured images of a real subject, which is applicable to the embodiment.
  • In step S1, the system surrounds an object (subject) with a large number of cameras in real space and captures images of the subject.
  • A camera that captures an image of a subject in real space is hereinafter referred to as an imaging camera.
  • In step S2, the system converts the subject into three-dimensional data and generates a three-dimensional model of the subject based on a plurality of captured images captured by the multiple imaging cameras (3D modeling processing).
  • In step S3, the system renders the three-dimensional model generated in step S2 to generate an image.
  • More specifically, in step S3, the system places the three-dimensional model in a virtual space and renders it from the viewpoint of a camera that can move freely in the virtual space (hereinafter referred to as a virtual camera) to generate an image. That is, the system performs rendering according to the position and orientation of the virtual camera with respect to the three-dimensional model. For example, a user who operates the virtual camera can observe an image of the three-dimensional model viewed from a position according to his or her own operation.
  • As data formats for expressing a three-dimensional model, a format combining mesh information and a UV texture and a format combining mesh information and multi-textures are generally used.
  • Mesh information is a set of vertices and edges of a three-dimensional model made up of polygons.
  • A UV texture is a texture obtained by assigning UV coordinates, which are coordinates on the texture, to a texture image.
  • A multi-texture is used to overlap and paste a plurality of texture images onto the polygons of the three-dimensional model.
  • The format that combines mesh information and a UV texture covers all directions of the three-dimensional model with one UV texture, so the amount of data is relatively small and lightweight, and the rendering load is low.
  • This format is suitable for use in the View Independent method (hereinafter abbreviated as the VI method), which is a rendering method in which the geometry is fixed with respect to the viewpoint movement of the virtual camera.
  • the format that combines mesh information and multi-textures increases the amount of data and the rendering load, but it can provide high image quality.
  • This format is suitable for use in the View Dependent method (hereinafter abbreviated as the VD method) in which the geometric shape changes as the viewpoint of the virtual camera moves.
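  • As a concrete illustration of the two formats described above, the following is a minimal sketch of how a mesh-plus-UV-texture model (suited to the VI method) and a mesh-plus-multi-texture model (suited to the VD method) might be represented in code. The sketch is not taken from the publication; the class and field names are hypothetical and NumPy is assumed to be available.

```python
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class Mesh:
    vertices: np.ndarray   # (V, 3) vertex positions of the polygon mesh
    faces: np.ndarray      # (F, 3) vertex indices forming the polygons

@dataclass
class UVTexturedModel:
    """Mesh information + one UV texture: compact, low rendering load (VI method)."""
    mesh: Mesh
    uv_coords: np.ndarray  # (V, 2) UV coordinates assigned to the texture image
    texture: np.ndarray    # (H, W, 3) single texture atlas covering all directions

@dataclass
class MultiTexturedModel:
    """Mesh information + one texture per imaging camera: larger data, higher quality (VD method)."""
    mesh: Mesh
    camera_images: List[np.ndarray] = field(default_factory=list)    # per-camera texture images
    camera_matrices: List[np.ndarray] = field(default_factory=list)  # per-camera 3x4 projection matrices
```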
  • FIG. 2 is a diagram for explaining a problem of an example of existing technology.
  • In some cases, a plurality of three-dimensional models 51 1 to 51 3 are included in a single piece of three-dimensional data 50, as shown in section (a) of FIG. 2.
  • The three-dimensional models 51 1 to 51 3 are objects in virtual space obtained by giving three-dimensional information to the images of the subjects in real space included in the captured images.
  • In the existing technology, the three-dimensional models 51 1 to 51 3 could not be separated and recognized individually, so it was difficult to obtain sufficient quality when rendering the three-dimensional models 51 1 to 51 3. That is, in order to render each of the three-dimensional models 51 1 to 51 3 with sufficient quality, each three-dimensional model 51 1 to 51 3 must be treated as independent data 52 1 to 52 3, as shown in section (b) of FIG. 2.
  • FIG. 3 is a diagram for explaining a problem of another example of existing technology.
  • Also in this example, a single piece of three-dimensional data 50 includes a plurality of three-dimensional models 51 1 to 51 3, as shown in section (a) of FIG. 3.
  • As shown by the three-dimensional data 500 after post-effect processing in section (b) of FIG. 3, it was difficult to selectively apply post-effect processing (in this example, non-display processing) to individual three-dimensional models.
  • In order to perform such selective processing, separation processing for separating each of the three-dimensional models 51 1 to 51 3 is required.
  • However, the existing technology does not consider such separation of the plurality of three-dimensional models 51 1 to 51 3.
  • FIG. 4 is a diagram for explaining an example of a selection method of an imaging camera that acquires a texture image to be applied to a subject according to existing technology.
  • FIG. 4 shows a subject 80 in real space and a plurality of imaging cameras 60 1 to 60 8 surrounding the subject 80 in real space.
  • A reference position 81 of the subject 80 is also shown.
  • FIG. 4 also shows a virtual camera 70 arranged in the virtual space.
  • It is assumed here that the coordinates in the real space and the coordinates in the virtual space match, and unless otherwise specified, the description will be made without distinguishing between the real space and the virtual space.
  • That is, the real space and the virtual space have the same scale, and the position of an object (object, imaging camera, etc.) placed in the real space can be directly replaced with the corresponding position in the virtual space.
  • Similarly, the positions of, for example, the three-dimensional model and the virtual camera 70 in the virtual space can be directly replaced with positions in the real space.
  • As the reference position 81 of the subject 80, the position corresponding to the point in the subject 80 closest to the optical axes of all the imaging cameras 60 1 to 60 8 can be applied.
  • Alternatively, the reference position 81 of the subject 80 may be an intermediate position between the maximum and minimum values of the vertex coordinates of the subject 80, or the most important position in the subject 80 (for example, the position of the face if the subject 80 is a person).
  • In the existing technology, it is known to select the optimum imaging camera for acquiring the texture to be applied to the three-dimensional model based on the importance of each of the imaging cameras 60 1 to 60 8. The importance can be determined, for example, based on the angle formed between the position of the virtual camera 70 and the position of each imaging camera 60 1 to 60 8, with the reference position 81 as the vertex.
  • In the example of FIG. 4, the angle θ 1 formed between the position of the virtual camera 70 and the imaging camera 60 1 with respect to the reference position 81 is the smallest, and the angle θ 2 formed with the imaging camera 60 2 is the next smallest. Therefore, with respect to the position of the virtual camera 70, the imaging camera 60 1 has the highest importance, and the imaging camera 60 2 has the next highest importance after the imaging camera 60 1.
  • the importance P(i) of each imaging camera 60 1 to 60 8 can be calculated by the following equation (1).
  • P(i) = arccos(C i · C v) ... (1)
  • In equation (1), the value i represents each of the imaging cameras 60 1 to 60 8. The value C i represents a vector from each imaging camera 60 1 to 60 8 to the reference position 81, and the value C v represents a vector from the virtual camera 70 to the reference position 81. That is, equation (1) obtains the importance P(i) of the imaging cameras 60 1 to 60 8 based on the inner product of the vectors from the imaging cameras 60 1 to 60 8 and from the virtual camera 70 to the reference position 81.
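  • A minimal sketch of this importance-based selection follows (an illustration under assumptions, not the publication's implementation: NumPy is used, the function names are hypothetical, and the vectors are normalized so that the arccos of their inner product yields the angle). The camera forming the smallest angle with the virtual camera's direction to the reference position is treated as the most important.

```python
import numpy as np

def camera_importance(cam_positions, virtual_cam_pos, reference_pos):
    """Angle-based importance P(i) = arccos(C_i . C_v) from equation (1).

    C_i: normalized vector from imaging camera i to the reference position.
    C_v: normalized vector from the virtual camera to the reference position.
    A smaller angle means camera i views the reference position from a
    direction closer to that of the virtual camera (higher importance).
    """
    c_v = reference_pos - virtual_cam_pos
    c_v = c_v / np.linalg.norm(c_v)
    angles = []
    for cam in cam_positions:
        c_i = reference_pos - cam
        c_i = c_i / np.linalg.norm(c_i)
        angles.append(np.arccos(np.clip(np.dot(c_i, c_v), -1.0, 1.0)))
    return np.asarray(angles)

def select_optimum_camera(cam_positions, virtual_cam_pos, reference_pos):
    """Index of the imaging camera with the smallest angle (highest importance)."""
    return int(np.argmin(camera_importance(cam_positions, virtual_cam_pos, reference_pos)))
```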
  • However, when a plurality of subjects are included in the imaging range, this selection method may select an unintended imaging camera as the optimum imaging camera.
  • FIGS. 5A and 5B are schematic diagrams for explaining a first example of image pickup camera selection according to the existing technology.
  • This first example selects an imaging camera based on the vectors to a reference position. That is, FIGS. 5A and 5B show the case in which, as described with reference to FIG. 4, an imaging camera is selected based on the vectors C i from the imaging cameras to the reference position and the vector C v from the virtual camera 70 to the reference position when a plurality of subjects 82 1 and 82 2 are included.
  • In FIGS. 5A and 5B, an imaging range 84 includes two subjects 82 1 and 82 2.
  • the subject 82 1 is positioned at the upper left corner of the imaging range 84 in the figure, and the subject 82 2 is positioned at the lower right corner of the imaging range 84 in the figure.
  • Sixteen imaging cameras 60 1 to 60 16 are arranged surrounding the imaging range 84 with their imaging directions facing the center of the imaging range 84.
  • The virtual camera 70 has a certain angle of view, and the three-dimensional model corresponding to the subject 82 1 is assumed to fit within that angle of view. Since the positions of the subjects 82 1 and 82 2 are unknown, the center of the imaging range 84 or the center of gravity of the subjects 82 1 and 82 2 is adopted as the reference position 83.
  • In the following, unless otherwise specified, the three-dimensional models corresponding to the subjects 82 1 and 82 2 are simply referred to as the subjects 82 1 and 82 2.
  • FIG. 5A shows an example in which the virtual camera 70 is on the front side of the reference position 83 with respect to the subject 82 1 .
  • Ideally, the imaging camera 60 1 located on a straight line 93a passing from the subject 82 1 through the virtual camera 70 is the optimum imaging camera.
  • However, in the selection based on the reference position 83, the direction of the vector 91a from the virtual camera 70 to the reference position 83 and the direction of the vector 90a from the imaging camera 60 16 to the reference position 83 substantially match, so the imaging camera 60 16 is selected as the optimum camera.
  • The imaging camera 60 16 differs from the ideal optimum imaging camera 60 1 in position and orientation with respect to the subject 82 1. Therefore, the quality of the texture based on the captured image of the imaging camera 60 16 is lower than that of the texture based on the captured image of the imaging camera 60 1.
  • FIG. 5B shows an example in which the virtual camera 70 is positioned between the subject 82 1 and the reference position 83.
  • Also in this case, the imaging camera 60 2, located on the straight line 93b passing from the subject 82 2 through the virtual camera 70, is ideally the optimum imaging camera.
  • However, the reference position 83 is on the side opposite to the subject 82 1 with respect to the virtual camera 70, and the vector 91b from the virtual camera 70 to the reference position 83 points to the side opposite to the subject 82 1. Therefore, the direction of the vector 90b from the imaging camera 60 11, located on the opposite side of the subject 82 1 as viewed from the virtual camera 70, to the reference position 83 becomes close to the direction of the vector 91b, and the imaging camera 60 11 is selected as the optimum imaging camera.
  • The imaging camera 60 11 images a surface of the subject 82 1 that cannot be seen from the virtual camera 70. Therefore, the quality of the texture based on the image captured by the imaging camera 60 11 is greatly reduced compared to the texture based on the image captured by the ideal imaging camera 60 2.
  • the selection method of the optimum imaging camera is not limited to the selection method based on the vector for the reference position described above.
  • A second example of imaging camera selection by the existing technology selects the optimum imaging camera from the imaging cameras 60 1 to 60 16 based on the angle between the optical axis of the virtual camera 70 and the vector from each imaging camera 60 1 to 60 16 to the reference position.
  • FIGS. 6A and 6B are schematic diagrams for explaining a second example of image pickup camera selection according to existing technology.
  • In FIGS. 6A and 6B, the subjects 82 1 and 82 2, the reference position 83, and the imaging range 84 are the same as in FIGS. 5A and 5B described above, so descriptions thereof will be omitted here.
  • FIG. 6A corresponds to FIG. 5A described above, and shows an example in which the virtual camera 70 is positioned closer to the subject 82 1 than the reference position 83 .
  • As in FIG. 5A, the imaging camera 60 1 located on the straight line 93a passing from the subject 82 1 through the virtual camera 70 is ideally the optimum imaging camera.
  • In FIG. 6A, the virtual camera 70 faces upward in the figure, and its optical axis 94a points upward.
  • In this case, the angle between the direction of the vector 90c from the imaging camera 60 1 to the reference position 83 and the optical axis 94a of the virtual camera 70 is the smallest. Therefore, the same imaging camera 60 1 as the ideal optimum imaging camera is selected as the optimum imaging camera, and a high-quality texture can be obtained.
  • FIG. 6B corresponds to FIG. 5B described above, and shows an example in which the virtual camera 70 is positioned between the object 82 1 and the reference position 83 .
  • Ideally, the imaging camera 60 2 located on the straight line 93c passing from the subject 82 2 through the virtual camera 70 is the optimum imaging camera.
  • In FIG. 6B, the virtual camera 70 also faces upward in the drawing, and its optical axis 94b points upward.
  • In this case, the angle between the direction of the vector 90c from the imaging camera 60 1 to the reference position 83 and the optical axis 94b of the virtual camera 70 is the smallest. Therefore, the imaging camera 60 1, which differs from the ideal optimum imaging camera 60 2, is selected as the optimum imaging camera, and the quality of the texture based on the captured image of the imaging camera 60 1 is lower than that of the texture based on the captured image of the imaging camera 60 2.
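  • For comparison, the second existing method can be sketched as follows (again a hedged illustration with hypothetical names, NumPy assumed): the angle is measured between the optical axis of the virtual camera 70 and the vector from each imaging camera to the reference position, and the camera with the smallest angle is selected.

```python
import numpy as np

def select_by_optical_axis(cam_positions, virtual_cam_axis, reference_pos):
    """Second existing method: compare each camera-to-reference vector with the
    direction of the virtual camera's optical axis instead of the vector from
    the virtual camera's position to the reference position."""
    axis = np.asarray(virtual_cam_axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    best_index, best_angle = -1, np.inf
    for i, cam in enumerate(cam_positions):
        v = reference_pos - cam
        v = v / np.linalg.norm(v)
        angle = np.arccos(np.clip(np.dot(v, axis), -1.0, 1.0))
        if angle < best_angle:
            best_index, best_angle = i, angle
    return best_index
```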
  • In the embodiment of the present disclosure, the information processing system obtains the position of each subject when generating each three-dimensional model. Then, when rendering the three-dimensional model of each subject, the information processing system selects the imaging camera used to acquire the texture to be applied to the three-dimensional model by using the position of each subject obtained at the time of model generation.
  • As a result, the imaging camera used for acquiring the texture to be applied to the three-dimensional model can be appropriately selected, and a high-quality texture can be obtained. Also, by using the position information added to each three-dimensional model, post-effect processing can be applied to each three-dimensional model individually.
  • FIG. 7 is an exemplary functional block diagram illustrating functions of the information processing system according to the embodiment.
  • In FIG. 7, the information processing system 100 includes a data acquisition unit 110, a 3D (3-Dimensional) model generation unit 111, a formatting unit 112, a transmission unit 113, a reception unit 120, a rendering unit 121, and a display unit 122.
  • The information processing system 100 includes, for example, an information processing device for outputting a 3D model, which includes the data acquisition unit 110, the 3D model generation unit 111, the formatting unit 112, and the transmission unit 113, and an information processing device for outputting display information, which includes the reception unit 120, the rendering unit 121, and the display unit 122.
  • the information processing system 100 can also be configured by a single computer device (information processing device).
  • The data acquisition unit 110, the 3D model generation unit 111, the formatting unit 112, the transmission unit 113, the reception unit 120, the rendering unit 121, and the display unit 122 are realized by, for example, a CPU (Central Processing Unit) executing the information processing program according to the embodiment. Not limited to this, some or all of the data acquisition unit 110, the 3D model generation unit 111, the formatting unit 112, the transmission unit 113, the reception unit 120, the rendering unit 121, and the display unit 122 may be realized by hardware circuits that operate in cooperation with each other.
  • the data acquisition unit 110 acquires image data for generating a 3D model of a subject.
  • FIG. 8 is a schematic diagram showing an example configuration for acquiring image data of a subject, which is applicable to the embodiment.
  • As shown in FIG. 8, a plurality of captured images captured from a plurality of viewpoints by a plurality of imaging cameras 60 1, 60 2, 60 3, ..., 60 n arranged to surround the subject 80 are acquired as image data.
  • the captured images from multiple viewpoints are preferably images captured in synchronism by the plurality of imaging cameras 60 1 to 60 n .
  • the data acquisition unit 110 may acquire, as image data, a plurality of captured images obtained by capturing the subject 80 from a plurality of viewpoints with a single imaging camera.
  • this image data acquisition method is applicable when the position of the subject 80 is fixed.
  • the data acquisition unit 110 may perform calibration based on the image data and acquire the internal parameters and external parameters of each imaging camera 60 1 to 60 n . Also, the data acquisition unit 110 may acquire a plurality of pieces of depth information indicating distances from a plurality of viewpoints to the subject 80, for example.
  • the 3D model generation unit 111 generates a 3D model having 3D information of the subject 80 based on image data obtained by the data acquisition unit 110 and obtained by imaging the subject 80 from multiple viewpoints.
  • The 3D model generation unit 111 generates a three-dimensional model of the subject 80 by carving out the three-dimensional shape of the subject 80 using images from multiple viewpoints (for example, silhouette images from multiple viewpoints), for example by means of so-called Visual Hull. In this case, the 3D model generation unit 111 can further deform the three-dimensional model generated using Visual Hull with a high degree of accuracy, using a plurality of pieces of depth information indicating the distances from the multiple viewpoints to the subject 80.
  • the 3D model generated by the 3D model generation unit 111 is generated using captured images captured by the imaging cameras 60 1 to 60 n in the real space, and therefore can be said to be a real 3D model.
  • the 3D model generation unit 111 can express the generated 3D model, for example, in the form of mesh data.
  • the mesh data is data representing shape information representing the surface shape of the subject 80 by connections between vertices called polygon meshes.
  • the method of expressing the three-dimensional model generated by the 3D model generation unit 111 is not limited to mesh data.
  • the 3D model generation unit 111 may describe the generated 3D model in a so-called point cloud representation method represented by point position information.
  • the 3D model generation unit 111 also generates color information data of the subject 80 as a texture in association with the three-dimensional model of the subject 80 .
  • The 3D model generation unit 111 can generate, for example, a View Independent (VI) texture that has a constant color when viewed from any direction. Not limited to this, the 3D model generation unit 111 may generate a View Dependent (VD) texture whose color changes depending on the viewing direction.
  • the formatting unit 112 converts the 3D model data generated by the 3D model generation unit 111 into data in a format suitable for transmission and storage.
  • the formatting unit 112 can convert the 3D model generated by the 3D model generating unit 111 into a plurality of two-dimensional images by perspectively projecting the model from a plurality of directions.
  • the formatting unit 112 may generate depth information, which is two-dimensional depth images from multiple viewpoints, using the three-dimensional model.
  • The formatting unit 112 compresses and encodes the depth information and the color information in the form of two-dimensional images, and outputs them to the transmission unit 113.
  • the formatting unit 112 may transmit the depth information and the color information side by side as one image, or may transmit them as two separate images.
  • the formatting unit 112 can compress and encode the data using a compression technique for two-dimensional images such as AVC (Advanced Video Coding).
  • the formatting unit 112 may also convert the three-dimensional model into a point cloud format. Furthermore, the formatting unit 112 may output the 3D model to the transmission unit 113 as 3D data. In this case, the formatting unit 112 can use, for example, the Geometry-based-Approach three-dimensional compression technology discussed in MPEG (Moving Picture Experts Group).
  • the transmission unit 113 transmits transmission data generated by the formatting unit 112 .
  • For example, the transmission unit 113 may transmit the transmission data after the series of processes by the data acquisition unit 110, the 3D model generation unit 111, and the formatting unit 112 has been performed offline. Alternatively, the transmission unit 113 may transmit the transmission data generated by the series of processes described above in real time.
  • the receiving section 120 receives transmission data transmitted from the transmitting section 113 .
  • the rendering unit 121 performs rendering according to the position of the virtual camera 70 using the transmission data received by the receiving unit 120 .
  • mesh data of a three-dimensional model is projected from the viewpoint of the virtual camera 70 that performs drawing, and texture mapping is performed to paste textures representing colors and patterns.
  • the drawn image can be viewed from a freely set viewpoint by means of the virtual camera 70 regardless of the positions of the imaging cameras 60 1 to 60 n at the time of photographing.
  • the rendering unit 121 performs texture mapping to paste textures representing the color, pattern, and texture of the mesh according to the position of the mesh of the three-dimensional model.
  • the rendering unit 121 may perform texture mapping using a VD method that considers the viewpoint from the user (virtual camera 70). Not limited to this, the rendering unit 121 may perform texture mapping by a VI method that does not consider the viewpoint of the user.
  • the VD method changes the texture to be pasted on the 3D model according to the position of the viewpoint from the user (the viewpoint from the virtual camera 70). Therefore, the VD method has the advantage of realizing higher quality rendering than the VI method. On the other hand, since the VI method does not consider the position of the viewpoint from the user, it has the advantage of reducing the amount of processing compared to the VD method.
  • the user's viewpoint data may be input to the rendering unit 121 from the display device, for example, by detecting the region of interest of the user.
  • the rendering unit 121 may employ billboard rendering, which renders an object such that the object maintains a vertical orientation with respect to the viewpoint of the user, for example.
  • the rendering unit 121 may render an object that the user is less interested in using billboard rendering, and render other objects using another rendering method.
  • The display unit 122 displays the image rendered by the rendering unit 121 on the display of the display device.
  • the display device may be, for example, a head-mounted display or a spatial display, or may be a display device of an information device such as a smartphone, a television receiver, or a personal computer. Also, the display device may be a 2D monitor for two-dimensional display, or a 3D monitor for three-dimensional display.
  • the information processing system 100 shown in FIG. 7 shows a series of flows from the data acquisition unit 110 that acquires captured images, which are materials for generating content, to the display control unit that controls the display device observed by the user.
  • the transmitting unit 113 and the receiving unit 120 are provided to show a series of flow from the content (three-dimensional model) creation side to the content observation side through the distribution of the content data.
  • However, when the entire series of processes, from acquisition of captured images to display control, is performed by the same information processing apparatus (for example, a personal computer), the information processing system 100 can omit the formatting unit 112, the transmission unit 113, and the reception unit 120.
  • In implementing the information processing system 100, the same implementer may implement all of the functional blocks, or each functional block may be implemented by a different implementer.
  • For example, operator A generates 3D content (a three-dimensional model) using the data acquisition unit 110, the 3D model generation unit 111, and the formatting unit 112.
  • the 3D content is distributed through the transmitter 113 (platform) of the operator B, and the display device of the operator C receives, renders, and displays the 3D content.
  • each functional block shown in FIG. 7 can be implemented on a cloud network.
  • the rendering unit 121 may be implemented within a display device, or may be implemented in a server on a cloud network. In that case, information is exchanged between the display device and the server.
  • In this description, the data acquisition unit 110, the 3D model generation unit 111, the formatting unit 112, the transmission unit 113, the reception unit 120, the rendering unit 121, and the display unit 122 are collectively described as the information processing system 100. However, a configuration excluding the display unit 122, that is, the data acquisition unit 110, the 3D model generation unit 111, the formatting unit 112, the transmission unit 113, the reception unit 120, and the rendering unit 121, may also be collectively referred to as the information processing system 100.
  • FIG. 9 is a block diagram showing a hardware configuration of an example of an information processing device applicable to the information processing system 100 according to the embodiment.
  • the information processing apparatus 2000 shown in FIG. 9 can be applied to both the information processing apparatus for outputting the 3D model and the information processing apparatus for outputting the display information described above.
  • the information processing apparatus 2000 shown in FIG. 9 can also be applied to a configuration including the entire information processing system 100 shown in FIG.
  • The information processing device 2000 includes a CPU (Central Processing Unit) 2100, a ROM (Read Only Memory) 2101, a RAM (Random Access Memory) 2102, an interface (I/F) 2103, an input unit 2104, an output unit 2105, a storage device 2106, a communication I/F 2107, and a drive device 2108.
  • the CPU 2100, ROM 2101, RAM 2102 and I/F 2103 are communicably connected to each other via a bus 2110.
  • An input unit 2104 , an output unit 2105 , a storage device 2106 , a communication I/F 2107 and a drive device 2108 are connected to the I/F 2103 .
  • These input unit 2104 , output unit 2105 , storage device 2106 , communication I/F 2107 and drive device 2108 can communicate with CPU 2100 and the like via I/F 2103 and bus 2110 .
  • the storage device 2106 is a non-volatile storage medium such as a hard disk drive or flash memory.
  • the CPU 2100 controls the overall operation of the information processing apparatus 2000 according to programs stored in the ROM 2101 and storage device 2106 and using the RAM 2102 as a work memory.
  • the input unit 2104 accepts data input to the information processing device 2000 .
  • As the input unit 2104, an input device that inputs data according to user operation, such as a pointing device (for example, a mouse), a keyboard, a touch panel, a joystick, or a controller, can be applied.
  • the input unit 2104 can include various input terminals for inputting data from an external device.
  • the input section 2104 can include a sound pickup device such as a microphone.
  • the output unit 2105 is responsible for outputting information from the information processing device 2000 .
  • a display device such as a display can be applied as the output unit 2105 .
  • the output unit 2105 can include a sound output device such as a speaker.
  • the output unit 2105 can include various output terminals for outputting data to external devices.
  • the output unit 2105 preferably includes a GPU (Graphics Processing Unit).
  • the GPU has a memory (GPU memory) for graphics processing.
  • a communication I/F 2107 controls communication via a network such as a LAN (Local Area Network) or the Internet.
  • a drive device 2108 drives removable media such as optical discs, magneto-optical discs, flexible discs, and semiconductor memories to read and write data.
  • In the information processing device for outputting the 3D model, the CPU 2100 executes the information processing program according to the embodiment, so that the data acquisition unit 110, the 3D model generation unit 111, the formatting unit 112, and the transmission unit 113 described above are configured as modules, for example, on the main storage area of the RAM 2102.
  • Similarly, in the information processing device for outputting the display information, the CPU 2100 executes the information processing program according to the embodiment, so that the reception unit 120, the rendering unit 121, and the display unit 122 are configured as modules, for example, on the main storage area of the RAM 2102.
  • These information processing programs can be acquired from the outside (for example, from a server device) via a network such as a LAN or the Internet through communication via the communication I/F 2107 and installed on the information processing device 2000. Not limited to this, the information processing programs may be provided stored in a removable storage medium such as a CD (Compact Disk), a DVD (Digital Versatile Disk), or a USB (Universal Serial Bus) memory.
  • FIG. 10 is an exemplary flowchart schematically showing processing in the information processing system 100 according to the embodiment. It is assumed that, prior to the processing according to the flowchart of FIG. 10, the subject 80 has been imaged from a plurality of viewpoints by the imaging cameras 60 1 to 60 n as described with reference to FIG. 8.
  • the information processing system 100 acquires captured image data for generating a three-dimensional model of the subject 80 by the data acquisition unit 110 in step S10.
  • In step S11, the information processing system 100 uses the 3D model generation unit 111 to generate a three-dimensional model having three-dimensional information of the subject 80 based on the captured image data acquired in step S10.
  • In step S12, the information processing system 100 causes the formatting unit 112 to encode the shape and texture data of the three-dimensional model generated in step S11 into a format suitable for transmission and storage.
  • In step S13, the information processing system 100 causes the transmission unit 113 to transmit the data encoded in step S12.
  • In step S14, the information processing system 100 receives the data transmitted in step S13 by the receiving unit 120.
  • The receiving unit 120 decodes the received data and restores the shape and texture data of the three-dimensional model.
  • In step S15, the information processing system 100 causes the rendering unit 121 to perform rendering using the shape and texture data passed from the receiving unit 120 and generate image data for displaying the three-dimensional model.
  • In step S16, the information processing system 100 causes the display unit 122 to display the image data generated by the rendering on the display device.
  • When step S16 ends, the series of processes in the flowchart of FIG. 10 ends.
  • FIG. 11 is a schematic diagram that schematically shows a three-dimensional model generation process that can be applied to the embodiment.
  • As shown in section (a) of FIG. 11, the 3D model generation unit 111 generates, based on captured images captured from different viewpoints, three-dimensional data 50 including a plurality of three-dimensional models 51 1 to 51 3, each of which is based on a subject in real space. Various methods are conceivable for adding position information to each of the three-dimensional models 51 1 to 51 3.
  • In the embodiment, position information is added to each of the three-dimensional models 51 1 to 51 3 using bounding boxes.
  • Section (b) of FIG. 11 shows an example of a bounding box. Rectangular parallelepipeds circumscribing the three-dimensional models 51 1 , 51 2 and 51 3 are determined as three-dimensional bounding boxes 200 1 , 200 2 and 200 3 . Each vertex of these three-dimensional bounding boxes 200 1 to 200 3 is used as position information indicating the position of the corresponding three-dimensional models 51 1 to 51 3 .
  • For example, the three-dimensional bounding box 200 1 for the three-dimensional model 51 1 is represented by the following equation (2) using the minimum and maximum values of the vertex coordinates on each axis.
  • BoundingBox[0] = (x min0, x max0, y min0, y max0, z min0, z max0) ... (2)
  • BoundingBox[1] and BoundingBox[2] of the three-dimensional bounding boxes 200 2 and 200 3 with respect to the three-dimensional models 51 2 and 51 3 are similarly represented by the following equations (3) and (4).
  • BoundingBox[1] = (x min1, x max1, y min1, y max1, z min1, z max1) ... (3)
  • BoundingBox[2] = (x min2, x max2, y min2, y max2, z min2, z max2) ... (4)
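  • A bounding box in the form of equations (2) to (4) can be computed directly from the vertex coordinates of a separated three-dimensional model, as in the following sketch (NumPy assumed; the function name is illustrative and not from the publication).

```python
import numpy as np

def bounding_box(vertices):
    """vertices: (N, 3) array of a three-dimensional model's vertex coordinates.
    Returns (x_min, x_max, y_min, y_max, z_min, z_max), the axis-aligned
    three-dimensional bounding box used as the model's position information."""
    mins = vertices.min(axis=0)
    maxs = vertices.max(axis=0)
    return (mins[0], maxs[0], mins[1], maxs[1], mins[2], maxs[2])
```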
  • FIG. 12 is a block diagram showing an example configuration of the 3D model generation unit 111 according to the embodiment.
  • the 3D model generation unit 111 includes a 3D model processing unit 1110 and a 3D model separation unit 1111 .
  • the image data captured by each of the imaging cameras 60 1 to 60 n and the imaging camera information output from the data acquisition unit 110 are input to the 3D model generation unit 111 .
  • the imaging camera information may include color information, depth information, camera parameter information, and the like.
  • The camera parameter information includes, for example, information on the position, direction, and angle of view of each imaging camera 60 1 to 60 n.
  • Camera parameter information may further include zoom information, shutter speed information, aperture information, and the like.
  • the imaging camera information of each of the imaging cameras 60 1 to 60 n is passed to the 3D model processing section 1110 and output from the 3D model generation section 111 .
  • Based on the image data captured by each of the imaging cameras 60 1 to 60 n and the imaging camera information, the 3D model processing unit 1110 carves out the three-dimensional shape of the subject included in the imaging range using the above-described Visual Hull, and generates vertex and surface data of the subject. More specifically, the 3D model processing unit 1110 acquires in advance, for each of the imaging cameras 60 1 to 60 n, an image of the background of the space in which the subject is placed in real space. A silhouette image of the subject is generated for each camera based on the difference between the image of the subject captured by that camera and the corresponding background image. By carving the three-dimensional space according to these silhouette images, the three-dimensional shape of the subject is obtained as vertex and surface data.
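  • The silhouette generation and space carving described above can be sketched roughly as follows. This is a deliberately simplified illustration under assumptions that do not appear in the publication: background subtraction with a fixed threshold, a regular grid of candidate 3D points, and a 3x4 projection matrix per imaging camera.

```python
import numpy as np

def silhouette(image, background, threshold=30):
    """Foreground silhouette of the subject by per-pixel background difference."""
    diff = np.abs(image.astype(np.int32) - background.astype(np.int32)).sum(axis=2)
    return diff > threshold  # (H, W) boolean mask

def visual_hull(silhouettes, proj_matrices, grid_points):
    """Keep the candidate 3D points that project inside every camera's silhouette.

    silhouettes: list of (H, W) boolean masks, one per imaging camera.
    proj_matrices: list of (3, 4) projection matrices (world -> pixel).
    grid_points: (N, 3) points sampled in the capture volume.
    """
    keep = np.ones(len(grid_points), dtype=bool)
    homog = np.hstack([grid_points, np.ones((len(grid_points), 1))])  # (N, 4)
    for mask, P in zip(silhouettes, proj_matrices):
        uvw = homog @ P.T                                  # (N, 3) homogeneous pixels
        valid = uvw[:, 2] > 0                              # in front of the camera
        u = np.zeros(len(grid_points), dtype=int)
        v = np.zeros(len(grid_points), dtype=int)
        u[valid] = (uvw[valid, 0] / uvw[valid, 2]).astype(int)
        v[valid] = (uvw[valid, 1] / uvw[valid, 2]).astype(int)
        h, w = mask.shape
        inside = valid & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        inside[inside] = mask[v[inside], u[inside]]        # must fall on the silhouette
        keep &= inside
    return grid_points[keep]
```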
  • the 3D model processing unit 1110 functions as a generation unit that generates three-dimensional data based on captured images captured by one or more imaging cameras.
  • the 3D model processing unit 1110 outputs the generated vertex and surface data of the subject as mesh information.
  • the mesh information output from the 3D model processing unit 1110 is output from the 3D model generation unit 111 and passed to the 3D model separation unit 1111 .
  • the 3D model separation unit 1111 separates each subject based on the mesh information passed from the 3D model processing unit 1110 and generates position information of each subject.
  • FIG. 13 is a schematic diagram for explaining subject separation processing according to the embodiment.
  • FIG. 14 is a flowchart of an example showing subject separation processing according to the embodiment.
  • As shown in section (a) of FIG. 13, the 3D model processing unit 1110 generates three-dimensional data 50 containing a plurality of three-dimensional models 51 1 to 51 3 based on captured images captured from different viewpoints.
  • In step S100 of FIG. 14, the 3D model separation unit 1111 projects the three-dimensional data 50 in the height direction (y-axis direction) to generate two-dimensional silhouette information for each of the three-dimensional models 51 1 to 51 3.
  • Section (b) of FIG. 13 shows examples of two-dimensional silhouettes 52 1 -52 3 based on respective three-dimensional models 51 1 -51 3 .
  • In step S101, the 3D model separation unit 1111 performs clustering on the two-dimensional plane based on the silhouettes 52 1 to 52 3 to detect blobs.
  • Subsequent steps S103 to S105 are processed for each blob detected in step S101.
  • In step S103, the 3D model separation unit 1111 obtains, for the detected blob, a two-dimensional bounding box 53 1 to 53 3 corresponding to each of the three-dimensional models 51 1 to 51 3, as shown in section (c) of FIG. 13.
  • In the next step S104, the 3D model separation unit 1111 adds height information to the two-dimensional bounding box 53 1 obtained in step S103 to generate a three-dimensional bounding box 200 1, as shown in section (d) of FIG. 13.
  • Height information to be added to the two-dimensional bounding box 53 1 can be obtained based on the three-dimensional data 50 shown in section (a) of FIG. 13, for example.
  • Similarly, the 3D model separation unit 1111 gives height information to the two-dimensional bounding boxes 53 2 and 53 3 to generate three-dimensional bounding boxes 200 2 and 200 3. That is, a two-dimensional bounding box 53 2 is obtained in step S103, and height information is added to it in the next step S104 to generate a three-dimensional bounding box 200 2. Likewise, a two-dimensional bounding box 53 3 is obtained in step S103, and height information is added to it in step S104 to generate a three-dimensional bounding box 200 3.
  • In step S105, when the 3D model separation unit 1111 determines that the processing for all blobs has ended (step S105, "Yes"), it ends the series of processes according to the flowchart of FIG. 14.
  • In this way, three-dimensional bounding boxes 200 1 to 200 3 corresponding to the three-dimensional models 51 1 to 51 3 are generated. Then, based on the vertex coordinates of each of these three-dimensional bounding boxes 200 1 to 200 3, position information indicating the position of each of the three-dimensional models 51 1 to 51 3 is obtained.
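  • A rough sketch of the separation of steps S100 to S105 might look like the following. It rests on assumptions that the publication does not state: the three-dimensional data is available as an (N, 3) vertex array with y as the height axis, the projection onto the ground plane is rasterized into an occupancy grid with a hypothetical cell size, and blobs are labeled with scipy.ndimage.label.

```python
import numpy as np
from scipy import ndimage

def separate_subjects(vertices, cell=0.05):
    """Steps S100-S105: project vertices onto the x-z plane, detect blobs, and
    lift each blob's 2D bounding box to a 3D bounding box using the height
    range (y) of the vertices that belong to it."""
    xz = vertices[:, [0, 2]]
    origin = xz.min(axis=0)
    ij = np.floor((xz - origin) / cell).astype(int)        # grid cell per vertex
    grid = np.zeros(tuple(ij.max(axis=0) + 1), dtype=bool)
    grid[ij[:, 0], ij[:, 1]] = True                        # step S100: 2D silhouette

    labels, n_blobs = ndimage.label(grid)                  # step S101: clustering into blobs
    boxes = []
    for blob_id in range(1, n_blobs + 1):
        members = labels[ij[:, 0], ij[:, 1]] == blob_id    # vertices of this blob
        pts = vertices[members]
        x_min, y_min, z_min = pts.min(axis=0)
        x_max, y_max, z_max = pts.max(axis=0)              # steps S103-S104: 2D box + height
        boxes.append((x_min, x_max, y_min, y_max, z_min, z_max))
    return boxes                                           # step S105: all blobs processed
```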
  • In other words, the 3D model separation unit 1111 functions as a separation unit that separates, from the three-dimensional data, the three-dimensional model corresponding to the subject included in the captured image and generates position information indicating the position of the separated three-dimensional model.
  • the 3D model generation unit 111 adds position information indicating the position of the 3D model to the 3D model separated by the 3D model separation unit 1111 and outputs the 3D model.
  • In the rendering processing by the rendering unit 121, the position information indicating the position of the subject acquired by the 3D model generation unit 111 as described above is used to select the optimum imaging camera for acquiring the texture to be applied to the subject.
  • FIG. 15 is a schematic diagram for explaining selection of an imaging camera according to the embodiment.
  • In FIG. 15, section (a) shows an example of imaging camera selection according to the embodiment.
  • section (b) is the same diagram as FIG. 6B according to the existing technology described above, and is reprinted for comparison with the embodiment.
  • two subjects 82 1 and 82 2 are included in an imaging range 84 to be imaged, as in FIG. 5A and the like described above.
  • Subject 82 1 is positioned at the upper left corner of imaging range 84 in section (a) of FIG. 15, and subject 82 2 is positioned at the lower right corner of imaging range 84 in section (a) of FIG.
  • Sixteen imaging cameras 60 1 to 60 16, each having a predetermined angle of view, are arranged to surround the imaging range 84 with their imaging directions facing the center of the imaging range 84.
  • The virtual camera 70 also has a certain angle of view, and the three-dimensional model corresponding to the subject 82 1 is assumed to fit within that angle of view.
  • In section (a) of FIG. 15, the virtual camera 70 is arranged closer to the subject 82 1 than the center of the imaging range 84, and the subject 82 1 is included within the angle of view of the virtual camera 70.
  • the reference position 83 is set based on position information indicating the position of the subject 82 1 obtained by the 3D model generation unit 111 .
  • the reference position 83 is set at the center of the subject 82 1 .
  • From the positions of the imaging cameras 60 1 to 60 16 and the position of the subject 82 1 included within the angle of view of the virtual camera 70, the rendering unit 121 obtains each vector from each imaging camera 60 1 to 60 16 to the reference position 83. The rendering unit 121 also obtains a vector 91e from the virtual camera 70 to the reference position 83 from the position of the virtual camera 70 and the position of the subject 82 1 included within the angle of view of the virtual camera 70.
  • The rendering unit 121 then obtains the importance P(i) of each imaging camera 60 1 to 60 16, for example in accordance with the above-described equation (1), based on the angle formed between each vector (vector C i) from each imaging camera 60 1 to 60 16 to the reference position 83 and the vector 91e (vector C v).
  • the rendering unit 121 selects the optimum imaging camera for acquiring the texture to be applied to the subject 82 1 based on the importance P(i) obtained for each of the imaging cameras 60 1 to 60 16 .
  • In the example of section (a) of FIG. 15, the imaging camera 60 2, which is on a straight line 93c passing from the subject 82 1 (reference position 83) through the virtual camera 70, is ideally the optimum imaging camera.
  • In the embodiment, the position of the subject 82 1 is obtained and set as the reference position 83. Therefore, by selecting, among the vectors from the imaging cameras 60 1 to 60 16 to the reference position 83, the vector that forms the smallest angle with the vector 91e from the virtual camera 70 to the reference position 83, an imaging camera close to the ideal can be selected as the optimum imaging camera.
  • In the example of section (a) of FIG. 15, the imaging camera 60 2, which is the above-described ideal imaging camera, is selected as the optimum imaging camera.
  • the imaging camera 60 2 can be said to be a camera viewing the subject 82 1 from substantially the same direction as the virtual camera 70 . Therefore, according to the imaging camera selection method according to the embodiment, it is possible to obtain textures of higher quality.
  • Section (b) of FIG. 15 is an example in which, according to the existing technology, the optimum imaging camera is selected from the imaging cameras 60 1 to 60 16 based on the angle between the optical axis of the virtual camera 70 and the vector from each imaging camera 60 1 to 60 16 to the reference position 83.
  • In this case, the reference position 83 does not match the position of the subject 82 1, and the virtual camera 70 and the selected optimum imaging camera do not necessarily view the subject 82 1 from substantially the same direction.
  • As a result, in section (b) of FIG. 15, the imaging camera 60 1, which differs from the ideal imaging camera 60 2, is selected as the optimum imaging camera. Therefore, compared to the selection method using the position information indicating the position of the subject 82 1 according to the embodiment, the quality of the acquired texture is degraded.
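  • The difference between the existing selection and the selection according to the embodiment therefore comes down to which reference position is fed into the importance calculation of equation (1). The sketch below (NumPy assumed, names illustrative) contrasts a fixed global reference with a reference derived from the subject's own bounding box.

```python
import numpy as np

def select_camera(cam_positions, virtual_cam_pos, reference_pos):
    """Imaging camera whose direction to the reference position forms the
    smallest angle with the virtual camera's direction (equation (1))."""
    c_v = reference_pos - virtual_cam_pos
    c_v = c_v / np.linalg.norm(c_v)
    angles = []
    for cam in cam_positions:
        c_i = reference_pos - cam
        c_i = c_i / np.linalg.norm(c_i)
        angles.append(np.arccos(np.clip(np.dot(c_i, c_v), -1.0, 1.0)))
    return int(np.argmin(angles))

def bbox_center(bbox):
    """Center of a (x_min, x_max, y_min, y_max, z_min, z_max) bounding box."""
    x0, x1, y0, y1, z0, z1 = bbox
    return np.array([(x0 + x1) / 2.0, (y0 + y1) / 2.0, (z0 + z1) / 2.0])

# Existing technology: one fixed reference such as the center of the imaging range.
def select_with_global_reference(cam_positions, virtual_cam_pos, imaging_range_center):
    return select_camera(cam_positions, virtual_cam_pos, imaging_range_center)

# Embodiment: the reference is taken from the position information of the subject
# that is inside the virtual camera's angle of view.
def select_for_subject(cam_positions, virtual_cam_pos, subject_bbox):
    return select_camera(cam_positions, virtual_cam_pos, bbox_center(subject_bbox))
```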
  • FIG. 16 is a block diagram showing an example configuration of the rendering unit 121 according to the embodiment.
  • the rendering unit 121 includes a mesh transfer unit 1210, an imaging camera selection unit 1211, an imaging viewpoint depth generation unit 1212, an imaging camera information transfer unit 1213, and a virtual viewpoint texture generation unit 1214.
  • the mesh information, imaging camera information, and subject position information generated by the 3D model generation unit 111 are input to the rendering unit 121 .
  • virtual viewpoint position information indicating the position and direction of the virtual camera 70 is input to the rendering unit 121 .
  • This virtual viewpoint position information is input by the user, for example, using a controller (corresponding to the input unit 2104).
  • the rendering unit 121 generates a texture at the virtual viewpoint of the virtual camera 70 based on the mesh information, the imaging camera information, the virtual viewpoint position information, and the subject position information.
  • the mesh information is transferred to the mesh transfer unit 1210.
  • the mesh transfer unit 1210 transfers the passed mesh information to the imaging viewpoint depth generation unit 1212 and the virtual viewpoint texture generation unit 1214 .
  • the mesh transfer processing by the mesh transfer unit 1210 is processing for transferring mesh information to the GPU memory.
  • the virtual viewpoint texture generation unit 1214 may access the GPU memory to acquire mesh information. Note that if the mesh information is on the GPU memory when the reception unit 120 receives the mesh information, the mesh transfer unit 1210 can be omitted.
  • the imaging camera information is transferred to the imaging camera information transfer unit 1213.
  • camera parameter information in the imaging camera information is transferred to the imaging camera selection unit 1211 and the imaging viewpoint depth generation unit 1212 .
  • the imaging viewpoint depth generation unit 1212 selects an imaging camera from the imaging cameras 60 1 to 60 n according to camera selection information passed from the imaging camera selection unit 1211, which will be described later. Based on the mesh information transferred from the mesh transfer unit 1210, the imaging viewpoint depth generation unit 1212 generates selected imaging viewpoint depth information, which is depth information corresponding to the image captured by the selected imaging camera.
  • Alternatively, the depth information included in the imaging camera information input to the rendering unit 121 may be transferred to the imaging viewpoint depth generation unit 1212.
  • In this case, depth generation processing by the imaging viewpoint depth generation unit 1212 is unnecessary, and the imaging viewpoint depth generation unit 1212 transfers the depth information to the virtual viewpoint texture generation unit 1214 as the selected imaging viewpoint depth information.
  • the virtual viewpoint texture generation unit 1214 may access the GPU memory and acquire the selected imaging viewpoint depth information.
  • the virtual viewpoint position information and the subject position information are transferred to the imaging camera selection section 1211 and the imaging camera information transfer section 1213 .
  • the imaging camera selection unit 1211 selects one or more imaging cameras to be used in subsequent processing from the imaging cameras 60 1 to 60 n based on the camera parameter information, the virtual viewpoint position information, and the subject position information.
  • Camera selection information is generated that indicates one or more imaging cameras.
  • the imaging camera selection unit 1211 transfers the generated camera selection information to the imaging viewpoint depth generation unit 1212 and the imaging camera information transfer unit 1213 .
  • the imaging camera selection unit 1211 selects the first position of the virtual camera that acquires the image of the virtual space, the second position of the three-dimensional model, and one or more imaging cameras that capture the subject in the real space. It functions as a selection unit that selects, from one or more imaging cameras, an imaging camera that acquires an imaging image of a subject to be used as a texture image based on the third position.
  • the imaging camera information transfer section 1213 transfers imaging camera information indicating the selected imaging camera to the virtual viewpoint texture generation section 1214 as selected camera information. Even in this case, if the imaging camera information is already on the GPU memory, the process of transferring the selected camera information can be omitted. In this case, the virtual viewpoint texture generation unit 1214 may access the GPU memory and acquire the imaging camera information.
  • To the virtual viewpoint texture generation unit 1214, the mesh information from the mesh transfer unit 1210, the selected imaging viewpoint depth information from the imaging viewpoint depth generation unit 1212, and the selected camera information from the imaging camera information transfer unit 1213 are transferred. Also, the virtual viewpoint position information and the subject position information input to the rendering unit 121 are transferred to the virtual viewpoint texture generation unit 1214. The virtual viewpoint texture generation unit 1214 generates the texture at the virtual viewpoint, which is the viewpoint from the virtual camera 70, based on the information transferred from each of these units.
  • the virtual viewpoint texture generation unit 1214 functions as a generation unit that generates an image by applying a texture image to the 3D model included in the 3D data.
  • FIG. 17 is an exemplary flowchart illustrating a first example of imaging camera selection processing in rendering processing according to the embodiment.
  • In this first example, a single reference position is set collectively for one or more subjects.
  • Each process in the flowchart of FIG. 17 is a process executed by the imaging camera selection unit 1211 included in the rendering unit 121 .
  • the subsequent processing from step S201 to step S205 is processing for each object (subject). Note that the number of objects input to the rendering unit 121 can be obtained from subject position information.
  • the subsequent processing in steps S202 and S203 is processing for each vertex of the bounding box of the i-th object.
  • the imaging camera selection unit 1211 projects the j-th vertex of the bounding box of the i-th object onto the virtual camera 70 based on the virtual viewpoint position information and the subject position information related to the object.
  • If the imaging camera selection unit 1211 determines that processing has been completed for all vertices of the target bounding box, or that the j-th vertex of the target bounding box has been projected within the angle of view of the virtual camera 70 (step S203, "Yes"), the process proceeds to step S204. In step S204, if even one of the vertices of the target bounding box exists within the angle of view of the virtual camera 70, the imaging camera selection unit 1211 adds a reference position based on the bounding box.
  • In steps S203 and S204, if at least one of the vertices of the bounding box projected onto the virtual camera 70 is included within the angle of view of the virtual camera 70, the imaging camera selection unit 1211 assumes that the object (subject) related to the bounding box exists within the angle of view of the virtual camera 70. The imaging camera selection unit 1211 then obtains the reference position based on the bounding box assumed to exist within the angle of view of the virtual camera 70.
  • FIG. 18 is a schematic diagram for explaining the relationship between an object (subject) and the virtual camera 70 according to the embodiment.
  • In the example of FIG. 18, the vertex 201a of the bounding box 200a is outside the angle of view of the virtual camera 70.
  • Even so, because at least one other vertex of the bounding box 200a is projected within the angle of view, the imaging camera selection unit 1211 assumes that the three-dimensional model 51a exists within the angle of view of the virtual camera 70.
  • the imaging camera selection unit 1211 obtains the reference position 84a for the three-dimensional model 51a related to the bounding box 200a based on the coordinates of each vertex of the three-dimensional bounding box 200a. For example, the imaging camera selection unit 1211 obtains the average value of the coordinates of each vertex of the three-dimensional bounding box 200a as the reference position 84a for the three-dimensional model 51a related to the bounding box 200a.
  • When it is determined in step S203 that any vertex of the bounding box 200a other than the vertex 201a exists within the angle of view of the virtual camera 70, the process proceeds to step S204.
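  • The in-view check and reference-position computation described for steps S202 to S204 can be illustrated with the minimal sketch below. It assumes a simple pinhole camera with a symmetric angle of view given as a half-angle, a world-to-camera rotation matrix, and NumPy arrays for coordinates; all function and variable names are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def vertex_in_view(vertex, cam_pos, cam_rot, half_fov_rad):
    """True if a world-space vertex lies inside the camera's (assumed symmetric) angle of view."""
    p_cam = np.asarray(cam_rot) @ (np.asarray(vertex, float) - np.asarray(cam_pos, float))
    if p_cam[2] <= 0.0:                                   # behind the camera
        return False
    # angle between the optical axis (+Z in camera coordinates) and the ray to the vertex
    angle = np.arccos(p_cam[2] / np.linalg.norm(p_cam))
    return angle <= half_fov_rad

def reference_position(bbox_vertices, cam_pos, cam_rot, half_fov_rad):
    """Steps S202-S204: reference position of one object, or None if no vertex is in view."""
    verts = np.asarray(bbox_vertices, dtype=float)        # (8, 3) bounding-box vertices
    if any(vertex_in_view(v, cam_pos, cam_rot, half_fov_rad) for v in verts):
        return verts.mean(axis=0)                          # average of the vertex coordinates
    return None                                            # object treated as out of view
```

  • In this sketch, an object contributes a reference position only when at least one of the eight bounding-box vertices projects inside the assumed angle of view, mirroring the "Yes" branch of step S203.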
  • the imaging camera selection unit 1211 determines whether the processes of steps S202 to S204 have been completed for all objects input to the rendering unit 121.
  • If the imaging camera selection unit 1211 determines that processing has not been completed for all objects input to the rendering unit 121 (step S205, "No"), the per-object loop is repeated for the next object.
  • If the imaging camera selection unit 1211 determines that the processing has been completed for all objects input to the rendering unit 121 (step S205, "Yes"), the process proceeds to step S206.
  • In step S206, the imaging camera selection unit 1211 calculates a representative reference position for all objects for which processing has been completed up to step S205. More specifically, in step S206, the imaging camera selection unit 1211 calculates the average value of the reference positions of all the objects for which the processing has been completed up to step S205, and uses the calculated average value as the representative reference position for all of those objects.
  • FIG. 19 is a schematic diagram for explaining the process of calculating the average value of the reference positions of the objects according to the embodiment.
  • the angle of view of the virtual camera 70 includes a bounding box 200a for the three-dimensional model 51a and a bounding box 200b for the three-dimensional model 51b.
  • Reference positions 84a and 84b are set for the three-dimensional models 51a and 51b, respectively.
  • the imaging camera selection unit 1211 sets the reference position 85 at the coordinates given by the average of the coordinates of the reference positions 84a and 84b.
  • This reference position 85 serves as a common reference position for the three-dimensional models 51a and 51b.
  • the reference position 85 is used to select the optimum imaging camera for the three-dimensional models 51a and 51b in common.
  • the three-dimensional models 51a and 51b form one group.
  • the subsequent processing of steps S208 to S210 is processing for each of the imaging cameras 60 1 to 60 n . Also, the processing target in the loop among the imaging cameras 60 1 to 60 n is assumed to be the imaging camera 60 k .
  • In step S208, for the k-th imaging camera 60 k , the imaging camera selection unit 1211 obtains the angle between a vector directed from the imaging camera 60 k to the reference position 85 and a vector directed from the virtual camera 70 to the reference position 85.
  • the imaging camera selection unit 1211 sorts the imaging cameras 60 k in ascending order of angles obtained in step S208 in the loop processing based on the loop variable k. That is, in step S209, the imaging camera selection unit 1211 sorts the imaging cameras 60 k in descending order of importance.
  • the imaging camera selection unit 1211 determines whether or not processing has been completed for all of the arranged imaging cameras 60 1 to 60 n .
  • If the imaging camera selection unit 1211 determines that the processing has not been completed for all the imaging cameras 60 1 to 60 n (step S210, "No"), the loop is repeated for the next imaging camera.
  • If the imaging camera selection unit 1211 determines that the processing has been completed for all the imaging cameras 60 1 to 60 n (step S210, "Yes"), the process proceeds to step S211.
  • In step S211, the imaging camera selection unit 1211 selects camera information indicating the top m imaging cameras from the array of imaging cameras 60 1 to 60 n sorted in ascending order of angle.
  • the imaging camera selection unit 1211 transfers information indicating each selected imaging camera to the imaging viewpoint depth generation unit 1212 and the imaging camera information transfer unit 1213 as camera selection information.
  • When step S211 ends, the imaging camera selection unit 1211 ends the series of processes according to the flowchart of FIG. 17.
  • one reference position 85 is collectively set for a plurality of three-dimensional models 51a and 51b. Therefore, in post-effect processing and the like, which will be described later, the three-dimensional models 51a and 51b are subjected to common effect processing at the same time.
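  • A minimal sketch of the camera scoring and selection in steps S206 to S211 follows, under assumed data layouts (positions as 3-element arrays, one reference position per in-view object). The angle at the representative reference position between each imaging camera's direction and the virtual camera's direction serves as the importance measure; smaller angles rank higher, and the top m cameras are returned. Names and signatures are assumptions for illustration, not the actual implementation of the imaging camera selection unit 1211.

```python
import numpy as np

def select_cameras(reference_positions, imaging_cam_positions, virtual_cam_pos, m):
    # Step S206: representative reference position for all in-view objects.
    ref = np.mean(np.asarray(reference_positions, dtype=float), axis=0)

    v_virtual = ref - np.asarray(virtual_cam_pos, dtype=float)
    v_virtual /= np.linalg.norm(v_virtual)

    # Steps S208-S209: angle between each imaging camera's vector to the reference
    # and the virtual camera's vector to the reference (smaller angle = higher importance).
    angles = []
    for k, cam_pos in enumerate(imaging_cam_positions):
        v_cam = ref - np.asarray(cam_pos, dtype=float)
        v_cam /= np.linalg.norm(v_cam)
        cos_a = np.clip(np.dot(v_cam, v_virtual), -1.0, 1.0)
        angles.append((np.arccos(cos_a), k))

    # Step S211: indices of the top-m cameras in ascending order of angle.
    angles.sort(key=lambda t: t[0])
    return [k for _, k in angles[:m]]
```

  • In the second selection example described next, the same scoring could simply be run once per object, using that object's own reference position instead of the averaged representative position, which yields a per-object camera list.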
  • FIG. 20 is an exemplary flowchart illustrating a second example of imaging camera selection processing in rendering processing according to the embodiment.
  • In this second example, one reference position is set individually for each of the one or more subjects.
  • Each process in the flowchart of FIG. 20 is a process executed by the imaging camera selection unit 1211 included in the rendering unit 121.
  • The processing of steps S200 to S205 is the same as the processing of steps S200 to S205 in the flowchart of FIG. 17 described above, so the description is omitted here.
  • the imaging camera selection unit 1211 advances the process to step S2060 when the reference position addition processing for all objects is completed in step S205.
  • the subsequent processing from step S208 to step S2101 is processing for each object.
  • the subsequent processing of steps S208 to S210 is processing for each of the imaging cameras 60 1 to 60 n . Note that the processing of steps S208 to S210 is the same as the processing of steps S208 to S210 in the flowchart of FIG. 17.
  • When the imaging camera selection unit 1211 determines in step S210 that the processing for all the arranged imaging cameras 60 1 to 60 n has been completed (step S210, "Yes"), the process proceeds to step S2101.
  • In step S2101, the imaging camera selection unit 1211 determines whether or not the processing for all objects included within the angle of view of the virtual camera 70 has been completed.
  • If the imaging camera selection unit 1211 determines that the processing for all objects included in the angle of view of the virtual camera 70 has not ended (step S2101, "No"), the per-object loop is repeated for the next object.
  • If the determination in step S2101 is "Yes", the process proceeds to step S211.
  • In step S211, the imaging camera selection unit 1211 selects camera information indicating the top m imaging cameras from the arrangement of the imaging cameras 60 1 to 60 n sorted in ascending order of angle, as in step S211 of the flowchart of FIG. 17.
  • the imaging camera selection unit 1211 transfers information indicating each selected imaging camera to the imaging viewpoint depth generation unit 1212 and the imaging camera information transfer unit 1213 as camera selection information.
  • When step S211 ends, the imaging camera selection unit 1211 ends the series of processes according to the flowchart of FIG. 20.
  • In this second example, the reference position 85 in FIG. 19 described above is not set, and the reference positions 84a and 84b are set for the three-dimensional models 51a and 51b, respectively.
  • reference positions 84a and 84b are individually set for each of the plurality of three-dimensional models 51a and 51b. Therefore, in post-effect processing, etc., which will be described later, it is possible to apply effect processing to each of the three-dimensional models 51a and 51b individually.
  • FIG. 21 is an exemplary flowchart illustrating rendering processing according to the embodiment. Each process according to the flowchart of FIG. 21 is a process executed by the virtual viewpoint texture generation unit 1214 included in the rendering unit 121 .
  • the mesh information may include mesh information of a plurality of subjects.
  • the virtual viewpoint texture generation unit 1214 selects vertices to be projected onto the virtual viewpoint by the virtual camera 70 from the mesh information based on the subject position information.
  • the virtual viewpoint texture generation unit 1214 rasterizes based on the vertices selected in step S301. That is, the vertices not selected in step S301 are not rasterized and are not projected onto the virtual viewpoint, ie, the virtual camera 70 . Therefore, the virtual viewpoint texture generation unit 1214 can selectively set display/non-display for each of a plurality of subjects.
  • In step S305, the virtual viewpoint texture generation unit 1214 obtains the vertex of the mesh corresponding to the pixel q of the virtual viewpoint.
  • the subsequent processing of steps S307 to S313 is processing for each of the imaging cameras 60 1 to 60 n . Also, the imaging camera 60 r is assumed to be the object of processing in the loop among the imaging cameras 60 1 to 60 n .
  • In step S307, the virtual viewpoint texture generation unit 1214 projects the vertex coordinates of the vertex obtained in step S305 onto the imaging camera 60 r , and obtains the corresponding UV coordinates in the imaging camera 60 r .
  • In step S308, the virtual viewpoint texture generation unit 1214 compares the depth of each vertex of the mesh in the imaging camera 60 r with the depth of the vertex coordinates of the vertex obtained in step S305, and obtains the difference between the two.
  • In step S309, the virtual viewpoint texture generation unit 1214 determines whether the difference obtained in step S308 is equal to or greater than a threshold. If the virtual viewpoint texture generation unit 1214 determines that the difference is equal to or greater than the threshold (step S309, "Yes"), the process proceeds to step S310, and the imaging camera information (selected camera information) obtained by the imaging camera 60 r is not used.
  • If the virtual viewpoint texture generation unit 1214 determines that the difference obtained in step S308 is less than the threshold (step S309, "No"), the process proceeds to step S311, and the imaging camera information obtained by the imaging camera 60 r is used.
  • the virtual viewpoint texture generation unit 1214 acquires color information at the UV coordinates obtained in step S307 from the imaging camera information.
  • the virtual viewpoint texture generation unit 1214 then obtains a blend coefficient for the color information.
  • In step S312, the virtual viewpoint texture generation unit 1214 obtains the blend coefficient for the captured image (texture image) of the imaging camera 60 r based on the imaging camera information selected by the processing of steps S208 to S211 in the flowchart of FIG. 17 or FIG. 20.
  • In step S313, the virtual viewpoint texture generation unit 1214 determines whether or not the processing for all of the arranged imaging cameras 60 1 to 60 n has been completed.
  • If the determination in step S313 is "Yes", the process proceeds to step S314.
  • In step S314, the virtual viewpoint texture generation unit 1214 blends the color information in the imaging camera information used in step S311 among the imaging cameras 60 1 to 60 n according to the blend coefficients obtained in step S312, thereby determining the color information for the pixel q.
  • When the virtual viewpoint texture generation unit 1214 determines in step S315 that processing has been completed for all pixels, it terminates the series of processing according to the flowchart of FIG. 21.
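  • The per-pixel processing of steps S305 to S314 can be summarized by the following sketch. It assumes each selected imaging camera is described by a projection function returning (u, v, depth), a depth map corresponding to the selected imaging viewpoint depth information, a color image, and a blend coefficient; the dictionary layout and all names are assumptions, not the actual data structures of the rendering unit 121.

```python
import numpy as np

def shade_pixel(surface_point, cameras, depth_threshold):
    """cameras: list of dicts with 'project' (point -> (u, v, depth)),
    'depth_map' (H x W), 'image' (H x W x 3), and 'blend_coef' (float)."""
    colors, weights = [], []
    for cam in cameras:                                   # loop over selected cameras (S307-S313)
        u, v, depth = cam["project"](surface_point)       # S307: UV and depth in this camera
        h, w = cam["depth_map"].shape
        iu, iv = int(round(u)), int(round(v))
        if not (0 <= iu < w and 0 <= iv < h):
            continue                                      # projects outside this camera's image
        # S308-S309: occlusion test against the selected imaging-viewpoint depth.
        if abs(cam["depth_map"][iv, iu] - depth) >= depth_threshold:
            continue                                      # S310: this camera does not see the point
        colors.append(cam["image"][iv, iu].astype(float)) # S311: use this camera's color
        weights.append(cam["blend_coef"])                 # S312: blend coefficient
    if not colors:
        return np.zeros(3)                                # no camera sees this surface point
    weights = np.asarray(weights) / np.sum(weights)
    return np.tensordot(weights, np.asarray(colors), axes=1)  # S314: weighted blend
```

  • The occlusion test in steps S308 to S310 is what prevents a camera that cannot actually see the surface point, for example because another subject lies in front of it, from contributing color; this is the behavior relied on in the post-effect discussion below.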
  • FIG. 22 is a schematic diagram for explaining post-effect processing according to the embodiment.
  • Section (a) of FIG. 22 shows an example of rendering processing according to the embodiment
  • section (b) shows an example of rendering processing by existing technology.
  • the virtual viewpoint texture generation unit 1214 performs ray tracing along the virtual optical path 95 and the optical paths 96 1 to 96 4 from the position of the pixel 72 of the image acquired by the virtual camera 70 (the output pixel of the virtual camera 70), and obtains the position of the pixel (input pixel) corresponding to the pixel 72 in each of the plurality of imaging cameras 60 1 to 60 4 (FIG. 21, steps S305 to S307).
  • the virtual viewpoint texture generation unit 1214 obtains the color information of the pixels 72 of the virtual camera 70 by blending the obtained color information of the respective pixels of the plurality of imaging cameras 60 1 to 60 4 according to the blend coefficients (Fig. 21, step S312).
  • the subject 87 is on the near side of the subject 86 as seen from the virtual camera 70 , lies on the virtual optical path 96 4 from the imaging camera 60 4 to the subject 86 , and therefore appears in the image captured by the imaging camera 60 4 .
  • the virtual viewpoint texture generation unit 1214 selects vertices to be projected onto the virtual viewpoint by the virtual camera 70 from mesh information based on subject position information. Therefore, the virtual viewpoint texture generation unit 1214 can selectively set display/non-display for each of a plurality of subjects. Specifically, like the subject 87 indicated by the dotted line in section (a) of FIG. 22, the subject 87 can be hidden based on the subject position information indicating the position of the subject 87 .
  • the imaging camera 60 4 in the real space images the subject 87 . Therefore, the virtual viewpoint texture generation unit 1214 does not use the captured image captured by the imaging camera 60 4 as the texture image of the subject 86 . Also, the surface of the subject 87 facing an imaging camera (not shown) located beyond the subject 87 as seen from the virtual camera 70 (indicated by the arrow 97 ) cannot be seen from the virtual camera 70 . Therefore, the virtual viewpoint texture generation unit 1214 does not acquire the imaging camera information of that imaging camera. As a result, the processing load of the virtual viewpoint texture generation unit 1214 can be reduced.
  • Here, the processing for switching display/non-display of a specific subject among the subjects included in the angle of view of the virtual camera 70 has been described as an example; however, the present technology is not limited to this example and can also be applied to other post-effect processing.
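  • One way to realize the exclusion described above for FIG. 22 is sketched below, under the assumption that the hidden subject is approximated by its axis-aligned bounding box: an imaging camera is skipped for a given surface point when the segment from that camera to the point passes through the hidden subject's box. The slab-test helper and all names are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def ray_hits_aabb(origin, target, box_min, box_max):
    """Slab test: does the segment origin -> target intersect the axis-aligned box?"""
    origin, target = np.asarray(origin, float), np.asarray(target, float)
    direction = target - origin
    t_min, t_max = 0.0, 1.0
    for axis in range(3):
        if abs(direction[axis]) < 1e-12:                       # segment parallel to this slab
            if not (box_min[axis] <= origin[axis] <= box_max[axis]):
                return False
            continue
        t1 = (box_min[axis] - origin[axis]) / direction[axis]
        t2 = (box_max[axis] - origin[axis]) / direction[axis]
        t_lo, t_hi = min(t1, t2), max(t1, t2)
        t_min, t_max = max(t_min, t_lo), min(t_max, t_hi)
        if t_min > t_max:
            return False
    return True

def usable_cameras(cam_positions, surface_point, hidden_bbox_min, hidden_bbox_max):
    """Indices of imaging cameras whose view of the surface point is not blocked
    by the bounding box of the hidden subject."""
    return [k for k, pos in enumerate(cam_positions)
            if not ray_hits_aabb(pos, surface_point, hidden_bbox_min, hidden_bbox_max)]
```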
  • FIG. 23 is a schematic diagram showing more specifically the post-effect processing according to the embodiment.
  • Section (a) of FIG. 23 shows an example of an output image 300a output from the virtual camera 70 in which the three-dimensional models 51c and 51d are included in the angle of view of the virtual camera 70.
  • The three-dimensional models 51c and 51d are associated with the bounding boxes 200c and 200d, respectively. Note that in sections (a) and (b) of FIG. 23, the frame lines of the respective bounding boxes 200c and 200d are shown for explanation and are not displayed in the actual image.
  • Section (b) of FIG. 23 shows an example of an output image 300b when the position of the virtual camera 70 is moved from section (a) and the three-dimensional model 51c is moved.
  • the three-dimensional model 51d associated with the bounding box 200d is specified based on subject position information indicating the position of the three-dimensional model 51d, and is hidden by post-effect processing.
  • Note that the three-dimensional model 51d itself remains within the angle of view of the virtual camera 70, and the related bounding box 200d still exists.
  • A 3D model of a subject generated by the information processing system 100 according to the embodiment and 3D data managed by another device may be combined to produce new video content.
  • For example, by combining the three-dimensional model of the subject generated by the information processing system 100 according to the embodiment with background data, it is possible to create video content that makes the subject appear as if it exists in the location indicated by the background data.
  • the video content to be produced may be video content having three-dimensional information, or video content obtained by converting three-dimensional information into two-dimensional information.
  • the 3D model of the subject generated by the information processing system 100 according to the embodiment includes, for example, a 3D model generated by the 3D model generation unit 111 and a 3D model reconstructed by the rendering unit 121.
  • A subject (for example, a performer) generated by the information processing system 100 according to the embodiment can be placed in a virtual space where the user communicates as an avatar.
  • the user becomes an avatar and can observe the photographed subject in the virtual space.
  • the user at the remote location can observe the 3D model of the subject through a playback device at the remote location.
  • real-time communication between the subject and a remote user can be realized by transmitting the three-dimensional model of the subject in real time.
  • a case where the subject is a teacher and the user is a student, or a case where the subject is a doctor and the user is a patient can be assumed.
  • the information processing program according to the embodiment described above may be executed in another device having a CPU, a ROM, a RAM, etc. and having functions as an information processing device.
  • the device should have the necessary functional blocks and be able to obtain the necessary information.
  • each step of one flowchart may be executed by one device, or may be shared by a plurality of devices.
  • the plurality of processes may be executed by one device, or may be shared by a plurality of devices.
  • a plurality of processes included in one step of the flowchart can also be executed as a process of a plurality of steps.
  • the processing described as a plurality of steps in the flowchart can also be collectively executed as one step.
  • The processing of the steps describing the information processing program may be executed in chronological order according to the order shown in each flowchart described above, may be executed in parallel, or may be executed individually as needed, such as when a call is made. That is, as long as there is no contradiction, the processing of each step may be executed in an order different from the order described above. Furthermore, the processing of the steps describing the information processing program according to the embodiment may be executed in parallel with the processing of other programs, or may be executed in combination with the processing of other programs.
  • a plurality of technologies related to the present disclosure can be implemented independently, or a plurality of technologies related to the present disclosure can be implemented in combination, as long as there is no contradiction. Also, part or all of the techniques according to the above-described embodiments can be implemented in combination with other techniques not described above.
  • the present technology can also take the following configuration.
  • (1) An information processing device comprising: a generation unit that generates an image by applying a texture image to a three-dimensional model included in three-dimensional data; and a selection unit that selects, from one or more imaging cameras, an imaging camera that acquires a captured image of a subject to be used as the texture image, based on a first position of a virtual camera that acquires an image of a virtual space, a second position of the three-dimensional model, and a third position of the one or more imaging cameras that capture the subject in a real space.
  • (2) The information processing device according to (1), wherein the generation unit generates the texture image according to the viewpoint from the virtual camera, based on the captured image acquired by the imaging camera selected by the selection unit from the one or more imaging cameras.
  • (3) The information processing device according to (1) or (2), wherein the selection unit selects an imaging camera that acquires a captured image of the subject according to the importance of each of the one or more imaging cameras, the importance being obtained based on the first position, the second position, and the third position.
  • (4) The information processing device according to (3), wherein the selection unit obtains the importance based on an angle formed by the first position and the third position with the second position as the vertex.
  • (5) The information processing device according to (3) or (4), wherein the generation unit generates the texture image by blending the captured images captured by the one or more imaging cameras according to the importance.
  • (6) The information processing device according to any one of (1) to (5), wherein the generation unit applies the texture image to the three-dimensional model when at least one of the vertex coordinates of a rectangular parallelepiped circumscribing the three-dimensional model is within the angle of view of the virtual camera.
  • (7) The information processing device according to any one of (1) to (6), wherein the generation unit designates the three-dimensional model to be given a predetermined effect based on the second position.
  • (8) The information processing device according to (7), wherein the predetermined effect is an effect of hiding the designated three-dimensional model from the virtual camera.
  • (9) The information processing device according to any one of (1) to (8), wherein the selection unit deselects, from among the one or more imaging cameras, an imaging camera that images the subject from a direction outside the angle of view of the virtual camera with respect to the three-dimensional model.
  • (10) The information processing device according to any one of (1) to (9), wherein the selection unit uses the average coordinates of the vertex coordinates of a rectangular parallelepiped circumscribing the three-dimensional model as the second position.
  • (11) The information processing device according to (10), wherein, when a plurality of three-dimensional models included in the three-dimensional data are included within the angle of view of the virtual camera, the selection unit uses the average of the second positions of the plurality of three-dimensional models as the second position for the plurality of three-dimensional models.
  • (12) An information processing method comprising: a generation step of generating an image by applying a texture image to a three-dimensional model included in three-dimensional data; and a selection step of selecting, from one or more imaging cameras, an imaging camera that acquires a captured image of a subject to be used as the texture image, based on a first position of a virtual camera that acquires an image of a virtual space, a second position of the three-dimensional model, and a third position of the one or more imaging cameras that capture the subject in a real space.
  • (13) An information processing device comprising: a generation unit that generates three-dimensional data based on captured images captured by one or more imaging cameras; and a separation unit that separates a three-dimensional model corresponding to a subject included in the captured images from the three-dimensional data and generates position information indicating the position of the separated three-dimensional model.
  • (14) The information processing device according to (13), wherein the separation unit separates the three-dimensional model by specifying a region of the subject on a two-dimensional plane based on information on the two-dimensional plane obtained by projecting the three-dimensional data in the height direction, and giving the region information in the height direction.
  • (15) The information processing device according to (14), wherein the separation unit generates the position information including the coordinates of each vertex of a rectangular parallelepiped circumscribing the three-dimensional model, the rectangular parallelepiped being generated by giving the region the information in the height direction.
  • (16) The information processing device according to any one of (13) to (15), further comprising an output unit that adds the position information to the three-dimensional model separated from the three-dimensional data by the separation unit and outputs the three-dimensional model.
  • (17) The information processing device according to (16), wherein the output unit outputs the information of the three-dimensional model as multi-viewpoint captured images obtained by capturing the subject corresponding to the three-dimensional model with the one or more imaging cameras, and depth information for each of the multi-viewpoint captured images.
  • (18) The information processing device according to (16), wherein the output unit outputs the information of the three-dimensional model as mesh information.
  • (19) An information processing method executed by a processor, the method comprising: a generation step of generating three-dimensional data based on captured images captured by one or more imaging cameras; and a separation step of separating a three-dimensional model corresponding to a subject included in the captured images from the three-dimensional data and generating position information indicating the position of the separated three-dimensional model.
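  • A minimal sketch of the separation described in configurations (13) to (15) above, under assumed conventions (the height axis is Y, the three-dimensional data is given as an N x 3 point set, and a fixed grid resolution is used): the points are projected along the height direction onto a two-dimensional occupancy grid, connected occupied regions are treated as individual subjects, and each region is given back its height range to form the circumscribing rectangular parallelepiped whose vertex coordinates serve as position information. The helper names and the use of scipy.ndimage.label are assumptions for illustration.

```python
import numpy as np
from scipy import ndimage

def separate_subjects(points, cell=0.05):
    points = np.asarray(points, dtype=float)               # (N, 3) scene points
    xz = points[:, [0, 2]]                                  # project along the height (Y) axis
    mins = xz.min(axis=0)
    idx = np.floor((xz - mins) / cell).astype(int)          # 2D grid indices per point
    grid = np.zeros(idx.max(axis=0) + 1, dtype=bool)
    grid[idx[:, 0], idx[:, 1]] = True                       # occupancy on the ground plane

    labels, n = ndimage.label(grid)                         # connected regions = subject footprints
    boxes = []
    for lab in range(1, n + 1):
        in_region = labels[idx[:, 0], idx[:, 1]] == lab      # points whose cell belongs to this region
        region_pts = points[in_region]
        lo, hi = region_pts.min(axis=0), region_pts.max(axis=0)
        boxes.append((lo, hi))                              # axis-aligned circumscribing box
    return boxes                                             # one (min, max) box per separated subject
```

  • Each returned (min, max) pair defines the eight vertex coordinates of the circumscribing rectangular parallelepiped, which is the form of position information referred to in configuration (15).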

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)

Abstract

  • The present invention relates to an information processing device comprising: a generation unit (1214) that generates an image obtained by applying a texture image to a three-dimensional model included in three-dimensional data; and a selection unit (1211) that selects, from among one or more imaging cameras, an imaging camera for acquiring the captured image of a subject to be used as the texture image, on the basis of a first position of a virtual camera for acquiring an image of a virtual space, a second position of the three-dimensional model, and a third position of the one or more imaging cameras each capturing an image of the subject in a real space.
PCT/JP2022/008967 2021-03-12 2022-03-02 Dispositif de traitement d'informations et procédé de traitement d'informations WO2022191010A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021-040346 2021-03-12
JP2021040346 2021-03-12

Publications (1)

Publication Number Publication Date
WO2022191010A1 true WO2022191010A1 (fr) 2022-09-15

Family

ID=83227186

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/008967 WO2022191010A1 (fr) 2021-03-12 2022-03-02 Dispositif de traitement d'informations et procédé de traitement d'informations

Country Status (1)

Country Link
WO (1) WO2022191010A1 (fr)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009539155A (ja) * 2006-06-02 2009-11-12 イジュノシッヒ テクニッヒ ホッフシューラ チューリッヒ 動的に変化する3次元のシーンに関する3次元表現を生成するための方法およびシステム
WO2019039282A1 (fr) * 2017-08-22 2019-02-28 ソニー株式会社 Dispositif de traitement d'image et procédé de traitement d'image
WO2019082958A1 (fr) * 2017-10-27 2019-05-02 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Dispositif de codage de modèle tridimensionnel, dispositif de décodage de modèle tridimensionnel, procédé de codage de modèle tridimensionnel et procédé de décodage de modèle tridimensionnel
JP2020014159A (ja) * 2018-07-19 2020-01-23 キヤノン株式会社 ファイルの生成装置およびファイルに基づく映像の生成装置
US20200143557A1 (en) * 2018-11-01 2020-05-07 Samsung Electronics Co., Ltd. Method and apparatus for detecting 3d object from 2d image
JP2020126393A (ja) * 2019-02-04 2020-08-20 キヤノン株式会社 画像処理装置、画像処理方法、及びプログラム
JP2021022032A (ja) * 2019-07-25 2021-02-18 Kddi株式会社 合成装置、方法及びプログラム

Similar Documents

Publication Publication Date Title
US10535181B2 (en) Virtual viewpoint for a participant in an online communication
EP3712856B1 (fr) Procédé et système pour générer une image
JP7386888B2 (ja) 画面上の話者のフューショット合成
JP2014505917A (ja) 3dヒューマンマシンインターフェースのためのハイブリッドリアリティ
TWI813098B (zh) 用於新穎視圖合成之神經混合
EP3396635A2 (fr) Procédé et équipement technique de codage de contenu multimédia
WO2020184174A1 (fr) Dispositif de traitement d'image et procédé de traitement d'image
Kurillo et al. A framework for collaborative real-time 3D teleimmersion in a geographically distributed environment
Farbiz et al. Live three-dimensional content for augmented reality
GB2565301A (en) Three-dimensional video processing
WO2022191010A1 (fr) Dispositif de traitement d'informations et procédé de traitement d'informations
JP6091850B2 (ja) テレコミュニケーション装置及びテレコミュニケーション方法
US20230252722A1 (en) Information processing apparatus, information processing method, and program
Andersen et al. An AR-guided system for fast image-based modeling of indoor scenes
EP3564905A1 (fr) Convertissement d'un objet volumetrique dans une scène 3d vers un modèle de représentation plus simple
US11769299B1 (en) Systems and methods for capturing, transporting, and reproducing three-dimensional simulations as interactive volumetric displays
CN116528065B (zh) 一种高效虚拟场景内容光场获取与生成方法
Scheer et al. A client-server architecture for real-time view-dependent streaming of free-viewpoint video
WO2023276261A1 (fr) Dispositif de traitement d'informations, procédé de traitement d'informations et programme
US20240185526A1 (en) Systems and methods for capturing, transporting, and reproducing three-dimensional simulations as interactive volumetric displays
Thatte et al. Real-World Virtual Reality With Head-Motion Parallax
WO2022224964A1 (fr) Dispositif de traitement d'informations et procédé de traitement d'informations
Pitkänen Open Access Dynamic Human Point Cloud Datasets
CN117424997A (zh) 视频处理方法、装置、设备及可读存储介质
JP2023026148A (ja) 視点算出装置及びそのプログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22766969

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22766969

Country of ref document: EP

Kind code of ref document: A1