WO2022103190A1 - Method and device for generating texture of three-dimensional model - Google Patents
- Publication number
- WO2022103190A1 (PCT/KR2021/016503)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- texture
- noise
- image processing
- camera
- processing apparatus
- Prior art date
- 2020-11-13
Classifications
- G06T5/70—Denoising; Smoothing (G06T5/00—Image enhancement or restoration)
- G06T15/04—Texture mapping (G06T15/00—3D [Three Dimensional] image rendering)
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/001—Texturing; Colouring; Generation of texture or colour
- H04N13/204—Image signal generators using stereoscopic image cameras (H04N13/00—Stereoscopic video systems; Multi-view video systems)
- G06T2210/44—Morphing (G06T2210/00—Indexing scheme for image generation or computer graphics)
Definitions
- The present invention relates to image processing, and more particularly to a method and apparatus for generating a texture of a three-dimensional model.
- Three-dimensional (3-dimension) modeling refers to representing the real world, or an imaginary world, in three-dimensional form. Such modeling expresses a three-dimensional target object as geometric data.
- A 3D model may be generated by 3D-scanning the target object.
- An image created by scanning describes the distance to each point of interest on a surface, so the three-dimensional position of each point in the image can be identified.
- However, scan operations in several directions are required to capture the target object from all sides, and such scanning takes a considerable amount of time.
- An object of the present invention is to provide a method and apparatus for effectively generating a texture of a three-dimensional model.
- An object of the present invention is to provide a method and apparatus for effectively removing noise from a texture of a 3D model.
- An image processing method performed by an image processing apparatus includes generating a basic texture using a reference camera among a plurality of cameras for photographing a target, generating at least one candidate image using at least one candidate camera among the plurality of cameras, and removing noise from the basic texture by using the at least one candidate image.
- The reference camera may be a camera, among the plurality of cameras, whose shooting range includes the photographed surface of the target and whose lens direction is most similar to the direction perpendicular to that surface.
- The at least one candidate camera may include at least one camera, among the plurality of cameras, whose shooting range includes the photographed surface of the target.
- The removing of the noise may include determining a corrected value by combining values representing pixels on the same three-dimensional coordinates in the basic texture and the at least one candidate image, and setting the corrected value as the value of the corresponding pixel.
- Combining the values may include averaging them, and the weights used for the averaging may vary across a plurality of regions into which the basic texture is divided.
- The removing of the noise may include determining differential values for each candidate image with respect to values representing pixels on the same three-dimensional coordinates in the basic texture and the at least one candidate image, and applying the maximum of the differential values to the basic texture.
- The removing of the noise may include morphing a three-dimensional (3D) object according to the shape of the boundary line of the basic texture and projecting the basic texture onto the morphed 3D object, and morphing the 3D object according to the shape of the boundary line of an auxiliary texture generated from the at least one candidate image and projecting the auxiliary texture onto the morphed 3D object.
- The removing of the noise may include obtaining information related to the noise, extracting the noise from the basic texture based on that information, and performing an operation for removing the noise on the area in which the noise exists.
- The method may further include adjusting the colors of the candidate images based on the reference texture.
- The adjustment value for adjusting the colors may be determined based on pixel values of the noise-free portion.
- An image processing apparatus includes a processor and a memory.
- The processor generates a basic texture by using a reference camera among a plurality of cameras for photographing a target, generates at least one candidate image by using at least one candidate camera among the plurality of cameras, and removes noise from the basic texture by using the at least one candidate image.
- A computer-readable recording medium according to an embodiment of the present invention records a computer program for performing the image processing method according to the above-described embodiment.
- FIG. 1 is a diagram illustrating an image processing apparatus according to an embodiment of the present invention.
- FIG. 2 is a diagram illustrating an example of a source image capturing environment of a texture of a three-dimensional (3-dimension) model according to an embodiment of the present invention.
- FIG. 3 is a diagram illustrating an embodiment of a procedure for generating a texture of a 3D model according to an embodiment of the present invention.
- FIGS. 4A and 4B are diagrams illustrating examples of mapping between a 3D model and a 2D image expressed in a 3D coordinate system according to an embodiment of the present invention.
- FIG. 5 is a diagram illustrating another embodiment of a procedure for generating a texture of a 3D model according to an embodiment of the present invention.
- In the following description, references to 'connected', 'connecting', 'fastened', 'fastening', 'coupled', and various variations of these expressions are used to mean that an element is directly connected to another element or indirectly connected through other elements.
- The present invention proposes a method and apparatus for generating a texture of a three-dimensional model.
- In particular, the present invention proposes a technique for reducing or removing noise included in a source image when generating a texture of a 3D model.
- To generate the texture, a two-dimensional (2-dimension) image can be used as a source.
- The 2D image used as the source may include noise depending on the photographing environment.
- For example, the noise may be caused by lighting present in the photographing environment of the 2D image.
- Specifically, noise may arise when the color of pixels in the captured image is distorted by light reflected from the lighting. Therefore, to improve the quality of a 3D model, it is necessary to remove or reduce noise in the 2D image used as the source of the texture of the 3D model.
- Referring to FIG. 1, the image processing apparatus 100 includes a processor 110, a memory 120, a communication unit 130, an input unit 140, and an output unit 150.
- The configuration of FIG. 1 is exemplary; some of the listed components may be omitted, and other components not listed may be further included.
- The processor 110 controls the operation of the image processing apparatus 100.
- For example, the processor 110 may control at least one of the memory 120, the communication unit 130, the input unit 140, and the output unit 150. That is, the processor 110 may control the other components so that the image processing apparatus 100 performs operations according to the various embodiments described below.
- The memory 120 stores data such as a basic program, application programs, and setting information for the operation of the image processing apparatus 100. The memory 120 also stores temporary data generated during operation, and may provide stored data at the request of the processor 110.
- The communication unit 130 performs functions for transmitting signals to and receiving signals from an external device (e.g., an external camera).
- The communication unit 130 may support wired or wireless communication.
- For this, the communication unit 130 may perform channel encoding, modulation, demodulation, and channel decoding.
- The input unit 140 performs functions for detecting a user's input.
- The input unit 140 may transmit a command or data input by the user to the processor 110.
- The input unit 140 may include at least one hardware module for input.
- For example, the hardware module may include at least one of a keyboard, a mouse, a digitizer, a sensor, a touchpad, and a button.
- The output unit 150 outputs data and information in a form recognizable by the user.
- The output unit 150 may configure and display a visually recognizable screen.
- The output unit 150 may also be replaced with a device capable of output in a form recognizable by senses other than sight.
- The output unit 150 may include at least one hardware module for output.
- For example, the hardware modules may include at least one of a speaker, a liquid crystal display (LCD), a light-emitting diode (LED), a light-emitting polymer display (LPD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), and a flexible LED (FLED).
- Noise removal according to various embodiments of the present disclosure is performed using a plurality of cameras.
- An environment for acquiring a source image of a texture according to an embodiment of the present invention is shown in FIG. 2 below.
- FIG. 2 is a diagram illustrating an example of a source image capturing environment of a texture of a 3D model according to an embodiment of the present invention.
- Referring to FIG. 2, a plurality of cameras 211, 212a, 212b, and 212c are disposed around a photographing target 210 to acquire a texture.
- One of the plurality of cameras 211, 212a, 212b, and 212c may be selected as the reference camera 211, and the others 212a, 212b, and 212c may be used as candidate cameras.
- The reference camera 211 may be selected from among the plurality of cameras 211, 212a, 212b, and 212c based on at least one of the field of view (FOV), the distance to the target 210, and the angle to the target 210.
- For example, a camera whose lens direction is closest to perpendicular to the photographed surface of the target 210, and whose photographing area includes the entire target 210, may be selected as the reference camera 211.
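- As an illustration (not part of the patent text), this selection rule can be sketched as follows in Python, assuming each camera is described by a unit view-direction vector, the surface by its outward normal, and a precomputed flag for whether the camera's shooting range contains the whole surface:

```python
import numpy as np

def select_reference_camera(view_dirs, covers_surface, surface_normal):
    """Pick the camera whose lens direction is closest to perpendicular
    to the photographed surface, among cameras that contain the whole
    surface in their shooting range. All inputs are assumptions for
    this sketch, not structures defined by the patent."""
    n = surface_normal / np.linalg.norm(surface_normal)
    best_idx, best_alignment = None, -1.0
    for i, d in enumerate(view_dirs):
        if not covers_surface[i]:
            continue  # the surface must lie fully inside the shooting range
        d = d / np.linalg.norm(d)
        # A lens looking straight at the surface points against the
        # outward normal, so maximize -dot(view_dir, normal).
        alignment = -float(np.dot(d, n))
        if alignment > best_alignment:
            best_idx, best_alignment = i, alignment
    return best_idx
```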
- The image captured by the reference camera 211 becomes the basic texture, and the images captured by the candidate cameras 212a, 212b, and 212c become auxiliary textures for compensating for noise in the basic texture.
- The plurality of cameras 211, 212a, 212b, and 212c may be understood as components of the image processing apparatus or as separate apparatuses. That is, referring to FIGS. 1 and 2, the plurality of cameras 211, 212a, 212b, and 212c may be part of the input unit 140. Alternatively, the cameras may be connected to another device having a communication function, and the image processing apparatus 100 may receive the captured images from that device through the communication unit 130.
- FIG. 3 is a diagram illustrating an embodiment of a procedure for generating a texture of a 3D model according to an embodiment of the present invention. FIG. 3 illustrates an operation method of the image processing apparatus 100 of FIG. 1.
- First, the image processing apparatus generates a basic texture using a reference camera.
- Specifically, the image processing apparatus selects, as the reference camera, the camera having the camera view that best expresses the texture among the plurality of cameras, and generates the basic texture based on an image captured by the reference camera.
- Here, the camera view that best expresses the texture may mean a view that contains the entire target photographed to obtain the texture and whose view direction makes an angle closest to perpendicular with the target.
- Next, the image processing apparatus generates at least one candidate image by using at least one candidate camera.
- That is, the image processing apparatus acquires at least one image captured by at least one candidate camera other than the reference camera.
- The at least one candidate camera has a camera view different from that of the reference camera, but one that still captures the texture to be used in the 3D model.
- For example, the at least one candidate camera may be a camera that captures the target at a different angle or distance from the reference camera.
- Finally, the image processing apparatus removes noise from the basic texture by using the at least one candidate image. For example, the image processing apparatus generates at least one auxiliary texture from the at least one candidate image, and corrects at least one pixel value of the basic texture using pixel values of the auxiliary textures at the same 3D coordinates.
- This noise removal operation using the candidate images may be defined in various ways, as in the examples below.
- As a first example, the image processing apparatus may combine the values representing the pixel at the same 3D coordinates in the basic texture and the at least one auxiliary texture, and set the combined value as the value of that pixel. That is, the image processing apparatus may determine a corrected value for each 3D coordinate by combining the values of the basic texture and the at least one auxiliary texture at that coordinate, and set the corrected value as the pixel value at the coordinate.
- Here, combining may include an averaging operation.
- The averaging may be performed with the same weight regardless of the region of the texture.
- Alternatively, the image processing apparatus may divide the texture into a plurality of regions and apply different weights per region. For example, in a region in which little or no noise is generated, the image processing apparatus may set the weight of the basic texture relatively higher than those of the auxiliary textures. Conversely, for a region in which noise above a certain level is generated, the image processing apparatus may set the weights of the auxiliary textures relatively higher than that of the basic texture.
- The adaptive setting of these weights can be made by quantizing the noise level into discrete levels and defining a combination of weights for each level, or by quantifying the noise level and defining a weight-determining function that takes the noise-level value as input. A minimal sketch of such region-adaptive averaging is shown below.
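- The sketch assumes aligned HxWx3 float textures and a per-pixel noise-level map in [0, 1]; how that map is estimated is outside the scope of this illustration:

```python
import numpy as np

def denoise_weighted_average(base, aux_textures, noise_level):
    """Blend the basic texture with the mean of the auxiliary textures,
    giving the basic texture less weight where the noise level is high.
    `base` and each element of `aux_textures` are HxWx3 arrays on the
    same texture coordinates; `noise_level` is an HxW map in [0, 1]."""
    aux_mean = np.mean(aux_textures, axis=0)
    w_base = 1.0 - noise_level[..., None]  # high noise -> low base weight
    return w_base * base + (1.0 - w_base) * aux_mean
```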
- As a second example, the image processing apparatus may remove noise by removing differences in expression between the basic texture and the auxiliary textures.
- That is, the image processing apparatus may remove a visual expression that appears in the basic texture but not in the auxiliary textures for the same pixel at given 3D coordinates.
- To this end, the image processing apparatus may remove noise included in the basic texture based on per-coordinate difference values between the basic texture and the auxiliary textures. For example, the image processing apparatus generates, for each three-dimensional coordinate, a plurality of difference values such as 'basic texture - auxiliary texture #1', 'basic texture - auxiliary texture #2', ..., 'basic texture - auxiliary texture #n'.
- Then, the maximum difference value among the difference values at each coordinate may be applied to the corresponding coordinate of the basic texture.
- For example, if among 'basic texture - auxiliary texture #1', 'basic texture - auxiliary texture #2', ..., 'basic texture - auxiliary texture #n' the value 'basic texture - auxiliary texture #2' is the maximum, the image processing apparatus subtracts 'basic texture - auxiliary texture #2' from the pixel value at the corresponding coordinates.
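- This maximum-difference removal can be sketched as follows; it is one illustrative reading of the text above, and the clipping to positive differences is an assumption (keeping only detail present in the basic texture but absent from the auxiliary textures):

```python
import numpy as np

def remove_max_difference(base, aux_textures):
    """Per coordinate, subtract the largest 'basic - auxiliary'
    difference from the basic texture. Arrays are HxWx3 floats on
    shared texture coordinates."""
    diffs = np.stack([base - aux for aux in aux_textures])  # (n, H, W, 3)
    max_diff = diffs.max(axis=0)
    # Keep only positive differences: expression that exists in the
    # basic texture but in none of the auxiliary textures (assumed noise).
    return base - np.clip(max_diff, 0.0, None)
```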
- According to another embodiment, the noise removal operation of removing the difference in expression may be performed based on the average of two or more auxiliary textures, rather than the pixel value of one specific auxiliary texture.
- Specifically, the image processing apparatus may operate per coordinate as follows. The image processing apparatus classifies the pixel values of the basic texture and the auxiliary textures into a plurality of value sections, and sets the target value as the average of the pixel values in the section to which the most textures belong. Then, the image processing apparatus sets the current value as the average of the pixel values of the textures in the section to which the basic texture belongs. Finally, the image processing apparatus corrects the pixel value of the basic texture at the corresponding coordinate using the difference between the target value and the current value (e.g., adds the difference to the pixel value).
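- A per-coordinate sketch of this value-section correction, with scalar pixel values for clarity (the section count and value range are assumptions, not values from the patent):

```python
import numpy as np

def value_section_correction(values, base_value, n_bins=8, lo=0.0, hi=255.0):
    """`values` holds the pixel values of the basic and auxiliary
    textures at one coordinate (the basic value included); `base_value`
    is the basic texture's value at that coordinate."""
    vals = np.asarray(values, dtype=float)
    bins = np.clip(((vals - lo) / (hi - lo) * n_bins).astype(int), 0, n_bins - 1)
    # Target value: average of the section holding the most textures.
    majority = np.bincount(bins, minlength=n_bins).argmax()
    target = vals[bins == majority].mean()
    # Current value: average of the section holding the basic texture.
    base_bin = int(np.clip((base_value - lo) / (hi - lo) * n_bins, 0, n_bins - 1))
    current = vals[bins == base_bin].mean()
    # Correct the basic texture's pixel by the target/current gap.
    return base_value + (target - current)
```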
- As a third example, the image processing apparatus may remove noise from the basic texture by using the auxiliary textures together with morphing and projection operations. The morphing and projection operations are described as follows.
- FIGS. 4A and 4B are diagrams illustrating examples of mapping of a 3D model and a 2D image expressed in a 3D coordinate system according to an embodiment of the present invention.
- FIG. 4A shows an example in which the mesh of a 3D model is loaded into a 3D coordinate system and the captured 2D image is then projected into that coordinate system, according to the shooting information, from the position of the camera and in its shooting direction, so that the mesh and the 2D image are superimposed.
- Here, projecting a 2D image into a 3D coordinate system means mapping the 2D image into 3D space at the coordinate-system position corresponding to the camera that acquired the image, in that camera's shooting direction and according to its angle of view. This may be performed by setting the user's screen viewpoint to the field of view of the corresponding camera, rendering the 3D model to the user, and superimposing the 2D image on it. FIG. 4A shows the resulting screen.
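- Assuming a standard pinhole camera model (an assumption; the patent does not fix a camera model), the projection underlying this superposition maps a mesh vertex to pixel coordinates using the camera pose (R, t) and intrinsic matrix K:

```python
import numpy as np

def project_vertex(vertex, R, t, K):
    """Project one 3D vertex (length-3 array) into the image plane of
    the camera that captured the 2D image. R is 3x3, t is length-3,
    K is 3x3; together they stand in for the 'shooting information'
    described above."""
    cam = R @ vertex + t        # world coordinates -> camera coordinates
    uvw = K @ cam               # camera coordinates -> homogeneous pixels
    return uvw[:2] / uvw[2]     # perspective division -> (u, v)
```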
- As shown in FIG. 4A, the projected 2D image may not coincide with the corresponding surface of the 3D shape represented by the mesh.
- For example, one vertex 410 of the 3D model and the corresponding vertex 420 in the 2D image are displaced from each other. This phenomenon may occur depending on the distortion characteristics of the camera and the surface characteristics of the scan target (e.g., a transparent surface).
- In this case, the image processing apparatus may modify the shape of the 3D model, based on the 2D image representing a specific region of the model, so that the corresponding part of the 3D model matches the 2D image.
- For example, the image processing apparatus may match the mesh of the 3D model with the representation of the 2D image by moving the coordinates representing a part of the mesh (e.g., vertices) to the coordinates at which the corresponding part (e.g., vertices) is expressed in the 2D image projected into the 3D coordinate system.
- Alternatively, the image processing apparatus may receive, as user input, the coordinates to which a part of the mesh (e.g., vertices) should be moved, and match the mesh of the 3D model with the representation of the 2D image accordingly.
- In the example of FIGS. 4A and 4B, the image processing apparatus may match the expressions by moving vertex 410 of the 3D model to vertex 420 in the 2D image.
- The result is shown in FIG. 4B.
- At this time, the image processing apparatus may record the modification details so that the modified shape of the 3D model can later be restored. A minimal sketch of recording and undoing such vertex moves follows.
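- The sketch assumes the mesh is an Nx3 vertex array; the class and its methods are illustrative, not structures from the patent:

```python
import numpy as np

class MorphRecorder:
    """Moves mesh vertices to match the projected 2D image and records
    each move so the original shape can be restored afterwards."""

    def __init__(self, vertices):
        self.vertices = vertices   # Nx3 float array, edited in place
        self.history = []          # (index, original position) pairs

    def move_vertex(self, index, target):
        # Remember the original coordinates before matching the mesh
        # vertex to the corresponding point of the projected 2D image.
        self.history.append((index, self.vertices[index].copy()))
        self.vertices[index] = np.asarray(target, dtype=float)

    def restore(self):
        # Undo the moves in reverse order to recover the original shape.
        for index, original in reversed(self.history):
            self.vertices[index] = original
        self.history.clear()
```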
- Subsequently, the image processing apparatus may generate a texture using the 2D image corresponding to the mesh of the modified 3D model. For example, the image processing apparatus selects the portion of the 2D image corresponding to each face of the mesh, generates the coordinate correspondence between the two as a UV map, and saves that portion of the 2D image as a texture.
- Thereafter, the image processing apparatus restores the modified 3D model to its original shape by using the shape modification history or the correspondence between node positions before and after the modification, and applies the texture, according to the restored shape, to the original 3D model.
- Even though the mesh of the 3D model was modified, the UV map expresses the correspondence between the modified mesh and the 2D image as it is, so the texture taken from that part of the 2D image is expressed in a correspondingly deformed shape on the deformed mesh.
- Additionally, the image processing apparatus may identify, for each camera view, the difference between the shape of the 2D image corresponding to that view and the corresponding shape of the 3D model, and may create and present to the user a task list for texture improvement, ordered starting from the camera with the largest difference.
- A texture generation procedure including the aforementioned morphing and projection operations is described below with reference to FIG. 5.
- FIG. 5 is a diagram illustrating another embodiment of a procedure for generating a texture of a 3D model according to an embodiment of the present invention. FIG. 5 illustrates an operation method of the image processing apparatus 100 of FIG. 1.
- First, the image processing apparatus selects a reference camera for a given surface of the target.
- For example, the image processing apparatus may select the reference camera based on the position and direction information of the target photographed to obtain the texture and the position and direction information of the cameras.
- Specifically, the image processing apparatus may select, as the reference camera, a camera whose camera view includes the entire surface of the target to be used as the texture and whose angle between that surface and the lens direction is closest to a right angle.
- Alternatively, the image processing apparatus may select the reference camera based on the images captured by the respective cameras.
- For example, the image processing apparatus may select, as the reference camera, the camera that captured the sharpest image.
- As another example, the image processing apparatus may display the images captured by the plurality of cameras and select, as the reference camera, the camera that captured an image designated by the user.
- Next, the image processing apparatus selects a candidate camera group corresponding to the selected reference camera. That is, the image processing apparatus selects the other cameras whose images will be used to remove noise from the basic texture obtained by the reference camera.
- For example, the cameras included in the candidate camera group may be selected from among cameras whose shooting range includes the photographed surface of the target without cutting it off.
- As another example, the cameras included in the candidate camera group may be selected from among cameras whose shooting range includes all the feature points of the texture.
- As yet another example, the cameras included in the candidate camera group may be predefined; that is, a candidate camera group corresponding to each camera may be defined in advance.
- Next, the image processing apparatus morphs the 3D object based on the feature points of the target as seen from the reference camera.
- Due to the shooting angle and similar factors, the boundary line in the captured image may not match the shape of the surface of the 3D object. Accordingly, the image processing apparatus may change the shape of the 3D object so that the shape of the boundary line of the surface to which the texture will be mapped matches the captured image.
- The image processing apparatus then projects the basic texture captured by the reference camera.
- That is, the image processing apparatus maps the basic texture onto the morphed 3D object.
- In this case, a mask may be applied to the basic texture in consideration of the subsequent projection of the auxiliary textures.
- After the projection, the image processing apparatus may restore the shape of the morphed 3D object.
- Next, the image processing apparatus projects the auxiliary textures captured by the candidate cameras.
- That is, the image processing apparatus maps the auxiliary textures onto the 3D object.
- As with the basic texture, the image processing apparatus may morph the 3D object according to the shape of the boundary line of each auxiliary texture before projecting it.
- After projection, the image processing apparatus may restore the shape of the morphed 3D object. That is, morphing and restoration operations may be performed for each camera before and after texture projection.
- Next, the image processing apparatus adjusts color values based on the colors of the reference camera. Color values for the same part may differ between cameras due to camera settings, product characteristics of the cameras, and external interference at the time of shooting. Accordingly, the image processing apparatus adjusts the color values of the candidate images captured by the candidate cameras based on the reference image captured by the reference camera. For example, the image processing apparatus calculates per-coordinate difference values of pixel values between the reference image and one candidate image, and then reflects (e.g., subtracts) in all pixels of the candidate image the average of the difference values corresponding to the noise-free portion. That is, the average is computed from the difference values of the noise-free portion, but the color adjustment is applied to the entire image.
- To identify the noise-free portion, the image processing apparatus may select the portion having relatively low values within the distribution of the difference values.
- Alternatively, the image processing apparatus may select some difference values based on their coordinate values.
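- A sketch of this color adjustment, assuming the noise-free portion is given as a boolean mask (the text above describes its selection only qualitatively):

```python
import numpy as np

def adjust_candidate_color(reference, candidate, noise_free_mask):
    """Estimate the per-channel color shift on noise-free pixels only,
    then apply it to the whole candidate image, as described above.
    `reference` and `candidate` are HxWx3; `noise_free_mask` is HxW."""
    diff = candidate.astype(float) - reference.astype(float)
    mean_shift = diff[noise_free_mask].mean(axis=0)  # per-channel average
    return candidate.astype(float) - mean_shift      # subtract everywhere
```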
- Next, the image processing apparatus merges the projected textures.
- That is, the image processing apparatus merges the plurality of projected textures. The merging may be performed on the entire texture or on part of the texture.
- For example, the image processing apparatus may divide the texture into a plurality of regions, identify at least one region in which noise is present, and then merge the textures only for the identified region(s).
- As described above, the image processing apparatus adjusts the color values of the candidate images according to the reference image, based on an analysis of the pixel values.
- According to another embodiment, the image processing apparatus may generate a plurality of color adjustment results and display them, and when the user designates one result, apply the designated result. This accounts for possible errors in judging the noise region, among other factors.
- According to the embodiments described above, noise included in the basic texture obtained through the reference camera may be removed by using the auxiliary textures obtained through the candidate cameras.
- Furthermore, if the characteristics of the noise are known, the noise can be removed more effectively.
- To this end, the image processing apparatus may utilize information on the cause of the noise.
- For example, noise may be caused by lighting.
- In this case, the image processing apparatus may obtain information on the position, brightness, color, and other properties of the installed lighting, and predict the noise characteristics based on the obtained information.
- As another example, the image processing apparatus may obtain information on the surface material of the photographed target and predict the noise characteristics based on the material.
- The image processing apparatus may then extract the noise from the basic texture based on the predicted noise characteristics, and perform the noise removal operation only in the area in which the extracted noise exists. In this case, the noise removal operation is not performed in noise-free regions, so the amount of computation can be reduced.
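- A sketch of restricting the correction to the extracted noise region, assuming the extraction yields a boolean mask and that the correction is a simple replacement by the auxiliary-texture mean (one possible choice, not the patent's prescribed operation):

```python
import numpy as np

def denoise_masked_region(base, aux_mean, noise_mask):
    """Apply the correction only where noise was extracted, leaving
    clean regions untouched to reduce computation. `base` and
    `aux_mean` are HxWx3 arrays; `noise_mask` is an HxW boolean map."""
    out = base.copy()
    out[noise_mask] = aux_mean[noise_mask]  # correct noisy pixels only
    return out
```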
- The image processing apparatus and the image processing method according to the embodiments described above may be implemented in the form of program instructions that can be executed through various computer means and recorded on a computer-readable medium.
- The computer-readable medium may include program instructions, data files, and data structures, alone or in combination.
- The program instructions recorded on the medium may be specially designed and configured for the embodiments, or may be known and available to those skilled in the art of computer software.
- Examples of the computer-readable recording medium include magnetic media such as hard disks, floppy disks, and magnetic tapes; optical media such as CD-ROMs and DVDs; magneto-optical media; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory.
- Examples of program instructions include not only machine-language code such as that produced by a compiler, but also high-level language code that can be executed by a computer using an interpreter or the like.
- The content disclosed in this specification may be used to remove noise from a texture.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Generation (AREA)
- Image Processing (AREA)
Claims (12)
- 1. An image processing method performed by an image processing apparatus, the method comprising: generating a basic texture by using a reference camera among a plurality of cameras for photographing a target; generating at least one candidate image by using at least one candidate camera among the plurality of cameras; and removing noise from the basic texture by using the at least one candidate image.
- 2. The method according to claim 1, wherein the reference camera includes a camera, among the plurality of cameras, whose shooting range includes the photographed surface of the target and whose lens direction is most similar to the direction perpendicular to the surface.
- 3. The method according to claim 1, wherein the at least one candidate camera includes at least one camera, among the plurality of cameras, whose shooting range includes the photographed surface of the target.
- 4. The method according to claim 1, wherein the removing of the noise comprises: determining a corrected value by combining values representing pixels on the same three-dimensional coordinates in the basic texture and the at least one candidate image; and setting the corrected value as the value of the corresponding pixel.
- 5. The method according to claim 4, wherein the combining of the values includes averaging the values, and weights for the averaging vary according to a plurality of regions into which the basic texture is divided.
- 6. The method according to claim 1, wherein the removing of the noise comprises: determining differential values for each candidate image with respect to values representing pixels on the same three-dimensional coordinates in the basic texture and the at least one candidate image; and applying the maximum value among the differential values to the basic texture.
- 7. The method according to claim 1, wherein the removing of the noise comprises: morphing a three-dimensional (3D) object according to the shape of the boundary line of the basic texture, and projecting the basic texture onto the morphed 3D object; and morphing the 3D object according to the shape of the boundary line of an auxiliary texture generated from the at least one candidate image, and projecting the auxiliary texture onto the morphed 3D object.
- 8. The method according to claim 1, wherein the removing of the noise comprises: obtaining information related to the noise; extracting the noise from the basic texture based on the information related to the noise; and performing an operation for removing the noise on the region in which the noise exists.
- 9. The method according to claim 1, further comprising adjusting colors of the candidate images based on the reference texture.
- 10. The method according to claim 9, wherein an adjustment value for adjusting the colors is determined based on pixel values of the noise-free portion.
- 11. An image processing apparatus comprising a processor and a memory, wherein the processor: generates a basic texture by using a reference camera among a plurality of cameras for photographing a target; generates at least one candidate image by using at least one candidate camera among the plurality of cameras; and removes noise from the basic texture by using the at least one candidate image.
- 12. A computer-readable recording medium on which a computer program for performing the method of claim 1 is recorded.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2020-0151654 | 2020-11-13 | ||
KR1020200151654A KR102551194B1 (en) | 2020-11-13 | 2020-11-13 | Method and apparatus for generating textures of 3-dimension model |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022103190A1 (en) | 2022-05-19 |
Family
ID=81601588
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2021/016503 (WO2022103190A1) | Method and device for generating texture of three-dimensional model | 2020-11-13 | 2021-11-12 |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR102551194B1 (en) |
WO (1) | WO2022103190A1 (en) |
- 2020-11-13 KR KR1020200151654A patent/KR102551194B1/en active IP Right Grant
- 2021-11-12 WO PCT/KR2021/016503 patent/WO2022103190A1/en active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20030070578A (en) * | 2003-08-12 | 2003-08-30 | 학교법인고려중앙학원 | Method for automatic animation of three dimensions scan face data |
KR101748674B1 (en) * | 2016-05-18 | 2017-06-19 | 연세대학교 산학협력단 | Method and apparatus for producing three-dimensional image |
KR20190059092A (en) * | 2017-11-22 | 2019-05-30 | 한국전자통신연구원 | Method for reconstructing three dimension information of object and apparatus for the same |
KR101875047B1 (en) * | 2018-04-24 | 2018-07-06 | 주식회사 예간아이티 | System and method for 3d modelling using photogrammetry |
JP2020166652A (en) * | 2019-03-29 | 2020-10-08 | キヤノン株式会社 | Image processing apparatus, image processing method, and program |
Also Published As
Publication number | Publication date |
---|---|
KR20220065301A (en) | 2022-05-20 |
KR102551194B1 (en) | 2023-07-05 |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21892357; Country of ref document: EP; Kind code of ref document: A1
 | NENP | Non-entry into the national phase | Ref country code: DE
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 21892357; Country of ref document: EP; Kind code of ref document: A1
 | 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 20.09.2023)