WO2017094536A1 - Image processing apparatus and image processing method - Google Patents
Image processing apparatus and image processing method
- Publication number
- WO2017094536A1 (PCT/JP2016/084381)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- point
- ThDepth
- color
- image data
- dimensional
- Prior art date
Classifications
- G06T15/50—3D [Three Dimensional] image rendering; Lighting effects
- G06T15/205—3D [Three Dimensional] image rendering; Geometric effects; Perspective computation; Image-based rendering
- G06T1/00—General purpose image data processing
- G06T5/00—Image enhancement or restoration
- G06T7/564—Image analysis; Depth or shape recovery from multiple images from contours
- G06T7/70—Image analysis; Determining position or orientation of objects or cameras
- G06T2200/04—Indexing scheme for image data processing or generation, in general, involving 3D image data
- G06T2210/44—Indexing scheme for image generation or computer graphics; Morphing
Definitions
- The present disclosure relates to an image processing apparatus and an image processing method, and more particularly to an image processing apparatus and an image processing method capable of changing a high-accuracy image so as to follow high-speed movement of a virtual viewpoint.
- In a captured image, a background object is hidden by a foreground object. Therefore, when the virtual viewpoint differs from the viewpoint at the time of shooting, an occlusion area occurs: an area of the background object that is hidden by the foreground object in the captured image but is not hidden in the virtual-viewpoint image. Since no image of the occlusion area exists in the captured image, it must be newly generated.
- As a method of generating an image of the occlusion area, there is, for example, a method of generating the occlusion-area image at a predetermined virtual viewpoint using a captured image taken from a viewpoint different from that predetermined viewpoint (see, for example, Patent Document 1).
- The present disclosure has been made in view of such a situation and makes it possible to change a high-accuracy image so as to follow high-speed movement of a virtual viewpoint.
- An image processing apparatus according to one aspect of the present disclosure includes a position correction unit that corrects, among a plurality of points included in three-dimensional data comprising the three-dimensional positions and color information of points generated from color image data and depth image data of a predetermined viewpoint, the three-dimensional position of each point on the occlusion-area boundary.
- The image processing method according to one aspect of the present disclosure corresponds to the image processing apparatus according to one aspect of the present disclosure.
- In one aspect of the present disclosure, among the plurality of points included in the three-dimensional data comprising the three-dimensional positions and color information of points generated from the color image data and depth image data of a predetermined viewpoint, the three-dimensional position of each occlusion-area boundary point is corrected.
- The image processing apparatus can be realized by causing a computer to execute a program.
- The program to be executed by the computer can be provided by being transmitted via a transmission medium or by being recorded on a recording medium.
- According to one aspect of the present disclosure, an image can be processed. Further, a high-accuracy image can be changed so as to follow high-speed movement of a virtual viewpoint.
- FIG. 1 is a block diagram illustrating a configuration example of an embodiment of an image processing apparatus to which the present disclosure is applied.
- FIG. 2 is a diagram illustrating an example of input image data.
- FIG. 3 is a diagram illustrating the three-dimensional positions of the background, partial circles, rectangle, and cylinders of FIG. 2.
- FIG. 4 is a flowchart explaining the color image data generation processing of the image processing apparatus of FIG. 1.
- FIG. 5 is a flowchart explaining details of the occlusion countermeasure process of FIG. 4.
- FIG. 6 is a flowchart explaining details of the initial position calculation process of FIG. 5.
- FIG. 7 is a flowchart explaining details of the patch generation process of FIG. 5.
- FIG. 8 is a diagram illustrating an example of triangular patches corresponding to connection information.
- FIG. 9 is a flowchart explaining details of the movement calculation process of FIG. 5.
- FIG. 10 is a flowchart explaining details of the movement direction determination process of FIG. 9.
- FIG. 11 is a flowchart explaining details of the movement direction determination core process of FIG. 10.
- FIG. 12 is a diagram illustrating examples of movement directions.
- FIG. 13 is a diagram explaining the two-dimensional positions corresponding to depths.
- FIG. 14 is a flowchart explaining details of the movement vector determination process of FIG. 9.
- FIG. 15 is a flowchart explaining details of the movement vector determination core process of FIG. 14.
- FIG. 16 is a flowchart explaining the alpha determination process of FIG. 15.
- FIG. 17 is a flowchart explaining details of the mvz calculation process of FIG. 15.
- FIG. 19 is a flowchart explaining details of the color correction process of FIG. 5.
- FIG. 20 is a flowchart explaining details of the beta determination process of FIG. 19.
- Subsequent figures include a flowchart explaining the color correction core process, a flowchart explaining the position correction process of FIG. 5, a flowchart explaining details of the position correction core process, and a graph showing the x and z coordinates of the points whose y coordinate is y1 among the three-dimensional positions of the points.
- 1. First embodiment: image processing apparatus (FIGS. 1 to 30)
- 2. Second embodiment: computer (FIG. 31)
- FIG. 1 is a block diagram illustrating a configuration example of an embodiment of an image processing apparatus to which the present disclosure is applied.
- The image processing apparatus 10 generates color image data of an arbitrary virtual viewpoint within a predetermined range, different from the one viewpoint, from color image data containing the color information of each pixel at a predetermined single viewpoint and depth image data containing a depth representing the position of the subject in the depth direction at each pixel.
- The three-dimensional data generation unit 11 of the image processing apparatus 10 includes a position calculation unit 31 and a patch generation unit 32.
- The three-dimensional data generation unit 11 generates, from the color image data and the depth image data, three-dimensional data comprising the three-dimensional positions of a plurality of points (vertices), their color information, and connection information indicating their connection relationships.
- Color image data and depth image data of one predetermined viewpoint are input to the position calculation unit 31 as input image data from a camera (not shown) or the like.
- The position calculation unit 31 treats each pixel of the color image data as a point and generates the three-dimensional position of each point based on the depth image data.
- The position calculation unit 31 associates the three-dimensional position of each point with color information based on the color image data and supplies the result to the patch generation unit 32.
- The patch generation unit 32 generates connection information based on the three-dimensional positions of the points supplied from the position calculation unit 31, so that one or more triangular patches are formed with the points as vertices.
- The patch generation unit 32 supplies three-dimensional data consisting of the three-dimensional position, color information, and connection information of each point to the movement vector calculation unit 12.
- The movement vector calculation unit 12 detects the boundary points of the occlusion area, where the position (depth) in the z direction (depth direction) is discontinuous, based on the three-dimensional data supplied from the patch generation unit 32.
- Based on the three-dimensional position of each point, the movement vector calculation unit 12 determines, for each boundary point of the occlusion area, the movement direction of its two-dimensional position on the screen and the movement vector of its three-dimensional position. Further, the movement vector calculation unit 12 sets the movement directions and movement vectors of points other than the occlusion-area boundary points to values indicating that they do not move.
- The movement vector calculation unit 12 supplies the movement direction and movement vector of each point, together with the three-dimensional data, to the color correction unit 13.
- The color correction unit 13 corrects the color information included in the three-dimensional data based on the movement directions and movement vectors supplied from the movement vector calculation unit 12.
- The color correction unit 13 supplies the three-dimensional data, including the corrected color information, and the movement vectors to the position correction unit 14.
- The position correction unit 14 corrects the three-dimensional positions of the occlusion-area boundary points among the plurality of points included in the three-dimensional data, based on the movement vectors supplied from the color correction unit 13.
- The position correction unit 14 supplies the corrected three-dimensional data to the color image data generation unit 15.
- The color image data generation unit 15 generates color image data of a virtual viewpoint, input by a user operation or the like, based on the three-dimensional data supplied from the position correction unit 14. Specifically, the color image data generation unit 15 generates the color image data of the virtual viewpoint by projecting the subject, arranged in a three-dimensional space based on the three-dimensional data, from the virtual viewpoint onto a two-dimensional plane. The color image data generation unit 15 then outputs the color image data of the virtual viewpoint.
- The color image data generation unit 15 generates the color image data of the virtual viewpoint using the same three-dimensional data regardless of the virtual viewpoint. Therefore, by generating the three-dimensional data in advance, the color image data of the virtual viewpoint can be changed so as to follow high-speed movement of the virtual viewpoint.
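The projection step described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it splats colored 3-D points into a virtual camera using a pinhole intrinsic matrix and a z-buffer, and the function name `render_virtual_view` and parameters `K`, `R`, `t` are hypothetical (a real renderer would rasterize the triangular patches rather than individual points).

```python
import numpy as np

def render_virtual_view(points, colors, K, R, t, width, height):
    """Hypothetical sketch: splat colored 3-D points into a virtual camera.

    points: (N, 3) world coordinates; colors: (N, 3) RGB values;
    K: 3x3 pinhole intrinsics of the virtual camera; R, t: its extrinsics.
    A z-buffer keeps the nearest point per pixel, standing in for
    rasterization of the triangular patches described in the text."""
    image = np.zeros((height, width, 3), dtype=np.uint8)
    zbuf = np.full((height, width), np.inf)
    cam = (R @ points.T).T + t            # world -> virtual-camera coordinates
    proj = (K @ cam.T).T                  # homogeneous pixel coordinates
    for (u, v, w), z, c in zip(proj, cam[:, 2], colors):
        if z <= 0:
            continue                      # point is behind the virtual camera
        px, py = int(round(u / w)), int(round(v / w))
        if 0 <= px < width and 0 <= py < height and z < zbuf[py, px]:
            zbuf[py, px] = z              # nearer point wins
            image[py, px] = c
    return image
```

Because the same point cloud is reused, only `R` and `t` change when the virtual viewpoint moves, which is what allows the output to follow viewpoint changes quickly.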
- FIG. 2 is a diagram illustrating an example of input image data.
- The input image data in FIG. 2 is composed of the color image data 51 in FIG. 2A and the depth image data 52 in FIG. 2B.
- In FIG. 2A, the color information of the color image data is represented by patterns.
- The color image data 51 contains a background 70, partial circles 71 and 72 (circles with a missing part), a rectangle 73, and cylinders 74 and 75.
- In the depth image data 52, the pixel value increases as the distance to the subject increases. Accordingly, the background 70, which is farthest, is the whitest (brightest), and the partial circles 71 and 72, the rectangle 73, the cylinder 74, and the cylinder 75 become progressively darker, that is, nearer, in this order.
- FIG. 3 is a diagram illustrating the three-dimensional positions of the background 70, the partial circle 71, the partial circle 72, the rectangle 73, the cylinder 74, and the cylinder 75 in FIG. 2.
- The x and y coordinates of the three-dimensional positions of the background 70, the partial circle 71, the partial circle 72, the rectangle 73, the cylinder 74, and the cylinder 75 correspond to the horizontal and vertical position coordinates on the screen of the color image data 51. Their z coordinates correspond to the pixel values of the depth image data 52.
- The area overlapping the cylinder 75 is an occlusion area: it is hidden and invisible from the viewpoint of the input image data but is not hidden from the virtual viewpoint.
- Since the occlusion area is an area that cannot be seen from the viewpoint of the input image data, the input image data contains no color image data for the occlusion area. The image processing apparatus 10 therefore generates three-dimensional data from the input image data and moves the three-dimensional positions of the occlusion-area boundary points in the three-dimensional data, thereby making it possible to generate the color image data of the occlusion area with high accuracy.
- FIG. 4 is a flowchart explaining the color image data generation processing of the image processing apparatus 10 of FIG. 1. This processing is started, for example, when input image data is input to the image processing apparatus 10. Note that the variables in the color image data generation processing are common to all of its sub-processes and can be referenced and updated in each of them.
- In step S11 of FIG. 4, the image processing apparatus 10 performs an occlusion countermeasure process that enables the color image data of the occlusion area to be generated with high accuracy. Details of the occlusion countermeasure process will be described with reference to FIG. 5.
- In step S12, the color image data generation unit 15 of the image processing apparatus 10 determines whether the virtual viewpoint input by a user operation or the like has been changed.
- If the virtual viewpoint has been changed, in step S13 the color image data generation unit 15 generates and outputs color image data of the virtual viewpoint based on the three-dimensional data supplied from the position correction unit 14. The process then proceeds to step S14.
- If it is determined in step S12 that the virtual viewpoint has not been changed, the process proceeds to step S14.
- In step S14, the image processing apparatus 10 determines whether to end the color image data generation processing. If it is determined in step S14 not to end, the process returns to step S12, and steps S12 to S14 are repeated until it is determined that the processing should end.
- If it is determined in step S14 that the color image data generation processing is to be ended, the processing ends.
- FIG. 5 is a flowchart explaining the details of the occlusion countermeasure process in step S11 of FIG. 4.
- In step S31, the position calculation unit 31 of the three-dimensional data generation unit 11 performs an initial position calculation process that associates the three-dimensional position of each point with color information. Details of the initial position calculation process will be described with reference to FIG. 6.
- In step S32, the patch generation unit 32 performs a patch generation process that generates connection information for each point. Details of the patch generation process will be described with reference to FIG. 7.
- In step S33, the movement vector calculation unit 12 performs a movement calculation process that calculates the movement direction and movement vector of each point. Details of the movement calculation process will be described with reference to FIG. 9.
- In step S34, the color correction unit 13 performs a color correction process that corrects the color information of each point. Details of the color correction process will be described with reference to FIG. 19.
- In step S35, the position correction unit 14 performs a position correction process that corrects the three-dimensional positions of the occlusion-area boundary points based on the movement vectors. Details of the position correction process will be described later. After step S35, the process returns to step S11 of FIG. 4 and proceeds to step S12.
- FIG. 6 is a flowchart explaining the details of the initial position calculation process in step S31 of FIG. 5.
- In step S51 of FIG. 6, the position calculation unit 31 sets to 0 both x, which indicates that the pixel to be processed (hereinafter, the target point) is the x-th pixel from the left, and y, which indicates that it is the y-th pixel from the bottom.
- In step S52, the position calculation unit 31 determines whether y is smaller than height, the number of pixels in the vertical direction of the color image data of the input image data. If y is smaller than height, the process proceeds to step S53.
- In step S53, the position calculation unit 31 determines whether x is smaller than width, the number of pixels in the horizontal direction of the color image data of the input image data. If x is smaller than width, the process proceeds to step S54.
- In step S54, the position calculation unit 31 acquires, from the color image data of the input image data, the color information at the position (x, y) on the screen, that is, at the x-th pixel from the left in the y-th row from the bottom.
- In step S55, the position calculation unit 31 acquires the depth z at the position (x, y) from the depth image data of the input image data.
- In step S56, the position calculation unit 31 associates the color information at the position (x, y) with the three-dimensional position (x, y, z) and supplies the result to the patch generation unit 32.
- In step S57, the position calculation unit 31 increments x by 1. The process then returns to step S53, and steps S53 to S57 are repeated until x reaches width.
- If it is determined in step S53 that x is equal to or greater than width, that is, if every pixel in the y-th row from the bottom has been the target point, the process proceeds to step S58.
- In step S58, the position calculation unit 31 sets x to 0 and increments y by 1. The process then returns to step S52, and steps S52 to S58 are repeated until y reaches height.
- If it is determined in step S52 that y is equal to or greater than height, that is, if every pixel has been the target point, the initial position calculation process ends; the process returns to step S31 of FIG. 5 and proceeds to step S32.
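The nested loop of steps S51 to S58 can be sketched as follows. This is a minimal illustration under the assumption that the images are row-major lists indexed from the bottom row; the helper name `initial_positions` is hypothetical.

```python
def initial_positions(color_image, depth_image):
    """Steps S51-S58 as a sketch: pair every pixel's color with the
    three-dimensional position (x, y, z), where z comes from the depth image.
    color_image[y][x] is an RGB triple and depth_image[y][x] a depth value;
    (0, 0) is taken as the bottom-left pixel, matching the flowchart."""
    height = len(color_image)
    width = len(color_image[0])
    points = []
    for y in range(height):          # y-th row from the bottom (step S52/S58)
        for x in range(width):       # x-th pixel from the left (step S53/S57)
            z = depth_image[y][x]                      # step S55
            points.append(((x, y, z), color_image[y][x]))  # steps S54, S56
    return points
```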
- FIG. 7 is a flowchart explaining the details of the patch generation process in step S32 of FIG. 5.
- The processes in steps S71 to S73, S75, and S76 of FIG. 7 are the same as those in steps S51 to S53, S57, and S58 of FIG. 6, except that they are performed by the patch generation unit 32 instead of the position calculation unit 31; their description is therefore omitted as appropriate.
- If it is determined in step S73 that x is smaller than width, the process proceeds to step S74.
- In step S74, the patch generation unit 32 generates connection information for the target point, based on the three-dimensional position (x, y, z) of the target point supplied from the position calculation unit 31, so that triangular patches having the target point as a vertex are formed. The process then proceeds to step S75.
- If it is determined in step S72 that y is equal to or greater than height, that is, if every pixel has been the target point, the patch generation unit 32 supplies three-dimensional data consisting of the three-dimensional position (x, y, z), color information, and connection information of each point to the movement vector calculation unit 12. The process then returns to step S32 of FIG. 5 and proceeds to step S33.
- FIG. 8 is a diagram illustrating an example of triangular patches corresponding to the connection information generated by the patch generation unit 32 of FIG. 1.
- The patch generation unit 32 generates the connection information of each point 91 so that, for example, triangular patches 92 each having three neighboring points 91 as vertices are formed. Specifically, for each 2 × 2 block of points 91, the patch generation unit 32 generates the connection information so that the three points 91 excluding the lower-right point are connected to one another and the three points 91 excluding the upper-left point are connected to one another.
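The 2 × 2 connection rule of FIG. 8 can be sketched as follows. This is an illustrative reading of the figure: the function name and the point indexing (`y * width + x`, rows counted from the bottom) are assumptions, since the patent does not specify an index layout.

```python
def patch_connections(width, height):
    """Sketch of the FIG. 8 patch layout: for every 2x2 block of points,
    emit two triangles -- one omitting the lower-right point and one
    omitting the upper-left point. Points are indexed y * width + x,
    with y counted from the bottom as in the flowcharts (an assumption)."""
    idx = lambda x, y: y * width + x
    triangles = []
    for y in range(height - 1):
        for x in range(width - 1):
            # corners of the 2x2 block
            ul, ur = idx(x, y + 1), idx(x + 1, y + 1)   # upper-left, upper-right
            ll, lr = idx(x, y), idx(x + 1, y)           # lower-left, lower-right
            triangles.append((ul, ur, ll))   # triangle excluding the lower-right
            triangles.append((ur, lr, ll))   # triangle excluding the upper-left
    return triangles
```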
- FIG. 9 is a flowchart explaining the details of the movement calculation process in step S33 of FIG. 5.
- In step S91 of FIG. 9, the movement vector calculation unit 12 performs a movement direction determination process that determines the movement directions of all the points 91. Details of the movement direction determination process will be described with reference to FIG. 10.
- In step S92, the movement vector calculation unit 12 performs a movement vector determination process that determines the movement vectors of all the points 91. Details of the movement vector determination process will be described with reference to FIG. 14.
- FIG. 10 is a flowchart explaining the details of the movement direction determination process in step S91 of FIG. 9.
- The processes in steps S111 to S113, S115, and S116 of FIG. 10 are the same as those in steps S51 to S53, S57, and S58 of FIG. 6, except that they are performed by the movement vector calculation unit 12 instead of the position calculation unit 31; their description is therefore omitted as appropriate.
- If it is determined in step S113 that x is smaller than width, the process proceeds to step S114.
- In step S114, the movement vector calculation unit 12 performs a movement direction determination core process that determines the movement direction of the target point. Details of the movement direction determination core process will be described with reference to FIG. 11. After step S114, the process proceeds to step S115.
- If it is determined in step S112 that y is equal to or greater than height, that is, if every pixel has been the target point, the movement vector calculation unit 12 supplies the movement directions of all the points 91 to the color correction unit 13. The process then returns to step S91 of FIG. 9 and proceeds to step S92.
- FIG. 11 is a flowchart explaining the details of the movement direction determination core process in step S114 of FIG. 10.
- In step S131 of FIG. 11, the movement vector calculation unit 12 sets the movement direction dir of the target point's two-dimensional position on the screen to NONE, which indicates that the point does not move.
- As shown in FIG. 12, besides NONE, the movement direction dir can be set to UU representing the upward direction, RR the right direction, DD the downward direction, and LL the left direction.
- In step S132, the movement vector calculation unit 12 acquires the depths z_LL, z_DL, z_UU, z_DD, z_UR, and z_RR of the points 91 near the target point.
- Specifically, as shown in FIG. 13, the depths z_LL, z_DL, z_UU, z_DD, z_UR, and z_RR are the depths of the points 91 to the left of, to the lower left of, above, below, to the upper right of, and to the right of the target point, that is, the points 91 at the positions (x−1, y), (x−1, y−1), (x, y+1), (x, y−1), (x+1, y+1), and (x+1, y). In FIG. 13, a square represents the pixel corresponding to each point 91.
- In step S133, the movement vector calculation unit 12 obtains the differences diff_LL, diff_DL, diff_UU, diff_DD, diff_UR, and diff_RR between the depth z of the target point and the depths z_LL, z_DL, z_UU, z_DD, z_UR, and z_RR, respectively.
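Steps S132 and S133 can be sketched as follows. The use of absolute differences and the border handling are assumptions (the text compares each difference against the threshold ThDepth but does not state how image borders are treated); `neighbor_depth_diffs` is a hypothetical name.

```python
def neighbor_depth_diffs(depth, x, y):
    """Steps S132-S133 as a sketch: depth differences between the target
    point (x, y) and its left, lower-left, upper, lower, upper-right and
    right neighbours. depth[y][x] indexes rows from the bottom; out-of-range
    neighbours reuse the target's own depth (an assumed border fallback)."""
    h, w = len(depth), len(depth[0])
    z = depth[y][x]
    def at(nx, ny):
        if 0 <= nx < w and 0 <= ny < h:
            return depth[ny][nx]
        return z                       # border fallback (assumption)
    offsets = {'LL': (-1, 0), 'DL': (-1, -1), 'UU': (0, 1),
               'DD': (0, -1), 'UR': (1, 1), 'RR': (1, 0)}
    # absolute differences, assumed since they are compared against ThDepth
    return {k: abs(z - at(x + dx, y + dy)) for k, (dx, dy) in offsets.items()}
```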
- In step S134, the movement vector calculation unit 12 determines whether the conditional expression dir_UU is satisfied.
- When the combination of the depth differences that corresponds to the upward direction exceeds the threshold ThDepth, the movement vector calculation unit 12 determines that the conditional expression dir_UU is satisfied.
- If it is determined in step S134 that the conditional expression dir_UU is satisfied, the movement vector calculation unit 12 detects the target point as a boundary point 91 of the occlusion area and advances to step S135, where it sets the movement direction dir to UU, representing the upward direction. The process then proceeds to step S142.
- If it is determined in step S134 that the conditional expression dir_UU is not satisfied, the movement vector calculation unit 12 determines in step S136 whether the conditional expression dir_RR is satisfied.
- The conditional expression dir_RR is determined to be satisfied in any of the following cases: (1) the differences diff_UR and diff_RR are greater than the threshold ThDepth, the differences diff_LL and diff_DL are less than or equal to ThDepth, and at least one of the differences diff_UU and diff_DD is greater than ThDepth; (2) the differences diff_LL, diff_UU, diff_DD, and diff_DL are all less than or equal to ThDepth, and at least one of the differences diff_UR and diff_RR is greater than ThDepth; or (3) the differences diff_RR and diff_DD are greater than ThDepth, the differences diff_LL and diff_UU are less than or equal to ThDepth, and the differences diff_UR and diff_DL are either both greater than ThDepth or both less than or equal to it.
- When it is determined in step S136 that the conditional expression dir_RR is satisfied, the movement vector calculation unit 12 detects the target point as a boundary point 91 of the occlusion area and advances to step S137, where it sets the movement direction dir to RR, representing the right direction. The process then proceeds to step S142.
- Otherwise, in step S138 the movement vector calculation unit 12 determines whether the conditional expression dir_DD is satisfied.
- When the combination of the depth differences that corresponds to the downward direction exceeds the threshold ThDepth, the movement vector calculation unit 12 determines that the conditional expression dir_DD is satisfied.
- When it is determined in step S138 that the conditional expression dir_DD is satisfied, the movement vector calculation unit 12 detects the target point as a boundary point 91 of the occlusion area and advances to step S139, where it sets the movement direction dir to DD, representing the downward direction. The process then proceeds to step S142.
- Otherwise, in step S140 the movement vector calculation unit 12 determines whether the conditional expression dir_LL is satisfied.
- The conditional expression dir_LL is determined to be satisfied in any of the following cases: (1) the differences diff_LL and diff_DL are greater than the threshold ThDepth, the differences diff_UR and diff_RR are less than or equal to ThDepth, and at least one of the differences diff_UU and diff_DD is greater than ThDepth; (2) the differences diff_UU, diff_UR, diff_RR, and diff_DD are all less than or equal to ThDepth, and at least one of the differences diff_LL and diff_DL is greater than ThDepth; or (3) the differences diff_LL and diff_UU are greater than ThDepth, the differences diff_RR and diff_DD are less than or equal to ThDepth, and the differences diff_UR and diff_DL are either both greater than ThDepth or both less than or equal to it.
- When it is determined in step S140 that the conditional expression dir_LL is satisfied, the movement vector calculation unit 12 detects the target point as a boundary point 91 of the occlusion area and advances to step S141, where it sets the movement direction dir to LL, representing the left direction. The process then proceeds to step S142.
- In step S142, the movement vector calculation unit 12 stores the movement direction dir of the target point. The process then returns to step S114 of FIG. 10 and proceeds to step S115.
- In this way, when any of the conditional expressions is satisfied, the movement vector calculation unit 12 detects the target point as a boundary point 91 of the occlusion area and sets its movement direction dir to a value other than NONE.
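As a partial sketch, the clauses of the conditional expressions that the text reproduces in full (the first clause of dir_RR and its mirror in dir_LL) can be written as follows; the remaining clauses and the dir_UU/dir_DD expressions are omitted because the excerpt does not state them completely.

```python
def movement_direction(diffs, th):
    """Partial sketch of the FIG. 11 core process. Only the fully stated
    clause of conditional expression dir_RR and its mirror in dir_LL are
    implemented; dir_UU, dir_DD and the remaining clauses are omitted.
    diffs maps 'LL', 'DL', 'UU', 'DD', 'UR', 'RR' to depth differences;
    th is the threshold ThDepth."""
    if (diffs['UR'] > th and diffs['RR'] > th and
            diffs['LL'] <= th and diffs['DL'] <= th and
            (diffs['UU'] > th or diffs['DD'] > th)):
        return 'RR'   # depth jump on the right side of the target point
    if (diffs['LL'] > th and diffs['DL'] > th and
            diffs['UR'] <= th and diffs['RR'] <= th and
            (diffs['UU'] > th or diffs['DD'] > th)):
        return 'LL'   # depth jump on the left side of the target point
    return 'NONE'     # not detected as an occlusion-area boundary point
```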
- FIG. 14 is a flowchart explaining the details of the movement vector determination process in step S92 of FIG. 9.
- The processes in steps S151 to S153, S155, and S156 of FIG. 14 are the same as those in steps S51 to S53, S57, and S58 of FIG. 6, except that they are performed by the movement vector calculation unit 12 instead of the position calculation unit 31; their description is therefore omitted as appropriate.
- If it is determined in step S153 that x is smaller than width, the process proceeds to step S154.
- In step S154, the movement vector calculation unit 12 performs a movement vector determination core process that determines the movement vector of the target point. Details of the movement vector determination core process will be described with reference to FIG. 15. After step S154, the process proceeds to step S155.
- If it is determined in step S152 that y is equal to or greater than height, that is, if every pixel has been the target point, the movement vector calculation unit 12 supplies the movement vectors of all the points 91 and the three-dimensional data to the color correction unit 13. The process then returns to step S92 of FIG. 9, then to step S33 of FIG. 5, and proceeds to step S34.
- FIG. 15 is a flowchart for explaining the details of the movement vector determination core processing in step S154 of FIG.
- step S171 of FIG. 15 the movement vector calculation unit 12 determines whether the movement direction dir of the target point is NONE. If it is determined in step S171 that the movement direction of the target point is not NONE, in step S172, the movement vector calculation unit 12 sets depth_cur to the depth z of the target point.
- In step S173, the movement vector calculation unit 12 performs the alpha determination processing for determining alphaX, which represents the sign of the x coordinate of the direction represented by the movement direction dir, and alphaY, which represents the sign of the y coordinate. Details of the alpha determination processing will be described with reference to FIG. 16.
- In step S174, the movement vector calculation unit 12 sets move, which represents the movement amount of the target point in pixels, to 1. The movement vector calculation unit 12 also sets behind_flag to 0.
- behind_flag is a flag indicating whether or not the target point is located behind, in the depth direction, the point 91 whose two-dimensional position on the screen is the position (tar_x, tar_y) obtained in step S176 described later. behind_flag is 1 when the target point is behind that point 91 in the depth direction, and 0 when it is not.
- In step S175, the movement vector calculation unit 12 determines whether move is smaller than the maximum movement amount.
- The maximum movement amount is determined according to the range that the virtual viewpoint can take. If it is determined in step S175 that move is smaller than the maximum movement amount, the process proceeds to step S176.
- In step S176, the movement vector calculation unit 12 obtains tar_x by adding alphaX times move to the x coordinate of the target point. Similarly, the movement vector calculation unit 12 obtains tar_y by adding alphaY times move to the y coordinate of the target point.
- In step S177, the movement vector calculation unit 12 sets depth_tar to the depth z of the point 91 whose two-dimensional position on the screen is the position (tar_x, tar_y).
- In step S178, the movement vector calculation unit 12 determines whether depth_tar is smaller than the value obtained by subtracting the movement threshold from depth_cur. The movement threshold is set in consideration of the depth error.
- If it is determined in step S178 that depth_tar is equal to or greater than the value obtained by subtracting the movement threshold from depth_cur, the movement vector calculation unit 12 determines that the target point is in front of the point 91 at the position (tar_x, tar_y), and the process proceeds to step S180.
- On the other hand, if it is determined in step S178 that depth_tar is smaller than the value obtained by subtracting the movement threshold from depth_cur, the movement vector calculation unit 12 determines that the target point is behind the point 91 at the position (tar_x, tar_y), and the process proceeds to step S179.
- In step S179, the movement vector calculation unit 12 sets behind_flag to 1, and the process proceeds to step S180.
- In step S180, the movement vector calculation unit 12 determines whether behind_flag is 1 and depth_tar is greater than depth_cur.
- If behind_flag is 1 and depth_tar is greater than depth_cur in step S180, that is, if the point 91 at the position (tar_x, tar_y) has changed from the state of being in front of the target point to the state of being behind it, the process proceeds to step S183.
- On the other hand, if behind_flag is not 1 or depth_tar is less than or equal to depth_cur in step S180, that is, if all the points 91 at the positions (tar_x, tar_y) so far are located behind the target point, or if the point 91 at the position (tar_x, tar_y) has not changed from the state of being in front of the target point to the state of being behind it, the process proceeds to step S181.
- In step S181, the movement vector calculation unit 12 increments move by 1, the process returns to step S175, and the subsequent processes are repeated.
- If it is determined in step S175 that move is greater than or equal to the maximum movement amount, the process proceeds to step S182. In step S182, the movement vector calculation unit 12 determines whether behind_flag is 1.
- If it is determined in step S182 that behind_flag is 1, that is, if the point 91 at the position (tar_x, tar_y) has not changed from the state of being in front of the target point to the state of being behind it, the process proceeds to step S183.
- In step S183, the movement vector calculation unit 12 determines the x component mvx of the movement vector of the target point to be alphaX times (move - 1), and the y component mvy to be alphaY times (move - 1).
- That is, the movement vector calculation unit 12 sets the value obtained by subtracting 1 from the current move, that is, the immediately preceding move, as the final move, and determines the x component mvx and the y component mvy using that final move.
- In step S184, the movement vector calculation unit 12 performs the mvz calculation processing for calculating the z component mvz of the movement vector of the target point. Details of the mvz calculation processing will be described with reference to FIG. 17. After the process of step S184, the process proceeds to step S186.
- On the other hand, if it is determined in step S171 that the movement direction dir is NONE, that is, if the target point is not a boundary point of the occlusion area, the process proceeds to step S185.
- Also, if it is determined in step S182 that behind_flag is not 1, that is, if the point 91 at the position (tar_x, tar_y) is never in front of the target point while move goes from 0 to the maximum movement amount, the process proceeds to step S185.
- In step S185, the movement vector calculation unit 12 sets the x component mvx, the y component mvy, and the z component mvz of the movement vector to 0, and the process proceeds to step S186.
- In step S186, the movement vector calculation unit 12 stores the movement vector (mvx, mvy, mvz). The process then returns to step S154 in FIG. 14 and proceeds to step S155.
- In this way, when the target point is a boundary point 91 of the occlusion area, the movement vector calculation unit 12 determines the x component mvx and the y component mvy that move the two-dimensional position of the target point to the two-dimensional position of the point 91 (reference point) that is located in front of the target point, in the direction indicated by the movement direction dir from the target point, and is nearest to the target point.
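- The loop of steps S171 to S186 can be sketched as follows in Python. This is only an illustrative reading of the flowchart, not the disclosed implementation; the names determine_move_vector, depth, MAX_MOVE, and MOVE_THRESHOLD are hypothetical, and the two constants stand in for the maximum movement amount and the movement threshold described above.

```python
# Hypothetical sketch of the movement vector determination core processing
# (steps S171 to S186). MAX_MOVE and MOVE_THRESHOLD are assumed values.

MAX_MOVE = 8          # maximum movement amount (depends on the virtual viewpoint range)
MOVE_THRESHOLD = 2    # movement threshold (set in consideration of the depth error)

# movement direction dir -> (alphaX, alphaY), as in the alpha determination processing
ALPHA = {"LL": (-1, 0), "RR": (1, 0), "UU": (0, 1), "DD": (0, -1)}

def determine_move_vector(x, y, direction, depth):
    """Return (mvx, mvy) for the target point at (x, y).

    depth is a function (px, py) -> depth z of the point 91 whose
    two-dimensional position on the screen is (px, py)."""
    if direction == "NONE":                  # step S171: not a boundary point
        return (0, 0)                        # -> step S185
    depth_cur = depth(x, y)                  # step S172
    alpha_x, alpha_y = ALPHA[direction]      # step S173
    move, behind_flag = 1, False             # step S174
    while move < MAX_MOVE:                   # step S175
        tar_x = x + alpha_x * move           # step S176
        tar_y = y + alpha_y * move
        depth_tar = depth(tar_x, tar_y)      # step S177
        if depth_tar < depth_cur - MOVE_THRESHOLD:   # step S178
            behind_flag = True               # step S179: target point is behind
        if behind_flag and depth_tar > depth_cur:    # step S180: near side -> far side
            break                            # -> step S183
        move += 1                            # step S181
    if not behind_flag:                      # step S182: never in front of the target
        return (0, 0)                        # step S185
    return (alpha_x * (move - 1), alpha_y * (move - 1))   # step S183
```

For a far-side boundary point scanned in, for example, the RR direction, the sketch walks rightward across the nearer points and returns the two-dimensional displacement to the farthest point that is still in front of the target point.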
- FIG. 16 is a flowchart for explaining the alpha determination processing in step S173 of FIG. 15.
- In step S201 of FIG. 16, the movement vector calculation unit 12 sets alphaX and alphaY to 0.
- In step S202, the movement vector calculation unit 12 determines whether the movement direction dir is LL. If it is determined in step S202 that the movement direction dir is LL, the movement vector calculation unit 12 sets alphaX to -1 in step S203. The process then returns to step S173 of FIG. 15 and proceeds to step S174.
- On the other hand, if it is determined in step S202 that the movement direction dir is not LL, the movement vector calculation unit 12 determines in step S204 whether the movement direction dir is RR. If it is determined in step S204 that the movement direction dir is RR, the movement vector calculation unit 12 sets alphaX to 1 in step S205. The process then returns to step S173 of FIG. 15 and proceeds to step S174.
- If it is determined in step S204 that the movement direction dir is not RR, the movement vector calculation unit 12 determines in step S206 whether the movement direction dir is UU. If it is determined in step S206 that the movement direction dir is UU, the movement vector calculation unit 12 sets alphaY to 1 in step S207. The process then returns to step S173 of FIG. 15 and proceeds to step S174.
- On the other hand, if it is determined in step S206 that the movement direction dir is not UU, that is, if the movement direction dir is DD, the movement vector calculation unit 12 sets alphaY to -1 in step S208. The process then returns to step S173 of FIG. 15 and proceeds to step S174.
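- The branching of steps S201 to S208 reduces to a small sign table. The following minimal sketch assumes the same four direction labels as the text; the function name determine_alpha is hypothetical.

```python
# Minimal sketch of the alpha determination processing (steps S201 to S208):
# alphaX and alphaY give the sign of the x and y coordinate of the direction
# represented by the movement direction dir.

def determine_alpha(direction):
    alpha_x, alpha_y = 0, 0            # step S201
    if direction == "LL":              # step S202
        alpha_x = -1                   # step S203
    elif direction == "RR":            # step S204
        alpha_x = 1                    # step S205
    elif direction == "UU":            # step S206
        alpha_y = 1                    # step S207
    else:                              # dir is DD
        alpha_y = -1                   # step S208
    return alpha_x, alpha_y
```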
- FIG. 17 is a flowchart for explaining details of the mvz calculation processing in step S184 of FIG. 15.
- In step S221 in FIG. 17, the movement vector calculation unit 12 sets the ratio ratio to the value obtained by subtracting from 1 the sum of the absolute values of the x component mvx and the y component mvy of the movement vector of the target point divided by half of the maximum movement amount.
- The ratio ratio is the proportion of the depth depth_tar of the point 91 at the position (tar_x, tar_y) in the depth depth_new after the movement of the target point.
- In step S222, the movement vector calculation unit 12 determines whether the ratio ratio is smaller than 0. If it is determined in step S222 that the ratio ratio is 0 or more, that is, if the sum of the absolute values of the x component mvx and the y component mvy is less than or equal to half of the maximum movement amount, the process proceeds to step S224.
- On the other hand, if it is determined in step S222 that the ratio ratio is smaller than 0, that is, if the sum of the absolute values of the x component mvx and the y component mvy is larger than half of the maximum movement amount, the process proceeds to step S223. In step S223, the movement vector calculation unit 12 changes the ratio ratio to 0, and the process proceeds to step S224.
- In step S224, the movement vector calculation unit 12 determines the depth depth_new after the movement of the target point as the sum of the depth depth_cur before the movement multiplied by (1 - ratio) and the depth depth_tar of the point 91 (reference point) at the position (tar_x, tar_y) multiplied by ratio.
- In step S225, the movement vector calculation unit 12 determines the z component mvz of the movement vector of the target point as the value obtained by subtracting the depth depth_cur before the movement of the target point from the depth depth_new after the movement. Then, the process returns to step S184 in FIG. 15 and proceeds to step S186.
- In this way, when the sum of the absolute values of the x component mvx and the y component mvy of the movement vector of the target point is less than or equal to a threshold value, the movement vector calculation unit 12 sets the z component mvz of the movement vector to a non-zero value. Note that this threshold value is not limited to half of the maximum movement amount.
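- The blending of steps S221 to S225 can be written out as a short sketch. MAX_MOVE and the function name calc_mvz are assumptions for illustration; the blend threshold is half of the maximum movement amount here, matching the flowchart, although the text notes it is not limited to half.

```python
# Hypothetical sketch of the mvz calculation processing (steps S221 to S225).
# MAX_MOVE is an assumed maximum movement amount.

MAX_MOVE = 8

def calc_mvz(mvx, mvy, depth_cur, depth_tar):
    # step S221: ratio of depth_tar in the depth after the movement
    ratio = 1.0 - (abs(mvx) + abs(mvy)) / (MAX_MOVE / 2)
    if ratio < 0:                      # steps S222 and S223
        ratio = 0.0
    # step S224: blend the depth before the movement with the reference depth
    depth_new = depth_cur * (1 - ratio) + depth_tar * ratio
    return depth_new - depth_cur       # step S225: z component of the movement vector
```

A small two-dimensional movement gives a ratio close to 1, so the moved point takes almost the reference depth; once the movement exceeds half of MAX_MOVE, the ratio is clamped to 0 and mvz becomes 0.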
- FIG. 18 is a flowchart for explaining the details of the color correction processing in step S34 of FIG. 5.
- steps S241 to S243, S246, and S247 of FIG. 18 are the same as the processes of steps S51 to S53, S57, and S58 of FIG. 6 except that the process is performed by the color correction unit 13 instead of the position calculation unit 31. Therefore, the description is omitted as appropriate.
- If it is determined in step S243 that x is smaller than the number of pixels width, the process proceeds to step S244.
- In step S244, the color correction unit 13 performs the color reference position determination core processing for determining the position of the point 91 corresponding to the color information that is referred to when correcting the color information of the target point. Details of the color reference position determination core processing will be described with reference to FIG. 19.
- In step S245, the color correction unit 13 performs the color correction core processing for correcting the color information of the target point. Details of the color correction core processing will be described with reference to FIG. 21. After the process of step S245, the process proceeds to step S246.
- If it is determined in step S242 that y is greater than or equal to the number of pixels height, that is, if all the pixels have been set as the target point, the color correction unit 13 supplies the three-dimensional data with the corrected color information and the movement vectors (mvx, mvy, mvz) supplied from the movement vector calculation unit 12 to the position correction unit 14. The process then returns to step S34 of FIG. 5 and proceeds to step S35.
- FIG. 19 is a flowchart for explaining the details of the color reference position determination core processing in step S244 in FIG.
- In step S261 of FIG. 19, the color correction unit 13 sets offset to 0.
- In step S262, the color correction unit 13 determines whether the movement direction dir supplied from the movement vector calculation unit 12 is NONE. If it is determined in step S262 that the movement direction dir is not NONE, the process proceeds to step S263.
- In step S263, the color correction unit 13 sets depth_cur to the depth z of the target point.
- In step S264, the color correction unit 13 performs the beta determination processing for determining betaX, which represents the sign of the x coordinate of the direction opposite to the direction represented by the movement direction dir, and betaY, which represents the sign of the y coordinate. Details of the beta determination processing will be described with reference to FIG. 20.
- In step S265, the color correction unit 13 determines whether offset is equal to or less than the maximum offset amount. If it is determined in step S265 that offset is equal to or less than the maximum offset amount, the process proceeds to step S266.
- In step S266, the color correction unit 13 determines the x coordinate ref_x of the position of the point 91 corresponding to the color information referred to when correcting the color information of the target point as the sum of the x coordinate of the target point and betaX times offset. Similarly, the color correction unit 13 determines the y coordinate ref_y of that position as the sum of the y coordinate of the target point and betaY times offset.
- In step S267, the color correction unit 13 sets depth_ref to the depth z of the point 91 at the position (ref_x, ref_y).
- In step S268, the color correction unit 13 determines whether the absolute value of the value obtained by subtracting depth_cur from depth_ref is greater than the offset threshold.
- If it is determined in step S268 that the absolute value of the value obtained by subtracting depth_cur from depth_ref is equal to or less than the offset threshold, the color correction unit 13 determines that the subject at the point 91 at the position (ref_x, ref_y) is the same as the subject at the target point, and the process proceeds to step S269. In step S269, the color correction unit 13 increments offset by 1, the process returns to step S265, and the subsequent processes are repeated.
- On the other hand, if it is determined in step S268 that the absolute value of the value obtained by subtracting depth_cur from depth_ref is greater than the offset threshold, the color correction unit 13 determines that the subject at the point 91 at the position (ref_x, ref_y) is different from the subject at the target point, and the process proceeds to step S270.
- Also, if it is determined in step S265 that offset is greater than the maximum offset amount, the process proceeds to step S270.
- In step S270, the color correction unit 13 decrements offset by 1. That is, when the subject at the point 91 at the position (ref_x, ref_y) is determined to be different from the subject at the target point, or when offset exceeds the maximum offset amount, the color correction unit 13 sets the immediately preceding offset as the final offset. Then, the process returns to step S244 in FIG. 18 and proceeds to step S245.
- On the other hand, if it is determined in step S262 that the movement direction dir is NONE, that is, if the target point is not a boundary point 91 of the occlusion area, the process returns to step S244 in FIG. 18 and proceeds to step S245.
- As described above, when a point 91 at the boundary of the occlusion area is the target point, the final position (ref_x, ref_y) is the two-dimensional position of the point 91 that is farthest from the target point among the points 91 continuous with the target point whose difference in position in the depth direction from the target point is within the offset threshold.
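- The search of steps S261 to S270 can be sketched as follows. The names color_reference_position, depth, MAX_OFFSET, and OFFSET_THRESHOLD are illustrative assumptions standing in for the depth lookup, the maximum offset amount, and the offset threshold described above.

```python
# Hypothetical sketch of the color reference position determination core
# processing (steps S261 to S270). MAX_OFFSET and OFFSET_THRESHOLD are
# assumed values, and depth is an assumed lookup function.

MAX_OFFSET = 8
OFFSET_THRESHOLD = 2

# movement direction dir -> (betaX, betaY): the opposite of the alpha direction
BETA = {"LL": (1, 0), "RR": (-1, 0), "UU": (0, -1), "DD": (0, 1)}

def color_reference_position(x, y, direction, depth):
    """Return (ref_x, ref_y, offset) for the target point at (x, y)."""
    if direction == "NONE":                  # step S262: not a boundary point
        return x, y, 0
    depth_cur = depth(x, y)                  # step S263
    beta_x, beta_y = BETA[direction]         # step S264
    offset = 0                               # step S261
    while offset <= MAX_OFFSET:              # step S265
        ref_x = x + beta_x * offset          # step S266
        ref_y = y + beta_y * offset
        depth_ref = depth(ref_x, ref_y)      # step S267
        if abs(depth_ref - depth_cur) > OFFSET_THRESHOLD:   # step S268: subject differs
            break                            # -> step S270
        offset += 1                          # step S269
    offset -= 1                              # step S270: use the immediately preceding offset
    return x + beta_x * offset, y + beta_y * offset, offset
```

The loop steps away from the occlusion boundary, opposite to the movement direction, and stops one point before the depth jump that marks a different subject.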
- FIG. 20 is a flowchart for explaining the details of the beta determination process in step S264 of FIG.
- In step S291 in FIG. 20, the color correction unit 13 sets betaX and betaY to 0.
- In step S292, the color correction unit 13 determines whether the movement direction dir is LL. If it is determined in step S292 that the movement direction dir is LL, the color correction unit 13 sets betaX to 1 in step S293. Then, the process returns to step S264 in FIG. 19 and proceeds to step S265.
- On the other hand, if it is determined in step S292 that the movement direction dir is not LL, the color correction unit 13 determines in step S294 whether the movement direction dir is RR. If it is determined in step S294 that the movement direction dir is RR, the color correction unit 13 sets betaX to -1 in step S295. Then, the process returns to step S264 in FIG. 19 and proceeds to step S265.
- If it is determined in step S294 that the movement direction dir is not RR, the color correction unit 13 determines in step S296 whether the movement direction dir is UU. If it is determined in step S296 that the movement direction dir is UU, the color correction unit 13 sets betaY to -1 in step S297. Then, the process returns to step S264 in FIG. 19 and proceeds to step S265.
- On the other hand, if it is determined in step S296 that the movement direction dir is not UU, that is, if the movement direction dir is DD, the color correction unit 13 sets betaY to 1 in step S298. Then, the process returns to step S264 in FIG. 19 and proceeds to step S265.
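- Like the alpha determination, steps S291 to S298 reduce to a sign table, but with every sign inverted because beta points opposite to the movement direction dir. The function name determine_beta is hypothetical.

```python
# Minimal sketch of the beta determination processing (steps S291 to S298):
# betaX and betaY give the sign of the direction OPPOSITE to the movement
# direction dir.

def determine_beta(direction):
    beta_x, beta_y = 0, 0              # step S291
    if direction == "LL":              # step S292
        beta_x = 1                     # step S293
    elif direction == "RR":            # step S294
        beta_x = -1                    # step S295
    elif direction == "UU":            # step S296
        beta_y = -1                    # step S297
    else:                              # dir is DD
        beta_y = 1                     # step S298
    return beta_x, beta_y
```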
- FIG. 21 is a flowchart for explaining the color correction core processing in step S245 of FIG.
- In step S310 in FIG. 21, the color correction unit 13 sets both offsetX and offsetY to offset.
- In step S311, the color correction unit 13 determines whether the movement direction dir is LL or RR. If it is determined in step S311 that the movement direction dir is not LL or RR, the process proceeds to step S312.
- In step S312, the color correction unit 13 determines whether the movement direction dir is UU or DD. If it is determined in step S312 that the movement direction dir is UU or DD, the color correction unit 13 changes offsetX to 1 in step S313, and the process proceeds to step S315.
- On the other hand, if it is determined in step S312 that the movement direction dir is not UU or DD, that is, if the movement direction dir is NONE, the process proceeds to step S315.
- If it is determined in step S311 that the movement direction dir is LL or RR, the color correction unit 13 changes offsetY to 1 in step S314, and the process proceeds to step S315.
- In step S315, the color correction unit 13 sets xx and yy to 0.
- In step S316, the color correction unit 13 determines whether yy is smaller than offsetY. If yy is smaller than offsetY, the process proceeds to step S317.
- In step S317, the color correction unit 13 determines whether xx is smaller than offsetX. If xx is smaller than offsetX, the process proceeds to step S318.
- In step S318, the color correction unit 13 determines dst_x as the sum of the x coordinate of the target point and betaX times xx. Similarly, the color correction unit 13 determines dst_y as the sum of the y coordinate of the target point and betaY times yy.
- In step S319, the color correction unit 13 replaces the color information of the point 91 at the position (dst_x, dst_y) in the three-dimensional data supplied from the movement vector calculation unit 12 with the color information of the point 91 at the position (ref_x, ref_y).
- In step S320, the color correction unit 13 increments xx by 1, the process returns to step S317, and the processes of steps S317 to S320 are repeated until xx becomes equal to or greater than offsetX.
- If it is determined in step S317 that xx is greater than or equal to offsetX, the color correction unit 13 sets xx to 0 and increments yy by 1 in step S321. Then, the process returns to step S316, and the processes of steps S316 to S321 are repeated until yy becomes equal to or greater than offsetY.
- If it is determined in step S316 that yy is greater than or equal to offsetY, the color correction core processing ends. The process then returns to step S245 in FIG. 18 and proceeds to step S246.
- In this way, the color correction unit 13 replaces the color information of the points 91 from the point 91 at the position (ref_x, ref_y) to the target point with the color information of the point 91 at the position (ref_x, ref_y).
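- The double loop of steps S310 to S321 can be sketched as follows. Here colors is an assumed mapping from a two-dimensional position to the color information of the point 91 at that position, and the function name color_correction_core is hypothetical; the sketch only illustrates the replacement pattern of the flowchart.

```python
# Hypothetical sketch of the color correction core processing (steps S310 to
# S321). colors maps a two-dimensional position to color information.

BETA = {"LL": (1, 0), "RR": (-1, 0), "UU": (0, -1), "DD": (0, 1), "NONE": (0, 0)}

def color_correction_core(x, y, direction, offset, ref_x, ref_y, colors):
    offset_x = offset_y = offset             # step S310
    if direction in ("LL", "RR"):            # step S311 -> step S314
        offset_y = 1
    elif direction in ("UU", "DD"):          # step S312 -> step S313
        offset_x = 1
    beta_x, beta_y = BETA[direction]
    for yy in range(offset_y):               # steps S315, S316, S321
        for xx in range(offset_x):           # steps S317, S320
            dst = (x + beta_x * xx, y + beta_y * yy)        # step S318
            colors[dst] = colors[(ref_x, ref_y)]            # step S319
```

When dir is NONE, offset is 0, so both loops are empty and no color is replaced, matching the flow through step S315.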
- FIG. 22 is a flowchart for explaining the position correction processing in step S35 of FIG. 5.
- steps S331 to S333, S335, and S336 in FIG. 22 are the same as the processes in steps S51 to S53, S57, and S58 in FIG. 6 except that the process is performed by the position correction unit 14 instead of the position calculation unit 31. Therefore, the description is omitted as appropriate.
- If it is determined in step S333 that x is smaller than the number of pixels width, the process proceeds to step S334.
- In step S334, the position correction unit 14 performs the position correction core processing for correcting the three-dimensional position of the target point. Details of the position correction core processing will be described with reference to FIG. 23. After the process of step S334, the process proceeds to step S335.
- If it is determined in step S332 that y is equal to or greater than the number of pixels height, that is, if all the pixels have been set as the target point, the position correction unit 14 supplies the three-dimensional data in which the three-dimensional positions of the points 91 have been corrected to the color image data generation unit 15. Then, the process returns to step S35 in FIG. 5, returns to step S11 in FIG. 4, and proceeds to step S12.
- FIG. 23 is a flowchart for explaining details of the position correction core processing in step S334 of FIG. 22.
- In step S351, the position correction unit 14 acquires the movement vector (mvx, mvy, mvz) from the color correction unit 13.
- In step S352, the position correction unit 14 determines whether the movement vector (mvx, mvy, mvz) is (0, 0, 0).
- If it is determined in step S352 that the movement vector (mvx, mvy, mvz) is not (0, 0, 0), the position correction unit 14 corrects the position (x, y, z) of the target point to the position (x + mvx, y + mvy, z + mvz) in step S353. Then, the process returns to step S334 in FIG. 22 and proceeds to step S335.
- On the other hand, if it is determined in step S352 that the movement vector (mvx, mvy, mvz) is (0, 0, 0), the position correction unit 14 does not correct the position (x, y, z) of the target point, and the process returns to step S334 in FIG. 22 and proceeds to step S335.
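- Steps S351 to S353 amount to a simple vector addition, which can be sketched as follows (the function name correct_position is hypothetical):

```python
# Minimal sketch of the position correction core processing (steps S351 to
# S353): the three-dimensional position of the target point is shifted by its
# movement vector, and left unchanged when the vector is (0, 0, 0).

def correct_position(point, move_vector):
    x, y, z = point
    mvx, mvy, mvz = move_vector
    if (mvx, mvy, mvz) == (0, 0, 0):       # step S352: nothing to correct
        return point
    return (x + mvx, y + mvy, z + mvz)     # step S353
```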
- FIG. 24 is a graph showing the x-coordinate and the z-coordinate of the point 91 whose y-coordinate is y1 in FIG. 2 among the three-dimensional positions of the respective points 91 generated by the position calculating unit 31.
- In FIG. 24, each point 91 is represented by a circle, and the pattern attached to the circle is the pattern attached to the background 70, the partial circle 71, the partial circle 72, the rectangle 73, or the cylinder 75 in FIG. 2. The same applies to FIGS. 25 and 26 described later.
- FIG. 25 shows, in the graph of FIG. 24, the connection information of the points 91 generated by the patch generation unit 32.
- In FIG. 25, the occlusion area is represented by an area S1, and the color information of the area S1 does not exist in the three-dimensional data of the input image data. Therefore, when the color image data of the virtual viewpoint is generated from the three-dimensional data of the input image data, the color information at a predetermined position in the area S1 projected onto a pixel of the color image data of the virtual viewpoint is interpolated using the color information of the points 91 that are the vertices of the triangular patch 92 containing that position.
- For example, the color information at the position a of the area S1 projected onto the color image data of the virtual viewpoint is interpolated from the color information of the point 111 of the background 70 and the point 112 of the cylinder 75. In this way, color image data of the virtual viewpoint can be generated.
- However, the color of the pixel of the color image data of the virtual viewpoint corresponding to the position a becomes a mixture of the color of the background 70 and the color of the cylinder 75, which gives the user a feeling of strangeness.
- FIG. 26 is a graph showing the x coordinate, the z coordinate, and the connection information of the points 91 whose y coordinate is y1 in FIG. 2, in the three-dimensional data whose corrected color information and corrected three-dimensional positions are supplied to the color image data generation unit 15.
- As shown in FIG. 26, the x coordinate of the point 91 on the far side among the points 91 at the boundary of the occlusion area is corrected to be the same as the x coordinate of the nearby point 91 on the near side. Specifically, the x coordinate of the point 111, which is the point 91 on the far side at the boundary of the occlusion area, is corrected to be the same as the x coordinate of the point 113, which is the point 91 on the near side located two pixels to the right of the point 111.
- In FIG. 26, the occlusion area is represented by an area S2, and the area S2 is smaller than the area S1. Therefore, the number of pixels onto which the area S2 is projected in the color image data of the virtual viewpoint is reduced. As a result, the user's feeling of strangeness can be reduced.
- Specifically, the position a projected onto the color image data of the virtual viewpoint becomes the position of the point 111 after the three-dimensional position correction, which is outside the area S2. Accordingly, the color of the pixel of the color image data of the virtual viewpoint corresponding to the position a is not a mixture of the color of the background 70 and the color of the cylinder 75, and does not give a feeling of strangeness.
- In addition, the depth z of the point 115, whose x coordinate is corrected to be the same as that of the point 114, which is the point 91 one pixel to the left among the points 91 at the boundary of the occlusion area, is corrected so as to approach the depth z of the point 114.
- FIG. 27 is a graph showing the y-coordinate and z-coordinate of the point 91 whose x-coordinate is x1 in FIG. 2 among the three-dimensional positions of the respective points 91 generated by the position calculating unit 31.
- In FIG. 27, each point 91 is represented by a circle, and the pattern attached to the circle is the pattern attached to the background 70, the partial circle 72, or the cylinder 74 in FIG. 2. The same applies to FIGS. 28 and 29 described later.
- FIG. 28 shows, in the graph of FIG. 27, the connection information of the points 91 generated by the patch generation unit 32.
- In FIG. 28, the occlusion area is represented by an area S3, and the color information of the area S3 does not exist in the three-dimensional data of the input image data. Therefore, when the color image data of the virtual viewpoint is generated from the three-dimensional data of the input image data, the color information at a predetermined position in the area S3 projected onto a pixel of the color image data of the virtual viewpoint is interpolated using the color information of the points 91 that are the vertices of the triangular patch 92 containing that position.
- For example, the color information at the position b of the area S3 projected onto the color image data of the virtual viewpoint is interpolated from the color information of the point 131 of the partial circle 72 and the point 132 of the background 70.
- Therefore, the color of the pixel of the color image data of the virtual viewpoint corresponding to the position b becomes a mixture of the color of the partial circle 72 and the color of the background 70, which gives the user a feeling of strangeness.
- FIG. 29 is a graph showing the y coordinate, the z coordinate, and the connection information of the points 91 whose x coordinate is x1 in FIG. 2, in the three-dimensional data whose corrected color information and corrected three-dimensional positions are supplied to the color image data generation unit 15.
- As shown in FIG. 29, the y coordinate of the point 91 on the far side among the points 91 at the boundary of the occlusion area is corrected to be the same as the y coordinate of the nearby point 91 on the near side. Specifically, the y coordinate of the point 131, which is the point 91 on the far side at the boundary of the occlusion area, is corrected to be the same as the y coordinate of the point 113, which is the point 91 on the near side located two pixels to the left of the point 131.
- In FIG. 29, the occlusion area is represented by an area S4, and the area S4 is smaller than the area S3. Accordingly, the number of pixels onto which the area S4 is projected in the color image data of the virtual viewpoint is reduced. As a result, the user's feeling of strangeness can be reduced.
- Specifically, the position b projected onto the color image data of the virtual viewpoint becomes the position of the point 131 after the three-dimensional position correction, which is outside the area S4. Therefore, the color of the pixel of the color image data of the virtual viewpoint corresponding to the position b is not a mixture of the color of the partial circle 72 and the color of the background 70, and does not give a feeling of strangeness.
- In the above description, the movement direction dir takes one of the four directions UU, DD, LL, and RR, but the types of the movement direction dir are not limited to these.
- FIG. 30 is a diagram illustrating another example of the moving direction dir.
- As shown in FIG. 30, it is also possible to use eight directions by adding UL representing the upper left, UR representing the upper right, DL representing the lower left, and DR representing the lower right to the four directions UU, DD, LL, and RR.
- In this case, there are eight conditional expressions in the movement direction determination core processing of FIG. 11, and each conditional expression differs from the conditional expressions described above.
- As described above, since the image processing apparatus 10 corrects the three-dimensional positions of the boundary points of the occlusion area included in the three-dimensional data of the input image data, the occlusion area can be reduced. Therefore, it is possible to generate virtual viewpoint color image data with little feeling of strangeness without generating color information of the occlusion area for each virtual viewpoint. As a result, high-accuracy virtual viewpoint color image data can be generated following high-speed movement of the virtual viewpoint, and the processing load for generating the virtual viewpoint color image data can be reduced.
- ⁇ Second Embodiment> (Description of computer to which the present disclosure is applied)
- the series of processes described above can be executed by hardware or can be executed by software.
- a program constituting the software is installed in the computer.
- Here, the computer includes, for example, a computer incorporated in dedicated hardware and a general-purpose personal computer capable of executing various functions by installing various programs.
- FIG. 31 is a block diagram showing an example of the hardware configuration of a computer that executes the above-described series of processing by a program.
- In the computer 200, a CPU (Central Processing Unit) 201, a ROM (Read Only Memory) 202, and a RAM (Random Access Memory) 203 are connected to one another via a bus 204.
- An input / output interface 205 is further connected to the bus 204.
- An input unit 206, an output unit 207, a storage unit 208, a communication unit 209, and a drive 210 are connected to the input / output interface 205.
- the input unit 206 includes a keyboard, a mouse, a microphone, and the like.
- the output unit 207 includes a display, a speaker, and the like.
- the storage unit 208 includes a hard disk, a nonvolatile memory, and the like.
- the communication unit 209 includes a network interface and the like.
- the drive 210 drives a removable medium 211 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
- In the computer 200 configured as described above, the CPU 201 loads the program stored in the storage unit 208 into the RAM 203 via the input/output interface 205 and the bus 204 and executes it, whereby the above-described series of processing is performed.
- the program executed by the computer 200 can be provided by being recorded in, for example, a removable medium 211 such as a package medium.
- the program can be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
- the program can be installed in the storage unit 208 via the input / output interface 205 by attaching the removable medium 211 to the drive 210.
- the program can be received by the communication unit 209 via a wired or wireless transmission medium and installed in the storage unit 208.
- the program can be installed in the ROM 202 or the storage unit 208 in advance.
- The program executed by the computer 200 may be a program in which the processes are performed in time series in the order described in this specification, or a program in which the processes are performed in parallel or at necessary timing, such as when a call is made.
- For example, the present disclosure can adopt a cloud computing configuration in which one function is shared and jointly processed by a plurality of devices via a network.
- Each step described in the above flowcharts can be executed by one device or shared among a plurality of devices.
- Furthermore, when one step includes a plurality of processes, the plurality of processes included in that step can be executed by one apparatus or shared among a plurality of apparatuses.
- Note that the present disclosure can also adopt the following configurations.
- (1) An image processing apparatus including a position correction unit that corrects the three-dimensional position of a point at the boundary of an occlusion region among a plurality of points included in three-dimensional data, the three-dimensional data being generated from color image data and depth image data of a predetermined viewpoint and consisting of the three-dimensional positions and color information of the plurality of points.
- (2) The image processing apparatus according to (1), in which the position correction unit is configured to correct the two-dimensional position of the point at the boundary of the occlusion region to the two-dimensional position of a reference point, which is a point in the vicinity of and on the near side of the point at the boundary of the occlusion region.
- (3) The image processing apparatus according to (2), in which the position correction unit is configured to correct the position in the depth direction of the point at the boundary of the occlusion region.
- (4) The image processing apparatus according to (3), in which the position correction unit corrects the position in the depth direction of the point at the boundary of the occlusion region when the correction amount of the two-dimensional position of that point is smaller than a threshold value.
- (5) The image processing apparatus according to (4), in which the position correction unit corrects the position in the depth direction of the point at the boundary of the occlusion region using the position in the depth direction of the reference point.
- (6) The image processing apparatus according to any one of (1) to (5), further including a color image data generation unit that generates color image data of a viewpoint different from the predetermined viewpoint on the basis of the three-dimensional data in which the three-dimensional position of the point at the boundary of the occlusion region has been corrected by the position correction unit.
- (7) The image processing apparatus according to any one of (1) to (6), further including a color correction unit that corrects the color information.
- (8) The image processing apparatus according to (7), in which the color correction unit is configured to correct the color information of points that are continuous with the point at the boundary of the occlusion region and whose difference in position in the depth direction from that point is within a predetermined range, to the color information of the point that is farthest from the point at the boundary of the occlusion region among those points.
- (9) The image processing apparatus according to any one of (1) to (8), further including a three-dimensional data generation unit that generates the three-dimensional data from the color image data and the depth image data of the predetermined viewpoint.
- (10) An image processing method including a position correction step in which an image processing apparatus corrects the three-dimensional position of a point at the boundary of an occlusion region among a plurality of points included in three-dimensional data, the three-dimensional data being generated from color image data and depth image data of a predetermined viewpoint and consisting of the three-dimensional positions and color information of the plurality of points.
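The position correction described in the configurations above can be illustrated with a short sketch. This is not the patent's implementation: the tuple layout (x, y, z, color), the Euclidean measure of the correction amount, and the function name are assumptions made for illustration only.

```python
# Hedged sketch: move an occlusion-boundary point's 2D position onto a
# reference point (a nearby point on the near side), and replace its depth
# with the reference depth only when the 2D correction amount is small.
import math

def correct_boundary_point(point, reference, move_threshold):
    """point, reference: (x, y, z, color) tuples (illustrative layout)."""
    x, y, z, color = point
    rx, ry, rz, _ = reference
    amount = math.hypot(rx - x, ry - y)           # 2D correction amount
    new_z = rz if amount < move_threshold else z  # depth correction
    return (rx, ry, new_z, color)                 # 2D position correction

# Small move: both the 2D position and the depth are corrected.
print(correct_boundary_point((5.0, 5.0, 80.0, "red"),
                             (4.0, 5.0, 20.0, "red"), 2.0))  # → (4.0, 5.0, 20.0, 'red')
```

With a large move (correction amount at or above the threshold), only the two-dimensional position changes and the original depth is kept.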
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Graphics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computing Systems (AREA)
- Geometry (AREA)
- Processing Or Creating Images (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
Description
1. First embodiment: image processing apparatus (FIGS. 1 to 30)
2. Second embodiment: computer (FIG. 31)
(Configuration example of an embodiment of an image processing apparatus)
FIG. 1 is a block diagram showing a configuration example of an embodiment of an image processing apparatus to which the present disclosure is applied.
FIG. 2 is a diagram showing an example of input image data.
FIG. 3 is a diagram showing the three-dimensional positions of the background 70, the partial circle 71, the partial circle 72, the rectangle 73, the cylinder 74, and the cylinder 75 in FIG. 2.
FIG. 4 is a flowchart explaining the color image data generation processing of the image processing apparatus 10 in FIG. 1. This color image data generation processing is started, for example, when input image data is input to the image processing apparatus 10. Note that the variables used in the color image data generation processing are shared by all the processes and can be referenced and updated in each process.
FIG. 8 is a diagram showing an example of the triangular patches corresponding to the connection information generated by the patch generation unit 32 in FIG. 1.
(diffLL > ThDepth && diffUU > ThDepth && diffUR > ThDepth && diffRR <= ThDepth && diffDD <= ThDepth && diffDL <= ThDepth) ||
(diffLL <= ThDepth && diffUU > ThDepth && diffUR > ThDepth && diffRR <= ThDepth && diffDD <= ThDepth && diffDL <= ThDepth) ||
(diffLL <= ThDepth && diffUU > ThDepth && diffUR <= ThDepth && diffRR <= ThDepth && diffDD <= ThDepth && diffDL <= ThDepth) ||
(diffLL > ThDepth && diffUU > ThDepth && diffUR > ThDepth && diffRR > ThDepth && diffDD <= ThDepth && diffDL <= ThDepth)
The above is the condition.
(diffLL <= ThDepth && diffUU > ThDepth && diffUR > ThDepth && diffRR > ThDepth && diffDD <= ThDepth && diffDL <= ThDepth) ||
(diffLL <= ThDepth && diffUU <= ThDepth && diffUR > ThDepth && diffRR <= ThDepth && diffDD <= ThDepth && diffDL <= ThDepth) ||
(diffLL <= ThDepth && diffUU <= ThDepth && diffUR > ThDepth && diffRR > ThDepth && diffDD <= ThDepth && diffDL <= ThDepth) ||
(diffLL <= ThDepth && diffUU <= ThDepth && diffUR <= ThDepth && diffRR > ThDepth && diffDD <= ThDepth && diffDL <= ThDepth) ||
(diffLL <= ThDepth && diffUU <= ThDepth && diffUR > ThDepth && diffRR > ThDepth && diffDD > ThDepth && diffDL <= ThDepth) ||
(diffLL <= ThDepth && diffUU > ThDepth && diffUR > ThDepth && diffRR > ThDepth && diffDD > ThDepth && diffDL <= ThDepth) ||
(diffLL <= ThDepth && diffUU <= ThDepth && diffUR > ThDepth && diffRR > ThDepth && diffDD > ThDepth && diffDL > ThDepth) ||
(diffLL <= ThDepth && diffUU <= ThDepth && diffUR <= ThDepth && diffRR > ThDepth && diffDD > ThDepth && diffDL <= ThDepth) ||
(diffLL <= ThDepth && diffUU <= ThDepth && diffUR <= ThDepth && diffRR > ThDepth && diffDD > ThDepth && diffDL > ThDepth) ||
(diffLL <= ThDepth && diffUU <= ThDepth && diffUR <= ThDepth && diffRR <= ThDepth && diffDD > ThDepth && diffDL <= ThDepth) ||
(diffLL <= ThDepth && diffUU <= ThDepth && diffUR <= ThDepth && diffRR <= ThDepth && diffDD > ThDepth && diffDL > ThDepth) ||
(diffLL > ThDepth && diffUU <= ThDepth && diffUR <= ThDepth && diffRR > ThDepth && diffDD > ThDepth && diffDL > ThDepth)
The above is the condition.
(diffLL > ThDepth && diffUU <= ThDepth && diffUR <= ThDepth && diffRR <= ThDepth && diffDD > ThDepth && diffDL > ThDepth) ||
(diffLL <= ThDepth && diffUU <= ThDepth && diffUR <= ThDepth && diffRR <= ThDepth && diffDD <= ThDepth && diffDL > ThDepth) ||
(diffLL > ThDepth && diffUU > ThDepth && diffUR <= ThDepth && diffRR <= ThDepth && diffDD <= ThDepth && diffDL > ThDepth) ||
(diffLL > ThDepth && diffUU <= ThDepth && diffUR <= ThDepth && diffRR <= ThDepth && diffDD <= ThDepth && diffDL <= ThDepth) ||
(diffLL > ThDepth && diffUU <= ThDepth && diffUR <= ThDepth && diffRR <= ThDepth && diffDD <= ThDepth && diffDL > ThDepth) ||
(diffLL > ThDepth && diffUU > ThDepth && diffUR <= ThDepth && diffRR <= ThDepth && diffDD > ThDepth && diffDL > ThDepth) ||
(diffLL > ThDepth && diffUU > ThDepth && diffUR > ThDepth && diffRR <= ThDepth && diffDD <= ThDepth && diffDL > ThDepth) ||
(diffLL > ThDepth && diffUU > ThDepth && diffUR <= ThDepth && diffRR <= ThDepth && diffDD <= ThDepth && diffDL <= ThDepth)
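Each conjunct in the expressions above tests which of a point's six neighbors (left LL, up UU, upper-right UR, right RR, down DD, lower-left DL) differ in depth from the point by more than the threshold ThDepth. As an illustrative sketch only (the depth-map layout and helper names are assumptions, not the patent's implementation), one conjunct can be evaluated like this:

```python
# Hedged sketch: evaluate one ThDepth conjunct against a depth map
# indexed as depth[y][x]; interior point assumed (no border handling).

def neighbor_diffs(depth, x, y):
    """Absolute depth differences between (x, y) and its six neighbors:
    left, up, upper-right, right, down, lower-left."""
    offsets = {
        "LL": (-1, 0), "UU": (0, -1), "UR": (1, -1),
        "RR": (1, 0), "DD": (0, 1), "DL": (-1, 1),
    }
    d = depth[y][x]
    return {name: abs(depth[y + dy][x + dx] - d)
            for name, (dx, dy) in offsets.items()}

def matches_pattern(diffs, th, above):
    """True when exactly the neighbors in `above` exceed th, i.e. one
    conjunct such as (diffLL > ThDepth && diffUU > ThDepth &&
    diffUR > ThDepth && diffRR <= ThDepth && diffDD <= ThDepth &&
    diffDL <= ThDepth)."""
    return all((diffs[n] > th) == (n in above) for n in diffs)

# The point (1, 1) has a large depth step toward its left, upper, and
# upper-right neighbors, matching the first conjunct listed above.
depth = [[10, 10, 10],
         [10,  1,  1],
         [ 1,  1,  1]]
diffs = neighbor_diffs(depth, 1, 1)
print(matches_pattern(diffs, 5, {"LL", "UU", "UR"}))  # → True
```

Each disjunction in the text then corresponds to checking a point's diff pattern against a set of such neighbor patterns.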
FIG. 24 is a graph showing the x coordinates and z coordinates of the points 91 whose y coordinate is y1 in FIG. 2, among the three-dimensional positions of the points 91 generated by the position calculation unit 31.
FIG. 30 is a diagram showing another example of the movement direction dir.
(Description of a computer to which the present disclosure is applied)
The series of processes described above can be executed by hardware or by software. When the series of processes is executed by software, a program constituting the software is installed in a computer. Here, the computer includes a computer incorporated in dedicated hardware and, for example, a general-purpose personal computer capable of executing various functions by installing various programs.
A position correction unit that corrects the three-dimensional position of a point at the boundary of an occlusion region among a plurality of points included in three-dimensional data, the three-dimensional data being generated from color image data and depth image data of a predetermined viewpoint and consisting of the three-dimensional positions and color information of the plurality of points.
An image processing apparatus including the above position correction unit.
(2)
The image processing apparatus according to (1), in which the position correction unit is configured to correct the two-dimensional position of the point at the boundary of the occlusion region to the two-dimensional position of a reference point, which is a point in the vicinity of and on the near side of the point at the boundary of the occlusion region.
(3)
The image processing apparatus according to (2), in which the position correction unit is configured to correct the position in the depth direction of the point at the boundary of the occlusion region.
(4)
The image processing apparatus according to (3), in which the position correction unit corrects the position in the depth direction of the point at the boundary of the occlusion region when the correction amount of the two-dimensional position of that point is smaller than a threshold value.
(5)
The image processing apparatus according to (4), in which the position correction unit corrects the position in the depth direction of the point at the boundary of the occlusion region using the position in the depth direction of the reference point.
(6)
The image processing apparatus according to any one of (1) to (5), further including a color image data generation unit that generates color image data of a viewpoint different from the predetermined viewpoint on the basis of the three-dimensional data in which the three-dimensional position of the point at the boundary of the occlusion region has been corrected by the position correction unit.
(7)
The image processing apparatus according to any one of (1) to (6), further including a color correction unit that corrects the color information.
(8)
The image processing apparatus according to (7), in which the color correction unit is configured to correct the color information of points that are continuous with the point at the boundary of the occlusion region and whose difference in position in the depth direction from that point is within a predetermined range, to the color information of the point that is farthest from the point at the boundary of the occlusion region among those points.
(9)
The image processing apparatus according to any one of (1) to (8), further including a three-dimensional data generation unit that generates the three-dimensional data from the color image data and the depth image data of the predetermined viewpoint.
(10)
An image processing method including a position correction step in which an image processing apparatus corrects the three-dimensional position of a point at the boundary of an occlusion region among a plurality of points included in three-dimensional data, the three-dimensional data being generated from color image data and depth image data of a predetermined viewpoint and consisting of the three-dimensional positions and color information of the plurality of points.
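The color correction of configuration (8) above can be sketched as follows. The point layout, the two-dimensional distance used to pick the farthest point, and the function name are illustrative assumptions; the patent does not specify them.

```python
# Hedged sketch of configuration (8): among points continuous with an
# occlusion-boundary point whose depth difference from that boundary point
# is within a given range, give every such point the color of the one
# farthest (in 2D) from the boundary point.

def correct_colors(boundary, chain, depth_range):
    """boundary, chain entries: (x, y, z, color) tuples (assumed layout)."""
    bx, by, bz, _ = boundary
    within = [p for p in chain if abs(p[2] - bz) <= depth_range]
    if not within:
        return chain
    # The point farthest from the boundary point among the in-range points.
    farthest = max(within, key=lambda p: (p[0] - bx) ** 2 + (p[1] - by) ** 2)
    new_color = farthest[3]
    return [(x, y, z, new_color) if abs(z - bz) <= depth_range else (x, y, z, c)
            for (x, y, z, c) in chain]

chain = [(1, 0, 10.0, "red"), (2, 0, 11.0, "pink"), (3, 0, 50.0, "blue")]
out = correct_colors((0, 0, 10.0, "red"), chain, 5.0)
print(out)  # → [(1, 0, 10.0, 'pink'), (2, 0, 11.0, 'pink'), (3, 0, 50.0, 'blue')]
```

The point at depth 50.0 lies outside the depth range, so its color is left untouched.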
Claims (10)
- An image processing apparatus including a position correction unit that corrects the three-dimensional position of a point at the boundary of an occlusion region among a plurality of points included in three-dimensional data, the three-dimensional data being generated from color image data and depth image data of a predetermined viewpoint and consisting of the three-dimensional positions and color information of the plurality of points.
- The image processing apparatus according to claim 1, in which the position correction unit is configured to correct the two-dimensional position of the point at the boundary of the occlusion region to the two-dimensional position of a reference point, which is a point in the vicinity of and on the near side of the point at the boundary of the occlusion region.
- The image processing apparatus according to claim 2, in which the position correction unit is configured to correct the position in the depth direction of the point at the boundary of the occlusion region.
- The image processing apparatus according to claim 3, in which the position correction unit corrects the position in the depth direction of the point at the boundary of the occlusion region when the correction amount of the two-dimensional position of that point is smaller than a threshold value.
- The image processing apparatus according to claim 4, in which the position correction unit corrects the position in the depth direction of the point at the boundary of the occlusion region using the position in the depth direction of the reference point.
- The image processing apparatus according to claim 1, further including a color image data generation unit that generates color image data of a viewpoint different from the predetermined viewpoint on the basis of the three-dimensional data in which the three-dimensional position of the point at the boundary of the occlusion region has been corrected by the position correction unit.
- The image processing apparatus according to claim 1, further including a color correction unit that corrects the color information.
- The image processing apparatus according to claim 7, in which the color correction unit is configured to correct the color information of points that are continuous with the point at the boundary of the occlusion region and whose difference in position in the depth direction from that point is within a predetermined range, to the color information of the point that is farthest from the point at the boundary of the occlusion region among those points.
- The image processing apparatus according to claim 1, further including a three-dimensional data generation unit that generates the three-dimensional data from the color image data and the depth image data of the predetermined viewpoint.
- An image processing method including a position correction step in which an image processing apparatus corrects the three-dimensional position of a point at the boundary of an occlusion region among a plurality of points included in three-dimensional data, the three-dimensional data being generated from color image data and depth image data of a predetermined viewpoint and consisting of the three-dimensional positions and color information of the plurality of points.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020187013125A KR102646019B1 (ko) | 2015-12-01 | 2016-11-21 | 화상 처리 장치 및 화상 처리 방법 |
US15/769,575 US10846916B2 (en) | 2015-12-01 | 2016-11-21 | Image processing apparatus and image processing method |
JP2017553776A JP6807034B2 (ja) | 2015-12-01 | 2016-11-21 | 画像処理装置および画像処理方法 |
CN201680069227.9A CN108475408A (zh) | 2015-12-01 | 2016-11-21 | 图像处理设备和图像处理方法 |
EP16870473.2A EP3367326A4 (en) | 2015-12-01 | 2016-11-21 | Image-processing device and image-processing method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015234468 | 2015-12-01 | ||
JP2015-234468 | 2015-12-01 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017094536A1 true WO2017094536A1 (ja) | 2017-06-08 |
Family
ID=58797233
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2016/084381 WO2017094536A1 (ja) | 2015-12-01 | 2016-11-21 | 画像処理装置および画像処理方法 |
Country Status (6)
Country | Link |
---|---|
US (1) | US10846916B2 (ja) |
EP (1) | EP3367326A4 (ja) |
JP (1) | JP6807034B2 (ja) |
KR (1) | KR102646019B1 (ja) |
CN (1) | CN108475408A (ja) |
WO (1) | WO2017094536A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2020016975A (ja) * | 2018-07-24 | 2020-01-30 | Kddi株式会社 | 画像処理装置、方法及びプログラム |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10165259B2 (en) * | 2017-02-15 | 2018-12-25 | Adobe Systems Incorporated | Generating novel views of a three-dimensional object based on a single two-dimensional image |
JP2022026844A (ja) * | 2020-07-31 | 2022-02-10 | キヤノン株式会社 | 画像処理装置、画像処理方法およびプログラム |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011096136A1 (ja) * | 2010-02-02 | 2011-08-11 | コニカミノルタホールディングス株式会社 | 疑似画像生成装置および疑似画像生成方法 |
JP2012170067A (ja) * | 2011-02-14 | 2012-09-06 | Mitsubishi Electric Research Laboratories Inc | トレリス構造を用いてシーンの仮想画像を生成する方法とシステム |
JP2014211829A (ja) * | 2013-04-19 | 2014-11-13 | 富士通株式会社 | 画像処理装置、画像処理回路、画像処理プログラム、及び画像処理方法 |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6397120B1 (en) * | 1999-12-30 | 2002-05-28 | David A. Goldman | User interface and method for manipulating singularities for automatic embroidery data generation |
JP3748545B2 (ja) | 2002-09-19 | 2006-02-22 | 株式会社ナムコ | プログラム、情報記憶媒体及び画像生成装置 |
JP4537104B2 (ja) | 2004-03-31 | 2010-09-01 | キヤノン株式会社 | マーカ検出方法、マーカ検出装置、位置姿勢推定方法、及び複合現実空間提示方法 |
US7609899B2 (en) * | 2004-05-28 | 2009-10-27 | Ricoh Company, Ltd. | Image processing apparatus, image processing method, and recording medium thereof to smooth tile boundaries |
JP4617965B2 (ja) | 2005-03-31 | 2011-01-26 | ソニー株式会社 | 画像処理方法、その装置およびプログラム |
JP4770619B2 (ja) * | 2005-09-29 | 2011-09-14 | ソニー株式会社 | 表示画像補正装置、画像表示装置、表示画像補正方法 |
KR101367284B1 (ko) * | 2008-01-28 | 2014-02-26 | 삼성전자주식회사 | 시점 변화에 따른 영상 복원 방법 및 장치 |
CN101790103B (zh) * | 2009-01-22 | 2012-05-30 | 华为技术有限公司 | 一种视差计算方法及装置 |
KR101681095B1 (ko) * | 2010-08-24 | 2016-12-02 | 삼성전자주식회사 | 컬러 영상과 시점 및 해상도가 동일한 깊이 영상 생성 방법 및 장치 |
CN101937578B (zh) * | 2010-09-08 | 2012-07-04 | 宁波大学 | 一种虚拟视点彩色图像绘制方法 |
JP2012186781A (ja) * | 2011-02-18 | 2012-09-27 | Sony Corp | 画像処理装置および画像処理方法 |
-
2016
- 2016-11-21 JP JP2017553776A patent/JP6807034B2/ja active Active
- 2016-11-21 EP EP16870473.2A patent/EP3367326A4/en active Pending
- 2016-11-21 US US15/769,575 patent/US10846916B2/en active Active
- 2016-11-21 WO PCT/JP2016/084381 patent/WO2017094536A1/ja active Application Filing
- 2016-11-21 KR KR1020187013125A patent/KR102646019B1/ko active IP Right Grant
- 2016-11-21 CN CN201680069227.9A patent/CN108475408A/zh active Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011096136A1 (ja) * | 2010-02-02 | 2011-08-11 | コニカミノルタホールディングス株式会社 | 疑似画像生成装置および疑似画像生成方法 |
JP2012170067A (ja) * | 2011-02-14 | 2012-09-06 | Mitsubishi Electric Research Laboratories Inc | トレリス構造を用いてシーンの仮想画像を生成する方法とシステム |
JP2014211829A (ja) * | 2013-04-19 | 2014-11-13 | 富士通株式会社 | 画像処理装置、画像処理回路、画像処理プログラム、及び画像処理方法 |
Non-Patent Citations (1)
Title |
---|
See also references of EP3367326A4 * |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2020016975A (ja) * | 2018-07-24 | 2020-01-30 | Kddi株式会社 | 画像処理装置、方法及びプログラム |
Also Published As
Publication number | Publication date |
---|---|
US20180315239A1 (en) | 2018-11-01 |
CN108475408A (zh) | 2018-08-31 |
KR20180088801A (ko) | 2018-08-07 |
KR102646019B1 (ko) | 2024-03-12 |
US10846916B2 (en) | 2020-11-24 |
JPWO2017094536A1 (ja) | 2018-09-13 |
EP3367326A4 (en) | 2018-10-03 |
EP3367326A1 (en) | 2018-08-29 |
JP6807034B2 (ja) | 2021-01-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11210838B2 (en) | Fusing, texturing, and rendering views of dynamic three-dimensional models | |
JP7403528B2 (ja) | シーンの色及び深度の情報を再構成するための方法及びシステム | |
US20190098278A1 (en) | Image processing apparatus, image processing method, and storage medium | |
US9344695B2 (en) | Automatic projection image correction system, automatic projection image correction method, and non-transitory storage medium | |
US6529626B1 (en) | 3D model conversion apparatus and method | |
CN106682673A (zh) | 图像处理装置以及方法 | |
KR102152432B1 (ko) | 동적 3차원 모델을 이용한 실사 콘텐츠 생성 시스템 및 방법 | |
CN110458932B (zh) | 图像处理方法、装置、系统、存储介质和图像扫描设备 | |
TW201520973A (zh) | 三維立體模型之建立方法和裝置 | |
KR102199458B1 (ko) | 3차원 컬러 메쉬 복원 방법 및 장치 | |
WO2017094536A1 (ja) | 画像処理装置および画像処理方法 | |
KR20200031678A (ko) | 장면의 타일식 3차원 이미지 표현을 생성하기 위한 장치 및 방법 | |
JP2010121945A (ja) | 3次元形状生成装置 | |
JP2017092756A (ja) | 画像処理装置、画像処理方法、画像投影システムおよびプログラム | |
JP6601825B2 (ja) | 画像処理装置および2次元画像生成用プログラム | |
Kim et al. | Real‐Time Human Shadow Removal in a Front Projection System | |
JP2015197374A (ja) | 3次元形状推定装置及び3次元形状推定方法 | |
JP6835455B2 (ja) | 時系列の奥行き画像におけるデプス値を補正するプログラム、装置及び方法 | |
JP2017199285A (ja) | 情報処理装置、情報処理方法、プログラム | |
JP2003337953A (ja) | 画像処理装置および画像処理方法、並びにコンピュータ・プログラム | |
Zhu et al. | An intelligent projection system adapted to arbitrary surfaces | |
JP2020046744A (ja) | 画像処理装置、背景画像の生成方法およびプログラム | |
Heng et al. | Keyframe-based texture mapping for rgbd human reconstruction | |
US20240062463A1 (en) | Method of real-time generation of 3d imaging | |
US20230245378A1 (en) | Information processing apparatus, information processing method, and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16870473 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2017553776 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15769575 Country of ref document: US |
|
ENP | Entry into the national phase |
Ref document number: 20187013125 Country of ref document: KR Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |