JP2012038105A  Information processor, information processing method and program  Google Patents
 Publication number
 JP2012038105A (application JP2010178070A)
 Authority
 JP
 Japan
 Prior art keywords
 dimensional
 line segment
 line
 position
 model
 Prior art date
 Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
 Granted
Abstract
A holding unit holds a three-dimensional model of a target object; a position/orientation input unit inputs the position and orientation of the three-dimensional model; a selection unit selects line segments constituting the three-dimensional model; a projection unit projects, based on the position and orientation of the target object, the line segments and surfaces constituting the three-dimensional model onto a two-dimensional image, in which each pixel holds depth information up to the three-dimensional model, as projected line segments and projected surfaces; a deletion unit deletes, from the selected line segments, one of any projected line segments that overlap on the two-dimensional image; and an extraction unit extracts, based on the depth information and from the line segments remaining after deletion, line segments serving as edges that indicate features of the target object on the two-dimensional image.
[Selection] Figure 5
Description
The present invention relates to an information processing apparatus, an information processing method, and a program for extracting a three-dimensional line observed as an edge on an image from a three-dimensional shape model.
With the development of robot technology in recent years, robots are increasingly performing complicated tasks previously done by humans, such as the assembly of industrial products. Such a robot performs assembly by gripping parts with an end effector such as a hand. For the robot to grip a part, it is necessary to measure the relative position and orientation between the part to be gripped and the robot (hand). As a method for measuring position and orientation, measurement by model fitting, in which a three-dimensional shape model of an object is fitted to features detected from a two-dimensional image captured by a camera, is generally used. Non-Patent Document 1 discloses a method that uses edges as the features detected from a two-dimensional image. In this method, the shape of the object is represented by a set of three-dimensional lines. Then, assuming that the approximate position and orientation of the object are known, the position and orientation of the object are measured by correcting the approximate position and orientation so that the projected images of the three-dimensional lines fit the edges detected on the image. In general, three-dimensional lines forming the outline (jump edges) of the object are often used as the three-dimensional lines representing its shape, and in many cases three-dimensional lines forming roof edges are used inside the object. However, since the three-dimensional lines forming the outline change depending on the direction from which the object is observed, a process for extracting the outline three-dimensional lines according to the observation direction is necessary.
As a method for extracting three-dimensional lines forming an outline from a three-dimensional shape model representing the shape of an object, Patent Document 1 discloses a method using normal information of surfaces in the three-dimensional shape model. In this method, for each three-dimensional line in the three-dimensional shape model, it is calculated whether each of the two adjacent surfaces faces toward or away from the viewpoint; if the two differ, the line is determined to be a contour line, and three-dimensional lines are extracted in this way.
Further, in the method disclosed in Patent Document 2, the inner product of the normals between surfaces in the three-dimensional shape model and the inner product of the colors between surfaces are calculated, and based on these values the three-dimensional line forming the boundary between the surfaces is determined to be a contour.
Further, Non-Patent Document 2 discloses a method of calculating three-dimensional lines from a rendered image of a three-dimensional shape model, rather than extracting them directly from the model. In this method, the three-dimensional shape model is drawn by computer graphics (CG), and edge detection is performed on the depth buffer obtained as a result of the drawing. The depth buffer is an image in which the depth values from the viewpoint to each point on the model are stored; by detecting edges in the depth buffer, regions where the depth changes discontinuously can be calculated. By calculating the parameters of the three-dimensional line corresponding to each edge from the image coordinates and depth values of the pixels detected as edges in the depth buffer, the three-dimensional lines serving as edges on the image are calculated.
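The back-projection step underlying this class of method can be illustrated as follows. This is a minimal sketch, assuming a pinhole camera with focal lengths fx, fy and principal point (cx, cy); the function names are illustrative and not identifiers from the cited document.

```python
def unproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth value `depth` (distance along
    the optical axis) into a 3D point in the camera coordinate frame,
    using the pinhole model u = fx * X/Z + cx, v = fy * Y/Z + cy."""
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

def line_from_edge_pixels(p0, p1):
    """Given two unprojected edge pixels, recover a 3D line as a point
    plus a unit direction vector (position-and-direction representation
    of a three-dimensional line)."""
    d = [b - a for a, b in zip(p0, p1)]
    n = sum(c * c for c in d) ** 0.5
    return p0, tuple(c / n for c in d)
```

As the document notes, the accuracy of lines recovered this way is limited by the pixel sampling of the rendered image.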
T. Drummond and R. Cipolla, "Real-time visual tracking of complex structures," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 932-946, 2002.
H. Wuest, F. Wientapper, and D. Stricker, "Adaptive Model-based Tracking Using Analysis-by-Synthesis Techniques," International Conference on Computer Analysis of Images and Patterns (CAIP), 2007.
In methods that directly extract three-dimensional lines from a three-dimensional shape model, as disclosed in Patent Documents 1 and 2, it is assumed that the three-dimensional lines are uniquely defined in the model and that there are no cracks (gaps) between the surfaces representing the shape. However, when design CAD data described by parametric curved surfaces is converted into a triangular mesh model, the boundary curve or line segment between three-dimensional curved surfaces or between polygon patches is not necessarily defined as a single entity; it may be defined independently as belonging to each surface. Furthermore, cracks often exist between such independently defined lines. As a result, it is difficult to refer to adjacent surfaces or to calculate the angle between surfaces, and there is a problem that the three-dimensional lines themselves are difficult to extract accurately.
As a countermeasure for a three-dimensional shape model containing cracks, a method is conceivable in which the cracks are interpolated to first generate a crack-free model, from which the three-dimensional lines are then extracted.
Patent Document 3 discloses a technique for interpolating cracks by simplifying a model containing them. In this method, a range within which the mesh is approximated is determined, vertices within that range are merged, and the model is simplified so that the cracks are removed. However, since the original shape is simplified, there is a problem that the accuracy of the original shape data is not preserved. Also, if the threshold is set inappropriately, the accuracy of the model may be significantly impaired. In addition, since the threshold for an appropriate approximation range changes according to the scale, observation distance, and observation direction of the model, there is a limit to its application to various usage scenes.
On the other hand, the method using an image of the CG drawing result of the three-dimensional shape model, as in Non-Patent Document 2, can accurately extract contour three-dimensional lines even from a model containing cracks. Because the projected image of the three-dimensional shape model is sampled in units of pixels, cracks are in most cases absorbed within the pixel range, so contour three-dimensional lines can be extracted without problems even from a model containing cracks. However, since the parameters of the three-dimensional lines are calculated from the two-dimensional coordinates of the edges detected in the image, there is a problem that accuracy is lowered by sampling error due to the drawing. This is particularly noticeable when the resolution of the drawn image is low. From the viewpoint of not impairing the accuracy of the three-dimensional shape model, a method that extracts the three-dimensional line information directly from the three-dimensional shape model is desirable.
In view of the above problems, an object of the present invention is to appropriately determine whether a three-dimensional line is observed as an edge on an image, and to extract three-dimensional lines, without reducing the accuracy of their parameters, even from a three-dimensional shape model containing cracks and overlapping definitions.
An information processing apparatus according to the present invention that achieves the above object comprises:
holding means for holding a three-dimensional model indicating three-dimensional shape information of a target object;
position and orientation input means for inputting the position and orientation of the three-dimensional model;
selection means for selecting line segments constituting the three-dimensional model from the three-dimensional model;
projection means for projecting, based on the position and orientation of the target object, the line segments and surfaces constituting the three-dimensional model onto a two-dimensional image, in which each pixel holds depth information up to the three-dimensional model, as projected line segments and projected surfaces;
deletion means for deleting, from the line segments selected by the selection means, one of any projected line segments that overlap on the two-dimensional image; and
extraction means for extracting, based on the depth information and from the line segments remaining after deletion by the deletion means, line segments serving as edges that indicate features of the target object on the two-dimensional image.
According to the present invention, three-dimensional lines observed as edges can be extracted with high accuracy even if the three-dimensional shape model contains cracks and overlapping definitions. Further, by sampling the projected images of the three-dimensional lines on the two-dimensional image, the three-dimensional lines can be extracted so that they do not overlap on the two-dimensional plane. Furthermore, by referring to the shape around each three-dimensional line in the projected image of the three-dimensional shape model, it becomes possible to determine which three-dimensional lines form contours or roof edges.
(First embodiment)
In the present embodiment, a case will be described in which the information processing apparatus according to the present invention is applied as a three-dimensional line extraction apparatus for three-dimensional model fitting to a captured image. The three-dimensional line extraction apparatus extracts, from the line segments constituting a three-dimensional model indicating the three-dimensional shape information of a target object, the line segments observed as edges that indicate features of the target object on a two-dimensional image.
First, the hardware configuration of the three-dimensional line extraction apparatus 100 will be described with reference to FIG. The CPU 1 controls the operation of the entire apparatus, specifically the operation of each processing unit described later. The memory 2 stores programs and data used for the operation of the CPU 1. The bus 3 manages data transfer between the processing units. The interface 4 connects the bus 3 to various devices. The external storage device 5 stores programs and data to be read by the CPU 1. The keyboard 6 and the mouse 7 constitute an input device for starting programs and specifying their operation. The display unit 8 displays the results of the processing.
With reference to FIG. 1B, the configuration of the three-dimensional line extraction apparatus 100, which extracts three-dimensional lines observed as edges from three-dimensional model data 10 representing the shape of an observation target object, will be described. The three-dimensional line extraction apparatus 100 includes a three-dimensional model storage unit 101, an observation direction input unit 102, a three-dimensional line extraction unit 103, a model drawing unit 104, a sampling unit 105, and an edge determination unit 106.
The three-dimensional line extraction apparatus 100 extracts three-dimensional lines observed as edges on an image from the three-dimensional model data 10, representing the shape of the observation target object, stored in the three-dimensional model storage unit 101. The extracted three-dimensional lines are input to the position/orientation estimation apparatus 11 and used for object position/orientation estimation processing.
Next, each processing unit constituting the three-dimensional line extraction apparatus 100 will be described.
The three-dimensional model storage unit 101 stores the three-dimensional model data 10 and is connected to the three-dimensional line extraction unit 103 and the model drawing unit 104. The three-dimensional model data 10 is a model representing the shape of the object from which three-dimensional lines are to be extracted. For example, it may be described as a mesh model constituted by information on a plurality of surfaces, or its shape may be described by a parametric representation such as NURBS. The three-dimensional model data 10 may be expressed by any method as long as it includes geometric information representing the shape of the target object. In the present embodiment, a mesh model constituted by surfaces and by the points and line segments defining them is used as the three-dimensional model data 10.
The observation direction input unit 102 inputs the observation direction of the three-dimensional model data 10. Here, the observation direction means the position and orientation of the three-dimensional shape model with respect to the viewpoint from which the three-dimensional model is observed. In the present embodiment, the position/orientation estimation apparatus 11 measures continuously along the time axis, and the measurement value obtained by the position/orientation estimation apparatus 11 at the previous time is used as the position and orientation of the three-dimensional model data 10 at the next time.
The three-dimensional line extraction unit 103 extracts three-dimensional lines from the three-dimensional model data 10. A three-dimensional line represents straight-line information described by a position and a direction in three-dimensional space. In the present embodiment, local straight lines without length, described by a position and a direction, are extracted as three-dimensional lines. The representation format of a three-dimensional line may be any geometric information that can describe a position and a direction: a line may be represented by a passing position and a direction, or as parametric data.
As shown in FIG. 2, in the process of extracting three-dimensional lines from a mesh model, the projected images of the line segments constituting the mesh model are divided at regular intervals on the two-dimensional image, and a local straight line is extracted by assigning a position and direction to each division point. When a model whose shape is described parametrically is used, three-dimensional lines may be extracted by dividing the parametric curves. The detailed processing method of three-dimensional line extraction will be described later.
The model drawing unit 104 performs CG drawing processing of the three-dimensional model data 10. The graphic library used for drawing may be a widely used one such as OpenGL or DirectX, or one developed independently; any method may be used as long as the model format stored in the three-dimensional model storage unit 101 can be projected onto a two-dimensional image. In this embodiment, OpenGL is used as the graphic library.
The sampling unit 105 selects, from the three-dimensional lines extracted by the three-dimensional line extraction unit 103, three-dimensional lines whose projected images on the two-dimensional image are not close to those of other three-dimensional lines. If overlapping vertices or sides, or cracks, exist in the three-dimensional shape model, the three-dimensional lines extracted from it also overlap, as shown in FIG. 3A. Therefore, as shown in FIG. 3B, a three-dimensional line overlap flag map is created to check whether another three-dimensional line exists in the vicinity of the target three-dimensional line. Then, by selecting three-dimensional lines so that each is unique within its neighborhood, the three-dimensional lines are extracted without overlapping on the two-dimensional screen. The detailed sampling processing method will be described later.
The edge determination unit 106 further selects, from the three-dimensional lines selected by the sampling unit 105, the three-dimensional lines observed as edges. Here, as shown in FIG. 7, a three-dimensional line serving as an edge refers to a three-dimensional line at a portion where the shape around it changes discontinuously in a step shape or in a roof shape. The detailed edge determination processing will be described later.
Next, the processing procedure of the three-dimensional line extraction method in the present embodiment will be described with reference to the flowchart shown in FIG. 4.
First, in step S401, initialization is executed. The observation direction input unit 102 inputs, to the three-dimensional line extraction apparatus 100, the direction from which the three-dimensional model data 10 stored in the three-dimensional model storage unit 101 is observed. As described above, in this embodiment the observation direction input unit 102 acquires the position and orientation for observing the three-dimensional model from the position/orientation estimation apparatus 11. Since the processing of the model drawing unit 104 also requires the camera internal parameters (focal length and principal point position) used for the drawing process, these are acquired from the position/orientation estimation apparatus 11 together with the observation direction. This completes the initialization in step S401, and the process proceeds to step S402.
In step S402, the three-dimensional line extraction unit 103 extracts three-dimensional lines from the three-dimensional model data 10 by projecting and dividing the line segment data constituting each surface in the model. Specifically, all line segment data in the three-dimensional model is first projected onto the image using the position and orientation of the viewpoint observing the three-dimensional model data 10 acquired in step S401 and the camera internal parameters, and the projected line segments on the two-dimensional image are calculated. The projected image of a line segment is also a line segment on the image.
Next, as described with reference to FIG. 2, the line segment data in the three-dimensional model is divided so that the projected line segments are split at equal intervals on the image, and a three-dimensional line is extracted by assigning three-dimensional line parameters to each division point. Each three-dimensional line holds the three-dimensional coordinates of its position and the three-dimensional direction of the divided segment; the two-dimensional coordinates of the projection result and the two-dimensional direction of the projected line segment may additionally be held. In this way, the data format indicating a line segment is described by the position information of the division points, obtained by dividing the line segment so that the corresponding projected line segments are equally spaced, and by the three-dimensional direction of the line segment.
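The division of a line segment described above can be sketched as follows. This is a minimal illustration of the idea (equal spacing in image space, parameters assigned per division point); the function name and the spacing parameter are illustrative, not identifiers from the embodiment.

```python
import math

def divide_segment(p3a, p3b, p2a, p2b, spacing):
    """Divide a model line segment so that its projected image is split at
    (approximately) equal pixel intervals. p3a/p3b are the 3D endpoints,
    p2a/p2b their 2D projections. Returns, for each division point, the
    interpolated 3D position and the segment's 3D direction."""
    length2d = math.dist(p2a, p2b)
    n = max(1, int(length2d // spacing))       # number of equal intervals
    d3 = [b - a for a, b in zip(p3a, p3b)]
    norm = math.sqrt(sum(c * c for c in d3))
    direction = tuple(c / norm for c in d3)    # 3D direction of the segment
    points = []
    for i in range(n + 1):
        t = i / n                              # interpolation parameter
        points.append((tuple(a + t * d for a, d in zip(p3a, d3)), direction))
    return points
```

Each returned entry corresponds to one local straight line (position plus direction) in the three-dimensional line list.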
When the extraction of three-dimensional lines from all the line segment data in the three-dimensional model is completed and the extraction results are stored as a three-dimensional line list, the process of step S402 ends and the process proceeds to step S403. In step S403, the model drawing unit 104 performs CG drawing of the three-dimensional model data 10 using the position and orientation of the viewpoint observing the three-dimensional model data 10 acquired in step S401 and the camera internal parameters. Here, the CG drawing process projects the surface information of the three-dimensional shape model onto the two-dimensional image based on the position and orientation of the object acquired in step S401 and the camera internal parameters, and outputs the result as an image. At this time, the maximum and minimum distances from the viewpoint to the model are set, and the model outside this range is not drawn, thereby reducing the calculation cost of the drawing process. This process, called clipping, is commonly performed. By the CG rendering of the three-dimensional model data 10, a depth buffer storing the depth values up to the three-dimensional model data 10 is generated for the two-dimensional image. When the CG drawing of the three-dimensional model data 10 is finished, the process of step S403 ends and the process proceeds to step S404.
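A practical note, not stated in the embodiment: depth buffers produced by perspective rendering (e.g. in OpenGL, as used here) store nonlinear values in [0, 1], so if depth comparisons in the later steps are made in eye space, the buffer values must first be linearized using the near/far clipping planes set during the clipping step. A hedged sketch of the standard conversion, assuming an OpenGL-style perspective projection:

```python
def depth_buffer_to_eye_z(d, near, far):
    """Convert a value d in [0, 1] read from an OpenGL-style depth buffer
    (perspective projection with the given near/far clipping planes) into
    a positive eye-space depth, via the standard inverse mapping
    z_eye = near * far / (far - d * (far - near))."""
    return (near * far) / (far - d * (far - near))
```

With near = 0.1 and far = 100.0, a buffer value of 0.0 maps back to the near plane and 1.0 to the far plane.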
That is, based on the position and orientation of the target object, the line segments and surfaces constituting the three-dimensional model are projected, as projected line segments and projected surfaces, onto the two-dimensional image in which each pixel holds depth information up to the three-dimensional model.
In step S404, the sampling unit 105 samples three-dimensional lines from the three-dimensional line list extracted in step S402 so that no overlap occurs on the two-dimensional screen. This sampling process will be described with reference to FIG. 5.
FIG. 5 is a flowchart showing the processing procedure of the three-dimensional line sampling method according to the present embodiment. This process is performed for each three-dimensional line extracted in step S402.
First, in step S501, one three-dimensional line is selected, and it is determined whether the three-dimensional line lies on the surface of the three-dimensional model or is hidden behind it. The three-dimensional line list extracted in step S402 includes three-dimensional lines that are occluded by surfaces of the three-dimensional model. Therefore, the three-dimensional coordinates of the three-dimensional line are compared with the depth buffer values calculated in step S403 to determine whether the three-dimensional line lies behind the depth buffer (a hidden line). That is, the three-dimensional position of each extracted line segment is compared with the position indicated by the depth information, and line segments existing behind the position indicated by the depth information are removed as hidden lines.
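The hidden-line test above can be sketched as follows, a minimal illustration assuming each line sample carries its projected pixel and its depth from the viewpoint; the function name and the tolerance parameter are illustrative.

```python
def remove_hidden_lines(samples, depth_buffer, eps=1e-3):
    """samples: list of (u, v, depth) for each 3D line division point,
    where (u, v) is the projected pixel and depth the point's distance
    from the viewpoint. depth_buffer[v][u] holds the drawn model's depth
    at that pixel. Points lying more than eps behind the drawn surface
    are removed as hidden lines; eps keeps points lying exactly on the
    rendered surface from being discarded due to rounding."""
    visible = []
    for u, v, depth in samples:
        if depth <= depth_buffer[v][u] + eps:
            visible.append((u, v, depth))
    return visible
```

The tolerance must be chosen relative to the depth-buffer precision; too small a value removes valid on-surface samples, too large a value keeps genuinely occluded ones.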
Next, in step S502, it is determined whether the three-dimensional line was identified in step S501 as hidden behind a surface of the three-dimensional model. If the line is hidden behind a surface (step S502; YES), the process proceeds to step S505, where the three-dimensional line is removed from the three-dimensional line list. If the line lies on the surface of the three-dimensional model (step S502; NO), the process proceeds to step S503.
In step S503, overlap of the three-dimensional line is checked. For the overlap determination, an overlap flag map is used to check whether another three-dimensional line exists in the vicinity of the target three-dimensional line. The overlap flag map is a map in which a flag indicating true is stored where a three-dimensional line exists and a flag indicating false is stored where none exists; the map is initialized with false flags in advance. Then, by referring to and updating the overlap flag map according to the two-dimensional coordinates of each three-dimensional line, it is determined whether another three-dimensional line exists in the vicinity of the two-dimensional coordinates of the target three-dimensional line.
Next, in step S504, if it was determined in step S503 that another three-dimensional line exists in the vicinity (step S504; YES), the process proceeds to step S505, where the three-dimensional line is removed from the three-dimensional line list. If it is determined that no other three-dimensional line exists in the vicinity (step S504; NO), a true flag is stored in the overlap flag map at the two-dimensional coordinate position of the three-dimensional line, and the process proceeds to step S506. The process of removing a three-dimensional line from the three-dimensional line list is, in other words, a process of deleting, among the projected line segments corresponding to the selected line segments, any projected line segment lying within a predetermined region of another projected line segment on the two-dimensional image, as an overlapping line segment, leaving only one.
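The overlap-flag-map loop of steps S503 to S505 can be sketched as follows, a minimal illustration in which the map size, neighborhood radius, and function name are illustrative choices.

```python
def sample_unique(lines, width, height, radius=1):
    """lines: list of (u, v) projected pixel positions of 3D line samples.
    A boolean overlap flag map, initialized to False, is consulted: a
    sample is kept only if no already-kept sample is flagged within
    `radius` pixels of it, and its own position is then flagged True.
    Duplicated lines caused by overlapping vertices/sides or cracks are
    thereby thinned to a single representative."""
    flag = [[False] * width for _ in range(height)]
    kept = []
    for u, v in lines:
        neighbourhood = [
            flag[y][x]
            for y in range(max(0, v - radius), min(height, v + radius + 1))
            for x in range(max(0, u - radius), min(width, u + radius + 1))
        ]
        if not any(neighbourhood):
            flag[v][u] = True
            kept.append((u, v))
    return kept
```

Note that the result depends on iteration order: of two coincident lines, whichever is visited first is the one retained.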
In step S506, it is determined whether all three-dimensional lines in the three-dimensional line list have been processed. If a three-dimensional line remains unprocessed (step S506; NO), the process returns to step S501 to process the next three-dimensional line. When all the three-dimensional lines have been processed (step S506; YES), the process of step S404 in FIG. 4 ends and the process proceeds to step S405.
In step S405, the edge determination unit 106 further selects, from the three-dimensional line list selected in step S404, the three-dimensional lines that serve as edges. Specifically, for each line segment remaining after the removal processing, the depth information of a predetermined number of pixels is first obtained along the direction orthogonal to the line segment, starting from the position indicated by a point constituting the line segment. Then, the second derivative of the depth information with respect to the orthogonal direction is calculated, and it is determined whether the absolute value of this second derivative is greater than or equal to a threshold. When it is, the line segment is extracted as a line segment serving as an edge that indicates a feature of the target object on the two-dimensional image.
The detailed processing procedure of the edge selection method for three-dimensional lines in step S405 will be described with reference to FIG. 6.
First, in step S601, attention is paid to one line in the three-dimensional line list selected in step S404, and depth value information of the region surrounding the three-dimensional line is acquired from the depth buffer obtained in step S403. As the depth value information, as shown in FIGS. 7A and 7B, the distance value at the projected position of the three-dimensional line and the distance values at the positions ±2 pixels along the normal direction of the projected line are acquired. The number and interval of the sampled distance values may be adjusted according to the level of detail of the model and the drawing resolution.
Next, in step S602, as shown in FIG. 7C, second derivative values are calculated for the distance values in the region surrounding the three-dimensional line. Then, in step S603, edge determination is performed using the second derivative of the calculated distance values: it is determined whether the absolute value of the second derivative is less than or equal to a threshold. When the absolute value of the second derivative is larger than the threshold (step S603; NO), it is determined that the position is a boundary where the distance value changes discontinuously, that is, a three-dimensional line serving as an edge, and the process proceeds to step S605. On the other hand, when the absolute value of the second derivative is less than or equal to the threshold (step S603; YES), it is determined that the line is not an edge, and the process proceeds to step S604.
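The second-derivative test of steps S602 and S603 can be sketched as follows, a minimal illustration using the five samples of the ±2-pixel profile described above; the discrete second-difference stencil and the function name are illustrative choices.

```python
def is_edge(depths, threshold):
    """depths: depth-buffer values sampled along the normal of the
    projected line at -2, -1, 0, +1, +2 pixels (the centre entry is the
    line's own projected position). The discrete second derivative at
    the centre, d[i-1] - 2*d[i] + d[i+1], is large in magnitude where
    the depth changes discontinuously in a step (jump edge) or roof
    shape, so the line is classified as an edge when its absolute value
    exceeds the threshold."""
    c = len(depths) // 2
    second = depths[c - 1] - 2.0 * depths[c] + depths[c + 1]
    return abs(second) > threshold
```

For example, a step-shaped profile such as [1, 1, 1, 5, 5] yields a large second difference at the centre, while a flat or linearly sloping profile yields zero and is rejected.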
In step S604, the three-dimensional line determined not to be an edge is removed from the three-dimensional line list. In step S605, it is determined whether the above processing has been performed on all three-dimensional lines in the three-dimensional line list. If not (step S605; NO), the process returns to step S601; when all three-dimensional lines have been processed (step S605; YES), the process ends.
Through the above processing, a list of three-dimensional lines serving as edges is extracted from the three-dimensional model data 10. The resulting three-dimensional line list is output to the position/orientation estimation apparatus 11 and used for model fitting processing. The model fitting process can be performed, for example, by the method disclosed in Non-Patent Document 1. That is, the three-dimensional lines output from the three-dimensional line extraction apparatus 100 are first projected, based on the approximate position and orientation of the object, onto a two-dimensional image in which the object to be fitted is captured. Then, one-dimensional edge detection is performed in the vicinity of the projected images, and the edge on the image corresponding to each three-dimensional line is detected. Next, the position and orientation of the target object can be calculated by repeatedly correcting the position and orientation through nonlinear optimization based on the correspondences between the three-dimensional lines and the edges on the image.
As described above, in the present embodiment, by extracting three-dimensional lines from the three-dimensional model so that there is no overlap on the two-dimensional screen, and by performing edge determination of the three-dimensional lines using the depth buffer, it is possible to cope with cracks and overlapping vertices in the three-dimensional shape model. In addition, by calculating the coordinates of the three-dimensional lines directly from the coordinates of the three-dimensional shape model, three-dimensional lines serving as edges can be extracted while maintaining the accuracy of the original three-dimensional shape model.
(Second embodiment)
In the first embodiment, in step S405, whether a three-dimensional line forms a contour is determined by calculating the second derivative of the distance values of the depth buffer around the three-dimensional line. However, the contour determination of three-dimensional lines is not limited to this. For example, an edge image generated by performing edge detection on the entire depth buffer may be used. Specifically, in step S403, after the depth buffer is obtained by CG drawing of the three-dimensional shape model, edge detection processing is performed on the entire depth buffer, using, for example, the well-known Canny edge detector. In the edge determination processing of step S405, the edge detection result of the depth buffer corresponding to the two-dimensional position of each three-dimensional line is referred to, and if that position is an edge on the depth buffer, the three-dimensional line is determined to be an edge. The edge determination of three-dimensional lines is not limited to the above method; any method may be used as long as the determination is performed based on the shape discontinuity around the three-dimensional line.
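The whole-buffer variant can be sketched as follows. This is a simplified stand-in for the Canny detector the embodiment cites: it thresholds the central-difference gradient magnitude and omits Canny's Gaussian smoothing, non-maximum suppression, and hysteresis; the function name and threshold are illustrative.

```python
def depth_edge_map(depth, threshold):
    """Build a binary edge map over a depth buffer (a list of rows of
    depth values) by thresholding the central-difference gradient
    magnitude at each interior pixel. At edge-determination time, a 3D
    line is then kept if the map is True at its projected 2D position.
    Border pixels are left False for simplicity."""
    h, w = len(depth), len(depth[0])
    edges = [[False] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (depth[y][x + 1] - depth[y][x - 1]) / 2.0  # horizontal gradient
            gy = (depth[y + 1][x] - depth[y - 1][x]) / 2.0  # vertical gradient
            edges[y][x] = (gx * gx + gy * gy) ** 0.5 > threshold
    return edges
```

A production implementation would use a full detector such as Canny, as the text suggests, to obtain thin, well-localized edge responses.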
(Third Embodiment)
In the first embodiment, overlaps between three-dimensional lines on the two-dimensional screen are removed by creating the overlap flag map on the two-dimensional screen in step S404. However, overlap removal is not limited to this. For example, the determination may be made by computing the two-dimensional distance between projected lines. Specifically, the two-dimensional distance between the projection of each already accepted three-dimensional line and the projection of the line under consideration is computed, and if another three-dimensional line lies within a threshold distance, the line under consideration is removed from the three-dimensional line list. Because this neighborhood search between three-dimensional lines is, in its basic form, an exhaustive search, the three-dimensional lines may be stored in a kd-tree to improve search efficiency. Any method that can determine proximity between three-dimensional lines on the two-dimensional screen may be used; the method is not particularly limited.
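The kd-tree variant can be sketched as below. Representing each projected line by its 2D midpoint is a simplification made for the example, and the use of SciPy's `cKDTree` is an assumption about tooling, not something the text prescribes:

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_2d_overlaps(proj_midpoints, thresh):
    """Overlap removal by 2D distance: a kd-tree over the projected
    line midpoints replaces the exhaustive neighborhood search.
    Lines accepted earlier win over later ones, matching the 'already
    determined projection' wording in the text."""
    tree = cKDTree(proj_midpoints)
    keep, removed = [], set()
    for i in range(len(proj_midpoints)):
        if i in removed:
            continue
        keep.append(i)
        # every later line projected within `thresh` pixels is a duplicate
        for j in tree.query_ball_point(proj_midpoints[i], thresh):
            if j > i:
                removed.add(j)
    return keep
```

The kd-tree reduces each neighborhood query from a linear scan to a logarithmic-depth traversal, which matters when the model contributes many thousands of line segments.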
(Fourth Embodiment)
In the first embodiment, three-dimensional lines are processed and output as local line data having no length. However, the data format of a three-dimensional line is not limited to this; a line may also be handled as a segment of finite length described by a start point and an end point. For example, the data format may describe each segment by the position information of its two end points. Specifically, in step S402, instead of the zero-length line data extracted from the three-dimensional model data 10, each line segment constituting the three-dimensional model data 10 is registered as a three-dimensional line. At this time, local line data obtained by dividing each segment may also be held, using the same method as in step S402. The subsequent processing is then performed on each local piece of a segment, as in the first embodiment. When the processing of step S405 ends, any three-dimensional segment whose local line data is determined not to be an edge is deleted, and only the three-dimensional lines that serve as edges are output to the position/orientation estimation apparatus 11. As described above, the line data handled as a three-dimensional line may be described in any format that can represent a three-dimensional straight line observed as a contour or a roof edge and that is supported by the position/orientation estimation apparatus 11.
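One possible realization of this finite-segment format, with the subdivision into local pieces described above, might look like the following. The class and method names are illustrative assumptions:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Segment3D:
    """A finite 3D line segment described by its start and end points,
    one possible realization of the data format discussed above."""
    start: np.ndarray
    end: np.ndarray

    def split(self, n):
        """Divide the segment into n local sub-segments so that each
        piece can pass through the edge determination independently."""
        ts = np.linspace(0.0, 1.0, n + 1)
        pts = self.start + ts[:, None] * (self.end - self.start)
        return [Segment3D(pts[i], pts[i + 1]) for i in range(n)]
```

Keeping the parent segment's end points exact while classifying only its local pieces preserves the original model's coordinate accuracy, in line with the first embodiment's design.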
(Fifth Embodiment)
In the first embodiment, the three-dimensional lines forming the contour for the current observation direction are extracted and output to the position/orientation estimation apparatus 11 at runtime. However, three-dimensional line data observed from various directions may instead be created in advance and supplied to the position/orientation estimation apparatus 11. Specifically, several geodesic domes surrounding the target object are set with different radii, so that observation directions and observation distances of the target object are sampled uniformly over each dome. Then, using the position and orientation corresponding to each vertex of each geodesic dome as an observation viewpoint, the three-dimensional lines serving as edges are extracted, and the line data are stored together with the corresponding observation direction and observation distance. At runtime, the position/orientation estimation apparatus 11 selects, from the per-direction line data, the set closest to the position and orientation of the target object at the previous time step and uses it for position and orientation estimation. Compared with selecting the three-dimensional lines at runtime, this method reduces the runtime computation at the cost of a larger amount of stored data; either method may be chosen according to the usage scene.
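The nearest-viewpoint lookup can be sketched as follows. The combined angle plus log-distance score is an illustrative choice of similarity measure, not one prescribed by the text:

```python
import numpy as np

def nearest_view(view_dirs, view_dists, cur_dir, cur_dist):
    """Select the precomputed geodesic-dome viewpoint whose stored
    observation direction (unit vector) and observation distance best
    match the current approximate pose."""
    ang = np.arccos(np.clip(view_dirs @ cur_dir, -1.0, 1.0))
    dist_err = np.abs(np.log(np.asarray(view_dists) / cur_dist))
    return int(np.argmin(ang + dist_err))
```

At runtime this selection replaces the whole line-extraction pass, which is exactly the computation/storage trade-off described above.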
(Sixth Embodiment)
In the first embodiment, the three-dimensional model data 10 is mesh data composed of points, surfaces, and line segments. However, the data format of the three-dimensional model data 10 is not limited to this; it may be, for example, a parametric model expressed by NURBS surfaces. In this case, step S402 requires a process for calculating three-dimensional line segments from the parametric model. This can be done, for example, by detecting edges in a normal map of the parametric model and extracting the coordinates and directions of the loci where the normal changes abruptly. As described above, any data format capable of expressing the shape of the target object may be used as the three-dimensional model data 10.
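Detecting abrupt normal changes in a normal map might look like the following sketch; the 30-degree threshold and the neighbor-comparison scheme are assumptions made for the example:

```python
import numpy as np

def normal_discontinuities(normals, angle_deg=30.0):
    """Mark pixels of an (H, W, 3) normal map where the surface normal
    turns sharply relative to the right or lower neighbor; such loci
    approximate the 3D lines to be extracted from a parametric
    (e.g. NURBS) model."""
    n = normals / np.linalg.norm(normals, axis=-1, keepdims=True)
    cos_t = np.cos(np.radians(angle_deg))
    horiz = (n[:, :-1] * n[:, 1:]).sum(-1) < cos_t   # compare with right neighbor
    vert = (n[:-1, :] * n[1:, :]).sum(-1) < cos_t    # compare with lower neighbor
    edges = np.zeros(normals.shape[:2], dtype=bool)
    edges[:, :-1] |= horiz
    edges[:-1, :] |= vert
    return edges
```

The flagged pixels would then be grouped into line loci whose 3D coordinates and directions are read back from the parametric surface.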
(Other embodiments)
The present invention can also be realized by the following processing: software (a program) that implements the functions of the above-described embodiments is supplied to a system or apparatus via a network or various storage media, and a computer (a CPU, MPU, or the like) of the system or apparatus reads and executes the program.
Claims (8)
holding means for holding a three-dimensional model indicating three-dimensional shape information of a target object;
position and orientation input means for inputting a position and orientation of the three-dimensional model;
selection means for selecting, from the three-dimensional model, line segments constituting the three-dimensional model;
projection means for projecting, based on the position and orientation of the target object, the line segments and surfaces constituting the three-dimensional model as projected line segments and projected surfaces onto a two-dimensional image in which each pixel holds depth information up to the three-dimensional model;
deletion means for deleting, from the line segments selected by the selection means, line segments whose projections overlap on the two-dimensional image, leaving one of them; and
extraction means for extracting, based on the depth information, from among the line segments that were selected by the selection means and remain after deletion by the deletion means, a line segment serving as an edge indicating a feature of the target object on the two-dimensional image,
An information processing apparatus characterized by comprising the above.  The information processing apparatus according to claim 1, further comprising removing means for comparing the three-dimensional position of a line segment extracted by the extraction means with the position indicated by the depth information, and removing, as a hidden line, any extracted line segment that exists behind the position indicated by the depth information.
The information processing apparatus according to claim 1, wherein the deletion means deletes, among the projected line segments corresponding to the line segments selected by the selection means, those lying within a predetermined area of another projected line segment on the two-dimensional image, while leaving one of them as the remaining overlapping line segment.  The information processing apparatus according to any one of claims 1 to 3, wherein the extraction means includes:
acquisition means for acquiring, for each line segment selected by the selection means and remaining after deletion by the deletion means, the depth information of a predetermined number of pixels in the direction orthogonal to the line segment from the positions indicated by the points constituting the line segment;
calculation means for calculating the second derivative of the depth information acquired by the acquisition means with respect to the orthogonal direction; and
determination means for determining whether or not the absolute value of the second derivative is greater than or equal to a threshold value,
and wherein, when the determination means determines that the absolute value of the second derivative is greater than or equal to the threshold value, the line segment is extracted as a line segment serving as an edge indicating a feature of the target object on the two-dimensional image.  The information processing apparatus according to claim 1, wherein the data format indicating a line segment is described by the position information of dividing points obtained by dividing the line segment so that the corresponding projected line segments are equally spaced, and by the three-dimensional direction of the line segment.
 The information processing apparatus according to claim 1, wherein the data format indicating a line segment is described by the position information of the start point and the end point constituting the line segment.
An information processing method for extracting, from the line segments constituting a three-dimensional model indicating three-dimensional shape information of a target object, the line segments observed as edges indicating features of the target object on a two-dimensional image, comprising:
a position and orientation input step in which position and orientation input means inputs the position and orientation of the three-dimensional model held in holding means;
a selection step in which selection means selects, from the three-dimensional model, line segments constituting the three-dimensional model;
a projection step in which projection means projects, based on the position and orientation of the target object, the line segments and surfaces constituting the three-dimensional model as projected line segments and projected surfaces onto a two-dimensional image in which each pixel holds depth information up to the three-dimensional model;
a deletion step in which deletion means deletes, from the line segments selected in the selection step, line segments whose projections overlap on the two-dimensional image, leaving one of them; and
an extraction step in which extraction means extracts, based on the depth information, from among the line segments that were selected in the selection step and remain after the deletion step, a line segment serving as an edge indicating a feature of the target object on the two-dimensional image,
An information processing method characterized by comprising the above.  A program for causing a computer to execute the information processing method according to claim 7.
Priority Applications (1)
Application Number  Priority Date  Filing Date  Title 

JP2010178070A JP5620741B2 (en)  20100806  20100806  Information processing apparatus, information processing method, and program 
Publications (2)
Publication Number  Publication Date 

JP2012038105A true JP2012038105A (en)  20120223 
JP5620741B2 JP5620741B2 (en)  20141105 
Family
ID=45850042
Family Applications (1)
Application Number  Title  Priority Date  Filing Date 

JP2010178070A Active JP5620741B2 (en)  20100806  20100806  Information processing apparatus, information processing method, and program 
Country Status (1)
Country  Link 

JP (1)  JP5620741B2 (en) 
Cited By (1)
Publication number  Priority date  Publication date  Assignee  Title 

WO2018185807A1 (en) *  20170403  20181011  富士通株式会社  Distance information processing device, distance information processing method, and distance information processing program 
Citations (8)
Publication number  Priority date  Publication date  Assignee  Title 

JPH0520464A (en) *  19910709  19930129  Nec Corp  Display element selection/display device 
JPH05307589A (en) *  19920417  19931119  Sanyo Electric Co Ltd  System for data transfer between cad systems 
JPH10188030A (en) *  19961031  19980721  Chokosoku Network Computer Gijutsu Kenkyusho:Kk  Contour extracting method 
JPH11260812A (en) *  19980313  19990924  Nec Corp  System and method for mesh generation 
JP2002319031A (en) *  20010424  20021031  Taito Corp  Polygon model outline detection method in image processor 
JP2005050336A (en) *  20030728  20050224  Dassault Systemes  Method for providing vector image with removed black bar 
JP2005100177A (en) *  20030925  20050414  Sony Corp  Image processor and its method 
JP2010079452A (en) *  20080924  20100408  Canon Inc  Position and orientation measurement apparatus and method thereof 

Legal Events
Date  Code  Title  Free format text

20130806  A621  Written request for application examination  JAPANESE INTERMEDIATE CODE: A621
20140519  A977  Report on retrieval  JAPANESE INTERMEDIATE CODE: A971007
20140606  A131  Notification of reasons for refusal  JAPANESE INTERMEDIATE CODE: A131
20140728  A521  Written amendment  JAPANESE INTERMEDIATE CODE: A523
(no date)  TRDD  Decision of grant or rejection written
20140822  A01  Written decision to grant a patent or to grant a registration (utility model)  JAPANESE INTERMEDIATE CODE: A01
20140919  A61  First payment of annual fees (during grant procedure)  JAPANESE INTERMEDIATE CODE: A61
(no date)  R151  Written notification of patent or utility model registration  Ref document number: 5620741; Country of ref document: JP; JAPANESE INTERMEDIATE CODE: R151