WO2021014542A1 - Template creation device, object recognition processing device, template creation method, object recognition processing method, and program - Google Patents

Template creation device, object recognition processing device, template creation method, object recognition processing method, and program Download PDF

Info

Publication number
WO2021014542A1
WO2021014542A1 (PCT/JP2019/028691, JP2019028691W)
Authority
WO
WIPO (PCT)
Prior art keywords
target object
template
data
viewpoint
symmetry
Prior art date
Application number
PCT/JP2019/028691
Other languages
French (fr)
Japanese (ja)
Inventor
Yoshinori Konishi
Original Assignee
OMRON Corporation
Priority date
Filing date
Publication date
Application filed by OMRON Corporation
Priority to PCT/JP2019/028691 (WO2021014542A1)
Priority to JP2021534434A (JP7327484B2)
Publication of WO2021014542A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B21/00 Measuring arrangements or details thereof, where the measuring technique is not covered by the other groups of this subclass, unspecified or not relevant
    • G01B21/20 Measuring arrangements or details thereof, where the measuring technique is not covered by the other groups of this subclass, unspecified or not relevant, for measuring contours or curvatures, e.g. determining profile

Definitions

  • The present invention relates to a template creation device, an object recognition processing device, a template creation method, an object recognition processing method, and a program.
  • Template matching is a method of detecting an object contained in an input image by preparing a model (template) of the object to be recognized in advance and evaluating the degree of matching of image features between the input image and the model.
  • Object recognition by template matching is used in a wide range of fields such as inspection and picking in FA (Factory Automation), robot vision, and surveillance cameras.
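To make the matching criterion concrete, the following is a minimal sliding-window matcher using normalized cross-correlation. This is an illustrative NumPy sketch of the general technique, not the implementation described in this application; the function name and scoring choice are assumptions.

```python
import numpy as np

def match_template(image: np.ndarray, template: np.ndarray):
    """Slide the template over the image and score each position by
    normalized cross-correlation; return the best (row, col) and its score."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.linalg.norm(t)
    best_pos, best_score = (0, 0), -np.inf
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            window = image[r:r + th, c:c + tw]
            wz = window - window.mean()
            denom = np.linalg.norm(wz) * t_norm
            # Flat (zero-variance) windows cannot match; score them 0.
            score = float((wz * t).sum() / denom) if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score
```

Production systems would use a pyramid search (as the image pyramid creation unit 311 below suggests) rather than this brute-force scan.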
  • Such a template can be created using 3D CAD data of the object to be recognized (hereinafter also referred to as the "target object").
  • Alternatively, a template can be created using three-dimensional measurement data obtained by three-dimensionally measuring the actual target object.
  • Conventionally, in order to create a complete 3D model of the target object, it has been necessary to acquire 3D measurement data by photographing the target object from all angles. The number of required shots therefore increases, and measurement takes a long time.
  • For example, Patent Document 1 proposes a technique in which three-dimensional measurement data equivalent to that obtained by photographing a target object from various angles is acquired by photographing a plurality of target objects arranged in various postures.
  • The present invention has been made in view of the above problems, and aims to reduce the number of shots required when creating a template for object recognition using three-dimensional measurement data of a target object, and to shorten the time required for template creation.
  • One aspect of the present invention is a template creation device that creates a template corresponding to a target object based on three-dimensional measurement data of the target object, comprising:
  • a 3D measurement data acquisition unit that acquires 3D measurement data of the target object;
  • a measurement viewpoint position/posture data acquisition unit that acquires data on the position/orientation of the measurement viewpoint, i.e., the viewpoint from which the target object is three-dimensionally measured;
  • a symmetry parameter setting unit that sets a symmetry parameter specifying the symmetry of the target object;
  • a 3D data duplication unit that generates 3D data of the target object using the 3D measurement data corresponding to the measurement viewpoint together with 3D data obtained by duplicating that measurement data based on the symmetry parameter of the target object;
  • a template creation viewpoint position/posture data acquisition unit that acquires data on the position/orientation of a template creation viewpoint, i.e., the viewpoint for which the template is created; and
  • a creation viewpoint 3D data generation unit that generates 3D data from the template creation viewpoint based on the generated 3D data of the target object.
  • With this configuration, the 3D data of the target object is generated by duplicating partial 3D measurement data of the target object based on the symmetry specified by the symmetry parameter, so fewer measurements are needed. Since the template is created from the three-dimensional data generated in this way, the time required for template creation can be shortened.
  • The symmetry parameter may be a parameter specifying that the target object is plane-symmetric, a parameter specifying that it is rotationally symmetric, or a parameter specifying that it is symmetric with respect to rotation by (360/N) degrees (N being a natural number of 2 or more).
  • The symmetry of the target object includes, for example, plane symmetry, rotational symmetry, and symmetry with respect to rotation by (360/N) degrees (N being a natural number of 2 or more), but is not limited to these; combinations of these symmetries are also included.
  • When the target object is plane-symmetric, a parameter specifying the plane of symmetry can be set as the symmetry parameter.
  • When the target object is rotationally symmetric, a parameter specifying the rotation axis can be set as the symmetry parameter.
  • When the target object is symmetric with respect to rotation by (360/N) degrees, parameters specifying the rotation axis and N can be set as the symmetry parameter.
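The three kinds of symmetry parameter could be represented in code as follows; the class and field names are illustrative assumptions, not taken from this application.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class PlaneSymmetry:
    point: np.ndarray   # a point on the plane of symmetry
    normal: np.ndarray  # unit normal of the plane of symmetry

@dataclass
class RotationalSymmetry:
    axis_point: np.ndarray  # a point on the rotation axis
    axis_dir: np.ndarray    # unit direction vector of the rotation axis

@dataclass
class NFoldSymmetry:
    axis_point: np.ndarray  # a point on the rotation axis
    axis_dir: np.ndarray    # unit direction vector of the rotation axis
    n: int                  # symmetric under rotation by (360 / n) degrees
```

A combined symmetry (e.g. the nut, which is both 6-fold symmetric and plane-symmetric) would simply carry a list of these parameters.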
  • User settings may be accepted for at least one of the symmetry parameter and the position/orientation of the measurement viewpoint.
  • This allows the user to set an appropriate symmetry parameter and measurement viewpoint position/orientation according to the shape of the actual target object, further shortening the time required for template creation and making it possible to create a more accurate template.
  • Another aspect of the present invention is an object recognition processing device comprising: an image acquisition unit that acquires an image including the target object; a feature extraction unit that extracts features from the acquired image; and a template matching unit that recognizes the target object included in the acquired image by template matching using a template created by the above template creation device.
  • Since object recognition is performed with a template created based on the three-dimensional measurement data of the target object, the time required for object recognition can be shortened. Moreover, a more accurate template enables highly accurate object recognition.
  • Another aspect of the present invention is a template creation method for creating a template corresponding to a target object based on three-dimensional measurement data of the target object, in which a symmetry parameter specifying the symmetry of the target object is set, and 3D data of the target object is generated using the 3D measurement data corresponding to the measurement viewpoint together with 3D data obtained by duplicating that measurement data based on the symmetry parameter.
  • With this method, the 3D data of the target object is generated by duplicating partial 3D measurement data of the target object based on the symmetry specified by the symmetry parameter, so fewer measurements are needed. Since the template is created from the three-dimensional data generated in this way, the time required for template creation can be shortened.
  • In addition, because the 3D measurement data of the target object is duplicated using its symmetry to generate 3D data of the entire object, the deviation that arises when integrating multiple sets of 3D measurement data (due to measurement errors, shooting viewpoint position/orientation errors, and the like) can be reduced, and a template closer to the real object can be created. A more accurate template can therefore be created with fewer measurements.
  • The order of the steps included in the present invention is not limited to the order described.
  • Another aspect of the present invention is an object recognition processing method that recognizes an object using a template created by the above template creation method.
  • Since object recognition is performed with a template created based on the three-dimensional measurement data of the target object, the time required for object recognition can be shortened. Moreover, a more accurate template enables highly accurate object recognition.
  • Another aspect of the present invention is a program for causing a computer to execute each step of the template creation method.
  • Another aspect of the present invention is a program for causing a computer to execute each step of the object recognition processing method.
  • According to the present invention, it is possible to reduce the number of shots required when creating a template for object recognition using the three-dimensional measurement data of the target object, and to shorten the time required for template creation.
  • FIG. 1 is an overall configuration diagram of an object recognition device according to an embodiment of the present invention.
  • FIG. 2 is a block diagram showing a schematic configuration of an image processing apparatus according to an embodiment of the present invention.
  • FIG. 3 is a flowchart showing the procedure of the template creation process according to the embodiment of the present invention.
  • FIGS. 4A and 4B are flowcharts showing the procedure of the three-dimensional data duplication process in the embodiment of the present invention.
  • FIG. 5 is a diagram illustrating a configuration for photographing the target object and acquiring the shooting viewpoint position/orientation according to an embodiment of the present invention.
  • FIG. 6 is a diagram illustrating the 6-fold symmetry of the target object according to the embodiment of the present invention.
  • FIG. 7 is a diagram for explaining the plane symmetry of the target object according to the embodiment of the present invention.
  • FIGS. 8A and 8B are diagrams showing an example of the three-dimensional data duplication process according to the embodiment of the present invention.
  • FIG. 9 shows another example of the three-dimensional data duplication process according to the embodiment of the present invention.
  • FIG. 10 is a flowchart showing the procedure of the object recognition process according to the first embodiment of the present invention.
  • FIG. 11 is a diagram showing an example of a conventional shooting viewpoint of the target object.
  • FIG. 12 is a diagram showing another example of the conventional shooting viewpoint of the target object.
  • As shown in FIG. 6, when the nut 271 is viewed from the direction of the central axis of its substantially cylindrical hollow (the Z axis orthogonal to the paper surface), the nut 271 is symmetric under a 60-degree rotation about this central axis. Further, as shown in FIG. 7, the nut 271 is plane-symmetric with respect to the plane PL orthogonal to the central axis direction. The present invention exploits such symmetry of the target object: as shown in FIG. 8, one set of three-dimensional measurement data taken from diagonally above is duplicated by rotating it about the central axis in 60-degree steps six times, generating the three-dimensional data of each portion rotated about the central axis.
  • The three-dimensional data generated in this way is then duplicated plane-symmetrically with respect to the plane PL to generate the remaining three-dimensional data. The three-dimensional data of the target object 271 can therefore be generated from three-dimensional measurement at a single shooting viewpoint diagonally above.
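The 60-degree duplication step can be sketched in NumPy as follows; this is an illustrative sketch assuming the partial measurement is an (M, 3) point cloud with the nut's central axis on the Z axis, not the application's actual implementation.

```python
import numpy as np

def rotate_z(points: np.ndarray, deg: float) -> np.ndarray:
    """Rotate an (M, 3) point cloud about the Z (central) axis by `deg` degrees."""
    a = np.radians(deg)
    rot = np.array([[np.cos(a), -np.sin(a), 0.0],
                    [np.sin(a),  np.cos(a), 0.0],
                    [0.0,        0.0,       1.0]])
    return points @ rot.T

def replicate_sixfold(partial: np.ndarray) -> np.ndarray:
    """Duplicate one partial measurement by rotating it about the central
    axis in 60-degree steps, covering all six symmetric sectors of the nut."""
    return np.vstack([rotate_z(partial, 60.0 * k) for k in range(6)])
```

The subsequent plane-symmetric duplication with respect to PL is the depth inversion described for step S14 below.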
  • Conventionally, for a target object such as the object 272, it was necessary to perform 3D measurement of its four faces, i.e., from the diagonal shooting viewpoints IMP21, IMP24, IMP25, and IMP26, from both the front (front side of the paper) and the back (back side of the paper), for a total of eight 3D measurements. However, as shown in FIG. 9, such a target object 272 is plane-symmetric with respect to a plane parallel to the paper surface when viewed from the shooting viewpoint IMP21 diagonally above it, and is 2-fold symmetric with respect to the shooting viewpoints IMP22 and IMP23 diagonally above.
  • Therefore, the three-dimensional measurement data from the shooting viewpoint IMP21 is duplicated plane-symmetrically to generate the 3D data of the portion plane-symmetric to the plane parallel to the paper surface, and the 3D measurement data from the shooting viewpoints IMP22 and IMP23 are each rotated 180 degrees about the axis orthogonal to the paper surface to generate the 3D data of the rotated portions. The three-dimensional data of the target object 272 can thus be generated from three 3D measurements at the three shooting viewpoints IMP21 to IMP23.
  • In this way, the number of three-dimensional measurements of the target object can be reduced, and since the template is created based on the three-dimensional data generated by duplication, the time required for template creation can be shortened.
  • Furthermore, because the 3D measurement data of the target object is duplicated using its symmetry to generate 3D data of the entire object, the deviation that arises when integrating multiple sets of 3D measurement data (due to measurement errors, shooting viewpoint position/orientation errors, and the like) can be reduced, and a template closer to the real object can be created. A more accurate template can therefore be created with fewer measurements.
  • The object recognition device 2 is installed in, for example, a production line that assembles and processes articles, and is a system that uses the data captured by the sensor unit 20 to recognize, by template matching, the position and orientation of the target objects 27 loaded on the tray 26 (three-dimensional object recognition). The objects to be recognized (hereinafter also referred to as "target objects") 27 are piled in bulk on the tray 26.
  • the object recognition device 2 is roughly composed of a sensor unit 20 and an image processing device 21.
  • the sensor unit 20 and the image processing device 21 are connected by wire or wirelessly, and the output of the sensor unit 20 is taken into the image processing device 21.
  • the image processing device 21 is a device that performs various processes using the data captured from the sensor unit 20.
  • The processing of the image processing device 21 may include, for example, ranging (distance measurement), three-dimensional shape recognition, object recognition, scene recognition, and the like.
  • the recognition result of the object recognition device 2 is output to, for example, a PLC (programmable logic controller) 25 or a display 22.
  • the recognition result is used, for example, for controlling the picking robot 28, controlling the processing device and the printing device, and inspecting and measuring the target object 27.
  • The sensor unit 20 has at least a camera for capturing an optical image of the target object 27. It may further include any configuration (sensors, lighting devices, light projection devices, etc.) necessary for three-dimensional measurement of the target object 27. For example, when measuring depth by stereo matching (also called stereo vision or the stereo camera method), a plurality of cameras are provided in the sensor unit 20. In the case of active stereo, the sensor unit 20 further includes a projection device that casts pattern light onto the target object 27. When three-dimensional measurement is performed by the space-coded pattern projection method, a projection device for pattern light and a camera are provided in the sensor unit 20. Any method capable of acquiring three-dimensional information of the target object 27 may be used, such as photometric stereo, the TOF (time of flight) method, or the phase shift method.
  • the image processing device 21 is composed of, for example, a computer including a CPU (processor), a RAM (memory), a non-volatile storage device (hard disk, SSD, etc.), an input device, an output device, and the like.
  • the CPU expands the program stored in the non-volatile storage device into the RAM and executes the program to realize various configurations described later.
  • The configuration of the image processing device 21 is not limited to this; all or part of the configurations described later may be realized by a dedicated circuit such as an FPGA or ASIC, or by cloud computing or distributed computing.
  • FIG. 2 is a block diagram showing the configuration of the image processing device 21.
  • the image processing device 21 has a configuration of a template creating device 30 and a configuration of an object recognition processing device 31.
  • The template creation device 30 is configured to create the templates used in the object recognition process, and has a shooting viewpoint three-dimensional position/posture data acquisition unit 300, a three-dimensional measurement data acquisition unit 301 for the target object, a symmetry parameter setting unit 302, a three-dimensional data duplication unit 303, a template creation viewpoint position/orientation information acquisition unit 304, a three-dimensional data generation unit 305 for the template creation viewpoint, a feature extraction unit 306, a template creation unit 307, and a template information output unit 308.
  • the shooting viewpoint three-dimensional position / posture data acquisition unit 300 corresponds to the “measurement viewpoint position / posture data acquisition unit” of the present invention.
  • the three-dimensional measurement data acquisition unit 301 of the target object corresponds to the "three-dimensional measurement data acquisition unit” of the present invention.
  • the three-dimensional data duplication unit 303 corresponds to the "three-dimensional data duplication unit” of the present invention.
  • the template creation viewpoint position / posture information acquisition unit 304 corresponds to the “template creation viewpoint position / posture data acquisition unit” of the present invention.
  • the three-dimensional data generation unit 305 of the template creation viewpoint corresponds to the "creation viewpoint three-dimensional data generation unit" of the present invention.
  • The object recognition processing device 31 is configured to execute object recognition processing by template matching, and has an image acquisition unit 310, an image pyramid creation unit 311, a feature extraction unit 312, a template information acquisition unit 313, a template matching unit 314, and a recognition result output unit 315.
  • the image acquisition unit 310, the feature extraction unit 312, and the template matching unit 314 correspond to the "image acquisition unit", the "feature extraction unit”, and the “template matching unit” of the present invention, respectively.
  • FIG. 3 is a flowchart showing the overall procedure of the template creation process as the template creation method performed in the template creation device 30.
  • the three-dimensional data duplication process of step S101 will be described with reference to FIGS. 4A and 4B.
  • FIGS. 4A and 4B are flowcharts illustrating the three-dimensional data duplication processing procedure.
  • FIG. 4A shows the case where the target object is plane-symmetric, and FIG. 4B the case where the target object is rotationally symmetric or N-fold symmetric (N being a natural number of 2 or more), described later.
  • In step S11, the three-dimensional data duplication unit 303 acquires data on the three-dimensional position/orientation of the shooting viewpoint at the time of three-dimensional measurement from the shooting viewpoint three-dimensional position/orientation data acquisition unit 300, the three-dimensional measurement data of the target object from the three-dimensional measurement data acquisition unit 301, and the symmetry parameters from the symmetry parameter setting unit 302.
  • the photographing viewpoint corresponds to the "measurement viewpoint" of the present invention.
  • FIG. 5 is a diagram illustrating an outline of a configuration for acquiring three-dimensional measurement data of the target object and information on the three-dimensional position / orientation of the photographing viewpoint.
  • the three-dimensional sensor 220 is arranged at a predetermined position / orientation with respect to the target object 271, and the three-dimensional measurement data of the target object 271 is acquired (photographed), and the position / orientation information of the photographing viewpoint is acquired.
  • When the 3D sensor 220 photographs the target object from different viewpoints to acquire 3D measurement data, the 3D sensor 220 may be fixed and the posture of the target object 271 changed, or the target object 271 may be fixed and the posture of the 3D sensor 220 changed; any arrangement is acceptable as long as the relative positional relationship between the 3D sensor 220 and the target object 271 changes.
  • FIG. 5 shows an example of a case where the target object 271 mounted on the mounting surface 32 is photographed by the three-dimensional sensor 220 to acquire the three-dimensional measurement data.
  • the three-dimensional sensor 220 may be fixed and the posture of the mounting surface 32 may be changed, or the posture of the mounting surface 32 may be fixed and the posture of the three-dimensional sensor 220 may be changed.
  • The three-dimensional position/orientation of the shooting viewpoint may be acquired by recognizing the planar marker 33 displayed on the mounting surface 32 of the target object or a reference object (whose 3D CAD data or the like is known) fixed with respect to the mounting surface, or it may be acquired by providing the mounting surface 32 of the target object on a goniometer stage.
  • The symmetry parameter is an index of symmetry used, when the target object viewed from a shooting viewpoint at one position/orientation and the target object viewed from a shooting viewpoint at another position/orientation are symmetric, to duplicate the three-dimensional measurement data from the one viewpoint and use it as the three-dimensional measurement data from the other, symmetric viewpoint. One such symmetry is rotational symmetry about a rotation axis, in which case the data specifying the rotation axis of the target object is the symmetry parameter.
  • For example, when the nut 271 is viewed from the direction of the central axis of its substantially cylindrical hollow (the Z axis orthogonal to the paper surface), it becomes symmetric when rotated 60 degrees about this central axis; that is, the three-dimensional shape of the nut 271 after a 60-degree rotation is the same.
  • Here, "the same" does not mean identical in a strict sense, but the same to the extent required of three-dimensional measurement data used to create a template.
  • Symmetry with respect to rotation by (360/N) degrees is called N-fold symmetry; in this case, the data specifying the rotation axis of the target object and the number N are the symmetry parameters.
  • Further, the nut 271 is plane-symmetric with respect to the plane PL orthogonal to the height direction at half its height along the central axis.
  • In the case of plane symmetry, the data specifying the plane of symmetry of the target object is the symmetry parameter.
  • Such a symmetry parameter may be set by the user via an input device, or may be calculated according to the shape of the target object specified by the user.
  • Next, in step S12, the height (H) of the target object is calculated from the three-dimensional measurement data of all shooting viewpoints. Here, the height of the target object is the height in the direction orthogonal to the plane of symmetry (the normal direction).
  • In step S13, one set of three-dimensional measurement data from a shooting viewpoint not yet processed for duplication is taken out.
  • In step S14, the three-dimensional measurement data is duplicated by inverting only the depth-direction distance at half the height of the target object.
  • That is, three-dimensional measurement data whose Z coordinate is Zorg is duplicated with a Z coordinate Znew obtained by inversion at half the height of the target object (Znew = H - Zorg).
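A minimal sketch of this step S14 reflection, assuming the measurement data is an (M, 3) NumPy array with Z as the depth axis (the function name is illustrative):

```python
import numpy as np

def mirror_depth(points: np.ndarray, height: float) -> np.ndarray:
    """Duplicate measurement data by inverting only the Z (depth) coordinate
    at half the object height, i.e. Znew = height - Zorg, which reflects the
    cloud across the plane of symmetry at z = height / 2."""
    out = points.copy()
    out[:, 2] = height - out[:, 2]
    return out
```

Note that the reflection is an involution: applying it twice returns the original data, which is a quick sanity check for the plane-of-symmetry placement.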
  • In step S15, it is determined whether duplication has generated three-dimensional data for all viewpoints of the target object.
  • If so, the three-dimensional data duplication process for plane symmetry ends; if there is a viewpoint for which three-dimensional data has not yet been generated by duplication, the processes from step S13 onward are repeated.
  • the three-dimensional data duplication unit 303 acquires data regarding the three-dimensional position and orientation of the shooting viewpoint at the time of three-dimensional measurement, three-dimensional measurement data of the target object, and symmetry parameters, respectively.
  • step S16 one 3D measurement data of the photographing viewpoint that has not been duplicated is taken out.
  • In step S17, with the Z axis as the rotation axis of the target object, in the case of N-fold symmetry the three-dimensional data of a shooting viewpoint is generated by duplicating already-generated three-dimensional measurement data of another shooting viewpoint rotated about the Z axis by (360/N) degrees. In the case of rotational symmetry, the three-dimensional data of the shooting viewpoint is generated by rotating already-generated three-dimensional measurement data by an arbitrary angle about the Z axis and duplicating it.
  • the target object is N-fold symmetric or rotationally symmetric, it is necessary to match the rotation centers of the rotated three-dimensional data and the original three-dimensional measurement data.
  • The rotation centers can be identified by a template matching method using silhouette images projected onto a two-dimensional plane, or by alignment using the ICP (Iterative Closest Point) method on the three-dimensional point clouds. Alternatively, as described above, the user may specify the center of rotation as a symmetry parameter, or the target object may be placed so that its center of rotation coincides with a known point in the field of view.
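For the special case of 2-fold symmetry, the rotation center can be estimated far more simply than with silhouette matching or ICP, because a 180-degree rotation about an axis parallel to Z maps every XY point p to 2c - p, so the centroids of the two clouds are related the same way. This is an illustrative alternative sketch, not the method described in this application:

```python
import numpy as np

def center_from_180(cloud_a: np.ndarray, cloud_b: np.ndarray) -> np.ndarray:
    """Estimate the XY position of the rotation axis, assuming cloud_b is
    cloud_a rotated 180 degrees about an unknown axis parallel to Z.
    Since the rotation maps the centroid of one cloud to the centroid of
    the other, the axis lies at the midpoint of the two XY centroids."""
    centroid_a = cloud_a[:, :2].mean(axis=0)
    centroid_b = cloud_b[:, :2].mean(axis=0)
    return (centroid_a + centroid_b) / 2.0
```

This only holds when the two clouds cover corresponding (symmetric) portions of the object; for partial, non-matching coverage the silhouette-matching or ICP approaches mentioned above are needed.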
  • In step S18, it is determined whether duplication has generated three-dimensional data for all viewpoints of the target object.
  • If so, the three-dimensional data duplication process for rotational symmetry or N-fold symmetry ends; if there is a viewpoint for which three-dimensional data has not yet been generated by duplication, the processes from step S16 onward are repeated.
  • The above description treats separately the case where the symmetry parameter includes a parameter indicating plane symmetry and the case where it includes a parameter indicating rotational symmetry or N-fold symmetry; when the symmetry parameter includes both, the processes of FIGS. 4A and 4B are both performed, and the order of the two processes may be set as appropriate.
  • When the target object 271 is a hexagonal nut, for example, as shown in FIG. 8A, the three-dimensional measurement data obtained by photographing the target object from the shooting viewpoint IMP11 diagonally above (FIG. 8B) is duplicated using the 6-fold symmetry about the central axis of the inner hollow and the plane symmetry in the direction orthogonal to that central axis, generating the entire three-dimensional data of the target object 271.
  • In the case of the columnar target object 272 shown in FIG. 9, the object is plane-symmetric with respect to the shooting viewpoint IMP21 and 2-fold symmetric with respect to each of the shooting viewpoints IMP22 and IMP23. Since the symmetry parameter here includes both a parameter indicating plane symmetry and a parameter indicating 2-fold symmetry, duplicating the three-dimensional measurement data from the shooting viewpoint IMP21 by plane symmetry and the data from each of IMP22 and IMP23 by 2-fold symmetry can generate three-dimensional measurement data for all viewpoints.
  • Returning to FIG. 3, in step S102 the three-dimensional data generation unit 305 of the template creation viewpoint acquires, from the template creation viewpoint position/orientation information acquisition unit 304, information on the position and orientation of the template creation viewpoints, i.e., the viewpoints for which templates of the target object are created.
  • the information regarding the position and orientation of the template creation viewpoint may be set by the user via an input device, or the position and orientation of a plurality of template creation viewpoints may be set by a predetermined method.
  • step S103 the three-dimensional data at the template creation viewpoint acquired in step S102 is generated based on the three-dimensional data of the target object duplicated in step S101.
  • In step S104, the feature extraction unit 306 extracts image features of the target object 27 based on the three-dimensional data of the target object 27 for the template creation viewpoint generated in step S103.
  • image features for example, brightness, color, brightness gradient direction, quantization gradient direction, HoG (Histogram of Oriented Gradients), surface normal direction, HAR-like, SIFT (Scale-Invariant Feature Transform), etc. are used. Can be done.
  • The brightness gradient direction represents the direction (angle) of the brightness gradient in a local region centered on a feature point as a continuous value.
  • The quantized gradient direction represents the direction of the brightness gradient in a local region centered on a feature point as a discrete value (for example, eight directions are held in one byte of information, coded 0 to 7).
  • A point from which an image feature is obtained is called a feature point.
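The 8-direction quantization mentioned above can be illustrated with a short sketch. This is an assumed minimal implementation (the patent does not give a formula): it maps a gradient vector's angle to one of eight codes, each centered on its direction, so the code fits in a single byte.

```python
import math

def quantize_gradient_direction(gx, gy, n_bins=8):
    """Map a brightness-gradient direction to one of n_bins discrete
    codes (0 .. n_bins-1), e.g. 8 directions held in one byte."""
    angle = math.atan2(gy, gx) % (2.0 * math.pi)   # 0 .. 2*pi
    bin_width = 2.0 * math.pi / n_bins
    # Shift by half a bin so each code is centered on its direction.
    return int((angle + bin_width / 2.0) / bin_width) % n_bins

# 0 degrees -> code 0, 90 degrees -> code 2, 180 degrees -> code 4
codes = [quantize_gradient_direction(1, 0),
         quantize_gradient_direction(0, 1),
         quantize_gradient_direction(-1, 0)]
```

Quantizing to a small discrete code is what makes fast bitwise comparison of template and image features possible in this style of matching.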
  • In step S105, the template creation unit 307 creates a template corresponding to the template creation viewpoint based on the image features extracted in step S104.
  • The template is, for example, a data set containing the coordinate values of each feature point and the extracted image features.
  • In step S106, the template creation unit 307 determines whether templates have been created for all the template creation viewpoints acquired in step S102. When it is determined that they have, the template creation unit 307 outputs the template data to the object recognition processing device via the template information output unit 308. If there is a template creation viewpoint for which a template has not yet been created, the processes of steps S103 to S105 are repeated.
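The "data set containing the coordinate values of each feature point and the extracted image features", built once per creation viewpoint in the S103-S105 loop, could be represented as follows. The class and field names here are hypothetical stand-ins, not from the patent:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FeaturePoint:
    x: int                # pixel coordinates in the viewpoint image
    y: int
    quantized_dir: int    # e.g. a quantized gradient direction code 0..7

@dataclass
class Template:
    viewpoint_id: int                 # which template creation viewpoint
    features: List[FeaturePoint] = field(default_factory=list)

# One template per creation viewpoint, mirroring the S103-S105 loop.
templates = [Template(viewpoint_id=v,
                      features=[FeaturePoint(10, 20, 3)])
             for v in range(3)]
```

The collection of such templates, one per viewpoint, is what the template information output unit would hand to the recognition side.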
  • the template information acquisition unit 313 acquires the template data output from the template information output unit 308 of the template creation device 30, and supplies the template data to the template matching unit 314.
  • In step S202, the image acquisition unit 310 acquires an image that includes the target object and is the target of the object recognition process.
  • This image may be read from the internal storage device of the image processing device 21, or may be acquired from external storage or the like via a network. Alternatively, an image taken by the sensor unit 20 may be acquired.
  • When the recognition result is used for controlling the picking robot 28, the sensor unit 20 may be attached to the arm of the picking robot 28.
  • The image pyramid creation unit 311 generates low-resolution images from the image acquired by the image acquisition unit 310, and builds an image pyramid.
  • The image pyramid consists of a plurality of images whose resolution changes stepwise from low to high, for example 160 × 120 pixels as the first layer image and 320 × 240 pixels as the second layer image.
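A pyramid like the 160×120 / 320×240 example above can be sketched with simple 2×2 averaging as the downsampler. This is an illustrative stand-in (the patent does not specify the downsampling filter), assuming a single-channel image as a 2D NumPy array with even dimensions:

```python
import numpy as np

def build_pyramid(image, n_layers):
    """Build an image pyramid; the returned list is coarse-to-fine,
    so index 0 is the first (lowest-resolution) layer image.
    Each finer layer has twice the width and height of the previous one."""
    layers = [image]
    for _ in range(n_layers - 1):
        h, w = layers[-1].shape
        # 2x2 block averaging as a simple downsampling stand-in.
        coarse = layers[-1].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        layers.append(coarse)
    return layers[::-1]  # coarsest first: layer 1, layer 2, ...

# A 320x240 input yields a 160x120 first layer and a 320x240 second layer.
pyramid = build_pyramid(np.zeros((240, 320)), n_layers=2)
```

In practice a smoothing filter before decimation (e.g. a Gaussian) avoids aliasing; block averaging keeps the sketch short.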
  • The feature extraction unit 312 extracts image features from each layer image constituting the image pyramid.
  • The extracted image features are of the same type as the image features used at the time of template creation.
  • By extracting image features from the first layer image (the image with the lowest resolution), an image that has the same resolution as the first layer image and holds, at each pixel position, the extracted feature amount as its pixel value (hereinafter also referred to as the "first layer feature image") is obtained.
  • Similarly, a second layer feature image is obtained as a result of extracting image features from the second layer image.
  • The template matching unit 314 performs template matching using the template data for each viewpoint supplied from the template information acquisition unit 313 and the feature amounts calculated by the feature extraction unit 312 corresponding to the data of each template.
  • The template matching unit 314 first performs matching processing using the first layer feature image and the templates for each viewpoint for the first layer.
  • When the template matching unit 314 detects a template that is a correct-answer candidate as a result of this matching, it sets the search range of the second layer feature image based on the detection result, and performs matching processing using the second layer feature image and the templates for each viewpoint for the second layer.
  • When a third layer image and a fourth layer image are present, the same processing is performed on these layer images. As a result of such processing, the existence position and orientation of the object can be recognized in the lowest layer (the layer with the highest image resolution).
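The coarse-to-fine narrowing of the search range can be illustrated with a tiny helper. This is a hypothetical sketch (the patent does not define the window rule): a candidate position found on a coarse layer is scaled up to the next layer's coordinates, and a small margin absorbs localization error.

```python
def refine_search_window(coarse_pos, scale=2, margin=4):
    """Given the best-match position (x, y) on a coarse pyramid layer,
    return the restricted (x_min, y_min, x_max, y_max) search window on
    the next, finer layer. Positions double when resolution doubles."""
    x, y = coarse_pos
    return (x * scale - margin, y * scale - margin,
            x * scale + margin, y * scale + margin)

# A candidate at (40, 30) on the first layer constrains the second-layer
# search to a small window around (80, 60).
window = refine_search_window((40, 30))
```

This is why the pyramid pays off: the full image is scanned only at the lowest resolution, and each finer layer is searched only inside such windows.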
  • In step S206, when the template matching unit 314 recognizes the existence position and orientation of the target object, it outputs recognition information indicating the recognition result to the recognition result output unit 315.
  • The recognition result output unit 315 outputs the recognition information supplied from the template matching unit 314 to an external device, a liquid crystal panel, or the like.
  • The recognition information is used, for example, for inspection and measurement of the target object, control of the picking robot 28, and the like.
  • As described above, the partial three-dimensional measurement data of the target object are duplicated based on the specified symmetry to generate the three-dimensional data of the whole object, so the three-dimensional measurement data of the target object need to be measured fewer times. Since the template is created based on the three-dimensional data generated in this way, the time required for creating the template can be shortened.
  • In addition, since the three-dimensional measurement data of the target object are duplicated using symmetry to generate the three-dimensional data of the entire target object, the misalignment that occurs when integrating a plurality of three-dimensional measurement data, caused by measurement errors, errors in the photographing viewpoint position and orientation, and the like, can be reduced, and a template closer to the real object can be created. A more accurate template can therefore be created with a smaller number of measurements.
  • the number of shooting viewpoints may be determined by the user.
  • the shooting viewpoint may be one viewpoint or a plurality of viewpoints.
  • The measurement viewpoint positions and orientations can be determined by the user so that three-dimensional data of the entire target object can be generated, taking the symmetry of the target object into consideration.
  • Alternatively, the next shooting viewpoint may be determined while generating three-dimensional data by duplication.
  • The height required for duplicating a plane-symmetric target object may be taken as the average of the measurement results in the vicinity of the maximum Z value (the maximum height in the Z direction) of each set of three-dimensional measurement data.
  • Alternatively, a histogram of the Z values (heights in the Z direction) of each set of three-dimensional measurement data may be created, and the Z value of the largest bin whose vote count exceeds a threshold may be used.
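The histogram variant above can be sketched as follows. This is an assumed reading of "the largest bin that has voted above the threshold" — taking the highest-Z bin whose count meets the threshold — with hypothetical parameter choices (`bin_width`, `threshold`):

```python
import numpy as np

def symmetry_plane_height(z_values, bin_width=0.1, threshold=3):
    """Estimate the height used for plane-symmetric duplication:
    histogram the Z values and return the center of the largest-Z bin
    whose vote count reaches the threshold (None if no bin qualifies)."""
    z = np.asarray(z_values, dtype=float)
    n_bins = max(1, int(np.ceil((z.max() - z.min()) / bin_width)))
    counts, edges = np.histogram(z, bins=n_bins)
    valid = np.nonzero(counts >= threshold)[0]
    if valid.size == 0:
        return None
    top = valid.max()                       # highest-Z qualifying bin
    return 0.5 * (edges[top] + edges[top + 1])

# Two strong Z clusters at 0.0 and 1.0; an outlier cluster at 2.0 with
# only 2 votes stays below the threshold and is ignored.
z = [0.0] * 5 + [1.0] * 5 + [2.0, 2.0]
height = symmetry_plane_height(z, bin_width=0.1, threshold=3)
```

The threshold makes the estimate robust to sparse outlier points near the top of the measured cloud, which a plain maximum would latch onto.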
  • The height required for duplicating a plane-symmetric target object may also be input by the user as an actually measured value.
  • The rotation angle can be set as appropriate, but it is preferably set according to the range that can be measured from one viewpoint, for example about 30 to 45 degrees. If the highly reliable range of the three-dimensional measurement data is known, more accurate three-dimensional data can be generated by using only the measurement data within that highly reliable range for duplication, and a more accurate template can be generated.
  • The present invention is, for example, a template creation device (30) that creates a template corresponding to a target object (27) based on three-dimensional measurement data of the target object (27).
  • The device comprises: a three-dimensional measurement data acquisition unit (301) that acquires the three-dimensional measurement data of the target object (271); a measurement viewpoint position/orientation data acquisition unit (300) that acquires data on the position and orientation of a measurement viewpoint, that is, a viewpoint from which the target object is three-dimensionally measured; a symmetry parameter setting unit (302) that sets a symmetry parameter specifying the symmetry of the target object; and a three-dimensional data duplication unit that generates three-dimensional data of the target object using the three-dimensional measurement data corresponding to the measurement viewpoint and three-dimensional data obtained by duplicating those data based on the symmetry parameter of the target object (27).
  • 2: Object recognition device 21: Image processing device 27: Target object 30: Template creation device 31: Object recognition processing device 300: Shooting viewpoint 3D position/orientation data acquisition unit 301: 3D measurement data acquisition unit of the target object 302: Symmetry parameter setting unit 303: 3D data duplication unit 304: Template creation viewpoint position/orientation information acquisition unit 305: 3D data generation unit of the template creation viewpoint 310: Image acquisition unit 312: Feature extraction unit 314: Template matching unit

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

This template creation device creates a template corresponding to a target object on the basis of three-dimensional measurement data, the template creation device comprising: a target object three-dimensional measurement data acquisition unit; a measurement viewpoint position/attitude data acquisition unit; a symmetry parameter setting unit; a three-dimensional data replication unit that generates three-dimensional data of the target object by using three-dimensional data obtained by replicating the three-dimensional measurement data on the basis of the symmetry parameter; a template creation position/attitude information acquisition unit; and a template creation viewpoint three-dimensional data generation unit that generates three-dimensional data from a template creation viewpoint.

Description

Template creation device, object recognition processing device, template creation method, object recognition processing method, and program

The present invention relates to a template creation device, an object recognition processing device, a template creation method, an object recognition processing method, and a program.

Template matching is one method of recognizing (detecting) an object in an image. In template matching, a model (template) of the object to be recognized is prepared in advance, and an object included in an input image is detected by evaluating the degree of matching of image features between the input image and the model. Object recognition by template matching is used in a wide range of fields, such as inspection and picking in FA (Factory Automation), robot vision, and surveillance cameras.

Such a template can be created using 3D CAD data of the object to be recognized (hereinafter also referred to as the "target object"). However, there are cases where 3D CAD data cannot be used or are not appropriate, for example when no 3D CAD data of the target object exist or when the difference between the 3D CAD data and the actual target object is large. In such cases, a template is created using three-dimensional measurement data obtained by three-dimensionally measuring the actual target object.

Conventionally, in order to create a complete three-dimensional model of the object to be recognized, it has been necessary to acquire three-dimensional measurement data obtained by photographing the target object from all angles. The required number of shots is therefore large, and measurement takes a long time.

For this reason, a technique has been proposed in which three-dimensional measurement data equivalent to those obtained by photographing the target object from various angles are acquired by photographing a plurality of target objects arranged in various postures (see, for example, Patent Document 1).

However, even in such a case, creating a complete three-dimensional model requires arranging a large number of objects so that all surfaces of the object are covered, so photographing still takes a long time.

Patent Document 1: Japanese Patent No. 6298035

The present invention has been made in view of the above problems, and aims to reduce the number of shots required when creating a template for object recognition using three-dimensional measurement data of a target object, and to shorten the time required for template creation.
The present invention for solving the above problems is a template creation device that creates a template corresponding to a target object based on three-dimensional measurement data of the target object, comprising:
 a three-dimensional measurement data acquisition unit that acquires the three-dimensional measurement data of the target object;
 a measurement viewpoint position/orientation data acquisition unit that acquires data on the position and orientation of a measurement viewpoint, that is, a viewpoint from which the target object is three-dimensionally measured;
 a symmetry parameter setting unit that sets a symmetry parameter specifying the symmetry of the target object;
 a three-dimensional data duplication unit that generates three-dimensional data of the target object using the three-dimensional measurement data corresponding to the measurement viewpoint and three-dimensional data obtained by duplicating the three-dimensional measurement data corresponding to the measurement viewpoint based on the symmetry parameter of the target object;
 a template creation position/orientation data acquisition unit that acquires data on the position and orientation of a template creation viewpoint, that is, a viewpoint for creating the template; and
 a creation viewpoint three-dimensional data generation unit that generates three-dimensional data from the template creation viewpoint based on the generated three-dimensional data of the target object.
According to the present invention, the partial three-dimensional measurement data of the target object are duplicated based on the symmetry set by the symmetry parameter to generate the three-dimensional data of the whole object, so fewer measurements of the target object are needed. Since the template is created based on the three-dimensional data generated in this way, the time required for template creation can be shortened.
In addition, duplicating the three-dimensional measurement data of the target object using symmetry to generate the three-dimensional data of the entire target object reduces the misalignment that occurs when integrating a plurality of three-dimensional measurement data, caused by measurement errors, errors in the photographing viewpoint position and orientation, and the like, so a template closer to the real object can be created, and a more accurate template can be created with a smaller number of measurements.
In the present invention, the symmetry parameter may be a parameter specifying that the target object is plane symmetric, a parameter specifying that the target object is rotationally symmetric, or a parameter specifying that the target object is symmetric with respect to a rotation of (360/N) degrees (N being a natural number of 2 or more).
The symmetry of the target object includes, for example, plane symmetry, rotational symmetry, and symmetry with respect to a rotation of (360/N) degrees (N being a natural number of 2 or more), but is not limited to these and also includes combinations of them. When the target object is plane symmetric, a parameter specifying the plane of symmetry can be set as the symmetry parameter. When the target object is rotationally symmetric, a parameter specifying the rotation axis can be set as the symmetry parameter. When the target object is symmetric with respect to a rotation of (360/N) degrees (N being a natural number of 2 or more), parameters specifying the rotation axis and N can be set as the symmetry parameter. By setting such symmetry parameters, symmetry appropriate to the target object can be specified.
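The combinations of symmetry parameters described above (symmetry plane, rotation axis, N) could be collected in a single structure. The container below is a hypothetical sketch, not the patent's data format; field names and the use of N = 0 for continuous rotational symmetry are assumptions:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class SymmetryParameter:
    """Hypothetical container for the symmetry parameters described above."""
    # Plane symmetry: a point on the symmetry plane and its normal.
    plane_point: Optional[Vec3] = None
    plane_normal: Optional[Vec3] = None
    # Rotational / N-fold symmetry: the rotation axis direction and N.
    # N = 0 stands here for continuous rotational symmetry,
    # N >= 2 for symmetry under a (360 / N)-degree rotation.
    axis_direction: Optional[Vec3] = None
    n_fold: int = 0

# A hexagonal nut: 6-fold symmetry about Z combined with plane symmetry
# across its mid-plane, i.e. a combination of both parameter kinds.
nut_symmetry = SymmetryParameter(
    plane_point=(0.0, 0.0, 0.5),
    plane_normal=(0.0, 0.0, 1.0),
    axis_direction=(0.0, 0.0, 1.0),
    n_fold=6,
)
```

Leaving fields as `None` expresses objects that have only one kind of symmetry, while setting both expresses the combined case the text mentions.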
In the present invention, user settings may be accepted for at least one of the symmetry parameter and the position and orientation of the measurement viewpoint.
In this way, the user can set an appropriate symmetry parameter and measurement viewpoint position and orientation according to the actual shape of the target object, so the time required for template creation can be further shortened, and a more accurate template can be created.
The present invention is also an object recognition processing device comprising:
 an image acquisition unit that acquires an image including the target object;
 a feature extraction unit that extracts feature amounts from the acquired image; and
 a template matching unit that recognizes the target object included in the acquired image by template matching using a template created by the above template creation device.
According to the present invention, object recognition is performed using a template created based on the three-dimensional measurement data of the target object, so the time required for object recognition can be shortened. In addition, a more accurate template enables highly accurate object recognition.
The present invention is also a template creation method for creating a template corresponding to a target object based on three-dimensional measurement data of the target object, comprising the steps of:
 acquiring three-dimensional measurement data of the target object;
 acquiring data on the position and orientation of a measurement viewpoint, that is, a viewpoint from which the target object is three-dimensionally measured;
 setting a symmetry parameter that specifies the symmetry of the target object;
 generating three-dimensional data of the target object using the three-dimensional measurement data corresponding to the measurement viewpoint and three-dimensional data obtained by duplicating the three-dimensional measurement data corresponding to the measurement viewpoint based on the symmetry parameter of the target object;
 acquiring data on the position and orientation of a template creation viewpoint, that is, a viewpoint for creating the template; and
 generating three-dimensional data from the template creation viewpoint based on the generated three-dimensional data of the target object.
According to the present invention, the partial three-dimensional measurement data of the target object are duplicated based on the symmetry set by the symmetry parameter to generate the three-dimensional data of the whole object, so fewer measurements of the target object are needed. Since the template is created based on the three-dimensional data generated in this way, the time required for template creation can be shortened.
In addition, duplicating the three-dimensional measurement data of the target object using symmetry to generate the three-dimensional data of the entire target object reduces the misalignment that occurs when integrating a plurality of three-dimensional measurement data, caused by measurement errors, errors in the photographing viewpoint position and orientation, and the like, so a template closer to the real object can be created, and a more accurate template can be created with a smaller number of measurements.
The order of the steps included in the present invention is not limited to the order described.
The present invention is also an object recognition processing method for recognizing a target object using a template, comprising the steps of:
 acquiring an image including the target object;
 extracting feature amounts from the acquired image; and
 recognizing the target object by template matching using a template created by the above template creation method.
According to the present invention, object recognition is performed using a template created based on the three-dimensional measurement data of the target object, so the time required for object recognition can be shortened. In addition, a more accurate template enables highly accurate object recognition.
The present invention is also a program for causing a computer to execute each step of the above template creation method, and a program for causing a computer to execute each step of the above object recognition processing method.
According to the present invention, the number of shots required when creating a template for object recognition using three-dimensional measurement data of a target object can be reduced, and the time required for template creation can be shortened.
FIG. 1 is an overall configuration diagram of an object recognition device according to an embodiment of the present invention.
FIG. 2 is a block diagram showing a schematic configuration of an image processing device according to an embodiment of the present invention.
FIG. 3 is a flowchart showing the procedure of the template creation process according to an embodiment of the present invention.
FIGS. 4A and 4B are flowcharts showing the procedure of the three-dimensional data duplication process according to an embodiment of the present invention.
FIG. 5 is a diagram illustrating the configuration for photographing the target object and acquiring the photographing viewpoint position and orientation according to an embodiment of the present invention.
FIG. 6 is a diagram illustrating the 6-fold symmetry of the target object according to an embodiment of the present invention.
FIG. 7 is a diagram illustrating the plane symmetry of the target object according to an embodiment of the present invention.
FIGS. 8A and 8B are diagrams showing an example of the three-dimensional data duplication process according to an embodiment of the present invention.
FIG. 9 is a diagram showing another example of the three-dimensional data duplication process according to an embodiment of the present invention.
FIG. 10 is a flowchart showing the procedure of the object recognition process according to the first embodiment of the present invention.
FIG. 11 is a diagram showing an example of a conventional photographing viewpoint of the target object.
FIG. 12 is a diagram showing another example of a conventional photographing viewpoint of the target object.
[Application Example]
Hereinafter, an application example of the present invention will be described with reference to the drawings.
When a template for object recognition is created using the three-dimensional measurement data of the hexagonal-nut target object 271 shown in FIG. 11, it has conventionally been necessary to perform three-dimensional measurement of the six faces of the target object 271, that is, from the diagonally forward photographing viewpoints IMP11, IMP12, IMP13, IMP14, IMP15, and IMP16, from both the front side and the back side of the drawing, for a total of twelve three-dimensional measurements.
As shown in FIG. 6, when such a target object 271 is viewed from the direction of the central axis of the substantially cylindrical hollow interior of the nut 271 (the Z axis orthogonal to the drawing), the nut 271 is symmetric when rotated 60 degrees about this central axis. Further, as shown in FIG. 7, the nut 271 is plane symmetric with respect to the plane PL orthogonal to the central axis direction.
The present invention focuses on such symmetry of the target object. As shown in FIG. 8, one set of three-dimensional measurement data taken from diagonally above is duplicated by rotating it about the central axis in 60-degree steps, six times, thereby generating the three-dimensional data of the portions rotated about the central axis. The three-dimensional data thus generated are further duplicated plane-symmetrically with respect to the plane PL. Therefore, the three-dimensional data of the target object 271 can be generated from three-dimensional measurement at a single photographing viewpoint diagonally above.
Similarly, when a template for object recognition is created using the three-dimensional measurement data of the columnar target object 272 shown in FIG. 12, it has conventionally been necessary to perform three-dimensional measurement of the four faces of the target object 272, that is, from the diagonally forward photographing viewpoints IMP21, IMP24, IMP25, and IMP26, from both the front side and the back side of the drawing, for a total of eight three-dimensional measurements.
As shown in FIG. 9, such a target object 272 is plane symmetric with respect to a plane parallel to the drawing when viewed from the photographing viewpoint IMP21 diagonally above the target object 272, and is 2-fold symmetric about axes orthogonal to the drawing when viewed from the photographing viewpoint IMP22 diagonally above and to the right and the photographing viewpoint IMP23 diagonally above and to the left.
The present invention focuses on such symmetry of the target object. As shown in FIG. 9, one set of three-dimensional measurement data from the photographing viewpoint IMP21 is duplicated plane-symmetrically to generate the three-dimensional data of the portion symmetric about the plane parallel to the drawing, and the three-dimensional measurement data from the photographing viewpoints IMP22 and IMP23 are each rotated 180 degrees to generate the three-dimensional data of the portions rotated 180 degrees about the axes orthogonal to the drawing. Therefore, the three-dimensional data of the target object 272 can be generated from three three-dimensional measurements at the three photographing viewpoints IMP21 to IMP23.
 このようにすれば、対象物体の3次元計測データの計測回数がより少なくて済む。このようにして生成された3次元データに基づいてテンプレートを作成するので、テンプレート作成に要する時間を短縮することができる。
 また、対称性を利用して対象物体の3次元計測データを複製して、対象物体全体の3次元データを生成した場合には、計測誤差や撮影視点位置姿勢の誤差等による、複数の3次元計測データを統合する際のずれを軽減でき、より実物に近いテンプレートを作成できるので、より少ない計測回数でより正確なテンプレートを作成することができる。
In this way, fewer three-dimensional measurements of the target object are required. Since the template is created based on the three-dimensional data generated in this way, the time required for template creation can be shortened.
In addition, when the three-dimensional measurement data of the target object are duplicated using symmetry to generate the three-dimensional data of the entire target object, the misalignment that occurs when integrating multiple sets of three-dimensional measurement data, caused by measurement errors, errors in the position and orientation of the imaging viewpoints, and the like, can be reduced, and a template closer to the real object can be created. A more accurate template can therefore be created from fewer measurements.
[Embodiment]
(Overall configuration of object recognition device)
An object recognition device according to an embodiment of the present invention will be described with reference to FIG. 1.
The object recognition device 2 is installed in a production line that performs assembly, machining, and the like of articles, and is a system that recognizes the position and orientation of target objects 27 loaded in a tray 26 by template matching (three-dimensional object recognition), using data captured from a sensor unit 20. Objects to be recognized (hereinafter also referred to as "target objects") 27 are piled in bulk on the tray 26.
The object recognition device 2 is roughly composed of the sensor unit 20 and an image processing device 21. The sensor unit 20 and the image processing device 21 are connected by wire or wirelessly, and the output of the sensor unit 20 is captured by the image processing device 21. The image processing device 21 is a device that performs various kinds of processing using the data captured from the sensor unit 20. The processing of the image processing device 21 may include, for example, distance measurement (ranging), three-dimensional shape recognition, object recognition, and scene recognition. The recognition result of the object recognition device 2 is output to, for example, a PLC (programmable logic controller) 25 or a display 22, and is used, for example, for controlling a picking robot 28, controlling a machining device or a printing device, and inspecting or measuring the target objects 27.
(Sensor unit)
The sensor unit 20 has at least a camera for capturing an optical image of the target object 27. The sensor unit 20 may further include the components required for three-dimensional measurement of the target object 27 (sensors, an illumination device, a projector, and the like). For example, when the depth distance is measured by stereo matching (also called stereo vision or a stereo camera method), a plurality of cameras are provided in the sensor unit 20. In the case of active stereo, the sensor unit 20 is further provided with a projector that projects pattern light onto the target object 27. When three-dimensional measurement is performed by a spatially coded pattern projection method, a projector for projecting the pattern light and a camera are provided in the sensor unit 20. Any other method capable of acquiring three-dimensional information on the target object 27, such as a photometric stereo method, a TOF (time of flight) method, or a phase shift method, may also be used.
(Image processing device)
The image processing device 21 is composed of, for example, a computer including a CPU (processor), a RAM (memory), a non-volatile storage device (hard disk, SSD, or the like), an input device, and an output device. In this case, the CPU loads a program stored in the non-volatile storage device into the RAM and executes it, thereby realizing the various components described later. However, the configuration of the image processing device 21 is not limited to this; all or part of the components described later may be realized by a dedicated circuit such as an FPGA or an ASIC, or by cloud computing or distributed computing.
FIG. 2 is a block diagram showing the configuration of the image processing device 21. The image processing device 21 includes the configuration of a template creation device 30 and the configuration of an object recognition processing device 31.
The template creation device 30 is a configuration for creating the templates used in the object recognition process, and has an imaging viewpoint three-dimensional position/orientation data acquisition unit 300, a target object three-dimensional measurement data acquisition unit 301, a symmetry parameter setting unit 302, a three-dimensional data duplication unit 303, a template creation viewpoint position/orientation information acquisition unit 304, a template creation viewpoint three-dimensional data generation unit 305, a feature extraction unit 306, a template creation unit 307, and a template information output unit 308. In the present embodiment, the imaging viewpoint three-dimensional position/orientation data acquisition unit 300 corresponds to the "measurement viewpoint position/orientation data acquisition unit" of the present invention. The target object three-dimensional measurement data acquisition unit 301 corresponds to the "three-dimensional measurement data acquisition unit" of the present invention. The three-dimensional data duplication unit 303 corresponds to the "three-dimensional data duplication unit" of the present invention. The template creation viewpoint position/orientation information acquisition unit 304 corresponds to the "template creation viewpoint position/orientation data acquisition unit" of the present invention. The template creation viewpoint three-dimensional data generation unit 305 corresponds to the "creation viewpoint three-dimensional data generation unit" of the present invention.
The object recognition processing device 31 is a configuration for executing object recognition processing by template matching, and has an image acquisition unit 310, an image pyramid creation unit 311, a feature extraction unit 312, a template information acquisition unit 313, a template matching unit 314, and a recognition result output unit 315. In the present embodiment, the image acquisition unit 310, the feature extraction unit 312, and the template matching unit 314 correspond to the "image acquisition unit", the "feature extraction unit", and the "template matching unit" of the present invention, respectively.
(3D data duplication processing)
FIG. 3 is a flowchart showing the overall procedure of the template creation process, which is the template creation method performed in the template creation device 30. First, the three-dimensional data duplication process of step S101 will be described with reference to FIGS. 4A and 4B. FIGS. 4A and 4B are flowcharts illustrating the three-dimensional data duplication procedure: FIG. 4A covers the case where the target object is plane-symmetric, and FIG. 4B the case where the target object is rotationally symmetric or N-fold symmetric (N is a natural number of 2 or more), as described later.
First, the three-dimensional data duplication procedure for the case where the target object is plane-symmetric will be described with reference to the flowchart shown in FIG. 4A.
In step S11, the three-dimensional data duplication unit 303 acquires the data on the three-dimensional position and orientation of the imaging viewpoints at the time of three-dimensional measurement from the imaging viewpoint three-dimensional position/orientation data acquisition unit 300, the three-dimensional measurement data of the target object from the target object three-dimensional measurement data acquisition unit 301, and the symmetry parameters from the symmetry parameter setting unit 302. In the present embodiment, the imaging viewpoint corresponds to the "measurement viewpoint" of the present invention.
FIG. 5 is a diagram illustrating an outline of a configuration for acquiring the three-dimensional measurement data of the target object and the information on the three-dimensional position and orientation of the imaging viewpoint.
A three-dimensional sensor 220 is placed at a predetermined position and orientation with respect to the target object 271, and the three-dimensional measurement data of the target object 271 are acquired (captured) together with the position and orientation information of the imaging viewpoint. When the three-dimensional sensor 220 captures the target object from different viewpoints to acquire three-dimensional measurement data, the three-dimensional sensor 220 may be fixed and the orientation of the target object 271 changed, or the target object 271 may be fixed and the orientation of the three-dimensional sensor 220 changed; it suffices that the relative positional relationship between the three-dimensional sensor 220 and the target object 271 changes. FIG. 5 shows an example in which the three-dimensional sensor 220 captures the target object 271 placed on a mounting surface 32 to acquire the three-dimensional measurement data. Here, the three-dimensional sensor 220 may be fixed and the orientation of the mounting surface 32 changed, or the orientation of the mounting surface 32 may be fixed and the orientation of the three-dimensional sensor 220 changed.
The three-dimensional position and orientation of the imaging viewpoint may be acquired by recognizing a planar marker 33 displayed on the mounting surface 32 of the target object, or a reference object whose shape is known (from 3D CAD data or the like) fixed with respect to the mounting surface, or by providing the mounting surface 32 of the target object on a goniometer stage.
Here, a symmetry parameter is an index of symmetry used, when the target object viewed from an imaging viewpoint at one position and orientation and the target object viewed from an imaging viewpoint at another position and orientation are symmetric, to duplicate the three-dimensional measurement data from the one position and orientation and use it as the three-dimensional measurement data from the other, symmetric position and orientation.
One such symmetry is rotational symmetry about a rotation axis; in this case, the data specifying the rotation axis of the target object is the symmetry parameter.
As shown in FIG. 6, when the target object is a hexagonal nut 271, the nut 271, viewed along the central axis of its substantially cylindrical hollow interior (the Z axis orthogonal to the paper surface), is symmetric under a 60-degree rotation about that central axis. That is, the three-dimensional shape of the nut 271 is the same whenever it is rotated in 60-degree steps. Here, "the same" is not meant in a strict sense, but in the sense required for three-dimensional measurement data used to create a template. Symmetry of this kind, in which the object looks the same after a rotation of (360/N) degrees about the axis of symmetry and returns to its original position after N such rotations (N = 6 here), is called N-fold symmetry, and N is referred to below as the symmetry order. In this case, the data specifying the rotation axis of the target object and the symmetry order N are the symmetry parameters.
Further, as shown in FIG. 7, the nut 271 is plane-symmetric with respect to the plane PL orthogonal to the height direction at half its height along the central axis. In this case, the data specifying the plane of symmetry of the target object is the symmetry parameter.
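One way to carry such symmetry parameters in code is a small record type. The layout and field names below are illustrative assumptions, not taken from the embodiment:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class SymmetryParams:
    """Symmetry parameters for one target object (illustrative structure)."""
    # Plane symmetry: a point on the plane and its normal, or None if absent.
    plane_point: Optional[Vec3] = None
    plane_normal: Optional[Vec3] = None
    # Rotational symmetry: a point on the axis and its direction, or None.
    axis_point: Optional[Vec3] = None
    axis_dir: Optional[Vec3] = None
    # Symmetry order N for N-fold symmetry; None for continuous rotational symmetry.
    n_fold: Optional[int] = None

# A hexagonal nut (FIGS. 6 and 7): 6-fold symmetry about the central (Z) axis,
# plus plane symmetry about the plane PL at half the nut's height h.
h = 4.0  # hypothetical height
nut = SymmetryParams(plane_point=(0.0, 0.0, h / 2), plane_normal=(0.0, 0.0, 1.0),
                     axis_point=(0.0, 0.0, 0.0), axis_dir=(0.0, 0.0, 1.0),
                     n_fold=6)
```

With this representation, the duplication step can inspect which fields are set to decide whether the processing of FIG. 4A, FIG. 4B, or both applies.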
Such symmetry parameters may be set by the user via the input device, or may be calculated according to the shape of the target object specified by the user.
When the symmetry parameters acquired in step S11 include a parameter indicating plane symmetry, the height (H) of the target object is calculated in step S12 from the three-dimensional measurement data of all imaging viewpoints. The height of the target object is the height in the direction orthogonal to the plane of symmetry (the normal direction).
In step S13, one set of three-dimensional measurement data of an imaging viewpoint that has not yet been duplicated is taken out.
In step S14, duplicated data are generated by inverting only the depth-direction distance of the three-dimensional measurement data at half the height of the target object. Taking the depth direction of the duplicated three-dimensional measurement data as the Z-axis direction, three-dimensional data whose Z coordinate Znew is obtained by inverting the Z coordinate Zorg of the measured data at half the height of the target object are generated as the duplicated data.
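Assuming, for illustration, that the Z coordinate is measured from the mounting surface so that the plane of symmetry lies at Z = H/2 (the text only states that the depth coordinate is inverted at half the height), the inversion Znew of Zorg can be sketched as:

```python
def reflect_at_half_height(points, height):
    """Duplicate a cloud by inverting only the Z (depth) coordinate at H/2.

    Under the stated assumption, a point at z_org maps to
    z_new = height - z_org, a reflection about the plane z = height / 2;
    x and y are unchanged.
    """
    return [(x, y, height - z) for (x, y, z) in points]

# Hypothetical patch of the upper half of an object of height H = 4.0.
upper = [(0.0, 0.0, 3.0), (1.0, 0.5, 4.0)]
lower = reflect_at_half_height(upper, 4.0)
```

Applying the reflection twice returns the original patch, as expected for a mirror operation.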
In step S15, it is determined whether duplication has generated three-dimensional data for all viewpoints of the target object.
If it is determined in step S15 that duplication has generated the three-dimensional data for all viewpoints of the target object, the three-dimensional data duplication process for plane symmetry ends.
If, in step S15, there are viewpoints for which three-dimensional data have not yet been generated by duplication, the processing from step S13 onward is repeated.
Next, the three-dimensional data duplication procedure for the case where the target object is rotationally symmetric or N-fold symmetric will be described with reference to the flowchart shown in FIG. 4B. For processing common to FIG. 4A, the same reference numerals are used and detailed description is omitted.
First, in step S11, the three-dimensional data duplication unit 303 acquires the data on the three-dimensional position and orientation of the imaging viewpoints at the time of three-dimensional measurement, the three-dimensional measurement data of the target object, and the symmetry parameters.
The following describes the processing when the symmetry parameters include a parameter indicating rotational symmetry or N-fold symmetry.
Next, in step S16, one set of three-dimensional measurement data of an imaging viewpoint that has not yet been duplicated is taken out.
In step S17, with the Z axis as the rotation axis of the target object, in the case of N-fold symmetry, the already generated three-dimensional measurement data of an imaging viewpoint are rotated about the Z axis by (360/N) degrees and duplicated, thereby generating the three-dimensional data for the current imaging viewpoint. In the case of rotational symmetry, the three-dimensional data for the imaging viewpoint are generated by rotating the already generated three-dimensional measurement data of an imaging viewpoint about the Z axis by an arbitrary angle and duplicating them.
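A minimal sketch of the (360/N)-degree duplication about the Z axis, assuming the rotation axis passes through the origin (the sample point is hypothetical):

```python
import math

def rotate_about_z(points, degrees):
    """Rotate a point cloud about the Z axis through the origin."""
    a = math.radians(degrees)
    c, s = math.cos(a), math.sin(a)
    return [(c * x - s * y, s * x + c * y, z) for (x, y, z) in points]

def replicate_n_fold(points, n):
    """Step S17 for N-fold symmetry: n rotated copies at 360/n-degree steps."""
    step = 360.0 / n
    return [p for k in range(n) for p in rotate_about_z(points, k * step)]

patch = [(1.0, 0.0, 0.5)]          # one hypothetical measured point
full = replicate_n_fold(patch, 6)  # six copies, 60 degrees apart
```

For continuous rotational symmetry the same `rotate_about_z` would be called with whatever angle is convenient, as the text describes.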
Here, when the target object is N-fold symmetric or rotationally symmetric, the rotation center of the rotated three-dimensional data must coincide with that of the original three-dimensional measurement data. To match the rotation centers between sets of three-dimensional data, the rotation center can be identified by template matching between silhouette images projected onto a two-dimensional plane. Alternatively, the rotation center may be identified by registering the three-dimensional point clouds with each other using the ICP (Iterative Closest Point) method. As described above, the user may also specify the rotation center as a symmetry parameter, or the target object may be placed so that its rotation center coincides with a known point in the field of view.
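As a hedged illustration of why the rotation centers must coincide: rotating about the wrong center leaves a residual translation between the copy and the original, which is the component a silhouette match or the first translation step of an ICP-style registration corrects. A centroid-difference sketch (only the translational part; a real registration would also refine rotation iteratively):

```python
def centroid(points):
    """Mean position of a point cloud."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def align_centroids(moving, fixed):
    """Translate `moving` so its centroid coincides with that of `fixed`."""
    cm, cf = centroid(moving), centroid(fixed)
    d = tuple(cf[i] - cm[i] for i in range(3))
    return [(x + d[0], y + d[1], z + d[2]) for (x, y, z) in moving]

# Hypothetical example: same shape, offset rotation center.
fixed = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
moving = [(5.0, 1.0, 0.0), (7.0, 1.0, 0.0)]
aligned = align_centroids(moving, fixed)
```

After this step the two clouds share a common center, and the (360/N)-degree rotation can be applied about it.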
In step S18, it is determined whether duplication has generated three-dimensional data for all viewpoints of the target object.
If it is determined in step S18 that duplication has generated the three-dimensional data for all viewpoints of the target object, the three-dimensional data duplication process for rotational or N-fold symmetry ends.
If, in step S18, there are viewpoints for which three-dimensional data have not yet been generated by duplication, the processing from step S16 onward is repeated.
FIGS. 4A and 4B separately describe the case where the symmetry parameters include a parameter indicating plane symmetry and the case where they include a parameter indicating rotational or N-fold symmetry. When the symmetry parameters include both a parameter indicating plane symmetry and a parameter indicating rotational or N-fold symmetry, both the processes of FIGS. 4A and 4B are performed, and the order of the two processes may be set as appropriate.
When the target object 271 is a hexagonal nut, for example, as shown in FIG. 8A, the three-dimensional measurement data captured diagonally from above are duplicated using the 6-fold symmetry with respect to the imaging viewpoint IMP11 of FIG. 8B and the plane symmetry with respect to the plane orthogonal to the central axis of the hollow interior, thereby generating the three-dimensional data of the entire target object 271.
Further, in the case of the columnar target object 272 shown in FIG. 9, the object is plane-symmetric with respect to the imaging viewpoint IMP21 and 2-fold symmetric with respect to each of the imaging viewpoints IMP22 and IMP23. When the symmetry parameters thus include both a parameter indicating plane symmetry and a parameter indicating rotational or N-fold symmetry, the three-dimensional measurement data for all viewpoints can be generated by plane-symmetric duplication of the three-dimensional measurement data for the imaging viewpoint IMP21 and 2-fold symmetric duplication for each of the imaging viewpoints IMP22 and IMP23.
(Template creation process)
Returning to FIG. 3, the template creation procedure will be described.
Following the three-dimensional data duplication process described above (step S101), in step S102 the template creation viewpoint three-dimensional data generation unit 305 acquires, from the template creation viewpoint position/orientation information acquisition unit 304, information on the position and orientation of the template creation viewpoints, that is, the viewpoints from which templates of the target object are to be created. The information on the position and orientation of the template creation viewpoints may be entered by the user via the input device, or the positions and orientations of a plurality of template creation viewpoints may be set in advance by a predetermined method.
In step S103, the three-dimensional data at each template creation viewpoint acquired in step S102 are generated based on the three-dimensional data of the target object duplicated in step S101.
In step S104, the feature extraction unit 306 extracts image features of the target object 27 based on the three-dimensional data for the template creation viewpoint generated in step S103. As image features, for example, luminance, color, luminance gradient direction, quantized gradient direction, HoG (Histogram of Oriented Gradients), surface normal direction, Haar-like features, and SIFT (Scale-Invariant Feature Transform) can be used. The luminance gradient direction represents, as a continuous value, the direction (angle) of the luminance gradient in a local region centered on a feature point, and the quantized gradient direction represents the direction of the luminance gradient in that local region as a discrete value (for example, eight directions are held as one byte of information, 0 to 7). A point from which an image feature is obtained is called a feature point.
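The quantized gradient direction described above can be sketched as follows. The 8-bin layout (45 degrees per bin, bin 0 centered on 0 degrees) is one plausible choice, not necessarily the one used in the embodiment:

```python
import math

def quantize_gradient_direction(gx, gy, bins=8):
    """Map a luminance gradient (gx, gy) to a discrete direction 0..bins-1.

    Each bin spans 360/bins degrees; bin 0 is centered on 0 degrees.
    """
    angle = math.degrees(math.atan2(gy, gx)) % 360.0
    width = 360.0 / bins
    return int((angle + width / 2) // width) % bins
```

With 8 bins the result fits in one byte, matching the 0-to-7 encoding mentioned in the text.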
In step S105, the template creation unit 307 creates the template corresponding to the template creation viewpoint based on the image features extracted in step S104. The template is, for example, a data set containing the coordinate values of each feature point and the extracted image features.
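As a sketch, a template of this kind can be a plain list of records, one per feature point. The record layout and field names are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class TemplateFeature:
    x: int            # feature-point coordinates in the rendered view
    y: int
    direction: int    # e.g. a quantized gradient direction, 0..7

# A per-viewpoint template is simply the collection of its features
# (in practice, the viewpoint's position and orientation would be stored with it).
template: List[TemplateFeature] = [
    TemplateFeature(x=12, y=5, direction=3),
    TemplateFeature(x=14, y=9, direction=0),
]
```

One such list is produced for every template creation viewpoint in the loop of steps S103 to S105.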
In step S106, the template creation unit 307 determines whether templates have been created for all the template creation viewpoints acquired in step S102.
If it is determined in step S106 that the templates for all the template creation viewpoints acquired in step S102 have been created, the template creation unit 307 outputs the template data to the object recognition processing device via the template information output unit 308.
If, in step S106, there are template creation viewpoints among those acquired in step S102 for which templates have not yet been created, the processing of steps S103 to S105 is repeated.
(Object recognition processing)
An example of the object recognition processing method performed by the object recognition processing device 31 will be described with reference to the flowchart of FIG. 10.
First, in step S201, the template information acquisition unit 313 acquires the template data output from the template information output unit 308 of the template creation device 30 and supplies it to the template matching unit 314.
Next, in step S202, the image acquisition unit 310 acquires an image that contains the target object and is to be subjected to the object recognition processing. This image may be read from the internal storage device of the image processing device 21, or may be acquired from external storage or the like via a network. An image captured by the sensor unit 20 may also be acquired. When the recognition result is used for controlling the picking robot 28, the sensor unit 20 may be attached to the arm of the picking robot 28.
Next, in step S203, the image pyramid creation unit 311 generates low-resolution images from the image acquired by the image acquisition unit 310 to build an image pyramid. For example, the image pyramid can be composed of a plurality of images whose resolution increases step by step from low to high, such as a first-layer image of 160 × 120 pixels and a second-layer image of 320 × 240 pixels.
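Building such a pyramid by repeated 2× downsampling can be sketched as follows, with a grayscale image as a list of rows and simple 2×2 averaging (a real implementation would typically smooth before subsampling):

```python
def downsample_2x(img):
    """Halve a grayscale image (list of rows) by averaging 2x2 blocks."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2 * r][2 * c] + img[2 * r][2 * c + 1]
              + img[2 * r + 1][2 * c] + img[2 * r + 1][2 * c + 1]) / 4.0
             for c in range(w)] for r in range(h)]

def build_pyramid(img, levels):
    """Return layers coarsest first, e.g. a 160x120 layer above a 320x240 base."""
    pyramid = [img]
    for _ in range(levels - 1):
        pyramid.append(downsample_2x(pyramid[-1]))
    return pyramid[::-1]  # first layer = lowest resolution

# Tiny hypothetical image, two pyramid levels.
base = [[0, 0, 2, 2], [0, 0, 2, 2], [4, 4, 6, 6], [4, 4, 6, 6]]
pyramid = build_pyramid(base, levels=2)
```

The first element of `pyramid` then plays the role of the first-layer image in the matching described below in the text.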
Next, in step S204, the feature extraction unit 312 extracts image features from each layer image constituting the image pyramid. The extracted image features are of the same kind as those used when creating the templates. As a result of extracting image features from the first-layer image (the lowest-resolution image) at the top of the image pyramid, an image that has the same resolution as the first-layer image and holds, as its pixel values, the feature data extracted at each pixel position of the first-layer image (hereinafter also referred to as the "first-layer feature image") is obtained. Similarly, a second-layer feature image is obtained as a result of extracting image features from the second-layer image.
Next, in step S205, the template matching unit 314 performs template matching using the per-viewpoint template data supplied from the template information acquisition unit 313 and the feature values calculated by the feature extraction unit 312 that correspond to each template's data.
Here, the template matching unit 314 first performs matching using the first-layer feature image and the per-viewpoint templates for the first layer. When the template matching unit 314 detects templates that are candidate matches as a result of this matching, it sets a search range in the second-layer feature image based on the detection result and performs matching using the second-layer feature image and the per-viewpoint templates for the second layer. When third- and fourth-layer images exist, the same processing is performed on those layer images as well. As a result of this processing, the position and orientation of the object in the lowest layer (the layer with the highest image resolution) can be recognized.
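The coarse-to-fine search can be sketched as follows. The score used (the fraction of template features whose quantized direction matches the feature image) and the fixed refinement window are simplifying assumptions; feature images are lists of rows of quantized directions, and templates are lists of (x, y, direction) tuples:

```python
def match_score(feat_img, template, ox, oy):
    """Fraction of template features matching the feature image at offset (ox, oy)."""
    hits = 0
    for (x, y, d) in template:
        r, c = oy + y, ox + x
        if 0 <= r < len(feat_img) and 0 <= c < len(feat_img[0]) and feat_img[r][c] == d:
            hits += 1
    return hits / len(template)

def search(feat_img, template):
    """Exhaustive search for the best offset within one pyramid layer."""
    return max(((ox, oy) for oy in range(len(feat_img))
                for ox in range(len(feat_img[0]))),
               key=lambda o: match_score(feat_img, template, o[0], o[1]))

def coarse_to_fine(coarse_img, fine_img, template_coarse, template_fine, window=2):
    """Find a candidate on the coarse layer, then refine near twice that offset."""
    cx, cy = search(coarse_img, template_coarse)
    candidates = [(2 * cx + dx, 2 * cy + dy)
                  for dy in range(-window, window + 1)
                  for dx in range(-window, window + 1)]
    return max(candidates,
               key=lambda o: match_score(fine_img, template_fine, o[0], o[1]))
```

In the embodiment the same narrowing is repeated layer by layer, and each viewpoint's template competes for the best score.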
 Next, in step S206, when the template matching unit 314 has recognized the position, orientation, and other attributes of the target object, it outputs recognition information indicating the recognition result to the recognition result output unit 315. The recognition result output unit 315 outputs the recognition information supplied from the template matching unit 314 to an external device, a liquid crystal panel, or the like. The recognition information is used, for example, for inspection and measurement of the target object and for control of the picking robot 28.
 (Advantages of this embodiment)
 In the configuration and processing described above, partial three-dimensional measurement data of the target object is duplicated based on the specified symmetry to generate three-dimensional data of the whole object, so fewer three-dimensional measurements of the target object are required. Because the templates are created from the three-dimensional data generated in this way, the time required for template creation can be shortened.
 Furthermore, when the three-dimensional data of the whole target object is generated by duplicating its three-dimensional measurement data using symmetry, the misalignment that would otherwise arise when integrating multiple three-dimensional measurements, caused by measurement errors, errors in the imaging viewpoint position and orientation, and the like, can be reduced, and a template closer to the real object can be created. More accurate templates can therefore be created from fewer measurements.
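As a minimal sketch of the duplication idea for the plane-symmetric case (the function name and plane parameterization are assumptions for illustration, not the patent's implementation), reflecting a measured half of a point cloud across a symmetry plane could be written as:

```python
import numpy as np

def mirror_points(points, plane_point, plane_normal):
    """Reflect 3D points across the plane (plane_point, plane_normal)
    and append the reflections, approximating the whole object from
    one measured half. `plane_normal` must be unit length.
    """
    n = np.asarray(plane_normal, dtype=float)
    d = (points - plane_point) @ n            # signed distance to the plane
    mirrored = points - 2.0 * d[:, None] * n  # reflect each point
    return np.vstack([points, mirrored])
```

A rotationally symmetric object would instead be duplicated by rotating the measured points about the symmetry axis, but the principle of generating the whole from a measured part is the same.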
 <Others>
 The above embodiment is merely an illustrative example of a configuration of the present invention. The present invention is not limited to the specific forms described above, and various modifications are possible within the scope of its technical idea.
 When the target object is measured three-dimensionally, the number of imaging viewpoints may be determined by the user. One viewpoint or multiple viewpoints may be used.
 The measurement viewpoint positions and orientations can be determined, with the symmetry of the target object taken into account, so that three-dimensional data of the whole target object can be generated. The next imaging viewpoint may also be decided while three-dimensional data is being generated by duplication.
 The height needed to duplicate a plane-symmetric target object may be taken as the average of the measurements near the maximum Z value (the maximum height in the Z direction) of each set of three-dimensional measurement data. To account for outliers and noise, a histogram of the Z values (heights in the Z direction) of each set of three-dimensional measurement data may be created, and the Z value of the highest bin that received at least a threshold number of votes may be used.
 Alternatively, the height needed to duplicate a plane-symmetric target object may be entered by the user as an actually measured value.
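The histogram-based height estimate described above can be sketched as follows; the bin width, the vote threshold, and the fallback to the raw maximum are illustrative assumptions:

```python
import numpy as np

def estimate_height(z_values, bin_width=0.5, min_votes=3):
    """Estimate the object's top height from measured Z values.

    Builds a histogram of Z and returns the center of the highest bin
    that received at least `min_votes` samples, which suppresses
    outliers and noise near the true Z maximum.
    """
    z = np.asarray(z_values, dtype=float)
    edges = np.arange(z.min(), z.max() + bin_width, bin_width)
    counts, edges = np.histogram(z, bins=edges)
    valid = np.nonzero(counts >= min_votes)[0]
    if len(valid) == 0:
        return float(z.max())          # fall back to the raw maximum
    top = valid[-1]                    # highest bin with enough votes
    return float((edges[top] + edges[top + 1]) / 2.0)
```

A single spurious high reading is rejected because its bin never reaches the vote threshold, whereas averaging only the values near the raw maximum would be pulled toward the outlier.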
 When duplicating three-dimensional measurement data, any data attached to each three-dimensional point, such as normal vectors or point cloud colors, is duplicated at the same time. For example, in the case of rotational symmetry the normal vectors are rotated in the same way, and in the case of plane symmetry only the depth component of each normal has its sign inverted.
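An illustrative sketch of duplicating the attached normals, assuming the symmetry axis is Z and the depth direction is the Z component (both assumptions for this example, not fixed by the text):

```python
import numpy as np

def rotate_points_and_normals(points, normals, angle_deg):
    """Rotationally symmetric duplication about the Z axis: the attached
    normal vectors are rotated by the same rotation as the points."""
    a = np.radians(angle_deg)
    R = np.array([[np.cos(a), -np.sin(a), 0.0],
                  [np.sin(a),  np.cos(a), 0.0],
                  [0.0,        0.0,       1.0]])
    return points @ R.T, normals @ R.T

def mirror_normals_depth(normals):
    """Plane-symmetric duplication: only the depth (Z) component of each
    normal changes sign; the X and Y components are unchanged."""
    flipped = np.asarray(normals, dtype=float).copy()
    flipped[:, 2] *= -1.0
    return flipped
```

Point cloud colors, by contrast, are copied unchanged to the duplicated points.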
 In the case of rotational symmetry, the rotation angle can be set as appropriate, but it is preferably set according to the range that can be measured from a single viewpoint, for example about 30 to 45 degrees.
 If the range over which the three-dimensional measurement data is highly reliable is known, using only the data from that range for duplication yields more accurate three-dimensional data, and therefore more accurate templates.
 To allow the constituent elements of the present invention to be compared with the configurations of the embodiments, they are listed below with the reference numerals used in the drawings.
 <Invention 1>
 A template creation device (30) that creates a template corresponding to a target object (27) based on three-dimensional measurement data of the target object (27), comprising:
 a three-dimensional measurement data acquisition unit (301) that acquires three-dimensional measurement data of the target object (271);
 a measurement viewpoint position/orientation data acquisition unit (300) that acquires data on the position and orientation of a measurement viewpoint from which the target object is three-dimensionally measured;
 a symmetry parameter setting unit (302) that sets a symmetry parameter specifying the symmetry of the target object;
 a three-dimensional data duplication unit (303) that generates three-dimensional data of the target object (27) by using the three-dimensional measurement data corresponding to the measurement viewpoint together with three-dimensional data obtained by duplicating that measurement data based on the symmetry parameter of the target object (27);
 a template creation position/orientation data acquisition unit (304) that acquires data on the position and orientation of a template creation viewpoint from which the template is created; and
 a creation viewpoint three-dimensional data generation unit (305) that generates three-dimensional data from the template creation viewpoint based on the generated three-dimensional data of the target object.
2: Object recognition device
21: Image processing device
27: Target object
30: Template creation device
31: Object recognition processing device
300: Imaging viewpoint three-dimensional position/orientation data acquisition unit
301: Target object three-dimensional measurement data acquisition unit
302: Symmetry parameter setting unit
303: Three-dimensional data duplication unit
304: Template creation viewpoint position/orientation information acquisition unit
305: Template creation viewpoint three-dimensional data generation unit
310: Image acquisition unit
312: Feature extraction unit
314: Template matching unit

Claims (10)

  1.  A template creation device that creates a template corresponding to a target object based on three-dimensional measurement data of the target object, the device comprising:
     a three-dimensional measurement data acquisition unit that acquires three-dimensional measurement data of the target object;
     a measurement viewpoint position/orientation data acquisition unit that acquires data on the position and orientation of a measurement viewpoint from which the target object is three-dimensionally measured;
     a symmetry parameter setting unit that sets a symmetry parameter specifying the symmetry of the target object;
     a three-dimensional data duplication unit that generates three-dimensional data of the target object by using the three-dimensional measurement data corresponding to the measurement viewpoint together with three-dimensional data obtained by duplicating that measurement data based on the symmetry parameter of the target object;
     a template creation position/orientation data acquisition unit that acquires data on the position and orientation of a template creation viewpoint from which the template is created; and
     a creation viewpoint three-dimensional data generation unit that generates three-dimensional data from the template creation viewpoint based on the generated three-dimensional data of the target object.
  2.  The template creation device according to claim 1, wherein the symmetry parameter is a parameter specifying that the target object is plane symmetric.
  3.  The template creation device according to claim 1 or 2, wherein the symmetry parameter is a parameter specifying that the target object is rotationally symmetric.
  4.  The template creation device according to claim 1 or 2, wherein the symmetry parameter is a parameter specifying that the target object is symmetric under rotation by (360/N) degrees, where N is a natural number of 2 or more.
  5.  The template creation device according to any one of claims 1 to 4, which accepts a user's setting for at least one of the symmetry parameter and the position and orientation of the measurement viewpoint.
  6.  An object recognition processing device comprising:
     an image acquisition unit that acquires an image including the target object;
     a feature extraction unit that extracts feature amounts from the acquired image; and
     a template matching unit that recognizes the target object included in the acquired image by template matching using a template created by the template creation device according to any one of claims 1 to 4.
  7.  A template creation method for creating a template corresponding to a target object based on three-dimensional measurement data of the target object, the method comprising:
     a step of acquiring three-dimensional measurement data of the target object;
     a step of acquiring data on the position and orientation of a measurement viewpoint from which the target object is three-dimensionally measured;
     a step of setting a symmetry parameter specifying the symmetry of the target object;
     a step of generating three-dimensional data of the target object by using the three-dimensional measurement data corresponding to the measurement viewpoint together with three-dimensional data obtained by duplicating that measurement data based on the symmetry parameter of the target object;
     a step of acquiring data on the position and orientation of a template creation viewpoint from which the template is created; and
     a step of generating three-dimensional data from the template creation viewpoint based on the generated three-dimensional data of the target object.
  8.  An object recognition processing method for recognizing a target object using a template, the method comprising:
     a step of acquiring an image including the target object;
     a step of extracting feature amounts from the acquired image; and
     a step of recognizing the target object by template matching using a template created by the template creation method according to claim 7.
  9.  A program for causing a computer to execute each step of the template creation method according to claim 7.
  10.  A program for causing a computer to execute each step of the object recognition processing method according to claim 8.
PCT/JP2019/028691 2019-07-22 2019-07-22 Template creation device, object recognition processing device, template creation method, object recognition processing method, and program WO2021014542A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2019/028691 WO2021014542A1 (en) 2019-07-22 2019-07-22 Template creation device, object recognition processing device, template creation method, object recognition processing method, and program
JP2021534434A JP7327484B2 (en) 2019-07-22 2019-07-22 Template creation device, object recognition processing device, template creation method, object recognition processing method, and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2019/028691 WO2021014542A1 (en) 2019-07-22 2019-07-22 Template creation device, object recognition processing device, template creation method, object recognition processing method, and program

Publications (1)

Publication Number Publication Date
WO2021014542A1 true WO2021014542A1 (en) 2021-01-28

Family

ID=74193521

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/028691 WO2021014542A1 (en) 2019-07-22 2019-07-22 Template creation device, object recognition processing device, template creation method, object recognition processing method, and program

Country Status (2)

Country Link
JP (1) JP7327484B2 (en)
WO (1) WO2021014542A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007524085A (en) * 2003-12-11 2007-08-23 ストライダー ラブス,インコーポレイテッド A technique for predicting the surface of a shielded part by calculating symmetry.
JP2010112729A (en) * 2008-11-04 2010-05-20 Omron Corp Method of creating three-dimensional model, and object recognition device
JP2018197685A (en) * 2017-05-23 2018-12-13 公益財団法人かずさDna研究所 Three-dimensional measurement device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2530866B (en) * 2014-08-06 2017-05-03 Gripple Ltd Securing device


Also Published As

Publication number Publication date
JPWO2021014542A1 (en) 2021-01-28
JP7327484B2 (en) 2023-08-16

Similar Documents

Publication Publication Date Title
CN110490916B (en) Three-dimensional object modeling method and apparatus, image processing device, and medium
Singh et al. Bigbird: A large-scale 3d database of object instances
JP5480914B2 (en) Point cloud data processing device, point cloud data processing method, and point cloud data processing program
WO2012053521A1 (en) Optical information processing device, optical information processing method, optical information processing system, and optical information processing program
KR20160082931A (en) Method for calibrating a depth camera
WO2021140886A1 (en) Three-dimensional model generation method, information processing device, and program
EP3258441A1 (en) Template creation device and template creation method
CN103562934B (en) Face location detection
JP2011198349A (en) Method and apparatus for processing information
JP6836561B2 (en) Image processing device and image processing method
JP2012043308A (en) Position and attitude determination method, position and attitude determination device, object model generation method, object model generation device and program
EP3382645A2 (en) Method for generation of a 3d model based on structure from motion and photometric stereo of 2d sparse images
CN111862301A (en) Image processing method, image processing apparatus, object modeling method, object modeling apparatus, image processing apparatus, object modeling apparatus, and medium
JP6172432B2 (en) Subject identification device, subject identification method, and subject identification program
CN114761997A (en) Target detection method, terminal device and medium
Weinmann et al. Geometric point quality assessment for the automated, markerless and robust registration of unordered TLS point clouds
WO2020075252A1 (en) Information processing device, program, and information processing method
CN114170284B (en) Multi-view point cloud registration method based on active landmark point projection assistance
US10252417B2 (en) Information processing apparatus, method of controlling information processing apparatus, and storage medium
JP6237032B2 (en) Color and three-dimensional shape measuring method and apparatus
US20200286205A1 (en) Precise 360-degree image producing method and apparatus using actual depth information
GB2569609A (en) Method and device for digital 3D reconstruction
WO2021014542A1 (en) Template creation device, object recognition processing device, template creation method, object recognition processing method, and program
WO2021014538A1 (en) Template creation device, object recognition processing device, template creation method, object recognition processing method, and program
JP7298687B2 (en) Object recognition device and object recognition method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19938546

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021534434

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19938546

Country of ref document: EP

Kind code of ref document: A1