WO2021117117A1 - Model generation device and picking robot - Google Patents

Model generation device and picking robot

Info

Publication number
WO2021117117A1
Authority
WO
WIPO (PCT)
Prior art keywords
model
unit
work
information
parameter
Prior art date
Application number
PCT/JP2019/048190
Other languages
French (fr)
Japanese (ja)
Inventor
Shigenobu Asada
Ryosuke Kawanishi
Original Assignee
Mitsubishi Electric Corporation
Priority date
Filing date
Publication date
Application filed by Mitsubishi Electric Corporation filed Critical Mitsubishi Electric Corporation
Priority to PCT/JP2019/048190 priority Critical patent/WO2021117117A1/en
Priority to CN201980102787.3A priority patent/CN114766037A/en
Priority to DE112019007961.1T priority patent/DE112019007961T5/en
Priority to JP2021563478A priority patent/JP7162760B2/en
Publication of WO2021117117A1 publication Critical patent/WO2021117117A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75Determining position or orientation of objects or cameras using feature-based methods involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30164Workpiece; Machine component
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20Indexing scheme for editing of 3D models
    • G06T2219/2021Shape modification

Definitions

  • the present invention relates to a model generation device and a picking robot that generate a model of a target work.
  • Patent Document 1 discloses the technology of a robot simulation device that simulates a process in which a model of the target work is input, image data is generated while simulating a scene of the target work, a gripping position is estimated from the image data, and the target work is picked out based on the estimation result.
  • the robot simulation device described in Patent Document 1 improves the picking success rate, which is the ratio of the number of successful grips to the number of trials of the gripping motion on the target work, by adjusting the simulation parameters related to the robot motion.
  • in the robot simulation device described in Patent Document 1, however, the simulation can be applied only to standard works for which a CAD (Computer Aided Design) model of an industrial part exists. The robot simulation device described in Patent Document 1 therefore has the problem that the simulation cannot be applied to works such as food items, whose shapes vary irregularly from individual to individual and for which no CAD model exists.
  • the present invention has been made in view of the above, and an object of the present invention is to obtain a model generation device capable of generating a model of a target work that has an irregular shape, such that the model is applicable to simulation.
  • to solve the above problem, the model generation device of the present invention includes: a sensor that acquires scene information including information on a target work for which a model is to be generated;
  • a work information acquisition unit that acquires, from the scene information, work information that is the information on the target work;
  • a reference model determination unit that determines, based on the scene information and the work information, a reference model that serves as a reference for expressing irregular shapes;
  • a deformation constraint condition determination unit that determines, based on the scene information and the work information, a deformation constraint condition that is a condition for deforming the reference model; and an amorphous model generation unit that determines deformation parameters within the range of the deformation constraint condition and deforms the reference model according to the deformation parameters.
  • the model generation device thus has the effect of being able to generate a model of a target work that has an irregular shape and is applicable to simulation.
  • FIG. 1 is a block diagram showing a configuration example of the model generation device according to the first embodiment; FIG. 2 is a first flowchart showing the operation of the model generation device according to the first embodiment.
  • FIG. 3 is a second flowchart showing the operation of the model generation device according to the first embodiment; FIG. 4 is a diagram showing an example in which the processing circuit included in the model generation device according to the first embodiment is configured by a processor and a memory; FIG. 5 is a diagram showing an example in which the processing circuit is configured by dedicated hardware.
  • FIG. 6 is a block diagram showing a configuration example of the reference model determination unit according to the second embodiment; FIG. 7 is a flowchart showing the operation of the reference model determination unit according to the second embodiment.
  • the third embodiment is illustrated by block diagrams showing configuration examples of the picking robot, the simulation condition setting unit, the data set generation unit, the parameter adjustment unit, and the gripping parameter adjusting unit, and by a flowchart showing the operation of the gripping parameter adjusting unit.
  • a block diagram shows a configuration example of the recognition parameter adjustment unit according to the third embodiment, and a flowchart shows the operation of the recognition parameter adjusting unit according to the third embodiment.
  • FIG. 1 is a block diagram showing a configuration example of the model generation device 10 according to the first embodiment of the present invention.
  • the model generation device 10 generates a model of a target work having an irregular shape and applicable to simulation.
  • the model of the target work generated by the model generation device 10 may be a three-dimensional target work model or a two-dimensional target work model.
  • a method in which the model generation device 10 generates a three-dimensional model of the target work will be specifically described.
  • the model of the target work may be simply referred to as a model.
  • a two-dimensional model can be realized by dimensional compression, such as projecting the three-dimensional model generated by the model generation device 10 onto a plane.
  • the target works for which the model generation device 10 can generate a model are not limited to amorphous works, which are target works having irregular shapes, but also include fixed-shape works, which are target works whose shape does not vary between individuals.
  • the model generation device 10 includes a sensor 101, a work information acquisition unit 102, a reference model determination unit 103, a deformation constraint condition determination unit 104, a storage unit 105, and an amorphous model generation unit 106. Details of each component will be described together with the operation of the model generation device 10.
  • FIG. 2 is a first flowchart showing the operation of the model generation device 10 according to the first embodiment. The flowchart of FIG. 2 shows the operation from the sensor 101 to the deformation constraint condition determination unit 104.
  • FIG. 3 is a second flowchart showing the operation of the model generation device 10 according to the first embodiment. The flowchart of FIG. 3 shows the operation of the amorphous model generation unit 106.
  • in the present embodiment, the sensor 101 first measures the target work for which a model is to be generated (step S101) and acquires the scene information of the target work.
  • the scene information includes not only the work information, which is the information on the target work for which the model is to be generated, but also surrounding information such as the background of the target work.
  • the work information is information representing the shape, color, etc. of the target work.
  • the sensor 101 is, for example, a 2D camera, a 3D sensor, or the like.
  • the sensor 101 may acquire 2D data such as a grayscale image or a color image using a 2D camera, or may acquire 3D data such as a 3D point cloud or a distance image using a 3D sensor.
  • the scene information may be either measurement data or image data. In the following description, 2D denotes two-dimensional and 3D denotes three-dimensional.
  • the work information acquisition unit 102 acquires work information, which is information on the target work to be model-generated, from the scene information acquired by the sensor 101 (step S102).
  • the work information acquisition unit 102 may acquire, from the data acquired by the sensor 101, information such as the size range of the target work and its color expressed in RGB (Red Green Blue).
  • as a method of acquiring the work information, for example, one target work is placed on a uniaxial rotation stage, and the sensor 101 measures the target work while the stage rotates it. The data need not cover the entire circumference of the target work, and the measurement by the sensor 101 may be performed only once.
  • the work information acquisition unit 102 deletes the background and the like unnecessary for model generation from the obtained data, and acquires the data of only the target work.
  • the work information acquisition unit 102 calculates the size range of the target work, the average color information, and the like based on the obtained data of the target work.
  • the size range of the target work is, for example, a range specified by each of the X coordinate, the Y coordinate, and the Z coordinate of the three-dimensional point cloud.
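As an illustration of the size-range computation described above, here is a minimal pure-Python sketch; the data format and function names are our own assumptions, not from the patent:

```python
def size_range(points):
    """Axis-aligned extent of a 3D point cloud: (min, max) per axis.

    `points` is a list of (x, y, z) tuples with the background removed.
    """
    mins = [min(p[i] for p in points) for i in range(3)]
    maxs = [max(p[i] for p in points) for i in range(3)]
    return list(zip(mins, maxs))

def average_color(colors):
    """Mean RGB over the work's pixels; `colors` is a list of (r, g, b)."""
    n = len(colors)
    return tuple(sum(c[i] for c in colors) / n for i in range(3))

cloud = [(0.0, 1.0, 2.0), (1.0, 3.0, 0.5), (0.5, 2.0, 1.0)]
print(size_range(cloud))  # [(0.0, 1.0), (1.0, 3.0), (0.5, 2.0)]
```

The size range here is simply the bounding box of the background-removed measurement data, one (min, max) pair per coordinate axis.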
  • the reference model determination unit 103 determines a reference model, which is a reference model for expressing an irregular shape, based on the scene information obtained by the sensor 101 and the work information obtained by the work information acquisition unit 102. (Step S103).
  • the reference model determined by the reference model determination unit 103 may be a primitive shape, that is, a geometric shape such as a rectangular parallelepiped, a sphere, or a triangular pyramid; a model created by a user; or a combination of geometric shapes and user-created models.
  • the method of expressing the model may be a set of points having information on three-dimensional coordinates, that is, a three-dimensional point cloud, a mesh model, a numerical value capable of expressing a geometric shape, or a combination thereof.
  • the expression method of the reference model is not limited to the description content of the present embodiment, and other expression methods may be used.
  • the reference model determination unit 103 selects one piece of scene information from the plurality of pieces of scene information obtained by the sensor 101, and calculates the position of the center of gravity of the target work represented by the work information included in that scene information.
  • the reference model determination unit 103 uses the calculated center of gravity position as the center point of the reference model, and generates several different types of reference models based on the center point.
  • the reference model determination unit 103 determines the size of the reference model to be generated based on the work information obtained by the work information acquisition unit 102.
  • the reference model determination unit 103 may calculate the distributions of normals of the measurement data and of each candidate reference model, and automatically determine, as the reference model for expressing the irregular shape of the target work, the candidate whose normal distribution is closest to that of the measurement data.
  • the method of determining the reference model by such automatic determination can be expressed by the following equation (1), which the reference model determination unit 103 evaluates:
  • P* = argmin_P ‖g(M) − g(P)‖ ... (1)
  • where M is the measurement data, P is a candidate reference model, g(M) is the normal distribution of the measurement data, and g(P) is the normal distribution of the reference model.
  • alternatively, the user may determine the geometric shape most similar to the target work.
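The automatic selection described above can be sketched in pure Python by representing each normal distribution g(·) as a coarse histogram and taking the candidate with the smallest distance to the measured distribution; the histogram scheme, the L1 distance, and the toy data are all illustrative assumptions:

```python
def normal_histogram(normals, bins=4):
    """Coarse distribution g(.) of unit normals: histogram of the z component."""
    hist = [0] * bins
    for (_, _, nz) in normals:
        idx = min(int((nz + 1.0) / 2.0 * bins), bins - 1)
        hist[idx] += 1
    total = len(normals)
    return [h / total for h in hist]

def closest_reference_model(measured, candidates):
    """argmin over candidates P of ||g(M) - g(P)||, per the automatic selection."""
    g_m = normal_histogram(measured)
    def dist(item):
        _, normals = item
        g_p = normal_histogram(normals)
        return sum(abs(a - b) for a, b in zip(g_m, g_p))
    return min(candidates, key=dist)[0]

# toy data: mostly up/down normals match the box-like candidate, not the sphere-like one
measured = [(0, 0, 1.0)] * 8 + [(0, 0, -1.0)] * 2
box = [(0, 0, 1.0)] * 7 + [(0, 0, -1.0)] * 3
sphere = [(0, 0, 1.0)] * 3 + [(0, 0, 0.0)] * 4 + [(0, 0, -1.0)] * 3
print(closest_reference_model(measured, [("box", box), ("sphere", sphere)]))  # box
```

A real implementation would bin the full normal direction rather than a single component, but the argmin structure is the same.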
  • the deformation constraint condition determination unit 104 determines the deformation constraint condition, which is a condition for deforming the reference model, based on the result of analyzing the work shape using the scene information and the work information and the reference model (step S104).
  • the deformation constraint condition determination unit 104 stores the determined deformation constraint condition in the storage unit 105.
  • the deformation constraint condition determination unit 104 defines how the reference model is to be deformed in terms of deformation parameters.
  • the deformation constraint condition determination unit 104 sets the deformation parameters to different values for each model so that the shapes of the individual models differ. To deform the reference model so that it reproduces shape characteristics close to those of the target work, the degree of deformation must be constrained by appropriately determining the range of values of each deformation parameter. The deformation constraint condition determination unit 104 therefore determines the range of values that each deformation parameter can take.
  • conceivable deformation parameters representing the shape of the target work include, for example, the size of the target work, the amount of deformation applied when the reference model is deformed, the curvature, the appearance frequency of unevenness, and the appearance range of unevenness.
  • the amount of deformation is, for example, the amount of movement of mesh vertices in the case of a mesh model. That is, the deformation constraint condition includes at least one of the frequency of appearance of unevenness of the target work, the appearance range of unevenness of the target work, and the curvature as numerical conditions.
  • the deformation constraint condition determination unit 104 can improve the reproducibility of the model by using numerical conditions such as the appearance frequency of irregularities.
  • the deformation constraint condition determination unit 104 determines the deformation constraint condition based on the scene information obtained by the sensor 101, the work information obtained by the work information acquisition unit 102, and the reference model determined by the reference model determination unit 103.
  • the deformation constraint condition determination unit 104 adds, for example, a constraint condition on the movement amount of the mesh vertices of the reference model by taking the vector from the center point of the reference model to each mesh vertex as the movement range of that mesh vertex.
  • the deformation constraint condition determination unit 104 can thereby prevent model-generation failures such as meshes intersecting other meshes or mesh surfaces being turned inside out when the mesh vertices are moved.
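The vertex-movement constraint described above can be sketched as follows; the clamp range [0.5, 1.5] is an illustrative assumption, not a value from the patent:

```python
def constrained_vertex(center, vertex, scale, lo=0.5, hi=1.5):
    """Move a mesh vertex only along the center-to-vertex vector.

    `scale` is a deformation parameter; clamping it to [lo, hi] is the
    deformation constraint, which keeps the vertex on its ray and helps
    prevent meshes from crossing or turning inside out.
    """
    s = max(lo, min(hi, scale))
    return tuple(c + s * (v - c) for c, v in zip(center, vertex))

# a scale of 2.0 is clamped to hi=1.5, so the vertex stays within its range
print(constrained_vertex((0, 0, 0), (2, 0, 0), 2.0))  # (3.0, 0.0, 0.0)
```

Because the displacement is restricted to the vertex's own ray from the model center, no vertex can pass to the far side of the center, which is one way to avoid inverted faces.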
  • the deformation constraint condition determination unit 104 may determine the size range of the target work based on the work information obtained by the work information acquisition unit 102.
  • the deformation constraint condition determination unit 104 estimates the appearance frequency of unevenness based on the scene information obtained by the sensor 101; if the spread of the estimated normal distribution is within a predetermined threshold, the appearance frequency of unevenness may be set as one of the deformation parameters, and the range of the appearance frequency of the unevenness may be determined.
  • the deformation constraint condition determination unit 104 may extract and use the contour of the work for analyzing the shape of the work. That is, the work shape of the target work analyzed by the deformation constraint condition determination unit 104 includes the contour of the target work. As a method of extracting the contour, for example, the deformation constraint condition determination unit 104 may set an arbitrary cross section through the measurement data of the work and collect the measurement data included within a certain distance of the set cross section.
  • the contour is not limited to one per set of work measurement data. For example, with the method described above, a contour can be obtained for each position and posture in which the cross section is set.
  • the deformation constraint condition determining unit 104 performs curve fitting on the work contour and determines the range of the appearance frequency of unevenness based on the result of the curve fitting.
  • the deformation constraint condition determination unit 104 can easily determine a deformation constraint condition that matches the unevenness of the target work by analyzing the contour information of the target work. As a result, the model generation device 10 can improve the reproducibility of the model.
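One possible reading of the contour curve-fitting step is to fit a smooth baseline and count sign changes of the residual as a proxy for the appearance frequency of unevenness. This sketch uses a least-squares straight line as the fitted curve; the names and toy data are our own assumptions:

```python
def unevenness_frequency(xs, ys):
    """Estimate how often bumps appear along a work contour.

    Fits a straight baseline by least squares (a stand-in for the
    curve fitting in the text) and counts sign changes of the residual;
    each pair of sign changes corresponds roughly to one bump edge.
    """
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    resid = [y - (my + slope * (x - mx)) for x, y in zip(xs, ys)]
    return sum(1 for a, b in zip(resid, resid[1:]) if a * b < 0)

xs = list(range(8))
ys = [0, 1, 0, -1, 0.5, 1, -1, 1]  # wavy contour around a nearly flat baseline
print(unevenness_frequency(xs, ys))  # 5
```

A higher count suggests a higher appearance frequency of unevenness, which can then bound the corresponding deformation parameter.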
  • instead of using manually designed shape features such as curvature or polynomial-approximated curves, the deformation constraint condition determination unit 104 may automatically acquire shape features by a learning method using a neural network.
  • the deformation constraint condition determination unit 104 learns, for example, a neural network that classifies the target work and the work other than the target work by using the data obtained by imaging the normal distribution obtained from the measurement data of the target work as input information.
  • the deformation constraint condition determination unit 104 may use the feature map of the intermediate layer of the neural network that has learned the shape of the target work as the deformation parameter.
  • since the model generation device 10 can acquire deformation parameters suited to the target work by machine learning, it can accurately reproduce a complicated work shape. As a result, the deformation constraint condition determination unit 104 can be expected to obtain deformation constraint conditions that are more versatile and more expressive than manually designed shape features.
  • the configuration of the neural network is not limited, and the learning target is not limited to the classification problem. However, it is desirable to have a network configuration and learning target in which the feature map of the intermediate layer well represents the features of the work shape.
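As a toy illustration of reading intermediate-layer activations as shape features, consider the following; the network is hand-weighted and untrained, purely to show where the feature map would be taken from, and all values are illustrative:

```python
def relu(v):
    """Elementwise rectified linear activation."""
    return [max(0.0, x) for x in v]

def matvec(w, v):
    """Dense layer: weight matrix times input vector."""
    return [sum(wi * vi for wi, vi in zip(row, v)) for row in w]

def hidden_features(x, w1):
    """Intermediate-layer activations; in the text, such a feature map
    of a trained classifier serves as the deformation parameters.
    """
    return relu(matvec(w1, x))

# input: an imaged normal distribution flattened to a 4-vector (illustrative)
x = [0.2, 0.0, 0.0, 0.8]
w1 = [[1.0, 0.0, 0.0, -1.0],
      [0.0, 1.0, 1.0, 0.0],
      [0.5, 0.5, 0.5, 0.5]]
print(hidden_features(x, w1))  # [0.0, 0.0, 0.5]
```

In practice the network would be trained on the classification task described above, and the feature map of a chosen intermediate layer would be read out after training.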
  • the storage unit 105 stores the deformation constraint condition which is the range of the deformation parameter determined by the deformation constraint condition determination unit 104.
  • the amorphous model generation unit 106 reads the deformation constraint condition from the storage unit 105, determines the deformation parameters of the reference model within the range of the deformation constraint condition (step S105), and deforms the reference model according to the deformation parameters (step S106).
  • the model generation device 10 generates a plurality of amorphous models. The amorphous model generation unit 106 therefore accepts a user specification of the number of models to be generated.
  • if the specified number of models has not yet been generated (step S107: No), the process returns to step S105 and the above operations are repeated.
  • when the amorphous model generation unit 106 has generated the number of models specified by the user (step S107: Yes), the operation ends.
  • the amorphous model generation unit 106 determines the deformation parameters based on the deformation constraint conditions determined by the deformation constraint condition determination unit 104. For example, when the output model is a mesh model, the amorphous model generation unit 106 may, for the movement amount of each mesh vertex of the reference model, search the measurement data for the point at the shortest distance, drop a perpendicular from that measurement point onto the vertex's movement vector, and determine the intersection of the perpendicular and the vector as the destination of the mesh vertex. Since the curvature is expressed through the number of meshes of the reference model used, the number of meshes may be determined so that it is increased where the density of the measured point cloud is high.
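The vertex-destination rule sketched above (nearest measurement point, foot of the perpendicular on the movement vector) might look like this in pure Python; the helper names and data layout are our own assumptions:

```python
def vertex_destination(center, vertex, measured_points):
    """Destination of a mesh vertex: find the measurement point nearest
    to the vertex and drop a perpendicular from it onto the vertex's
    center-to-vertex ray; the foot of that perpendicular is the new
    vertex position.
    """
    d = [v - c for v, c in zip(vertex, center)]          # ray direction
    dd = sum(x * x for x in d)
    def sq_dist(p):
        return sum((a - b) ** 2 for a, b in zip(p, vertex))
    q = min(measured_points, key=sq_dist)                # nearest measurement
    t = sum((qi - ci) * di for qi, ci, di in zip(q, center, d)) / dd
    return tuple(c + t * di for c, di in zip(center, d))

dest = vertex_destination((0, 0, 0), (1, 0, 0), [(2, 1, 0), (5, 5, 5)])
print(dest)  # (2.0, 0.0, 0.0): the foot of the perpendicular from (2, 1, 0)
```

Keeping the destination on the vertex's own ray makes this directly compatible with the movement-range constraint described earlier.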
  • the amorphous model generation unit 106 may estimate normals with respect to the measurement data and determine the appearance frequency of unevenness from the spread of the normal distribution, since a wider normal distribution is considered to indicate a higher appearance frequency of unevenness. For the appearance range of unevenness, the amorphous model generation unit 106 may estimate the uneven portions from the normal distribution estimated for the measurement data and calculate the extent of the unevenness at the estimated portions.
  • when calculating the value of each deformation parameter, the amorphous model generation unit 106 varies the values randomly so that each model has a different shape; the changed parameter values must, however, remain within the range of the deformation constraint condition. After determining the deformation parameters, the amorphous model generation unit 106 deforms the reference model based on the determined deformation parameters.
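Random sampling of deformation parameters within the stored constraint ranges can be sketched as follows; the parameter names and ranges are illustrative assumptions, not values from the patent:

```python
import random

def sample_deformation_parameters(constraints, rng):
    """Draw one set of deformation parameters, each uniformly inside the
    range fixed by the deformation constraint condition, so that every
    generated model gets a different but always-valid shape.
    """
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in constraints.items()}

constraints = {
    "size_scale": (0.9, 1.1),
    "bump_frequency": (2.0, 6.0),
    "bump_amplitude": (0.0, 0.3),
}
params = sample_deformation_parameters(constraints, random.Random(0))
assert all(constraints[k][0] <= v <= constraints[k][1] for k, v in params.items())
print(sorted(params))  # ['bump_amplitude', 'bump_frequency', 'size_scale']
```

Calling this once per model, with a fresh random draw each time, yields the per-individual shape variation the text describes while guaranteeing every parameter stays inside its constraint range.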
  • the amorphous model generation unit 106 may use the contour information of the target work extracted from the scene information when moving the mesh vertices. For example, the amorphous model generation unit 106 replaces all or part of the contour of the reference model with the contour information of the target work, and sets the movement destination of each mesh vertex to the three-dimensional point on the contour at the shortest distance from that vertex.
  • the contour information of the target work is, for example, a group of three-dimensional points constituting the contour.
  • as the replacement contour information, it is conceivable, for example, to randomly select a work contour extracted from a different work and, rather than using the selected contour as-is, to enlarge it in the length direction of the unevenness within the range of the deformation constraint condition, to use contour information generated by synthesizing different work contours, or to generate and use different contour information by a scale conversion that reduces the contour.
  • the amorphous model generation unit 106 may use, as the work contour information, a work contour generated by synthesizing two or more different work contours; by synthesizing contour information of actual target works to generate new contours, it can increase the variation in the shapes of the generated models. It may also use, as the work contour information, a work contour generated by enlarging or reducing the scale of a work contour, further increasing the variation of the contours used and hence the variation of the generated shapes.
  • by utilizing contour information actually acquired from the work when determining the deformation amount of the reference model, the amorphous model generation unit 106 can be expected to generate a more natural work model, one that more closely resembles the actual target work.
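Contour synthesis and scale conversion, as described above, can be sketched as simple pointwise operations on contour point lists; the representation (equal-length lists of corresponding points) is our own illustrative assumption:

```python
def synthesize_contour(c1, c2, w=0.5):
    """New contour as a pointwise weighted blend of two measured work contours."""
    return [tuple(w * a + (1 - w) * b for a, b in zip(p, q)) for p, q in zip(c1, c2)]

def scale_contour(contour, s):
    """Contour variation by enlarging (s > 1) or reducing (s < 1) the scale."""
    return [tuple(s * x for x in p) for p in contour]

c1 = [(0.0, 0.0), (1.0, 2.0)]
c2 = [(0.0, 1.0), (3.0, 2.0)]
print(synthesize_contour(c1, c2))  # [(0.0, 0.5), (2.0, 2.0)]
print(scale_contour(c1, 2.0))      # [(0.0, 0.0), (2.0, 4.0)]
```

Real contours would first need to be resampled so that corresponding points align, but the blend-and-scale idea is the same.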
  • the model generation method of the amorphous model generation unit 106 described above may be replaced by a model generation method using a neural network.
  • the neural network is configured to input at least one of scene information, work information, deformation constraint conditions, and deformation parameters, and output a model of the target work in a defined expression method.
  • the neural network may be configured to output a contour image by inputting a random initial value.
  • the value to be input may be one or a vector having a plurality of numerical values.
  • as the training data set of the neural network, input numerical groups and the correct-answer contour model of the work associated with each numerical group are prepared in advance, and the mapping between them is learned. This yields a trained neural network.
  • when a three-dimensional model is desired as the output, for example, the network can be configured so that its output part corresponds to a voxel space.
  • a GAN (Generative Adversarial Networks) may be used, for example, as such a generative network.
  • the above-mentioned network configuration, contents of the learning data set, and the like are examples, and the present invention is not limited to these, and other network configurations, the contents of the learning data set, and the like may be used.
  • since the neural network generates models with high expressive power, the model generation device 10 can be expected to be more versatile than a manually constructed model generation algorithm, and the range of works to which it can be applied can be expanded.
  • the sensor 101 is a measuring instrument such as a camera or a laser.
  • the storage unit 105 is a memory.
  • the work information acquisition unit 102, the reference model determination unit 103, the deformation constraint condition determination unit 104, and the amorphous model generation unit 106 are realized by a processing circuit.
  • the processing circuit may be a processor and memory for executing a program stored in the memory, or may be dedicated hardware.
  • FIG. 4 is a diagram showing an example in which the processing circuit included in the model generation device 10 according to the first embodiment is configured by a processor and a memory.
  • when the processing circuit is composed of the processor 91 and the memory 92, each function of the processing circuit of the model generation device 10 is realized by software, firmware, or a combination of software and firmware.
  • the software or firmware is written as a program and stored in the memory 92.
  • each function is realized by the processor 91 reading and executing the program stored in the memory 92. That is, the processing circuit includes the memory 92 for storing programs whose execution results in the processing of the model generation device 10 being carried out. These programs can also be said to cause a computer to execute the procedures and methods of the model generation device 10.
  • the processor 91 may be a CPU (Central Processing Unit), a processing device, an arithmetic unit, a microprocessor, a microcomputer, a DSP (Digital Signal Processor), or the like.
  • the memory 92 corresponds to, for example, a non-volatile or volatile semiconductor memory such as a RAM (Random Access Memory), ROM (Read Only Memory), flash memory, EPROM (Erasable Programmable ROM), or EEPROM (registered trademark) (Electrically Erasable Programmable ROM), or to a magnetic disk, flexible disk, optical disk, compact disc, mini disc, DVD (Digital Versatile Disc), or the like.
  • FIG. 5 is a diagram showing an example in which the processing circuit included in the model generation device 10 according to the first embodiment is configured by dedicated hardware.
  • the processing circuit 93 shown in FIG. 5 corresponds to, for example, a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array), or a combination of these.
  • each function of the model generation device 10 may be realized by a separate processing circuit 93, or the functions may be collectively realized by a single processing circuit 93.
  • some of the functions of the model generation device 10 may be realized by dedicated hardware, and others by software or firmware.
  • the processing circuit can realize each of the above-mentioned functions by the dedicated hardware, software, firmware, or a combination thereof.
  • the model generation device 10 measures the target work with the sensor 101, and determines the reference model from the information obtained by the measurement with the sensor 101.
  • the model generation device 10 determines the deformation constraint conditions for the reference model, determines the deformation parameters of the reference model within the range of the deformation constraint conditions, and deforms the reference model according to the deformation parameters.
  • as described above, the model generation device 10 can generate a model even when the target work is an amorphous work. The model generation device 10 can therefore be used in a food factory or the like that frequently handles amorphous works.
  • the model generation device 10 acquires partial measurement data of the target work and deforms the reference model based on the deformation constraint condition of the target work determined from the analysis results of the measurement data. A model can therefore be generated even when the target work is an amorphous work.
  • the model generation device 10 can improve the reproducibility of the model by analyzing numerical conditions such as the appearance frequency of irregularities and contour information of the target work. Further, the model generation device 10 can generate a model more like the target work by using the contour information of the target work, that is, the actual work.
  • the model generation device 10 can increase the variation in the shape of the model to be generated by synthesizing the contour information of the target work, that is, the actual work and generating a new contour.
  • the model generation device 10 can increase the variation in the shapes of the generated models by further increasing the variation of the contours used. Further, since the model generation device 10 can acquire deformation parameters suited to the target work by machine learning, it can accurately reproduce a complicated work shape. And since the neural network generates models with high expressive power, the model generation device 10 can be expected to be more versatile than a manually constructed model generation algorithm, and the range of works to which it can be applied can be expanded.
  • Embodiment 2 In the second embodiment, a specific configuration and operation of the reference model determination unit 103 included in the model generation device 10 will be described.
  • FIG. 6 is a block diagram showing a configuration example of the reference model determination unit 103 according to the second embodiment.
  • the reference model determination unit 103 includes an individual model determination unit 201, a registration unit 202, an average shape generation unit 203, a shape element division unit 204, an individual difference category determination unit 205, a work spindle determination unit 206, a primitive collation unit 207, and a fitting model determination unit 208. Details of each configuration will be described together with the operation of the reference model determination unit 103.
  • FIG. 7 is a flowchart showing the operation of the reference model determination unit 103 according to the second embodiment. The flowchart of FIG. 7 shows the details of the operation of the reference model determination unit 103, that is, the operation of step S103 in the flowchart of FIG.
  • the individual model determination unit 201 models each work information acquired by the work information acquisition unit 102, and determines an individual model in which each work information is modeled (step S201).
  • the individual model is a model of each of the plurality of work information acquired by the work information acquisition unit 102. Specifically, the individual model determination unit 201 deletes the background unnecessary for model generation from the data obtained by the sensor 101, and extracts only the data of the target work.
  • the individual model determination unit 201 may generate a model by deforming a geometric shape with respect to the obtained target work data, or may determine the obtained target work data as an individual model as it is.
  • the individual model determination unit 201 automatically determines the geometric shape most similar to the measurement data of the target work based on the normal distribution, as in the case of determining the reference model described in the first embodiment. Alternatively, the user may manually determine the geometric shape that most closely resembles the target work.
  • the registration unit 202 automatically integrates a plurality of individual models determined by the individual model determination unit 201 (step S202). Integration is, for example, the alignment of multiple individual models.
  • the alignment method to be used may be ICP (Iterative Closest Points) or a feature point-based alignment method.
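The alignment step can be sketched as a minimal ICP loop: brute-force nearest-neighbour matching alternated with a Kabsch rigid-transform update. This is an illustrative sketch using only NumPy, practical for small point clouds; the function names are assumptions, not from the patent.

```python
import numpy as np

def rigid_align(src, dst):
    # Kabsch: best rotation R and translation t mapping src onto dst,
    # for point sets already in one-to-one correspondence.
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # correct an improper rotation (reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=20):
    # Minimal ICP: alternate brute-force nearest-neighbour matching with a
    # Kabsch update; O(N^2) matching is acceptable only for small clouds.
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        corr = dst[d2.argmin(axis=1)]   # nearest dst point for each cur point
        R, t = rigid_align(cur, corr)
        cur = cur @ R.T + t
    return cur
```

A feature-point-based method would replace the nearest-neighbour step with matched keypoints, leaving the Kabsch update unchanged.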
  • the average shape generation unit 203 generates an average shape model of the target work based on the integration result by the registration unit 202 (step S203).
  • the average shape model is a model in which the features of a plurality of individual models integrated by the registration unit 202 are averaged.
  • the average shape generation unit 203 may calculate the average value of the mesh vertices and the average value of the normals from each individual model after alignment, and generate a model composed of them as the average shape model. When the output model is a three-dimensional point cloud, the average shape generation unit 203 may calculate the average value of the three-dimensional point clouds of the individual models after alignment, and generate a model composed of the averaged points as the average shape model.
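Under the (hypothetical) assumption that the aligned individual models share vertex correspondence, the averaging described above reduces to a per-vertex mean; `average_shape` and `average_normals` are illustrative names, not the patent's API.

```python
import numpy as np

def average_shape(models):
    # Per-vertex mean over aligned individual models, given as a list of
    # (N, 3) arrays assumed to be in vertex correspondence after registration.
    return np.stack(models).mean(axis=0)

def average_normals(normal_sets):
    # Mean of per-vertex normals, renormalised to unit length.
    n = np.stack(normal_sets).mean(axis=0)
    return n / np.linalg.norm(n, axis=1, keepdims=True)
```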
  • the reference model determination unit 103 can easily determine the reference model by using the average shape model of the target work, and can improve the reproducibility of the model.
  • the shape element dividing unit 204 divides the average shape model generated by the average shape generating unit 203 or the individual model determined by the individual model determining unit 201 into at least one shape element constituting each model (step S204).
  • the shape element dividing unit 204 separates the shape elements constituting the average shape model or the individual model by, for example, performing a clustering process on the model. If the input data is a three-dimensional point cloud, the shape element dividing unit 204 may cluster the three-dimensional point cloud into shape elements by the k-means method.
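A minimal k-means clustering of a three-dimensional point cloud into shape elements might look like the following NumPy sketch; the function name and the random initialisation scheme are illustrative assumptions.

```python
import numpy as np

def kmeans_shape_elements(points, k, iters=50, seed=0):
    # Plain k-means on an (N, 3) point cloud: returns a cluster label per
    # point and the k cluster centres.
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d2.argmin(axis=1)
        new = np.array([points[labels == i].mean(axis=0)
                        if np.any(labels == i) else centers[i]
                        for i in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers
```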
  • the individual difference category determination unit 205 determines an individual difference category, which represents the degree of irregularity of the work by rank, for the average shape model generated by the average shape generation unit 203 and divided into one or more shape elements by the shape element division unit 204 (step S205).
  • the degree of variation and the tendency of variation differ from object to object.
  • unprocessed fruits have the same general tendency of shape, but the size and shape differ depending on the individual. Therefore, it is difficult to apply the conventionally used object detection method by template matching on a two-dimensional image.
  • paste products such as chikuwa and processed foods are heat-treated after being molded. Therefore, the color of the surface may be slightly different or the fine unevenness of the surface may be different due to the variation in the degree of burning, but the size itself can be regarded as almost the same.
  • for fried foods, the shape of the batter varies from individual to individual, and the ingredients inside, specifically the chicken in the case of fried chicken, may also be irregular in shape. Therefore, some workpieces have no regularity of shape at all.
  • for determining the individual difference category, the individual difference category determination unit 205 calculates the normal distribution of the average shape model and matches it against the normal distribution of each work information extracted from the scene information.
  • the individual difference category determination unit 205 classifies the work having a large difference into a category having no regularity in shape, and the work having a small difference into a category of work having regularity in shape.
  • the work spindle determination unit 206 determines the work spindle, which is the inertial principal axis, for the average shape model generated by the average shape generation unit 203 or the individual model determined by the individual model determination unit 201, based on the result obtained by the individual difference category determination unit 205 (step S206).
  • specifically, the work spindle determination unit 206 determines the work spindle for the average shape model generated by the average shape generation unit 203 and divided into one or more shape elements by the shape element dividing unit 204, or for the individual model determined by the individual model determination unit 201 and divided into one or more shape elements by the shape element dividing unit 204. For a category having no regularity in shape, the work spindle determination unit 206 may determine that there is no work spindle, since no meaningful work spindle is obtained.
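The inertial principal axis of a point cloud can be obtained from the eigendecomposition of its covariance matrix. The following NumPy sketch is an assumption about how such a computation could look, not the patent's implementation.

```python
import numpy as np

def work_spindle(points):
    # Inertial principal axis of an (N, 3) point cloud: the eigenvector of
    # the covariance matrix belonging to the largest eigenvalue.
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues ascending
    return eigvecs[:, -1]                     # direction of largest spread
```

For an elongated workpiece the returned direction follows the long axis (up to sign), which is what the subsequent category determination needs.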
  • the primitive collation unit 207 performs primitive collation to collate the average shape model generated by the average shape generation unit 203 with a primitive shape, for example, a geometric shape such as a rectangular parallelepiped, a sphere, or a triangular pyramid (step S207).
  • the primitive collation unit 207 performs primitive collation on the average shape model generated by the average shape generation unit 203 and divided into one or more shape elements by the shape element division unit 204.
  • the primitive collating unit 207 may calculate the similarity based on the normal distribution of the average shape model and the normal distribution of the geometric shape.
  • the primitive collation unit 207 may, for example, obtain the difference between the normal distribution of the average shape model and the normal distribution of the geometric shape, prepare in advance a lookup table associating similarity with the difference in normal distribution, and obtain the similarity from the calculated difference and the lookup table.
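Such a lookup table could be realised as a set of difference thresholds mapped to similarity scores. The table values below are purely hypothetical, chosen only to illustrate the lookup.

```python
import bisect

# Hypothetical table: normal-distribution difference bands -> similarity.
DIFF_EDGES = [0.1, 0.3, 0.6, 1.0]          # band boundaries (ascending)
SIMILARITY = [1.0, 0.8, 0.5, 0.2, 0.0]     # one score per band

def similarity_from_difference(diff):
    # Find the band the difference falls into and return its similarity.
    return SIMILARITY[bisect.bisect_right(DIFF_EDGES, diff)]
```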
  • the fitting model determination unit 208 determines the fitting model, which is the reference model, based on the matching result of the primitive matching by the primitive matching unit 207 (step S208).
  • the fitting model determination unit 208 may consider that no fitting model can be determined when the calculated similarity is equal to or less than a predetermined threshold value, and may then determine the average shape model as the fitting model. When the similarity exceeds the threshold value, the fitting model determination unit 208 may determine the geometric shape having the highest similarity as the fitting model. Since the reference model determination unit 103 can thereby determine a reference model close to the target work, the reproducibility of the generated model can be improved.
  • the fitting model determining unit 208 may determine a reference model, that is, a fitting model for each shape element divided by the shape element dividing unit 204.
  • the model generation device 10 can generate a model of a target work connected by a plurality of objects such as skewered foodstuffs by determining a fitting model for each element constituting the divided target work.
  • the deformation constraint condition determination unit 104 determines the deformation constraint condition of the contour based on the numerical conditions and the shape analysis result of the reference model (step S104).
  • the model generation device 10 can generate a model based on the acquired contour information even when the contour shape of the target work is complicated.
  • the deformation constraint condition determination unit 104 may include the contour information of the average shape model in the acquisition of the contour information. Further, the deformation constraint condition determination unit 104 may use the contour information of the average shape model as a reference when determining the deformation constraint condition. For example, the height of the unevenness may be based on the contour of the average shape model instead of the center of the model, or the height of the unevenness may be normalized by the height of the contour of the average shape model.
  • when the work shape is complicated, it is difficult for the deformation constraint condition determination unit 104 to fit the work shape with a simple curve; however, by normalizing with the average shape model, the work shape can be expected to become approximable by a simple sine wave, a polynomial, or the like.
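Normalisation by the average shape contour can be sketched as subtracting the average contour and fitting a low-order curve to the residual unevenness. Names, the sample parameterisation, and the polynomial degree are illustrative assumptions.

```python
import numpy as np

def fit_residual_contour(contour, average_contour, degree=3):
    # Express the work contour relative to the average shape contour, then
    # fit the residual unevenness with a low-order polynomial.
    residual = (np.asarray(contour, dtype=float)
                - np.asarray(average_contour, dtype=float))
    x = np.arange(len(residual), dtype=float)
    coeffs = np.polyfit(x, residual, deg=degree)
    return residual, coeffs
```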
  • the model generator 10 determines the reference model based on the individual difference category and the average shape model, so that reproducibility can be improved in accordance with the individual differences of the target work. As a result, by using the model generated by the model generator 10, reproducibility is improved even in the simulation process before introduction of an industrial robot, which has the effect of making it easier to check the robot arrangement or operation for picking work accurately. Further, since the model generation device 10 can generate a model of a target work in which a plurality of objects are connected, it can generate models of, for example, dumplings and skewered foodstuffs handled at food sites, and the applicable range of workpieces subject to model generation is expanded.
  • the model generation device 10 uses the average shape model generated by the average shape generation unit 203 as the reference model. This makes it easier for the model generation device 10 to determine the reference model, and even for a workpiece that is difficult to represent with a primitive shape such as a rectangular parallelepiped, a sphere, or a triangular pyramid, a model of the target work can be generated by using the average shape model as the reference model. Further, the model generation device 10 can generate a model based on the acquired contour information even when the contour shape of the target work is complicated.
  • the model generation device 10 determines the reference model based on the work spindle and the average shape model, so that it becomes easy to accurately determine the individual difference category of the target work based on the work spindle. As a result, the model generation device 10 can improve the reproducibility of the model by being able to determine a reference model close to the target work according to the individual difference of each target work. Further, the model generation device 10 can also generate a model of the target work connected by a plurality of objects such as skewered foodstuffs by determining a fitting model for each element constituting the divided target work.
  • Embodiment 3 In the third embodiment, the picking robot including the model generation device 10 described in the first embodiment or the second embodiment will be described.
  • FIG. 8 is a block diagram showing a configuration example of the picking robot 20 according to the third embodiment.
  • the picking robot 20 includes a model generation device 10, a simulation condition setting unit 107, a scene generation unit 108, a data set generation unit 109, a parameter adjustment unit 110, a recognition processing unit 111, and a picking unit 112.
  • the model generator 10 may have either the configuration of the first embodiment or the second embodiment.
  • the simulation condition setting unit 107 reads out the deformation constraint condition from the storage unit 105, causes the amorphous model generation unit 106 to generate a model of the target work based on the deformation constraint condition, and, using the generated model, sets simulation conditions including at least one of the sensor information and the work information in the simulation.
  • the deformation constraint condition stored in the storage unit 105 is determined by the model generation device 10 as described above.
  • the amorphous model generation unit 106 can also generate a model of the target work by controlling other configurations.
  • FIG. 9 is a block diagram showing a configuration example of the simulation condition setting unit 107 according to the third embodiment. As shown in FIG. 9, the simulation condition setting unit 107 includes a sensor information setting unit 301, a work information setting unit 302, and an environment information setting unit 303.
  • the sensor information setting unit 301 sets at least one of the specifications and installation information of the sensor 101 in the simulation.
  • the sensor information setting unit 301 may set parameters such as the angle of view of the sensor 101, the working distance, the installation angle of the sensor 101, the resolution, and other specifications of the sensor 101, or the sensor installation information in the simulation.
  • the sensor information setting unit 301 may cause the amorphous model generation unit 106 to generate at least one model of the target work based on the deformation constraint condition stored in the storage unit 105, and confirm in advance, using the generated model, whether the target work fits within the field of view of the sensor 101 in the simulation.
  • the work information setting unit 302 sets the work information of the target work.
  • the work information setting unit 302 may cause the amorphous model generation unit 106 to generate at least one model of the target work based on the deformation constraint condition stored in the storage unit 105, and use the generated model to set the color information and size of the target work, the number of models of the target work used by the scene generation unit 108, and the like.
  • the environment information setting unit 303 sets the environment information in the simulation.
  • the environment information setting unit 303 may set the light source environment such as diffuse reflection or ambient light, the box size for placing workpieces in a bulk state, the work supply method, and the like.
  • as the work supply method, a bulk state in which the position and orientation of the target work are completely irregular, an aligned state in which the arrangement position and orientation of the target work are determined, and the like can be considered.
  • the environment information setting unit 303 may cause the amorphous model generation unit 106 to generate a model of the target work based on the deformation constraint condition stored in the storage unit 105, and confirm the supply state in the simulation using the generated model.
  • the simulation condition setting unit 107 uses the sensor information setting unit 301, the work information setting unit 302, and the environment information setting unit 303 to set the simulation conditions for the sensor information, the work information, and the environment information.
  • the picking robot 20 can improve the picking success rate by simulating the user's actual environment on the simulator and making it easier to adjust the parameters according to the user's actual environment.
  • the scene generation unit 108 causes the amorphous model generation unit 106 to generate a model of the target work based on the deformation constraint condition stored in the storage unit 105, and simulates a scene of the target work using the generated model and the simulation conditions set by the simulation condition setting unit 107.
  • the scene information obtained by the measurement of the sensor 101 includes the background information of the target work, whereas the scene of the target work simulated by the scene generation unit 108 targets only the target work.
  • the scene generation unit 108 does not have to use a different model for each placement; the same generated model may be reused. For example, the scene generation unit 108 may generate 10 variations of the model of the target work and duplicate 5 models of each variation, so that the total number of models used for scene generation is 50. Since the calculation cost of model generation is large, this is considered effective in reducing the calculation cost.
  • the data set generation unit 109 generates a data set for adjusting parameters that control the picking operation of the target work, based on the scene generated by the scene generation unit 108. Specifically, the data set is for adjusting parameters that control the operation of the picking unit 112 that picks the target work.
  • FIG. 10 is a block diagram showing a configuration example of the data set generation unit 109 according to the third embodiment. As shown in FIG. 10, the data set generation unit 109 includes a 2D data generation unit 401, a 3D data generation unit 402, and an annotation data generation unit 403.
  • the 2D data generation unit 401 is a two-dimensional data generation unit that generates 2D data for the scene generated by the scene generation unit 108.
  • the 2D data generation unit 401 may generate a grayscale image or an RGB image.
  • the 3D data generation unit 402 is a three-dimensional data generation unit that generates 3D data for the scene generated by the scene generation unit 108.
  • the 3D data generation unit 402 may generate a three-dimensional point cloud or a distance image.
  • the annotation data generation unit 403 generates annotation data for the scene generated by the scene generation unit 108.
  • the annotation data generation unit 403 may generate labeling data colored for each type of object, position / orientation information of each work in the data, or data to which a type name or the like is added.
  • since the data generated by the data set generation unit 109 is simulation data, it is conceivable that discrepancies from actual images may arise depending on the degree of added noise. Therefore, the data set generation unit 109 may have a function of reproducing the measurement noise.
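A rough measurement-noise reproduction for simulated depth data might combine Gaussian depth noise with random missing pixels. The noise model, parameter values, and the zero-encoding of missing pixels are assumptions for illustration only.

```python
import numpy as np

def add_measurement_noise(depth, sigma=0.002, dropout=0.01, seed=0):
    # Gaussian noise on each depth value plus randomly dropped pixels
    # (missing measurements encoded as 0, a common depth-map convention).
    rng = np.random.default_rng(seed)
    noisy = depth + rng.normal(0.0, sigma, depth.shape)
    noisy[rng.random(depth.shape) < dropout] = 0.0
    return noisy
```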
  • the data generated by the data set generation unit 109 that is, the data generated by the 2D data generation unit 401, the 3D data generation unit 402, and the annotation data generation unit 403 are collectively referred to as a data set.
  • the picking robot 20 can automatically generate data necessary for parameter adjustment on a simulation by the data set generation unit 109, so that the load of data collection work by the user can be reduced.
  • the parameter adjustment unit 110 causes the amorphous model generation unit 106 to generate a model of the target work based on the deformation constraint condition stored in the storage unit 105, and automatically adjusts the picking parameters using the generated model and the data set generated by the data set generation unit 109.
  • FIG. 11 is a block diagram showing a configuration example of the parameter adjusting unit 110 according to the third embodiment. As shown in FIG. 11, the parameter adjusting unit 110 includes a gripping parameter adjusting unit 501 and a recognition parameter adjusting unit 502.
  • the gripping parameter adjusting unit 501 causes the amorphous model generation unit 106 to generate a model of the target work based on the deformation constraint conditions stored in the storage unit 105, and adjusts the parameters of the robot hand provided in the picking unit 112 using the generated model.
  • the gripping parameter adjusting unit 501 may, for example, target a robot hand such as a tweezers hand, a parallel hand, or a suction pad, and adjust the opening width or the claw width of the robot hand that makes it easy to grip the target work.
  • the type of robot hand may be determined by the user.
  • the gripping parameter adjusting unit 501 may adjust all kinds or a plurality of gripping parameters at the same time in adjusting the gripping parameters, or may individually adjust the gripping parameters according to a predetermined order. Hereinafter, a case where the gripping parameters are adjusted individually will be mainly described.
  • FIG. 12 is a block diagram showing a configuration example of the gripping parameter adjusting unit 501 according to the third embodiment.
  • the gripping parameter adjusting unit 501 includes a gripping parameter adjustment range determining unit 601, a gripping parameter changing unit 602, a model rotating unit 603, a gripping evaluation unit 604, a gripping parameter value determining unit 605, and a gripping parameter adjustment end determination unit 606. Details of each configuration will be described together with the operation of the gripping parameter adjusting unit 501.
  • FIG. 13 is a flowchart showing the operation of the gripping parameter adjusting unit 501 according to the third embodiment.
  • the gripping parameter adjustment range determining unit 601 causes the amorphous model generation unit 106 to generate a model of the target work based on the deformation constraint conditions stored in the storage unit 105, and determines the gripping parameter adjustment range based on the generated model (step S301).
  • for example, the gripping parameter adjustment range determining unit 601 may cause the amorphous model generation unit 106 to generate at least one model of the target work based on the deformation constraint conditions stored in the storage unit 105, and determine the adjustment range of the gripping parameters, the initial values of the gripping parameters, and the like based on the size of the generated model and other properties.
  • the gripping parameter changing unit 602 changes the type of gripping parameter to be adjusted (step S302).
  • the model rotation unit 603 generates a model of the target work by the amorphous model generation unit 106 based on the deformation constraint condition stored in the storage unit 105, and rotates the generated model (step S303).
  • the model rotation unit 603 may cause the amorphous model generation unit 106 to generate at least one model of the target work based on the deformation constraint condition stored in the storage unit 105, and rotate the generated model randomly or in a predetermined order.
  • the grip evaluation unit 604 performs grip evaluation while the model rotation unit 603 rotates the model (step S304).
  • for example, the grip evaluation unit 604 may fit the model of the robot hand to the model of the target work as it is rotated by the model rotation unit 603, and calculate an evaluation function of the gripping parameters composed of two evaluation values: F_Ang, the frequency of occurrence of deviation between the spindle of the target work and the grip direction, and F_Col, the frequency of interference with the target work.
  • when the rotation of the model by the model rotation unit 603 is not completed (step S305: No), the gripping parameter adjusting unit 501 returns to step S303 to continue the rotation of the model by the model rotation unit 603 and the grip evaluation by the grip evaluation unit 604. When the rotation of the model is completed (step S305: Yes), the gripping parameter adjusting unit 501 proceeds to the operation of step S306.
  • the gripping parameter value determining unit 605 determines the value of the gripping parameter based on the evaluation result of the gripping evaluation unit 604 (step S306). Specifically, the gripping parameter value determining unit 605 searches for the gripping parameter value that minimizes the evaluation function calculated by the gripping evaluation unit 604, and determines the value found by the search as the adjusted parameter value.
  • the gripping parameter adjustment end determination unit 606 determines whether or not the adjustment of all the gripping parameters to be adjusted has been completed (step S307). When the adjustment of all the gripping parameters to be adjusted is not completed (step S307: No), the gripping parameter adjustment end determination unit 606 instructs each unit to return to the operation of step S302 and perform the same operations as described above. When the adjustment of all the gripping parameters to be adjusted is completed (step S307: Yes), the gripping parameter adjustment end determination unit 606 ends the operation of the gripping parameter adjusting unit 501.
  • the gripping parameter adjusting unit 501 automatically adjusts the gripping parameter.
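The individual-parameter adjustment loop above amounts to searching each parameter's adjustment range for the value minimising the evaluation function composed of F_Ang and F_Col. In this sketch the evaluation is a stand-in with a known optimum, purely for illustration; the real evaluation would fit the hand model to the work model at each rotation.

```python
def evaluate_grip(open_width, rotations):
    # Stand-in for the F_Ang + F_Col evaluation summed over model
    # rotations (hypothetical: optimum at open_width == 3.0).
    return sum(abs(open_width - 3.0) for _ in rotations)

def adjust_grip_parameter(candidates, rotations):
    # Pick the candidate value minimising the evaluation function,
    # mirroring steps S303-S306 for a single gripping parameter.
    return min(candidates, key=lambda w: evaluate_grip(w, rotations))
```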
  • the picking robot 20 can automatically adjust the parameters of the robot hand that easily grips the target work by using the generated model, and can reduce the load of the user's parameter adjustment work.
  • the recognition parameter adjusting unit 502 causes the amorphous model generation unit 106 to generate a model of the target work based on the deformation constraint condition stored in the storage unit 105, and adjusts the recognition parameters of the object recognition method using the generated model, the data set generated by the data set generation unit 109, and the grip parameters adjusted by the grip parameter adjusting unit 501.
  • the recognition parameter adjusting unit 502 may adjust the recognition parameter of the method of detecting the candidate gripping position from the 2D data or the 3D data, or may adjust the parameter of the learning model of the gripping position detection by deep learning.
  • alternatively, the recognition parameter adjusting unit 502 may adjust the parameters of a deep learning model used as auxiliary processing for improving the recognition rate, for example, segmentation.
  • the recognition parameter adjusting unit 502 may adjust all kinds or a plurality of recognition parameters at the same time in adjusting the recognition parameters, or may individually adjust the recognition parameters according to a predetermined order. In the following, the case where the recognition parameters are adjusted individually will be mainly described.
  • FIG. 14 is a block diagram showing a configuration example of the recognition parameter adjusting unit 502 according to the third embodiment.
  • the recognition parameter adjustment unit 502 includes a recognition parameter adjustment range determination unit 701, a recognition parameter change unit 702, a recognition trial unit 703, a recognition evaluation unit 704, a recognition parameter value determination unit 705, and a recognition parameter adjustment end determination unit 706. Details of each configuration will be described together with the operation of the recognition parameter adjusting unit 502.
  • FIG. 15 is a flowchart showing the operation of the recognition parameter adjusting unit 502 according to the third embodiment.
  • the recognition parameter adjustment range determination unit 701 causes the amorphous model generation unit 106 to generate a model of the target work based on the deformation constraint condition stored in the storage unit 105, and determines the recognition parameter adjustment range using the generated model (step S401).
  • for example, the recognition parameter adjustment range determination unit 701 may cause the amorphous model generation unit 106 to generate at least one model of the target work based on the deformation constraint condition stored in the storage unit 105, and determine the adjustment range of the recognition parameters, the initial values of the recognition parameters, and the like based on the size of the generated model and other properties.
  • the recognition parameter changing unit 702 changes the type of recognition parameter to be adjusted (step S402).
  • the recognition trial unit 703 performs a recognition process on the data generated by the data set generation unit 109 using the grip parameters adjusted by the grip parameter adjusting unit 501 (step S403).
  • the recognition trial unit 703 returns to step S403 and continues the recognition process when the recognition process is not completed (step S404: No), and outputs the recognition result to the recognition evaluation unit 704 when the recognition process is completed (step S404: Yes).
  • the recognition evaluation unit 704 evaluates the recognition result by the recognition trial unit 703 based on the recognition result acquired from the recognition trial unit 703 (step S405).
  • for example, the recognition evaluation unit 704 may calculate an evaluation function of the recognition parameters consisting of three evaluation values: E_Pos, the amount of deviation between the gripping position recognized by the recognition trial unit 703 and the optimum gripping position; E_Ang, the amount of deviation between the main axis of the target work and the gripping direction; and E_Num, an evaluation value of the number of recognized workpieces.
  • the optimum gripping position may be the position of the center of gravity of the target work or a position determined in advance by the user.
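The combined evaluation can be sketched as a sum of the three terms described above; the uniform weighting and the normalisation of the count term are assumptions, not specified by the patent.

```python
import math

def recognition_evaluation(grip_pos, optimal_pos, axis_angle, grip_angle,
                           n_recognized, n_expected):
    # E_Pos: positional deviation from the optimum gripping position,
    # E_Ang: angular deviation between work axis and grip direction,
    # E_Num: penalty for missed detections; lower totals are better.
    e_pos = math.dist(grip_pos, optimal_pos)
    e_ang = abs(axis_angle - grip_angle)
    e_num = abs(n_expected - n_recognized) / n_expected
    return e_pos + e_ang + e_num
```

The recognition parameter value determination would then search for the parameter values minimising this score over the data set.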
  • the recognition parameter value determination unit 705 determines the value of the recognition parameter based on the evaluation result of the recognition evaluation unit 704 (step S406). Specifically, the recognition parameter value determination unit 705 searches for the value of the recognition parameter that minimizes the evaluation function calculated by the recognition evaluation unit 704, and determines the value found by the search as the adjusted parameter value.
  • the recognition parameter adjustment end determination unit 706 determines whether or not the adjustment of all the recognition parameters to be adjusted has been completed (step S407).
  • when the adjustment has not been completed (step S407: No), the recognition parameter adjustment end determination unit 706 instructs each unit to return to step S402 and repeat the operations described above.
  • when the adjustment has been completed (step S407: Yes), the recognition parameter adjustment end determination unit 706 ends the operation of the recognition parameter adjustment unit 502.
  • the recognition parameter adjusting unit 502 automatically adjusts the recognition parameter.
  • the picking robot 20 can automatically adjust the recognition parameters that tend to have a high recognition rate, and can reduce the load of the user's parameter adjustment work.
  • the parameter adjusting unit 110 automatically adjusts the gripping parameter and the recognition parameter.
  • the storage unit 105 may store the parameters for picking adjusted by the parameter adjustment unit 110.
  • the recognition processing unit 111 performs object recognition processing on the data acquired by the sensor 101 based on the parameters adjusted by the parameter adjustment unit 110.
  • the recognition processing unit 111 may perform the object recognition process on the data acquired by the sensor 101 based on the adjusted parameters stored in the storage unit 105, and detect candidate gripping positions.
  • the recognition processing unit 111 may directly acquire the parameters adjusted by the parameter adjustment unit 110 from the parameter adjustment unit 110.
  • the picking unit 112 includes a robot hand.
  • the picking unit 112 performs a picking operation on the target work with the robot hand based on the recognition processing result of the recognition processing unit 111. Based on the recognition processing result, the picking unit 112 may repeatedly approach the robot hand to the gripping position with the highest gripping likelihood, pick the target work, and move it to a specified position.
  • the picking robot 20 may be equipped with sensors other than the sensor 101, such as a force sensor, a proximity sensor, and a tactile sensor for force control. Further, the model generation device 10 may serve a plurality of picking robots 20, or a picking robot 20 including a plurality of picking units 112. The picking robot 20 can improve the picking success rate for the target work by simulating the user's actual environment on the simulator and adjusting the parameters using the generated data set and the generated model of the target work.
  • the picking unit 112 is realized by the robot hand and the robot hand control unit.
  • the simulation condition setting unit 107, the scene generation unit 108, the data set generation unit 109, the parameter adjustment unit 110, the recognition processing unit 111, and the robot hand control unit of the picking unit 112 are realized by a processing circuit.
  • the processing circuit may be a processor and a memory for executing a program stored in the memory, or may be dedicated hardware.
  • the picking robot 20 automatically adjusts the parameters for picking using the data set generated on the simulator and the generated model of the target work. As a result, the picking robot 20 can improve the picking success rate and reduce the period and cost of trial and error involved in introducing and operating the industrial robot.
  • the picking robot 20 can simulate the user's actual environment on the simulator and automatically generate the data necessary for parameter adjustment using the generated work model, without using the actual machine.
  • the load of data collection work can be reduced.
  • the picking robot 20 can automatically adjust parameters that yield a high recognition rate and make the target work easy to grip, so the load of the user's parameter adjustment work can be reduced and the picking success rate for the target work can be improved.
  • the configurations shown in the above embodiments are examples of the content of the present invention; they can be combined with other known techniques, and parts of the configurations can be omitted or changed without departing from the gist of the present invention.
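The recognition-parameter adjustment loop described above (steps S402 through S407) can be sketched in outline as follows. This is a minimal illustration, not the embodiment's implementation: the equal weighting of E_Pos, E_Ang, and E_Num and the exhaustive grid search are assumptions, since the text does not fix a particular evaluation weighting or search strategy.

```python
import itertools

def evaluation(e_pos, e_ang, e_num, w=(1.0, 1.0, 1.0)):
    """Evaluation function built from the three terms E_Pos, E_Ang, E_Num."""
    return w[0] * e_pos + w[1] * e_ang + w[2] * e_num

def adjust_recognition_params(param_grid, trial):
    """Search the grid for the parameter values that minimize the evaluation.

    param_grid: dict mapping parameter name -> list of candidate values
                (the adjustment range of each recognition parameter).
    trial:      callable(params) -> (e_pos, e_ang, e_num), i.e., one
                recognition trial on the generated data set.
    """
    best_params, best_score = None, float("inf")
    names = list(param_grid)
    for values in itertools.product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        score = evaluation(*trial(params))   # one trial plus evaluation (S403-S405)
        if score < best_score:               # keep the minimizing values (S406)
            best_params, best_score = params, score
    return best_params, best_score
```

In the embodiment, `trial` would correspond to running the recognition trial unit 703 on the data set produced by the data set generation unit 109.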

Abstract

This invention comprises: a sensor (101) that acquires scene information including information about a subject workpiece for which a model is to be generated; a workpiece information acquisition unit (102) that acquires workpiece information, i.e. the information about the subject workpiece, from the scene information; a standard model determination unit (103) that determines a standard model, i.e. a model serving as a standard for expressing an irregular shape, on the basis of the scene information and the workpiece information; a deformation constraint condition determination unit (104) that determines a deformation constraint condition, i.e. a condition for deformation of the standard model, on the basis of the standard model and results obtained by analyzing a workpiece shape using the scene information and the workpiece information; and an amorphous model generation unit (106) that, on the basis of the deformation constraint condition, determines a deformation parameter for the standard model within a range of the deformation constraint condition, and deforms the standard model in accordance with the deformation parameter.

Description

Model generation device and picking robot
 The present invention relates to a model generation device and a picking robot that generate a model of a target work.
 Conventionally, there are devices that can grip workpieces that are placed in a returnable container or the like and whose orientations are not aligned in a fixed direction. Patent Document 1 discloses the technology of a robot simulation device that takes a model of the target work as input, generates image data while simulating a scene of the target work, performs gripping-position estimation on the image data, and simulates the process of taking out the target work based on the estimation result. When it is difficult to take out the target work, the robot simulation device described in Patent Document 1 adjusts simulation parameters related to the robot motion to improve the picking success rate, i.e., the ratio of the number of successful grips to the number of gripping trials.
Patent Document 1: JP-A-2018-144158
 In so-called bulk picking, where a robot picks workpieces piled in bulk, the work to be picked must first be recognized so that its gripping position is known. Achieving a high picking success rate in bulk picking requires adjusting the recognition parameters of the object recognition method.
 However, in the robot simulation device described in Patent Document 1, simulation can be applied only to fixed-shape workpieces for which a CAD (Computer Aided Design) model of an industrial part exists. The device therefore has the problem that simulation cannot be applied to workpieces, such as food, whose shapes vary from individual to individual and for which no CAD model exists.
 The present invention has been made in view of the above, and an object of the present invention is to obtain a model generation device capable of generating a model of a target work that has an irregular shape and is applicable to simulation.
 To solve the above problems and achieve the object, the model generation device of the present invention includes: a sensor that acquires scene information including information on the target work for which a model is to be generated; a work information acquisition unit that acquires the work information, i.e., the information on the target work, from the scene information; a reference model determination unit that determines a reference model, i.e., a model serving as a reference for expressing an irregular shape, based on the scene information and the work information; a deformation constraint condition determination unit that determines a deformation constraint condition, i.e., a condition for deforming the reference model, based on the reference model and the result of analyzing the work shape using the scene information and the work information; and an amorphous model generation unit that determines deformation parameters for the reference model within the range of the deformation constraint condition and deforms the reference model according to the deformation parameters.
 According to the present invention, the model generation device has the effect of being able to generate a model of a target work that has an irregular shape and is applicable to simulation.
Block diagram showing a configuration example of the model generation device according to the first embodiment
First flowchart showing the operation of the model generation device according to the first embodiment
Second flowchart showing the operation of the model generation device according to the first embodiment
Diagram showing an example in which the processing circuit of the model generation device according to the first embodiment is configured with a processor and a memory
Diagram showing an example in which the processing circuit of the model generation device according to the first embodiment is configured with dedicated hardware
Block diagram showing a configuration example of the reference model determination unit according to the second embodiment
Flowchart showing the operation of the reference model determination unit according to the second embodiment
Block diagram showing a configuration example of the picking robot according to the third embodiment
Block diagram showing a configuration example of the simulation condition setting unit according to the third embodiment
Block diagram showing a configuration example of the data set generation unit according to the third embodiment
Block diagram showing a configuration example of the parameter adjustment unit according to the third embodiment
Block diagram showing a configuration example of the gripping parameter adjustment unit according to the third embodiment
Flowchart showing the operation of the gripping parameter adjustment unit according to the third embodiment
Block diagram showing a configuration example of the recognition parameter adjustment unit according to the third embodiment
Flowchart showing the operation of the recognition parameter adjustment unit according to the third embodiment
 Hereinafter, the model generation device and the picking robot according to embodiments of the present invention will be described in detail with reference to the drawings. The present invention is not limited to these embodiments.
Embodiment 1.
 FIG. 1 is a block diagram showing a configuration example of the model generation device 10 according to the first embodiment of the present invention. The model generation device 10 generates a model of a target work that has an irregular shape and is applicable to simulation. The model of the target work generated by the model generation device 10 may be three-dimensional or two-dimensional; hereinafter, the model of the target work may be referred to simply as a model. The method by which the model generation device 10 generates a three-dimensional model of the target work is described below in detail. For a two-dimensional model, the model generation device 10 can be realized by dimensional reduction, for example by projecting the three-dimensional method onto a plane. The target works for which the model generation device 10 can generate a model are not limited to amorphous works, i.e., target works whose individual shapes are irregular; they also include fixed-shape works, i.e., target works whose individual shapes are uniform.
 As shown in FIG. 1, the model generation device 10 includes a sensor 101, a work information acquisition unit 102, a reference model determination unit 103, a deformation constraint condition determination unit 104, a storage unit 105, and an amorphous model generation unit 106. The details of each component are described together with the operation of the model generation device 10. FIG. 2 is a first flowchart showing the operation of the model generation device 10 according to the first embodiment; it covers the operations from the sensor 101 through the deformation constraint condition determination unit 104. FIG. 3 is a second flowchart showing the operation of the model generation device 10 according to the first embodiment; it covers the operation of the amorphous model generation unit 106.
 The sensor 101 measures the target work for which a model is to be generated in this embodiment (step S101) and acquires scene information on the target work. The scene information includes the work information, i.e., the information on the target work for which the model is to be generated, together with information such as the background of the target work. The work information represents the shape, color, and the like of the target work. The sensor 101 includes, for example, a 2D (two-dimensional) camera, a 3D sensor, or the like. The sensor 101 may acquire 2D data as a grayscale or color image using the 2D camera, or 3D data as a three-dimensional point cloud or a distance image using the 3D sensor. The scene information data may be measurement data or image data. In the following description, 2D may be referred to as two-dimensional and 3D as three-dimensional.
 The work information acquisition unit 102 acquires, from the scene information acquired by the sensor 101, the work information, i.e., the information on the target work for which the model is to be generated (step S102). The work information acquisition unit 102 may obtain information such as the size range and the color expressed in RGB (Red Green Blue) from the data acquired by the sensor 101. As a method of acquiring the work information, for example, one target work is placed on a uniaxial rotation stage, and the sensor 101 measures the target work while the stage rotates. The data need not cover the entire circumference of the target work, and the sensor 101 may measure only once. The work information acquisition unit 102 removes the background and other data unnecessary for model generation from the obtained data to acquire data of the target work only, and calculates the size range, average color information, and the like of the target work from the obtained data. The size range of the target work is, for example, the range specified by the X, Y, and Z coordinates of the three-dimensional point cloud.
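As a minimal sketch of the processing just described (the function name, the precomputed background mask, and the array layout are assumptions for illustration, not part of the specification):

```python
import numpy as np

def extract_work_info(points, colors, background_mask):
    """Remove background points and compute the work's size range and mean color.

    points:          (N, 3) array of XYZ coordinates from the 3D sensor.
    colors:          (N, 3) array of RGB values, one per point.
    background_mask: boolean array, True where a point belongs to the background.
    """
    work_points = points[~background_mask]
    work_colors = colors[~background_mask]
    # Size range: axis-aligned extent of the remaining point cloud in X, Y, Z.
    size_range = work_points.max(axis=0) - work_points.min(axis=0)
    # Average color information of the target work.
    mean_color = work_colors.mean(axis=0)
    return size_range, mean_color
```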
 The reference model determination unit 103 determines the reference model, i.e., the model serving as the reference for expressing irregular shapes, based on the scene information obtained by the sensor 101 and the work information obtained by the work information acquisition unit 102 (step S103). The reference model determined by the reference model determination unit 103 may be a primitive geometric shape such as a rectangular parallelepiped, a sphere, or a triangular pyramid; a model created by the user; or a combination of geometric shapes and/or user-created models. The model may be represented as a set of points with three-dimensional coordinates, i.e., a three-dimensional point cloud; as a mesh model; as numerical values capable of expressing a geometric shape; or as a combination of these. The representation of the reference model is not limited to what is described in this embodiment, and other representations may be used. Two methods of determining the reference model can be considered: automatic determination and manual determination.
 In the automatic determination method, the reference model determination unit 103, for example, selects one piece of scene information from the plural pieces obtained by the sensor 101 and calculates the position of the center of gravity of the target work represented by the work information included in that scene information. The reference model determination unit 103 uses the calculated center of gravity as the center point of the reference model and generates several reference models of different types around it, determining the size of each from the work information obtained by the work information acquisition unit 102. The reference model determination unit 103 may then calculate the normal distributions of the measurement data and of each reference model, and automatically select, as the reference model for expressing the irregular shape of the target work, the candidate whose normal distribution is closest to that of the measurement data. This automatic determination can be expressed by the following equation (1); that is, the reference model determination unit 103 evaluates equation (1) when determining the reference model automatically. In equation (1), M is the measurement data, P is a reference model, g(M) is the normal distribution of the measurement data, and g(P) is the normal distribution of the reference model.
$\hat{P} = \operatorname*{argmin}_{P} \left\lVert g(M) - g(P) \right\rVert \quad (1)$
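A possible realization of this selection is sketched below. Representing g(·) as a normalized 2D histogram of normal directions and comparing histograms with the Euclidean distance are illustrative assumptions, since equation (1) leaves the concrete form of the distributions and the distance open:

```python
import numpy as np

def normal_histogram(normals, bins=8):
    """Approximate g(.): a normalized joint histogram of normal directions."""
    az = np.arctan2(normals[:, 1], normals[:, 0])        # azimuth in [-pi, pi]
    el = np.arcsin(np.clip(normals[:, 2], -1.0, 1.0))    # elevation in [-pi/2, pi/2]
    hist, _, _ = np.histogram2d(az, el, bins=bins,
                                range=[[-np.pi, np.pi], [-np.pi / 2, np.pi / 2]])
    return hist / max(hist.sum(), 1)                     # normalize to a distribution

def select_reference_model(measured_normals, candidate_normals):
    """Return the index of the candidate whose normal distribution is
    closest to that of the measurement data, as in equation (1)."""
    g_m = normal_histogram(measured_normals)
    dists = [np.linalg.norm(g_m - normal_histogram(c)) for c in candidate_normals]
    return int(np.argmin(dists))
```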
 In the manual determination method, the user may select the geometric shape most similar to the target work as the reference model.
 The deformation constraint condition determination unit 104 determines the deformation constraint condition, i.e., the condition under which the reference model is deformed, based on the reference model and on the result of analyzing the work shape using the scene information and the work information (step S104), and stores the determined deformation constraint condition in the storage unit 105. In this embodiment, the deformation constraint condition determination unit 104 parameterizes how the reference model is deformed. To make the shapes of the individual models irregular, the deformation parameters are set to different values for each model. To deform the reference model so as to reproduce shape features close to the target work, constraints must be placed on the degree of deformation by appropriately determining the range of values of each deformation parameter. The deformation constraint condition determination unit 104 therefore determines the range of values that each deformation parameter can take.
 Possible deformation parameters representing the shape of the target work include, for example, the size of the target work, the amount by which the reference model is deformed (for a mesh model, e.g., the movement amount of a mesh vertex), the curvature, the appearance frequency of irregularities, and the appearance range of irregularities. That is, the deformation constraint condition includes, as numerical conditions, at least one of the appearance frequency of irregularities of the target work, the appearance range of irregularities of the target work, and the curvature. By using such numerical conditions, the deformation constraint condition determination unit 104 can improve the reproducibility of the model.
 The deformation constraint condition determination unit 104 determines the deformation constraint condition based on the scene information obtained by the sensor 101, the work information obtained by the work information acquisition unit 102, and the reference model determined by the reference model determination unit 103. For example, the movement of a mesh vertex of the reference model may be constrained by assuming that the vector between the mesh vertex and the center point of the reference model defines the vertex's movement range. This prevents model generation failures such as meshes intersecting each other or mesh faces being turned inside out when vertices are moved. The deformation constraint condition determination unit 104 may determine the size range of the target work based on the work information obtained by the work information acquisition unit 102. For the appearance frequency of irregularities, the deformation constraint condition determination unit 104 may estimate normals from the scene information obtained by the sensor 101 and, if the distribution of the estimated normals is within a predetermined threshold, treat the appearance frequency of irregularities as one of the deformation parameters and determine its range.
 The deformation constraint condition determination unit 104 may extract and use the contour of the work to analyze the work shape; that is, the work shape of the target work analyzed by the deformation constraint condition determination unit 104 includes the contour of the target work. As a contour extraction method, the deformation constraint condition determination unit 104 may, for example, set an arbitrary cross section through the measurement data of the work and search for the measurement data within a certain distance of that cross section. The measurement data of a work does not necessarily yield a single contour; with the method just described, a contour is obtained for each position and orientation in which the cross section is set. As a contour analysis method, the deformation constraint condition determination unit 104 may, for example, fit a curve to the work contour and determine the range of the appearance frequency of irregularities from the fitting result. By analyzing the contour information of the target work, the deformation constraint condition determination unit 104 can more easily determine deformation constraint conditions that match the irregularity of the target work, which improves the reproducibility of the model.
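The cross-section-based contour extraction described above can be sketched as follows (the function name and the distance tolerance are illustrative assumptions):

```python
import numpy as np

def extract_contour(points, plane_point, plane_normal, tolerance=0.002):
    """Return the measurement points lying within `tolerance` of a cutting plane.

    points:       (N, 3) point cloud of the work.
    plane_point:  a point on the arbitrary cross-section plane.
    plane_normal: the plane's normal vector (position and orientation of the cut).
    """
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)
    # Signed distance of every measurement point to the plane.
    dist = (points - np.asarray(plane_point, dtype=float)) @ n
    return points[np.abs(dist) <= tolerance]
```

Varying `plane_point` and `plane_normal` yields one contour per cross-section position and orientation, as the text notes.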
 Instead of using manually designed shape features such as curvature or polynomial-approximated curves, the deformation constraint condition determination unit 104 may automatically acquire shape features by a learning method using a neural network. For example, it may train a neural network that classifies the target work against other works, using as input an image of the normal distribution obtained from the measurement data of the target work, and then use the feature map of an intermediate layer of the trained network as the shape feature. In this way, the deformation constraint condition determination unit 104 may use, as a deformation parameter, the feature map of an intermediate layer of a neural network that has learned the shape of the target work. Since the model generation device 10 can acquire deformation parameters suited to the target work by machine learning, it can accurately reproduce complicated work shapes, and the deformation constraint condition determination unit 104 can be expected to obtain deformation constraint conditions that are more general and more expressive than manually designed shape features. The configuration of the neural network is not limited, and the learning target is not limited to a classification problem; however, a network configuration and learning target whose intermediate-layer feature maps represent the work-shape features well are desirable.
 The storage unit 105 stores the deformation constraint condition, i.e., the range of each deformation parameter determined by the deformation constraint condition determination unit 104.
 The amorphous model generation unit 106 reads the deformation constraint condition from the storage unit 105, determines the deformation parameters of the reference model within the range of the deformation constraint condition (step S105), and deforms the reference model according to the deformation parameters (step S106). As described above, the model generation device 10 is assumed to generate a plurality of amorphous models, so the amorphous model generation unit 106 accepts a user-specified number of models to generate. If the specified number of models has not yet been generated (step S107: No), the amorphous model generation unit 106 returns to step S105 and repeats the above operations; when the specified number has been generated (step S107: Yes), the operation ends.
Specifically, the irregular model generation unit 106 determines the deformation parameters based on the deformation constraint condition determined by the deformation constraint condition determination unit 104. For example, when the output model is a mesh model, the irregular model generation unit 106 may determine the movement amount of the mesh of the reference model by searching, for each piece of measurement data, for the closest vector, and taking the intersection of that vector and the normal dropped from the measurement data onto it as the destination of the corresponding mesh vertex of the reference model. Because curvature is expressed by the number of meshes of the reference model used, the irregular model generation unit 106 may determine the mesh count of the reference model by, for example, increasing the mesh count in regions of the measurement data where the point-cloud density is high. For the appearance frequency of irregularities, the irregular model generation unit 106 may estimate normals from the measurement data and determine the appearance frequency based on the distribution of the normals, since the wider the range of the normal distribution, the more frequently irregularities are considered to appear. For the appearance range of irregularities, the irregular model generation unit 106 may estimate uneven portions based on the normal distribution estimated from the measurement data and calculate the range of unevenness at the estimated portions.
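A minimal sketch of the normal-based estimation above, assuming PCA normal estimation over the k nearest neighbours and an illustrative linear rule mapping the angular spread of the normals to a bump count (the constant `spread_to_freq` is an assumption, not from the description):

```python
import numpy as np

def estimate_normals(points, k=6):
    """Per-point normal: smallest principal axis of the k nearest neighbours (PCA)."""
    normals = []
    for p in points:
        nbrs = points[np.argsort(np.linalg.norm(points - p, axis=1))[:k]]
        eigval, eigvec = np.linalg.eigh(np.cov(nbrs.T))
        n = eigvec[:, 0]                        # eigenvector of the smallest eigenvalue
        normals.append(n if n[2] >= 0 else -n)  # orient consistently upward
    return np.array(normals)

def bump_frequency(normals, spread_to_freq=5.0):
    """Map the angular spread of the normal distribution to a bump count
    (the linear mapping is an assumed, illustrative rule)."""
    mean = normals.mean(axis=0)
    mean /= np.linalg.norm(mean)
    spread = np.mean(np.arccos(np.clip(normals @ mean, -1.0, 1.0)))
    return int(round(spread * spread_to_freq))

# A flat patch has a tightly clustered normal distribution, hence no bumps.
flat = np.array([[x, y, 0.0] for x in range(4) for y in range(4)])
flat_freq = bump_frequency(estimate_normals(flat))
```

A rougher surface would spread its normals over a wider solid angle, increasing the estimated bump frequency, in line with the reasoning in the paragraph above.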
When calculating the value of each deformation parameter, the irregular model generation unit 106 varies the values randomly so that each model has a different shape; however, the varied parameter values must remain within the range of the deformation constraint condition. After determining the deformation parameters, the irregular model generation unit 106 deforms the reference model based on the determined deformation parameters.
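The sampling loop of steps S105 to S107 can be sketched as follows; the parameter names, ranges, and the `deform` placeholder are hypothetical stand-ins for the actual constraint ranges and mesh deformation.

```python
import random

# Hypothetical deformation parameters and constraint ranges; the real ranges come
# from the deformation constraint condition stored in the storage unit 105.
CONSTRAINTS = {"bump_height": (0.5, 2.0), "bump_frequency": (1.0, 5.0), "scale": (0.9, 1.1)}

def sample_parameters(constraints, rng=random):
    """Draw each deformation parameter uniformly at random within its allowed range."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in constraints.items()}

def generate_models(n_models, constraints, deform=dict):
    """Generate the user-specified number of models (the loop of steps S105-S107);
    `deform` stands in for the actual deformation of the reference model."""
    return [deform(sample_parameters(constraints)) for _ in range(n_models)]

models = generate_models(3, CONSTRAINTS)
```

Each generated model receives its own randomly drawn parameter set, so the models differ in shape while all stay within the deformation constraint condition.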
When moving mesh vertices, the irregular model generation unit 106 may use the contour information of the target work extracted from the scene information. For example, the irregular model generation unit 106 replaces all or part of the contour of the reference model with the contour information of the target work and sets the destination of each mesh vertex to the three-dimensional point on the contour closest to that vertex. The contour information of the target work is, for example, the three-dimensional point group constituting the contour. As the replacing contour information, it is conceivable, for example, to randomly select a work contour extracted from a different work; to generate and use new contour information by applying, within the range of the deformation constraint condition, a scale transformation that enlarges or reduces the selected work contour in the direction of its unevenness or its length, rather than using it as-is; or to use contour information generated by combining different work contours.
That is, the irregular model generation unit 106 may use, as the work contour information, a work contour generated by combining two or more different work contours. By combining the contour information of target works, that is, actual works, to generate new contours, the irregular model generation unit 106 can increase the variety of shapes of the generated models. The irregular model generation unit 106 may also use, as the work contour information, a work contour generated by enlarging or reducing the scale of a work contour; by further increasing the variety of contours used, it can increase the variety of shapes of the generated models. Furthermore, by exploiting work contour information actually acquired from works when determining the deformation amount of the reference model, the irregular model generation unit 106 can be expected to generate more natural work models. By using the work contour information of the target work, that is, the actual work, the irregular model generation unit 106 can generate models that look more like the target work.
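One simple way to realize the contour combination and scaling described above, assuming the contours are same-length point arrays aligned point-for-point (weighted averaging of corresponding points is just one conceivable combination method):

```python
import numpy as np

def blend_contours(contour_a, contour_b, weight=0.5):
    """New contour from two work contours ((N, 3) arrays, assumed aligned
    point-for-point) by weighted averaging of corresponding points."""
    return weight * contour_a + (1.0 - weight) * contour_b

def scale_contour(contour, scale):
    """Enlarge or reduce a contour about its centroid."""
    centroid = contour.mean(axis=0)
    return centroid + scale * (contour - centroid)

square = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
shifted = square + np.array([0.2, 0.0, 0.0])   # a second, slightly different contour
blended = blend_contours(square, shifted)      # halfway between the two contours
enlarged = scale_contour(square, 1.1)          # 10 % larger, same centroid
```

Both operations preserve the overall character of the real work contours while producing shapes that did not occur among the measured works, which is the point of the variation-increasing step above.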
In the model generation device 10, the irregular model generation unit 106 may be replaced by a model generation method using a neural network. The neural network is configured to take as input at least one of the scene information, the work information, the deformation constraint condition, and the deformation parameters, and to output a model of the target work in a prescribed representation. For a 2D model such as an image, for example, the neural network may be configured to take a random initial value as input and output a contour image. The input may be a single value or a vector of multiple values. In such a case, a training data set for the neural network is prepared in advance, consisting of groups of input values and, as ground truth, the work contour model associated with each group of values; a trained neural network is obtained by learning this mapping.
As for the neural network, when a three-dimensional model is to be output, for example, the network may be configured so that its output corresponds to a voxel space. As a framework for realizing such a function, GAN (Generative Adversarial Networks) and the like have been proposed. The network configurations and training data sets described above are merely examples; other network configurations, training data sets, and so on may be used. By generating models with the high expressive power of a neural network, the model generation device 10 can be expected to be more versatile than a manually constructed model generation algorithm, expanding the range of works to which it can be applied.
Next, the hardware configuration of the model generation device 10 will be described. In the model generation device 10, the sensor 101 is a measuring instrument such as a camera or a laser. The storage unit 105 is a memory. The work information acquisition unit 102, the reference model determination unit 103, the deformation constraint condition determination unit 104, and the irregular model generation unit 106 are realized by a processing circuit. The processing circuit may be a processor and a memory that executes a program stored in the memory, or may be dedicated hardware.
FIG. 4 is a diagram showing an example in which the processing circuit of the model generation device 10 according to the first embodiment is configured by a processor and a memory. When the processing circuit is composed of the processor 91 and the memory 92, each function of the processing circuit of the model generation device 10 is realized by software, firmware, or a combination of software and firmware. The software or firmware is written as a program and stored in the memory 92. In the processing circuit, each function is realized by the processor 91 reading and executing the program stored in the memory 92. That is, the processing circuit includes the memory 92 for storing programs whose execution results in the processing of the model generation device 10 being carried out. These programs can also be said to cause a computer to execute the procedures and methods of the model generation device 10.
Here, the processor 91 may be a CPU (Central Processing Unit), a processing device, an arithmetic unit, a microprocessor, a microcomputer, a DSP (Digital Signal Processor), or the like. The memory 92 corresponds to, for example, a nonvolatile or volatile semiconductor memory such as a RAM (Random Access Memory), a ROM (Read Only Memory), a flash memory, an EPROM (Erasable Programmable ROM), or an EEPROM (registered trademark) (Electrically EPROM), or to a magnetic disk, a flexible disk, an optical disc, a compact disc, a mini disc, a DVD (Digital Versatile Disc), or the like.
FIG. 5 is a diagram showing an example in which the processing circuit of the model generation device 10 according to the first embodiment is configured by dedicated hardware. When the processing circuit is dedicated hardware, the processing circuit 93 shown in FIG. 5 corresponds to, for example, a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array), or a combination of these. The functions of the model generation device 10 may each be realized by a separate processing circuit 93, or the functions may be realized collectively by one processing circuit 93.
Note that some of the functions of the model generation device 10 may be realized by dedicated hardware and others by software or firmware. In this way, the processing circuit can realize each of the above functions by dedicated hardware, software, firmware, or a combination thereof.
As described above, according to the present embodiment, the model generation device 10 measures the target work with the sensor 101 and determines the reference model from the information obtained by the measurement. The model generation device 10 determines the deformation constraint condition for the reference model, determines the deformation parameters of the reference model within the range of the deformation constraint condition, and deforms the reference model according to the deformation parameters. As a result, the model generation device 10 can generate a model even when the target work is an irregular work. The model generation device 10 can therefore be used in, for example, food factories that frequently handle irregular works. By using the models generated by the model generation device 10 in simulation, the time and cost required for the on-site introduction and pre-operation verification of industrial robots aimed at labor saving can be reduced.
According to the present embodiment, the model generation device 10 acquires several pieces of measurement data of the target work and deforms the reference model based on the deformation constraint condition of the target work determined from the analysis results of the measurement data, so that a model can be generated even when the target work is an irregular work. The model generation device 10 can improve the reproducibility of the model through numerical conditions such as the appearance frequency of irregularities and through analysis of the contour information of the target work. By using the contour information of the target work, that is, the actual work, the model generation device 10 can generate models that look more like the target work. By combining the contour information of actual works to generate new contours, the model generation device 10 can increase the variety of shapes of the generated models, and by further increasing the variety of contours used, it can increase that variety still further. In addition, because the model generation device 10 can acquire deformation parameters suited to the target work through machine learning, it can reproduce complicated work shapes accurately. Since model generation by a neural network produces models with high expressive power, the model generation device 10 can be expected to be more versatile than a manually constructed model generation algorithm, expanding the range of works to which it can be applied.
Embodiment 2.
In the second embodiment, a specific configuration and operation of the reference model determination unit 103 included in the model generation device 10 will be described.
FIG. 6 is a block diagram showing a configuration example of the reference model determination unit 103 according to the second embodiment. As shown in FIG. 6, the reference model determination unit 103 includes an individual model determination unit 201, a registration unit 202, an average shape generation unit 203, a shape element division unit 204, an individual difference category determination unit 205, a work principal axis determination unit 206, a primitive matching unit 207, and a fitting model determination unit 208. The details of each component will be described together with the operation of the reference model determination unit 103. FIG. 7 is a flowchart showing the operation of the reference model determination unit 103 according to the second embodiment, that is, the details of step S103 in the flowchart of FIG. 2.
The individual model determination unit 201 models each piece of work information acquired by the work information acquisition unit 102 and determines an individual model for each (step S201). An individual model is a model of one of the plurality of pieces of work information acquired by the work information acquisition unit 102. Specifically, the individual model determination unit 201 removes from the data obtained by the sensor 101 the background that is unnecessary for model generation and extracts only the data of the target work. The individual model determination unit 201 may generate a model by deforming a geometric shape to fit the obtained data of the target work, or may use the obtained data of the target work as the individual model as-is. When generating a model, the individual model determination unit 201 may, as with the determination of the reference model described in the first embodiment, automatically determine the geometric shape most similar to the measurement data of the target work based on the normal distribution, or the user may manually determine the geometric shape most similar to the target work.
The registration unit 202 automatically integrates the plurality of individual models determined by the individual model determination unit 201 (step S202). Integration means, for example, aligning the plurality of individual models. The alignment method used may be ICP (Iterative Closest Points) or a feature-point-based alignment method.
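A minimal ICP sketch, assuming the individual models are point clouds of equal size that are already roughly aligned (a real registration unit would typically use an optimized library implementation with outlier handling):

```python
import numpy as np

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch method)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection
        Vt[-1] *= -1.0
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iterations=20):
    """Minimal ICP: alternate nearest-neighbour matching and best-fit alignment."""
    cur = src.copy()
    for _ in range(iterations):
        idx = np.argmin(((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1), axis=1)
        R, t = best_fit_transform(cur, dst[idx])
        cur = cur @ R.T + t
    return cur

# Demo: dst is src under a small known rigid motion; ICP recovers the alignment.
src = np.array([[x, y, 0.1 * x * y] for x in range(5) for y in range(5)], dtype=float)
a = 0.05
Rz = np.array([[np.cos(a), -np.sin(a), 0.0], [np.sin(a), np.cos(a), 0.0], [0.0, 0.0, 1.0]])
dst = src @ Rz.T + np.array([0.1, -0.1, 0.05])
aligned = icp(src, dst)
max_err = np.abs(aligned - dst).max()
```

Once the individual models are brought into a common pose in this way, their corresponding points can be compared and averaged in the following step.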
The average shape generation unit 203 generates an average shape model of the target work based on the integration result of the registration unit 202 (step S203). The average shape model is a model obtained by averaging the features of the plurality of individual models integrated by the registration unit 202. For example, when the output model is a mesh model, the average shape generation unit 203 may calculate the averages of the mesh vertices and of the normals over the aligned individual models and generate a model composed of those averages as the average shape model. When the output model is a three-dimensional point cloud, the average shape generation unit 203 may calculate the averages of the three-dimensional point clouds of the aligned individual models and generate a model composed of those averages as the average shape model. By using the average shape model of the target work, the reference model determination unit 103 can determine the reference model more easily and can improve the reproducibility of the model.
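For point-cloud output models, the averaging step can be sketched as follows, assuming the registration has established a vertex-for-vertex correspondence between the aligned models:

```python
import numpy as np

def average_shape(aligned_models):
    """Average shape model: per-vertex mean over aligned individual models,
    each given as an (N, 3) array with corresponding vertex order."""
    return np.mean(np.stack(aligned_models), axis=0)

base = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
models = [base, base + 0.2]   # two aligned individuals, slightly offset
avg = average_shape(models)   # lies midway between them
```

Mesh normals could be averaged the same way (with renormalization of the averaged vectors), as mentioned for the mesh-model case above.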
The shape element division unit 204 divides the average shape model generated by the average shape generation unit 203 or the individual models determined by the individual model determination unit 201 into at least one shape element constituting each model (step S204). The shape element division unit 204 divides the average shape model or individual model into its constituent shape elements by, for example, applying a clustering process. If the input data is a three-dimensional point cloud, the shape element division unit 204 may cluster the point cloud into shape elements by the k-means method.
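A plain k-means sketch for dividing a point cloud into shape elements; the initialization scheme, iteration count, and the two-cluster demo data are illustrative choices only.

```python
import numpy as np

def kmeans(points, k, iterations=50, seed=0):
    """Plain k-means on an (N, 3) point cloud; returns per-point labels and centroids."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)].astype(float)
    for _ in range(iterations):
        # assign each point to its nearest centroid, then recompute the centroids
        labels = np.argmin(((points[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centroids[c] = points[labels == c].mean(axis=0)
    return labels, centroids

# Demo: two well-separated clusters, e.g. two connected parts of a work.
rng = np.random.default_rng(1)
blob_a = rng.normal(loc=(0.0, 0.0, 0.0), scale=0.2, size=(8, 3))
blob_b = rng.normal(loc=(10.0, 10.0, 10.0), scale=0.2, size=(8, 3))
points = np.vstack([blob_a, blob_b])
labels, centroids = kmeans(points, k=2)
```

Each cluster then corresponds to one shape element, which later steps fit with a primitive shape individually.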
The individual difference category determination unit 205 determines an individual difference category, which ranks the irregularity of the works, for the average shape model generated by the average shape generation unit 203 and divided into one or more shape elements by the shape element division unit 204 (step S205).
Even among objects whose shape varies from individual to individual, the degree and tendency of the variation differ from object to object. For example, unprocessed fruits share a rough overall shape, but their size and shape differ between individuals, so it is difficult to apply the conventional object detection approach of template matching on two-dimensional images. Fish-paste products such as chikuwa and other processed foods are heat-treated after being molded; the surface color and the fine surface irregularities vary slightly with the degree of cooking, but the sizes can be regarded as almost identical. In fried foods such as battered karaage, the shape of the batter varies between individuals, and the ingredient inside, chicken in the case of karaage, is irregular in shape; some works therefore exhibit no discernible regularity of shape. To generate models that look more like the target work, it is effective to classify the variation in work shape into several categories according to the nature of the work, that is, individual difference categories, and to apply a model generation method suited to each category.
For determining the individual difference category, the individual difference category determination unit 205, for example, calculates the normal distribution of the average shape model and compares it with the normal distribution of each piece of work information extracted from the scene information. Works with a large difference are classified into the category of works with no regularity of shape, and works with a small difference are classified into the category of works with regularity of shape.
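A sketch of this comparison, using an azimuth-only histogram as a simple stand-in for a full spherical histogram of the normal distribution; the bin count, the L1 difference measure, and the threshold are illustrative assumptions.

```python
import numpy as np

def normal_histogram(normals, bins=8):
    """Coarse histogram over the azimuth of the normals (a simple stand-in for a
    full spherical histogram of the normal distribution)."""
    az = np.arctan2(normals[:, 1], normals[:, 0])
    hist, _ = np.histogram(az, bins=bins, range=(-np.pi, np.pi))
    return hist / max(len(normals), 1)

def classify_regularity(work_normals, average_normals, threshold=0.5):
    """'regular' when the work's normal distribution is close to the average model's."""
    diff = np.abs(normal_histogram(work_normals) - normal_histogram(average_normals)).sum()
    return "regular" if diff < threshold else "irregular"

angles = np.linspace(-np.pi, np.pi, 64, endpoint=False)
average_normals = np.stack([np.cos(angles), np.sin(angles), np.zeros_like(angles)], axis=1)
regular_work = average_normals.copy()            # same distribution as the average model
odd_work = np.tile([[1.0, 0.0, 0.0]], (64, 1))   # very different normal distribution
```

Works whose histograms diverge strongly from the average shape model's fall into the no-regularity category, and a model generation method suited to that category is then applied.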
The work principal axis determination unit 206 determines the work principal axis, that is, the principal axis of inertia, for the average shape model generated by the average shape generation unit 203 or the individual model determined by the individual model determination unit 201, based on the result obtained by the individual difference category determination unit 205 (step S206). In detail, the work principal axis determination unit 206 determines the work principal axis for the average shape model or individual model that has been divided into one or more shape elements by the shape element division unit 204. For a category with no regularity of shape, the work principal axis cannot be obtained, so the work principal axis determination unit 206 may determine that there is no work principal axis.
The primitive matching unit 207 performs primitive matching, which matches the average shape model generated by the average shape generation unit 203 against primitive shapes, that is, geometric shapes such as rectangular parallelepipeds, spheres, and triangular pyramids (step S207). In detail, the primitive matching unit 207 performs the matching on the average shape model divided into one or more shape elements by the shape element division unit 204. The primitive matching unit 207 may calculate a similarity based on the normal distribution of the average shape model and the normal distribution of each geometric shape. As a similarity calculation method, for example, the primitive matching unit 207 may obtain the difference between the normal distribution of the average shape model and that of the geometric shape, and look up the similarity in a lookup table, prepared in advance, that associates normal-distribution differences with similarities.
The fitting model determination unit 208 determines the fitting model, which serves as the reference model, based on the matching result of the primitive matching unit 207 (step S208). When the calculated similarity is at or below a predetermined threshold, the fitting model determination unit 208 may regard the fitting model as undeterminable and determine the average shape model as the fitting model. When the similarity exceeds the threshold, it may determine the geometric shape with the highest similarity as the fitting model. By being able to determine a reference model close to the target work, the reference model determination unit 103 can improve the reproducibility of the generated model. The fitting model determination unit 208 may determine a reference model, that is, a fitting model, for each shape element divided by the shape element division unit 204. By determining a fitting model for each element constituting the divided target work, the model generation device 10 can generate models of target works formed by connecting multiple objects, such as skewered foods.
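The lookup-table similarity and the threshold decision can be sketched as follows; the table entries, the threshold value, and the primitive names are hypothetical.

```python
# Hypothetical lookup table: normal-distribution difference (upper bound) -> similarity.
LOOKUP = [(0.2, 0.9), (0.5, 0.7), (1.0, 0.4), (float("inf"), 0.1)]

def similarity_from_difference(diff):
    """Look up the similarity associated with a normal-distribution difference."""
    for upper, sim in LOOKUP:
        if diff <= upper:
            return sim

def decide_fitting_model(primitive_diffs, threshold=0.5):
    """Pick the most similar primitive; fall back to the average shape model when
    no primitive exceeds the similarity threshold (step S208)."""
    sims = {name: similarity_from_difference(d) for name, d in primitive_diffs.items()}
    best = max(sims, key=sims.get)
    return best if sims[best] > threshold else "average_shape_model"
```

When applied per shape element, each element of a connected work (for example each part of a skewered food) receives its own fitting model.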
In the model generation device 10, the deformation constraint condition determination unit 104 determines the deformation constraint condition of the contour based on numerical conditions and the shape analysis result of the reference model (step S104). The model generation device 10 can generate a model based on the acquired contour information even when the contour shape of the target work is complicated. The deformation constraint condition determination unit 104 may include the contour information of the average shape model when acquiring contour information, and may use the contour of the average shape model as the reference when determining the deformation constraint condition. For example, the height of the unevenness may be referenced to the contour of the average shape model instead of the model center, or the height of the unevenness may be normalized by the height of the contour of the average shape model. When the work shape is complicated, it is difficult to fit it with a simple curve, but by normalizing with the average shape model, the deformation constraint condition determination unit 104 can be expected to approximate it with a simple sine wave, polynomial, or the like.
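A sketch of this normalization idea: dividing a bumpy work contour by the average shape model's contour turns a complicated radial profile into a simple periodic one whose bump frequency is easy to recover (the radial contour representation and the FFT analysis are illustrative assumptions).

```python
import numpy as np

def normalized_profile(work_radius, average_radius):
    """Express the work contour's radial profile relative to the average shape model."""
    return work_radius / average_radius

theta = np.linspace(0.0, 2.0 * np.pi, 90, endpoint=False)
average_radius = 2.0 + 0.5 * np.cos(theta)                      # complicated base contour
work_radius = average_radius * (1.0 + 0.1 * np.sin(5 * theta))  # bumps riding on it

profile = normalized_profile(work_radius, average_radius)  # simple periodic signal again
spectrum = np.abs(np.fft.rfft(profile - profile.mean()))
bump_count = int(np.argmax(spectrum[1:]) + 1)              # dominant bump frequency
```

After normalization, the profile is just the bump term, so a simple sine or low-order polynomial (or, as here, a single dominant Fourier component) captures it, which is exactly the simplification the paragraph above anticipates.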
 In the model generation device 10, the other components operate in the same manner as in the first embodiment.
 As described above, according to the present embodiment, the model generation device 10 determines the reference model based on the individual-difference category result and the average shape model, and can therefore improve model reproducibility in accordance with the individual differences of the target works. As a result, using models generated by the model generation device 10 improves reproducibility even in the simulation stage before an industrial robot is introduced, making it easier to verify robot placement and motion for picking work accurately. Furthermore, because the model generation device 10 can generate models of target works composed of multiple connected objects, it can generate models of, for example, dumplings and skewered foodstuffs handled at food-processing sites, widening the range of works to which model generation can be applied.
 According to the present embodiment, the model generation device 10 uses the average shape model generated by the average shape generation unit 203 as the reference model. This makes the reference model easier to determine, so that even a work that is difficult to express with primitive geometric shapes, such as rectangular parallelepipeds, spheres, or triangular pyramids, can be modeled by using the average shape model as the reference model. The model generation device 10 can also generate a model from the acquired contour information even when the contour shape of the target work is complicated. In addition, by determining the reference model based on the work principal axis and the average shape model, the device can more accurately classify the individual-difference category of the target work from the principal axis; it can thus select a reference model close to each target work according to its individual differences, improving model reproducibility. The model generation device 10 can also generate models of target works composed of multiple connected objects, such as skewered foodstuffs, by determining a fitting model for each element of the divided target work.
Embodiment 3.
 The third embodiment describes a picking robot that includes the model generation device 10 described in the first or second embodiment.
 FIG. 8 is a block diagram showing a configuration example of the picking robot 20 according to the third embodiment. The picking robot 20 includes the model generation device 10, a simulation condition setting unit 107, a scene generation unit 108, a data set generation unit 109, a parameter adjustment unit 110, a recognition processing unit 111, and a picking unit 112. The model generation device 10 may have the configuration of either the first or the second embodiment.
 The simulation condition setting unit 107 reads the deformation constraint conditions from the storage unit 105, causes the amorphous model generation unit 106 to generate a model of the target work based on those conditions, and uses the generated model to set simulation conditions including at least one of the sensor information and the work information in the simulation. The deformation constraint conditions stored in the storage unit 105 are those determined by the model generation device 10 as described above. The amorphous model generation unit 106 can also generate a model of the target work under the control of other components. FIG. 9 is a block diagram showing a configuration example of the simulation condition setting unit 107 according to the third embodiment. As shown in FIG. 9, the simulation condition setting unit 107 includes a sensor information setting unit 301, a work information setting unit 302, and an environment information setting unit 303.
 The sensor information setting unit 301 sets at least one of the specifications and installation information of the sensor 101 in the simulation. It may set parameters concerning the specifications of the sensor 101, such as the angle of view, working distance, installation angle, and resolution, or the sensor installation information in the simulation. The sensor information setting unit 301 may also cause the amorphous model generation unit 106 to generate at least one model of the target work based on the deformation constraint conditions stored in the storage unit 105, and use the generated model for advance checks in the simulation, for example whether the target work fits within the field of view of the sensor 101.
 The work information setting unit 302 sets the work information of the target work. It may cause the amorphous model generation unit 106 to generate at least one model of the target work based on the deformation constraint conditions stored in the storage unit 105, and use the generated model to set the color information and size of the target work, the number of target-work models used by the scene generation unit 108, and the like.
 The environment information setting unit 303 sets the environment information in the simulation. It may set the light-source environment, such as diffuse reflection or ambient light, the box size used for bulk loading, the work supply method, and the like. Possible work supply methods include a bulk state, in which the positions and orientations of the target works are completely irregular, and an aligned state, in which the placement positions and orientations of the target works are fixed. The environment information setting unit 303 may cause the amorphous model generation unit 106 to generate a model of the target work based on the deformation constraint conditions stored in the storage unit 105, and use the generated model to check the supply state in the simulation.
 The simulation condition setting unit 107 sets the simulation conditions for the sensor information, work information, and environment information using the sensor information setting unit 301, the work information setting unit 302, and the environment information setting unit 303. By simulating the user's actual environment on the simulator and making it easier to tune parameters to that environment, the picking robot 20 can improve its picking success rate.
 The scene generation unit 108 causes the amorphous model generation unit 106 to generate models of the target work based on the deformation constraint conditions stored in the storage unit 105, and simulates a scene of the target works using the generated models and the simulation conditions set by the simulation condition setting unit 107. Whereas the scene information obtained by measurement with the sensor 101 includes information about the background of the target works, the scene simulated by the scene generation unit 108 contains only the target works. When simulating a scene, the generated models need not all be distinct; the same generated model may be reused. For example, when generating a scene with 50 models, the scene generation unit 108 may generate 10 model variations of the target work, replicate each variation 5 times, and thus bring the total number of models used for scene generation to 50. When the computational cost of model generation is large, this is considered effective in reducing that cost.
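 The replication scheme described above (for example, 10 variations copied 5 times each to obtain 50 models) can be sketched as follows; the function name and the assumption that the total count is divisible by the variation count are illustrative, not part of the disclosure.

```python
def build_scene_models(generate_model, n_variations, n_total):
    # Generate only n_variations distinct models (the expensive step),
    # then replicate them so the scene contains n_total instances.
    # Assumes n_total is a multiple of n_variations.
    variations = [generate_model() for _ in range(n_variations)]
    copies_each = n_total // n_variations
    return [m for m in variations for _ in range(copies_each)]
```

With 10 variations and a target of 50 instances, `generate_model` is invoked only 10 times rather than 50, which is where the computational saving comes from.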
 The data set generation unit 109 generates, based on the scene generated by the scene generation unit 108, a data set for adjusting the parameters that control the picking operation for the target work. Specifically, the data set is used to adjust the parameters that control the operation of the picking unit 112, which picks the target work. FIG. 10 is a block diagram showing a configuration example of the data set generation unit 109 according to the third embodiment. As shown in FIG. 10, the data set generation unit 109 includes a 2D data generation unit 401, a 3D data generation unit 402, and an annotation data generation unit 403.
 The 2D data generation unit 401 is a two-dimensional data generation unit that generates 2D data for the scene generated by the scene generation unit 108. It may generate grayscale images or RGB images.
 The 3D data generation unit 402 is a three-dimensional data generation unit that generates 3D data for the scene generated by the scene generation unit 108. It may generate three-dimensional point clouds or distance images.
 The annotation data generation unit 403 generates annotation data for the scene generated by the scene generation unit 108. It may generate labeling data colored by object type, the position and orientation information of each work appearing in the data, data to which type names are attached, and the like.
 Because the data generated by the data set generation unit 109 is simulation data, it may deviate from real images, for example in the amount of added noise. The data set generation unit 109 may therefore have a function for reproducing measurement noise. The data generated by the data set generation unit 109, that is, the data generated by the 2D data generation unit 401, the 3D data generation unit 402, and the annotation data generation unit 403, is collectively referred to as the data set. Because the data set generation unit 109 can automatically generate the data needed for parameter adjustment in the simulation, the picking robot 20 reduces the user's data collection workload.
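 As one hypothetical realization of the measurement-noise function mentioned above, Gaussian depth noise can be combined with random pixel dropout. The noise model and its parameters are assumptions; the disclosure does not specify a particular noise model.

```python
import random

def add_measurement_noise(depth_values, sigma=0.001, dropout_rate=0.01, rng=random):
    # Gaussian depth noise plus randomly dropped pixels, roughly mimicking
    # artifacts of a real 3D sensor measurement.
    noisy = []
    for d in depth_values:
        if rng.random() < dropout_rate:
            noisy.append(None)  # missing measurement
        else:
            noisy.append(d + rng.gauss(0.0, sigma))
    return noisy
```

Applying such a function to the output of the 3D data generation unit 402 would narrow the gap between the simulated data set and real sensor data.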
 The parameter adjustment unit 110 causes the amorphous model generation unit 106 to generate models of the target work based on the deformation constraint conditions stored in the storage unit 105, and automatically adjusts the picking parameters using the generated models and the data set generated by the data set generation unit 109. FIG. 11 is a block diagram showing a configuration example of the parameter adjustment unit 110 according to the third embodiment. As shown in FIG. 11, the parameter adjustment unit 110 includes a gripping parameter adjustment unit 501 and a recognition parameter adjustment unit 502.
 The gripping parameter adjustment unit 501 causes the amorphous model generation unit 106 to generate models of the target work based on the deformation constraint conditions stored in the storage unit 105, and uses the generated models to adjust the parameters of the robot hand provided in the picking unit 112. The gripping parameter adjustment unit 501 may target robot hands such as tweezer hands, parallel hands, and suction pads, and adjust the opening width, claw width, and other parameters of the robot hand so that the target work is easy to grip. The type of robot hand is expected to be determined by the user. In adjusting the gripping parameters, the gripping parameter adjustment unit 501 may adjust all or several gripping parameters simultaneously, or adjust them individually in a predetermined order. The following description mainly covers the case of adjusting the gripping parameters individually.
 The detailed configuration and operation of the gripping parameter adjustment unit 501 will now be described. FIG. 12 is a block diagram showing a configuration example of the gripping parameter adjustment unit 501 according to the third embodiment. As shown in FIG. 12, the gripping parameter adjustment unit 501 includes a gripping parameter adjustment range determination unit 601, a gripping parameter change unit 602, a model rotation unit 603, a grip evaluation unit 604, a gripping parameter value determination unit 605, and a gripping parameter adjustment end determination unit 606. The details of each component are described together with the operation of the gripping parameter adjustment unit 501. FIG. 13 is a flowchart showing the operation of the gripping parameter adjustment unit 501 according to the third embodiment.
 The gripping parameter adjustment range determination unit 601 causes the amorphous model generation unit 106 to generate a model of the target work based on the deformation constraint conditions stored in the storage unit 105, and determines the adjustment range of the gripping parameters based on the generated model (step S301). It may generate at least one model of the target work and determine the adjustment range and initial values of the gripping parameters based on, for example, the size of the generated model.
 The gripping parameter change unit 602 changes the type of gripping parameter to be adjusted (step S302).
 The model rotation unit 603 causes the amorphous model generation unit 106 to generate a model of the target work based on the deformation constraint conditions stored in the storage unit 105, and rotates the generated model (step S303). It may generate at least one model of the target work and rotate it randomly or in a predetermined order.
 The grip evaluation unit 604 performs grip evaluation while the model rotation unit 603 rotates the model (step S304). The grip evaluation unit 604 may fit a model of the robot hand to the model of the target work rotated by the model rotation unit 603 and compute a gripping-parameter evaluation function composed of two evaluation values: F_Ang, the frequency of deviation between the principal axis of the target work and the grip direction, and F_Col, the frequency of interference with the target work.
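 One plausible form of an evaluation function composed of F_Ang and F_Col is a weighted sum of the two frequencies. The weighted-sum form and the weights are assumptions; the disclosure states only that the function is composed of the two evaluation values.

```python
def grip_evaluation(n_trials, n_axis_deviations, n_collisions, w_ang=1.0, w_col=1.0):
    # F_Ang: frequency of deviation between the work principal axis and the
    # grip direction; F_Col: frequency of interference with the target work.
    # Lower values indicate a better gripping parameter setting.
    f_ang = n_axis_deviations / n_trials
    f_col = n_collisions / n_trials
    return w_ang * f_ang + w_col * f_col
```

The counts would be accumulated over the rotations applied by the model rotation unit 603 in steps S303 to S305.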
 If the model rotation by the model rotation unit 603 has not finished (step S305: No), the gripping parameter adjustment unit 501 returns to step S303 and continues the model rotation by the model rotation unit 603 and the grip evaluation by the grip evaluation unit 604. If the model rotation has finished (step S305: Yes), the gripping parameter adjustment unit 501 proceeds to step S306.
 The gripping parameter value determination unit 605 determines the value of the gripping parameter based on the evaluation result of the grip evaluation unit 604 (step S306). Specifically, it searches for the gripping parameter value that minimizes the evaluation function computed by the grip evaluation unit 604, and adopts the value found by the search as the adjusted parameter value.
 The gripping parameter adjustment end determination unit 606 determines whether the adjustment of all gripping parameters to be adjusted has finished (step S307). If not (step S307: No), it instructs each component to return to step S302 and repeat the operations described above. If the adjustment of all gripping parameters has finished (step S307: Yes), the operation of the gripping parameter adjustment unit 501 ends.
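 The loop of steps S301 to S307, which tunes each gripping parameter individually by minimizing its evaluation function, can be sketched as follows. The discrete candidate-value search is an assumption; the disclosure does not fix a particular search method.

```python
def adjust_grip_parameters(param_space, evaluate):
    # param_space: {parameter name: candidate values within the adjustment
    # range decided in step S301}. evaluate(name, value) returns the grip
    # evaluation averaged over model rotations (steps S303-S305). Parameters
    # are tuned individually, in order (steps S302, S306, S307).
    tuned = {}
    for name, candidates in param_space.items():
        tuned[name] = min(candidates, key=lambda v: evaluate(name, v))
    return tuned
```

Here the hypothetical `evaluate` callback stands in for the combined work of the model rotation unit 603 and the grip evaluation unit 604.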
 In this way, in the picking robot 20, the gripping parameter adjustment unit 501 automatically adjusts the gripping parameters. Using the generated models, the picking robot 20 can automatically adjust the robot-hand parameters so that the target work is easy to grip, reducing the user's parameter adjustment workload.
 Returning to the description of FIG. 11, the recognition parameter adjustment unit 502 causes the amorphous model generation unit 106 to generate models of the target work based on the deformation constraint conditions stored in the storage unit 105, and adjusts the recognition parameters of the object recognition method using the generated models, the data set generated by the data set generation unit 109, and the gripping parameters adjusted by the gripping parameter adjustment unit 501. The recognition parameter adjustment unit 502 may adjust the recognition parameters of a method that detects candidate grip positions from 2D or 3D data, the parameters of a deep-learning model for grip position detection, or the parameters of a deep-learning model used for auxiliary processing to improve the recognition rate, for example segmentation. The following description mainly covers adjusting the recognition parameters of a method that detects candidate grip positions from 2D data. In adjusting the recognition parameters, the recognition parameter adjustment unit 502 may adjust all or several recognition parameters simultaneously, or adjust them individually in a predetermined order; the following mainly covers the case of adjusting them individually.
 The detailed configuration and operation of the recognition parameter adjustment unit 502 will now be described. FIG. 14 is a block diagram showing a configuration example of the recognition parameter adjustment unit 502 according to the third embodiment. As shown in FIG. 14, the recognition parameter adjustment unit 502 includes a recognition parameter adjustment range determination unit 701, a recognition parameter change unit 702, a recognition trial unit 703, a recognition evaluation unit 704, a recognition parameter value determination unit 705, and a recognition parameter adjustment end determination unit 706. The details of each component are described together with the operation of the recognition parameter adjustment unit 502. FIG. 15 is a flowchart showing the operation of the recognition parameter adjustment unit 502 according to the third embodiment.
 The recognition parameter adjustment range determination unit 701 causes the amorphous model generation unit 106 to generate a model of the target work based on the deformation constraint conditions stored in the storage unit 105, and uses the generated model to determine the adjustment range of the recognition parameters (step S401). It may generate at least one model of the target work and determine the adjustment range and initial values of the recognition parameters based on, for example, the size of the generated model.
 The recognition parameter change unit 702 changes the type of recognition parameter to be adjusted (step S402).
 The recognition trial unit 703 performs recognition processing on the data generated by the data set generation unit 109 using the gripping parameters adjusted by the gripping parameter adjustment unit 501 (step S403). If the recognition processing has not finished (step S404: No), the recognition trial unit 703 returns to step S403 and continues the recognition processing; when it has finished (step S404: Yes), the recognition trial unit 703 outputs the recognition result to the recognition evaluation unit 704.
 The recognition evaluation unit 704 evaluates the recognition result of the recognition trial unit 703 based on the result acquired from it (step S405). The recognition evaluation unit 704 may compute a recognition-parameter evaluation function composed of three values: E_Pos, the deviation between the grip position recognized by the recognition trial unit 703 and the optimal grip position; E_Ang, the deviation between the principal axis of the target work and the grip direction; and E_Num, an evaluation value for the number of recognized works. The optimal grip position may be the center of gravity of the target work or a position predetermined by the user.
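 One plausible form of an evaluation function composed of E_Pos, E_Ang, and E_Num is a weighted sum, as sketched below. The weighted-sum form, the per-term definitions, and the weights are assumptions; the disclosure only names the three components.

```python
import math

def recognition_evaluation(grip_positions, optimal_positions, axis_angle_errors,
                           expected_count, w=(1.0, 1.0, 1.0)):
    # E_Pos: mean distance between detected and optimal grip positions.
    # E_Ang: mean deviation between work principal axis and grip direction.
    # E_Num: penalty for detecting fewer or more works than expected.
    n = len(grip_positions)
    e_pos = sum(math.dist(p, o)
                for p, o in zip(grip_positions, optimal_positions)) / n
    e_ang = sum(axis_angle_errors) / n
    e_num = abs(n - expected_count) / expected_count
    return w[0] * e_pos + w[1] * e_ang + w[2] * e_num
```

A perfect recognition result (every grip position optimal, no angular deviation, correct count) yields zero, so the recognition parameter value determination unit 705 would minimize this function.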
 The recognition parameter value determination unit 705 determines the value of the recognition parameter based on the evaluation result of the recognition evaluation unit 704 (step S406). Specifically, it searches for the recognition parameter value that minimizes the evaluation function computed by the recognition evaluation unit 704, and adopts the value found by the search as the adjusted parameter value.
 The recognition parameter adjustment end determination unit 706 determines whether the adjustment of all recognition parameters to be adjusted has finished (step S407). If not (step S407: No), it instructs each component to return to step S402 and repeat the operations described above. If the adjustment of all recognition parameters has finished (step S407: Yes), the operation of the recognition parameter adjustment unit 502 ends.
 In this way, in the picking robot 20, the recognition parameter adjustment unit 502 automatically adjusts the recognition parameters. Using the generated models and the data set, the picking robot 20 can automatically adjust the recognition parameters toward a higher recognition rate, reducing the user's parameter adjustment workload. The parameter adjustment unit 110 thus automatically adjusts both the gripping parameters and the recognition parameters; because the adjustment is automatic, no manual parameter tuning by the user is required, which reduces the user's workload.
 Returning to the description of FIG. 8, the storage unit 105 may store the picking parameters adjusted by the parameter adjustment unit 110.
 The recognition processing unit 111 performs object recognition processing on the data acquired by the sensor 101 based on the parameters adjusted by the parameter adjustment unit 110. It may perform object recognition processing on the data acquired by the sensor 101 based on the adjusted parameters stored in the storage unit 105 and detect candidate grip positions. The recognition processing unit 111 may also acquire the adjusted parameters directly from the parameter adjustment unit 110.
 The picking unit 112 includes a robot hand. Based on the recognition processing result from the recognition processing unit 111, the picking unit 112 performs a picking operation on the target work with the robot hand. Based on the recognition result, the picking unit 112 may repeatedly approach the gripping position with the highest gripping likelihood, pick the target work, and move it to a specified position.
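The repeated pick cycle described above can be sketched as follows; the candidate format (likelihood, pose) and the hand interface are assumptions for illustration, not the actual robot API.

```python
# Illustrative sketch of the picking cycle of the picking unit 112:
# approach the grasp candidate with the highest gripping likelihood,
# pick the target work, and move it to the specified position,
# repeating until recognition finds no more candidates. The candidate
# format (likelihood, pose) and the hand interface are hypothetical.

def run_picking(detect_candidates, hand, place_pose):
    """detect_candidates: () -> list of (likelihood, grasp_pose) tuples."""
    picked = 0
    while True:
        candidates = detect_candidates()     # recognition result from unit 111
        if not candidates:
            break                            # nothing left to pick
        _likelihood, pose = max(candidates)  # highest gripping likelihood first
        hand.approach(pose)
        hand.grasp()
        hand.move_to(place_pose)
        hand.release()
        picked += 1
    return picked
```

A real controller would also handle grasp failures and re-detection; the sketch keeps only the select-approach-pick-place loop stated in the description.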
 The picking robot 20 may also be equipped with sensors other than the sensor 101, such as a force sensor for force control, a proximity sensor, or a tactile sensor. The model generation device 10 may serve a plurality of picking robots 20, or a picking robot 20 having a plurality of picking units 112. By simulating the user's actual environment on the simulator and adjusting parameters using the generated data set and the generated model of the target work, the picking robot 20 can improve the picking success rate for the target work.
 The hardware configuration of the picking robot 20 is as follows. In the picking robot 20, the picking unit 112 is realized by a robot hand and a robot hand control unit. The simulation condition setting unit 107, the scene generation unit 108, the data set generation unit 109, the parameter adjustment unit 110, the recognition processing unit 111, and the robot hand control unit of the picking unit 112 are realized by processing circuitry. As with the model generation device 10, the processing circuitry may be a processor and memory that execute a program stored in the memory, or dedicated hardware.
 As described above, according to the present embodiment, the picking robot 20 automatically adjusts the picking parameters using the data set generated on the simulator and the generated model of the target work. As a result, the picking robot 20 can improve the picking success rate and reduce the trial-and-error period and cost involved in introducing an industrial robot and verifying it before operation.
 According to the present embodiment, the picking robot 20 can simulate the user's actual environment on the simulator and automatically generate the data needed for parameter adjustment using a work model generated without using the actual machine, reducing the user's data collection workload. Furthermore, because the picking robot 20 can automatically adjust parameters that yield a high recognition rate and make the target work easy to grip, it can reduce the user's parameter adjustment workload and improve the picking success rate for the target work.
 The configurations described in the above embodiment illustrate one example of the content of the present invention; they can be combined with other known techniques, and parts of the configurations can be omitted or modified without departing from the gist of the present invention.
 10 model generation device, 20 picking robot, 101 sensor, 102 work information acquisition unit, 103 reference model determination unit, 104 deformation constraint condition determination unit, 105 storage unit, 106 irregular model generation unit, 107 simulation condition setting unit, 108 scene generation unit, 109 data set generation unit, 110 parameter adjustment unit, 111 recognition processing unit, 112 picking unit, 201 individual model determination unit, 202 registration unit, 203 average shape generation unit, 204 shape element division unit, 205 individual difference category determination unit, 206 work principal axis determination unit, 207 primitive matching unit, 208 fitting model determination unit, 301 sensor information setting unit, 302 work information setting unit, 303 environment information setting unit, 401 2D data generation unit, 402 3D data generation unit, 403 annotation data generation unit, 501 gripping parameter adjustment unit, 502 recognition parameter adjustment unit, 601 gripping parameter adjustment range determination unit, 602 gripping parameter change unit, 603 model rotation unit, 604 gripping evaluation unit, 605 gripping parameter value determination unit, 606 gripping parameter adjustment end determination unit, 701 recognition parameter adjustment range determination unit, 702 recognition parameter change unit, 703 recognition trial unit, 704 recognition evaluation unit, 705 recognition parameter value determination unit, 706 recognition parameter adjustment end determination unit.

Claims (21)

  1.  A model generation device comprising:
     a sensor that acquires scene information including information on a target work for which a model is to be generated;
     a work information acquisition unit that acquires, from the scene information, work information that is information on the target work;
     a reference model determination unit that determines, based on the scene information and the work information, a reference model that serves as a basis for expressing irregular shapes;
     a deformation constraint condition determination unit that determines a deformation constraint condition, which is a condition for deforming the reference model, based on the reference model and a result of analyzing a work shape using the scene information and the work information; and
     an irregular model generation unit that determines deformation parameters of the reference model within the range of the deformation constraint condition and deforms the reference model according to the deformation parameters.
  2.  The model generation device according to claim 1, wherein the deformation constraint condition includes, as numerical conditions, at least one of an appearance frequency of unevenness of the target work, an appearance range of unevenness of the target work, and a curvature.
  3.  The model generation device according to claim 1 or 2, wherein the work shape analyzed by the deformation constraint condition determination unit includes a work contour of the target work.
  4.  The model generation device according to claim 3, wherein the irregular model generation unit uses information on the work contour when determining an amount of deformation of the reference model.
  5.  The model generation device according to claim 4, wherein the irregular model generation unit uses, as the work contour information, a work contour generated by combining two or more different work contours.
  6.  The model generation device according to claim 4 or 5, wherein the irregular model generation unit uses, as the work contour information, a work contour generated by enlarging or reducing the scale of a work contour.
  7.  The model generation device according to any one of claims 1 to 6, wherein the reference model determination unit comprises:
     an individual model determination unit that models each piece of work information acquired by the work information acquisition unit and determines an individual model for each piece of work information;
     a registration unit that integrates the individual models; and
     an average shape generation unit that generates an average shape model of the target work based on an integration result from the registration unit.
  8.  The model generation device according to claim 7, wherein the average shape model generated by the average shape generation unit is used as the reference model.
  9.  The model generation device according to claim 7 or 8, wherein the deformation constraint condition determination unit determines a deformation constraint condition for a contour based on numerical conditions and a shape analysis result of the reference model.
  10.  The model generation device according to any one of claims 7 to 9, wherein the reference model determination unit comprises an individual difference category determination unit that determines, for the average shape model, an individual difference category that ranks the irregularity of the works, and determines the reference model based on the result of the individual difference category and the average shape model.
  11.  The model generation device according to any one of claims 7 to 10, wherein the reference model determination unit comprises a work principal axis determination unit that determines, for the average shape model or the individual models, a work principal axis that is a principal axis of inertia, and determines the reference model based on the determined work principal axis and the average shape model.
  12.  The model generation device according to any one of claims 7 to 11, wherein the reference model determination unit comprises:
     a primitive matching unit that matches the average shape model against geometric shapes; and
     a fitting model determination unit that determines a reference model based on a matching result from the primitive matching unit,
     and determines the fitting model to be used as the reference model.
  13.  The model generation device according to any one of claims 7 to 12, wherein the reference model determination unit comprises a shape element division unit that divides the average shape model or the individual models into at least one shape element constituting each model, and determines the reference model for each divided shape element.
  14.  The model generation device according to any one of claims 1 to 13, wherein the deformation constraint condition determination unit uses, as the deformation parameters, a feature map of an intermediate layer of a neural network that has learned the shape of the target work.
  15.  The model generation device according to any one of claims 1 to 14, wherein the irregular model generation unit is a neural network that outputs the model of the target work, and an input of the neural network includes at least one of the scene information, the work information, the deformation constraint condition, and the deformation parameters.
  16.  A picking robot comprising:
     the model generation device according to any one of claims 1 to 15;
     a simulation condition setting unit that generates a model of the target work from the deformation constraint condition determined by the model generation device and, using the generated model, sets simulation conditions including at least one of sensor information and work information in a simulation;
     a scene generation unit that generates a model of the target work from the deformation constraint condition and simulates a scene of the target work using the generated model and the simulation conditions;
     a data set generation unit that generates, based on the scene, a data set for adjusting parameters that control a picking operation on the target work;
     a parameter adjustment unit that generates a model of the target work from the deformation constraint condition and automatically adjusts picking parameters using the generated model and the data set;
     a recognition processing unit that performs object recognition processing on data acquired by the sensor based on the parameters adjusted by the parameter adjustment unit; and
     a picking unit that performs a picking operation on the target work with a robot hand based on a recognition processing result from the recognition processing unit.
  17.  The picking robot according to claim 16, wherein the simulation condition setting unit comprises:
     a sensor information setting unit that sets at least one of specifications and installation information of the sensor in the simulation;
     a work information setting unit that sets the work information of the target work; and
     an environment information setting unit that sets environment information in the simulation,
     and sets simulation conditions for the sensor information, the work information, and the environment information.
  18.  The picking robot according to claim 16 or 17, wherein the data set generation unit comprises:
     a two-dimensional data generation unit that generates two-dimensional data for the scene generated by the scene generation unit;
     a three-dimensional data generation unit that generates three-dimensional data; and
     an annotation data generation unit that generates annotation data,
     and generates the data set based on the scene.
  19.  The picking robot according to any one of claims 16 to 18, wherein the parameter adjustment unit comprises:
     a gripping parameter adjustment unit that generates a model of the target work from the deformation constraint condition and adjusts gripping parameters for the robot hand using the generated model; and
     a recognition parameter adjustment unit that generates a model of the target work from the deformation constraint condition and adjusts recognition parameters of an object recognition method using the generated model, the data set, and the gripping parameters adjusted by the gripping parameter adjustment unit,
     and automatically adjusts the gripping parameters and the recognition parameters.
  20.  The picking robot according to claim 19, wherein the gripping parameter adjustment unit comprises:
     a gripping parameter adjustment range determination unit that generates a model of the target work from the deformation constraint condition and determines an adjustment range of the gripping parameters based on the generated model;
     a gripping parameter change unit that changes the type of gripping parameter to be adjusted;
     a model rotation unit that generates a model of the target work from the deformation constraint condition and rotates the generated model;
     a gripping evaluation unit that evaluates gripping while the model is rotated by the model rotation unit;
     a gripping parameter value determination unit that determines values of the gripping parameters based on an evaluation result from the gripping evaluation unit; and
     a gripping parameter adjustment end determination unit that determines whether adjustment of all gripping parameters subject to adjustment has been completed,
     and automatically adjusts the gripping parameters.
  21.  The picking robot according to claim 19 or 20, wherein the recognition parameter adjustment unit comprises:
     a recognition parameter adjustment range determination unit that generates a model of the target work from the deformation constraint condition and determines an adjustment range of the recognition parameters using the generated model;
     a recognition parameter change unit that changes the type of recognition parameter to be adjusted;
     a recognition trial unit that performs recognition processing on the data generated by the data set generation unit using the gripping parameters adjusted by the gripping parameter adjustment unit;
     a recognition evaluation unit that evaluates recognition results from the recognition trial unit;
     a recognition parameter value determination unit that determines values of the recognition parameters based on results from the recognition evaluation unit; and
     a recognition parameter adjustment end determination unit that determines whether adjustment of all recognition parameters subject to adjustment has been completed,
     and automatically adjusts the recognition parameters.
PCT/JP2019/048190 2019-12-10 2019-12-10 Model generation device and picking robot WO2021117117A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
PCT/JP2019/048190 WO2021117117A1 (en) 2019-12-10 2019-12-10 Model generation device and picking robot
CN201980102787.3A CN114766037A (en) 2019-12-10 2019-12-10 Model generation device and picking robot
DE112019007961.1T DE112019007961T5 (en) 2019-12-10 2019-12-10 MODEL GENERATOR AND RECORDING ROBOT
JP2021563478A JP7162760B2 (en) 2019-12-10 2019-12-10 Model generator and picking robot


Publications (1)

Publication Number Publication Date
WO2021117117A1 2021-06-17

Family

ID=76329939

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/048190 WO2021117117A1 (en) 2019-12-10 2019-12-10 Model generation device and picking robot

Country Status (4)

Country Link
JP (1) JP7162760B2 (en)
CN (1) CN114766037A (en)
DE (1) DE112019007961T5 (en)
WO (1) WO2021117117A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003132338A (en) * 2001-08-08 2003-05-09 Mitsubishi Electric Research Laboratories Inc Method for restoring nonrigid 3d shape and movement of object and system for restoring nonrigid 3d model of scene
JP2014164641A (en) * 2013-02-27 2014-09-08 Seiko Epson Corp Image processing apparatus, robot control system, robot, program, and image processing method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018144158A (en) 2017-03-03 2018-09-20 株式会社キーエンス Robot simulation device, robot simulation method, robot simulation program, computer-readable recording medium and recording device


Also Published As

Publication number Publication date
JP7162760B2 (en) 2022-10-28
JPWO2021117117A1 (en) 2021-06-17
CN114766037A (en) 2022-07-19
DE112019007961T5 (en) 2022-09-22


Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 19955798; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2021563478; Country of ref document: JP; Kind code of ref document: A)
122 Ep: PCT application non-entry in European phase (Ref document number: 19955798; Country of ref document: EP; Kind code of ref document: A1)