WO2020111269A1 - Dimensional data calculation device, product manufacturing device, information processing device, silhouette image generating device, and terminal device - Google Patents


Info

Publication number
WO2020111269A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
unit
dimension
shape
calculation
Prior art date
Application number
PCT/JP2019/046896
Other languages
French (fr)
Japanese (ja)
Inventor
佐藤 大輔
浩紀 八登
親史 有田
佳久 石橋
嵩士 中野
諒介 佐々木
亮介 田嶋
大田 佳宏
Original Assignee
Arithmer株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2018224376A external-priority patent/JP6531273B1/en
Priority claimed from JP2019082513A external-priority patent/JP6579353B1/en
Priority claimed from JP2019186653A external-priority patent/JP6792273B2/en
Application filed by Arithmer株式会社 filed Critical Arithmer株式会社
Publication of WO2020111269A1 publication Critical patent/WO2020111269A1/en
Priority to US17/333,008 priority Critical patent/US11922649B2/en

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis

Definitions

  • the present disclosure relates to a dimension data calculation device, a product manufacturing device, an information processing device, a silhouette image generation device, and a terminal device.
  • Patent Document 1 Japanese Patent Laid-Open No. 2017-018158
  • According to a first aspect, there is provided a dimension data calculation device including: an acquisition unit that acquires image data of a photographed object and full-length data of the object; an extraction unit that extracts shape data indicating the shape of the object from the image data; a conversion unit that converts the shape data based on the full-length data; and a calculation unit that reduces the dimensionality of the shape data converted by the conversion unit and calculates the dimension data of each part of the object using the reduced value of each dimension and a weighting coefficient optimized for each part of the object.
  • According to a second aspect, there is provided a product manufacturing apparatus that manufactures a product related to the shape of an object using the dimension data calculated by the dimension data calculation device of the first aspect.
  • According to a third aspect, there is provided an information processing device including: a reception unit that receives a silhouette image of an object; and an estimation unit that estimates the values of the shape parameters of the object from the received silhouette image by using an object engine that associates silhouette images of sample objects with the values of a predetermined number of shape parameters associated with the sample objects. The estimated shape parameter values of the object are associated with dimension data relating to an arbitrary part of the object.
  • According to a fourth aspect, there is provided an information processing device including: a reception unit that receives attribute data of an object; and an estimation unit that estimates the values of the shape parameters of the object from the received attribute data by using an object engine that associates attribute data of sample objects with the values of a predetermined number of shape parameters associated with the sample objects. The estimated shape parameter values of the object are associated with dimension data relating to an arbitrary part of the object.
  • According to a fifth aspect, there is provided a dimension data calculation device including: an acquisition unit that acquires image data of a photographed object and full-length data of the object; an extraction unit that extracts shape data indicating the shape of the object from the image data; a conversion unit that converts the shape data into a silhouette image based on the full-length data; an estimation unit that estimates the values of a predetermined number of shape parameters from the silhouette image by using an object engine that associates silhouette images of sample objects with the values of a predetermined number of shape parameters associated with the sample objects; and a calculation unit that calculates the dimension data of the object based on the estimated values of the predetermined number of shape parameters.
  • According to a sixth aspect, there is provided a product manufacturing apparatus that manufactures a product related to the shape of an object using at least one piece of dimension data calculated by the dimension data calculation device of the fifth aspect.
  • According to a seventh aspect, there is provided a dimension data calculation device including: an acquisition unit that acquires attribute data including at least one of full-length data and weight data of an object; and a calculation unit that calculates the dimension data of each part of the object by performing polynomial regression on the attribute data using coefficients learned by machine learning.
  • According to an eighth aspect, an acquisition unit acquires image data, including a depth map, in which an object is photographed, and the object region of the object is extracted using three-dimensional point cloud data generated from the depth map.
  • According to a ninth aspect, there is provided a dimension data calculation device including: an acquisition unit that acquires image data of a photographed object and full-length data of the object; an extraction unit that extracts shape data indicating the shape of the object from the image data; and a calculation unit that calculates the dimension data of each part of the object using the shape data. The image data includes a depth map, and the shape data extracted by the extraction unit is associated with the depth data of the object in the depth map.
  • According to a tenth aspect, there is provided a product manufacturing apparatus that manufactures a product related to the shape of an object using the dimension data calculated by the dimension data calculation device of the ninth aspect.
  • According to an eleventh aspect, there is provided a dimension data calculation device including: an acquisition unit that acquires image data of a photographed object and full-length data of the object; an extraction unit that extracts shape data indicating the shape of the object from the image data; an estimation unit that estimates the values of a predetermined number of shape parameters from a silhouette image of the object by using an object engine that associates silhouette images of sample objects with the values of a predetermined number of shape parameters associated with the sample objects; and a calculation unit that calculates the dimension data of the object based on the estimated values of the predetermined number of shape parameters. The image data includes a depth map, and the shape data is associated with the depth data of the object in the depth map.
  • According to a twelfth aspect, there is provided a product manufacturing apparatus that manufactures a product related to the shape of an object using the dimension data calculated by the dimension data calculation device of the eleventh aspect.
  • According to a thirteenth aspect, there is provided a terminal device that is connected to an information processing device that processes information about an object from image data of the object. The terminal device includes: an acquisition unit that acquires image data of the object; a determination unit that determines whether or not the object included in the image data is a pre-registered object; an output unit that displays the determination result of the determination unit; and a reception unit that receives an input as to whether or not to transmit the image data to the information processing device.
  • According to a fourteenth aspect, there is provided a dimension data calculation device including: a shape parameter acquisition unit that acquires the values of the shape parameters of an object; and a calculation unit that constructs three-dimensional mesh data of the object from the values of the shape parameters and calculates the dimension data of a predetermined part based on information on the vertices of the three-dimensional mesh data that form the associated predetermined part.
  • FIG. 17 is a sequence diagram showing an operation of the product manufacturing system 3001 of FIG. 16. The subsequent drawings are schematic diagrams showing examples of screens displayed on the terminal device 3010 of FIG. 16, a schematic diagram of the dimension data calculation system 4100 according to the fourth embodiment, flowcharts showing the operations of the learning device 4125 and of the dimension data calculation device 4020 of FIG. 20, and a schematic diagram showing the concept of the product manufacturing system 4001S according to the fourth embodiment.
  • FIG. 1 is a schematic diagram showing the configuration of the dimension data calculation device 1020 according to this embodiment.
  • the dimension data calculation device 1020 can be realized by any computer, and includes a storage unit 1021, an input/output unit 1022, a communication unit 1023, and a processing unit 1024.
  • the dimension data calculation device 1020 may be realized as hardware using an LSI (Large Scale Integration), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or the like.
  • the storage unit 1021 stores various kinds of information, and is realized by an arbitrary storage device such as a memory and a hard disk.
  • the storage unit 1021 stores the weighting factor necessary for executing the information processing described later in association with the length and weight of the target object.
  • the weighting factor is acquired in advance by performing machine learning from teacher data including attribute data, image data, and dimension data described later.
  • the input/output unit 1022 is realized by a keyboard, a mouse, a touch panel, etc., and inputs various information to the computer and outputs various information from the computer.
  • the communication unit 1023 is realized by an arbitrary network card or the like, and enables communication with a communication device on the network by wire or wirelessly.
  • the processing unit 1024 executes various types of information processing, and is realized by a processor such as a CPU or GPU and a memory.
  • the processing unit 1024 functions as the acquisition unit 1024A, the extraction unit 1024B, the conversion unit 1024C, and the calculation unit 1024D by reading the program stored in the storage unit 1021 into the CPU, GPU, or the like of the computer.
  • The acquisition unit 1024A acquires image data of a photographed object, the full-length data and weight data of the object, and the like. Here, the acquisition unit 1024A acquires a plurality of image data obtained by photographing the object from different directions.
  • The extraction unit 1024B extracts shape data indicating the shape of the object from the image data. Specifically, the extraction unit 1024B extracts the object region included in the image data by using a semantic segmentation algorithm (Mask R-CNN or the like) prepared for each type of object, and then extracts the shape data of the object from that region.
  • the semantic segmentation algorithm is constructed using teacher data in which the shape of the object is not specified.
  • The extraction unit 1024B extracts the shape data of the object from the object region by the GrabCut algorithm. This makes it possible to extract the shape of the object with high accuracy. Furthermore, the extraction unit 1024B may correct the image of the object specified by the GrabCut algorithm based on a color image of a specific portion of the object, making it possible to generate the shape data of the object with even higher accuracy. A sketch of this extraction follows.
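  • The sketch below shows how a coarse segmentation mask could seed OpenCV's GrabCut to refine the object's shape. It is a minimal illustration under assumed inputs (the function name, mask format, and iteration count are not taken from the patent):

        import cv2
        import numpy as np

        def refine_shape(image_bgr, seg_mask):
            # image_bgr: HxWx3 uint8 color image containing the object.
            # seg_mask:  HxW boolean array from a semantic segmentation
            #            model (e.g. Mask R-CNN) marking the object region.
            # Seed GrabCut: mask pixels become "probable foreground",
            # the rest "probable background".
            gc_mask = np.where(seg_mask, cv2.GC_PR_FGD,
                               cv2.GC_PR_BGD).astype(np.uint8)
            bgd_model = np.zeros((1, 65), np.float64)
            fgd_model = np.zeros((1, 65), np.float64)
            cv2.grabCut(image_bgr, gc_mask, None, bgd_model, fgd_model,
                        iterCount=5, mode=cv2.GC_INIT_WITH_MASK)
            # Keep definite and probable foreground as the object's shape.
            fg = (gc_mask == cv2.GC_FGD) | (gc_mask == cv2.GC_PR_FGD)
            return (fg * 255).astype(np.uint8)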
  • The conversion unit 1024C converts the shape data into a silhouette based on the full-length data, thereby standardizing the shape data; a sketch of such a rescale step follows.
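  • A minimal sketch of the standardization, assuming the silhouette is rescaled so that the object spans a fixed pixel height (the function name and target size are illustrative, not specified in the patent):

        import cv2

        def rescale_silhouette(silhouette, object_height_px,
                               target_height_px=256):
            # silhouette: HxW uint8 mask of the extracted object.
            # object_height_px: the object's height in pixels, which the
            # full-length data ties to its real-world total length.
            scale = target_height_px / object_height_px
            return cv2.resize(silhouette, None, fx=scale, fy=scale,
                              interpolation=cv2.INTER_NEAREST)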
  • the calculation unit 1024D uses the shape data converted by the conversion unit 1024C to calculate the dimension data of each part of the object. Specifically, the calculation unit 1024D reduces the dimension of the shape data converted by the conversion unit 1024C.
  • the dimension reduction here is realized by a method such as principal component analysis, particularly kernel principal component analysis (Kernel PCA) or linear discriminant analysis.
  • the calculation unit 1024D calculates the dimension data of each part of the object using the reduced value of each dimension and the weighting coefficient optimized for each part of the object.
  • The calculation unit 1024D linearly combines the values of each dimension obtained by the first dimension reduction with the weighting coefficients W1pi optimized for each part of the object to obtain the predetermined values Zpi.
  • the symbol p is the number of dimensions obtained by reduction and is a value of 10 or more.
  • The calculation unit 1024D then performs a second dimension reduction using the predetermined values Zpi and attribute data including at least the length and weight of the object, and calculates the dimension data of each part of the object based on the values of each dimension obtained by the second dimension reduction.
  • As many weighting coefficients W1pi as there are reduced dimensions are prepared for each measured part (i) of the object.
  • The calculation unit 1024D calculates the predetermined values Zpi using a linear combination, but it may calculate these values by a method other than a linear combination. Specifically, the calculation unit 1024D may generate quadratic features from the values of each dimension obtained by the dimension reduction and obtain the predetermined values by combining the quadratic features with the weighting coefficients optimized for each part of the object. A sketch of the reduction and combination follows.
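  • The first reduction and weighted combination can be sketched as follows with scikit-learn's Kernel PCA. The array shapes, component count, and weight matrix W1 are assumptions for illustration; the actual coefficients are obtained beforehand by machine learning as described above:

        import numpy as np
        from sklearn.decomposition import KernelPCA

        def predetermined_values(shape_data, W1, n_components=10):
            # shape_data: (n_samples, h*w) array of standardized shape data.
            # W1: (n_components, n_parts) weighting coefficients, assumed to
            #     have been optimized per part by machine learning beforehand.
            kpca = KernelPCA(n_components=n_components, kernel="rbf")
            reduced = kpca.fit_transform(shape_data)  # values of each dimension
            return reduced @ W1                       # Z, one column per part i

        # In practice the Kernel PCA would be fitted once on training data and
        # reused; a second reduction over [Z, full length, weight] then yields
        # the dimension data of each part as described above.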
  • FIG. 2 is a flowchart for explaining the operation of the dimension data calculation device 1020 according to this embodiment.
  • First, the dimension data calculation device 1020 acquires, via an external terminal device or the like, a plurality of image data obtained by photographing the entire object from different directions, together with the full-length data indicating the total length of the object (S1001).
  • the dimension data calculation device 1020 extracts shape data indicating the shape of each part of the object from each image data (S1002). Subsequently, the dimension data calculation device 1020 executes a rescale process for converting each shape data into a predetermined size based on the total length data (S1003).
  • Next, the dimension data calculation device 1020 combines the plurality of converted shape data to generate new shape data (hereinafter also referred to as calculation shape data). Specifically, as shown in FIG. 3, m pieces of shape data of h rows and w columns are combined to form an m × h × w data array, where m is the number of shape data (S1004). A sketch of this step follows.
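  • A minimal sketch of the S1004 combination, with stand-in data in place of real silhouettes (the variable names and array sizes are hypothetical):

        import numpy as np

        h, w = 256, 128
        # Stand-in for m rescaled silhouettes (e.g. front and side views).
        shape_data_list = [np.zeros((h, w), dtype=np.float32) for _ in range(2)]
        calc_shape_data = np.stack(shape_data_list, axis=0)  # (m, h, w)
        # One flattened feature vector per object for dimension reduction.
        feature_vector = calc_shape_data.reshape(-1)         # length m*h*w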
  • Then, the dimension data of each part is calculated (S1005 to S1008). Note that the symbol j is the total number of parts for which dimension data is to be calculated.
  • the dimension data calculation device 1020 includes the acquisition unit 1024A, the extraction unit 1024B, the conversion unit 1024C, and the calculation unit 1024D.
  • the acquisition unit 1024A acquires image data of a captured object and full-length data of the object.
  • the extraction unit 1024B extracts shape data indicating the shape of the object from the image data.
  • the conversion unit 1024C converts the shape data based on the full length data to form a silhouette.
  • the calculation unit 1024D uses the shape data converted by the conversion unit 1024C to calculate the dimension data of each part of the object.
  • the dimension data calculation device 1020 calculates the dimension data of each part of the object using the image data and the full length data, it is possible to provide highly accurate dimension data. Further, since the dimension data calculation device 1020 can process many pieces of image data and full length data at once, it is possible to highly accurately provide many pieces of dimension data.
  • the dimension data of each part of a living thing as an object can be calculated with high accuracy. Further, it is possible to calculate with high accuracy the dimensional data of each part of an arbitrary object such as a car or various luggage as the object. Further, by incorporating the dimension data calculation device into a product manufacturing device that manufactures various products, it becomes possible to manufacture a product that conforms to the shape of the target object.
  • the acquisition unit 1024A acquires a plurality of image data obtained by photographing the object from different directions. With such a configuration, the accuracy of the dimensional data can be improved.
  • the calculation unit 1024D reduces the dimension of the shape data converted by the conversion unit 1024C. Then, the calculation unit 1024D calculates the dimension data of each part of the object using the reduced value of each dimension and the weighting coefficient W1pi optimized for each part of the object. With such a configuration, it is possible to improve the accuracy of the dimensional data while suppressing the calculation load.
  • The calculation unit 1024D linearly combines the reduced value of each dimension with the weighting coefficient W1pi optimized for the i-th part of the object to obtain the predetermined value Zi. Further, the calculation unit 1024D executes a second dimension reduction using the predetermined value Zi and attribute data including at least the length and weight of the object, and calculates the i-th dimension data of the object. With such a configuration, it is possible to further improve the accuracy of the dimension data while suppressing the calculation load. Instead of the linear combination, the calculation unit 1024D may generate quadratic features from the values of each dimension obtained by the dimension reduction and obtain the predetermined value by combining the quadratic features with the weighting coefficients optimized for each part of the object.
  • The extraction unit 1024B extracts the object region included in the image data by using a semantic segmentation algorithm constructed using teacher data prepared for each type of object, and extracts the shape data of the object from that region.
  • The extraction unit 1024B extracts the shape data of the object from the object region by the GrabCut algorithm. With such a configuration, the accuracy of the dimension data can be further improved.
  • The extraction unit 1024B may correct the image of the object extracted by the GrabCut algorithm based on a color image of a specific portion in the image data to generate new shape data. With this, the accuracy of the dimension data can be further improved. For example, when the object is a person, the hands and the back may be set as the specific portions; correcting the image based on the color images of these specific portions makes it possible to obtain the shape data of the person with high accuracy.
  • In the above description, the acquisition unit 1024A acquires a plurality of image data obtained by photographing the object from different directions, but a plurality of image data is not necessarily required; the dimension data of each part can be calculated even from a single image of the object.
  • A measuring device that can also acquire depth data may be applied, and a depth map having depth data for each pixel may be constructed based on the acquired depth data.
  • the image data that can be acquired by the acquisition unit 1024A can be RGB-D (Red, Green, Blue, Depth) data.
  • the image data can include a depth map in addition to the RGB image data that can be acquired by a normal monocular camera.
  • One example of a depth data measuring device is a stereo camera.
  • the “stereo camera” refers to an imaging device of any form capable of simultaneously capturing an object from a plurality of different directions and reproducing binocular parallax to form a depth map.
  • a depth map may be configured by obtaining depth data using a LiDAR (Light Detection and Ranging) device.
  • In the above description, the semantic segmentation algorithm and/or the GrabCut algorithm are adopted to obtain the shape data of the object. Additionally or alternatively, for example, when a stereo camera is applied, the depth map obtained from the stereo camera can be used to associate the depth data of the object with the shape data of the object. This makes it possible to generate the shape data of the object with even higher accuracy.
  • In this case, the extraction unit 1024B extracts, based on the depth map acquired by the acquisition unit 1024A, the object region, that is, the part of the image data in which the object is captured. For example, the object region is extracted by removing from the depth map the regions whose depth data is not within a predetermined range. In the extracted object region, the shape data is associated with the object's depth data on a pixel-by-pixel basis.
  • The conversion unit 1024C converts the shape data based on the full-length data. In addition, based on the depth data of the object region as well as the full-length data, the conversion unit 1024C converts the shape data into monochrome image data to generate a "gradation silhouette image" (new shape data), described below.
  • The generated gradation silhouette image is not simple black-and-white binarized data but a multi-tone monochrome image in which each pixel has a brightness value from 0 ("black") to 1 ("white") based on the depth data. That is, the gradation silhouette image data is associated with the depth data and carries a larger amount of information about the shape of the object.
  • the gradation silhouette image data is standardized by full length data.
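  • A minimal sketch of deriving such a gradation silhouette from a depth map (the depth range, units, and brightness mapping are assumptions; the patent does not specify them):

        import numpy as np

        def gradation_silhouette(depth_map, near=0.5, far=3.0):
            # depth_map: HxW array of per-pixel depth (assumed unit: meters).
            # Pixels outside [near, far] are background (brightness 0); object
            # pixels get a brightness in (0, 1] derived from their depth, so
            # the silhouette is multi-tone rather than binarized.
            mask = (depth_map >= near) & (depth_map <= far)
            silhouette = np.zeros_like(depth_map, dtype=np.float32)
            silhouette[mask] = 1.0 - (depth_map[mask] - near) / (far - near)
            return silhouette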
  • By extracting the object region based on a depth map constructed by the acquisition unit 1024A using any device capable of measuring depth data, the shape data of the object can be extracted with higher accuracy. Further, since the gradation silhouette image data (the converted shape data) is associated with the depth data of the object, it carries a larger amount of information about the object's shape, and the calculation unit 1024D can calculate the dimension data of each part of the object with higher accuracy.
  • the calculation unit 1024D reduces the dimension of the gradation silhouette image data (shape data) converted by the conversion unit 1024C.
  • In this case, the number of dimensions obtained by the first dimension reduction is about ten times larger than for a silhouette image of binarized data.
  • Weighting coefficients are prepared for each measured part (i) of the object according to the number of reduced dimensions.
  • The gradation silhouette image is described here as distinct from a simple silhouette image, but in other embodiments and modified examples it may simply be called a silhouette image without distinguishing the two.
  • the calculation unit 1024D executes dimension reduction twice, but such processing is not always necessary.
  • the calculation unit 1024D may calculate the dimension data of each part of the object from the value of each dimension obtained by executing the dimension reduction once.
  • the dimension data calculation device 1020 may calculate the dimension data without reducing the dimension of the shape data.
  • In the above description, the extraction unit 1024B extracts the object region included in the image data by using a semantic segmentation algorithm constructed using teacher data in which the shape of the object is not specified, but it is not necessary to use such teacher data.
  • a semantic segmentation algorithm constructed using teacher data in which the shape of the object is specified may be used.
  • FIG. 4 is a schematic diagram showing the concept of the product manufacturing system 1001 according to this embodiment.
  • The product manufacturing system 1001 is a system for manufacturing a desired product 1006, and includes a dimension data calculation device 1020 capable of communicating with a terminal device 1010 owned by a user 1005, and a product manufacturing device 1030.
  • In FIG. 4, as an example, the concept is shown for a case where the object 1007 is a person and the product 1006 is a chair.
  • the object 1007 and the product 1006 of the product manufacturing system according to the present embodiment are not limited to these.
  • the terminal device 1010 can be realized by a so-called smart device.
  • The terminal device 1010 provides various functions when a user program is installed on the smart device.
  • the terminal device 1010 generates image data captured by the user 1005.
  • the terminal device 1010 may have a stereo camera function of simultaneously capturing an object from a plurality of different directions and reproducing binocular parallax.
  • the image data is not limited to that captured by the terminal device 1010, and, for example, data captured using a stereo camera installed in a store may be used.
  • the terminal device 1010 accepts input of attribute data indicating the attribute of the target object 1007.
  • Examples of attribute data include the total length, weight, and elapsed time since the object 1007 came into existence (including age).
  • the terminal device 1010 has a communication function, and executes transmission and reception of various information with the dimension data calculation device 1020 and the product manufacturing device 1030.
  • the dimension data calculation device 1020 can be realized by an arbitrary computer.
  • the storage unit 1021 of the dimension data calculation device 1020 stores the information transmitted from the terminal device 1010 in association with the identification information that identifies the user 1005 of the terminal device 1010.
  • the storage unit 1021 also stores parameters and the like necessary for executing information processing described below.
  • the storage unit 1021 stores the weighting factor W1pi necessary for executing the information processing described later in association with the item of the attribute of the target object 1007 and the like.
  • the processing unit 1024 of the dimension data calculation device 1020 functions as the acquisition unit 1024A, the extraction unit 1024B, the conversion unit 1024C, and the calculation unit 1024D, as described above.
  • the acquisition unit 1024A acquires the image data captured by the user 1005 and the attribute data of the target object 1007.
  • the extraction unit 1024B also extracts shape data indicating the shape of the object 1007 from the image data. For example, when “person” is set in advance as the type of object, a semantic segmentation algorithm is constructed using teacher data for identifying a person.
  • The extraction unit 1024B corrects the image of the object 1007 specified by the GrabCut algorithm based on a color image of a specific portion of the object 1007, thereby generating the shape data of the object 1007 with even higher accuracy.
  • the conversion unit 1024C converts the shape data based on the full length data to make a silhouette.
  • the calculation unit 1024D calculates the dimension data of each part of the user 1005 using the shape data converted by the conversion unit 1024C.
  • the calculation unit 1024D linearly combines the reduced value of each dimension and the weighting coefficient W1pi optimized for each part of the object 1007 to obtain the predetermined value Z1i.
  • the calculation unit 1024D reduces the dimension using the predetermined value Z1i and the attribute data of the object 1007, and calculates the dimension data of each part of the object 1007 based on the reduced value of each dimension.
  • the product manufacturing apparatus 1030 is a manufacturing apparatus that manufactures a desired product related to the shape of the object 1007 using the dimension data calculated by the dimension data calculation apparatus 1020. Note that the product manufacturing apparatus 1030 can employ any device that can automatically manufacture and process a product, and can be realized by, for example, a three-dimensional printer.
  • FIG. 5 is a sequence diagram for explaining the operation of the product manufacturing system 1001 according to this embodiment.
  • FIGS. 6 and 7 are schematic diagrams showing screen transitions of the terminal device 1010. First, the entire object 1007 is photographed multiple times via the terminal device 1010 so that the object 1007 is captured from different directions, and a plurality of image data of the object 1007 is generated (T1001). Here, front and side pictures as shown in FIGS. 6 and 7 are taken.
  • the user 1005 inputs the attribute data indicating the attribute of the object 1007 to the terminal device 1010 (T1002).
  • As the attribute data, the full-length data, weight data, elapsed time data (including age), and the like of the object 1007 are input.
  • the plurality of image data and attribute data are transmitted from the terminal device 1010 to the dimension data calculation device 1020.
  • When the dimension data calculation device 1020 receives the plurality of image data and the attribute data from the terminal device 1010, it calculates the dimension data of each part of the object 1007 using these data (T1003).
  • the terminal device 1010 displays the dimension data on the screen according to the setting.
  • the product manufacturing apparatus 1030 manufactures the desired product 1006 based on the dimension data calculated by the dimension data calculation apparatus 1020 (T1004).
  • As described above, the product manufacturing system 1001 includes the dimension data calculation device 1020, capable of communicating with the terminal device 1010 owned by the user 1005, and the product manufacturing apparatus 1030.
  • the terminal device 1010 (imaging device) captures a plurality of images of the object 1007.
  • the dimension data calculation device 1020 includes an acquisition unit 1024A, an extraction unit 1024B, a conversion unit 1024C, and a calculation unit 1024D.
  • the acquisition unit 1024A acquires the image data of the object 1007 from the terminal device 1010 together with the full length data of the object 1007.
  • the extraction unit 1024B extracts shape data indicating the shape of the object 1007 from the image data.
  • the conversion unit 1024C converts the shape data based on the full length data to form a silhouette.
  • the calculation unit 1024D calculates the dimension data of each part of the object 1007 using the shape data converted by the conversion unit 1024C.
  • the product manufacturing apparatus 1030 manufactures the product 1006 using the dimension data calculated by the calculation unit 1024D.
  • With this configuration, the dimension data calculation device 1020 calculates the dimensions of each part of the object 1007 with high accuracy, and thus a desired product related to the shape of the object 1007 can be provided.
  • the product manufacturing system 1001 can manufacture a model of an organ by measuring the shapes of various organs such as the heart.
  • various healthcare products and the like can be manufactured by measuring the waist shape of a person.
  • a person's figure product can be manufactured from the person's shape.
  • a chair or the like suitable for a person can be manufactured from the shape of the person.
  • car toys can be manufactured from car shapes.
  • a diorama or the like can be manufactured from an arbitrary landscape painting.
  • In the above description, the dimension data calculation device 1020 and the product manufacturing device 1030 are described as separate devices, but they may be configured integrally.
  • FIG. 8 is a schematic diagram showing the configuration of the dimension data calculation device 2120 according to this embodiment.
  • the dimension data calculation device 2120 can be realized by an arbitrary computer and includes a storage unit 2121, an input/output unit 2122, a communication unit 2123, and a processing unit 2124.
  • the dimension data calculation device 2120 may be realized as hardware using an LSI (Large Scale Integration), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or the like.
  • the storage unit 2121 stores various kinds of information, and is realized by an arbitrary storage device such as a memory and a hard disk.
  • the storage unit 2121 stores the weighting factor Wri necessary for executing the information processing described later in association with the length and weight of the target object.
  • the weighting factor is acquired in advance by performing machine learning from teacher data including attribute data and size data described later.
  • the input/output unit 2122 has the same configuration and function as the input/output unit 1022 described above.
  • the communication unit 2123 has the same configuration and function as the communication unit 1023 described above.
  • the processing unit 2124 executes various types of information processing, and is realized by a processor such as a CPU or GPU and a memory.
  • the processing unit 2124 functions as the acquisition unit 2124A and the calculation unit 2124D by reading the program stored in the storage unit 2121 into the CPU, GPU, or the like of the computer.
  • the acquisition unit 2124A acquires the attribute data Dzr (r is the number of elements of the attribute data) including at least one of the full length data, the weight data, and the elapsed time data (including age) of the object.
  • the calculation unit 2124D calculates the dimension data of each part of the object using the attribute data acquired by the acquisition unit 2124A. Specifically, the calculation unit 2124D calculates the dimension data of each part of the target object by performing a quadratic regression on the attribute data using the weight coefficient Wsi that has been machine-learned.
  • the symbol s is the number of elements used for the calculation obtained from the attribute data.
  • The elements obtained from the attribute data are also referred to as primary (first-order) terms.
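  • As an illustration of the quadratic regression described above, the following scikit-learn sketch fits and applies such a model. The attribute values and measured dimensions are invented for illustration; in the device, the weight coefficients Wsi are learned in advance from teacher data:

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.preprocessing import PolynomialFeatures

        # Invented teacher data: [height cm, weight kg, age] per row, and
        # the measured dimension of one part i (e.g. shoulder width in cm).
        X_train = np.array([[170.0, 65.0, 30.0],
                            [160.0, 50.0, 25.0],
                            [180.0, 80.0, 40.0]])
        y_train = np.array([42.0, 38.0, 46.0])

        # Expand the r attribute elements into s quadratic terms, then fit
        # the weight coefficients by least squares.
        poly = PolynomialFeatures(degree=2, include_bias=True)
        model = LinearRegression().fit(poly.fit_transform(X_train), y_train)

        # Dimension estimate for a new object's attribute data.
        print(model.predict(poly.transform([[175.0, 70.0, 33.0]])))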
  • the dimension data calculation device 2120 includes the acquisition unit 2124A and the calculation unit 2124D.
  • the acquisition unit 2124A acquires the attribute data including at least one of the full length data, the weight data, and the elapsed time data of the target object.
  • the calculation unit 2124D calculates the dimension data of each part of the object using the attribute data.
  • Since the dimension data calculation device 2120 calculates the dimension data of each part of the object using the attribute data, it is possible to provide highly accurate dimension data. Specifically, the calculation unit 2124D performs quadratic regression on the attribute data using the machine-learned coefficients to calculate the dimension data of each part of the object with high accuracy. Further, since the dimension data calculation device 2120 can process a large number of data at once, it is possible to provide a large number of dimension data at high speed.
  • the dimension data of each part of the living thing can be calculated with high accuracy. Further, by incorporating the dimension data calculation device 2120 into a product manufacturing device that manufactures various products, it is possible to manufacture a product that conforms to the shape of the target object.
  • In the above description, the calculation unit 2124D calculates the dimension data of each part of the object by performing quadratic regression on the attribute data, but the calculation by the calculation unit 2124D is not limited to this.
  • For example, the calculation unit 2124D may obtain the dimension data by linearly combining the attribute data.
  • FIG. 10 is a schematic diagram showing the concept of the product manufacturing system 2001S according to this embodiment.
  • the dimension data calculation device 2120 according to the present embodiment can also be applied to the product manufacturing system 2001S, similarly to the dimension data calculation device 1020 according to the first embodiment.
  • the terminal device 2010S only needs to accept the input of attribute data indicating the attribute of the target object 2007.
  • Examples of attribute data include the total length, weight, and elapsed time since the object 2007 came into existence (including age).
  • the processing unit 2124 of the dimension data calculation device 2120 functions as the acquisition unit 2124A and the calculation unit 2124D.
  • the calculation unit 2124D calculates the dimension data of each part of the object 2007 using the attribute data acquired by the acquisition unit 2124A. Specifically, the calculation unit 2124D calculates the dimension data of each part of the object by performing a quadratic regression on the attribute data using the weight coefficient Wsi that has been machine-learned.
  • Also in this case, the dimension data calculation device 2120 calculates the dimensions of each part of the object 2007 with high accuracy, so that a desired product related to the shape of the object 2007 can be provided.
  • the product manufacturing system 2001S according to the second embodiment can exhibit the same effects as the product manufacturing system 1001 according to the first embodiment.
  • a dimension data calculation system according to an embodiment of the information processing apparatus, the information processing method, the product manufacturing apparatus, and the dimension data calculation apparatus of the present invention will be described below with reference to the accompanying drawings.
  • the information processing device and the dimension data calculation device are implemented as part of the dimension data calculation system.
  • In the following description, a matrix Θ may be used to represent a set of shape parameters, and an element θ may be used to represent the elements of the matrix Θ.
  • FIG. 11 is a schematic diagram showing the configuration of the dimension data calculation system 3100 according to this embodiment.
  • the dimension data calculation system 3100 includes a dimension data calculation device 3020 and a learning device 3025.
  • the dimension data calculation device 3020 and the learning device 3025 can be realized by an arbitrary computer.
  • the dimension data calculation device 3020 includes a storage unit 3021, an input/output unit 3022, a communication unit 3023, and a processing unit 3024.
  • the learning device 3025 also includes a storage unit 3026 and a processing unit 3027.
  • the dimension data calculation device 3020 and the learning device 3025 may be realized as hardware using an LSI (Large Scale Integration), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or the like.
  • Each of the storage units 3021 and 3026 stores various kinds of information, and is realized by an arbitrary storage device such as a memory and a hard disk.
  • the storage unit 3021 stores various data including the target engine 3021A, programs, information, and the like in order for the processing unit 3024 to execute information processing regarding dimension data calculation.
  • the storage unit 3026 also stores training data used in the learning stage to generate the target engine 3021A.
  • the input/output unit 3022 is realized by a keyboard, a mouse, a touch panel, etc., and inputs various information to the computer and outputs various information from the computer.
  • the communication unit 3023 is realized by a network interface such as an arbitrary network card and enables communication with a communication device on the network by wire or wirelessly.
  • Each of the processing units 3024 and 3027 is realized by a processor such as a CPU (Central Processing Unit) and/or a GPU (Graphics Processing Unit) and a memory in order to execute various information processing.
  • The processing unit 3024 functions as an acquisition unit 3024A, an extraction unit 3024B, a conversion unit 3024C, an estimation unit 3024D, and a calculation unit 3024E when the program stored in the storage unit 3021 is read by the CPU, GPU, or the like of the computer.
  • the processing unit 3027 functions as the preprocessing unit 3027A and the learning unit 3027B by the CPU, GPU, etc. of the computer reading the program stored in the storage unit 3026.
  • the acquisition unit 3024A acquires the image data of the object, and the attribute data such as the total length data and the weight data of the object.
  • the acquisition unit 3024A acquires, for example, a plurality of pieces of image data obtained by shooting an object from a plurality of different directions with an imaging device.
  • The extraction unit 3024B extracts shape data indicating the shape of the object from the image data. Specifically, the extraction unit 3024B extracts the object region included in the image data by using a semantic segmentation algorithm (Mask R-CNN or the like) prepared for each type of object, and extracts the shape data of the object from that region.
  • the semantic segmentation algorithm can be constructed using training data in which the shape of the object is not specified.
  • The extraction unit 3024B extracts the shape data of the object from the object region by the GrabCut algorithm. This makes it possible to extract the shape of the object with high accuracy.
  • The extraction unit 3024B may separate the object from the background image by correcting the image of the object specified by the GrabCut algorithm based on a color image of a specific portion of the object. This makes it possible to generate the shape data of the object with even higher accuracy.
  • the conversion unit 3024C converts the shape data into silhouettes based on the full length data. That is, the shape data of the object is converted to generate a silhouette image of the object. Thereby, the shape data is standardized.
  • the conversion unit 3024C also functions as a reception unit for inputting the generated silhouette image to the estimation unit 3024D.
  • the estimation unit 3024D estimates the values of a predetermined number of shape parameters from the silhouette image.
  • the object engine 3021A is used for the estimation.
  • The values of the predetermined number of shape parameters of the object estimated by the estimation unit 3024D are associated with dimension data relating to an arbitrary part of the object.
  • The calculation unit 3024E calculates, from the values of the predetermined number of shape parameters estimated by the estimation unit 3024D, the dimension data of the object associated with those shape parameter values. Specifically, the calculation unit 3024E constructs three-dimensional data of a plurality of vertices of the object from the estimated shape parameter values, and then calculates dimension data between any two vertices of the object based on the three-dimensional data.
  • the preprocessing unit 3027A carries out various preprocessing for learning.
  • the pre-processing unit 3027A specifies a predetermined number of shape parameters through feature extraction of the three-dimensional data of the sample object by dimension reduction. Also, a predetermined number (dimension) of shape parameter values is obtained for each sample object. The value of the shape parameter of the sample object is stored in the storage unit 3026 as training data.
  • Further, the preprocessing unit 3027A virtually constructs a three-dimensional object of the sample object in a three-dimensional space based on the three-dimensional data of the sample object, and then generates a silhouette image of the sample object by projecting the three-dimensional object from a predetermined direction using a projection device virtually provided in the three-dimensional space.
  • Data of the generated silhouette image of the sample object is stored in the storage unit 3026 as training data.
  • the learning unit 3027B learns to associate the relationship between the silhouette image of the sample object and the values of the predetermined number of shape parameters associated with the sample object. As a result of the learning, the target engine 3021A is generated.
  • the generated object engine 3021A can be held in the form of an electronic file.
  • When the dimension data calculation device 3020 calculates the dimension data of an object, the object engine 3021A is stored in the storage unit 3021 and referred to by the estimation unit 3024D.
  • FIG. 12 is a flowchart showing the operation (S3010) of the learning device 3025, which generates the target engine 3021A based on the sample target data.
  • FIG. 13 is a flowchart showing the operation of the dimension data calculation device 3020, which calculates the dimension data of an object based on image data of the object.
  • data of the sample target is prepared and stored in the storage unit 3026 (S3011).
  • In one example, the prepared data covers 400 sample objects, with 5,000 three-dimensional data points for each sample object.
  • the three-dimensional data includes three-dimensional coordinate data of the vertices of the sample object.
  • The three-dimensional data may also include mesh data, such as the vertex information of each mesh forming the three-dimensional object and the normal direction of each vertex, as well as attribute data such as full-length data, weight data, and elapsed time data (including age).
  • the vertex number is associated with the three-dimensional data of the sample object.
  • three-dimensional data of 5,000 vertices are associated with vertex numbers #1 to #5,000 for each sample object.
  • All or part of the vertex numbers are associated with information on parts of the object. For example, when the object is a "person", vertex number #20 is associated with the "apex of the head"; similarly, vertex number #313 is associated with the "acromion of the left shoulder" and vertex number #521 with the "acromion of the right shoulder".
  • the preprocessing unit 3027A performs feature conversion into shape parameters by dimension reduction (S3012). Specifically, for each sample object, feature extraction is performed by dimension reduction of the three-dimensional data of the sample object. As a result, a predetermined number (number of dimensions) of shape parameters are obtained. In one example, the dimensionality of the shape parameter is 30. Dimension reduction is realized by methods such as principal component analysis and Random Projection.
  • the preprocessing unit 3027A converts the three-dimensional data for each sample object into a predetermined number of shape parameter values using the projection matrix of the principal component analysis. This makes it possible to remove noise from the three-dimensional data of the sample object and compress the three-dimensional data while maintaining related characteristic information.
  • In the following example, each sample object includes the three-dimensional coordinate data of 5,000 vertices, and this data is converted into a characterization by 30-dimensional shape parameters.
  • The vertex coordinate matrix of [400 rows, 15,000 columns (5,000 × 3)] representing the data of the 400 sample objects is defined as a matrix X.
  • a matrix W is a projection matrix of [15,000 rows, 30 columns] generated by the principal component analysis.
  • The shape parameter matrix Θ can then be calculated from the following equation: Θ = XW. The product of the [400, 15,000] matrix X and the [15,000, 30] projection matrix W yields the [400, 30] shape parameter matrix Θ.
  • In this way, the 15,000-dimensional data of each of the 400 sample objects is converted into the 30-dimensional principal component shape parameters (θ1, ..., θ30).
  • Note that, for each shape parameter θi, the average of the 400 values (θ1,i, ..., θ400,i) is zero.
  • Next, the preprocessing unit 3027A expands the data set of the shape parameters included in the shape parameter matrix Θ using random numbers (S3013).
  • Specifically, the 400 data sets (θi,1, ..., θi,30 (1 ≤ i ≤ 400)) are expanded to 10,000 extended data sets (θj,1, ..., θj,30 (1 ≤ j ≤ 10,000)) of shape parameters.
  • Data expansion is performed using random numbers having a normal distribution.
  • In one example, the expanded data set has a normal distribution with a variance of 3σ for each shape parameter value.
  • the three-dimensional data of the extension data set can be constructed.
  • Here, the [10,000 rows, 30 columns] extended shape parameter matrix representing the 10,000 extended data sets is denoted by Θ′, with elements θj,k (1 ≤ j ≤ 10,000, 1 ≤ k ≤ 30).
  • The vertex coordinate matrix X′ representing the three-dimensional data of the 10,000 sample objects is obtained by multiplying the extended shape parameter matrix Θ′ from the right by the [30 rows, 15,000 columns] transposed matrix Wᵀ of the projection matrix W: X′ = Θ′Wᵀ. In this way, 5,000 (15,000/3) three-dimensional data points can be obtained for each of the sample objects expanded to 10,000. A sketch of this pipeline follows.
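  • The projection Θ = XW, the random-number expansion (S3013), and the reconstruction X′ = Θ′Wᵀ can be sketched in NumPy as follows. The stand-in data and the exact augmentation spread are assumptions; the patent only states that normally distributed random numbers are used:

        import numpy as np

        rng = np.random.default_rng(0)

        # Stand-in vertex coordinate matrix X: [400 rows, 15,000 columns]
        # (5,000 vertices x 3 coordinates per sample object).
        X = rng.standard_normal((400, 15_000))
        mean = X.mean(axis=0)

        # Projection matrix W [15,000 rows, 30 columns]: the top-30 right
        # singular vectors of the centered data (principal component axes).
        _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
        W = Vt[:30].T

        theta = (X - mean) @ W      # shape parameter matrix, Θ = XW (centered)

        # S3013: expand to 10,000 data sets with normally distributed noise
        # (the spread used here is illustrative, not the patent's value).
        idx = rng.integers(0, 400, size=10_000)
        theta_ext = theta[idx] + rng.normal(scale=theta.std(axis=0),
                                            size=(10_000, 30))

        # Reconstruct vertex coordinates: X' = Θ'Wᵀ (adding back the mean).
        X_ext = theta_ext @ W.T + mean      # (10,000, 15,000)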
  • After the vertex coordinate matrix X′ is obtained as a result of S3013, the preprocessing unit 3027A generates each silhouette image based on the expanded three-dimensional data of the sample objects (S3014).
  • Specifically, a three-dimensional object of the sample object is virtually constructed from the 5,000 three-dimensional data points in a three-dimensional space. Then, the three-dimensional object is projected using a projection device that is also virtually provided in the three-dimensional space and is capable of projecting from any direction.
  • In one example, it is preferable that two silhouette images, one from the front direction and one from the side direction, are acquired by projection for each of the 10,000 sample objects.
  • the acquired silhouette image is represented by monochrome binarized data.
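  • A simplified sketch of the virtual projection: an orthographic projection of the vertices onto an image plane, rendered as binarized data. A real implementation would rasterize mesh faces rather than single vertices; this simplification and the resolution are assumptions:

        import numpy as np

        def project_silhouette(vertices, drop_axis=2, res=128):
            # vertices: (N, 3) coordinates of the virtual 3-D object.
            # drop_axis: axis removed by the orthographic projection
            # (2 for a front view, 0 for a side view, for example).
            uv = np.delete(vertices, drop_axis, axis=1)
            uv = (uv - uv.min(axis=0)) / (np.ptp(uv, axis=0) + 1e-9)
            img = np.zeros((res, res), dtype=np.uint8)
            cols, rows = (uv * (res - 1)).astype(int).T
            img[res - 1 - rows, cols] = 1   # monochrome binarized data
            return img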
  • Next, the learning unit 3027B learns the relationship between the shape parameter values associated with each sample object and the silhouette images of that sample object (S3015). Specifically, it is preferable that the pairs of shape parameter data sets obtained in S3013 and silhouette images obtained in S3014 are used as training data and the relationship between the two is learned by deep learning.
  • the binary data of each silhouette image is input to the deep learning network architecture.
  • the weighting factor of the network architecture is set so that the data output from the network architecture approaches the values of the 30 shape parameters.
  • the deep learning here can use a convolutional neural network (CNN: Convolutional Neural Network) in one example.
  • the relationship between the value of the shape parameter associated with the sample object and the silhouette image of the sample object is learned by deep learning, and a deep learning network architecture is constructed.
  • As a result, the object engine 3021A, an estimation model that estimates shape parameter values in response to the input of a silhouette image of an object, is generated. A plausible sketch of such a network follows.
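  • The sketch below shows one possible CNN of this kind in PyTorch. The layer sizes, the two-channel (front/side) input, and the training details are assumptions; the patent only specifies that a convolutional network maps silhouette binarized data to the 30 shape parameter values:

        import torch
        import torch.nn as nn

        class SilhouetteToParams(nn.Module):
            # Maps a 2-channel (front + side) silhouette image to 30 shape
            # parameters. Layer sizes are illustrative, not the patent's.
            def __init__(self, n_params=30):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(4),
                )
                self.head = nn.Linear(64 * 4 * 4, n_params)

            def forward(self, x):   # x: (batch, 2, H, W) binarized data
                return self.head(self.features(x).flatten(1))

        # One regression step toward the shape parameter values of a batch
        # of (augmented) sample objects, with stand-in tensors.
        model = SilhouetteToParams()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()
        silhouettes = torch.rand(8, 2, 128, 128)   # stand-in training batch
        targets = torch.rand(8, 30)                # stand-in parameter values
        optimizer.zero_grad()
        loss = loss_fn(model(silhouettes), targets)
        loss.backward()
        optimizer.step()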
  • The dimension data calculation device 3020 stores, in advance, the electronic file of the object engine 3021A generated by the learning device 3025 and the projection information of the principal component analysis obtained by the learning device 3025 in the storage unit 3021, and uses them for calculating the dimension data of the object.
  • The acquisition unit 3024A acquires, through the input/output unit 3022, a plurality of image data obtained by photographing the entire object from different directions via an external terminal device or the like, together with the full-length data indicating the total length of the object (S3021).
  • the extraction unit 3024B extracts shape data indicating the shape of each part of the object from each image data (S3022).
  • the conversion unit 3024C executes rescaling processing for converting each shape data into a predetermined size based on the full length data (S3023).
  • As a result, a silhouette image of the object is generated, and the dimension data calculation device 3020 receives it.
  • the estimation unit 3024D estimates the value of the shape parameter of the target object from the received silhouette image (S3024). Then, the calculation unit 3024E calculates the dimension data related to the part of the target object based on the value of the shape parameter of the target object (S3025).
  • Specifically, the three-dimensional data of the vertices of the object is constructed from the values of the predetermined number of shape parameters estimated by the object engine 3021A for the object.
  • For this purpose, the inverse transformation of the projection related to the dimension reduction performed by the preprocessing unit 3027A in the learning stage (S3010) may be performed.
  • In the above example, the three-dimensional data can be obtained by multiplying the estimated shape parameter values by the transposed matrix Wᵀ of the projection matrix W of the principal component analysis from the right. That is, for the estimated shape parameter values θ″ of the object (arranged as a row vector), the three-dimensional data X″ of the object can be calculated as X″ = θ″Wᵀ.
  • the calculation unit 3024E calculates the dimension data between any two apexes of the object using the three-dimensional data.
  • a three-dimensional object is virtually constructed from the three-dimensional data, and dimension data between two vertices is calculated along a curved surface on the three-dimensional object. That is, the distance between the two vertices can be calculated three-dimensionally along the three-dimensional shape of the three-dimensional object.
  • Specifically, the shortest route connecting two vertices on the three-dimensional mesh composed of a large number of vertices (5,000 three-dimensional data points in the above example) is searched, and the meshes through which the shortest route passes are specified.
  • the distance is calculated for each mesh along the shortest path and summed.
  • the total value is the three-dimensional distance between the two vertices.
  • mesh information such as vertex information of each mesh forming the three-dimensional object and a normal direction of each vertex can be used.
  • As an example, consider a case where the object is a "person" and the "shoulder width" of the person is calculated. Here, the shoulder width is the distance between the vertex indicating the acromion of the left shoulder and the vertex indicating the acromion of the right shoulder; suppose the vertex number of the former is #313 and that of the latter is #521. The shortest path from vertex #313 to vertex #521 is then specified, and using the vertex coordinate data of the meshes specified along that path, the distance can be calculated for each mesh along the shortest path and summed. A sketch of this route search follows.
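  • The sketch below treats the mesh edges as a weighted graph and sums edge lengths along the shortest path, which approximates the surface distance described above. NetworkX is one possible tool; it is not named in the patent, and the default vertex numbers come from the shoulder-width example:

        import networkx as nx
        import numpy as np

        def surface_distance(vertices, edges, start=313, goal=521):
            # vertices: (N, 3) coordinates (N = 5,000 in the above example).
            # edges: (i, j) vertex-index pairs taken from the mesh faces.
            # start/goal: e.g. #313 (left acromion) and #521 (right acromion).
            g = nx.Graph()
            for i, j in edges:
                # Edge weight = Euclidean edge length, so the weighted
                # shortest path approximates the distance along the surface.
                g.add_edge(i, j, weight=float(np.linalg.norm(vertices[i]
                                                             - vertices[j])))
            return nx.shortest_path_length(g, start, goal, weight="weight")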
  • the dimension data calculation device 3020 can highly accurately estimate the value of the predetermined number of shape parameters from the silhouette image by using the target engine 3021A.
  • Since the three-dimensional data of the object can be restored with high accuracy from the accurately estimated shape parameter values, dimension data can be calculated with high accuracy not only for specific parts but between any two vertices chosen as the measurement target.
  • the calculated dimensional data between the two vertices is highly accurate because it is calculated along a three-dimensional shape based on a three-dimensional object composed of three-dimensional data.
  • the dimension data calculation system 3100 includes the dimension data calculation device 3020 and the learning device 3025.
  • the information processing device configured as a part of the dimension data calculation device 3020 includes a conversion unit (reception unit) 3024C and an estimation unit 3024D.
  • the conversion unit (reception unit) 3024C receives the silhouette image of the target object.
• the estimation unit 3024D estimates the value of the shape parameter of the object from the received silhouette image by using the object engine 3021A, which associates the silhouette image of the sample object with the values of the predetermined number of shape parameters associated with the sample object. The estimated value of the shape parameter of the object is then associated with the dimension data relating to an arbitrary part of the object.
  • the dimension data calculation device 3020 includes an acquisition unit 3024A, an extraction unit 3024B, a conversion unit 3024C, an estimation unit 3024D, and a calculation unit 3024E.
  • the acquisition unit 3024A acquires image data of a captured object and full-length data of the object.
  • the extraction unit 3024B extracts shape data indicating the shape of the object from the image data.
  • the conversion unit 3024C converts the shape data into a silhouette image based on the full length data.
• the estimation unit 3024D estimates the values of the predetermined number of shape parameters from the silhouette image by using the object engine 3021A, which associates the silhouette image of the sample object with the values of the predetermined number of shape parameters associated with the sample object.
  • the calculation unit 3024E calculates the dimension data of the target object based on the estimated values of the predetermined number of shape parameters.
  • the dimension data calculation device 3020 can highly accurately estimate the values of the predetermined number of shape parameters from the silhouette image by using the target object engine 3021A that has been created in advance. Further, by using the value of the shape parameter estimated with high accuracy, it is possible to efficiently and highly accurately calculate the data related to an arbitrary part of the object. As described above, according to the dimension data calculation device 3020, the dimension data calculated for the object can be efficiently provided with high accuracy.
• by using the dimension data calculation device 3020, for example, the dimension data of each part of a living thing as an object can be calculated with high accuracy. It is also possible to calculate with high accuracy the dimension data of each part of an arbitrary object such as a car or various kinds of luggage. Furthermore, by incorporating the dimension data calculation device 3020 into a product manufacturing device that manufactures various products, it is possible to manufacture a product that conforms to the shape of the object.
  • a predetermined number of shape parameters associated with the sample object are specified by dimensionally reducing the three-dimensional data of the sample object.
  • dimension reduction is performed by principal component analysis. Thereby, noise can be effectively removed from the three-dimensional data of the sample object, and the three-dimensional data can be compressed.
• the three-dimensional data of the object is calculated by the inverse transformation of the projection related to the above-mentioned principal component analysis applied to the estimated shape parameter values, and the three-dimensional data is associated with the dimension data. This makes it possible to accurately construct the three-dimensional data of the object from the input silhouette image.
  • the silhouette image of the sample target object is a projection image in a predetermined direction on a three-dimensional object composed of the three-dimensional data of the sample target object. That is, a silhouette image is obtained by constructing a three-dimensional object using the three-dimensional data of the sample object and then projecting the three-dimensional object. It is preferable to obtain two silhouette images in the front direction and the side direction by projection. Thereby, the silhouette image of the sample object can be generated with high accuracy.
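As an illustration only, the following minimal Python sketch obtains front and side silhouette images by orthographic projection of mesh vertices. The image size, the axis conventions (x-z plane for the front view, y-z plane for the side view), and the random vertex data are assumptions; a production implementation would rasterize mesh faces rather than scattering individual vertices.

```python
import numpy as np

def silhouette(vertices, axes=(0, 2), size=128):
    """Project vertices onto the two given axes and rasterize them into a
    binary image (1 = object, 0 = background)."""
    p = vertices[:, axes]
    p = (p - p.min(0)) / (np.ptp(p, axis=0) + 1e-9)   # normalize to [0, 1]
    ij = np.clip((p * (size - 1)).astype(int), 0, size - 1)
    img = np.zeros((size, size), dtype=np.uint8)
    img[size - 1 - ij[:, 1], ij[:, 0]] = 1            # flip so +z points up
    return img

verts = np.random.default_rng(2).standard_normal((5000, 3))  # placeholder
front = silhouette(verts, axes=(0, 2))   # front view: x-z plane
side = silhouette(verts, axes=(1, 2))    # side view:  y-z plane
```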
  • the object engine 3021A is generated by learning the relationship between the silhouette image of the sample object and the values of the predetermined number of shape parameters associated with the sample object.
  • the learning can be performed by deep learning.
  • the silhouette image of the sample object and the shape parameter value of the sample object can be associated with high accuracy.
  • the calculation unit 3024E of the dimension data calculation device 3020 constructs three-dimensional data of a plurality of vertices of the object from the values of the predetermined number of shape parameters estimated for the object. Then, the dimension data between any two apexes of the object is calculated based on the constructed three-dimensional data. That is, the dimension data is calculated after the three-dimensional object is constructed using the three-dimensional data of the object. Accordingly, the dimension data between the two vertices can be calculated from the shape of the three-dimensional object of the object, so that the measurement target location is not limited to a specific portion.
• the dimension data between the two vertices is calculated along the curved surface on the three-dimensional object composed of the three-dimensional data of the plurality of vertices of the object.
  • the dimension data can be calculated with higher accuracy.
  • the acquisition unit 3024A acquires a plurality of image data obtained by photographing the object from different directions.
  • a depth data measuring device capable of acquiring depth data together can be applied as an imaging device capable of simultaneously photographing an object from a plurality of different directions.
  • An example of the depth data measuring device is a stereo camera.
  • the “stereo camera” means an imaging device of any form that simultaneously captures an object from a plurality of different directions and reproduces binocular parallax.
  • plural pieces of image data are not necessarily required, and it is possible to calculate the dimension data of each part even if the image data of the object is one piece.
  • the image data that can be acquired by the acquisition unit 3024A can be RGB-D (Red, Green, Blue, Depth) data.
  • the image data can include a depth map having depth data for each pixel based on the depth data, in addition to the RGB image data that can be acquired by a normal monocular camera.
  • the semantic segmentation algorithm or the grab cut algorithm may be adopted to extract the shape data of the object, and the object and the background image other than the object may be separated.
• the depth map acquired from the stereo camera may be used to acquire the shape data of the object in association with the depth data of the object and to separate the background image. This makes it possible to generate the shape data of the object with higher accuracy.
• the extraction unit 3024B preferably extracts, based on the depth map acquired by the acquisition unit 3024A, the object region, that is, the portion in which the object is captured, from the image data.
• for example, the object region is extracted by removing from the depth map the region whose depth data is not within a predetermined range.
  • the shape data is associated with the object depth data on a pixel-by-pixel basis.
  • the conversion unit 3024C converts the shape data into new shape data based on the depth data of the target area in addition to the above-described full length data, and generates a “gradation silhouette image” (described later).
• the gradation silhouette image to be generated is not simple black-and-white binarized data but a monochromatic multi-tone monochrome image in which each pixel is represented by a brightness value from 0 (“black”) to 1 (“white”) based on the depth data. That is, the gradation silhouette image data is associated with the depth data and carries a larger amount of information about the shape of the object, as sketched below.
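As an illustration only, the following minimal Python sketch combines the depth-range masking of the object region with the brightness mapping just described. The near/far limits and the choice of making nearer pixels brighter are assumptions for the sketch.

```python
import numpy as np

def gradation_silhouette(depth_map, near=0.5, far=3.0):
    """Keep pixels whose depth lies in [near, far] (the object region) and
    map their depth to a luminance in (0, 1]; background stays 0 ("black")."""
    mask = (depth_map >= near) & (depth_map <= far)
    sil = np.zeros_like(depth_map, dtype=np.float32)
    # Nearer pixels become brighter; the exact mapping is an assumption.
    sil[mask] = 1.0 - (depth_map[mask] - near) / (far - near)
    return sil

depth = np.full((240, 320), 10.0, dtype=np.float32)  # background far away
depth[60:180, 100:220] = 1.5                         # hypothetical object
print(gradation_silhouette(depth).max())
```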
  • the processing in the processing unit 3027 of the learning device 3025 is preferably performed as follows.
  • the gradation silhouette image data is standardized by full length data.
• it is preferable to also acquire the depth data from the imaging device to the sample object. That is, the silhouette image data of the sample object is associated with the depth data.
• the generated gradation silhouette image of the sample object is, for example, a monochromatic multi-tone monochrome image with brightness values from 0 (“black”) to 1 (“white”) based on the depth data, and carries much more information about the shape of the sample object.
• when the object engine 3021A is generated by the learning unit 3027B, it is preferable to perform learning so as to associate the values of the predetermined number of shape parameters of the sample object with the gradation silhouette image of the sample object associated with the depth data. Since the learning process in the learning device 3025 is then based on a larger amount of information derived from the depth data of the sample object, a more accurate object engine 3021A can be generated.
• by extracting the object region based on the depth map configured by the acquisition unit 3024A using any device capable of measuring depth data, the shape data of the object can be generated with higher accuracy. Further, the gradation silhouette image data (the converted shape data) is associated with the depth data of the object.
• the object engine 3021A is also generated as a result of the learning process based on the depth data. Therefore, since more information about the object is available, the calculation unit 3024E can calculate the dimension data of each part of the object with higher accuracy.
• the gradation silhouette image is described here as distinguished from a simple silhouette image, but in other embodiments and other modified examples both may simply be referred to as a silhouette image without distinction.
• in step S3013, the preprocessing unit 3027A performs data expansion processing for expanding the shape parameter data set using random numbers.
• in the data expansion processing, it is only necessary to determine the number of samples to be expanded according to the number of sample objects; if a sufficient number of samples is prepared in advance, the expansion processing of S3013 may be omitted. A minimal sketch follows.
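As an illustration only, the following minimal Python sketch expands a shape parameter data set with random perturbations. Adding Gaussian noise scaled to each component's standard deviation is one plausible realization of "expansion using random numbers"; the noise scale and sample counts are assumptions.

```python
import numpy as np

def expand_shape_parameters(params, n_new, noise_scale=0.05, seed=0):
    """Augment an (n_samples, n_dims) shape parameter data set by resampling
    rows and adding Gaussian noise proportional to each component's std."""
    rng = np.random.default_rng(seed)
    base = params[rng.integers(0, len(params), size=n_new)]
    noise = rng.standard_normal(base.shape) * params.std(0) * noise_scale
    return np.vstack([params, base + noise])

params = np.random.default_rng(3).standard_normal((400, 30))  # 400 samples
expanded = expand_shape_parameters(params, n_new=1600)        # 2,000 total
print(expanded.shape)
```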
  • the shape parameters ( ⁇ 1 ,..., ⁇ 30 ) of the sample object are acquired by the principal component analysis in the preprocessing unit 3027A of the learning device 3025.
  • the shape parameter when the sample object is a “person” will be further considered.
• the shape parameter when the object is a “person” was found to have at least the following properties.
• the first-ranked principal component λ1 was found to have a linear relationship with human height. Specifically, as shown in FIG. 14, the larger the first-ranked principal component λ1, the smaller the person's height.
• for the first-ranked principal component λ1, the height data acquired by the acquisition unit 3024A may be used without using the object engine 3021A.
• that is, the value of the first-ranked principal component λ1 may be calculated separately by a linear regression model with the person's height as an explanatory variable.
• in that case, the principal component λ1 may be excluded from the learning target.
• in that case, the weighting coefficients of the network architecture may be set so as to minimize the error between the second and subsequent principal components (excluding the first-ranked principal component) obtained by the principal component analysis of the input silhouette image and the values of the shape parameters from λ2 onward in the training data. As a result, when the object is a “person”, the estimation accuracy of the shape parameter values in the estimation unit 3024D can be improved in combination with the use of the linear regression model.
• alternatively, the weighting coefficients of the network architecture may be set so as to minimize the error with the values of the shape parameters from λ1 onward, including the first-ranked principal component. Then, the value of the first-ranked principal component λ1 may be replaced with a value calculated separately by the linear regression model with the person's height as an explanatory variable. As a result, when the object is a “person”, the estimation accuracy of the shape parameter values in the estimation unit 3024D can be improved in combination with the use of the linear regression model. A minimal sketch of such a regression follows.
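As an illustration only, the following minimal Python sketch fits the separate linear regression that predicts λ1 from height. The synthetic data and the negative slope (larger λ1 for smaller height, per FIG. 14) are assumptions.

```python
import numpy as np

# Fit lambda_1 = a * height + b by least squares, with height as the
# explanatory variable, matching the linear relationship described above.
rng = np.random.default_rng(4)
height = rng.uniform(1.4, 2.0, size=400)               # meters (synthetic)
lam1 = -5.0 * height + 9.0 + rng.normal(0, 0.1, 400)   # synthetic target
a, b = np.polyfit(height, lam1, deg=1)

# At estimation time, the network's lambda_1 would be replaced with this
# prediction for the acquired height value.
print(a * 1.70 + b)
```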
• FIG. 15 is a schematic graph showing the recall rate of the shape parameters, which are the principal components.
  • the horizontal axis represents the principal component ranked by the contribution rate
  • the vertical axis represents the variance explanation rate of the eigenvalue.
  • the bar graph shows individual variance explanation rates for each rank.
  • the solid line graph shows the accumulation of the variance explanation rates from the first rank.
  • a graph regarding 10 principal components up to the 10th rank is schematically shown.
  • the eigenvalue of the covariance matrix obtained in the principal component analysis represents the size of the eigenvector (principal component), and the variance explanation ratio of the eigenvalue may be considered as the recall ratio for the principal component.
• the cumulative variance explanation rate of the ten principal components from the first rank to the tenth rank is about 0.95 (broken-line arrow); that is, it is understood by those skilled in the art that the recall rate of these ten principal components is about 95%. Although the shape parameter subjected to feature conversion by dimension reduction is 30-dimensional in the above example, it is not limited to this: even a 10-dimensional shape parameter can cover about 95%. In other words, the number of shape parameters (number of dimensions), described as 30 above, may be about 10 in consideration of property 2 above, as sketched below.
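As an illustration only, the following minimal Python sketch checks the cumulative variance explanation rate with scikit-learn's PCA to choose the number of shape parameters. The data here is synthetic noise, so its cumulative ratio grows slowly; with real body-shape data, about 10 components would reach roughly 0.95 as in the figure described above.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
X = rng.standard_normal((400, 15000))  # 400 samples x 15,000-dim vertex data

pca = PCA(n_components=30).fit(X)
cum = np.cumsum(pca.explained_variance_ratio_)
# Smallest number of components whose cumulative ratio reaches the target.
n_dims = int(np.searchsorted(cum, 0.95) + 1)
print(cum[:10], n_dims)
```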
  • FIG. 16 is a schematic diagram showing the concept of the product manufacturing system 3001 according to this embodiment.
• the product manufacturing system 3001 is a system for manufacturing a desired product 3006, and includes a dimension data calculation device 3020 and a product manufacturing device 3030, each capable of communicating with the terminal device 3010 owned by the user 3005.
  • the concept when the object 3007 is a person and the product 3006 is a chair is shown.
  • the object 3007 and the product 3006 are not limited to these.
  • the terminal device 3010 can be realized by a so-called smart device.
  • the terminal device 3010 exerts various functions by installing the user program in the smart device.
  • the terminal device 3010 generates image data captured by the user 3005.
  • the terminal device 3010 may have a stereo camera function of simultaneously capturing images of a target object from a plurality of different directions and reproducing binocular parallax.
  • the image data is not limited to that captured by the terminal device 3010, and, for example, data captured using a stereo camera installed in a store may be used.
  • the terminal device 3010 accepts input of attribute data indicating the attribute of the target object 3007.
  • the “attribute” includes the total length, weight, and elapsed time (including age) from the generation of the object 3007.
  • the terminal device 3010 has a communication function, and executes transmission/reception of various information between the terminal device 3010 and the dimension data calculation device 3020 and the product manufacturing device 3030.
  • the dimension data calculation device 3020 can be realized by any computer.
  • the storage unit 3021 of the dimension data calculation device 3020 stores the information transmitted from the terminal device 3010 in association with the identification information that identifies the user 3005 of the terminal device 3010.
  • the storage unit 3021 also stores parameters and the like necessary for executing information processing for calculating dimension data.
  • the processing unit 3024 of the dimension data calculation device 3020 functions as the acquisition unit 3024A, the extraction unit 3024B, the conversion unit 3024C, the estimation unit 3024D, and the calculation unit 3024E, as described above.
  • the acquisition unit 3024A acquires the image data captured by the stereo camera by the user 3005 and the attribute data of the target object 3007.
  • the extraction unit 3024B also extracts shape data indicating the shape of the object 3007 from the image data. For example, when “person” is set in advance as the type of object, a semantic segmentation algorithm is constructed using training data for identifying a person.
  • the extraction unit 3024B may separate the target object and the background image other than the target object by using the depth map based on the depth data acquired from the stereo camera.
  • the conversion unit 3024C converts the shape data associated with the depth data of the object in the depth map into a gradation silhouette image based on the full length data.
  • the generated gradation silhouette image is preferably a monochrome image with a single color and multiple gradations based on the depth data.
  • the conversion unit 3024C also functions as a reception unit for inputting the generated silhouette image to the estimation unit 3024D.
• the estimation unit 3024D estimates the values of the predetermined number of shape parameters from the silhouette image by using the object engine 3021A, which associates the silhouette image of the sample object with the values of the predetermined number of shape parameters associated with the sample object.
  • the calculation unit 3024E calculates the dimension data of the target object based on the estimated values of the predetermined number of shape parameters.
• the calculation unit 3024E constructs three-dimensional data of a plurality of vertices of the object from the shape parameter values estimated by the estimation unit 3024D, and further calculates, based on the three-dimensional data, the dimension data between any two vertices of the object.
• the product manufacturing apparatus 3030 is a manufacturing apparatus that manufactures a desired product related to the shape of the object 3007 by using at least one piece of dimension data calculated using the dimension data calculation device 3020. Note that the product manufacturing apparatus 3030 can be any device that can automatically manufacture and process a product, and can be realized by, for example, a three-dimensional printer.
  • FIG. 17 is a sequence diagram for explaining the operation of the product manufacturing system 3001 according to this embodiment.
• FIGS. 18 and 19 are schematic diagrams showing screen transitions of the terminal device 3010.
  • a plurality of images of the target 3007 are captured via the terminal device 3010 so that the entire target 3007 is captured from different directions, and a plurality of image data of the captured target 3007 is generated (T3001).
  • a plurality of front and side photographs as shown in FIGS. 18 and 19 are taken. Such front and side photographs are preferably taken with the stereo camera function of the terminal device 3010 turned on.
  • the user 3005 inputs the attribute data indicating the attribute of the target object 3007 to the terminal device 3010 (T3002).
• as the attribute data, full length data, weight data, elapsed time data (including age) of the object 3007, and the like are input.
  • the plurality of image data and the attribute data are transmitted from the terminal device 3010 to the dimension data calculation device 3020.
• when the dimension data calculation device 3020 receives a plurality of image data and attribute data from the terminal device 3010, it calculates the dimension data of each part of the object 3007 using these data (T3003). Note that the terminal device 3010 displays the dimension data on the screen according to the settings. Then, the product manufacturing apparatus 3030 manufactures the desired product 3006 based on the dimension data calculated by the dimension data calculation apparatus 3020 (T3004).
• the product manufacturing system 3001 includes the dimension data calculation device 3020 and the product manufacturing device 3030, each capable of communicating with the terminal device 3010 owned by the user 3005.
  • the terminal device 3010 captures a plurality of images of the object 3007.
  • the dimension data calculation device 3020 includes an acquisition unit 3024A, an extraction unit 3024B, a conversion unit 3024C, an estimation unit 3024D, and a calculation unit 3024E.
  • the acquisition unit 3024A acquires image data of a captured object and full-length data of the object.
  • the extraction unit 3024B extracts shape data indicating the shape of the object from the image data.
  • the conversion unit 3024C converts the shape data into a silhouette image based on the full length data.
• the estimation unit 3024D estimates the values of the predetermined number of shape parameters from the silhouette image by using the object engine 3021A, which associates the silhouette image of the sample object with the values of the predetermined number of shape parameters associated with the sample object.
  • the calculation unit 3024E calculates the dimension data of the target object based on the estimated values of the predetermined number of shape parameters.
• the product manufacturing apparatus 3030 manufactures the product 3006 using the dimension data calculated by the calculation unit 3024E. With such a configuration, the dimension data calculation device 3020 calculates the dimension data of each part of the object 3007 with high accuracy, so that a desired product related to the shape of the object 3007 can be provided.
  • the product manufacturing system 3001 can manufacture a model of an organ by measuring the shapes of various organs such as the heart.
  • various healthcare products and the like can be manufactured by measuring the waist shape of a person.
  • a person's figure product can be manufactured from the person's shape.
  • a chair or the like suitable for a person can be manufactured from the shape of the person.
  • car toys can be manufactured from car shapes.
  • a diorama or the like can be manufactured from an arbitrary landscape picture.
• the dimension data calculation device 3020 and the product manufacturing device 3030 are described as separate devices, but they may be integrally configured.
  • FIG. 20 is a schematic diagram showing the configuration of the dimension data calculation system 4200 according to this embodiment.
  • the dimension data calculation system 4200 includes a dimension data calculation device 4120 and a learning device 4125.
  • the dimension data calculation device 4120 includes a storage unit 4121, an input/output unit 4122, a communication unit 4123, and a processing unit 4124.
  • the learning device 4125 also includes a storage unit 4126 and a processing unit 4127.
  • the dimension data calculation device 4120 and the learning device 4125 may be realized as hardware using an LSI, ASIC, FPGA, or the like.
  • Each of the storage units 4121 and 4126 stores various information, and is realized by an arbitrary storage device such as a memory and a hard disk.
  • the storage unit 4121 stores various data including the target engine 4121A, programs, information, and the like in order for the processing unit 4124 to execute information processing regarding dimension data calculation.
  • the storage unit 4126 also stores training data used in the learning stage to generate the target engine 4121A.
  • the input/output unit 4122 has the same configuration and function as the input/output unit 3022 described above.
  • the communication unit 4123 has the same configuration and function as the communication unit 3023 described above.
  • the processing unit 4124 functions as an acquisition unit 4124A, an estimation unit 4124D, and a calculation unit 4124E when the program stored in the storage unit 4121 is read by the CPU, GPU, etc. of the computer.
  • the processing unit 4127 functions as the preprocessing unit 4127A and the learning unit 4127B by the CPU, GPU, etc. of the computer reading the program stored in the storage unit 4126.
  • the acquisition unit 4124A acquires the attribute data including at least one of the total length data, weight data, and elapsed time data (including age) of the object.
  • the acquisition unit 4124A also functions as a reception unit for inputting the attribute data to the estimation unit 4124D.
  • the estimating unit 4124D estimates the values of a predetermined number of shape parameters from the attribute data.
  • the object engine 4121A is used for the estimation.
  • the value of the shape parameter of the object estimated by the estimation unit 4124D can be associated with the dimension data related to an arbitrary part of the object, as described later.
  • the calculation unit 4124E calculates the dimension data of the object from the value of the shape parameter of the object estimated by the estimation unit 4124D. Specifically, the calculation unit 4124E configures three-dimensional data of a plurality of vertices in the object from the shape parameter values of the object estimated by the estimation unit 4124D, and further, based on the three-dimensional data, the target Calculate dimensional data between any two vertices of an object.
  • the preprocessing unit 4127A carries out various preprocessing for learning.
  • the preprocessing unit 4127A specifies a predetermined number of shape parameters by extracting the features of the three-dimensional data of the sample object by dimension reduction.
  • the value of the shape parameter of the sample object and the corresponding attribute data are stored in the storage unit 4126 as training data in advance.
• the corresponding attribute data (full length data, weight data, and elapsed time data (including age, etc.)) is prepared together with the three-dimensional data of each sample object.
  • the corresponding attribute data is stored in the storage unit 4126 as training data.
  • the learning unit 4127B learns to associate the value of the shape parameter of the sample object with the corresponding attribute data. As a result of the learning, the target engine 4121A is generated.
  • the generated object engine 4121A can be held in the form of an electronic file.
  • the object engine 4121A is stored in the storage unit 4121 and referred to by the estimation unit 4124D.
  • FIG. 21 is a flowchart showing the operation (S4110) of the learning device 4125, in which the target object engine 4121A is generated based on the sample target object data.
  • FIG. 22 is a flowchart showing the operation of the dimension data calculation device 4120.
  • the dimension data of the target object is calculated based on the image data of the target object.
  • data of the sample target is prepared and stored in the storage unit 4126 (S4111).
• the prepared data is data of 400 sample objects, and includes 5,000 points of three-dimensional data and attribute data prepared for each sample object.
  • the three-dimensional data includes three-dimensional coordinate data of the vertices of the sample object. Further, the three-dimensional data may include apex information of each mesh forming the three-dimensional object and mesh data such as a normal direction of each apex.
  • the three-dimensional data of the sample object is associated with the part number information together with the vertex number.
  • the preprocessing unit 4127A performs feature conversion into a predetermined number (dimension) of shape parameters by dimension reduction (S4112).
  • This feature conversion process is also the same as in the first embodiment.
• the 15,000-dimensional (5,000 × 3) data of each of the 400 sample objects is transformed into, for example, a 30-dimensional shape parameter λ of principal components.
• the learning unit 4127B machine-learns the relationship between the attribute data of the plurality of sample objects prepared in S4111 and the data set of the plurality of shape parameters obtained in S4112, using their combination as the training data (S4115).
• specifically, the learning unit 4127B obtains the conversion attribute data Y from the attribute data of the object.
• an element of the conversion matrix Z that associates the element y_r of the conversion attribute data Y with the element λ_m of the shape parameter λ is expressed as z_rm.
  • the conversion matrix Z is a matrix composed of [s rows, n columns].
• the symbol m satisfies 1 ≤ m ≤ n, and n is 30, which is the number of dimensions of the shape parameter λ in the above example.
• the symbol r satisfies 1 ≤ r ≤ s, and s is the number of elements of the conversion attribute data Y used for the operation.
  • the attribute data of the object consists of full length data h, weight data w, and elapsed time data a. That is, the attribute data is a set of (h,w,a) elements.
• the learning unit 4127B obtains the conversion attribute data Y by arranging the squared value of each element (h, w, a) of the attribute data of the object (also called quadratic terms), the values obtained by multiplying pairs of elements (also called interaction terms), and the value of each element itself (also called linear terms).
• the learning unit 4127B performs regression analysis on the set of the conversion attribute data Y obtained from the attribute data associated with the 400 sample objects and the shape parameters λ obtained from the three-dimensional data of the sample objects. As a result, a conversion matrix Z composed of [9 rows, 30 columns] is obtained, as sketched below.
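As an illustration only, the following minimal Python sketch builds the 9-element conversion attribute data Y from (h, w, a) and solves for the [9 × 30] conversion matrix Z by least-squares regression. The synthetic attribute ranges and random shape parameters stand in for the 400 sample objects; the ordering of the 9 terms is an assumption.

```python
import numpy as np

def conversion_attributes(h, w, a):
    """Quadratic, interaction, and linear terms of (h, w, a): 9 elements."""
    return np.array([h*h, w*w, a*a, h*w, w*a, a*h, h, w, a])

rng = np.random.default_rng(6)
attrs = rng.uniform([1.4, 40, 10], [2.0, 100, 80], size=(400, 3))  # (h, w, a)
Y = np.stack([conversion_attributes(*row) for row in attrs])       # (400, 9)
lam = rng.standard_normal((400, 30))   # shape parameters from PCA (S4112)

# Solve Y @ Z ~= lam in the least-squares sense: Z is [9 x 30].
Z, *_ = np.linalg.lstsq(Y, lam, rcond=None)
print(Z.shape)
```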
• the data of the conversion matrix Z thus obtained is stored in the storage unit 4126 as the object engine 4121A.
• the dimension data calculation device 4120 stores, in the storage unit 4121, the electronic file of the object engine 4121A generated by the learning device 4125 and the projection information of the principal component analysis obtained by the learning device 4125, and uses them for calculating the dimension data of the object.
  • the acquisition unit 4124A acquires the attribute data of the target object via the input/output unit 4122 (S4121). Thereby, the attribute data of the target object is received.
  • the estimation unit 4124D estimates the value of the shape parameter of the target from the received attribute data (S4124).
  • the attribute data of the object consists of full length data h, weight data w, and elapsed time data a. That is, the attribute data is a set of (h,w,a) elements.
• the estimation unit 4124D obtains the conversion attribute data Y by arranging the squared value of each element (h, w, a) of the attribute data of the object, the values obtained by multiplying pairs of elements, and the value of each element itself.
• the calculation unit 4124E calculates the dimension data related to the part of the object based on the value of the shape parameter of the object (S4125). Specifically, the conversion attribute data Y is calculated from the attribute data of the object acquired by the acquisition unit 4124A, and the shape parameter λ is calculated by multiplying the conversion attribute data Y by the above-mentioned conversion matrix Z. After this, similarly to the third embodiment (S3025), a three-dimensional object is virtually constructed from the three-dimensional data, and the dimension data between two vertices is calculated along the curved surface on the three-dimensional object (see the sketch below). For the calculation of the three-dimensional distance, mesh information such as the vertex information of each mesh forming the three-dimensional object and the normal direction of each vertex can be used.
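As an illustration only, the following minimal Python sketch chains the estimation steps: form Y from the received attribute data, multiply by Z to obtain λ, then reconstruct the 3D vertex data with the PCA projection matrix W as in the third embodiment. The Z and W matrices here are random placeholders for the learned quantities.

```python
import numpy as np

rng = np.random.default_rng(7)
Z = rng.standard_normal((9, 30))       # learned conversion matrix (S4115)
W = rng.standard_normal((15000, 30))   # PCA projection matrix placeholder

h, w, a = 1.72, 64.0, 35.0             # received attribute data (S4121)
Y = np.array([h*h, w*w, a*a, h*w, w*a, a*h, h, w, a])

lam = Y @ Z                            # estimated shape parameters (S4124)
vertices = (lam @ W.T).reshape(5000, 3)  # 3D vertex data used for the
print(vertices.shape)                    # dimension calculation (S4125)
```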
  • the dimension data calculation device 4120 of the present embodiment can highly accurately estimate the value of the predetermined number of shape parameters from the attribute data of the object by using the object engine 4121A. Unlike the third embodiment, there is no need to input an image of an object, and neither S3022 (shape data extraction processing) nor S3023 (rescale processing) of FIG. 13 is required, which is efficient.
• since the three-dimensional data of the object can be restored with high accuracy from the values of the shape parameters estimated with high accuracy, dimension data can be calculated with high accuracy not only for a specific part but also between any two vertices designated as the measurement target.
  • the calculated dimensional data between the two vertices is highly accurate because it is calculated along a three-dimensional shape based on a three-dimensional object composed of three-dimensional data.
  • the dimension data calculation system 4200 includes the dimension data calculation device 4120 and the learning device 4125.
  • the information processing device configured as a part of the dimension data calculation device 4120 includes an acquisition unit (reception unit) 4124A, an estimation unit 4124D, and a calculation unit 4124E.
  • the acquisition unit (reception unit) 4124A receives the attribute data of the target object.
• the estimation unit 4124D estimates the value of the shape parameter of the object from the received attribute data by using the object engine 4121A, which associates the attribute data of the sample object with the values of the predetermined number of shape parameters associated with the sample object. The estimated value of the shape parameter of the object is then associated with the dimension data relating to an arbitrary part of the object.
  • the dimension data calculation device 4120 can efficiently estimate the values of the predetermined number of shape parameters from the attribute data by using the object engine 4121A that has been created in advance. Moreover, the value of the estimated shape parameter is highly accurate. Further, by using the value of the shape parameter estimated with high accuracy, it is possible to efficiently and highly accurately calculate the data related to an arbitrary part of the object. As described above, according to the dimension data calculation device 4120, the dimension data calculated for the object can be efficiently provided with high accuracy.
  • FIG. 23 is a schematic diagram showing the concept of the product manufacturing system 4001S according to this embodiment.
  • the dimension data calculation device 4120 according to the present embodiment can also be applied to the product manufacturing system 4001S, similarly to the dimension data calculation device 3020 according to the third embodiment.
  • the terminal device 4010S only needs to accept the input of attribute data indicating the attribute of the target object 4007.
• examples of the attribute data include the total length, weight, and elapsed time (including age) from the generation of the object 4007.
  • the processing unit 4124 of the dimension data calculation device 4120 functions as the acquisition unit 4124A, the estimation unit 4124D, and the calculation unit 4124E.
  • the calculation unit 4124E calculates the dimension data related to the part of the target object based on the value of the shape parameter of the target object obtained by the estimation unit 4124D.
• the dimension data calculation device 4120 efficiently and accurately calculates the dimension data of the object 4007, so that a desired product related to the shape of the object 4007 can be provided.
  • the product manufacturing system 4001S according to the fourth embodiment can exhibit the same effects as the product manufacturing system 3001 according to the third embodiment.
  • FIG. 24 is a schematic diagram showing the configuration of a silhouette image generating device 5020 according to another embodiment.
  • the silhouette image (including the gradation silhouette image) generated in the first embodiment and the third embodiment may be generated according to this silhouette image generation device 5020. That is, the silhouette image generation device 5020 may be configured as a part of the dimension data calculation device 1020 according to the first embodiment or as a part of the dimension data calculation device 3020 according to the third embodiment.
  • the silhouette image generation device 5020 can be realized by any computer, and includes an acquisition unit 5024A, an extraction unit 5024B, and a conversion unit 5024C.
  • the acquisition unit 5024A may correspond to all or a part of the acquisition unit 1024A of the dimension data calculation device 1020 according to the first embodiment and/or the acquisition unit 3024A of the dimension data calculation device 3020 according to the third embodiment.
• the extraction unit 5024B may correspond to all or a part of the extraction unit 1024B of the dimension data calculation device 1020 according to the first embodiment and/or the extraction unit 3024B of the dimension data calculation device 3020 according to the third embodiment.
• the conversion unit 5024C may correspond to all or a part of the conversion unit 1024C of the dimension data calculation device 1020 according to the first embodiment and/or the conversion unit 3024C of the dimension data calculation device 3020 according to the third embodiment.
  • the acquisition unit 5024A acquires image data in which the object is photographed.
  • the acquisition unit 5024A acquires, for example, a plurality of pieces of image data obtained by shooting an object from a plurality of different directions with an imaging device.
  • a depth data measuring device capable of acquiring depth data is applicable, and a depth map having depth data for each pixel is configured based on the depth data.
  • the image data that can be acquired by the acquisition unit 5024A can include RGB-D (Red, Green, Blue, Depth) data.
• the image data can include such a depth map in addition to the RGB image data that can be acquired by a normal monocular camera.
  • An example of the depth data measuring device is a stereo camera, and the stereo camera is also applied in the following description.
• when photographing an object (particularly a person) with the stereo camera, it is preferable to guide the user so that the entire object fits within a predetermined range of the display, so that the object can be accurately specified.
  • a guide area may be displayed on the display, or a guide message may be displayed to prompt the user.
  • the target object can be positioned in a desired direction and distance from the stereo camera, and noise during silhouette image generation can be reduced.
  • the extraction unit 5024B extracts shape data indicating the shape of the object from the image data. More specifically, the extraction unit 5024B includes a three-dimensional point cloud generation unit 5124, a background point cloud removal unit 5224, a plane point cloud removal unit 5324, an object region extraction unit 5424, a shape data extraction unit 5524, and an object detection unit 5624. Including.
  • the 3D point cloud generation unit 5124 generates 3D point cloud data from the acquired depth map, and deploys a 3D point cloud consisting of a set of points in a virtual 3D coordinate space.
  • Each point has three-dimensional coordinates in a virtual three-dimensional space.
  • the stereo camera is virtually arranged at the origin, and the three-dimensional coordinate (xyz) system is defined according to the orientation of the stereo camera.
  • the optical axis direction of the stereo camera is defined as the depth direction (z-axis direction).
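As an illustration only, the following minimal Python sketch back-projects a depth map into a 3D point cloud with a pinhole camera model, placing the camera at the origin with the optical axis as the depth (z) direction as described above. The intrinsics (fx, fy, cx, cy) are assumptions.

```python
import numpy as np

def depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=None, cy=None):
    """Back-project each pixel (u, v) with depth z to camera coordinates:
    x = (u - cx) * z / fx, y = (v - cy) * z / fy."""
    h, w = depth.shape
    cx = w / 2 if cx is None else cx
    cy = h / 2 if cy is None else cy
    v, u = np.mgrid[0:h, 0:w]
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # drop invalid (zero-depth) pixels

depth = np.random.default_rng(8).uniform(0.5, 4.0, size=(240, 320))
print(depth_to_point_cloud(depth).shape)
```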
• the background point cloud removing unit 5224 removes, from the generated three-dimensional point cloud data, the three-dimensional point cloud data existing farther than a predetermined distance along the depth direction of the virtual three-dimensional coordinate space.
  • the point to be removed can be regarded as constituting a background image because it exists far from the stereo camera, and it is preferable to remove the background portion from the image data in which the object is photographed. As a result, the three-dimensional point group that becomes noise can be effectively removed, so that the accuracy of specifying the object area extracted by the object area extraction unit 5424 can be improved.
  • the plane point cloud removing unit 5324 removes the three-dimensional point cloud data existing corresponding to the plane portion from the generated three-dimensional point cloud data.
  • the plane point cloud removing unit 5324 may operate as follows. First, the plane portion in the image data is estimated from the three-dimensional point cloud data generated from the depth map.
  • the plane here is, for example, the floor. That is, when the object is a person, the plane portion is the floor portion which is in contact with the upright person.
• specifically, one plane portion is selected from a plurality of sample planes sampled by a known random sampling method such as RANSAC (Random Sample Consensus).
• the sample plane is sampled by randomly determining the normal vector (a, b, c) and the offset d. Then, to evaluate how many points of the three-dimensional point group are associated with the sample plane, the points (x_i, y_i, z_i) that satisfy the following inequality are identified: |a·x_i + b·y_i + c·z_i + d| ≤ DST, where DST is a predetermined threshold distance.
  • Points that satisfy the above inequality are considered to be on the sample plane of the three-dimensional point cloud.
• the threshold distance DST could in principle be 0, but in consideration of the shooting environment and the performance of the stereo camera, it is better to set DST to a value close to 0 so as to include the three-dimensional point cloud data within a predetermined minute distance from the sample plane.
• a sample plane having a large number of three-dimensional points satisfying the above inequality, that is, the sample plane having the largest content rate of the three-dimensional point group, is estimated to be the desired plane portion in the image data.
• by repeating the extraction of sample planes a plurality of times and then determining the sample plane having the largest content rate of the three-dimensional point group, the plane point cloud removing unit 5324 can increase the robustness of the estimation of the desired plane portion.
  • the plane point cloud removing unit 5324 removes the 3D point cloud data of the points existing in the estimated plane part from the generated 3D point cloud data.
  • the plane portion having the points to be removed is, for example, the floor portion in the image data. That is, the plane point cloud removing unit 5324 can remove the floor portion from the image data in which the object is photographed. As a result, the three-dimensional point group that becomes noise can be effectively removed, so that it is possible to improve the accuracy of specifying the target object area of the target object extracted by the target object area extracting unit 5424.
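As an illustration only, the following minimal Python sketch estimates and removes a plane in the RANSAC style described above, sampling candidate planes from random point triplets (a common variant of randomly determining (a, b, c) and d). The iteration count and the threshold DST are assumptions, and the background removal along the depth axis is shown as a preceding filtering step.

```python
import numpy as np

def remove_plane(pts, n_iter=200, dst=0.02, seed=0):
    """Sample candidate planes, keep the one with the most inliers
    (points with |ax + by + cz + d| <= DST), and remove those inliers."""
    rng = np.random.default_rng(seed)
    best_mask = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iter):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                   # degenerate (collinear) triplet
            continue
        n = n / norm                      # unit normal (a, b, c)
        d = -n @ p0
        mask = np.abs(pts @ n + d) <= dst # inliers within threshold DST
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return pts[~best_mask]

pts = np.random.default_rng(9).uniform(-1, 1, size=(5000, 3))
pts = pts[pts[:, 2] <= 3.0]               # background removal (depth axis)
print(remove_plane(pts).shape)
```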
  • another plane part may be further estimated, and the three-dimensional point cloud data of points existing in the other plane part may be further removed.
  • the floor portion is estimated, the three-dimensional point cloud data is once removed from the entire three-dimensional point cloud data, and then a plane is estimated again from the sample planes sampled by the random sampling method described above.
  • the accuracy of estimation of the plane part also depends on the shooting environment when shooting the target object. For example, in order to accurately estimate the plane portion, it is necessary to make the number of points forming the plane portion larger than the number of points forming the object. Therefore, for example, it is preferable to allow the user to select a shooting environment in which many walls are not reflected, or fixedly install the stereo camera in the store in a place where many walls are not reflected.
  • the target area extraction unit 5424 extracts the target area of the target using the three-dimensional point cloud data.
• using the noise-removed data, that is, the three-dimensional point cloud generated from the depth map by the three-dimensional point cloud generation unit 5124 and filtered by the background point cloud removing unit 5224 and/or the plane point cloud removing unit 5324, a three-dimensional point cloud corresponding to the object is further specified. For example, a three-dimensional point group within a predetermined spatial range in the virtual three-dimensional space may be specified. Then, the object region of the object in the image data can be extracted based on the specified three-dimensional point cloud data. The object region extracted in this way is effectively noise-removed and highly accurate, so that the accuracy of the silhouette image converted by the conversion unit 5024C can be further improved.
  • the shape data extraction unit 5524 extracts shape data indicating the shape of the target object based on the depth data of the region in the depth map corresponding to the target region extracted by the target region extraction unit 5424.
  • the object detection unit 5624 uses the RGB image data acquired by the acquisition unit 5024A to extract the image area of the target object in the image data by object detection.
  • the image area of the object is defined by a two-dimensional (xy) coordinate area that is perpendicular to the depth (z) direction.
  • a known method may be used for object detection, and for example, region identification using an object detection algorithm by deep learning is applicable.
  • An example of an object detection algorithm by deep learning is R-CNN (Regions with Convolutional Neural Networks).
  • the above-described three-dimensional point cloud generation unit 5124 may generate the three-dimensional point cloud data based on the depth map of the portion corresponding to the image area extracted by the object detection unit 5624. Thereby, three-dimensional point cloud data with less noise can be generated, and as a result, the accuracy of the shape data extracted by the shape data extraction unit 5524 can be improved.
  • the conversion unit 5024C converts the shape data extracted by the shape data extraction unit 5524 to generate a silhouette image of the object.
• the converted silhouette image is not simply represented by black-and-white binarized data; based on the depth data, the image area of the object can be represented by, for example, brightness values from 0 (“black”) to 1 (“white”), yielding a monochromatic multi-tone monochrome image (gradation silhouette image). That is, the silhouette image data can carry a larger amount of information by associating the image area of the object with the depth data.
• FIG. 25 is a flowchart for explaining the operation of the silhouette image generating device 5020 according to the other embodiment described with reference to FIG. 24. Through this flowchart, a silhouette image of the object can be generated from the image data in which the object is photographed (S5000).
  • the acquisition unit 5024A acquires image data including a depth map in which an object is photographed (S5010).
  • the object detection unit 5624 extracts the image area of the object from the RGB image data included in the image data (S5020).
• this step is optional.
  • the 3D point cloud generation unit 5124 generates 3D point cloud data corresponding to the depth map included in the image data to configure a virtual 3D coordinate space (S5030).
• when S5020 is performed, it is preferable to generate the three-dimensional point cloud data based on the depth map of the portion corresponding to the image area (xy coordinate area) of the object.
  • the background point cloud removing unit 5224 removes the 3D point cloud data that is away from the predetermined threshold distance along the depth (z) direction of the virtual 3D coordinate space (S5040). Further, the plane point cloud removing unit 5324 estimates the plane part in the image data (S5050), and further removes the three-dimensional point cloud data corresponding to the plane part from the three-dimensional point cloud data (S5060). Note that S5050 and S5060 may be repeated to estimate a plurality of plane portions in the image data and remove the three-dimensional point cloud data.
  • the target area extraction unit 5424 extracts the target area of the target based on the removed three-dimensional point cloud data (S5070). Then, the shape data extraction unit 5524 extracts the shape data indicating the shape of the target object based on the depth data of the target object region in the depth map (S5080). Finally, the conversion unit 5024C converts the shape data to generate a silhouette image of the object.
  • the silhouette image generation device 5020 includes the acquisition unit 5024A, the extraction unit 5024B, and the conversion unit 5024C.
  • the acquisition unit 5024A acquires image data including a depth map in which an object is photographed.
  • the extraction unit 5024B extracts the object region of the object using the three-dimensional point cloud data generated from the depth map, and indicates the shape of the object based on the depth data of the depth map corresponding to the object region. Extract shape data.
  • the conversion unit 5024C converts the shape data to generate a silhouette image of the object.
• the silhouette image generation device 5020 generates three-dimensional point cloud data using the depth map and then generates a silhouette image of the object. Since the three-dimensional point cloud that becomes noise in the three-dimensional point cloud data can be effectively identified and removed, the accuracy of specifying the object region of the object can be improved, and a highly accurate silhouette image can be obtained. Further, by using the depth map, a gradation silhouette image, which is a monochrome image associated with the depth data, can be generated as the silhouette image, providing a large amount of information about the shape of the object.
• the extraction unit 5024B extracts the object region of the object based on the three-dimensional point cloud data from which the points existing farther than the predetermined threshold distance along the depth direction have been removed.
• the extraction unit 5024B further estimates a plane portion in the image data from the three-dimensional point cloud data generated from the depth map, and extracts the object region of the object based on the data obtained by removing the three-dimensional point cloud data existing in the estimated plane portion.
• the extraction unit 5024B estimates the plane portion by calculating the content rate of the three-dimensional point cloud data associated with each sample plane sampled by random sampling, and by repeating this estimation. Since sample planes are extracted a plurality of times before the sample plane with the highest content rate is determined, the robustness of the estimation of the desired plane portion can be improved.
• the extraction unit 5024B may estimate a plurality of plane portions by repeating the plane estimation process. As a result, three-dimensional point groups that constitute planes and become noise in the image data can be effectively removed, which improves the accuracy of specifying the object region extracted by the object region extraction unit 5424.
• the acquisition unit 5024A further acquires RGB image data, and the extraction unit 5024B further extracts the image area of the object using the RGB image data and generates the three-dimensional point cloud data from the depth map of the portion corresponding to the image area.
• the object is a person, and the plane portion includes the floor. Thereby, a silhouette image of a person standing upright on the floor can be generated effectively.
  • FIG. 26 is a schematic diagram showing the configuration of the dimension data calculation device 6020 according to another embodiment.
  • the dimension data calculation device 6020 includes a shape parameter acquisition unit 6024D and a calculation unit 6024E.
  • the dimension data calculation device 6020 according to the present embodiment can be applied to the third and fourth embodiments and their modifications.
• the shape parameter acquisition unit 6024D included in the dimension data calculation device 6020 according to the present embodiment may be configured as all or part of the acquisition unit 3024A, the extraction unit 3024B, the conversion unit 3024C, and the estimation unit 3024D of the third embodiment.
• alternatively, it may be configured as all or part of the acquisition unit 4124A and the estimation unit 4124D of the fourth embodiment.
• the calculation unit 6024E included in the dimension data calculation device 6020 according to the present embodiment may be configured as all or part of the calculation unit 3024E of the third embodiment, or as all or part of the calculation unit 4124E of the fourth embodiment.
  • the shape parameter acquisition unit 6024D acquires the value of the shape parameter of the object.
• the calculation unit 6024E constructs the three-dimensional data of the object from the value of the shape parameter of the object, and calculates the dimension data of a predetermined part based on the information of the vertices of the three-dimensional data that constitute the part region associated with that part.
  • the dimensional data can be related to any part. In order to calculate the dimensional data, a calculation algorithm can be set according to each part.
  • the calculation unit 6024E includes a three-dimensional data configuration unit 6124, a part region configuration unit 6224, a calculation point extraction unit 6324, and a dimension data calculation unit 6424.
• the three-dimensional data configuration unit 6124 of the calculation unit 6024E constructs the three-dimensional data of the object from the acquired shape parameter values.
• the three-dimensional data to be constructed is preferably three-dimensional mesh data, that is, information on the set of vertices of the meshes forming a three-dimensional object (for example, the three-dimensional coordinates of the vertices).
  • the dimension measurement target is a human body and the configured three-dimensional object is a human body model, although not limited thereto.
• FIGS. 27a and 27b are schematic views of a human body model in a three-dimensional space when the object is a human body.
  • the human body model is composed of three-dimensional mesh data (mesh is not shown).
  • the human body model is preferably a model that stands upright on a horizontal plane, for example.
  • FIG. 27a is a plan view of the human body model
  • FIG. 27b is a front view thereof.
  • the three-dimensional coordinate system is adjusted with respect to the human body model such that the side direction is the x-axis, the front direction is the y-axis, and the height direction is the z-axis in the three-dimensional space.
  • the positive direction of the x-axis is the direction of the left half of the body from the center of gravity of the body when the human body model is viewed from the front
  • the negative direction is the direction of the right half of the body.
• the positive direction of the y-axis is from the body center of gravity toward the back
  • the negative direction is from the body center of gravity to the front direction.
  • the positive direction of the z axis is the upper body direction (or the vertical upward direction) from the body center of gravity in the height direction
  • the negative direction is the lower body direction (or the vertical downward direction).
• the part region configuration unit 6224 of the calculation unit 6024E configures a predetermined part region associated with a predetermined part from the information on the vertices of the three-dimensional mesh data constructed by the three-dimensional data configuration unit 6124.
  • the part region may be a tubular region having a center of gravity axis.
  • the tubular region is composed of a set of three-dimensional mesh data vertices that partially form a three-dimensional object within a predetermined range according to a predetermined region.
  • the configuration of the part region may include classifying (clustering) the distribution of the set of vertices under a predetermined condition.
• For example, the body region accommodating these parts is associated with the parts "hip" and "waist"; similarly, the arm region is associated with the part "wrist", and the shoulder region is associated with the part "armhole".
• the part region is not limited to a tubular region and may be any region in three-dimensional space. Further, the part region may be a planar region as well as a three-dimensional region.
• the calculation point extraction unit 6324 extracts a predetermined number of calculation points from the set of vertices of the three-dimensional mesh data that constitutes the part region. More specifically, the calculation point extraction unit 6324 selectively extracts, according to a predetermined part, calculation points partially associated with the part region from the set of vertices of the three-dimensional mesh data. For example, the tubular region is divided into the quadrants of a (two-dimensional) coordinate system that is orthogonal to the centroid axis of the tubular region and has the centroid axis as its origin, and the calculation points are selectively extracted from each quadrant individually.
• the number of calculation points extracted by the calculation point extraction unit 6324 is preferably 3 to 5 from the viewpoint of calculation amount and accuracy when the part region is a tubular region. The inventors have found that extracting 6 or more calculation points may increase the calculation amount and reduce the calculation efficiency, whereas the dimension data can be calculated with high accuracy as long as at least 3 calculation points are extracted.
• It is preferable that the manner of dividing the part region and the number of divisions be set individually according to the part. Furthermore, in addition to the calculation points associated with the divided part regions, vertices of the three-dimensional mesh data that satisfy a predetermined condition may be additionally extracted as calculation points.
• the dimension data calculation unit 6424 concretely calculates the dimension data based on the calculation points extracted by the calculation point extraction unit 6324. For example, when the part region is a tubular region, the dimension data is obtained by calculating the length of the circumference of the tubular region based on the information on the extracted calculation points. More specifically, the circumference length is obtained by connecting the calculation points adjacent to each other along the circumference of the tubular region with lines and summing their distances. Calculating the circumference of the tubular region from the extracted calculation points in this way allows the dimension data to be calculated efficiently and with high accuracy.
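• As an illustration of this circumference calculation, a minimal sketch follows; it assumes the extracted calculation points are given as three-dimensional coordinates already ordered along the circumference, and the function name is hypothetical.

```python
import numpy as np

def circumference(calc_points):
    """Approximate the circumference of a tubular part region by summing
    the distances between calculation points adjacent along the
    circumference, closing the loop from the last point back to the first.
    """
    pts = np.asarray(calc_points, dtype=float)       # shape (n, 3)
    diffs = np.roll(pts, -1, axis=0) - pts           # vectors between neighbours
    return float(np.linalg.norm(diffs, axis=1).sum())

# e.g. circumference([a1, b1, c1, e1, d1]) for five hip calculation points,
# listed in circumferential order (an assumed ordering for illustration).
```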
• FIG. 28 is a flowchart for explaining the operation of the dimension data calculation device 6020 described with reference to FIG. 26. Through the process (S6000) of this flowchart, the dimension data of a predetermined part of the object can be calculated from the shape parameter values.
  • the shape parameter acquisition unit 6024D acquires the value of the shape parameter of the object (S6010).
• this step is preferably implemented through the estimation processing of the estimation unit 3024D of the third embodiment or the estimation unit 4124D of the fourth embodiment.
• the three-dimensional data configuration unit 6124 of the calculation unit 6024E constructs three-dimensional mesh data from the shape parameter values of the target object (S6020).
  • the configured three-dimensional mesh data is information (for example, three-dimensional coordinates of the vertices) of a set of vertices of meshes that form a three-dimensional object.
• the part region configuration unit 6224 configures a predetermined part region associated in advance with a predetermined part from the information on the set of vertices of the constructed three-dimensional mesh data (S6030).
  • the calculation point extraction unit 6324 selectively extracts a plurality of characteristic calculation points associated with the part area (S6040). More specifically, the part region is divided according to a predetermined part, and about 3 to 5 calculation points are individually selected from the divided part regions (a calculation point extraction example regarding a specific part will be described later).
  • the dimension data calculation unit 6424 calculates the dimension data based on the extracted calculation points (S6050). For example, when the region area is a tubular area, the dimension data is calculated by calculating the circumference length of the tubular area based on the extracted calculation points. More specifically, the three-dimensional circumference length is calculated by connecting the calculation points adjacent to each other along the circumference of the tubular region with a line and calculating the sum of the distances.
• In this way, characteristic calculation points are extracted from the model of the object, and the circumference length of the tubular region is calculated using the extracted calculation points, so that the dimension data can be calculated efficiently and with high accuracy.
• FIGS. 29 to 31 are schematic diagrams showing an example in which the ranges of the body region and the arm regions, which are tubular regions, are specified with respect to the human body model and each region is configured.
• FIGS. 32a and 32b are schematic diagrams showing an example of extracting calculation points regarding the hips.
• FIGS. 33a and 33b are schematic diagrams showing an example of extracting calculation points regarding the waist, FIGS. 34a and 34b regarding the wrist, and FIGS. 35a and 35b regarding the armhole.
• the body region BR and the arm regions AR of the human body model are specified by cutting out a three-dimensional region extending over a predetermined range in the height direction (z-axis direction).
• Specifically, it is preferable to extract the set of vertices of the three-dimensional mesh data by cutting out, with respect to the z-axis, a region extending a predetermined distance (±D cm) vertically from a position at a predetermined ratio of the height (R% from the top).
• the region to be cut out includes the right arm region ARr, the left arm region ARl, and the body region BR. It should be noted that the values of the ratio R and the distance D are preferably selected individually depending on the part to be calculated.
• FIG. 30 shows the xy plane in which the right arm region ARr, the left arm region ARl, and the body region BR are viewed from the positive z-axis direction.
• C(cx, cy) is the center point of the human body model and is calculated from the coordinate values of the set of vertices of the three-dimensional mesh data.
• the center point C(cx, cy) is preferably the center of gravity of the body.
• FIG. 31 is a diagram showing the distribution of the vertices of the three-dimensional mesh data for the right arm region ARr, the left arm region ARl, and the body region BR shown in FIGS. 29 and 30.
• the horizontal axis of the coordinate system in FIG. 31 indicates the distance in the x-axis direction from the center point C(cx, cy) to each vertex.
• the vertical axis indicates the distance from the center point C(cx, cy) to each vertex.
• the area ar is an area corresponding to the right arm region ARr and the left arm region ARl.
• the region br is a region corresponding to the body region BR. That is, the set of vertices of the three-dimensional region extracted over a predetermined range in the height direction (z-axis direction) can be classified (clustered) into the areas ar and br. This makes it possible to determine whether each vertex of the three-dimensional region belongs to the body region BR or to the arm regions AR (here, the right arm region ARr and the left arm region ARl), as sketched below.
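• The following sketch illustrates the height-direction slicing and the body/arm classification described above; the fixed split threshold stands in for the classification (clustering) step, and all names and default values are illustrative assumptions.

```python
import numpy as np

def slice_by_height(vertices, height, ratio_r=0.45, d=0.02):
    """Cut out the set of vertices lying within +/- d of the z position
    located ratio_r (R% from the top) down the model height."""
    z_top = vertices[:, 2].max()
    z0 = z_top - ratio_r * height
    return vertices[np.abs(vertices[:, 2] - z0) <= d]

def classify_body_arms(sliced):
    """Classify (cluster) the sliced vertices into the body region br and
    the arm regions ar by their x-distance from the center point C."""
    center = sliced[:, :2].mean(axis=0)            # center point C(cx, cy)
    dx = np.abs(sliced[:, 0] - center[0])
    split = 0.5 * dx.max()                         # crude stand-in for clustering
    return sliced[dx < split], sliced[dx >= split]  # (body br, arms ar)
```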
• the calculation points for the parts "hip", "waist", "wrist", and "armhole" can then be extracted based on the information on the vertices of the three-dimensional mesh data of the body region BR or the arm regions AR.
• FIG. 32a is a schematic plan view showing an example of extracting five calculation points for calculating the hip dimension data from the set of vertices of the three-dimensional mesh data forming the tubular body region BR described above.
  • FIG. 32b is a schematic three-dimensional view thereof.
• an x'y'z' coordinate system is defined that has the center-of-gravity axis AX1 of the body region BR1 along the height direction as the z' axis, and an x'y' plane that is orthogonal to the z' axis and has the center-of-gravity axis AX1 as its origin.
• in the x'y' plane coordinate system viewed from the direction of the center-of-gravity axis AX1, the x' direction is the major axis direction of the substantially elliptical cross section of the body region BR1 (that is, the side direction of the human body model), and the y' direction is the minor axis direction of the same cross section (that is, the front-back direction of the human body model).
• five calculation points (a1, b1, c1, d1, e1) are preferably extracted in order to calculate the hip dimension data.
• of these, four (a1, b1, c1, d1) are individually extracted from the sets of vertices of the three-dimensional mesh data existing in each quadrant of the x'y' plane viewed from the direction of the center-of-gravity axis AX1.
  • a total of four calculation points may be extracted, one from each quadrant, or a total of three calculation points may be individually extracted from any three quadrants. In the following, it is assumed that a total of four calculation points are extracted.
• here, it is preferable to provide a restricted region LR1 in which the major-axis x' value satisfies -k1 ≤ x' ≤ k1, and to individually extract, as the calculation points a1 to d1, four vertices of the three-dimensional mesh data located in the restricted region LR1, one in each quadrant.
• the value of k1 is preferably about one quarter of the length of the cross section of the body region BR1 in the major axis direction.
• in each quadrant, it is preferable to individually extract as calculation points the vertices that are farthest from the origin (the center-of-gravity axis AX1) in the x'-axis direction, that is, the vertices that lie close to the boundary within the restricted region LR1. Specifically, in the first and fourth quadrants, the vertices a1 and d1 having the maximum x' value within the restricted region LR1 are preferably extracted as calculation points, and in the second and third quadrants, the vertices having the minimum x' value within the restricted region LR1 are preferably extracted as the calculation points b1 and c1.
• the hip usually has a shape that protrudes in the front-back direction in the human body model. Therefore, in addition to the four calculation points (a1 to d1) described above, it is preferable to additionally extract, as the remaining calculation point e1, a vertex located at the portion corresponding to the protrusion in the front direction. Specifically, the calculation point e1 is preferably extracted as the vertex farthest from the origin in the y'-axis (minor axis) direction (here, the vertex whose y' value is minimal).
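• The quadrant-based extraction of the hip calculation points described above might be sketched as follows, assuming the vertices are already expressed in the x'y'z' frame (origin on the centroid axis AX1) and that each quadrant contains at least one vertex inside the restricted region; all names are illustrative.

```python
import numpy as np

def hip_calc_points(verts, k1):
    """Extract five hip calculation points from (N, 3) vertices.

    a1, d1: maximum x' within |x'| <= k1 in quadrants I and IV
    b1, c1: minimum x' within |x'| <= k1 in quadrants II and III
    e1    : vertex with minimum y' (front-direction protrusion)
    """
    x, y = verts[:, 0], verts[:, 1]
    in_lr1 = np.abs(x) <= k1                 # restricted region LR1
    quads = [
        (x >= 0) & (y >= 0),                 # quadrant I   -> a1
        (x <  0) & (y >= 0),                 # quadrant II  -> b1
        (x <  0) & (y <  0),                 # quadrant III -> c1
        (x >= 0) & (y <  0),                 # quadrant IV  -> d1
    ]
    points = []
    for i, q in enumerate(quads):
        cand = verts[q & in_lr1]             # assumed non-empty per quadrant
        # farthest from the origin in the x' direction within LR1
        idx = np.argmax(cand[:, 0]) if i in (0, 3) else np.argmin(cand[:, 0])
        points.append(cand[idx])
    e1 = verts[np.argmin(y)]                 # front protrusion
    return points + [e1]
```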
• FIG. 33a is a schematic plan view showing an example of extracting five calculation points for calculating the waist dimension data from the set of vertices of the three-dimensional mesh data forming the tubular body region BR described above. Further, FIG. 33b is a schematic stereoscopic view thereof.
  • the x'y'z' coordinate system defined in FIGS. 33a and 33b is defined similarly to FIGS. 32a and 32b.
  • the center of gravity axis AX2 of the body region BR2 is the origin of the x'y' plane.
• five calculation points (a2, b2, c2, d2, e2) are preferably extracted in order to calculate the waist dimension data.
• four of them (a2, b2, c2, d2) are individually extracted from the sets of vertices of the three-dimensional mesh data existing in each quadrant of the x'y' plane viewed from the direction of the center-of-gravity axis AX2.
• a total of four calculation points may be extracted, one from each quadrant, or a total of three calculation points may be extracted from arbitrary three quadrants. Below, the example of extracting a total of four calculation points is assumed.
• for the waist, it is preferable to first select a reference vertex std as a reference point and to individually extract the vertices associated with the reference point std as the four calculation points a2, b2, c2, d2.
• in the human body model, the waist has a shape that protrudes in the front direction (here, the negative direction of the y' axis), and the set of vertices of the three-dimensional mesh data is biased toward the protruding portion.
  • the center of the substantially sectional shape of the body region BR2 is shifted from the origin, which is the center of gravity axis AX2, in the positive direction of the y′ axis.
  • the calculation points should be extracted in consideration of such a protruding portion of the waist.
• the above-mentioned reference point std identifies the vertex located at the portion corresponding to such a protrusion.
• the reference point std is extracted as the vertex farthest from the origin (the center-of-gravity axis AX2) in the front direction of the human body model (here, the negative direction of the y' axis; the vertex whose y' value is minimal).
  • the reference point std itself does not necessarily have to be adopted as the calculation point.
• After the reference point std is extracted, it is preferable to individually extract, as the calculation points a2, b2, c2, d2, the vertices in each quadrant that are close to the position of the reference point std in the z'-axis (center-of-gravity axis AX2) direction. That is, the calculation points a2, b2, c2, and d2 are vertices whose z' values are close to the z' value of the reference point std, in other words, of approximately the same height. For a part such as the waist whose dimension is measured along a horizontal plane, extracting vertices of similar height can further improve the dimensional accuracy.
• for the waist, it is further preferable to provide a restricted region LR2 in which the major-axis x' value satisfies -k2 ≤ x' ≤ k2 in the x'y' coordinate system.
• the fifth calculation point e2 is additionally extracted because, particularly in the case of the waist, the distribution of vertices is biased in the front direction as described above. That is, the set of vertices of the three-dimensional mesh data forming the body region BR2 is distributed from the origin (center-of-gravity axis AX2) toward the -y' region. In consideration of this, additionally extracting the vertex on the opposite side of the reference point std as the fifth calculation point e2 prevents dimensional error and further improves the dimensional accuracy.
• FIG. 34a is a schematic plan view showing an example of extracting four calculation points for calculating the wrist dimension data from the set of vertices of the three-dimensional mesh data forming the tubular arm region ARr or ARl described above.
• FIG. 34b is a schematic stereoscopic view thereof. Note that either arm region ARr or ARl may be the target of dimension measurement (hereinafter generically referred to as the arm region AR).
• an x'y'z' coordinate system is defined that has the center-of-gravity axis AX3 of the arm region AR along the height direction as the z' axis, and an x'y' plane that is orthogonal to the z' axis and has the center-of-gravity axis AX3 as its origin.
• in the x'y' plane coordinate system viewed from the direction of the center-of-gravity axis AX3, the x' direction is the major axis direction of the substantially elliptical cross section of the arm region AR (that is, the side direction of the human body model), and the y' direction is the minor axis direction of the same cross section (that is, the front direction of the human body model).
• four calculation points (a3, b3, c3, d3) are preferably extracted in order to calculate the wrist circumference dimension data.
• the wrist can be assumed to be the most constricted part of the arm region.
• therefore, it is preferable to identify the constricted part by scanning the arm region AR in the direction of the center-of-gravity axis and extracting the cross-sectional region cs whose length in the major axis direction and/or the minor axis direction is minimal.
• the determination may be based only on the length in the major axis direction, only on the length in the minor axis direction, or on both.
  • the cross-sectional area cs may be defined as a plane area or may be a three-dimensional area having a thickness in the z′-axis direction.
• the cross-sectional region cs may also be defined based on the back of the hand.
• specifically, the arm region AR is scanned in the direction of the center-of-gravity axis, and from the set of vertices of the three-dimensional mesh data, four points in total, namely the two vertices most distant in the major axis direction and the two vertices most distant in the minor axis direction, are extracted as the calculation points (a3, b3, c3, d3).
• that is, the vertex at the position most distant in the positive direction of the x' axis (here, the x' value is maximal) and closest to the z' axis is extracted as the calculation point a3, and the vertex at the position most distant in the negative direction of the x' axis (here, the x' value is minimal) and closest to the z' axis is extracted as the calculation point c3.
• similarly, the vertex closest to the z' axis at the position most distant in the positive direction of the y' axis is extracted as the calculation point b3, and the vertex closest to the z' axis at the position most distant in the negative direction of the y' axis is extracted as the calculation point d3.
• in this way, the arm region is scanned in the direction of the center-of-gravity axis, the region portion whose length in the major axis direction and/or the minor axis direction is minimal is estimated to be the relevant part, and four calculation points (a3, b3, c3, d3) are extracted.
• in particular, since the circumference of the wrist is shorter than that of the hip or waist, extracting four calculation points by this method is sufficient, and a fifth point is not necessarily required.
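• A possible sketch of the wrist-section scan described above follows; the number of slices and the use of the summed major/minor extents as the constriction measure are illustrative assumptions, and the names are hypothetical.

```python
import numpy as np

def find_wrist_section(arm_verts, n_slices=50):
    """Scan the arm region along its centroid (z') axis and return the
    slice whose extent in the major (x') and minor (y') directions is
    minimal, assumed here to be the constricted wrist part."""
    z = arm_verts[:, 2]
    edges = np.linspace(z.min(), z.max(), n_slices + 1)
    best, best_size = None, np.inf
    for lo, hi in zip(edges[:-1], edges[1:]):
        sl = arm_verts[(z >= lo) & (z < hi)]
        if len(sl) < 4:                      # too few vertices to measure
            continue
        size = np.ptp(sl[:, 0]) + np.ptp(sl[:, 1])   # major + minor extent
        if size < best_size:
            best, best_size = sl, size
    return best

def wrist_calc_points(cs):
    """Four calculation points: the extremes in the x' and y' directions."""
    a3 = cs[np.argmax(cs[:, 0])]             # +x' extreme
    c3 = cs[np.argmin(cs[:, 0])]             # -x' extreme
    b3 = cs[np.argmax(cs[:, 1])]             # +y' extreme
    d3 = cs[np.argmin(cs[:, 1])]             # -y' extreme
    return [a3, b3, c3, d3]
```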
• FIG. 35a is a schematic plan view showing an example of extracting four calculation points for calculating the armhole dimension data from the set of vertices of the three-dimensional mesh data forming the tubular shoulder region SR, which is defined based on the tubular body region BR described above. Further, FIG. 35b is a schematic stereoscopic view thereof.
• here, the vertex furthest (highest) in the positive direction of the z' axis is defined as the shoulder vertex, and a predetermined range in the x'y'z' directions is cut out with the shoulder vertex as a reference. The tubular shoulder region SR is then configured using the set of vertices of the three-dimensional mesh data existing within the cut-out range.
• an x''y''z'' coordinate system is defined that has the center-of-gravity axis AX4 of the shoulder region SR as the x'' axis, and a y''z'' plane that is orthogonal to the x'' axis and has the center-of-gravity axis AX4 as its origin. In the y''z'' plane coordinate system viewed from the direction of the center-of-gravity axis AX4, the y'' direction is the minor axis direction of the substantially elliptical cross section of the shoulder region SR (that is, the front-back direction of the human body model), and the z'' direction is the major axis direction of the same cross section (that is, the height direction of the human body model).
• four calculation points (a4, b4, c4, e4) are preferably extracted in order to calculate the armhole dimension data.
• three of them (a4, b4, c4) are preferably set to the vertices most distant from the center-of-gravity axis AX4 in the positive and negative directions of the minor axis (y'' axis) and in the positive direction of the major axis (z'' axis, that is, the vertically upward direction) in the y''z'' plane.
• specifically, the vertex farthest from the center-of-gravity axis AX4 in the positive direction of the minor axis (y'' axis) (here, the y'' value is maximal) is extracted as the calculation point a4, and similarly the vertex farthest from the axis AX4 in the negative direction of the minor axis (here, the y'' value is minimal) is extracted as the calculation point c4. Further, the vertex farthest from the axis AX4 in the positive direction of the major axis (z'' axis, that is, the vertically upward direction) (here, the z'' value is maximal) is preferably extracted as the calculation point b4.
• as the remaining calculation point e4, it is preferable to set a restricted region LR4 in the negative direction of the z'' axis (that is, the vertically downward direction) such that the major-axis z'' value falls within a predetermined range (z'' ≤ j), and to extract the vertex close to the boundary of the restricted region LR4 (here, the vertex whose z'' value is maximal within the region LR4).
• the restricted region LR4 is defined according to the side (armpit) region of the human body model.
• for example, the side region is divided and its vertically lower quarter is defined as the restricted region LR4.
• the side region (not shown) is preferably configured by cutting out a predetermined range in the x''y''z'' directions with the side vertex as a reference and using the set of vertices of the three-dimensional mesh data existing within the cut-out range.
• the side vertex is preferably defined as a vertex at the height (z value) at which the body region BR and the arm region AR can no longer be distinguished from each other in the classification shown in FIG. 31.
• that is, since the armpit region has an upwardly convex shape, the armpit vertex corresponds to the height at which the body region BR and the arm region AR can no longer be distinguished when the vertices are classified into the two regions. A sketch of the armhole extraction follows.
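• The armhole calculation-point extraction described above might be sketched as follows, assuming the shoulder-region vertices are already expressed in the x''y''z'' frame (centroid axis AX4 as the x'' axis); the parameter j and the function name are illustrative assumptions.

```python
import numpy as np

def armhole_calc_points(shoulder_verts, j):
    """Extract the four armhole calculation points (a4, b4, c4, e4) from
    (N, 3) shoulder-region vertices; `j` bounds the restricted region LR4
    (z'' <= j) in the vertically downward direction."""
    y, z = shoulder_verts[:, 1], shoulder_verts[:, 2]
    a4 = shoulder_verts[np.argmax(y)]        # +y'' (minor axis) extreme
    c4 = shoulder_verts[np.argmin(y)]        # -y'' (minor axis) extreme
    b4 = shoulder_verts[np.argmax(z)]        # +z'' extreme (top of shoulder)
    lr4 = shoulder_verts[z <= j]             # restricted region LR4
    e4 = lr4[np.argmax(lr4[:, 2])]           # vertex near the LR4 boundary
    return [a4, b4, c4, e4]
```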
• the extraction of calculation points for calculating the circumference of a tubular region has been described above; however, the present disclosure is not limited to this and can also be applied to the calculation of dimension data other than for tubular regions.
  • the length of the part “sleeve length” can be calculated.
  • the length of the part “shoulder width” can be calculated based on the information on the vertices of the left and right shoulders.
  • the dimension data calculation device 6020 includes the shape parameter acquisition unit 6024D and the calculation unit 6024E.
  • the shape parameter acquisition unit 6024D acquires the value of the shape parameter of the object.
• the calculation unit 6024E constructs the three-dimensional mesh data of the object from the values of the shape parameters of the object, and calculates the dimension data of a predetermined part based on the information on the vertices of the three-dimensional mesh data that constitute the part region associated with that part.
• since the dimension data of the predetermined part is calculated based on the information on the vertices of the three-dimensional mesh data forming the part region, the accuracy of the dimension data calculated for the measurement target part can be improved.
• the calculation unit 6024E selectively extracts, according to a predetermined part, the calculation points partially associated with the part region from the set of vertices of the three-dimensional mesh data, and calculates the dimension data based on these calculation points. As a result, the selective extraction of calculation points makes the calculation of the dimension data efficient.
• when the part region is a tubular region, the dimension data is obtained by calculating the circumference length of the tubular region based on the calculation points. Thereby, the dimension data of the tubular part can be calculated accurately and efficiently.
• the circumference length is calculated as the sum of the distances obtained by connecting adjacent calculation points with lines along the circumference of the tubular region.
• the dimension data calculation method includes a step of acquiring the values of the shape parameters of the object (S6010), a step of constructing the three-dimensional mesh data of the object from the values of the shape parameters (S6020), a step of configuring a predetermined part region associated in advance with a predetermined part from the information on the set of vertices of the constructed three-dimensional mesh data (S6030), a step of selectively extracting the calculation points associated with the part region (S6040), and a step of calculating the dimension data based on the extracted calculation points (S6050).
• according to this dimension data calculation method, the dimension data of a predetermined part is calculated based on the information on the vertices of the three-dimensional mesh data forming the part region, so that the accuracy of the dimension data calculated for the measurement target part can be improved.
  • FIG. 36 is a schematic diagram showing the configuration of a terminal device 7020 according to another embodiment.
• the terminal device 7020 has the functions of the terminal device 1010 of the first embodiment, the terminal device 2010S of the second embodiment, the terminal device 3010 of the third embodiment, or the terminal device 4010S of the fourth embodiment. It can be connected to each of the above-described dimension data calculation devices 1020, 2120, 3020, 4120, 6020. Furthermore, the terminal device 7020 is not limited to dimension data calculation devices and can be connected to any information processing device that processes information about an object 7007 from image data in which the object 7007 is photographed.
  • the terminal device 7020 includes an acquisition unit 7011, a communication unit 7012, a processing unit 7013, and an input/output unit 7014.
  • the acquisition unit 7011 acquires image data of the object 7007 captured.
• the acquisition unit 7011 may be composed of any monocular camera.
  • the data acquired by the acquisition unit 7011 is processed by the processing unit 7013.
  • the communication unit 7012 is realized by a network interface such as an arbitrary network card and enables communication with a communication device on the network by wire or wirelessly.
• the processing unit 7013 is realized by a processor such as a CPU (Central Processing Unit) and/or a GPU (Graphics Processing Unit) together with memory, and executes various types of information processing by reading programs.
  • the processing unit (determination unit) 7013 determines whether the target object included in the image data (that is, reflected in the image data) is a target object registered in advance.
• the processing unit 7013 uses the "object identification model", which identifies whether or not each pixel belongs to a predetermined object, to determine whether the object included in the image data is a pre-registered object.
  • the input/output unit 7014 receives input of various information to the terminal device 7020, and outputs various information from the terminal device 7020.
  • the input/output unit 7014 is realized by an arbitrary touch panel.
• the input/output unit (reception unit) 7014 shows the determination result of the processing unit 7013 on the screen (output unit) of the input/output unit 7014. Further, the input/output unit 7014 superimposes the determination image data, which is obtained from the per-pixel identification result of the object identification model, on the image data acquired by the acquisition unit 7011 and shows it on the screen of the input/output unit 7014.
• the object 7007 (here, a person) is displayed with the determination image data superimposed on the region BG other than the object (the hatched portion in FIG. 37).
• for this superimposition, a semi-transparent drawing method such as alpha blending can be used, as sketched below.
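• A minimal sketch of such an alpha-blended overlay is shown below; the color, the blending ratio, and the mask convention are illustrative assumptions rather than the disclosed implementation.

```python
import numpy as np

def overlay_mask(image, mask, color=(255, 0, 0), alpha=0.5):
    """Alpha-blend a per-pixel identification mask over the camera image
    so the user can check the recognized region before transmission.

    image: (H, W, 3) uint8 RGB frame; mask: (H, W) bool array marking the
    pixels identified as the region BG other than the target object.
    """
    out = image.astype(float)
    tint = np.asarray(color, dtype=float)
    # blend only where the mask is set (the hatched background region)
    out[mask] = (1.0 - alpha) * out[mask] + alpha * tint
    return out.astype(np.uint8)
```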
  • the input/output unit (reception unit) 7014 receives an input as to whether or not to transmit the image data to the information processing device (for example, the dimension data calculation device 1020, 2120, 3020, 4120, 6020).
• the user can visually check the superimposed image, confirm that no object likely to induce misrecognition is present, and then have the image data transmitted to the dimension data calculation device.
  • FIG. 38 is a sequence diagram between the terminal device 7020 and the information processing device (for example, the dimension data calculation devices 1020, 2120, 3020, 4120, 6020) for explaining the operation of the terminal device 7020.
• First, the image data of the object 7007 is acquired through the terminal device 7020 by the user's operation (V1).
• Next, the terminal device 7020 uses the object identification model to determine whether or not the object 7007 included in the image data is a pre-registered object, and outputs the determination result on the screen constituting the input/output unit 7014 (V2). For example, a screen as shown in FIG. 37 is displayed as the determination result.
  • the dimension data calculation device that has received the image data calculates the dimension data of the object 7007 using the image data transmitted from the terminal device 7020 (V5, V6).
• As described above, the terminal device 7020 outputs the determination result as to whether or not the object included in the image data is a pre-registered object and accepts an input as to whether or not to transmit the image data to the dimension data calculation device; it is thus possible to provide a terminal device that can reduce the user's operation time.
• Here, the dimension data calculation devices 1020, 2120, 3020, 4120, 6020 receive image data and separate the object from the background by segmentation to generate a silhouette image; the dimension data calculation device then calculates, for example, the dimension data of each part of the human body. With such a dimension data calculation device alone, the reliability of the calculation result cannot be confirmed until the device has finished processing the image data, and if the reliability turns out to be low, the user needs to acquire the image data again using the terminal device 7020. In contrast, when the terminal device 7020 is used, the user can be prompted to confirm the validity of the image data before the image data in which the object is photographed is transmitted, which in some cases shortens the time required to obtain a highly accurate calculation result.
• Even in an environment where various colors are present, such as in a store, or an environment in which a mannequin or the like may be mistakenly recognized as a human body, the user of the terminal device 7020 can proceed while checking in advance whether the generation of a silhouette image by the dimension data calculation device is likely to succeed, so the operation time may be shortened in some cases.
• the object identification model installed in the terminal device 7020 is required to execute segmentation at high speed. Therefore, a model that can infer at high speed is preferable, even at the cost of some segmentation accuracy. In this way, by providing segmentation both on the dimension data calculation device side and on the terminal device 7020 side, it is possible to achieve both the generation of a precise silhouette image and the removal of unnecessary objects.
  • the present disclosure is not limited to the above embodiments as they are.
  • the present disclosure can be embodied by modifying the constituent elements within a range not departing from the gist of the present invention at the implementation stage. Further, the present disclosure can form various disclosures by appropriately combining a plurality of constituent elements disclosed in each of the above-described embodiments. For example, some components may be deleted from all the components shown in the embodiment. Further, the constituent elements may be appropriately combined with different embodiments.
  • the dimension data calculation device includes an acquisition unit, an extraction unit, a conversion unit, and a calculation unit.
  • the acquisition unit acquires image data of a captured object and full-length data of the object.
  • the extraction unit extracts shape data indicating the shape of the object from the image data.
  • the conversion unit converts the shape data based on the full length data.
• the calculation unit reduces the dimension of the shape data converted by the conversion unit, and calculates the dimension data of each part of the object using the value of each reduced dimension and the weighting coefficient optimized for each part of the object.
  • the dimension data calculation device is the dimension data calculation device according to the first aspect, in which the acquisition unit acquires a plurality of image data obtained by photographing the object from different directions.
  • the dimension data calculation device is the dimension data calculation device according to the first or second aspect, and the calculation unit performs the first dimension reduction on the shape data converted by the conversion unit.
• the calculation unit linearly combines the values of each dimension obtained by the first dimension reduction with the weighting coefficients optimized for each part of the object to obtain a predetermined value; alternatively, a secondary feature amount is generated from the values of each dimension obtained by the first dimension reduction, and the secondary feature amount is combined with the weighting coefficients optimized for each part of the object to obtain the predetermined value.
• a dimension data calculation device is the dimension data calculation device according to any of the first to third aspects, in which the extraction unit extracts the shape data of the object by extracting the object region included in the image data using a semantic segmentation algorithm constructed using teacher data prepared for each type of object.
  • the dimension data calculation device is the dimension data calculation device according to the fourth aspect, in which the extraction unit extracts the shape data of the target object from the target object region by a grab cut algorithm.
• a dimension data calculation device is the dimension data calculation device according to the fifth aspect, in which the extraction unit corrects the shape of the object extracted by the grab cut algorithm based on the color image of a specific portion in the image data to generate new shape data.
  • the dimension data calculation device according to the seventh aspect is the dimension data calculation device according to any of the first to sixth aspects, and the object is a person.
  • the product manufacturing apparatus manufactures a product related to the shape of the target object by using the dimension data calculated by using the dimension data calculation apparatus according to any one of the first to seventh aspects.
  • the dimension data calculation program of the ninth aspect causes a computer to be realized as an acquisition unit, an extraction unit, a conversion unit, and a calculation unit.
  • the acquisition unit acquires image data of a captured object and full-length data of the object.
  • the extraction unit extracts shape data indicating the shape of the object from the image data.
  • the conversion unit converts the shape data based on the full length data.
  • the calculation unit calculates the dimension data of each part of the object using the shape data converted by the conversion unit.
  • the dimension data calculation method according to the tenth aspect obtains image data of a photographed object and full length data of the object.
  • shape data indicating the shape of the object is extracted from the image data.
  • the shape data is converted based on the full length data.
• Then, the dimension data of each part of the object is calculated using the converted shape data.
  • a product manufacturing system includes an acquisition unit, an extraction unit, a conversion unit, a calculation unit, and a product manufacturing device.
  • the acquisition unit acquires the image data of the target together with the full length data of the target from a photographing device that captures a plurality of images of the target.
  • the extraction unit extracts shape data indicating the shape of the object from the image data.
  • the conversion unit converts the shape data based on the full length data.
  • the calculation unit calculates the dimension data of each part of the object using the shape data converted by the conversion unit.
  • the product manufacturing apparatus manufactures a product related to the shape of the object by using the dimension data calculated by the calculation unit.
  • the information processing device includes a reception unit and an estimation unit.
  • the reception unit receives the silhouette image of the object.
• the estimation unit estimates the values of the shape parameters of the object from the received silhouette image using an object engine that associates the silhouette image of a sample object with the values of the shape parameters associated with the sample object.
  • the estimated value of the shape parameter of the object is associated with the dimensional data related to an arbitrary part of the object.
• An information processing apparatus of the thirteenth aspect is the information processing apparatus according to the twelfth aspect, wherein the predetermined number of shape parameters associated with the sample object are obtained by dimensionally reducing the three-dimensional data of the sample object.
  • the information processing apparatus according to the fourteenth aspect is the information processing apparatus according to the thirteenth aspect, in which dimension reduction is performed by principal component analysis.
  • three-dimensional data of the object is calculated by inversely transforming the projections related to the principal component analysis for the estimated values of the shape parameters.
  • the three-dimensional data is associated with the dimension data.
• An information processing apparatus according to a fifteenth aspect is the information processing apparatus according to the fourteenth aspect, in which a predetermined number of principal components of the second rank and lower, excluding the first-rank principal component, are selected as the shape parameters.
• An information processing apparatus according to a sixteenth aspect is the information processing apparatus according to the fifteenth aspect, in which the object is a person and the first-rank principal component is associated with the height of the person.
• An information processing apparatus is the information processing apparatus according to any one of the twelfth to sixteenth aspects, wherein the silhouette image of the sample object is a projection image, in a predetermined direction, of a three-dimensional object constructed from the three-dimensional data of the sample object.
• An information processing apparatus is the information processing apparatus according to any one of the twelfth to seventeenth aspects, wherein the object engine is generated by learning the relationship between the silhouette image of the sample object and the values of the predetermined number of shape parameters associated with the sample object.
  • An information processing apparatus according to a nineteenth aspect is the information processing apparatus according to any one of the twelfth aspect to the eighteenth aspect, further including a calculation unit.
• the calculation unit configures three-dimensional data of a plurality of vertices of the object from the shape parameter values estimated for the object. Then, based on the configured three-dimensional data, the dimension data between any two of the vertices of the object is calculated.
• An information processing apparatus is the information processing apparatus according to the nineteenth aspect, wherein the calculation unit calculates the dimension data between the two vertices along the curved surface of the three-dimensional object composed of the three-dimensional data of the plurality of vertices of the object.
• An information processing apparatus is the information processing apparatus according to any one of the twelfth to twentieth aspects, in which the silhouette image of the object is generated by separating the image of the object from the image other than the object based on depth data obtained using a depth data measuring device.
  • An information processing apparatus is the information processing apparatus according to a twenty-second aspect, wherein the depth data measuring device is a stereo camera.
  • An information processing method is to learn a relationship between a silhouette image of a sample target object and values of a predetermined number of shape parameters associated with the sample target object to generate a target object engine. Next, the silhouette image of the target object is received.
  • the information processing device includes a reception unit and an estimation unit.
  • the reception unit receives the attribute data of the target object.
• the estimation unit estimates the values of the shape parameters of the object from the received attribute data using an object engine that associates the attribute data of a sample object with the values of a predetermined number of shape parameters associated with the sample object.
  • the estimated value of the shape parameter of the object is associated with the dimensional data related to an arbitrary part of the object.
• An information processing apparatus is the information processing apparatus according to the twenty-fourth aspect, wherein the predetermined number of shape parameters associated with the sample object are obtained by dimensionally reducing the three-dimensional data of the sample object.
• An information processing apparatus is the information processing apparatus according to the twenty-fifth aspect, in which dimension reduction is performed by principal component analysis. Then, a predetermined number of principal components of the second rank and lower, excluding the first-rank principal component, are selected as the shape parameters.
• the information processing apparatus according to the twenty-seventh aspect is the information processing apparatus according to the twenty-sixth aspect, wherein the object is a person and the first-rank principal component is associated with the height of the person.
  • the attribute data includes height data of the object.
• An information processing apparatus is the information processing apparatus according to any one of the twenty-fourth to twenty-seventh aspects, wherein the object engine is generated by learning the relationship between the attribute data of the sample object and the values of the predetermined number of shape parameters associated with the sample object.
  • An information processing apparatus according to a twenty-ninth aspect is the information processing apparatus according to any one of the twenty-fourth aspect to the twenty-eighth aspect, further including a calculation unit. The calculation unit configures three-dimensional data of a plurality of vertices in the object from the shape parameter values estimated for the object.
  • the calculation unit calculates the dimension data between any two of the apexes of the object based on the configured three-dimensional data.
• An information processing apparatus is the information processing apparatus according to the twenty-ninth aspect, in which the calculation unit calculates the dimension data between the two vertices along the curved surface of the three-dimensional object composed of the three-dimensional data of the plurality of vertices of the object.
  • An information processing method is to learn a relationship between attribute data of a sample target object and values of a predetermined number of shape parameters associated with the sample target object to generate a target object engine. Next, the attribute data of the target object is received.
  • the dimension data calculation device includes an acquisition unit, an extraction unit, a conversion unit, an estimation unit, and a calculation unit.
  • the acquisition unit acquires image data of a captured object and full-length data of the object.
  • the extraction unit extracts shape data indicating the shape of the object from the image data.
  • the conversion unit converts the shape data into a silhouette image based on the full length data.
• the estimation unit estimates the values of the predetermined number of shape parameters from the silhouette image using an object engine that associates the silhouette image of a sample object with the values of the predetermined number of shape parameters associated with the sample object.
  • the calculation unit calculates the dimension data of the object based on the estimated values of the predetermined number of shape parameters.
  • the dimension data calculation device is the dimension data calculation device according to the thirty-second aspect, wherein a predetermined number of shape parameters are obtained by dimensionally reducing the three-dimensional data of the sample object.
  • the dimension data calculation apparatus according to the thirty-fourth aspect is the dimension data calculation apparatus according to the thirty-third aspect, in which dimension reduction is performed by principal component analysis.
  • a product manufacturing apparatus manufactures a product related to the shape of an object by using at least one dimension data calculated using the dimension data calculation apparatus according to any one of the thirty-second to thirty-fourth aspects.
  • the dimension data calculation device includes an acquisition unit and a calculation unit.
  • the acquisition unit acquires attribute data including at least one of full length data and weight data of the object.
  • the calculation unit calculates the dimension data of each part of the object by performing a polynomial regression on the attribute data using the coefficient learned by machine learning.
• a dimension data calculation device is the dimension data calculation device according to the thirty-sixth aspect, in which the calculation unit calculates the dimension data of each part of the object by performing quadratic regression on the attribute data using coefficients learned by machine learning.
• the dimension data calculation device of the thirty-eighth aspect is the dimension data calculation device according to the thirty-sixth or thirty-seventh aspect, and the object is a person.
  • the dimension data calculation program causes a computer to be realized as an acquisition unit and a calculation unit.
  • the acquisition unit acquires attribute data including at least one of full length data and weight data of the object.
  • the calculation unit calculates the dimension data of each part of the object by performing a polynomial regression on the attribute data using the coefficient learned by machine learning.
  • a dimension data calculating method obtains attribute data including at least one of full length data and weight data of an object. Then, the attribute data is subjected to polynomial regression using the coefficient learned by machine learning to calculate the dimension data of each part of the object.
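• As an illustration of the polynomial (quadratic) regression described in the thirty-sixth to fortieth aspects, the following sketch fits quadratic coefficients from height and weight to the dimension of one body part by least squares; the feature set, data shapes, and names are illustrative assumptions.

```python
import numpy as np

def fit_quadratic(attrs, dims):
    """Learn quadratic-regression coefficients mapping attribute data
    (e.g. height and weight) to the dimension of one body part.

    attrs: (N, 2) array of [height, weight]; dims: (N,) measured values.
    """
    h, w = attrs[:, 0], attrs[:, 1]
    # design matrix with constant, linear, and second-order terms
    X = np.stack([np.ones_like(h), h, w, h * h, w * w, h * w], axis=1)
    coef, *_ = np.linalg.lstsq(X, dims, rcond=None)
    return coef

def predict_quadratic(coef, height, weight):
    """Predict one part dimension from the learned coefficients."""
    x = np.array([1.0, height, weight, height**2, weight**2, height * weight])
    return float(coef @ x)
```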
• the silhouette image generation device includes an acquisition unit, an extraction unit, and a conversion unit. The acquisition unit acquires image data, including a depth map, in which an object is photographed.
• the extraction unit extracts the object region of the object using the three-dimensional point cloud data generated from the depth map, and extracts shape data indicating the shape of the object based on the depth data of the depth map corresponding to the object region.
  • the conversion unit converts the shape data to generate a silhouette image of the object.
• a silhouette image generating apparatus is the silhouette image generating apparatus according to the forty-first aspect, in which the conversion unit generates, as the silhouette image, a monochrome image associated with the depth data corresponding to the object region.
• a silhouette image generating apparatus is the silhouette image generating apparatus according to the forty-first or forty-second aspect, wherein the extraction unit extracts the object region of the object based on data obtained by removing, from the three-dimensional point cloud data, the three-dimensional point cloud data existing beyond a predetermined threshold distance along the depth direction.
• the silhouette image generating apparatus of the forty-fourth aspect is the silhouette image generating apparatus according to any one of the forty-first to forty-third aspects, wherein the extraction unit further estimates a plane portion from the three-dimensional point cloud data generated from the depth map.
• a silhouette image generating apparatus is the silhouette image generating apparatus according to the forty-fourth aspect, wherein the extraction unit estimates the plane portion based on the content rate of the three-dimensional point cloud data contained in a sample plane obtained by random sampling.
• the silhouette image generating apparatus of the forty-sixth aspect is the silhouette image generating apparatus according to any one of the forty-first to forty-fifth aspects, wherein the object is a person and the plane portion includes the floor.
  • a dimension data calculation device includes an acquisition unit, an extraction unit, and a calculation unit.
  • the acquisition unit acquires image data of a captured object and full-length data of the object.
  • the extraction unit extracts shape data indicating the shape of the object from the image data.
  • the calculation unit calculates the dimension data of each part of the object using the shape data.
  • the image data includes a depth map, and the shape data extracted by the extraction unit is associated with the depth data of the object in the depth map.
• a dimension data calculation device is the dimension data calculation device according to the forty-seventh aspect, in which the extraction unit extracts the object region of the object from the image data based on the depth map. The shape data is associated with the depth data of the object in the object region.
  • a dimension data calculation device is the dimension data calculation device according to the forty-seventh aspect or the forty-eighth aspect, further including a conversion unit that converts the shape data based on the full length data.
• a dimension data calculation device is the dimension data calculation device according to the forty-ninth aspect, in which the conversion unit converts the shape data into new shape data based on the full length data and the depth data of the object region.
• the dimension data calculation device of the fifty-first aspect is the dimension data calculation device according to the forty-ninth or fiftieth aspect, wherein the calculation unit reduces the dimension of the shape data converted by the conversion unit and calculates the dimension data of each part of the object using the value of each reduced dimension and the weighting coefficient optimized for each part of the object.
• the product manufacturing apparatus manufactures a product related to the shape of the object by using the dimension data calculated using the dimension data calculation device according to any one of the forty-seventh to fifty-first aspects.
  • the dimension data calculation device includes an acquisition unit, an extraction unit, an estimation unit, and a calculation unit.
  • the acquisition unit acquires image data of a captured object and full-length data of the object.
  • the extraction unit extracts shape data indicating the shape of the object from the image data.
• the estimation unit estimates the values of the predetermined number of shape parameters from the silhouette image of the object using an object engine that associates the silhouette image of a sample object with the values of the predetermined number of shape parameters associated with the sample object.
  • the calculation unit calculates the dimension data of the object based on the estimated values of the predetermined number of shape parameters.
  • the image data includes a depth map, and the shape data is associated with the depth data of the object in the depth map.
• the dimension data calculation device of the fifty-fourth aspect is the dimension data calculation device according to the fifty-third aspect, in which the extraction unit extracts the object region of the object from the image data based on the depth map. The shape data is associated with the depth data of the object in the object region.
  • a dimension data calculation device is the dimension data calculation device according to the fifty-third aspect or the fifty-fourth aspect, including a conversion unit that converts the shape data into new shape data based on the depth data of the object region. Further prepare.
  • the dimension data calculation device according to the fifty-sixth aspect is the dimension data calculation device according to any of the fifty-third aspect to the fifty-fifth aspect, wherein the predetermined number of shape parameters dimensionally reduces the three-dimensional data of the sample object. can get.
  • the product manufacturing apparatus manufactures a product related to the shape of the target object using the dimension data calculated using the dimension data calculation apparatus according to any one of the fifty-third to the fifty-sixth aspects.
  • the terminal device is connected to an information processing device that processes information about an object from image data obtained by capturing the object.
  • the terminal device includes an acquisition unit, a determination unit, and a reception unit.
  • the acquisition unit acquires image data of a target object.
  • the determination unit determines whether or not the target object included in the image data is a target object registered in advance.
  • the reception unit shows the determination result of the determination unit on the output unit, and accepts an input as to whether or not to transmit the image data to the information processing device.
  • a terminal device according to the fifty-ninth aspect is the terminal device according to the fifty-eighth aspect, wherein the determination unit includes an object identification model that identifies, for each pixel in the image data, whether or not the pixel belongs to a predetermined object.
  • a terminal device according to the sixtieth aspect is the terminal device according to the fifty-ninth aspect, in which the reception unit shows, on the output unit, determination image data obtained from the per-pixel identification results of the object identification model, superimposed on the image data acquired by the acquisition unit.
  • a dimension data calculation device according to the sixty-first aspect includes a shape parameter acquisition unit and a calculation unit. The shape parameter acquisition unit acquires the values of the shape parameters of the object. The calculation unit constructs three-dimensional mesh data of the object from the values of the shape parameters, and calculates the dimension data of a predetermined part based on information on the vertices of the three-dimensional mesh data that form the part region associated with the predetermined part.
  • a dimension data calculation device according to the sixty-second aspect is the dimension data calculation device according to the sixty-first aspect, in which the calculation unit selectively extracts the calculation points associated with the part region from the set of vertices of the three-dimensional mesh data according to the predetermined part, and calculates the dimension data based on the calculation points.
  • the dimension data calculation device according to the sixty-third aspect is the dimension data calculation device according to the sixty-second aspect, wherein the part region is a tubular region, and the calculation unit calculates the dimension data by calculating the circumferential length of the tubular region based on the calculation points.
  • a dimension data calculation device according to the sixty-fourth aspect is the dimension data calculation device according to the sixty-third aspect, in which the calculation unit individually extracts the calculation points from sets of vertices lying in three or more quadrants of a coordinate system that is orthogonal to the centroid axis of the tubular region, has its origin on the centroid axis, and is defined with respect to the major axis direction and the minor axis direction.
  • a dimension data calculation device according to the sixty-fifth aspect is the dimension data calculation device according to the sixty-third aspect, in which the calculation unit scans the tubular region along the centroid axis and extracts the cross section of the tubular region whose length in the major axis direction and/or length in the minor axis direction is smallest. The calculation unit then extracts, as the calculation points, the two vertices giving the minimum length in the major axis direction and the two vertices giving the minimum length in the minor axis direction in that cross section (a sketch of computing a circumference from such calculation points follows this list).
  • a method based on principal component analysis is mentioned above as an example of dimension reduction, but the dimension reduction method is not limited to it.
  • dimension reduction methods such as autoencoders, latent semantic analysis (LSA), and independent component analysis (ICA) may also be adopted.
  • when teacher data for the dimension data are available, it is also possible to adopt supervised dimension reduction methods such as partial least squares (PLS), canonical correlation analysis (CCA), and linear discriminant analysis (LDA).
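As an illustration of the circumference calculation described in the sixty-third to sixty-fifth aspects, the following minimal Python sketch is not part of the disclosure: the calculation points, the helper name, and the ellipse approximation are all assumptions chosen for illustration. It estimates a circumferential length from four hypothetical calculation points spanning the major and minor axes of one cross section.

    import numpy as np

    def ellipse_circumference(a: float, b: float) -> float:
        # Ramanujan's approximation for the perimeter of an ellipse with
        # semi-axes a and b (an assumed model of the cross section).
        h = ((a - b) / (a + b)) ** 2
        return np.pi * (a + b) * (1 + 3 * h / (10 + np.sqrt(4 - 3 * h)))

    # Hypothetical calculation points of one cross section (x, y in metres):
    # two vertices spanning the major axis and two spanning the minor axis.
    major_pts = np.array([[-0.15, 0.0], [0.15, 0.0]])
    minor_pts = np.array([[0.0, -0.10], [0.0, 0.10]])

    a = np.linalg.norm(major_pts[0] - major_pts[1]) / 2   # semi-major axis
    b = np.linalg.norm(minor_pts[0] - minor_pts[1]) / 2   # semi-minor axis
    print(ellipse_circumference(a, b))                    # e.g. a waist circumference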

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The present invention addresses the problem of providing highly accurate dimensional data of each part of a target object. A dimensional data calculation device 1020 is provided with an acquisition unit 1024A, an extraction unit 1024B, a conversion unit 1024C, and a calculation unit 1024D. The acquisition unit 1024A acquires image data obtained by photographing the target object, and total length data of the target object. The extraction unit 1024B extracts, from the image data, shape data indicating the shape of the target object. The conversion unit 1024C converts the shape data into a silhouette on the basis of the total length data. The calculation unit 1024D, using the shape data converted by the conversion unit 1024C, calculates dimensional data of each part of the target object.

Description

Dimension data calculation device, product manufacturing device, information processing device, silhouette image generation device, and terminal device

The present disclosure relates to a dimension data calculation device, a product manufacturing device, an information processing device, a silhouette image generation device, and a terminal device.

Conventionally, apparatuses for manufacturing a product based on the shape of an object have been studied. For example, Patent Document 1 discloses a technique in which an image of a fingernail is captured, nail information necessary for creating an artificial nail, such as the shape, position, and curvature of the nail, is obtained from the acquired nail image and stored, and an artificial nail part is created based on this nail information.

Patent Document 1: Japanese Patent Laid-Open No. 2017-018158

However, with the conventional technology, it has been difficult to calculate a large number of shapes of an object with high accuracy.
In a first aspect of the present invention, there is provided a dimension data calculation device including: an acquisition unit that acquires image data in which an object is photographed and full-length data of the object; an extraction unit that extracts shape data indicating the shape of the object from the image data; a conversion unit that converts the shape data based on the full-length data; and a calculation unit that dimensionally reduces the shape data converted by the conversion unit and calculates the dimension data of each part of the object using the reduced values of the respective dimensions and weighting coefficients optimized for each part of the object.

In a second aspect of the present invention, there is provided a product manufacturing apparatus that manufactures a product related to the shape of an object using the dimension data calculated using the dimension data calculation device of the first aspect.
In a third aspect of the present invention, there is provided an information processing device including: a reception unit that receives a silhouette image of an object; and an estimation unit that estimates the values of the shape parameters of the object from the received silhouette image, using an object engine that associates the silhouette image of a sample object with the values of a predetermined number of shape parameters associated with the sample object, wherein the estimated values of the shape parameters of the object are associated with dimension data related to an arbitrary part of the object.

In a fourth aspect of the present invention, there is provided an information processing device including: a reception unit that receives attribute data of an object; and an estimation unit that estimates the values of the shape parameters of the object from the received attribute data, using an object engine that associates the attribute data of a sample object with the values of a predetermined number of shape parameters associated with the sample object, wherein the estimated values of the shape parameters of the object are associated with dimension data related to an arbitrary part of the object.

In a fifth aspect of the present invention, there is provided a dimension data calculation device including: an acquisition unit that acquires image data in which an object is photographed and full-length data of the object; an extraction unit that extracts shape data indicating the shape of the object from the image data; a conversion unit that converts the shape data into a silhouette image based on the full-length data; an estimation unit that estimates the values of a predetermined number of shape parameters from the silhouette image, using an object engine that associates the silhouette image of a sample object with the values of a predetermined number of shape parameters associated with the sample object; and a calculation unit that calculates the dimension data of the object based on the estimated values of the predetermined number of shape parameters.

In a sixth aspect of the present invention, there is provided a product manufacturing apparatus that manufactures a product related to the shape of an object using at least one piece of dimension data calculated by the dimension data calculation device of the fifth aspect.
In a seventh aspect of the present invention, there is provided a dimension data calculation device including: an acquisition unit that acquires attribute data including at least one of full-length data and weight data of an object; and a calculation unit that calculates the dimension data of each part of the object by performing polynomial regression on the attribute data using coefficients learned by machine learning.
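As a rough illustration of the polynomial regression described in this aspect, the following Python sketch uses hypothetical teacher data and a scikit-learn-based implementation; the patent does not specify a library, and all values shown are stand-ins.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import PolynomialFeatures

    # Hypothetical teacher data: (height_cm, weight_kg) -> waist circumference (cm).
    X = np.array([[160, 55], [170, 65], [180, 80], [175, 70], [165, 60]])
    y = np.array([68.0, 75.0, 88.0, 80.0, 72.0])

    # Polynomial regression: expand the attributes into second-order terms,
    # then fit linear coefficients (the "coefficients learned by machine learning").
    poly = PolynomialFeatures(degree=2, include_bias=False)
    model = LinearRegression().fit(poly.fit_transform(X), y)

    # Estimate one dimension of a new object from its attribute data.
    print(model.predict(poly.transform(np.array([[172, 68]]))))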
In an eighth aspect of the present invention, there is provided a silhouette image generation device including: an acquisition unit that acquires image data, including a depth map, in which an object is photographed; an extraction unit that extracts the object region of the object using three-dimensional point cloud data generated from the depth map, and extracts shape data indicating the shape of the object based on the depth data of the depth map corresponding to the object region; and a conversion unit that converts the shape data to generate a silhouette image of the object.

In a ninth aspect of the present invention, there is provided a dimension data calculation device including: an acquisition unit that acquires image data in which an object is photographed and full-length data of the object; an extraction unit that extracts shape data indicating the shape of the object from the image data; and a calculation unit that calculates the dimension data of each part of the object using the shape data, wherein the image data includes a depth map and the shape data extracted by the extraction unit is associated with the depth data of the object in the depth map.

In a tenth aspect of the present invention, there is provided a product manufacturing apparatus that manufactures a product related to the shape of an object using the dimension data calculated using the dimension data calculation device of the ninth aspect.

In an eleventh aspect of the present invention, there is provided a dimension data calculation device including: an acquisition unit that acquires image data in which an object is photographed and full-length data of the object; an extraction unit that extracts shape data indicating the shape of the object from the image data; an estimation unit that estimates the values of a predetermined number of shape parameters from a silhouette image of the object, using an object engine that associates the silhouette image of a sample object with the values of a predetermined number of shape parameters associated with the sample object; and a calculation unit that calculates the dimension data of the object based on the estimated values of the predetermined number of shape parameters, wherein the image data includes a depth map and the shape data is associated with the depth data of the object in the depth map.

In a twelfth aspect of the present invention, there is provided a product manufacturing apparatus that manufactures a product related to the shape of an object using the dimension data calculated using the dimension data calculation device of the eleventh aspect.

In a thirteenth aspect of the present invention, there is provided a terminal device connected to an information processing device that processes information about an object from image data in which the object is photographed, the terminal device including: an acquisition unit that acquires the image data in which the object is photographed; a determination unit that determines whether or not the object included in the image data is a pre-registered object; and a reception unit that shows the determination result of the determination unit on an output unit and accepts an input as to whether or not to transmit the image data to the information processing device.

In a fourteenth aspect of the present invention, there is provided a dimension data calculation device including: a shape parameter acquisition unit that acquires the values of the shape parameters of an object; and a calculation unit that constructs three-dimensional mesh data of the object from the values of the shape parameters and calculates the dimension data of a predetermined part based on information on the vertices of the three-dimensional mesh data that form the part region associated with the predetermined part.
  • A schematic diagram showing the configuration of a dimension data calculation device 1020 according to a first embodiment.
  • A flowchart for explaining the operation of the dimension data calculation device 1020 according to the same embodiment.
  • A schematic diagram showing the concept of shape data according to the same embodiment.
  • A schematic diagram showing the concept of a product manufacturing system 1001 according to the same embodiment.
  • A sequence diagram for explaining the operation of the product manufacturing system 1001 according to the same embodiment.
  • A schematic diagram showing an example of a screen displayed on a terminal device 1010 according to the same embodiment.
  • A schematic diagram showing an example of a screen displayed on the terminal device 1010 according to the same embodiment.
  • A schematic diagram showing the configuration of a dimension data calculation device 2120 according to a second embodiment.
  • A schematic diagram showing the concept of data used in quadratic regression according to the same embodiment.
  • A schematic diagram showing the concept of a product manufacturing system 2001S according to the same embodiment.
  • A schematic diagram of a dimension data calculation system 3100 according to a third embodiment.
  • A flowchart showing the operation of the learning device 3025 of FIG. 11.
  • A flowchart showing the operation of the dimension data calculation device 3020 of FIG. 11.
  • A schematic explanatory diagram showing the characteristics of shape parameters.
  • A schematic graph showing the characteristics of shape parameters.
  • A schematic diagram showing the concept of a product manufacturing system 3001 according to the third embodiment.
  • A sequence diagram showing the operation of the product manufacturing system 3001 of FIG. 16.
  • A schematic diagram showing an example of a screen displayed on the terminal device 3010 of FIG. 16.
  • A schematic diagram showing an example of a screen displayed on the terminal device 3010 of FIG. 16.
  • A schematic diagram of a dimension data calculation system 4100 according to a fourth embodiment.
  • A flowchart showing the operation of the learning device 4125 of FIG. 20.
  • A flowchart showing the operation of the dimension data calculation device 4020 of FIG. 20.
  • A schematic diagram showing the concept of a product manufacturing system 4001S according to the fourth embodiment.
  • A schematic diagram showing the configuration of a silhouette image generation device 5020 according to another embodiment.
  • A flowchart for explaining the operation of the silhouette image generation device 5020 according to the same embodiment.
  • A schematic diagram showing the configuration of a dimension data calculation device 6020 according to another embodiment.
  • A schematic diagram of a human body model in the case where the object is a human body.
  • A schematic diagram of a human body model in the case where the object is a human body.
  • A flowchart for explaining the operation of the dimension data calculation device 6020 according to the other embodiment.
  • A schematic diagram showing an example of constructing a torso region and an arm region, which are tubular regions, for the human body model.
  • A schematic diagram showing an example of constructing a torso region and an arm region, which are tubular regions, for the human body model.
  • A schematic diagram showing an example of constructing a torso region and an arm region, which are tubular regions, for the human body model.
  • A schematic plan view showing an example of extracting calculation points for the "hip".
  • A schematic perspective view showing an example of extracting calculation points for the "hip".
  • A schematic plan view showing an example of extracting calculation points for the "waist".
  • A schematic perspective view showing an example of extracting calculation points for the "waist".
  • A schematic plan view showing an example of extracting calculation points for the "wrist".
  • A schematic perspective view showing an example of extracting calculation points for the "wrist".
  • A schematic plan view showing an example of extracting calculation points for the "armhole".
  • A schematic perspective view showing an example of extracting calculation points for the "armhole".
  • A schematic diagram showing the configuration of a terminal device 7020 according to another embodiment.
  • A schematic diagram showing an example of a screen displayed on the terminal device 7020 of FIG. 36.
  • A sequence diagram for explaining the operation of the terminal device 7020 of FIG. 36.
<First Embodiment>

(1-1) Configuration of Dimension Data Calculation Device

FIG. 1 is a schematic diagram showing the configuration of the dimension data calculation device 1020 according to this embodiment.
The dimension data calculation device 1020 can be realized by any computer, and includes a storage unit 1021, an input/output unit 1022, a communication unit 1023, and a processing unit 1024. The dimension data calculation device 1020 may also be realized as hardware using an LSI (Large Scale Integration), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or the like.

The storage unit 1021 stores various kinds of information and is realized by an arbitrary storage device such as a memory or a hard disk. For example, the storage unit 1021 stores, in association with the length, weight, and the like of the object, the weighting coefficients necessary for executing the information processing described later. The weighting coefficients are obtained in advance by performing machine learning on teacher data consisting of attribute data, image data, and dimension data described later.
The input/output unit 1022 is realized by a keyboard, a mouse, a touch panel, or the like, and inputs various kinds of information to the computer and outputs various kinds of information from the computer.

The communication unit 1023 is realized by an arbitrary network card or the like, and enables wired or wireless communication with communication devices on a network.

The processing unit 1024 executes various kinds of information processing and is realized by a processor such as a CPU or a GPU and a memory. When the program stored in the storage unit 1021 is read into the CPU, GPU, or the like of the computer, the processing unit 1024 functions as an acquisition unit 1024A, an extraction unit 1024B, a conversion unit 1024C, and a calculation unit 1024D.

The acquisition unit 1024A acquires image data in which the object is photographed, as well as the full-length data, weight data, and the like of the object. Here, the acquisition unit 1024A acquires a plurality of image data obtained by photographing the object from different directions.
The extraction unit 1024B extracts shape data indicating the shape of the object from the image data. Specifically, the extraction unit 1024B extracts the shape data of the object by extracting the object region included in the image data using a semantic segmentation algorithm (such as Mask R-CNN) prepared for each type of object. Here, the semantic segmentation algorithm is constructed using teacher data in which the shape of the object is not specified.
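For illustration only, the following Python sketch shows one way such an object region could be obtained with an off-the-shelf Mask R-CNN from torchvision; the model choice, score threshold, and the use of the COCO "person" class are assumptions, whereas the embodiment's segmenter is trained per object type.

    import torch
    import torchvision

    # Off-the-shelf Mask R-CNN as a stand-in for the per-object-type model.
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    image = torch.rand(3, 480, 640)       # stand-in for a photographed RGB image
    with torch.no_grad():
        pred = model([image])[0]

    # Keep masks labelled "person" (COCO class 1) above a score threshold.
    person_masks = [m for m, l, s in zip(pred["masks"], pred["labels"], pred["scores"])
                    if l.item() == 1 and s.item() > 0.8]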
Note that when the semantic segmentation algorithm is constructed using teacher data of objects whose shapes are unspecified, it may not always be possible to extract the shape of the object with high accuracy. In such a case, the extraction unit 1024B extracts the shape data of the object from the object region using the GrabCut algorithm, which makes it possible to extract the shape of the object with high accuracy. Furthermore, the extraction unit 1024B may correct the image of the object specified by the GrabCut algorithm based on a color image of a specific portion of the object, which makes it possible to generate the shape data of the object with even higher accuracy.
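A minimal sketch of the GrabCut refinement step, assuming OpenCV and a coarse foreground mask from the segmentation stage (the function name and iteration count are illustrative):

    import cv2
    import numpy as np

    def refine_with_grabcut(image_bgr: np.ndarray, coarse_mask: np.ndarray) -> np.ndarray:
        # coarse_mask: rough foreground (0/1) from the segmentation stage.
        mask = np.where(coarse_mask > 0, cv2.GC_PR_FGD, cv2.GC_PR_BGD).astype(np.uint8)
        bgd = np.zeros((1, 65), np.float64)   # background model buffer
        fgd = np.zeros((1, 65), np.float64)   # foreground model buffer
        cv2.grabCut(image_bgr, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
        # Definite and probable foreground pixels form the refined shape data.
        return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)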
The conversion unit 1024C converts the shape data into a silhouette based on the full-length data, whereby the shape data is normalized.

The calculation unit 1024D calculates the dimension data of each part of the object using the shape data converted by the conversion unit 1024C. Specifically, the calculation unit 1024D performs dimension reduction on the shape data converted by the conversion unit 1024C. The dimension reduction here is realized by a method such as principal component analysis, in particular kernel principal component analysis (kernel PCA), or linear discriminant analysis.
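For illustration, a first-stage reduction with kernel PCA might look like the following Python sketch; scikit-learn is an assumed implementation, and the data and number of components are arbitrary stand-ins:

    import numpy as np
    from sklearn.decomposition import KernelPCA

    # One flattened silhouette image per row (stand-in data).
    silhouettes = np.random.rand(100, 64 * 32)

    # First-stage dimension reduction of the converted shape data.
    kpca = KernelPCA(n_components=20, kernel="rbf")
    reduced = kpca.fit_transform(silhouettes)   # shape: (100, 20)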
Then, the calculation unit 1024D calculates the dimension data of each part of the object using the reduced values of the respective dimensions and weighting coefficients optimized for each part of the object.

More specifically, the calculation unit 1024D linearly combines the values of the respective dimensions obtained in the first reduction with weighting coefficients W1pi optimized for each part of the object to obtain predetermined values Zpi, where the symbol p is the number of dimensions obtained by the reduction and is a value of 10 or more. The calculation unit 1024D then performs a second dimension reduction using the predetermined values Zpi and attribute data including at least the length and weight attributes of the object, and calculates the dimension data of each part of the object based on the values of the respective dimensions obtained in the second dimension reduction. Note that, for each dimension location of the object (i locations), as many weighting coefficients W1pi are prepared as there are reduced dimensions. A sketch of this two-stage computation follows.
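The following numpy sketch illustrates the two-stage computation under stated assumptions: the values of p and j, the weight matrix, and the attribute vector are all random or invented stand-ins.

    import numpy as np

    p, j = 20, 10                     # reduced dimensions and number of parts (illustrative)
    z = np.random.rand(p)             # first-stage reduced values for one object
    W1 = np.random.rand(j, p)         # weighting coefficients W1pi, one row per part

    Z = W1 @ z                        # predetermined values Z_i, one per part
    attrs = np.array([1.70, 60.0])    # length (m) and weight (kg) attribute data

    # Input to the second dimension reduction for the i-th part: Z_i plus attributes.
    second_stage_inputs = [np.concatenate(([Z[i]], attrs)) for i in range(j)]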
In the above description, the calculation unit 1024D obtains the predetermined values Zpi by linear combination, but the calculation unit 1024D may obtain these values by a method other than linear combination. Specifically, the calculation unit 1024D may generate quadratic features from the values of the respective dimensions obtained by the dimension reduction and combine the quadratic features with the weighting coefficients optimized for each part of the object to obtain the predetermined values.
(1-2) Operation of Dimension Data Calculation Device

FIG. 2 is a flowchart for explaining the operation of the dimension data calculation device 1020 according to this embodiment.

First, the dimension data calculation device 1020 acquires, via an external terminal device or the like, a plurality of image data in which the whole of the object is photographed from different directions, together with full-length data indicating the full length of the object (S1001).
Next, the dimension data calculation device 1020 extracts, from each image data, shape data indicating the shape of each part of the object (S1002).

Subsequently, the dimension data calculation device 1020 executes a rescaling process that converts each shape data into a predetermined size based on the full-length data (S1003). A sketch of such a rescaling step is given below.
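A minimal sketch of the rescaling step S1003, assuming OpenCV, a binary shape mask, and an illustrative target height; the helper name and the returned pixels-per-metre value are assumptions:

    import cv2
    import numpy as np

    def rescale_silhouette(mask: np.ndarray, full_length_m: float, target_h: int = 256):
        # Crop the extracted shape to its bounding box, then scale it so the
        # object's full length maps to a fixed pixel height.
        ys, xs = np.nonzero(mask)
        crop = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
        scale = target_h / crop.shape[0]
        rescaled = cv2.resize(crop, None, fx=scale, fy=scale,
                              interpolation=cv2.INTER_NEAREST)
        px_per_m = target_h / full_length_m  # keeps the absolute scale recoverable
        return rescaled, px_per_m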
Next, the dimension data calculation device 1020 combines the plurality of converted shape data to generate new shape data (hereinafter also referred to as shape data for calculation). Specifically, as shown in FIG. 3, the shape data of h rows and w columns are combined into an m×h×w data array, where the symbol m is the number of shape data (S1004).
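For example, stacking two rescaled silhouettes into the m×h×w calculation array could look like this numpy sketch (the sizes and stand-in arrays are illustrative):

    import numpy as np

    h, w = 256, 128                        # rescaled silhouette size (illustrative)
    front = np.random.rand(h, w)           # stand-in for the front silhouette
    side = np.random.rand(h, w)            # stand-in for the side silhouette

    # S1004: combine the m converted shape data into one m x h x w array.
    calc_shape = np.stack([front, side])   # shape: (2, 256, 128), m = 2
    x = calc_shape.reshape(-1)             # flattened input for dimension reduction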
After that, the dimension data calculation device 1020 calculates the dimension data of the i-th part (i = 1 to j) of the object using the newly generated shape data and the weighting coefficients W1pi optimized for each part of the object (S1005 to S1008), where the symbol j is the total number of dimension locations for which dimension data are to be calculated.
(1-3) Features of Dimension Data Calculation Device

(1-3-1)

As described above, the dimension data calculation device 1020 according to this embodiment includes the acquisition unit 1024A, the extraction unit 1024B, the conversion unit 1024C, and the calculation unit 1024D. The acquisition unit 1024A acquires image data in which the object is photographed and full-length data of the object. The extraction unit 1024B extracts shape data indicating the shape of the object from the image data. The conversion unit 1024C converts the shape data into a silhouette based on the full-length data. The calculation unit 1024D calculates the dimension data of each part of the object using the shape data converted by the conversion unit 1024C.
Therefore, since the dimension data calculation device 1020 calculates the dimension data of each part of the object using the image data and the full-length data, it can provide highly accurate dimension data. In addition, since the dimension data calculation device 1020 can process a large amount of image data and full-length data at once, it can provide a large amount of dimension data with high accuracy.

By using such a dimension data calculation device 1020, for example, the dimension data of each part of a living creature as the object can be calculated with high accuracy. The dimension data of each part of an arbitrary object, such as a car or various packages, can also be calculated with high accuracy.

Furthermore, by incorporating the dimension data calculation device into a product manufacturing apparatus that manufactures various products, it becomes possible to manufacture products that conform to the shape of the object.
(1-3-2)

In the dimension data calculation device 1020, the acquisition unit 1024A acquires a plurality of image data obtained by photographing the object from different directions. This configuration improves the accuracy of the dimension data.

(1-3-3)

In the dimension data calculation device 1020, the calculation unit 1024D performs dimension reduction on the shape data converted by the conversion unit 1024C, and calculates the dimension data of each part of the object using the reduced values of the respective dimensions and the weighting coefficients W1pi optimized for each part of the object. This configuration improves the accuracy of the dimension data while suppressing the computational load.

More specifically, the calculation unit 1024D linearly combines the reduced values of the respective dimensions with the weighting coefficient W1pi optimized for the i-th part of the object to obtain a predetermined value Zi. The calculation unit 1024D then executes a second dimension reduction using the predetermined value Zi and attribute data including at least the length and weight attributes of the object, and calculates the i-th dimension data of the object. This configuration further improves the accuracy of the dimension data while suppressing the computational load. In the above description, instead of the linear combination, the calculation unit 1024D may generate quadratic features from the values of the respective dimensions obtained by the dimension reduction and combine the quadratic features with the weighting coefficients optimized for each part of the object to obtain the predetermined value; a sketch of such quadratic features follows.
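A sketch of generating such quadratic features with scikit-learn (an assumed implementation; the input values are stand-ins):

    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures

    z = np.random.rand(1, 20)   # reduced-dimension values for one object

    # Second-order features (z_a, z_a*z_b, z_a**2, ...) to be combined with
    # the per-part weighting coefficients instead of a plain linear combination.
    quad = PolynomialFeatures(degree=2, include_bias=False).fit_transform(z)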
(1-3-4)

In the dimension data calculation device 1020, the extraction unit 1024B extracts the shape data of the object by extracting the object region included in the image data using a semantic segmentation algorithm constructed with teacher data prepared for each type of object. This configuration improves the accuracy of the dimension data.

Note that some semantic segmentation algorithms are publicly available, but publicly available ones are usually constructed using teacher data in which the shape of the object is not specified. Therefore, depending on the purpose, the accuracy of extracting the object region included in the image data may not always be sufficient.

In such a case, the extraction unit 1024B extracts the shape data of the object from the object region using the GrabCut algorithm. This configuration further improves the accuracy of the dimension data.

Furthermore, the extraction unit 1024B may correct the image of the object extracted by the GrabCut algorithm based on a color image of a specific portion in the image data to generate new shape data, which further improves the accuracy of the dimension data. For example, when the object is a person, setting the hands and back as the specific portions and correcting based on the color images of these portions makes it possible to obtain the shape data of the person with high accuracy.
(1-4) Modifications

(1-4-1)

In the above description, the acquisition unit 1024A acquires a plurality of image data obtained by photographing the object from different directions, but a plurality of image data is not necessarily required. Even with a single image of the object, it is possible to calculate the dimension data of each part.

As a modification of this embodiment, a depth data measuring device capable of also acquiring depth data is applicable, and a depth map having depth data for each pixel may be constructed based on the depth data. By applying such a depth data measuring device, the image data that can be acquired by the acquisition unit 1024A can be RGB-D (Red, Green, Blue, Depth) data. Specifically, the image data can include a depth map in addition to the RGB image data that can be acquired by an ordinary monocular camera.

An example of the depth data measuring device is a stereo camera. In this specification, a "stereo camera" means an imaging device of any form that can photograph an object from a plurality of different directions simultaneously and reproduce binocular parallax to construct a depth map. Besides a stereo camera, the depth data may also be obtained with a LiDAR (Light Detection and Ranging) device to construct the depth map.
(1-4-2)

In the above description, the semantic segmentation algorithm and/or the GrabCut algorithm may be adopted to acquire the shape data of the object. In addition to or instead of this, when a stereo camera is applied, for example, the depth map acquired from the stereo camera can be used to associate the depth data of the object with the shape data of the object, which makes it possible to generate the shape data of the object with even higher accuracy.

Specifically, when a stereo camera is used, the extraction unit 1024B extracts, based on the depth map acquired by the acquisition unit 1024A, the object region, i.e., the portion in which the object appears, from the image data in which the object is photographed. For example, the object region is extracted by removing, from the depth map, regions whose depth data are not within a predetermined range. In the extracted object region, the shape data is associated with the depth data of the object for each pixel. A minimal sketch of this depth-range filtering follows.
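A minimal numpy sketch of extracting the object region by depth-range filtering; the depth map and range values are illustrative assumptions:

    import numpy as np

    depth = np.random.rand(480, 640) * 5.0   # stand-in depth map in metres
    near, far = 0.5, 3.0                     # assumed valid depth range for the object

    object_region = (depth >= near) & (depth <= far)     # object region mask
    object_depth = np.where(object_region, depth, 0.0)   # per-pixel depth of the object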
The conversion unit 1024C converts the shape data based on the full-length data. In addition, the conversion unit 1024C converts the shape data into monochrome image data based on the depth data of the object region, in addition to the full-length data, to generate a "gradation silhouette image" (new shape data), described below.

The generated gradation silhouette image is not mere black-and-white binarized data, but a single-color, multi-gradation monochrome image represented, based on the depth data, by luminance values from 0 ("black") to 1 ("white"), for example. That is, the gradation silhouette image data is associated with the depth data and carries a larger amount of information about the shape of the object. Note that the gradation silhouette image data is normalized by the full-length data.
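A minimal numpy sketch of converting per-pixel depth into such a gradation silhouette; mapping nearer pixels to brighter values is an assumption, since the embodiment only requires that luminance encode the depth data:

    import numpy as np

    def gradation_silhouette(object_depth: np.ndarray, region: np.ndarray) -> np.ndarray:
        # Map depth inside the object region to luminance in [0, 1]; the
        # background stays 0 ("black"). Nearer pixels are drawn brighter here.
        d = object_depth[region]
        lum = np.zeros_like(object_depth, dtype=float)
        lum[region] = 1.0 - (d - d.min()) / (d.max() - d.min() + 1e-9)
        return lum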
According to this modification, extracting the object region based on a depth map constructed using any device capable of measuring depth data in the acquisition unit 1024A makes it possible to extract the shape data of the object with even higher accuracy. In addition, since the gradation silhouette image data (the converted shape data) is associated with the depth data of the object, it carries a larger amount of information about the shape of the object, and the calculation unit 1024D can calculate the dimension data of each part of the object with even higher accuracy.

To supplement, the calculation unit 1024D performs dimension reduction on the gradation silhouette image data (shape data) converted by the conversion unit 1024C. In this case, the number of dimensions reduced in the first stage is about ten times larger than for a silhouette image of binarized data. The number of weighting coefficients is prepared for each dimension location of the object (i locations) according to the reduced dimensions.

Although the gradation silhouette image is described here in distinction from a mere silhouette image, in other embodiments and other modifications both may simply be referred to as silhouette images without distinction.
(1-4-3)

In the above description, the calculation unit 1024D executes the dimension reduction twice, but such processing is not always necessary. The calculation unit 1024D may calculate the dimension data of each part of the object from the values of the respective dimensions obtained by executing the dimension reduction once. Depending on the purpose, the dimension data calculation device 1020 may also calculate the dimension data without dimensionally reducing the shape data.

(1-4-4)

In the above description, the extraction unit 1024B extracts the object region included in the image data using a semantic segmentation algorithm constructed with teacher data in which the shape of the object is not specified, but such teacher data does not necessarily have to be used. For example, a semantic segmentation algorithm constructed using teacher data in which the shape of the object is specified may be used. Using teacher data in which the shape of the object is specified can, depending on the purpose, improve the calculation accuracy of the dimension data and suppress the computational load.
(1-5) Application to a Product Manufacturing System

An example in which the above-described dimension data calculation device 1020 is applied to a product manufacturing system 1001 will be described below.

(1-5-1) Configuration of Product Manufacturing System

FIG. 4 is a schematic diagram showing the concept of the product manufacturing system 1001 according to this embodiment.

The product manufacturing system 1001 is a system for manufacturing a desired product 1006, and includes a dimension data calculation device 1020 capable of communicating with a terminal device 1010 owned by a user 1005, and a product manufacturing apparatus 1030. FIG. 4 shows, as an example, the concept when the object 1007 is a person and the product 1006 is a chair. However, the object 1007 and the product 1006 of the product manufacturing system according to this embodiment are not limited to these.
The terminal device 1010 can be realized by a so-called smart device. Here, the terminal device 1010 exerts various functions through a user program installed on the smart device. Specifically, the terminal device 1010 generates image data captured by the user 1005. The terminal device 1010 may have a stereo camera function that photographs the object from a plurality of different directions simultaneously and reproduces binocular parallax. Note that the image data is not limited to data captured by the terminal device 1010; for example, data captured by a stereo camera installed in a store may also be used.

The terminal device 1010 also accepts input of attribute data indicating the attributes of the object 1007. The "attributes" include the full length and weight of the object 1007 and the elapsed time since its creation (including age). The terminal device 1010 also has a communication function and exchanges various kinds of information with the dimension data calculation device 1020 and the product manufacturing apparatus 1030.

The dimension data calculation device 1020 can be realized by any computer. Here, the storage unit 1021 of the dimension data calculation device 1020 stores the information transmitted from the terminal device 1010 in association with identification information that identifies the user 1005 of the terminal device 1010. The storage unit 1021 also stores the parameters and the like necessary for executing the information processing described later. For example, the storage unit 1021 stores, in association with the attribute items of the object 1007 and the like, the weighting coefficients W1pi necessary for executing the information processing described later.

As described above, the processing unit 1024 of the dimension data calculation device 1020 functions as the acquisition unit 1024A, the extraction unit 1024B, the conversion unit 1024C, and the calculation unit 1024D. Here, the acquisition unit 1024A acquires the image data photographed by the user 1005 and the attribute data of the object 1007. The extraction unit 1024B extracts shape data indicating the shape of the object 1007 from the image data. For example, when "person" is set in advance as the type of object, the semantic segmentation algorithm is constructed using teacher data for identifying a person. The extraction unit 1024B also corrects the image of the object 1007 specified by the GrabCut algorithm based on a color image of a specific portion of the object 1007, and generates the shape data of the object 1007 with even higher accuracy. The conversion unit 1024C converts the shape data into a silhouette based on the full-length data. The calculation unit 1024D calculates the dimension data of each part of the user 1005 using the shape data converted by the conversion unit 1024C. Here, the calculation unit 1024D obtains predetermined values Z1i by, for example, linearly combining the reduced values of the respective dimensions with the weighting coefficients W1pi optimized for each part of the object 1007. The calculation unit 1024D then performs dimension reduction using the predetermined values Z1i and the attribute data of the object 1007, and calculates the dimension data of each part of the object 1007 based on the reduced values of the respective dimensions.

The product manufacturing apparatus 1030 is a manufacturing apparatus that manufactures a desired product related to the shape of the object 1007 using the dimension data calculated by the dimension data calculation device 1020. Any apparatus that can automatically manufacture and process products can be employed as the product manufacturing apparatus 1030; it can be realized by, for example, a three-dimensional printer.
(1-5-2) Operation of Product Manufacturing System

FIG. 5 is a sequence diagram for explaining the operation of the product manufacturing system 1001 according to this embodiment. FIGS. 6 and 7 are schematic diagrams showing screen transitions of the terminal device 1010.

First, the whole of the object 1007 is photographed multiple times via the terminal device 1010 so that it appears from different directions, and a plurality of image data in which the object 1007 is photographed are generated (T1001). Here, a plurality of front and side photographs are taken, as shown in FIGS. 6 and 7, respectively.

Next, the user 1005 inputs attribute data indicating the attributes of the object 1007 into the terminal device 1010 (T1002). Here, full-length data, weight data, elapsed-time data (including age and the like), and the like of the object 1007 are input as the attribute data.

These plural image data and the attribute data are then transmitted from the terminal device 1010 to the dimension data calculation device 1020.

Upon receiving the plurality of image data and the attribute data from the terminal device 1010, the dimension data calculation device 1020 calculates the dimension data of each part of the object 1007 using these data (T1003). Depending on the settings, the dimension data is displayed on the screen of the terminal device 1010.

Then, the product manufacturing apparatus 1030 manufactures the desired product 1006 based on the dimension data calculated by the dimension data calculation device 1020 (T1004).
 (1-5-3) Features of Product Manufacturing System
 As described above, the product manufacturing system 1001 according to the present embodiment includes the dimension data calculation device 1020, which can communicate with the terminal device 1010 owned by the user 1005, and the product manufacturing apparatus 1030. The terminal device 1010 (imaging device) captures a plurality of images of the object 1007. The dimension data calculation device 1020 includes the acquisition unit 1024A, the extraction unit 1024B, the conversion unit 1024C, and the calculation unit 1024D. The acquisition unit 1024A acquires the image data of the object 1007 from the terminal device 1010 together with the full length data of the object 1007. The extraction unit 1024B extracts shape data indicating the shape of the object 1007 from the image data. The conversion unit 1024C converts the shape data into a silhouette based on the full length data. The calculation unit 1024D calculates the dimension data of each part of the object 1007 using the shape data converted by the conversion unit 1024C. The product manufacturing apparatus 1030 manufactures the product 1006 using the dimension data calculated by the calculation unit 1024D.
 With such a configuration, the dimension data calculation device 1020 calculates the dimensions of each part of the object 1007 with high accuracy, so that a desired product related to the shape of the object 1007 can be provided.
 For example, the product manufacturing system 1001 can manufacture a model of an organ from measurements of the shapes of various organs such as the heart.
 Further, for example, various healthcare products and the like can be manufactured from measurements of a person's waist shape.
 Also, for example, a figure product of a person can be manufactured from the person's shape.
 Further, for example, a chair or the like fitted to a person can be manufactured from the shape of the person.
 Also, for example, a toy car can be manufactured from the shape of a car.
 Further, for example, a diorama or the like can be manufactured from an arbitrary landscape painting.
 Note that, in the above description, the dimension data calculation device 1020 and the product manufacturing apparatus 1030 are described as separate devices, but they may be configured as a single integrated device.
 <Second Embodiment>
 Hereinafter, configurations and functions that have already been described are denoted by substantially the same reference numerals, and their description is omitted.
 (2-1) Configuration of Dimension Data Calculation Device
 FIG. 8 is a schematic diagram showing the configuration of the dimension data calculation device 2120 according to the present embodiment.
 The dimension data calculation device 2120 can be realized by an arbitrary computer and includes a storage unit 2121, an input/output unit 2122, a communication unit 2123, and a processing unit 2124. The dimension data calculation device 2120 may also be realized as hardware using an LSI (Large Scale Integration), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or the like.
 The storage unit 2121 stores various kinds of information and is realized by an arbitrary storage device such as a memory or a hard disk. For example, the storage unit 2121 stores, in association with the length, weight, and the like of the object, the weighting coefficients Wsi necessary for executing the information processing described later. The weighting coefficients are acquired in advance by performing machine learning on teacher data consisting of the attribute data and dimension data described later.
 The input/output unit 2122 has the same configuration and functions as the input/output unit 1022 described above.
 The communication unit 2123 has the same configuration and functions as the communication unit 1023 described above.
 The processing unit 2124 executes various kinds of information processing and is realized by a processor such as a CPU or GPU and a memory. Here, the program stored in the storage unit 2121 is read into the CPU, GPU, or the like of the computer, whereby the processing unit 2124 functions as an acquisition unit 2124A and a calculation unit 2124D.
 The acquisition unit 2124A acquires attribute data Dzr (where r is the number of elements of the attribute data) including at least one of full length data, weight data, and elapsed time data (including age and the like) of the object.
 The calculation unit 2124D calculates the dimension data of each part of the object using the attribute data acquired by the acquisition unit 2124A. Specifically, the calculation unit 2124D calculates the dimension data of each part of the object by performing quadratic regression on the attribute data using machine-learned weighting coefficients Wsi. The weighting coefficients are optimized for each part of the object, and the weighting coefficient for the i-th part of the object is denoted Wsi, where i = 1 to j and j is the total number of dimension locations for which dimension data is to be calculated. The symbol s denotes the number of elements used in the calculation that are derived from the attribute data.
 Specifically, as shown in FIG. 9, the calculation unit 2124D calculates the dimension data using the squared values of the elements x1, x2, x3 of the attribute data Dzr (r = 3) (the values in the first row of FIG. 9, also called quadratic terms), the products of pairs of elements (the values in the second row of FIG. 9, also called interaction terms), and the values of the elements themselves (the values in the third row of FIG. 9, also called linear terms). In the example shown in FIG. 9, nine values are obtained from the three attribute data elements x1, x2, x3, so s = 9 and the weighting coefficients Wsi have 9 elements for each of the j dimension locations.
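 By way of illustration, the following Python sketch shows one possible realization of this feature construction and weighted combination. The array layout of the weights and the sample values are assumptions for the sketch, not part of the disclosure.

```python
import numpy as np
from itertools import combinations

def quadratic_features(x):
    """Build the s = 9 regression features of FIG. 9 from the attribute
    elements x = (x1, x2, x3): squares, pairwise interaction terms, and
    the linear terms themselves."""
    squares = [xi * xi for xi in x]                        # quadratic terms
    interactions = [a * b for a, b in combinations(x, 2)]  # interaction terms
    return np.array(squares + interactions + list(x))      # + linear terms

# Dimension data for the j locations is then a weighted sum with the
# machine-learned coefficients Wsi (here a hypothetical 9-by-j layout):
x = np.array([170.0, 60.0, 30.0])            # e.g. height, weight, age
W = np.random.default_rng(1).random((9, 4))  # stand-in for learned weights
dims = quadratic_features(x) @ W             # one value per dimension location
```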
 (2-2) Features of Dimension Data Calculation Device
 As described above, the dimension data calculation device 2120 according to the present embodiment includes the acquisition unit 2124A and the calculation unit 2124D. The acquisition unit 2124A acquires attribute data including at least one of full length data, weight data, and elapsed time data of the object. The calculation unit 2124D calculates the dimension data of each part of the object using the attribute data.
 Therefore, since the dimension data calculation device 2120 calculates the dimension data of each part of the object using the above attribute data, it can provide highly accurate dimension data. Specifically, the calculation unit 2124D performs quadratic regression on the attribute data using the machine-learned coefficients, thereby calculating the dimension data of each part of the object with high accuracy.
 Further, since the dimension data calculation device 2120 can process a large amount of data at once, it can provide a large amount of dimension data at high speed.
 Further, when the attribute data includes at least one of full length data, weight data, and elapsed time data, the dimension data of each part of a living organism can be calculated with high accuracy.
 Further, by incorporating the dimension data calculation device 2120 into a product manufacturing apparatus that manufactures various products, it becomes possible to manufacture products that conform to the shape of the object.
 In the above description, the calculation unit 2124D calculates the dimension data of each part of the object by performing quadratic regression on the attribute data; however, the computation performed by the calculation unit 2124D is not limited to this. The calculation unit 2124D may instead obtain the dimension data by linearly combining the attribute data.
 (2-3) Application to Product Manufacturing System
 FIG. 10 is a schematic diagram showing the concept of the product manufacturing system 2001S according to the present embodiment.
 The dimension data calculation device 2120 according to the present embodiment can also be applied to the product manufacturing system 2001S, in the same manner as the dimension data calculation device 1020 according to the first embodiment.
 The terminal device 2010S according to the present embodiment only needs to accept the input of attribute data indicating attributes of the object 2007. Examples of the "attributes" include the full length and weight of the object 2007 and the time elapsed since its creation (including age).
 Further, as described above, the processing unit 2124 of the dimension data calculation device 2120 functions as the acquisition unit 2124A and the calculation unit 2124D. The calculation unit 2124D calculates the dimension data of each part of the object 2007 using the attribute data acquired by the acquisition unit 2124A. Specifically, the calculation unit 2124D calculates the dimension data of each part of the object by performing quadratic regression on the attribute data using the machine-learned weighting coefficients Wsi.
 In the product manufacturing system 2001S, the dimension data calculation device 2120 calculates the dimensions of each part of the object 2007 with high accuracy, so that a desired product related to the shape of the object 2007 can be provided. In other respects, the product manufacturing system 2001S according to the second embodiment can achieve the same effects as the product manufacturing system 1001 of the first embodiment.
 <Third Embodiment>
 Hereinafter, a dimension data calculation system according to embodiments of the information processing device, information processing method, product manufacturing apparatus, and dimension data calculation device of the present invention will be described with reference to the accompanying drawings. In the following description of the embodiments, the information processing device and the dimension data calculation device are implemented as part of the dimension data calculation system.
 In the accompanying drawings, the same or similar elements are denoted by the same or similar reference numerals, and redundant description of the same or similar elements may be omitted in the description of each embodiment. The features shown in each embodiment are applicable to other embodiments as long as they do not contradict one another. Furthermore, the drawings are schematic and do not necessarily match actual dimensions, ratios, and the like; portions whose dimensional relationships and ratios differ between drawings may also be included.
 In the following description, a plurality of elements collectively expressed using a matrix or vector may be written in uppercase, and individual elements of a matrix may be written in lowercase. For example, a set of shape parameters may be written as the matrix Λ, and an element of the matrix Λ may be written as the element λ.
 (3-1) Configuration of Dimension Data Calculation System
 FIG. 11 is a schematic diagram showing the configuration of the dimension data calculation system 3100 according to the present embodiment. The dimension data calculation system 3100 includes a dimension data calculation device 3020 and a learning device 3025.
 The dimension data calculation device 3020 and the learning device 3025 can each be realized by an arbitrary computer. The dimension data calculation device 3020 includes a storage unit 3021, an input/output unit 3022, a communication unit 3023, and a processing unit 3024. The learning device 3025 includes a storage unit 3026 and a processing unit 3027. The dimension data calculation device 3020 and the learning device 3025 may also be realized as hardware using an LSI (Large Scale Integration), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or the like.
 The storage units 3021 and 3026 each store various kinds of information and are realized by an arbitrary storage device such as a memory or a hard disk. For example, the storage unit 3021 stores various data including the object engine 3021A, programs, information, and the like so that the processing unit 3024 can execute information processing relating to dimension data calculation. The storage unit 3026 stores training data used in the learning stage to generate the object engine 3021A.
 The input/output unit 3022 is realized by a keyboard, a mouse, a touch panel, or the like, and inputs various information to the computer and outputs various information from the computer.
 The communication unit 3023 is realized by a network interface such as an arbitrary network card and enables wired or wireless communication with communication devices on a network.
 The processing units 3024 and 3027 are each realized by a processor such as a CPU (Central Processing Unit) and/or a GPU (Graphical Processing Unit) and a memory in order to execute various kinds of information processing. The processing unit 3024 functions as an acquisition unit 3024A, an extraction unit 3024B, a conversion unit 3024C, an estimation unit 3024D, and a calculation unit 3024E when the program stored in the storage unit 3021 is read into the CPU, GPU, or the like of the computer. Similarly, the processing unit 3027 functions as a preprocessing unit 3027A and a learning unit 3027B when the program stored in the storage unit 3026 is read into the CPU, GPU, or the like of the computer.
 In the processing unit 3024 of the dimension data calculation device 3020, the acquisition unit 3024A acquires image data in which the object is photographed, as well as attribute data such as full length data and weight data of the object. The acquisition unit 3024A acquires, for example, a plurality of image data obtained by photographing the object from a plurality of different directions with an imaging device.
 The extraction unit 3024B extracts shape data indicating the shape of the object from the image data. Specifically, the extraction unit 3024B extracts the shape data of the object by extracting the object region included in the image data using a semantic segmentation algorithm (such as Mask R-CNN) prepared for each type of object. The semantic segmentation algorithm can be constructed using training data in which the shape of the object is not specified.
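 As one possible realization of this segmentation step (the disclosure does not fix the implementation), the following Python sketch uses an off-the-shelf Mask R-CNN from torchvision; the score threshold is an assumption, and newer torchvision versions pass `weights=` instead of `pretrained=`.

```python
import torch
import torchvision

# Pretrained Mask R-CNN; in the COCO label set, class 1 is "person".
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
model.eval()

def person_mask(image_tensor, score_thresh=0.8):
    """image_tensor: float tensor of shape (3, H, W) with values in [0, 1]."""
    with torch.no_grad():
        out = model([image_tensor])[0]
    keep = (out["labels"] == 1) & (out["scores"] > score_thresh)
    if not keep.any():
        return None
    # Masks are soft (1, H, W) probability maps; binarize the best hit.
    return out["masks"][keep][0, 0] > 0.5
```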
 Note that when the semantic segmentation algorithm is constructed using training data of objects whose shapes are unspecified, it may not always be possible to extract the shape of the object with high accuracy. In such a case, the extraction unit 3024B extracts the shape data of the object from the object region using the GrabCut algorithm, which makes it possible to extract the shape of the object with high accuracy.
 The extraction unit 3024B may also separate the object from the background image other than the object by correcting the image of the object specified by the GrabCut algorithm based on a color image of a specific portion of the object. This makes it possible to generate the shape data of the object with even higher accuracy.
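 As an illustrative sketch of the GrabCut refinement (one possible tool for this step), the following Python code uses OpenCV's `cv2.grabCut` seeded from a rough mask; the iteration count is an assumption.

```python
import cv2
import numpy as np

def grabcut_refine(img_bgr, rough_mask):
    """Refine a rough foreground mask with GrabCut.

    img_bgr:    H x W x 3 uint8 image
    rough_mask: H x W bool array (e.g. from a segmentation network)
    """
    # Seed GrabCut: rough foreground as "probably foreground", rest as
    # "probably background".
    mask = np.where(rough_mask, cv2.GC_PR_FGD, cv2.GC_PR_BGD).astype("uint8")
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(img_bgr, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
    # Keep definite and probable foreground pixels.
    return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD))
```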
 The conversion unit 3024C converts the shape data into a silhouette based on the full length data; that is, it converts the shape data of the object to generate a silhouette image of the object, whereby the shape data is normalized. The conversion unit 3024C also functions as a reception unit for inputting the generated silhouette image to the estimation unit 3024D.
 The estimation unit 3024D estimates the values of a predetermined number of shape parameters from the silhouette image. The object engine 3021A is used for the estimation. The values of the predetermined number of shape parameters of the object estimated by the estimation unit 3024D are associated with dimension data relating to an arbitrary part of the object.
 The calculation unit 3024E calculates, from the values of the predetermined number of shape parameters estimated by the estimation unit 3024D, the dimension data of the object associated with them. Specifically, the calculation unit 3024E constructs three-dimensional data of a plurality of vertices of the object from the estimated shape parameter values, and further calculates dimension data between any two vertices of the object based on the three-dimensional data.
 In the processing unit 3027 of the learning device 3025, the preprocessing unit 3027A carries out various kinds of preprocessing for learning. In particular, the preprocessing unit 3027A specifies a predetermined number of shape parameters through feature extraction performed on the three-dimensional data of the sample objects by dimension reduction, and obtains the values of the predetermined number (dimensions) of shape parameters for each sample object. The shape parameter values of the sample objects are stored in the storage unit 3026 as training data.
 Further, the preprocessing unit 3027A virtually constructs a three-dimensional object of each sample object in a three-dimensional space based on the three-dimensional data of the sample object, and then generates a silhouette image of the sample object by projecting the three-dimensional object from a predetermined direction using an imaging device virtually provided in the three-dimensional space. The data of the generated silhouette images of the sample objects is stored in the storage unit 3026 as training data.
 The learning unit 3027B learns so as to associate the silhouette image of each sample object with the values of the predetermined number of shape parameters associated with that sample object. As a result of the learning, the object engine 3021A is generated. The generated object engine 3021A can be held in the form of an electronic file. When the dimension data calculation device 3020 calculates the dimension data of an object, the object engine 3021A is stored in the storage unit 3021 and referred to by the estimation unit 3024D.
 (3-2) Operation of Dimension Data Calculation System
 The operation of the dimension data calculation system 3100 of FIG. 11 will be described with reference to FIGS. 12 and 13. FIG. 12 is a flowchart showing the operation (S3010) of the learning device 3025, which generates the object engine 3021A based on sample object data. FIG. 13 is a flowchart showing the operation of the dimension data calculation device 3020, which calculates the dimension data of an object based on image data of the object.
 (3-2-1) Operation of Learning Device
 First, data of the sample objects is prepared and stored in the storage unit 3026 (S3011). In one example, the prepared data is data of 400 sample objects, each including 5,000 pieces of three-dimensional data. The three-dimensional data includes the three-dimensional coordinate data of the vertices of the sample object. The three-dimensional data may also include mesh data such as the vertex information of each mesh constituting the three-dimensional object and the normal direction of each vertex, as well as attribute data such as full length data, weight data, and elapsed time data (including age and the like).
 The three-dimensional data of each sample object is associated with vertex numbers. In the above example, the three-dimensional data of 5,000 vertices is associated with vertex numbers #1 to #5,000 for each sample object. All or part of the vertex numbers are also associated with information on parts of the object. For example, when the object is a "person", vertex number #20 is associated with the "head vertex"; similarly, vertex number #313 is associated with the "acromion of the left shoulder", vertex number #521 with the "acromion of the right shoulder", and so on.
 Subsequently, the preprocessing unit 3027A performs feature conversion into shape parameters by dimension reduction (S3012). Specifically, for each sample object, feature extraction is performed on the three-dimensional data of the sample object by dimension reduction, yielding a predetermined number (of dimensions) of shape parameters. In one example, the number of dimensions of the shape parameters is 30. The dimension reduction is realized by a method such as principal component analysis or Random Projection.
 The preprocessing unit 3027A uses the projection matrix of the principal component analysis to convert the three-dimensional data of each sample object into the values of the predetermined number of shape parameters. This removes noise from the three-dimensional data of the sample objects and compresses the three-dimensional data while preserving the relevant characteristic information.
 As in the above example, assume that there are data for 400 sample objects, that each sample object includes three-dimensional (coordinate) data of 5,000 vertices, and that each set of three-dimensional data is feature-converted into 30-dimensional shape parameters. Here, let the matrix X be the [400-row, 15,000-column (5,000 × 3)] vertex coordinate matrix representing the data of the 400 sample objects, and let the matrix W be the [15,000-row, 30-column] projection matrix generated by the principal component analysis. By multiplying the vertex coordinate matrix X by the projection matrix W from the right, the matrix Λ, a [400-row, 30-column] shape parameter matrix, is obtained.
 That is, the shape parameter matrix Λ can be calculated from the following equation:

     Λ = X W
 As a result of the matrix operation using the projection matrix W, the 15,000-dimensional data of each of the 400 sample objects is feature-converted into the shape parameters of the 30-dimensional principal components (λ1, ..., λ30). In the principal component analysis, the calculation is performed such that, for each λi, the mean of the 400 values (λ1,i, ..., λ400,i) is zero.
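 As an illustrative sketch of this feature conversion under the stated dimensions (the random input stands in for real vertex data), the following Python code obtains Λ and W by principal component analysis; note that scikit-learn's PCA centers the data, which is consistent with the zero-mean property noted above.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.random((400, 15000))   # stand-in for the 400 x 15,000 vertex matrix

pca = PCA(n_components=30)
Lam = pca.fit_transform(X)     # shape parameter matrix Λ: (400, 30)
W = pca.components_.T          # projection matrix W: (15000, 30)

# Each column of Λ has (approximately) zero mean, as noted above.
assert np.allclose(Lam.mean(axis=0), 0.0, atol=1e-6)
```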
 Having obtained the shape parameter matrix Λ as a result of S3012, the preprocessing unit 3027A then expands the data sets of shape parameters contained in the shape parameter matrix Λ using random numbers (S3013). In the above example, the 400 data sets (λi,1, ..., λi,30, 1 ≤ i ≤ 400) are expanded into 10,000 expanded data sets of shape parameters (λj,1, ..., λj,30, 1 ≤ j ≤ 10,000). The data expansion is performed using normally distributed random numbers, so that in the expanded data sets the values of each shape parameter follow a normal distribution with a spread of 3σ.
 By performing the inverse transformation based on the projection matrix W on the expanded data sets, the three-dimensional data of the expanded data sets can be constructed. Continuing the above example, let the [10,000-row, 30-column] expanded shape parameter matrix representing the 10,000 expanded data sets be the matrix Λ' = (λ'j,k) (1 ≤ j ≤ 10,000 and 1 ≤ k ≤ 30). The vertex coordinate matrix X' representing the three-dimensional data of the 10,000 sample objects is obtained by multiplying the expanded shape parameter matrix Λ' from the right by the transposed matrix W^T of the projection matrix W, which is [30 rows, 15,000 columns].
 That is, the vertex coordinate matrix X' can be calculated from the following equation, yielding 5,000 (15,000 / 3) pieces of three-dimensional data for each of the 10,000 expanded sample objects:

     X' = Λ' W^T
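 As a minimal sketch of the expansion (S3013) and the inverse transformation, the following Python code continues the PCA example above; the random input stands in for real data, and the 3σ spread in the text is approximated here by scaling the observed per-parameter standard deviation, which is a simplifying assumption.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.random((400, 15000))              # stand-in vertex data (S3011)
pca = PCA(n_components=30).fit(X)
W = pca.components_.T                     # projection matrix W
Lam = pca.transform(X)                    # Λ: (400, 30)

# S3013: expand 400 parameter sets to 10,000 with normally distributed
# random values around the (zero) per-parameter mean.
Lam_aug = rng.normal(0.0, Lam.std(axis=0), size=(10000, 30))

# Inverse transformation X' = Λ' W^T (adding back the mean removed by PCA).
X_aug = Lam_aug @ W.T + pca.mean_         # (10000, 15000)
verts = X_aug.reshape(10000, 5000, 3)     # 5,000 3-D vertices per object
```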
 Having obtained the vertex coordinate matrix X' as a result of S3013, the preprocessing unit 3027A generates a silhouette image of each expanded sample object based on its three-dimensional data (S3014). In the above example, for each of the 10,000 sample objects, a three-dimensional object of the sample object is virtually constructed in a three-dimensional space from its 5,000 pieces of three-dimensional data. The three-dimensional object is then projected using a projection device that is likewise virtually provided in the three-dimensional space and can project from an arbitrary direction. In the above example, it is preferable to acquire two silhouette images, in the front direction and the side direction, by projection for each of the 10,000 sample objects. The acquired silhouette images are represented by black-and-white binarized data.
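 A deliberately simplified Python sketch of this projection step follows: an orthographic front-view projection of the vertices rasterized into a binary image. The choice of the x-y plane as the front view and the image size are assumptions; a production version would rasterize the mesh faces rather than individual vertices.

```python
import numpy as np

def silhouette(verts, size=128):
    """verts: (N, 3) vertex array; returns a (size, size) binary image."""
    xy = verts[:, :2]                                # drop depth (front view)
    xy = (xy - xy.min(0)) / (xy.max(0) - xy.min(0))  # normalize to [0, 1]
    px = np.clip((xy * (size - 1)).astype(int), 0, size - 1)
    img = np.zeros((size, size), dtype=np.uint8)
    img[size - 1 - px[:, 1], px[:, 0]] = 1           # flip y for image coords
    return img
```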
 Finally, the learning unit 3027B associates, by learning, the values of the shape parameters associated with each sample object with the silhouette images of that sample object (S3015). Specifically, it is preferable to use the pairs of the shape parameter data sets obtained in S3013 and the silhouette images obtained in S3014 as training data, and to learn the relationship between the two by deep learning.
 More specifically, for the sample objects expanded to 10,000 in the above example, the binarized data of each silhouette image is input to a deep learning network architecture. In the feature extraction by deep learning, the weighting coefficients of the network architecture are set such that the data output from the network architecture approaches the values of the 30 shape parameters. In one example, a convolutional neural network (CNN) can be used for the deep learning here.
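 In the spirit of that description, the following PyTorch sketch regresses 30 shape parameters from two silhouette views; the layer sizes and the use of an MSE loss are illustrative assumptions, not the disclosed architecture.

```python
import torch
import torch.nn as nn

class ShapeParamNet(nn.Module):
    """Map a pair of silhouette images to 30 shape parameters."""
    def __init__(self, n_params=30):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),  # 2 views in
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(32 * 4 * 4, n_params)

    def forward(self, x):                  # x: (batch, 2, H, W) silhouettes
        return self.head(self.features(x).flatten(1))

# Training would pull the outputs toward the PCA parameters, e.g. with MSE:
net, loss_fn = ShapeParamNet(), nn.MSELoss()
pred = net(torch.rand(8, 2, 128, 128))     # dummy batch of silhouette pairs
loss = loss_fn(pred, torch.rand(8, 30))    # dummy target shape parameters
```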
 In this way, in S3015, the relationship between the shape parameter values associated with the sample objects and the silhouette images of the sample objects is learned by deep learning, and a deep learning network architecture is constructed. As a result, the object engine 3021A, an estimation model that estimates the values of the shape parameters in response to the input of a silhouette image of an object, is generated.
 (3-2-2) Operation of Dimension Data Calculation Device
 The dimension data calculation device 3020 stores, in advance in the storage unit 3021, the electronic file of the object engine 3021A generated by the learning device 3025 and the projection information of the principal component analysis obtained by the learning device 3025, and uses them for calculating the dimension data of the object.
 First, the acquisition unit 3024A acquires, through the input/output unit 3022 and via an external terminal device or the like, a plurality of image data obtained by photographing the entire object from different directions, together with full length data indicating the total length of the object (S3021). Next, the extraction unit 3024B extracts, from each image data, shape data indicating the shape of each part of the object (S3022). Subsequently, the conversion unit 3024C executes rescaling processing for converting each piece of shape data to a predetermined size based on the full length data (S3023). Through steps S3021 to S3023, a silhouette image of the object is generated, and the dimension data calculation device 3020 receives the silhouette image of the object.
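 As an illustrative sketch of the rescaling in S3023 (the target resolution and the nearest-neighbour sampling are assumptions; the mask is assumed non-empty), the extracted mask can be cropped to the object and resized to a fixed pixel height so that silhouettes are normalized across photographs taken at different distances:

```python
import numpy as np

def rescale_mask(mask, target_h=128):
    """Crop a binary mask to the object and resize it to a fixed height,
    using nearest-neighbour index sampling to avoid extra dependencies."""
    ys, xs = np.nonzero(mask)
    obj = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    scale = target_h / obj.shape[0]
    yy = np.minimum((np.arange(target_h) / scale).astype(int),
                    obj.shape[0] - 1)
    new_w = max(1, round(obj.shape[1] * scale))
    xx = np.minimum((np.arange(new_w) / scale).astype(int),
                    obj.shape[1] - 1)
    return obj[yy][:, xx]
```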
 Next, using the object engine 3021A stored in advance in the storage unit 3021, the estimation unit 3024D estimates the values of the shape parameters of the object from the received silhouette image (S3024). The calculation unit 3024E then calculates dimension data relating to parts of the object based on the values of the shape parameters of the object (S3025).
 Specifically, in the calculation of the dimension data in S3025, the three-dimensional data of the vertices of the object is first constructed from the values of the predetermined number of shape parameters estimated for the object by the object engine 3021A. Here, it suffices to perform the inverse transformation of the projection for dimension reduction carried out by the preprocessing unit 3027A in the learning stage (S3012). More specifically, the three-dimensional data can be obtained by multiplying the estimated values of the predetermined number of shape parameters, arranged as a row vector, from the right by the transposed matrix W^T of the projection matrix W of the principal component analysis.
 In the above example, for the shape parameter values Λ'' of the object, the three-dimensional data X'' of the object can be calculated from the following equation:

     X'' = Λ'' W^T
 In S3025, the calculation unit 3024E uses the three-dimensional data to calculate dimension data between any two vertices of the object. Here, a three-dimensional object is virtually constructed from the three-dimensional data, and the dimension data between two vertices is calculated along the curved surface on the three-dimensional object; that is, the distance between the two vertices is calculated three-dimensionally along the solid shape of the three-dimensional object. To calculate the distance between two vertices three-dimensionally, first, the shortest path connecting the two vertices is searched for on the three-dimensional mesh composed of the many vertices (5,000 pieces of three-dimensional data in the above example), and the meshes through which the shortest path passes are specified. Next, using the vertex coordinate data of the specified meshes, the distance is calculated for each mesh along the shortest path, and the distances are summed. The total value is the three-dimensional distance between the two vertices. Note that mesh data such as the vertex information of each mesh constituting the three-dimensional object and the normal direction of each vertex can be used to calculate the three-dimensional distance.
 For example, assume that the object is a "person" and the person's "shoulder width" is to be calculated. As advance preparation, it is defined in advance that "shoulder width = the distance between the vertex indicating the acromion of the left shoulder and the vertex indicating the acromion of the right shoulder", and it is associated in advance that the vertex number of the vertex indicating the acromion of the left shoulder is, for example, #313 and that of the vertex indicating the acromion of the right shoulder is, for example, #521. These pieces of information are stored in the storage unit 3021 in advance. When calculating the dimension data, it suffices to specify the shortest path from vertex number #313 to #521 and, using the vertex coordinate data of the meshes specified along the shortest path, calculate and sum the distance for each mesh along the path.
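 As a sketch of this surface-distance calculation, the following Python code approximates the described mesh-path summation by a shortest path over the mesh edge graph, using networkx as one possible tool (not the disclosed implementation); a path restricted to mesh edges slightly overestimates the true geodesic across faces.

```python
import numpy as np
import networkx as nx

def surface_distance(verts, faces, v_from, v_to):
    """Shortest path length along mesh edges between two vertices.

    verts: (N, 3) array of vertex coordinates
    faces: iterable of (i, j, k) vertex-index triples
    """
    g = nx.Graph()
    for i, j, k in faces:
        for a, b in ((i, j), (j, k), (k, i)):
            g.add_edge(a, b,
                       weight=float(np.linalg.norm(verts[a] - verts[b])))
    return nx.shortest_path_length(g, v_from, v_to, weight="weight")

# e.g. shoulder_width = surface_distance(verts, faces, 313, 521)
```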
 As described above, the dimension data calculation device 3020 of the present embodiment can estimate the values of the predetermined number of shape parameters from the silhouette image with high accuracy by using the object engine 3021A. Further, since the three-dimensional data of the object can be restored with high accuracy from the shape parameter values estimated with high accuracy, the distance between any two vertices, not only a specific part, can be calculated with high accuracy as a measurement target location. In particular, the calculated dimension data between two vertices is highly accurate because it is calculated along the solid shape of the three-dimensional object constructed from the three-dimensional data.
 (3-3) Features of Dimension Data Calculation System
 a) As described above, the dimension data calculation system 3100 according to the present embodiment includes the dimension data calculation device 3020 and the learning device 3025. The information processing device configured as part of the dimension data calculation device 3020 includes the conversion unit (reception unit) 3024C and the estimation unit 3024D. The conversion unit (reception unit) 3024C receives a silhouette image of the object. The estimation unit 3024D estimates the values of the shape parameters of the object from the received silhouette image using the object engine 3021A, which associates the silhouette images of sample objects with the values of the predetermined number of shape parameters associated with the sample objects. The estimated values of the shape parameters of the object are then associated with dimension data relating to an arbitrary part of the object.
 The dimension data calculation device 3020 includes the acquisition unit 3024A, the extraction unit 3024B, the conversion unit 3024C, the estimation unit 3024D, and the calculation unit 3024E. The acquisition unit 3024A acquires image data in which the object is photographed and full length data of the object. The extraction unit 3024B extracts shape data indicating the shape of the object from the image data. The conversion unit 3024C converts the shape data into a silhouette image based on the full length data. The estimation unit 3024D estimates the values of the predetermined number of shape parameters from the silhouette image using the object engine 3021A, which associates the silhouette images of sample objects with the values of the predetermined number of shape parameters associated with the sample objects. The calculation unit 3024E calculates the dimension data of the object based on the estimated values of the predetermined number of shape parameters.
 Therefore, by using the previously created object engine 3021A, the dimension data calculation device 3020 can estimate the values of the predetermined number of shape parameters from the silhouette image with high accuracy. Further, by using the shape parameter values estimated with high accuracy, data relating to an arbitrary part of the object can be calculated efficiently and with high accuracy. Thus, the dimension data calculation device 3020 can provide the dimension data calculated for the object efficiently and with high accuracy.
 By using such a dimension data calculation device 3020, for example, dimension data relating to each part of a living organism as the object can be calculated with high accuracy. Dimension data of each part of an arbitrary object, such as a car or various pieces of luggage, can also be calculated with high accuracy. Furthermore, by incorporating the dimension data calculation device 3020 into a product manufacturing apparatus that manufactures various products, it becomes possible to manufacture products that conform to the shape of the object.
 b) In the dimension data calculation device 3020, the predetermined number of shape parameters associated with the sample objects are specified by dimensionally reducing the three-dimensional data of the sample objects. In particular, the dimension reduction is performed by principal component analysis. This makes it possible to effectively remove noise from the three-dimensional data of the sample objects and to compress the three-dimensional data.
 c) In the dimension data calculation device 3020, the three-dimensional data of the object is calculated from the estimated values of the predetermined number of shape parameters by the inverse transformation of the projection of the principal component analysis, and the three-dimensional data is associated with the dimension data. This makes it possible to construct the three-dimensional data of the object with high accuracy in response to the input of a silhouette image of the object.
 d) In the dimension data calculation device 3020, the silhouette image of a sample object is a projection image, in a predetermined direction, of the three-dimensional object constructed from the three-dimensional data of the sample object. That is, a three-dimensional object is constructed using the three-dimensional data of the sample object, and a silhouette image is obtained by projecting it. It is preferable to acquire two silhouette images, in the front direction and the side direction, by projection. This makes it possible to generate the silhouette images of the sample objects with high accuracy.
 e) In the dimension data calculation device 3020, the object engine 3021A is generated by learning the relationship between the silhouette images of the sample objects and the values of the predetermined number of shape parameters associated with the sample objects. The learning can be performed by deep learning. By learning through deep learning, the silhouette images of the sample objects can be associated with the shape parameter values of the sample objects with high accuracy.
 f) The calculation unit 3024E of the dimension data calculation device 3020 constructs three-dimensional data of a plurality of vertices of the object from the values of the predetermined number of shape parameters estimated for the object, and calculates dimension data between any two vertices of the object based on the constructed three-dimensional data. That is, the dimension data is calculated after a three-dimensional object is constructed using the three-dimensional data of the object. Since the dimension data between two vertices can thus be calculated from the shape of the three-dimensional object, the measurement target location is not limited to a specific part.
 In particular, in the calculation unit 3024E of the dimension data calculation device 3020, the dimension data between two vertices is calculated along the curved surface on the three-dimensional object composed of the three-dimensional data of the plurality of vertices of the object. This makes it possible to calculate the dimension data with even higher accuracy.
 (3-4) Modifications
 (3-4-1)
 In the above description, the acquisition unit 3024A acquires a plurality of image data obtained by photographing the object from different directions. Here, a depth data measuring device that can also acquire depth data is applicable as an imaging device capable of photographing the object from a plurality of different directions simultaneously. One example of the depth data measuring device is a stereo camera. In this specification, a "stereo camera" means an imaging device of any form that photographs an object from a plurality of different directions simultaneously to reproduce binocular parallax. On the other hand, a plurality of image data is not necessarily required, and the dimension data of each part can be calculated even from a single image of the object.
 When the above stereo camera is applied as the depth data measuring device, the image data that can be acquired by the acquisition unit 3024A can be RGB-D (Red, Green, Blue, Depth) data. Specifically, in addition to the RGB image data that can be acquired by an ordinary monocular camera, the image data can include a depth map having depth data for each pixel.
 (3-4-2)
 In the above description, the semantic segmentation algorithm and the GrabCut algorithm may be employed to extract the shape data of the object and to separate the object from the background image other than the object. In addition to or instead of this, for example, when a stereo camera is used, the depth map acquired from the stereo camera may be used to acquire the shape data of the object in association with the depth data of the object and to separate the background image. This makes it possible to generate the shape data of the object with even higher accuracy.
 Specifically, when a stereo camera is used, the extraction unit 3024B preferably extracts, based on the depth map acquired by the acquisition unit 3024A, the object region, i.e., the portion in which the object appears, from the image data in which the object is photographed. For example, the object region is extracted by removing from the depth map the regions whose depth data is not within a predetermined range. In the extracted object region, the shape data is associated with the depth data of the object for each pixel.
 The conversion unit 3024C converts the shape data into new shape data based not only on the aforementioned full length data but also on the depth data of the object region, and generates a "gradation silhouette image" (described later).
 The generated gradation silhouette image is not simple black-and-white binarized data, but a monochrome image of single-color multiple gradations in which each pixel is represented, based on the depth data, by a value from a luminance of 0 ("black") to 1 ("white"). In other words, the gradation silhouette image data is associated with the depth data and carries a larger amount of information about the shape of the object.
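 As an illustrative sketch of these two steps (the depth range limits and the nearer-is-brighter mapping are assumptions for the sketch), the object region can be extracted from the depth map by range thresholding and the remaining depths mapped to luminance values in [0, 1]:

```python
import numpy as np

def gradation_silhouette(depth, near=0.5, far=3.0):
    """depth: (H, W) array of per-pixel depths in metres."""
    region = (depth > near) & (depth < far)  # object region extraction
    sil = np.zeros_like(depth, dtype=float)
    # Map depths to luminance: nearer surfaces brighter, background stays
    # at 0 ("black").
    sil[region] = (far - depth[region]) / (far - near)
    return sil
```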
 When gradation silhouette images are used, the processing in the processing unit 3027 of the learning device 3025 is preferably carried out as follows. Note that the gradation silhouette image data is normalized by the full length data.
 In the preprocessing unit 3027A, when the three-dimensional object of a sample object is projected from a predetermined direction using the imaging device virtually provided in the three-dimensional space, it is preferable to also acquire the depth data from the imaging device to the sample object; that is, the silhouette image data of the sample object is associated with the depth data. The generated gradation silhouette image of the sample object is, based on the depth data, a monochrome image of single-color multiple gradations with luminance values from 0 ("black") to 1 ("white"), and carries a larger amount of information about the shape of the sample object.
 When the object engine 3021A is generated in the learning unit 3027B, it is preferable to learn so as to associate the values of the predetermined number of shape parameters of each sample object with the gradation silhouette image of the sample object associated with the depth data. Since the learning processing based on the depth data of the sample objects in the learning device 3025 draws on a larger amount of information, an even more accurate object engine 3021A can be generated.
 この変形例によれば、取得部3024Aに深度データを測定できる任意の機械を用いて構成される深度マップに基づいて対象物領域を抽出することにより、更に高精度に対象物の形状データを生成することが可能となる。また、階調シルエット画像データ(変換される形状データ)は対象物の深度データに関連付けられる。また、対象物エンジン3021Aも深度データに基づく学習処理の結果生成されたものである。したがって、対象物の形状に関して更に多くの情報量を有し、算出部3024Eによる対象物の各部分の寸法データを更に高精度に算出することが可能となる。 According to this modification, the shape data of the object is generated with higher accuracy by extracting the object region based on the depth map configured by the acquisition unit 3024A using any machine capable of measuring the depth data. It becomes possible to do. Further, the gradation silhouette image data (the shape data to be converted) is associated with the depth data of the object. The target engine 3021A is also generated as a result of the learning process based on the depth data. Therefore, the calculation unit 3024E can calculate the dimensional data of each part of the target object with higher accuracy, because the size information of the target object is larger.
 更に、ステレオカメラ以外にも、LiDAR(Light Detection and Ranging)装置により深度データを求め、対象物と対象物以外の背景画像とを分離することが可能である。すなわち、取得部3024Aに深度データを測定できる任意の機械(深度データ測定装置)を用いることで、高精度に対象物の形状データを生成することが可能となる。 Furthermore, in addition to stereo cameras, it is possible to obtain depth data with a LiDAR (Light Detection and Ranging) device and separate the target and background images other than the target. That is, by using an arbitrary machine (depth data measuring device) capable of measuring depth data in the acquisition unit 3024A, it becomes possible to generate the shape data of the object with high accuracy.
 なお、ここでは階調シルエット画像を単なるシルエット画像と区別して記述したが、他の実施形態及び他の変形例においては、両者を区別せずに単にシルエット画像と記載する場合がある。 Note that the gradation silhouette image is described here as being distinguished from a simple silhouette image, but in other embodiments and other modified examples, it may be simply described as a silhouette image without distinguishing both.
(3-4-3)
In the above description, in S3013, the preprocessing unit 3027A performed data augmentation processing that expands the shape parameter data set using random numbers. In the data augmentation processing, it suffices to decide how far to expand the set according to the number of sample objects. If a sufficient number of samples is prepared in advance, the augmentation processing of S3013 may be omitted.
(3-4-4)
In the above description, the shape parameters (λ1, ..., λ30) of the sample objects were obtained by principal component analysis in the preprocessing unit 3027A of the learning device 3025. Here, the shape parameters for the case where the sample objects are "persons" are considered further. As a result of performing principal component analysis using the three-dimensional data of 400 sample objects and 5,000 vertices as in the above example, it was observed that the shape parameters for an object that is a "person" have at least the following characteristics.
[Characteristic 1]
The first-ranked principal component λ1 was found to have a linear relationship with a person's height. Specifically, as shown in FIG. 14, the larger the first-ranked principal component λ1, the smaller the person's height.
Considering Characteristic 1, when the estimation unit 3024D estimates the values of the shape parameters of a person, the height data acquired by the acquisition unit 3024A may be used for the first-ranked principal component λ1 without using the object engine 3021A. Specifically, the value of the first-ranked principal component λ1 may be calculated separately by using a linear regression model with the person's height as the explanatory variable.
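As an illustration, a minimal sketch of such a linear regression with scikit-learn follows; the arrays of sample heights and first principal component values are hypothetical stand-ins for the training data described above.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: heights (cm) of sample persons and the
# corresponding first-ranked principal component values from the PCA.
# Consistent with Characteristic 1, lambda1 decreases as height grows.
heights = np.array([[150.0], [160.0], [170.0], [180.0], [190.0]])
lambda1 = np.array([1.8, 0.9, 0.0, -0.9, -1.8])

# Fit lambda1 as a linear function of height (the explanatory variable).
model = LinearRegression().fit(heights, lambda1)

# At estimation time, lambda1 is computed from the acquired height data
# instead of being inferred by the object engine.
lambda1_estimated = model.predict(np.array([[173.0]]))
```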
In this case, the principal component λ1 may also be excluded from the learning targets when the learning unit 3027B generates the object engine 3021A in the learning stage. As described above, in the learning stage in the learning device 3025, the network architecture is weighted. At this time, the weighting coefficients of the network architecture may be set so as to minimize the error between the principal components of the second and lower ranks, obtained by principal component analysis of the input silhouette image with the first-ranked principal component excluded, and the training-data values of the shape parameters from λ2 onward. As a result, when the object is a "person", the estimation accuracy of the shape parameter values in the estimation unit 3024D can be improved in combination with the use of the above linear regression model.
Alternatively, when the learning unit 3027B generates the object engine 3021A in the learning stage, the weighting coefficients of the network architecture may be set so as to minimize the error with the shape parameter values from λ1 onward, including the first-ranked principal component. The value of the first-ranked principal component λ1 may then be replaced with a value calculated separately by using a linear regression model with the person's height as the explanatory variable. In this way as well, when the object is a "person", the estimation accuracy of the shape parameter values in the estimation unit 3024D can be improved in combination with the use of the linear regression model.
[Characteristic 2]
FIG. 15 is a schematic graph showing the recall of the shape parameters, which are the principal components. The horizontal axis represents the principal components ranked by contribution rate, and the vertical axis represents the explained variance ratio of the eigenvalues. The bars show the individual explained variance ratio at each rank, and the solid line shows the cumulative explained variance ratio from the first rank. For simplicity, FIG. 15 schematically shows only the ten principal components up to the tenth rank. Note that the eigenvalues of the covariance matrix obtained in the principal component analysis represent the magnitudes of the eigenvectors (principal components), and the explained variance ratio of an eigenvalue may be regarded as the recall of the corresponding principal component.
Referring to the graph of FIG. 15, the cumulative explained variance ratio of the ten principal components from the first to the tenth rank is about 0.95 (broken-line arrow). In other words, a person skilled in the art will understand that the recall of these ten principal components is about 95%. That is, although the shape parameters obtained by feature transformation through dimension reduction were 30-dimensional in the above example, this is not a limitation; even 10-dimensional shape parameters cover about 95% of the variance. Accordingly, although the number (dimensionality) of shape parameters was set to 30 in the above description, in view of Characteristic 2 it may be on the order of 10.
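A minimal sketch of how the explained variance ratios of FIG. 15 could be computed with scikit-learn is shown below; the random data array is a hypothetical placeholder for the 400 × 15,000 sample matrix described earlier, so the resulting ratios are illustrative only.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical placeholder: 400 sample objects, each flattened to
# 15,000 values (5,000 vertices x 3 coordinates).
X = np.random.rand(400, 15000)

pca = PCA(n_components=30)
pca.fit(X)

# Individual and cumulative explained variance ratios per rank,
# corresponding to the bars and the solid line of FIG. 15.
individual = pca.explained_variance_ratio_
cumulative = np.cumsum(individual)

# Number of leading components needed to reach ~95% recall; if this is
# about 10, ten shape parameters may suffice instead of thirty.
n_needed = int(np.searchsorted(cumulative, 0.95) + 1)
```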
(3-4-5)
In the above description, the preprocessing unit 3027A of the learning device 3025 obtained two silhouette images per sample object, one in the front direction and one in the side direction, by projection for each of the 10,000 sample objects. However, two silhouette images of the object are not necessarily required; a single image may suffice.
(3-5) Application to a Product Manufacturing System
An example in which the above-described dimension data calculation device 3020 is applied to a product manufacturing system 3001 is described below.
(3-5-1) Configuration of the Product Manufacturing System
FIG. 16 is a schematic diagram showing the concept of the product manufacturing system 3001 according to this embodiment. The product manufacturing system 3001 includes the dimension data calculation device 3020, capable of communicating with a terminal device 3010 owned by a user 3005, and a product manufacturing device 3030, and is a system for manufacturing a desired product 3006. FIG. 16 illustrates, as an example, the case where the object 3007 is a person and the product 3006 is a chair. However, in the product manufacturing system 3001 according to this embodiment, the object 3007 and the product 3006 are not limited to these.
The terminal device 3010 can be realized by a so-called smart device; the terminal device 3010 provides its various functions through a user program installed on the smart device. Specifically, the terminal device 3010 generates the image data captured by the user 3005. Here, the terminal device 3010 may have a stereo camera function that captures the object from a plurality of different directions simultaneously to reproduce binocular parallax. Note that the image data is not limited to data captured by the terminal device 3010; for example, data captured with a stereo camera installed in a store may also be used.
The terminal device 3010 also accepts input of attribute data indicating attributes of the object 3007. The "attributes" include the full length, weight, and elapsed time since creation (including age) of the object 3007. In addition, the terminal device 3010 has a communication function and exchanges various information with the dimension data calculation device 3020 and the product manufacturing device 3030.
The dimension data calculation device 3020 can be realized by any computer. Here, the storage unit 3021 of the dimension data calculation device 3020 stores the information transmitted from the terminal device 3010 in association with identification information identifying the user 3005 of the terminal device 3010. The storage unit 3021 also stores the parameters and the like necessary for executing the information processing that calculates the dimension data.
As described above, the processing unit 3024 of the dimension data calculation device 3020 functions as the acquisition unit 3024A, the extraction unit 3024B, the conversion unit 3024C, the estimation unit 3024D, and the calculation unit 3024E. Here, the acquisition unit 3024A acquires the image data captured with the stereo camera by the user 3005 and the attribute data of the object 3007. The extraction unit 3024B extracts shape data indicating the shape of the object 3007 from the image data; for example, when "person" is set in advance as the type of object, a semantic segmentation algorithm is constructed using training data for identifying persons. The extraction unit 3024B may also separate the object from the background image other than the object using a depth map based on the depth data acquired from the stereo camera. In this case, the conversion unit 3024C converts the shape data associated with the depth data of the object in the depth map into a gradation silhouette image based on the full length data. The generated gradation silhouette image is preferably a single-color, multi-tone monochrome image based on the depth data. The conversion unit 3024C thus also functions as a reception unit for inputting the generated silhouette image to the estimation unit 3024D.
The estimation unit 3024D estimates the values of the predetermined number of shape parameters from the silhouette image by using the object engine 3021A, which associates silhouette images of sample objects with the values of the predetermined number of shape parameters associated with those sample objects. The calculation unit 3024E calculates the dimension data of the object based on the estimated values of the predetermined number of shape parameters. Specifically, three-dimensional data of a plurality of vertices of the object is constructed from the shape parameter values estimated by the estimation unit 3024D, and the dimension data between any two vertices of the object is then calculated based on the three-dimensional data.
The product manufacturing device 3030 is a manufacturing device that manufactures a desired product related to the shape of the object 3007 using at least one piece of dimension data calculated with the dimension data calculation device 3020. Any device that can automatically manufacture and process a product can be employed as the product manufacturing device 3030; it can be realized by, for example, a three-dimensional printer.
(3-5-2) Operation of the Product Manufacturing System
FIG. 17 is a sequence diagram for explaining the operation of the product manufacturing system 3001 according to this embodiment. FIGS. 18 and 19 are schematic diagrams showing screen transitions of the terminal device 3010.
First, the object 3007 is imaged a plurality of times via the terminal device 3010 so that the whole object is captured from different directions, and a plurality of pieces of image data of the object 3007 are generated (T3001). Here, a plurality of front and side photographs, as shown in FIGS. 18 and 19 respectively, are taken. Such front and side photographs are preferably taken with the stereo camera function of the terminal device 3010 turned on.
Next, the user 3005 inputs attribute data indicating the attributes of the object 3007 into the terminal device 3010 (T3002). Here, the full length data, weight data, elapsed time data (including age and the like), and so on of the object 3007 are input as the attribute data. The plurality of pieces of image data and the attribute data are then transmitted from the terminal device 3010 to the dimension data calculation device 3020.
Upon receiving the plurality of pieces of image data and the attribute data from the terminal device 3010, the dimension data calculation device 3020 calculates the dimension data of each part of the object 3007 using these data (T3003). Depending on the settings, the dimension data is displayed on the screen of the terminal device 3010. The product manufacturing device 3030 then manufactures the desired product 3006 based on the dimension data calculated by the dimension data calculation device 3020 (T3004).
(3-5-3) Features of the Product Manufacturing System
As described above, the product manufacturing system 3001 according to this embodiment includes the dimension data calculation device 3020, capable of communicating with the terminal device 3010 owned by the user 3005, and the product manufacturing device 3030.
The terminal device 3010 (imaging device) captures a plurality of images of the object 3007. The dimension data calculation device 3020 includes the acquisition unit 3024A, the extraction unit 3024B, the conversion unit 3024C, the estimation unit 3024D, and the calculation unit 3024E. The acquisition unit 3024A acquires the image data of the captured object and the full length data of the object. The extraction unit 3024B extracts shape data indicating the shape of the object from the image data. The conversion unit 3024C converts the shape data into a silhouette image based on the full length data. The estimation unit 3024D estimates the values of the predetermined number of shape parameters from the silhouette image using the object engine 3021A, which associates silhouette images of sample objects with the values of the predetermined number of shape parameters associated with those sample objects. The calculation unit 3024E calculates the dimension data of the object based on the estimated values of the predetermined number of shape parameters. The product manufacturing device 3030 manufactures the product 3006 using the dimension data calculated by the calculation unit 3024E. With this configuration, the dimension data calculation device 3020 calculates the dimensions of each part of the object 3007 with high accuracy, so that a desired product related to the shape of the object 3007 can be provided.
For example, the product manufacturing system 3001 can manufacture a model of an organ from measurements of the shapes of various organs such as the heart. Various healthcare products and the like can be manufactured from measurements of a person's waist shape. A figure product of a person can be manufactured from the person's shape. A chair or the like fitted to a person can likewise be manufactured from the person's shape. A toy car can be manufactured from the shape of a car, and a diorama or the like can be manufactured from an arbitrary landscape picture.
Although the dimension data calculation device 3020 and the product manufacturing device 3030 have been described above as separate devices, they may also be configured as a single unit.
<Fourth Embodiment>
Hereinafter, configurations and functions that have already been described are given substantially the same reference numerals, and their description is omitted.
(4-1) Configuration of the Dimension Data Calculation Device
FIG. 20 is a schematic diagram showing the configuration of a dimension data calculation system 4200 according to this embodiment. The dimension data calculation system 4200 includes a dimension data calculation device 4120 and a learning device 4125.
The dimension data calculation device 4120 includes a storage unit 4121, an input/output unit 4122, a communication unit 4123, and a processing unit 4124. The learning device 4125 includes a storage unit 4126 and a processing unit 4127. Note that the dimension data calculation device 4120 and the learning device 4125 may be realized as hardware using an LSI, ASIC, FPGA, or the like.
The storage units 4121 and 4126 each store various information and are realized by any storage device such as a memory or a hard disk. For example, the storage unit 4121 stores various data including an object engine 4121A, as well as programs, information, and the like, so that the processing unit 4124 can execute the information processing for dimension data calculation. The storage unit 4126 stores the training data used in the learning stage to generate the object engine 4121A.
The input/output unit 4122 has the same configuration and functions as the input/output unit 3022 described above, and the communication unit 4123 has the same configuration and functions as the communication unit 3023 described above.
The processing unit 4124 functions as an acquisition unit 4124A, an estimation unit 4124D, and a calculation unit 4124E when the program stored in the storage unit 4121 is read by the CPU, GPU, or the like of a computer. Similarly, the processing unit 4127 functions as a preprocessing unit 4127A and a learning unit 4127B when the program stored in the storage unit 4126 is read by the CPU, GPU, or the like of a computer.
In the processing unit 4124 of the dimension data calculation device 4120, the acquisition unit 4124A acquires attribute data including at least one of the full length data, weight data, and elapsed time data (including age and the like) of the object. In this embodiment, the acquisition unit 4124A thus also functions as a reception unit for inputting the attribute data to the estimation unit 4124D.
The estimation unit 4124D estimates the values of a predetermined number of shape parameters from the attribute data. The object engine 4121A is used for the estimation. As described later, the values of the shape parameters of the object estimated by the estimation unit 4124D can be associated with dimension data relating to any part of the object.
The calculation unit 4124E calculates the dimension data of the object from the values of the shape parameters of the object estimated by the estimation unit 4124D. Specifically, the calculation unit 4124E constructs three-dimensional data of a plurality of vertices of the object from the estimated shape parameter values and then calculates the dimension data between any two vertices of the object based on the three-dimensional data.
In the processing unit 4127 of the learning device 4125, the preprocessing unit 4127A carries out various preprocessing for learning. In particular, the preprocessing unit 4127A specifies the predetermined number of shape parameters through feature extraction by dimension reduction of the three-dimensional data of the sample objects. The shape parameter values of the sample objects and the corresponding attribute data are stored in advance in the storage unit 4126 as training data.
Note that the corresponding attribute data (full length data, weight data, and elapsed time data, including age and the like) is assumed to be prepared together with the three-dimensional data of the sample objects. The corresponding attribute data is stored in the storage unit 4126 as training data.
The learning unit 4127B learns so as to associate the shape parameter values of the sample objects with the corresponding attribute data. As a result of the learning, the object engine 4121A is generated. The generated object engine 4121A can be held in the form of an electronic file. When the dimension data calculation device 4120 calculates the dimension data of an object, the object engine 4121A is stored in the storage unit 4121 and referred to by the estimation unit 4124D.
(4-2) Operation of the Dimension Data Calculation System
The operation of the dimension data calculation system 4200 of FIG. 20 is described with reference to FIGS. 21 and 22. FIG. 21 is a flowchart showing the operation (S4110) of the learning device 4125, which generates the object engine 4121A based on sample object data. FIG. 22 is a flowchart showing the operation of the dimension data calculation device 4120, which calculates the dimension data of an object based on the attribute data of the object.
(4-2-1) Operation of the Learning Device
First, the data of the sample objects is prepared and stored in the storage unit 4126 (S4111). In one example, the prepared data is data of 400 sample objects, including 5,000 pieces of three-dimensional data prepared for each sample object and attribute data prepared for each sample object. The three-dimensional data includes the three-dimensional coordinate data of the vertices of the sample object. The three-dimensional data may also include mesh data such as the vertex information of each mesh forming the three-dimensional object and the normal direction of each vertex.
As in the first embodiment, the three-dimensional data of the sample objects is associated with part information together with the vertex numbers.
Subsequently, the preprocessing unit 4127A performs feature transformation into a predetermined number (dimensionality) of shape parameters by dimension reduction (S4112). This feature transformation processing is also the same as in the first embodiment. In the above example, as a result of the matrix operation using the projection matrix of the principal component analysis, the 15,000-dimensional (5,000 × 3) data of each of the 400 sample objects is feature-transformed into, for example, a 30-dimensional principal component shape parameter Λ.
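A minimal sketch of this feature transformation with scikit-learn's PCA might look as follows; the data array is again a hypothetical placeholder for the 400 × 15,000 vertex matrix.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical placeholder for the 400 x 15,000 vertex matrix
# (5,000 vertices x 3 coordinates per sample object).
X = np.random.rand(400, 15000)

# Projection by the principal component analysis: each 15,000-dimensional
# sample is feature-transformed into a 30-dimensional shape parameter.
pca = PCA(n_components=30).fit(X)
Lambda = pca.transform(X)                     # shape: (400, 30)

# The inverse mapping approximately restores the vertex coordinates from
# a shape parameter vector, which is what the reconstruction of the 3D
# object in the calculation step relies on.
X_restored = pca.inverse_transform(Lambda)
```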
Then, the learning unit 4127B machine-learns the relationship between the attribute data of the plurality of sample objects prepared in S4111 and the shape parameter data sets obtained in S4112, using the pairs as training data (S4115).
Specifically, the learning unit 4127B obtains transformed attribute data Y from the attribute data of the object. An element of the transformation matrix Z that associates an element y_r of the transformed attribute data Y with an element λ_m of the shape parameter Λ is written z_rm. The transformation matrix Z is a matrix of s rows and n columns, where 1 ≤ m ≤ n, with n being the dimensionality of the shape parameter Λ (30 in the above example), and 1 ≤ r ≤ s, with s being the number of elements of the transformed attribute data Y used in the operation.
For example, assume that the attribute data of the object consists of full length data h, weight data w, and elapsed time data a; that is, the attribute data is the set of elements (h, w, a). In this case, the learning unit 4127B computes the squared value of each element (h, w, a) of the attribute data (the quadratic terms), the products of pairs of elements (the interaction terms), and the values of the elements themselves (the linear terms).
As a result, transformed attribute data Y having the following nine elements is obtained:
Y = (h^2, w^2, a^2, hw, ha, wa, h, w, a)
Next, the learning unit 4127B performs a regression analysis on the pairs of the transformed attribute data Y, obtained from the attribute data associated with the 400 sample objects, and the shape parameters Λ, obtained from the three-dimensional data of the sample objects, thereby obtaining the transformation matrix Z of 9 rows and 30 columns:
Z = (z_rm), r = 1, ..., 9, m = 1, ..., 30
The data of the transformation matrix Z thus obtained is stored in the storage unit 4126 as the object engine 4121A.
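A minimal sketch of this regression in Python follows; the attribute and shape parameter arrays are hypothetical placeholders, and ordinary least squares is used here as one concrete way of carrying out the regression analysis.

```python
import numpy as np

def transform_attributes(h, w, a):
    """Build the 9-element transformed attribute data Y from (h, w, a):
    quadratic terms, interaction terms, and linear terms."""
    return np.array([h * h, w * w, a * a, h * w, h * a, w * a, h, w, a])

# Hypothetical placeholders: attributes (h, w, a) and 30-dimensional
# shape parameters of the 400 sample objects.
attributes = np.random.rand(400, 3)
Lambda = np.random.rand(400, 30)

Y = np.vstack([transform_attributes(h, w, a) for h, w, a in attributes])

# Least-squares regression: find the 9 x 30 matrix Z that minimizes
# the squared error || Y @ Z - Lambda ||.
Z, *_ = np.linalg.lstsq(Y, Lambda, rcond=None)
```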
(4-2-2) Operation of the Dimension Data Calculation Device
The dimension data calculation device 4120 stores the electronic file of the object engine 4121A generated by the learning device 4125, together with the projection information of the principal component analysis obtained by the learning device 4125, in the storage unit 4121 and uses them for calculating the dimension data of an object.
First, the acquisition unit 4124A acquires the attribute data of the object via the input/output unit 4122 (S4121), thereby receiving the attribute data of the object. Next, using the object engine 4121A stored in advance in the storage unit 4121, the estimation unit 4124D estimates the values of the shape parameters of the object from the received attribute data (S4124).
For example, assume again that the attribute data of the object consists of full length data h, weight data w, and elapsed time data a; that is, the attribute data is the set of elements (h, w, a). In S4124, as described above, the estimation unit 4124D obtains the transformed attribute data Y consisting of the squared value of each element (h, w, a) of the attribute data, the products of pairs of elements, and the values of the elements themselves.
Finally, the calculation unit 4124E calculates the dimension data relating to the parts of the object based on the values of the shape parameters of the object (S4125). Specifically, the transformed attribute data Y is calculated from the attribute data of the object acquired by the acquisition unit 4124A, and the shape parameter Λ is calculated by multiplying the transformed attribute data Y by the transformation matrix Z described above. Thereafter, as in the third embodiment (S3025), a three-dimensional object is virtually constructed from the three-dimensional data, and the dimension data between two vertices is calculated along the curved surface of the three-dimensional object. For the calculation of such three-dimensional distances, mesh data such as the vertex information of each mesh forming the three-dimensional object and the normal direction of each vertex can be used.
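Continuing the hypothetical sketches above, the estimation in S4124 and the start of S4125 reduce to a single matrix product; the numeric attribute values are assumed examples.

```python
# Hypothetical attribute data of a new object:
# height 172 cm, weight 64 kg, age 35 years.
y_new = transform_attributes(172.0, 64.0, 35.0)

# S4124: estimate the 30 shape parameter values.
lambda_new = y_new @ Z                        # shape: (30,)

# S4125: restoring the vertex coordinates from lambda_new with the stored
# PCA projection information then allows the distance between any two
# vertices to be measured along the mesh surface.
vertices = pca.inverse_transform(lambda_new.reshape(1, -1)).reshape(5000, 3)
```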
As described above, by using the object engine 4121A, the dimension data calculation device 4120 of this embodiment can estimate the values of the predetermined number of shape parameters from the attribute data of an object with high accuracy. Unlike the third embodiment, no image of the object needs to be input, and neither S3022 (shape data extraction processing) nor S3023 (rescaling processing) of FIG. 13 is required, which makes the processing efficient.
Moreover, since the three-dimensional data of the object can be restored with high accuracy from the shape parameter values estimated with high accuracy, dimensions can be measured with high accuracy not only at specific parts but between any two vertices. In particular, the dimension data calculated between two vertices is highly accurate because it is calculated along the three-dimensional shape, based on the three-dimensional object constructed from the three-dimensional data.
(4-3) Features of the Dimension Data Calculation System
As described above, the dimension data calculation system 4200 according to this embodiment includes the dimension data calculation device 4120 and the learning device 4125. The information processing device configured as part of the dimension data calculation device 4120 includes the acquisition unit (reception unit) 4124A, the estimation unit 4124D, and the calculation unit 4124E. The acquisition unit (reception unit) 4124A receives the attribute data of an object. The estimation unit 4124D estimates the values of the shape parameters of the object from the received attribute data using the object engine 4121A, which associates the attribute data of sample objects with the values of the predetermined number of shape parameters associated with those sample objects. The estimated values of the shape parameters of the object are then associated with dimension data relating to any part of the object.
Therefore, by using the object engine 4121A created in advance, the dimension data calculation device 4120 can efficiently estimate the values of the predetermined number of shape parameters from the attribute data, and the estimated shape parameter values are highly accurate. Furthermore, by using the shape parameter values estimated with high accuracy, data relating to any part of the object can be calculated efficiently and with high accuracy. In this way, the dimension data calculation device 4120 can provide the dimension data calculated for an object efficiently and with high accuracy.
(4-4) Application to a Product Manufacturing System
FIG. 23 is a schematic diagram showing the concept of a product manufacturing system 4001S according to this embodiment. Like the dimension data calculation device 3020 according to the third embodiment, the dimension data calculation device 4120 according to this embodiment can be applied to the product manufacturing system 4001S.
The terminal device 4010S according to this embodiment may be any device that accepts input of attribute data indicating the attributes of the object 4007. Examples of the "attributes" include the full length, weight, and elapsed time since creation (including age) of the object 4007.
As described above, the processing unit 4124 of the dimension data calculation device 4120 functions as the acquisition unit 4124A, the estimation unit 4124D, and the calculation unit 4124E. The calculation unit 4124E calculates the dimension data relating to the parts of the object based on the values of the shape parameters of the object obtained by the estimation unit 4124D.
In the product manufacturing system 4001S, the dimension data calculation device 4120 calculates the dimension data of the object 4007 efficiently and with high accuracy, so that a desired product related to the shape of the object 4007 can be provided. In other respects, the product manufacturing system 4001S according to the fourth embodiment can achieve the same effects as the product manufacturing system 3001 of the third embodiment.
<Other Embodiment 1: Silhouette Image Generating Device>
(5-1) Configuration of the Silhouette Image Generating Device
FIG. 24 is a schematic diagram showing the configuration of a silhouette image generating device 5020 according to another embodiment. The silhouette images (including gradation silhouette images) generated in the first and third embodiments may be generated by this silhouette image generating device 5020. That is, the silhouette image generating device 5020 may be configured as part of the dimension data calculation device 1020 according to the first embodiment or as part of the dimension data calculation device 3020 according to the third embodiment.
The silhouette image generating device 5020 can be realized by any computer and includes an acquisition unit 5024A, an extraction unit 5024B, and a conversion unit 5024C.
The acquisition unit 5024A may correspond to all or part of the acquisition unit 1024A of the dimension data calculation device 1020 according to the first embodiment and/or the acquisition unit 3024A of the dimension data calculation device 3020 according to the third embodiment. The extraction unit 5024B may correspond to all or part of the extraction unit 1024B of the dimension data calculation device 1020 according to the first embodiment and/or the extraction unit 3024B of the dimension data calculation device 3020 according to the third embodiment. Similarly, the conversion unit 5024C may correspond to all or part of the conversion unit 1024C of the dimension data calculation device 1020 according to the first embodiment and/or the conversion unit 3024C of the dimension data calculation device 3020 according to the third embodiment.
The acquisition unit 5024A acquires image data in which the object is photographed; for example, it acquires a plurality of pieces of image data of the object captured from a plurality of different directions by an imaging device. Here, a depth data measuring device capable of acquiring depth data is applicable, and a depth map having depth data for each pixel is constructed based on the depth data. By applying a depth data measuring device, the image data that the acquisition unit 5024A can acquire can include RGB-D (Red, Green, Blue, Depth) data; specifically, in addition to the RGB image data obtainable with an ordinary monocular camera, such a depth map can be acquired.
An example of the depth data measuring device is a stereo camera, and a stereo camera is assumed in the following description as well. When photographing an object (particularly a person) with a stereo camera, it is preferable to guide the user so that the whole object fits within a predetermined range of the display, so that the object can be identified accurately. In one example, a guide area may be shown on the display, or a guide message may be displayed to prompt the user. This allows the object to be positioned in the desired direction and at the desired distance from the stereo camera, reducing noise when the silhouette image is generated.
The extraction unit 5024B extracts shape data indicating the shape of the object from the image data. More specifically, the extraction unit 5024B includes a three-dimensional point cloud generation unit 5124, a background point cloud removal unit 5224, a plane point cloud removal unit 5324, an object region extraction unit 5424, a shape data extraction unit 5524, and an object detection unit 5624.
The three-dimensional point cloud generation unit 5124 generates three-dimensional point cloud data from the acquired depth map and places a three-dimensional point cloud, a set of points, in a virtual three-dimensional coordinate space. Each point has three-dimensional coordinates in the virtual three-dimensional space. In the virtual three-dimensional coordinate space, the stereo camera is virtually placed at the origin, and a three-dimensional coordinate (xyz) system is defined according to the orientation of the stereo camera; in particular, the optical axis direction of the stereo camera is defined as the depth direction (z-axis direction).
The background point cloud removal unit 5224 removes, from the generated three-dimensional point cloud data, the points located farther than a predetermined distance along the depth direction of the virtual three-dimensional coordinate space. Since the removed points lie far from the stereo camera, they can be regarded as constituting the background image, and it is preferable to remove this background portion from the image data in which the object is photographed. This effectively removes three-dimensional points that would otherwise be noise, improving the accuracy with which the object region extraction unit 5424 identifies the object region.
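A minimal sketch of these two steps in Python follows; the pinhole camera intrinsics and the background cutoff distance are assumed values, since this disclosure does not fix them.

```python
import numpy as np

def depth_to_point_cloud(depth_map, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Back-project a depth map (meters) into a 3D point cloud in the
    virtual camera space (camera at the origin, optical axis along +z).
    The intrinsics fx, fy, cx, cy are assumed values."""
    h, w = depth_map.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_map
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]        # drop invalid zero-depth pixels

def remove_background(points, max_depth=3.0):
    """Remove points farther than an assumed cutoff along the z axis;
    such points are treated as belonging to the background."""
    return points[points[:, 2] <= max_depth]
```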
The plane point cloud removal unit 5324 removes, from the generated three-dimensional point cloud data, the three-dimensional point cloud data corresponding to planar portions. To identify the object region accurately, it is preferable to identify and remove the planar portions existing around the object from the image data in which the object is photographed. For that purpose, the planar portions must be estimated using the generated three-dimensional point cloud data.
Specifically, the plane point cloud removal unit 5324 preferably operates as follows. First, a planar portion in the image data is estimated from the three-dimensional point cloud data generated from the depth map. The plane here is, for example, a floor; that is, when the object is a person, the planar portion is the floor portion with which the standing person is in contact.
In general, a plane in the three-dimensional xyz coordinate space is expressed by the equation f(x, y, z) = 0, where
f(x, y, z) = ax + by + cz + d = 0
In one example, in the virtual three-dimensional coordinate space in which the three-dimensional point cloud is placed, the planar portion is selected as one of a plurality of sample planes drawn by a known random sampling method. Typically, a robust estimation algorithm based on RANSAC (Random Sample Consensus) is applicable to the random sampling.
More specifically, a sample plane is first drawn by randomly determining the normal vector (a, b, c) and d. Next, to evaluate how many points of the three-dimensional point cloud are associated with the sample plane, the points satisfying the following inequality are identified, where DST is a predetermined threshold distance:
|f(x, y, z)| = |ax + by + cz + d| ≤ DST
Points satisfying the above inequality (each having x, y, and z values) are regarded as lying on the sample plane. Theoretically the threshold distance would be DST = 0, but in consideration of the shooting environment, the performance of the stereo camera, and so on, DST is preferably set in advance to a value close to 0 so as to also include the three-dimensional point cloud data within a predetermined small distance of the sample plane. Among the plurality of randomly determined sample planes, the sample plane for which the largest number of points satisfies the above inequality, that is, the sample plane containing the largest fraction of the three-dimensional point cloud, is estimated to be the desired planar portion in the image data.
In other words, the plane point cloud removal unit 5324 repeats the sampling of planes a plurality of times before selecting the sample plane containing the largest fraction of the three-dimensional point cloud, which increases the robustness of the estimation of the desired planar portion.
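A minimal sketch of this plane estimation in Python is shown below. Note one common variant: instead of drawing (a, b, c) and d directly at random, each candidate plane is fitted through three randomly chosen points of the cloud; the iteration count and the threshold DST are assumed values.

```python
import numpy as np

def estimate_plane_ransac(points, n_iters=200, dst=0.02, rng=None):
    """Estimate the dominant plane ax + by + cz + d = 0 in a point cloud.

    Each iteration fits a candidate plane through 3 random points and
    counts the points within the threshold distance dst; the plane with
    the most inliers is kept.
    """
    rng = np.random.default_rng() if rng is None else rng
    best_inliers, best_plane = None, None
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                    # degenerate (collinear) sample
            continue
        normal = normal / norm             # unit normal (a, b, c)
        d = -normal @ p0
        inliers = np.abs(points @ normal + d) <= dst
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (*normal, d)
    return best_plane, best_inliers

# The inlier points of the estimated floor plane can then be removed:
# plane, inliers = estimate_plane_ransac(points)
# points = points[~inliers]
```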
 引き続き、平面点群除去部5324は、生成された3次元点群データから、推定された平面部分に存在する点の3次元点群データを取り除く。取り除かれる点を有する平面部分は、例えば、画像データにおける床部分である。つまり、平面点群除去部5324により、対象物が撮影された画像データから床部分を除去することができる。これにより、ノイズとなる3次元点群を効果的に除去することができるので、対象物領域抽出部5424で抽出される対象物の対象物領域の特定精度を向上させることができる。 Subsequently, the plane point cloud removing unit 5324 removes the 3D point cloud data of the points existing in the estimated plane part from the generated 3D point cloud data. The plane portion having the points to be removed is, for example, the floor portion in the image data. That is, the plane point cloud removing unit 5324 can remove the floor portion from the image data in which the object is photographed. As a result, the three-dimensional point group that becomes noise can be effectively removed, so that it is possible to improve the accuracy of specifying the target object area of the target object extracted by the target object area extracting unit 5424.
 また、平面点群除去部5324による処理を繰り返すことにより、別の平面部分を更に推定して、当該別の平面部分に存在する点の3次元点群データを更に取り除いてもよい。例えば、床部分を推定し、その3次元点群データを全体の3次元点群データから一旦取り除いた後に、再度、前述のランダムサンプリング手法によりサンプルされるサンプル平面の中から平面を推定する。これにより、今度は壁の部分を推定することができる。つまり、床部分のみならず壁部分の3次元点群データも画像データから除去することができ、対象物の対象物領域を特定する精度を更に向上させることができる。 Further, by repeating the processing by the plane point cloud removing unit 5324, another plane part may be further estimated, and the three-dimensional point cloud data of points existing in the other plane part may be further removed. For example, the floor portion is estimated, the three-dimensional point cloud data is once removed from the entire three-dimensional point cloud data, and then a plane is estimated again from the sample planes sampled by the random sampling method described above. With this, it is possible to estimate the wall portion. That is, not only the floor portion but also the three-dimensional point cloud data of the wall portion can be removed from the image data, and the accuracy of specifying the target object area of the target object can be further improved.
 なお、平面部分の推定の精度は、対象物を撮影する際の撮影環境にも依存する。例えば、平面部分の推定を正確に行うには、平面部分を構成する点の数を、対象物を構成する点の数よりも大きくする必要がある。したがって、例えば、多くの壁が写り込まない撮影環境をユーザに選択させる、又は店舗内のステレオカメラを多くの壁が写り込まない場所に固定的に設置する等を考慮するのがよい。 Note that the accuracy of estimation of the plane part also depends on the shooting environment when shooting the target object. For example, in order to accurately estimate the plane portion, it is necessary to make the number of points forming the plane portion larger than the number of points forming the object. Therefore, for example, it is preferable to allow the user to select a shooting environment in which many walls are not reflected, or fixedly install the stereo camera in the store in a place where many walls are not reflected.
 対象物領域抽出部5424は、3次元点群データを用いて対象物の対象物領域を抽出する。3次元点群生成部5124によって深度マップから生成された3次元点群から背景点群除去部5224及び/又は平面点群除去部5324によって取り除かれ、つまり、ノイズ除去された後の3次元点群データを用いて、更に対象物に相当する3次元点群を特定する。例えば、仮想3次元空間における所定の空間範囲にある3次元点群を特定すればよい。そして、特定された3次元点群データに基づくことで画像データにおける対象物の対象物領域を抽出することができる。このようにして抽出される対象物領域は、効果的にノイズ除去処理されたものであり、高精度である。これにより、変換部5024Cによって変換されるシルエット画像の精度をまた向上させることができる。 The target area extraction unit 5424 extracts the target area of the target using the three-dimensional point cloud data. The three-dimensional point cloud removed from the three-dimensional point cloud generated from the depth map by the three-dimensional point cloud generator 5124 by the background point cloud remover 5224 and/or the plane point cloud remover 5324, that is, noise-removed Using the data, a three-dimensional point cloud corresponding to the object is further specified. For example, a three-dimensional point group in a predetermined space range in the virtual three-dimensional space may be specified. Then, the object area of the object in the image data can be extracted based on the specified three-dimensional point cloud data. The object region extracted in this way is effectively noise-removed and has high accuracy. Thereby, the accuracy of the silhouette image converted by the conversion unit 5024C can be further improved.
 形状データ抽出部5524は、対象物領域抽出部5424によって抽出された対象物領域に対応する、深度マップ中の領域の深度データに基づいて、対象物の形状を示す形状データを抽出する。 The shape data extraction unit 5524 extracts shape data indicating the shape of the target object based on the depth data of the region in the depth map corresponding to the target region extracted by the target region extraction unit 5424.
The object detection unit 5624 uses the RGB image data acquired by the acquisition unit 5024A to extract, by object detection, the image region of the object in the image data. The image region of the object is defined as a two-dimensional (xy) coordinate region perpendicular to the depth (z) direction. Any known object detection method may be used; for example, region identification using a deep-learning object detection algorithm is applicable. One example of such an algorithm is R-CNN (Regions with Convolutional Neural Networks).
The three-dimensional point cloud generation unit 5124 described above may generate the three-dimensional point cloud data based on the portion of the depth map corresponding to the image region extracted by the object detection unit 5624. This yields point cloud data with even less noise and, as a result, improves the accuracy of the shape data extracted by the shape data extraction unit 5524.
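As one hedged illustration of this detection step, a pretrained Faster R-CNN from torchvision (an R-CNN-family detector) could supply the person's bounding box; the exact loading API varies by torchvision version, and the score threshold is an assumption:

```python
import torch
import torchvision

# Pretrained Faster R-CNN, an R-CNN-family detector (COCO label 1 = person).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

def detect_person_box(rgb_array, score_thresh=0.8):
    """Return the highest-scoring person bounding box [x0, y0, x1, y1],
    or None; rgb_array is an HxWx3 float image scaled to [0, 1]."""
    img = torch.as_tensor(rgb_array).permute(2, 0, 1).float()
    with torch.no_grad():
        pred = model([img])[0]
    keep = (pred["labels"] == 1) & (pred["scores"] > score_thresh)
    boxes = pred["boxes"][keep]
    return boxes[0].tolist() if len(boxes) else None
```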
The conversion unit 5024C converts the shape data extracted by the shape data extraction unit 5524 to generate a silhouette image of the object. The resulting silhouette image need not be mere black-and-white binarized data: based on the depth data, the image region of the object can be expressed as a single-color multi-tone monochrome image (a gradation silhouette image) whose luminance values range, for example, from 0 ("black") to 1 ("white"). In other words, by associating the image region of the object with the depth data, the silhouette image data can carry a larger amount of information.
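One way the gradation silhouette could be produced from the depth data, assuming a boolean object-region mask; mapping nearer points to brighter values is an illustrative choice, since any monotone mapping of depth would carry the same information:

```python
import numpy as np

def depth_to_gradation_silhouette(depth, object_mask):
    """Build a single-channel multi-tone silhouette: the background stays
    at 0 ("black") and the object's depth values are mapped into (0, 1],
    here with nearer points rendered brighter."""
    silhouette = np.zeros(depth.shape, dtype=float)
    d = depth[object_mask]
    span = d.max() - d.min()
    if span == 0:
        silhouette[object_mask] = 1.0        # flat object: uniform white
    else:
        silhouette[object_mask] = 1.0 - (d - d.min()) / span
    return silhouette
```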
(5-2) Operation of the Silhouette Image Generating Device
Fig. 25 is a flowchart for explaining the operation of the silhouette image generating device 5020 according to the other embodiment described with reference to Fig. 24. Through this flowchart, a silhouette image of the object can be generated from image data in which the object was photographed (S5000).
First, the acquisition unit 5024A acquires image data, including a depth map, in which the object was photographed (S5010). Next, the object detection unit 5624 extracts the image region of the object from the RGB image data included in the image data (S5020); this step may be omitted. Subsequently, the three-dimensional point cloud generation unit 5124 generates three-dimensional point cloud data corresponding to the depth map included in the image data, forming a virtual three-dimensional coordinate space (S5030). When S5020 has been executed, the point cloud data is preferably generated based on the portion of the depth map corresponding to the image region (xy coordinate region) of the object.
Next, the background point cloud removal unit 5224 removes the point cloud data lying farther than a predetermined threshold distance along the depth (z) direction of the virtual three-dimensional coordinate space (S5040). The plane point cloud removal unit 5324 then estimates a plane portion in the image data (S5050) and removes the point cloud data corresponding to that plane portion (S5060). S5050 and S5060 may be repeated to estimate a plurality of plane portions in the image data and remove their point cloud data.
Subsequently, the object region extraction unit 5424 extracts the object region of the object based on the point cloud data remaining after the removal (S5070). The shape data extraction unit 5524 then extracts shape data indicating the shape of the object based on the depth data of the object region in the depth map (S5080). Finally, the conversion unit 5024C converts the shape data to generate a silhouette image of the object.
(5-3) Features of the Silhouette Image Generating Device
(5-3-1)
As described above, the silhouette image generating device 5020 according to this embodiment includes the acquisition unit 5024A, the extraction unit 5024B, and the conversion unit 5024C. The acquisition unit 5024A acquires image data, including a depth map, in which an object was photographed. The extraction unit 5024B extracts the object region of the object using three-dimensional point cloud data generated from the depth map, and extracts shape data indicating the shape of the object based on the depth data of the depth map corresponding to the object region. The conversion unit 5024C converts the shape data to generate a silhouette image of the object.
The silhouette image generating device 5020 thus generates three-dimensional point cloud data from the depth map before generating the silhouette image of the object. Because the points constituting noise can be effectively identified and removed in the point cloud data, the accuracy of identifying the object region of the object is improved, and a highly accurate silhouette image can be obtained. Moreover, by using the depth map, a gradation silhouette image, i.e. a monochrome image associated with the depth data, can be generated as the silhouette image, carrying a large amount of information about the shape of the object.
(5-3-2)
In the silhouette image generating device 5020, the extraction unit 5024B extracts the object region of the object based on the three-dimensional point cloud data from which the points lying farther than a predetermined threshold distance along the depth direction have been removed. Since the points that constitute the background of the image data and act as noise can thus be removed effectively, the accuracy with which the object region extraction unit 5424 identifies the object region of the object can be improved.
(5-3-3)
In the silhouette image generating device 5020, the extraction unit 5024B further estimates a plane portion in the image data from the three-dimensional point cloud data generated from the depth map, and extracts the object region of the object based on the point cloud data from which the points lying on the estimated plane portion have been removed.
(5-3-4)
Here, the extraction unit 5024B estimates the plane portion by calculating, for each sample plane obtained by random sampling, the fraction of the three-dimensional point cloud data associated with that plane, and repeats this estimation. Since the sample plane with the highest point cloud content is determined only after the sampling has been repeated a plurality of times, the robustness of the estimation of the desired plane portion can be enhanced.
The extraction unit 5024B also estimates a plurality of plane portions by repeating the plane estimation process. Since the points that constitute planes in the image data and act as noise can thus be removed effectively, the accuracy with which the object region extraction unit 5424 identifies the object region of the object can be improved.
(5-3-5)
Furthermore, in the silhouette image generating device 5020, the acquisition unit 5024A further acquires RGB image data, and the extraction unit 5024B further extracts the image region of the object using the RGB image data and generates the three-dimensional point cloud data from the portion of the depth map corresponding to that image region. By extracting the image region of the object in advance, before the point cloud data is generated, the accuracy with which the object region extraction unit 5424 identifies the object region of the object can be improved still further.
(5-3-6)
Furthermore, in this embodiment, it is preferable that the object is a person and the plane portion includes a floor. A silhouette of a person standing upright on the floor is thereby generated effectively.
<Other Embodiment 2: Dimension Data Calculation Device>
(6-1) Configuration of the Dimension Data Calculation Device
Fig. 26 is a schematic diagram showing the configuration of a dimension data calculation device 6020 according to another embodiment. The dimension data calculation device 6020 includes a shape parameter acquisition unit 6024D and a calculation unit 6024E.
The dimension data calculation device 6020 according to this embodiment is applicable to the third and fourth embodiments and to their modifications. For example, the shape parameter acquisition unit 6024D of the dimension data calculation device 6020 may be configured as all or part of the acquisition unit 3024A, extraction unit 3024B, conversion unit 3024C, and estimation unit 3024D of the third embodiment, or as all or part of the acquisition unit 4124A and estimation unit 4124D of the fourth embodiment. Likewise, the calculation unit 6024E may be configured as all or part of the calculation unit 3024E of the third embodiment or of the calculation unit 4124E of the fourth embodiment.
The shape parameter acquisition unit 6024D acquires the values of the shape parameters of the object.
The calculation unit 6024E constructs three-dimensional data of the object from the values of its shape parameters and calculates the dimension data of an arbitrary part based on the information of the vertices of the three-dimensional data constituting the part region associated with that part. The dimension data may relate to any part, and a calculation algorithm can be set for each part in order to calculate its dimension data.
The calculation unit 6024E includes a three-dimensional data construction unit 6124, a part region construction unit 6224, a calculation point extraction unit 6324, and a dimension data calculation unit 6424.
The three-dimensional data construction unit 6124 of the calculation unit 6024E constructs the three-dimensional data of the object from the acquired shape parameter values. As described in the third and fourth embodiments, the three-dimensional data is constructed by applying, to the shape parameter values acquired by the shape parameter acquisition unit 6024D, the inverse transformation of the dimensionality-reduction projection performed at the learning stage. The constructed three-dimensional data is preferably three-dimensional mesh data, i.e. information on the set of vertices of the mesh forming the three-dimensional object (for example, the three-dimensional coordinates of the vertices).
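A minimal sketch of this inverse transformation, assuming for illustration that the dimensionality reduction was a linear projection such as PCA over flattened vertex coordinates (the array shapes and names are assumptions, not from the original):

```python
import numpy as np

def shape_params_to_mesh(params, mean_vertices, components):
    """Reconstruct the (V, 3) vertex array of the 3D mesh by inverting the
    dimensionality-reduction projection: flattened vertices = mean + p @ W.
    params: (n_params,); components: (n_params, V*3); mean_vertices: (V, 3)."""
    flat = mean_vertices.reshape(-1) + params @ components
    return flat.reshape(-1, 3)
```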
In the following examples it is assumed, without limitation, that the object to be measured is a human body and that the constructed three-dimensional object is a human body model. Figs. 27a and 27b are schematic views of the human body model in three-dimensional space. The human body model is composed of three-dimensional mesh data (the mesh itself is not shown) and is preferably, for example, a model standing upright on a horizontal plane.
Fig. 27a is a plan view of the human body model, and Fig. 27b is its front view. The three-dimensional coordinate system is adjusted with respect to the human body model so that the lateral direction is the x axis, the frontal direction is the y axis, and the height direction is the z axis. More specifically, on the x axis, the positive direction points from the body's center of gravity toward the left half of the body as the model is viewed from the front, and the negative direction toward the right half. On the y axis, the positive direction points from the center of gravity toward the back, and the negative direction toward the front. On the z axis, the positive direction points from the center of gravity toward the upper body (vertically upward), and the negative direction toward the lower body (vertically downward).
Returning to Fig. 26, the part region construction unit 6224 of the calculation unit 6024E constructs, from the vertex information of the three-dimensional mesh data constructed by the three-dimensional data construction unit 6124, a predetermined part region associated with a predetermined part. For example, the part region may be a tubular region having a centroid axis. The tubular region is composed of the set of mesh vertices that partially form the three-dimensional object within a predetermined range corresponding to the part. Constructing a part region may also include classifying (clustering) the distribution of the vertex set under predetermined conditions.
More specifically, the parts "hip" and "waist" are each associated with the torso region that contains them; similarly, the part "wrist circumference" is associated with an arm region, and the part "armhole" with a shoulder region. The part region is not limited to a tubular region and may be any region in three-dimensional space; it may also be a planar region rather than a solid one.
The calculation point extraction unit 6324 extracts a predetermined number of calculation points from the set of mesh vertices constituting the part region. More specifically, in accordance with the part concerned, it selectively extracts calculation points that are partially associated with the part region. For example, the tubular region may be divided according to the quadrants of a (two-dimensional) coordinate system defined orthogonally to its centroid axis with the centroid axis as the origin, and a calculation point may be extracted individually from each quadrant.
When the part region is a tubular region, the number of calculation points extracted by the calculation point extraction unit 6324 is preferably three to five from the viewpoint of computational cost and accuracy. According to the inventors' extensive findings, extracting six or more calculation points can increase the amount of computation and reduce computational efficiency, whereas dimension data can be calculated with high accuracy as long as at least three calculation points are extracted.
The manner of dividing the part region and the number of divisions are preferably set individually for each part. Furthermore, in addition to the calculation points associated with the divided part regions, mesh vertices satisfying predetermined conditions may be extracted as additional calculation points.
The dimension data calculation unit 6424 concretely calculates the dimension data based on the calculation points extracted by the calculation point extraction unit 6324. For example, when the part region is a tubular region, the dimension data is calculated by computing the length of the circumference of the tubular region from the extracted calculation points. More specifically, the circumferential length is computed as the sum of the distances obtained by connecting neighboring calculation points with line segments along the circumference of the tubular region. Calculating the circumferential length from the extracted calculation points in this way allows the dimension data to be calculated efficiently and with high accuracy.
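A sketch of this circumference computation, assuming the calculation points have already been ordered along the circumference (for example, by angle around the centroid axis):

```python
import numpy as np

def circumference(ordered_points):
    """Sum the straight-line distances between neighbouring calculation
    points, walking once around the closed loop of the tubular region."""
    pts = np.asarray(ordered_points, dtype=float)
    segs = pts - np.roll(pts, -1, axis=0)    # point i to point i+1 (wraps)
    return float(np.linalg.norm(segs, axis=1).sum())
```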
(6-2) Operation of the Dimension Data Calculation Device
Fig. 28 is a flowchart for explaining the operation of the dimension data calculation device 6020 described with reference to Fig. 26. Through the processing of this flowchart (S6000), the dimension data of a predetermined part of the object can be calculated from the shape parameter values.
First, the shape parameter acquisition unit 6024D acquires the values of the shape parameters of the object (S6010). This step is preferably carried out through the estimation processing of the estimation unit 3024D of the third embodiment or the estimation unit 4124D of the fourth embodiment.
Next, the three-dimensional data construction unit 6124 of the calculation unit 6024E constructs three-dimensional mesh data from the shape parameter values of the object (S6020). The constructed mesh data is information on the set of vertices of the mesh forming the three-dimensional object (for example, the three-dimensional coordinates of the vertices).
Subsequently, the part region construction unit 6224 constructs, from the vertex-set information of the constructed mesh data, a predetermined part region associated in advance with a predetermined part (S6030).
The calculation point extraction unit 6324 then selectively extracts a plurality of characteristic calculation points associated with the part region (S6040). More specifically, the part region is divided according to the part concerned, and about three to five calculation points are individually selected from the divided part regions (extraction examples for specific parts are described later).
Finally, the dimension data calculation unit 6424 calculates the dimension data based on the extracted calculation points (S6050). For example, when the part region is a tubular region, the dimension data is calculated by computing the circumferential length of the tubular region from the extracted calculation points; more specifically, the three-dimensional circumferential length is computed as the sum of the distances obtained by connecting neighboring calculation points with line segments along the circumference.
In this way, by extracting characteristic calculation points from the model of the object and using them to compute the circumferential length of the tubular region, the dimension data can be calculated efficiently and with high accuracy.
(6-3) Examples of Calculation Point Extraction for Each Part
With reference to Figs. 29 to 35b, examples are described in which the calculation point extraction unit 6324 of the calculation unit 6024E selectively extracts a plurality of calculation points for specific parts. The parts "hip", "waist", "wrist circumference", and "armhole" are described here, but the extraction of calculation points according to this embodiment is not limited to these parts.
Figs. 29 to 31 are schematic diagrams showing an example in which the tubular torso region and arm regions of the human body model are delimited and constructed. Figs. 32a and 32b are schematic diagrams showing an example of extracting the calculation points for the hip. Similarly, Figs. 33a and 33b show the waist, Figs. 34a and 34b the wrist circumference, and Figs. 35a and 35b the armhole.
(6-3-1) Identification of the Torso Region and Arm Regions
As shown in the front view of Fig. 29, the torso region BR and arm regions AR of the human body model are identified by cutting out a solid region extracted over a predetermined range in the height direction (z-axis direction). For example, although not limited to this, a region extending a predetermined distance (±D cm) above and below the position at a predetermined fraction of the height (R% from the top) along the z axis may be cut out of the human body model to extract a set of mesh vertices. The cut-out region includes the right arm region ARr, the left arm region ARl, and the torso region BR. The values of the fraction R and the distance D are preferably chosen individually for the part to be calculated.
Fig. 30 shows the xy plane of the right arm region ARr, left arm region ARl, and torso region BR viewed from the positive z direction. C(c_x, c_y) is the center point of the human body model, computed from the coordinate values of the set of mesh vertices; for example, the center point C(c_x, c_y) may be the body's center of gravity.
Fig. 31 shows the distribution of the mesh vertices of the right arm region ARr, left arm region ARl, and torso region BR shown in Figs. 29 and 30. The horizontal axis of the coordinate system of Fig. 31 indicates the distance in the x-axis direction from the center point C(c_x, c_y) to each vertex, and the vertical axis indicates the distance from the center point C(c_x, c_y) to each vertex.
That is, the horizontal-axis and vertical-axis values of the graph of Fig. 31 are computed by the following formulas:
・horizontal-axis value = |x - c_x|
・vertical-axis value = (|x - c_x|^2 + |y - c_y|^2)^(1/2)
In the distribution of Fig. 31, the region ar corresponds to the right arm region ARr and left arm region ARl, while the region br corresponds to the torso region BR. That is, the set of vertices of the solid region extracted over the predetermined height range (z-axis direction) can be classified (clustered) into the regions ar and br, so that each vertex of the solid region can be identified as belonging either to the torso region BR or to an arm region AR (here, the right arm region ARr or left arm region ARl).
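A rough sketch of this classification, assuming the slice vertices are an N×3 NumPy array and using k-means as one possible clustering method (the original does not specify the clustering algorithm):

```python
import numpy as np
from sklearn.cluster import KMeans

def split_torso_and_arms(slice_vertices, center):
    """Compute the two features of Fig. 31 for every vertex of the
    height-band slice and cluster them into torso and arm groups."""
    cx, cy = center
    horizontal = np.abs(slice_vertices[:, 0] - cx)
    radial = np.hypot(slice_vertices[:, 0] - cx, slice_vertices[:, 1] - cy)
    features = np.stack([horizontal, radial], axis=1)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(features)
    # the cluster with the smaller mean radial distance is the torso (br)
    torso = int(np.argmin([radial[labels == k].mean() for k in (0, 1)]))
    return slice_vertices[labels == torso], slice_vertices[labels != torso]
```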
The calculation points for the parts "hip", "waist", "wrist circumference", and "armhole" can be extracted based on the information of the mesh vertex sets of the torso region BR or arm regions AR, as described next.
(6-3-2) Hip
Fig. 32a is a schematic plan view showing an example of extracting five calculation points for computing the hip dimension data from the set of mesh vertices forming the tubular torso region BR shown in Fig. 29; Fig. 32b is a schematic perspective view of the same.
In Figs. 32a and 32b, an x'y'z' coordinate system is defined in which the centroid axis AX1 of the torso region BR1 along the height direction is the z' axis, and the x'y' plane is orthogonal to the z' axis with the centroid axis AX1 as its origin. In this x'y' plane coordinate system viewed along the centroid axis AX1, the x' direction is the major-axis direction of the roughly elliptical cross section of the torso region BR1 (that is, the lateral direction of the human body model), and the y' direction is the minor-axis direction of that cross section (that is, the front-back direction of the model).
As shown in Figs. 32a and 32b, five calculation points (a1, b1, c1, d1, e1) are preferably extracted to compute the hip dimension data. Four of them (a1, b1, c1, d1) are extracted individually from the sets of mesh vertices lying in the quadrants of the x'y' plane viewed along the centroid axis AX1. Four calculation points may be extracted, one from each quadrant, or a total of three calculation points may be extracted individually from any three quadrants. The following assumes that a total of four calculation points are extracted.
Specifically, a restriction region LR1 is provided in which the major-axis coordinate satisfies -k1 ≤ x' ≤ k1, and the four calculation points a1 to d1 are extracted individually from the mesh vertices lying inside LR1 in each quadrant. The value of k1 is preferably, for example, about one quarter of the major-axis length of the cross section of the torso region BR1.
More specifically, in each quadrant, the vertex farthest from the origin (centroid axis AX1) in the x'-axis direction, i.e. closest to the boundary of the restriction region LR1, is extracted as the calculation point. That is, in the first and fourth quadrants the vertices a1, d1 with the maximum x' value inside LR1 are extracted, and in the second and third quadrants the vertices b1, c1 with the minimum x' value inside LR1 are extracted.
In addition, the hip usually bulges toward the front and the back in the human body model. Therefore, besides the four calculation points (a1 to d1), a vertex located at the front bulge is preferably extracted as the remaining calculation point e1. Specifically, e1 is extracted as the vertex farthest from the origin in the y'-axis (minor-axis) direction (here, the vertex with the minimum y' value).
Extracting the calculation points for the hip in accordance with the hip shape of the human body model in this way further improves the accuracy of dimension measurement. The extraction of such an additional calculation point e1 is optional and need not always be performed.
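A sketch of this hip extraction under the assumptions above (vertices already expressed in the x'y'z' system with z' along the centroid axis; function and parameter names are illustrative):

```python
import numpy as np

def hip_calc_points(torso_vertices, k1):
    """Five hip calculation points: per quadrant, the vertex farthest from
    the centroid axis along x' inside the restriction region |x'| <= k1,
    plus the front-bulge vertex e1 (minimum y')."""
    x, y = torso_vertices[:, 0], torso_vertices[:, 1]
    in_band = np.abs(x) <= k1
    points = []
    for sx, sy in [(1, 1), (-1, 1), (-1, -1), (1, -1)]:   # quadrants 1-4
        q = in_band & (sx * x > 0) & (sy * y > 0)
        if q.any():
            points.append(torso_vertices[q][np.argmax(np.abs(x[q]))])
    points.append(torso_vertices[np.argmin(y)])           # e1: front bulge
    return np.stack(points)
```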
(6-3-3) Waist
Fig. 33a is a schematic plan view showing an example of extracting five calculation points for computing the waist dimension data from the set of mesh vertices forming the tubular torso region BR shown in Fig. 29; Fig. 33b is a schematic perspective view of the same.
The x'y'z' coordinate system of Figs. 33a and 33b is defined as in Figs. 32a and 32b; here, the centroid axis AX2 of the torso region BR2 is the origin of the x'y' plane.
As shown in Figs. 33a and 33b, five calculation points (a2, b2, c2, d2, e2) are preferably extracted to compute the waist dimension data. Four of them (a2, b2, c2, d2) are extracted individually from the sets of mesh vertices lying in the quadrants of the x'y' plane viewed along the centroid axis AX2. Four points may be extracted, one from each quadrant, or a total of three points from any three quadrants; the following assumes four.
For the waist in particular, it is preferable first to select a reference vertex std as a reference point and then to extract the vertices associated with the reference point std individually as the four calculation points a2, b2, c2, d2. The waist is generally known to bulge toward the front of the human body model (here, the negative y' direction), and the set of mesh vertices is likewise known to be biased toward the bulge. Indeed, in Figs. 33a and 33b the center of the roughly elliptical cross section of the torso region BR2 is shifted from the origin, i.e. the centroid axis AX2, in the positive y' direction.
The calculation points should therefore be extracted with this waist bulge taken into account, and the reference point std is the vertex identified at the bulge. The reference point std is preferably extracted as the vertex farthest from the origin (centroid axis AX2) in the frontal direction of the model (here, the vertex with the minimum y' value). In the waist example, however, the reference point std itself need not be adopted as a calculation point.
After the reference point std has been extracted, the vertices close to the position of std in the z'-axis (centroid axis AX2) direction are extracted individually in each quadrant as the calculation points a2, b2, c2, d2. That is, a2 to d2 are vertices whose z' values are close to that of std, lying at roughly the same height. For a part such as the waist, whose dimension should be measured along a horizontal plane, extracting vertices of nearly equal height further improves the dimensional accuracy.
In the waist example, a restriction region LR2 is further provided in which the major-axis coordinate satisfies -k2 ≤ x' ≤ k2, and an arbitrary vertex on the side of the origin opposite the reference point std along the minor (y') axis (here, the positive y' direction) is additionally extracted as the remaining calculation point e2. The value of k2 is preferably, for example, about one quarter of the major-axis length of the cross section of the torso region BR2.
The fifth calculation point e2 is extracted in addition to the four points a2 to d2 because, as noted above, the center of gravity is biased toward the front in the case of the waist: the set of mesh vertices forming the torso region BR2 is distributed with a bias from the origin (centroid axis AX2) toward the -y' region. Additionally extracting a vertex on the side opposite the reference point std with this in mind not only avoids dimensional error but further improves the dimensional accuracy.
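A corresponding sketch for the waist, under the same coordinate assumptions; choosing e2 as the back-side vertex whose height is nearest the reference point is an illustrative reading of "an arbitrary vertex":

```python
import numpy as np

def waist_calc_points(torso_vertices, k2):
    """Waist variant: take the front-bulge vertex as the reference point
    std, then per quadrant pick the vertex whose height is closest to it;
    e2 is a back-side vertex inside the band |x'| <= k2."""
    x, y, z = torso_vertices.T
    std = torso_vertices[np.argmin(y)]            # reference point (min y')
    points = []
    for sx, sy in [(1, 1), (-1, 1), (-1, -1), (1, -1)]:
        q = (sx * x > 0) & (sy * y > 0)
        if q.any():
            points.append(torso_vertices[q][np.argmin(np.abs(z[q] - std[2]))])
    back = (np.abs(x) <= k2) & (y > 0)            # side opposite to std
    if back.any():
        points.append(torso_vertices[back][np.argmin(np.abs(z[back] - std[2]))])
    return np.stack(points)
```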
(6-3-4) Wrist Circumference
Fig. 34a is a schematic plan view showing an example of extracting four calculation points for computing the wrist circumference dimension data from the set of mesh vertices forming the tubular arm region ARr or ARl shown in Fig. 29; Fig. 34b is a schematic perspective view of the same. Either ARr or ARl may be the target of the dimension measurement (hereinafter collectively referred to as the arm region AR).
In Figs. 34a and 34b, an x'y'z' coordinate system is defined in which the centroid axis AX3 of the arm region AR along the height direction is the z' axis, and the x'y' plane is orthogonal to the z' axis with the centroid axis AX3 as its origin. In this x'y' plane coordinate system, the x' direction is the major-axis direction of the roughly elliptical cross section of the arm region AR (the lateral direction of the human body model), and the y' direction is the minor-axis direction of that cross section (the frontal direction of the model).
As shown in Figs. 34a and 34b, four calculation points (a3, b3, c3, d3) are preferably extracted to compute the wrist circumference dimension data. The region around the wrist can be assumed to be the most constricted part of the arm region.
That is, for the wrist circumference, the arm region AR is first scanned along the centroid axis, and the constricted part is identified by extracting the cross-sectional region cs whose major-axis length and/or minor-axis length is minimal. The major-axis length alone, the minor-axis length alone, or both may be used. The cross-sectional region cs may be defined as a planar region or as a solid region with a thickness in the z'-axis direction; it may also be defined as the part corresponding to the back of the hand.
Next, from the set of mesh vertices, the two vertices farthest apart in the major-axis direction and the two farthest apart in the minor-axis direction are extracted as the four calculation points (a3, b3, c3, d3). More specifically, within the cross-sectional region cs, the vertex nearest in the z'-axis direction at the position farthest in the positive x' direction (here, the maximum x' value) is extracted as the calculation point a3, and the vertex nearest in the z'-axis direction at the position farthest in the negative x' direction (the minimum x' value) as the calculation point c3. Similarly, the vertex at the position farthest in the positive y' direction (the maximum y' value) is extracted as the calculation point b3, and the vertex at the position farthest in the negative y' direction (the minimum y' value) as the calculation point d3.
In this way, the arm region is scanned along the centroid axis, the portion whose major-axis and/or minor-axis length is minimal is presumed to be the part in question, and the four calculation points (a3, b3, c3, d3) are extracted, which allows the wrist circumference dimension data to be calculated efficiently. In particular, since the wrist circumference is short compared with the hip or waist, extracting four calculation points by this method suffices, and a fifth point is not necessarily required.
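A sketch of this scan-and-extract procedure, assuming the arm vertices are expressed with z' along the centroid axis and using the summed x'/y' extents as the "width" being minimized (one possible reading of "major-axis and/or minor-axis length"):

```python
import numpy as np

def wrist_calc_points(arm_vertices, n_slices=50):
    """Scan the arm region along its centroid (z') axis, keep the slice
    with the smallest combined x'/y' extent (the wrist), then take its
    four extreme vertices in the +x', +y', -x', -y' directions."""
    z = arm_vertices[:, 2]
    edges = np.linspace(z.min(), z.max(), n_slices + 1)
    wrist, best_width = None, np.inf
    for lo, hi in zip(edges[:-1], edges[1:]):
        s = arm_vertices[(z >= lo) & (z < hi)]
        if len(s) < 4:                       # too sparse to measure
            continue
        width = np.ptp(s[:, 0]) + np.ptp(s[:, 1])
        if width < best_width:
            wrist, best_width = s, width
    return np.stack([wrist[np.argmax(wrist[:, 0])],   # a3: max x'
                     wrist[np.argmax(wrist[:, 1])],   # b3: max y'
                     wrist[np.argmin(wrist[:, 0])],   # c3: min x'
                     wrist[np.argmin(wrist[:, 1])]])  # d3: min y'
```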
(6-3-5) Armhole
Fig. 35a is a schematic plan view showing an example of extracting four calculation points for computing the armhole dimension data from the set of mesh vertices forming the tubular shoulder region SR, which is defined based on the tubular torso region BR shown in Fig. 29; Fig. 35b is a schematic perspective view of the same.
In the armhole example, from the set of mesh vertices forming the torso region, the x' value of the vertex farthest from the origin in the x'-axis direction is first determined in an x'y'z' coordinate system (not shown) defined as in Fig. 32a. Then, from the whole set of mesh vertices, the set of vertices having that x' value is extracted.
Within this vertex set, the vertex farthest (highest) in the positive z'-axis (centroid-axis) direction is defined as the shoulder vertex, and a predetermined range is cut out in the x', y', and z' directions with the shoulder vertex as the reference. The tubular shoulder region SR is then constructed using the set of mesh vertices lying within the cut-out range.
In Figs. 35a and 35b, an x''y''z'' coordinate system is defined in which the centroid axis AX4 of the shoulder region SR is the x'' axis, and the y''z'' plane is orthogonal to the x'' axis with the centroid axis AX4 as its origin. In this y''z'' plane coordinate system viewed along the centroid axis AX4, the y'' direction is the minor-axis direction of the roughly elliptical cross section of the shoulder region SR (that is, the front-back direction of the human body model), and the z'' direction is the major-axis direction of that cross section (that is, the height direction of the model).
As shown in Figs. 35a and 35b, four calculation points (a4, b4, c4, e4) are preferably extracted to compute the armhole dimension data. Three of them (a4, b4, c4) are preferably the vertices farthest from the centroid axis AX4 in the positive and negative minor-axis directions and in the positive major-axis (z'') direction (that is, vertically upward) in the y''z'' plane.
More specifically, the vertex farthest from the centroid axis AX4 in the positive minor-axis (y'') direction (here, the maximum y'' value) is extracted as the calculation point a4, and likewise the vertex farthest in the negative minor-axis direction (the minimum y'' value) as the calculation point c4. The vertex farthest from the centroid axis AX4 in the positive major-axis (z'') direction, i.e. vertically upward (the maximum z'' value), is extracted as the calculation point b4.
For the remaining calculation point e4, a restriction region LR4 is provided in the negative z'' direction (that is, vertically downward) by limiting the major-axis coordinate to a predetermined range (z'' ≤ -j), and the vertex close to the boundary within LR4 (here, the vertex with the maximum z'' value inside LR4) is extracted.
Here, the restriction region LR4 is defined according to the armpit region of the human body model; for example, although not limited to this, LR4 may be the lower quarter of the armpit region in the vertical direction. The armpit region (not shown) is preferably constructed by cutting out a predetermined range in the x'', y'', and z'' directions with the armpit vertex as the reference and using the set of mesh vertices within that range. The armpit vertex is preferably defined as the vertex at the height (z value) at which the torso region BR and the arm region AR can no longer be distinguished in the classification shown in Fig. 31. To elaborate, since the armpit portion of the shoulder region SR is convex upward, when scanning in the x''-axis direction the armpit vertex corresponds to the height at which the torso region BR and arm region AR become indistinguishable.
In the armhole example, the four calculation points are thus defined using the shoulder region SR, which is itself defined based on the torso region. In particular, setting the restriction region LR4 based on the armpit region allows the dimension data to be calculated efficiently and with high accuracy.
(6-3-6) Others
Examples of calculation point extraction for the hip, waist, wrist circumference, and armhole of the human body model have been described; by applying these, calculation points for still other parts can also be extracted. For example, three to five calculation points applicable to tubular regions such as the "chest circumference" and the "upper arm" can be extracted. Provided that a calculation point extraction method has been defined in advance for the tubular region associated with a given part, the dimension data of that part can thus be calculated efficiently and with high accuracy.
Although the extraction of calculation points for computing the circumferential length of a tubular region has been described, the technique is not limited to this and can also be applied to calculating dimension data other than that of tubular regions. For example, the length of the part "sleeve length" can be computed based on the information of the back-of-hand vertex identified during the extraction of the wrist circumference calculation points and the shoulder vertex identified during the extraction of the armhole calculation points. Likewise, the length of the part "shoulder width" can be computed based on the information of the left and right shoulder vertices.
(6-4) Features of the Calculation Unit of the Dimension Data Calculation Device
(6-4-1)
As described above, the dimension data calculation device 6020 according to this embodiment includes the shape parameter acquisition unit 6024D and the calculation unit 6024E. The shape parameter acquisition unit 6024D acquires the values of the shape parameters of the object. The calculation unit 6024E constructs the three-dimensional mesh data of the object from the shape parameter values and calculates the dimension data of a predetermined part based on the information of the vertices of the mesh data constituting the part region associated with that part.
With this dimension data calculation device 6020, calculating the dimension data of a predetermined part based on the vertex information of the mesh data constituting the part region improves the accuracy of the dimension data calculated for the part to be measured.
(6-4-2)
In the dimension data calculation device 6020, the calculation unit 6024E selectively extracts, in accordance with the predetermined part, calculation points partially associated with the part region from the set of mesh vertices, and calculates the dimension data based on those calculation points. Performing this selective extraction of calculation points effectively makes the calculation of the dimension data more efficient.
(6-4-3)
Furthermore, in the dimension data calculation device 6020, the part region is a tubular region, and the dimension data is calculated by computing the circumferential length of the tubular region based on the calculation points. This makes the calculation of dimension data for tubular shapes both more accurate and more efficient.
(6-4-4)
In addition, the circumferential length is calculated as the sum of the distances obtained by connecting adjacent calculation points with line segments along the circumference of the tubular region. This further improves both the accuracy and the efficiency of calculating dimension data for tubular shapes.
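A minimal Python sketch of this perimeter computation (the function name and the assumption that the points are already ordered around the circumference are illustrative; the patent does not prescribe an implementation):

```python
import numpy as np

def circumference(points: np.ndarray) -> float:
    # points: (N, 3) array of calculation points ordered along the
    # circumference of the tubular region; the last point connects
    # back to the first to close the loop.
    diffs = np.roll(points, -1, axis=0) - points
    # Sum of the lengths of the line segments between adjacent points.
    return float(np.linalg.norm(diffs, axis=1).sum())

# Example: 100 points on a circle of radius 0.15 m approximate the
# true circumference 2 * pi * 0.15 ≈ 0.9425 m.
theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
circle = np.stack([0.15 * np.cos(theta), 0.15 * np.sin(theta),
                   np.zeros_like(theta)], axis=1)
print(circumference(circle))
```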
(6-4-5)
The dimension data calculation method according to this embodiment includes: a step of acquiring the values of the shape parameters of the object (S6010); a step of constructing three-dimensional mesh data of the object from the values of the shape parameters of the object (S6020); a step of constructing, from the information on the set of vertices of the constructed three-dimensional mesh data, a predetermined part region associated in advance with a predetermined part (S6030); a step of selectively extracting calculation points associated with the part region (S6040); and a step of calculating the dimension data based on the extracted calculation points (S6050).
According to this dimension data calculation method, calculating the dimension data of a predetermined part based on the information of the vertices of the three-dimensional mesh data forming the part region improves the accuracy of the dimension data calculated for the part to be measured.
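A hedged end-to-end sketch of steps S6020 to S6050 follows, assuming a linear shape space (mean shape plus a basis matrix times the parameter vector) and precomputed vertex indices for the part region; every name and interface here is an illustrative assumption, not the patent's API:

```python
import numpy as np

def calculate_dimension(shape_params: np.ndarray,   # S6010: acquired values
                        mean_shape: np.ndarray,     # (3V,) mean vertex data
                        basis: np.ndarray,          # (3V, P) shape basis
                        part_vertex_ids: np.ndarray) -> float:
    # S6020: construct 3D mesh vertices from the shape parameter values,
    # modeled here as a linear shape space.
    vertices = (mean_shape + basis @ shape_params).reshape(-1, 3)
    # S6030: gather the part region associated in advance with the
    # predetermined part (e.g. the wrist) via stored vertex indices.
    region = vertices[part_vertex_ids]
    # S6040: selectively extract calculation points; this sketch keeps
    # every region vertex, assumed ordered around the circumference.
    calc_points = region
    # S6050: dimension data as the circumferential length of the points.
    diffs = np.roll(calc_points, -1, axis=0) - calc_points
    return float(np.linalg.norm(diffs, axis=1).sum())
```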
<Other Embodiment 3: Terminal Device>
FIG. 36 is a schematic diagram showing the configuration of a terminal device 7020 according to another embodiment.
The terminal device 7020 has the functions of the terminal device 1010 according to the first embodiment, the terminal device 2010S according to the second embodiment, the terminal device 3010 according to the third embodiment, or the terminal device 4010S according to the fourth embodiment. It can also be connected to each of the above-described dimension data calculation devices 1020, 2120, 3020, 4120, and 6020. Furthermore, the terminal device 7020 is not limited to dimension data calculation devices and can be connected to any information processing device that processes information about an object 7007 from image data in which the object 7007 is captured.
The terminal device 7020 includes an acquisition unit 7011, a communication unit 7012, a processing unit 7013, and an input/output unit 7014.
The acquisition unit 7011 acquires image data in which the object 7007 is captured. For example, the acquisition unit 7011 is constituted by an arbitrary monocular camera. The data acquired by the acquisition unit 7011 is processed by the processing unit 7013.
The communication unit 7012 is realized by a network interface such as an arbitrary network card and enables wired or wireless communication with communication devices on the network.
The processing unit 7013 is realized by a processor such as a CPU (Central Processing Unit) and/or a GPU (Graphics Processing Unit) together with memory, and executes various information processing by loading a program. Here, the processing unit (determination unit) 7013 determines whether the object included in the image data (that is, appearing in the image data) is an object registered in advance. Specifically, the processing unit 7013 makes this determination using an "object identification model" that identifies, for each pixel, whether the pixel belongs to a predetermined object.
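One possible reading of this per-pixel determination, as a sketch (the `segment` callable stands in for the object identification model, and the coverage threshold is an assumed decision rule; the patent specifies neither):

```python
import numpy as np

def is_registered_object(image: np.ndarray, segment,
                         target_class: int,
                         min_coverage: float = 0.05) -> bool:
    # Per-pixel class labels of shape (H, W); `segment` stands in for a
    # lightweight semantic segmentation model running on the terminal.
    labels = segment(image)
    mask = labels == target_class
    # Judge that the registered object is present when the matching
    # pixels cover at least a minimum fraction of the image.
    return bool(mask.mean() >= min_coverage)
```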
The input/output unit 7014 receives input of various information to the terminal device 7020 and outputs various information from the terminal device 7020. For example, the input/output unit 7014 is realized by an arbitrary touch panel. The input/output unit (reception unit) 7014 shows the determination result by the processing unit 7013 on the screen (output unit) of the input/output unit 7014. The input/output unit 7014 also superimposes determination image data, obtained from the per-pixel identification results of the object identification model, on the image data acquired by the acquisition unit 7011 and shows the result on the screen.
For example, as shown in FIG. 37, the determination image data for the object 7007 (here, a person) is displayed superimposed on the image data of the region BG other than the object (the hatched portion in FIG. 37). A technique of drawing a superimposed translucent image, such as alpha blending, can be used for the superimposition. The input/output unit (reception unit) 7014 also receives an input as to whether or not to transmit the image data to the information processing device (for example, the dimension data calculation device 1020, 2120, 3020, 4120, or 6020). This allows the user to visually check the superimposed image, confirm that no object likely to induce misrecognition appears, and only then transmit the image data to the dimension data calculation device.
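A minimal sketch of such an alpha-blended overlay (NumPy-based; the highlight color and opacity are illustrative choices):

```python
import numpy as np

def alpha_blend(image: np.ndarray, background_mask: np.ndarray,
                color=(255, 0, 0), alpha: float = 0.4) -> np.ndarray:
    # image: (H, W, 3) uint8 camera frame; background_mask: (H, W)
    # boolean array, True where the pixel is NOT the object.
    out = image.astype(np.float32)
    overlay = np.array(color, dtype=np.float32)
    # Standard alpha blend: out = (1 - alpha) * source + alpha * overlay.
    out[background_mask] = ((1.0 - alpha) * out[background_mask]
                            + alpha * overlay)
    return out.astype(np.uint8)
```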
FIG. 38 is a sequence diagram between the terminal device 7020 and the information processing device (for example, the dimension data calculation device 1020, 2120, 3020, 4120, or 6020) for explaining the operation of the terminal device 7020.
First, image data of the object 7007 is acquired via the terminal device 7020 by a user operation (V1). Next, the terminal device 7020 determines, using the object identification model, whether the object 7007 included in the image data is an object registered in advance, and outputs the determination result on the screen constituting the input/output unit 7014 (V2). For example, a screen such as that shown in FIG. 37 is displayed as the determination result.
Next, whether or not to transmit the acquired image data to the dimension data calculation device is input via the terminal device 7020 by a user operation (V3). When the terminal device 7020 receives an input permitting transmission from the input/output unit 7014, it transmits the acquired image data to the dimension data calculation device through the communication unit 7012 (V3-Yes, V4).
Then, the dimension data calculation device that has received the image data calculates the dimension data of the object 7007 using the image data transmitted from the terminal device 7020 (V5, V6).
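The terminal-side portion of this sequence (V1 to V4) might be organized as follows; `camera`, `segment`, `display`, and `uplink` are hypothetical stand-in interfaces, since the patent defines the behavior rather than an API:

```python
def capture_and_send(camera, segment, display, uplink,
                     target_class: int = 1) -> bool:
    image = camera.capture()                        # V1: acquire image data
    labels = segment(image)                         # V2: per-pixel identification
    display.show_overlay(image, labels != target_class)  # V2: show judgment
    if display.ask_user("Send this image?"):        # V3: accept send/no-send input
        uplink.send(image)                          # V4: transmit the image data
        return True
    return False
```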
As described above, the terminal device 7020 outputs the determination result as to whether the object included in the image data is an object registered in advance, and receives an input as to whether or not to transmit the image data to the dimension data calculation device. It is therefore possible to provide a terminal device that can shorten the user's operation time.
To supplement, the dimension data calculation devices 1020, 2120, 3020, 4120, and 6020 receive image data and generate a silhouette image by separating the background through segmentation. The dimension data calculation device then calculates, for example, the dimension data of each part of the human body. With such a device alone, the reliability of the calculation result can be confirmed only after the image data has been transmitted from the terminal device 7020 and the information processing of the image data by the dimension data calculation device has been completed. If the reliability of the calculation result is low, the user must acquire the image data again using the terminal device 7020. In contrast, when the terminal device 7020 described above is used, the user can be prompted to confirm the validity of the image data before it is transmitted, so the time required to obtain a highly reliable calculation result from the dimension data calculation device can be shortened in some cases.
For example, even in an environment with a wide variety of colors, such as a store, or an environment in which a mannequin or the like may be misrecognized as a human body, the user of the terminal device 7020 can proceed while predicting and confirming that the generation of the silhouette image by the dimension data calculation device will succeed, which may shorten the operation time.
Note that the object identification model installed in the terminal device 7020 is required to execute segmentation at high speed. Therefore, a model that can perform inference quickly is preferable, even at some cost in segmentation accuracy. In short, by providing segmentation both on the dimension data calculation device side and on the terminal device 7020 side, it is possible to achieve both the generation of a precise silhouette image and the removal of unnecessary objects.
The present disclosure is not limited to the above embodiments as they are. At the implementation stage, the present disclosure can be embodied by modifying the constituent elements within a range not departing from its gist. Further, various disclosures can be formed by appropriately combining the plurality of constituent elements disclosed in the above embodiments. For example, some constituent elements may be deleted from all the constituent elements shown in an embodiment, and constituent elements from different embodiments may be combined as appropriate.
According to the present specification, the configurations of the following aspects are also disclosed.
The dimension data calculation device according to the first aspect includes an acquisition unit, an extraction unit, a conversion unit, and a calculation unit. The acquisition unit acquires image data in which an object is captured and full-length data of the object. The extraction unit extracts shape data indicating the shape of the object from the image data. The conversion unit converts the shape data based on the full-length data. The calculation unit reduces the dimensionality of the shape data converted by the conversion unit and calculates the dimension data of each part of the object using the reduced values of each dimension and weighting coefficients optimized for each part of the object.
The dimension data calculation device according to the second aspect is the dimension data calculation device according to the first aspect, in which the acquisition unit acquires a plurality of pieces of image data obtained by photographing the object from different directions.
The dimension data calculation device according to the third aspect is the dimension data calculation device according to the first or second aspect, in which the calculation unit performs a first dimensionality reduction on the shape data converted by the conversion unit. The calculation unit either linearly combines the values of each dimension obtained by the first dimensionality reduction with weighting coefficients optimized for each part of the object to obtain a predetermined value, or generates quadratic features from the values of each dimension obtained by the first dimensionality reduction and combines the quadratic features with weighting coefficients optimized for each part of the object to obtain the predetermined value. The calculation unit performs a second dimensionality reduction using the predetermined value and attribute data including at least length and weight attributes of the object, and calculates the dimension data of each part of the object based on the values of each dimension obtained by the second dimensionality reduction.
The dimension data calculation device according to the fourth aspect is the dimension data calculation device according to any of the first to third aspects, in which the extraction unit extracts the shape data of the object by extracting the object region included in the image data using a semantic segmentation algorithm constructed using teacher data prepared for each type of object.
The dimension data calculation device according to the fifth aspect is the dimension data calculation device according to the fourth aspect, in which the extraction unit extracts the shape data of the object from the object region by a GrabCut algorithm.
The dimension data calculation device according to the sixth aspect is the dimension data calculation device according to the fifth aspect, in which the extraction unit corrects the image of the object extracted by the GrabCut algorithm based on a color image of a specific portion in the image data to generate new shape data.
The dimension data calculation device according to the seventh aspect is the dimension data calculation device according to any of the first to sixth aspects, in which the object is a person.
The product manufacturing device according to the eighth aspect manufactures a product related to the shape of the object using the dimension data calculated using the dimension data calculation device according to any of the first to seventh aspects.
The dimension data calculation program according to the ninth aspect causes a computer to function as an acquisition unit, an extraction unit, a conversion unit, and a calculation unit. The acquisition unit acquires image data in which an object is captured and full-length data of the object. The extraction unit extracts shape data indicating the shape of the object from the image data. The conversion unit converts the shape data based on the full-length data. The calculation unit calculates the dimension data of each part of the object using the shape data converted by the conversion unit.
The dimension data calculation method according to the tenth aspect acquires image data in which an object is captured and full-length data of the object. Next, shape data indicating the shape of the object is extracted from the image data. Next, the shape data is converted based on the full-length data. Then, the dimension data of each part of the object is calculated using the converted shape data.
The product manufacturing system according to the eleventh aspect includes an acquisition unit, an extraction unit, a conversion unit, a calculation unit, and a product manufacturing device. The acquisition unit acquires image data of an object together with full-length data of the object from a photographing device that captures a plurality of images of the object. The extraction unit extracts shape data indicating the shape of the object from the image data. The conversion unit converts the shape data based on the full-length data. The calculation unit calculates the dimension data of each part of the object using the shape data converted by the conversion unit. The product manufacturing device manufactures a product related to the shape of the object using the dimension data calculated by the calculation unit.
The information processing device according to the twelfth aspect includes a reception unit and an estimation unit. The reception unit receives a silhouette image of an object. The estimation unit estimates the values of the shape parameters of the object from the received silhouette image using an object engine that associates silhouette images of a sample object with the values of a predetermined number of shape parameters associated with the sample object. The estimated values of the shape parameters of the object are associated with dimension data related to an arbitrary part of the object.
The information processing device according to the thirteenth aspect is the information processing device according to the twelfth aspect, in which the predetermined number of shape parameters associated with the sample object are obtained by reducing the dimensionality of the three-dimensional data of the sample object.
The information processing device according to the fourteenth aspect is the information processing device according to the thirteenth aspect, in which the dimensionality reduction is performed by principal component analysis. The three-dimensional data of the object is calculated from the estimated values of the predetermined number of shape parameters by an inverse transformation of the projection related to the principal component analysis, and the three-dimensional data is associated with the dimension data.
The information processing device according to the fifteenth aspect is the information processing device according to the fourteenth aspect, in which a predetermined number of principal components from the second rank onward, excluding the first-rank principal component, are selected as the shape parameters.
The information processing device according to the sixteenth aspect is the information processing device according to the fifteenth aspect, in which the object is a person and the first-rank principal component is associated with the height of the person.
The information processing device according to the seventeenth aspect is the information processing device according to any of the twelfth to sixteenth aspects, in which the silhouette image of the sample object is a projection image, in a predetermined direction, of a three-dimensional object constructed from the three-dimensional data of the sample object.
The information processing device according to the eighteenth aspect is the information processing device according to any of the twelfth to seventeenth aspects, in which the object engine is generated by learning a relationship between silhouette images of the sample object and the values of the predetermined number of shape parameters associated with the sample object.
The information processing device according to the nineteenth aspect is the information processing device according to any of the twelfth to eighteenth aspects, further including a calculation unit. The calculation unit constructs three-dimensional data of a plurality of vertices of the object from the estimated values of the shape parameters of the object, and calculates dimension data between any two of the vertices of the object based on the constructed three-dimensional data.
The information processing device according to the twentieth aspect is the information processing device according to the nineteenth aspect, in which, in the calculation unit, the dimension data between the two vertices is calculated along a curved surface of a three-dimensional object constructed from the three-dimensional data of the plurality of vertices of the object.
The information processing device according to the twenty-first aspect is the information processing device according to any of the twelfth to twentieth aspects, in which the silhouette image of the object is generated by separating an image of the object from images other than the object based on depth data obtained using a depth data measuring device.
The information processing device according to the twenty-second aspect is the information processing device according to the twenty-first aspect, in which the depth data measuring device is a stereo camera.
The information processing method according to the twenty-third aspect learns a relationship between silhouette images of a sample object and the values of a predetermined number of shape parameters associated with the sample object to generate an object engine. Next, a silhouette image of an object is received. Next, the values of the shape parameters of the object are estimated from the received silhouette image using the object engine. Then, dimension data related to a part of the object is calculated based on the values of the shape parameters of the object.
The information processing device according to the twenty-fourth aspect includes a reception unit and an estimation unit. The reception unit receives attribute data of an object. The estimation unit estimates the values of the shape parameters of the object from the received attribute data using an object engine that associates attribute data of a sample object with the values of a predetermined number of shape parameters associated with the sample object. The estimated values of the shape parameters of the object are associated with dimension data related to an arbitrary part of the object.
The information processing device according to the twenty-fifth aspect is the information processing device according to the twenty-fourth aspect, in which the predetermined number of shape parameters associated with the sample object are obtained by reducing the dimensionality of the three-dimensional data of the sample object.
The information processing device according to the twenty-sixth aspect is the information processing device according to the twenty-fifth aspect, in which the dimensionality reduction is performed by principal component analysis, and a predetermined number of principal components from the second rank onward, excluding the first-rank principal component, are selected as the shape parameters.
The information processing device according to the twenty-seventh aspect is the information processing device according to the twenty-sixth aspect, in which the object is a person, the first-rank principal component is associated with the height of the person, and the attribute data includes height data of the object.
The information processing device according to the twenty-eighth aspect is the information processing device according to any of the twenty-fourth to twenty-seventh aspects, in which the object engine is generated by learning a relationship between the attribute data of the sample object and the values of the predetermined number of shape parameters associated with the sample object.
The information processing device according to the twenty-ninth aspect is the information processing device according to any of the twenty-fourth to twenty-eighth aspects, further including a calculation unit. The calculation unit constructs three-dimensional data of a plurality of vertices of the object from the estimated values of the shape parameters of the object, and calculates dimension data between any two of the vertices of the object based on the constructed three-dimensional data.
The information processing device according to the thirtieth aspect is the information processing device according to the twenty-ninth aspect, in which, in the calculation unit, the dimension data between the two vertices is calculated along a curved surface of a three-dimensional object constructed from the three-dimensional data of the plurality of vertices of the object.
The information processing method according to the thirty-first aspect learns a relationship between attribute data of a sample object and the values of a predetermined number of shape parameters associated with the sample object to generate an object engine. Next, attribute data of an object is received. Next, the values of the shape parameters of the object are estimated from the received attribute data. Then, dimension data related to a part of the object is calculated based on the values of the shape parameters of the object.
The dimension data calculation device according to the thirty-second aspect includes an acquisition unit, an extraction unit, a conversion unit, an estimation unit, and a calculation unit. The acquisition unit acquires image data in which an object is captured and full-length data of the object. The extraction unit extracts shape data indicating the shape of the object from the image data. The conversion unit converts the shape data into a silhouette image based on the full-length data. The estimation unit estimates the values of a predetermined number of shape parameters from the silhouette image using an object engine that associates silhouette images of a sample object with the values of a predetermined number of shape parameters associated with the sample object. The calculation unit calculates the dimension data of the object based on the estimated values of the predetermined number of shape parameters.
The dimension data calculation device according to the thirty-third aspect is the dimension data calculation device according to the thirty-second aspect, in which the predetermined number of shape parameters are obtained by reducing the dimensionality of the three-dimensional data of the sample object.
The dimension data calculation device according to the thirty-fourth aspect is the dimension data calculation device according to the thirty-third aspect, in which the dimensionality reduction is performed by principal component analysis. In the calculation unit, three-dimensional data is calculated by performing an inverse transformation on the values of the predetermined number of shape parameters based on the projection matrix related to the principal component analysis, and the dimension data is calculated from the three-dimensional data.
The product manufacturing device according to the thirty-fifth aspect manufactures a product related to the shape of the object using at least one piece of dimension data calculated using the dimension data calculation device according to any of the thirty-second to thirty-fourth aspects.
The dimension data calculation device according to the thirty-sixth aspect includes an acquisition unit and a calculation unit. The acquisition unit acquires attribute data including at least one of full-length data and weight data of an object. The calculation unit calculates the dimension data of each part of the object by applying polynomial regression to the attribute data using coefficients learned by machine learning.
The dimension data calculation device according to the thirty-seventh aspect is the dimension data calculation device according to the thirty-sixth aspect, in which the calculation unit calculates the dimension data of each part of the object by applying quadratic regression to the attribute data using coefficients learned by machine learning.
The dimension data calculation device according to the thirty-eighth aspect is the dimension data calculation device according to the thirty-sixth or thirty-seventh aspect, in which the object is a person.
The dimension data calculation program according to the thirty-ninth aspect causes a computer to function as an acquisition unit and a calculation unit. The acquisition unit acquires attribute data including at least one of full-length data and weight data of an object. The calculation unit calculates the dimension data of each part of the object by applying polynomial regression to the attribute data using coefficients learned by machine learning.
The dimension data calculation method according to the fortieth aspect acquires attribute data including at least one of full-length data and weight data of an object. Then, the dimension data of each part of the object is calculated by applying polynomial regression to the attribute data using coefficients learned by machine learning.
The silhouette image generation device according to the forty-first aspect includes an acquisition unit, an extraction unit, and a conversion unit. The acquisition unit acquires image data, including a depth map, in which an object is captured. The extraction unit extracts the object region of the object using three-dimensional point cloud data generated from the depth map, and extracts shape data indicating the shape of the object based on the depth data of the depth map corresponding to the object region. The conversion unit converts the shape data to generate a silhouette image of the object.
The silhouette image generation device according to the forty-second aspect is the silhouette image generation device according to the forty-first aspect, in which the conversion unit generates, as the silhouette image, a grayscale silhouette image, which is a monochrome image associated with the depth data corresponding to the object region.
The silhouette image generation device according to the forty-third aspect is the silhouette image generation device according to the forty-first or forty-second aspect, in which the extraction unit extracts the object region of the object based on the three-dimensional point cloud data from which point cloud data lying farther than a predetermined threshold distance along the depth direction has been removed.
The silhouette image generation device according to the forty-fourth aspect is the silhouette image generation device according to any of the forty-first to forty-third aspects, in which the extraction unit further estimates a plane portion in the image data from the three-dimensional point cloud data generated from the depth map. The extraction unit then extracts the object region of the object based on the three-dimensional point cloud data from which the point cloud data lying in the estimated plane portion has been removed.
The silhouette image generation device according to the forty-fifth aspect is the silhouette image generation device according to the forty-fourth aspect, in which the extraction unit estimates the plane portion based on calculating the content rate of three-dimensional point cloud data associated with sample planes sampled by random sampling.
The silhouette image generation device according to the forty-sixth aspect is the silhouette image generation device according to any of the forty-first to forty-fifth aspects, in which the object is a person and the plane portion includes a floor.
The dimension data calculation device according to the forty-seventh aspect includes an acquisition unit, an extraction unit, and a calculation unit. The acquisition unit acquires image data in which an object is captured and full-length data of the object. The extraction unit extracts shape data indicating the shape of the object from the image data. The calculation unit calculates the dimension data of each part of the object using the shape data. The image data includes a depth map, and the shape data extracted by the extraction unit is associated with the depth data of the object in the depth map.
The dimension data calculation device according to the forty-eighth aspect is the dimension data calculation device according to the forty-seventh aspect, in which the extraction unit extracts the object region of the object from the image data based on the depth map. In the object region, the shape data is associated with the depth data of the object.
The dimension data calculation device according to the forty-ninth aspect is the dimension data calculation device according to the forty-seventh or forty-eighth aspect, further including a conversion unit that converts the shape data based on the full-length data.
The dimension data calculation device according to the fiftieth aspect is the dimension data calculation device according to the forty-ninth aspect, in which the conversion unit converts the shape data into new shape data based on the full-length data and the depth data of the object region.
The dimension data calculation device according to the fifty-first aspect is the dimension data calculation device according to the forty-ninth or fiftieth aspect, in which the calculation unit reduces the dimensionality of the shape data converted by the conversion unit and calculates the dimension data of each part of the object using the reduced values of each dimension and weighting coefficients optimized for each part of the object.
The product manufacturing device according to the fifty-second aspect manufactures a product related to the shape of the object using the dimension data calculated using the dimension data calculation device according to any of the forty-seventh to fifty-first aspects.
The dimension data calculation device according to the fifty-third aspect includes an acquisition unit, an extraction unit, an estimation unit, and a calculation unit. The acquisition unit acquires image data in which an object is captured and full-length data of the object. The extraction unit extracts shape data indicating the shape of the object from the image data. The estimation unit estimates the values of a predetermined number of shape parameters from the silhouette image of the object using an object engine that associates silhouette images of a sample object with the values of a predetermined number of shape parameters associated with the sample object. The calculation unit calculates the dimension data of the object based on the estimated values of the predetermined number of shape parameters. The image data includes a depth map, and the shape data is associated with the depth data of the object in the depth map.
The dimension data calculation device according to the fifty-fourth aspect is the dimension data calculation device according to the fifty-third aspect, in which the extraction unit extracts the object region of the object from the image data based on the depth map. In the object region, the shape data is associated with the depth data of the object.
The dimension data calculation device according to the fifty-fifth aspect is the dimension data calculation device according to the fifty-third or fifty-fourth aspect, further including a conversion unit that converts the shape data into new shape data based on the depth data of the object region.
The dimension data calculation device according to the fifty-sixth aspect is the dimension data calculation device according to any of the fifty-third to fifty-fifth aspects, in which the predetermined number of shape parameters are obtained by reducing the dimensionality of the three-dimensional data of the sample object.
The product manufacturing device according to the fifty-seventh aspect manufactures a product related to the shape of the object using the dimension data calculated using the dimension data calculation device according to any of the fifty-third to fifty-sixth aspects.
The terminal device according to the fifty-eighth aspect is connected to an information processing device that processes information about an object from image data in which the object is captured. The terminal device includes an acquisition unit, a determination unit, and a reception unit. The acquisition unit acquires image data in which the object is captured. The determination unit determines whether the object included in the image data is an object registered in advance. The reception unit shows the determination result by the determination unit on an output unit and receives an input as to whether or not to transmit the image data to the information processing device.
The terminal device according to the fifty-ninth aspect is the terminal device according to the fifty-eighth aspect, in which the determination unit determines whether the object included in the image data is an object registered in advance using an object identification model that identifies, for each pixel, whether the pixel belongs to a predetermined object.
The terminal device according to the sixtieth aspect is the terminal device according to the fifty-ninth aspect, in which the reception unit shows, on the output unit, determination image data obtained from the per-pixel identification results of the object identification model, superimposed on the image data acquired by the acquisition unit.
The dimension data calculation device according to the sixty-first aspect includes a shape parameter acquisition unit and a calculation unit. The shape parameter acquisition unit acquires the values of the shape parameters of an object. The calculation unit constructs three-dimensional mesh data of the object from the values of the shape parameters of the object, and calculates the dimension data of a predetermined part based on the information of the vertices of the three-dimensional mesh data that constitute the part region associated with the predetermined part.
The dimension data calculation device according to the sixty-second aspect is the dimension data calculation device according to the sixty-first aspect, in which the calculation unit selectively extracts, according to the predetermined part, calculation points partially associated with the part region from the set of vertices of the three-dimensional mesh data, and calculates the dimension data based on each calculation point.
The dimension data calculation device according to the sixty-third aspect is the dimension data calculation device according to the sixty-second aspect, in which the part region is a tubular region, and the dimension data is calculated by computing the circumferential length of the tubular region based on the calculation points.
The dimension data calculation device according to the sixty-fourth aspect is the dimension data calculation device according to the sixty-third aspect, in which the calculation unit individually extracts the calculation points from sets of vertices lying in three or more quadrants of a coordinate system that is orthogonal to the centroid axis of the tubular region, has its origin on the centroid axis, and is defined with respect to the major-axis and minor-axis directions of the cross section of the tubular region.
The dimension data calculation device according to the sixty-fifth aspect is the dimension data calculation device according to the sixty-third aspect, in which the calculation unit scans the tubular region along the centroid-axis direction and extracts a cross-sectional region of the tubular region for which the length in the major-axis direction and/or the length in the minor-axis direction is minimized. In the cross-sectional region, the calculation unit extracts, as the calculation points, the two vertices giving the minimum length in the major-axis direction and the two vertices giving the minimum length in the minor-axis direction.
In the above description, a method based on principal component analysis was mentioned as an example of dimensionality reduction, but the dimensionality reduction method is not limited to this. For example, dimensionality reduction methods such as autoencoders, latent semantic analysis (LSA), and independent component analysis (ICA) may be adopted. When teacher data of the dimension data is available, dimensionality reduction methods such as partial least squares (PLS), canonical correlation analysis (CCA), and linear discriminant analysis (LDA) can also be adopted.
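As a minimal sketch of the principal-component-analysis variant (using scikit-learn for illustration; the patent does not prescribe a library, and the data shapes and parameter count here are assumptions), shape parameters are obtained by projecting flattened 3D vertex data onto the leading principal components, and 3D data is recovered by the inverse transformation:

```python
import numpy as np
from sklearn.decomposition import PCA

# X: (num_samples, 3 * num_vertices) flattened 3D data of sample objects;
# random placeholder data stands in for a real sample set.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3 * 500))

pca = PCA(n_components=10)      # predetermined number of shape parameters
params = pca.fit_transform(X)   # dimensionality reduction (projection)

# Inverse transformation of the projection: recover 3D data from an
# estimated parameter vector, then reshape to (num_vertices, 3).
estimated = params[0]
vertices = pca.inverse_transform(estimated[None, :]).reshape(-1, 3)
print(vertices.shape)  # (500, 3)
```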
Further, in the above description, when the object is a "person", dimension data for parts such as the neck circumference, shoulder width, chest circumference, abdominal circumference, waist, sleeve length, hips, thigh width, knee width, inseam, body length, armholes, upper arm circumference, and wrist circumference can be calculated with high accuracy.
Further, in the above description, the product manufacturing device may be one that manufactures "clothes" as the product.
Further, in the above description, when the object is a "person", the dimension data may be calculated by dividing people into certain age groups and using different parameters for each age group. Similarly, the dimension data may be calculated using different parameters for each gender.
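As a supplementary illustration of the polynomial (quadratic) regression mentioned in the thirty-sixth to fortieth aspects (a sketch under assumed data shapes; the patent does not specify an implementation), height and weight attributes can be expanded into quadratic features and combined with machine-learned coefficients:

```python
import numpy as np

def predict_dimensions(height: float, weight: float,
                       coef: np.ndarray,
                       intercept: np.ndarray) -> np.ndarray:
    # coef: (num_parts, 5) coefficients learned by machine learning,
    # one row per body part; intercept: (num_parts,) learned offsets.
    # Quadratic feature expansion of the attribute data (height, weight).
    features = np.array([height, weight,
                         height * weight, height ** 2, weight ** 2])
    return coef @ features + intercept
```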
1001 Product manufacturing system
1010 Terminal device
1020 Dimension data calculation device
1021 Storage unit
1022 Input/output unit
1023 Communication unit
1024 Processing unit
1024A Acquisition unit
1024B Extraction unit
1024C Conversion unit
1024D Calculation unit
1030 Product manufacturing device
2001S Product manufacturing system
2120 Dimension data calculation device
2121 Storage unit
2122 Input/output unit
2123 Communication unit
2124 Processing unit
2124A Acquisition unit
2124D Calculation unit
3001 Product manufacturing system
3020 Dimension data calculation device
3021 Storage unit
3022 Input/output unit
3023 Communication unit
3024 Processing unit
3024A Acquisition unit
3024B Extraction unit
3024C Conversion unit
3024D Estimation unit
3024E Calculation unit
3025 Learning device
3026 Storage unit
3027 Processing unit
3027A Preprocessing unit
3027B Learning unit
3100 Dimension data calculation system
4001S Product manufacturing system
4120 Dimension data calculation device
4121 Storage unit
4122 Input/output unit
4123 Communication unit
4124 Processing unit
4124A Acquisition unit
4124D Estimation unit
4124E Calculation unit
4125 Learning device
4126 Storage unit
4127 Processing unit
4127A Preprocessing unit
4127B Learning unit
4200 Dimension data calculation system
5020 Silhouette image generation device
5024A Acquisition unit
5024B Extraction unit
5024C Conversion unit
5124 Three-dimensional point cloud generation unit
5224 Background point cloud removal unit
5324 Plane point cloud removal unit
5424 Object region extraction unit
5524 Shape data extraction unit
5624 Object detection unit
6020 Dimension data calculation device
6024D Shape parameter acquisition unit
6024E Calculation unit
6124 Three-dimensional data construction unit
6224 Part region construction unit
6324 Calculation point extraction unit
6424 Dimension data calculation unit
7007 Object
7011 Acquisition unit
7012 Communication unit
7013 Processing unit
7014 Input/output unit
7020 Terminal device

Claims (25)

1. A dimension data calculation device comprising:
an acquisition unit that acquires image data in which an object is captured and full-length data of the object;
an extraction unit that extracts shape data indicating the shape of the object from the image data;
a conversion unit that converts the shape data based on the full-length data; and
a calculation unit that reduces the dimensionality of the shape data converted by the conversion unit and calculates the dimension data of each part of the object using the reduced values of each dimension and weighting coefficients optimized for each part of the object.
  2.  The dimension data calculation device according to claim 1, wherein the calculation unit:
     performs a first dimensionality reduction on the shape data converted by the conversion unit;
     obtains a predetermined value either by linearly combining the values of each dimension obtained by the first dimensionality reduction with weighting coefficients optimized for each part of the object, or by generating quadratic features from those values and combining the quadratic features with weighting coefficients optimized for each part of the object;
     performs a second dimensionality reduction using the predetermined value and attribute data including at least length and weight attributes of the object; and
     calculates dimension data of each part of the object based on the values of each dimension obtained by the second dimensionality reduction.
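     One way to picture the two-stage procedure of claim 2, again with PCA standing in for both reductions: the quadratic features below are formed as pairwise products of the first-stage values, and the length and weight attributes are appended before the second reduction. All names and sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
shape_data = rng.random((200, 64 * 32))
attributes = rng.random((200, 2))  # columns: length (height), weight

# First dimensionality reduction.
z1 = PCA(n_components=8).fit_transform(shape_data)           # (200, 8)

# Quadratic features: all pairwise products of first-stage values.
quad = np.einsum("ni,nj->nij", z1, z1).reshape(len(z1), -1)  # (200, 64)

# Hypothetical per-part weights combine the quadratic features into one value.
w = rng.random(quad.shape[1])
predetermined = quad @ w                                      # (200,)

# Second reduction over the predetermined value plus the attribute data.
z2 = PCA(n_components=3).fit_transform(
    np.column_stack([predetermined, attributes]))
# Dimension data would then be read off from z2 with part-specific weights.
print(z2.shape)
```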
  3.  A product manufacturing apparatus that manufactures a product related to the shape of the object using dimension data calculated by the dimension data calculation device according to claim 1 or 2.
  4.  An information processing device comprising:
     a reception unit that receives a silhouette image of an object; and
     an estimation unit that estimates values of shape parameters of the object from the received silhouette image using an object engine that associates silhouette images of sample objects with values of a predetermined number of shape parameters associated with the sample objects,
     wherein the estimated values of the shape parameters of the object are associated with dimension data related to an arbitrary part of the object.
  5.  The information processing device according to claim 4, wherein the object engine is generated by learning the relationship between the silhouette images of the sample objects and the values of the predetermined number of shape parameters associated with the sample objects.
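     The learned "object engine" of claims 4 and 5 could, for example, be a small convolutional network regressing a silhouette to a fixed number of shape parameters; the PyTorch sketch below assumes an arbitrary architecture, image size, and parameter count, none of which are specified in the claims.

```python
import torch
import torch.nn as nn

# A minimal object engine: a CNN regressing a 64x32 silhouette image to
# 10 shape parameters. Architecture and sizes are illustrative only.
engine = nn.Sequential(
    nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 16 * 8, 10),   # 10 shape parameters
)

# Training pairs: silhouettes of sample objects and their known parameters.
silhouettes = torch.rand(32, 1, 64, 32)
params = torch.rand(32, 10)

opt = torch.optim.Adam(engine.parameters(), lr=1e-3)
for _ in range(100):                      # learn the association
    loss = nn.functional.mse_loss(engine(silhouettes), params)
    opt.zero_grad()
    loss.backward()
    opt.step()

estimated = engine(torch.rand(1, 1, 64, 32))  # inference on a new silhouette
```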
  6.  The information processing device according to claim 4 or 5, further comprising a calculation unit that constructs three-dimensional data of a plurality of vertices of the object from the estimated values of the shape parameters of the object and calculates dimension data between any two vertices of the object based on the constructed three-dimensional data.
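     A sketch of the calculation unit of claim 6 under the common linear-blend assumption (a mean shape plus parameter-weighted basis deformations, as in SMPL-style body models); the basis, parameter values, and vertex indices are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_vertices, n_params = 500, 10

mean_shape = rng.random((n_vertices, 3))        # mean vertex positions
basis = rng.random((n_params, n_vertices, 3))   # per-parameter deformations
beta = rng.random(n_params)                     # estimated shape parameters

# Construct 3D data of all vertices from the shape parameter values.
vertices = mean_shape + np.tensordot(beta, basis, axes=1)  # (500, 3)

# Dimension data between any two vertices: Euclidean distance.
i, j = 12, 345                                  # e.g., shoulder and wrist
dimension = np.linalg.norm(vertices[i] - vertices[j])
print(dimension)
```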
  7.  The information processing device according to any one of claims 4 to 6, wherein the silhouette image of the object is generated by separating an image of the object from images other than the object based on depth data obtained using a depth data measuring device.
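     The depth-based separation of claim 7 can be pictured, in its simplest form, as a depth threshold: pixels whose measured depth falls inside the band assumed to contain the object are kept, everything else becomes background. The band limits below are arbitrary example values.

```python
import numpy as np

rng = np.random.default_rng(3)
depth = rng.uniform(0.3, 5.0, size=(64, 32))    # depth map in meters

# Keep pixels within the depth band assumed to contain the object;
# everything outside it (walls, floor, far background) is suppressed.
near, far = 0.5, 2.0                            # illustrative band
silhouette = ((depth > near) & (depth < far)).astype(np.uint8) * 255
```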
  8.  An information processing device comprising:
     a reception unit that receives attribute data of an object; and
     an estimation unit that estimates values of shape parameters of the object from the received attribute data using an object engine that associates attribute data of sample objects with values of a predetermined number of shape parameters associated with the sample objects,
     wherein the estimated values of the shape parameters of the object are associated with dimension data related to an arbitrary part of the object.
  9.  The information processing device according to claim 8, wherein the object engine is generated by learning the relationship between the attribute data of the sample objects and the values of the predetermined number of shape parameters associated with the sample objects.
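     The attribute-data variant of the object engine (claims 8 and 9) can be sketched as an ordinary regression from (length, weight) pairs to shape parameter vectors; a linear model and synthetic data are used here purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
attrs = rng.random((100, 2))          # sample objects: (length, weight)
params = rng.random((100, 10))        # their known shape parameters

engine = LinearRegression().fit(attrs, params)   # learn the association
estimated = engine.predict([[1.72, 68.0]])       # a new object's attributes
```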
  10.  The information processing device according to claim 8 or 9, further comprising a calculation unit that constructs three-dimensional data of a plurality of vertices of the object from the estimated values of the shape parameters of the object and calculates dimension data between any two vertices of the object based on the constructed three-dimensional data.
  11.  A dimension data calculation device comprising:
     an acquisition unit that acquires image data in which an object is captured and full-length data of the object;
     an extraction unit that extracts shape data indicating the shape of the object from the image data;
     a conversion unit that converts the shape data into a silhouette image based on the full-length data;
     an estimation unit that estimates values of a predetermined number of shape parameters from the silhouette image using an object engine that associates silhouette images of sample objects with values of a predetermined number of shape parameters associated with the sample objects; and
     a calculation unit that calculates dimension data of the object based on the estimated values of the predetermined number of shape parameters.
  12.  A product manufacturing apparatus that manufactures a product related to the shape of the object using at least one piece of dimension data calculated by the dimension data calculation device according to claim 11.
  13.  A dimension data calculation device comprising:
     an acquisition unit that acquires attribute data including at least one of full-length data and weight data of an object; and
     a calculation unit that calculates dimension data of each part of the object by applying polynomial regression to the attribute data using coefficients learned by machine learning.
  14.  The dimension data calculation device according to claim 13, wherein the calculation unit calculates the dimension data of each part of the object by applying quadratic regression to the attribute data using coefficients learned by machine learning.
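     Claims 13 and 14 describe plain polynomial regression; below is a sketch of the quadratic case, with scikit-learn generating the squared and cross terms. The data, target formula, and units are synthetic assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(5)
attrs = rng.random((100, 2)) * [0.5, 40] + [1.4, 45]   # (height m, weight kg)
waist = 0.3 * attrs[:, 0] + 0.008 * attrs[:, 1] ** 2    # synthetic target

# Quadratic regression: features 1, h, w, h^2, h*w, w^2 with learned coefficients.
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(attrs, waist)
print(model.predict([[1.72, 68.0]]))
```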
  15.  A silhouette image generation device comprising:
     an acquisition unit that acquires image data including a depth map in which an object is captured;
     an extraction unit that extracts an object region of the object using three-dimensional point cloud data generated from the depth map, and extracts shape data indicating the shape of the object based on depth data of the depth map corresponding to the object region; and
     a conversion unit that converts the shape data to generate a silhouette image of the object.
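     The three-dimensional point cloud of claim 15 is typically obtained by back-projecting the depth map through the camera intrinsics; a sketch with a made-up pinhole camera follows (the focal lengths and image size are illustrative).

```python
import numpy as np

rng = np.random.default_rng(6)
h, w = 64, 32
depth = rng.uniform(0.5, 3.0, size=(h, w))       # depth map (meters)
fx = fy = 50.0                                    # illustrative intrinsics
cx, cy = w / 2, h / 2

# Back-project every pixel (u, v, depth) into a 3D point (x, y, z).
v, u = np.mgrid[0:h, 0:w]
z = depth
x = (u - cx) * z / fx
y = (v - cy) * z / fy
points = np.stack([x, y, z], axis=-1).reshape(-1, 3)   # (h*w, 3) point cloud
```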
  16.  The silhouette image generation device according to claim 15, wherein the extraction unit further:
     estimates a planar portion of the image data from the three-dimensional point cloud data generated from the depth map; and
     extracts the object region of the object based on the three-dimensional point cloud data from which the points lying on the estimated planar portion have been removed.
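     A sketch of the plane-removal step of claim 16 using a bare-bones RANSAC plane fit in NumPy (point cloud libraries provide equivalent routines); the thresholds and synthetic floor data are illustrative.

```python
import numpy as np

def remove_plane(points, threshold=0.02, iterations=200, seed=0):
    """Estimate the dominant plane by RANSAC and drop its inlier points."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iterations):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                        # degenerate sample, skip
            continue
        normal /= norm
        dist = np.abs((points - p0) @ normal)  # point-to-plane distances
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points[~best_inliers]               # keep only off-plane points

cloud = np.random.default_rng(7).random((1000, 3))
cloud[:700, 2] = 0.0                           # 700 points on a floor plane
object_cloud = remove_plane(cloud)             # roughly 300 object points remain
```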
  17.  A dimension data calculation device comprising:
     an acquisition unit that acquires image data in which an object is captured and full-length data of the object;
     an extraction unit that extracts shape data indicating the shape of the object from the image data; and
     a calculation unit that calculates dimension data of each part of the object using the shape data,
     wherein the image data includes a depth map, and the shape data extracted by the extraction unit is associated with depth data of the object in the depth map.
  18.  A product manufacturing apparatus that manufactures a product related to the shape of the object using dimension data calculated by the dimension data calculation device according to claim 17.
  19.  A dimension data calculation device comprising:
     an acquisition unit that acquires image data in which an object is captured and full-length data of the object;
     an extraction unit that extracts shape data indicating the shape of the object from the image data;
     an estimation unit that estimates values of a predetermined number of shape parameters from a silhouette image of the object using an object engine that associates silhouette images of sample objects with values of a predetermined number of shape parameters associated with the sample objects; and
     a calculation unit that calculates dimension data of the object based on the estimated values of the predetermined number of shape parameters,
     wherein the image data includes a depth map, and the shape data is associated with depth data of the object in the depth map.
  20.  A product manufacturing apparatus that manufactures a product related to the shape of the object using dimension data calculated by the dimension data calculation device according to claim 19.
  21.  A terminal device connected to an information processing device that processes information about an object from image data in which the object is captured, the terminal device comprising:
     an acquisition unit that acquires the image data in which the object is captured;
     a determination unit that determines whether the object included in the image data is a pre-registered object; and
     a reception unit that presents the determination result of the determination unit on an output unit and receives an input as to whether to transmit the image data to the information processing device.
  22.  The terminal device according to claim 21, wherein the determination unit determines whether the object included in the image data is a pre-registered object using an object identification model that identifies, for each pixel, whether the pixel belongs to a predetermined object.
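     The per-pixel identification of claim 22 amounts to semantic segmentation; the decision logic on top of a per-pixel probability map might look like the following, where the random map stands in for the output of any trained segmentation model and both thresholds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
# Stand-in for a segmentation model's output: per-pixel probability that
# the pixel belongs to the pre-registered object class.
prob = rng.random((64, 32))

object_mask = prob > 0.5                     # per-pixel decision
coverage = object_mask.mean()                # fraction of object pixels

# Judge the frame to contain the registered object if enough pixels match.
is_registered = coverage > 0.15              # illustrative threshold
print(is_registered, f"coverage={coverage:.2f}")
```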
  23.  A dimension data calculation device comprising:
     a shape parameter acquisition unit that acquires values of shape parameters of an object; and
     a calculation unit that constructs three-dimensional mesh data of the object from the values of the shape parameters of the object and calculates dimension data of a predetermined part based on information on the vertices of the three-dimensional mesh data that constitute a predetermined part region associated with the predetermined part.
  24.  The dimension data calculation device according to claim 23, wherein the calculation unit, according to the predetermined part:
     selectively extracts, from the set of vertices of the three-dimensional mesh data, calculation points partially associated with the part region; and
     calculates the dimension data based on each of the calculation points.
  25.  The dimension data calculation device according to claim 24, wherein the part region is a tubular region, and the dimension data is calculated by computing the circumferential length of the tubular region based on the calculation points.
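     The circumference calculation of claims 24 and 25 can be sketched by ordering the cross-section calculation points by angle around their centroid and summing the edge lengths of the resulting closed polygon; the elliptical test points below are synthetic.

```python
import numpy as np

# Synthetic calculation points on an elliptical cross-section (e.g., a waist).
t = np.random.default_rng(9).uniform(0, 2 * np.pi, 60)
points = np.column_stack([0.15 * np.cos(t), 0.10 * np.sin(t)])  # meters

# Order the points by angle around the centroid to form a closed loop.
centered = points - points.mean(axis=0)
order = np.argsort(np.arctan2(centered[:, 1], centered[:, 0]))
loop = points[order]

# Circumference: sum of edge lengths of the closed polygon.
edges = np.diff(np.vstack([loop, loop[:1]]), axis=0)
circumference = np.linalg.norm(edges, axis=1).sum()
print(f"{circumference:.3f} m")
```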
PCT/JP2019/046896 2018-11-30 2019-11-29 Dimensional data calculation device, product manufacturing device, information processing device, silhouette image generating device, and terminal device WO2020111269A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/333,008 US11922649B2 (en) 2018-11-30 2021-05-27 Measurement data calculation apparatus, product manufacturing apparatus, information processing apparatus, silhouette image generating apparatus, and terminal apparatus

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
JP2018224376A JP6531273B1 (en) 2018-11-30 2018-11-30 Dimension data calculation apparatus, program, method, product manufacturing apparatus, and product manufacturing system
JP2018-224376 2018-11-30
JP2019082513A JP6579353B1 (en) 2019-04-24 2019-04-24 Information processing apparatus, information processing method, dimension data calculation apparatus, and product manufacturing apparatus
JP2019-082513 2019-04-24
JP2019-086892 2019-04-26
JP2019086892 2019-04-26
JP2019186653A JP6792273B2 (en) 2019-04-26 2019-10-10 Dimension data calculation device, product manufacturing device, and silhouette image generation device
JP2019-186653 2019-10-10

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/333,008 Continuation US11922649B2 (en) 2018-11-30 2021-05-27 Measurement data calculation apparatus, product manufacturing apparatus, information processing apparatus, silhouette image generating apparatus, and terminal apparatus

Publications (1)

Publication Number Publication Date
WO2020111269A1

Family

ID=70853045

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/046896 WO2020111269A1 (en) 2018-11-30 2019-11-29 Dimensional data calculation device, product manufacturing device, information processing device, silhouette image generating device, and terminal device

Country Status (1)

Country Link
WO (1) WO2020111269A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011227692A (en) * 2010-04-20 2011-11-10 Sanyo Electric Co Ltd Size measurement device
JP2013196355A (en) * 2012-03-19 2013-09-30 Toshiba Corp Object measuring device and object measuring method
JP2013228334A (en) * 2012-04-26 2013-11-07 Topcon Corp Three-dimensional measuring system, three-dimensional measuring method and three-dimensional measuring program
US20150062301A1 (en) * 2013-08-30 2015-03-05 National Tsing Hua University Non-contact 3d human feature data acquisition system and method
JP2017162251A (en) * 2016-03-10 2017-09-14 プライムテックエンジニアリング株式会社 Three-dimensional noncontact input device


Similar Documents

Publication Publication Date Title
JP6425780B1 (en) Image processing system, image processing apparatus, image processing method and program
CN105447529B (en) Method and system for detecting clothes and identifying attribute value thereof
CN109684920A (en) Localization method, image processing method, device and the storage medium of object key point
CN105389774B (en) The method and apparatus for being aligned image
US20160249041A1 (en) Method for 3d scene structure modeling and camera registration from single image
JP6719497B2 (en) Image generation method, image generation device, and image generation system
JP2008194146A (en) Visual line detecting apparatus and its method
US20200057778A1 (en) Depth image pose search with a bootstrapped-created database
CN111160111B (en) Human body key point detection method based on deep learning
JP6723798B2 (en) Information processing device, method, and program
JP6792273B2 (en) Dimension data calculation device, product manufacturing device, and silhouette image generation device
CN108010122B (en) Method and system for reconstructing and measuring three-dimensional model of human body
JP6579353B1 (en) Information processing apparatus, information processing method, dimension data calculation apparatus, and product manufacturing apparatus
KR102468306B1 (en) Apparatus and method for measuring body size
US11922649B2 (en) Measurement data calculation apparatus, product manufacturing apparatus, information processing apparatus, silhouette image generating apparatus, and terminal apparatus
WO2020111269A1 (en) Dimensional data calculation device, product manufacturing device, information processing device, silhouette image generating device, and terminal device
JP7238998B2 (en) Estimation device, learning device, control method and program
JP6892569B2 (en) A device that associates depth images based on the human body with composition values
JP2008171074A (en) Three-dimensional shape model generation device, three-dimensional shape model generation method, computer program, and three-dimensional model generation system
Chang et al. Seeing through the appearance: Body shape estimation using multi-view clothing images
JP6667785B1 (en) A program for learning by associating a three-dimensional model with a depth image
Karargyris et al. A video-frame based registration using segmentation and graph connectivity for Wireless Capsule Endoscopy
CN113111743A (en) Personnel distance detection method and device
KR102258114B1 (en) apparatus and method for tracking pose of multi-user
JP6593830B1 (en) Information processing apparatus, information processing method, dimension data calculation apparatus, and product manufacturing apparatus

Legal Events

Date Code Title Description
121 Ep: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 19888352

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: PCT application non-entry in European phase

Ref document number: 19888352

Country of ref document: EP

Kind code of ref document: A1