WO2022137494A1 - Shoe appearance inspection system, shoe appearance inspection method, and shoe appearance inspection program - Google Patents

Shoe appearance inspection system, shoe appearance inspection method, and shoe appearance inspection program

Info

Publication number
WO2022137494A1
WO2022137494A1 (PCT/JP2020/048681)
Authority
WO
WIPO (PCT)
Prior art keywords
shoe
image
inspected
reference point
product
Prior art date
Application number
PCT/JP2020/048681
Other languages
French (fr)
Japanese (ja)
Inventor
淳也 平柴
剛史 小川
Original Assignee
ASICS Corporation (株式会社アシックス)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ASICS Corporation (株式会社アシックス)
Priority to CN202080108063.2A priority Critical patent/CN116802483A/en
Priority to PCT/JP2020/048681 priority patent/WO2022137494A1/en
Priority to JP2022570939A priority patent/JPWO2022137494A1/ja
Publication of WO2022137494A1 publication Critical patent/WO2022137494A1/en

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/95Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined

Definitions

  • the present invention relates to a technique for inspecting the appearance of shoes.
  • Conventionally, for standard products such as metal products whose shape remains fixed, inspection efficiency and inspection accuracy have been improved by performing appearance inspection with sensors or image processing (see, for example, Patent Document 1). For amorphous products whose shapes can change significantly, a technique is also known that determines the correspondence and consistency between parts of products having completely different shapes by image processing (see, for example, Non-Patent Document 1). The techniques of Patent Document 1 and Non-Patent Document 1 have in common that they determine the identity of the product, and that the determination is whether or not the images substantially match 100%.
  • the manufacturing process of shoe products includes an inspection process in order to maintain the quality of the product.
  • the shoe product is a product whose shape is fixed to some extent.
  • however, the upper is made of a mesh fiber material or a leather material, so its shape is not completely fixed and it deforms easily; depending on the situation, there may be slight differences in shape between individual shoes.
  • since manufacturing steps such as attaching the upper to the sole and applying the adhesive are performed manually, the attachment position and the application position may vary slightly. Due to these properties of shoes, the appearance inspection of shoe products has traditionally been performed by human visual inspection. However, visual inspection cannot rule out work variation and inspection omissions, and the inspection workload is heavy. It is therefore desirable to establish a technique that can improve inspection efficiency and inspection accuracy.
  • the present invention has been made in view of these problems, and an object thereof is to provide a shoe appearance inspection technique capable of improving inspection efficiency and inspection accuracy.
  • the shoe appearance inspection system includes: an image acquisition unit that acquires an image of the shoe to be inspected; a reference point extraction unit that extracts a reference point, which is an appearance feature point in the image of the shoe to be inspected, by a predetermined reference point extraction method; an element point extraction unit that extracts a plurality of element points, which are appearance feature points in the image of the shoe to be inspected, by a predetermined element point extraction method; a virtual line extraction unit that extracts a plurality of virtual lines connecting the reference point and the plurality of element points from the image of the shoe to be inspected; a model storage unit that stores a learning model generated by machine learning using virtual lines extracted from images of a plurality of accepted shoes as teacher data; and a pass/fail determination unit that determines whether the shoe to be inspected is an accepted product by inputting the plurality of virtual lines extracted from its image into the learning model.
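The pipeline claimed above can be sketched as follows. This is a minimal illustration only: it assumes 2-D pixel coordinates, summarizes each virtual line by its length and angle, and substitutes a simple per-line tolerance check for the trained learning model; all names and tolerance values are invented for the example.

```python
import math

# Hypothetical sketch of the claimed pipeline: form virtual lines from a
# reference point to each element point, then compare them against an
# accepted product.  Coordinates are image pixels; tolerances are invented.

def extract_virtual_lines(reference_point, element_points):
    """Return one (length, angle-in-degrees) descriptor per virtual line."""
    rx, ry = reference_point
    lines = []
    for px, py in element_points:
        length = math.hypot(px - rx, py - ry)
        angle = math.degrees(math.atan2(py - ry, px - rx))
        lines.append((length, angle))
    return lines

def is_accepted(lines, accepted_lines, length_tol=0.05, angle_tol=3.0):
    """Stand-in for the learning model: pass only if every virtual line is
    within a length-ratio and angle tolerance of the accepted product."""
    for (length, angle), (ref_len, ref_ang) in zip(lines, accepted_lines):
        if abs(length / ref_len - 1.0) > length_tol:
            return False
        if abs(angle - ref_ang) > angle_tol:
            return False
    return True
```

In the patent, the accept/reject boundary is learned from many accepted products rather than fixed by hand; the tolerance parameters here merely stand in for that learned allowable range.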
  • the acquired shoe image may be an image of the shoe in a state of being suspended on the last, and the reference point extraction unit may extract an appearance feature point of the last exposed from the shoe as the reference point in the image.
  • the acquired shoe image may consist of a plurality of images taken from a plurality of angles; in that case, the model storage unit stores a learning model generated by machine learning using virtual lines extracted from the plurality of images of one shoe as teacher data, and the pass/fail determination unit determines whether the shoe to be inspected is an accepted product by inputting the virtual lines extracted from the plurality of images of that shoe into the learning model.
  • a contour extraction unit that extracts the contour of the shoe from the image of the shoe to be inspected may be further provided.
  • the model storage unit may store a learning model generated by machine learning using virtual lines and contours extracted from images of a plurality of accepted shoes as teacher data, and the pass/fail determination unit may determine whether the shoe to be inspected is an accepted product by inputting the virtual lines and contour extracted from its image into the learning model.
  • Another aspect of the present invention is a shoe appearance inspection method.
  • any combination of the above components, and any conversion of the components and expressions of the present invention between a method, a device, a program, a transitory or non-transitory storage medium storing the program, a system, and the like, are also effective as aspects of the present invention.
  • FIG. 1 is a configuration diagram of a shoe appearance inspection system 100 according to the present embodiment.
  • the shoe appearance inspection system 100 includes a shoe appearance inspection device 110 and a shoe appearance inspection learning device 112.
  • the shoe appearance inspection device 110 and the shoe appearance inspection learning device 112 may each be configured as a computer comprising a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), a RAM (Random Access Memory), a ROM (Read Only Memory), an auxiliary storage device, a communication device, and the like.
  • the shoe appearance inspection device 110 and the shoe appearance inspection learning device 112 may be configured as separate computers, or may be realized by one computer having both functions. In this embodiment, an example realized by separate computers will be described.
  • the shoe appearance inspection device 110 is communicably connected to a plurality of photographing devices that capture images of the shoe product 10. Since the shoe product 10 deforms easily when held by an operator, making accurate inspection difficult, it is photographed while placed on a table, for example, as shown in the figure, although the arrangement is not limited to this.
  • the plurality of photographing devices comprises a left side photographing device 50 for photographing the shoe product 10 from the left side, a right side photographing device 52 for photographing from the right side, an upper photographing device 54 for photographing from directly above, a front photographing device 56 for photographing from the front, a rear photographing device 58 for photographing from the rear, and a lower photographing device 59 for photographing the bottom surface from below.
  • the left side photographing device 50 photographs the outer instep side of the shoe product 10 for the left foot and photographs the inner instep side of the shoe product 10 for the right foot.
  • the right side photographing device 52 photographs the inner instep side of the shoe product 10 for the left foot and photographs the outer instep side of the shoe product 10 for the right foot.
  • the images taken by the left side photographing device 50, the right side photographing device 52, the upper photographing device 54, the front photographing device 56, the rear photographing device 58, and the lower photographing device 59 are transmitted to the shoe appearance inspection device 110.
  • the shoe appearance inspection device 110 inspects the appearance of the shoe product 10 based on the received image.
  • the shoe appearance inspection learning device 112 is communicably connected to the shoe appearance inspection device 110, and generates a learning model by machine learning on images of the shoe product 10.
  • the learning model is used for inspection by the shoe appearance inspection device 110.
  • FIG. 2 is a diagram comparing the upper shape of the accepted product and the upper shape of the rejected product with images taken from the side of the outer instep side of the shoe product.
  • FIG. 2A is an example of an accepted product
  • FIG. 2B is an example of a rejected product.
  • the sole is formed by laminating the outsole 12, the lower midsole 14, and the upper midsole 16 in this order from the bottom to the top.
  • the shoe product 10 is configured in a so-called suspended state, in which the upper 20 is attached around the instep of the last (foot form) 30 placed on the upper midsole 16 and adhered to the sole.
  • the lower midsole 14, upper midsole 16, and outsole 12, made of resin such as EVA (Ethylene-Vinyl Acetate), hardly deform after manufacturing, and their shapes are almost fixed except for manufacturing variations.
  • the upper 20 is made of a material such as a mesh fiber material or leather whose shape is not necessarily fixed even after manufacturing.
  • when the last 30 is pulled out from the shoe product 10, the height of the toe lowers slightly due to the repulsive force of the sole, so the upper 20 deforms easily and its shape tends to vary between individual shoes. With the last 30 still inserted, it is easier to keep the shape of the upper 20 constant. Therefore, the state with the last 30 is more suitable for inspection, although machine learning improves inspection accuracy even without the last 30.
  • the upper is attached to the sole in the suspended state, but since the attaching process is done manually by the operator, there are variations in the work, and the variations may cause variations in the shape.
  • the upper shape 22a has a curved shape that warps slightly from the instep to the toe, whereas the upper shape 22b is almost straight from the instep to the toe.
  • the error range of the length ratio relative to the accepted product is modeled by image processing and machine learning; if the learning model judges the error to be within the allowable range, the shoe is estimated to be an accepted product, and if it exceeds the allowable range, it is estimated to be a rejected product.
  • FIG. 3 is a diagram comparing the tilt of the axis in the accepted product and the tilt of the axis in the rejected product in the rear image.
  • FIG. 3A is an example of an accepted product
  • FIG. 3B is an example of a rejected product.
  • in the accepted product, the horizontal axis 18a is almost horizontal and the vertical axis 24a is almost vertical, whereas in the rejected product, the horizontal axis 18b tilts slightly down to the left and the vertical axis 24b also tilts slightly from the vertical.
  • the range of the inclination ratio relative to the accepted product is modeled by image processing and machine learning; if the learning model judges the error to be within the allowable range, the shoe is estimated to be an accepted product, and if it exceeds the allowable range, it is estimated to be a rejected product.
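The axis-tilt check of FIG. 3 can be sketched numerically as follows. This is an illustrative simplification, assuming each axis is summarized by two detected endpoint coordinates and using a fixed degree tolerance in place of the learned allowable range; the function names, endpoints, and tolerance are all invented for the example.

```python
import math

# Hedged sketch of the FIG. 3 tilt check: estimate the tilt of the
# horizontal axis 18 and vertical axis 24 from two endpoints each, then
# compare the deviation from horizontal/vertical against a tolerance.

def tilt_degrees(p1, p2):
    """Signed angle of the segment p1 -> p2 relative to the x axis."""
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

def axes_within_tolerance(h_axis, v_axis, tol=2.0):
    """True when the horizontal axis is within `tol` degrees of horizontal
    and the vertical axis is within `tol` degrees of vertical."""
    h_dev = abs(tilt_degrees(*h_axis))
    v_dev = abs(abs(tilt_degrees(*v_axis)) - 90.0)
    return h_dev <= tol and v_dev <= tol
```

In the patent, this allowable deviation is not a hand-picked constant but is modeled by machine learning over many accepted products.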
  • in this embodiment, running shoes are described as an example of the shoe product 10, but the technique can be applied to various sports shoes including running shoes, leather shoes, and various other shoe products manufactured by attaching an upper to a sole.
  • FIG. 4 shows the misalignment of the midfoot high hardness material in the midsole.
  • a high-hardness material may be used partially at the position corresponding to the midfoot to ensure rigidity, but errors can occur in the position of the high-hardness material during the sole molding process. However, since the portion where the high-hardness material is used does not appear in the external appearance of the lower midsole 14, it is difficult for an operator to detect such an error visually.
  • the figure shows the outline of the sole.
  • the midfoot portion M1 shown by a diagonal line pattern from the upper left to the lower right indicates the high hardness material portion in the accepted product.
  • the midfoot portion M2 shown by a diagonal line pattern from the upper right to the lower left indicates a high hardness material portion in the rejected product.
  • the difference in position is clear when the two are compared as in the figure, but when viewed from the inside of the shoe (left side of the figure) there is no difference between the positions of the midfoot portion M1 and the midfoot portion M2; the difference appears only when viewed from the outside of the shoe (right side of the figure). Moreover, when a worker visually inspects a single shoe, such a positional deviation between the midfoot portions M1 and M2 is difficult to find.
  • the virtual lines La and La' from the reference point Ra at the apex on the toe side to the start points of the midfoot portions M1 and M2 have the same length.
  • the virtual line Lb from the reference point Ra to the start point of the midfoot portion M1 and the virtual line Lb' from the reference point Ra to the start point of the midfoot portion M2 have different lengths.
  • the width Lc of the midfoot portion M1 and the width Lc' of the midfoot portion M2 are substantially the same length in the example of this figure.
  • the magnitude of the error can be expressed by the ratio (Ln'/Ln) of the length of a virtual line in the inspected product to the length of the corresponding virtual line in the accepted product.
  • the ratio of the length of each virtual line extracted from an image to the length of the corresponding virtual line extracted from the image of the accepted product is calculated and learned.
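The length-ratio measure Ln'/Ln described above can be sketched as follows; a minimal illustration assuming each virtual line is summarized by its pixel length, with the 5% window an invented stand-in for the learned allowable range.

```python
# Minimal sketch of the Ln'/Ln error measure.  The tolerance value is an
# illustrative assumption, not a figure from the patent.

def length_ratio(inspected_length, accepted_length):
    """Ratio Ln'/Ln of an inspected virtual line to the accepted one."""
    return inspected_length / accepted_length

def ratio_within_window(ratio, tolerance=0.05):
    """Accept when the ratio deviates from 1.0 by at most `tolerance`."""
    return abs(ratio - 1.0) <= tolerance
```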
  • FIG. 5 is a functional block diagram showing the basic configuration of the shoe appearance inspection device 110.
  • the shoe appearance inspection device 110 includes an image acquisition unit 120, an image storage unit 122, a reference point extraction unit 124, an element point extraction unit 126, a virtual line extraction unit 128, a contour extraction unit 130, an extraction data storage unit 132, a pass/fail determination unit 134, and a model storage unit 136.
  • the image acquisition unit 120 acquires images of the shoe product 10 to be inspected from each of the left side photographing device 50, the right side photographing device 52, the upper photographing device 54, the front photographing device 56, and the rear photographing device 58, and stores them in the image storage unit 122. The image storage unit 122 classifies and stores the images together with attribute information such as the product model name and size of the shoe product 10 to be inspected, and whether it is for the left foot or the right foot.
  • the reference point extraction unit 124 extracts a reference point, which is an appearance feature point in the image of the shoe product 10 to be inspected, by a predetermined reference point extraction method.
  • the element point extraction unit 126 extracts a plurality of element points, which are appearance feature points in the image of the shoe product 10 to be inspected, by a predetermined element point extraction method.
  • the virtual line extraction unit 128 extracts a plurality of virtual lines connecting the reference point and the plurality of element points from the image of the shoe product 10 to be inspected.
  • the contour extraction unit 130 extracts the contour of the shoe product 10 from the image of the shoe product 10 to be inspected.
  • FIG. 6 schematically shows a method of extracting reference points, element points, and virtual lines in an image taken from the side of the outer instep side of a shoe product with a last.
  • a reference point and an element point are extracted as feature points from the image of the shoe product 10, and a virtual line connecting the reference point and the element point is extracted.
  • multiple such virtual lines are extracted, and the data of these virtual lines are input to a predetermined machine learning model; by checking whether the position, inclination, inclination ratio to the accepted product, length, and length ratio to the accepted product of each virtual line fall within the allowable error range, it is estimated whether or not the shoe product 10 is an accepted product. In the contour inspection as well, it is possible to detect when an error exceeding the permissible range occurs in the positional relationship and balance of the contours between a plurality of images taken from a plurality of directions.
  • Reference points and element points are feature points on the appearance with a stable shape that can be extracted based on a predetermined extraction method by image processing.
  • the reference point extraction unit 124 extracts, as the first reference point R1, the heel-side end point of the upper edge of the last 30, which is exposed from the opening of the shoe product 10 and whose edge is detected by image processing.
  • the upper edge of the last 30 changes little in shape during the shoemaking process, and the last 30 has a common shape even when other product models are inspected, so it is easy to extract and suitable as a reference point.
  • by connecting each element point to a single reference point, the number of feature points to be extracted can be kept from increasing indiscriminately, avoiding an increase in processing load. In this respect, the method is more advantageous in terms of processing load than extracting a plurality of virtual lines by connecting a plurality of element points to a plurality of different feature points in a many-to-many manner.
  • alternatively, a feature point identified by using a pattern or character attached to part of the last 30 as a mark may be extracted as the first reference point R1.
  • the element point extraction unit 126 extracts the tip of the toe side of the outsole 12 whose edge is detected by image processing on the outsole 12 as the first element point P1.
  • the first element point P1 is the apex that overhangs the most forward in the arcuate contour on the toe side of the outsole 12.
  • the virtual line extraction unit 128 extracts the first virtual line L1 connecting the first reference point R1 and the first element point P1.
  • the element point extraction unit 126 extracts, as the second element point P2, the point at which the curvature of the toe-side upturned portion, whose edge is detected by image processing on the outsole 12, changes, that is, the starting point of the upturn from the ground contact plane of the outsole 12 toward the toe (the "toe spring start point").
  • the virtual line extraction unit 128 extracts the second virtual line L2 connecting the first reference point R1 and the second element point P2.
  • the element point extraction unit 126 extracts, as the third element point P3, the point at which the curvature of the heel-side upturned portion, whose edge is detected by image processing on the outsole 12, changes, that is, the starting point of the upturn from the ground contact plane of the outsole 12 toward the heel (the "heel cut start point").
  • the virtual line extraction unit 128 extracts the third virtual line L3 connecting the first reference point R1 and the third element point P3.
  • the element point extraction unit 126 extracts the rearmost end portion on the heel side whose edge is detected by image processing on the sole as the fourth element point P4.
  • the fourth element point P4 is the apex that protrudes most rearward in the arcuate contour on the heel side of the lower midsole 14.
  • the virtual line extraction unit 128 extracts the fourth virtual line L4 connecting the first reference point R1 and the fourth element point P4.
  • the element point extraction unit 126 extracts the uppermost end portion whose edge is detected by image processing for the shoe opening as the fifth element point P5.
  • the opening of the shoe product 10 has a wavy shape on both the outer and inner sides, and the element point extraction unit 126 extracts the apex, i.e., the highest point of the arc of the wave shape, as the fifth element point P5.
  • the virtual line extraction unit 128 extracts the fifth virtual line L5 connecting the first reference point R1 and the fifth element point P5. If the shoe product 10 to be inspected, such as a boot, does not have a wavy opening, the frontmost or rearmost end of the shoe opening may be extracted as the fifth element point P5.
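The element points P1 to P5 above are extremal points on edge-detected part outlines. The selection step can be sketched as below, assuming each detected outline is a list of (x, y) pixel coordinates with x increasing toward the toe and y increasing upward; picking extreme points is a simplified stand-in for the patent's per-part edge analysis, and the function names are invented.

```python
# Hypothetical element-point selection on detected outlines.  Real
# extraction operates on edge-detected regions of the outsole, midsole,
# and shoe opening; extrema of point lists are a simplified stand-in.

def toe_tip(outline):          # cf. first element point P1
    """Forwardmost point of the toe-side contour."""
    return max(outline, key=lambda p: p[0])

def heel_rearmost(outline):    # cf. fourth element point P4
    """Rearmost point of the heel-side contour."""
    return min(outline, key=lambda p: p[0])

def opening_top(opening_arc):  # cf. fifth element point P5
    """Highest point of the wavy shoe-opening arc."""
    return max(opening_arc, key=lambda p: p[1])
```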
  • FIG. 7 schematically shows a method of extracting reference points, element points, and virtual lines in an image taken from the side of the outer instep side of a shoe product without a last.
  • the shoe product 10 in this figure is an example in which the state after removing the last 30 is inspected. Unlike the case with the last 30, the reference point is extracted from a part of the shoe product 10.
  • the reference point extraction unit 124 extracts the lowermost end portion whose edge is detected by image processing for the wearing opening as the second reference point R2.
  • the opening of the shoe product 10 has a wavy shape on both the outer and inner sides, and the reference point extraction unit 124 extracts the bottom of the arc of the wave shape, i.e., its lowest point, as the second reference point R2.
  • the rearmost end or the frontmost end of the shoe opening may be used as a reference point for extraction.
  • alternatively, any one of the plurality of element points may be designated as the reference point.
  • the virtual line extraction unit 128 extracts the first virtual line L1 connecting the second reference point R2 and the first element point P1.
  • the virtual line extraction unit 128 extracts the second virtual line L2 connecting the second reference point R2 and the second element point P2.
  • the virtual line extraction unit 128 extracts the third virtual line L3 connecting the second reference point R2 and the third element point P3.
  • the virtual line extraction unit 128 extracts the fourth virtual line L4 connecting the second reference point R2 and the fourth element point P4.
  • the virtual line extraction unit 128 extracts the fifth virtual line L5 connecting the second reference point R2 and the fifth element point P5.
  • FIG. 8 schematically shows a method of extracting reference points, element points, and virtual lines in a rear image of a shoe product with a last.
  • the reference point extraction unit 124 extracts the apex, i.e., the highest point, of the upper edge of the last 30 detected by image processing as the third reference point R3. In this example, the third reference point R3 in the rear image is set to a point different from the first reference point R1 in the side image; however, since the upper edge of the last 30 is detected by image processing in the side image as well, a common feature point may be used as the reference point in both images.
  • the element point extraction unit 126 extracts the leftmost end detected by the image processing for the lower midsole 14 as the sixth element point P6.
  • the sixth element point P6 is the leftmost apex of the arcuate contour on the left side of the lower midsole 14.
  • the virtual line extraction unit 128 extracts the sixth virtual line L6 connecting the third reference point R3 and the sixth element point P6.
  • the element point extraction unit 126 extracts the lowermost end where the edge is detected by the image processing for the outsole 12 as the seventh element point P7.
  • the seventh element point P7 is the bottom of the arcuate contour of the outsole 12, i.e., its lowest point.
  • the virtual line extraction unit 128 extracts the seventh virtual line L7 connecting the third reference point R3 and the seventh element point P7.
  • the element point extraction unit 126 extracts the rightmost edge detected by image processing on the lower midsole 14 as the eighth element point P8.
  • the eighth element point P8 is the apex that overhangs to the right in the arcuate contour on the right side of the lower midsole 14.
  • the virtual line extraction unit 128 extracts the eighth virtual line L8 connecting the third reference point R3 and the eighth element point P8.
  • reference points and element points are also extracted from the right side, front, and upper images of the shoe product 10, and virtual lines are extracted.
  • FIG. 9 schematically shows a method of extracting contours in an image taken from the side of the outer instep side of a shoe product with a last.
  • the contour extraction unit 130 extracts the contour SL of the entire shoe product 10 by edge detection in image processing on the shoe product 10. Since only one contour SL can be obtained from one image of the shoe product 10, fewer items can be inspected than with virtual lines, and fewer objects are available for machine learning; in this respect it is harder to improve detection accuracy by learning than with virtual lines, for which more teacher data can be learned.
  • unlike the extraction of reference points and element points, edge detection of the contour SL does not require characteristic shape features, so the contour can be extracted more easily.
  • the contour inspection can be used not only in the inspection of the finished product but also in the inspection in each of the plurality of processes included in the manufacturing process.
  • the contour is extracted from the right side, front, rear, upper, and lower images of the shoe product 10 and the image of the shoe product 10 without the last.
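Contour extraction by edge detection can be sketched as below. This is a deliberate simplification: the "image" is a binary mask in which 1 marks shoe pixels, and the contour is every shoe pixel adjacent to background (4-neighborhood). A production system would run a proper edge detector on the photographs; the mask-based approach here only illustrates the idea.

```python
# Simplified stand-in for the edge detection performed by the contour
# extraction unit 130.  Input: binary mask (list of rows) with 1 = shoe.
# Output: set of (x, y) pixels on the shoe/background boundary.

def extract_contour(mask):
    rows, cols = len(mask), len(mask[0])
    contour = set()
    for y in range(rows):
        for x in range(cols):
            if not mask[y][x]:
                continue  # background pixel
            # a shoe pixel belongs to the contour if any 4-neighbour is
            # background or lies outside the image
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < rows and 0 <= nx < cols) or not mask[ny][nx]:
                    contour.add((x, y))
                    break
    return contour
```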
  • the model storage unit 136 stores a learning model that has been pre-generated and learned by machine learning using a plurality of virtual lines and contours extracted from images of a plurality of passing shoe products as teacher data.
  • the model storage unit 136 stores a learning model generated by machine learning using a plurality of virtual lines and contours extracted from a plurality of images for each shoe product 10 as teacher data.
  • this learning model models, across many accepted products, the allowable range of errors in data such as the position, inclination, inclination ratio to the accepted product, length, and length ratio to the accepted product of the virtual lines, and the position and balance of the contour.
  • the learning model is generated in advance by the shoe appearance inspection learning device 112 and stored in the model storage unit 136 as described later.
  • the pass/fail determination unit 134 inputs the plurality of virtual lines and the contour extracted from the image of the shoe product 10 to be inspected into the learning model and, by comparing the position, inclination, inclination ratio, length, and length ratio of the virtual lines, and the position and balance of the contour, with those of accepted products, estimates whether the error is within the allowable range, that is, whether the shoe product 10 to be inspected is an accepted product.
  • the pass/fail determination unit 134 inputs the plurality of virtual lines and contours extracted from the plurality of images of the shoe product 10 to be inspected into the learning model, and estimates pass or fail from the position, inclination, and inclination ratio to the accepted product of the virtual lines in each image.
  • the pass / fail determination unit 134 outputs the estimation result by a method such as screen display, and feeds it back to the learning model stored in the model storage unit 136 as data of a pass product or a fail product.
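The decision made by the pass/fail determination unit can be sketched as follows. This is a hedged stand-in, not the patent's actual learning model: it fits a per-feature tolerance band (mean plus/minus k standard deviations) on feature vectors from accepted products, where each vector might collect virtual-line lengths, inclinations, and contour measures; the fitting rule, k value, and names are all assumptions for illustration.

```python
import statistics

# Stand-in for the learned allowable range used by the pass/fail
# determination unit 134: per-feature bands fitted on accepted products.

def fit_bands(accepted_vectors, k=3.0):
    """One (low, high) band per feature, from accepted-product vectors."""
    bands = []
    for feature in zip(*accepted_vectors):
        mu = statistics.mean(feature)
        sigma = statistics.pstdev(feature)
        bands.append((mu - k * sigma, mu + k * sigma))
    return bands

def judge(vector, bands):
    """True (pass) when every feature lies inside its tolerance band."""
    return all(lo <= v <= hi for v, (lo, hi) in zip(vector, bands))
```

The patent's model is produced by machine learning and is further refined by feeding judged products back as new pass/fail data; this sketch shows only the shape of the accept/reject decision.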
  • FIG. 10 is a functional block diagram showing the basic configuration of the shoe appearance inspection learning device 112.
  • the image acquisition unit 220, image storage unit 222, reference point extraction unit 224, element point extraction unit 226, virtual line extraction unit 228, contour extraction unit 230, and extraction data storage unit 232 have the same functions as the image acquisition unit 120, image storage unit 122, reference point extraction unit 124, element point extraction unit 126, virtual line extraction unit 128, contour extraction unit 130, and extraction data storage unit 132, respectively.
  • the machine learning unit 234 uses the data of the plurality of virtual lines and the contour data stored in the extraction data storage unit 232 as teacher data, generates by machine learning a learning model for determining whether the errors of the virtual lines and the contour fall within the allowable range, and stores it in the model storage unit 236. The learning model is transmitted to the shoe appearance inspection device 110 and used for the appearance inspection of the shoe product 10.
  • the teacher data includes information on a plurality of virtual lines and contours extracted from shoes as shown in FIGS. 6 to 9.
  • the virtual line data is the position and inclination of the virtual line obtained from a plurality of images of a large number of accepted products, the ratio of the inclination to the accepted product, the length, and the ratio of the length to the accepted product.
  • the contour data is the position and balance of the contour obtained from a plurality of images of a large number of accepted products. Since the virtual lines and contours have variations in position, inclination, length, balance, etc., machine learning is performed to model the allowable range as an error.
  • the shoe product 10 comes in a plurality of product models; even a single product model has a plurality of sizes and is divided into left-foot and right-foot versions.
  • the machine learning unit 234 machine-learns a plurality of virtual lines and contours for each attribute such as product model, size, left foot and right foot. By performing machine learning separately for each attribute, the determination accuracy can be further improved.
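The per-attribute learning described above can be sketched as a model store keyed by product model, size, and foot, so that each inspected shoe is judged against a model trained on the matching population. The class name and dict-based storage are assumptions for the sketch.

```python
# Illustrative per-attribute model store: one learned model per
# (product model, size, left/right) combination, as described above.

class AttributeModelStore:
    def __init__(self):
        self._models = {}

    def train(self, product_model, size, foot, model):
        """Register the model learned for one attribute combination."""
        self._models[(product_model, size, foot)] = model

    def lookup(self, product_model, size, foot):
        """Fetch the model matching the inspected shoe's attributes."""
        return self._models[(product_model, size, foot)]
```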
  • a learning model by machine learning only the images of the passing shoe products 10 has been described.
  • Alternatively, the virtual lines and contours extracted from images of accepted products may be learned with a "pass" label, and the virtual lines and contours extracted from images of rejected products may additionally be learned with a "fail" label. Although this requires more teacher data than training on accepted products alone, it can correspondingly improve the accuracy of the pass/fail classification.
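Training on both labels means pairing each extracted feature set with its pass/fail label before learning. A minimal sketch of that dataset construction, with invented labels and feature placeholders:

```python
# Hypothetical labels for the two-class variant described above.
PASS, FAIL = 1, 0

def build_dataset(accepted_features, rejected_features):
    """Label accepted-product feature sets PASS and rejected-product
    feature sets FAIL, yielding (features, label) training pairs."""
    data = [(f, PASS) for f in accepted_features]
    data += [(f, FAIL) for f in rejected_features]
    return data
```

Any binary classifier could then be trained on these pairs; which learning algorithm to use is left open by the text.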
  • A learning model may also be generated by machine learning of virtual lines alone, without machine learning of contours.
  • FIG. 11 is a flowchart showing the procedure for extracting a plurality of virtual lines and a contour from an image of a shoe product and estimating, based on the learning model, whether the product is acceptable.
  • A plurality of imaging devices, such as the left side photographing device 50 and the right side photographing device 52, capture images of the shoe product 10 from a plurality of imaging directions (S10), and the image acquisition unit 120 acquires the plurality of images (S11).
  • The reference point extraction unit 124 and the element point extraction unit 126 extract the reference point and a plurality of element points from each image (S12), and the virtual line extraction unit 128 extracts a plurality of virtual lines from the image based on the reference point and the element points and stores them in the extracted-data storage unit 132 (S14).
  • The pass/fail determination unit 134 inputs the virtual-line data into the learning model stored in the model storage unit 136 and determines whether the errors of the virtual lines are within the allowable range (S16).
  • The contour extraction unit 130 extracts a contour from the image of the shoe product 10 and stores it in the extracted-data storage unit 132 (S18).
  • The pass/fail determination unit 134 inputs the contour data into the learning model stored in the model storage unit 136 and determines whether the contour error is within the allowable range (S19).
  • The pass/fail determination unit 134 estimates whether the shoe product 10 is an accepted product by comprehensively judging whether the errors of the virtual lines and the error of the contour are both within their allowable ranges (S20).
  • As described above, the present invention can provide a shoe appearance inspection technique that improves inspection efficiency and inspection accuracy.
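The S10–S20 flow above can be sketched as a single function. All callables (the extractors and the two learned tolerance checks) are placeholders standing in for the units described in the text; the control structure, not the implementations, is the point.

```python
def inspect_shoe(images, extract_points, extract_lines, extract_contour,
                 lines_within_tolerance, contour_within_tolerance):
    """Sketch of the FIG. 11 procedure: per image, extract virtual
    lines (S12, S14) and a contour (S18), check each against the
    learned allowable ranges (S16, S19), and combine the results
    into one overall verdict (S20)."""
    lines_ok = True
    contour_ok = True
    for image in images:                                # S10-S11
        ref_pt, elem_pts = extract_points(image)        # S12
        lines = extract_lines(image, ref_pt, elem_pts)  # S14
        lines_ok = lines_ok and lines_within_tolerance(lines)        # S16
        contour = extract_contour(image)                # S18
        contour_ok = contour_ok and contour_within_tolerance(contour)  # S19
    return lines_ok and contour_ok                      # S20
```

A shoe passes only if every image from every camera direction passes both the virtual-line check and the contour check, matching the "comprehensive" judgment described above.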

Landscapes

  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Footwear And Its Accessory, Manufacturing Method And Apparatuses (AREA)

Abstract

This invention provides shoe appearance inspection technology that is capable of enhancing inspection efficiency and inspection accuracy. In this shoe appearance inspection system, an imaginary line extraction unit 128 extracts, from an image of a shoe under inspection, a plurality of imaginary lines joining a reference point and a plurality of element points. A model storage unit 136 stores a trained model generated through machine learning using, as training data, imaginary lines and contours extracted from images of a plurality of shoes that are acceptable products. A pass/fail determination unit 134 determines whether a shoe under inspection is an acceptable product by inputting, into the trained model, a plurality of imaginary lines and a contour extracted from an image of the shoe under inspection, and outputs the determination.

Description

Shoe appearance inspection system, shoe appearance inspection method, and shoe appearance inspection program
 The present invention relates to a technique for inspecting the appearance of shoes.
 Conventionally, for fixed-shape products such as metal products, performing visual inspection with sensors or image processing can improve inspection efficiency and inspection accuracy (see, for example, Patent Document 1). For amorphous products whose shapes can change significantly, techniques are also known that use image processing to determine the correspondence and consistency of parts between products with entirely different shapes (see, for example, Non-Patent Document 1). The techniques of Patent Document 1 and Non-Patent Document 1 are alike in that both determine product identity, and in both the image-processing decision is essentially whether or not the match is 100%.
Japanese Unexamined Patent Publication No. 2019-076819
 The manufacturing process of a shoe product includes an inspection step to maintain product quality. Although a shoe is a product whose shape is fixed to some extent, the upper in particular is made of mesh fiber or leather, so its shape is not completely fixed and deforms easily; slight differences in shape can therefore arise from one individual shoe to another or from one situation to another. Furthermore, manufacturing steps such as attaching the upper to the sole and applying adhesive are performed by hand, so slight variations can occur in the attachment and application positions. Because of these properties of shoes, the appearance inspection of shoe products has traditionally been performed by human visual inspection. Visual inspection, however, cannot rule out work variation or missed defects, and the inspection workload is heavy, so a technique that can improve inspection efficiency and inspection accuracy is desired.
 The present invention has been made in view of these problems, and an object thereof is to provide a shoe appearance inspection technique capable of improving inspection efficiency and inspection accuracy.
 To solve the above problems, a shoe appearance inspection system according to one aspect of the present invention includes: an image acquisition unit that acquires an image of a shoe to be inspected; a reference point extraction unit that extracts, by a predetermined reference point extraction method, a reference point that is an appearance feature point in the image of the shoe to be inspected; an element point extraction unit that extracts, by a predetermined element point extraction method, a plurality of element points that are appearance feature points in the image of the shoe to be inspected; a virtual line extraction unit that extracts, from the image of the shoe to be inspected, a plurality of virtual lines each connecting the reference point to one of the element points; a model storage unit that stores a learning model generated by machine learning using, as teacher data, virtual lines extracted from images of a plurality of accepted shoes; and a pass/fail determination unit that determines whether the shoe to be inspected is an accepted product by inputting the plurality of virtual lines extracted from its image into the learning model.
 The acquired shoe image may be an image of the shoe lasted (pulled over a last), and the reference point extraction unit may extract, as the reference point, an appearance feature point of the last exposed from the shoe in the image.
 The acquired shoe image may consist of a plurality of images taken from a plurality of angles; the model storage unit may store a learning model generated by machine learning using, as teacher data, the virtual lines extracted from the plurality of images of each shoe; and the pass/fail determination unit may determine whether the shoe to be inspected is an accepted product by inputting into the learning model the virtual lines extracted from the plurality of images of that shoe.
 The system may further include a contour extraction unit that extracts the contour of the shoe from the image of the shoe to be inspected. The model storage unit may store a learning model generated by machine learning using, as teacher data, the virtual lines and contours extracted from images of a plurality of accepted shoes, and the pass/fail determination unit may determine whether the shoe to be inspected is an accepted product by inputting into the learning model the plurality of virtual lines and the contour extracted from its image.
 Another aspect of the present invention is a shoe appearance inspection method. The method includes: acquiring an image of a shoe to be inspected by a predetermined image acquisition means; extracting, by a computer using a predetermined reference point extraction method, a reference point that is an appearance feature point in the image of the shoe to be inspected; extracting, by the computer using a predetermined element point extraction method, a plurality of element points that are appearance feature points in the image of the shoe to be inspected; extracting, by the computer from the image of the shoe to be inspected, a plurality of virtual lines each connecting the reference point to one of the element points; reading, from a predetermined storage means, a learning model generated by machine learning using, as teacher data, virtual lines extracted from images of a plurality of accepted shoes; and determining, by the computer, whether the shoe to be inspected is an accepted product by inputting the plurality of virtual lines extracted from its image into the learning model.
 Any combination of the above components, and any mutual substitution of the components and expressions of the present invention among a method, a device, a program, a transitory or non-transitory storage medium storing the program, a system, and the like, are also effective as aspects of the present invention.
 According to the present invention, a shoe appearance inspection technique capable of improving inspection efficiency and inspection accuracy can be provided.
FIG. 1 is a configuration diagram of the shoe appearance inspection system according to the present embodiment.
FIG. 2 compares the upper shape of an accepted product with that of a rejected product in images of the lateral (outer) side of the shoe product taken from the side.
FIG. 3 compares the axis tilt of an accepted product with that of a rejected product in rear images.
FIG. 4 shows the positional deviation of the midfoot high-hardness material in the midsole.
FIG. 5 is a functional block diagram showing the basic configuration of the shoe appearance inspection device.
FIG. 6 schematically shows how the reference point, element points, and virtual lines are extracted from an image of the lateral side of a lasted shoe product taken from the side.
FIG. 7 schematically shows how the reference point, element points, and virtual lines are extracted from an image of the lateral side of an unlasted shoe product taken from the side.
FIG. 8 schematically shows how the reference point, element points, and virtual lines are extracted from a rear image of a lasted shoe product.
FIG. 9 schematically shows how a contour is extracted from an image of the lateral side of a lasted shoe product taken from the side.
FIG. 10 is a functional block diagram showing the basic configuration of the shoe appearance inspection learning device.
FIG. 11 is a flowchart showing the procedure for extracting a plurality of virtual lines and a contour from an image of a shoe product and estimating, based on the learning model, whether the product is acceptable.
 FIG. 1 is a configuration diagram of the shoe appearance inspection system 100 according to the present embodiment. The shoe appearance inspection system 100 includes a shoe appearance inspection device 110 and a shoe appearance inspection learning device 112. Each may be implemented as a computer comprising a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), RAM (Random Access Memory), ROM (Read Only Memory), an auxiliary storage device, a communication device, and the like. The shoe appearance inspection device 110 and the shoe appearance inspection learning device 112 may be implemented as separate computers, or as a single computer combining both functions. The present embodiment describes an example using separate computers.
 The shoe appearance inspection device 110 is communicatively connected with a plurality of photographing devices that capture images of the shoe product 10. Because the shoe product 10 deforms easily when held in a worker's hand, making accurate inspection difficult, it is photographed, for example, placed on a table as shown in the figure; however, the method of fixing the shoe product 10 is not limited to this. The photographing devices comprise a left side photographing device 50 that photographs the shoe product 10 from the left, a right side photographing device 52 that photographs it from the right, an upper photographing device 54 that photographs it from directly above, a front photographing device 56 that photographs it from the front, a rear photographing device 58 that photographs it from behind, and a lower photographing device 59 that photographs the bottom surface from below. The left side photographing device 50 photographs the lateral (outer) side of a left-foot shoe product 10 and the medial (inner) side of a right-foot shoe product 10; the right side photographing device 52 photographs the medial side of a left-foot shoe and the lateral side of a right-foot shoe. The images taken by the left side photographing device 50, right side photographing device 52, upper photographing device 54, front photographing device 56, rear photographing device 58, and lower photographing device 59 are transmitted to the shoe appearance inspection device 110, which inspects the appearance of the shoe product 10 based on the received images. The shoe appearance inspection learning device 112 is communicatively connected with the shoe appearance inspection device 110 and machine-learns images of the shoe product 10 to generate a learning model, which the shoe appearance inspection device 110 uses for inspection.
 FIG. 2 compares the upper shape of an accepted product and a rejected product in images of the lateral side of the shoe product taken from the side. FIG. 2(a) is an example of an accepted product, and FIG. 2(b) of a rejected product. In the illustrated shoe product 10, the sole is built up from bottom to top by laminating the outsole 12, the lower midsole 14, and the upper midsole 16 in this order. The shoe product 10 is assembled in a so-called lasted state: the upper 20 is pulled around the instep of the last (foot form) 30 placed on the upper midsole 16 and adhered to the sole. The lower midsole 14, upper midsole 16, and outsole 12, made of resins such as EVA (ethylene-vinyl acetate), hardly deform after manufacture apart from molding variation, and their shapes are essentially fixed; the upper 20, by contrast, is made of materials such as mesh fiber or leather whose shape is not necessarily fixed after manufacture. When the last 30 is removed from the shoe product 10, the repulsive force of the sole lowers the toe slightly, so the upper 20 deforms easily and individual shoes tend to vary in shape, whereas in the lasted state the shape of the upper 20 is easier to keep constant. The lasted state is therefore better suited to inspection, but even without the last 30, inspection accuracy is improved by machine learning.
 The upper is attached to the sole in the lasted state, but because that attachment is done by hand, work variation arises, and that variation can in turn produce variation in shape. In the accepted product of FIG. 2(a), the upper shape 22a curves with a slight upturn from the instep toward the toe, whereas in the rejected product of FIG. 2(b) the upper shape 22b is nearly straight from instep to toe. Such differences are small enough to be overlooked by the human eye and are hard to discern unless observed from a suitable direction. Over the shoe as a whole, however, there are clear differences between accepted and rejected products in the positional relationships of the feature points and in the balance of lengths. Therefore, image processing and machine learning are used to model the error range of length ratios relative to accepted products; if the learning model judges the error to be within the allowable range, the shoe is estimated to be an accepted product, and if it exceeds that range, a rejected one.
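The "within the allowable error range" test on a ratio relative to the accepted product can be pictured as a simple interval check, where the interval itself is what the machine learning stage estimates from many accepted shoes. The bounds in this sketch are illustrative numbers, not values from the specification.

```python
def ratio_within_tolerance(measured, accepted_reference, lower=0.97, upper=1.03):
    """Return True if measured/accepted_reference lies inside the learned
    allowable interval. lower/upper stand in for what the trained model
    would supply; 0.97-1.03 is an arbitrary illustrative band."""
    ratio = measured / accepted_reference
    return lower <= ratio <= upper
```

The same check applies whether the ratio compares lengths or inclinations; only the learned interval differs per feature.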
 FIG. 3 compares the axis tilt of an accepted product and a rejected product in rear images. FIG. 3(a) is an example of an accepted product, and FIG. 3(b) of a rejected product. In the accepted product of FIG. 3(a), the horizontal axis 18a is nearly horizontal and the vertical axis 24a nearly vertical, whereas in the rejected product of FIG. 3(b), the horizontal axis 18b tilts slightly down to the left from horizontal and the vertical axis 24b likewise tilts slightly left of vertical. These differences in tilt are slight enough to be overlooked by the human eye and are hard to discern unless the shoe is placed properly on a level surface, yet the difference in tilt clearly distinguishes accepted from rejected products. Therefore, image processing and machine learning are used to model the range of tilt ratios relative to accepted products; if the learning model judges the error to be within the allowable range, the shoe is estimated to be an accepted product, and if it exceeds that range, a rejected one.
 In the present embodiment, a running shoe is described as an example of the shoe product 10, but the technique can be used to inspect footwear in general: various sports shoes including running shoes, leather shoes, and in particular the many shoe products manufactured by attaching an upper to a sole, as well as sandals, slippers, and other footwear without an upper.
 Errors can arise in the resin molding process not only in the upper but also in the sole. FIG. 4 shows the positional deviation of the midfoot high-hardness material in the midsole. In the lower midsole 14, a high-hardness material may be used partially at the position corresponding to the midfoot of the foot to ensure rigidity, but in the sole molding process an error can occur in where that material ends up. Because the portion using the high-hardness material does not show in the outward appearance of the lower midsole 14, it is difficult for a worker to detect the error visually. The figure shows the outline of the sole. The midfoot portion M1, hatched from upper left to lower right, indicates the high-hardness portion in an accepted product; the midfoot portion M2, hatched from upper right to lower left, indicates the high-hardness portion in a rejected product. Comparing the two as in the figure makes the positional difference clear, but viewed from the medial side of the shoe (left of the figure) M1 and M2 coincide; the difference in position appears only when viewed from the lateral side (right of the figure). When a worker inspects a single shoe visually, it is therefore difficult to discover a positional deviation like the difference between M1 and M2.
 On the medial side of the shoe product 10, the virtual lines La and La' from the reference point Ra at the toe-side apex to the start points of the midfoot portions M1 and M2 have the same length. On the lateral side, the virtual line Lb from the reference point Ra to the start point of M1 and the virtual line Lb' from Ra to the start point of M2 differ in length. The width Lc of M1 and the width Lc' of M2 are almost equal in the example of this figure. In this case, the magnitude of the error can be expressed by the ratio (Ln'/Ln) of the lengths of corresponding virtual lines in the inspected and accepted products. Therefore, by acquiring a plurality of images of the inspection target taken from a plurality of directions, computing the length ratios between the virtual lines extracted from each image and those extracted from accepted-product images, and comparing them with the learning result, it becomes theoretically possible to determine whether the magnitude of the positional deviation is within the allowable range.
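The Ln'/Ln comparison around Ra, Lb, and Lb' might be computed as follows. The pixel coordinates and point names here are invented for illustration; only the ratio logic reflects the text.

```python
import math

def line_length(p, q):
    """Euclidean length of a virtual line between two image points."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

# Hypothetical pixel coordinates: ra is the toe-side apex Ra; m1_start
# and m2_start are the lateral-side start points of the high-hardness
# portion in an accepted sole and a deviated sole, respectively.
ra = (0.0, 0.0)
m1_start = (120.0, 30.0)   # accepted product (defines Ln)
m2_start = (126.0, 31.0)   # inspected product (defines Ln')

ln = line_length(ra, m1_start)
ln_prime = line_length(ra, m2_start)
error_ratio = ln_prime / ln   # the Ln'/Ln of the text; 1.0 means no deviation
```

A ratio far from 1.0 on the lateral side, with a ratio near 1.0 on the medial side, is exactly the signature of the M1/M2 misalignment that is hard to catch by eye.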
 FIG. 5 is a functional block diagram showing the basic configuration of the shoe appearance inspection device 110. The diagram depicts blocks by function; these functional blocks can be realized in various forms by hardware, software, or a combination thereof. The shoe appearance inspection device 110 includes an image acquisition unit 120, an image storage unit 122, a reference point extraction unit 124, an element point extraction unit 126, a virtual line extraction unit 128, a contour extraction unit 130, an extracted-data storage unit 132, a pass/fail determination unit 134, and a model storage unit 136.
 The image acquisition unit 120 acquires images of the shoe product 10 to be inspected from each of the left side photographing device 50, right side photographing device 52, upper photographing device 54, front photographing device 56, and rear photographing device 58, and stores them in the image storage unit 122. The images are classified and stored in the image storage unit 122 together with attribute information such as the product model name and size of the shoe product 10 to be inspected and whether it is for the left or right foot.
 The reference point extraction unit 124 extracts, by a predetermined reference point extraction method, a reference point that is an appearance feature point in the image of the shoe product 10 to be inspected. The element point extraction unit 126 extracts, by a predetermined element point extraction method, a plurality of element points that are appearance feature points in that image. The virtual line extraction unit 128 extracts, from the image of the shoe product 10 to be inspected, a plurality of virtual lines each connecting the reference point to one of the element points. The contour extraction unit 130 extracts the contour of the shoe product 10 from the image.
 FIG. 6 schematically shows how the reference point, element points, and virtual lines are extracted from an image of the lateral side of a lasted shoe product taken from the side. In the present embodiment, a reference point and element points are extracted as feature points from the image of the shoe product 10, and virtual lines connecting the reference point to the element points are extracted. A plurality of such virtual lines are extracted, their data are input into a predetermined machine learning model, and whether each virtual line's position, inclination, ratio of inclination to the accepted product, length, and ratio of length to the accepted product fall within the allowable error range is determined, thereby estimating whether the shoe product 10 is an accepted product.
 Furthermore, by inspecting a plurality of images of one shoe product 10 taken from a plurality of directions, acceptability is estimated comprehensively over the whole shoe rather than from images in one direction alone. Even if, within an image from one side, the positions, inclinations, inclination ratios, lengths, and length ratios of the virtual lines extracted from that image are all within tolerance, the positional relationships among virtual lines extracted from images taken from multiple directions may still contain errors exceeding the allowable range.
 Further, a contour is extracted from the image of the shoe product 10 and input into a predetermined machine learning model, and it is determined whether the overall balance of the shape and position of the contour is within the allowable error range, thereby estimating whether the shoe product 10 is an accepted product. The contour-based inspection can likewise detect cases in which the positional relationship or balance of the contours across images taken from multiple directions contains errors exceeding the allowable range.
 Reference points and element points are feature points on the appearance whose shape is stable and which can be extracted by a predetermined image-processing extraction method. The reference point extraction unit 124 extracts, as the first reference point R1, the endpoint closest to the heel on the edge-detected upper edge of the last 30, which is the portion of the last 30 exposed from the opening of the shoe product 10. The upper edge of the last 30 changes little in shape during the shoemaking process, and since the last 30 of common shape is used even when inspecting other product models, it is well suited as an easily extracted reference point. Furthermore, because only one reference point is extracted per image and a plurality of virtual lines are formed by connecting that common reference point to a plurality of element points in a one-to-many fashion, the number of feature points to be extracted does not grow needlessly and an increase in processing load is avoided. In this respect, the method is advantageous in processing load over approaches that extract a plurality of virtual lines by connecting a plurality of mutually different feature points to a plurality of element points in a many-to-many fashion. As a modification, a feature point that can be extracted using a pattern or characters marked on part of the last 30 may be used as the first reference point R1.
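A minimal sketch of the R1 selection rule described above: given the edge-detected pixels of the last's upper edge, pick the endpoint on the heel side. Which image direction corresponds to the heel depends on the camera setup, so it is passed in explicitly as an assumption; the edge detection step itself is taken as given.

```python
import numpy as np

def first_reference_point(upper_edge_points, heel_direction=+1):
    """Select the heel-side endpoint of the edge-detected upper edge of
    the last as the first reference point R1.

    upper_edge_points: (N, 2) array of (x, y) edge pixels.
    heel_direction: +1 if the heel lies toward larger x in the image,
    -1 otherwise (an assumption about camera orientation).
    """
    pts = np.asarray(upper_edge_points, dtype=float)
    idx = int(np.argmax(heel_direction * pts[:, 0]))  # extreme point in x
    return tuple(pts[idx])
```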
 The element point extraction unit 126 extracts, as the first element point P1, the foremost point on the toe side of the edge-detected outsole 12. The first element point P1 is the apex that projects farthest forward on the arcuate toe-side contour of the outsole 12. The virtual line extraction unit 128 extracts the first virtual line L1 connecting the first reference point R1 and the first element point P1.
 The element point extraction unit 126 extracts, as the second element point P2, the point at which the curvature of the edge-detected toe-side upturned portion of the outsole 12 changes, i.e., the point where the outsole 12 begins to curl upward from its ground-contact plane toward the toe (also called the "toe spring start point"). The virtual line extraction unit 128 extracts the second virtual line L2 connecting the first reference point R1 and the second element point P2.
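The "point where the curvature changes" can be found, for instance, by walking along the edge-detected sole outline and taking the first vertex whose discrete turning angle departs from the flat ground-contact run. The polyline representation and the threshold value below are illustrative assumptions, not the specific method of the embodiment.

```python
import numpy as np

def curvature_change_point(outline, threshold=0.1):
    """First interior vertex of an ordered outline polyline (walking
    toward the toe) whose discrete turning angle exceeds `threshold`
    radians, i.e. where the outsole starts to curl up from the ground
    plane.  `outline` is an (N, 2) array of (x, y) points."""
    p = np.asarray(outline, dtype=float)
    v1, v2 = p[1:-1] - p[:-2], p[2:] - p[1:-1]
    a = np.arctan2(v2[:, 1], v2[:, 0]) - np.arctan2(v1[:, 1], v1[:, 0])
    turn = np.abs(np.arctan2(np.sin(a), np.cos(a)))  # wrap into [-pi, pi]
    idx = int(np.argmax(turn > threshold)) + 1       # first vertex over threshold
    return tuple(p[idx])
```

On a flat run followed by an upward curl, the function returns the last flat vertex, which is where the toe spring begins.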
 The element point extraction unit 126 extracts, as the third element point P3, the point at which the curvature of the edge-detected heel-side upturned portion of the outsole 12 changes, i.e., the point where the outsole 12 begins to curl upward from its ground-contact plane toward the heel (also called the "heel cut start point"). The virtual line extraction unit 128 extracts the third virtual line L3 connecting the first reference point R1 and the third element point P3.
 The element point extraction unit 126 extracts, as the fourth element point P4, the rearmost end on the heel side of the edge-detected sole. The fourth element point P4 is the apex that projects farthest rearward on the arcuate heel-side contour of the lower midsole 14. The virtual line extraction unit 128 extracts the fourth virtual line L4 connecting the first reference point R1 and the fourth element point P4.
 The element point extraction unit 126 extracts, as the fifth element point P5, the uppermost end of the edge-detected shoe opening. The opening of the shoe product 10 has a wave shape on both the outer and inner sides, and the element point extraction unit 126 extracts the apex, i.e., the highest point, of the arc of that wave shape as the fifth element point P5. The virtual line extraction unit 128 extracts the fifth virtual line L5 connecting the first reference point R1 and the fifth element point P5. If the opening of the shoe product 10 to be inspected is not wave-shaped, as with boots, for example, the foremost or rearmost end of the opening may instead be extracted as the fifth element point P5.
 In the example of FIG. 6, five element points and five virtual lines are extracted, but the numbers of element points and virtual lines are not limited to these; other locations may serve as element points depending on the stability of the shoe's shape.
 FIG. 7 schematically shows how reference points, element points, and virtual lines are extracted from an image of the lateral side of a shoe product photographed without the last. The shoe product 10 in this figure is an example of inspecting the state after the last 30 has been removed. Unlike the case with the last 30, the reference point is extracted from part of the shoe product 10 itself. The reference point extraction unit 124 extracts, as the second reference point R2, the lowermost end of the edge-detected shoe opening. The opening of the shoe product 10 has a wave shape on both the outer and inner sides, and the reference point extraction unit 124 extracts the lowest point of the arc of that wave shape as the second reference point R2. If the opening of the shoe product 10 to be inspected is not wave-shaped, as with boots, for example, the rearmost or foremost end of the opening may instead be extracted as the reference point. As a modification, one of the plurality of element points may be designated as the reference point.
 The virtual line extraction unit 128 extracts the first virtual line L1 connecting the second reference point R2 and the first element point P1, the second virtual line L2 connecting the second reference point R2 and the second element point P2, the third virtual line L3 connecting the second reference point R2 and the third element point P3, the fourth virtual line L4 connecting the second reference point R2 and the fourth element point P4, and the fifth virtual line L5 connecting the second reference point R2 and the fifth element point P5.
 FIG. 8 schematically shows how reference points, element points, and virtual lines are extracted from a rear image of a shoe product with the last. The reference point extraction unit 124 extracts, as the third reference point R3, the apex, i.e., the highest point, of the edge-detected upper edge of the last 30. Although the third reference point R3 in the rear image is a different point from the first reference point R1 in the lateral image, a common feature point may instead serve as the reference point in both: for example, the apex of the edge-detected upper edge of the last 30 may also be used as the reference point in the lateral image.
 The element point extraction unit 126 extracts, as the sixth element point P6, the leftmost end of the edge-detected lower midsole 14. The sixth element point P6 is the apex that projects farthest to the left on the arcuate left-side contour of the lower midsole 14. The virtual line extraction unit 128 extracts the sixth virtual line L6 connecting the third reference point R3 and the sixth element point P6.
 The element point extraction unit 126 extracts, as the seventh element point P7, the lowermost end of the edge-detected outsole 12. The seventh element point P7 is the lowest point of the arcuate contour of the outsole 12. The virtual line extraction unit 128 extracts the seventh virtual line L7 connecting the third reference point R3 and the seventh element point P7.
 The element point extraction unit 126 extracts, as the eighth element point P8, the rightmost end of the edge-detected lower midsole 14. The eighth element point P8 is the apex that projects farthest to the right on the arcuate right-side contour of the lower midsole 14. The virtual line extraction unit 128 extracts the eighth virtual line L8 connecting the third reference point R3 and the eighth element point P8.
 As in FIGS. 6 to 8, reference points, element points, and virtual lines are also extracted from the right-side, front, and top images of the shoe product 10.
 FIG. 9 schematically shows how the contour is extracted from an image of the lateral side of a shoe product photographed with the last. The contour extraction unit 130 extracts the contour SL of the entire shoe product 10 by edge detection in image processing of the shoe product 10. Because only one contour SL is obtained per image of the shoe product 10, fewer items can be inspected than with the virtual-line inspection, and since there is correspondingly less material to feed to machine learning, it is harder to raise the detection accuracy through learning than with the virtual lines, for which a large amount of teacher data can be learned. On the other hand, unlike the extraction of reference points and element points, edge detection of the contour SL requires no shape features and can therefore be performed more easily. Furthermore, contour-based inspection can be used not only for inspecting the finished product but also for inspection at each of the multiple steps of the manufacturing process.
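For reference, one self-contained way to obtain a silhouette contour from a binary foreground mask is to keep every foreground pixel that touches the background through a 4-neighbour. A production system would more likely use a library edge detector (e.g. Canny) as the "edge detection" above; this is only an illustrative stand-in.

```python
import numpy as np

def silhouette_contour(mask):
    """Contour SL of a binary foreground mask: the foreground pixels
    that have at least one background 4-neighbour.  Returns (row, col)
    coordinates of the contour pixels."""
    m = np.asarray(mask, dtype=bool)
    p = np.pad(m, 1, constant_values=False)      # background border
    has_bg_neighbour = (~p[:-2, 1:-1] | ~p[2:, 1:-1] |
                        ~p[1:-1, :-2] | ~p[1:-1, 2:])
    return np.argwhere(m & has_bg_neighbour)
```

On a solid 3x3 block, for example, the eight border pixels are returned and the centre pixel is excluded as interior.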
 As in FIG. 9, contours are also extracted from the right-side, front, rear, top, and bottom images of the shoe product 10 and from images of the shoe product 10 without the last.
 Referring again to FIG. 1. The model storage unit 136 stores a trained learning model generated in advance by machine learning using, as teacher data, the virtual lines and contours extracted from images of a plurality of accepted shoe products. The model storage unit 136 stores a learning model generated by machine learning using as teacher data the virtual lines and contours extracted from a plurality of images per shoe product 10. Because this learning model encodes the allowable error ranges of data such as the positions, inclinations, inclination ratios to accepted products, lengths, and length ratios to accepted products of the virtual lines and the positions and balance of the contours over many accepted products, comparing these against the position, inclination, inclination ratio to the accepted product, length, length ratio to the accepted product, contour position, and balance extracted from the image under inspection makes it possible to estimate whether the errors fall within the allowable range, i.e., whether the product is accepted. The learning model is generated in advance by the shoe appearance inspection learning device 112 and stored in the model storage unit 136, as described later.
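The idea of a model that encodes allowable error ranges learned from accepted products can be illustrated with a deliberately simple stand-in: per-feature tolerance bands estimated from accepted-product feature vectors. The k-sigma band below is an illustrative choice; the embodiment does not commit to a specific model class.

```python
import numpy as np

class ToleranceModel:
    """Per-feature tolerance bands fitted on feature vectors (virtual
    line positions/inclinations/lengths, contour descriptors) of
    accepted products.  A candidate is judged acceptable if every
    feature stays within k standard deviations of the mean."""
    def __init__(self, k=3.0):
        self.k = k

    def fit(self, accepted):              # accepted: (N, D) feature rows
        a = np.asarray(accepted, dtype=float)
        self.mean = a.mean(axis=0)
        self.std = a.std(axis=0) + 1e-9   # guard against zero tolerance
        return self

    def is_accepted(self, features):
        z = np.abs(np.asarray(features, dtype=float) - self.mean) / self.std
        return bool((z <= self.k).all())
```

A candidate close to the accepted-product distribution passes; one far outside any single feature's band is rejected.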
 The pass/fail determination unit 134 inputs the plurality of virtual lines and the contour extracted from the image of the shoe product 10 to be inspected into the learning model and compares their positions, inclinations, inclination ratios to accepted products, lengths, length ratios to accepted products, contour positions, and balance, thereby estimating whether the errors fall within the allowable range, i.e., whether the inspected shoe product 10 is an accepted product. The pass/fail determination unit 134 inputs the virtual lines and contours extracted from the plurality of images of the shoe product 10 under inspection into the learning model, comprehensively compares the positions, inclinations, inclination ratios to accepted products, lengths, length ratios to accepted products, contour positions, and balance over the shoe as a whole, and estimates whether the inspected shoe is an accepted product. The pass/fail determination unit 134 outputs the estimation result, for example by on-screen display, and also feeds it back to the learning model stored in the model storage unit 136 as accepted-product or rejected-product data.
 FIG. 10 is a functional block diagram showing the basic configuration of the shoe appearance inspection learning device 112. The image acquisition unit 220, image storage unit 222, reference point extraction unit 224, element point extraction unit 226, virtual line extraction unit 228, contour extraction unit 230, and extraction data storage unit 232 correspond to, and have the same functions as, the image acquisition unit 120, image storage unit 122, reference point extraction unit 124, element point extraction unit 126, virtual line extraction unit 128, contour extraction unit 130, and extraction data storage unit 132, respectively.
 The machine learning unit 234 uses the virtual line data and contour data stored in the extraction data storage unit 232 as teacher data to generate, by machine learning, a learning model that determines whether the errors of the virtual lines and contour fall within the allowable range, and stores it in the model storage unit 236. The learning model is transmitted to the shoe appearance inspection device 110 and used for the appearance inspection of the shoe product 10.
 The teacher data includes information on the plurality of virtual lines and contours extracted from shoes as shown in FIGS. 6 to 9. The virtual line data are the positions, inclinations, inclination ratios to accepted products, lengths, and length ratios to accepted products obtained from a plurality of images of many accepted products. The contour data are the positions and balance of the contours obtained from a plurality of images of many accepted products. Since the virtual lines and contours vary in position, inclination, length, balance, and so on, machine learning over them models the range allowed as error. The shoe product 10 comes in multiple product models, each product model comes in multiple sizes, and each is divided into left-foot and right-foot versions. The machine learning unit 234 machine-learns the virtual lines and contours separately for each combination of attributes such as product model, size, and left or right foot. Learning separately per attribute further improves the determination accuracy.
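Training one model per attribute combination can be organized as below, with one learning model subsequently fitted per group; the record field names are assumptions for illustration.

```python
def group_by_attributes(samples):
    """Split training samples into one group per (product model, size,
    left/right foot) combination so that a separate learning model can
    be fitted for each group, as described above.  Each sample is a
    dict with illustrative keys "model", "size", "side", "features"."""
    groups = {}
    for s in samples:
        key = (s["model"], s["size"], s["side"])
        groups.setdefault(key, []).append(s["features"])
    return groups
```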
 In the present embodiment, an example has been described in which the learning model is generated by machine learning only images of accepted shoe products 10. In a modification, the virtual lines and contours extracted from images of accepted products may be learned with an "accepted" label while the virtual lines and contours extracted from images of rejected products are additionally learned with a "rejected" label. This requires more teacher data than learning accepted products alone, but correspondingly improves the accuracy of classifying products as accepted or rejected. In another modification, the learning model may be generated by machine learning of the virtual lines alone, without machine learning of the contours.
 FIG. 11 is a flowchart showing the procedure for extracting a plurality of virtual lines and contours from images of a shoe product and estimating, based on the learning model, whether the product is accepted.
 A plurality of imaging devices such as the left-side imaging device 50 and the right-side imaging device 52 photograph the shoe product 10 from a plurality of directions (S10), and the image acquisition unit 120 acquires the resulting images (S11). The reference point extraction unit 124 and the element point extraction unit 126 extract a reference point and a plurality of element points from each image (S12), and the virtual line extraction unit 128 extracts a plurality of virtual lines from the image based on the reference point and the element points and stores them in the extraction data storage unit 132 (S14). The pass/fail determination unit 134 inputs the virtual line data into the learning model stored in the model storage unit 136 and determines whether the virtual line errors are within the allowable range (S16). The contour extraction unit 130 extracts the contour from the image of the shoe product 10 and stores it in the extraction data storage unit 132 (S18). The pass/fail determination unit 134 inputs the contour data into the learning model stored in the model storage unit 136 and determines whether the contour error is within the allowable range (S19). The pass/fail determination unit 134 then estimates whether the shoe product 10 is an accepted product by comprehensively determining whether both the virtual line errors and the contour error are within the allowable ranges (S20).
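The steps S10 to S20 can be sketched as a single pass/fail pipeline. All callables are injected and their names are illustrative; in particular, the comprehensive judgment of S20 is reduced here to a simple conjunction of the two checks.

```python
def inspect_shoe(images, extract_virtual_lines, extract_contour,
                 lines_within_tolerance, contour_within_tolerance):
    """Pass/fail pipeline over the captured images of one shoe
    (S10-S11 assumed done): extract virtual lines (S12-S14) and
    contours (S18), check both against the learned tolerances
    (S16, S19), and accept only if both checks pass (S20)."""
    lines = [ln for img in images for ln in extract_virtual_lines(img)]
    contours = [extract_contour(img) for img in images]
    return (lines_within_tolerance(lines)
            and all(contour_within_tolerance(c) for c in contours))
```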
 The present invention has been described above based on an embodiment. The embodiment is an example; those skilled in the art will understand that various modifications of its components and combinations of processing steps are possible, and that such modifications also fall within the scope of the present invention.
 The present invention can provide a shoe appearance inspection technique that improves inspection efficiency and inspection accuracy.
 R1 first reference point, P1 first element point, L1 first virtual line, R2 second reference point, P2 second element point, L2 second virtual line, R3 third reference point, P3 third element point, L3 third virtual line, P4 fourth element point, L4 fourth virtual line, P5 fifth element point, L5 fifth virtual line, P6 sixth element point, L6 sixth virtual line, P7 seventh element point, L7 seventh virtual line, P8 eighth element point, L8 eighth virtual line, 12 outsole, 14 lower midsole, 16 upper midsole, 20 upper, 30 last, SL contour, 100 shoe appearance inspection system, 110 shoe appearance inspection device, 112 shoe appearance inspection learning device, 120 image acquisition unit, 124 reference point extraction unit, 126 element point extraction unit, 128 virtual line extraction unit, 130 contour extraction unit, 134 pass/fail determination unit, 136 model storage unit, 220 image acquisition unit, 224 reference point extraction unit, 226 element point extraction unit, 228 virtual line extraction unit, 230 contour extraction unit, 236 model storage unit.

Claims (6)

  1.  A shoe appearance inspection system comprising:
     an image acquisition unit that acquires an image of a shoe to be inspected;
     a reference point extraction unit that extracts, by a predetermined reference point extraction method, a reference point that is a feature point on the appearance in the image of the shoe to be inspected;
     an element point extraction unit that extracts, by a predetermined element point extraction method, a plurality of element points that are feature points on the appearance in the image of the shoe to be inspected;
     a virtual line extraction unit that extracts, from the image of the shoe to be inspected, a plurality of virtual lines each connecting the reference point to one of the plurality of element points;
     a model storage unit that stores a learning model generated by machine learning using, as teacher data, the virtual lines extracted from images of a plurality of accepted shoes; and
     a pass/fail determination unit that determines whether the shoe to be inspected is an accepted product by inputting the plurality of virtual lines extracted from the image of the shoe to be inspected into the learning model.
  2.  The shoe appearance inspection system according to claim 1, wherein
     the acquired image of the shoe is an image of the shoe in a state of being lasted on a shoe last, and
     the reference point extraction unit extracts, as the reference point, a feature point on the appearance of the last exposed from the shoe in the image.
  3.  The shoe appearance inspection system according to claim 1 or 2, wherein
     the acquired image of the shoe comprises a plurality of images taken from a plurality of angles,
     the model storage unit stores a learning model generated by machine learning using, as teacher data, virtual lines extracted from a plurality of images per shoe, and
     the pass/fail determination unit determines whether the shoe to be inspected is an accepted product by inputting into the learning model the virtual lines extracted from the plurality of images of the shoe to be inspected.
  4.  The shoe appearance inspection system according to any one of claims 1 to 3, further comprising a contour extraction unit that extracts a contour of the shoe from the image of the shoe to be inspected, wherein
     the model storage unit stores a learning model generated by machine learning using, as teacher data, the virtual lines and the contours extracted from images of a plurality of accepted shoes, and
     the pass/fail determination unit determines whether the shoe to be inspected is an accepted product by inputting the plurality of virtual lines and the contour extracted from the image of the shoe to be inspected into the learning model.
  5.  A shoe appearance inspection method comprising:
     acquiring an image of a shoe to be inspected by a predetermined image acquisition means;
     extracting, by a predetermined computer-implemented reference point extraction method, a reference point that is a feature point on the appearance in the image of the shoe to be inspected;
     extracting, by a predetermined computer-implemented element point extraction method, a plurality of element points that are feature points on the appearance in the image of the shoe to be inspected;
     extracting by computer, from the image of the shoe to be inspected, a plurality of virtual lines each connecting the reference point to one of the plurality of element points;
     reading, from a predetermined storage means, a learning model generated by machine learning using, as teacher data, the virtual lines extracted from images of a plurality of accepted shoes; and
     determining by computer whether the shoe to be inspected is an accepted product by inputting the plurality of virtual lines extracted from the image of the shoe to be inspected into the learning model.
  6.  A shoe appearance inspection program causing a computer to realize:
     a function of acquiring an image of a shoe to be inspected;
     a function of extracting, by a predetermined reference point extraction method, a reference point that is a feature point on the appearance in the image of the shoe to be inspected;
     a function of extracting, by a predetermined element point extraction method, a plurality of element points that are feature points on the appearance in the image of the shoe to be inspected;
     a function of extracting, from the image of the shoe to be inspected, a plurality of virtual lines each connecting the reference point to one of the plurality of element points;
     a function of storing a learning model generated by machine learning using, as teacher data, the virtual lines extracted from images of a plurality of accepted shoes; and
     a function of determining whether the shoe to be inspected is an accepted product by inputting the plurality of virtual lines extracted from the image of the shoe to be inspected into the learning model.
PCT/JP2020/048681 2020-12-25 2020-12-25 Shoe appearance inspection system, shoe appearance inspection method, and shoe appearance inspection program WO2022137494A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202080108063.2A CN116802483A (en) 2020-12-25 2020-12-25 Shoe appearance inspection system, shoe appearance inspection method, and shoe appearance inspection program
PCT/JP2020/048681 WO2022137494A1 (en) 2020-12-25 2020-12-25 Shoe appearance inspection system, shoe appearance inspection method, and shoe appearance inspection program
JP2022570939A JPWO2022137494A1 (en) 2020-12-25 2020-12-25

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2020/048681 WO2022137494A1 (en) 2020-12-25 2020-12-25 Shoe appearance inspection system, shoe appearance inspection method, and shoe appearance inspection program

Publications (1)

Publication Number Publication Date
WO2022137494A1 true WO2022137494A1 (en) 2022-06-30

Family

ID=82157646

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/048681 WO2022137494A1 (en) 2020-12-25 2020-12-25 Shoe appearance inspection system, shoe appearance inspection method, and shoe appearance inspection program

Country Status (3)

Country Link
JP (1) JPWO2022137494A1 (en)
CN (1) CN116802483A (en)
WO (1) WO2022137494A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107114861A (en) * 2017-03-22 2017-09-01 青岛小步科技有限公司 A kind of customization footwear preparation method and system based on imaging pressure and dimensional Modeling Technology
US20180322623A1 (en) * 2017-05-08 2018-11-08 Aquifi, Inc. Systems and methods for inspection and defect detection using 3-d scanning
WO2019044870A1 (en) * 2017-09-04 2019-03-07 日本電産コパル株式会社 Visual inspection device and product manufacturing system
US20190096135A1 (en) * 2017-09-26 2019-03-28 Aquifi, Inc. Systems and methods for visual inspection based on augmented reality
JP2019074525A (en) * 2017-10-13 2019-05-16 マネスキ、アレッサンドロMANNESCHI,Alessandro Inspection of shoes using thermal camera
US20200175669A1 (en) * 2018-12-04 2020-06-04 General Electric Company System and method for work piece inspection
CN111340098A (en) * 2020-02-24 2020-06-26 安徽大学 STA-Net age prediction method based on shoe print image

Also Published As

Publication number Publication date
JPWO2022137494A1 (en) 2022-06-30
CN116802483A (en) 2023-09-22

Similar Documents

Publication Publication Date Title
US7409256B2 (en) Footwear measurement and footwear manufacture systems and methods
US10013803B2 (en) System and method of 3D modeling and virtual fitting of 3D objects
JP4137942B2 (en) Shoe selection support system and shoe selection support method
KR102028563B1 (en) Method of measuring foot size and shape using image processing
US11176738B2 (en) Method for calculating the comfort level of footwear
KR101624203B1 (en) Realtime quality inspection method for shoes sole
CN106714914B (en) Movement posture analytical equipment and movement posture analyze information generating method
JP2022177028A (en) Information processing device, information processing method, and program
CN105976406A (en) Measurement system, measurement device, foot type measurement method, and foot type measurement system
CN108348045A (en) Method and system for the size for determining apparatus for correcting
WO2016067573A1 (en) Orientation estimation method and orientation estimation device
Sarghie et al. Anthropometric study of the foot using 3D scanning method and statistical analysis
WO2022137494A1 (en) Shoe appearance inspection system, shoe appearance inspection method, and shoe appearance inspection program
JP2020018365A (en) Foot state analysis method
CN110664409B (en) Arch type identification method based on pressure acquisition
US20210267315A1 (en) Markerless foot size estimation device, markerless foot size estimation method, and markerless foot size estimation program
Luximon et al. Sizing and grading methods with consideration of footwear styles
KR101781359B1 (en) A Method Of Providing For Searching Footprint And The System Practiced The Method
KR101678166B1 (en) System and method for manufacturing the custom made-to-order golf insole using golf swing analyzing
JP7356665B2 (en) Shoe data generation device, shoe data generation method, shoe data generation program
US20240212270A1 (en) Representations of foot features
JP2020012667A (en) Identification apparatus, identification method and program
Xiong et al. Foot measurements from 2D digital images
KR102224944B1 (en) Method for computation buffing route of shoes without soles
TWM645543U (en) Automated foot recognition and analysis system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20966980

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022570939

Country of ref document: JP

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 202080108063.2

Country of ref document: CN

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20966980

Country of ref document: EP

Kind code of ref document: A1