CN102157013A - System for fully automatically reconstructing foot-type three-dimensional surface from a plurality of images captured by a plurality of cameras simultaneously - Google Patents

System for fully automatically reconstructing foot-type three-dimensional surface from a plurality of images captured by a plurality of cameras simultaneously Download PDF

Info

Publication number
CN102157013A
CN102157013A · CN2011100914555A · CN201110091455A
Authority
CN
China
Prior art keywords
point
foot type
image
model
shoe last
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2011100914555A
Other languages
Chinese (zh)
Inventor
罗胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wenzhou University
Original Assignee
Wenzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wenzhou University filed Critical Wenzhou University
Priority to CN2011100914555A priority Critical patent/CN102157013A/en
Publication of CN102157013A publication Critical patent/CN102157013A/en
Pending legal-status Critical Current

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention, titled "System for fully automatically reconstructing foot-type three-dimensional surface from a plurality of images captured by a plurality of cameras simultaneously", relates to a system that automatically reconstructs the three-dimensional surface of a foot from multiple images. The main steps are: first, a shoe-last sample set is statistically analysed to obtain a statistical deformation model; the imaging environment is arranged; images of a calibration template and of the human foot are captured by a plurality of cameras; the statistical model is deformed and fitted to the foot images to obtain an initial estimate, from which a sparse mesh model is generated; the mesh model is then refined iteratively, where each iteration extracts feature points from every image, matches planar feature points to spatial points, and subdivides the mesh with the new spatial points; finally a foot model consistent with the target object is obtained. The invention requires neither marker points on the foot nor a high-precision laser measurement device; only cameras and a computer are used, and only the shutter exposure time and the computation time are needed. The system is fast and can be widely applied to three-dimensional reconstruction of biological skin where the accuracy requirement is moderate.

Description

System for automatically reconstructing a foot-shape three-dimensional surface from multiple images captured simultaneously by a plurality of cameras
Technical field
The present invention relates to a system for three-dimensional measurement and reconstruction of the foot shape. It automatically reconstructs the three-dimensional foot surface from multiple images captured simultaneously by a plurality of cameras, and can be used in fields such as foot measurement, shoe and shoe-last customization, motion analysis and foot medical care. It is simple to use and low in cost.
Background technology
With the continuous improvement of social modernization and living standards, people increasingly demand comfort and personalization in the shoes they wear. No two feet in the world are identical: even two people who wear the same shoe size differ in foot length, girth, arch height and gait, so the same pair of shoes will not feel equally comfortable on their feet. People in particular occupations, such as athletes, dancers and soldiers, have an even more urgent need for personalized footwear.
To design a pair of personalized shoes, three-dimensional measurement of the foot shape is the first technical step. At present there are mainly four methods of three-dimensional foot measurement:
1. Manual measurement: key positions are measured with instruments such as calipers and tape measures, as in traditional practice. It is slow, of low accuracy, and cannot directly produce a three-dimensional model.
2. Laser measurement, such as the CANFIT-PLUS™ Yeti™ 3D Foot Scanner, is currently the most popular way of obtaining three-dimensional data. It is accurate and widely applicable, but the equipment is expensive; its main components include a high-speed camera, a laser emitter, a signal processor and a high-precision stepper motor. Because it works by scanning it is slow, generally taking several minutes or more, and the accuracy at the toe and heel ends of the foot is very low and hard to handle. Laser measurement is also affected by the optical properties of the object surface; since the skin texture of the foot is rich and its optical properties are variable, the method is not well suited.
3. Contact measurement, such as with a hand-operated contact probe, is inexpensive, but it is manually operated, slow and of lower accuracy. Most importantly, contact measurement is inappropriate for an object like the foot, which is wrapped in soft skin.
4. Camera-based measurement, such as "Foot in 3D with foam impression" and the series of methods by Pan Yunhe. "Foot in 3D with foam impression" uses a hand-held two-camera device that is moved around the target object to take pictures, and three-dimensional reconstruction is then performed from the images. Because the device moves, the images from different angles must be brought into a common coordinate frame; the whole technique is difficult to implement, requires manual adjustment in many places, its accuracy is ordinary, and there is as yet no fully automatic method. Pan Yunhe proposed a series of camera-based foot-measurement methods. "Method for three-dimensional foot measurement and modeling based on a specific grid pattern" (application No. 200310108856.2) requires wearing socks printed with a special grid, which imposes restrictions, and its accuracy is also low. "Fast three-dimensional foot acquisition method based on deformation of a standard foot, oriented to sparse grids" (application No. 200510061272.3) and "Three-dimensional foot data measurement method based on surface subdivision, oriented to sparse grids" (application No. 200510061271.9) first build a standard foot library; the sparse grid model and the target foot are then aligned manually, a standard foot model with similar foot length and width is chosen from the library, the position and posture of the standard foot model are set manually, corresponding points between the standard foot model and the sparse-grid foot model are found manually, the control vertices of the sparse-grid foot model are modified according to the corresponding points, the sparse-grid foot model is rebuilt from the new control vertices, and the grid is then subdivided to obtain the three-dimensional foot data. The whole procedure is essentially manual, and the standard foot library is difficult to build.
The method of the present invention needs no marker points on the foot and no special socks; the foot is imaged directly and reconstructed in three dimensions. The reconstruction does not need a foot library: a statistical deformation model is generated from a shoe-last sample set, the statistical deformation model is fitted to the target foot, and the model is then subdivided to obtain the three-dimensional surface of the target foot. The advantages of this approach are that generating the statistical deformation model from a shoe-last sample set is simpler and more accurate than generating it from a foot library and is easy to implement, and the resulting statistical deformation model contains less data and is easier to control. Compared with recovering spatial points directly from images, introducing the statistical deformation model as an intermediate step reduces the difficulty and increases the reliability. The method organically combines a hardware device with a software system, avoiding both the high cost of a purely hardware implementation and the low accuracy of a purely software implementation; it has a high degree of automation, is easy to operate, and is suitable for wide adoption.
Summary of the invention
The technical problem to be solved by the present invention is to provide a method that needs no marker points on the foot and no special socks, and that performs automatic three-dimensional reconstruction directly from images of the foot. The invention comprises the following concrete steps:
1. Generate the shoe-last statistical deformation model: select shoe-last samples to form a shoe-last sample set, design marker points, obtain the marker points of each sample to form its point distribution model, and then perform principal component analysis on the set of point distribution models to generate the shoe-last statistical deformation model;
2. Arrange the imaging environment: arrange a plurality of cameras according to the shoe-last statistical deformation model, then arrange the lighting;
3. Capture images: image the foot calibration template, calibrate the intrinsic and extrinsic parameters of each camera, and then image the foot;
4. Initial foot estimation: make an initial estimate of the foot with the shoe-last statistical deformation model, including posture estimation and shape estimation;
5. Generate the mesh model: convert the initial estimate into a mesh model;
6. Subdivide the mesh model: generate new spatial points from the features of each view image and subdivide the mesh model; the subdivision proceeds by multi-resolution iteration and ends when no further detail points can be extracted from the images;
7. Optimize the mesh arrangement and output the three-dimensional foot model.
The invention first captures the rough outline of the foot with the statistical deformation model of the shoe last, and then captures the fine features of the foot by mesh subdivision, recovering the foot surface from coarse to fine over multiple resolutions, large features before small ones. The shoe last represents what foot shapes have in common; the shoe last is represented by its point distribution model, and the statistical properties of shoe-last shape are represented by the statistical deformation model. A deformation model with only a small number of points first captures the rough outline of the foot, and the initial model is then subdivided iteratively. This ensures that the shape of the reconstructed model follows the target shape while its achievable accuracy grows with the amount of information provided by the view images; that is, the topology of the reconstructed model adapts automatically to the target shape, and its accuracy adapts automatically to the amount of image information.
As an improvement of the present invention, the marker points are derived from the key points of the bottom pattern view and the side profile view of the shoe-last samples and reflect the shape features of the shoe last.
As an improvement of the present invention, generating the shoe-last statistical deformation model specifically comprises the following steps:
1. Select shoe-last samples to form the shoe-last sample set;
2. Design the marker points, determine the position of the marker points on every shoe-last sample, and scan the marker positions into the computer with a three-dimensional scanner as the point distribution model of that sample;
3. Align the point distribution models of all samples in the shoe-last sample set so that the distances between samples are minimized over the whole sample set;
4. Apply principal component analysis (PCA) to the set of point distribution models to generate the statistical deformation model of the shoe last, decomposing the shoe-last shape into a common part and an individual part, the individual part being the product of the individual shape-factor coefficients and the individual shape-factor vectors.
As an improvement, arranging a plurality of cameras according to the shoe-last statistical deformation model specifically comprises the following steps:
1. Initialization: take the shoe-last statistical deformation model produced in the preceding step, preset thresholds on the in-silhouette and on-silhouette imaging information of a model point, and set the initial information of every model point to 0;
2. Compute the camera distribution sphere: compute the radius R of the camera distribution sphere from known parameters such as camera focal length and target size;
3. Compute the camera position with the largest information about the dominant shape factors: on the camera distribution sphere, compute the camera position that contains the most information about the dominant shape factors of the shoe-last statistical deformation model;
4. Output the camera position: record the chosen camera position, increase the information counts of the model points that are visible and of those lying on the silhouette under this camera position, and remove the model points whose in-silhouette and on-silhouette imaging information both exceed the preset thresholds;
5. Compute a new statistical deformation model: recompute the shoe-last statistical deformation model after the model points have been reduced;
6. Iterate: repeat steps 3, 4 and 5 of this stage until every model point of the statistical deformation model has been imaged sufficiently to be reconstructed from the image information;
7. Termination: output the coordinates of the camera positions, and arrange the cameras according to these coordinates.
As an improvement of the present invention, making the initial foot estimate from the shoe-last statistical deformation model specifically comprises the following steps:
1. Initialization: take the shoe-last statistical deformation model and the camera parameters produced in the preceding steps, together with the captured foot images;
2. Compute pose parameters such as foot size, orientation and position: from the images taken in the direction of the sole, compute the foot size and orientation from the position of the sole relative to the calibration marks and grooves on the load-bearing glass plate; then segment the multiple foot images into binary images, i.e. separate foot from background, with the foot black and the background white; compute the centroid of the foot in each image plane and from these compute the centroid of the foot in space, which gives the position of the foot;
3. Initial shape estimation: segment the multiple foot images into four-value images, i.e. separate foot from background — the part of the sole in contact with the glass plate, which appears darkest in the image, is black; the part of the sole not in contact with the glass plate, which appears lighter, is dark grey; the part of the foot that does not belong to the sole, which appears lightest, is light grey; and the background is white — then vary the individual factors of the shoe-last statistical deformation model so that the summed colour of the projected model points is maximal, obtaining the initial shape estimate of the foot;
4. Contour consistency: contour consistency is computed iteratively; in each iteration the model points are first classified as interior or exterior points, the displacement of the interior and exterior points is computed, and the whole model is then constrained by the shoe-last statistical deformation model, yielding a new shape close to the statistical deformation model, so that the initial estimation model is fitted step by step onto the true foot;
5. Illumination consistency: within a range of several pixels, search for the sub-pixel image match point with the best illumination consistency to generate accurate model-point positions, giving a more precise model;
6. Termination: output the estimation model that captures the rough outline of the foot.
As an improvement of the present invention, the step of subdividing the mesh model is: first detect the features in every view image and segment them into planar feature points; then match the planar feature points to generate new spatial points; then subdivide the mesh with the new spatial points; and repeat the above steps iteratively within a multi-resolution framework until no further detail features remain in the images, completing the dense reconstruction.
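As an illustration only (not taken from the patent), the sketch below shows the geometric core of such a refinement loop in Python: new spatial points are triangulated from matched feature pairs and inserted into the mesh by 1-to-3 triangle splitting, repeated over several resolution levels. The feature detection and matching are abstracted into a `matches(level)` callable, and DLT triangulation plus nearest-centroid insertion are assumptions rather than the patent's specific procedures.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one correspondence x1 <-> x2 from 3x4 projection matrices P1, P2."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def split_triangle(vertices, faces, face_idx, new_vertex):
    """Insert new_vertex into triangle face_idx, replacing it by three smaller triangles."""
    vertices = np.vstack([vertices, new_vertex])
    k = len(vertices) - 1
    a, b, c = faces[face_idx]
    faces = [f for i, f in enumerate(faces) if i != face_idx]
    faces += [(a, b, k), (b, c, k), (c, a, k)]
    return vertices, faces

def refine(vertices, faces, matches, cameras, levels=3):
    """Multi-resolution refinement: triangulate the matched feature pairs of each level and insert
    each new spatial point into the triangle with the closest centroid; stop when a level yields
    no new points (no further detail recoverable from the images)."""
    for level in range(levels):
        new_pts = [triangulate(cameras[i], cameras[j], xi, xj) for i, j, xi, xj in matches(level)]
        if not new_pts:
            break
        for p in new_pts:
            centroids = np.array([vertices[list(f)].mean(axis=0) for f in faces])
            face_idx = int(np.argmin(np.linalg.norm(centroids - p, axis=1)))
            vertices, faces = split_triangle(vertices, faces, face_idx, p)
    return vertices, faces
```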
The invention solves the main technical difficulty shoemakers face when making shoes to measure, and has the following three advantages:
1. No marker points need to be placed on the foot and no special socks need to be worn; the foot can be imaged and reconstructed in any posture. The statistical deformation model provides the initial estimate, and the iterative subdivision deforms the initial model into the target object, so the method imposes no restriction on the target foot, is reasonably fast, and adapts to feet of different shapes;
2. The method combines several techniques — statistical models, deformation models, image segmentation and iterative mesh subdivision — and repeatedly extracts cues from the foot images for surface reconstruction, which is quite different from other surface-reconstruction methods;
3. The method uses only cameras and a computer, and the reconstruction is completed automatically by software. Although its accuracy is lower than that of laser measurement equipment, an object like the foot neither needs nor allows extremely high-precision measurement, so the method is practical. No scanning is required, only the shutter exposure time and the computation time, so it is fast; it is widely applicable wherever the accuracy requirement is not extremely high.
Description of drawings
Fig. 1 is the flow chart of the method of the present invention;
Fig. 2 shows the two key views of shoe-last design, the bottom pattern view and the side profile view;
Fig. 3 shows the positions of the marker points, meridian lines and latitude lines on the right side of the shoe last;
Fig. 4 shows the positions of the marker points, meridian lines and latitude lines on the bottom of the shoe last;
Fig. 5 shows the positions of the marker points, meridian lines and latitude lines on the left side of the shoe last;
Fig. 6 shows the spatial positions of the 58 marker points of the shoe last;
Fig. 7 shows the meridian and latitude lines marked on the shoe last with thin white adhesive tape;
Fig. 8 is the flow chart of the alignment algorithm for the point distribution models of the shoe-last sample set;
Fig. 9 is a schematic diagram of the imaging environment of the present invention;
Fig. 10 is a schematic diagram of the foot calibration template;
Fig. 11 is a schematic diagram of the imaging principle for the views at the sole position;
Fig. 12 defines the Cartesian and spherical coordinates of the imaging environment of the present invention;
Fig. 13 is an overall drawing of the imaging environment and lighting arrangement of the present invention;
Fig. 14 is the view image of camera 1;
Fig. 15 is the view image of camera 2;
Fig. 16 is the view image of camera 3;
Fig. 17 is the view image of camera 4;
Fig. 18 is the view image of camera 5;
Fig. 19 is the view image of camera 6;
Fig. 20 is the view image of camera 7;
Fig. 21 is the view image of camera 8;
Fig. 22 illustrates binarizing the 8 view images, computing the centroid of the foreground in each, and from these computing the centroid of the foot in space;
Fig. 23 illustrates segmenting the foot images into four-value images for the initial shape estimate;
Fig. 24 illustrates image-driven edge projection points moving in space;
Fig. 25 is the flow chart of step 6, subdividing the mesh model;
Fig. 26 illustrates the mesh projection cutting each view image;
Fig. 27 explains the visible views and principal view of a triangle;
Fig. 28 shows the spatial relationship between feature points, match points and spatial points;
Fig. 29 illustrates the smoothness, ordering and illumination-consistency constraints.
Embodiment
In the embodiment of the present invention, the three-dimensional foot surface is automatically reconstructed from multiple images captured simultaneously by a plurality of cameras; as shown in Fig. 1, the flow comprises the following seven steps:
1. Generate the shoe-last statistical deformation model: select shoe-last samples to form the shoe-last sample set, design the marker points, obtain the marker points of each sample to form its point distribution model, and then perform principal component analysis on the set of point distribution models to generate the shoe-last statistical deformation model.
1.1) Select shoe-last samples to form the shoe-last sample set. The choice of the sample set has a significant influence on the statistical deformation model generated later, and therefore on the accuracy of the initial foot estimate. Different sample sets yield different statistical deformation models. If the sample set emphasizes a certain type of shoe last, the statistical deformation model will express that type more accurately and other types more coarsely; if the sample set consists entirely of a particular special type, the resulting model may be unable to express other types at all. Therefore, when customizing a particular type of shoe last, samples of the corresponding type should be selected. The present embodiment selected 121 different round-toe leather-shoe lasts; the numbers of samples in each size segment for small children, medium children, large children, adult women and adult men are shown in Table 1, and the quantities were determined from the shoe-last size grading and population distribution statistics.
Table 1. Composition of the shoe-last sample set (table reproduced as an image in the original).
1.2) Design the marker points, determine their positions on every shoe-last sample, and then scan the marker positions into the computer as the point distribution model of the sample. This specifically comprises the following four steps:
(1.2.1) Derive shoe-last marker points that reflect the shape features of the shoe last from the key points of the bottom pattern view and the side profile view.
Fig. 2 shows the two key views of shoe-last design, the bottom pattern view and the side profile view. The key points in these two views are determined first, as in conventional shoe-last design; the contours in the bottom pattern view and the side profile view are then obtained, and the three-dimensional shoe last is generated from them. In the present embodiment, taking the heel end point O of the shoe last as the start and the toe point A as the end, with the length OA normalized to 1, key points are placed at positions 1, 0.92, 0.83, 0.72, 0.67, 0.61, 0.58, 0.50, 0.42, 0.37, 0.15 and 0 respectively, as shown in Table 2.
Table 2. Positions of the key points in the bottom pattern view (table reproduced as an image in the original).
On the side profile view, point J1 is the start and point O4 is the end, with the length J1O4 normalized to 1; points are set at positions 0, 0.05, 0.11, 0.18, 0.22, 0.30, 0.40, 0.56, 0.94 and 1 respectively, as shown in Table 3, where J1 is the point corresponding to J in the bottom pattern view.
Table 3. Positions of the key points in the side profile view (table reproduced as an image in the original).
The points derived from these key points are chosen as the marker points, 58 points in total; their numbering and positions are shown in Fig. 3, Fig. 4, Fig. 5, Fig. 6 and Fig. 7.
When determining the marker points on a shoe-last sample, the toe point of the shoe last (or, on a flat-toe last, the midpoint of the flat toe section at the sole) is first set manually as marker point 1. Through marker point 1, at least five latitude lines that are relatively well separated and reflect the shoe-last features are laid out manually with thin tape; then at least seven meridian lines that are relatively well separated and reflect the shoe-last features are laid out manually with thin tape, and the intersections of meridian and latitude lines that reflect the shape features of the shoe last are taken as marker points. In a preferred scheme there are five latitude lines and the tape is thin white adhesive tape. First latitude line: the front point of the last top opening is set as marker point 6 and the back point of the top opening as marker point 9; the white tape is first stretched into a straight line and stuck onto the shoe last through marker points 1, 6 and 9 to form a closed loop. Second latitude line: with the shoe last set upright and viewed from above, the white tape is stuck along the largest contour of the last to form a closed loop. Third latitude line: the intersection of the first latitude line with the last bottom is set as marker point 11; the white tape is first drawn into a straight line and then stuck onto the last surface through marker points 1 and 11 to form a closed loop. Fourth latitude line: white tape stuck through marker points 1 and 9 forms the shortest closed loop. Fifth latitude line: the most protruding point of the last heel on the first latitude line is taken as marker point 10; the tape is drawn into a straight line and stuck onto the last surface through marker points 10 and 1 to form the fifth latitude line, which passes around the heel portion of the last. The positions of the five latitude lines are determined as shown in Fig. 3, Fig. 4 and Fig. 5.
In a preferred scheme there are eight meridian lines. Taking the length of the segment from point 1 to point 11 as 1 and marker point 11 as the start, marker points are taken at positions 0.15, 0.25, 0.37, 0.42, 0.58, 0.61, 0.67, 0.72, 0.83 and 0.92 along this length, denoted H, H3, G, G3, F, J, E, D, C and B, where point H is marker point 12, H3 is marker point 13, G3 is marker point 14, J is marker point 15 and B is marker point 17. Seventh meridian line: through point H, tape is stuck on the last bottom perpendicular to the first latitude line and on the last surface perpendicular to the second latitude line, forming a closed loop. Sixth meridian line: through point H3, tape is stuck perpendicular to the first latitude line and through marker point 6 on the last surface, forming the closed loop of minimum girth. Fifth meridian line: through point G a straight line perpendicular to the first latitude line is stuck; its intersection with the second latitude line on the outer side of the last bottom is G1, i.e. marker point 24; tape stuck through marker points 24, 14 and 6 on the last surface forms the closed loop of minimum girth. Fourth meridian line: thin white tape stuck through marker point 24, marker point 14 and the most recessed point at the back of the last is the fourth meridian line, the closed loop of minimum girth through marker points 24 and 14. Third meridian line: through point F a straight line perpendicular to the first latitude line is drawn, whose intersection with the second latitude line on the outer side of the last bottom is F1, i.e. marker point 23; through point E a straight line perpendicular to the first latitude line is drawn, whose intersection with the second latitude line on the inner side of the last bottom is E1, i.e. marker point 30; tape stuck through marker points 23, 15 and 30 on the last surface forming the closed loop of minimum girth is the third meridian line. Second meridian line: through point D a straight line perpendicular to the first latitude line is drawn, whose intersection with the second latitude line on the outer side of the last bottom is D1, i.e. marker point 22; through point C a straight line perpendicular to the first latitude line is drawn, whose intersection with the second latitude line on the inner side of the last bottom is C1, i.e. marker point 31; tape stuck through marker points 22 and 31 on the last surface forms the closed loop of minimum girth. First meridian line: thin white tape stuck through point B perpendicular to the first latitude line forms the closed loop of minimum girth. Eighth meridian line: tape stuck through marker points 11 and 6 on the last surface forms the closed loop of minimum girth. The positions of the eight meridian lines are determined as shown in Fig. 3, Fig. 4 and Fig. 5.
As shown in Fig. 3, Fig. 4, Fig. 5, Fig. 6 and Fig. 7, after applying the above procedure 58 points are set on the shoe last. Points 1-17 (the first latitude line) are the key points of the side profile view, and point 1 together with points 21-32 (the second latitude line) are the key points of the bottom pattern view. Point 1, point 11 and points 41-51 form the third latitude line; point 1, point 9 and points 60-75 form the fourth latitude line; and points 80, 10 and 81 form the fifth latitude line — five latitude lines in total. The cross-sections formed by points 17, 21, 41, 75, 2, 60, 51 and 32 (first meridian line); points 22, 16, 31, 50, 61, 3, 74 and 42 (second meridian line); points 23, 15, 30, 49, 62, 4, 73 and 43 (third meridian line); points 24, 14, 29, 48, 63, 5, 72 and 44 (fourth meridian line); points 24, 14, 29, 47, 64, 6 and 71 (fifth meridian line); points 25, 13, 28, 46, 65, 6, 70 and 45 (sixth meridian line); points 26, 12, 27, 80, 67, 68 and 81 (seventh meridian line); and points 6, 66, 80, 11, 81 and 69 (eighth meridian line) constitute the eight meridian sections of the last surface. The positions of the eight meridian lines are shown in Fig. 6 and Fig. 7; these meridians are all measuring positions prescribed by the national standard for foot measurement. In Fig. 6, the points in squares are marker points visible from the front, and the points in circles are marker points hidden on the back.
(1.2.2) Marker-point extraction. To ensure that marker points of the same name lie at corresponding positions on different shoe lasts, the meridian and latitude lines are first marked on the shoe last with thin white adhesive tape before scanning, as shown in Fig. 7. Because the meridian and latitude lines are deliberately chosen curves whose positions are relatively easy to determine, the positions of their intersections — the marker points — are likewise guaranteed.
(1.2.3) The marker points are then scanned into the computer with the Immersion MicroScribe MX contact three-dimensional scanner.
(1.2.4) Scanning the shoe-last samples yields the point distribution models: the 121 shoe-last samples yield 121 point distribution models.
1.3) The point distribution models of all samples in the shoe-last sample set are brought into the same coordinate system, and the corresponding points of the distribution models are then aligned with an ICP-style algorithm so that the distances between samples are minimized over the whole sample set, giving the aligned point distribution models of the sample set. As shown in Fig. 8, the procedure comprises the following 6 steps:
(1.3.1) Compute the mean model S of the point distribution models; the mean model S is the point distribution model obtained by averaging the coordinates of corresponding points over all point distribution models:

S = \frac{1}{121} \sum_{i=1}^{121} s_i

where s_i denotes the point distribution model of the i-th shoe-last sample.
(1.3.2) Each point distribution model s_i is aligned to the mean model S through operations of scaling λ, translation T and rotation R. The three components of the translation vector T are T_x, T_y, T_z, and the three angles of the rotation R are α, β, γ.

The total alignment operand C is defined as

C = \frac{1}{121} \sum_{i=1}^{121} \left( |\lambda| + |T_x| + |T_y| + |T_z| + |\alpha| + |\beta| + |\gamma| \right)

The total alignment operand C is initialized to zero.
(1.3.3) Select the point distribution model s_i of one shoe-last sample and align it to the mean model through scaling λ, translation T and rotation R. If the distance between s_i and the mean model is minimal, i.e. the sum of the distances between each marker point of s_i and the corresponding marker point of the mean model is minimal, proceed to the next step; otherwise continue scaling, translating and rotating. The distance D between the point distribution model s_i and the mean model is defined as

D = (S - s_i)^T W (s_i - S)

where W is a unit diagonal (identity) weight matrix. The scaling λ, translation T and rotation R used to align this sample's point distribution model to the mean model are added to the total alignment operand C.
(1.3.4) Check whether every point distribution model in the sample set has been processed; if so, proceed to the next step, otherwise continue with step 1.3.3 of this stage;
(1.3.5) The aligned set of point distribution models is more consistent, so a new mean model is computed. The new mean model is more accurate than the previous one and better represents what the whole sample set has in common. The iteration continues until all models are aligned, the alignment operand no longer increases and the mean model no longer changes. The convergence condition is that the total alignment operand C no longer changes between iterations, i.e. the change ΔC between the current and the previous alignment approaches 0:

\Delta C = \sum \left( |\Delta\lambda| + |\Delta T_x| + |\Delta T_y| + |\Delta T_z| + |\Delta\alpha| + |\Delta\beta| + |\Delta\gamma| \right) \rightarrow 0
(1.3.6) Output the corresponding-point models.
The present embodiment does not align all sample point models in a single pass but aligns them iteratively. The distance between two models is the sum of the distances between all their corresponding points, and the termination condition of the alignment is that the alignment movement of each sample's point model towards the mean shoe last — the scale λ, rotation R and translation T — no longer changes. This alignment lets the distribution of each marker point reflect the true situation as closely as possible.
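A minimal sketch of this iterative alignment follows, assuming a standard SVD-based (Kabsch) similarity fit for the scaling, rotation and translation and a simplified convergence measure; the patent does not prescribe a particular solver, so this is only one possible realization.

```python
import numpy as np

def similarity_align(src, dst):
    """Best scale/rotation/translation mapping src (Nx3) onto dst (Nx3), least squares (Kabsch + scale)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(A.T @ B)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    lam = (S * [1.0, 1.0, d]).sum() / (A ** 2).sum()
    T = mu_d - lam * (R @ mu_s)
    return lam, R, T

def align_sample_set(models, iters=50, tol=1e-8):
    """Iteratively align every point distribution model (each 58x3) to the evolving mean model."""
    models = [m.astype(float) for m in models]
    prev_cost = np.inf
    for _ in range(iters):
        mean = np.mean(models, axis=0)
        cost = 0.0
        for i, m in enumerate(models):
            lam, R, T = similarity_align(m, mean)
            models[i] = lam * (R @ m.T).T + T
            # simplified "alignment movement" measure: deviation from the identity transform
            cost += abs(lam - 1.0) + np.abs(T).sum() + np.abs(R - np.eye(3)).sum()
        if abs(prev_cost - cost) < tol:   # alignment movement no longer changes
            break
        prev_cost = cost
    return models, np.mean(models, axis=0)
```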
1.4) Generate the statistical deformation model of the shoe last from the set of point distribution models by principal component analysis, decomposing the shoe-last shape into a common part and an individual part, the individual part being the product of the individual shape factors and the individual shape-factor coefficients.

Principal component analysis is an information-extraction method that recombines a number of originally correlated variables into a small set of mutually independent composite indicators that reflect the main information of the original variables. These few independent composite indicators are the individual shape factors.
The PCA computation proceeds as follows. First each point distribution model is expressed as a column vector of its 58 marker points P_1, P_2, ..., P_58:

s_i = \{ P_1, P_2, \ldots, P_{58} \}^T

where each marker point P_i is a 1 × 3 row vector. Let d(s_i) denote the deviation of the point distribution model s_i from the mean model S:

d(s_i) = s_i - S

The deviations of all point distribution models together form the deviation matrix

\Delta s = [\, d(s_1) \; d(s_2) \; \cdots \; d(s_{121}) \,]
The covariance matrix C_Σ can then be expressed as

C_\Sigma = \frac{1}{121} \sum_{i=1}^{121} d(s_i)\,[d(s_i)]^T = \frac{1}{121} \Delta s\,(\Delta s)^T
Solve for the eigenvalues and unit eigenvectors of the covariance matrix C_Σ and sort them by decreasing eigenvalue; let the sorted eigenvalues and unit eigenvectors be λ_PCAi and U_i:

C_\Sigma U_i = \lambda_{PCAi} U_i, \quad i = 1, 2, \ldots, \quad \lambda_{PCAi} \ge \lambda_{PCA(i+1)}

where U_1, U_2, ..., U_174 are the unit eigenvectors, i.e. the components of the individual shape of the shoe last — the individual shape factors — and λ_PCA1, λ_PCA2, ..., λ_PCA174 represent the weight of each individual shape factor in the total individual shape, i.e. how strongly it influences the total individual shape.
Because d(s_i) has zero mean, small eigenvalues contribute very little to the total individual shape and can be ignored. The shape information described by all the original unit eigenvectors can therefore be described accurately by the N leading unit eigenvectors with the largest weights; redundant information is removed and little information is lost. This is the basis on which principal component analysis performs data compression and dimensionality reduction. In the present embodiment, the 14 leading unit eigenvectors with the largest weights are selected and arranged in a row as the vector U = [U_1 U_2 ... U_14], called the individual shape-factor vector. The statistics over the 121 shoe-last shapes show that these 14 leading eigenvectors account for 96% of the total individual shape, so the individual shape vector U can express the shoe-last shape adequately with little loss. The statistical deformation model of the shoe last is expressed as:
s_i = S + U \times b_{3D} = S + \sum_{i=1}^{14} U_i \times b_i
where S is the mean model, representing the common part of the shoe last; U is the individual shape-factor vector, representing the combination of the shoe last's individual shape factors; and b_3D = [b_1 b_2 ... b_14] is the individual shape-factor coefficient vector, whose elements b_i are the individual shape-factor coefficients. Linearly combining the individual shape vectors according to the coefficient vector b_3D yields a concrete shoe last; different values of b_3D correspond to different shoe-last shapes. Each factor coefficient in b_3D may vary only within a certain range, and a model outside this range can be considered to no longer be a shoe last. Let

b_i \in [\, b_{i\min},\; b_{i\max} \,]

The range of the individual shape-factor coefficient vector b_3D, which may be called the shoe-last space, is defined jointly by the ranges of the individual factor coefficients; the concrete values are determined from the statistics of the shoe-last sample set, and the size of the range is denoted D_max.
Obviously, the description of this step is only an example. Many variations of the marker points and of the computation methods of the individual sub-steps are possible without departing from the principles and scope of the claims of the present invention.
2. Arrange the imaging environment: a plurality of cameras and the lighting are arranged according to the statistical properties of the shoe-last sample-set shape, as shown in Fig. 9, Fig. 10, Fig. 11, Fig. 12 and Fig. 13. The environment includes:
2.1) Camera support frame 1. As shown in Fig. 9, the camera support frame 1 has four parallel, equally spaced vertical beams 11 of equal height; the upper ends of the four vertical beams 11 are fixed by two crossing top beams 12; four support beams 13 are arranged between the lower-middle parts of the four vertical beams 11; two template support beams 14 are mounted on the opposite pair of support beams 13; the calibration template 2 is mounted inside the camera support frame 1 on the two template support beams 14; and the cameras 3 are attached to the camera support frame 1 with clips. In a preferred scheme the vertical beams 11 are 1.7 m long and the top beams 12 are 2.1 m long; the four vertical beams 11 and the two top beams 12 are made of square steel bar and form a rectangular frame; at a height of 0.7 m the four support beams 13 enclose a square, and the two template support beams 14, spaced 0.7 m apart, are placed in the middle of this square.
2.2) Foot calibration template 2. As shown in Fig. 10, the foot calibration template 2 is a transparent acrylic (PMMA) plate engraved with multiple rows and columns of equally spaced grid lines. A centre mark is engraved at the centre of the template 2, and orientation mark groups are engraved around the centre mark; the template 2 is mounted inside the camera support frame 1 on its supports. In a preferred scheme, the foot calibration template 2 is an acrylic plate 0.5 m × 0.7 m and 5 cm thick, made of highly transparent polymethyl methacrylate with a transmittance of 90%-92% and a refractive index n of 1.49. The grid is engraved on one face with a flat-end milling cutter of 1 cm diameter and, as shown in Fig. 10, consists of 16 × 16 grid lines with a line spacing of 3 cm; the centre mark is the superposition of an "O" and a cross ("十"). Taking the foot calibration template 2 as the origin, the horizontal grid lines as the X axis and the vertical grid lines as the Y axis, the positions and layout of the orientation mark groups are as follows (the minus signs dropped in the original are restored so that each cell is given by its upper-left and lower-right corners):

the cell [(-10.5, 1.5), (-7.5, -1.5)] contains the digit 0; the cell [(-10.5, 10.5), (-7.5, 7.5)] contains the digit 1; the cell [(7.5, 10.5), (10.5, 7.5)] contains the digit 2; the cell [(7.5, -7.5), (10.5, -10.5)] contains the digit 4; the cell [(-10.5, -7.5), (-7.5, -10.5)] contains the digit 3; the cell [(-16.5, 16.5), (-13.5, 13.5)] contains the digit 6; the cell [(13.5, 16.5), (16.5, 13.5)] contains the digit 7; the cell [(13.5, -13.5), (16.5, -16.5)] contains the digit 8; the cell [(-16.5, -13.5), (-13.5, -16.5)] contains the digit 9; the cell [(-19.5, 19.5), (-16.5, 16.5)] contains a "/" mark; the cell [(16.5, 19.5), (19.5, 16.5)] contains a "×" mark; the cell [(16.5, -16.5), (19.5, -19.5)] contains a "*" mark; the cell [(-19.5, -16.5), (-16.5, -19.5)] contains a "十" (cross) mark; the cell [(-22.5, 22.5), (-19.5, 19.5)] contains "O"; the cell [(19.5, 22.5), (22.5, 19.5)] contains "OO"; the cell [(19.5, -19.5), (22.5, -22.5)] contains "OOO"; the cell [(-22.5, -19.5), (-19.5, -22.5)] contains "OOOO".

The four values of each coordinate pair above are the x, y coordinates of the upper-left corner and the x, y coordinates of the lower-right corner. The orientation mark groups are four groups of distinct patterns, each group consisting of four patterns located at different distances from the centre mark; the orientation mark groups help identify the orientation and reduce the amount of computation in modeling. At least three groups of orientation marks are used so that the orientation can still be identified easily when the foot covers part of the marks; three, five or another number of groups of orientation marks may also be designed.
2.3) Compensation of the sole images. As shown in Fig. 9, three cameras 3 are located below the foot calibration template 2 and must image through the acrylic plate. Because light is laterally displaced when it passes through the acrylic plate, the images formed by the cameras 3 below the template can only be used for multi-view reconstruction after a compensating calculation; the imaging model of the views at the sole position is shown in Fig. 11. Compensation method: as shown in Fig. 11, for every pixel of an image collected by a camera 3 below the foot calibration template 2, the pixel offsets du and dv in the camera plane are computed from the known parameters — the template thickness D, the camera 3 coordinates, the height H of the camera 3 below the template, the pixel coordinates and the refractive index n — and the image is then re-formed. For a camera 3 at height H below the glass plate, by geometrical optics a point P(X_P, Y_P, 0) on the template is imaged at p and, compared with the case without the glass plate, is laterally displaced by a distance d:
d = D \sin\alpha_1 \left( 1 - \frac{\cos\alpha_1}{\sqrt{n^2 - \sin^2\alpha_1}} \right)
With camera optical-centre coordinates (X_C, Y_C, Z_C), the following relations hold:

Z_C = -(D + H)

\sin\alpha_1 = \frac{\sqrt{(X_C - X_P)^2 + (Y_C - Y_P)^2}}{\sqrt{H^2 + (X_C - X_P)^2 + (Y_C - Y_P)^2}}, \qquad \cos\alpha_1 = \frac{H}{\sqrt{H^2 + (X_C - X_P)^2 + (Y_C - Y_P)^2}}
By calculation, for a camera located at (600, -200, -440) the offset d at the template origin (0, 0) is 2.2193 mm. As the above equations show, d can be decomposed into du and dv, and the u and v directions can be processed separately to compute du and dv.
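A small sketch of this compensation follows, assuming a plate thickness D = 50 mm (from the stated 5 cm), the height convention Z_C = -(D + H), and a decomposition of d into du, dv along the camera-to-point direction; these conventions are assumptions, so the printed example value will not necessarily reproduce the 2.2193 mm quoted above.

```python
import numpy as np

def refraction_offset(cam_xyz, point_xy, D=50.0, n=1.49):
    """Lateral displacement d (mm) of a template point seen through a plate of thickness D,
    together with its assumed decomposition into du, dv along the template's x and y axes."""
    Xc, Yc, Zc = cam_xyz
    Xp, Yp = point_xy
    H = -Zc - D                          # camera height below the plate: Z_C = -(D + H)
    dx, dy = Xc - Xp, Yc - Yp
    r = np.hypot(dx, dy)                 # horizontal distance camera -> point
    L = np.hypot(H, r)
    sin_a, cos_a = r / L, H / L
    d = D * sin_a * (1.0 - cos_a / np.sqrt(n ** 2 - sin_a ** 2))
    if r == 0.0:
        return 0.0, 0.0, 0.0
    return d, d * dx / r, d * dy / r     # d and its du, dv components

# Example with the camera position quoted in the text; result depends on the assumed D and H convention.
print(refraction_offset((600.0, -200.0, -440.0), (0.0, 0.0)))
```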
2.4) Arrangement of the cameras 3. To analyse the camera-arrangement problem, consider how the human eye observes: under normal conditions an observer only needs a frontal photograph to recognize who a person is; but even for an acquaintance, if only a side photograph with few distinctive features is available, it is hard to guess who the person is; and from a photograph of the back it is harder still. This shows that different views carry different amounts of information, and that more reconstruction cues can be obtained from information-rich views. The main task of view arrangement is therefore to let each view contain as much information as possible while the information repeated between views is as small as possible.
Because the shoe-last statistical deformation model represents the statistical properties of shoe-last shape, and its individual shape factors differ in importance, primary and secondary factors must be distinguished. The camera positions are computed from the statistical properties of the shoe-last shape, choosing viewing angles that reflect the main shape factors in the images as much as possible. The statistical deformation model of the sample shoe last is:
s_i = S + U \times b_{3D} = S + \sum_{i=1}^{14} U_i \times b_i
The individual factor coefficient vector b_3D uniquely determines the shape of the shoe last. A view that reflects the coefficient vector b_3D as fully as possible is therefore an information-rich view. If P_proj is the projection matrix onto a given image plane, the projection of the model s_i is

P_{proj} \times s_i = P_{proj} \times S + P_{proj} \times U \times b_{3D}
P_proj × U × b_3D is the planar shape obtained after projection. The larger the variation space of the projected planar shape P_proj × U × b_3D, the more fully it reflects the spatial coefficient vector b_3D and the more information the view contains; the smaller that variation space, the less fully it reflects b_3D and the less information the view contains. The variation space V_proj of P_proj × U × b_3D is calculated as follows:
V_{proj} = \sum_{i=1}^{14} \lambda_{PCAi} \, \| P_{proj} \times U_i \| \, ( b_{i\max} - b_{i\min} )
where λ_PCAi is the eigenvalue of the individual shape factor U_i and (b_imax − b_imin) is the variation range of the element b_i of the coefficient vector b_3D; λ_PCAi serves as the weight of each individual factor. The sum of the projected variation spaces of the 14 individual factors can be regarded as how well the camera at this position reflects the main shape variations, i.e. as the amount of information of the view at this position.
Following this principle, the camera position with the largest amount of information is selected first, and then, while keeping the repeated information as small as possible, the position with the largest information among the remaining positions is computed. To keep the information repeated between views small, marker points of the mean shoe-last model that have already been imaged often enough are excluded so as to avoid view redundancy. In principle each marker point can be reconstructed once it is imaged by two cameras, but in practice a marker point may need more than two camera images to be reconstructed accurately. Moreover, a marker point imaged on the silhouette of the model is easier to locate, match and reconstruct than one imaged inside the silhouette, and the two cases require different numbers of camera images. Two records are therefore kept for each of the 58 marker points of the mean shoe last: the in-silhouette imaging information and the on-silhouette imaging information. When both exceed their preset thresholds, the marker point can be reconstructed and no longer takes part in the computation; a new statistical deformation model is recomputed for the remaining marker points. Because the main individual shape factors of the new statistical deformation model are no longer the same, the camera position with the largest information determined from it will generally differ as well. The above process is iterated until, for all 58 marker points, both the in-silhouette and the on-silhouette imaging information exceed the preset thresholds, at which point a reasonable camera arrangement has been computed.
Following this principle, the positions of the cameras 3 are computed in the following steps:
(2.4.1) Initialization: take the shoe-last statistical deformation model produced in step 1, preset the thresholds on the in-silhouette and on-silhouette imaging information of a model point, and set the initial information of every model point to 0;
(2.4.2) Compute the camera distribution sphere: taking the centre of the foot calibration template 2 as the origin, the direction of the template support beams 14 as the Y axis and the vertical as the Z axis, define a Cartesian coordinate system, and define a spherical coordinate system with the same origin and X axis, as shown in Fig. 12; compute the radius R of the camera distribution sphere from the known parameters (camera focal length, target size);
(2.4.3) Compute the camera position with the largest information about the dominant shape factors: traverse the positions on the camera distribution sphere and compute the position with the largest amount of information;
(2.4.4) Record the chosen camera position, increase the information counts of the visible marker points and of the marker points on the silhouette under this camera position, and delete the marker points whose in-silhouette and on-silhouette imaging information both exceed the preset thresholds;
(2.4.5) Compute a new statistical deformation model: recompute the statistical deformation model after the marker points have been reduced;
(2.4.6) Iterate: repeat steps (2.4.3), (2.4.4) and (2.4.5) until every marker point of the statistical deformation model has been imaged sufficiently to be reconstructed from the image information;
(2.4.7) Termination: output the coordinates of the camera positions and arrange the cameras according to these coordinates.
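A much-simplified sketch of this greedy placement: it scores each candidate view with the V_proj expression reconstructed above, restricted to the markers that still need imaging, and credits markers through a supplied visibility table. It collapses the patent's separate in-silhouette/on-silhouette counts and the recomputation of the statistical model into this single bookkeeping step, so it is an illustration of the idea rather than the claimed procedure.

```python
import numpy as np

def view_information(P, U, lam, b_range, marker_mask):
    """V_proj of one view: weighted projected variation of each shape factor, restricted to the
    markers that still need to be imaged (marker_mask, length = number of markers)."""
    rows = np.repeat(marker_mask, 3)                 # each marker contributes 3 coordinates
    V = 0.0
    for i in range(U.shape[1]):
        mode = U[rows, i].reshape(-1, 3)             # factor i displacement at the remaining markers
        proj = P[:2, :3] @ mode.T                    # image-plane component of that displacement
        V += lam[i] * np.linalg.norm(proj) * (b_range[1, i] - b_range[0, i])
    return V

def choose_cameras(candidate_P, visible, U, lam, b_range, need=2, max_cams=8):
    """Greedy placement: pick the candidate view with the largest information about the
    not-yet-covered markers, credit the markers it sees (visible[v, m] is True if view v
    images marker m), and repeat until every marker has been imaged `need` times."""
    n_markers = visible.shape[1]
    seen = np.zeros(n_markers, int)
    chosen = []
    while seen.min() < need and len(chosen) < max_cams:
        mask = seen < need                           # markers still needing images
        scores = [view_information(P, U, lam, b_range, mask) for P in candidate_P]
        v = int(np.argmax(scores))
        chosen.append(v)
        seen += visible[v].astype(int)
    return chosen
```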
Computing in this way for the 58 marker points of the mean shoe last, six positions would satisfy the requirements of three-dimensional reconstruction under ideal conditions; to improve the reconstruction, the present embodiment provides some redundancy and uses eight positions. With the Cartesian coordinate system defined as in Fig. 12 and the spherical coordinate system sharing the same origin and X axis, one arrangement of 8 cameras computed from the foot statistical deformation model is shown in Fig. 9 and Table 4; the cameras 3 have a resolution of 640 × 480 and a focal length of 40 mm. The cameras 3 are fixed in place with clips and can rotate about two axes; the angles α and β in the table are spherical coordinates, and all cameras lie on a sphere of radius 746 mm.
Table 4. Actual position of each view (r = 746 mm, angles in radians, coordinates in mm; table reproduced as an image in the original).
2.5) Arrangement of the light sources 4. Eight 50 W light sources 4 are used. In the same coordinate system as the cameras 3, an identical light source 4 is placed on the extension of the radius through each camera 3; the eight light sources 4 lie on a common sphere, with the positions listed in Table 5, and the radius of the light-source sphere, 10 m, is larger than that of the camera sphere. The overall layout of the camera support frame 1, foot calibration template 2, cameras 3 and light sources 4 is shown in Fig. 13.
Table 5. Position of each light source (r = 10 m, angles in radians).
Obviously, the explanation of this step only as an example.Under the situation that does not break away from described principle of claim of the present invention and scope, can carry out many variations to the related camera support frame of this step, pin type calibrating template, camera arrangements computing method.
3, pickup image.To graduated transparent pin type calibrating template imaging in the environment, image is demarcated the inside and outside parameter of each camera in view of the above, determines the accurate position and the imaging parameters such as focal length, photocentre of each camera earlier, and the pin type is placed on the template then, obtains 8 width of cloth view pictures of pin type.Pin type 8 width of cloth view pictures that obtain such as Figure 14, Figure 15, Figure 16, Figure 17, Figure 18, Figure 19, Figure 20, shown in Figure 21.
4, just estimate the pin type.With the statistics distorted pattern of the shoe tree model according to a preliminary estimate as the pin type, the posture and the shape of estimation model comprise position, size, direction, shape from each view picture, and be specific as follows:
4.1) initial step: introduce shoe tree statistics distorted pattern, camera parameter that preceding step produced and the pin type image that is absorbed;
4.2) calculate pin type size: because the pin type gently steps down on the template, inscribe the pattern that has as shown in figure 10 on the template.Therefore, in the view of seeing from bottom to top, can clearly reflect the size of pin type, as Figure 18, Figure 19, shown in Figure 20.According to the shoe tree design rule, the length of pin type is taken as 1.067 times of sole length, and this length is composed to first estimation model.
4.3) calculating pin type orientation: the orientation of estimating the pin type according to the multiple image of pin type.Because only interested in the part below the foot anklebone, two angles of all the other of shank and glass plate only need calculate the anglec of rotation on the glass plate to the not influence of pin type.From Figure 14, Figure 15, Figure 16, Figure 17, Figure 21 this point as can be seen.Therefore, the view of seeing from bottom to top as Figure 18, Figure 19, shown in Figure 20, can clearly reflect the orientation of pin type.This orientation is composed to first estimation model.Because follow-up iterative deformation method is arranged, so center of gravity and orientation do not need accurate estimation.The deviation in center of gravity and orientation is in the deformability scope of model, through repeatedly being moved to accurate position after the iteration.
4.4) calculating pin type position: the multiple image of pin type is divided into bianry image, promptly pin type and background segment is come, the pin type is a black, and background is a white, as shown in figure 22, asks for the center of gravity of pin type in each plane picture then.According to the double ratio unchangeability, the central projection of pin type remains the center of respectively looking to respectively looking.According to the pin type plane center of gravity of each view picture, estimate the pin type center of gravity in the space, with the center of gravity of this center of gravity as first estimation model, this is the position of pin type.
4.5) shape just estimates: the multiple image of pin type all is divided into four value images, promptly pin type and background segment are come, sole contacts with glass plate, the darker part of color is a black in the image, sole portion does not contact with glass plate, the more shallow part of color is a Dark grey in the image, do not belong to sole on the pin type, the most shallow part of color is light grey in the image, background is a white, as shown in figure 23, change the individual sex factor in the shoe tree statistics distorted pattern then, is the color of model points projection position the color of model points, make model projection point color add up and maximum, repeatedly iteration obtains the first estimation shape of pin type;
4.6) consistency of contour calculating: first estimation model is the statistics distorted pattern of shoe tree, changes individual character form factor coefficient wherein, so that its projection in respectively looking can be with the image of pin type in respectively looking consistent.This method is similar to the elasticity socks that only have a small amount of point to prop up and is set on the target pin type.If socks and destination object are inconsistent, some point is beyond pin type surface, and some point moves outside point in pin type surface so to the inside, and the point of the inside moves to the outside.Simultaneously, because the strong point of socks is statistical models of shoe tree, be subjected to the constraint of shoe tree statistical parameter, must still keep the shape of shoe tree, therefore each these move the point that can not only move outside or point inside, and all point moves together.These points pin down mutually, can not once move just consistent with destination object, but after each the moving, no matter be outside or inside point, all more approach destination object.Through such moving repeatedly, socks are inevitable consistent with the pin type, and both projections in each view picture coincide.
Specifically comprise following five steps:
(4.6.1) each camera parameter that calculates according to step 3, model points to each view as projection.The outer contour of view field is coupled together, and the point on profile is called the projecting edge point, and corresponding spatial point is called the edge projection point.Point and exterior point in the edge projection point is divided into, the point within true pin type is to be interior point, otherwise is exactly exterior point;
(4.6.2) distinguish interior point and exterior point.All edge projection points are set to interior point; If this o'clock, this point was exterior point so outside one or one plane domain that is in true pin type image with the projection on the top view picture; If this puts projection on all view pictures all within the plane domain of true pin type, this point is interior point so.
(4.6.3) calculate the distance of each edge projection point and true pin type, distance drives edge projection point and moves to target surface thus.To each edge projection point P iIf, only look and be projected as marginal point at one, so at its view that is projected as projecting edge point as V mOn, choose and projecting edge point p ImPlanar point p on the nearest true pin type plane picture m, structure is by projecting edge point p ImAs starting point, objective plane point p mPlane deformation vector as terminal point.Edge projection is pressed this vector and is carried out spatial movement, first estimation model s iBecome new model s i'.If edge projection point P iOn a plurality of looking, be projected as marginal point, can construct a plurality of plane deformation vectors at its a plurality of view pictures that are projected as marginal point in the same way.As shown in figure 24, gauge point P iLooking V mWith look V nMiddle subpoint is respectively P ImAnd P InIf p ImLooking V mThe midplane deformation vector is
T im → = ( u m , v m )
p InLook V nThe midplane deformation vector is
T in → = ( u n , v n )
Because be the image of having demarcated, establish and look V mProjection matrix be P Prom, look V nProjection matrix be P Prom, so:
p im = P prom P i p in = P pron P i
The geometric distortion vector Satisfy projection relation equally, therefore:
p im = T im → = P prom ( P i + T i → ) p in + T in → = P pron ( P i + T i → )
Above two formulas subtract each other:
T im → = u m v m 1 ′ = P prom T i → T in → = u n v n 1 ′ = P pron T i →
3 unknown numbers, 4 equations are asked under the least square meaning in the formula
Figure BSA00000472414500116
It is the geometric distortion vector.
After edge projection pressed the geometric distortion vector and move, obtain new first estimation model s i'.
(4.6.4) because model must keep the shape of shoe tree, so new model s i' must satisfy the statistics distorted pattern of shoe tree
s′ i=S+U×b 3D
Promptly
b 3D=U -1×(s’ i-S)
Because individual character factor system number vector b 3DCan only in the scope that statistics obtains, change.If by s i' b that calculates 3DExceeded scope D Max, model will no longer be the shape of shoe tree so.Therefore, be b 3DMove into tolerance band, promptly ask for b therewith 3DThe most approaching and the b in scope 3D'.New individual character factor system number vector b 3D' in each component b i' by former component b iBe calculated as follows:
b i ′ = b i × Σ i 14 ( b i 2 / λ PCAi ) D max
By new individual character factor system number vector b 3D' determined new shoe last model is:
s″ i=S+U×b′ 3D
(4.6.5) repeat step (4.5.1), (4.5.2), (4.5.3), (4.5.4) of this link, point and exterior point in each is taken turns and distinguishes earlier in the iteration, calculate amount of movement, use the shape constraining amount of movement then, calculate new shape at last, one by one the initial estimation model is fitted to true pin type and get on, obtain the first estimation model of pin type at last.The termination condition of double counting is that new round iterative computation gained individual character factor system number vector is compared the variation delta b of the individual character factor system number vector of last time 3D, approach 0, that is:
Δb 3D→0
4.7) the illumination consistance calculates: by shape just estimate, consistency of contour calculates, model is roughly consistent with the pin type, the error of all the other model points is in the 2mm scope except that tiptoe point.Because adopted shoe tree statistics distorted pattern, the tiptoe point tolerance is bigger.Search among a small circle that the illumination consistance is best, the images match point of sub-pixel.For example the camera of resolution 640 * 480 is got 10 pixels, is 100 sub-pixs to this scope linear interpolation, and each model points is calculated the most consistent match point of gray scale, regeneration spatial point in this scope.
4.8) end step: the roughly estimation model of profile of pin type has been caught in output.
Obviously, the explanation of this step only as an example.Under the situation that does not break away from described principle of claim of the present invention and scope, can carry out many variations to the related computing method according to a preliminary estimate of this step.
5, generating mesh model: first estimation point model net is formatted the generating mesh model.
6, segmentation grid model.Generate newly-increased spatial point from the feature of each view picture, the segmentation grid model, segmentation process multiresolution iteration carries out can't cutting out the more point of details in each image, and segmentation finishes.Flow process comprises following five steps as shown in figure 25:
6.1) grid is projected to each view picture, each width of cloth image segmentation is become piece, as shown in figure 26.As can be seen, between the plane triangle during this difference is looked corresponding relation is arranged.And the corresponding flat triangle in different the looking includes the different quantity of information of corresponding space triangle.Forward is looked in the face of spatial triangle, and the quantity of information that comprises is the abundantest, can obtain more three-dimensional reconstruction clue; Non-forward is looked in the face of spatial triangle, and the quantity of information that comprises is less.For space lattice, each net point is carried out projection, whether the position of leg-of-mutton normal direction of considering gridding and camera optical axis relation and this triangle are blocked again simultaneously, just can obtain the projection of grid.As shown in figure 27, camera V p, V qAt the leg-of-mutton back side, triangle is blocked, and this triangle is at V p, V qThere is not projection on the view picture; And camera V i, V j, V kNot being blocked, is that as seen this leg-of-muttonly look.Angle between the normal of camera normal direction and mistake triangle center has been defined the visual angle, as the θ among Figure 27.To as seen look by the view angle theta ascending order and arrange, and obtain the leg-of-mutton formation of as seen looking, the formation of as seen looking of Figure 27 intermediate cam shape is { V i, V j, V k.Wherein first is looked, and promptly looking of view angle theta minimum is called as main looking.Main looking be and the angle minimum of triangle normal, triangle projected area maximum, sight line " just " thereon, obviously, and the main feature that can reflect in the spatial triangle of looking.
6.2) adopt feature detection algorithm, detect the feature in each view picture and cut feature generation plane characteristic point.Wherein feature detection algorithm adopts multiresolution, and the feature level of telling is determined by the parameter of feature detection algorithm.The method of abstract image feature is a lot, for example Marr-Hildreth operator, LOG operator, Canny operator, Wallis operator, Sobel operator, LOG wave filter, wavelet analysis, based on method of generalized entropy mapping or the like.A kind of preferred version adopts Marr-Hildreth operator detected image feature, cuts feature then, generates the plane characteristic point.The cutting feature is to be that the circle of r goes to intercept feature with radius, is the center of circle with one side end points of feature, justifies with radius, with unique point of the crossing generation of feature; Be the center of circle with this unique point then, advance, continue cutting, up to the end points that arrives another side to another end points.If spacing is less than radius between the edge two-end-point, with arbitrary end points at edge as the intersection point that cuts out.Along with the continuous iteration of algorithm, the radius of circle of cutting image feature also diminishes gradually, and feature is more and more thinner.
The leg-of-mutton longest edge length of setting in all view pictures of projection is L Maxinmesh, bond length is L Mininmesh, the proportional range of both maximums is 2.If the length of side surpasses ratio, then in the 1/3-2/3 scope on long limit, get the peaked position of shade of gray as cut point.Simultaneously, if at the 0.1 * L of unique point in the net point projection MininmeshIn the scope, then deleted, no longer carry out the processing of back.Because cut point can accumulate near the projection of net point sometimes, can produce the triangle of a large amount of yardstick polarizations with these some segmentation grids.
6.3) match point of search plane characteristic point in the cut section of other view picture correspondence, generate newly-increased spatial point, then with newly-increased spatial point segmentation grid.
Each spatial triangle is carried out following processing: at first find the master of spatial triangle to look, in the main homolographic projection triangle of looking, find unique point then, can calculate the match point of seeking these unique points in looking at other of spatial triangle again, unique point and match point generate newly-increased spatial point then, then with newly-increased spatial point segmentation grid, not the unique point in the homolographic projection triangle in main the looking, do not process.
Wherein the match point of search characteristics point combines the accuracy that utmost point constraint, space lattice is guaranteed match point to the constraint of image projection split image, illumination consistency constraint.The main foundation of seeking match point is to five kinds of clues such as utmost point constraint, unique constraint, smoothness constraint, sequence constraint, illumination consistency constraints, as Figure 28, shown in Figure 29.Among Figure 28, spatial point M looks among O and the O` imaging m and m` respectively at two, and the picture of the photocentre of O` in O be e, and the picture of the photocentre of O in O ' looks is e '.By perspective geometry relation as can be known, view is corresponding straight line with view as the straight line l ' that crosses 2 of e ' and m ' among the O ' as the straight line l that crosses 2 of e and m among the O, and the point on two direct lines is correspondence one by one.If known features point m, so match point m ' just with the corresponding line l ' of straight line l on, Here it is retrains the utmost point; And m and match point m ' both be unique corresponding, this is called as unique constraint; As shown in figure 29, if spatial point L, X imaging are more approaching, and grey scale change is little, and this space length of 2 can be not far away yet so, and this is called as smoothness constraint; And spatial point L, the X position relation between each view picture is identical under certain condition, and the picture of L in one is looked appears at the picture top of X, also can be like this in other is looked, and this is a sequence constraint; And the most important thing is, the picture of spatial point in each is looked, gray scale should be the same in theory, this is called as the illumination consistency constraint, all should be consistent as color and the gray scale of 2 of m among Figure 28 and m '; The locus constraint is projected in two view pictures as the grey spatial triangle among Figure 26, looks V so iIn unique point looking V jIn corresponding match point fix in the corresponding grey plane triangle with regard to one.In the present embodiment, just can find match point for unique point exactly to utmost point constraint, grid to the constraint of image projection split image and two constraints of illumination consistency constraint.
If unique point and match point thereof are all known, known as m among Figure 28 and m`, and parameter such as focal length of camera, position is all known, just can reduce spatial point M according to geometric relationship so.
As a kind of improvement, increased the notion that the master of spatial triangle looks, and the unique point during only coupling, reconstruction process master look, increased the accuracy of coupling, reconstruction.
As a kind of improvement, to main Feature Points Matching of looking, when rebuilding, a plurality ofly calculate the situation of looking, so unique point has a plurality of match points owing to have, directly calculating will produce a plurality of spatial point.But in fact, unique point and these match points are actually the projection of the same space point.Spatial point X imaging m in three views among Figure 29 for example 1, m 2, m 3, be corresponding each other.But because the influence of factor such as image quality, three points that found and m 1, m 2, m 3Skew is arranged, and then three picture point can calculate three spatial point.Calculating in native system is that with three spatial point that directly calculate, its center is the position of real space point.
As a kind of improvement, preferential newly-increased spatial point segmentation spatial triangle with the most close spatial triangle center, all the other newly-increased spatial point are segmented the triangle that is decomposed out more then, guarantee in the grid of each segmentation back not tangle mutually between little close, the triangle of each leg-of-mutton people, help guaranteeing that the topological structure of grid can be consistent with the profile of destination object automatically.
Empty triangle be may produce after the mesh refinement, triangle, cavity intersected.If the projection of triangle in each is looked all is exactly empty triangle outside the image of destination object.If the limit in the grid is only attached on the triangle, just there is the cavity in this place so.The arrangement grid will be deleted empty triangle exactly, differentiates crossing triangle, filling cavity.If triangle intersect is removed crossing by following method.4 triangle formations are set: the leg-of-mutton seed formation of storage seed SQ, leg-of-mutton crossing formation IQ is intersected in storage, triangle formation DQ that delete and the last qualified triangle formation VQ of storage.The seed triangle is not have crossing triangle, randomly draws from triangular mesh when beginning.To each seed triangle, at first put into formation VQ, check adjacent triangle then.If adjacent triangle does not intersect, put into seed formation SQ; Intersect formation IQ otherwise put into.If seed queue empty, the triangle that is untreated in addition except intersecting formation is got the seed triangle again from the triangle that is untreated, and continues to handle up to the triangle that is not untreated.These operations are the conventional contents in the computer graphics teaching material, repeat no more.
6.4) constantly reduce the feature detection algorithm parameter, the iterative detection characteristics of image, repeating step 6.1,6.2,6.3 does not have the more feature of details in image, and the segmentation of grid iteration approaches destination object.
6.5) each summit of grid is relocated to sub-pixel precision; Because the projection of spatial point in one is looked is just in time on a pixel, and the projection in another is looked, might be not on a pixel, and between two pixels.Therefore each summit re-projection of grid with each view picture, whether the color of inspection projection consistent with gray scale.If consistent, illustrate that vertex position is accurate; If inconsistent, then, seek more accurate location in the sub-pixel scope interpolate value of adjacent domain.
Obviously, the explanation of this step only as an example.Under the situation that does not break away from described principle of claim of the present invention and scope, can carry out many variations to the related grid iteration divided method of this step.
7, the grid arrangement is optimized, and generates the Delaunay triangle gridding, the grid that obtains rebuilding, output pin type three-dimensional model.
Obviously, the above embodiment of the present invention only is for example clearly is described, and is not to be qualification to embodiment of the present invention.For those of ordinary skill in the field, can also make other changes in different forms on the basis of the above description.Here need not also can't give exhaustive to all embodiments.And these belong to conspicuous variation or the change that spirit of the present invention extended out and still are among protection scope of the present invention.

Claims (13)

1. the system that the multiple image that is absorbed simultaneously by a plurality of cameras is automatically rebuild pin type three-dimensional surface is characterized in that, comprises following concrete steps:
1) generates shoe tree statistics distorted pattern: select the shoe tree sample, constitute the shoe tree sample set, and the design gauge point, the coordinate that obtains sample labeling point generates the some distributed model, and the some distributed model collection from sample set carries out principal component analysis generation shoe tree statistics distorted pattern then;
2) imaging circumstances is set: arrange a plurality of cameras by shoe tree statistics distorted pattern, arrange light again;
3) pickup image: the imaging of pin type calibrating template, the inside and outside parameter of demarcating each camera, and then to the imaging of pin type:
4) just estimate the pin type: just estimate the pin type with shoe tree statistics distorted pattern, comprise that posture is estimated and shape is estimated;
5) generating mesh model: convert first estimation model to grid model;
6) segmentation grid model: generate newly-increased spatial point from the feature of each view picture, the segmentation grid model, segmentation process multiresolution iteration carries out can't cutting out the more point of details in each image, and segmentation finishes;
7) output pin type three-dimensional model is optimized in the grid arrangement.
2. the multiple image that is absorbed simultaneously by a plurality of cameras according to claim 1 is automatically rebuild the system of pin type three-dimensional surface, it is characterized in that, described gauge point is derived by the original pattern figure of shoe tree sample set and the key point of side elevational view, and can reflect the shoe tree shape facility.
3. the multiple image that is absorbed simultaneously by a plurality of cameras according to claim 1 and 2 is automatically rebuild the system of pin type three-dimensional surface, it is characterized in that, described generation shoe tree statistics distorted pattern specifically may further comprise the steps:
1) selects the shoe tree sample, constitute the shoe tree sample set;
2) design gauge point, the position of definite gauge point reads computing machine to the position of gauge point, then as the some distributed model of sample on every shoe tree sample;
3) the alignment of the some distributed model of each sample in the shoe tree sample set, guarantee the distance minimization between each sample on the whole sample set:
4) adopt the principal component analysis method on the other side's some distributed collection, to generate the statistics distorted pattern of shoe tree, the shape of shoe tree is resolved into general character and individual character two parts, and individual character is the product of individual sex factor and individual character vector.
4. the multiple image that is absorbed simultaneously by a plurality of cameras according to claim 1 and 2 is automatically rebuild the system of pin type three-dimensional surface, it is characterized in that, the described imaging circumstances that is provided with specifically may further comprise the steps:
1) initial step: introduce the shoe tree statistics distorted pattern that preceding step produced, and the threshold value of the threshold value of preset model point image-forming information amount in profile, model points image-forming information amount on profile, the initial information amount of each model points is made as 0;
2) computing camera distribution ball: according to the radius R of known parameters computing camera distribution balls such as camera focus, target sizes;
3) calculate the maximum camera position of dominant shape shape factor information amount: on camera distribution ball, calculate and comprise the maximum camera position of dominant shape shape factor information amount in the shoe tree statistics distorted pattern;
4) output camera position: determine the camera position point of output, increase under this camera position the quantity of information of model points on the visible model points and profile, and the image-forming information amount on image-forming information amount, the profile removed in profile is respectively greater than the model points of setting threshold;
5) calculate new statistics distorted pattern: the new shoe tree statistics distorted pattern after recomputating model points and reducing;
6) iterative computation: repeating step 3), 4), 5), until the statistics distorted pattern in all model points all by abundant imaging, can both rebuild out according to image information;
7) end step: the coordinate of output camera position point, coordinate is arranged camera in view of the above.
5. the multiple image that is absorbed simultaneously by a plurality of cameras according to claim 1 and 2 is automatically rebuild the system of pin type three-dimensional surface, it is characterized in that, the described pin of estimation just type specifically may further comprise the steps:
1) initial step: introduce shoe tree statistics distorted pattern, camera parameter that preceding step produced and the pin type image that is absorbed;
2) calculate pin type size, orientation and position: select image from the imaging of sole direction, position calibration, image and the groove on the load-bearing glass plate according to sole calculate pin type size and orientation, then the multiple image of pin type all is divided into bianry image, promptly pin type and background segment are come, the pin type is a black, and background is a white, asks for the center of gravity of pin type in each plane of delineation then, and calculating the center of gravity of pin type in the space thus, this is the position of pin type;
3) shape is just estimated: the multiple image of pin type all is divided into four value images, promptly pin type and background segment are come, sole contacts with glass plate, the darker part of color is a black in the image, sole portion does not contact with glass plate, the more shallow part of color is a Dark grey in the image, do not belong to the most shallow part of color in sole, the image on the pin type for light grey, background is a white, change the individual sex factor in the shoe tree statistics distorted pattern then, make model projection point color add up and maximum, obtain the first estimation shape of pin type;
4) consistency of contour calculates: consistency of contour calculates iteration to carry out, point and exterior point in each is taken turns and earlier model points is divided in the iteration, the amount of movement of point and exterior point in calculating, retrained by shoe tree statistics distorted pattern with whole model then, therefore calculate and the close new shape of shoe tree statistics distorted pattern, one by one the initial estimation model is fitted to true pin type and get on;
5) illumination consistance is calculated: search in a plurality of pixel coverages that the illumination consistance is best, the images match point of sub-pixel, obtain model points position accurately, generate comparatively precise analytic model;
6) end step: the roughly estimation model of profile of pin type has been caught in output.
6. the multiple image that is absorbed simultaneously by a plurality of cameras according to claim 3 is automatically rebuild the system of pin type three-dimensional surface, it is characterized in that, the described pin of estimation just type specifically may further comprise the steps:
1) initial step: introduce shoe tree statistics distorted pattern, camera parameter that preceding step produced and the pin type image that is absorbed;
2) calculate pin type size, orientation and position: select image from the imaging of sole direction, position calibration, image and the groove on the load-bearing glass plate according to sole calculate pin type size and orientation, then the multiple image of pin type all is divided into bianry image, promptly pin type and background segment are come, the pin type is a black, and background is a white, asks for the center of gravity of pin type in each plane of delineation then, and calculating the center of gravity of pin type in the space thus, this is the position of pin type;
3) shape is just estimated: the multiple image of pin type all is divided into four value images, promptly pin type and background segment are come, sole contacts with glass plate, the darker part of color is a black in the image, sole portion does not contact with glass plate, the more shallow part of color is a Dark grey in the image, do not belong to the most shallow part of color in sole, the image on the pin type for light grey, background is a white, change the individual sex factor in the shoe tree statistics distorted pattern then, make the adding up and people of color of model projection point, obtain the first estimation shape of pin type;
4) consistency of contour calculates: consistency of contour calculates iteration to carry out, point and exterior point in each is taken turns and earlier model points is divided in the iteration, the amount of movement of point and exterior point in calculating, retrained by shoe tree statistics distorted pattern with whole model then, therefore calculate and the close new shape of shoe tree statistics distorted pattern, one by one the initial estimation model is fitted to true pin type and get on;
5) illumination consistance is calculated: search in 6 pixel coverages that the illumination consistance is best, the images match point of sub-pixel, span point generates comparatively precise analytic model;
6) end step: the roughly estimation model of profile of pin type has been caught in output.
7. the multiple image that is absorbed simultaneously by a plurality of cameras according to claim 4 is automatically rebuild the system of pin type three-dimensional surface, it is characterized in that, the described pin of estimation just type specifically may further comprise the steps:
1) initial step: introduce shoe tree statistics distorted pattern, camera parameter that preceding step produced and the pin type image that is absorbed;
2) calculate pin type size, orientation and position: select image from the imaging of sole direction, position calibration, image and the groove on the load-bearing glass plate according to sole calculate pin type size and orientation, then the multiple image of pin type all is divided into bianry image, promptly pin type and background segment are come, the pin type is a black, and background is a white, asks for the center of gravity of pin type in each plane of delineation then, and calculating the center of gravity of pin type in the space thus, this is the position of pin type;
3) shape is just estimated: the multiple image of pin type all is divided into four value images, promptly pin type and background segment are come, sole contacts with glass plate, the darker part of color is a black in the image, sole portion does not contact with glass plate, the more shallow part of color is a Dark grey in the image, do not belong to the most shallow part of color in sole, the image on the pin type for light grey, background is a white, change the individual sex factor in the shoe tree statistics distorted pattern then, make model projection point color add up and maximum, obtain the first estimation shape of pin type;
4) consistency of contour calculates: consistency of contour calculates iteration to carry out, point and exterior point in each is taken turns and earlier model points is divided in the iteration, the amount of movement of point and exterior point in calculating, retrained by shoe tree statistics distorted pattern with whole model then, therefore calculate and the close new shape of shoe tree statistics distorted pattern, one by one the initial estimation model is fitted to true pin type and get on;
5) illumination consistance is calculated: search in 6 pixel coverages that the illumination consistance is best, the images match point of sub-pixel, span point generates comparatively precise analytic model;
6) end step: the roughly estimation model of profile of pin type has been caught in output.
8. the multiple image that is absorbed simultaneously by a plurality of cameras according to claim 1 and 2 is automatically rebuild the system of pin type three-dimensional surface, it is characterized in that, the step of described segmentation grid model is: at first detect the feature in each view picture and cut feature generation plane characteristic point, mate the newly-increased spatial point of plane characteristic dot generation then, then with newly-increased spatial point segmentation grid, and under the framework of multiresolution analysis, repeat above step iteration and segment grid, in image, do not have the more feature of details, finish dense reconstruction.
9. the multiple image that is absorbed simultaneously by a plurality of cameras according to claim 3 is automatically rebuild the system of pin type three-dimensional surface, it is characterized in that, the step of described segmentation grid model is: at first detect the feature in each view picture and cut feature generation plane characteristic point, mate the newly-increased spatial point of plane characteristic dot generation then, then with newly-increased spatial point segmentation grid, and under the framework of multiresolution analysis, repeat above step iteration and segment grid, in image, do not have the more feature of details, finish dense reconstruction.
10. the multiple image that is absorbed simultaneously by a plurality of cameras according to claim 4 is automatically rebuild the system of pin type three-dimensional surface, it is characterized in that, the step of described segmentation grid model is: at first detect the feature in each view picture and cut feature generation plane characteristic point, mate the newly-increased spatial point of plane characteristic dot generation then, then with newly-increased spatial point segmentation grid, and under the framework of multiresolution analysis, repeat above step iteration and segment grid, in image, do not have the more feature of details, finish dense reconstruction.
11. the multiple image that is absorbed simultaneously by a plurality of cameras according to claim 5 is automatically rebuild the system of pin type three-dimensional surface, it is characterized in that, the step of described segmentation grid model is: at first detect the feature in each view picture and cut feature generation plane characteristic point, mate the newly-increased spatial point of plane characteristic dot generation then, then with newly-increased spatial point segmentation grid, and under the framework of multiresolution analysis, repeat above step iteration and segment grid, in image, do not have the more feature of details, finish dense reconstruction.
12. the multiple image that is absorbed simultaneously by a plurality of cameras according to claim 6 is automatically rebuild the system of pin type three-dimensional surface, it is characterized in that, the step of described segmentation grid model is: at first detect the feature in each view picture and cut feature generation plane characteristic point, mate the newly-increased spatial point of plane characteristic dot generation then, then with newly-increased spatial point segmentation grid, and under the framework of multiresolution analysis, repeat above step iteration and segment grid, in image, do not have the more feature of details, finish dense reconstruction.
13. the multiple image that is absorbed simultaneously by a plurality of cameras according to claim 7 is automatically rebuild the system of pin type three-dimensional surface, it is characterized in that, the step of described segmentation grid model is: at first detect the feature in each view picture and cut feature generation plane characteristic point, mate the newly-increased spatial point of plane characteristic dot generation then, then with newly-increased spatial point segmentation grid, and under the framework of multiresolution analysis, repeat above step iteration and segment grid, in image, do not have the more feature of details, finish dense reconstruction.
CN2011100914555A 2011-04-09 2011-04-09 System for fully automatically reconstructing foot-type three-dimensional surface from a plurality of images captured by a plurality of cameras simultaneously Pending CN102157013A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011100914555A CN102157013A (en) 2011-04-09 2011-04-09 System for fully automatically reconstructing foot-type three-dimensional surface from a plurality of images captured by a plurality of cameras simultaneously

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2011100914555A CN102157013A (en) 2011-04-09 2011-04-09 System for fully automatically reconstructing foot-type three-dimensional surface from a plurality of images captured by a plurality of cameras simultaneously

Publications (1)

Publication Number Publication Date
CN102157013A true CN102157013A (en) 2011-08-17

Family

ID=44438491

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011100914555A Pending CN102157013A (en) 2011-04-09 2011-04-09 System for fully automatically reconstructing foot-type three-dimensional surface from a plurality of images captured by a plurality of cameras simultaneously

Country Status (1)

Country Link
CN (1) CN102157013A (en)

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102411796A (en) * 2011-10-13 2012-04-11 长春工业大学 Three-dimensional modeling method of last body computer based on general three-dimensional software
CN102917175A (en) * 2012-09-13 2013-02-06 西北工业大学 Sheltering multi-target automatic image matting method based on camera array synthetic aperture imaging
CN103824288A (en) * 2014-02-17 2014-05-28 哈尔滨工业大学 Array image registration template for lens array
CN104054079A (en) * 2011-11-18 2014-09-17 耐克国际有限公司 Automated 3-d modeling of shoe parts
CN104068561A (en) * 2013-03-29 2014-10-01 林恭志 Foot analysis system, computer program appliance and carrier thereof
CN104643407A (en) * 2013-11-19 2015-05-27 耐克创新有限合伙公司 Conditionally visible bite lines for footwear
CN105913444A (en) * 2016-05-03 2016-08-31 华南农业大学 Livestock body contour reconstruction method and body condition scoring method based on soft laser ranging
CN106157367A (en) * 2015-03-23 2016-11-23 联想(北京)有限公司 Method for reconstructing three-dimensional scene and equipment
CN106296799A (en) * 2015-06-10 2017-01-04 西安蒜泥电子科技有限责任公司 Characteristic point for object scanning is supplemented and extracting method
CN106952249A (en) * 2017-02-20 2017-07-14 广东电网有限责任公司惠州供电局 Insulator chain axis detection method based on Cross ration invariability
CN107114863A (en) * 2017-07-07 2017-09-01 李宁体育(上海)有限公司 For the manufacture method and manufacture system of the shoe tree for testing footwear non-skid property
CN107183835A (en) * 2017-07-24 2017-09-22 重庆小爱科技有限公司 A kind of method of use mobile phone photograph scanning generation human foot model and data
US9939803B2 (en) 2011-11-18 2018-04-10 Nike, Inc. Automated manufacturing of shoe parts
CN108446597A (en) * 2018-02-14 2018-08-24 天目爱视(北京)科技有限公司 A kind of biological characteristic 3D collecting methods and device based on Visible Light Camera
CN109043737A (en) * 2018-08-13 2018-12-21 顾萧 A kind of footwear customization system based on 3-D scanning and three-dimensional modeling
US10194716B2 (en) 2011-11-18 2019-02-05 Nike, Inc. Automated identification and assembly of shoe parts
CN109816724A (en) * 2018-12-04 2019-05-28 中国科学院自动化研究所 Three-dimensional feature extracting method and device based on machine vision
CN109815813A (en) * 2018-12-21 2019-05-28 深圳云天励飞技术有限公司 Image processing method and Related product
CN110211100A (en) * 2019-05-20 2019-09-06 浙江大学 A kind of foot measurement method of parameters based on image
CN110490973A (en) * 2019-08-27 2019-11-22 大连海事大学 A kind of multiple view shoes model three-dimensional rebuilding method of model-driven
US10552551B2 (en) 2011-11-18 2020-02-04 Nike, Inc. Generation of tool paths for shore assembly
CN111278321A (en) * 2017-08-25 2020-06-12 鞋履检索公司 System and method for footwear sizing
CN111553986A (en) * 2020-05-19 2020-08-18 北京数字绿土科技有限公司 Construction method and construction device of triangulation network and generation method of digital surface model
CN111602922A (en) * 2019-02-26 2020-09-01 韩建林 Method for making handmade leather shoes
CN112229642A (en) * 2020-08-06 2021-01-15 沈阳工业大学 Passenger vehicle driving dynamic comfort test analysis method based on ergonomics
CN112465755A (en) * 2020-11-18 2021-03-09 熵智科技(深圳)有限公司 Initial sub-area subdivision method and device, computer equipment and storage medium
CN112617809A (en) * 2020-12-24 2021-04-09 新拓三维技术(深圳)有限公司 Footprint area calculation method and system
CN113615936A (en) * 2021-08-31 2021-11-09 浙江奥云数据科技有限公司 Intelligent system for shoe customization
CN113673457A (en) * 2021-08-26 2021-11-19 北京环境特性研究所 Analog measurement image processing method and device, computing equipment and storage medium
CN113744394A (en) * 2021-11-05 2021-12-03 广东时谛智能科技有限公司 Shoe tree three-dimensional modeling method, device, equipment and storage medium
US11317681B2 (en) 2011-11-18 2022-05-03 Nike, Inc. Automated identification of shoe parts
CN114923665A (en) * 2022-05-27 2022-08-19 上海交通大学 Image reconstruction method and image reconstruction test system for wave three-dimensional height field

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101887477A (en) * 2010-06-25 2010-11-17 温州大学 Method for customizing digitalized shoe trees according to a plurality of images of foot shapes
CN201725140U (en) * 2010-05-01 2011-01-26 温州大学 Foot type three-dimensional reconstruction multi-view imaging device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201725140U (en) * 2010-05-01 2011-01-26 温州大学 Foot type three-dimensional reconstruction multi-view imaging device
CN101887477A (en) * 2010-06-25 2010-11-17 温州大学 Method for customizing digitalized shoe trees according to a plurality of images of foot shapes

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WANG, J.: "Shape reconstruction of human foot from multi-camera images based on PCA of human shape database", 《3-D DIGITAL IMAGING AND MODELING》 *
顾铭秋: "基于特征信息的三维鞋楦处理中的若干技术及其应用", 《中国优秀博硕士学位论文全文数据库(硕士)工程科技I辑》 *

Cited By (58)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102411796A (en) * 2011-10-13 2012-04-11 长春工业大学 Three-dimensional modeling method of last body computer based on general three-dimensional software
US10671048B2 (en) 2011-11-18 2020-06-02 Nike, Inc. Automated manufacturing of shoe parts
US11341291B2 (en) 2011-11-18 2022-05-24 Nike, Inc. Generation of tool paths for shoe assembly
CN104054079A (en) * 2011-11-18 2014-09-17 耐克国际有限公司 Automated 3-d modeling of shoe parts
US10393512B2 (en) 2011-11-18 2019-08-27 Nike, Inc. Automated 3-D modeling of shoe parts
US10552551B2 (en) 2011-11-18 2020-02-04 Nike, Inc. Generation of tool paths for shore assembly
US11879719B2 (en) 2011-11-18 2024-01-23 Nike, Inc. Automated 3-D modeling of shoe parts
US11763045B2 (en) 2011-11-18 2023-09-19 Nike, Inc. Generation of tool paths for shoe assembly
US11641911B2 (en) 2011-11-18 2023-05-09 Nike, Inc. Automated identification and assembly of shoe parts
CN104054079B (en) * 2011-11-18 2017-07-07 耐克创新有限合伙公司 The D modeling methods of automation 3 and system of shoes parts
US11422526B2 (en) 2011-11-18 2022-08-23 Nike, Inc. Automated manufacturing of shoe parts
US11346654B2 (en) 2011-11-18 2022-05-31 Nike, Inc. Automated 3-D modeling of shoe parts
US10667581B2 (en) 2011-11-18 2020-06-02 Nike, Inc. Automated identification and assembly of shoe parts
CN107452051A (en) * 2011-11-18 2017-12-08 耐克创新有限合伙公司 The D modeling methods of automation 3 and system of shoes parts
US9939803B2 (en) 2011-11-18 2018-04-10 Nike, Inc. Automated manufacturing of shoe parts
US11317681B2 (en) 2011-11-18 2022-05-03 Nike, Inc. Automated identification of shoe parts
US11266207B2 (en) 2011-11-18 2022-03-08 Nike, Inc. Automated identification and assembly of shoe parts
US10194716B2 (en) 2011-11-18 2019-02-05 Nike, Inc. Automated identification and assembly of shoe parts
CN107452051B (en) * 2011-11-18 2020-12-04 耐克创新有限合伙公司 Automated 3-D modeling method and system for shoe parts
CN102917175A (en) * 2012-09-13 2013-02-06 西北工业大学 Sheltering multi-target automatic image matting method based on camera array synthetic aperture imaging
CN104068561A (en) * 2013-03-29 2014-10-01 林恭志 Foot analysis system, computer program appliance and carrier thereof
CN104643407B (en) * 2013-11-19 2019-11-19 耐克创新有限合伙公司 Visual lines of occlusion of having ready conditions for footwear
CN104643407A (en) * 2013-11-19 2015-05-27 耐克创新有限合伙公司 Conditionally visible bite lines for footwear
CN103824288A (en) * 2014-02-17 2014-05-28 哈尔滨工业大学 Array image registration template for lens array
CN106157367A (en) * 2015-03-23 2016-11-23 联想(北京)有限公司 Method for reconstructing three-dimensional scene and equipment
CN106157367B (en) * 2015-03-23 2019-03-08 联想(北京)有限公司 Method for reconstructing three-dimensional scene and equipment
CN106296799A (en) * 2015-06-10 2017-01-04 西安蒜泥电子科技有限责任公司 Characteristic point for object scanning is supplemented and extracting method
CN105913444A (en) * 2016-05-03 2016-08-31 华南农业大学 Livestock body contour reconstruction method and body condition scoring method based on soft laser ranging
CN105913444B (en) * 2016-05-03 2019-07-19 华南农业大学 Livestock figure profile reconstructing method and Body Condition Score method based on soft laser ranging
CN106952249B (en) * 2017-02-20 2020-06-09 广东电网有限责任公司惠州供电局 Insulator string axis extraction method based on cross ratio invariance
CN106952249A (en) * 2017-02-20 2017-07-14 广东电网有限责任公司惠州供电局 Insulator chain axis detection method based on Cross ration invariability
CN107114863A (en) * 2017-07-07 2017-09-01 李宁体育(上海)有限公司 For the manufacture method and manufacture system of the shoe tree for testing footwear non-skid property
CN107114863B (en) * 2017-07-07 2022-09-09 李宁体育(上海)有限公司 Manufacturing method and manufacturing system of shoe tree for testing anti-skid performance of shoes
CN107183835A (en) * 2017-07-24 2017-09-22 重庆小爱科技有限公司 A kind of method of use mobile phone photograph scanning generation human foot model and data
CN111278321A (en) * 2017-08-25 2020-06-12 鞋履检索公司 System and method for footwear sizing
CN108446597B (en) * 2018-02-14 2019-06-25 天目爱视(北京)科技有限公司 A kind of biological characteristic 3D collecting method and device based on Visible Light Camera
CN108446597A (en) * 2018-02-14 2018-08-24 天目爱视(北京)科技有限公司 A kind of biological characteristic 3D collecting methods and device based on Visible Light Camera
CN109043737A (en) * 2018-08-13 2018-12-21 顾萧 A kind of footwear customization system based on 3-D scanning and three-dimensional modeling
CN109816724A (en) * 2018-12-04 2019-05-28 中国科学院自动化研究所 Three-dimensional feature extracting method and device based on machine vision
CN109816724B (en) * 2018-12-04 2021-07-23 中国科学院自动化研究所 Three-dimensional feature extraction method and device based on machine vision
CN109815813B (en) * 2018-12-21 2021-03-05 深圳云天励飞技术有限公司 Image processing method and related product
CN109815813A (en) * 2018-12-21 2019-05-28 深圳云天励飞技术有限公司 Image processing method and Related product
CN111602922A (en) * 2019-02-26 2020-09-01 韩建林 Method for making handmade leather shoes
CN110211100A (en) * 2019-05-20 2019-09-06 浙江大学 A kind of foot measurement method of parameters based on image
CN110490973B (en) * 2019-08-27 2022-09-16 大连海事大学 Model-driven multi-view shoe model three-dimensional reconstruction method
CN110490973A (en) * 2019-08-27 2019-11-22 大连海事大学 A kind of multiple view shoes model three-dimensional rebuilding method of model-driven
CN111553986A (en) * 2020-05-19 2020-08-18 北京数字绿土科技有限公司 Construction method and construction device of triangulation network and generation method of digital surface model
CN112229642B (en) * 2020-08-06 2022-08-19 沈阳工业大学 Passenger vehicle driving dynamic comfort test analysis method based on ergonomics
CN112229642A (en) * 2020-08-06 2021-01-15 沈阳工业大学 Passenger vehicle driving dynamic comfort test analysis method based on ergonomics
CN112465755B (en) * 2020-11-18 2021-09-10 熵智科技(深圳)有限公司 Initial sub-area subdivision method and device, computer equipment and storage medium
CN112465755A (en) * 2020-11-18 2021-03-09 熵智科技(深圳)有限公司 Initial sub-area subdivision method and device, computer equipment and storage medium
CN112617809A (en) * 2020-12-24 2021-04-09 新拓三维技术(深圳)有限公司 Footprint area calculation method and system
CN112617809B (en) * 2020-12-24 2024-05-24 新拓三维技术(深圳)有限公司 Foot print area calculation method and system
CN113673457A (en) * 2021-08-26 2021-11-19 北京环境特性研究所 Analog measurement image processing method and device, computing equipment and storage medium
CN113673457B (en) * 2021-08-26 2023-06-30 北京环境特性研究所 Analog measurement image processing method, device, computing equipment and storage medium
CN113615936A (en) * 2021-08-31 2021-11-09 浙江奥云数据科技有限公司 Intelligent system for shoe customization
CN113744394A (en) * 2021-11-05 2021-12-03 广东时谛智能科技有限公司 Shoe tree three-dimensional modeling method, device, equipment and storage medium
CN114923665A (en) * 2022-05-27 2022-08-19 上海交通大学 Image reconstruction method and image reconstruction test system for wave three-dimensional height field

Similar Documents

Publication Publication Date Title
CN102157013A (en) System for fully automatically reconstructing foot-type three-dimensional surface from a plurality of images captured by a plurality of cameras simultaneously
CN107767442B (en) Foot type three-dimensional reconstruction and measurement method based on Kinect and binocular vision
CN109872397B (en) Three-dimensional reconstruction method of airplane parts based on multi-view stereo vision
CN109410256B (en) Automatic high-precision point cloud and image registration method based on mutual information
CN107358648B (en) Real-time full-automatic high quality three-dimensional facial reconstruction method based on individual facial image
CN108549873B (en) Three-dimensional face recognition method and three-dimensional face recognition system
CN101887477B (en) Method for customizing digitalized shoe trees according to a plurality of images of foot shapes
US6664956B1 (en) Method for generating a personalized 3-D face model
CN107945268B (en) A kind of high-precision three-dimensional method for reconstructing and system based on binary area-structure light
CN104126989B (en) A kind of based on the foot surfaces 3 D information obtaining method under multiple stage RGB-D pick up camera
CN102222357B (en) Foot-shaped three-dimensional surface reconstruction method based on image segmentation and grid subdivision
US10013803B2 (en) System and method of 3D modeling and virtual fitting of 3D objects
CA2764135C (en) Device and method for detecting a plant
CN103971409A (en) Measuring method for foot three-dimensional foot-type information and three-dimensional reconstruction model by means of RGB-D camera
CN109949899A (en) Image three-dimensional measurement method, electronic equipment, storage medium and program product
CN106780619A (en) A kind of human body dimension measurement method based on Kinect depth cameras
CN101347332A (en) Measurement method and equipment of digitized measurement system of human face three-dimensional surface shape
CN104573180A (en) Real-person shoe type copying device and shoe tree manufacturing method based on single-eye multi-angle-of-view robot vision
CN110500954A (en) A kind of aircraft pose measuring method based on circle feature and P3P algorithm
CN108898673A (en) A kind of reconstruct foot triangle grid model processing method and system
CN108615256A (en) A kind of face three-dimensional rebuilding method and device
CN109102569A (en) A kind of reconstruct foot point cloud model processing method and system
CN110074788A (en) A kind of body data acquisition methods and device based on machine learning
CN114119872A (en) Method for analyzing 3D printing intraspinal plants based on artificial intelligence big data
CN109801326A (en) It is a kind of for obtaining the image measuring method of human somatotype data

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20110817