CN109816724A - Three-dimensional feature extraction method and device based on machine vision - Google Patents

Three-dimensional feature extraction method and device based on machine vision

Info

Publication number
CN109816724A
CN109816724A (application CN201811474153.4A; granted as CN109816724B)
Authority
CN
China
Prior art keywords
measured
characteristic point
dimensional feature
machine vision
extracting method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811474153.4A
Other languages
Chinese (zh)
Other versions
CN109816724B (en)
Inventor
沈震
熊刚
李志帅
彭泓力
郭超
董西松
商秀芹
王飞跃
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science
Priority to CN201811474153.4A (CN109816724B)
Publication of CN109816724A
Priority to PCT/CN2019/105962 (WO2020114035A1)
Application granted
Publication of CN109816724B
Legal status: Active
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features

Abstract

The invention belongs to the field of machine vision, and specifically provides a three-dimensional feature extraction method and device based on machine vision. The invention aims to solve problems of the prior art such as the complicated and time-consuming three-dimensional model reconstruction process and the difficulty of popularizing it. To this end, the machine-vision-based three-dimensional feature extraction method of the invention comprises the steps of: acquiring multi-angle images of an object containing preset feature points to be measured; extracting the position of each feature point to be measured in each image; obtaining the spatial position of each feature point to be measured from its positions in the images; and, based on the spatial positions and a preset three-dimensional feature category, calculating the first distance information and/or second distance information corresponding to a given feature point. Images containing the feature points to be measured are acquired from different angles by machine vision and the spatial positions of the feature points are obtained from them, so that the distance information of the object can be calculated.

Description

Three-dimensional feature extraction method and device based on machine vision
Technical field
The invention belongs to the field of machine vision, and in particular relates to a three-dimensional feature extraction method and device based on machine vision.
Background technique
With the development of cloud manufacturing and cloud computing and the approach of "Industry 4.0", a social manufacturing mode, that is, a customer-oriented, customized mode of production, has emerged. Social manufacturing is characterized by converting consumer demand into products. Grounded in social computing theory and building on mobile Internet technology, social media, and 3D printing, it allows the general public to participate fully, through crowdsourcing and similar forms, in the whole product life cycle, realizing personalized, real-time, and economical models of production and consumption. In other words, in social manufacturing every consumer can take part in each stage of the product life cycle, including the design, manufacture, and consumption of the product. Taking shoemaking as an example, social manufacturing lets users customize and select shoes according to their own needs, which requires that the three-dimensional features of a user's foot shape be obtained simply, quickly, and accurately.
However, traditional hand measurement yields few foot-shape parameters and cannot describe the foot shape accurately; only the professional tools of the shoemaking industry can obtain accurate measurements. To let laymen also obtain accurate foot-shape parameters and thus realize the personalized customization of shoes, the invention proposes a method of computing foot-shape parameters by building a model. Since arch height and the angle between the toes and the plantar surface differ from person to person, obtaining only the two feature dimensions of foot length and foot width cannot accurately reflect the differences between individual feet belonging to the same model; a three-dimensional model of the foot shape must therefore be reconstructed to obtain accurate foot-shape parameters. At present, three-dimensional reconstruction of the foot shape can be performed with equipment such as laser 3D scanners, but this approach is complicated to operate, time-consuming, and expensive in hardware, and is therefore difficult to popularize. A simpler three-dimensional modeling method for accurately obtaining foot-shape parameters is thus necessary.
Correspondingly, a new three-dimensional model reconstruction method is needed in the art to solve the above problems.
Summary of the invention
To solve the above problems of the prior art, namely that existing three-dimensional model reconstruction is complicated, time-consuming, and difficult to popularize, a first aspect of the invention discloses a three-dimensional feature extraction method based on machine vision, comprising the following steps: acquiring multi-angle images of an object that contains a reference object and preset feature points to be measured set relative to the reference object; extracting the position of each feature point to be measured in each image; obtaining the spatial position of each feature point to be measured from its positions in the images; and, based on the spatial positions and a preset three-dimensional feature category, calculating the first distance information and/or second distance information corresponding to a given feature point to be measured. The first distance information is the distance between the given feature point and other feature points to be measured, and the second distance information is the perpendicular distance between the given feature point and a preset plane; the given feature point, the other feature points, and the plane each depend on the three-dimensional feature category.
In a preferred technical solution of the above machine-vision-based three-dimensional feature extraction method, the step of "extracting the position of each feature point to be measured in each image" comprises: obtaining the pixel position of the feature point to be measured in one of the images by manual labeling; and, using a preset feature-point matching method together with the labeled pixel position, extracting the corresponding pixel positions of the feature point in the other images.
In a preferred technical solution of the above machine-vision-based three-dimensional feature extraction method, the step of "extracting the position of each feature point to be measured in each image" comprises: obtaining the shape of the region of the object in which the feature point to be measured lies; obtaining the corresponding region to be measured in each image according to that region shape; and obtaining the position of the feature point in each image according to the relative position between the feature point and the region shape and according to each region to be measured.
In a preferred technical solution of the above machine-vision-based three-dimensional feature extraction method, the step of "extracting the position of each feature point to be measured in each image" comprises: obtaining the positions of the feature points in each image with a neural network constructed in advance, where the neural network is a deep neural network trained on a preset training set with deep-learning algorithms.
In a preferred technical solution of the above machine-vision-based three-dimensional feature extraction method, the step of "obtaining the spatial position of the feature point to be measured from its positions in the images" comprises: obtaining the Euclidean position of the feature point with a triangulation algorithm, using the positions of the feature point in the images together with the intrinsic and extrinsic parameters of the camera.
In a preferred technical solution of the above machine-vision-based three-dimensional feature extraction method, the step of "obtaining the spatial position of the feature point to be measured from its positions in the images" comprises: constructing a sparse model from the positions of the feature points in the images with an incremental structure-from-motion (SfM) method, and computing the spatial positions of the feature points in the world coordinate system by triangulation; and restoring the spatial positions so obtained with a scale coefficient obtained in advance, yielding the true positions of the feature points.
In a preferred technical solution of the above machine-vision-based three-dimensional feature extraction method, before "restoring the spatial positions of the feature points in the world coordinate system obtained in the above step with a scale coefficient obtained in advance to obtain the true positions of the feature points", the method further comprises: using the sparse model and the pixel positions of the reference-object vertices in the camera coordinate system, obtaining the coordinates of the reference-object vertices in the world coordinate system, noting that these coordinates differ from the true spatial positions by a scale coefficient λ; and computing the scale coefficient λ from the world-coordinate vertex coordinates and the true spatial positions of the reference-object vertices.
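A minimal numerical sketch of this scale-recovery step, under the assumption that the reference object is A4 paper with vertices known in millimetres (the function name and the values are invented for illustration, not taken from the patent):

```python
import numpy as np

def scale_coefficient(recon_vertices, true_vertices):
    # The reconstructed vertices differ from the metric ones by a global
    # scale; estimate lambda as the mean ratio of all pairwise
    # inter-vertex distances.
    recon = np.asarray(recon_vertices, dtype=float)
    true = np.asarray(true_vertices, dtype=float)
    ratios = []
    for i in range(len(recon)):
        for j in range(i + 1, len(recon)):
            d_recon = np.linalg.norm(recon[i] - recon[j])
            if d_recon > 1e-12:
                ratios.append(np.linalg.norm(true[i] - true[j]) / d_recon)
    return float(np.mean(ratios))

# A4 vertices in mm (X = 210, Y = 297); the "reconstruction" here is
# simply the truth shrunk by half, so lambda should come out as 2.
true_pts = np.array([[0, 0, 0], [210, 0, 0], [0, 297, 0], [210, 297, 0]])
recon_pts = 0.5 * true_pts
lam = scale_coefficient(recon_pts, true_pts)
```

Multiplying every reconstructed feature point by λ then restores metric positions, as the claim describes.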
In a preferred technical solution of the above machine-vision-based three-dimensional feature extraction method, the triangulation algorithm comprises: obtaining the projective-space position of the feature point to be measured from the intrinsic and extrinsic parameters of the camera and the positions of the feature point in the images, and performing homogeneous division on the projective-space position to obtain the Euclidean-space position of the feature point.
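The triangulation-plus-homogeneous-division step can be sketched with a linear (DLT) triangulation; the cameras below are synthetic, with intrinsics and poses invented for illustration rather than taken from the patent:

```python
import numpy as np

def triangulate(proj_mats, pixels):
    # Each camera contributes two linear equations
    #   x * (P3 . X) - (P1 . X) = 0 and y * (P3 . X) - (P2 . X) = 0;
    # the homogeneous solution is the right singular vector with the
    # smallest singular value, and homogeneous division yields the
    # Euclidean position.
    rows = []
    for P, (x, y) in zip(proj_mats, pixels):
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.array(rows))
    X = vt[-1]
    return X[:3] / X[3]          # homogeneous division -> Euclidean

# Synthetic check: two cameras observing a known point.
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
R2 = np.array([[0.0, 0, 1], [0, 1, 0], [-1, 0, 0]])   # 90-degree yaw
t2 = np.array([[-2.0], [0], [2]])
P2 = K @ np.hstack([R2, t2])
X_true = np.array([0.3, -0.2, 4.0])

def project(P, X):
    h = P @ np.append(X, 1.0)
    return h[:2] / h[2]

X_est = triangulate([P1, P2], [project(P1, X_true), project(P2, X_true)])
```

Each additional view contributes two more rows, so a feature point seen in three or more images is solved in the same least-squares way.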
Those skilled in the art will appreciate that, in the technical solution of the invention, images of the object are acquired from different angles and the positions of the feature points to be measured are extracted from the images; the spatial positions of the feature points in the world coordinate system are then computed by solving a triangulation or sparse reconstruction problem, and the first distance information and/or second distance information between feature points is calculated from the spatial positions so obtained. The three-dimensional feature extraction method of the invention can determine the three-dimensional feature points of an object quickly with only the multi-angle images acquired by a photographing device, and then compute the object's distance information, without expensive and hard-to-operate hardware such as laser 3D scanners, thereby simplifying the three-dimensional reconstruction process.
In preferred technical solutions of the invention, the pixel position of a feature point to be measured in each image is determined manually or automatically, where the automatic methods include retrieving the region to be measured in each image from the shape of the region containing the feature point, and obtaining the positions of the feature points in each image with a neural network constructed in advance. The camera parameters are then calibrated automatically with the reference object and the true spatial positions of the feature points are found by triangulation or by solving a sparse reconstruction problem; no model of the entire object needs to be reconstructed, which reduces computation and simplifies the model-building process. Finally, the distance information corresponding to the feature points is calculated from their true spatial positions and the preset three-dimensional feature category.
A second aspect of the invention provides a storage device storing a plurality of programs adapted to be loaded by a processor to execute the machine-vision-based three-dimensional feature extraction method of any of the foregoing.
It should be noted that the storage device has all the technical effects of the foregoing machine-vision-based three-dimensional feature extraction method, which are not repeated here.
A third aspect of the invention provides a control device comprising a processor and a storage device, the storage device being adapted to store a plurality of programs and the programs being adapted to be loaded by the processor to execute the machine-vision-based three-dimensional feature extraction method of any of the foregoing.
It should be noted that the control device has all the technical effects of the foregoing machine-vision-based three-dimensional feature extraction method, which are not repeated here.
Brief description of the drawings
The machine-vision-based three-dimensional feature extraction method of the invention is described below with reference to the drawings, taking the foot shape as an example. In the drawings:
Fig. 1 is a flow chart of the main steps of a machine-vision-based foot-shape three-dimensional feature extraction method in an embodiment of the invention;
Figs. 2, 3, and 4 are schematic diagrams, from different angles, of detecting a feature point with a generalized Hough transform using a circle as the template in the method;
Fig. 5 is a schematic diagram of detecting the reference object with a generalized Hough transform using a straight line as the template in the method;
Fig. 6 is a schematic diagram of the process of solving the spatial positions of the feature points by triangulation in the method;
Fig. 7 is a schematic diagram of the process of solving the spatial positions of the feature points by sparse reconstruction in the method.
Specific embodiment
Preferred embodiments of the invention are described below with reference to the drawings. Those skilled in the art will understand that these embodiments are only used to explain the technical principles of the invention and are not intended to limit its scope of protection. For example, although the invention is described taking the foot shape as an example, the object may also be any other target that can be converted into a product by building a model, such as clothing. Similarly, although the invention is described with A4 paper as the reference object, any other object of known dimensions (such as a floor tile) may be used instead. Those skilled in the art can adjust these choices as needed to suit specific applications.
It should be noted that in the description of the invention the terms "first", "second", and "third" are used for description purposes only and are not to be understood as indicating or implying relative importance.
The machine-vision-based foot-shape three-dimensional feature extraction method provided by the invention is described below with reference to the drawings.
In a specific embodiment of the invention, the extraction and calculation of the three-dimensional parameters of the foot shape are converted into determining the spatial positions of the corresponding feature points, after which the foot-shape parameters to be measured are calculated with the Euclidean distance formula. The basic foot-shape parameters obtainable in this way include foot length, foot width, instep girth, arch-point height, big-toe height, heel-bulge height, lateral-ankle-center height, and other foot-shape parameters needed for shoemaking. Below, the acquisition of three parameters (foot length, foot width, and ankle-point height) is taken as an example to illustrate a possible implementation of the machine-vision-based foot-shape three-dimensional feature extraction method of the invention.
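Once the spatial positions of the feature points are known, the two kinds of distance information reduce to the Euclidean distance formula. The coordinates below are hypothetical (in millimetres) and the ground plane is assumed to be z = 0; this is an illustrative sketch, not values from the patent:

```python
import numpy as np

def point_distance(p, q):
    # First distance information: Euclidean distance between two feature
    # points (e.g. longest-toe tip to heel bulge gives the foot length).
    return float(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)))

def point_plane_distance(p, plane_point, plane_normal):
    # Second distance information: perpendicular distance from a feature
    # point to a preset plane (e.g. ankle point to the ground plane
    # gives the ankle height).
    n = np.asarray(plane_normal, float)
    n = n / np.linalg.norm(n)
    d = np.asarray(p, float) - np.asarray(plane_point, float)
    return float(abs(np.dot(d, n)))

toe_tip = np.array([251.0, 40.0, 12.0])   # hypothetical positions (mm)
heel = np.array([11.0, 52.0, 8.0])
ankle = np.array([60.0, 45.0, 70.0])
foot_length = point_distance(toe_tip, heel)
ankle_height = point_plane_distance(ankle, [0, 0, 0], [0, 0, 1])
```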
Referring first to Fig. 1, which schematically illustrates the main steps of the machine-vision-based foot-shape three-dimensional feature extraction method in an embodiment of the invention, the method may comprise the following steps:
Step S100: acquire multi-angle images of the object containing the preset feature points to be measured.
Specifically, the foot is placed squarely on a sheet of A4 paper, and a mobile photographing device such as a camera is used to shoot images of the foot shape from multiple angles, so that the foot-shape features are shown sufficiently and enough feature points to be measured can be obtained: for example, the tip of the longest toe and the heel bulge to compute the length of the foot, the lateral thenar point and the lateral point of the little-toe root to compute its width, or the ankle point to compute the height of the ankle. It should be noted that at least three images of the foot shape should be shot; the more images that contain the feature points to be measured, the more accurate the foot-shape parameters computed from them.
Step S200: extract the position of each feature point to be measured in each image.
Specifically, in a preferred implementation of this embodiment, the three-dimensional feature extraction method shown in Fig. 1 can obtain the pixel position (x, y) of a feature point to be measured in each image as follows:
First, the pixel position of the feature point to be measured in one image is labeled manually; a feature-point matching method, such as scale-invariant feature transform (SIFT) or iterative closest point (ICP), is then used to find the corresponding pixel positions of the feature point in the other images. Taking the measurement of the ankle height as an example: an image containing the ankle point is chosen and the pixel position of the ankle point in that image is labeled by hand; SIFT, ICP, or another feature-point matching method is then used to find the corresponding pixel positions of the ankle point in the images containing it that were taken from other angles. In this way the corresponding pixel positions of a feature point can be found quickly in all images without labeling each image by hand, which improves the efficiency of obtaining the pixel positions of the feature points.
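SIFT and ICP are library-scale algorithms; as a self-contained stand-in that conveys the idea (an assumption of this sketch, not the patent's stated method), the patch around a manually labeled point can be located in a second image by exhaustive normalized cross-correlation:

```python
import numpy as np

def match_template_ncc(image, template):
    # Exhaustive normalized-cross-correlation search: returns the
    # (row, col) of the top-left corner of the window in `image` that
    # best matches `template`.
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.linalg.norm(t)
    best, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.linalg.norm(wz) * t_norm
            if denom < 1e-12:
                continue
            score = float((wz * t).sum() / denom)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

# Synthetic check: the patch "labeled in view 1" is an exact copy of a
# region of the second view, so the search must recover its location.
rng = np.random.default_rng(0)
view2 = rng.normal(size=(40, 40))
patch = view2[17:25, 22:30].copy()
pos = match_template_ncc(view2, patch)
```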
Optionally, in another preferred implementation of this embodiment, the three-dimensional feature extraction method shown in Fig. 1 may also obtain the pixel position (x, y) of a feature point to be measured in each image as follows:
Exploiting the uniqueness of the shape of the region in which the feature point to be measured lies, a shape-detection method such as the generalized Hough transform is used to detect that specific shape and thereby determine the position of the feature point in each image. Specifically, the shape of the region corresponding to the feature point is determined first; the generalized Hough transform is then used with that region shape to find automatically the region to be measured corresponding to the feature point in each image; and the position of the feature point in each image is finally obtained from the relative position between the feature point and the region shape together with the region to be measured found in each image. A possible implementation with a circle as the template is illustrated below.
Referring to Figs. 2, 3, and 4, which show, from different angles, a specific implementation of finding a feature point with a generalized Hough transform using a circle as the template: the ankle region around the ankle center is circular, and, as can be seen from the figures, this circular contour is unique in the foot shape. Therefore, when the generalized Hough transform is applied with a circle as the template, the circular position in the image (the circular template drawn with a dotted line in Figs. 2-4), which is the position of the ankle, is found automatically, and the center G of the found circle is the position of the ankle feature point in the image.
It will be understood that, when determining the position of the tip of the longest toe, the contour of the longest toe can be used as the template of the generalized Hough transform: the image is scanned, and after the toe contour is found, the pixel position of the feature point is determined from the relative position between the contour and the tip of the longest toe.
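The circle-template voting used for the ankle point in Figs. 2-4 can be sketched as follows. This is a simplified Hough transform that assumes the circle radius is known, with synthetic edge points standing in for a real edge map:

```python
import numpy as np

def hough_circle_center(edge_points, radius, shape):
    # Every edge point votes for the centers of all circles of the
    # given radius passing through it; the accumulator maximum is the
    # detected circle center (point G in the description above).
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    for (y, x) in edge_points:
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return np.unravel_index(np.argmax(acc), acc.shape)

# Synthetic edge image: a circle of radius 10 centred at (30, 40).
angles = np.linspace(0, 2 * np.pi, 120, endpoint=False)
edges = [(30 + 10 * np.sin(a), 40 + 10 * np.cos(a)) for a in angles]
center = hough_circle_center(edges, 10, (64, 64))
```

A full generalized Hough transform would also vote over unknown radius or an arbitrary template contour (as for the toe profile), but the accumulator idea is the same.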
Optionally, in yet another preferred implementation of this embodiment, the three-dimensional feature extraction method shown in Fig. 1 may also obtain the pixel position (x, y) of a feature point to be measured in each image as follows:
A deep neural network is built with deep-learning algorithms from a sufficient number of data samples of labeled foot-shape feature points, and the network is then used to obtain the position of the feature point in each image. Specifically, when the network is trained, the input is image data containing the feature point to be measured and the output is the pixel position (x, y) of the feature point in the image. The output comprises the actual output and the desired output: the actual output of the last fully connected layer of the network is the predicted pixel position (x, y) of the feature point in the image, and the desired output is the labeled ground-truth pixel position of the feature point. The whole network is trained backwards with the error between the actual output and the desired output, and training is iterated until the network converges. After training, when a test image containing the feature point is input, the network automatically outputs the pixel position of the point in the image. Taking the pixel position of the ankle point as an example: a sufficient number of images labeled with the ankle point are selected as the training set, a deep neural network is built and trained on it, and after training, when a test image containing the ankle point is input, the network automatically outputs the pixel position of the ankle point in the image. It will be understood that, when determining the pixel position of another feature point, a deep neural network built in advance is trained with image samples corresponding to that feature point, and a test image containing the point is then input to obtain its pixel position in the image.
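As a toy stand-in for the deep keypoint-regression network (a deliberate simplification, not the patent's architecture), a single linear layer trained by gradient descent on synthetic one-bright-pixel "images" already exhibits the actual-output versus desired-output, error-driven training loop described above:

```python
import numpy as np

rng = np.random.default_rng(1)
H = W = 8
n = 200
# Synthetic "images": a single bright pixel; the label is its (x, y).
X = np.zeros((n, H * W))
Y = np.zeros((n, 2))
for i in range(n):
    r, c = rng.integers(0, H), rng.integers(0, W)
    X[i, r * W + c] = 1.0
    Y[i] = (c, r)

# One linear layer regressing the pixel position, trained by gradient
# descent on the squared error between actual and desired output.
Wt = rng.normal(scale=0.01, size=(H * W, 2))
b = np.zeros(2)

def loss():
    return float(((X @ Wt + b - Y) ** 2).mean())

loss0 = loss()
for _ in range(500):
    err = X @ Wt + b - Y            # actual output minus desired output
    Wt -= 0.5 * (X.T @ err) / n     # error backpropagated to the weights
    b -= 0.5 * err.mean(axis=0)
loss1 = loss()
```

In practice the patent's method would use a deep convolutional network and real labeled foot images; only the converge-by-error-feedback loop carries over from this sketch.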
Step S300 obtains the spatial position of characteristic point to be measured according to the location information of characteristic point to be measured in each image Information.
Specifically, three-dimensional feature extracting method shown in FIG. 1 can be by a preferred embodiment of the present embodiment The spatial positional information of characteristic point to be measured is obtained according to following steps, specifically:
First with object of reference calibration for cameras parameter, the spatial position of characteristic point to be measured is then calculated using Triangulation Algorithm Information.Specifically, foot type is placed on A4 paper for using A4 paper as object of reference, obtained using picture pick-up device, such as camera The image of multiple and different angles contains the profile of A4 paper in the image of these different angles.Utilize the figure of these different angles As carrying out calibration for cameras, camera Intrinsic Matrix K is determined, outer parameter is with respect to world coordinate system spin matrix R, translation matrix t.Then The location of pixels (x, y) of the characteristic point to be measured obtained according to step S200 in the picture, and utilize Triangulation Algorithm and homogeneous Change the spatial positional information (X, Y, Z) for solving characteristic point to be measured under world coordinate system.Illustrate to lead to below with reference to Fig. 5 and Fig. 6 Cross the possible implementation that Triangulation Algorithm obtains characteristic point true spatial location.
Referring to Fig. 5, Fig. 5 be in the embodiment of the present invention a kind of foot type three-dimensional feature extracting method based on machine vision with Straight line is the schematic diagram that template detects object of reference using generalised Hough transform.As shown in figure 5, using Linear Template and using at random The edge line of A4 paper in Hough transformation detection image.As can be seen that detect four edges edge straight line, each two two-phase of straight line It hands over, intersection point is the location of pixels (x of four vertex (A, B, C, D) of A4 paperi, yi), i=1,2,3,4.With continued reference to Fig. 2, Fig. 3 And Fig. 4, A point can be obtained in the following relationship of Euclidean space and projective space by space geometry transformation knowledge:
In formula (1) parameter K, R and t be respectively camera Intrinsic Matrix, camera with respect to world coordinate system spin matrix With translation matrix ([R | t] is collectively referred to as Camera extrinsic matrix number).Wherein, symbol " | " represents augmented matrix, r1、r2、r3It is respectively Expanded form of the camera with respect to the spin matrix R of world coordinate system, by matrix multiplication it is found that r3Disappear with 0 element multiplication.
Here (xA, yA) is the pixel position of vertex A of the A4 sheet, (XA, YA, ZA)ᵀ is its true position under the world coordinate system, and K[R | t] are the intrinsic and extrinsic parameters of the camera. The homography matrix H = K[r1 r2 | t] has 8 degrees of freedom. With the world coordinate system established at vertex A of the A4 sheet, the world coordinates of the four vertices are (0, 0, 0), (X, 0, 0), (0, Y, 0) and (X, Y, 0), where X = 210 mm and Y = 297 mm. Writing each vertex in the form of formula (1) yields two linear equations; the four vertices therefore yield 8 linear equations, from which H is solved by the direct linear transform (Direct Linear Transform, DLT).
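As an illustrative sketch (with made-up pixel values, not code from the patent), the 8-equation DLT solve described above can be written as follows, solving A·h = 0 by SVD:

```python
import numpy as np

def homography_dlt(plane_pts, pixel_pts):
    # Each correspondence (X, Y) -> (x, y) contributes two rows of the
    # linear system A h = 0 (formula (1) rearranged); the 4 sheet vertices
    # give the 8 equations mentioned above, solved via the SVD null vector.
    A = []
    for (X, Y), (x, y) in zip(plane_pts, pixel_pts):
        A.append([X, Y, 1, 0, 0, 0, -x * X, -x * Y, -x])
        A.append([0, 0, 0, X, Y, 1, -y * X, -y * Y, -y])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # fix the overall scale of H

# A4 vertices in the world frame (mm), origin at vertex A:
plane = [(0, 0), (210, 0), (0, 297), (210, 297)]
```

Calling `homography_dlt(plane, pixels)` with the four detected vertex pixels returns the homography H of that image.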
Since the three photographs are acquired from different angles, the camera pose differs for each photograph; by the same method, the three homography matrices H1, H2, H3 between the world coordinate system and the camera are obtained.
K can be recovered from the homography matrix H. Since H = [h1 h2 h3] = K[r1 r2 | t], it follows that:
K⁻¹[h1 h2 h3] = [r1 r2 | t] (2)
In formula (2), K⁻¹, R, t and H are the inverse of the camera intrinsic matrix, the rotation matrix of the camera relative to the world coordinate system, the translation matrix, and the homography matrix, respectively. Here r1 and r2 are the first two columns of the rotation matrix, and h1, h2, h3 are the columns of the homography matrix H.
Since R = [r1 r2 r3] is a rotation matrix, it is orthogonal, i.e. r1ᵀr2 = 0 and ‖r1‖ = ‖r2‖ = 1. Therefore h1ᵀK⁻ᵀK⁻¹h2 = 0, and further:
h1ᵀK⁻ᵀK⁻¹h1 = h2ᵀK⁻ᵀK⁻¹h2 (3)
In formula (3), K⁻ᵀ and K⁻¹ are the inverse transpose and the inverse of the camera intrinsic matrix, h1 and h2 are the first two columns of the homography matrix of an image, and h1ᵀ, h2ᵀ are their transposes. From the above, each image yields two constraint equations on the camera intrinsic parameters.
The camera intrinsic matrix K is upper triangular, and ω = K⁻ᵀK⁻¹ is a symmetric matrix. From the images of three different angles in Figs. 2, 3 and 4, ω is solved linearly by DLT, and K is then recovered from ω by decomposition (e.g. Cholesky decomposition). From formula (2), [r1 r2 | t] = K⁻¹[h1 h2 h3]; combining this with the previously solved h1, h2, h3 and K gives r1, r2 and t. By the orthogonality of the rotation matrix, r3 = r1 × r2, so R = [r1 r2 r3]. In this way the intrinsic and extrinsic parameters K[R1 | t1], K[R2 | t2], K[R3 | t3] of the camera when capturing Figs. 2, 3 and 4 are obtained.
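The calibration steps above — two linear constraints per image on ω = K⁻ᵀK⁻¹ (formula (3) and h1ᵀωh2 = 0), a DLT solve for ω, and a decomposition to recover K — can be sketched as follows. This is a hedged illustration of the classical planar-calibration computation, not the patent's code; the intrinsic matrix, poses and scale factor in the check at the end are made-up values:

```python
import numpy as np

def v_ij(H, i, j):
    # Constraint row on the 6 independent entries of the symmetric matrix
    # w = K^-T K^-1, built from columns i and j of a homography H.
    return np.array([
        H[0, i] * H[0, j],
        H[0, i] * H[1, j] + H[1, i] * H[0, j],
        H[1, i] * H[1, j],
        H[2, i] * H[0, j] + H[0, i] * H[2, j],
        H[2, i] * H[1, j] + H[1, i] * H[2, j],
        H[2, i] * H[2, j],
    ])

def intrinsics_from_homographies(Hs):
    # Each homography yields two linear constraints: h1^T w h2 = 0 and
    # h1^T w h1 = h2^T w h2 (formula (3)); stack them, take the SVD null
    # vector (the DLT step), then recover K by Cholesky decomposition.
    V = []
    for H in Hs:
        V.append(v_ij(H, 0, 1))
        V.append(v_ij(H, 0, 0) - v_ij(H, 1, 1))
    _, _, Vt = np.linalg.svd(np.asarray(V))
    b = Vt[-1]
    if b[5] < 0:                       # fix the sign so w is positive definite
        b = -b
    W = np.array([[b[0], b[1], b[3]],
                  [b[1], b[2], b[4]],
                  [b[3], b[4], b[5]]])
    L = np.linalg.cholesky(W)          # W = L L^T, with L^T = K^-1 up to scale
    K = np.linalg.inv(L.T)
    return K / K[2, 2]                 # normalize so that K[2,2] = 1

# Synthetic check with a made-up K and three made-up poses. Coordinates are
# kept near 1; with real pixel-scale data, the image coordinates should be
# normalized first for numerical conditioning.
def _rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def _ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

K_true = np.array([[2.0, 0.0, 0.5],
                   [0.0, 1.8, 0.4],
                   [0.0, 0.0, 1.0]])
poses = [(_rx(0.2) @ _ry(0.1), [0.1, 0.0, 2.0]),
         (_rx(-0.1) @ _ry(0.3), [-0.2, 0.1, 2.5]),
         (_rx(0.3) @ _ry(-0.2), [0.0, -0.1, 1.8])]
Hs = [1.7 * K_true @ np.column_stack([R[:, 0], R[:, 1], t]) for R, t in poses]
K_est = intrinsics_from_homographies(Hs)
```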
Referring to Fig. 6, Fig. 6 is a schematic diagram of solving the spatial position of a feature point by triangulation in a machine-vision-based foot three-dimensional feature extraction method according to an embodiment of the present invention. As shown in Fig. 6, the triangulation process is illustrated with the ankle point G in Fig. 3 and Fig. 4 (i.e. Image1 and Image2): given the pixel positions x1 and x2 of the ankle point G in Image1 and Image2 obtained in step S200, and the camera parameters P1 = K1[R1 | t1] and P2 = K2[R2 | t2] solved in the above steps, the sum of squared reprojection errors min Σi ‖PiX − xi‖ is minimized to obtain the position X = (M, N, O, w) of the feature point in projective space. Here P1 and P2 are the intrinsic and extrinsic parameters of the camera when capturing Image1 and Image2, obtained by the calibration method; K1 and K2 are the corresponding camera intrinsic matrices; R1 and R2 are the rotation matrices relative to the world coordinate system; and t1 and t2 are the translation matrices. Finally, homogeneous division of the projective-space coordinates yields the Euclidean position of feature point G: X = (M/w, N/w, O/w) = (X, Y, Z), where M, N, O, w are the coordinates of G in projective space.
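The patent minimizes the sum of squared reprojection errors; a common closed-form variant with which such a minimization is often initialized is linear (DLT) triangulation. The sketch below uses two hypothetical camera matrices and a hypothetical ankle-point position, not values from the patent:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    # Linear (DLT) triangulation. Each view contributes two equations
    # x*P[2] - P[0] = 0 and y*P[2] - P[1] = 0 of A X = 0; the SVD null
    # vector is the projective-space point X = (M, N, O, w).
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]        # homogeneous division -> Euclidean (X, Y, Z)

# Hypothetical cameras P1 = K[I|0] and P2 = K[R|t]:
K = np.array([[2.0, 0.0, 0.5],
              [0.0, 2.0, 0.4],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
c, s = np.cos(0.1), np.sin(0.1)
R2 = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
P2 = K @ np.hstack([R2, np.array([[-0.5], [0.0], [0.1]])])

G_true = np.array([0.2, -0.1, 3.0, 1.0])   # hypothetical ankle point G
x1 = (P1 @ G_true)[:2] / (P1 @ G_true)[2]  # its projections in the two views
x2 = (P2 @ G_true)[:2] / (P2 @ G_true)[2]
G_rec = triangulate(P1, P2, x1, x2)        # recovered Euclidean position
```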
Optionally, in another preferred implementation of this embodiment, the three-dimensional feature extraction method shown in Fig. 1 may also obtain the true spatial positions of the feature points to be measured according to the following steps, specifically:
The three-dimensional reconstruction problem is converted into a sparse reconstruction problem over the feature points to be measured: a sparse model is constructed, e.g. with an incremental structure-from-motion (SfM) method, and the sparse reconstruction problem is solved by triangulation. Specifically, unlike the previous implementation, from the pixel positions (x, y) of the feature points in the multiple images obtained in step S200, the incremental SfM method directly solves for the camera intrinsic matrix K, the camera rotation matrix R, the translation t relative to the world coordinate system, and the feature-point coordinates λ(X, Y, Z) under the world coordinate system, omitting the calibration of the camera with the reference object; a reference object of known size is then used to determine the scale coefficient λ, from which the true spatial coordinates (X, Y, Z) of the feature points are obtained. A possible implementation of solving the sparse reconstruction problem with the incremental SfM method is described below with reference to Fig. 7, taking images of 3 different angles as an example.
Referring to Fig. 7, Fig. 7 is a schematic diagram of solving the spatial position of a feature point by sparse reconstruction in a machine-vision-based foot three-dimensional feature extraction method according to an embodiment of the present invention. As shown in Fig. 7, solving the sparse reconstruction problem with the incremental SfM method specifically includes the following steps:
Step 1: randomly select two images, Image1 and Image2, from the images of 3 different angles as the initial image pair, and compute initial values of the intrinsic and extrinsic parameter matrices [R | t] of the camera for Image1 and Image2 using the incremental SfM method: using the 5 feature-point correspondences between Image1 and Image2 (longest-toe vertex, heel salient point, big-toe ball outer point, little-toe root outer point and ankle point), the essential matrices E1 and E2 corresponding to Image1 and Image2 are computed with the five-point algorithm; from an essential matrix E, the camera rotation matrices R1, R2 and the translations t1, t2 relative to the world coordinate system can be decomposed. Then, the initial sparse model is constructed from the pixel positions of the feature points in Image1 and Image2 under the camera coordinate system obtained in step S200;
Step 2: from the initial sparse model constructed in Step 1, compute by triangulation the position coordinates λ(X1, Y1, Z1) and λ(X2, Y2, Z2) of the feature points under the world coordinate system for images Image1 and Image2;
Step 3: input the pixel positions of the feature points of image Image3 under the camera coordinate system, obtained in step S200, into the initial sparse model obtained in Step 2; the camera intrinsic and extrinsic parameter matrix [R | t] — i.e. the camera rotation matrix R3 and the translation t3 relative to the world coordinate system — is then re-estimated, and the initial sparse model is corrected with these camera parameters;
Step 4: from the sparse model corrected in Step 3, compute by triangulation the spatial position coordinates λ(X3, Y3, Z3) of the feature points under the world coordinate system for image Image3;
Step 5: refine the feature-point position coordinates obtained in Steps 2 and 4 with bundle adjustment (Bundle Adjustment, BA) to obtain the optimized sparse model.
In Step 5, the coordinate positions of the feature points in the remaining images are obtained and bundle adjustment is repeated, until the error between the feature-point coordinates λ(X, Y, Z) computed in two successive iterations is less than or equal to a preset threshold.
Although the present invention only provides the specific implementation of solving the spatial positions of the feature points in three images with the incremental SfM method, those skilled in the art will appreciate that the incremental SfM method provided by the present invention can also be applied to images of more different angles: during the construction of the sparse model, the pixel positions of the feature points of each new image under the camera coordinate system are substituted in turn, the camera intrinsic and extrinsic parameters are re-estimated, and the sparse model is corrected with them, until all images have been added to the sparse model. It will be understood that the more angles the acquired images cover, the more iterations are computed, the more accurate the obtained camera parameters are, and the more accurate the spatial positions of the feature points under the world coordinate system computed from the resulting sparse model are.
Step 6: taking point A in Fig. 4 as the coordinate origin, the space coordinates (M, N, 0) of vertex D are computed from the pixel position of A4 vertex D under the camera coordinate system obtained in step S200 and the sparse model obtained in Step 5. Since the true spatial position of vertex D is (210 mm, 297 mm, 0), the scale coefficient is λ = M/210 mm = N/297 mm. The space coordinates λ(X, Y, Z) of each feature point under the world coordinate system obtained in Step 5 are then divided by the scale coefficient λ to obtain the true spatial position (X, Y, Z) of the feature point.
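A minimal numeric sketch of Step 6, with made-up model coordinates (the true A4 vertex D is at (210 mm, 297 mm, 0), and the sparse model is assumed to be a uniform scaling of the true scene):

```python
# Hypothetical sparse-model coordinates of A4 vertex D:
M, N = 52.5, 74.25
lam = M / 210.0                       # scale coefficient (model units per mm)
assert abs(lam - N / 297.0) < 1e-12   # consistent along both sheet edges

ankle_model = (15.0, 12.5, 17.5)      # hypothetical model coordinates λ(X, Y, Z)
ankle_true = tuple(c / lam for c in ankle_model)   # true position in mm
```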
In step S400, based on the spatial position information and a preset three-dimensional feature category, first distance information and/or second distance information corresponding to a certain feature point to be measured is computed.
It should be noted that the first distance information is the distance between a certain feature point to be measured and other feature points to be measured, e.g. a length, and the second distance information is the vertical distance between a certain feature point to be measured and a preset plane, e.g. a height.
Specifically, taking the foot as an example, from the spatial positions of the five feature points computed in step S300 — e.g. longest-toe vertex (X1, Y1, Z1), heel salient point (X2, Y2, Z2), big-toe ball outer point (X3, Y3, Z3), little-toe root outer point (X4, Y4, Z4) and ankle point (X5, Y5, Z5) — and using a distance formula such as the Euclidean distance d = √((x1 − x2)² + (y1 − y2)² + (z1 − z2)²), the following can be computed:

L = √((X1 − X2)² + (Y1 − Y2)² + (Z1 − Z2)²)
W = √((X3 − X4)² + (Y3 − Y4)² + (Z3 − Z4)²) (4)
H = Z5
In formula (4), the parameters L, W and H are the foot length, the foot width and the ankle height, respectively.
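Formula (4) can be sketched in code with hypothetical feature-point coordinates (in mm, ground plane at Z = 0 — all values below are made up for illustration):

```python
import math

def dist(p, q):
    # Euclidean distance between two 3-D feature points (formula (4))
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Hypothetical feature-point positions in the world frame (mm):
toe_tip  = (245.0, 40.0, 8.0)    # longest-toe vertex          (X1, Y1, Z1)
heel     = (5.0, 45.0, 10.0)     # heel salient point          (X2, Y2, Z2)
ball_in  = (185.0, 8.0, 20.0)    # big-toe ball outer point    (X3, Y3, Z3)
ball_out = (150.0, 95.0, 18.0)   # little-toe root outer point (X4, Y4, Z4)
ankle    = (60.0, 50.0, 70.0)    # ankle point                 (X5, Y5, Z5)

L = dist(toe_tip, heel)      # foot length
W = dist(ball_in, ball_out)  # foot width
H = ankle[2]                 # ankle height above the Z = 0 ground plane
```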
Thus the three parameters of foot length, foot width and ankle-point height can be obtained. Although the present invention only provides the specific implementation of computing foot length, foot width and ankle-point height by extracting three-dimensional feature points, those skilled in the art will appreciate that the three-dimensional feature extraction method provided by the present invention can also compute other foot parameters, e.g. instep height; in that case, the images of different angles must all contain the instep feature point, and the instep height is then computed by following the steps of the three-dimensional feature extraction method of the present invention described in the above embodiments.
In conclusion obtaining the longest toe comprising foot type using picture pick-up device in the preferred technical solution of the present invention Vertex, heel salient point, refer to five of thenar points outside, tail toe root points outside and ankle five characteristic points to be measured of point not With the image of angle, pixel position of each characteristic point to be measured in every piece image is determined by hand labeled or automated process Then confidence breath trigonometric ratio or is solved by sparse Problems of Reconstruction again using manual calibration for cameras parameter and seeks feature to be measured The true spatial location of point, does not need the Model Reconstruction to entire object, can reduce calculation amount, simplified model was established Journey.Spatial position finally based on five characteristic points to be measured, using Euclidean distance formula, so as to which foot length, foot is calculated Wide and three foot type parameters of ankle point height.And so on, the image of the different angle of different characteristic point is obtained, can also be calculated The corresponding foot type parameter of this feature point is obtained, the image of the different angle comprising instep point is such as obtained, it can be with according to above-mentioned steps The spatial positional information of instep point is calculated, so that this parameter of instep height be calculated.
Further, based on the above method embodiments, the present invention also provides a storage device in which a plurality of programs are stored, the programs being adapted to be loaded by a processor to execute the machine-vision-based three-dimensional feature extraction method described in the above method embodiments.
Further, based on the above method embodiments, the present invention also provides a control device including a processor and a storage device, wherein the storage device is adapted to store a plurality of programs, and the programs are adapted to be loaded by the processor to execute the machine-vision-based three-dimensional feature extraction method described in the above method embodiments.
So far, the technical solutions of the present invention have been described with reference to the preferred embodiments shown in the drawings; however, those skilled in the art will readily appreciate that the scope of protection of the present invention is obviously not limited to these specific embodiments. Without departing from the principle of the present invention, those skilled in the art may make equivalent changes or substitutions to the relevant technical features, and the technical solutions after such changes or substitutions will all fall within the scope of protection of the present invention.

Claims (10)

1. A three-dimensional feature extraction method based on machine vision, characterized in that the three-dimensional feature extraction method comprises the following steps:
acquiring multi-angle images containing a reference object and preset feature points to be measured of an object arranged relative to the reference object;
extracting position information of the feature points to be measured in each of the images;
obtaining spatial position information of the feature points to be measured according to the position information of the feature points to be measured in each of the images;
computing first distance information and/or second distance information corresponding to a certain feature point to be measured based on the spatial position information and a preset three-dimensional feature category;
wherein the first distance information is distance information between the certain feature point to be measured and other feature points to be measured, and the second distance information is vertical distance information between the certain feature point to be measured and a preset plane; the certain feature point to be measured, the other feature points to be measured and the plane each depend on the three-dimensional feature category.
2. The three-dimensional feature extraction method based on machine vision according to claim 1, characterized in that the step of "extracting position information of the feature points to be measured in each of the images" comprises:
obtaining the pixel position of the feature point to be measured in one of the images by manual labeling;
extracting the corresponding pixel positions of the feature point to be measured in the other images using a preset feature-point matching method and according to the obtained pixel position.
3. The three-dimensional feature extraction method based on machine vision according to claim 1, characterized in that the step of "extracting position information of the feature points to be measured in each of the images" comprises:
obtaining a region shape corresponding to the region of the feature points to be measured on the object;
obtaining a region to be measured corresponding to each image according to the region shape;
obtaining the position information of the feature points to be measured in each image according to the relative positions between the feature points to be measured and the region shape and according to each region to be measured.
4. The three-dimensional feature extraction method based on machine vision according to claim 1, characterized in that the step of "extracting position information of the feature points to be measured in each of the images" comprises:
obtaining the position information of the feature points to be measured in each of the images using a pre-constructed neural network;
wherein the neural network is a deep neural network trained on a preset training set using deep-learning algorithms.
5. The three-dimensional feature extraction method based on machine vision according to any one of claims 1-4, characterized in that the step of "obtaining spatial position information of the feature points to be measured according to the position information of the feature points to be measured in each of the images" comprises:
obtaining the Euclidean positions of the feature points to be measured using a triangulation method and according to the position information of the feature points to be measured in each image and the camera intrinsic and extrinsic parameters.
6. The three-dimensional feature extraction method based on machine vision according to any one of claims 1-4, characterized in that the step of "obtaining spatial position information of the feature points to be measured according to the position information of the feature points to be measured in each of the images" comprises:
constructing a sparse model using an incremental SfM method and the position information of the feature points to be measured in each of the images, and computing spatial position information of the feature points to be measured under a world coordinate system using a triangulation method;
restoring the spatial position information of the feature points to be measured under the world coordinate system obtained in the above step using a pre-obtained scale coefficient, to obtain the true positions of the feature points to be measured.
7. The three-dimensional feature extraction method based on machine vision according to claim 6, characterized in that before "restoring the spatial position information of the feature points under the world coordinate system obtained in the above step using a pre-obtained scale coefficient, to obtain the true positions of the feature points to be measured", the three-dimensional feature extraction method based on machine vision further comprises:
obtaining the spatial positions of the vertices of the reference object under the world coordinate system using the sparse model and according to the pixel positions of the vertices of the reference object under a camera coordinate system;
obtaining the scale coefficient according to the spatial positions of the vertices of the reference object under the world coordinate system and the true positions of the vertices of the reference object.
8. The three-dimensional feature extraction method based on machine vision according to claim 7, characterized in that the triangulation method comprises:
obtaining the projective-space positions of the feature points to be measured according to the camera intrinsic and extrinsic parameters and the position information of the feature points to be measured in each image; and
performing homogeneous division on the projective-space positions to obtain the Euclidean positions of the feature points to be measured.
9. A storage device in which a plurality of programs are stored, characterized in that the programs are adapted to be loaded by a processor to execute the three-dimensional feature extraction method based on machine vision according to any one of claims 1-8.
10. A control device, comprising a processor and a storage device, the storage device being adapted to store a plurality of programs, characterized in that the programs are adapted to be loaded by the processor to execute the three-dimensional feature extraction method based on machine vision according to any one of claims 1-8.
CN201811474153.4A 2018-12-04 2018-12-04 Three-dimensional feature extraction method and device based on machine vision Active CN109816724B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201811474153.4A CN109816724B (en) 2018-12-04 2018-12-04 Three-dimensional feature extraction method and device based on machine vision
PCT/CN2019/105962 WO2020114035A1 (en) 2018-12-04 2019-09-16 Three-dimensional feature extraction method and apparatus based on machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811474153.4A CN109816724B (en) 2018-12-04 2018-12-04 Three-dimensional feature extraction method and device based on machine vision

Publications (2)

Publication Number Publication Date
CN109816724A true CN109816724A (en) 2019-05-28
CN109816724B CN109816724B (en) 2021-07-23


Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811474153.4A Active CN109816724B (en) 2018-12-04 2018-12-04 Three-dimensional feature extraction method and device based on machine vision

Country Status (2)

Country Link
CN (1) CN109816724B (en)
WO (1) WO2020114035A1 (en)




Also Published As

Publication number Publication date
CN109816724B (en) 2021-07-23
WO2020114035A1 (en) 2020-06-11


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant