CN108109197B - Image processing modeling method - Google Patents

Image processing modeling method

Info

Publication number
CN108109197B
CN108109197B (application CN201711350936.7A)
Authority
CN
China
Prior art keywords
image
target object
model
angle
image processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711350936.7A
Other languages
Chinese (zh)
Other versions
CN108109197A (en)
Inventor
吴秋红 (Wu Qiuhong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhongrui Huaxin Information Technology Co ltd
Original Assignee
Beijing Zhongrui Huaxin Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhongrui Huaxin Information Technology Co ltd filed Critical Beijing Zhongrui Huaxin Information Technology Co ltd
Priority to CN201711350936.7A
Publication of CN108109197A
Application granted
Publication of CN108109197B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/13 - Edge detection
    • G06T2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T2200/08 - involving all processing steps from image acquisition to 3D model generation
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30196 - Human being; Person
    • G06T2219/00 - Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 - Indexing scheme for editing of 3D models
    • G06T2219/2016 - Rotation, translation, scaling

Abstract

The invention discloses an image processing modeling method, which comprises the following steps: S1, capturing a video of a target object; S2, performing edge analysis on each frame of the video, identifying the edge contour of the target object, labeling the shooting angle of each frame, and forming contour information of the target object at different angles; and S3, performing simulated-rotation modeling in a virtual 3D space on the multi-angle contour information generated in step S2 to form a 3D model. The method analyzes and identifies the target object from the frame data of the captured video; the recognition algorithm can vary, and existing open-source recognition algorithms can be used to process the images. The method has a small computational load and is therefore suitable for smart terminal devices with limited processing power, such as mobile phones and tablets.

Description

Image processing modeling method
Technical Field
The invention belongs to the technical field of 3D modeling, and particularly relates to an image processing modeling method.
Background
Image-based modeling is a technique that captures photographs of an object with a camera or similar device and uses a computer to perform graphic image processing and three-dimensional calculation, so as to generate a three-dimensional model of the photographed object fully automatically. It belongs to the field of three-dimensional reconstruction and draws on computational geometry, computer graphics, computer vision, image processing and mathematical computation.
Long-term tracking of the relevant technical fields at home and abroad shows that institutions such as Microsoft, Autodesk, Stanford University and MIT have achieved good results in fast image-based three-dimensional reconstruction, but these remain laboratory results and cannot yet be used commercially. Microsoft once offered an image-based three-dimensional reconstruction service on the Internet, but the corresponding servers were soon shut down: user traffic was heavy and the technology could not bear such a service load. Image-based three-dimensional reconstruction systems promoted on the international market, such as FOTO3D from a Canadian company, require a great deal of manual interaction and place very high demands on the shooting environment and photo precision, so market acceptance has been low.
Moreover, existing image processing modeling methods are complicated and computationally heavy, so they cannot be applied on computing devices with weak processing power, such as mobile phones and tablets.
Disclosure of Invention
The present invention aims to solve the above problems by providing an image processing modeling method with a low computational load.
In order to solve the above technical problems, the technical solution of the invention is as follows: an image processing modeling method, comprising the following steps:
S1, collecting video images of the target object;
S2, performing edge analysis on each frame of the video image, identifying the edge contour of the target object, labeling the shooting angles of different frames, and forming contour information of the target object at different angles;
S3, performing simulated-rotation modeling in a virtual 3D space on the multi-angle contour information generated in step S2 to form a 3D model.
By analyzing and identifying the data of each frame of the captured video, the target object in the image can be identified with a variety of recognition algorithms, and existing open-source recognition algorithms can be used to process the images.
Preferably, the step S2 includes the following sub-steps:
S21, performing brightness identification on each frame of image and calculating the brightness mean value and dispersion;
S22, performing edge sharpening and binarization on the image to obtain a binary gray-scale image;
S23, correcting the binary gray-scale image:
S231, performing boundary continuity correction using the information of the image itself, eliminating the influence of singular points and noise points;
S232, performing boundary continuity correction on the current frame using the supplementary data of the previous and next frames.
Preferably, the step S231 includes: detecting the local direction at discontinuous singular points, selecting the singular point whose distance and direction match best for connection, and marking the connection in the binary gray-scale map:
$$\Delta d = \lVert P' - P \rVert = \sqrt{(x_{P'} - x_P)^2 + (y_{P'} - y_P)^2}, \qquad \Delta\theta = \operatorname{atan2}(y_{P'} - y_P,\; x_{P'} - x_P)$$
(reconstructed from the surrounding text; the published equations are printed as images)
where $\Delta d$ and $\Delta\theta$ are the distance and direction between pixel points P and P'. In the same way, $\Delta_0 \dots \Delta_n$ can be obtained by tracing back from point P along the continuous connection direction to each of the points P0 … Pn; singular-point fitting is then performed according to the direction of the $\Delta$ sequence, and the most suitable connection point is finally determined.
Preferably, the corrected regions marked in the current frame are compared with the previous and next frames, and if the boundary is continuous in those frames, approximate matching is performed according to their continuity.
Preferably, the step S3 includes the following sub-steps:
S31, selecting feature points with fixed relative positions on the target object as angle-rotation reference points;
S32, calculating the inclination angle, relative position and relative angle of the target object from the change in the relative positions of the selected reference points, and determining the angle change of the current image's boundary contour in 2D space;
S33, applying a three-dimensional angle restoration correction to the per-frame sequence of reference-point changes to obtain the true rotation angle of the target object, using it as the 3D contour orientation of the current frame's boundary, annotating the boundary in the 2D image with its 3D position, and completing the 3D model of the target object;
S34, if the shooting terminal is moved around a stationary target object, recording the data of the terminal's acceleration, inertial and magnetic sensors with each frame of image data, and analyzing the angle of the target object from these data to obtain 2D contours at different angles, from which the 3D model is synthesized.
Preferably, the step S3 includes the following sub-steps:
S31, selecting a fixed reference object beside the target object and selecting feature points on it to generate a reference vector;
S32, labeling the angle of the current frame through the angle between the marking-point vector on the target object and the reference-object vector to generate one frame of 2D contour data with angle information, and, after contour data covering the full 360 degrees have been analyzed, synthesizing them into the 3D model of the target object.
Preferably, the step S3 is followed by:
S4, performing detail refinement and modification on the 3D model.
Preferably, in step S4, if the target object is a human body, the direction of the bones and the positions of the joints are confirmed using a median calculation method.
Preferably, the step S4 includes: first shooting a reference standard object with the shooting terminal, then comparing the imaged data at each angle with the data of the reference standard object to obtain the spherical-distortion characteristics and scaling ratio of that shooting terminal; measuring, in this way, the various shooting terminals that may capture video of a target object, and building a correction model database from the spherical-distortion characteristics and scaling ratios obtained for each terminal; and, after a user shoots with a known shooting terminal and before the 3D model is generated, looking up the corresponding distortion correction model in the database, processing the video images with it, and then performing model identification.
Preferably, the step S4 includes: directly performing local size correction on the 3D model.
The beneficial effects of the invention are as follows: the image processing modeling method provided by the invention analyzes and identifies the target object in the image from the frame data of the captured video; the recognition algorithm can vary, and existing open-source recognition algorithms can be used to process the images. The method has a small computational load and is suitable for smart terminal devices with limited processing power, such as mobile phones and tablets.
Drawings
Fig. 1 is a schematic view showing the upright and bent states of a human leg according to the present invention.
Detailed Description
The invention is further described with reference to the following figures and specific embodiments:
example one
The image processing modeling method provided by the embodiment comprises the following steps:
S1, acquiring video images of the target object through a shooting terminal; the shooting terminal may be an electronic device such as a mobile phone or a tablet.
S2, performing edge analysis on each frame of the video image, identifying the edge contour of the target object, labeling the shooting angles of different frames, and forming contour information of the target object at different angles.
Step S2 includes the following substeps:
S21, performing brightness identification on each frame of image and calculating the brightness mean value and dispersion;
To obtain a better recognition result, the overall quality of the image is evaluated first, so as to set basic parameters and boundary conditions for the subsequent algorithms. First, luminance values (L0 … Ln) are extracted from the video key frames by image processing, and the luminance mean and dispersion are then calculated by weighted averaging.
$$L_n = \frac{1}{Z}\sum_{i=1}^{Z}\frac{R_i + G_i + B_i}{3}$$
(reconstructed from the accompanying description; the published equation is printed as an image)
Here Ln is the overall luminance of the nth frame. It can be computed linearly from the mean gray value: the RGB components of each pixel are averaged, these values are summed over the frame, and the result is divided by the pixel count Z.
$$B = a_0 + a'\,B_0$$
(one reading consistent with the text, since setting $a_0 = 0$ and $a' = 1$ recovers the initial value $B_0$; the published equation is printed as an image)
Here B is the final recognition-result parameter; with a0 = 0 and a' = 1 the original initial value B0 is obtained. a0 is a manual adjustment parameter and a' is a recommended coefficient. Without manual intervention, a0 is generally 0; the overall gray level of the images and video can also be adjusted to suit the application by changing a0, and the effect can be previewed on the images shown to the user. The value of a' lies between 0.7 and 1.3.
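As an illustration of step S21 (the patent publishes no source code), a minimal Python sketch of the per-frame luminance and the weighted mean and dispersion could look as follows; the function names, the uniform default weights and the use of NumPy are assumptions of this sketch, not part of the patent.

```python
import numpy as np

def frame_luminance(frame):
    """Overall luminance Ln of one frame: the RGB components of every
    pixel are averaged, summed over the frame and divided by the pixel
    count Z (equivalent to the mean over all channel values)."""
    return float(frame.astype(np.float64).mean())

def luminance_stats(frames, weights=None):
    """Weighted luminance mean and dispersion over key frames L0..Ln."""
    L = np.array([frame_luminance(f) for f in frames])
    w = np.ones(len(L)) if weights is None else np.asarray(weights, float)
    w = w / w.sum()                                     # normalize weights
    mean = float(np.sum(w * L))                         # weighted mean
    disp = float(np.sqrt(np.sum(w * (L - mean) ** 2)))  # weighted spread
    return mean, disp
```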
S22, performing edge sharpening and binarization on the image to obtain a binary gray scale image;
Edge sharpening and binarization are applied to the image using high-pass filtering and spatial differentiation (values above the threshold are set to 255, values below it to 0), which yields strong edge discrimination. Then, in each frame's sharpened image, comparison against the luminance-dispersion weighted value B produces a binary gray-scale image:
$$G(x, y) = \begin{cases} 255, & G[f(x, y)] \ge B \\ 0, & G[f(x, y)] < B \end{cases}$$
(reconstructed from the accompanying description; the published equation is printed as an image)
where G (x, y) represents the gray scale (or RGB component) of the image point f (x, y), and G [ f (x, y) ] is the gradient value of the image point f (x, y).
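A rough sketch of step S22, assuming Sobel operators for the spatial differentiation and the luminance-derived parameter B from step S21 as the threshold (both choices are our reading, not mandated by the patent):

```python
import numpy as np
from scipy import ndimage

def binarize_edges(gray, B):
    """Edge-sharpen by spatial differentiation and binarize: points whose
    gradient magnitude G[f(x, y)] reaches the threshold B become 255,
    all other points become 0."""
    g = gray.astype(np.float64)
    gx = ndimage.sobel(g, axis=1)    # horizontal derivative
    gy = ndimage.sobel(g, axis=0)    # vertical derivative
    grad = np.hypot(gx, gy)          # gradient magnitude
    return np.where(grad >= B, 255, 0).astype(np.uint8)
```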
S23, correcting the binary gray-scale image:
The sharpened binary gray-scale image may contain local discontinuities or locally unclear regions caused by noise or by quality problems of the image itself. For this reason, this embodiment corrects the binary gray-scale image in two stages:
s231, carrying out boundary continuity correction by utilizing the information of the image:
The local direction is detected at discontinuous singular points, the singular point whose distance and direction match best is selected for connection, and the connection is marked in the binary gray-scale map:
$$\Delta d = \lVert P' - P \rVert = \sqrt{(x_{P'} - x_P)^2 + (y_{P'} - y_P)^2}, \qquad \Delta\theta = \operatorname{atan2}(y_{P'} - y_P,\; x_{P'} - x_P)$$
where $\Delta d$ and $\Delta\theta$ are the distance and direction between pixel points P and P'. In the same way, $\Delta_0 \dots \Delta_n$ can be obtained by tracing back from point P along the continuous connection direction to each of the points P0 … Pn; singular-point fitting is then performed according to the direction of the $\Delta$ sequence, and the most suitable connection point is finally determined.
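Because the published connection formulas are printed as images, the following Python sketch shows one plausible scoring of candidate connection points by distance and direction; the cost weights w_dist and w_ang are illustrative assumptions.

```python
import numpy as np

def best_connection(P, dir_P, candidates, w_dist=1.0, w_ang=10.0):
    """Among candidate endpoints P0..Pn, pick the point whose distance
    from P and whose bearing relative to the traced boundary direction
    at P (dir_P, radians) match best; the lowest cost wins."""
    P = np.asarray(P, float)
    best, best_cost = None, np.inf
    for Q in candidates:
        Q = np.asarray(Q, float)
        d = np.linalg.norm(Q - P)                          # distance term
        bearing = np.arctan2(Q[1] - P[1], Q[0] - P[0])     # direction term
        diff = bearing - dir_P
        ang = abs(np.arctan2(np.sin(diff), np.cos(diff)))  # wrap to [0, pi]
        cost = w_dist * d + w_ang * ang
        if cost < best_cost:
            best, best_cost = Q, cost
    return best
```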
S232, performing boundary continuity correction on the current frame by using the supplementary data of the previous and next frames:
The corrected regions marked in the current frame are compared with the previous and next frames; if the boundary is continuous in those frames, approximate matching is performed according to their continuity, and for boundary regions of the current frame not marked as corrected, a similarity analysis is performed on the matching values.
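A minimal sketch of the inter-frame correction of step S232, treating boundaries as binary masks; using agreement between the previous and next frames as the approximate-matching criterion is a simplifying assumption.

```python
import numpy as np

def cross_frame_fill(curr, prev, nxt, corrected):
    """Where the current frame's boundary mask is not yet marked as
    corrected, adopt the boundary value on which the previous and next
    frames agree (all arguments are equally shaped boolean masks)."""
    agree = (prev == nxt)          # boundary continuous across neighbors
    out = curr.copy()
    fill = agree & ~corrected      # only regions not already corrected
    out[fill] = prev[fill]
    return out
```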
And S3, performing virtual 3D space simulated rotation modeling on the contour information of different angles generated in the step S2 to form a 3D model. Step S3 includes the following substeps:
S31, selecting feature points with fixed relative positions on the target object as angle-rotation reference points; a feature point may be an inflection point on the object's outer contour. At least three feature points are used, for example colored dots marked in advance for easy positioning, the sharp corners of a cube, a person's two ears, or fixed seams on clothing.
S32, calculating the inclination angle, the relative position and the relative angle of the target object according to the change of the relative position of the selected reference point, and judging the angle change of the boundary contour of the current image in the 2D space;
$$\vec{v} = P_2 - P_1, \qquad \vec{v}\,' = P_2' - P_1'$$
$$\Delta\theta = \arccos\frac{\vec{v} \cdot \vec{v}\,'}{\lVert \vec{v} \rVert\, \lVert \vec{v}\,' \rVert}$$
(one reading consistent with the text; the published equations are printed as images)
where $\Delta\theta$ is the angle change of the target object, obtained from the direction vectors formed by the two reference points before and after the rotation.
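For step S32, the signed 2D angle change between the reference-point direction vectors of two frames can be computed as below; describing each frame by the vector between two reference points is an assumption consistent with the text.

```python
import numpy as np

def rotation_angle(a0, b0, a1, b1):
    """Signed angle (radians) between the direction vector a0->b0 of two
    reference points before rotation and the vector a1->b1 after it."""
    v0 = np.asarray(b0, float) - np.asarray(a0, float)
    v1 = np.asarray(b1, float) - np.asarray(a1, float)
    cross = v0[0] * v1[1] - v0[1] * v1[0]   # z-component of the 2D cross product
    dot = v0[0] * v1[0] + v0[1] * v1[1]
    return float(np.arctan2(cross, dot))
```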
S33, to restore the per-frame sequence of reference-point changes to the 3D domain, a three-dimensional angle restoration correction is applied to that sequence to obtain the true rotation angle of the target object. This angle is used as the 3D contour orientation of the current frame's boundary, the boundary in the 2D image is annotated with its 3D position, and finally all the 2D contours are synthesized to complete the 3D model of the target object.
Furthermore, if the size between specific points of the target object is given, the system deduces the dimensions of the full model according to the meaning of that size in the actual 3D model, producing a 3D model closer to the object's real size. For example, calibrating a person's height helps deduce the sizes of other body parts, such as arm length and bust, waist and hip circumference.
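The size deduction from one known dimension reduces to a single calibration ratio; a minimal sketch, with uniform scaling as our assumption:

```python
def scale_model(vertices, model_height, true_height):
    """Uniformly rescale 3D model vertices so that one known real
    dimension (e.g. the person's height) calibrates all other sizes."""
    s = true_height / model_height      # calibration ratio
    return [(x * s, y * s, z * s) for (x, y, z) in vertices]
```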
S34, if the shooting terminal is moved around a stationary target object, the data of the terminal's acceleration, inertial and magnetic sensors are recorded with each frame of image data, and the angle of the target object is analyzed from these data to obtain 2D contours at different angles, from which the 3D model is synthesized.
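For step S34, one simple way to turn the recorded sensor data into a per-frame shooting angle is to integrate the gyroscope's yaw rate, as sketched below; this plain Euler integration ignores drift, which a fuller implementation would correct with the accelerometer and magnetometer readings the patent also records.

```python
def yaw_per_frame(gyro_z_rates, dt):
    """Integrate z-axis angular rates (rad/s, one sample per frame) into
    the camera's accumulated rotation angle around the standing target."""
    yaw, angles = 0.0, []
    for rate in gyro_z_rates:
        yaw += rate * dt          # simple Euler integration
        angles.append(yaw)
    return angles
```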
And S4, detail description and modification are carried out on the 3D model.
First, 3D model refinement for flexible objects: for a flexible, loosely deformable object such as a human body, the target person is asked to shoot the 360-degree video in several postures, for example with both arms extended horizontally, both arms raised vertically, both arms hanging down naturally, and a natural squat; modeling is carried out separately for each posture, which yields richer "joint" detail for the target model.
Since the human body is a very special "object", scanning only its external shape is not adequate for 3D modeling: different bone and joint shapes strongly influence how the body's exterior deforms during movement. Internal calculations based on the way the body's shape bends are therefore performed to determine the bone data that affect the 3D model, enriching and improving the human 3D model.
For the parameters of joints and bones, raw data can be acquired from bending actions such as standing and squatting with arms folded. The invention uses a median calculation method to confirm the direction of the bones and the positions of the joints. This information is used to account for changes at the joints of the target object, and hence for the fit of outer coverings (garments and the like).
As shown in Fig. 1, for a bendable body part we measure: the length L of the part when straightened, the length L0 of the first arm, the length L1 of the second arm, the radius R0 of the first joint, the radius R1 of the second joint, the arc length L2 at the tangent points of the first joint with the first and second arms, and the arc length L3 at the tangent point of the second joint with the second arm; one end of the first arm is connected to one end of the second arm through the first joint, and the other end of the second arm is connected to the second joint. For the leg, comparing the standing and bent cases: L is the length of the standing leg, L0 the thigh length, L1 the calf length, R0 the knee radius, R1 the ankle radius, L2 the arc length at the tangent points of the knee with the thigh and calf, and L3 the arc length at the tangent point of the ankle with the calf. The joint centers are obtained by calculating the positions of the centers of the circles of radii R0 and R1, and the bone length is calculated as:
[Equation for the bone length $L_b$ in terms of $L$, $L_0$, $L_1$, $R_0$, $R_1$, $L_2$ and $L_3$; printed as an image in the published document.]
meanwhile, according to R0, the positions of the centre points of R1 and the length Lb of the bones can depict the relative positions of the bones and joints in the 3D model. Based on the method, the relative position information of the skeleton in the body is obtained, so that the required design allowance and design detail can be calculated conveniently when partial analysis is performed.
The same principle can be used to determine data for joints such as arm, elbow, neck, etc.
Second, correction of the shooting terminal's spherical distortion: different shooting terminals, such as different mobile phone brands, exhibit different degrees of spherical distortion in different image regions. A spherical-distortion database indexed by phone brand and software version is therefore built from the empirical distortion values of the different brands, so that the captured and recognized 3D model can be further corrected for the most accurate recognition result.
Specifically, a reference standard object is shot with the shooting terminal, and the imaged data at each angle are compared with the data of the reference standard object to obtain the spherical-distortion characteristics and scaling ratio of that terminal. The various shooting terminals that may capture video of a target object are measured in this way, and a correction model database is built from the spherical-distortion characteristics and scaling ratios obtained for each terminal. After a user shoots with a known shooting terminal and before the 3D model is generated, the corresponding distortion correction model is looked up in the database, the video images are processed with it, and model identification is then performed.
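A sketch of the correction-database lookup, assuming a simple radial distortion model (k1, k2) and OpenCV undistortion; the database keys, coefficient values and rough intrinsics below are illustrative, not values from the patent.

```python
import cv2
import numpy as np

# Hypothetical correction-model database keyed by (brand, software version).
CORRECTION_DB = {
    ("brandA", "v1.0"): {"k1": -0.12, "k2": 0.03},   # illustrative values
}

def correct_frame(frame, brand, version):
    """Undistort one video frame with the terminal's stored model before
    contour recognition; frames from unknown terminals pass through."""
    model = CORRECTION_DB.get((brand, version))
    if model is None:
        return frame
    h, w = frame.shape[:2]
    K = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], float)  # rough intrinsics
    dist = np.array([model["k1"], model["k2"], 0.0, 0.0])           # k1, k2, p1, p2
    return cv2.undistort(frame, K, dist)
```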
Third, direct local size correction of the 3D model: according to the user's preference, small local size corrections can be applied to the original model, such as adjusting local dimensions. In particular, for a human body model the size of a specific part can be adjusted, or the user can correct it manually according to actual measurements.
Example two
The image processing modeling method provided by this embodiment differs from that of Embodiment One only in that step S3 consists of the following sub-steps:
S31, selecting a fixed reference object beside the target object and selecting marking points on it to generate a reference vector; the reference object may be an artificially placed item such as a ruler or a similar object, and a marking point may be an inflection point on its outer contour. At least two marking points are used.
S32, as the target object rotates, labeling the angle of the current frame through the angle between the marking-point vector on the target object and the reference-object vector, generating one frame of 2D contour data with angle information; after contour data covering the full 360 degrees have been analyzed, they are synthesized into the 3D model of the target object.
A reference object makes restoration of the 3D coordinates more convenient and accurate. If the actual size of the reference object is given, the size of the target object can be labeled from it, yielding a 3D model closer to the real object.
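A sketch of the angle labeling in this embodiment: the frame's angle is the signed angle between the target's marking-point vector and the fixed reference vector; the two-point vectors and degree output are assumptions.

```python
import numpy as np

def frame_angle(mark_a, mark_b, ref_a, ref_b):
    """Angle (degrees) of the current frame: signed angle from the
    reference-object vector (ref_a->ref_b) to the target's marking-point
    vector (mark_a->mark_b)."""
    v = np.asarray(mark_b, float) - np.asarray(mark_a, float)
    r = np.asarray(ref_b, float) - np.asarray(ref_a, float)
    cross = r[0] * v[1] - r[1] * v[0]
    dot = r[0] * v[0] + r[1] * v[1]
    return float(np.degrees(np.arctan2(cross, dot)))
```

Collecting one (angle, 2D contour) pair per frame over the full 360 degrees then drives the synthesis of step S32.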
It will be appreciated by those of ordinary skill in the art that the embodiments described herein are intended to help the reader understand the principles of the invention, and should not be construed as limiting the invention to the specifically recited embodiments and examples. Those skilled in the art can make various other specific changes and combinations based on the teachings of the present invention without departing from its spirit, and such changes and combinations remain within the scope of the invention.

Claims (8)

1. An image processing modeling method, comprising the steps of:
S1, collecting video images of the target object;
S2, performing edge analysis processing on each frame of image in the video image, identifying the edge outline of the target object, marking the shooting angles of different frames, and forming outline information of the target object at different angles;
S3, performing virtual 3D space simulated rotation modeling on the contour information of different angles generated in the step S2 to form a 3D model;
the step S2 includes the following sub-steps:
S21, performing brightness identification on each frame of image, and calculating a brightness mean value and dispersion;
S22, performing edge sharpening and binarization on the image to obtain a binary gray-scale image;
S23, correcting the binary gray-scale image:
S231, performing boundary continuity correction by using the information of the image;
S232, performing boundary continuity correction on the current frame by using the supplementary data of the previous and next frames;
the step S231 includes: detecting the local direction at discontinuous singular points, selecting the singular point whose distance and direction match best for connection, and marking the connection in the binary gray-scale map:
$$\Delta d = \lVert P' - P \rVert = \sqrt{(x_{P'} - x_P)^2 + (y_{P'} - y_P)^2}, \qquad \Delta\theta = \operatorname{atan2}(y_{P'} - y_P,\; x_{P'} - x_P)$$
where Δd and Δθ are the distance and direction between pixel points P and P'; in the same way, Δ0 … Δn can be obtained by tracing back from point P along the continuous connection direction to each of the points P0 … Pn, singular-point fitting is carried out according to the direction of the Δ sequence, and the most suitable connection point is finally determined.
2. The image processing modeling method of claim 1, wherein: the step S232 includes: and comparing the corrected area marked by the current frame with the previous and next frames, and if the previous and next frames are continuous, performing approximate matching according to the continuous situation of the previous and next frames.
3. The image processing modeling method of claim 1, wherein: the step S3 includes the following sub-steps:
S31, selecting a characteristic point with a fixed relative position in the target object as an angle rotation reference point;
S32, calculating the inclination angle, the relative position and the relative angle of the target object according to the change of the relative position of the selected reference point, and judging the angle change of the boundary contour of the current image in the 2D space;
S33, carrying out three-dimensional angle reduction correction on the change sequence of the reference points in each frame to obtain the real rotation angle of the target object, using the real rotation angle as the 3D contour of the boundary of the current frame, carrying out 3D position labeling on the boundary in the 2D image, and completing 3D model modeling of the target object;
and S34, if the shooting terminal shoots the standing target object in a mobile shooting mode, recording data of an acceleration sensor, an inertia sensor and a magnetic sensor of the shooting terminal in each frame of image data, and carrying out angle analysis on the target object according to the data to obtain 2D contours of the target object at different angles so as to synthesize a 3D model.
4. The image processing modeling method of claim 1, wherein: the step S3 includes the following sub-steps:
S31, selecting a fixed reference object beside the target object and further selecting characteristic points on the reference object to generate a reference vector;
and S32, labeling the angle of the current frame through the included angle relation between the labeling point vector on the target object and the reference object vector to generate a frame of 2D contour data with angle information, and synthesizing the 3D model of the target object after all 360-degree contour data are analyzed.
5. The image processing modeling method of claim 1, wherein: the step S3 is followed by:
and S4, detail description and modification are carried out on the 3D model.
6. The image processing modeling method of claim 5, wherein: in step S4, if the target object is a human body, the direction of the skeleton and the joint position are confirmed by using a median calculation method.
7. The image processing modeling method of claim 5, wherein: the step S4 includes: firstly, shooting a reference standard object by using a shooting terminal, then comparing the obtained data of each angle of an image with the data of the reference standard object to obtain the characteristics and the calculation proportion of spherical distortion of the shooting terminal, accurately measuring and calculating various shooting terminals capable of carrying out video image acquisition on a target object, and establishing a correction model database by using the obtained characteristics and the calculation proportion of the spherical distortion of each shooting terminal; after a user shoots with a known shooting terminal and before a 3D model is generated, a corresponding distortion data correction model is searched for in a video image through a correction model database, and model identification is carried out after the video image is processed.
8. The image processing modeling method of claim 5, wherein: the step S4 includes: and directly carrying out local size correction on the 3D model.
CN201711350936.7A 2017-12-15 2017-12-15 Image processing modeling method Active CN108109197B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711350936.7A CN108109197B (en) 2017-12-15 2017-12-15 Image processing modeling method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711350936.7A CN108109197B (en) 2017-12-15 2017-12-15 Image processing modeling method

Publications (2)

Publication Number Publication Date
CN108109197A CN108109197A (en) 2018-06-01
CN108109197B true CN108109197B (en) 2021-03-02

Family

ID=62216262

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711350936.7A Active CN108109197B (en) 2017-12-15 2017-12-15 Image processing modeling method

Country Status (1)

Country Link
CN (1) CN108109197B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101398886A (en) * 2008-03-17 2009-04-01 杭州大清智能技术开发有限公司 Rapid three-dimensional face identification method based on bi-eye passiveness stereo vision
CN102364524A (en) * 2011-10-26 2012-02-29 清华大学 Three-dimensional reconstruction method and device based on variable-illumination multi-visual-angle differential sampling
CN105893675A (en) * 2016-03-31 2016-08-24 东南大学 Open space periphery building form optimization control method based on sky visible range evaluation
CN107134008A (en) * 2017-05-10 2017-09-05 广东技术师范学院 A kind of method and system of the dynamic object identification based under three-dimensional reconstruction
CN107423729A (en) * 2017-09-20 2017-12-01 湖南师范大学 A kind of remote class brain three-dimensional gait identifying system and implementation method towards under complicated visual scene

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7065242B2 (en) * 2000-03-28 2006-06-20 Viewpoint Corporation System and method of three-dimensional image capture and modeling
US7253832B2 (en) * 2001-08-13 2007-08-07 Olympus Corporation Shape extraction system and 3-D (three dimension) information acquisition system using the same

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Human skeleton extraction and 3D reconstruction based on video sequences; Xiao Xue; China Master's Theses Full-text Database, Information Science and Technology; 2010-06-16 (No. 7); pp. I138-940 *

Also Published As

Publication number Publication date
CN108109197A (en) 2018-06-01

Similar Documents

Publication Publication Date Title
CN108053283B (en) Garment customization method based on 3D modeling
JP6560480B2 (en) Image processing system, image processing method, and program
Rhodin et al. General automatic human shape and motion capture using volumetric contour cues
CN109949899B (en) Image three-dimensional measurement method, electronic device, storage medium, and program product
CN110599540B (en) Real-time three-dimensional human body shape and posture reconstruction method and device under multi-viewpoint camera
CN108305312B (en) Method and device for generating 3D virtual image
US9727787B2 (en) System and method for deriving accurate body size measures from a sequence of 2D images
CN108475439B (en) Three-dimensional model generation system, three-dimensional model generation method, and recording medium
US8842906B2 (en) Body measurement
Pujades et al. The virtual caliper: rapid creation of metrically accurate avatars from 3D measurements
JP5873442B2 (en) Object detection apparatus and object detection method
CN104898832B (en) Intelligent terminal-based 3D real-time glasses try-on method
US20170053422A1 (en) Mobile device human body scanning and 3d model creation and analysis
CN111932678A (en) Multi-view real-time human motion, gesture, expression and texture reconstruction system
CN113177977A (en) Non-contact three-dimensional human body size measuring method
US20220027602A1 (en) Deep Learning-Based Three-Dimensional Facial Reconstruction System
CN107145224A (en) Human eye sight tracking and device based on three-dimensional sphere Taylor expansion
CN112509117A (en) Hand three-dimensional model reconstruction method and device, electronic equipment and storage medium
CN107066095B (en) Information processing method and electronic equipment
WO2020156627A1 (en) The virtual caliper: rapid creation of metrically accurate avatars from 3d measurements
CN108109197B (en) Image processing modeling method
CN116152121B (en) Curved surface screen generating method and correcting method based on distortion parameters
CN107901424A (en) A kind of Image Acquisition modeling
CN110215001A (en) One kind, which is cut the garment according to the figure, measures interactive accurate measurement method
CN208497700U (en) A kind of Image Acquisition modeling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant