CN112700539A - Method and system for constructing a virtual mannequin

Method and system for constructing a virtual mannequin

Info

Publication number
CN112700539A
CN112700539A
Authority
CN
China
Prior art keywords
angle
image
shot
garment
shooting
Prior art date
Legal status
Pending
Application number
CN202110008383.7A
Other languages
Chinese (zh)
Inventor
李小波
秦晓飞
李昆仑
Current Assignee
Hengxin Shambala Culture Co ltd
Original Assignee
Hengxin Shambala Culture Co ltd
Priority date
Filing date
Publication date
Application filed by Hengxin Shambala Culture Co ltd filed Critical Hengxin Shambala Culture Co ltd
Priority to CN202110008383.7A priority Critical patent/CN112700539A/en
Publication of CN112700539A publication Critical patent/CN112700539A/en
Pending legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a method and a system for constructing a virtual mannequin. The method comprises the following steps: acquiring multi-angle garment shot images; obtaining a first three-dimensional feature map of each garment shot image under a plurality of predetermined viewing angles; and fusing the first three-dimensional feature maps of the multi-angle garment shot images under the plurality of predetermined viewing angles to form a fused human body model as the virtual mannequin. The application uses all the points in the garment shot images to construct and fuse the human body model into the virtual mannequin, greatly reducing the error between the size of the human body model and the actual size of the shot object.

Description

Method and system for constructing virtual mannequin
Technical Field
The application relates to the technical field of non-contact human body measurement, in particular to a method and a system for constructing a virtual mannequin.
Background
In recent years, computer vision and graphics have found many applications in the construction of non-contact three-dimensional human body models. In the prior art, a plurality of two-dimensional images of a human body at different angles are obtained by an image generator; an image processor calculates the scaling ratio of the two-dimensional images from the positional relationship of the human body feature points in them, generates three-dimensional human body feature points from that scaling ratio and the identified feature points, and finally calculates the required body shape data from the three-dimensional feature points, thereby constructing a human body model.
However, because human body feature points are discrete and cannot reflect the features of every point of the human body in the three-dimensional model, there is a large error between the size of the human body model obtained by the existing method and the actual size of the human body.
Disclosure of Invention
The application aims to provide a method and a system for constructing a virtual mannequin, which are used for solving the technical problem that a large error exists between the size of a human body model obtained by the existing method and the actual size of the human body.
The application provides a method for constructing a virtual mannequin, which comprises the following steps: acquiring multi-angle garment shot images; obtaining a first three-dimensional feature map of each garment shot image under a plurality of predetermined viewing angles; and fusing the first three-dimensional feature maps of the multi-angle garment shot images under the plurality of predetermined viewing angles to form a fused human body model as the virtual mannequin.
Preferably, before obtaining the first three-dimensional feature map, the method further comprises: calibrating the camera according to the multi-angle garment shot images to obtain correction parameters; and correcting the multi-angle garment shot images according to the correction parameters.
Preferably, before obtaining the first three-dimensional feature maps of each garment shot image under the plurality of predetermined viewing angles, the method further comprises processing each corrected garment shot image to obtain the shooting angle of that garment shot image.
Preferably, the garment shot image comprises the shot object, a part of the turntable that has a fixed positional relationship with the shot object and abuts against the feet of the shot object, and a reference position; the reference position is located on the outer edge of the top surface of the turntable, and a plurality of color blocks are uniformly arranged on the turntable along the circumferential direction.
Preferably, processing each corrected garment shot image includes obtaining an image of a target area from the garment shot image, the target area including a portion of the turntable.
Preferably, obtaining the shooting angle of the garment shot image includes: obtaining, from the image of the target area, a color block and a target boundary line near the reference position, and the included angle between the target boundary line and the front center line of the shot object as a first angle, wherein the front center line of the shot object is the intersection line of the shot object's face-to-back axial symmetry plane with the top surface of the turntable; calculating the included angle between the target boundary line and a reference line as a third angle, wherein the reference line is the line connecting the reference position and the center point of the top surface of the turntable; and calculating the sum or difference of the first angle and the third angle as the shooting angle of the garment.
Preferably, fusing the first three-dimensional feature maps of the multi-angle garment shot images under a plurality of predetermined viewing angles to form a fused human body model comprises: for each predetermined viewing angle, splicing the first three-dimensional feature maps of the multi-angle garment shot images into a second three-dimensional feature map under that predetermined viewing angle according to the shooting angles; and performing surface fitting according to the second three-dimensional feature maps of all the predetermined viewing angles to form the fused human body model.
Preferably, after forming the fused human body model, the method further comprises: obtaining multi-angle third three-dimensional feature maps of the fused human body model under a plurality of predetermined viewing angles; comparing the first three-dimensional feature map and the third three-dimensional feature map of the same angle under the same predetermined viewing angle to obtain a difference value; and if the difference value is smaller than a threshold value, taking the fused human body model as the virtual mannequin.
Preferably, if the difference value exceeds the threshold value, the first three-dimensional feature maps are acquired again and fused again.
The application also provides a virtual mannequin construction system which comprises a processor, wherein the processor executes the virtual mannequin construction method.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments described in the present application, and that other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a flow chart of a method for constructing a virtual mannequin provided by the present application;
FIG. 2 is a flow chart of a capture angle for obtaining a captured image of a garment provided by the present application;
FIG. 3 is a flowchart of obtaining a color block near a reference position and a boundary of a target from an image of the target area according to an embodiment of the present disclosure;
FIG. 4 is a flow chart of calculating a second angle in the embodiment of FIG. 3;
FIG. 5 is a flowchart of obtaining patches and boundary lines of a target near a reference position from an image of the target area according to another embodiment of the present disclosure;
FIG. 6 is a flow chart for obtaining a first three-dimensional feature map as provided herein;
FIG. 7 is a flowchart illustrating a process of fusing first three-dimensional feature maps of multi-angle garment images at a plurality of predetermined viewing angles to form a fused human body model according to the present application;
FIG. 8 is a flow chart for obtaining a second three-dimensional feature map at a predetermined viewing angle as provided herein;
FIG. 9 is a flowchart for stitching the first three-dimensional feature map of the first shooting angle and the first three-dimensional feature map of the second shooting angle provided by the present application.
Detailed Description
The technical solutions in the embodiments of the present application are clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Example one
The application provides a method for constructing a virtual mannequin. Fig. 1 is a flowchart of a method for constructing a virtual mannequin provided by the present application.
As shown in fig. 1, the method for constructing a virtual mannequin includes the following steps:
S110: Acquire multi-angle garment shot images, wherein the multi-angle garment shot images record the features of the shot object over 360 degrees.
The multi-angle garment shot images are images taken by a camera from a plurality of angles while the shot object (such as a human body or a mannequin being measured) wears tight-fitting clothing.
In one embodiment, the feet of the shot object are in contact with the turntable, the shot object and the turntable have a fixed positional relationship, the bottom of the turntable is connected to a base, and the turntable is rotatably connected to the base. The position of the camera and its viewing-angle range remain unchanged during shooting; the shot object is rotated by the turntable and is photographed once per rotation step of the turntable, thereby obtaining the garment shot images.
The turntable rotates about its vertical central axis. A plurality of sector-shaped color blocks of different colors are uniformly arranged along the circumference of the turntable's top surface, i.e., the sectors of all the color blocks subtend the same arc. A reference point on the outer edge of the top surface serves as the reference position for the rotation of the shot object, and its color differs from the colors of all the color blocks. The shot object is fixed on the top surface of the turntable, and the central axis of the shot object coincides with the central axis of the turntable.
The garment shot image includes the shot object, a portion of the turntable, and the reference position. The intersection line of the shot object's face-to-back axial symmetry plane with the top surface of the turntable is taken as the front center line of the shot object. The line connecting the reference position and the center point of the top surface of the turntable is taken as the reference line.
As one example, a reference line is physically provided on the turntable assembly. As another example, no reference line is provided on the turntable assembly, and the reference line is a virtual line.
Specifically, in the initial state of the turntable, the front center line of the mannequin coincides with the reference line; the turntable rotates counterclockwise, and the shooting angle of the garment is the rotation angle of the turntable.
S120: and calibrating a camera according to the multi-angle clothes shooting image to obtain correction parameters.
S130: and correcting the multi-angle clothes shooting image according to the correction parameters.
The clothes shot image is corrected, so that the characteristics of the shot object can be more accurately obtained, and the construction quality of the virtual mannequin is improved.
S140: and processing each corrected garment shot image to obtain the shooting angle of the garment shot image.
As an embodiment, processing the corrected garment shot image includes obtaining an image of a target area from the garment shot image, the target area including a portion of the turntable; the turntable image extracted from the garment shot image serves as the basis for obtaining the shooting angle.
As one embodiment, the image of the target region is obtained by a threshold segmentation algorithm.
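As a minimal illustrative sketch of how such a threshold segmentation might look (the patent gives no implementation details, so the OpenCV calls, BGR bounds, and file name below are assumptions):

```python
# Illustrative sketch only: isolate the turntable portion (the target area) of a
# corrected garment shot image by color thresholding. The BGR bounds and the
# file name are assumptions, not values from the patent.
import cv2
import numpy as np

def extract_target_area(image_bgr, lower_bgr, upper_bgr):
    """Keep only the pixels whose color falls inside the given BGR range."""
    mask = cv2.inRange(image_bgr, lower_bgr, upper_bgr)   # 255 where in range, else 0
    return cv2.bitwise_and(image_bgr, image_bgr, mask=mask)

image = cv2.imread("garment_shot.png")                    # corrected garment shot image
target_area = extract_target_area(image,
                                  np.array([0, 0, 100], dtype=np.uint8),
                                  np.array([80, 80, 255], dtype=np.uint8))
```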
Fig. 2 shows a flowchart for obtaining the shooting angle of a garment shot image according to the present application. As shown in fig. 2, obtaining the shooting angle of the garment shot image includes:
S210: Obtain, from the image of the target area, a color block near the reference position, a target boundary line, and the included angle between the target boundary line and the front center line of the shot object as a first angle. Specifically, the first angle is the angle of the target boundary line relative to the front center line of the mannequin in the counterclockwise direction.
S220: and calculating an included angle between the target boundary and a reference line as a second angle, wherein the reference line is a connecting line between the reference position and the central point of the top surface of the turntable.
S230: and calculating the sum or difference of the first angle and the second angle as the shooting angle of the clothes.
As an embodiment, in the case where no reference line is provided on the turntable assembly, as shown in fig. 3, obtaining the color block near the reference position and the target boundary line from the image of the target area includes the following steps:
s310: the respective patches of the image of the target area and the reference positions are determined by identifying the color values.
S320: Connect the pixel points whose color values change between two adjacent color blocks to form the boundary lines of the color blocks.
S330: a first distance value between the reference position and each of the borderlines is calculated.
S340: and taking the boundary line of which the first distance value is smaller than the first distance threshold value as the boundary line of the color block near the reference position.
S350: any one of the boundaries of the patches near the reference position is taken as a target boundary.
S360: the patch between the target boundary and the boundary adjacent thereto is set as a patch near the reference position.
The angle of each boundary line on the turntable relative to the front center line of the mannequin is preset, so that after the target boundary line is obtained, the first angle can be determined from the preset value.
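For illustration, a hedged sketch of S310–S340, assuming the color blocks have already been reduced to an integer label map per pixel; the helper names and the horizontal-neighbour simplification are ours, not the patent's:

```python
# Hedged sketch of S310–S340: boundary pixels are where the color-block label
# changes between adjacent pixels, and only boundaries near the reference
# position are kept. Assumes a precomputed integer label map.
import numpy as np

def boundary_points(label_map):
    """(x, y) coordinates where the label changes between horizontal neighbours (S320)."""
    change = label_map[:, 1:] != label_map[:, :-1]
    ys, xs = np.nonzero(change)
    return np.stack([xs + 1, ys], axis=1)

def points_near_reference(points, ref_xy, first_distance_threshold):
    """Keep boundary pixels whose distance to the reference position is small (S330–S340)."""
    d = np.linalg.norm(points - np.asarray(ref_xy, dtype=float), axis=1)
    return points[d < first_distance_threshold]
```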
On this basis, as shown in fig. 4, the step of calculating the second angle in S220 includes the following steps:
s410: and constructing a virtual three-dimensional coordinate system, wherein the image of the target area is in an X-Z axis plane, and the reference line is parallel to the Z axis of the virtual three-dimensional coordinate system.
S420: the slope of the target boundary line with respect to the X-axis is detected.
S430: and calculating an included angle between the target boundary and the X axis of the virtual three-dimensional coordinate system according to the slope, and taking the included angle as a third angle.
S440: and judging whether the slope is positive or not. If yes, go to S450; otherwise, S460 is performed.
S450: and calculating a complementary angle of the third angle as the second angle.
S460: and calculating a complementary angle of the third angle as the second angle.
Preferably, the boundary line with the smallest first distance value among the boundary lines of the color blocks near the reference position is taken as the target boundary line. On this basis, if the slope of the target boundary line is positive, the sum of the first angle and the second angle is taken as the shooting angle of the garment; if the slope of the target boundary line is negative, the difference between the first angle and the second angle is taken as the shooting angle of the garment.
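As a worked sketch of this computation (function names are ours; since the reference line is parallel to the Z axis, the acute angle between the target boundary and the reference line is the complement of its angle to the X axis in both slope cases, with the slope sign deciding sum versus difference as stated above):

```python
# Sketch of S420–S460 plus the sum/difference rule above.
import math

def second_angle_from_slope(slope):
    third_angle = math.degrees(math.atan(abs(slope)))  # included angle with the X axis (S430)
    return 90.0 - third_angle                          # complementary angle (S450/S460)

def garment_shooting_angle(first_angle, slope):
    second_angle = second_angle_from_slope(slope)
    # Positive slope: sum; negative slope: difference (see the paragraph above).
    return first_angle + second_angle if slope > 0 else first_angle - second_angle
```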
As another embodiment, in the case where the reference line is provided on the turntable assembly, as shown in fig. 5, obtaining the color block near the reference position and the target boundary line from the image of the target area includes the following steps:
S510: Determine the pixel coordinates of each color block, the reference position, and the reference line in the image of the target area by identifying color values.
S520: Detect the reference line and a plurality of boundary lines in the image of the target area through a Hough transform; after the Hough transform, the boundary lines intersect the reference line, and the intersection point is the center point of the top surface of the turntable.
S530: Calculate a second distance value between a first pixel point on the reference line and a second pixel point on each boundary line, wherein the first pixel point and the second pixel points of all the boundary lines lie on the same straight line, and the first pixel point does not coincide with any second pixel point.
S540: and taking the boundary with the minimum second distance value as a target boundary.
S550: the patch between the target boundary and the boundary adjacent thereto is set as a patch near the reference position.
The angle of each boundary line on the turntable relative to the front center line of the mannequin is preset, so that after the target boundary line is obtained, the first angle can be determined from the preset value.
On this basis, a second angle is obtained by detecting the included angle between the target boundary line and the reference line after the Hough transform.
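A hedged sketch of S520 and this angle detection, using OpenCV's probabilistic Hough transform; separating the reference line from the patch boundary lines (e.g. via the color identification of S510) is assumed already done:

```python
# Detect candidate line segments, then measure the included angle between two
# of them (e.g. target boundary vs. reference line). Parameters are illustrative.
import cv2
import numpy as np

gray = cv2.imread("target_area.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(gray, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                        minLineLength=30, maxLineGap=5)   # each entry: [[x1, y1, x2, y2]]

def included_angle(seg_a, seg_b):
    """Acute included angle in degrees between two line segments."""
    def direction(seg):
        x1, y1, x2, y2 = seg
        return np.arctan2(y2 - y1, x2 - x1)
    a = abs(np.degrees(direction(seg_a) - direction(seg_b))) % 180.0
    return min(a, 180.0 - a)

# Example: second_angle = included_angle(lines[0][0], lines[1][0])
```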
The target boundary line may lie on either side of the color block near the reference position: the side away from the front center line or the side near it. If the target boundary line is on the side of the color block away from the front center line, the difference between the first angle and the second angle is taken as the shooting angle of the garment; if it is on the side near the front center line, the sum of the first angle and the second angle is taken as the shooting angle of the garment.
S150: Obtain a first three-dimensional feature map of each corrected garment shot image under a plurality of predetermined viewing angles.
As shown in fig. 6, obtaining the first three-dimensional feature map includes the steps of:
s610: and learning the shot object through a deep neural network to obtain the prior knowledge of the shot object.
S620: Input the corrected garment shot image and a mask into the deep neural network to obtain the first three-dimensional feature map corresponding to the mask, wherein each mask corresponds to one predetermined viewing angle.
In the virtual three-dimensional coordinate system, the height direction of the object corresponds to the Z-axis of the coordinate system, and the mask defines a mapping plane parallel to the X-Y plane of the coordinate system. The Z values corresponding to different preset visual angles are different, namely the different preset visual angles correspond to different height positions of the shot object.
Specifically, as an embodiment, the mask is an image with the same size as the garment shot image, and pixels in the garment shot image correspond to the pixel positions of the mask in a one-to-one manner. The first three-dimensional feature map includes predetermined perspective information and three-dimensional coordinates of points on the image, and the Z values of the three-dimensional coordinates of all the points are the same.
As an embodiment, the first three-dimensional feature map is obtained by performing an operation (e.g., an AND operation) on the garment shot image and the mask.
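A sketch of this AND-based variant: one mask per predetermined viewing angle, i.e., per height slice of the shot object. The mask file names and the mask-generation step are assumptions:

```python
# AND a garment shot image with a same-sized single-channel mask per viewing angle.
import cv2

def first_feature_map(garment_image, mask):
    """Keep only the pixels of the height slice selected by the mask."""
    assert garment_image.shape[:2] == mask.shape[:2]
    return cv2.bitwise_and(garment_image, garment_image, mask=mask)

garment = cv2.imread("garment_shot.png")
masks = [cv2.imread(f"mask_view_{v}.png", cv2.IMREAD_GRAYSCALE) for v in range(3)]
feature_maps = [first_feature_map(garment, m) for m in masks]
```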
S160: Fuse the first three-dimensional feature maps of the multi-angle garment shot images under the plurality of predetermined viewing angles to form a fused human body model. As one embodiment, the fused human body model is used as the virtual mannequin.
As an embodiment, as shown in fig. 7, fusing first three-dimensional feature maps of multi-angle garment shot images at a plurality of predetermined viewing angles to form a fused human body model, includes:
S710: For each predetermined viewing angle, splice the first three-dimensional feature maps of the multi-angle garment shot images into a second three-dimensional feature map under that predetermined viewing angle according to the shooting angles.
Specifically, as an embodiment, obtaining a second three-dimensional feature map at a predetermined viewing angle is shown in fig. 8, and includes the following steps:
S810: Take the first three-dimensional feature map of the first shooting angle under the predetermined viewing angle as the reference image and the first three-dimensional feature map of the second shooting angle under the predetermined viewing angle as the image to be spliced, and splice them to form a first spliced image. The second shooting angle is the shooting angle obtained by the next rotation after the first shooting angle, and the first shooting angle is the shooting angle in the initial state.
Specifically, as shown in fig. 9, the step of splicing the first three-dimensional feature map at the first shooting angle and the first three-dimensional feature map at the second shooting angle includes the following steps:
S910: Mark the splicing areas on the first three-dimensional feature maps of the first shooting angle and the second shooting angle under the predetermined viewing angle, respectively.
Let S_(x,z) be given by equation (1) (rendered only as an image in the original document), where S_(x,z) is the mark of the pixel at position (x, z) of the three-dimensional feature map, L is the distance between the camera and the background wall, α is the viewing angle of the camera in the left-right direction, and m is a marking coefficient satisfying 0 < m ≤ 1/2.
Considering that, in a garment shot image, the features of the area directly facing the camera are displayed more fully than in the images obtained after that area has rotated to other angles, this step abandons splicing within the directly facing area and marks the pixel positions in the areas on its two sides as the splicing areas. This reduces the computational load of splicing and helps to improve the fitting quality.
As an embodiment, the right splicing area is referred to as a first splicing area, the left splicing area is referred to as a second splicing area, and the middle area that is not marked is referred to as a center area.
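Because the exact form of equation (1) survives only as an image in the original, the sketch below reproduces just the described behaviour: leave the band facing the camera unmarked (the center area) and mark the flanking bands, whose width grows with the marking coefficient m. The band geometry is an assumption:

```python
# Per-column splice-region marking for one feature map, 0 < m <= 1/2.
import numpy as np

def mark_splice_columns(width, m):
    """Per-column mark: 1 = first (right) splicing area, 2 = second (left), 0 = center area."""
    band = int(m * width)
    marks = np.zeros(width, dtype=np.uint8)
    marks[:band] = 2
    marks[width - band:] = 1
    return marks
```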
S920: Perform multiple smooth samplings on the second splicing area of the first three-dimensional feature map of the first shooting angle and the first splicing area of the first three-dimensional feature map of the second shooting angle under the predetermined viewing angle, to obtain a plurality of sampled images for the first shooting angle and a plurality of sampled images for the second shooting angle.
The sampling parameter is ε_n = k^(n-1) · ε_0 (2)
where ε_n is the sampling parameter of the n-th sampling (n = 1, 2, …, N), k is the sampling coefficient, and ε_0 is the sampling parameter of the first of the N samplings.
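A sketch of S920 under the assumption that "smooth sampling" is Gaussian smoothing and that the sampling parameter of equation (2) acts as the Gaussian sigma:

```python
# Produce N progressively smoothed copies of a splicing-area image,
# with sigma following eps_n = k**(n - 1) * eps_0.
import cv2

def smooth_samples(splice_region, eps0, k, n_samples):
    sigmas = [eps0 * k ** (n - 1) for n in range(1, n_samples + 1)]
    return [cv2.GaussianBlur(splice_region, (0, 0), sigmaX=s) for s in sigmas]
```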
The multi-angle garment shot images are taken in sequence as the turntable and the shot object rotate counterclockwise, so the left splicing area at one angle and the right splicing area at the next angle are the overlapping parts of the two shots; the second splicing area of the first three-dimensional feature map of the first shooting angle is therefore spliced with the first splicing area of the first three-dimensional feature map of the second shooting angle.
S930: Acquire a preliminary feature point set G_1 for the first shooting angle and a preliminary feature point set G_2 for the second shooting angle from the plurality of sampled images of the first shooting angle and the plurality of sampled images of the second shooting angle.
Taking the first shooting angle as an example, G_1 = {P_ir | i = 1, 2, …, N; r = 1, 2, …, R} (3)
where P_ir denotes the r-th first feature point on the i-th sampled image.
The pixel value of the first feature point P_ir is V_ir:
V_ir = MAX{V_ijs | s = 1, 2, …, q×q} (4)
where V_ijs is the pixel value of the s-th pixel point in a first designated neighborhood A_1 of size q×q centered on the j-th second feature point P_ij of the i-th sampled image.
The pixel value of the second feature point P_ij is V_ij:
V_ij = MAX{V_itl | l = 1, 2, …, n×n} (5)
where V_itl is the pixel value of the l-th pixel point in a second designated neighborhood A_2 of size n×n centered on the t-th pixel point P_it of the i-th sampled image.
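A sketch of equations (4)–(5) read as two successive neighbourhood-maximum passes: an n×n pass yields the second-feature values V_ij, and a q×q pass over those yields the first-feature values V_ir. This pairing is an interpretation of the text, with SciPy's maximum_filter supplying the neighbourhood maxima:

```python
# Two-stage neighbourhood maxima over one sampled image.
import numpy as np
from scipy.ndimage import maximum_filter

def feature_values(sampled_image, n, q):
    second = maximum_filter(sampled_image, size=n)  # V_ij over neighborhood A_2, eq. (5)
    first = maximum_filter(second, size=q)          # V_ir over neighborhood A_1, eq. (4)
    return first, second
```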
S940: and matching the feature points in the preliminary feature point set of the first shooting angle and the feature points in the preliminary feature point set of the second shooting angle.
Specifically, matching each feature point between the preliminary feature point set of the first shooting angle and the preliminary feature point set of the second shooting angle includes the following steps:
p1: and taking a preliminary characteristic point set where the characteristic points to be matched are located as a first preliminary characteristic point set, and taking another preliminary characteristic point set as a second preliminary characteristic point set.
P2: Calculate the correlation R_uv between the first preliminary feature point P_u to be matched and every second preliminary feature point P_v in the second preliminary feature point set. The correlation formula was rendered only as an image in the original; in it, R_uv denotes the correlation between P_u and P_v, f is the correlation window centered on the first preliminary feature point P_u, g_u denotes the pixel value of P_u, and g_v denotes the pixel value of the second preliminary feature point P_v.
P3: and obtaining second preliminary feature points with the maximum and second largest correlation with the first preliminary feature points to be matched as third preliminary feature points and fourth preliminary feature points.
P4: Calculate the difference between the Euclidean distances from the first preliminary feature point to be matched to the third preliminary feature point and to the fourth preliminary feature point. If the difference is greater than a first threshold, execute P5; otherwise, output a matching-failure result, i.e., there is no second preliminary feature point in the second preliminary feature point set that matches the first preliminary feature point.
P5: and taking the third preliminary characteristic point as a matching characteristic point of the first preliminary characteristic point.
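A hedged sketch of P2–P5. The patent's correlation formula survives only as an image, so normalized cross-correlation over the window f stands in for it; the acceptance test mirrors P4 (best versus runner-up Euclidean distance):

```python
import numpy as np

def ncc(win_a, win_b):
    """Normalized cross-correlation of two equal-sized windows (stand-in for the lost formula)."""
    a = win_a.astype(float) - win_a.mean()
    b = win_b.astype(float) - win_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_point(p_u, win_u, candidates, first_threshold):
    """candidates: list of ((x, y), window). Returns the matched point or None (P3-P5)."""
    assert len(candidates) >= 2
    ranked = sorted(candidates, key=lambda c: ncc(win_u, c[1]), reverse=True)
    third, fourth = ranked[0], ranked[1]          # largest and second-largest correlation (P3)
    d3 = np.linalg.norm(np.asarray(p_u, float) - np.asarray(third[0], float))
    d4 = np.linalg.norm(np.asarray(p_u, float) - np.asarray(fourth[0], float))
    return third[0] if abs(d3 - d4) > first_threshold else None   # P4-P5
```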
S950: and preliminarily splicing the first three-dimensional feature map of the first shooting angle and the first three-dimensional feature map of the second shooting angle under the preset visual angle according to the feature point matching result. Specifically, the second splicing area of the first three-dimensional feature map at the first shooting angle and the first splicing area of the first three-dimensional feature map at the second shooting angle at the preset angle are spliced according to the feature point matching result, and other parts (namely the first splicing area of the first three-dimensional feature map at the first shooting angle and the second splicing area of the first three-dimensional feature map at the second shooting angle) are kept unchanged and are not spliced to form a third spliced image. In particular, the splicing may be performed using a splicing method known in the art.
S960: and performing edge fusion on the splicing area of the third spliced image and the adjacent central area thereof to form a first spliced image.
S820: Take the first spliced image as the reference image and the first three-dimensional feature map of a third shooting angle under the predetermined viewing angle as the image to be spliced, and splice them to form a second spliced image. The third shooting angle is the shooting angle obtained by the next rotation after the second shooting angle, i.e., the shooting angle closest to the second shooting angle, with the rotation direction of the third shooting angle relative to the second shooting angle being the same as that of the second shooting angle relative to the first shooting angle.
S830: Judge whether the third shooting angle is the maximum or minimum shooting angle under the predetermined viewing angle, i.e., whether the first three-dimensional feature map of the third shooting angle is the last first three-dimensional feature map to be spliced under the predetermined viewing angle. If yes, execute S850; otherwise, execute S840.
S840: and updating the first spliced image by using the second spliced image, updating the second shooting angle by using the third shooting angle, and returning to the step S820.
S850: and taking the second spliced image as a second three-dimensional feature map under the preset visual angle.
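A structural sketch of the S810–S850 loop: fold the first three-dimensional feature maps, ordered by shooting angle, into one second feature map for a given predetermined viewing angle. stitch_pair() stands in for the whole S910–S960 pipeline and is an assumed callable, not defined by the patent:

```python
def second_feature_map(maps_by_shooting_angle, stitch_pair):
    reference = maps_by_shooting_angle[0]          # first shooting angle (initial state, S810)
    for to_splice in maps_by_shooting_angle[1:]:   # next rotation each pass (S820/S840)
        reference = stitch_pair(reference, to_splice)
    return reference                               # last angle reached: S850
```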
S720: Perform surface fitting according to the second three-dimensional feature maps of all the predetermined viewing angles to form the fused human body model.
Preferably, forming the fused human body model further comprises verifying whether the fused human body model is qualified. Specifically, the method comprises the following steps:
S170: Obtain multi-angle third three-dimensional feature maps of the fused human body model under the plurality of predetermined viewing angles.
S180: Compare the first three-dimensional feature map and the third three-dimensional feature map of the same angle under the same predetermined viewing angle to obtain a difference value.
S190: Judge whether the difference value is smaller than a second threshold value. If yes, execute S1100; otherwise, return to S150.
S1100: Take the fused human body model as the virtual mannequin.
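A minimal sketch of this verification pass; the difference metric is not specified by the patent, so mean absolute difference is an assumption:

```python
import numpy as np

def model_is_qualified(first_maps, third_maps, second_threshold):
    """Inputs: dicts keyed by (viewing_angle, shooting_angle) -> same-shaped arrays."""
    for key, first in first_maps.items():
        diff = np.mean(np.abs(first.astype(float) - third_maps[key].astype(float)))
        if diff >= second_threshold:
            return False   # S190 fails: re-acquire the first feature maps (return to S150)
    return True            # S1100: take the fused model as the virtual mannequin
```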
Example two
The application also provides a system for constructing a virtual mannequin, which comprises a processor, wherein the processor executes the above method for constructing a virtual mannequin.
The beneficial effects obtained by the application are as follows:
1. this application utilizes the clothing to shoot all points in the image and construct and fuse the manikin to form virtual people's platform, greatly reduced the error between the actual size of manikin and the object of being shot.
2. The spliced image under each predetermined viewing angle is constructed from images taken at the multiple shooting angles formed by multiple rotations over the full 360-degree range, so that the spliced image fully reflects the features of all parts of the shot object and the human body model better reflects the features of the shot object.
The method and the device solve the problem that a large error exists between the size of the human body model obtained by the existing method and the actual size of the human body.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application. It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A method for constructing a virtual mannequin is characterized by comprising the following steps:
acquiring a multi-angle garment shot image;
obtaining a first three-dimensional feature map of each garment shot image under a plurality of predetermined viewing angles;
and fusing the first three-dimensional feature maps of the multi-angle garment shot images under the plurality of predetermined viewing angles to form a fused human body model as the virtual mannequin.
2. The construction method according to claim 1, further comprising, before obtaining the first three-dimensional feature map:
calibrating a camera according to the multi-angle garment shot images to obtain correction parameters;
and correcting the multi-angle garment shot images according to the correction parameters.
3. The construction method according to claim 2, wherein before obtaining the first three-dimensional feature maps of each garment shot image under the plurality of predetermined viewing angles, the method further comprises processing each corrected garment shot image to obtain the shooting angle of the garment shot image.
4. The construction method according to claim 3, wherein the garment shot image includes a shot object, a part of a turntable which has a fixed positional relationship with the shot object and abuts against a foot of the shot object, and a reference position which is located on an outer edge of a top surface of the turntable, and a plurality of color patches are uniformly provided on the turntable in a circumferential direction.
5. The construction method according to claim 4, wherein processing each corrected garment shot image comprises obtaining an image of a target area from the garment shot image, the target area including a portion of the turntable.
6. The construction method according to claim 5, wherein obtaining the shooting angle of the garment shot image comprises:
obtaining, from the image of the target area, a color block and a target boundary line near the reference position, and an included angle between the target boundary line and a front center line of the shot object as a first angle, wherein the front center line of the shot object is the intersection line of the shot object's face-to-back axial symmetry plane with the top surface of the turntable;
calculating an included angle between the target boundary line and a reference line as a third angle, wherein the reference line is the line connecting the reference position and the center point of the top surface of the turntable;
and calculating the sum or difference of the first angle and the third angle as the shooting angle of the garment.
7. The construction method according to claim 3, wherein fusing the first three-dimensional feature maps of the multi-angle garment shot images under a plurality of predetermined viewing angles to form a fused human body model comprises:
for each predetermined viewing angle, splicing the first three-dimensional feature maps of the multi-angle garment shot images into a second three-dimensional feature map under that predetermined viewing angle according to the shooting angles;
and performing surface fitting according to the second three-dimensional feature maps of all the predetermined viewing angles to form the fused human body model.
8. The construction method according to claim 1, further comprising, after forming the fused human body model:
obtaining multi-angle third three-dimensional feature maps of the fused human body model under a plurality of predetermined viewing angles;
comparing the first three-dimensional feature map and the third three-dimensional feature map of the same angle under the same predetermined viewing angle to obtain a difference value;
and if the difference value is smaller than a threshold value, taking the fused human body model as the virtual mannequin.
9. The method according to claim 8, wherein if the difference value exceeds a threshold value, the first three-dimensional feature map is acquired again and fused.
10. A system for constructing a virtual mannequin, comprising a processor, wherein the processor executes the method for constructing a virtual mannequin according to any one of claims 1 to 9.
CN202110008383.7A, filed 2021-01-05 (priority 2021-01-05): Method and system for constructing a virtual mannequin. Status: Pending. Publication: CN112700539A.

Priority Applications (1)

Application Number: CN202110008383.7A (CN112700539A); Priority Date: 2021-01-05; Filing Date: 2021-01-05; Title: Method and system for constructing a virtual mannequin


Publications (1)

Publication Number: CN112700539A; Publication Date: 2021-04-23

Family

ID=75514748

Family Applications (1)

Application Number: CN202110008383.7A (CN112700539A); Title: Method and system for constructing a virtual mannequin; Priority Date: 2021-01-05; Filing Date: 2021-01-05; Status: Pending

Country Status (1)

Country: CN; Publication: CN112700539A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170140578A1 (en) * 2014-06-12 2017-05-18 Shenzhen Orbbec Co., Ltd. Depth camera-based human-body model acquisition method and network virtual fitting system
CN107230224A (en) * 2017-05-19 2017-10-03 深圳奥比中光科技有限公司 Three-dimensional virtual garment model production method and device
CN108564612A (en) * 2018-03-26 2018-09-21 广东欧珀移动通信有限公司 Model display methods, device, storage medium and electronic equipment
CN109801374A (en) * 2019-01-14 2019-05-24 盾钰(上海)互联网科技有限公司 A kind of method, medium and system reconstructing threedimensional model by multi-angle image collection
CN111783182A (en) * 2020-07-07 2020-10-16 恒信东方文化股份有限公司 Modeling method and system of three-dimensional virtual mannequin

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
潘博 (Pan Bo); 钟跃崎 (Zhong Yueqi): "Three-dimensional garment reconstruction based on two-dimensional images" (基于二维图像的三维服装重建), 纺织学报 (Journal of Textile Research), No. 04, 15 April 2020 (2020-04-15) *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination