CN112489100A - Virtual dressing image correction method and system - Google Patents

Virtual dressing image correction method and system

Info

Publication number
CN112489100A
CN112489100A (application number CN202110150927.3A)
Authority
CN
China
Prior art keywords
image
mannequin
angle
multiple angles
correcting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110150927.3A
Other languages
Chinese (zh)
Inventor
李小波 (Li Xiaobo)
秦晓飞 (Qin Xiaofei)
李昆仑 (Li Kunlun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hengxin Shambala Culture Co ltd
Original Assignee
Hengxin Shambala Culture Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hengxin Shambala Culture Co ltd filed Critical Hengxin Shambala Culture Co ltd
Priority to CN202110150927.3A
Publication of CN112489100A
Current legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/38 - Registration of image sequences
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 - Commerce
    • G06Q30/06 - Buying, selling or leasing transactions
    • G06Q30/0601 - Electronic shopping [e-shopping]
    • G06Q30/0641 - Shopping interfaces
    • G06Q30/0643 - Graphical representation of items or shoppers
    • G06T3/04
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/136 - Segmentation; Edge detection involving thresholding
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/90 - Determination of colour characteristics

Abstract

The application discloses a virtual dressing image correction method and system. The method specifically comprises the following steps: acquiring multi-angle mannequin pictures; constructing, according to the multi-angle mannequin pictures, a multi-angle digital mannequin image corresponding to the mannequin picture of each angle; acquiring multi-angle original images with the mannequin; acquiring multi-angle target images from the multi-angle original images with the mannequin; correcting the multi-angle target images; and matching the corrected multi-angle target images with the multi-angle digital mannequin images. The method and system can acquire clothing images and automatically match them to the corresponding digital mannequins; by photographing a garment once, it can be matched to digital mannequins at multiple angles, which provides more choices for clothing display and reduces the production cost of clothing display.

Description

Virtual dressing image correction method and system
Technical Field
The application relates to the field of data processing, in particular to a virtual dressing image correction method and a virtual dressing image correction system.
Background
In the prior art, virtual fitting is increasingly popular. In a virtual fitting product, after various clothes and accessories are photographed to generate images, the clothing images need to be transformed and then attached to a model body to achieve the effect of trying on the clothes. However, the traditional method cannot fit a clothing image onto a mannequin at multiple angles and usually requires manual work, so the virtual dressing effect is not ideal.
Therefore, a virtual dressing image correction method and system are needed that automatically match the clothing image onto a digital mannequin at multiple angles and provide more choices for clothing display.
Disclosure of Invention
The purpose of the application is to provide a virtual dressing image correction method, which specifically comprises the following steps: acquiring multi-angle mannequin pictures; constructing, according to the multi-angle mannequin pictures, a multi-angle digital mannequin image corresponding to the mannequin picture of each angle; acquiring multi-angle original images with the mannequin; acquiring multi-angle target images from the multi-angle original images with the mannequin; correcting the multi-angle target images; and matching the corrected multi-angle target images with the multi-angle digital mannequin images.
As above, wherein the angles of the multi-angle original images with the mannequin correspond one-to-one to the angles of the multi-angle mannequin pictures, and the angles of the multi-angle digital mannequin images correspond one-to-one to the angles of the multi-angle target images.
As above, wherein acquiring the multi-angle original images with the mannequin comprises dividing the original image with the mannequin of each angle into two parts, C0 and C1, according to gray level, wherein the probability of occurrence of the C0 gray levels is

$$\omega_0 = \sum_{i=1}^{t} p_i$$

and the probability of occurrence of the C1 gray levels is

$$\omega_1 = \sum_{i=t+1}^{L} p_i,$$

where $t$ is a natural number, $i$ denotes the $i$-th gray level, $p_i$ is the probability of occurrence of the $i$-th gray level, and $L$ is the total number of gray levels of the original image with the mannequin for each angle.
As above, wherein correcting the multi-angle target images specifically includes the following sub-steps: identifying the types of the multi-angle target images; performing primary matching on the multi-angle target images according to their types; and correcting the multi-angle target images according to the primary matching result.
As above, wherein the primary matching of the multi-angle target images with the multi-angle digital mannequin images is completed according to the clothing feature points and the mannequin feature points.
As above, wherein the clothing feature points comprise two points at an opening portion of the garment.
As above, wherein the mannequin feature points comprise designated body parts in the digital mannequin image.
As above, wherein the clothing feature points are calibrated in advance at designated positions of the multi-angle target images, and the positions of the clothing feature points in the target image are the clothing feature point coordinates.
As above, wherein correcting the multi-angle target images according to the primary matching result includes checking whether the coordinates of the mannequin feature points after primary matching coincide with the coordinates of the clothing feature points.
A virtual dressing image correction system comprises a processor, wherein the processor performs any one of the methods described above.
The application has the following beneficial effects:
the virtual dressing image correction method and the virtual dressing image correction system can acquire the clothing images, automatically match the images to the corresponding digital mannequins, can match the digital mannequins at multiple angles by shooting the clothing once, provide more choices for clothing display, and reduce the clothing display manufacturing cost.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments described in the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for correcting a virtual dressing image according to an embodiment of the present application;
fig. 2 is an internal structural diagram of a virtual dressing image correction system according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The application relates to a virtual dressing image correction method and system. According to the application, a photographed garment can be automatically matched to the digital mannequin, and after being photographed once the garment can be matched to digital mannequins at multiple angles.
Fig. 1 is a flowchart of a virtual dressing image correction method provided by the present application, which specifically includes the following steps:
step S110: and acquiring a multi-angle mannequin picture.
Specifically, the multi-angle mannequin pictures are photographs of the mannequin in its real state, taken from multiple angles.
Step S120: constructing, according to the multi-angle mannequin pictures, a multi-angle digital mannequin image corresponding to the mannequin picture of each angle.
The digital mannequin images are constructed from the photographed multi-angle mannequin pictures, so the constructed digital mannequin images are likewise multi-angle. Specifically, the multi-angle mannequin pictures and the constructed multi-angle digital mannequin images are in one-to-one correspondence.
Preferably, the construction of the digital mannequin image can be performed with reference to existing means, which are not detailed here.
Step S130: and acquiring an original image with a mannequin at multiple angles.
The mannequin of step S110 is dressed in a garment and photographed, thereby obtaining the multi-angle original images with the mannequin. An original image with the mannequin contains a top-garment image or a bottom-garment image.
Step S140: and acquiring a multi-angle target image from the multi-angle original image with the mannequin.
Specifically, the step S140 specifically includes the following sub-steps:
step S1401: and determining the stability degree of the original image with the mannequin at multiple angles.
The stable value of the original image with the mannequin is calculated: the more disordered the original image with the mannequin, the smaller the stable value and the lower the degree of stability; the more ordered the original image, the larger the stable value and the higher the degree of stability. The stable value H is expressed as:

$$H = -\log_2 p(x),$$

where $p(x)$ denotes the frequency of occurrence of the gray value $x$ in the original image with the mannequin, and the specific value of the gray value $x$ is the minimum gray value among all gray values in the original image with the mannequin. The stable value is able to represent the brightness level of the image: the higher the brightness level, the greater the stable value; the lower the brightness, the smaller the stable value.
Preferably, the multi-angle original images with the mannequin may be converted into grayscale images before the stable value is acquired.
Step S1402: determining a segmentation threshold according to the stability degree of the original image with the mannequin at multiple angles, and determining a target image according to the segmentation threshold.
Specifically, if the stable value is greater than a first specified threshold and less than a second specified threshold, the image is neither too bright nor too dark, so the segmentation threshold can be reliably determined. The segmentation threshold lies in [0, 255].
The first specified threshold is smaller than the second specified threshold; the specific values can be adjusted according to the actual situation and are not limited here.
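As an illustration of this gating step, a minimal Python sketch follows. The stable-value expression uses the reconstruction above (self-information of the minimum gray value), and the threshold values T1 and T2 as well as the input path are assumptions for illustration only:

```python
import cv2
import numpy as np

def stable_value(gray: np.ndarray) -> float:
    """Stable value H of a grayscale image, per the reconstructed formula
    above: the self-information of the minimum gray value."""
    x_min = int(gray.min())
    # p(x): frequency of the minimum gray value among all pixels
    p = np.count_nonzero(gray == x_min) / gray.size
    return float(-np.log2(p))

# Hypothetical first/second specified thresholds (illustrative values).
T1, T2 = 2.0, 12.0

img = cv2.imread("mannequin_front.png")            # hypothetical input path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)       # convert to grayscale first
H = stable_value(gray)
if T1 < H < T2:
    print(f"stable value {H:.2f}: proceed to segmentation-threshold selection")
```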
Let the original image with the mannequin at any angle have $L$ gray levels, let $n_i$ be the number of pixels at the $i$-th gray level, and let $N$ be the total number of pixels; then

$$N = n_1 + n_2 + \cdots + n_L.$$

Let $p_i$ be the probability of occurrence of the $i$-th gray level, expressed as

$$p_i = \frac{n_i}{N},$$

so that

$$p_i \ge 0, \qquad \sum_{i=1}^{L} p_i = 1.$$
Further, before the segmentation threshold is fixed, a candidate threshold $t$ divides the original image with the mannequin into two parts according to gray level, C0 (gray levels $1$ to $t$) and C1 (gray levels $t+1$ to $L$), where the probability of occurrence of the C0 gray levels is

$$\omega_0 = \sum_{i=1}^{t} p_i$$

and the probability of occurrence of the C1 gray levels is

$$\omega_1 = \sum_{i=t+1}^{L} p_i = 1 - \omega_0,$$

where $t$ is a natural number and $i$ denotes the $i$-th gray level.
The gray mean of part C0, $\mu_0$, is expressed as:

$$\mu_0 = \sum_{i=1}^{t} \frac{i\,p_i}{\omega_0},$$

where $t$ is a natural number, $i$ denotes the $i$-th gray level, $L$ is the total number of gray levels of the original image with the mannequin, $p_i$ is the probability of occurrence of the $i$-th gray level, and $\omega_0$ is the probability of occurrence of the C0 gray levels.
The gray mean of part C1, $\mu_1$, is expressed as:

$$\mu_1 = \sum_{i=t+1}^{L} \frac{i\,p_i}{\omega_1},$$

where $t$ is a natural number, $i$ denotes the $i$-th gray level, $L$ is the total number of gray levels of the original image with the mannequin, $p_i$ is the probability of occurrence of the $i$-th gray level, and $\omega_1$ is the probability of occurrence of the C1 gray levels.
The variance of part C0, $\sigma_0^2$, is expressed as:

$$\sigma_0^2 = \sum_{i=1}^{t} (i - \mu_0)^2\,\frac{p_i}{\omega_0},$$

where $t$ is a natural number, $i$ denotes the $i$-th gray level, $p_i$ is the probability of occurrence of the $i$-th gray level, $\omega_0$ is the probability of occurrence of the C0 gray levels, and $\mu_0$ is the gray mean of part C0.
The variance of part C1, $\sigma_1^2$, is expressed as:

$$\sigma_1^2 = \sum_{i=t+1}^{L} (i - \mu_1)^2\,\frac{p_i}{\omega_1},$$

where $t$ is a natural number, $i$ denotes the $i$-th gray level, $L$ is the total number of gray levels of the original image with the mannequin, $p_i$ is the probability of occurrence of the $i$-th gray level, $\omega_1$ is the probability of occurrence of the C1 gray levels, and $\mu_1$ is the gray mean of part C1.
Furthermore, three evaluation functions are set:

$$\lambda = \frac{\sigma_B^2}{\sigma_W^2}, \qquad \kappa = \frac{\sigma_T^2}{\sigma_W^2}, \qquad \eta = \frac{\sigma_B^2}{\sigma_T^2},$$

where the within-class variance, between-class variance, and total variance are

$$\sigma_W^2 = \omega_0\sigma_0^2 + \omega_1\sigma_1^2, \qquad \sigma_B^2 = \omega_0\,\omega_1\,(\mu_1 - \mu_0)^2, \qquad \sigma_T^2 = \sum_{i=1}^{L} (i - \mu_T)^2\,p_i = \sigma_W^2 + \sigma_B^2,$$

with $\mu_T = \omega_0\mu_0 + \omega_1\mu_1$. Here $\omega_0$ is the probability of occurrence of the C0 gray levels, $\mu_0$ the gray mean of part C0, $\omega_1$ the probability of occurrence of the C1 gray levels, $\mu_1$ the gray mean of part C1, $i$ the $i$-th gray level, $p_i$ the probability of occurrence of the $i$-th gray level, $L$ the total number of gray levels of the original image with the mannequin, $\sigma_0^2$ the variance of part C0, and $\sigma_1^2$ the variance of part C1.

The threshold $t$ at which the three evaluation functions $\lambda$, $\kappa$, $\eta$ take their maximum value is the segmentation threshold, and the part C0 divided according to this segmentation threshold is the target image.
According to the above method, the target image of the corresponding angle can be determined from the original image with the mannequin at any angle, so the multi-angle target images can be determined.
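To make the threshold selection concrete, here is a minimal sketch, assuming 8-bit images (256 gray levels, indexed from 0 rather than 1) and maximizing $\eta = \sigma_B^2/\sigma_T^2$; since $\sigma_T^2$ is constant in $t$ and $\sigma_T^2 = \sigma_W^2 + \sigma_B^2$, maximizing any of $\lambda$, $\kappa$, $\eta$ selects the same threshold:

```python
import numpy as np

def segmentation_threshold(gray: np.ndarray) -> int:
    """Return the threshold t maximizing eta = sigma_B^2 / sigma_T^2,
    per the evaluation functions above."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()                       # p_i: gray-level probabilities
    i = np.arange(256)
    mu_T = (i * p).sum()                        # total mean
    sigma_T2 = ((i - mu_T) ** 2 * p).sum()      # total variance, constant in t
    if sigma_T2 == 0:
        return 0                                # constant image, nothing to split
    best_t, best_eta = 0, -1.0
    for t in range(1, 255):
        w0 = p[:t + 1].sum()                    # omega_0
        w1 = 1.0 - w0                           # omega_1
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (i[:t + 1] * p[:t + 1]).sum() / w0
        mu1 = (i[t + 1:] * p[t + 1:]).sum() / w1
        sigma_B2 = w0 * w1 * (mu1 - mu0) ** 2   # between-class variance
        eta = sigma_B2 / sigma_T2
        if eta > best_eta:
            best_t, best_eta = t, eta
    return best_t

# Per the text above, the C0 part (gray levels <= t) is the target image:
# t = segmentation_threshold(gray)
# target_mask = (gray <= t).astype(np.uint8)
```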
Preferably, after the multi-angle target images are acquired, since each target image lies on the mannequin, a coordinate system is constructed with the upper-left corner of the image as the origin, and the position of each target image within the corresponding mannequin image is obtained.
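As an illustration of recording this position, a short sketch (the helper name is hypothetical; `target_mask` is the C0 mask from the previous sketch) that takes the bounding box of the target image in coordinates with origin at the upper-left corner:

```python
import numpy as np

def target_position(target_mask: np.ndarray) -> tuple:
    """Bounding box (x, y, w, h) of the target image inside the mannequin
    frame; coordinates use the image's upper-left corner as the origin."""
    ys, xs = np.nonzero(target_mask)
    x, y = int(xs.min()), int(ys.min())
    return x, y, int(xs.max()) - x + 1, int(ys.max()) - y + 1
```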
Step S150: and correcting the target image in multiple angles.
Specifically, the target images acquired in step S140 are corrected.
Specifically, the step S150 specifically includes the following sub-steps:
step S1501: the types of target images of multiple angles are identified.
In this embodiment, only a single top-garment image or a single bottom-garment image is matched at a time, so it is necessary to identify whether the target image is a top-garment image or a bottom-garment image; the types of the multi-angle target images are the same.
Step S1502: and carrying out primary matching on the target images of multiple angles according to the types of the target images of multiple angles.
The multi-angle target images are aligned with the corresponding digital mannequin images, which realizes the correction processing of the target images.
Specifically, clothing feature point coordinates are calibrated on the multi-angle target images in advance. There are multiple clothing feature points; points can be calibrated at designated positions of the garment to serve as clothing feature points. For example, 2 points are set at the opening of the left cuff of the garment as clothing feature points, 2 points are set at the edge of the hem opening as clothing feature points, or the two points at the left and right of the garment's shoulders are set as clothing feature points. The positions of the clothing feature points in the target image are the clothing feature point coordinates.
Furthermore, mannequin feature points are calibrated in the multi-angle digital mannequin images in advance. There are multiple mannequin feature points; designated body parts in the digital mannequin image serve as mannequin feature points. For example, the two outermost points on the left and right at the wrist of the mannequin's left arm are mannequin feature points; so are the points at the left and right of the mannequin's shoulders; at the mannequin's waist, the two outermost points on the left and right are set as mannequin feature points; alternatively, the two outermost points on the left and right at the mannequin's ankle are set as mannequin feature points. The position of a mannequin feature point in the digital mannequin image is the mannequin feature point coordinate.
Since the digital mannequin image is constructed from the mannequin picture, the proportions of the mannequin in the digital mannequin image and in the mannequin picture are consistent, so the clothing feature point coordinates of the target image can be matched to the corresponding mannequin feature point coordinates of the digital mannequin image.
Specifically, there is a correspondence between clothing feature points and mannequin feature points: for example, the clothing feature points at the left cuff opening correspond to the mannequin feature points at the wrist of the mannequin's left arm, the clothing feature points at the garment's shoulders correspond to the mannequin feature points at the mannequin's shoulders, and so on. The corresponding feature points are aligned one by one, completing the primary matching between the multi-angle target images and the corresponding multi-angle digital mannequin images.
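A minimal sketch of this primary matching, under the assumption that the one-by-one alignment is realized with a similarity transform (scale, rotation, translation) estimated from the corresponding point pairs; the point values and file path are hypothetical:

```python
import cv2
import numpy as np

# Corresponding points: garment_pts[k] on the target image corresponds to
# mannequin_pts[k] on the digital mannequin image (illustrative values).
garment_pts = np.array([[12, 40], [118, 42], [30, 210], [98, 212]], np.float32)
mannequin_pts = np.array([[55, 90], [160, 93], [72, 260], [142, 263]], np.float32)

# Estimate a similarity transform moving garment points onto mannequin points.
M, _inliers = cv2.estimateAffinePartial2D(garment_pts, mannequin_pts)

# Warp the target image into the digital mannequin's coordinate frame.
target = cv2.imread("garment_front.png")   # hypothetical path
h, w = 512, 384                            # assumed digital mannequin image size
aligned = cv2.warpAffine(target, M, (w, h))
```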
Step S1503: and correcting the multi-angle target image according to the primary matching result.
In this embodiment, for matching a single top-garment or bottom-garment image with the digital mannequin image, the number, coordinates, and so on of the clothing feature points of the target image are relatively intuitive, so the target image can be corrected according to the clothing feature point coordinates and the mannequin feature point coordinates.
Specifically, it is checked whether the coordinates of the mannequin feature points after primary matching coincide with the coordinates of the clothing feature points. The multi-angle target images and the multi-angle digital mannequin images are in one-to-one correspondence. In theory, the digital mannequin image is constructed from the mannequin picture and their proportions are consistent; in the actual state, however, the proportions of the digital mannequin image may differ from those of the mannequin picture. It is therefore necessary to check whether the mannequin feature point coordinates after primary matching coincide with the clothing feature point coordinates, in order to judge whether the target image fits the digital mannequin perfectly.
The number of coincident mannequin feature point and clothing feature point coordinates is counted; if it is less than a first specified threshold, the target image is corrected as a whole.
Specifically, the target image is scaled as a whole by a certain ratio. After the overall scaling, it is judged whether the number of coincident mannequin and clothing feature point coordinates has changed; if the number of coincident coordinates has increased above the first specified threshold, the target image is then corrected locally.
Further, the clothing feature point coordinates that do not coincide with mannequin feature point coordinates are found, and local correction is applied to them, for example stretching or shrinking the garment locally so that the clothing feature points coincide with the mannequin feature points. The remaining clothing feature points may also be corrected locally in this manner, or in other suitable manners, which are not limited here.
If, before the overall scaling, the number of coincident mannequin and clothing feature point coordinates is already greater than the first specified threshold, local correction is applied to the target image directly; the local correction method is as described above.
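A minimal sketch of this correction flow, with assumed tolerance and threshold values; two coordinates are treated as coinciding when they fall within a small pixel tolerance, and the local snap stands in for the local stretch/shrink described above:

```python
import numpy as np

TOL = 2.0          # assumed pixel tolerance for "coinciding" coordinates
MIN_MATCHES = 3    # assumed first specified threshold

def count_coincident(garment_pts: np.ndarray, mannequin_pts: np.ndarray) -> int:
    """Number of garment feature points coinciding (within TOL pixels)
    with their corresponding mannequin feature points."""
    d = np.linalg.norm(garment_pts - mannequin_pts, axis=1)
    return int((d <= TOL).sum())

def correct(garment_pts: np.ndarray, mannequin_pts: np.ndarray) -> np.ndarray:
    pts = garment_pts.astype(np.float64).copy()
    if count_coincident(pts, mannequin_pts) < MIN_MATCHES:
        # Overall correction: scale the whole point set about its centroid.
        c = pts.mean(axis=0)
        scale = (np.linalg.norm(mannequin_pts - mannequin_pts.mean(axis=0))
                 / np.linalg.norm(pts - c))
        pts = (pts - c) * scale + c
    if count_coincident(pts, mannequin_pts) >= MIN_MATCHES:
        # Local correction: move the still non-coincident points onto their
        # mannequin counterparts (stand-in for local stretching/shrinking).
        d = np.linalg.norm(pts - mannequin_pts, axis=1)
        pts[d > TOL] = mannequin_pts[d > TOL]
    return pts
```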
Step S160: and matching the corrected target image with the digital mannequin image.
It is checked whether the edges of the corrected target image are completely covered within the digital mannequin image; if they are completely covered, the final matching of the target image to the digital mannequin image is completed.
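A minimal sketch of this final check, assuming binary masks for the corrected target image and the digital mannequin region (names hypothetical):

```python
import numpy as np

def fully_covered(target_mask: np.ndarray, mannequin_mask: np.ndarray) -> bool:
    """True if every pixel of the corrected target image lies inside the
    digital mannequin region, i.e. its edges are completely covered."""
    return bool(np.all(mannequin_mask[target_mask > 0]))
```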
The present application further provides a virtual dressing image correction system. As shown in fig. 2, the system includes a processor that executes the method of steps S110 to S160; the processor includes a mannequin picture acquiring unit 201, a construction unit 202, an original image acquiring unit 203, a target image acquiring unit 204, a correction unit 205, and a matching unit 206.
The mannequin picture acquiring unit 201 is used for acquiring the multi-angle mannequin pictures.
The construction unit 202 is connected to the mannequin picture acquiring unit 201 and is configured to construct, according to the multi-angle mannequin pictures, a multi-angle digital mannequin image corresponding to the mannequin picture of each angle.
Specifically, the construction unit 202 specifically includes a stability degree determination module and a segmentation threshold determination module.
The stability determining module is used for determining the stability of the original image with the mannequin from multiple angles.
The segmentation threshold determining module is connected with the stability determining module and used for determining a segmentation threshold according to the stability of the original image with the mannequin of each angle and determining a multi-angle target image according to the segmentation threshold.
The original image acquiring unit 203 is connected to the constructing unit 202, and is used for acquiring original images with a mannequin at multiple angles.
The target image obtaining unit 204 is connected to the original image obtaining unit 203, and is configured to obtain a target image of multiple angles from an original image of multiple angles with a mannequin.
The correction unit 205 is connected to the target image acquiring unit 204 and is used for correcting the multi-angle target images.
The matching unit 206 is connected to the correction unit 205 and is configured to match the corrected multi-angle target images with the multi-angle digital mannequin images.
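To make the unit layout of fig. 2 concrete, a minimal structural sketch follows; the class and method names are illustrative only, and the method bodies are placeholders:

```python
from typing import Any, List

class VirtualDressingCorrectionSystem:
    """Skeleton mirroring units 201-206 of fig. 2 (steps S110-S160)."""

    def acquire_mannequin_pictures(self) -> List[Any]:            # unit 201, S110
        raise NotImplementedError

    def construct_digital_mannequins(self, pics) -> List[Any]:    # unit 202, S120
        raise NotImplementedError

    def acquire_original_images(self) -> List[Any]:               # unit 203, S130
        raise NotImplementedError

    def extract_targets(self, originals) -> List[Any]:            # unit 204, S140
        raise NotImplementedError

    def correct(self, targets, mannequins) -> List[Any]:          # unit 205, S150
        raise NotImplementedError

    def match(self, corrected, mannequins) -> List[Any]:          # unit 206, S160
        raise NotImplementedError

    def run(self) -> List[Any]:
        pics = self.acquire_mannequin_pictures()
        mannequins = self.construct_digital_mannequins(pics)
        originals = self.acquire_original_images()
        targets = self.extract_targets(originals)
        corrected = self.correct(targets, mannequins)
        return self.match(corrected, mannequins)
```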
The application has the following beneficial effects:
the virtual dressing image correction method and the virtual dressing image correction system can acquire the clothing images, automatically match the images to the corresponding digital mannequins, can match the digital mannequins at multiple angles by shooting the clothing once, provide more choices for clothing display, and reduce the clothing display manufacturing cost.
Although the present application has been described with reference to examples, which are intended to be illustrative only and not to be limiting of the application, changes, additions and/or deletions may be made to the embodiments without departing from the scope of the application.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A correction method of a virtual dressing image is characterized by comprising the following steps:
acquiring a multi-angle mannequin picture;
constructing, according to the multi-angle mannequin pictures, a multi-angle digital mannequin image corresponding to the mannequin picture of each angle;
acquiring an original image with a mannequin at multiple angles;
acquiring a multi-angle target image from a multi-angle original image with a mannequin;
correcting the multi-angle target image;
and matching the corrected multi-angle target image with the multi-angle digital mannequin image.
2. The method for correcting a virtual dressing image according to claim 1, wherein the angles of the multi-angle original images with the mannequin correspond one-to-one to the angles of the multi-angle mannequin pictures, and the angles of the multi-angle digital mannequin images correspond one-to-one to the angles of the multi-angle target images.
3. The method for correcting the virtual dressing image according to claim 1, wherein obtaining the multi-angle original images with the mannequin comprises dividing the original image with the mannequin of each angle into two parts, C0 and C1, according to gray level, wherein the probability of occurrence of the C0 gray levels is

$$\omega_0 = \sum_{i=1}^{t} p_i$$

and the probability of occurrence of the C1 gray levels is

$$\omega_1 = \sum_{i=t+1}^{L} p_i,$$

where $t$ is a natural number, $i$ denotes the $i$-th gray level, $p_i$ is the probability of occurrence of the $i$-th gray level, and $L$ is the total number of gray levels of the original image with the mannequin for each angle.
4. The method for correcting a virtual dressing image according to claim 1, wherein the correction of the target image at multiple angles specifically comprises the following sub-steps:
identifying the types of target images of multiple angles;
performing primary matching on the multi-angle target images according to the types of the multi-angle target images;
and correcting the multi-angle target image according to the primary matching result.
5. The method for correcting the virtual dressing image according to claim 4, wherein the primary matching of the multi-angle target image and the multi-angle digital mannequin image is performed according to the clothing feature points and the mannequin feature points.
6. The method for correcting a virtual dressing image according to claim 5, wherein the clothing feature points include two points in an opening portion of the clothing.
7. The method for correcting a virtual dressing image according to claim 6, wherein the mannequin feature points include designated body parts in the digital mannequin image.
8. The method for correcting the virtual dressing image according to claim 7, wherein the clothing feature points are calibrated in advance at designated positions of the multi-angle target images, and the positions of the clothing feature points in the target image are the clothing feature point coordinates.
9. The method for correcting a virtual dressing image according to claim 8, wherein correcting the target image at multiple angles according to the result of the primary matching includes checking whether coordinates of the mannequin feature points after the primary matching coincide with coordinates of the dress feature points.
10. A system for correcting a virtual dressing image, comprising a processor, wherein the processor performs the method of any one of claims 1-9.
CN202110150927.3A 2021-02-04 2021-02-04 Virtual dressing image correction method and system Pending CN112489100A (en)

Priority Applications (1)

Application Number: CN202110150927.3A; Priority Date: 2021-02-04; Filing Date: 2021-02-04; Title: Virtual dressing image correction method and system


Publications (1)

Publication Number: CN112489100A; Publication Date: 2021-03-12

Family

ID=74912213

Family Applications (1)

Application Number: CN202110150927.3A; Title: Virtual dressing image correction method and system; Priority Date: 2021-02-04; Filing Date: 2021-02-04

Country Status (1)

Country Link
CN (1) CN112489100A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2677497A2 (en) * 2012-04-02 2013-12-25 Fashion3D Sp. z o.o. Method and system of spacial visualisation of objects and a platform control system included in the system, in particular for a virtual fitting room
CN112258660A (en) * 2020-12-23 2021-01-22 恒信东方文化股份有限公司 Processing method and system of virtual dressing image
CN112257819A (en) * 2020-12-23 2021-01-22 恒信东方文化股份有限公司 Image matching method and system

Similar Documents

Publication Publication Date Title
JP5873442B2 (en) Object detection apparatus and object detection method
CN110168562B (en) Depth-based control method, depth-based control device and electronic device
US5995639A (en) Apparatus for identifying person
US9262674B2 (en) Orientation state estimation device and orientation state estimation method
US7176945B2 (en) Image processor, image processing method, recording medium, computer program and semiconductor device
US8699749B2 (en) Computer-readable storage medium, image processing apparatus, image processing system, and image processing method
CN110199296A (en) Face identification method, processing chip and electronic equipment
CN109598210B (en) Picture processing method and device
JP2013504918A (en) Image processing system
CN109117753A (en) Position recognition methods, device, terminal and storage medium
CN109785228B (en) Image processing method, image processing apparatus, storage medium, and server
CN109274883A (en) Posture antidote, device, terminal and storage medium
CN109086724A (en) A kind of method for detecting human face and storage medium of acceleration
KR20170092533A (en) A face pose rectification method and apparatus
JP2000251078A (en) Method and device for estimating three-dimensional posture of person, and method and device for estimating position of elbow of person
CN108875623A (en) A kind of face identification method based on multi-features correlation technique
KR20210027028A (en) Body measuring device and controlling method for the same
CN113706431A (en) Model optimization method and related device, electronic equipment and storage medium
CN113191200A (en) Push-up test counting method, device, equipment and medium
CN112489100A (en) Virtual dressing image correction method and system
US9323981B2 (en) Face component extraction apparatus, face component extraction method and recording medium in which program for face component extraction method is stored
CN110264320B (en) Information display method and device based on reality augmentation equipment and storage medium
CN112257819B (en) Image matching method and system
TWI641999B (en) Eyeball recognition method and system
JP2008250407A (en) Image processing program and image processor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210312