CN111695525A - 360-degree clothes fitting display method and device - Google Patents
- Publication number: CN111695525A
- Application number: CN202010542049.5A
- Authority
- CN
- China
- Prior art keywords: clothing, image, garment, images, clothes
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06V20/10: Scenes; scene-specific elements; terrestrial scenes
- G06Q30/0643: Electronic shopping; shopping interfaces; graphical representation of items or shoppers
- G06T11/60: 2D image generation; editing figures and text; combining figures or text
- G06T7/90: Image analysis; determination of colour characteristics
- H04N21/44016: Processing of video elementary streams involving splicing one content stream with another, e.g. for substituting a video clip
- G06T2207/10016: Image acquisition modality; video; image sequence
- Y02P90/30: Computing systems specially adapted for manufacturing
Abstract
The application discloses a 360-degree garment fitting display method and device. The method comprises: collecting a plurality of garment pictures at consecutive angles; separating the garment in the collected pictures from the mannequin and the background, obtaining the garment's transparency channel, and generating a garment image with a transparent channel; inserting transition frames between consecutive frames of the transparent-channel garment images to generate a garment video with an overall smooth frame sequence; and fusing the garment video with a new background to generate a 360-degree frame sequence with a scene. With this method and device, garments can be displayed more clearly and smoothly through a full 360 degrees without blind spots, presenting the customer with a 3D browsing scene showing the garment's true material and real wearing effect.
Description
Technical Field
The application relates to the technical field of virtual fitting, in particular to a 360-degree clothes fitting display method and device.
Background
In the traditional clothes-buying process, a customer must take off the clothes they are wearing in order to try on new clothes properly. This costs the customer a great deal of time and, especially on holidays, easily makes crowded stores inefficient. When there is not enough free display space, a store cannot lay out garments such as underwear for customers to view; and when too many tops are stacked on the shelves, customers lack the energy and patience to search through them one by one for clothes that suit them and quickly become tired. This limits the development of clothing stores.
There is therefore a need for a fitting display terminal that can show garments through a full 360 degrees and display them better.
Disclosure of Invention
The application provides a 360-degree garment fitting display method, comprising the following steps:
collecting a plurality of garment pictures at consecutive angles;
separating the garment in the collected garment pictures from the mannequin and the background, obtaining the garment's transparency channel, and generating a garment image with a transparent channel;
inserting transition frames between consecutive frames of the transparent-channel garment images to generate a garment video with an overall smooth frame sequence;
and fusing the garment video with a new background to generate a 360-degree frame sequence with a scene.
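As an illustration, the four steps above can be sketched end to end in Python. Every function below is a simplified stand-in, not the patented technique itself: a white-background threshold replaces the image separation model, plain frame averaging replaces the key-frame clustering, and alpha-over compositing replaces the Poisson fusion.

```python
import numpy as np

def separate_garment(photo):
    # Stand-in for the matting step: treat near-white pixels as
    # background and attach an alpha channel (the patent derives a
    # soft alpha from an image separation model instead).
    alpha = (photo.min(axis=-1) < 250).astype(np.uint8) * 255
    return np.dstack([photo, alpha])

def insert_transitions(frames):
    # Insert one blended transition frame between each consecutive pair.
    out = []
    for a, b in zip(frames, frames[1:]):
        out += [a, ((a.astype(np.uint16) + b) // 2).astype(np.uint8)]
    out.append(frames[-1])
    return out

def composite(rgba, background):
    # Alpha-over compositing of the garment frame onto the new scene
    # (the patent uses Poisson fusion for a seamless result).
    a = rgba[..., 3:4] / 255.0
    return (rgba[..., :3] * a + background * (1.0 - a)).astype(np.uint8)

def build_sequence(photos, background):
    rgba = [separate_garment(p) for p in photos]
    return [composite(f, background) for f in insert_transitions(rgba)]
```

Three input photos yield five output frames here (one transition per join), which mirrors the smoothing effect the method aims for.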
In the above 360-degree garment fitting display method, collecting a plurality of garment pictures at consecutive angles specifically comprises: placing the garment to be photographed on a mannequin within the shooting range of the display terminal; after the display terminal receives a shooting instruction, controlling the mannequin to rotate through 360 degrees and controlling the camera to shoot, thereby obtaining 360-degree consecutive garment pictures.
In the above method, separating the garment from the mannequin and the background in the collected garment pictures comprises the following sub-steps:
converting the garment photo into a one-dimensional grayscale image, finding the neighborhood pixels of each pixel in the grayscale image, computing the weights between the garment-image pixels and their neighborhood pixels to form a weight data set, and obtaining the similarity matrix corresponding to the data set;
performing feature decomposition on the similarity matrix to obtain feature values, and inputting the feature values into an image separation model;
computing the weighted average color value and its covariance matrix in the image separation model, and computing the Gaussian probabilities of the foreground and of the background of the garment image from the weighted average color value and the covariance matrix;
and computing the transparency of the garment image, and clustering according to the foreground/background Gaussian probabilities and the transparency to obtain the garment in the garment image.
In the above method, obtaining the similarity matrix specifically comprises the following sub-steps:
converting the garment photo into a one-dimensional grayscale image;
computing the weight of each pixel's neighborhood pixels in the grayscale image to form a weight data set;
and then computing the scale parameter of each point in the weight data set, and obtaining the similarity matrix of the data set from the scale parameters.
In the above method, inserting a transition frame between two consecutive frames of the transparent-channel garment images specifically comprises the following sub-steps:
clustering and fusing the adjacent key frames of each pair of images into a segment, and using this segment as the transition frame;
and judging whether the time interval between the two images is smaller than a preset interval: if so, adding a transition frame directly into the gap between the two images; otherwise, determining the number of transition frames to fill according to the time interval between the two images and the preset interval, and supplementing that number of transition frames into the gap between the two images.
The application also provides a 360-degree garment fitting display device, comprising:
a 360-degree garment picture acquisition module for collecting a plurality of garment pictures at consecutive angles;
a transparent-channel garment image generation module for separating the garment in the collected garment pictures from the mannequin and the background, obtaining the garment's transparency channel, and generating a transparent-channel garment image;
a garment video generation module for inserting transition frames between consecutive frames of the transparent-channel garment images to generate a garment video with an overall smooth frame sequence, and fusing the garment video with a new background to generate a 360-degree frame sequence with a scene.
In the above device, the 360-degree garment picture acquisition module is specifically used to place the garment to be photographed on a mannequin within the shooting range of the display terminal; after the display terminal receives a shooting instruction, the mannequin is controlled to rotate through 360 degrees and the camera is controlled to shoot, thereby obtaining 360-degree consecutive garment pictures.
In the above device, the transparent-channel garment image generation module is specifically used to convert the garment photo into a one-dimensional grayscale image, find the neighborhood pixels of each pixel in the grayscale image, compute the weights between the garment-image pixels and their neighborhood pixels to form a weight data set, and obtain the similarity matrix corresponding to the data set; perform feature decomposition on the similarity matrix to obtain feature values and input them into an image separation model; compute the weighted average color value and its covariance matrix in the image separation model, and from them the Gaussian probabilities of the foreground and the background of the garment image; and compute the transparency of the garment image and cluster according to the foreground/background Gaussian probabilities and the transparency to obtain the garment in the garment image.
Further, the transparent-channel garment image generation module specifically includes a similarity matrix generation submodule for converting the garment photo into a one-dimensional grayscale image; computing the weight of each pixel's neighborhood pixels in the grayscale image to form a weight data set; and then computing the scale parameter of each point in the weight data set and obtaining the similarity matrix of the data set from the scale parameters.
In addition, the garment video generation module is specifically used to cluster and fuse the adjacent key frames of each pair of images into a segment and use this segment as a transition frame; and to judge whether the time interval between the two images is smaller than a preset interval, adding a transition frame directly into the gap between the two images if so, and otherwise determining the number of transition frames to fill according to the time interval between the two images and the preset interval and supplementing that number of transition frames into the gap between the two images.
The beneficial effects achieved by the present application are as follows: with the 360-degree garment fitting display method and device provided by the application, garments can be displayed more clearly and smoothly through a full 360 degrees without blind spots, presenting the customer with a 3D browsing scene showing the garment's true material and real wearing effect.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a 360 ° fitting display method according to an embodiment of the present disclosure;
fig. 2 is a schematic view of a 360 ° fitting display device according to the second embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention are clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example one
An embodiment of the present application provides a 360° fitting display method, as shown in fig. 1, comprising:
Step 110, collecting a plurality of garment pictures at consecutive angles;
specifically, the display terminal provided by the application is equipped with a camera. The garment to be photographed is placed on a mannequin within the terminal's shooting range; after the display terminal receives a shooting instruction, it controls the mannequin to rotate through 360 degrees and controls the camera to shoot, thereby obtaining 360-degree consecutive garment photos.
Step 120, separating the garment from the mannequin and the background in the collected garment pictures; in the embodiment of the application this specifically comprises the following sub-steps:
Step 121, converting the garment photo into a one-dimensional grayscale image, then finding the neighborhood pixels of each pixel in the grayscale image, computing the weights between the garment-image pixels and their neighborhood pixels to form a weight data set, and obtaining the similarity matrix corresponding to the weight data set;
specifically, the garment photo is converted into a one-dimensional grayscale image, represented as the data set X = {x_1, x_2, …, x_n} ∈ R^D, where x_i is a point in the data set, i ∈ (1, n), D is the dimension of the data, and R denotes the set of real numbers;
then the weight of each pixel's neighborhood pixels in the grayscale image is computed as
w_i = a_i^2 · g_i
where i is a vertex adjacent to the current pixel, a_i is the phase value of point i, and g_i is a weight coefficient;
then the scale parameter of each point in the weight data set is computed, and the similarity matrix of the data set is obtained from the scale parameters:
A_ij = exp(−‖x_i − x_j‖ / (α_i · α_j)),  i, j ∈ (1, n)
where A_ij denotes an arbitrary element of the similarity matrix, α_i and α_j are the scale parameters corresponding to arbitrary points x_i and x_j in the data set, and ‖x_i − x_j‖ is the distance between x_i and x_j;
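As a sketch, the similarity matrix A_ij = exp(−‖x_i − x_j‖/(α_i α_j)) can be built over the flattened grayscale image. Choosing each scale parameter α_i as the distance to a point's k-th nearest neighbour is an assumption here (a common self-tuning rule); the patent derives the scales from the weight data set but does not state the exact rule.

```python
import numpy as np

def similarity_matrix(gray, k=3):
    # Flatten the grayscale image into the one-dimensional data set
    # x_1 .. x_n used by the patent's description.
    x = gray.astype(np.float64).ravel()
    n = x.size
    d = np.abs(x[:, None] - x[None, :])          # ||x_i - x_j||
    # Per-point scale alpha_i: distance to the k-th nearest neighbour
    # (assumed rule; small epsilon guards against zero scales).
    alpha = np.sort(d, axis=1)[:, min(k, n - 1)] + 1e-12
    # A_ij = exp(-||x_i - x_j|| / (alpha_i * alpha_j))
    return np.exp(-d / (alpha[:, None] * alpha[None, :]))
```

The matrix is symmetric with a unit diagonal, which is what the later feature decomposition step expects; note the dense n×n construction only suits small images or sampled pixels.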
Step 122, performing feature decomposition on the similarity matrix to obtain feature values, and inputting the feature values into an image separation model;
the feature values extracted from the similarity matrix include, for each pixel of the garment images in the garment training set, features such as the dark channel, local contrast, color difference and saturation.
the dark channel characteristic is the minimum value of the ratio of the red, green and blue three color channels of all pixels in the local area of the image pixel to the initial background color, and specifically comprises the following steps:
D(x,I)=miny∈α(x)minc∈r,g,bIc(y)/Aij
d (x, I) is a dark channel of the clothing image, α (x) is a neighborhood of dxd with a pixel point x as the center, D is the width of the neighborhood and is generally 0-5, c represents one of three color channels of red r, green g and blue b, and Ic(y) c-channel, A, representing pixel y in the neighborhood of pixel xijRepresenting the parameter values in the similarity matrix.
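A minimal sketch of the dark channel part of this feature: per pixel, the minimum over the three color channels within a d × d neighbourhood. The patent's additional division by the similarity-matrix entry A_ij is omitted here for clarity.

```python
import numpy as np

def dark_channel(img, d=3):
    # Dark channel: for each pixel, the minimum over the r, g, b
    # channels within a d x d neighbourhood (windows are clipped at
    # the image borders). The division by A_ij from the patent's
    # formula is intentionally left out of this sketch.
    h, w, _ = img.shape
    mins = img.min(axis=2)          # per-pixel minimum over channels
    r = d // 2
    out = np.empty((h, w), img.dtype)
    for i in range(h):
        for j in range(w):
            out[i, j] = mins[max(0, i - r):i + r + 1,
                             max(0, j - r):j + r + 1].min()
    return out
```

A single dark value spreads to every window that contains it, which is the behaviour that makes this feature useful for foreground/background separation.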
The local contrast is the maximum mean squared difference in illumination intensity between all pixels in the neighboring regions of a garment-photo pixel:
C(x, I) = max_{y ∈ α(x)} (1 / |α(y)|) Σ_{z ∈ α(y)} ‖I_c(z) − I_c(y)‖^2 / A_ij
where C(x, I) is the local maximum contrast of pixel x within its d × d neighborhood in the garment image, y is a pixel in the d × d neighborhood of x, z is a pixel in the d × d neighborhood of y, I_c(z) and I_c(y) are the channel-c illumination of pixels z and y in the garment photo, and A_ij is the corresponding parameter value in the similarity matrix.
The color difference is the difference in color between the garment image and its inverted image:
H(x, I) = |I_c(x) − F(x, I)|
where I_c(x) is channel c of pixel x and F(x, I) is the inverted image, given by
F(x, I) = max_{y ∈ α(x)} [I_c(y), 1 − I_c(y)],  c ∈ {r, g, b}.
The saturation is the maximum, over the d × d neighborhood of a garment-photo pixel, of the inverse of the ratio of the minimum color channel to the maximum color channel:
S(x, I) = max_{y ∈ α(x)} [1 − min_c I_c(y) / max_c I_c(y)]
where S(x, I) is the maximum saturation of pixel x in the garment photo and I_c(y) is the channel-c illumination of pixel y.
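The color difference and saturation features can be sketched together. Images are assumed scaled to [0, 1], and the exact window handling at the borders (clipping) is an assumption; the patent does not specify it.

```python
import numpy as np

def _window_max(a, d):
    # Maximum over a d x d neighbourhood, clipped at the borders.
    h, w = a.shape[:2]
    r = d // 2
    out = np.empty_like(a)
    for i in range(h):
        for j in range(w):
            out[i, j] = a[max(0, i - r):i + r + 1,
                          max(0, j - r):j + r + 1].max(axis=(0, 1))
    return out

def colour_difference(img01, d=3):
    # Inverted image F(x) = max_y max(I_c(y), 1 - I_c(y)); the feature
    # is H(x) = |I_c(x) - F(x)| per channel.
    F = _window_max(np.maximum(img01, 1.0 - img01), d)
    return np.abs(img01 - F)

def saturation(img01, d=3):
    # S(x) = max over the window of 1 - min_c I_c / max_c I_c
    # (one reading of the patent's description; epsilon avoids 0/0).
    ratio = img01.min(axis=2) / np.maximum(img01.max(axis=2), 1e-12)
    return _window_max(1.0 - ratio, d)
```

A fully saturated pixel (one channel at zero while another is nonzero) yields S = 1, matching the intuition that garment fabric tends to be more saturated than a neutral studio background.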
Step 123, computing the weighted average color value and its covariance matrix in the image separation model, and computing the Gaussian probabilities of the foreground and of the background of the garment image from the weighted average color value and the covariance matrix;
specifically, the weighted average color value and its corresponding covariance matrix are computed as
F̄ = (Σ_i w_i F_i) / (Σ_i w_i),   Σ_F = (Σ_i w_i (F_i − F̄)(F_i − F̄)^T) / (Σ_i w_i)
where F_i are the sample color values and w_i their weights;
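A sketch of the weighted mean color and covariance computation. The original formula did not survive in this text, so this follows the usual Bayesian-matting convention, which is an assumption rather than the patent's exact definition.

```python
import numpy as np

def weighted_colour_stats(colours, weights):
    # colours: (n, 3) array of sample colours F_i; weights: (n,) w_i.
    # Weighted mean: F_bar = sum(w_i F_i) / sum(w_i).
    w = weights / weights.sum()
    mean = (w[:, None] * colours).sum(axis=0)
    # Weighted covariance: sum(w_i (F_i - F_bar)(F_i - F_bar)^T) / sum(w_i),
    # built from per-sample outer products via einsum.
    diff = colours - mean
    cov = (w[:, None, None] * np.einsum('ni,nj->nij', diff, diff)).sum(axis=0)
    return mean, cov
```

The mean and covariance define the Gaussian from which the foreground (and, analogously, background) color probability of each pixel is evaluated.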
Step 124, computing the transparency of the garment image, and clustering according to the foreground/background Gaussian probabilities and the transparency to obtain the garment in the garment image;
specifically, from the image equation C = αF + (1 − α)B, the image C is known, and the foreground color value F, background color value B and transparency α are to be solved. The equation is converted into a maximum-probability problem:
argmax_{F,B,α} P(F, B, α | C) = argmax_{F,B,α} P(C | F, B, α) P(F) P(B) P(α) / P(C)
 = argmax_{F,B,α} L(C | F, B, α) + L(F) + L(B) + L(α)
For ease of computation, the product is converted into a sum by taking logarithms and the constant P(C) is omitted, giving
L(C | F, B, α) = −‖C − αF − (1 − α)B‖^2
The color values of the foreground and the background are then obtained by setting the partial derivatives of the log-likelihood with respect to F and B to zero and solving the resulting linear equations, and clustering then yields the garment in the garment image.
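With F and B held fixed, maximising L(C|F,B,α) = −‖C − αF − (1 − α)B‖² over α has a closed form. The sketch below treats F and B as single color vectors, an illustrative simplification of the alternating solve.

```python
import numpy as np

def solve_alpha(C, F, B):
    # Setting d/d(alpha) of -||C - aF - (1-a)B||^2 to zero gives
    # a = (C - B) . (F - B) / ||F - B||^2, clipped into [0, 1].
    fb = F - B
    a = np.dot(C - B, fb) / max(np.dot(fb, fb), 1e-12)
    return float(np.clip(a, 0.0, 1.0))
```

In the full method this alpha estimate alternates with the linear solve for F and B until convergence.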
Referring back to fig. 1, step 130, inserting transition frames between consecutive frames of the transparent-channel garment images to generate a garment video with an overall smooth frame sequence;
the processed full-angle transparent-channel garment images are spliced in order to generate a 360-degree garment video, and a transition frame is inserted between each pair of consecutive spliced frames, so that the joins between images in the garment video play back more smoothly and fluently;
specifically, inserting a transition frame between two consecutive frames of the transparent-channel garment images comprises the following sub-steps:
Step 131, clustering and fusing the adjacent key frames of each pair of images into a segment, and using this segment as the transition frame;
Step 132, judging whether the time interval between the two images is smaller than a preset interval; if so, adding a single transition frame directly into the gap between the two images, otherwise executing step 133;
Step 133, determining the number of transition frames to fill according to the time interval between the two images and the preset interval, and supplementing that number of transition frames into the gap between the two images.
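The patent leaves the counting rule in step 133 open; one reasonable reading is to insert the smallest number of frames such that every resulting sub-gap is no longer than the preset interval, sketched here under that assumption.

```python
import math

def transition_frame_count(interval, preset):
    # Step 132: a gap already shorter than the preset gets exactly
    # one transition frame.
    if interval < preset:
        return 1
    # Step 133 (assumed rule): inserting n frames splits the gap into
    # n + 1 sub-gaps; pick the smallest n with interval/(n+1) <= preset.
    return max(1, math.ceil(interval / preset) - 1)
```

For example, a 100 ms gap with a 40 ms preset gets two transition frames, giving three sub-gaps of about 33 ms each.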
Referring back to fig. 1, step 140, fusing the garment video with the new background to generate a 360-degree frame sequence with a scene;
in the embodiment of the application, fusing the garment video with the new background specifically comprises the following sub-steps:
Step 141, taking each garment image in the garment video as the bottom-layer image, convolving the garment image with a Gaussian kernel, upsampling the convolved image, and computing the gradient field of each garment image and of the background image in the garment video;
specifically, the image is expanded to twice its size in each dimension, with the new rows and columns filled with zeros; a specified filter is then convolved with it to estimate approximate values for the missing pixels. Taking the previous layer as input, the convolution and upsampling operations are repeated to obtain the next layer, and iterating several times builds a Gaussian-pyramid image data structure;
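One level of this expand step can be sketched as follows. The 5-tap binomial filter and the 4x gain (which compensates for the inserted zero rows and columns) are the conventional pyramid choices; the patent only says "a specified filter", so the kernel here is an assumption.

```python
import numpy as np

def upsample2x(img):
    # Double each spatial dimension: place the pixels on the even
    # grid positions and leave the new rows/columns as zeros.
    h, w = img.shape
    up = np.zeros((2 * h, 2 * w))
    up[::2, ::2] = img
    # Smooth with a separable 5-tap binomial kernel (assumed filter)
    # to estimate the missing pixels; 4x gain restores brightness.
    k1 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    kernel = np.outer(k1, k1)
    kh, kw = kernel.shape
    pad = np.pad(up, ((kh // 2,), (kw // 2,)), mode='edge')
    out = np.zeros_like(up)
    for i in range(kh):          # 'same'-size 2-D convolution
        for j in range(kw):
            out += kernel[i, j] * pad[i:i + 2 * h, j:j + 2 * w]
    return 4.0 * out
```

Applied repeatedly, with the previous layer as input, this produces the successively larger levels of the pyramid described above.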
Step 142, performing Poisson fusion of the garment image and the background image: the gradient field of the garment image is overlaid on the gradient field of the background image to obtain the gradient field of the fused image, and the divergence of the fused image is computed;
specifically, each layer of the Gaussian pyramid is expanded to the same resolution as the original image and the images are superimposed to obtain the final output image; the divergence is obtained by taking the partial derivatives of the fused garment image's gradient in the x and y directions.
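The gradient overlay and divergence computation of step 142 can be sketched with forward/backward differences; the resulting divergence is the right-hand side of the Poisson equation that the fusion then solves (the solver itself is omitted here).

```python
import numpy as np

def gradient(img):
    # Forward differences: the gradient field of an image.
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]
    gy[:-1, :] = img[1:, :] - img[:-1, :]
    return gx, gy

def divergence(gx, gy):
    # Backward differences of the components: div = d(gx)/dx + d(gy)/dy.
    div = np.zeros_like(gx)
    div[:, 1:] += gx[:, 1:] - gx[:, :-1]
    div[:, 0] += gx[:, 0]
    div[1:, :] += gy[1:, :] - gy[:-1, :]
    div[0, :] += gy[0, :]
    return div

def fused_divergence(fg, bg, mask):
    # Overlay the garment's gradient field on the background's inside
    # the garment mask, then take the divergence of the fused field.
    fgx, fgy = gradient(fg)
    bgx, bgy = gradient(bg)
    gx = np.where(mask, fgx, bgx)
    gy = np.where(mask, fgy, bgy)
    return divergence(gx, gy)
```

A linear intensity ramp has zero divergence away from the boundary, a quick sanity check that the finite differences are consistent.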
Example two
The second embodiment of the present application provides a 360-degree garment fitting display device, as shown in fig. 2, comprising:
the 360-degree clothing picture acquisition module 210 is used for acquiring a plurality of clothing pictures with continuous angles;
a transparent channel clothing image generation module 220, configured to separate clothing from the mannequin and the background in the collected multiple clothing pictures, obtain a transparent channel of the clothing, and generate a transparent channel clothing image;
a clothing video generating module 230, configured to insert a transition frame between two consecutive frames of the clothing image of the transparent channel, and generate a clothing video of an overall smooth sequence frame; and fusing the clothing video and the new background to generate a 360-degree sequence frame with a scene.
In the embodiment of the application, the 360-degree garment picture collecting module 210 is specifically configured to place the garment to be photographed on the mannequin within the shooting range of the display terminal; after the display terminal receives a shooting instruction, the mannequin is controlled to rotate through 360 degrees and the camera is controlled to shoot, thereby obtaining 360-degree consecutive garment pictures.
Specifically, the transparent channel clothing image generating module 220 is specifically configured to convert the clothing photo into a one-dimensional gray image, calculate a neighborhood pixel point of each pixel point in the gray image, calculate weights of the clothing image pixel point and the neighborhood pixel point, form a weight data set, and obtain a similarity matrix corresponding to the data set; performing characteristic decomposition on the similarity matrix to obtain a characteristic value, and inputting the characteristic value into an image separation model; calculating a weighted average color value and a covariance matrix thereof in an image separation model, and respectively calculating Gaussian probabilities of the foreground and the background of the clothing image according to the weighted average color value and the covariance matrix; and calculating the transparency of the clothing image, and clustering according to the Gaussian probability of the foreground and the background and the transparency to obtain the clothing in the clothing image.
Further, the transparent channel clothing image generating module 220 specifically includes a similarity matrix generating submodule for converting the clothing photo into a one-dimensional gray image; calculating the weight of a neighborhood pixel point of each pixel point in the gray level image to form a weight data set; and then calculating the scale parameter of each point in the weight data set, and obtaining a similarity matrix of the data set according to the scale parameter.
In addition, the garment video generation module 230 is specifically configured to cluster and fuse the adjacent key frames of each pair of images into a segment and use this segment as a transition frame; and to judge whether the time interval between the two images is smaller than a preset interval, adding a transition frame directly into the gap between the two images if so, and otherwise determining the number of transition frames to fill according to the time interval between the two images and the preset interval and supplementing that number of transition frames into the gap between the two images.
The above-mentioned embodiments are only specific embodiments of the present application, used to illustrate its technical solutions rather than to limit them, and the protection scope of the present application is not limited thereto. Although the present application is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art may still modify, or readily conceive of changes to, the technical solutions described in the foregoing embodiments, or substitute equivalents for some of their technical features, within the technical scope disclosed in the present application; such modifications, changes or substitutions do not depart from the spirit and scope of the technical solutions and are intended to be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (10)
1. A360-degree clothes fitting display method is characterized by comprising the following steps:
collecting a plurality of clothing pictures with continuous angles;
separating the clothes in the collected clothes pictures from the mannequin and the background, acquiring a transparent channel of the clothes, and generating a clothes image with the transparent channel;
inserting a transition frame between two continuous frames of the clothing image of the transparent channel to generate a clothing video of an integral smooth sequence frame;
and fusing the clothing video and the new background to generate a 360-degree sequence frame with a scene.
2. The 360-degree garment fitting display method according to claim 1, wherein collecting a plurality of angularly consecutive garment pictures specifically comprises: placing the garment to be photographed on a mannequin within the shooting range of the display terminal; after the display terminal receives a shooting instruction, controlling the mannequin to rotate through 360 degrees and controlling the camera to shoot, thereby obtaining 360-degree consecutive garment pictures.
3. The 360 ° garment fitting display method according to claim 1, wherein the separation of the garments from the mannequin and the background in the collected multiple garment pictures comprises the following sub-steps:
converting the clothing photo into a one-dimensional gray image, calculating neighborhood pixel points of each pixel point in the gray image, calculating weights of the clothing image pixel points and the neighborhood pixel points to form a weight data set, and obtaining a similarity matrix corresponding to the weight data set;
performing characteristic decomposition on the similarity matrix to obtain a characteristic value, and inputting the characteristic value into an image separation model;
calculating a weighted average color value and a covariance matrix thereof in an image separation model, and respectively calculating Gaussian probabilities of the foreground and the background of the clothing image according to the weighted average color value and the covariance matrix;
and calculating the transparency of the clothing image, and clustering according to the Gaussian probability of the foreground and the background and the transparency to obtain the clothing in the clothing image.
4. The 360-degree garment fitting display method according to claim 3, wherein obtaining the similarity matrix comprises the following sub-steps:
converting the garment photograph into a one-dimensional grayscale image;
computing the weights of the neighborhood pixels of each pixel in the grayscale image to form a weight data set;
and computing the scale parameter of each point in the weight data set, and obtaining the similarity matrix of the data set from the scale parameters.
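The per-point scale parameters in claim 4 suggest a locally scaled affinity of the kind used in self-tuning spectral clustering: each point's scale is its distance to a nearby neighbor, and the similarity between two points is a Gaussian of their distance normalized by both scales. The patent does not specify the exact formula, so the following is a sketch under that assumption, operating on a small 1-D grayscale signal; function names and the choice of the k-th neighbor as the scale are illustrative.

```python
import numpy as np

def similarity_matrix(gray, k=7):
    """Locally scaled similarity matrix over a 1-D grayscale signal.

    gray: 1-D array of gray values (e.g. the flattened grayscale image).
    k:    index of the neighbor distance used as each point's scale parameter.
    """
    x = np.asarray(gray, dtype=float)
    d = np.abs(x[:, None] - x[None, :])                       # pairwise distances
    # Scale parameter of each point: distance to its k-th nearest value.
    sigma = np.sort(d, axis=1)[:, min(k, len(x) - 1)] + 1e-12
    W = np.exp(-d ** 2 / (sigma[:, None] * sigma[None, :]))   # locally scaled affinity
    np.fill_diagonal(W, 0.0)
    return W

# Two clusters of gray values (dark vs. bright) produce a block-structured matrix.
W = similarity_matrix(np.array([0.10, 0.12, 0.11, 0.90, 0.88, 0.92]), k=2)
# Per claim 3, the eigendecomposition of W feeds the image separation model.
eigvals = np.linalg.eigvalsh((W + W.T) / 2)
```

For a full image the matrix would normally be kept sparse (neighborhood pixels only), since a dense N×N affinity is infeasible at photo resolution.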
5. The 360-degree garment fitting display method according to claim 1, wherein inserting a transition frame between two consecutive frames of the transparent-channel garment images comprises the following sub-steps:
clustering and fusing the adjacent key frames of every two images into a segment, and using the segment as a transition frame;
and judging whether the time interval between the two images is smaller than a preset interval; if so, directly inserting a transition frame into the gap between the two images; otherwise, determining the number of transition frames to be filled in according to the time interval between the two images and the preset interval, and inserting that number of transition frames into the gap between the two images.
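Claim 5 leaves open how the transition-frame count follows from the two intervals and how the frames themselves are formed. A plausible minimal reading, sketched below, is that the count is derived from the ratio of the gap to the preset interval, with the frames synthesized by linear cross-blending of the two images; both choices are assumptions, not stated in the patent.

```python
import numpy as np

def fill_transition_frames(frame_a, frame_b, interval, preset_interval):
    """Decide how many transition frames a gap needs and synthesize them by blending.

    If the time interval between the two images is below the preset interval, a
    single transition frame is inserted; otherwise the count is derived from the
    ratio of the two intervals (an assumed interpretation of the claim).
    """
    if interval <= preset_interval:
        n = 1
    else:
        n = int(np.ceil(interval / preset_interval)) - 1
    ts = np.linspace(0.0, 1.0, n + 2)[1:-1]   # interior blend positions
    return [(1 - t) * frame_a + t * frame_b for t in ts]

a, b = np.zeros((2, 2, 3)), np.ones((2, 2, 3))
# A gap of 4 time units against a preset interval of 1 calls for 3 blended frames.
frames = fill_transition_frames(a, b, interval=4, preset_interval=1)
```

Cross-blending is the simplest stand-in for the key-frame "clustering and fusing" of the claim; an actual implementation might instead warp between frames.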
6. A 360-degree garment fitting display device, comprising:
a 360-degree garment picture acquisition module, configured to collect a plurality of angularly continuous garment pictures;
a transparent-channel garment image generation module, configured to separate the garments in the collected garment pictures from the mannequin and the background, acquire the transparent channel of the garments, and generate transparent-channel garment images;
and a clothing video generation module, configured to insert transition frames between every two consecutive frames of the transparent-channel garment images to generate a clothing video consisting of an overall smooth sequence of frames, and to fuse the clothing video with a new background to generate a 360-degree sequence of frames with a scene.
7. The 360-degree garment fitting display device according to claim 6, wherein the 360-degree garment picture acquisition module is specifically configured to: place the garment to be photographed on a mannequin within the photographing range of the display terminal; and, after the display terminal receives a photographing instruction, control the mannequin to rotate 360 degrees while controlling the photographing device to shoot, so as to obtain 360-degree continuous garment pictures.
8. The 360-degree garment fitting display device according to claim 6, wherein the transparent-channel garment image generation module is specifically configured to: convert the garment photograph into a one-dimensional grayscale image, determine the neighborhood pixels of each pixel in the grayscale image, compute the weights between the garment-image pixels and their neighborhood pixels to form a weight data set, and obtain the similarity matrix corresponding to the data set; perform eigendecomposition on the similarity matrix to obtain eigenvalues, and input the eigenvalues into an image separation model; compute, in the image separation model, a weighted average color value and its covariance matrix, and calculate the Gaussian probabilities of the foreground and the background of the garment image from the weighted average color value and the covariance matrix, respectively; and compute the transparency of the garment image, and cluster according to the foreground and background Gaussian probabilities and the transparency to obtain the garment in the garment image.
9. The 360-degree garment fitting display device according to claim 8, wherein the transparent-channel garment image generation module specifically comprises a similarity matrix generation submodule configured to: convert the garment photograph into a one-dimensional grayscale image; compute the weights of the neighborhood pixels of each pixel in the grayscale image to form a weight data set; and compute the scale parameter of each point in the weight data set, obtaining the similarity matrix of the data set from the scale parameters.
10. The 360-degree garment fitting display device according to claim 6, wherein the clothing video generation module is specifically configured to: cluster and fuse the adjacent key frames of every two images into a segment and use the segment as a transition frame; and judge whether the time interval between the two images is smaller than a preset interval; if so, directly insert a transition frame into the gap between the two images; otherwise, determine the number of transition frames to be filled in according to the time interval between the two images and the preset interval, and insert that number of transition frames into the gap between the two images.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010542049.5A CN111695525B (en) | 2020-06-15 | 2020-06-15 | 360-degree clothing fitting display method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111695525A true CN111695525A (en) | 2020-09-22 |
CN111695525B CN111695525B (en) | 2023-05-16 |
Family
ID=72481027
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010542049.5A Active CN111695525B (en) | 2020-06-15 | 2020-06-15 | 360-degree clothing fitting display method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111695525B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113115097A (en) * | 2021-03-30 | 2021-07-13 | 北京达佳互联信息技术有限公司 | Video playing method and device, electronic equipment and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018130016A1 (en) * | 2017-01-10 | 2018-07-19 | 哈尔滨工业大学深圳研究生院 | Parking detection method and device based on monitoring video |
CN108648053A (en) * | 2018-05-10 | 2018-10-12 | 南京衣谷互联网科技有限公司 | A kind of imaging method for virtual fitting |
CN110769159A (en) * | 2019-11-19 | 2020-02-07 | 海南车智易通信息技术有限公司 | Photographing method, photographing device and panoramic photographing system |
2020
- 2020-06-15: Application CN202010542049.5A filed; granted as CN111695525B (status: Active)
Non-Patent Citations (1)
Title |
---|
DU JUNLI; YUAN SHOUHUA: "Research on the Characteristic Functions of a Garment E-commerce System Based on Graphics and Images" * |
Also Published As
Publication number | Publication date |
---|---|
CN111695525B (en) | 2023-05-16 |
Similar Documents
Publication | Title | Publication Date |
---|---|---|
US10846836B2 (en) | View synthesis using deep convolutional neural networks | |
WO2019101113A1 (en) | Image fusion method and device, storage medium, and terminal | |
WO2022160701A1 (en) | Special effect generation method and apparatus, device, and storage medium | |
EP2374107B1 (en) | Devices and methods for processing images using scale space | |
CN112543317B (en) | Method for converting high-resolution monocular 2D video into binocular 3D video | |
CN110335343A (en) | Based on RGBD single-view image human body three-dimensional method for reconstructing and device | |
US8743119B2 (en) | Model-based face image super-resolution | |
CN112419472B (en) | Augmented reality real-time shadow generation method based on virtual shadow map | |
EP3136717A1 (en) | Video display device, video projection device, dynamic illusion presentation device, video generation device, method thereof, data construct, and program | |
CN111681177B (en) | Video processing method and device, computer readable storage medium and electronic equipment | |
CN108668050B (en) | Video shooting method and device based on virtual reality | |
CN107862718B (en) | 4D holographic video capture method | |
CN107918948B (en) | 4D video rendering method | |
WO2011031331A1 (en) | Interactive tone mapping for high dynamic range video | |
CN107767339A (en) | A kind of binocular stereo image joining method | |
CN108876936A (en) | Virtual display methods, device, electronic equipment and computer readable storage medium | |
Rawat et al. | A spring-electric graph model for socialized group photography | |
CN105827975A (en) | Color on-line correction method for panoramic video stitching | |
CN111695525B (en) | 360-degree clothing fitting display method and device | |
JP2004341642A (en) | Image compositing and display method, image compositing and display program, and recording medium with the image compositing and display program recorded | |
Nakamae et al. | Rendering of landscapes for environmental assessment | |
KR20220000123A (en) | Method for 2d virtual fitting based key-point | |
CN111105350A (en) | Real-time video splicing method based on self homography transformation under large parallax scene | |
CN113240573B (en) | High-resolution image style transformation method and system for local and global parallel learning | |
CN116051593A (en) | Clothing image extraction method and device, equipment, medium and product thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||