CN117391805A - Fitting pattern generation method, fitting pattern generation system, electronic device and storage medium

Info

Publication number: CN117391805A
Application number: CN202311270584.XA
Authority: CN (China)
Legal status: Pending
Prior art keywords: image, model, clothing, target, user
Other languages: Chinese (zh)
Inventors: Zhang Sai (张赛), Zhou Tao (周涛), Liu Xueqi (刘雪琦), Chen Guowen (陈国文), Wang Weimin (王维民), Zhu Dapeng (朱大鹏)
Current Assignee: Hangzhou Alibaba Overseas Internet Industry Co., Ltd.
Application filed by Hangzhou Alibaba Overseas Internet Industry Co., Ltd.
Priority to CN202311270584.XA
Publication of CN117391805A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES
    • G06Q30/00 Commerce
    • G06Q30/06 Buying, selling or leasing transactions
    • G06Q30/0601 Electronic shopping [e-shopping]
    • G06Q30/0641 Shopping interfaces
    • G06Q30/0643 Graphical representation of items or shoppers
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/04 Indexing scheme involving 3D image data
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT]
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The embodiments of the present application provide a try-on image generation method, a try-on image generation system, an electronic device, and a storage medium. The try-on image generation method comprises the following steps: obtaining the category of a target garment and try-on image generation configuration information input by a user, and a user-uploaded image; in response to the user selecting a system mannequin to generate a model try-on image, acquiring a target dummy model image and model pose information of the target dummy model image; synthesizing a clothing canvas image based on the user-uploaded image and the target dummy model image; and generating a first try-on image generation task based on the clothing canvas image, the model pose information, the category, and the try-on image generation configuration information, so that a preset server generates a model try-on image of the target garment based on the first try-on image generation task. With this method, the model try-on image is generated based on the model pose selected by the user, which improves the quality of the model try-on image.

Description

Fitting pattern generation method, fitting pattern generation system, electronic device and storage medium
Technical Field
The present disclosure relates to the field of computer technology, and in particular to a try-on image generation method, a try-on image generation system, an electronic device, and a storage medium.
Background
E-commerce websites provide e-commerce services, and users from various countries and regions can purchase commodity objects from different countries on cross-border e-commerce websites. Images of a commodity object are an intuitive and important channel for presenting its information to users. Taking apparel as an example, a model try-on image can intuitively display information such as the color, style, and wearing effect of a garment, so model try-on images are particularly important for apparel commodities on e-commerce websites. However, because of practical factors such as the high cost of photographing real models, many merchants can only display a simple flat-lay image of the product, or a closely similar image found on the web, on the product page when publishing a product. Schemes have appeared in the prior art that generate clothing try-on images based on AIGC (AI-Generated Content) image generation technology: the merchant uploads an image containing the clothing to be tried on, and the system then automatically generates a clothing try-on image from the uploaded image. However, the quality of merchant-uploaded images varies widely, and their shooting angle, image quality, and mannequin completeness are uneven, which directly affects the quality of the generated try-on images.
In summary, the try-on image generation methods in the prior art need to be improved.
Disclosure of Invention
The embodiments of the present application provide a try-on image generation method that can be used to generate try-on images of apparel including clothing, accessories, bags, shoes, hats, and the like, improving the quality of the generated try-on images.
Correspondingly, the embodiments of the present application also provide a try-on image generation system, an electronic device, and a storage medium to guarantee the implementation and application of the method.
To solve the above problems, an embodiment of the present application discloses a try-on image generation method applied to a client, the method comprising:
obtaining the category of a target garment and try-on image generation configuration information input by a user, and a user-uploaded image;
in response to the user selecting a system mannequin to generate a model try-on image, acquiring a target dummy model image and model pose information of the target dummy model image;
synthesizing a clothing canvas image based on the user-uploaded image and the target dummy model image;
and generating a first try-on image generation task based on the clothing canvas image, the model pose information, the category, and the try-on image generation configuration information, so that a preset server generates a model try-on image of the target garment based on the first try-on image generation task.
An embodiment of the present application also discloses a try-on image generation method applied to a server, the method comprising:
acquiring the category of a target garment, model pose information, try-on image generation configuration information, and a clothing canvas image corresponding to a first try-on image generation task;
performing image segmentation processing on the clothing canvas image to obtain a clothing mask map;
acquiring a preset real model image based on the model pose information;
and invoking a preset image generation service to generate a model try-on image of the target garment based on the category, the try-on image generation configuration information, the clothing canvas image, the real model image, and the clothing mask map.
An embodiment of the present application further discloses a try-on image generation method applied to a server, the method comprising:
obtaining the category of a target garment and try-on image generation configuration information corresponding to a second try-on image generation task, and a user-uploaded image;
performing image segmentation processing on the user-uploaded image to obtain a clothing canvas image, and performing image segmentation processing on the user-uploaded image to obtain a model mask map;
performing image segmentation processing on the clothing canvas image to obtain a clothing mask map;
and invoking a preset image generation service to generate a model try-on image of the target garment based on the category, the try-on image generation configuration information, the clothing canvas image, the model mask map, and the clothing mask map.
An embodiment of the present application also discloses a try-on image generation system, comprising:
a client, configured to execute the try-on image generation method applied to a client disclosed in the embodiments of the present application;
and a server, configured to execute the try-on image generation method applied to a server disclosed in the embodiments of the present application.
An embodiment of the present application also discloses an electronic device, comprising a processor and a memory communicatively coupled to the processor; the memory stores computer-executable instructions, and the processor executes the computer-executable instructions stored in the memory to implement the methods described in the embodiments of the present application.
An embodiment of the present application also discloses a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the methods described in the embodiments of the present application.
Compared with the prior art, the embodiments of the present application have the following advantages:
In the embodiments of the present application, when a user cannot upload an image of the target garment worn on a full-limbed mannequin or on a real model, the user uploads an image of the clothing to be tried on and configures the category and the try-on image generation configuration information. The user can then choose to use the system mannequin to generate the model try-on image: the client displays dummy model images in at least one preset model pose for the user to select, and synthesizes a clothing canvas image from the user-uploaded image and the dummy model image selected by the user. A first try-on image generation task is then generated based on the clothing canvas image, the model pose information, the category, and the try-on image generation configuration information, so that a preset server generates a model try-on image of the target garment based on the first try-on image generation task.
Further, when the clothing canvas image is synthesized, the user can adjust the fit between the clothing and the model, so that the combination of the two in the generated try-on image is more natural and snug, improving how well the model try-on image displays the clothing.
Model pose information is retained when the first try-on image generation task is generated. When the server executes the first try-on image generation task to generate the model try-on image, it further acquires the real model image built into the system that corresponds to the model pose information and uses it in generation, so that the face and skeleton of the model in the generated try-on image are more natural and attractive.
Drawings
FIG. 1 is a flowchart of the steps of one embodiment of a try-on image generation method disclosed in the present application;
FIG. 2 is a schematic illustration of a user-uploaded image;
FIG. 3 is a schematic diagram of dummy model images presented for user selection;
FIG. 4 is a schematic illustration of a clothing canvas image generated based on the user-uploaded image shown in FIG. 2;
FIG. 5 is a schematic illustration of a try-on image generated based on the clothing canvas image shown in FIG. 4;
FIG. 6 is a flowchart of the steps of another embodiment of a try-on image generation method disclosed in the present application;
FIG. 7 is a flowchart of the steps of yet another embodiment of a try-on image generation method disclosed in the present application;
FIG. 8 is a flowchart of the steps of yet another embodiment of a try-on image generation method disclosed in the present application;
FIG. 9 is a schematic diagram of a try-on image generation system disclosed in the present application;
FIG. 10 is a schematic structural diagram of an exemplary apparatus provided in one embodiment of the present application.
Detailed Description
In order that the above-recited objects, features, and advantages of the present application may be more readily understood, a more particular description of the application is given below with reference to the specific embodiments illustrated in the appended drawings.
Cross-border e-commerce websites serve seller users and buyer users in hundreds of countries, so services that generate clothing try-on images from a given garment need to be provided for users in different countries. For example, a seller may present, to buyer users in different countries, effect images of models trying on garments in the corresponding national styles.
The try-on image generation method disclosed in the embodiments of the present application can be implemented as a try-on application that generates the corresponding try-on images from the apparel images uploaded by a merchant user together with configured information such as product category, model style, model face, and background style, so that the try-on images can be published on the e-commerce platform for buyer users to browse.
Apparel includes, but is not limited to: clothing, hats, bags, accessories, and the like. For ease of understanding, the embodiments of the present application mainly take the scenario of generating a clothing try-on image from a clothing image as an example when describing the specific technical solutions of each embodiment. Those skilled in the art should understand that the technical solutions disclosed in the embodiments of the present application are equally applicable to generating a hat try-on image from a hat image, a bag try-on image from a backpack image, a try-on image from a glasses image, and the like. In the embodiments of the present application, an image generated from a user-uploaded apparel image that shows the wearing effect on a model is collectively referred to as a "clothing try-on image".
Referring to FIG. 1, in an alternative embodiment, the try-on image generation method disclosed in the present application is applied to a client of a clothing try-on application or clothing try-on system, and the method includes steps 100 to 106.
The specific implementation of each step is described below.
Step 100: obtain the category of the target garment and the try-on image generation configuration information input by the user, and the user-uploaded image.
The category may be the second-level product category of the target garment on the e-commerce platform. In some alternative embodiments, the categories include, but are not limited to, one or more of the following: jacket, trousers, skirt, dress, and the like. The try-on image generation configuration information describes the image style, model style, and so on of the try-on image to be generated, and serves as part of the source of the prompt words used to invoke the image generation application or service. In some alternative embodiments, the try-on image generation configuration information includes, but is not limited to, one or more of the following: model skin tone, model face, background style, and the like. The background style describes the background of the generated clothing try-on image; in some alternative embodiments, background styles include, but are not limited to: a random background image, a random background color, a background image of a specified style, and the like. The model face describes the face of the model used in the generated model try-on image.
In the embodiments of the present application, the target garment is the garment for which a try-on image needs to be generated. To ensure the quality of the generated try-on image, the image of the target garment uploaded by the user through the preset image upload entry of the client needs to include a complete view of the target garment. In the embodiments of the present application, the image of the target garment uploaded by the user is denoted as the user-uploaded image. In some alternative embodiments, the client may preset one or more image upload entries, and the user may upload the image of the target garment by triggering the corresponding entry.
In some alternative embodiments, the user-uploaded image may be a flat-lay image of the clothing, a photograph of the clothing worn on a mannequin, and the like.
In some alternative embodiments, the preset image upload entries include, but are not limited to, one or more of the following: an AI (Artificial Intelligence) generated-image upload entry, a local image upload entry, a media center image selection entry, and a product link image upload entry.
Optionally, the try-on image generation method disclosed in the present application may be implemented as a network application comprising a client and a server. The client provides an interactive interface for interacting with the user, and the server performs information processing according to the information acquired by the client to obtain the clothing try-on image. Optionally, depending on development and deployment requirements, the server may be implemented as one module or several modules, which may be deployed in one or several servers respectively. For example, in one alternative embodiment, the user uploads, via the client, an image of the target garment to be used for generating the clothing try-on image, sets the product category of the target garment, sets the try-on image generation configuration information, and so on. Correspondingly, the server obtains the user-uploaded image and the user's settings through the client.
For example, in some alternative embodiments, the try-on image generation application or system is preconfigured with one or more model faces trained with image generation techniques, and a face name is set for each model face. Correspondingly, the client can display an entry for selecting a model face; after the user triggers the entry, the model faces preconfigured in the application or system are displayed for the user to choose from, and the model face designated by the user is used as try-on image generation configuration information. The model face information may be the model name or other identifying information.
In some alternative embodiments, the AI generated-image upload entry may take the form of an icon, a hyperlink, or the like. For example, it may be an icon on the client page: after the user triggers the icon, the client obtains the prompt words input by the user for generating an image of the target garment and, based on those prompt words, invokes a preset generative large model to generate clothing images. The client then displays the clothing images generated by the preset generative large model and takes the one selected by the user as the user-uploaded image of the target garment. The preset generative large model is a prior-art generative large model capable of generating images from text prompts.
In some alternative embodiments, the local image upload entry may be an interface image upload entry, a command-line image upload entry, or another prior-art entry for uploading pictures. For example, it may be a picture upload control on the client page: the user triggers the control, selects an image of the target garment stored locally on the client for which a try-on image needs to be generated, and performs the upload operation. In this way, the client acquires the image uploaded through the picture upload control and takes it as the user-uploaded image of the target garment.
In some alternative embodiments, the media center image selection entry is implemented in the form of an icon, a hyperlink, or the like. For example, it may be a hyperlink on a client page: after the user triggers the hyperlink, the client accesses the corresponding page of a preset media center. That page provides public product images, from which the user can select. The preset media center feeds the image selected by the user back to the client, and the client takes it as the user-uploaded image of the target garment.
In some alternative embodiments, the product link upload entry may be implemented in the form of an edit box or the like. For example, it may be an edit box on the client page into which the user inputs the product link of a product already published on the e-commerce platform. The client acquires the product link input in the edit box, parses it to obtain the product identifier it contains, searches the commodity center of the e-commerce platform for the product details associated with that identifier, and displays a list of the retrieved product detail images to the user. The user may select a product detail image from the displayed list, and the client takes the selected product detail image as the user-uploaded image of the target garment. A sketch of this flow is given below.
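As an illustration only, the following minimal sketch parses a product identifier out of a product link and lists the associated detail images. The URL format, the query-parameter name itemId, and the commodity-center helper query_product_details are assumptions, not part of this disclosure.

```python
from urllib.parse import urlparse, parse_qs

def query_product_details(product_id: str) -> dict:
    # Hypothetical stand-in for the commodity-center lookup of the
    # e-commerce platform; a real system would call an internal service.
    return {"detail_images": [f"https://img.example.com/{product_id}/0.jpg"]}

def extract_product_id(product_link: str) -> str:
    # Parse the product identifier out of the product link; the
    # query-parameter name "itemId" is an assumed URL convention.
    return parse_qs(urlparse(product_link).query)["itemId"][0]

def list_detail_images(product_link: str) -> list[str]:
    # Detail-image URLs that the client displays for the user to pick from.
    return query_product_details(extract_product_id(product_link))["detail_images"]

print(list_detail_images("https://shop.example.com/item?itemId=123456"))
```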
In the embodiments of the present application, providing several clothing image upload modes effectively improves convenience for users. For example, a merchant who has already photographed the product can directly upload the photographed clothing images for generating clothing try-on images; a merchant who has not photographed the product can directly reuse the images of products already published; and for products not yet published, the clothing image upload can be completed by selecting an image from the media center. In addition, with certain upload modes the product category of the target garment can be acquired automatically, which provides further convenience.
In terms of technical support, the embodiments of the present application make full use of the product data of the e-commerce platform by interconnecting its product modules, providing greater operational convenience for users.
Step 102: in response to the user selecting the system mannequin to generate a model try-on image, acquire a target dummy model image and the model pose information of the target dummy model image.
In practice, when the user-uploaded image is a flat-lay image of the clothing, the image contains no model pose information. When the user uploads a photograph of the clothing worn on a mannequin, limbs of the mannequin are often missing due to shooting constraints, as in the user-uploaded image of FIG. 2, so the uploaded image again often lacks model pose information. In such cases, prior-art try-on image generation applications cannot generate a high-quality try-on image.
In the embodiments of the present application, to improve the quality of the generated try-on image, a mannequin selection entry is provided at the client so that the user can choose either to keep the model pose and clothing wearing state of the merchant's uploaded image when generating the clothing try-on image, or to configure the model pose and clothing wearing state through the client. For example, the client provides a system mannequin option and a merchant mannequin option: if the user selects the system mannequin option, the client starts the model pose and wearing-state configuration flow, and the server keeps the pose and wearing state finally configured by the user when generating the clothing try-on image; if the user selects the merchant mannequin option, the server keeps the model pose and wearing state of the user-uploaded image when generating the clothing try-on image.
In some alternative embodiments, in response to the user selecting the system mannequin to generate a model try-on image, acquiring a target dummy model image and the model pose information of the target dummy model image includes: in response to the user selecting the system mannequin to generate a model try-on image, presenting dummy model images in at least one model pose; and acquiring the target dummy model image selected by the user and the model pose information of the target dummy model image.
For example, when the user chooses to use the system mannequin to generate a model try-on image, the client reads the dummy model images in different poses built into the system (e.g., the try-on image generation system), together with the model pose information corresponding to each dummy model image, and displays them to the user for selection, as shown in FIG. 3. In some alternative embodiments, the system may preset dummy model images in several poses for presentation to the user, and build in a real model image in the same pose as each dummy model image. The real model image serves as the image source from which the server extracts the pose when generating the try-on image. Each dummy model image corresponds to one model pose, and each model pose corresponds to one real model image. In the embodiments of the present application, the model pose corresponding to a dummy model image is represented by model pose information.
The dummy model image selected by the user is then taken as the target dummy model image, and the model pose information of the target dummy model image is acquired.
In some alternative embodiments, the user may select one or several dummy model images as target dummy model images for subsequent try-on image generation. That is, the user may choose to generate model try-on images in a single pose or in several poses.
The client records the model pose information corresponding to each target dummy model image according to the user's selection.
Step 104: synthesize a clothing canvas image based on the user-uploaded image and the target dummy model image.
After the user selects a built-in dummy model image, the client further performs image synthesis processing on the user-uploaded image and the selected dummy model image to obtain a clothing canvas image.
For each target dummy model image selected by the user, the client performs image synthesis processing on that target dummy model image and the user-uploaded image, obtaining a clothing canvas image corresponding to each target dummy model image.
In some alternative embodiments, the client may invoke a prior-art image segmentation service or application to segment the user-uploaded image according to the category of the target garment, extracting an image that contains only the target garment, denoted the "clothing image". The client may identify the body parts in the dummy model image with a human body recognition technique. The dummy model image is then used as the canvas and the clothing image as a clothing layer on the canvas: the wearing body part corresponding to the target garment is determined from its category, and the display position, display size, and related information of the clothing layer on the canvas are determined from that body part, so that the clothing image in the layer exactly covers the wearing body part, achieving the visual effect of the garment being worn on that body part. Finally, the canvas and the clothing layer are composited according to the determined display position, size, and related information, producing an image of the dummy model wearing the target garment, denoted in the embodiments of the present application as the clothing canvas image. A sketch of this compositing step follows.
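A minimal sketch of the default compositing step, assuming the bounding box of the wearing body part on the dummy model image has already been determined (for example by a human body recognition model); file names and coordinates are illustrative.

```python
from PIL import Image

def compose_apparel_canvas(dummy_model_path: str, clothing_path: str,
                           body_box: tuple[int, int, int, int]) -> Image.Image:
    canvas = Image.open(dummy_model_path).convert("RGBA")     # dummy model image as canvas
    clothing = Image.open(clothing_path).convert("RGBA")      # transparent-background clothing image

    left, top, right, bottom = body_box                       # wearing body part, e.g. the torso
    clothing = clothing.resize((right - left, bottom - top))  # size the layer to cover the body part
    canvas.alpha_composite(clothing, dest=(left, top))        # draw the clothing layer over the canvas
    return canvas

canvas_image = compose_apparel_canvas("dummy_model.png", "clothing.png",
                                      body_box=(120, 180, 420, 620))
canvas_image.save("clothing_canvas.png")
```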
In other alternative embodiments, synthesizing a clothing canvas image based on the user-uploaded image and the target dummy model image includes: creating a canvas based on the target dummy model image; performing clothing segmentation processing on the user-uploaded image to obtain the clothing image of the target garment; creating a clothing layer based on the clothing image; displaying in real time, based on the user's adjustment of the display information of the clothing layer in the canvas, the fitting result between the target garment in the clothing layer and the dummy model in the target dummy model image; and, based on the user's confirmation of the fitting result, compositing the clothing layer and the canvas according to the adjusted display information to obtain the clothing canvas image.
The display information of the clothing layer in the canvas includes, but is not limited to, one or more of the following: size, position, and rotation angle.
Optionally, after the user selects the dummy model image, the client may create a window and use the selected dummy model image (i.e., the target dummy model image) as the canvas of the window.
Optionally, the client invokes the interface of a segmentation service preset at the server, so that the server performs image segmentation processing on the user-uploaded image to obtain a clothing image containing only the target garment. The clothing image has a transparent background, and its foreground is the image of the target garment. That is, performing clothing segmentation processing on the user-uploaded image to obtain the clothing image of the target garment includes: triggering the server to perform clothing segmentation processing on the user-uploaded image to obtain the clothing image of the target garment.
The client then uses the clothing image as a clothing layer and draws it on top of the canvas in the window, obtaining a composite of the dummy model image and the target garment.
Because the built-in dummy model images have a uniform size while the sizes of target garments in user-uploaded images vary, the target garment may not match the dummy model in size. In addition, because of the placement and shooting angle of the target garment in the user-uploaded image, the wearing state and pose of the target garment on the dummy model in the composite image may appear stiff and unnatural. To improve how well the generated try-on image displays the clothing, in the embodiments of the present application the client allows the user to fit the target garment to the dummy model by adjusting the clothing layer, for example adjusting the wearing state of the target garment on the dummy model by dragging the layer.
The adjustment operation includes a drag operation, or inputting display parameters of the clothing layer.
In some alternative embodiments, the client may obtain the adjustment information of the clothing layer by detecting mouse drag events on it, for example the rotation angle of the layer in the horizontal or vertical direction or its scaling ratio, and refresh the display of the layer in the canvas according to the acquired adjustment information, showing the adjustment effect in real time.
In some alternative embodiments, the client may provide a display-parameter configuration interface for the clothing layer, through which the user configures the display information of the layer on the canvas as adjustment information; the client refreshes the display of the layer in the canvas according to the adjustment information acquired in real time, showing the adjustment effect in real time.
Finally, the user can confirm the final adjustment result through a confirm-and-submit operation. After the user confirms the fitting result, the client composites the clothing layer and the canvas according to the adjusted display information to obtain the clothing canvas image. For the specific implementation of compositing two or more layers into an image, refer to the prior art; details are omitted here. Taking the user-uploaded image shown in FIG. 2 as an example, the compositing process yields the clothing canvas image shown in FIG. 4. The adjustment step is sketched below.
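The adjustment step can be sketched as applying the user-confirmed display parameters to the clothing layer before the final composite; the parameter values below are illustrative assumptions.

```python
from PIL import Image

def adjust_clothing_layer(clothing: Image.Image, scale: float,
                          rotation_deg: float) -> Image.Image:
    w, h = clothing.size
    layer = clothing.resize((int(w * scale), int(h * scale)))  # user-chosen scaling ratio
    # expand=True grows the layer bounds so the rotated garment is not clipped
    return layer.rotate(rotation_deg, expand=True)

canvas = Image.open("dummy_model.png").convert("RGBA")
layer = adjust_clothing_layer(Image.open("clothing.png").convert("RGBA"),
                              scale=0.85, rotation_deg=-4.0)
canvas.alpha_composite(layer, dest=(150, 200))  # user-confirmed position
canvas.save("clothing_canvas.png")
```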
In the embodiments of the present application, after the user chooses to generate the try-on image based on the system mannequin, the clothing layer is obtained by segmenting the user-uploaded image and the built-in dummy model image selected by the user serves as the canvas; the user is then free to adjust the position, size, and angle of the clothing layer in the canvas so that the garment fits the model's figure, improving the effect of the generated try-on image.
In some embodiments of the present application, the server performing clothing segmentation processing on the user-uploaded image to obtain the clothing image of the target garment includes: invoking a preset object detector with a first preset semantic prompt word to perform object detection on the user-uploaded image, obtaining the position information of the target garment in the user-uploaded image; performing image segmentation processing on the user-uploaded image based on the position information to obtain the clothing mask map of the target garment; performing matting processing on the user-uploaded image based on the clothing mask to obtain a matted image; and optimizing the edge region of the matted image with a pixel mapping method and Gaussian blur processing to obtain the clothing image.
Grounding DINO (an open-source object detector) is an open-set object detection algorithm: given the semantics of the object to be segmented, it can identify the position of the corresponding element in the image. Taking the segmentation of clothing from an image as an example, "clothes" can be used as the first preset semantic prompt word and the user-uploaded image as the input image; invoking Grounding DINO with this prompt word identifies the position information of the clothing elements in the user-uploaded image.
In some alternative embodiments, the Segment Anything algorithm (SAM) may then be used to segment the mask image, such as the clothing mask, at the corresponding position in the user-uploaded image based on the position information output by Grounding DINO. The two-stage pipeline is sketched below.
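A runnable sketch of the same detect-then-segment pattern, using models available in the open-source transformers library; here the zero-shot detector OWL-ViT stands in for Grounding DINO, and the checkpoints and file names are illustrative assumptions.

```python
import torch
from PIL import Image
from transformers import SamModel, SamProcessor, pipeline

image = Image.open("user_upload.png").convert("RGB")

# Stage 1: open-vocabulary detection with the semantic prompt word "clothes".
detector = pipeline("zero-shot-object-detection", model="google/owlvit-base-patch32")
detections = detector(image, candidate_labels=["clothes"])
box = detections[0]["box"]  # highest-scoring clothing region
input_box = [[[box["xmin"], box["ymin"], box["xmax"], box["ymax"]]]]

# Stage 2: box-prompted segmentation with SAM for a pixel-accurate clothing mask.
processor = SamProcessor.from_pretrained("facebook/sam-vit-base")
sam = SamModel.from_pretrained("facebook/sam-vit-base")
inputs = processor(image, input_boxes=input_box, return_tensors="pt")
with torch.no_grad():
    outputs = sam(**inputs)
masks = processor.image_processor.post_process_masks(
    outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(),
    inputs["reshaped_input_sizes"].cpu())
clothing_mask = masks[0][0, 0].numpy()  # boolean clothing mask map
```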
Then, according to the clothing mask image, the corresponding pixels of the user-uploaded image are mapped out to obtain a matted image. A matted image obtained from the mask image in this prior-art way has two defects: first, a white fringe along the target edge; second, severe jagging along the target edge. Further fine optimization is therefore required.
In some alternative embodiments, optimizing the edge region of the matted image with a pixel mapping method and Gaussian blur processing to obtain the clothing image includes: determining the edge region of the target garment based on the alpha-channel pixel values of the matted image; performing an N-th power pixel mapping in the edge region to shrink it, obtaining a de-fringed image; and blurring the matting boundary of the de-fringed image with a Gaussian blur algorithm to obtain the clothing image. Here N is an integer greater than 2.
The clothing body region, clothing edge region, and background region are distinguished according to the alpha-channel pixel values of the matted image, and within the clothing edge region the pixel values are processed with an N-th power mapping function; this compresses the pixel range of the white fringe until it becomes imperceptible to the naked eye, achieving the goal of shrinking the edge region. Then, for the edge jagging, a Gaussian blur algorithm applies a degree of blurring to the edge region, yielding an optimized matting boundary and thus the clothing image of the target garment.
Mapping the alpha-channel pixel values at the edges shrinks the white-fringe region, and then blurring the matting boundary with the Gaussian blur algorithm reduces its jagging, further improving the quality of the matted clothing image. The following is a runnable sketch of this refinement.
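A runnable sketch of the edge refinement described above, using an N-th power mapping on the alpha channel to shrink the white fringe and a Gaussian blur to soften the jagged mask boundary; the choice of N and the blur radius are illustrative.

```python
import numpy as np
from PIL import Image, ImageFilter

def refine_matting_edges(rgba: Image.Image, n: int = 3,
                         blur_radius: float = 1.5) -> Image.Image:
    img = np.asarray(rgba.convert("RGBA")).astype(np.float32)
    alpha = img[..., 3] / 255.0

    # Edge region: pixels that are neither pure background (alpha == 0)
    # nor pure garment body (alpha == 1).
    edge = (alpha > 0.0) & (alpha < 1.0)

    # N-th power mapping compresses the semi-transparent fringe toward 0,
    # shrinking the visible white edge (N > 2 per the description above).
    alpha[edge] = alpha[edge] ** n
    img[..., 3] = alpha * 255.0

    out = Image.fromarray(img.astype(np.uint8), mode="RGBA")
    # Gaussian blur on the alpha channel smooths the jagged matting boundary.
    a = out.getchannel("A").filter(ImageFilter.GaussianBlur(blur_radius))
    out.putalpha(a)
    return out

refined = refine_matting_edges(Image.open("matted.png"))
refined.save("clothing_image.png")
```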
In other embodiments of the present application, other prior-art matting methods may also be used to perform clothing segmentation processing on the user-uploaded image and obtain the clothing image of the target garment; these are not detailed in the embodiments of the present application.
Step 106: generate a first try-on image generation task based on the clothing canvas image, the model pose information, the category, and the try-on image generation configuration information, so that a preset server generates a model try-on image of the target garment based on the first try-on image generation task.
The client can then submit a first try-on image generation task according to the clothing canvas image, the category of the target garment input by the user, and the try-on image generation configuration information, so that the server performs the corresponding try-on image generation operation based on the task.
In some alternative embodiments, if the user selects several dummy model images, the client submits one first try-on image generation task for each dummy model image; that is, a set of model try-on images is generated for the model pose in each dummy model image.
In some alternative embodiments, generating a first try-on image generation task based on the clothing canvas image, the model pose information, the category, and the try-on image generation configuration information, so that a preset server generates a model try-on image of the target garment based on the first try-on image generation task, may include: using the image data of the clothing canvas image as the clothing canvas image information, and generating the first try-on image generation task based on the clothing canvas image information, the model pose information, the category, and the try-on image generation configuration information. In this case, the data carried by the first task may include: the image data of the clothing canvas image, the model pose information, the category and try-on image generation configuration information input by the user through the client for the target garment, and a task identifier.
The try-on image generation configuration information includes, but is not limited to: model skin tone, model face, background style, and the like.
For example, after the user selects a dummy model image and confirms the generated clothing canvas image, the category, and the try-on image generation configuration information, the user clicks the button on the client interface for submitting a generation task; the client detects the operation and submits the clothing canvas image, the category, and the try-on image generation configuration information as task data to the preset server, generating the first try-on image generation task.
Optionally, the client submits the first try-on image generation task to the server by sending a task message carrying the task data to a task message execution queue. Optionally, the client may instead submit the task by sending a task request carrying the task data to the preset server.
In some alternative embodiments, generating the first try-on image generation task includes: uploading the clothing canvas image to a preset storage end and obtaining the storage address of the clothing canvas image as the clothing canvas image information; and generating the first try-on image generation task based on the clothing canvas image information, the model pose information, the category, and the try-on image generation configuration information, so that the preset server generates the model try-on image of the target garment based on the task.
For example, after the user selects a dummy model image, confirms the generated clothing canvas image, the category, and the try-on image generation configuration information, and clicks the button on the client interface for submitting a generation task, the client first uploads the clothing canvas image to a preset storage end, for example a preset storage server, and acquires the storage address returned by the storage end. The client then submits the storage address of the clothing canvas image, the model pose information, the category, and the try-on image generation configuration information as task data to the preset server, generating the first try-on image generation task.
Uploading the clothing canvas image to the preset storage end and downloading it when the server executes the try-on image generation task improves the flexibility of using the clothing canvas image, reduces the task data volume, and improves system stability. This storage-address variant is sketched below.
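A sketch of the storage-address variant of task submission; the storage client, the queue client, and the payload field names are illustrative assumptions.

```python
import json
import uuid

def submit_first_tryon_task(storage, queue, canvas_png: bytes,
                            pose_id: str, category: str, config: dict) -> str:
    # Upload the clothing canvas image first and keep only its address in the
    # task payload, which shrinks the task message and decouples image storage.
    canvas_url = storage.put(f"canvas/{uuid.uuid4()}.png", canvas_png)
    task_id = str(uuid.uuid4())
    queue.send(json.dumps({
        "task_id": task_id,               # used later to store and query the result
        "canvas_image_url": canvas_url,   # downloaded by the server when executing
        "model_pose": pose_id,            # identifies the built-in real model image
        "category": category,             # e.g. "dress"
        "tryon_config": config,           # model skin tone, model face, background style
    }))
    return task_id
```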
The preset server generating the model try-on image of the target garment based on the first try-on image generation task includes: acquiring the category of the target garment, the model pose information, the try-on image generation configuration information, and the clothing canvas image corresponding to the first try-on image generation task; performing image segmentation processing on the clothing canvas image to obtain a clothing mask map; acquiring a preset real model image based on the model pose information; and invoking a preset image generation service to generate the model try-on image of the target garment based on the category, the try-on image generation configuration information, the clothing canvas image, the real model image, and the clothing mask map.
For the specific implementation of each step when the server executes the first try-on image generation task and generates the try-on image of the target garment from the corresponding task data, refer to the description of the related embodiments below; it is not repeated here. For the clothing canvas image shown in FIG. 4, the server can generate the model try-on image shown in FIG. 5, an AI-generated image of a simulated real model.
In some alternative embodiments, the server may upload the generated model try-on image to a preset storage end and send a notification message indicating that execution of the first try-on image generation task is complete. The generated model try-on image can then be pushed to the corresponding client, either in response to a query request from the client or proactively.
Optionally, the task identifier may be used to mark the first try-on image generation task, so that the model try-on image it generates can be stored and queried. For example, after the client submits the first try-on image generation task, the server assigns a task identifier to it and binds the client's current user account to that identifier. The execution result of the task is stored under its task identifier, and notification messages associated with the task are also marked with it. The generated model try-on image can thus be queried and pushed through the task identifier.
In other embodiments, the execution order of step 100 and step 102 may be interchanged. The embodiments of the present application do not limit the order in which the user selects the mannequin type, uploads the image, and configures the category and the try-on image generation configuration information; correspondingly, the order in which the client acquires the dummy model image, the user-uploaded image, the category, and the try-on image generation configuration information is not limited either.
Referring to FIG. 6, in an alternative embodiment based on the foregoing, after the category of the target garment and the try-on image generation configuration information input by the user, and the user-uploaded image, are obtained, the method further includes step 101 and step 108.
Step 101: determine the mannequin type selected by the user; in response to the user selecting the merchant mannequin, jump to step 108, and in response to the user selecting the system mannequin, jump to step 102.
In some alternative embodiments, the mannequin type selection entry provided by the client interface includes the system mannequin and the merchant mannequin, and the client and the server carry out the subsequent interaction according to the mannequin type selected by the user.
For example, when the user selects the system mannequin, steps 102 to 106 are performed; when the user selects the merchant mannequin, step 108 is performed.
Step 108: in response to the user selecting the merchant mannequin to generate a model try-on image, generate a second try-on image generation task based on the user-uploaded image, the category, and the try-on image generation configuration information, so that the preset server generates the model try-on image of the target garment based on the second try-on image generation task.
When the user chooses to use the merchant mannequin to generate the model try-on image, the client generates the second try-on image generation task directly based on the user-uploaded image, the category, and the try-on image generation configuration information. For the specific implementation of generating the second task from these inputs, refer to the implementation of generating the first try-on image generation task from the clothing canvas image, the category, and the try-on image generation configuration information; it is not repeated here.
The source image for generating the model try-on image in the task data of the second try-on image generation task is the image uploaded by the merchant, and the task data of the second try-on image generation task does not include model pose information.
In summary, when a clothing try-on image needs to be generated, the user uploads an image of the clothing to be tried on and configures the category and the try-on image generation configuration information. The user can then select the system mannequin to generate the model try-on image: the client displays dummy model images in at least one preset model pose for the user to select, and synthesizes the clothing canvas image from the dummy model image selected by the user and the user-uploaded image. A first try-on image generation task is then generated based on the clothing canvas image, the model pose information, the category, and the try-on image generation configuration information, so that the preset server generates the model try-on image of the target garment based on the task.
Further, when the clothing canvas image is synthesized, the user can adjust the fit between the clothing and the model, so that the combination of the two in the generated try-on image is more natural and snug, improving how well the model try-on image displays the clothing.
Model pose information is retained when the first try-on image generation task is generated. When the server executes the first try-on image generation task to generate the model try-on image, it further acquires the real model image built into the system that corresponds to the model pose information and uses it in generation, so that the face and skeleton of the model in the generated try-on image are more natural and attractive.
On the basis of the embodiment, the application also provides a clothing fitting diagram generating method, which is used for executing a first fitting diagram generating task to generate a model fitting diagram based on clothing canvas images, model posture information, categories, fitting diagram generating configuration information and the like corresponding to the first fitting diagram generating task.
Referring to fig. 7, the method for generating a fitting pattern disclosed in the present embodiment is applied to a fitting pattern generating system or a server of an application, and includes: steps 702 to 708.
Step 702, obtaining the category, model gesture information, fitting pattern generation configuration information and a clothing canvas image of a target clothing corresponding to a first fitting pattern generation task.
In some alternative embodiments, the task data may be used to obtain the category, model pose information, fitting diagram generation configuration information, and canvas image of the target garment corresponding to the first fitting diagram generation task. The task data can be stored in a preset task message queue through a task identification mark. In other alternative embodiments, the task data may be transmitted through interface parameters of a preset service. In the embodiment of the application, the transmission mode of the task data is not limited.
Wherein, the category can be a second-level product category of target clothes on an electronic commerce platform. The fitting pattern generation configuration information is set by a user for the target apparel, including but not limited to: model skin color, model face, background style, etc. The clothing canvas image is as follows: and taking the dummy model of the target clothes as a foreground and taking a transparent background as an image, wherein the clothes canvas image is a synthesized image according to a user uploading image uploaded by a user and a dummy model image with one gesture selected by the user from a system or application preset dummy model image. The model pose information is used to identify a dummy model image that synthesizes the clothing canvas image.
In some optional embodiments, if the fitting diagram generating configuration information included in the task data corresponding to the first fitting diagram generating task is a clothing canvas image, the server may directly obtain the image data of the clothing canvas image from the task data. In some optional embodiments, the fitting diagram generating configuration information included in the task data corresponding to the first fitting diagram generating task is a storage address of the clothing canvas image, and the server side needs to download the clothing canvas image from the storage address.
The method for generating the task of the first test chart and the method for specifically generating the task data are described in the foregoing embodiments, and are not repeated herein.
And step 704, performing image segmentation processing on the clothing canvas image to obtain a clothing mask.
In some optional embodiments, the performing image segmentation processing on the canvas image of the garment to obtain a garment mask layout includes: invoking a preset target detector based on a first preset semantic prompt word, and performing target detection on the clothing canvas image to obtain the position information of the target clothing in the clothing canvas image; and carrying out image segmentation processing on the clothing canvas image based on the position information to obtain the clothing mask map of the target clothing mask. For example, the clothings (i.e., the clothing) may be used as a first preset semantic cue word, and the grouping DINO may be called based on the first preset semantic cue word, and the clothing canvas image is used as an input image, where the grouping DINO may identify the location information of the clothing element in the clothing canvas image.
In some optional embodiments, a Segment analysis algorithm may be used to further Segment a mask image at a corresponding position in the canvas image of the garment based on the position information output by the grouping DINO, so as to obtain the garment mask layout.
Step 706: obtain a preset real-person model image based on the model pose information.

As described above, the model pose information corresponds to a real-person model image built into the fitting diagram generation application or system; therefore, the real-person model image corresponding to the model pose information can be obtained by retrieving it from the built-in real-person model images based on the model pose information. Taking the built-in real-person model images and dummy model images as an example: after the user chooses to generate the model fitting diagram with the system mannequin and selects the dummy model image of pose A, the model pose information may be recorded as pose A. In this step, the built-in real-person model images are searched based on pose A, and the real-person model image of pose A is obtained. A trivial sketch of this pose-keyed retrieval is given below.
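The following sketch only illustrates the one-to-one correspondence between pose information and built-in images; the registry contents and path layout are illustrative assumptions:

```python
# A trivial sketch of pose-keyed retrieval of built-in real-person model
# images; registry entries and file paths are illustrative assumptions.
REAL_MODEL_REGISTRY = {
    "pose_a": "builtin/real_models/pose_a.png",
    "pose_b": "builtin/real_models/pose_b.png",
}

def get_real_model_image(model_pose_info: str) -> str:
    # The model pose information recorded in the task (e.g. "pose_a") keys
    # directly into the registry of built-in real-person model images.
    return REAL_MODEL_REGISTRY[model_pose_info]
```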
Step 708: based on the category, the fitting diagram generation configuration information, the clothing canvas image, the real-person model image, and the clothing mask image, invoke a preset image generation service to generate a model fitting diagram of the target garment.

That is, a prompt word is generated based on the category and the fitting diagram generation configuration information, and the preset image generation service is invoked with the clothing canvas image, the real-person model image, and the clothing mask image as input images to generate the model fitting diagram of the target garment.
A Stable Diffusion large model has the capability of generating images according to prompt words, and Stable Diffusion can be combined with other models to generate images with specified effects. ControlNet is a neural network structure that controls a pre-trained large model such as Stable Diffusion through additional inputs; it is used to control the generated picture content and currently supports control models such as Inpaint (image restoration) and OpenPose (pose). For example, the Stable Diffusion large model can be combined with an inpainting (image restoration) model: Stable Diffusion inpainting is an efficient, robust, and high-quality image restoration algorithm that interpolates the missing part by means such as partial-differential-equation-based diffusion and gradient descent, thereby restoring the image. The expected picture effect can be achieved by combining the positive and negative prompt words of Stable Diffusion. An illustrative sketch of prompt-guided inpainting follows.
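As an illustrative sketch only (the embodiments do not prescribe a specific SDK), prompt-guided inpainting of the kind described above can be exercised through the open-source diffusers library; the checkpoint name, prompts, and image sizes below are assumptions:

```python
# A minimal prompt-guided inpainting sketch using the open-source diffusers
# library; checkpoint name, prompts, and sizes are illustrative assumptions.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("clothing_canvas.png").convert("RGB").resize((512, 512))
mask_image = Image.open("region_mask.png").convert("RGB").resize((512, 512))

# White pixels in mask_image are regenerated while black pixels are kept,
# so pixels that must survive (e.g. the garment) stay black in the mask.
result = pipe(
    prompt="a fashion model wearing the garment, studio background",
    negative_prompt="deformed hands, extra fingers, blurry",
    image=init_image,
    mask_image=mask_image,
).images[0]
result.save("model_fitting_diagram.png")
```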
In some alternative embodiments, the model fitting diagram is generated based on Stable Diffusion inpainting technology. Optionally, the preset image generation service integrates the following control models on top of a preset generative large model: a first model for controlling generation of picture details, a second model for controlling generation of picture contour details, and a third model for controlling generation of the model pose in the picture. The preset generative large model may be a Stable Diffusion large model; the first model may be an Inpainting model, the second model may be a SoftEdge preprocessing model, and the third model may be an OpenPose model. The first, second, and third models may be models pre-trained in the prior art, integrated according to the model loading specification of the Stable Diffusion large model to obtain the preset image generation service. For the manner of constructing this service, reference may be made to the prior-art techniques for combining the Stable Diffusion large model with ControlNet models, which are not repeated here.

Further, to optimize the generation effect of Stable Diffusion, in the embodiments of the present application an ADetailer face_yolov8s.pt model (a face optimization model) is used to optimize the model's face, and a hand_yolov8n.pt model (a hand optimization model) is used to optimize the model's hands, so as to generate a model fitting diagram with finer face and hand details.

Correspondingly, invoking the preset image generation service to generate the model fitting diagram of the target garment based on the category, the fitting diagram generation configuration information, the clothing canvas image, the real-person model image, and the clothing mask image includes the following steps: generating a target prompt word based on the category, the fitting diagram generation configuration information, and preset prompt words; and invoking the preset image generation service in combination with the target prompt word to generate the model fitting diagram of the target garment, wherein the input of the first model is the clothing canvas image and the clothing mask image, the input of the second model is the clothing canvas image and the real-person model image, and the input of the third model is the real-person model image.

The prompt word is used to control the style of the image generated by the Stable Diffusion large model. Since the fitting diagram generation configuration information input by the user controls the style of the generated model fitting diagram, part of the prompt word needs to be generated from this configuration information. In addition, general positive prompt words obtained through training are preset for all garment categories supported in the fitting diagram generation system or application; they can be used as another part of the prompt word to improve the quality and stability of the generated model fitting diagram.

In some optional embodiments, generating the target prompt word based on the category, the fitting diagram generation configuration information, and the preset prompt words includes: obtaining a first prompt word corresponding to the category from the prompt words preset for each category; generating a second prompt word based on the fitting diagram generation configuration information; and splicing the first prompt word and the second prompt word to generate the target prompt word. The preset prompt word of each category is obtained through repeated training, and the embodiments of the present application do not limit the preset prompt words of the various categories. As described above, the fitting diagram generation configuration information includes, but is not limited to, one or more of: model skin color, model face, and background style; accordingly, second prompt words describing the model skin color, model face, and background style of the model fitting diagram to be generated can be derived from it. The first prompt word and the second prompt word are then spliced in a specified order to generate the target prompt word.
In some embodiments of the present application, obtaining the first prompt word corresponding to the category from the prompt words preset for each category includes any one of the following: when the category is a top, obtaining a first prompt word for generating a matching bottom; when the category is a skirt or trousers, obtaining a first prompt word for generating a matching top; and when the category is a one-piece dress, obtaining a first prompt word for not generating matching garments. For example, for the categories top, skirt, one-piece dress, and trousers, the preset prompt words may respectively include the following: for a top, a prompt word for generating a bottom; for a skirt or trousers, a prompt word for generating a top; and for a one-piece dress, a negative prompt word for not generating an inner layer or a bottom. By setting prompt words that describe collocation information for the preset categories, garment collocation can be realized in the model fitting diagram, further improving the user experience. A minimal sketch of this category-to-prompt mapping and of the splicing step is given below.
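The following sketch shows one way the prompt selection and splicing could be organized; the concrete prompt texts, configuration keys, and the dictionary-based lookup are illustrative assumptions, not the wording trained in the embodiments:

```python
# A minimal sketch of category-based prompt selection and prompt splicing.
# Prompt texts and configuration keys are illustrative assumptions.
CATEGORY_PROMPTS = {            # first prompt word, preset per category
    "top": "wearing matching casual trousers",
    "skirt": "wearing a matching plain top",
    "trousers": "wearing a matching plain top",
}
CATEGORY_NEGATIVE = {           # negative prompt words for some categories
    "one-piece dress": "extra inner layer, extra bottom garment",
}

def build_target_prompt(category: str, config: dict) -> tuple[str, str]:
    first = CATEGORY_PROMPTS.get(category, "")
    # The second prompt word is derived from the user's configuration items
    # (model skin color, model face, background style).
    second = ", ".join(str(v) for v in config.values())
    # Splice the first and second prompt words in a specified order.
    positive = ", ".join(p for p in (first, second) if p)
    negative = CATEGORY_NEGATIVE.get(category, "")
    return positive, negative

positive, negative = build_target_prompt(
    "top", {"skin": "fair skin", "face": "natural smile", "bg": "studio"}
)
print(positive)  # "wearing matching casual trousers, fair skin, ..."
```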
In the embodiments of the present application, the third model (such as the OpenPose preprocessor of ControlNet) is required to generate a skeleton map from the real-person model image; the skeleton map is used as part of the input data of the Stable Diffusion large model so that the model pose in the generated model fitting diagram is natural. In addition, the clothing canvas image and the clothing mask image are used as the input images of the first model (such as the Inpainting preprocessor of ControlNet), which restores the photograph to the greatest extent and finely controls the garment material details and edge areas. Meanwhile, the clothing canvas image and the real-person model image are used as the input of the second model (such as the SoftEdge preprocessor of ControlNet) to delimit the approximate area of the model body and the garment.

The Stable Diffusion image generation service may be invoked in JSON format via the HTTP (Hypertext Transfer Protocol) protocol; the mapping of parameters in the JSON follows the standard of the image generation application. For example, when the clothing canvas image and the clothing mask image are required as input of the Inpainting model, the value of the init_images field in the JSON may be set to the base64-encoded string of the clothing canvas image, the value of the mask field may be set to the base64-encoded string of the clothing mask image, and the other images are likewise converted to base64-encoded strings and assigned to their corresponding fields, obtaining the JSON text.

Then, the target prompt word and the JSON text are used as the input of the Stable Diffusion image generation service, and the service executes the image generation operation based on this input. In the image generation stage of Stable Diffusion, the ControlNet preprocessors OpenPose and SoftEdge adopted by Stable Diffusion control the model pose and the display effect, and the ADetailer plug-in optimizes the model's face and hands, so that a high-quality model fitting diagram of the target garment is obtained, in which the model's skin color, face, and background style match the user configuration and the model's expression and pose are natural. A hedged sketch of such an HTTP call is shown below.
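The following sketch illustrates such a call; the payload layout follows the widely used Stable Diffusion WebUI img2img API, while the host address, model names, extension argument structure, and parameter values are assumptions that vary between deployments:

```python
# A hedged sketch of an HTTP call to a Stable Diffusion image generation
# service. Host, model names, and the extension argument layout are
# deployment-specific assumptions.
import base64
import requests

def b64(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

payload = {
    "init_images": [b64("clothing_canvas.png")],  # canvas image, base64
    "mask": b64("clothing_mask.png"),             # clothing mask, base64
    "prompt": "fair skin, studio background, wearing matching trousers",
    "negative_prompt": "deformed hands, extra fingers",
    "denoising_strength": 0.75,
    # ControlNet units (SoftEdge, OpenPose) and ADetailer are typically
    # attached through extension arguments; the exact keys differ between
    # extension versions and are assumptions here.
    "alwayson_scripts": {
        "controlnet": {"args": [
            {"input_image": b64("real_model.png"),
             "module": "softedge_pidinet",
             "model": "control_v11p_sd15_softedge"},
            {"input_image": b64("real_model.png"),
             "module": "openpose",
             "model": "control_v11p_sd15_openpose"},
        ]},
        "ADetailer": {"args": [
            {"ad_model": "face_yolov8s.pt"},
            {"ad_model": "hand_yolov8n.pt"},
        ]},
    },
}

resp = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
images = resp.json()["images"]  # base64-encoded result image(s)
with open("model_fitting_diagram.png", "wb") as f:
    f.write(base64.b64decode(images[0]))
```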
In summary, in the fitting diagram generation method disclosed in the embodiments of the present application, after the category, model pose information, fitting diagram generation configuration information, and clothing canvas image of the target garment corresponding to the first fitting diagram generation task are obtained, image segmentation processing is first performed on the clothing canvas image to obtain a clothing mask image, and a preset real-person model image is obtained based on the model pose information; then, based on the category, the fitting diagram generation configuration information, the clothing canvas image, the real-person model image, and the clothing mask image, a preset image generation service is invoked to generate the model fitting diagram of the target garment. The generated diagram thus preserves the original appearance of the garment in the user-uploaded image, integrates the customized model skin color, model face, and background style set by the user in the fitting diagram generation configuration information, and keeps the model pose selected by the user, improving the quality of the generated model fitting diagram. Moreover, because a system-preset real-person model image is used instead of the dummy model image, hand and face details are preserved to the greatest extent, so the face and hand details in the generated model fitting diagram are more natural and the quality of the generated image is more stable.

Furthermore, by presetting top-and-bottom collocation prompt words for each category, a collocated fitting diagram is output, which perfects the visual effect of the model fitting diagram and improves the user experience. In addition, presetting prompt words for each category improves the stability of the generated model fitting diagram.
On the basis of the foregoing embodiments, the present application further provides a fitting diagram generation method for executing a second fitting diagram generation task, so as to generate a model fitting diagram based on the user-uploaded image, category, fitting diagram generation configuration information, and the like corresponding to the second fitting diagram generation task.

Referring to fig. 8, the fitting diagram generation method disclosed in this embodiment is applied to a server of a fitting diagram generation system or application and includes steps 802 to 808.

Step 802: obtain the category of the target garment corresponding to the second fitting diagram generation task, the fitting diagram generation configuration information, and the user-uploaded image.

In some optional embodiments, these items may be obtained from task data corresponding to the second fitting diagram generation task. The task data may be stored in a preset task message queue, indexed by a task identifier. In other alternative embodiments, the task data may be transmitted through interface parameters of a preset service. The embodiments of the present application do not limit the transmission mode of the task data.

The category may be a second-level product category of the target garment on an e-commerce platform. The fitting diagram generation configuration information is set by the user for the target garment and includes, but is not limited to: model skin color, model face, background style, and the like. The user-uploaded image is an original image of the target garment uploaded by the user.

In some optional embodiments, the task data corresponding to the second fitting diagram generation task contains the image data of the user-uploaded image, and the server may obtain the image data directly from the task data. In other optional embodiments, the task data contains a storage address of the user-uploaded image, and the server needs to download the user-uploaded image from that address.

The manner of generating the second fitting diagram generation task and the specific content of its task data are described in the foregoing embodiments and are not repeated here.
Step 804: perform image segmentation processing on the user-uploaded image to obtain a clothing canvas image, and perform image segmentation processing on the user-uploaded image to obtain a model mask image.

In some optional embodiments, performing image segmentation processing on the user-uploaded image to obtain the clothing canvas image includes: invoking a preset target detector based on a first preset semantic prompt word to perform target detection on the user-uploaded image, obtaining first position information of the target garment in the user-uploaded image; performing image segmentation processing on the user-uploaded image based on the first position information to obtain a clothing mask image masking the target garment; performing matting processing on the user-uploaded image based on the clothing mask image to obtain a matting image; and optimizing the edge area of the matting image by a pixel mapping method and a Gaussian blur processing method to obtain the clothing canvas image.

In the embodiments of the present application, targets in an image can be detected by the Grounding DINO target detector. For example, the word "clothes" is used as the first preset semantic prompt word; Grounding DINO is invoked based on this prompt word with the user-uploaded image as the input image, and it identifies the position information of the clothing elements in the user-uploaded image, denoted "first position information" in the embodiments of the present application.

Then, a Segment Anything algorithm may be used to segment, based on the first position information output by Grounding DINO, the mask image at the corresponding position in the user-uploaded image, obtaining the mask image corresponding to the garment body.

The specific implementation of performing matting processing on the user-uploaded image based on the clothing mask image is described in the foregoing and is not repeated here.

The specific implementation of optimizing the edge area of the matting image by the pixel mapping method and the Gaussian blur processing method to obtain the clothing canvas image is described in the foregoing embodiments and is not repeated here.

Performing edge mapping on the pixel values of the alpha channel of the matting image reduces the white edge area; blurring the matting boundary with a Gaussian blur algorithm then reduces the jaggedness of the matting boundary, further improving the quality of the clothing canvas image obtained by matting. A minimal sketch of this edge optimization is given below.
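The following sketch illustrates one plausible form of the alpha-channel edge optimization; the mapping thresholds and the blur kernel size are illustrative assumptions:

```python
# A minimal sketch of the alpha-edge optimization described above; the
# pixel-mapping thresholds and blur kernel size are illustrative assumptions.
import cv2
import numpy as np

def refine_canvas_edges(rgba: np.ndarray) -> np.ndarray:
    """rgba: H x W x 4 matting result that carries an alpha channel."""
    alpha = rgba[:, :, 3].astype(np.float32)
    # Pixel mapping: remap the alpha levels so that semi-transparent fringe
    # pixels are pushed toward fully transparent or fully opaque, shrinking
    # the white halo left around the garment after matting.
    alpha = np.clip((alpha - 64.0) * 255.0 / (255.0 - 2.0 * 64.0), 0.0, 255.0)
    # Gaussian blur softens the hard matting boundary and reduces jaggies.
    alpha = cv2.GaussianBlur(alpha, (5, 5), 0)
    rgba[:, :, 3] = alpha.astype(np.uint8)
    return rgba

matting = cv2.imread("matting.png", cv2.IMREAD_UNCHANGED)  # BGRA matting image
cv2.imwrite("clothing_canvas.png", refine_canvas_edges(matting))
```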
In some optional embodiments, performing image segmentation processing on the user-uploaded image to obtain the model mask image includes: invoking a preset target detector based on a second preset semantic prompt word to perform target detection on the user-uploaded image, obtaining second position information of the model in the user-uploaded image; and performing image segmentation processing on the user-uploaded image based on the second position information to obtain the model mask image masking the model.

The Grounding DINO target detector is trained in advance on a number of semantic prompt words, from which a prompt word indicating the detection target can be selected. In some alternative embodiments, the word "model" may be used as the second preset semantic prompt word; Grounding DINO is invoked based on this prompt word with the user-uploaded image as the input image, and it identifies the position information of the model elements in the user-uploaded image, denoted "second position information" in this embodiment.

In some optional embodiments, a Segment Anything algorithm may then be used to segment, based on the second position information output by Grounding DINO, the mask image at the corresponding position in the user-uploaded image, obtaining the model mask image.

Step 806: perform image segmentation processing on the clothing canvas image to obtain a clothing mask image.

The specific implementation of performing image segmentation processing on the clothing canvas image to obtain the clothing mask image is described in the foregoing and is not repeated here.
Step 808: based on the category, the fitting diagram generation configuration information, the clothing canvas image, the model mask image, and the clothing mask image, invoke a preset image generation service to generate a model fitting diagram of the target garment.

The preset image generation service integrates the following control models on top of a preset generative large model: a first model for controlling generation of picture details, a second model for controlling generation of picture contour details, and a third model for controlling generation of the model pose in the picture.

The preset image generation service invoked in this embodiment is described above and is not repeated here.

Correspondingly, invoking the preset image generation service to generate the model fitting diagram of the target garment based on the category, the fitting diagram generation configuration information, the clothing canvas image, the model mask image, and the clothing mask image includes the following steps: generating a target prompt word based on the category, the fitting diagram generation configuration information, and preset prompt words; and invoking the preset image generation service in combination with the target prompt word to generate the model fitting diagram of the target garment, wherein the input of the first model is the clothing canvas image and the clothing mask image, the input of the second model is the clothing canvas image and the model mask image, and the input of the third model is the model mask image.

In some optional embodiments, generating the target prompt word based on the category, the fitting diagram generation configuration information, and the preset prompt words includes: obtaining a third prompt word corresponding to the category from the prompt words preset for each category; generating a fourth prompt word based on the fitting diagram generation configuration information; and splicing the third prompt word and the fourth prompt word to generate the target prompt word.

The specific implementation of obtaining the third prompt word corresponding to the category is the same as that of obtaining the first prompt word described above and is not repeated here. The specific implementation of generating the fourth prompt word based on the fitting diagram generation configuration information is the same as that of generating the second prompt word described above and is not repeated here.

In some embodiments of the present application, obtaining the third prompt word corresponding to the category from the prompt words preset for each category includes any one of the following: when the category is a top, obtaining a third prompt word for generating a matching bottom; when the category is a skirt or trousers, obtaining a third prompt word for generating a matching top; and when the category is a one-piece dress, obtaining a third prompt word for not generating a bottom or an inner layer. For example, for the categories top, skirt, one-piece dress, and trousers, the preset prompt words may respectively include the following: for a top, a prompt word for generating a bottom; for a skirt or trousers, a prompt word for generating a top; and for a one-piece dress, a negative prompt word for not generating an inner layer or a bottom. By setting prompt words that describe collocation information for the preset categories, garment collocation can be realized in the model fitting diagram, further improving the user experience.

In the embodiments of the present application, when the merchant platform mode is used to generate the model fitting diagram, the third model (such as the OpenPose preprocessor of ControlNet) is required to generate a skeleton map from the model mask image; the skeleton map is used as part of the input data of the Stable Diffusion large model so that the pose of the model in the user-uploaded image is preserved. In addition, the clothing canvas image and the clothing mask image are used as the input images of the first model (such as the Inpainting preprocessor of ControlNet), which restores the photograph to the greatest extent and finely controls the garment material details and edge areas. Meanwhile, the clothing canvas image and the model mask image are used as the input of the second model (such as the SoftEdge preprocessor of ControlNet) to delimit the approximate area of the model body and the garment.

The interface invocation manner of the Stable Diffusion image generation service is described above and is not repeated here.

In the image generation stage of Stable Diffusion, the ControlNet preprocessors OpenPose and SoftEdge adopted by Stable Diffusion control the model pose and the display effect, and the ADetailer plug-in is enabled to optimize the model's face and hands, so that a high-quality model fitting diagram of the target garment is obtained, in which the model's skin color, face, and background style match the user configuration and the model's expression and pose are natural.
In summary, in the fitting diagram generation method disclosed in the embodiments of the present application, after the category, the fitting diagram generation configuration information, and the user-uploaded image corresponding to the second fitting diagram generation task are obtained, image segmentation processing is performed on the user-uploaded image to obtain a clothing canvas image and a model mask image, which are used to restore the garment details and preserve the model pose in the user-uploaded image; image segmentation processing is then performed on the clothing canvas image to obtain a clothing mask image describing the edge information of the garment; and finally, based on the category, the fitting diagram generation configuration information, the clothing canvas image, the model mask image, and the clothing mask image, a preset image generation service is invoked to generate the model fitting diagram of the target garment. The generated model fitting diagram thus preserves the pose and original features of the model in the user-uploaded image, integrates the customized model skin color, model face, and background style in the fitting diagram generation configuration information, and renders the face and hand details of the model more naturally.

Furthermore, by presetting top-and-bottom collocation prompt words for each category, a collocated fitting diagram is output, which perfects the visual effect of the model fitting diagram and improves the user experience. In addition, presetting prompt words for each category improves the stability of the generated model fitting diagram.
Referring to fig. 9, on the basis of the foregoing embodiments, an embodiment of the present application further provides a fitting diagram generation system, the system including a client 900 and a server 910. The client 900 is configured to execute the fitting diagram generation method described in the foregoing embodiments corresponding to the flowcharts of fig. 1 and fig. 6; the server 910 is configured to execute the fitting diagram generation method described in the foregoing embodiments corresponding to the flowcharts of fig. 7 and/or fig. 8.

In an alternative embodiment, as shown in fig. 9, the server 910 further includes: a control module 911, a segmentation service module 912, and an image generation service module 913. These modules may be deployed independently or partially independently, or may be deployed in an integrated manner. The steps executed by the fitting diagram generation system in an alternative embodiment are illustrated below in conjunction with the interaction flow shown in fig. 9.

The client 900 is configured to obtain the category of the target garment and the fitting diagram generation configuration information input by the user, and the user-uploaded image.

The client 900 is further configured to determine the type of mannequin selected by the user. If the type is the system mannequin, the client 900 is further configured to obtain a target dummy model image and the model pose information of the target dummy model image, synthesize a clothing canvas image based on the user-uploaded image and the target dummy model image, generate a first fitting diagram generation task based on the clothing canvas image, the model pose information, the category, and the fitting diagram generation configuration information, and send the first fitting diagram generation task to the server.

The client 900 is further configured to, when determining that the user chooses to generate the model fitting diagram in the merchant platform mode, generate a second fitting diagram generation task based on the user-uploaded image, the category, and the fitting diagram generation configuration information, and send the second fitting diagram generation task to the server.

The server 910 is configured to generate a model fitting diagram of the target garment based on the first fitting diagram generation task, and/or to generate a model fitting diagram of the target garment based on the second fitting diagram generation task.
In some alternative embodiments, in the process of the server 910 generating the model fitting diagram of the target garment based on the first fitting diagram generation task:

the control module 911 is configured to obtain the category, model pose information, fitting diagram generation configuration information, and clothing canvas image of the target garment corresponding to the first fitting diagram generation task;

the control module 911 is further configured to invoke the segmentation service module 912 to perform image segmentation processing on the clothing canvas image to obtain a clothing mask image;

the control module 911 is further configured to obtain a preset real-person model image based on the model pose information;

the control module 911 is further configured to invoke the image generation service module 913, based on the category, the fitting diagram generation configuration information, the clothing canvas image, the real-person model image, and the clothing mask image, to generate the model fitting diagram of the target garment.

At this point, the flow of generating a model fitting diagram based on the system mannequin is completed.
In some alternative embodiments, in the process of the server 910 generating the model fitting diagram of the target garment based on the second fitting diagram generation task:

the control module 911 is further configured to obtain the category of the target garment corresponding to the second fitting diagram generation task, the fitting diagram generation configuration information, and the user-uploaded image;

the control module 911 is further configured to invoke the segmentation service module 912 to perform image segmentation processing on the user-uploaded image to obtain a clothing canvas image, and to invoke the segmentation service module 912 to perform image segmentation processing on the user-uploaded image to obtain a model mask image;

the control module 911 is further configured to invoke the segmentation service module 912 to perform image segmentation processing on the clothing canvas image to obtain a clothing mask image;

the control module 911 is further configured to invoke the image generation service module 913, based on the category, the fitting diagram generation configuration information, the clothing canvas image, the model mask image, and the clothing mask image, to generate the model fitting diagram of the target garment.

At this point, the flow of generating a model fitting diagram based on the merchant platform mode is completed.

The specific implementations of the client 900 and the control module 911 are described in the foregoing embodiments and are not repeated here. For the specific implementation of the image generation service module 913, reference is made to the Stable Diffusion image generation service described above. In some alternative embodiments, the segmentation service module 912 may be implemented based on the Grounding DINO target detector and Segment Anything (an image segmentation algorithm); it may also use other segmentation services in the prior art, and the embodiments of the present application do not limit its specific implementation.
The client 900 is further configured to query and display the model fitting diagram.

In summary, in the fitting diagram generation system disclosed in the embodiments of the present application, when the user-uploaded image shows the garment worn on a mannequin without full limbs, or laid flat, the user uploads the image of the target garment on the client, configures the category and the model skin color and/or model face and/or background style of the model fitting diagram as required, and then only needs to select the system mannequin; the system displays dummy models of various poses for the user and generates the model fitting diagram according to the model pose selected by the user. In the generated model fitting diagram, the model wears the target garment in the pose selected by the user, which effectively improves the quality of the generated model fitting diagram; details such as the model's face and hands are natural, so the generation effect is more stable. When the model pose in the user-uploaded image needs to be kept, the user can select the merchant platform mode, and a model fitting diagram is generated according to the information configured by the user; in the generated model fitting diagram, the model wears the target garment in the pose of the model in the user-uploaded image.

It should be noted that the model in the model fitting diagram generated in the embodiments of the present application may be a model generated by the image generation technique, or may be a model whose portrait rights have been authorized by the person concerned.

It should be noted that the embodiments of the present application may involve the use of user data. In practical applications, user-specific personal data may be used in the schemes described herein within the scope permitted by the applicable laws and regulations of the country concerned (for example, with the user's explicit consent and after the user has actually been notified).
It should be noted that, for simplicity of description, the method embodiments are expressed as a series of combined actions, but those skilled in the art should understand that the embodiments of the present application are not limited by the described order of actions, because some steps may be performed in other orders or concurrently according to the embodiments of the present application. Further, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments and that the actions involved are not necessarily required by the embodiments of the present application.
On the basis of the foregoing embodiments, this embodiment further provides a fitting diagram generation apparatus, applied to electronic devices such as client devices, the apparatus including:

a user-uploaded image and configuration information obtaining module, configured to obtain the category of the target garment and the fitting diagram generation configuration information input by the user, and the user-uploaded image;

a whole-body dummy model image and pose obtaining module, configured to obtain, in response to the user choosing to generate the model fitting diagram with the system mannequin, a target dummy model image and the model pose information of the target dummy model image;

a clothing canvas image generation module, configured to synthesize a clothing canvas image based on the user-uploaded image and the target dummy model image;

a first model fitting diagram generation module, configured to generate a first fitting diagram generation task based on the clothing canvas image, the model pose information, the category, and the fitting diagram generation configuration information, so that a preset server generates the model fitting diagram of the target garment based on the first fitting diagram generation task.
In some optional embodiments, the clothing canvas image generation module is further configured to:

create a canvas based on the target dummy model image;

perform clothing segmentation processing on the user-uploaded image to obtain a clothing image of the target garment;

create a clothing layer based on the clothing image;

display, in real time, the adaptation result of the target garment in the clothing layer and the dummy model in the target dummy model image, based on the user's adjustment of the display information of the clothing layer in the canvas;

and synthesize, based on the user's confirmation of the adaptation result, the clothing layer and the canvas according to the adjusted display information to obtain the clothing canvas image.
In some optional embodiments, the apparatus further includes, for use after the category of the target garment, the fitting diagram generation configuration information input by the user, and the user-uploaded image are obtained:

a second model fitting diagram generation module, configured to generate, in response to the user choosing to generate the model fitting diagram in the merchant platform mode, a second fitting diagram generation task based on the user-uploaded image, the category, and the fitting diagram generation configuration information, so that a preset server generates the model fitting diagram of the target garment based on the second fitting diagram generation task.

In some alternative embodiments, the fitting diagram generation configuration information includes one or more of the following: model skin color, model face, and background style.
In summary, when a clothing fitting diagram needs to be generated, the user uploads an image of the garment to be tried on and configures the category and the fitting diagram generation configuration information; the user can then select the system mannequin to generate the model fitting diagram, the client displays preset dummy model images of at least one model pose for the user to select, and the clothing canvas image is synthesized from the dummy model image selected by the user and the user-uploaded image; then, a first fitting diagram generation task is generated based on the clothing canvas image, the model pose information, the category, and the fitting diagram generation configuration information, so that a preset server generates the model fitting diagram of the target garment based on the first fitting diagram generation task.

Further, when the clothing canvas image is synthesized, the user can adjust the adaptation relationship between the garment and the model, so that the combination of the garment and the model in the generated fitting diagram is more natural and better fitted, improving the display effect of the model fitting diagram on the garment.

When the first fitting diagram generation task is generated, the model pose information is preserved; when the server executes the first fitting diagram generation task to generate the model fitting diagram, it further obtains the system-built-in real-person model image corresponding to the model pose information and uses it for generation, so that the face and skeleton of the model in the generated fitting diagram are more natural and attractive.
On the basis of the foregoing embodiments, this embodiment further provides a fitting diagram generation apparatus, applied to electronic devices such as server devices, the apparatus including:

a first task data obtaining module, configured to obtain the category, model pose information, fitting diagram generation configuration information, and clothing canvas image of the target garment corresponding to the first fitting diagram generation task;

a clothing mask generation module, configured to perform image segmentation processing on the clothing canvas image to obtain a clothing mask image;

a real-person model image obtaining module, configured to obtain a preset real-person model image based on the model pose information;

and a model fitting diagram generation module, configured to invoke a preset image generation service, based on the category, the fitting diagram generation configuration information, the clothing canvas image, the real-person model image, and the clothing mask image, to generate the model fitting diagram of the target garment.

In some alternative embodiments, the preset image generation service integrates the following control models on top of a preset generative large model: a first model for controlling generation of picture details, a second model for controlling generation of picture contour details, and a third model for controlling generation of the model pose in the picture; the model fitting diagram generation module is further configured to:

generate a target prompt word based on the category, the fitting diagram generation configuration information, and preset prompt words;

and invoke the preset image generation service in combination with the target prompt word to generate the model fitting diagram of the target garment, wherein the input of the first model is the clothing canvas image and the clothing mask image, the input of the second model is the clothing canvas image and the real-person model image, and the input of the third model is the real-person model image.
In some optional embodiments, generating the target prompt word based on the category, the fitting diagram generation configuration information, and the preset prompt words includes:

obtaining a first prompt word corresponding to the category from the prompt words preset for each category;

generating a second prompt word based on the fitting diagram generation configuration information;

and splicing the first prompt word and the second prompt word to generate the target prompt word.

In some optional embodiments, obtaining the first prompt word corresponding to the category from the prompt words preset for each category includes any one of the following:

when the category is a top, obtaining a first prompt word for generating a matching bottom;

when the category is a skirt or trousers, obtaining a first prompt word for generating a matching top;

and when the category is a one-piece dress, obtaining a first prompt word for not generating matching garments.

In some optional embodiments, the clothing mask generation module is further configured to:

invoke a preset target detector based on a first preset semantic prompt word to perform target detection on the clothing canvas image, obtaining the position information of the target garment in the clothing canvas image;

and perform image segmentation processing on the clothing canvas image based on the position information to obtain the clothing mask image masking the target garment.
In summary, after the category, model pose information, fitting diagram generation configuration information, and clothing canvas image of the target garment corresponding to the first fitting diagram generation task are obtained, image segmentation processing is first performed on the clothing canvas image to obtain a clothing mask image, and a preset real-person model image is obtained based on the model pose information; then, based on the category, the fitting diagram generation configuration information, the clothing canvas image, the real-person model image, and the clothing mask image, a preset image generation service is invoked to generate the model fitting diagram of the target garment. The generated diagram thus preserves the original appearance of the garment in the user-uploaded image, integrates the customized model skin color, model face, and background style set by the user in the fitting diagram generation configuration information, and keeps the model pose selected by the user, improving the quality of the generated model fitting diagram. Moreover, because a system-preset real-person model image is used instead of the dummy model image, hand and face details are preserved to the greatest extent, so the face and hand details in the generated model fitting diagram are more natural and the quality of the generated image is more stable.

Furthermore, by presetting top-and-bottom collocation prompt words for each category, a collocated fitting diagram is output, which perfects the visual effect of the model fitting diagram and improves the user experience. In addition, presetting prompt words for each category improves the stability of the generated model fitting diagram.

On the basis of the foregoing embodiments, the present application further provides a fitting diagram generation method for executing a second fitting diagram generation task, so as to generate a model fitting diagram based on the user-uploaded image, category, fitting diagram generation configuration information, and the like corresponding to the second fitting diagram generation task.
On the basis of the foregoing embodiments, this embodiment further provides a fitting diagram generation apparatus, applied to electronic devices such as server devices, the apparatus including:

a second task data obtaining module, configured to obtain the category of the target garment corresponding to the second fitting diagram generation task, the fitting diagram generation configuration information, and the user-uploaded image;

a clothing canvas image and model mask obtaining module, configured to perform image segmentation processing on the user-uploaded image to obtain a clothing canvas image, and to perform image segmentation processing on the user-uploaded image to obtain a model mask image;

a clothing mask obtaining module, configured to perform image segmentation processing on the clothing canvas image to obtain a clothing mask image;

and a model fitting diagram generation module, configured to invoke a preset image generation service, based on the category, the fitting diagram generation configuration information, the clothing canvas image, the model mask image, and the clothing mask image, to generate the model fitting diagram of the target garment.

In some alternative embodiments, the preset image generation service integrates the following control models on top of a preset generative large model: a first model for controlling generation of picture details, a second model for controlling generation of picture contour details, and a third model for controlling generation of the model pose in the picture;

the model fitting diagram generation module is further configured to:

generate a target prompt word based on the category, the fitting diagram generation configuration information, and preset prompt words;

and invoke the preset image generation service in combination with the target prompt word to generate the model fitting diagram of the target garment, wherein the input of the first model is the clothing canvas image and the clothing mask image, the input of the second model is the clothing canvas image and the model mask image, and the input of the third model is the model mask image.
In some optional embodiments, performing image segmentation processing on the user-uploaded image to obtain the clothing canvas image includes:

invoking a preset target detector based on a first preset semantic prompt word to perform target detection on the user-uploaded image, obtaining first position information of the target garment in the user-uploaded image;

performing image segmentation processing on the user-uploaded image based on the first position information to obtain a clothing mask image masking the target garment;

performing matting processing on the user-uploaded image based on the clothing mask image to obtain a matting image;

and optimizing the edge area of the matting image by a pixel mapping method and a Gaussian blur processing method to obtain the clothing canvas image.

In some optional embodiments, performing image segmentation processing on the user-uploaded image to obtain the model mask image includes:

invoking a preset target detector based on a second preset semantic prompt word to perform target detection on the user-uploaded image, obtaining second position information of the model in the user-uploaded image;

and performing image segmentation processing on the user-uploaded image based on the second position information to obtain the model mask image masking the model.
In summary, the fitting diagram generation apparatus disclosed in the embodiments of the present application, after obtaining the category of the target garment corresponding to the second fitting diagram generation task, the fitting diagram generation configuration information, and the user-uploaded image, performs image segmentation processing on the user-uploaded image to obtain a clothing canvas image and a model mask image, which are used to restore the garment details and preserve the model pose in the user-uploaded image; it then performs image segmentation processing on the clothing canvas image to obtain a clothing mask image describing the edge information of the garment; and finally, based on the category, the fitting diagram generation configuration information, the clothing canvas image, the model mask image, and the clothing mask image, it invokes a preset image generation service to generate the model fitting diagram of the target garment. The generated model fitting diagram thus preserves the pose and original features of the model in the user-uploaded image, integrates the customized model skin color, model face, and background style in the fitting diagram generation configuration information, and renders the face and hand details of the model more naturally.

Furthermore, by presetting top-and-bottom collocation prompt words for each category, a collocated fitting diagram is output, which perfects the visual effect of the model fitting diagram and improves the user experience. In addition, presetting prompt words for each category improves the stability of the generated model fitting diagram.
The embodiments of the present application also provide a non-volatile readable storage medium in which one or more modules (programs) are stored; when the one or more modules are applied to a device, the device may be caused to execute the instructions of the method steps in the embodiments of the present application.
Embodiments of the present application also provide a computer-readable storage medium having stored therein computer-executable instructions that, when executed by a processor, are configured to implement a method according to embodiments of the present application.
The embodiment of the application also provides electronic equipment, which comprises: a processor, and a memory communicatively coupled to the processor; the memory stores computer-executable instructions; the processor executes the computer-executable instructions stored by the memory to implement the methods as described in embodiments of the present application. In this embodiment of the present application, the electronic device includes a server, a terminal device, and other devices.
Embodiments of the present disclosure may be implemented as an apparatus using any suitable hardware, firmware, software, or any combination thereof to perform the desired configuration; the apparatus may include a server (or server cluster), a terminal device, or the like. Fig. 10 schematically illustrates an example apparatus 1000 that may be used to implement various embodiments described herein.
For one embodiment, fig. 10 illustrates an example apparatus 1000 having one or more processors 1002, a control module (chipset) 1004 coupled to at least one of the processor(s) 1002, a memory 1006 coupled to the control module 1004, a non-volatile memory (NVM)/storage 1008 coupled to the control module 1004, one or more input/output devices 1010 coupled to the control module 1004, and a network interface 1012 coupled to the control module 1004.
The processor 1002 may include one or more single-core or multi-core processors, and the processor 1002 may include any combination of general-purpose or special-purpose processors (e.g., graphics processors, application processors, baseband processors, etc.). In some embodiments, the apparatus 1000 can be used as a server, a terminal, or the like in the embodiments of the present application.
In some embodiments, the apparatus 1000 can include one or more computer-readable media (e.g., memory 1006 or NVM/storage 1008) having instructions 1014 and one or more processors 1002 in combination with the one or more computer-readable media configured to execute the instructions 1014 to implement the modules to perform the actions described in this disclosure.
For one embodiment, the control module 1004 may include any suitable interface controller to provide any suitable interface to at least one of the processor(s) 1002 and/or any suitable device or component in communication with the control module 1004.
The control module 1004 may include a memory controller module to provide an interface to the memory 1006. The memory controller modules may be hardware modules, software modules, and/or firmware modules.
Memory 1006 may be used to load and store data and/or instructions 1014 for the apparatus 1000, for example. For one embodiment, the memory 1006 may include any suitable volatile memory, such as a suitable DRAM. In some embodiments, the memory 1006 may comprise double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, the control module 1004 may include one or more input/output controllers to provide an interface to the NVM/storage 1008 and the input/output device(s) 1010.
For example, NVM/storage 1008 may be used to store data and/or instructions 1014. NVM/storage 1008 may include any suitable nonvolatile memory (e.g., flash memory) and/or may include any suitable nonvolatile storage device(s) (e.g., one or more Hard Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives).
NVM/storage 1008 may include storage resources as part of a device on which apparatus 1000 is installed, or may be accessible by the device without necessarily being part of the device. For example, NVM/storage 1008 may be accessed over a network via input/output device(s) 1010.
Input/output device(s) 1010 may provide an interface for apparatus 1000 to communicate with any other suitable device, input/output device 1010 may include communication components, audio components, sensor components, and the like. Network interface 1012 may provide an interface for device 1000 to communicate over one or more networks, and device 1000 may communicate wirelessly with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols, such as accessing a wireless network based on a communication standard, such as bluetooth, wiFi, 2G, 3G, 4G, 5G, etc., or a combination thereof.
For one embodiment, at least one of the processor(s) 1002 may be packaged together with logic of one or more controllers (e.g., memory controller modules) of the control module 1004. For one embodiment, at least one of the processor(s) 1002 may be packaged together with logic of one or more controllers of the control module 1004 to form a System in Package (SiP). For one embodiment, at least one of the processor(s) 1002 may be integrated on the same die with logic of one or more controllers of the control module 1004. For one embodiment, at least one of the processor(s) 1002 may be integrated on the same die with logic of one or more controllers of the control module 1004 to form a system on chip (SoC).
In various embodiments, the apparatus 1000 may be, but is not limited to being: a server, a desktop computing device, or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.), among other terminal devices. In various embodiments, device 1000 may have more or fewer components and/or different architectures. For example, in some embodiments, the apparatus 1000 includes one or more cameras, a keyboard, a Liquid Crystal Display (LCD) screen (including a touch screen display), a non-volatile memory port, multiple antennas, a graphics chip, an Application Specific Integrated Circuit (ASIC), and a speaker.
Where the apparatus 1000 is implemented as a detection device, a main control chip may serve as the processor or the control module; sensor data, position information, and the like may be stored in the memory or the NVM/storage; a sensor group may serve as the input/output device(s); and the communication interface may include the network interface.
An embodiment of the present application further provides an electronic device, comprising: a processor; and a memory having executable code stored thereon that, when executed, causes the processor to perform a method as described in one or more of the embodiments herein. The memory in the embodiments of the present application can store various data, such as target files and file- and application-related data, as well as user behavior data and the like, thereby providing a data basis for various processing.
Embodiments also provide one or more machine-readable media having executable code stored thereon that, when executed, cause a processor to perform a method as described in one or more of the embodiments of the present application.
For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments for relevant points.
In this specification, the embodiments are described in a progressive manner, each embodiment focusing on its differences from the other embodiments; for identical or similar parts between the embodiments, reference may be made to one another.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications to those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the appended claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the present application.
Finally, it is further noted that relational terms such as first and second are used herein solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal device that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or terminal device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or terminal device that comprises the element.
The fitting pattern generation method, fitting pattern generation system, electronic device, and storage medium provided by the present application have been described in detail above. Specific examples have been used herein to illustrate the principles and embodiments of the present application, and the description of the above embodiments is intended only to help in understanding the method and its core ideas. Meanwhile, those skilled in the art may, in accordance with the ideas of the present application, make changes to the specific embodiments and the scope of application; in view of the above, the content of this specification should not be construed as limiting the present application.

Claims (16)

1. A method for generating a fitting pattern, applied to a client, the method comprising:
obtaining a category of a target clothing, fitting pattern generation configuration information, and a user-uploaded image input by a user;
in response to the user selecting to generate a model fitting pattern using a system mannequin, acquiring a target dummy model image and model pose information of the target dummy model image;
synthesizing a clothing canvas image based on the user-uploaded image and the target dummy model image;
and generating a first fitting pattern generation task based on the clothing canvas image, the model pose information, the category, and the fitting pattern generation configuration information, so that a preset server generates a model fitting pattern of the target clothing based on the first fitting pattern generation task.
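By way of illustration only (this sketch is not part of the claim language), the client-side assembly of the first fitting pattern generation task might look as follows in Python; the endpoint URL, field names, and helper names are all assumptions introduced for this example:

    import base64
    import json
    import urllib.request

    def build_first_task(canvas_png: bytes, pose_info: dict,
                         category: str, config: dict) -> dict:
        # Package the four inputs named in claim 1 into one task payload.
        return {
            "task_type": "first_fitting_pattern_generation",
            "clothing_canvas_image": base64.b64encode(canvas_png).decode("ascii"),
            "model_pose_info": pose_info,   # e.g. pose id or keypoints of the dummy model
            "category": category,           # e.g. "tops", "skirts", "dresses"
            "generation_config": config,    # model skin color, model face, background style
        }

    task = build_first_task(open("clothing_canvas.png", "rb").read(),
                            {"pose_id": "standing_front"},
                            "dresses",
                            {"model_skin_color": "light", "background_style": "studio"})
    req = urllib.request.Request("https://example.com/api/fitting-tasks",  # hypothetical endpoint
                                 data=json.dumps(task).encode("utf-8"),
                                 headers={"Content-Type": "application/json"})
    # urllib.request.urlopen(req)  # submission to the preset server, omitted in this sketch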
2. The method of claim 1, wherein the synthesizing a clothing canvas image based on the user-uploaded image and the target dummy model image comprises:
creating a canvas based on the target dummy model image;
performing clothing segmentation processing on the user-uploaded image to obtain a clothing image of the target clothing;
creating a clothing layer based on the clothing image;
in response to an adjustment operation of the user on display information of the clothing layer in the canvas, displaying in real time an adaptation result between the target clothing in the clothing layer and the dummy model in the target dummy model image;
and in response to a confirmation operation of the user on the adaptation result, synthesizing the clothing layer and the canvas according to the adjusted display information to obtain the clothing canvas image.
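A minimal Pillow sketch of the compositing described in claim 2, assuming the garment has already been segmented into an RGBA image; the offset and scale values stand in for the user's interactive adjustment of the display information:

    from PIL import Image

    def composite_clothing_canvas(dummy_model_path: str, clothing_rgba_path: str,
                                  offset=(120, 180), scale=1.0) -> Image.Image:
        canvas = Image.open(dummy_model_path).convert("RGBA")   # canvas from the dummy model image
        layer = Image.open(clothing_rgba_path).convert("RGBA")  # clothing layer with alpha channel
        if scale != 1.0:
            w, h = layer.size
            layer = layer.resize((int(w * scale), int(h * scale)))
        canvas.alpha_composite(layer, dest=offset)              # place the garment on the dummy model
        return canvas.convert("RGB")                            # the synthesized clothing canvas image

    # composite_clothing_canvas("dummy_model.png", "garment.png").save("clothing_canvas.png")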
3. The method of claim 1, wherein after the obtaining the category of the target clothing, the fitting pattern generation configuration information, and the user-uploaded image input by the user, the method further comprises:
in response to the user selecting to generate the model fitting pattern based on a merchant real-model image, generating a second fitting pattern generation task based on the user-uploaded image, the category, and the fitting pattern generation configuration information, so that the preset server generates the model fitting pattern of the target clothing based on the second fitting pattern generation task.
4. The method of claim 1, wherein the fitting pattern generation configuration information includes one or more of the following: model skin color, model face, background style.
5. A method of generating a fitting pattern, the method comprising:
acquiring a category of a target clothing, model pose information, fitting pattern generation configuration information, and a clothing canvas image corresponding to a first fitting pattern generation task;
performing image segmentation processing on the clothing canvas image to obtain a clothing mask image;
acquiring a preset real model image based on the model pose information;
and calling a preset image generation service based on the category, the fitting pattern generation configuration information, the clothing canvas image, the real model image, and the clothing mask image, to generate a model fitting pattern of the target clothing.
6. The method of claim 5, wherein the preset image generation service integrates, through a preset generative large model, the following control models: a first model for controlling generation of image details, a second model for controlling generation of image contour details, and a third model for controlling generation of a model pose in the image; and the calling the preset image generation service based on the category, the fitting pattern generation configuration information, the clothing canvas image, the real model image, and the clothing mask image to generate the model fitting pattern of the target clothing comprises:
generating a target prompt word based on the category, the fitting pattern generation configuration information, and a preset prompt word;
and calling the preset image generation service in combination with the target prompt word to generate the model fitting pattern of the target clothing, wherein inputs of the first model are the clothing canvas image and the clothing mask image, inputs of the second model are the clothing canvas image and the real model image, and an input of the third model is the real model image.
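Claim 6 fixes only which images feed which control model, not any concrete framework; a ControlNet-style pipeline is one plausible realization. The following sketch merely records that wiring; the dataclass and its field names are assumptions:

    from dataclasses import dataclass

    @dataclass
    class ControlInputs:
        detail_control: tuple    # first model: (clothing canvas image, clothing mask image)
        contour_control: tuple   # second model: (clothing canvas image, real model image)
        pose_control: object     # third model: real model image

    def assemble_control_inputs(canvas_img, clothing_mask_img, real_model_img) -> ControlInputs:
        # Mirror the input assignment recited in claim 6.
        return ControlInputs(
            detail_control=(canvas_img, clothing_mask_img),
            contour_control=(canvas_img, real_model_img),
            pose_control=real_model_img,
        )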
7. The method of claim 6, wherein the generating the target prompt word based on the category, the fitting pattern generation configuration information, and the preset prompt word comprises:
acquiring a first prompt word corresponding to the category based on prompt words preset for respective categories;
generating a second prompt word based on the fitting pattern generation configuration information;
and splicing the first prompt word and the second prompt word to generate the target prompt word.
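A minimal sketch of the prompt splicing in claim 7; the wording of the prompt fragments and the joining convention are invented for illustration:

    def build_target_prompt(first_prompt: str, config: dict) -> str:
        # Derive the second prompt word from the fitting pattern generation
        # configuration information, then splice it onto the first prompt word.
        second_prompt = ", ".join(f"{key} {value}" for key, value in config.items())
        return f"{first_prompt}, {second_prompt}"

    # build_target_prompt("a model wearing the garment, plain matching top",
    #                     {"skin color": "light", "background": "studio"})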
8. The method of claim 7, wherein the acquiring the first prompt word corresponding to the category based on the prompt words preset for respective categories comprises any one of the following:
in a case that the category is tops, acquiring a first prompt word for generating a matching bottom, based on the prompt words preset for respective categories;
in a case that the category is skirts or trousers, acquiring a first prompt word for generating a matching top, based on the prompt words preset for respective categories;
and in a case that the category is dresses, acquiring a first prompt word that generates no matching garment, based on the prompt words preset for respective categories.
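One way to encode the per-category rule of claim 8 is a simple lookup table; the category keys and prompt strings below are illustrative assumptions, not the patent's actual presets:

    CATEGORY_PROMPTS = {
        "tops":     "pair with plain matching bottoms",   # tops -> generate bottoms
        "skirts":   "pair with a simple plain top",       # skirts -> generate a top
        "trousers": "pair with a simple plain top",       # trousers -> generate a top
        "dresses":  "full-body dress, no additional matching garments",
    }

    def first_prompt_for(category: str) -> str:
        return CATEGORY_PROMPTS[category]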
9. The method of claim 5, wherein the performing image segmentation processing on the clothing canvas image to obtain the clothing mask image comprises:
invoking a preset target detector based on a first preset semantic prompt word to perform target detection on the clothing canvas image, to obtain position information of the target clothing in the clothing canvas image;
and performing image segmentation processing on the clothing canvas image based on the position information to obtain the clothing mask image of the target clothing.
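An abstract sketch of the two-stage mask extraction in claim 9. The detector and segmenter are passed in as callables because the claim does not fix concrete models; detect_boxes and segment_region are placeholder names, not real library APIs:

    import numpy as np

    def clothing_mask_from_canvas(canvas: np.ndarray, semantic_prompt: str,
                                  detect_boxes, segment_region) -> np.ndarray:
        # Stage 1: text-prompted detection yields the garment's position information.
        boxes = detect_boxes(canvas, prompt=semantic_prompt)
        # Stage 2: segment within each detected box and merge into one binary mask.
        mask = np.zeros(canvas.shape[:2], dtype=np.uint8)
        for box in boxes:
            mask |= segment_region(canvas, box)
        return mask  # the clothing mask image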
10. A method of generating a fitting pattern, the method comprising:
acquiring a category of a target clothing, fitting pattern generation configuration information, and a user-uploaded image corresponding to a second fitting pattern generation task;
performing image segmentation processing on the user-uploaded image to obtain a clothing canvas image, and performing image segmentation processing on the user-uploaded image to obtain a model mask image;
performing image segmentation processing on the clothing canvas image to obtain a clothing mask image;
and calling a preset image generation service based on the category, the fitting pattern generation configuration information, the clothing canvas image, the model mask image, and the clothing mask image, to generate a model fitting pattern of the target clothing.
11. The method of claim 10, wherein the preset image generation service integrates, through a preset generative large model, the following control models: a first model for controlling generation of image details, a second model for controlling generation of image contour details, and a third model for controlling generation of a model pose in the image; and the calling the preset image generation service based on the category, the fitting pattern generation configuration information, the clothing canvas image, the model mask image, and the clothing mask image to generate the model fitting pattern of the target clothing comprises:
generating a target prompt word based on the category, the fitting pattern generation configuration information, and a preset prompt word;
and calling the preset image generation service in combination with the target prompt word to generate the model fitting pattern of the target clothing, wherein inputs of the first model are the clothing canvas image and the clothing mask image, inputs of the second model are the clothing canvas image and the model mask image, and an input of the third model is the model mask image.
12. The method of claim 10, wherein the performing image segmentation processing on the user-uploaded image to obtain the clothing canvas image comprises:
invoking a preset target detector based on a first preset semantic prompt word to perform target detection on the user-uploaded image, to obtain first position information of the target clothing in the user-uploaded image;
performing image segmentation processing on the user-uploaded image based on the first position information to obtain a mask image of the target clothing;
performing matting processing on the user-uploaded image based on the mask image to obtain a matting image;
and optimizing an edge area of the matting image by a pixel mapping method and a Gaussian blur processing method to obtain the clothing canvas image.
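A sketch of the edge optimization in claim 12 using OpenCV. The claim names only pixel mapping plus Gaussian blur; the kernel size and the white compositing background here are assumptions:

    import cv2
    import numpy as np

    def soften_matting_edges(image_bgr: np.ndarray, mask01: np.ndarray,
                             blur_ksize: int = 15) -> np.ndarray:
        # Gaussian-blur the binary mask so the garment edge falls off smoothly.
        soft = cv2.GaussianBlur(mask01.astype(np.float32), (blur_ksize, blur_ksize), 0)
        alpha = np.clip(soft, 0.0, 1.0)[..., None]       # HxWx1 alpha in [0, 1]
        background = np.full_like(image_bgr, 255)        # assumed plain white canvas
        # Pixel mapping: blend the matted garment with the background via the soft alpha.
        blended = image_bgr.astype(np.float32) * alpha + background.astype(np.float32) * (1.0 - alpha)
        return blended.astype(np.uint8)                  # the clothing canvas image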
13. The method of claim 10, wherein the performing image segmentation processing on the user-uploaded image to obtain the model mask image comprises:
invoking a preset target detector based on a second preset semantic prompt word to perform target detection on the user-uploaded image, to obtain second position information of a model in the user-uploaded image;
and performing image segmentation processing on the user-uploaded image based on the second position information to obtain the model mask image of the model.
14. A fitting pattern generation system, the system comprising:
a client configured to perform the fitting pattern generation method of any one of claims 1 to 4;
and a server configured to perform the fitting pattern generation method of any one of claims 5 to 13.
15. An electronic device, comprising: a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored in the memory to implement the method of any one of claims 1-13.
16. A computer-readable storage medium having computer-executable instructions stored therein, wherein the computer-executable instructions, when executed by a processor, implement the method of any one of claims 1-13.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311270584.XA CN117391805A (en) 2023-09-27 2023-09-27 Fitting pattern generation method, fitting pattern generation system, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN117391805A true CN117391805A (en) 2024-01-12

Family

ID=89464033

Country Status (1)

Country Link
CN (1) CN117391805A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination