CN114339434A - Method and device for displaying goods fitting effect - Google Patents

Method and device for displaying goods fitting effect

Info

Publication number
CN114339434A
CN114339434A
Authority
CN
China
Prior art keywords
goods
display data
virtual model
effect display
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011064416.1A
Other languages
Chinese (zh)
Inventor
颜盈盈
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN202011064416.1A
Publication of CN114339434A
Legal status: Pending

Abstract

One or more embodiments of the present disclosure provide a method and an apparatus for displaying the try-on effect of goods. The method may include: acquiring specification description information of goods and figure attribute information of a virtual model; and generating, according to the matching condition between the specification description information and the figure attribute information, effect display data for wearing the goods on the corresponding part of the virtual model.

Description

Method and device for displaying goods fitting effect
Technical Field
One or more embodiments of the present disclosure relate to the field of data processing, and in particular, to a method and an apparatus for displaying the try-on effect of goods.
Background
At the present stage, a goods provider and a goods demander can complete a transaction through a goods interaction platform or a live broadcast platform. For wearable goods such as clothes, shoes, and hats, the goods provider usually needs to show the wearing effect of the goods, so that the goods demander can view, compare, and purchase them.
On a goods interaction platform, in order to show how goods look on demanders of different body sizes, a goods provider usually has real-person models try the goods on and then displays photographs of the result. However, because each model's body size is fixed and unique, even using several real-person models only shows the wearing effect on those particular bodies, so the demander's need to see the wearing effect on a variety of different body sizes cannot be met.
On a live broadcast platform, the goods are usually tried on and shown by the anchor. Similarly, because the anchor's body size is also fixed and unique, the wearing effect after the anchor tries the goods on hardly reflects how the goods would look when worn by demanders (such as viewers) of other body sizes, and thus can hardly satisfy viewers' diverse needs to understand the wearing effect of the goods.
Disclosure of Invention
In view of this, one or more embodiments of the present disclosure provide a method and an apparatus for displaying the try-on effect of goods.
To achieve the above object, one or more embodiments of the present disclosure provide the following technical solutions:
according to a first aspect of one or more embodiments of the present specification, there is provided a method for displaying a try-on effect of an article, including:
acquiring specification description information of goods and figure attribute information of a virtual model;
and generating effect display data for wearing the goods to the corresponding part of the virtual model according to the matching condition of the specification description information and the figure attribute information.
According to a second aspect of one or more embodiments of the present specification, there is provided a method of displaying a try-on effect of an article, including:
acquiring effect display data, wherein the effect display data are generated according to the matching condition of the specification description information of goods and the figure attribute information of the virtual model;
and displaying the effect display data to present the display effect of wearing the goods to the corresponding part of the virtual model.
According to a third aspect of one or more embodiments of the present specification, there is provided a method for displaying a try-on effect of an article, including:
responding to a virtual try-on operation performed by a user, and acquiring specification description information of goods corresponding to the virtual try-on operation and figure attribute information of a virtual model;
generating effect display data according to the matching condition of the specification description information and the figure attribute information;
and displaying the effect display data so as to present the display effect of wearing the goods to the corresponding part of the virtual model to the user.
According to a fourth aspect of one or more embodiments of the present specification, there is provided an article recommendation method including:
identifying sample materials in an article interaction platform, wherein the sample materials are used for introducing sample articles provided by an article provider, and ownership of the sample materials belongs to other article providers different from the article provider;
generating effect display data for wearing the sample goods to the corresponding part of the virtual model;
providing the effect display data to the item provider.
According to a fifth aspect of one or more embodiments of the present specification, there is provided a method of displaying a try-on effect of an article, including:
responding to a fitting instruction sent by a live client to goods, and acquiring specification description information of the goods and figure attribute information of a virtual model, wherein live videos displayed by the live client are used for introducing the goods;
generating effect display data for wearing the goods to the corresponding part of the virtual model according to the matching condition of the specification description information and the figure attribute information;
and returning the effect display data to the live client so that the live client displays the effect display data in a live interface.
According to a sixth aspect of one or more embodiments of the present specification, there is provided a method of displaying a try-on effect of an article, including:
displaying a live video for introducing goods in a live interface;
under the condition that the fitting operation carried out aiming at the goods is detected, obtaining effect display data for fitting the goods to the corresponding part of the virtual model, wherein the effect display data is generated according to the matching condition of the specification description information of the goods and the figure attribute information of the virtual model;
and displaying the effect display data in the live broadcast interface.
According to a seventh aspect of one or more embodiments of the present specification, there is provided an article try-on effect display apparatus, including:
the information acquisition unit is used for acquiring specification description information of goods and figure attribute information of the virtual model;
and the data generating unit is used for generating effect display data for wearing the goods to the corresponding part of the virtual model according to the matching condition of the specification description information and the figure attribute information.
According to an eighth aspect of one or more embodiments of the present specification, there is provided an article try-on effect display device, comprising:
the data acquisition unit is used for acquiring effect display data, and the effect display data are generated according to the matching condition of the specification description information of goods and the figure attribute information of the virtual model;
and the data display unit is used for displaying the effect display data so as to display the display effect of wearing the goods to the corresponding part of the virtual model.
According to a ninth aspect of one or more embodiments of the present specification, there is provided an article try-on effect display device, comprising:
the information acquisition unit is used for responding to a virtual try-on operation performed by a user, and acquiring specification description information of goods corresponding to the virtual try-on operation and figure attribute information of a virtual model;
the data generation unit is used for generating effect display data according to the matching condition of the specification description information and the figure attribute information;
and the data display unit is used for displaying the effect display data so as to present the display effect of wearing the goods to the corresponding part of the virtual model to the user.
According to a tenth aspect of one or more embodiments of the present specification, there is provided an article recommendation device including:
the system comprises a picture identification unit, a picture identification unit and a display unit, wherein the picture identification unit is used for identifying sample materials in an article interaction platform, the sample materials are used for introducing sample articles provided by an article provider, and ownership of the sample materials belongs to other article providers different from the article provider;
the data generating unit is used for generating effect display data for wearing the sample goods to the corresponding part of the virtual model;
a data providing unit for providing the effect display data to the goods provider.
According to an eleventh aspect of one or more embodiments of the present specification, there is provided an article try-on effect display device, comprising:
the information acquisition unit is used for responding to a fitting instruction sent by a live client to goods, acquiring specification description information of the goods and figure attribute information of a virtual model, wherein live videos displayed by the live client are used for introducing the goods;
the data generation unit is used for generating effect display data for wearing the goods to the corresponding part of the virtual model according to the matching condition of the specification description information and the figure attribute information;
and the data returning unit is used for returning the effect display data to the live client so that the live client can display the effect display data in a live interface.
According to a twelfth aspect of one or more embodiments of the present specification, there is provided an article try-on effect display device, including:
the video display unit is used for displaying live videos used for introducing goods in a live interface;
the instruction sending unit is used for acquiring effect display data for wearing the goods to the corresponding part of the virtual model under the condition that a fitting operation performed on the goods is detected, wherein the effect display data are generated according to the matching condition of the specification description information of the goods and the figure attribute information of the virtual model;
and the data display unit is used for displaying the effect display data in the live interface.
In accordance with a thirteenth aspect of one or more embodiments of the present specification, there is provided an electronic device, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method of any one of the first to sixth aspects by executing the executable instructions.
According to a fourteenth aspect of one or more embodiments of the present specification, there is provided a computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the method according to any one of the first to sixth aspects.
Drawings
FIG. 1 is a schematic diagram of an architecture of an online shopping system according to an exemplary embodiment.
Fig. 2 is a flowchart of a method for displaying an effect of trying on an article according to an exemplary embodiment.
Fig. 3 is a flowchart of a method for displaying the goods try-on effect according to another exemplary embodiment.
Fig. 4 is a flowchart of a method for displaying an effect of trying on an article according to a third exemplary embodiment.
Fig. 5 is an interaction flowchart of a method for displaying an effect of trying on an article according to an exemplary embodiment.
FIG. 6 is a flow chart of a method for item recommendation provided by an exemplary embodiment.
Fig. 7 is an interaction flowchart of a method for displaying an effect of trying on an article according to a second exemplary embodiment.
Figs. 8-10 are illustrations of item fitting effects provided by one or more exemplary embodiments.
Fig. 11 is a flowchart of a method for displaying the goods try-on effect according to the fourth exemplary embodiment.
Fig. 12 is a flowchart of a method for displaying the goods try-on effect according to the fifth exemplary embodiment.
Fig. 13 is an interactive flowchart of a method for displaying an effect of trying on an article according to a third exemplary embodiment.
Fig. 14 is a schematic structural diagram of an apparatus according to an exemplary embodiment.
Figs. 15-19 are block diagrams of devices for displaying the goods try-on effect according to exemplary embodiments.
Fig. 20 is a schematic structural diagram of another apparatus provided in an exemplary embodiment.
FIG. 21 is a block diagram of an item recommendation device provided in an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with one or more embodiments of the present specification. Rather, they are merely examples of apparatus and methods consistent with certain aspects of one or more embodiments of the specification, as detailed in the claims which follow.
It should be noted that: in other embodiments, the steps of the corresponding methods are not necessarily performed in the order shown and described herein. In some other embodiments, the method may include more or fewer steps than those described herein. Moreover, a single step described in this specification may be broken down into multiple steps for description in other embodiments; multiple steps described in this specification may be combined into a single step in other embodiments.
Referring to fig. 1, a schematic diagram of the architecture of an online shopping system is shown. As shown in fig. 1, the system may include a network 10, a server 11, and a plurality of terminals, such as mobile phones 12 to 14.
The server 11 may be a physical server comprising a separate host, or a virtual server carried by a host cluster. During operation, the server 11 may run the server-side program of an application to implement the application's service functions. For example, when the server 11 runs the program of a goods interaction platform, it may be implemented as the server of that platform; correspondingly, the mobile phones 12 to 14 may be implemented as clients of the platform, where the mobile phone 12 may act as a seller client corresponding to a goods provider (hereinafter, the seller), and the mobile phones 13 to 14 may act as buyer clients corresponding to goods demanders (hereinafter, buyers). A buyer may then view the actual wearing effect of goods through a virtual try-on service provided by the goods interaction platform, or through a virtual try-on application independent of that platform. For another example, when the server 11 runs the program of a live broadcast platform, it may be implemented as the server of that platform; correspondingly, the mobile phones 12 to 14 may be implemented as clients of the platform, where the mobile phone 12 may act as an anchor client corresponding to the presenter (hereinafter, the anchor), and the mobile phones 13 to 14 may act as viewer clients corresponding to program viewers (hereinafter, viewers). The anchor may present goods in the live broadcast, and while watching the goods presented by the anchor, viewers may view the actual wearing effect of the goods and then purchase them.
The mobile phones 12 to 14 are just one type of terminal a user may use. In fact, the user may also use terminals such as tablet devices, notebook computers, personal digital assistants (PDAs), and wearable devices (e.g., smart glasses, smart watches), which is not limited by one or more embodiments of the present disclosure.
It should be noted that an application program for a client of the live broadcast platform may be pre-installed on the terminal, so that the client can be started and run on the terminal; of course, when a web-based client is adopted (for example, one built with HTML5 technology), the client can be obtained and run without installing a corresponding application on the terminal.
The network 10 over which the mobile phones 12 to 14 interact with the server 11 may include various types of wired or wireless networks.
In an online shopping scenario, a goods provider displays information about its goods on the goods interaction platform, and users learn about the goods by viewing this information, so as to inspect, compare, and purchase them.
As described in the Background, at the present stage the goods provider and the goods demander complete transactions through a goods interaction platform or a live broadcast platform, and for wearable goods such as clothes, shoes, and hats, the provider needs to show their wearing effect. On a goods interaction platform, real-person models with fixed, unique body sizes try the goods on for photographs, so the wearing effect on a variety of different body sizes cannot be shown; on a live broadcast platform, the anchor's try-on likewise reflects only one body size and can hardly satisfy viewers' diverse needs to understand the wearing effect of the goods.
Therefore, the goods display modes in the related art can hardly meet users' need to learn the actual wearing effect of goods on bodies with different figure attributes; as a result, the seller's cost of displaying goods is high and the buyer's shopping experience suffers.
To solve the above technical problems in the related art, this specification provides a method for displaying the goods try-on effect. Effect display data for wearing the goods on the corresponding part of a virtual model are generated according to the specification description information of the goods, the figure attribute information of the virtual model, and the matching condition between the two. When the effect display data are displayed, a vivid three-dimensional effect of the goods worn on the corresponding part of the virtual model can be presented for the user to view.
Fig. 2 is a flowchart of a method for displaying an effect of trying on an article according to an exemplary embodiment. As shown in fig. 2, the method applied to the server may include the following steps:
step 202, specification description information of goods and figure attribute information of the virtual model are obtained.
In this embodiment, the server serving as the execution subject of the scheme may be the server corresponding to a goods interaction platform that provides goods display and transaction functions, in which case the user may be a seller or buyer of goods; it may be the server corresponding to a live broadcast platform with goods display and transaction functions, in which case the user may be an anchor or a viewer; or it may be the server corresponding to a virtual fitting application that displays the wearing effect of goods, in which case the user is a user of a terminal with that application installed, such as a buyer of goods. This specification does not limit this.
The goods referred to in this specification are wearable goods. For example, when the wearing object of the goods (i.e., the physical figure corresponding to the virtual model) is a person (such as an adult or a child), the goods may be wearable items such as clothes, shoes, hats, socks, bags, and ornaments; when the wearing object is an animal (such as a cat or a dog), the goods may be wearable items such as chains, tethers, clothes, shoes, and hats; when the wearing object is an object (such as a vehicle or an electronic device), the goods may be wearable items such as a vehicle rain cover, an electronic-device protective case, or a screen film. Any such figure may be a simulation model close to the real figure, or an anthropomorphic cartoon model, which is not limited in this specification.
In one embodiment, the server may determine the goods in a variety of ways. For example, to acquire the specification description information of goods automatically, the server may identify the goods in a goods display picture by scanning it; the picture may be one uploaded and displayed by a seller in the goods interaction platform to introduce the goods, or one uploaded by a user through a client or browser page corresponding to the server. For another example, when the user performs an operation of specifying candidate goods in the client, the client may send a corresponding goods specifying instruction to the server, so that the server determines the goods specified by the user from that instruction. For another example, during live broadcasting, the server may treat the goods introduced in the live video, or those among them specified by the anchor or a viewer, as the goods above, so that their wearing effect is presented visually during the broadcast, which is convenient for viewers to watch and purchase.
In one embodiment, for any goods, the specification description information may be obtained in a variety of ways. For example, the server may identify the displayed content in a picture containing the goods information; the picture may be a goods home-page picture contained in the goods display tag, or a goods information picture displayed on the goods detail page. For another example, the server may extract the specification description information from the information display interface corresponding to the goods in the goods interaction platform, such as extracting it in text form from the goods detail page, thereby avoiding the recognition errors that picture recognition may introduce and improving accuracy. For another example, the user may provide the specification description information interactively, such as a seller uploading a comment or a buyer leaving a message; correspondingly, the server obtains the specification description information provided by the user.
In one embodiment, the specification description information of the goods may include size information. For example, for a jacket, the size information may include one or more of garment length, height, chest circumference, waist circumference, body type, and so on; if the specification description information of a jacket is "170/92A", it indicates that the jacket fits a height of 170 cm, a chest circumference of 92 cm, and body type A. For a protective case of an electronic device, the size information may be length, width, thickness, opening positions, and the like. Different goods may have correspondingly different specification description information, which this specification does not limit; it can be understood, though, that the richer the obtained specification description information is, the more accurate the subsequently generated effect display data will be.
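Purely as an illustrative sketch (not part of the patent disclosure), a size string such as "170/92A" could be parsed into structured fields before matching. The function name, field names, and accepted format below are assumptions; real platforms use richer size notations.

```python
import re

def parse_jacket_spec(spec: str) -> dict:
    """Parse a jacket size string such as '170/92A' into structured fields.

    Hypothetical helper: the field names and the regex are assumptions,
    not part of the patent; real platforms use richer size notations.
    """
    m = re.fullmatch(r"(\d+)/(\d+)([A-Z]?)", spec.strip())
    if m is None:
        raise ValueError(f"unrecognized size specification: {spec!r}")
    height, chest, body_type = m.groups()
    return {
        "height_cm": int(height),       # fits a wearer of this height
        "chest_cm": int(chest),         # fits this chest circumference
        "body_type": body_type or None, # e.g. 'A' for a standard build
    }
```

For instance, parse_jacket_spec("170/92A") yields height 170 cm, chest 92 cm, and body type A, matching the example in the paragraph above.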
In an embodiment, the server may determine one or more goods. For example, the specification description information of only one coat may be obtained; that of a coat and a pair of trousers may be obtained at the same time; or that of multiple coats, multiple pairs of trousers, or even other goods may be obtained simultaneously, which is not repeated here. When there are multiple goods, they may come from the same goods interaction platform, or from at least two different goods interaction platforms, so as to achieve a centralized, cross-platform display of wearing effects that is convenient for users to compare. For example, the specification description information of a jacket from shopping platform 1 and of trousers from shopping platform 2 may be acquired respectively, realizing a cross-platform try-on display, effectively simplifying the user's comparison operations, and improving the user experience.
In one embodiment, the virtual model may be obtained in a variety of ways. For example, it may be obtained by three-dimensional modeling of a physical model: the physical model (e.g., a real mannequin) is photographed from a plurality of consecutive or spaced observation angles to obtain a plurality of image frames, and a stereoscopic virtual model is built from those frames. For another example, the virtual model may be obtained by the server or another device performing three-dimensional modeling according to figure attribute information specified by the user: the user may enter his or her body size data in a virtual fitting application, the application uploads the data to the server, and the server performs three-dimensional modeling on the data to generate the corresponding virtual model. Concretely, the three-dimensional modeling (or three-dimensional rendering) may use a rendering engine based on CG (Computer Graphics) technology from the related art, so that the effect presentation of virtual fitting can rely on mature CG technology; for the specific computation, reference may be made to the related disclosures on three-dimensional rendering, which are not repeated here. A virtual model obtained through three-dimensional modeling has a stronger sense of depth, and display attributes such as its posture and viewing angle can be changed arbitrarily. This approach therefore achieves a more lifelike display of the virtual model and the worn goods; moreover, since it requires no large amount of training data, it achieves faster modeling, shortens the user's wait for rendering, and further improves the user experience.
For example, the virtual model may also be generated automatically by a preset model generation algorithm according to the figure attribute information specified by the user: after the user provides figure attribute information such as height and weight, the corresponding virtual model is generated automatically by the preset method. This avoids the high computing-power requirement of three-dimensional modeling and helps realize a lightweight modeling service that reduces the computational load on the server.
In an embodiment, the figure attribute information of the virtual model may be its default figure attribute information; that is, the virtual model may be a model of default body size, for example a general-purpose virtual model preset by the server for figures such as men, women, children, animals, and objects. Correspondingly, the figure attribute information of the general-purpose virtual model may be obtained by statistics over the figure attribute information of a large number of sample figures, so as to represent the typical figure of each kind of object; in this case, the figure attribute information the server needs to obtain is the default figure attribute information of the virtual model. Specifically, the default figure attribute information may be pre-stored in the local storage space of the server, or the server may request it from another device, which is not limited in this specification. Alternatively, the user may send a model configuration instruction carrying custom-input figure attribute information, in which case the figure attribute information of the virtual model is personalized figure attribute information. Or the server may obtain the user's historical purchase records, determine the user's historical figure attribute information from the specification description information of historical goods of the same type as the present goods, and use it as the figure attribute information of the virtual model corresponding to the user, so as to ensure the accuracy of the figure attribute information.
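As a minimal sketch of the source-selection logic just described (illustrative only, not part of the patent text), the server could resolve the figure attribute information in priority order: user-configured attributes first, then attributes inferred from historical purchase records, then the preset defaults of the general-purpose model. All names below are assumptions.

```python
def resolve_figure_attributes(custom=None, history=None, default=None):
    """Return the first available figure-attribute source, in priority order:
    custom (user-configured) > history (inferred from past purchases of the
    same goods type) > default (preset general-purpose model attributes).
    Hypothetical sketch; the priority order is an assumption."""
    for source in (custom, history, default):
        if source:
            return source
    raise ValueError("no figure attribute information available")
```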
In an embodiment, the stature attribute information may include the stature size information of the virtual model. For a virtual model corresponding to a human figure such as an adult or child, the stature size information may include at least one of height, weight, shoulder width, chest circumference, waist circumference, hip circumference, leg length, hand length, head length, and so on; for a virtual model corresponding to an animal figure such as a cat or dog, it may include at least one of height, weight, body length, and the like; and for a virtual model corresponding to an object figure such as a vehicle, it may include at least one of body length, body width, body height, rear-view mirror position, and the like.
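As a concrete illustration, stature attribute records like those described above might be represented as follows. This is a minimal Python sketch; the class name, the size keys, and the `merge_attributes` helper are all hypothetical and not taken from the patent.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class StatureAttributes:
    """One stature attribute record, keyed by figure type (hypothetical layout)."""
    figure_type: str                          # e.g. "adult", "child", "cat", "vehicle"
    sizes: Dict[str, float] = field(default_factory=dict)

    def get(self, key: str, default: Optional[float] = None) -> Optional[float]:
        return self.sizes.get(key, default)

# Default attributes of a general-purpose model, e.g. averaged over many samples.
default_adult = StatureAttributes("adult", {"height": 170.0, "weight": 65.0, "chest": 90.0})

def merge_attributes(default: StatureAttributes, custom: Dict[str, float]) -> StatureAttributes:
    """Personalized values from a model-configuration instruction override defaults."""
    merged = dict(default.sizes)
    merged.update(custom)  # custom values take precedence
    return StatureAttributes(default.figure_type, merged)

# A user who only supplies a height keeps the other defaults.
user_attrs = merge_attributes(default_adult, {"height": 180.0})
```

Keys not supplied by the user fall back to the general-purpose defaults, matching the default/personalized split described in the text.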
Step 204, generating effect display data for wearing the goods on the corresponding part of the virtual model according to how the specification description information matches the stature attribute information.
In one embodiment, CG technology may be used to perform three-dimensional modeling and obtain effect display data showing the goods worn on the corresponding part of the virtual model. The modeling process matches the specification description information of the goods against the stature attribute information of the virtual model and synthesizes the appearance display data of the virtual model wearing the goods. For convenience, in what follows the virtual model before the goods are put on is collectively called the "model to be worn," and the virtual model with the goods on is collectively called the "worn model."
In an embodiment, there may be multiple items of goods, and the server may generate independent effect display data for wearing each item on the corresponding part of the virtual model. For a jacket and a hat, for example, it may generate independent effect display data for the worn model wearing only the jacket and independent effect display data for the worn model wearing only the hat, so that the user can view the display effect of each item separately. Alternatively, the server may generate merged effect display data for wearing several items on the corresponding parts of the virtual model at the same time: continuing the jacket-and-hat example, it may generate merged effect display data for the worn model wearing both the jacket and the hat, so that the user can view the combined effect of wearing them together.
In addition, when there are multiple items, they can be combined automatically or according to the user's selection instruction, and corresponding effect display data can be generated for each combination so that the user can view and compare the wearing effects of the various combinations. The effect display data may be effect display images, such as images of the worn model generated under different display attributes (viewing angle, zoom level, and so on); or an effect display video of the worn model generated along a preset observation trajectory, whose viewpoint changes dynamically along that trajectory so that the video presents the wearing effect completely and from all directions; or the worn model itself, so that the user can conveniently view its display effect in three-dimensional space.
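The independent, merged, and multi-combination effect data described above amount to enumerating subsets of the goods and generating effect data per subset; a minimal sketch (the function name and the string representation of goods are assumptions for illustration):

```python
from itertools import combinations

def wearing_combinations(goods):
    """Yield every non-empty subset of goods: singletons correspond to
    independent effect data, larger subsets to merged effect data."""
    for r in range(1, len(goods) + 1):
        for combo in combinations(goods, r):
            yield combo

combos = list(wearing_combinations(["jacket", "hat"]))
# combos == [("jacket",), ("hat",), ("jacket", "hat")]
```

In practice the server would filter these subsets by the user's selection instruction rather than always rendering all of them.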
In an embodiment, the user may customize in advance the scene in which the virtual model is placed. The scene may include the model's own attributes such as skin color, makeup, expression, and posture, as well as environmental elements such as props and backgrounds used by the model, which are not detailed here. The server can then obtain the scene attribute information of the virtual model's scene and, according to that information, generate effect display data in which the goods are worn on the corresponding part of the virtual model and the model is placed in the scene, producing a more lifelike try-on effect.
In an embodiment, the server can generate at least one effect display picture of the goods worn on the corresponding part of the virtual model according to preset display attributes, so that effect display data can be produced quickly. Alternatively, when the virtual model is a three-dimensional model produced by three-dimensional modeling, the server can respond to a parameter control instruction sent by the user for that model: it adjusts the display parameters of the three-dimensional model wearing the goods and generates the corresponding effect display data from the adjusted parameters. The display effect of the three-dimensional model is thus adjusted according to the user's intent; the user can freely change its zoom level (for example, zooming in on details) or its display angle (for example, rotating it to view it from all sides), realizing an interactive process between the user and the three-dimensional model.
In an embodiment, when the matching degree between the specification description information and the stature attribute information is below a preset matching-degree threshold, the server may obtain a candidate item whose specification description information does match the stature attribute information, and recommend that candidate item to the user corresponding to the virtual model. The matching degree may be computed from the difference between the specification description information and the stature attribute information. For example, if a jacket's specification description information indicates that it suits a man 170 cm tall, then for a virtual model A (corresponding to user A) 180 cm tall the matching degree is M_A = |180 − 170| / 170 × 100% ≈ 5.88%, while for a virtual model B (corresponding to user B) 165 cm tall it is M_B = |165 − 170| / 170 × 100% ≈ 2.94%; user B is clearly better suited to the jacket than user A. If the matching-degree threshold preset for the jacket is 3%, another jacket better suited to user A's height can be acquired and recommended to user A. The matching-degree threshold may also be set with separate positive and negative bounds according to the actual situation; because buyers usually accept clothing that runs slightly large more readily than clothing that runs slightly small, the positive bound of the threshold range may be larger than the negative bound.
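The matching-degree calculation and the asymmetric threshold described above can be sketched as follows. The tolerance values are illustrative, not specified by the patent, and the function names are assumptions.

```python
def match_degree(suitable: float, actual: float) -> float:
    """Absolute relative deviation, as in the jacket example:
    M = |actual - suitable| / suitable * 100%."""
    return abs(actual - suitable) / suitable * 100.0

def fits(suitable: float, actual: float,
         runs_small_tol: float = 3.0, runs_large_tol: float = 5.0) -> bool:
    """Asymmetric thresholds (illustrative values): when the model is taller
    than the garment's suitable height, the garment runs small, which buyers
    tolerate less than a garment that runs large."""
    deviation = (actual - suitable) / suitable * 100.0
    if deviation > 0:                       # garment runs small for this model
        return deviation <= runs_small_tol
    return -deviation <= runs_large_tol     # garment runs large for this model

match_degree(170, 180)  # ~5.88: user A, outside a 3% bound
match_degree(170, 165)  # ~2.94: user B, within the bound
```

With these bounds, user A (180 cm) would trigger the candidate-item recommendation while user B (165 cm) would not, matching the example in the text.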
In addition, when the matching degree between the specification description information and the stature attribute information is below the preset matching-degree threshold, or when no candidate item whose specification description information matches the virtual model's stature attribute information can be obtained, the server may send a corresponding reminder message to the user so that the user is aware of the match situation. Alternatively, the server may send the user a wearing suggestion corresponding to the matching degree, such as "This piece fits well!" or "This one runs a little small; please choose carefully," which is not detailed further.
Corresponding to the embodiment described above with respect to fig. 1, this specification further proposes a method for displaying the goods try-on effect; the corresponding flowchart is shown in fig. 3. The method is applied to a client and may include the following steps:
step 302, obtaining effect display data, wherein the effect display data is generated according to the matching condition of the specification description information of the goods and the figure attribute information of the virtual model.
Step 304, displaying the effect display data to present the display effect of the goods worn on the corresponding part of the virtual model.
In this embodiment, the client may obtain the effect display data generated and returned by the server and then show it to the user, presenting the actual try-on effect of the goods worn on the corresponding part of the virtual model so that the user can view, compare, or purchase. For the generation and display of the related data, refer to the description of the corresponding embodiment in fig. 1; details are not repeated here.
In an embodiment, when the client detects that the user performs a parameter control operation on the worn model, it may send a corresponding parameter control instruction to the server, so that the server adjusts the worn model's display parameters according to that instruction and generates adjusted display data for the three-dimensional model from the adjusted parameters; the client then receives and shows the adjusted display data returned by the server, presenting the adjusted display effect. In this way, the user can control display parameters such as the worn model's zoom level and display angle through real-time parameter control operations, and so observe the actual wearing effect of the goods more fully and in finer detail.
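A minimal sketch of the server-side adjustment step, assuming the parameter control instruction carries zoom and rotation deltas; the field names and the clamping range are hypothetical:

```python
def apply_control(params: dict, instruction: dict) -> dict:
    """Adjust the worn model's display parameters (zoom level, display angle)
    according to a parameter control instruction from the client."""
    adjusted = dict(params)
    if "zoom_delta" in instruction:
        new_zoom = adjusted.get("zoom", 1.0) + instruction["zoom_delta"]
        adjusted["zoom"] = max(0.1, min(10.0, new_zoom))  # clamp to a sane range
    if "rotate_delta_deg" in instruction:
        adjusted["angle_deg"] = (adjusted.get("angle_deg", 0.0)
                                 + instruction["rotate_delta_deg"]) % 360.0
    return adjusted
```

The adjusted parameter set would then drive re-rendering of the worn model before the result is returned to the client.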
In fact, the generation and display of the effect display data may both be completed by the client; that is, the client generates and shows the effect display data locally. Accordingly, this specification also provides a method for displaying the goods try-on effect, whose flowchart is shown in fig. 4. The method is applied to the client and may include the following steps:
Step 402, responding to a virtual try-on operation performed by the user, and acquiring the specification description information of the goods and the stature attribute information of the virtual model corresponding to that operation.
Step 404, generating effect display data according to how the specification description information matches the stature attribute information.
Step 406, displaying the effect display data to present to the user the display effect of the goods worn on the corresponding part of the virtual model.
For the specific process by which the client generates and shows the effect display data, refer to the descriptions of the corresponding embodiments in fig. 1 or fig. 2; they are not repeated here.
In an embodiment, the user may perform a parameter control operation on the worn model in the client; when the client detects the operation, it adjusts the worn model's display parameters accordingly, then generates and shows adjusted display data for the three-dimensional model from the adjusted parameters, presenting the adjusted display effect to the user.
Corresponding to the above embodiments: on a goods interaction platform, the server that generates the effect display data may provide the generated data to a seller client for the seller's use, or to a buyer client so that it can be shown to the buyer. On a live-streaming platform, the server that generates the effect display data may provide the generated data to an anchor client and audience clients for display; alternatively, an anchor client or audience client that generates effect display data itself may show that data in the live interface. This is described below with reference to the accompanying drawings.
Fig. 5 is an interaction flowchart of a method for displaying an item fitting effect according to an exemplary embodiment. The display process of the goods try-on effect under the scene can comprise the following steps:
step 501, the server scans goods display materials in the goods interaction platform to identify sample materials.
It should be noted that the seller is the provider of the wearable goods; the specific forms of wearable goods are described in the foregoing embodiments and are not repeated here. The server that generates the effect display data may be a server of the goods interaction platform, in which case the seller client is a client used by a goods provider on that platform, and the goods try-on function is integrated into the platform and provided to sellers. Alternatively, the server may be the server of a virtual try-on application independent of the goods interaction platform, in which case the seller client is the seller client of that application: the try-on function is provided to sellers as a standalone virtual try-on application, which may exchange data with the platform's client or server while it runs. This embodiment is described using the scenario in which the try-on function is integrated into the goods interaction platform and provided to sellers.
For goods sold on a goods interaction platform, the goods provider (also called the seller or merchant) generally needs to create display materials for the goods in advance in order to show their wearing effect to the goods demander (also called the buyer or consumer). Take the picture materials of clothing goods as an example. At present, when an original merchant wants to create a set of pictures for a garment, it must first pay a substantial model fee to hire a real model, with limited choice over the model's appearance and stature; during shooting it must set up scenes, adjust lighting, and repeatedly change the model's clothes and poses; and after shooting, a professional is usually needed to retouch the pictures before the final materials are obtained. Creating goods display materials in this way is therefore complex, time-consuming, labor-intensive, and expensive.
Because of this, some merchants simply steal display materials created by original merchants for their own use, which is infringement; such merchants often face the risk of rights-protection actions by the original merchants or penalties from regulators, and when those occur the losses are usually large. To address these problems, the display method of this embodiment identifies stolen goods display materials by scanning the materials, then provides the infringing merchants with effect display data generated from the specification description information of the sample goods in those materials and the stature attribute information of a virtual model. On one hand, models of various stature sizes can exhibit the corresponding try-on effects; on the other hand, the merchants' material-creation cost is significantly reduced, helping them operate compliantly. The following description uses picture materials as an example.
In an embodiment, the server may identify stolen pictures on the goods interaction platform with the help of a preset copyright-protection database, then generate effect display data showing the try-on effect of the goods in each picture and provide it to the seller (corresponding to steps 501-505); a seller who is satisfied may continue to use the goods try-on function. Sellers who steal other sellers' pictures usually do so to reduce picture-production costs, so identifying the sample materials associated with picture theft on the platform selects exactly the sellers who need a virtual try-on function. This helps reduce sellers' data-generation costs and makes it easier to promote the try-on function provided by this scheme efficiently.
Alternatively, a seller already using the goods try-on function may upload display pictures of its own goods in the client or browser page corresponding to the server, so that the server can identify the sample goods in the pictures by scanning them. The server may of course also scan non-image multimedia resources such as videos and text to determine the sample goods, or determine the goods in other ways; this specification does not limit the approach. In addition, the server may classify and scan goods display pictures along multiple dimensions such as platform, brand, region, merchant, goods type, and time of sale.
In one embodiment, in order to acquire the specification description information of goods automatically, the server can identify sample materials among the goods display materials by scanning them. For example, the goods display material may be a goods display picture uploaded and shown by a seller on the goods interaction platform; when the server identifies that the ownership of a display picture used by one seller belongs to a different seller, it may determine that picture to be a sample picture and the corresponding goods to be sample goods. The goods display material may also be a video, a gif animation, specific text carrying a copyright identifier, and so on, which this application does not limit.
Specifically, taking goods display pictures as an example: when a display picture on the goods interaction platform has been registered in advance by some seller on the original platform, it can be determined whether the seller using the picture is the seller who registered it. If the seller using the picture is the seller who registered it, the user of the picture has ownership of it; otherwise, the user of the picture does not have ownership, the picture can be judged to have been stolen by that seller, the picture can further be determined to be a sample picture, and the goods in the picture are sample goods.
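The ownership check just described can be sketched as a lookup against a registration table. This is an assumed implementation: the patent does not specify how pictures are fingerprinted, so a content hash is used here purely for illustration.

```python
import hashlib

registry = {}  # picture fingerprint -> seller_id that registered it

def register_picture(image_bytes: bytes, seller_id: str) -> None:
    """Record that seller_id registered this picture on the original platform."""
    registry[hashlib.sha256(image_bytes).hexdigest()] = seller_id

def is_sample_picture(image_bytes: bytes, using_seller: str) -> bool:
    """True when the picture was registered by a different seller, i.e. the
    current user has no ownership and the picture is treated as a sample."""
    owner = registry.get(hashlib.sha256(image_bytes).hexdigest())
    return owner is not None and owner != using_seller
```

A real platform would likely use perceptual hashing or image matching rather than an exact byte hash, since stolen pictures are often re-encoded or cropped.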
Step 502, the server side obtains specification description information of the sample goods.
After the sample goods are determined, the server side can acquire the specification description information of the sample goods in various ways.
In one embodiment, the specification description information of any sample item can be obtained in several ways. For example, the display content of a picture containing item information may be recognized to obtain the sample item's specification description information; such a picture may be the item's homepage picture in its display tag, or a display picture shown on the item detail page. As another example, the server may extract the specification description information from the information display interface corresponding to the sample item on the goods interaction platform, for instance extracting text-form specification description information from the item detail page, which avoids the recognition errors possible with picture recognition and improves the accuracy of the acquired information. As yet another example, the seller may enter and upload the sample item's size information directly, and the server uses the uploaded size information as the sample item's specification description information.
The specification description information of the goods may include size information. For jacket-type goods, for example, the size information may include one or more of garment length, suitable height, chest circumference, waist circumference, body type, and so on; if a jacket's specification description information is "160/86A", it indicates a suitable height of 160 cm, a chest circumference of 86 cm, and body type A. For a protective case of an electronic device, the size information may be length, width, thickness, opening positions, and so on. Different goods may have correspondingly different specification description information, which this specification does not limit; it will be appreciated, though, that the richer the acquired specification description information, the more accurate the subsequently generated effect display data for the wearing effect.
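A size string like the "160/86A" example above can be parsed into structured size information. A minimal sketch, assuming the simple height/chest/body-type format shown in the text (real platforms may use richer encodings):

```python
import re

def parse_jacket_spec(spec: str) -> dict:
    """Parse a jacket size string such as "160/86A" into suitable height (cm),
    chest circumference (cm), and body type."""
    m = re.fullmatch(r"(\d+)/(\d+)([A-Z])", spec.strip())
    if m is None:
        raise ValueError(f"unrecognized spec: {spec!r}")
    return {"height": int(m.group(1)),
            "chest": int(m.group(2)),
            "body_type": m.group(3)}

parse_jacket_spec("160/86A")  # {'height': 160, 'chest': 86, 'body_type': 'A'}
```

Structured fields like these are what the matching step would compare against the virtual model's stature attribute information.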
Step 503, the server side obtains the default stature attribute information of the universal virtual model.
In this embodiment, the stature attribute information of the virtual model may be the model's default stature attribute information; that is, the virtual model may be a model with default stature sizes, such as a general-purpose virtual model preset by the server for figures of several types (male, female, child, animal, article, and so on). The stature attribute information of such a general-purpose model may be obtained statistically from the stature attribute information of a large number of sample figures, so that it represents the typical stature of each figure type; this general-purpose model's stature attribute information is then the default stature attribute information the server needs to acquire.
The default stature attribute information may include the stature size information of the general-purpose virtual model. For a general-purpose model corresponding to a human figure such as an adult or child, the stature size information may include at least one of height, weight, shoulder width, chest circumference, waist circumference, hip circumference, leg length, hand length, head length, and so on; for one corresponding to an animal figure such as a cat or dog, it may include at least one of height, weight, body length, and the like; and for one corresponding to an object figure such as a vehicle, it may include at least one of body length, body width, body height, rear-view mirror position, and the like.
Specifically, the default stature attribute information may be pre-stored in the server's local storage, or the server may request it from another device; this specification does not limit the approach.
And step 504, the server generates effect display data.
The server can obtain a virtual model wearing the sample goods, i.e. the worn model, through three-dimensional modeling. Specifically, a CG-based three-dimensional rendering engine from the related art may be used to perform the three-dimensional modeling (or three-dimensional rendering), so that mature CG technology presents the virtual fitting effect; for the specific computation, refer to the three-dimensional rendering techniques disclosed in the related art, which are not repeated here. A worn model obtained through three-dimensional modeling has a strong sense of depth, and display attributes such as the model's pose and angle can be changed at will, producing a more lifelike virtual model and try-on effect; and since no training data is needed, modeling is fast, rendering time is shortened, and the worn model is generated more efficiently.
Alternatively, the server can generate the worn model automatically through a preset model generation algorithm, which avoids the high computing-capacity demands of three-dimensional modeling, facilitates a lightweight modeling service, and reduces the computational load on the server. The model generation algorithm can be trained in advance on a large amount of data and deployed on the server for it to call, ensuring that the worn model is generated efficiently.
For the worn model, the server may successively generate effect display images under different display attributes such as viewing angle and zoom level; these effect display images are then the effect display data corresponding to the worn model. Alternatively, the server can generate an effect display video of the worn model along a preset observation trajectory, whose viewpoint changes dynamically along that trajectory so that the video presents the wearing effect of the worn model completely and from all directions; the effect display video is then the effect display data corresponding to the worn model. Effect display data in other forms, such as gif animations, may of course also be generated, which this application does not limit.
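One way to realize the preset observation trajectory above is to sample camera poses on a circle around the worn model, with each pose driving one rendered frame. This is an assumed sketch: the patent does not specify the trajectory shape, and the pose fields and default radius/height are illustrative; the rendering itself would be delegated to the CG engine.

```python
import math

def orbit_trajectory(n_frames: int, radius: float = 2.0, height: float = 1.6):
    """Sample n_frames camera poses evenly spaced on a circular orbit
    around the worn model (assumed to stand at the origin)."""
    poses = []
    for i in range(n_frames):
        angle = 2 * math.pi * i / n_frames
        poses.append({
            "x": radius * math.cos(angle),
            "y": radius * math.sin(angle),
            "z": height,
            "yaw_deg": math.degrees(angle) % 360.0,  # camera keeps facing the model
        })
    return poses

frames = orbit_trajectory(8)  # 8 viewpoints evenly spaced around the model
```

Rendering one image per pose and concatenating them yields the all-around effect display video described in the text.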
And 505, the server provides effect display data to the seller client.
The server provides the effect display data generated by the above process to the seller client, so that the seller client can show the data to the seller and present to buyers the actual try-on effect of the sample goods worn on the corresponding part of the virtual model. Meanwhile, the server may also send a picture-theft notice to the seller client, informing the seller that its theft of the sample picture has been identified.
In an embodiment, the seller may replace the sample picture on the goods interaction platform with the received effect display data, thereby trying out the effect display data provided by the server. Further, if the seller views the try-on effect and is satisfied with it, the seller may subscribe to the goods try-on function provided by the server, and the process proceeds to step 506; if the seller does not subscribe to the try-on function, the execution of this interaction flow terminates.
Step 506, the seller client sends a model customization instruction to the server.
And 507, the service end generates a custom virtual model customized by the seller.
At any time after subscribing to the goods try-on function, the seller can create, through an application or browser page provided by the server, a custom virtual model suited to the goods it sells.
In an embodiment, the seller may send a model customization instruction to the server through the seller client, specifying to the server the relevant information of the custom virtual model to be built; the server may then perform three-dimensional modeling with CG technology according to the received instruction to generate the custom virtual model. For example, the seller may specify the figure the custom model should take, such as human figures of adults or children, animal figures like cats and dogs, or object figures like mobile phones and vehicles. Further, the custom virtual model generated by the server may be a realistic model of the figure the seller specified, an anthropomorphic cartoon version of that figure, and so on.
Further, the seller can also specify stature size information for the custom virtual model. For a custom model corresponding to a human figure such as an adult or child, the stature size information may include at least one of height, weight, shoulder width, chest circumference, waist circumference, hip circumference, leg length, hand length, head length, and so on; for one corresponding to a pet figure such as a cat or dog, it may include at least one of height, weight, body length, coat color, and the like; and for one corresponding to an object figure such as a vehicle, it may include at least one of body length, body width, body height, rear-view mirror position, and the like.
In an embodiment, the seller may further customize the scene in which the custom virtual model is placed. The scene may include the model's own attributes such as skin color, makeup, expression, and posture: makeup may cover hairstyle, eyebrow shape, eyes, lip shape, lip color, face shape, and so on, and posture may cover standing, walking, sitting, lying, and so on, which are not detailed here. The scene may also include environmental elements such as the model's accessories and background information, for example whether the model wears glasses or a watch, and office, home, or outdoor settings, which are not enumerated one by one.
During customization, the server can offer multiple preset options for each model customization item for the seller to choose from directly, and any customization item can also provide a custom input control so that the user can enter a value of their own; for example, the user may pick one of several skin colors provided by the server, or select an arbitrary color as the skin color in a custom skin-color picker, which is not detailed further.
In another embodiment, with the seller's authorization, the server may determine the optimal stature size information from the historical sales records of the goods the seller sells (for example, the stature size information of the variant bought by the largest number of purchasers), and then generate the corresponding custom virtual model from that size information.
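The "optimal stature size" heuristic just described, picking the specification bought by the most purchasers, can be sketched as follows; the record format (one spec string per purchase) is an assumption for illustration.

```python
from collections import Counter

def best_selling_spec(sale_records):
    """sale_records: iterable of spec strings, one per purchase.
    Returns the specification with the largest number of purchasers."""
    counts = Counter(sale_records)
    spec, _count = counts.most_common(1)[0]
    return spec

best = best_selling_spec(["160/86A", "165/88A", "160/86A"])
# best == "160/86A"
```

The returned specification would then seed the stature size information of the generated custom virtual model.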
For example, a seller selling men's goods can customize a male virtual model according to the sold goods, a seller selling pet articles can customize a pet virtual model corresponding to a pet image, and the like; the same seller can also customize a plurality of custom virtual models according to preset rules. In addition, after the server completes the custom virtual model, a notification message can be sent to the seller client to inform the seller that the custom virtual model is available for goods try-on.
It should be understood that generating the customized virtual model in steps 506-507 is optional: the seller may not generate a customized virtual model and instead use only the general virtual model provided by the server, which is not limited in this specification.
At step 508, the seller specifies specification description information for the item.
After the customization of the virtual model is completed, the seller can specify to the server the specification description information, such as size information, corresponding to the goods to be sold. The seller can upload a picture containing the specification description information, and the server can correspondingly identify the display content in the picture to obtain the specification description information of the goods. The seller can also upload the specification description information in text form, in which case the server can acquire it directly. The seller can also directly designate the goods on the goods interaction platform, for example through the link of a goods display page, the goods name, the SKU (Stock Keeping Unit) number of the goods and the like, and the server correspondingly acquires the specification description information of the designated goods according to this information.
In step 509, the server obtains the stature attribute information of the customized virtual model.
When the seller specifies the specification description information of a certain goods, the seller can also specify a general virtual model and/or a custom virtual model for trying on the goods, and correspondingly, the server can locally store the figure attribute information of the general virtual model and/or the custom virtual model, namely the default figure attribute information and/or the seller custom figure attribute information.
In addition, under the condition that a plurality of specified goods and a plurality of virtual models exist simultaneously, the seller can further specify the virtual models corresponding to the goods respectively; or, in the case that the seller is not specified, the service end may determine, according to the type of the item, a virtual model matching the item from the general virtual model and the custom virtual model corresponding to the seller, and of course, may ask or remind the seller as necessary.
Step 510, the server generates effect display data.
The server generates effect display data for wearing the goods on the corresponding part of the virtual model according to the goods specification information of the goods specified by the seller and the figure attribute information of the virtual model corresponding to the goods. Specifically, the server can use CG (computer graphics) technology to merge the goods specification information and the figure attribute information through three-dimensional rendering, thereby generating a virtual model wearing the goods, i.e. a worn model, and generating the corresponding effect display data from the worn model. The effect display data may be effect display images, such as images generated from display attributes corresponding to different observation angles, zoom degrees and the like of the worn model; the generated effect display images may be added to an effect gallery corresponding to the goods, or, if the goods do not yet have an effect gallery, the server may create one. Alternatively, the effect display data may be an effect display video generated for the worn model according to a preset observation track, in which the observation angle changes dynamically along the track, so that the video can display the wearing effect of the worn model from all directions. Alternatively, the worn model itself can be used directly as the effect display data corresponding to the goods, so that the user can conveniently view the three-dimensional display effect of the worn model (i.e. the goods try-on effect in three-dimensional form).
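The image-gallery variant of the effect display data above can be sketched as a loop over observation angles of the worn model. All class and function names here are illustrative assumptions; a real implementation would invoke a 3D (CG) renderer where this sketch merely records the display attributes.

```python
from dataclasses import dataclass

@dataclass
class WornModel:
    """Hypothetical result of merging item specs onto a virtual model."""
    model_id: str
    item_ids: tuple

@dataclass
class EffectImage:
    """One effect display image, identified by its display attributes."""
    model_id: str
    angle_deg: int
    zoom: float

def generate_effect_gallery(worn, angles=(0, 90, 180, 270), zoom=1.0):
    # One display image per observation angle of the worn model.
    return [EffectImage(worn.model_id, a, zoom) for a in angles]

worn = WornModel("model-B", ("jacket-1",))
gallery = generate_effect_gallery(worn)
print([img.angle_deg for img in gallery])  # [0, 90, 180, 270]
```

An effect display video could be produced the same way by sampling many angles along a preset observation track instead of four fixed ones.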
In addition, when there are multiple goods, the server can generate independent effect display data for wearing each item on the corresponding part of the virtual model. For example, for a jacket and a hat, independent effect display data can be generated for the worn model when the virtual model wears only the jacket, and likewise when it wears only the hat, so that the user can conveniently view the independent display effect of each item. Alternatively, the server may also generate merged effect display data for wearing multiple goods on corresponding parts of the virtual model at the same time. Still taking goods including a jacket and a hat as an example, the server may generate merged effect display data for the worn model when the virtual model wears both the jacket and the hat, thereby facilitating the user's viewing of the merged display effect of the goods.
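The independent-versus-merged rendering choice above reduces to enumerating which wearing combinations to render. A minimal sketch, with assumed names:

```python
def wear_combinations(item_ids, include_merged=True):
    """Return the wearing combinations to render: one per single item,
    plus optionally one with all items worn together."""
    plans = [[item] for item in item_ids]
    if include_merged and len(item_ids) > 1:
        plans.append(list(item_ids))
    return plans

print(wear_combinations(["jacket", "hat"]))
# [['jacket'], ['hat'], ['jacket', 'hat']]
```

Each returned combination would then be passed to the rendering step to produce one set of effect display data.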
In step 511, the server returns the effect display data to the client.
In step 512, the client uses the effect display data.
After the server generates the effect display data, the effect display data can be returned to the seller client so that the seller can process it accordingly. For example, the seller client may show the received effect display image to the seller, or play the received effect display video for the seller to view.
Alternatively, the seller client can show the worn model to the seller, thereby presenting a three-dimensional display effect of the goods at the corresponding position of the corresponding virtual model, i.e. a three-dimensional goods try-on effect. Correspondingly, while viewing the worn model in three-dimensional form, the seller can adjust display parameters such as the zoom degree (e.g. magnifying to view the wearing details) and the display angle (e.g. rotating to view the display effect from all directions) of the worn model by dragging the mouse, sliding a finger and the like, so that the display effect of the worn model is adjusted through this interaction and the goods try-on effect can be viewed more fully and in detail. In addition, if the seller is not satisfied with the current try-on effect of the worn model, the combination of each item and the model can be further adjusted through the seller client until a try-on effect satisfactory to the seller is presented.
Alternatively, the seller can directly associate the effect display image or effect display video with the corresponding goods and display it in the detail page of the goods, so that a prospective buyer can learn the virtual wearing effect of the goods by viewing the vivid effect display data of the try-on effect. Compared with the picture-making approach in the related art of having a real model try on the goods and photographing the result, the production cost of the effect display data is greatly reduced, and three-dimensional wearing effects for various figure attribute information can be realized through various combinations of goods and different models, thereby encouraging the buyer to purchase the goods.
Corresponding to the embodiment shown in fig. 5, the present specification further provides a method for displaying an item fitting effect, applied to a server of an item interaction platform or a server corresponding to a virtual fitting application providing an item fitting function. Referring to the flowchart shown in fig. 6, the method may include the following steps:
step 602, identifying sample materials in the goods interaction platform, wherein the sample materials are used for introducing sample goods provided by a goods provider, and ownership of the sample materials belongs to other goods providers different from the goods provider.
Step 604, generating effect display data for wearing the sample goods to the corresponding part of the virtual model.
Step 606, providing the effect display data to the goods provider.
In an embodiment, the server may obtain the specification description information of the sample goods and the stature attribute information of the virtual model, and then generate the effect display data for wearing the sample goods to the corresponding portion of the virtual model according to the matching condition of the specification description information and the stature attribute information, and a specific generation process may refer to the description of the embodiment shown in fig. 5, which is not described herein again.
In one embodiment, the specification description information of the sample goods may take multiple forms, for example specification description information extracted from the sample material. Specifically, when the material is a picture, the server can identify the picture content and extract the specification description information from it; when the material is a video, the server can extract video frame images from the video, identify the image content of each frame in turn, and extract the specification description information. Alternatively, the specification description information may be extracted from the information display interface corresponding to the sample goods in the goods interaction platform. Of course, the specification description information of the same item, or of different items, may be of a single one of these types or a combination of several of them, which is not limited in this specification.
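The picture-versus-video extraction paths above amount to dispatching on the material type and running the same recognition step per image. In this sketch, `material` is a hypothetical dict and `recognize` a caller-supplied stand-in for the real image-recognition step, so nothing about the actual recognition engine is assumed:

```python
def extract_spec_text(material, recognize):
    """Extract candidate specification text from a sample material."""
    if material["type"] == "picture":
        return [recognize(material["image"])]
    if material["type"] == "video":
        # Identify the content of each extracted video frame in turn.
        return [recognize(frame) for frame in material["frames"]]
    raise ValueError("unsupported material type: " + material["type"])

fake_recognize = lambda image: "spec found in " + image
print(extract_spec_text({"type": "video", "frames": ["f1", "f2"]}, fake_recognize))
# ['spec found in f1', 'spec found in f2']
```

The returned text fragments would then be parsed into structured specification description information.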
In an embodiment, the stature attribute information may have various forms, for example, default stature attribute information of a virtual model; alternatively, the information may be historical figure attribute information or the like determined from a historical sales record of the article. The specific form of the specification description information and the stature attribute information may refer to the description of the embodiment corresponding to fig. 2, and will not be described herein again.
In this embodiment, after the server provides the above effect display data to the goods provider, the goods provider can display the effect display data in the goods interaction platform to introduce the sample goods to goods demanders. In this way, by identifying sample materials involved in material-stealing behavior in the goods interaction platform, the server can screen out sellers who need the virtual try-on function and generate effect display data for them to use. This not only effectively reduces the seller's cost of producing display materials for the goods and helps address the material-stealing problem; moreover, because the effect display data is generated from the figure attribute information of the virtual model, and the seller can customize that figure attribute information, goods try-on effects for virtual models with different figure attribute information can be produced at low cost. Therefore, using the effect display data can greatly reduce the seller's material production cost and facilitates low-cost, efficient promotion of the virtual fitting function.
In an embodiment, a goods provider may subscribe to the goods fitting function, and correspondingly the server may perform virtual fitting processing on other goods specified by that goods provider. Specifically, the server can determine personalized figure attribute information according to a model configuration instruction sent by the goods provider and generate a personalized virtual model accordingly. After receiving the specification description information of the goods sent by the goods provider, the server generates personalized effect display data for wearing the goods on the corresponding part of the personalized virtual model according to the matching of the specification description information and the personalized figure attribute information, and returns the personalized effect display data to the goods provider, so that the goods provider can display it in the goods interaction platform to introduce the goods. In this way, the goods provider can customize the personalized figure attribute information of the virtual model and use the personalized effect display data generated by the server from that information as display material for the goods, thereby providing richer and more diverse try-on effects.
Accordingly, the service end also provides the virtual fitting service for the goods demander (i.e. buyer) in the goods interaction platform, which is described below with reference to fig. 7-10. Fig. 7 is an interaction flowchart of a method for displaying an effect of trying on an article according to a second exemplary embodiment. The display process of the goods try-on effect under the scene can comprise the following steps:
in step 701, a buyer client detects a putting-on operation performed by a user for an item.
It should be noted that the buyer in this embodiment may be a demander of wearable goods, and the specific forms of wearable goods may refer to the description of the foregoing embodiments, which is not repeated here. The server for generating the effect display data may be the server of the goods interaction platform, in which case the buyer client is the corresponding buyer client of the goods interaction platform and the goods try-on function is integrated in the goods interaction platform and provided to the buyer. Alternatively, the server for generating the effect display data may be the server corresponding to a virtual fitting application independent of the goods interaction platform, in which case the buyer client is the buyer client corresponding to the virtual fitting application; the goods fitting function is then provided to the buyer as an independent virtual fitting application, which may exchange data with the client or server of the goods interaction platform during operation. This embodiment takes the scenario in which the goods try-on function is provided to the buyer as an independent virtual try-on application as an example.
The buyer terminal on which the buyer client of the virtual try-on application is installed may also be installed with a shopping application corresponding to at least one goods interaction platform, or a network application capable of accessing pages of the goods interaction platform, so that the buyer can browse and purchase goods through the shopping application or the network application. This embodiment is described taking the shopping application as an example. Of course, the buyer terminal need not have the shopping application or network application installed; instead, the virtual try-on application may scan a picture or two-dimensional code through an information scanning function, or open a related link of an item through a link receiving and jumping function, to determine the item, the specific process of which is not detailed here.
In an embodiment, the buyer may start the virtual fitting application in advance and customize a custom virtual model in it; the specific customization process may refer to the embodiment described in steps 506-507 of fig. 5 (except that the execution subject generating the custom virtual model is the buyer client), which is not repeated here. The buyer can build the custom virtual model according to the figure attribute information of the actual wearer of the goods to be purchased: when buying clothes for himself, the custom virtual model can be built from his own figure attribute information; when buying clothes for someone else, from that person's figure attribute information; and when buying clothes for a pet, from the pet's figure attribute information, which is not repeated. Of course, the custom virtual model may be customized in advance, and this specification does not limit the customization time as long as it is completed before the goods fitting function of the virtual fitting application is used.
The customized stature attribute information corresponding to the customized virtual model can be stored in a local storage space corresponding to the virtual try-on application or uploaded to a server. Of course, the buyer can also use a general virtual model provided by the virtual fitting application, which is not limited in this specification, and the customized virtual model is taken as an example for description in this embodiment.
Referring to the terminal interface display effect diagram shown in fig. 8, at least one shopping application 801 is installed in the user terminal, after the virtual try-on application is started, a user can select a custom virtual model 802 therein to display the custom virtual model 802 in a floating manner in the terminal interface, and the custom virtual model 802 forms a display effect similar to a special effect control displayed on the top in the terminal interface. The display position, the display size, the transparency, and other display attributes of the customized virtual model 802 can be set by the buyer in a customized manner, and are not described in detail. The customized virtual model 802 in the form of a half-body in fig. 8 is only an example, and the virtual model may be in various display forms such as a whole body, a half-body, and a head only in practical use, which is not limited in the present specification.
In an embodiment, the buyer can browse the goods to be purchased through the shopping application in the open state, and the specific mode is not substantially different from the goods browsing in the online shopping scene in the related art, and is not further described. Referring to the schematic diagram of the display effect shown in fig. 9, an item display picture 901 and corresponding item information 902 of an item 1 are displayed in an item browsing interface, a buyer can select the item display picture 901 of the item 1 (of course, the item information 902 or other items can also be selected) and drag the item display picture 901 to a self-defined virtual model 903 in a suspended state in a dragging manner (of course, the dragging process can be understood as a picture copying process, that is, a copied picture of the item display picture 901 is sent to a seller client, and the item display picture 901 displayed in an original page does not change), so that the selected item 1 is specified to the seller client through the wearing operation. Certainly, the user may also designate the article 1 to the seller client by capturing the screen of the designated area in the terminal interface, and the detailed process is not described again.
In addition, the user can perform the above putting-on operation on multiple goods in sequence, so as to subsequently realize combined wearing of the goods and comparison of their display effects. The multiple goods may be provided by the same seller (e.g. the same store) in the same goods interaction platform, by different sellers (e.g. different stores in the same shopping application) in the same goods interaction platform, or by different sellers (e.g. different stores in different shopping applications) in different goods interaction platforms, which is not limited in this specification.
Step 702, the buyer client side obtains specification description information of goods and stature attribute information of the user-defined virtual model.
After detecting the putting-on operation, the buyer client may acquire the specification description information of item 1 by scanning the item display picture 901. In addition, the custom virtual model currently selected by the buyer may be determined as the virtual model corresponding to item 1; or, when there are multiple custom virtual models, an inquiry message may be shown to the buyer after the putting-on operation, the selected custom virtual model determined according to the buyer's trigger operation and used as the virtual model corresponding to item 1, and the figure attribute information of that virtual model then acquired from the local storage space or the server.
Alternatively, when the effect display data is generated by the server, the buyer client may obtain only the specification description information of item 1 and provide it to the server, so that the server, after obtaining this information, further acquires the figure attribute information of the virtual model corresponding to item 1 and then generates the effect display data based on the specification description information and the figure attribute information.
After the buyer client acquires the specification description information and the stature attribute information, the effect display data may be locally generated (corresponding to step 703a), or the specification description information and the stature attribute information may be provided to the server in an associated manner, and the server generates the effect display data (corresponding to step 703 b).
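The choice between local generation (step 703a) and server-side generation (step 703b) described above can be sketched as simple routing logic. All parameter names here are assumptions for illustration; the renderers are stand-ins for the locally deployed algorithm and the server call respectively.

```python
def generate_effect_data(spec, stature, local_renderer=None, server_render=None):
    """Route generation: prefer the on-device algorithm when one is
    deployed, else fall back to the server."""
    if local_renderer is not None:
        return local_renderer(spec, stature)   # step 703a: generate locally
    if server_render is not None:
        return server_render(spec, stature)    # step 703b: delegate to server
    raise RuntimeError("no generation path available")

# Stand-in renderers for demonstration.
local = lambda s, t: ("local", s, t)
remote = lambda s, t: ("server", s, t)
print(generate_effect_data("spec-1", "stature-1", local_renderer=local))
print(generate_effect_data("spec-1", "stature-1", server_render=remote))
```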
In step 703a, the buyer client generates effect display data based on the above information.
In this case, the buyer client generates the effect display data according to the acquired specification description information and figure attribute information. In fact, when the buyer client generates the effect display data, it can complete the generation process while completely offline from the server: based on inter-application communication between the buyer terminal and the shopping application, the buyer client can acquire goods information by scanning a picture or two-dimensional code, acquire the specification description information of the goods from the shopping application through inter-application communication, acquire the figure attribute information of the virtual model from the local storage space, generate a worn model wearing the goods at the corresponding part of the virtual model through a lightweight data generation algorithm or three-dimensional modeling algorithm deployed locally in advance, and generate the corresponding effect display data based on the worn model.
Of course, no matter whether data interaction exists between the buyer client and the server, the algorithm needs to be deployed in advance at the buyer client to ensure the data generation speed of the virtual try-on application.
In step 703b, the server generates effect display data according to the information provided by the buyer client, and returns the effect display data to the buyer client.
The buyer client may also provide the specification description information and the figure attribute information to the server, and the server generates the effect display data based on this information and returns it to the buyer client. The specific process may refer to the description of the embodiment shown in fig. 5 and is not repeated here.
Step 704, the buyer client displays the effect display data.
After the server generates the effect display data, the effect display data can be returned to the buyer client side to be displayed by the buyer client side. Or the buyer client can directly display the effect display data after generating the effect display data. For example, the buyer client may display the worn model to the buyer so that the buyer can check the real-time item fitting effect of the item.
Referring to fig. 10, the worn model 101 may be displayed in a floating manner above the current goods browsing interface (or a detailed display interface corresponding to the virtual fitting application may also be displayed), wherein the goods 102 are worn on the corresponding portion of the worn model 101.
Step 705, the buyer client adjusts the display parameters of the worn model according to the user operation.
While viewing the worn model in three-dimensional form, the buyer can adjust display parameters such as the zoom degree (e.g. magnifying to view the wearing details) and the display angle (e.g. rotating to view the display effect from all directions) of the worn model by dragging the mouse, sliding a finger and the like, thereby viewing and adjusting the display effect of the worn model in real time through this interaction and viewing the goods try-on effect more fully and in detail.
In step 706, the buyer client compares the goods fitting effects of the plurality of goods.
In the case that there are a plurality of virtual models in a try-on state or a plurality of goods to be tried on, the buyer can replace the displayed virtual models or replace the goods and the virtual models in a matching manner by clicking the matching replacement button 103a or 103b, and of course, other replacement manners may be adopted, which is not limited in this specification.
Alternatively, the buyer can click the comparison control at the bottom to compare goods try-on effects between different goods on the same virtual model, the same goods on different virtual models, or different goods on different virtual models, so that the buyer can select an intended wearing combination.
In step 707, the buyer client purchases a plurality of items in bulk.
After the comparison is completed, the buyer can click the "purchase" control at the bottom and, in a prompt interface displayed afterwards, select to purchase one or more items corresponding to the current wearing combination. The buyer client can then invoke the shopping application corresponding to the goods, or the respective shopping applications corresponding to the multiple goods, and place orders for the goods in the one or more corresponding shopping applications through inter-application communication, thereby realizing a shopping experience of application invocation or cross-application one-click ordering and effectively simplifying the user's shopping operations.
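The cross-application one-click ordering above implies grouping the selected items by their source shopping application so one order request can be sent per application. A minimal sketch, where the `(app_name, item_id)` cart format is a hypothetical assumption:

```python
from collections import defaultdict

def group_orders_by_app(cart):
    """Group selected items by source shopping application so that one
    order request per application can be sent via inter-application
    communication."""
    orders = defaultdict(list)
    for app_name, item_id in cart:
        orders[app_name].append(item_id)
    return dict(orders)

cart = [("AppA", "jacket-1"), ("AppB", "hat-2"), ("AppA", "shoes-3")]
print(group_orders_by_app(cart))
# {'AppA': ['jacket-1', 'shoes-3'], 'AppB': ['hat-2']}
```

Each group would then be handed to the corresponding shopping application's ordering interface.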
In addition, on a live-streaming platform, live video may be used to introduce goods (so-called live-streaming sales), and fig. 10 is an interaction flowchart of a method for displaying a goods try-on effect provided by the third exemplary embodiment.
In the embodiment shown in fig. 10, the anchor client and the audience client can provide the goods try-on function to the anchor or the audience respectively, and realizing this process requires interaction among the anchor client, the server and the audience client. Depending on which subject generates the effect display data, the display process of the goods try-on effect in this scenario can be implemented in multiple ways: mode 1 corresponds to steps 1002-1006b, mode 2 corresponds to steps 1007-1010b, and mode 3 corresponds to steps 1111-1114. The steps of these three modes have no required ordering, and any mode can be adopted in practice, which is not limited in this specification; the three modes are described below.
Step 1001, the anchor inserts items in the live program.
Before any mode is started to be executed, the anchor client, the server and the audience client need to be in a normal live broadcast state, the anchor can introduce goods in a live broadcast video through the anchor client, and correspondingly, the audience can know the introduced goods in a mode of watching the video or listening to voice through the audience client. The specific process can be referred to the record in the related art, and is not described herein again.
Mode 1: the anchor client generates the effect display data
Step 1002, the anchor client acquires specification description information of goods to be tried on and figure attribute information of the virtual model.
In this embodiment, the anchor client may display a general virtual model provided by default by the system, or a custom virtual model pre-made by the anchor, so that the anchor can specify goods to be tried on for the displayed virtual model in the live interface; the specific display manner of the virtual model may refer to the description of the embodiments related to figs. 7-10 and is not repeated here. For example, the anchor may drag item a displayed in the live interface onto virtual model B displayed in a suspended manner at a preset position of the live interface, thereby designating item a as the item to be virtually tried on virtual model B. Alternatively, virtual model B can be called up in the live interface and the desired item a selected from the goods display options corresponding to virtual model B, thereby designating item a as the item to be virtually tried on virtual model B. Alternatively, when detecting that the anchor (who may be the photographed subject of the live video or a worker off-camera) performs a preset trigger operation on an item-related display object shown in the live interface, such as an item display image or an item link, the anchor client may determine the item corresponding to the triggered display object as the item to be tried on. Or, when it is detected that the photographed subject performs a preset trigger action or utters a preset trigger voice for the currently introduced item, the item corresponding to the preset trigger operation or preset trigger voice is determined as the item to be tried on.
After the combination of the item to be tried on and the virtual model is determined, the anchor client needs to further obtain the specification description information of the item and the figure attribute information of the virtual model.
In an embodiment, the specification description information may be specified by the anchor. Specifically, the anchor client may obtain the specification description information corresponding to the item to be tried on from an information input operation performed by the user or from locally stored item-related information, and of course may also obtain it from the server.
In an embodiment, the specification description information and/or the stature attribute information may be obtained from the server. For example, the anchor client may send to the server a goods identifier for the goods to be tried on, such as a link to the corresponding goods display page, the goods name, or the goods SKU number; the server then determines, from the received goods identifier, the goods to be tried on selected by the anchor, and acquires the specification description information of that goods from a local or other preset storage space. And/or, the anchor client may send a model identifier, such as the number or name of the virtual model, to the server, so that the server determines the virtual model to be worn that the anchor client has specified and then acquires its stature attribute information, for example from locally pre-stored model information, or by requesting it from other devices.
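As a minimal sketch of the lookup described above (all identifiers, store contents, and field names are illustrative assumptions, not taken from this specification), the server-side resolution of a goods identifier might look like:

```python
# Hypothetical sketch: the server resolves a goods identifier (e.g. a SKU
# number) to locally stored specification description information.
# Store contents and field names are illustrative, not from the patent.
SPEC_STORE = {
    "SKU-1001": {"name": "striped T-shirt",
                 "sizes": ["S", "M", "L"],
                 "chest_cm": {"S": 92, "M": 98, "L": 104}},
}

def get_spec_description(goods_id):
    """Return the specification description for a goods identifier, or None."""
    return SPEC_STORE.get(goods_id)

spec = get_spec_description("SKU-1001")
```

The same pattern would apply to resolving a model identifier to stature attribute information from pre-stored model information.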
In step 1003, the anchor client generates effect display data.
In step 1004, the anchor client distributes the generated effect display data to the audience client through the server.
In step 1004a, the anchor client displays the generated effect display data.
In step 1004b, the audience client displays the received effect display data.
The specific process by which the anchor client generates the effect display data may refer to the related embodiments corresponding to fig. 2 or fig. 5, and is not repeated here. After the effect display data are generated, they may be displayed locally and distributed to the audience clients through the server.
Mode 2: server generation of effect display data
In step 1106, the audience client provides the stature attribute information of the model to be worn to the anchor client through the server.
In this embodiment, while watching the live video, a viewer may, for goods introduced in the live program, such as the goods currently being explained by the anchor, carry his or her stature attribute information in a text bullet screen, a comment, or a voice message and send it to the anchor client through the server. The server can then generate the virtual model according to that stature attribute information (i.e., use it as the stature attribute information of the virtual model).
Step 1107, the server obtains the specification description information of the goods to be tried on.
The server can identify the stature attribute information and, after determining the corresponding goods to be tried on, acquire the specification description information of those goods.
A viewer can specify the corresponding goods to be tried on while sending the stature attribute information. For example, a viewer may send a bullet screen reading "I have a height of 170 and a weight of 65kg; can I wear the striped T-shirt?" The server can then determine that this viewer's stature attribute information is "height 170, weight 65kg", and that the corresponding goods to be tried on are the currently introduced "striped T-shirt". At this point, the server may request specification description information such as the size information of the "striped T-shirt" from the anchor client, or, when that specification description information is stored in a local storage space or a preset associated storage space, acquire it from there.
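The extraction of stature attributes from such a bullet-screen message could be sketched as follows; the regular expressions and field names are hypothetical assumptions that only handle the English phrasing of the example above:

```python
import re

def parse_barrage(text):
    """Extract height/weight stature attributes from a viewer's bullet-screen
    message. Patterns and field names are illustrative assumptions."""
    attrs = {}
    height = re.search(r"height of (\d+)", text)
    weight = re.search(r"weight of (\d+)\s*kg", text)
    if height:
        attrs["height_cm"] = int(height.group(1))
    if weight:
        attrs["weight_kg"] = int(weight.group(1))
    return attrs

attrs = parse_barrage(
    "I have a height of 170, a weight of 65kg; can I wear the striped T-shirt?")
```

A production system would more likely rely on speech recognition and a trained information-extraction model rather than fixed patterns.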
Step 1108, the server generates effect display data.
The specific process by which the server generates the effect display data may refer to the related embodiments corresponding to fig. 2 or fig. 5, and is not repeated here.
Step 1109a, the server provides the effect display data to the anchor client.
In step 1109b, the server provides the viewer client with the effect display data.
Step 1110a, the anchor client displays the received effect display data.
Step 1110b, the audience client displays the received effect display data.
The server can generate effect display data according to the specification description information and the stature attribute information acquired in the process, and returns the effect display data to the audience client for displaying to the audience.
Of course, when the matching degree between the specification description information and the stature attribute information is lower than a preset matching degree threshold, the server may obtain a candidate goods whose specification description information matches the stature attribute information, and then recommend the candidate goods to the viewer who sent the stature attribute information. In addition, when the matching degree is lower than the preset matching degree threshold, or when no candidate goods whose specification description information matches the stature attribute information of the virtual model can be obtained, the server may also send a corresponding reminder message to the viewer so that the viewer learns of the matching situation; alternatively, based on the matching degree, a suggestion message reflecting that degree may be sent to the viewer, such as "this fits well!" or "this runs a little small, please choose carefully", which is not described in detail here.
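A toy sketch of the matching-degree check, candidate selection, and suggestion message described above; the threshold value, the score formula, and all field names are illustrative assumptions, since the specification does not fix a concrete formula:

```python
MATCH_THRESHOLD = 0.8  # hypothetical "preset matching degree threshold"

def match_degree(spec, stature):
    """Toy matching score between a garment chest measurement and a body
    chest measurement; the actual scoring method is left open by the patent."""
    diff = abs(spec["chest_cm"] - stature["chest_cm"])
    return max(0.0, 1.0 - diff / stature["chest_cm"])

def advise(spec, stature, candidates):
    """Return a suggestion message and, below threshold, the best candidate."""
    if match_degree(spec, stature) >= MATCH_THRESHOLD:
        return "this fits well!", None
    best = max(candidates, key=lambda c: match_degree(c, stature), default=None)
    return "this runs a little small, please choose carefully", best

msg, candidate = advise({"chest_cm": 70}, {"chest_cm": 96},
                        [{"chest_cm": 95}, {"chest_cm": 80}])
```

Here the mismatched goods trigger both the cautionary message and a recommendation of the better-fitting candidate.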
Mode 3: audience client Generation of Effect display data
Step 1111, the audience client requests the specification description information of the goods to be tried on from the server.
In this embodiment, while watching the live video, a viewer may perform a try-on operation on goods introduced in the live program, such as the goods currently being explained by the anchor, that is, select the goods to be tried on and the corresponding model to be worn.
In an embodiment, the video data of the live video played in the audience client includes the specification description information of the goods to be tried on; in that case, the audience client may extract the specification description information directly from the video data. Alternatively, when the video data does not contain the specification description information of the goods to be tried on, the audience client may request it from the server, for example by sending the server a goods identifier such as a link to the goods display page corresponding to the goods to be tried on, the goods name, or the goods SKU number.
Correspondingly, the server determines the selected goods to be tried on according to the received goods identifier, and may then acquire the specification description information of those goods from a local or other preset storage space, or request it from the anchor client. After obtaining the specification description information, the server may return it to the audience client.
At step 1112, the audience client determines the stature attribute information of the viewer.
A viewer can enter his or her own stature attribute information, such as at least one of height, weight, shoulder width, chest circumference, waist circumference, hip circumference, leg length, hand length, head length, and the like, in an information entry interface related to the goods to be tried on or to the corresponding model to be worn. Of course, when a viewer purchases goods for a pet such as a cat or dog, at least one of the pet's height, weight, body length, coat color, and the like may be entered instead; this specification does not limit the specific content of the stature attribute information.
Alternatively, with the viewer's authorization, the audience client may obtain the viewer's historical purchase records and determine from them the stature attribute information relevant to the goods to be tried on. For example, when the goods to be tried on are a men's T-shirt, the audience client may select jacket purchase records from the viewer's purchase history and predict the viewer's size from those records as the stature attribute information.
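One way to sketch the size prediction from purchase history is a most-frequent-size heuristic; this is a deliberately naive illustration with hypothetical record fields, since the specification leaves the prediction method open:

```python
from collections import Counter

def predict_size(purchase_history, category):
    """Predict a size for a category as the most frequently purchased size in
    that category; a simple heuristic, not the patent's prescribed method."""
    sizes = [p["size"] for p in purchase_history if p["category"] == category]
    if not sizes:
        return None
    return Counter(sizes).most_common(1)[0][0]

history = [
    {"category": "jacket", "size": "L"},
    {"category": "jacket", "size": "L"},
    {"category": "trousers", "size": "M"},
]
predicted = predict_size(history, "jacket")
```

Returning `None` when no relevant records exist lets the client fall back to asking the viewer to enter stature attributes manually.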
At step 1113, the audience client generates the effect display data.
At step 1114, the audience client displays the effect display data.
The specific process by which the audience client generates the effect display data may refer to the related embodiments corresponding to fig. 2 or fig. 5, and is not repeated here. The audience client displays the effect display data it generates, so that the viewer can see the try-on effect of wearing the goods to be tried on at the corresponding part of the corresponding virtual model, and can then decide whether to purchase the goods based on that effect.
Corresponding to the embodiment shown in fig. 11, this specification further provides a method for displaying a goods try-on effect, applied to a live platform server; a flowchart of the method may refer to fig. 12, and the method includes the following steps:
Step 1202, in response to a try-on instruction sent by a live client for goods, acquiring specification description information of the goods and stature attribute information of a virtual model, where the live video displayed by the live client is used to introduce the goods.
Step 1204, generating effect display data for wearing the goods to the corresponding part of the virtual model according to the matching condition of the specification description information and the stature attribute information.
Step 1206, returning the effect display data to the live client, so that the live client displays the effect display data in a live interface.
In an embodiment, the live broadcast client may be an anchor client, and the anchor requests a try-on service from the server through the anchor client; or, the live broadcast client may also be a viewer client, and at this time, the viewer requests a try-on service from the server through the viewer client.
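Steps 1202 to 1206 can be sketched end to end as a single server-side handler; every name and the trivial matching rule below are illustrative assumptions, not the patent's concrete design:

```python
# Hypothetical sketch of the server-side flow in steps 1202-1206: resolve the
# try-on instruction, gather both kinds of information, generate effect
# display data, and return it to the live client.

def handle_fitting_instruction(instruction, spec_store, model_store):
    spec = spec_store[instruction["goods_id"]]       # step 1202: specification description info
    stature = model_store[instruction["model_id"]]   # step 1202: stature attribute info
    fits = spec["chest_cm"] >= stature["chest_cm"]   # step 1204: toy "matching condition"
    return {"goods_id": instruction["goods_id"],     # step 1206: effect display data payload
            "model_id": instruction["model_id"],
            "fits": fits}

result = handle_fitting_instruction(
    {"goods_id": "SKU-1001", "model_id": "model-B"},
    {"SKU-1001": {"chest_cm": 98}},
    {"model-B": {"chest_cm": 96}},
)
```

In a real system the returned payload would carry rendered images or 3-D scene data rather than a boolean flag.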
Corresponding to the embodiment shown in fig. 11, the present specification further provides a method for displaying an item try-on effect applied to a live platform client, where a flowchart of the method may be seen in fig. 13, and the method includes the following steps:
Step 1302, displaying, in a live interface, a live video for introducing goods.
Step 1304, when a try-on operation performed on the goods is detected, obtaining effect display data for wearing the goods to the corresponding part of a virtual model, where the effect display data are generated according to the matching condition between the specification description information of the goods and the stature attribute information of the virtual model.
Step 1306, displaying the effect display data in the live interface.
In this embodiment, the client may be an anchor client or an audience client, which is not limited by this specification.
In an embodiment, the virtual model may include a stereoscopic model generated through three-dimensional modeling. In that case, when a parameter adjustment operation on the effect display data is detected, the client may send a parameter control instruction for the stereoscopic model to the server, so that the server adjusts the display parameters of the stereoscopic model wearing the goods and generates adjusted display data for that stereoscopic model according to the adjusted display parameters; the client then receives the adjusted display data returned by the server and displays it in the live interface, thereby presenting the adjusted goods try-on effect to the anchor or the viewers.
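The parameter control round trip described above might be sketched as follows, with JSON as an assumed message format and hypothetical field names such as rotation and zoom:

```python
import json

def build_param_control_instruction(model_id, **params):
    """Client side: serialize a parameter control instruction for the
    stereoscopic model (message format and field names are assumptions)."""
    return json.dumps({"type": "param_control",
                       "model_id": model_id,
                       "params": params})

def apply_instruction(current_params, message):
    """Server side: merge the adjusted display parameters into the current
    ones and return the updated parameter set."""
    msg = json.loads(message)
    updated = dict(current_params)
    updated.update(msg["params"])
    return updated

state = apply_instruction(
    {"rotation_deg": 0, "zoom": 1.0},
    build_param_control_instruction("model-B", rotation_deg=90))
```

The server would then re-render the stereoscopic model from the updated parameters and return the adjusted display data.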
FIG. 14 is a schematic block diagram of an apparatus provided in an exemplary embodiment. Referring to FIG. 14, at the hardware level, the device includes a processor 1402, an internal bus 1404, a network interface 1406, a memory 1408, and a non-volatile storage 1410, although other hardware required for service may be included. The processor 1402 reads the corresponding computer program from the non-volatile memory 1410 to the memory 1408 and then runs the computer program, thereby forming a display device of the goods try-on effect on the logic level. Of course, besides software implementation, the one or more embodiments in this specification do not exclude other implementations, such as logic devices or combinations of software and hardware, and so on, that is, the execution subject of the following processing flow is not limited to each logic unit, and may also be hardware or logic devices.
Referring to fig. 15, in a software implementation, the device for displaying the goods fitting effect may include:
an information obtaining unit 151 for obtaining specification description information of the goods and figure attribute information of the virtual model;
and the data generating unit 152 is configured to generate effect display data for wearing the goods to the corresponding part of the virtual model according to the matching condition of the specification description information and the stature attribute information.
Optionally, the goods comprise at least one of:
goods identified from a goods display picture; or,
goods determined according to a goods designation instruction sent by a user; or,
goods introduced by a live video in a live state.
Optionally, the information obtaining unit 151 is further configured to:
identifying display content in a picture containing goods information to obtain specification description information of the goods;
extracting the specification description information of the goods from an information display interface corresponding to the goods in a goods interaction platform;
and acquiring the specification description information of the goods provided by the user.
Optionally, the number of the goods is multiple; wherein the multiple items are from the same item interaction platform, or the multiple items are from at least two different item interaction platforms.
Optionally, in a case that the number of the items is multiple, the data generating unit 152 is further configured to:
generating, for each goods, independent effect display data for wearing that goods to the corresponding part of the virtual model; and/or,
generating combined effect display data for wearing multiple goods to the corresponding parts of the virtual model simultaneously.
Optionally, the virtual model includes at least one of:
a virtual model obtained through three-dimensional modeling of a physical model, a virtual model obtained through three-dimensional modeling according to stature attribute information specified by a user, and a virtual model automatically generated by a preset model generation algorithm according to stature attribute information specified by a user.
Optionally, the stature attribute information includes:
default stature attribute information of the virtual model; or,
personalized stature attribute information generated according to a model configuration instruction sent by a user; or,
historical stature attribute information determined according to the user's historical goods purchase records.
Optionally, the system further includes a scene information obtaining unit 153, configured to obtain scene attribute information of a scene where the virtual model is located;
The data generating unit 152 is further configured to: generate, according to the scene attribute information, effect display data for wearing the goods to the corresponding part of the virtual model while the virtual model is situated in the scene.
Optionally, the data generating unit 152 is further configured to:
generating, according to a preset display attribute, at least one effect display picture of the goods worn on the corresponding part of the virtual model; or,
when the virtual model includes a stereoscopic model generated through three-dimensional modeling, responding to a parameter control instruction sent by a user for the stereoscopic model, adjusting the display parameters of the stereoscopic model wearing the goods, and generating effect display data corresponding to that stereoscopic model according to the adjusted display parameters.
Optionally, the method further includes:
a candidate item obtaining unit 154, configured to obtain a candidate item when a matching degree of the specification description information and the stature attribute information is lower than a preset matching degree threshold, where the specification description information of the candidate item matches the stature attribute information;
and the candidate item recommending unit 155 is used for recommending the candidate item to the user corresponding to the virtual model.
Referring to fig. 16, in a software implementation, the device for displaying the goods fitting effect may include:
a data obtaining unit 161 configured to obtain effect display data generated according to matching conditions of specification description information of goods and figure attribute information of the virtual model;
and the data display unit 162 is used for displaying the effect display data so as to display the display effect of wearing the goods to the corresponding part of the virtual model.
Optionally, the virtual model is a stereoscopic model, and the apparatus further includes:
the instruction sending unit 163 is configured to, when detecting that a parameter control operation is performed by a user for the stereoscopic model wearing the goods, send a corresponding parameter control instruction to a server, so that the server adjusts display parameters of the stereoscopic model wearing the goods according to the parameter control instruction, and generates adjusted display data corresponding to the stereoscopic model according to the adjusted display parameters;
and an adjusting and displaying unit 164, configured to receive and display the adjusted display data returned by the server, so as to present an adjusted display effect.
Referring to fig. 17, in a software implementation, the device for displaying the goods fitting effect may include:
the information acquisition unit 171 is configured to, in response to a virtual fit operation performed by a user, acquire specification description information of goods and figure attribute information of a virtual model corresponding to the virtual fit operation;
the data generating unit 172 is configured to generate effect display data according to the matching condition of the specification description information and the stature attribute information;
a data display unit 173 for displaying the effect display data to present the user with the display effect of wearing the goods to the corresponding part of the virtual model.
Optionally, the virtual model is a stereoscopic model, and the apparatus further includes:
a parameter adjusting unit 174 configured to, in a case where a parameter control operation performed by a user for the stereoscopic model on which the article is worn is detected, adjust a display parameter of the stereoscopic model on which the article is worn according to the parameter control operation;
the adjusting and displaying unit 175 is configured to generate and display adjusted display data of the stereoscopic model according to the adjusted display parameters, so as to present an adjusted display effect to the user.
Referring to fig. 18, in a software implementation, the device for displaying the goods fitting effect may include:
the information acquisition unit 181 is configured to acquire specification description information of the goods and body attribute information of the virtual model in response to a fitting instruction sent by a live client for the goods, where a live video displayed by the live client is used to introduce the goods;
the data generating unit 182 is configured to generate effect display data for wearing the goods to the corresponding part of the virtual model according to the matching condition of the specification description information and the stature attribute information;
and the data returning unit 183 is configured to return the effect display data to the live client, so that the live client displays the effect display data in a live interface.
Optionally, the stature attribute information is specified by the anchor user of the live program, and the specification description information is specified by a viewer user of the live program.
Referring to fig. 19, in a software implementation, the device for displaying the goods fitting effect may include:
the video display unit 191 is used for displaying a live video for introducing goods in a live interface;
the instruction sending unit 192 is configured to send a fitting instruction corresponding to the goods to a server when a fitting operation performed on the goods is detected, so that the server obtains specification description information of the goods and figure attribute information of a virtual model, and generates effect display data for fitting the goods to a corresponding part of the virtual model according to a matching condition of the specification description information and the figure attribute information;
and the data display unit 193 is configured to receive the effect display data returned by the server, and display the effect display data in the live interface.
Optionally, the virtual model includes a stereoscopic model generated by three-dimensional modeling, and the apparatus further includes:
an adjustment instruction sending unit 194, configured to send a parameter control instruction for the stereoscopic model to a server when a parameter adjustment operation for the effect display data is detected, so that the server adjusts display parameters of the stereoscopic model wearing the goods, and generates adjusted effect display data corresponding to the stereoscopic model wearing the goods according to the adjusted display parameters;
and the adjusted data display unit 195 is configured to receive the adjusted effect display data returned by the server, and display the adjusted effect display data in the live interface.
Fig. 20 is a schematic block diagram of an apparatus provided in an exemplary embodiment. Referring to fig. 20, at the hardware level, the device includes a processor 2002, an internal bus 2004, a network interface 2006, a memory 2008, and a non-volatile storage 2010, but may also include hardware required for other services. The processor 2002 reads the corresponding computer program from the non-volatile storage 2010 into the memory 2008 and then runs, forming the item recommendation device on a logical level. Of course, besides software implementation, the one or more embodiments in this specification do not exclude other implementations, such as logic devices or combinations of software and hardware, and so on, that is, the execution subject of the following processing flow is not limited to each logic unit, and may also be hardware or logic devices.
Referring to fig. 21, in a software embodiment, the item recommendation device may include:
the material identification unit 211 is configured to identify sample materials in the goods interaction platform, where the sample materials are used to introduce sample goods provided by a goods provider, and ownership of the sample materials belongs to other goods providers different from the goods provider;
a data generating unit 212 for generating effect display data for wearing the sample goods to the corresponding part of the virtual model;
a data providing unit 213 for providing the effect display data to the goods provider.
Optionally, the specification description information includes:
specification description information extracted from the sample material; or,
specification description information extracted from an information display interface corresponding to the sample goods in the goods interaction platform.
Optionally, the stature attribute information includes:
default stature attribute information of the virtual model; or,
historical stature attribute information determined according to historical sales records of the sample goods.
Optionally, the method further includes:
the personalized model generating unit 214 is configured to determine personalized figure attribute information according to a model configuration instruction sent by the goods provider, and generate a personalized virtual model according to the personalized figure attribute information;
the personalized data generating unit 215 is configured to generate personalized effect display data for wearing the goods to the corresponding position of the personalized virtual model according to the matching condition of the specification description information and the personalized stature attribute information after receiving the specification description information of the goods sent by the goods provider;
a personalized data returning unit 216, configured to return the personalized effect display data to the item provider, so that the item provider displays the personalized effect display data in the item interaction platform, and introduces the item.
The systems, devices, units or units illustrated in the above embodiments may be specifically implemented by computer chips or entities, or implemented by products with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
In a typical configuration, a computer includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage, quantum memory, graphene-based storage media or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It is also noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The terminology used in the description of the one or more embodiments is for the purpose of describing the particular embodiments only and is not intended to be limiting of the description of the one or more embodiments. As used in one or more embodiments of the present specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in one or more embodiments of the present description to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of one or more embodiments herein. The word "if" as used herein may be interpreted as "at … …" or "when … …" or "in response to a determination", depending on the context.
The above description is only for the purpose of illustrating the preferred embodiments of the one or more embodiments of the present disclosure, and is not intended to limit the scope of the one or more embodiments of the present disclosure, and any modifications, equivalent substitutions, improvements, etc. made within the spirit and principle of the one or more embodiments of the present disclosure should be included in the scope of the one or more embodiments of the present disclosure.

Claims (30)

1. A method for displaying the goods try-on effect is characterized by comprising the following steps:
acquiring specification description information of goods and figure attribute information of a virtual model;
and generating effect display data for wearing the goods to the corresponding part of the virtual model according to the matching condition of the specification description information and the stature attribute information.
2. The method of claim 1, wherein the goods comprise at least one of:
goods identified from a goods display picture; or,
goods determined according to a goods designation instruction sent by a user; or,
goods introduced by a live video in a live state.
3. The method of claim 1, wherein acquiring the specification description information of the goods comprises at least one of:
identifying display content in a picture containing goods information to obtain specification description information of the goods;
extracting specification description information of the goods from an information display interface corresponding to the goods in a goods interaction platform;
and acquiring the specification description information of the goods provided by the user.
4. The method of claim 1, wherein the number of the goods is plural; wherein the plural goods are from the same goods interaction platform, or from at least two different goods interaction platforms.
5. The method of claim 1, wherein, in a case where the number of the goods is plural, generating the effect display data for wearing the goods to the corresponding part of the virtual model comprises:
respectively generating independent effect display data for wearing each of the goods to the corresponding part of the virtual model; and/or,
generating combined effect display data for simultaneously wearing a plurality of the goods to the corresponding parts of the virtual model.
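As a minimal illustrative sketch of claim 5 (the record structure and field names are invented, not defined by the patent), independent and combined effect display data for plural goods could be built as:

```python
# Hypothetical data shapes for claim 5; "model"/"worn" are invented keys.

def independent_effects(goods_list, model):
    """One effect record per item, each worn alone on the model."""
    return [{"model": model, "worn": [g]} for g in goods_list]

def combined_effect(goods_list, model):
    """A single effect record with every item worn at once."""
    return {"model": model, "worn": list(goods_list)}
```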
6. The method of claim 1, wherein the virtual model comprises at least one of:
a virtual model obtained by three-dimensional modeling of a physical model; a virtual model obtained by three-dimensional modeling according to stature attribute information specified by a user; or a virtual model automatically generated by a preset model generation algorithm according to stature attribute information specified by a user.
7. The method of claim 1, wherein the stature attribute information comprises:
default stature attribute information of the virtual model; or,
personalized stature attribute information generated according to a model configuration instruction sent by a user; or,
historical stature attribute information determined according to a historical goods purchase record of the user.
8. The method of claim 1,
further comprising: acquiring scene attribute information of a scene where the virtual model is located;
generating the effect display data for wearing the goods to the corresponding part of the virtual model comprises: generating, according to the scene attribute information, effect display data for wearing the goods to the corresponding part of the virtual model located in the scene.
9. The method of claim 1, wherein generating the effect display data for wearing the goods to the corresponding part of the virtual model comprises:
generating, according to a preset display attribute, at least one effect display picture for wearing the goods to the corresponding part of the virtual model; or,
in a case where the virtual model comprises a three-dimensional model generated through three-dimensional modeling, adjusting display parameters of the three-dimensional model wearing the goods in response to a parameter control instruction sent by a user for the three-dimensional model, and generating effect display data corresponding to the three-dimensional model wearing the goods according to the adjusted display parameters.
10. The method of claim 1, further comprising:
in a case where a matching degree of the specification description information and the stature attribute information is lower than a preset matching degree threshold, acquiring candidate goods whose specification description information matches the stature attribute information;
and recommending the candidate goods to a user corresponding to the virtual model.
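Claim 10's fallback could be sketched as follows, assuming the same hypothetical match-degree scoring as above; the threshold value, catalogue shape, and function names are all invented for illustration:

```python
# Illustrative sketch of claim 10; the patent does not specify the
# threshold or any of these names.

MATCH_THRESHOLD = 0.9  # assumed "preset matching degree threshold"

def match_degree(spec, stature):
    """Average per-measurement closeness between garment and body."""
    keys = set(spec) & set(stature)
    return sum(max(0.0, 1.0 - abs(spec[k] - stature[k]) / stature[k])
               for k in keys) / len(keys)

def recommend_candidates(requested_spec, stature, catalogue):
    """Return candidate goods whose specifications match the model,
    or [] if the requested goods already match well enough."""
    if match_degree(requested_spec, stature) >= MATCH_THRESHOLD:
        return []
    return [item for item, spec in catalogue.items()
            if match_degree(spec, stature) >= MATCH_THRESHOLD]
```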
11. A method for displaying a goods fitting effect, characterized by comprising:
acquiring effect display data, wherein the effect display data is generated according to a matching condition of specification description information of goods and stature attribute information of a virtual model;
and displaying the effect display data to present the display effect of wearing the goods to the corresponding part of the virtual model.
12. The method of claim 11, wherein the virtual model is a three-dimensional model, the method further comprising:
in a case where a parameter control operation performed by a user for the three-dimensional model wearing the goods is detected, sending a corresponding parameter control instruction to a server side, so that the server side adjusts display parameters of the three-dimensional model wearing the goods according to the parameter control instruction and generates adjusted display data corresponding to the three-dimensional model according to the adjusted display parameters;
and receiving and displaying the adjusted display data returned by the server side to present the adjusted display effect.
13. A method for displaying a goods fitting effect, characterized by comprising:
in response to a virtual try-on operation performed by a user, acquiring specification description information of goods corresponding to the virtual try-on operation and stature attribute information of a virtual model;
generating effect display data according to a matching condition of the specification description information and the stature attribute information;
and displaying the effect display data to present to the user the display effect of wearing the goods to a corresponding part of the virtual model.
14. The method of claim 13, wherein the virtual model is a three-dimensional model, the method further comprising:
in a case where a parameter control operation performed by a user for the three-dimensional model wearing the goods is detected, adjusting display parameters of the three-dimensional model wearing the goods according to the parameter control operation;
and generating and displaying adjusted display data of the three-dimensional model according to the adjusted display parameters to present the adjusted display effect to the user.
15. A goods recommendation method, characterized by comprising:
identifying sample materials in a goods interaction platform, wherein the sample materials are used for introducing sample goods provided by a goods provider, and ownership of the sample materials belongs to another goods provider different from the goods provider;
generating effect display data for wearing the sample goods to a corresponding part of a virtual model according to a matching condition of specification description information of the sample goods and stature attribute information of the virtual model;
and providing the effect display data to the goods provider.
16. The method of claim 15, wherein the specification description information comprises:
specification description information extracted from the sample materials; or,
specification description information extracted from an information display interface corresponding to the sample goods in the goods interaction platform.
17. The method of claim 15, wherein the stature attribute information comprises:
default stature attribute information of the virtual model; or,
historical stature attribute information determined according to historical selling records of the sample goods.
18. The method of claim 15, further comprising:
determining personalized stature attribute information according to a model configuration instruction sent by the goods provider, and generating a personalized virtual model according to the personalized stature attribute information;
after receiving specification description information of goods sent by the goods provider, generating personalized effect display data for wearing the goods to a corresponding part of the personalized virtual model according to a matching condition of the specification description information and the personalized stature attribute information;
and returning the personalized effect display data to the goods provider, so that the goods provider displays the personalized effect display data in the goods interaction platform for introducing the goods.
19. A method for displaying a goods fitting effect, characterized by comprising:
in response to a fitting instruction sent by a live client for goods, acquiring specification description information of the goods and stature attribute information of a virtual model, wherein a live video displayed by the live client is used for introducing the goods;
generating effect display data for wearing the goods to a corresponding part of the virtual model according to a matching condition of the specification description information and the stature attribute information;
and returning the effect display data to the live client, so that the live client displays the effect display data in a live interface.
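The server-side flow of claim 19 can be outlined as below; the request shape, field names, and callback signatures are purely illustrative assumptions, not part of the patent:

```python
# Hypothetical server-side handler for a live client's fitting
# instruction (claim 19); every name here is invented.

def handle_fitting_instruction(request, get_spec, get_stature, render):
    """Look up the goods' specification description information and the
    virtual model's stature attributes, render the effect display data,
    and return it for the live client to show in its live interface."""
    goods_id = request["goods_id"]
    spec = get_spec(goods_id)                    # specification description info
    stature = get_stature(request["model_id"])   # virtual model attributes
    return {"goods_id": goods_id, "effect": render(spec, stature)}
```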
20. The method of claim 19, wherein the live client comprises an anchor client or a viewer client.
21. A method for displaying a goods fitting effect, characterized by comprising:
displaying, in a live interface, a live video for introducing goods;
in a case where a fitting operation performed for the goods is detected, acquiring effect display data for wearing the goods to a corresponding part of a virtual model, wherein the effect display data is generated according to a matching condition of specification description information of the goods and stature attribute information of the virtual model;
and displaying the effect display data in the live broadcast interface.
22. The method of claim 21, wherein the virtual model comprises a three-dimensional model generated through three-dimensional modeling, the method further comprising:
in a case where a parameter adjustment operation for the effect display data is detected, sending a parameter control instruction for the three-dimensional model to a server side, so that the server side adjusts display parameters of the three-dimensional model wearing the goods and generates adjusted display data corresponding to the three-dimensional model wearing the goods according to the adjusted display parameters;
and receiving the adjusted display data returned by the server side, and displaying the adjusted display data in the live interface.
23. A display device for a goods fitting effect, characterized by comprising:
an information acquisition unit, configured to acquire specification description information of goods and stature attribute information of a virtual model;
and a data generation unit, configured to generate effect display data for wearing the goods to a corresponding part of the virtual model according to a matching condition of the specification description information and the stature attribute information.
24. A display device for a goods fitting effect, characterized by comprising:
a data acquisition unit, configured to acquire effect display data, wherein the effect display data is generated according to a matching condition of specification description information of goods and stature attribute information of a virtual model;
and a data display unit, configured to display the effect display data to present the display effect of wearing the goods to a corresponding part of the virtual model.
25. A display device for a goods fitting effect, characterized by comprising:
an information acquisition unit, configured to acquire, in response to a virtual try-on operation performed by a user, specification description information of goods corresponding to the virtual try-on operation and stature attribute information of a virtual model;
a data generation unit, configured to generate effect display data according to a matching condition of the specification description information and the stature attribute information;
and a data display unit, configured to display the effect display data to present to the user the display effect of wearing the goods to a corresponding part of the virtual model.
26. A goods recommendation device, characterized by comprising:
a material identification module, configured to identify sample materials in a goods interaction platform, wherein the sample materials are used for introducing sample goods provided by a goods provider, and ownership of the sample materials belongs to another goods provider different from the goods provider;
a data generation module, configured to generate effect display data for wearing the sample goods to a corresponding part of a virtual model;
and a data providing module, configured to provide the effect display data to the goods provider.
27. A display device for a goods fitting effect, characterized by comprising:
an information acquisition module, configured to acquire, in response to a fitting instruction sent by a live client for goods, specification description information of the goods and stature attribute information of a virtual model, wherein a live video displayed by the live client is used for introducing the goods;
a data generation module, configured to generate effect display data for wearing the goods to a corresponding part of the virtual model according to a matching condition of the specification description information and the stature attribute information;
and a data return module, configured to return the effect display data to the live client, so that the live client displays the effect display data in a live interface.
28. A display device for a goods fitting effect, characterized by comprising:
a video display module, configured to display, in a live interface, a live video for introducing goods;
an execution sending module, configured to acquire, in a case where a fitting operation performed for the goods is detected, effect display data for wearing the goods to a corresponding part of a virtual model, wherein the effect display data is generated according to a matching condition of specification description information of the goods and stature attribute information of the virtual model;
and a data display module, configured to display the effect display data in the live interface.
29. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method of any one of claims 1-22 by executing the executable instructions.
30. A computer-readable storage medium having stored thereon computer instructions, which, when executed by a processor, carry out the steps of the method according to any one of claims 1-22.
CN202011064416.1A 2020-09-30 2020-09-30 Method and device for displaying goods fitting effect Pending CN114339434A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011064416.1A CN114339434A (en) 2020-09-30 2020-09-30 Method and device for displaying goods fitting effect

Publications (1)

Publication Number Publication Date
CN114339434A (en) 2022-04-12

Family

ID=81032417

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011064416.1A Pending CN114339434A (en) 2020-09-30 2020-09-30 Method and device for displaying goods fitting effect

Country Status (1)

Country Link
CN (1) CN114339434A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101079135A (en) * 2006-05-24 2007-11-28 严晓敏 On-line network marketing method and system with assistant client terminal image display
CN101295390A (en) * 2007-04-27 2008-10-29 齐南 Internet electronic model dresses exhibition system
CN103597519A (en) * 2011-02-17 2014-02-19 麦特尔有限公司 Computer implemented methods and systems for generating virtual body models for garment fit visualization
CN106250425A (en) * 2016-07-25 2016-12-21 百度在线网络技术(北京)有限公司 Exchange method and device for Search Results
CN106293092A (en) * 2016-08-15 2017-01-04 成都通甲优博科技有限责任公司 The method realizing virtual wearing based on multi-view stereo vision 3-D technology
CN106910115A (en) * 2017-02-20 2017-06-30 宁波大学 Virtualization fitting method based on intelligent terminal
CN107209962A (en) * 2014-12-16 2017-09-26 麦特尔有限公司 For the method for the 3D virtual body models for generating the people combined with 3D clothes images, and related device, system and computer program product
CN111405343A (en) * 2020-03-18 2020-07-10 广州华多网络科技有限公司 Live broadcast interaction method and device, electronic equipment and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117135379A (en) * 2023-10-26 2023-11-28 武汉耳东信息科技有限公司 Live broadcast platform data analysis management system based on big data
CN117135379B (en) * 2023-10-26 2023-12-22 武汉耳东信息科技有限公司 Live broadcast platform data analysis management system based on big data

Similar Documents

Publication Publication Date Title
US11593871B1 (en) Virtually modeling clothing based on 3D models of customers
US10235810B2 (en) Augmented reality e-commerce for in-store retail
US11416918B2 (en) Systems/methods for identifying products within audio-visual content and enabling seamless purchasing of such identified products by viewers/users of the audio-visual content
KR101713502B1 (en) Image feature data extraction and use
US20170352091A1 (en) Methods for generating a 3d virtual body model of a person combined with a 3d garment image, and related devices, systems and computer program products
CN111681070B (en) Online commodity purchasing method, purchasing device, storage device and purchasing equipment
US10475099B1 (en) Displaying relevant content
US20120221418A1 (en) Targeted Marketing System and Method
CN106202317A (en) Method of Commodity Recommendation based on video and device
WO2023226454A1 (en) Product information processing method and apparatus, and terminal device and storage medium
US20230077278A1 (en) Artificial Reality Content Management
CN117642762A (en) Custom advertising with virtual changing room
US20160042233A1 (en) Method and system for facilitating evaluation of visual appeal of two or more objects
CN110084676A (en) Method of Commodity Recommendation, the network terminal and the device with store function on a kind of line
CN110084675A (en) Commodity selling method, the network terminal and the device with store function on a kind of line
CN114339434A (en) Method and device for displaying goods fitting effect
WO2015172229A1 (en) Virtual mirror systems and methods
CN114445271B (en) Method for generating virtual fitting 3D image
US20220414755A1 (en) Method, device, and system for providing fashion information
CN112991003A (en) Private customization method and system
US20240071019A1 (en) Three-dimensional models of users wearing clothing items
US20240037869A1 (en) Systems and methods for using machine learning models to effect virtual try-on and styling on actual users
US20240119681A1 (en) Systems and methods for using machine learning models to effect virtual try-on and styling on actual users
NL2022937B1 (en) Method and Apparatus for Accessing Clothing
CN116739699A (en) Method for providing commodity recommendation information and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40071583

Country of ref document: HK