CN107481101B - Dressing recommendation method and device

Info

Publication number: CN107481101B
Application number: CN201710641558.1A
Authority: CN (China)
Prior art keywords: clothes, candidate, user, dressing, garment
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN107481101A (en)
Inventor: 周意保
Current Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Publication of application CN107481101A, followed by grant and publication of CN107481101B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/06 Buying, selling or leasing transactions
    • G06Q30/0601 Electronic shopping [e-shopping]
    • G06Q30/0631 Item recommendations
    • G06Q30/0641 Shopping interfaces
    • G06Q30/0643 Graphical representation of items or shoppers

Abstract

The invention provides a dressing recommendation method, a dressing recommendation device, and a terminal device. The method comprises the following steps: acquiring a 3D model of a user through structured light, extracting depth information of the user's current dressing from the 3D model, and acquiring target clothes recommended to the user according to the dressing depth information. Because the depth information of the user's current dressing is acquired through structured light, a 3D model of the current dressing can be constructed from that depth information, so that the current dressing can be displayed in three dimensions, the related information of the current dressing is convenient to acquire, and the clothes recommended to the user can be obtained from that related information. Compared with the prior-art approach of analyzing a user's historical behaviors of browsing, clicking or purchasing clothes and then recommending related clothes, this enriches the dressing recommendation manners, improves the matching degree between the dressing recommendation and the user's own dressing requirements, and solves the problems of a single recommendation manner and a low matching degree with the user's dressing requirements.

Description

Dressing recommendation method and device
Technical Field
The invention relates to the field of terminal equipment, in particular to a dressing recommendation method and a dressing recommendation device.
Background
At present, it is common for users to purchase clothes online, and most merchants recommend related clothes to a user by analyzing the user's historical behaviors of browsing, clicking or purchasing clothes. This recommendation mode is single: after a user purchases a large number of clothes for family members, for example, the recommended clothes are all related to the family members, so the matching degree with the user's own dressing requirements is low.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, a first object of the present invention is to provide a dressing recommendation method, so as to enrich the dressing recommendation manners, improve the matching degree between the dressing recommendation and the user's own dressing requirements, and solve the prior-art problems of a single recommendation manner and a low matching degree with the user's own dressing requirements that arise when related clothes are recommended to a user by analyzing the user's historical behaviors such as browsing, clicking or purchasing clothes.
A second object of the present invention is to provide a dressing recommendation device.
A third object of the present invention is to provide a terminal device.
A fourth object of the invention is to propose one or more non-transitory computer-readable storage media containing computer-executable instructions.
In order to achieve the above object, a dressing recommendation method according to an embodiment of a first aspect of the present invention includes:
acquiring a 3D model of a user through structured light;
extracting depth information of the current dress of the user from the 3D model;
and acquiring the target clothes recommended to the user according to the dressing depth information.
According to the dressing recommendation method provided by the embodiment of the invention, a 3D model of the user is obtained through structured light, depth information of the user's current dressing is extracted from the 3D model, and target clothes recommended to the user are obtained according to the dressing depth information. Because the depth information of the current dressing is acquired through structured light, a 3D model of the current dressing can be constructed from that depth information, so that the current dressing can be displayed in three dimensions, its related information is convenient to acquire, and the clothes recommended to the user can be obtained from that related information. Compared with the prior-art approach of analyzing a user's historical behaviors of browsing, clicking or purchasing clothes and then recommending related clothes, this enriches the dressing recommendation manners, improves the matching degree between the dressing recommendation and the user's own dressing requirements, and solves the problems of a single recommendation manner and a low matching degree with the user's dressing requirements.
In order to achieve the above object, a dressing recommendation device according to a second aspect of the present invention includes:
the model acquisition module is used for acquiring a 3D model of a user through structured light;
the extraction module is used for extracting the current dressing depth information of the user from the 3D model;
and the recommending module is used for acquiring the target clothes recommended to the user according to the dressing depth information.
According to the dressing recommendation device provided by the embodiment of the invention, a 3D model of the user is obtained through structured light, depth information of the user's current dressing is extracted from the 3D model, and target clothes recommended to the user are obtained according to the dressing depth information. Because the depth information of the current dressing is acquired through structured light, a 3D model of the current dressing can be constructed from that depth information, so that the current dressing can be displayed in three dimensions, its related information is convenient to acquire, and the clothes recommended to the user can be obtained from that related information. Compared with the prior-art approach of analyzing a user's historical behaviors of browsing, clicking or purchasing clothes and then recommending related clothes, this enriches the dressing recommendation manners, improves the matching degree between the dressing recommendation and the user's own dressing requirements, and solves the problems of a single recommendation manner and a low matching degree with the user's dressing requirements.
In order to achieve the above object, a terminal device according to a third aspect of the present invention includes a memory and a processor, where the memory stores computer readable instructions, and the instructions, when executed by the processor, cause the processor to execute the dressing recommendation method according to the first aspect of the embodiments of the present invention.
To achieve the above object, a fourth aspect of the present invention provides one or more non-transitory computer-readable storage media containing computer-executable instructions, which when executed by one or more processors, cause the processors to perform the dressing recommendation method according to the first aspect.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flow chart of a dressing recommendation method according to an embodiment of the present invention;
FIG. 2 is a schematic view of an apparatus for projecting structured light according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of uniform structured light provided by an embodiment of the present invention;
FIG. 4 is a diagram illustrating results of a dressing recommendation method according to an embodiment of the present invention;
FIG. 5 is a schematic flow chart illustrating another dressing recommendation method according to an embodiment of the present invention;
FIG. 6 is a schematic illustration of non-uniform structured light in an embodiment of the present invention;
FIG. 7 is a schematic flow chart illustrating another dressing recommendation method according to an embodiment of the present invention;
FIG. 8 is a schematic flow chart illustrating another dressing recommendation method according to an embodiment of the present invention;
FIG. 9 is a schematic flow chart illustrating another dressing recommendation method according to an embodiment of the present invention;
fig. 10 is a schematic structural view of a dressing recommendation device according to an embodiment of the present invention;
FIG. 11 is a schematic structural view of another dressing recommendation device provided in the embodiment of the present invention;
fig. 12 is a schematic structural diagram of an image processing circuit in a terminal device according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or to elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary, are intended to explain the present invention, and are not to be construed as limiting the present invention.
The dressing recommendation method and device and the terminal device according to the embodiment of the invention are described below with reference to the accompanying drawings.
At present, it is common for users to purchase clothes online, and most merchants recommend related clothes to a user by analyzing the user's historical behaviors of browsing, clicking or purchasing clothes. This recommendation mode is single: after a user purchases a large number of clothes for family members, for example, the recommended clothes are all related to the family members, so the matching degree with the user's own dressing requirements is low.
To address this problem, an embodiment of the present invention provides a dressing recommendation method, which aims to enrich the dressing recommendation manners, improve the matching degree between the dressing recommendation and the user's own dressing requirements, and solve the prior-art problems of a single recommendation manner and a low matching degree with the user's own dressing requirements that arise when related clothes are recommended to a user by analyzing the user's historical behaviors such as browsing, clicking or purchasing clothes.
Fig. 1 is a schematic flow chart of a dressing recommendation method according to an embodiment of the present invention.
As shown in fig. 1, the dressing recommendation method includes the following steps:
step 101, a 3D model of a user is obtained through structured light.
Here, a projection set of light beams with known spatial directions is called structured light.
As an example, FIG. 2 is a schematic diagram of an apparatus assembly for projecting structured light. In fig. 2 the projection set of the structured light is illustrated as a set of lines for simplicity; the principle is similar when the projection set is a speckle pattern. As shown in fig. 2, the apparatus may include an optical projector and a camera. The optical projector projects a pattern of structured light into the space where the object to be measured (the user) is located, forming on the user's body surface a three-dimensional image of light bars modulated by the shape of the body surface. The three-dimensional image is detected by the camera at another location to obtain a distorted two-dimensional image of the light bars. The degree of distortion of the light bars depends on the relative position between the optical projector and the camera and on the contour of the user's body surface. Intuitively, the displacement (or offset) along a light bar is proportional to the height of the user's body surface, a kink in a bar represents a change of plane, and a physical gap on the body surface shows as a discontinuity. When the relative position between the optical projector and the camera is fixed, the three-dimensional contour of the user's body surface can be reproduced from the coordinates of the distorted two-dimensional light-bar image, i.e., the 3D model of the user is obtained.
As an example, the 3D model of the user can be obtained by calculation using formula (1), where formula (1) is as follows:
$$x = \frac{b\,x'}{F\cot\theta - x'},\qquad y = \frac{b\,y'}{F\cot\theta - x'},\qquad z = \frac{b\,F}{F\cot\theta - x'} \tag{1}$$
wherein (x, y, z) is the coordinates of the acquired 3D model of the user, b is the baseline distance between the projection device and the camera, F is the focal length of the camera, θ is the projection angle when the projection device projects the preset structured light pattern to the space where the user is located, and (x ', y') is the coordinates of the two-dimensional distorted image of the user.
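As a minimal illustrative sketch (not part of the patent text), the triangulation of formula (1) can be written in Python as follows; the variable names mirror the definitions above, and the sample values are assumptions chosen only to exercise the function:

```python
import math

def triangulate(x_p, y_p, b, F, theta):
    """Recover one 3D point (x, y, z) of the user's 3D model from a
    coordinate (x', y') of the distorted two-dimensional image, per
    formula (1): b is the projector-camera baseline, F the camera focal
    length, theta the projection angle of the structured light pattern."""
    denom = F / math.tan(theta) - x_p  # F*cot(theta) - x'
    return (b * x_p / denom, b * y_p / denom, b * F / denom)

# Assumed sample values (metres and radians), for illustration only.
x, y, z = triangulate(x_p=0.001, y_p=0.0004, b=0.05, F=0.004, theta=math.radians(60))
```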
As an example, the types of the structured light include a grating type, a light spot type, a speckle type (including a circular speckle and a cross speckle), and the structured light is uniformly arranged as shown in fig. 3. Correspondingly, the device for generating structured light may be some kind of projection device or instrument, such as an optical projector, which projects a light spot, line, grating, grid or speckle onto the object to be examined, but also a laser, which generates a laser beam.
In this embodiment, a structured light emitting device may be installed on a terminal device such as a computer, a mobile phone, a handheld computer, or the like, and the structured light emitting device is configured to emit structured light to a user.
The terminal device can call the structured light projection device through an application program for dressing recommendation, and the structured light projection device then emits structured light toward the user. When the structured light irradiates the user's body, the reflected structured light is distorted because the surface of the user's body is not flat. The light reflected by the user is then collected by a camera on the terminal device; since the reflected structured light carries distortion information, the depth information of each feature point of the user can be calculated from it, the three-dimensional reconstruction of the user can be completed, and the 3D model of the user is constructed.
Step 102, extracting the depth information of the current dress of the user from the 3D model.
After the 3D model of the user is built, the contour and boundary of the clothes can be identified, the feature points belonging to the user's current clothing can be acquired from the 3D model, and the depth information of those feature points then forms the depth information of the clothing.
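A sketch of this extraction step, assuming the clothing region has already been identified as a boolean mask over a per-pixel depth map (the mask source and the array layout are illustrative assumptions, not specified by the patent):

```python
import numpy as np

def extract_clothing_depth(depth_map: np.ndarray, clothing_mask: np.ndarray) -> np.ndarray:
    """Keep only the depth values of feature points recognised as part of
    the user's current clothing; all other positions become NaN.
    depth_map must be a float array for NaN to be representable."""
    clothing_depth = np.full_like(depth_map, np.nan)
    clothing_depth[clothing_mask] = depth_map[clothing_mask]
    return clothing_depth
```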
Step 103, obtaining target clothes recommended to the user according to the dressing depth information.
After the depth information of the dressing is acquired, the shape of the clothes currently worn by the user can be reconstructed according to the depth information of the dressing. As an example, a model recognition system may be trained in advance; the 3D model of the clothing is input into the model recognition system, which returns the category to which the user's clothing belongs. For example, the categories may include: overcoat, shirt, pants, and the like. Further, the style of the garment may also be identified; for example, the styles of an overcoat may include: loose fit, close fit, A-line, and the like.
After the category and style to which the clothing belongs are obtained, a candidate clothes set can be formed according to the category and style, and part or all of the candidate clothes can be selected from the candidate clothes set to serve as target clothes recommended to the user.
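Sketched in Python, and assuming hypothetical pre-trained recognisers and a catalogue whose entries record category and style (none of these names come from the patent), the two steps above might look like:

```python
def candidate_set(garment_features, catalogue, category_model, style_model):
    """Recognise the category and style of the current clothing, then keep
    every catalogue item of the same category and style as a candidate."""
    category = category_model.predict([garment_features])[0]  # e.g. "overcoat"
    style = style_model.predict([garment_features])[0]        # e.g. "loose fit"
    return [item for item in catalogue
            if item["category"] == category and item["style"] == style]
```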
As another example, a thickness of the garment may be retrieved from the depth information of the garment, and then a certain amount of candidate clothes may be selected as the target clothes in the retrieved set of candidate clothes based on the thickness. Specifically, the thickness of each candidate garment in each candidate garment set is obtained, then the thickness of each candidate garment is compared with the thickness of the garment to obtain a first difference value of the two thicknesses, and the candidate garment with the first difference value within a preset first range is selected as the target garment.
As another example, to enable the recommended clothing to fit the user's body shape, the recommended clothing may be selected according to the size of the current garment. Specifically, the size information of the garment is acquired from the 3D model of the garment. For example, when the garment is an upper garment, size information such as the length of the garment, the bust, the shoulder width, and the sleeve length can be acquired. After the size information is acquired, part or all of the candidate clothes can be selected from the obtained candidate clothes set to be used as the target clothes.
Specifically, size information of each candidate garment in the candidate garment set is obtained, the size information of the garment is compared with the size information of each candidate garment, a second difference value of the two sizes is obtained, and the candidate garment with the second difference value within a preset second range is selected as the target garment. Because the difference between the size of the candidate clothes and the size of the current dress is controlled within a reasonable range, the recommended candidate clothes can be attached to the body type of the user, and the user experience is better.
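The two selections just described (a first difference on thickness, a second difference on size) reduce to simple range checks. A sketch follows, with the dictionary keys and the range values as illustrative assumptions:

```python
def filter_by_thickness(candidates, garment_thickness, first_range):
    """Keep candidates whose thickness differs from the current garment's
    thickness by no more than the preset first range."""
    return [c for c in candidates
            if abs(c["thickness"] - garment_thickness) <= first_range]

def filter_by_size(candidates, garment_size, second_range):
    """Keep candidates whose every measurement (garment length, bust,
    shoulder width, sleeve length, ...) differs from the current garment's
    measurement by no more than the preset second range."""
    return [c for c in candidates
            if all(abs(c["size"][k] - v) <= second_range
                   for k, v in garment_size.items())]
```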
For example, the first drawing in fig. 4 shows the user's current dress, which is an overcoat, and the target clothes selected according to the current dress are two overcoats of similar style, as shown in the second and third drawings of fig. 4.
According to the dressing recommendation method provided by this embodiment, a 3D model of the user is obtained through structured light, depth information of the user's current dressing is extracted from the 3D model, and target clothes recommended to the user are obtained according to the dressing depth information. Because the depth information of the current dressing is acquired through structured light, a 3D model of the current dressing can be constructed from that depth information, so that the current dressing can be displayed in three dimensions, its related information is convenient to acquire, and the clothes recommended to the user can be obtained from that related information. Compared with the prior-art approach of analyzing a user's historical behaviors of browsing, clicking or purchasing clothes and then recommending related clothes, this enriches the dressing recommendation manners, improves the matching degree between the dressing recommendation and the user's own dressing requirements, and solves the problems of a single recommendation manner and a low matching degree with the user's dressing requirements.
In order to more clearly illustrate the specific implementation process of obtaining a 3D model of a user through structured light, an embodiment of the present invention provides another dressing recommendation method, and fig. 5 is a schematic flow chart of the dressing recommendation method provided in this further embodiment.
As shown in fig. 5, on the basis of the embodiment shown in fig. 1, step 101 may include the following steps:
step 501, emitting structured light to a user.
The terminal device can be provided with an application program corresponding to the scene; the application program can call the structured light projection device, which then emits structured light toward the user.
Step 502, collecting the reflected light of the structured light formed on the body of the user and acquiring depth information of the user.
After the structured light emitted toward the user reaches the user, different parts of the body reflect the structured light with different distortions, since the body surface is uneven. The reflected light of the structured light on the body can then be collected by a camera arranged in the terminal device, and the depth information of the user can be acquired from the collected reflected light.
Step 503, reconstructing the 3D model of the user based on the depth information.
Specifically, the depth information of the user may include both the user and the background. The depth information is first denoised and smoothed to obtain an image of the area where the user is located, and the user is then separated from the background image through processing such as foreground/background segmentation.
Further, feature point data for constructing the user 3D model may be extracted from the depth information, and the feature points may be connected into a network according to the extracted feature point data. For example, according to the distance relationship of each point in space, the points of the same plane or the points with the distance within the threshold range are connected into a triangular network, and then the networks are spliced, so that the 3D model of the user can be generated.
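A sketch of this reconstruction path, segmenting the foreground by a depth threshold and splicing the points into a triangular network; the threshold value and the use of a 2D Delaunay triangulation of the (x, y) projection are simplifying assumptions:

```python
import numpy as np
from scipy.spatial import Delaunay

def segment_foreground(points_3d: np.ndarray, max_depth: float = 2.0) -> np.ndarray:
    """Crude foreground/background segmentation: keep points whose depth
    (z) is below max_depth metres; a real pipeline would also denoise
    and smooth the depth data first."""
    return points_3d[points_3d[:, 2] < max_depth]

def reconstruct_mesh(points_3d: np.ndarray):
    """Connect nearby feature points into triangles by triangulating the
    (x, y) projection, one simple way to splice the network into a mesh."""
    triangles = Delaunay(points_3d[:, :2]).simplices  # (M, 3) vertex indices
    return points_3d, triangles
```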
In the embodiment, the depth information of the current dressing of the user is acquired through the structured light, and then the 3D model of the current dressing can be constructed according to the depth information, so that the current dressing can be displayed in a three-dimensional mode, the related information of the current dressing is conveniently acquired, and the clothes recommended to the user can be acquired according to the related information.
It should be noted here that, as an example, the structured light adopted in the above embodiment may be non-uniform structured light, and the non-uniform structured light is a speckle pattern or a random dot pattern formed by a set of a plurality of light spots.
FIG. 6 is a schematic diagram of a projection set of non-uniform structured light according to an embodiment of the present invention. As shown in fig. 6, the non-uniform structured light is adopted in the embodiment of the present invention, where the non-uniform structured light is a randomly arranged non-uniform speckle pattern, that is, the non-uniform structured light is a set of a plurality of light spots, and the plurality of light spots are arranged in a non-uniform dispersion manner, so as to form a speckle pattern. Because the storage space occupied by the speckle patterns is small, the operation efficiency of the terminal equipment cannot be greatly influenced when the projection device operates, and the storage space of the terminal can be saved.
In addition, compared with other existing structured light types, the speckle pattern adopted in the embodiment of the invention can reduce energy consumption, save power, and improve the battery endurance of the terminal thanks to its scattered arrangement.
In the embodiment of the invention, the projection device and the camera can be arranged in the terminals such as a computer, a mobile phone, a palm computer and the like. The projection device emits a non-uniform structured light, i.e., a speckle pattern, toward the user. In particular, a speckle pattern may be formed using a diffractive optical element in the projection device, wherein a certain number of reliefs are provided on the diffractive optical element, and an irregular speckle pattern is generated by an irregular relief on the diffractive optical element. In embodiments of the present invention, the depth and number of relief grooves may be set by an algorithm.
The projection device can be used for projecting a preset speckle pattern to the space where the measured object is located. The camera can be used for collecting the measured object with the projected speckle pattern so as to obtain a two-dimensional distorted image of the measured object with the speckle pattern.
In the embodiment of the invention, when the camera of the terminal is aligned with the user, the projection device in the terminal can project a preset speckle pattern to the space where the user is located, the speckle pattern has a plurality of scattered spots, and when the speckle pattern is projected onto the body surface of the user, the scattered spots in the speckle pattern can be shifted due to various parts included in the body surface. The non-uniform structured light reflected by the body of the user is collected through a camera of the terminal device, and a two-dimensional distorted image of the user with the speckle pattern is obtained.
Further, image data calculation is performed on the collected speckle image and the reference speckle image according to a predetermined algorithm, and the movement distance of each scattered spot (characteristic point) of the collected speckle image relative to the reference scattered spot (reference characteristic point) is acquired. And finally, according to the moving distance, the distance between the reference speckle image and the camera on the terminal and the relative interval value between the projection device and the camera, obtaining the depth information of each scattered spot of the speckle infrared image by using a trigonometry method, and further obtaining the 3D model of the user according to the depth information.
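The patent names only "a trigonometry method" here. One common reference-plane formulation, stated as an assumption rather than as the patent's own formula, converts each spot's displacement into depth as follows:

```python
def speckle_depth(disparity, z_ref, focal_len, baseline):
    """Depth of one scattered spot from its displacement `disparity`
    relative to the reference speckle image captured at known distance
    z_ref; disparity and focal_len must share units, and baseline is the
    relative interval between the projection device and the camera."""
    return (z_ref * focal_len * baseline) / (focal_len * baseline + z_ref * disparity)
```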
In daily life, a user also has certain preferences or requirements regarding the color of clothes. In order to enable the recommended clothes to better meet the user's requirements or preferences, an embodiment of the invention provides another dressing recommendation method, so as to better match the dressing recommendation to the user's dressing requirements.
Fig. 7 is a schematic flow chart of another dressing recommendation method according to an embodiment of the present invention. As shown in fig. 7, the dressing recommendation method may include the steps of:
step 701, acquiring a 3D model of a user through structured light.
Step 702, extracting the depth information of the current dress of the user from the 3D model.
Step 703, forming a 3D model of the garment according to the depth information of the garment.
Step 704, obtaining a set of candidate clothes according to the 3D model of the garment.
For the related descriptions of step 701 to step 704, refer to the descriptions of the related contents in the above embodiments; they are not repeated here.
Step 705, acquiring color information of the clothing.
As an example, another camera may be used to collect a color image of the user, and color information of the current dress of the user is extracted according to the collected color image.
As another example, the color information of the current dress of the user may be manually entered by the user, or the color information of the current dress may be entered by the user's voice.
Step 706, color information of each candidate clothes in the candidate clothes set is obtained.
After the color information of the current dress is determined, in order to recommend clothes of similar color to the user, the color information of each candidate garment in the candidate clothes set can be acquired. For example, keywords related to clothes color can be extracted from the description information of each candidate garment to determine its color information, or the color information can be obtained by image processing of the candidate garment's pictures.
Step 707, comparing the color information of the clothing with the color information of each candidate garment to obtain the color similarity between the clothing and the candidate clothes.
After the color of each candidate garment is determined, the color information can be compared to identify the similarity between the color of the current dress and the color of each candidate garment. Specifically, the RGB values of the colors can be compared to determine the similarity between garment colors. By such comparison, for example, the similarity between pink and rose red may be as high as 80%, while the similarity between pink and yellow may be only 10%.
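The patent does not fix the metric, so the sketch below assumes a similarity derived from the Euclidean distance between RGB values; a production system might instead use a perceptual color space such as CIELAB:

```python
import math

def color_similarity(rgb_a, rgb_b):
    """Map the Euclidean RGB distance between two colours into a [0, 1]
    similarity, where 1.0 means identical colours."""
    max_dist = math.sqrt(3) * 255  # largest possible distance in RGB space
    return 1.0 - math.dist(rgb_a, rgb_b) / max_dist

# Illustrative comparison of pink against rose red (assumed RGB values).
print(round(color_similarity((255, 192, 203), (255, 0, 127)), 2))
```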
Step 708, selecting candidate clothes with the similarity exceeding a threshold value from the candidate clothes set as target clothes.
In this embodiment, a threshold value may be preset. When the similarity exceeds the threshold value, the candidate garment is similar in color to the current dress; when it does not, the candidate garment differs in color from the current dress. In order to meet the user's color requirement, the candidate clothes whose similarity exceeds the threshold value can be selected from the candidate clothes set as the target clothes.
Step 709, displaying the target clothes on the terminal device.
In this embodiment, target clothes with colors similar to the dressing color are selected, by combining the color information of the user's current dressing, from the candidate clothes set obtained based on the dressing 3D model, so that the matching degree between the recommended clothes and the user's dressing requirements is further improved. Compared with the prior-art approach of analyzing a user's historical behaviors of browsing, clicking or purchasing clothes and then recommending related clothes, this enriches the dressing recommendation manners, improves the matching degree between the dressing recommendation and the user's own dressing requirements, and solves the problems of a single recommendation manner and a low matching degree with the user's dressing requirements.
In daily life, the user also has certain preferences or requirements regarding the material of the clothes. In order to enable the recommended clothes to better meet the user's requirements or preferences, an embodiment of the invention provides another dressing recommendation method, so as to better match the dressing recommendation to the user's dressing requirements.
Fig. 8 is a schematic flow chart of another dressing recommendation method according to an embodiment of the present invention. As shown in fig. 8, the dressing recommendation method may include the steps of:
step 801, a 3D model of a user is acquired through structured light.
Step 802, extracting the depth information of the current dress of the user from the 3D model.
Step 803, forming a 3D model of the garment according to the depth information of the garment.
Step 804, acquiring a candidate clothes set according to the 3D model of the garment.
For the related descriptions of step 801 to step 804, reference may be made to the descriptions of the related contents in the above embodiments, and the description is not repeated here.
Step 805, obtaining the material information of the clothing.
As an example, another camera may be used to collect an image with color of the user, perform texture analysis on the collected image, and extract material information of the current clothing of the user, for example, the material information is cotton, hemp, silk, or the like.
As another example, the material information of the current dress of the user may be manually entered by the user, or the material information of the current dress may be entered by the user by voice.
Step 806, acquiring material information of each candidate garment in the candidate clothes set.
After the material information of the current dress is determined, in order to recommend clothes of the same material to the user, the material information of each candidate garment in the candidate clothes set can be acquired. For example, keywords related to the clothes material can be extracted from the description information of each candidate garment to determine its material information, or the material information can be obtained by image processing of the candidate garment's pictures.
Step 807, selecting candidate clothes of the same material as the clothing from the candidate clothes set as the target clothes.
After the material of each candidate garment is determined, the material information can be compared, and candidate garments of the same material as the current dress can be identified.
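A sketch of the two material steps (keyword extraction from descriptions, then exact matching); the keyword list and the record layout are illustrative assumptions:

```python
MATERIAL_KEYWORDS = ("cotton", "hemp", "silk", "wool", "linen")  # assumed list

def material_from_description(description: str):
    """Extract the first material keyword found in a candidate garment's
    description, or None if no known material is mentioned."""
    text = description.lower()
    return next((m for m in MATERIAL_KEYWORDS if m in text), None)

def filter_by_material(candidates, garment_material):
    """Keep only candidates whose extracted material equals the material
    of the user's current clothing."""
    return [c for c in candidates
            if material_from_description(c["description"]) == garment_material]
```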
Step 808, displaying the target clothes on the terminal device.
In this embodiment, target clothes of the same material as the dressing are selected, by combining the material information of the user's current dressing, from the candidate clothes set obtained based on the dressing 3D model, so that the matching degree between the recommended clothes and the user's dressing requirements is further improved. Compared with the prior-art approach of analyzing a user's historical behaviors of browsing, clicking or purchasing clothes and then recommending related clothes, this enriches the dressing recommendation manners, improves the matching degree between the dressing recommendation and the user's own dressing requirements, and solves the problems of a single recommendation manner and a low matching degree with the user's dressing requirements.
Fig. 9 is a schematic flow chart of another dressing recommendation method according to an embodiment of the present invention. As shown in fig. 9, the dressing recommendation method may include the steps of:
firstly, a 3D model of a user is obtained based on structured light, and then the wearing depth information is extracted from the 3D model, so that the wearing 3D model is constructed. Further, a set of candidate garments is obtained based on the 3D model of the garment. In order to enable the recommended clothes to meet the dressing requirements of the user, the attribute information of the current dressing is obtained, wherein the attribute information comprises the thickness, the size, the material and/or the color of the dressing. The target garment may be determined from the set of candidate garments based on a combination of one or more attributes in the attribute information, for example, the target garment may be selected from the set of candidate garments in combination with thickness + material, or from the set of candidate garments in combination with thickness + material + color. After the target clothes are determined, the target clothes can be displayed on the terminal equipment.
Here, when selecting the target clothes from the candidate clothes set, each additional attribute used in the selection makes the selected target clothes better satisfy the user's dressing requirements, as the sketch below illustrates.
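Combining the per-attribute filters from the earlier sketches (filter_by_thickness, filter_by_size, filter_by_material, color_similarity, all defined above), a combined selection might be chained as follows; the concrete range values and the 0.8 color threshold are assumptions:

```python
def recommend(candidates, garment, attrs=("thickness", "material", "color")):
    """Apply one filter per requested attribute; every added attribute
    narrows the candidate set toward the user's dressing requirements."""
    if "thickness" in attrs:
        candidates = filter_by_thickness(candidates, garment["thickness"], first_range=0.002)
    if "size" in attrs:
        candidates = filter_by_size(candidates, garment["size"], second_range=2.0)
    if "material" in attrs:
        candidates = filter_by_material(candidates, garment["material"])
    if "color" in attrs:
        candidates = [c for c in candidates
                      if color_similarity(garment["rgb"], c["rgb"]) > 0.8]
    return candidates
```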
In this embodiment, the target clothes are selected, by combining the attribute information of the user's current dressing, from the candidate clothes set obtained based on the dressing 3D model, so that the matching degree between the recommended clothes and the user's dressing requirements is greatly improved. Compared with the prior-art approach of analyzing a user's historical behaviors of browsing, clicking or purchasing clothes and then recommending related clothes, this enriches the dressing recommendation manners, improves the matching degree between the dressing recommendation and the user's own dressing requirements, and solves the problems of a single recommendation manner and a low matching degree with the user's dressing requirements.
Fig. 10 is a schematic structural diagram of a dressing recommendation device according to an embodiment of the present invention. As shown in fig. 10, the dressing recommendation device includes: a model acquisition module 11, an extraction module 12 and a recommendation module 13.
A model obtaining module 11, configured to obtain a 3D model of a user through structured light;
an extracting module 12, configured to extract depth information of the current dress of the user from the 3D model.
And the recommending module 13 is used for acquiring the target clothes recommended to the user according to the dressing depth information.
Based on fig. 10, fig. 11 is a schematic structural view of another dressing recommendation device according to an embodiment of the present invention. As shown in fig. 11, the recommending module 13 includes:
the model construction unit 131 forms a 3D model of the garment from the depth information of the garment.
An identifying unit 132 for identifying a category to which the clothing belongs and a style of the clothing according to the 3D model of the clothing.
An obtaining unit 133, configured to obtain the set of candidate clothes according to the style under the category to which the clothing belongs.
A determining unit 134, configured to select a part or all of the candidate clothes from the set of candidate clothes as the target clothes.
Further, the obtaining unit 133 is further configured to obtain the thickness of the clothing according to the depth information of the clothing.
Further, the determining unit 134 is specifically configured to obtain a thickness of each candidate garment in the set of candidate garments, compare the thickness of each candidate garment with the thickness of the garment, obtain a first difference between the two thicknesses, and select the candidate garment with the first difference within a preset first range as the target garment.
Further, the obtaining unit 133 is further configured to obtain size information of the clothing according to the 3D model of the clothing;
further, the determining unit 134 is specifically configured to obtain size information of each candidate garment in the set of candidate garments, compare the size information of the dresses with the size information of each candidate garment, obtain a second difference value between the two sizes, and select the candidate garment with the second difference value within a preset second range as the target garment.
Further, the obtaining unit 133 is further configured to obtain material information of the clothing.
Further, the determining unit 134 is specifically configured to obtain material information of each candidate garment in the candidate garment set, and select a candidate garment with the same material as the clothing from the candidate garment set as the target garment.
Further, the obtaining unit 133 is further configured to obtain color information of the clothing.
Further, the determining unit 134 is specifically configured to obtain color information of each candidate garment in the set of candidate garments, compare the color information of the clothing with the color information of each candidate garment, obtain a similarity between the clothing and the candidate garments in color, and select the candidate garment with the similarity exceeding a threshold value from the set of candidate garments as the target garment.
Further, the extracting module 12 is specifically configured to identify feature points belonging to the clothing from the 3D model, obtain depth information of the feature points, and form depth information of the clothing.
Further, the structured light is non-uniform structured light which is a speckle pattern or a random dot pattern formed by a plurality of light spots, and is formed by a diffractive optical element arranged in a projection device on the terminal, wherein a certain number of embossments are arranged on the diffractive optical element, and the groove depths of the embossments are different.
According to the dressing recommendation device provided by this embodiment, a 3D model of the user is obtained through structured light, depth information of the user's current dressing is extracted from the 3D model, and target clothes recommended to the user are obtained according to the dressing depth information. Because the depth information of the current dressing is acquired through structured light, a 3D model of the current dressing can be constructed from that depth information, so that the current dressing can be displayed in three dimensions, its related information is convenient to acquire, and the clothes recommended to the user can be obtained from that related information. Compared with the prior-art approach of analyzing a user's historical behaviors of browsing, clicking or purchasing clothes and then recommending related clothes, this enriches the dressing recommendation manners, improves the matching degree between the dressing recommendation and the user's own dressing requirements, and solves the problems of a single recommendation manner and a low matching degree with the user's dressing requirements.
The division of each module in the dressing recommendation device is only used for illustration, and in other embodiments, the dressing recommendation device may be divided into different modules as needed to complete all or part of the functions of the dressing recommendation device.
An embodiment of the invention also provides a terminal device. The terminal device includes an image processing circuit, which may be implemented by hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. FIG. 12 is a schematic diagram of an image processing circuit in one embodiment. As shown in fig. 12, for convenience of explanation, only the aspects of the image processing technique related to the embodiment of the present invention are shown.
As shown in fig. 12, the image processing circuit 90 includes an imaging device 910, an ISP processor 930, and control logic 940. The imaging device 910 may include a camera with one or more lenses 912, an image sensor 914, and a structured light projector 916. The structured light projector 916 projects the structured light to the object to be measured. The structured light pattern may be a laser stripe, a gray code, a sinusoidal stripe, or a randomly arranged speckle pattern. The image sensor 914 captures a structured light image projected onto the object to be measured and transmits the structured light image to the ISP processor 930, and the ISP processor 930 demodulates the structured light image to obtain depth information of the object to be measured. At the same time, the image sensor 914 may also capture color information of the object under test. Of course, the structured light image and the color information of the measured object may be captured by the two image sensors 914, respectively.
Taking speckle structured light as an example, the ISP processor 930 demodulates the structured light image, specifically including acquiring a speckle image of the measured object from the structured light image, performing image data calculation on the speckle image of the measured object and the reference speckle image according to a predetermined algorithm, and obtaining a moving distance of each scattered spot of the speckle image on the measured object relative to a reference scattered spot in the reference speckle image. And (4) converting and calculating by using a trigonometry method to obtain the depth value of each scattered spot of the speckle image, and obtaining the depth information of the measured object according to the depth value.
Of course, the depth image information and the like may also be acquired by a binocular vision method or a time-of-flight (TOF) based method; the approach is not limited thereto. Any method by which the depth information of the object to be measured can be acquired or calculated falls within the scope of the present embodiment.
After ISP processor 930 receives the color information of the object to be measured captured by image sensor 914, image data corresponding to the color information of the object to be measured may be processed. ISP processor 930 analyzes the image data to obtain image statistics that may be used to determine one or more control parameters of imaging device 910. Image sensor 914 may include an array of color filters (e.g., Bayer filters), and image sensor 914 may acquire light intensity and wavelength information captured with each imaging pixel of image sensor 914 and provide a set of raw image data that may be processed by ISP processor 930.
ISP processor 930 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and ISP processor 930 may perform one or more image processing operations on the raw image data and collect image statistics about the image data. The image processing operations may be performed with the same or different bit depth precision.
ISP processor 930 may also receive pixel data from image memory 920. The image memory 920 may be a part of a memory device, a storage device, or a separate dedicated memory within an electronic device, and may include a DMA (Direct memory access) feature.
Upon receiving the raw image data, ISP processor 930 may perform one or more image processing operations.
After the ISP processor 930 acquires the color information and the depth information of the object to be measured, they may be fused to obtain a three-dimensional image. The feature of the corresponding object to be measured can be extracted by at least one of an appearance contour extraction method or a contour feature extraction method. For example, the features of the object to be measured are extracted by methods such as an active shape model method ASM, an active appearance model method AAM, a principal component analysis method PCA, and a discrete cosine transform method DCT, which are not limited herein. And then the characteristics of the measured object extracted from the depth information and the characteristics of the measured object extracted from the color information are subjected to registration and characteristic fusion processing. The fusion processing may be a process of directly combining the features extracted from the depth information and the color information, a process of combining the same features in different images after weight setting, or a process of generating a three-dimensional image based on the features after fusion in other fusion modes.
The image data of the three-dimensional image may be sent to an image memory 920 for additional processing before being displayed. ISP processor 930 receives the processed data from image memory 920 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The image data of the three-dimensional image may be output to a display 960 for viewing by a user and/or further processing by a graphics processing unit (GPU). Further, the output of ISP processor 930 may also be sent to image memory 920, and display 960 may read the image data from image memory 920. In one embodiment, image memory 920 may be configured to implement one or more frame buffers. Further, the output of ISP processor 930 may be transmitted to the encoder/decoder 950 to encode/decode the image data. The encoded image data may be saved and decompressed before being displayed on the display 960. The encoder/decoder 950 may be implemented by a CPU, a GPU, or a coprocessor.
The image statistics determined by ISP processor 930 may be sent to the control logic 940 unit. Control logic 940 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that determine control parameters of imaging device 910 based on the received image statistics. The image processing technique of fig. 12 may be used to implement the following steps of the dressing recommendation method:
acquiring a 3D model of a user through structured light;
extracting depth information of the current dress of the user from the 3D model;
and acquiring the target clothes recommended to the user according to the dressing depth information.
Embodiments of the invention also provide one or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of:
acquiring a 3D model of a user through structured light;
extracting depth information of the current dress of the user from the 3D model;
and acquiring the target clothes recommended to the user according to the dressing depth information.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be implemented by a program instructing related hardware. The program may be stored in a computer-readable storage medium and, when executed, performs one of or a combination of the steps of the method embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist physically on its own, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as a stand-alone product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.

Although embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and are not to be construed as limiting the present invention; those of ordinary skill in the art may make variations, modifications, substitutions, and alterations to the above embodiments within the scope of the present invention.

Claims (11)

1. A dressing recommendation method, comprising:
acquiring a 3D model of a user through structured light;
extracting depth information of the user's current dressing from the 3D model, wherein the depth information comprises a category to which the dressing belongs, a style of the dressing, and a thickness of the dressing, each obtained by reconstructing the form of the clothes currently worn by the user from the depth information of the dressing;
and acquiring the target clothes recommended to the user according to the dressing depth information.
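Purely for orientation, and not as part of the claimed disclosure, the following Python sketch shows one standard way in which a depth map recovered from a structured-light pattern can be back-projected into a 3D point model of the user. The pinhole intrinsics fx, fy, cx, and cy are assumptions; the claim does not prescribe this or any particular reconstruction algorithm.

```python
import numpy as np

def depth_map_to_point_cloud(depth: np.ndarray, fx: float, fy: float,
                             cx: float, cy: float) -> np.ndarray:
    """Back-project an H x W depth map (in metres) into an (H*W, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx                       # pinhole back-projection
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```

With a 480 x 640 depth array, a call such as depth_map_to_point_cloud(depth, 525.0, 525.0, 319.5, 239.5) would yield a point cloud of the user from which the currently worn clothes could later be segmented.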
2. The method of claim 1, wherein said acquiring the target clothes recommended to the user according to the dressing depth information comprises:
forming a 3D model of the dressing according to the depth information of the dressing;
identifying the category to which the dressing belongs and the style of the dressing according to the 3D model of the dressing;
acquiring a candidate clothes set according to the style under the category to which the dressing belongs;
and selecting part or all of the candidate clothes from the candidate clothes set as the target clothes.
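As a non-authoritative illustration of claim 2, a candidate clothes set might be gathered as below; the Garment fields and the small catalogue are hypothetical stand-ins for whatever clothing database the device actually consults.

```python
from dataclasses import dataclass

@dataclass
class Garment:
    name: str
    category: str   # e.g. "coat" -- illustrative field, not claimed
    style: str      # e.g. "casual"

def candidate_set(catalog: list[Garment], category: str, style: str) -> list[Garment]:
    """Gather the candidate clothes of the recognised category and style."""
    return [g for g in catalog if g.category == category and g.style == style]

catalog = [Garment("parka", "coat", "casual"),
           Garment("blazer", "coat", "formal"),
           Garment("hoodie", "coat", "casual")]
print(candidate_set(catalog, "coat", "casual"))  # keeps the parka and the hoodie
```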
3. The method of claim 2, further comprising:
acquiring the thickness of the dressing according to the depth information of the dressing;
wherein said selecting part or all of the candidate clothes from the candidate clothes set as the target clothes comprises:
obtaining the thickness of each candidate garment in the candidate clothes set;
comparing the thickness of each candidate garment with the thickness of the dressing to obtain a first difference between the two thicknesses;
and selecting, as the target clothes, the candidate clothes whose first difference is within a preset first range.
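The thickness screening of claim 3 amounts to an absolute-difference filter. A minimal sketch follows, assuming thickness values in a common unit and a dictionary-based candidate record; the "thickness" field name is illustrative only.

```python
def filter_by_thickness(candidates: list[dict], worn_thickness: float,
                        first_range: float) -> list[dict]:
    """Keep the candidates whose thickness differs from the worn dressing's
    thickness (the 'first difference') by no more than first_range."""
    return [c for c in candidates
            if abs(c["thickness"] - worn_thickness) <= first_range]
```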
4. The method of claim 2, further comprising:
acquiring size information of the dressing according to the 3D model of the dressing;
wherein said selecting part or all of the candidate clothes from the candidate clothes set as the target clothes comprises:
acquiring size information of each candidate garment in the candidate clothes set;
comparing the size information of the dressing with the size information of each candidate garment to obtain a second difference between the two sizes;
and selecting, as the target clothes, the candidate clothes whose second difference is within a preset second range.
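Claim 4 leaves the form of the size information and of the "second difference" open. One plausible reading, sketched below on the assumption that a size is a small vector of measurements of equal length for every garment, uses the Euclidean distance between measurement vectors; both the vector layout and the "size" field name are assumptions.

```python
import math

def size_difference(worn: tuple[float, ...], candidate: tuple[float, ...]) -> float:
    """One plausible 'second difference': Euclidean distance between
    measurement vectors such as (chest, waist, length) in centimetres."""
    return math.dist(worn, candidate)

def filter_by_size(candidates: list[dict], worn_size: tuple[float, ...],
                   second_range: float) -> list[dict]:
    # Keep candidates whose measurement vector lies within the preset range.
    return [c for c in candidates
            if size_difference(worn_size, c["size"]) <= second_range]
```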
5. The method of claim 2, further comprising:
acquiring material information of the dressing;
wherein said selecting part or all of the candidate clothes from the candidate clothes set as the target clothes comprises:
acquiring material information of each candidate garment in the candidate clothes set;
and selecting, as the target clothes, the candidate clothes whose material is the same as that of the dressing.
6. The method of claim 2, further comprising:
acquiring color information of the dressing;
wherein said selecting part or all of the candidate clothes from the candidate clothes set as the target clothes comprises:
acquiring color information of each candidate garment in the candidate clothes set;
comparing the color information of the dressing with the color information of each candidate garment to obtain a color similarity between the dressing and the candidate garment;
and selecting from the candidate clothes set, as the target clothes, the candidate clothes whose color similarity exceeds a threshold value.
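Claim 6 fixes no particular colour-similarity measure. As one hedged possibility, the sketch below compares L1-normalised colour histograms by histogram intersection; the histogram representation and the "hist" field name are assumptions, not part of the claim.

```python
import numpy as np

def color_similarity(hist_a: np.ndarray, hist_b: np.ndarray) -> float:
    """Histogram intersection of two L1-normalised colour histograms;
    returns a value in [0, 1], where 1.0 means identical distributions."""
    return float(np.minimum(hist_a, hist_b).sum())

def filter_by_color(candidates: list[dict], worn_hist: np.ndarray,
                    threshold: float = 0.8) -> list[dict]:
    # Keep candidates whose colour distribution is close to the worn one.
    return [c for c in candidates
            if color_similarity(worn_hist, c["hist"]) > threshold]
```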
7. The method of claim 1, wherein said extracting depth information of the user's current dressing from the 3D model comprises:
identifying, from the 3D model, feature points belonging to the dressing;
and acquiring the depth information of the feature points to form the depth information of the dressing.
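A compact sketch of the extraction step of claim 7 is given below, assuming the 3D model is an (N, 3) point array and that a boolean mask already marks the feature points classified as belonging to the dressing; the classifier that produces that mask is outside both the sketch and the claim.

```python
import numpy as np

def dressing_depth(points: np.ndarray, is_dressing: np.ndarray) -> np.ndarray:
    """Collect the depth (z) coordinates of the feature points that were
    classified as belonging to the worn clothes."""
    return points[is_dressing, 2]
```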
8. The method according to any one of claims 1 to 7, wherein the structured light is non-uniform structured light, namely a speckle pattern or a random dot pattern consisting of a set of a plurality of light spots, formed by a diffractive optical element provided in a projection device on the terminal, the diffractive optical element being provided with a plurality of reliefs having different groove depths.
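In the device of claim 8 the non-uniform pattern is produced optically by the diffractive element; the snippet below merely renders, in software, a binary pseudo-random dot image of the kind such an element projects, so that the notion of a "random dot pattern" is concrete. It is an illustration, not the claimed optics, and the resolution, dot density, and seed are all assumptions.

```python
import numpy as np

def random_dot_pattern(h: int = 480, w: int = 640,
                       density: float = 0.05, seed: int = 0) -> np.ndarray:
    """A binary pseudo-random dot image: 1 marks a light spot, 0 darkness."""
    rng = np.random.default_rng(seed)
    return (rng.random((h, w)) < density).astype(np.uint8)
```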
9. A dressing recommendation device, comprising:
the model acquisition module is used for acquiring a 3D model of a user through structured light;
an extraction module, configured to extract depth information of the user's current dressing from the 3D model, wherein the depth information comprises a category to which the dressing belongs, a style of the dressing, and a thickness of the dressing, each obtained by reconstructing the form of the clothes currently worn by the user from the depth information of the dressing;
and the recommending module is used for acquiring the target clothes recommended to the user according to the dressing depth information.
10. A terminal device comprising a memory and a processor, the memory having stored therein computer readable instructions, which when executed by the processor, cause the processor to perform the dressing recommendation method of any one of claims 1 to 8.
11. One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the dressing recommendation method of any one of claims 1-8.
CN201710641558.1A 2017-07-31 2017-07-31 Dressing recommendation method and device Active CN107481101B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710641558.1A CN107481101B (en) 2017-07-31 2017-07-31 Dressing recommendation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710641558.1A CN107481101B (en) 2017-07-31 2017-07-31 Dressing recommendation method and device

Publications (2)

Publication Number Publication Date
CN107481101A CN107481101A (en) 2017-12-15
CN107481101B true CN107481101B (en) 2020-10-02

Family

ID=60598049

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710641558.1A Active CN107481101B (en) 2017-07-31 2017-07-31 Dressing recommendation method and device

Country Status (1)

Country Link
CN (1) CN107481101B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108319680A (en) * 2018-01-31 2018-07-24 维沃移动通信有限公司 A kind of clothes recommend method and terminal device
CN110348927A (en) * 2018-04-04 2019-10-18 阿里巴巴集团控股有限公司 Information method for displaying and processing, device and shops's system
CN108804546B (en) * 2018-05-18 2021-02-12 维沃移动通信有限公司 Clothing matching recommendation method and terminal
CN110648186B (en) * 2018-06-26 2022-07-01 杭州海康威视数字技术股份有限公司 Data analysis method, device, equipment and computer readable storage medium
CN109064275A (en) * 2018-07-24 2018-12-21 广东金熙商业建设股份有限公司 A kind of individual demand intelligent steering marketing system
CN109117779A (en) * 2018-08-06 2019-01-01 百度在线网络技术(北京)有限公司 One kind, which is worn, takes recommended method, device and electronic equipment
CN111028031A (en) * 2019-05-20 2020-04-17 珠海随变科技有限公司 Clothing recommendation method, device, equipment and storage medium
CN112785389A (en) * 2021-02-01 2021-05-11 广东睿住智能科技有限公司 Dressing recommendation method, storage medium and terminal device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104008571B (en) * 2014-06-12 2017-01-18 深圳奥比中光科技有限公司 Human body model obtaining method and network virtual fitting system based on depth camera
CN104966284A (en) * 2015-05-29 2015-10-07 北京旷视科技有限公司 Method and equipment for acquiring object dimension information based on depth data
CN106557753A (en) * 2016-11-14 2017-04-05 北京小米移动软件有限公司 The method and device of output prompting

Also Published As

Publication number Publication date
CN107481101A (en) 2017-12-15

Similar Documents

Publication Publication Date Title
CN107481101B (en) Dressing recommendation method and device
CN107481304B (en) Method and device for constructing virtual image in game scene
CN107480613B (en) Face recognition method and device, mobile terminal and computer readable storage medium
CN108765273B (en) Virtual face-lifting method and device for face photographing
CN107479801B (en) Terminal display method and device based on user expression and terminal
CN106705837B (en) Object measuring method and device based on gestures
US9141873B2 (en) Apparatus for measuring three-dimensional position, method thereof, and program
CN107491744B (en) Human body identity recognition method and device, mobile terminal and storage medium
CN107452034B (en) Image processing method and device
CN107564050B (en) Control method and device based on structured light and terminal equipment
CN107592449B (en) Three-dimensional model establishing method and device and mobile terminal
CN107463659B (en) Object searching method and device
CN107480615B (en) Beauty treatment method and device and mobile equipment
CN107392874B (en) Beauty treatment method and device and mobile equipment
CN107610171B (en) Image processing method and device
CN107507269A (en) Personalized three-dimensional model generating method, device and terminal device
CN107623817A (en) video background processing method, device and mobile terminal
Choe et al. Refining geometry from depth sensors using IR shading images
CN107623832A (en) Video background replacement method, device and mobile terminal
CN107438161A (en) Shooting picture processing method, device and terminal
CN107592490A (en) Video background replacement method, device and mobile terminal
CN107592491B (en) Video communication background display method and device
CN107613239B (en) Video communication background display method and device
CN107705278A (en) The adding method and terminal device of dynamic effect
CN107330974B (en) Commodity display method and device and mobile equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 18 Haibin Road, Wusha, Chang'an Town, Dongguan, Guangdong 523860

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

Address before: No. 18 Haibin Road, Wusha, Chang'an Town, Dongguan, Guangdong 523860

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

GR01 Patent grant