CN113537043A - Image processing method, image processing apparatus, electronic device, and storage medium - Google Patents
- Publication number
- CN113537043A (application CN202110794124.1A)
- Authority
- CN
- China
- Prior art keywords
- information
- item
- target
- image
- category
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/55—Clustering; Classification
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9535—Search customisation based on user profiles and personalisation
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
- G06Q30/0631—Item recommendations
- G06Q30/0641—Shopping interfaces
- G06Q30/0643—Graphical representation of items or shoppers
Abstract
The present disclosure provides an image processing method, an image processing apparatus, an electronic device, a storage medium, and a program product, relating to the technical field of data processing and, in particular, to intelligent search. A specific implementation scheme is as follows: identifying category information and attribute feature information of a target item object in image data; determining item identification information matching the target item object based on the category information and the attribute feature information, and providing the item identification information to the user; and, in response to a request for image processing, performing image processing on an image to be processed according to the category information and the attribute feature information to obtain a target image.
Description
Technical Field
The present disclosure relates to the field of data processing technologies, in particular to intelligent search, and more specifically to an image processing method and apparatus, an electronic device, a storage medium, and a program product.
Background
With the popularization of the internet, intelligent search has become one of its important tools. By providing services such as information query and retrieval, and by continuously optimizing its intelligence, personalization, interactivity, and proactivity in the course of providing those services, intelligent search helps users find the information they need quickly and accurately.
Disclosure of Invention
The present disclosure provides an image processing method, apparatus, electronic device, storage medium, and program product.
According to an aspect of the present disclosure, there is provided an image processing method including: identifying category information and attribute feature information of a target item object in image data; determining item identification information matching the target item object based on the category information and the attribute feature information, and providing the item identification information to the user; and, in response to a request for image processing, performing image processing on an image to be processed according to the category information and the attribute feature information to obtain a target image.
According to another aspect of the present disclosure, there is provided an image processing apparatus including: an identification module configured to identify category information and attribute feature information of a target item object in image data; an identification determining module configured to determine item identification information matching the target item object based on the category information and the attribute feature information, and to provide the item identification information to the user; and a response module configured to, in response to a request for image processing, perform image processing on an image to be processed according to the category information and the attribute feature information to obtain a target image.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as described above.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method as described above.
According to another aspect of the present disclosure, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the method as described above.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. In the drawings:
fig. 1 schematically illustrates an exemplary system architecture to which the image processing method and apparatus may be applied, according to an embodiment of the present disclosure;
FIG. 2 schematically shows a flow chart of an image processing method according to an embodiment of the present disclosure;
FIG. 3 schematically illustrates a schematic diagram of determining attribute feature information and category information of a target item object, according to an embodiment of the present disclosure;
FIG. 4 schematically illustrates a schematic diagram of determining attribute feature information and category information of a target item object according to another embodiment of the present disclosure;
FIG. 5 schematically illustrates a rendering of an image to be processed according to an embodiment of the disclosure;
FIG. 6 schematically shows a flow chart of an image processing method according to another embodiment of the present disclosure;
fig. 7 schematically shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure; and
fig. 8 schematically shows a block diagram of an electronic device adapted to implement an image processing method according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
With the rapid development of the internet, a user browsing a web page, watching a live stream, or engaging in other online activities may become interested in attractive clothing or exquisite makeup, and may wish to learn more about, or purchase, those items.
In practice, further item information can be obtained by searching for the same item with a text query. However, sparse or inaccurate query terms make it difficult for the search results to meet the user's expectations, let alone the user's precise needs. Alternatively, information on recommended same-style items can be obtained through official-account recommendations. However, the number and variety of items recommended in this way are limited, and it is difficult to satisfy search demand across diverse items.
Embodiments of the present disclosure provide an image processing method, apparatus, electronic device, storage medium, and program product.
According to an embodiment of the present disclosure, an image processing method may include: identifying category information and attribute feature information of the target item object in the image data; determining article identification information matched with the target article object based on the category information and the attribute characteristic information, and providing the article identification information for the user; and responding to the request for image processing, and performing image processing on the image to be processed according to the category information and the attribute characteristic information to obtain a target image.
With the image processing method provided by the embodiments of the present disclosure, the item identification information matching a target item object can be determined by identifying the category information and attribute feature information of the target item object in image data. Through the matched item identification information, the user can learn about the target item in greater depth. This avoids the problems of inaccurate query descriptions and unsatisfactory search results that arise from text-based search. Furthermore, the image to be processed can be processed according to the category information and attribute feature information to obtain a target image, combining the target item with the image to be processed; the virtual information is thereby visualized and made more intuitive, improving the user experience.
In the technical solution of the present disclosure, the acquisition, storage, and application of the personal information of the users involved all comply with the provisions of the relevant laws and regulations, and do not violate public order and good morals.
Fig. 1 schematically shows an exemplary system architecture to which the image processing method and apparatus may be applied, according to an embodiment of the present disclosure.
It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, and does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios. For example, in another embodiment, an exemplary system architecture to which the image processing method and apparatus may be applied may include a terminal device, but the terminal device may implement the image processing method and apparatus provided in the embodiments of the present disclosure without interacting with a server.
As shown in fig. 1, the system architecture 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104 and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired and/or wireless communication links, and so forth.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have installed thereon various communication client applications, such as a knowledge reading application, a web browser application, a search application, an instant messaging tool, a mailbox client, and/or social platform software, etc. (by way of example only).
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background management server (for example only) providing support for content browsed by the user using the terminal devices 101, 102, 103. The background management server may analyze and perform other processing on the received data such as the user request, and feed back a processing result (e.g., a webpage, information, or data obtained or generated according to the user request) to the terminal device.
It should be noted that the image processing method provided by the embodiment of the present disclosure may be generally executed by the terminal device 101, 102, or 103. Accordingly, the image processing apparatus provided by the embodiment of the present disclosure may also be provided in the terminal device 101, 102, or 103.
Alternatively, the image processing method provided by the embodiment of the present disclosure may also be generally executed by the server 105. Accordingly, the image processing apparatus provided by the embodiment of the present disclosure may be generally disposed in the server 105. The image processing method provided by the embodiment of the present disclosure may also be executed by a server or a server cluster that is different from the server 105 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Accordingly, the image processing apparatus provided by the embodiment of the present disclosure may also be disposed in a server or a server cluster different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105.
For example, when a user searches for a target item object in image data, the terminal device 101, 102, or 103 may acquire the image data and transmit it to the server 105, which analyzes the image data to identify the category information and attribute feature information of the target item object in the image data, and determines the item identification information matching the target item object based on that information. Alternatively, this may be performed by a server or server cluster capable of communicating with the terminal devices 101, 102, 103 and/or the server 105, ultimately determining the item identification information matching the target item object.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Fig. 2 schematically shows a flow chart of an image processing method according to an embodiment of the present disclosure.
As shown in fig. 2, the method includes operations S210 to S230.
In operation S210, category information and attribute feature information of the target item object in the image data are identified.
In operation S220, item identification information matching the target item object is determined based on the category information and the attribute feature information, and provided to the user.
In operation S230, in response to the request for image processing, the image processing is performed on the image to be processed according to the category information and the attribute characteristic information, resulting in a target image.
According to an embodiment of the present disclosure, the source of the image data is not limited. For example, the image data may be image data acquired by an image acquisition device, image data downloaded from a web page, or video frame data obtained by extracting a video.
According to an embodiment of the present disclosure, the type of the target item object is not particularly limited. For example, it may be a cosmetic product applied to the face, clothing worn on the body, or another article.
According to an embodiment of the present disclosure, the category information may be information for distinguishing an article category, and may indicate a category to which the target article object in the image data belongs, for example, a category of cosmetics, clothing, accessories, or the like.
According to an embodiment of the present disclosure, the type of the attribute feature information is not limited. For example, the attribute feature information may describe the material, color, pattern, or size of the target item object.
According to an embodiment of the present disclosure, the type of the item identification information is not limited. For example, it may be a two-dimensional code, a bar code, or an item identification number — anything that can serve as a unique identifier for the item.
According to an embodiment of the present disclosure, the item identification information matched with the target item object may be, but is not limited to, the identification information of the target item object itself; it may also be the identification information of items similar or related to the target item object.
With the image processing method provided by the embodiments of the present disclosure, the item identification information matching the target item object can be determined by identifying the category information and attribute feature information of the target item object in the image data. The user can thus learn about the target item through the matched identification information. This solves the problem that text-based search by the user leads to inaccurate query descriptions and, in turn, inaccurate search results that fail to meet the user's needs.
According to the embodiments of the present disclosure, the content of the image to be processed is not limited. For example, the image may include one or more human body parts such as a face, hair, or a body, and may have background content such as an animal, a house, or a natural landscape.
According to an embodiment of the present disclosure, the manner of image processing is not limited. For example, the processing may be image rendering, image synthesis, or image deformation, and will not be described in detail here.
By utilizing the embodiments of the present disclosure, the image to be processed is processed according to the category information and attribute feature information to obtain the target image, and the target item is combined with the image to be processed, so that the virtual information is visualized, more intuitive, and provides a good user experience.
According to an embodiment of the present disclosure, the category information and the attribute feature information of the target item object may be determined by the following operations.
For example, in response to a request to identify the image data, attribute feature information and attached location information of the target item object in the image data are identified, where the attached location information characterizes the position on the object to which the target item object is attached; the category information of the target item object is then determined based on the attached location information.
According to embodiments of the present disclosure, the attribute feature information and attached location information of the target item object may be identified by recognition models.
According to an embodiment of the present disclosure, the attribute feature information may include one or more of color category feature information, color model feature information, and color texture feature information.
According to an embodiment of the present disclosure, the color category characteristic information may be main color category characteristic information of red, yellow, blue, white, black, and the like.
According to an embodiment of the present disclosure, the color model characteristic information may be color number information, i.e., the finer color divisions within a main color category — for example, bright red, rose, or orange within the red category. The color model characteristic information may be identified by a color number.
According to an embodiment of the present disclosure, the color category characteristic information and the color model characteristic information may be identified by a single color identification model, or by a separate color category identification model and color model identification model, respectively.
According to an embodiment of the present disclosure, by finely distinguishing the color category characteristic information and the color model characteristic information of the target item object, the item identification information matching the target item object can be matched more accurately.
According to an embodiment of the present disclosure, the color texture characteristic information may be, for example, matte, pearlescent, or normal.
According to an embodiment of the present disclosure, the color texture feature information may be recognized by a texture recognition model.
According to an exemplary embodiment of the present disclosure, the color category, color model, and color texture characteristic information are recognized, and together used to determine the item identification information matching the target item object. This is particularly well suited to identifying cosmetic item objects and determining their item identification information.
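As a purely illustrative sketch of the color category / color model (color number) distinction, a nearest-reference-color lookup might look like the following. The reference table, RGB values, and function names are assumptions for illustration, not values from the disclosure:

```python
# Hypothetical reference table: (main color category, color number) -> RGB.
REFERENCE_COLORS = {
    ("red", "bright red"): (227, 38, 54),
    ("red", "rose"):       (255, 0, 127),
    ("red", "orange-red"): (255, 69, 0),
    ("blue", "navy"):      (0, 0, 128),
}

def classify_color(rgb):
    """Return the (color category, color number) pair whose reference
    color is nearest (in squared RGB distance) to the sampled color."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(REFERENCE_COLORS, key=lambda key: dist2(REFERENCE_COLORS[key], rgb))

print(classify_color((250, 5, 120)))  # ('red', 'rose')
```

A production system would of course learn these distinctions with the recognition models named in the text rather than a fixed table; the sketch only shows how a main category and a finer color number can both fall out of one classification step.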
The attached location information is not particularly limited according to an embodiment of the present disclosure. For example, it may include the position information of the facial features of the object (e.g., a person) to which the target item is attached, the head position information of that object, or its body position information. The recognition may be performed by an object recognition model, such as a facial-feature recognition model or a human-body recognition model.
According to an embodiment of the present disclosure, the category information of the target item object — particularly of a cosmetic-type item object — may be determined from the attached location information. For example, the eye corresponds to eye shadow, the cheek to blush, the eyebrow to an eyebrow pencil, the face to foundation, and the lip to lipstick.
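The location-to-category correspondence just described reduces to a simple lookup table. The table below merely restates the examples from the text; the function name and the "unknown" fallback are illustrative assumptions:

```python
# Correspondence between the attached facial location and the cosmetic
# category, restating the examples given in the description.
LOCATION_TO_CATEGORY = {
    "eye": "eye shadow",
    "cheek": "blush",
    "eyebrow": "eyebrow pencil",
    "face": "foundation",
    "lip": "lipstick",
}

def category_from_location(location: str) -> str:
    # Fall back to "unknown" for locations with no registered category.
    return LOCATION_TO_CATEGORY.get(location, "unknown")

print(category_from_location("lip"))  # lipstick
```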
An image processing method according to an embodiment of the disclosure is further described with reference to fig. 3 to 6.
Fig. 3 schematically shows a schematic diagram of determining attribute feature information and category information of a target item object according to an embodiment of the present disclosure.
As shown in fig. 3, in response to a user's request to recognize, for example, face image data, the attribute feature information in the image data may be identified using a facial-feature recognition model, a color recognition model, and a texture recognition model. The final recognition result is that the attached location is the mouth, the color number is rose, and the color texture is pearlescent. Based on the attached location information, the category information of the corresponding target item object may then be determined to be lipstick.
With the manner of determining attribute feature information and category information provided by this embodiment, a variety of information in the image data can be acquired, so the identification range is wide and the search is comprehensive.
According to an embodiment of the present disclosure, the category information and the attribute feature information of the target item object may also be determined by the following operations.
For example, a request for identifying the image data is acquired, where the request carries the category information of the target item object; in response to the request, the attached location information corresponding to the category information is determined from the image data, where the attached location information characterizes the position on the object to which the target item object is attached; and the attribute feature information of the target item object is then identified from the image data based on the attached location information.
According to this embodiment, when responding to the request for identifying the image data, the category information of the target item object is already known; it is only necessary to identify the attached location corresponding to that category using the facial-feature recognition model, and then to identify the attribute feature information at that location using the color recognition model and the texture recognition model, respectively.
Fig. 4 schematically shows a schematic diagram of determining attribute feature information and category information of a target item object according to another embodiment of the present disclosure.
As shown in fig. 4, when the category information of the target item object is known to be lipstick, the attached position information (mouth position information) in the image data is recognized by the facial-feature recognition model in response to the request for recognizing the face image data. The attribute feature information corresponding to the mouth position is then recognized using the color recognition model and the texture recognition model — for example, the color number is rose and the color texture is pearlescent.
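The category-first flow of fig. 4 can be sketched by inverting a location table: given the known category, first find where to look, then run the color and texture models only on that region. The stubbed recognizers and all names below are assumptions standing in for the models named in the text:

```python
# Hypothetical inverse table: category -> attached location to examine.
LOCATION_FOR_CATEGORY = {"lipstick": "mouth", "eye shadow": "eye", "blush": "cheek"}

def recognize_attributes(image_data: bytes, category: str) -> dict:
    """Targeted recognition: locate the region for a known category,
    then recognize attribute features only there."""
    location = LOCATION_FOR_CATEGORY.get(category)
    if location is None:
        raise ValueError(f"no known location for category {category!r}")
    # Stand-ins for the color and texture recognition models, which
    # would examine only the region at `location` in `image_data`.
    color_number = "rose"    # color recognition model (stubbed)
    texture = "pearlescent"  # texture recognition model (stubbed)
    return {"location": location, "color number": color_number, "texture": texture}

print(recognize_attributes(b"face image", "lipstick")["location"])  # mouth
```

Restricting recognition to one region is what makes this variant faster and more targeted than scanning the whole image.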
With the manner of determining attribute feature information and category information provided by this other embodiment, the identification is targeted, and therefore faster and more accurate.
According to an embodiment of the present disclosure, item identification information matching a target item object may be determined based on category information and attribute feature information by the following operations.
For example, screening candidate item objects matched with the category information and the attribute feature information from the item object set; and determining the article identification information of the candidate article object as the article identification information matched with the target article object under the condition that the candidate article object is determined to meet the preset condition.
According to the embodiment of the present disclosure, the item object matching the target item object may be the target item object, and may also be an item object similar to or related to the target item object.
According to an embodiment of the present disclosure, one or more candidate item objects may be screened out from a set of item objects based on category information and attribute feature information as matching information.
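The screening step can be sketched as a simple filter over the item object set. The dictionary field names (`category`, `attributes`) are illustrative assumptions, not part of the disclosure.

```python
# Hypothetical sketch of screening candidate item objects by category
# information and attribute feature information.

def screen_candidates(item_set, category, attributes):
    """Return the items whose category matches and whose attributes contain
    every requested attribute value."""
    return [
        item for item in item_set
        if item["category"] == category
        and all(item["attributes"].get(k) == v for k, v in attributes.items())
    ]
```

A real system would perform this matching as an indexed retrieval rather than a linear scan, but the matching condition is the same.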
According to an embodiment of the present disclosure, before determining the item identification information of the candidate item object as the item identification information matching the target item object, the image processing method provided by an embodiment of the present disclosure may perform an operation of determining whether the candidate item object satisfies a preset condition.
According to the embodiment of the disclosure, further screening the candidate item objects with the preset condition not only brings the candidates closer to the user's requirements, but also avoids information redundancy that would otherwise degrade the user experience.
According to an embodiment of the present disclosure, whether the candidate item object satisfies the preset condition may be determined as follows.
For example, a value attribute value for the candidate item object is determined; and determining that the candidate item object satisfies a preset condition under the condition that the value attribute value of the candidate item object is determined to be less than or equal to a preset value threshold.
According to the embodiment of the disclosure, with a value attribute value less than or equal to the preset value threshold as the preset condition, the item identification information of a candidate item object whose value attribute value is less than or equal to the preset value threshold is determined as the item identification information matched with the target item object, so that affordable item objects identical or similar to the target item object can be screened out quickly.
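The value-threshold condition above amounts to a one-line filter. The `value` field (e.g. a price) and the threshold are illustrative assumptions.

```python
# Hypothetical sketch: keep only candidates whose value attribute value
# does not exceed the preset value threshold (budget alternatives).

def filter_by_value(candidates, value_threshold):
    """Candidates satisfying value <= threshold meet the preset condition."""
    return [c for c in candidates if c["value"] <= value_threshold]
```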
According to an embodiment of the present disclosure, whether the candidate item object satisfies the preset condition may also be determined as follows.
For example, determining a degree of interest for each of a plurality of candidate item objects; and determining the candidate object with the highest attention as the candidate object meeting the preset condition.
According to an embodiment of the present disclosure, the type of attention degree of the candidate item object is not limited. For example, it may be the user's interest degree; the popularity (heat) of the candidate item object, such as the number of collections, additions to shopping carts, or purchases; or the time to market of the candidate item object.
According to an exemplary embodiment of the present disclosure, the degree of interest of the candidate item object may be determined based on a plurality of factors, such as the degree of interest, the degree of heat, and the time to market. For example, different weighting factors may be configured for the interest level, the heat level, and the time to market, and the attention level is determined by the total weight.
Degree of attention = degree of interest × first weight + degree of heat × second weight + time to market × third weight.
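One possible reading of this weighted combination is sketched below. The weight values and the normalization of each factor to [0, 1] are assumptions made for illustration; the disclosure only states that configurable weights are combined.

```python
# Hypothetical sketch of the weighted attention degree and of selecting the
# candidate with the highest attention.

def attention_score(interest, heat, freshness,
                    w_interest=0.5, w_heat=0.3, w_freshness=0.2):
    """Weighted sum of interest degree, heat degree, and a time-to-market
    freshness factor, each assumed normalized to [0, 1]."""
    return interest * w_interest + heat * w_heat + freshness * w_freshness

def most_attended(candidates):
    """Return the candidate item object with the highest attention degree."""
    return max(candidates, key=lambda c: attention_score(
        c["interest"], c["heat"], c["freshness"]))
```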
According to the embodiment of the disclosure, by taking the highest attention degree as the preset condition, the candidate item object with the highest attention degree is determined as the candidate item object meeting the preset condition, which is the one closest to the user's requirements.
According to an exemplary embodiment of the present disclosure, a value attribute value of a candidate item object may be further determined, and a degree of attention of each candidate item object of a plurality of candidate item objects may be determined, a candidate item object having a value attribute value less than or equal to a preset value threshold may be determined as a candidate item object satisfying a first preset condition, a candidate item object having a degree of attention satisfying a preset degree of attention threshold may be determined as a candidate item object satisfying a second preset condition, and a candidate item object satisfying both the first preset condition and the second preset condition may be determined as a candidate item object satisfying the preset condition.
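The combined check of the first and second preset conditions can be sketched as follows; the field names and thresholds are illustrative assumptions.

```python
# Hypothetical sketch: a candidate meets the preset condition only when it
# satisfies BOTH the value (price) condition and the attention condition.

def satisfies_both(candidate, value_threshold, attention_threshold):
    """First condition: value attribute value within the preset threshold.
    Second condition: attention degree meets the preset attention threshold."""
    first = candidate["value"] <= value_threshold
    second = candidate["attention"] >= attention_threshold
    return first and second
```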
According to the embodiment of the disclosure, the value attribute value and the attention degree are considered in a combined manner, so that the flat item object or the same-money item object meeting the personalized requirements of the user can be screened out.
According to an embodiment of the present disclosure, the item identification information matched with the target item object may include: one or more items of item link information of the target item object, item model information of the target item object, item value attribute value information of the target item object, item link information of an item similar to the target item object, item model information of an item similar to the target item object, and item value attribute value information of an item similar to the target item object.
According to the embodiment of the present disclosure, the item identification information matched with the target item object may be the item identification information of the target item object, and may also be the item identification information of an item object similar to the target item object.
According to the embodiment of the present disclosure, the article identification information may be a two-dimensional code, a barcode, or an article identification certificate, but is not limited thereto. Link information, model information, article value attribute value information, and the like may also be attached.
According to the embodiment of the disclosure, after the attribute feature information and the category information of the target object are identified, the object identification information matched with the target object is further determined, so that the user can quickly and deeply know the target object or the object similar to the target object, and the purchase demand of the user is met.
According to the embodiment of the disclosure, after the item identification information matched with the target item object is determined, the target object can be obtained through the following operations and displayed to the user.
For example, in response to a request for image processing, target position information of an image to be processed is determined based on the category information; and rendering the target position information of the image to be processed based on the attribute characteristic information to obtain a target image.
According to an embodiment of the present disclosure, the target location information may be one or more of a plurality of attached location information, for example, one or more of five sense organs, a head of a human body, a body, and the like. After the target position information is determined, the texture information of the image to be processed can be determined. The target position information of the image to be processed can be rendered according to one or more of color category characteristic information, color model characteristic information, color texture characteristic information and the like in the attribute characteristic information to obtain a rendered target image.
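The rendering of the target position can be sketched as an alpha blend of the attribute color into the masked region. The mask, the blend factor, and the use of NumPy here are illustrative assumptions; in a real system the mask would come from the five sense organs recognition model.

```python
# Hypothetical sketch: blend the attribute feature color into the pixels
# selected by the target-position mask (e.g. the lips), leaving the rest
# of the image to be processed untouched.
import numpy as np

def render_region(image, region_mask, color, alpha=0.5):
    """Return a copy of `image` (H x W x 3, uint8) with `color` blended
    into the pixels where `region_mask` (H x W, bool) is True."""
    out = image.astype(np.float32).copy()
    color = np.asarray(color, dtype=np.float32)
    out[region_mask] = (1 - alpha) * out[region_mask] + alpha * color
    return out.astype(np.uint8)
```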
It should be noted that the operation of rendering the image may be skipped based on a skip request from the user, and is executed only after a request for image processing is acquired from the user, in response to that request.
The image processing method provided by the embodiment of the disclosure has high selectivity, flexibility and intelligence, and meets various requirements of users.
Fig. 5 schematically shows a schematic diagram of rendering an image to be processed according to an embodiment of the present disclosure.
As shown in fig. 5, rendering an image 510 to be processed may simulate an operation of automatically trying to make up a face. The image to be processed 510 may be a face image determined by a user in a gallery, or may be a face image taken online. The operations such as whole face makeup trial, lip makeup trial, eye shadow makeup trial and the like can be performed based on the instruction of the user. The facial feature recognition model may be used to identify target location information 520 in the image to be processed, such as lips, which are then colored based on the attribute feature information, resulting in a lipstick-coated target image 530.
By utilizing the image processing method provided by the embodiment of the disclosure, online makeup trial can be simulated through rendering of the image to be processed, and the visualization effect is good.
It should be noted that, with the image processing method provided by the embodiment of the present disclosure, not only the rendering process may be performed on the image to be processed, but also the synthesis process may be performed on the image to be processed, for example, the target object (object such as a bag, a piece of clothing, a hair accessory, or jewelry) and the image to be processed are spliced and synthesized into one image, which is used as the target image.
According to the embodiment of the disclosure, by intelligently identifying the position information of the five sense organs and identifying the attribute feature information of the cosmetic item object in the image data, such as color category feature information, color model feature information, texture feature information, and category information, the method directly addresses the difficulty users face in describing such items with language and text when searching.
Fig. 6 schematically shows a flow chart of an image processing method according to another embodiment of the present disclosure.
As shown in fig. 6, the source of the image data 610 may be a picture or a video. Data in the picture can be directly extracted to serve as image data, key frames in the video frames can also be extracted, and data in the key frames are extracted to serve as image data.
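The picture/video handling above can be sketched as follows. The source representation and the fixed key-frame stride are illustrative assumptions; a real system would use a proper video decoder and key-frame detection.

```python
# Hypothetical sketch: use a picture directly as image data; for a video,
# sample key frames at a fixed stride.

def extract_image_data(source):
    """Return a list of image-data frames from a picture or video source."""
    if source["type"] == "picture":
        return [source["data"]]
    if source["type"] == "video":
        stride = 30  # assumed: roughly one key frame per second at 30 fps
        return source["frames"][::stride]
    raise ValueError(f"unsupported source type: {source['type']}")
```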
The category information and attribute feature information of the target item object in the image data 610 are intelligently recognized using a deep learning recognition model. Based on these, a mapping retrieval is performed over the item object set using the attribute feature information, such as color category information, color model information, and color texture information, together with the category information; the item identification information 620 matching the target item object is determined and displayed to the user.
In this embodiment, the user may also be offered a choice of whether to perform a makeup trial operation, such as trying the makeup or skipping it. In response to a user's request to perform the makeup trial operation, facial information may be collected online, and the face image may be colored based on the attribute feature information and the category information to obtain a made-up target image 630.
The user can judge the makeup trying effect according to the target image and finally determine whether to purchase the target object or an object similar to the target object.
The image processing method provided by the embodiment of the disclosure is suitable for scenes such as pictures or videos. It can accurately identify the attribute feature information and category information of the target item object in the image data, determine the item identification information matched with the target item object (that is, retrieve identical or similar items that achieve the same effect), and support makeup trial and virtual makeup. The method realizes a complete closed loop of viewing, trying, and buying, intelligently meets various requirements of users, and solves the problem of inaccurate search by language and text description.
Fig. 7 schematically shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
As shown in fig. 7, the image processing apparatus 700 may include a recognition module 710, an identification determination module 720, and a response module 730.
An identifying module 710 for identifying the category information and attribute feature information of the target item object in the image data;
an identification determination module 720, configured to determine item identification information matched with the target item object based on the category information and the attribute feature information, and provide the item identification information to the user; and
a response module 730, configured to perform, in response to the request for image processing, image processing on the image to be processed according to the category information and the attribute feature information to obtain a target image.
According to an embodiment of the present disclosure, the recognition module 710 may include a first recognition unit, and a category determination unit.
A first identifying unit, configured to identify attribute feature information and attaching position information of a target item object in the image data in response to a request for identifying the image data, wherein the attaching position information is used to represent position information of an object to which the target item object is attached;
and the category determining unit is used for determining the category information of the target object based on the attachment position information.
According to an embodiment of the present disclosure, the identification module 710 may include an acquisition unit, a location determination unit, and a second identification unit.
An acquisition unit, configured to acquire a request for identifying the image data, wherein the request carries the category information of the target item object;
a position determining unit configured to determine attachment position information corresponding to the category information from the image data in response to the request, wherein the attachment position information is used to represent position information of an object to which the target item object is attached;
and a second identification unit for identifying attribute feature information of the target object from the image data based on the attached position information.
According to an embodiment of the present disclosure, the identity determination module 720 may include a screening unit, and an identity determination unit.
The screening unit is used for screening candidate article objects matched with the category information and the attribute characteristic information from the article object set; and
and the identification determining unit is used for determining the item identification information of the candidate item object as the item identification information matched with the target item object under the condition that the candidate item object is determined to meet the preset condition.
According to an embodiment of the present disclosure, the image processing apparatus 700 may further include a value determination module, and a first candidate determination module.
A value determination module for determining a value attribute value of the candidate item object; and
and the first candidate determining module is used for determining that the candidate object meets the preset condition under the condition that the value attribute value of the candidate object is less than or equal to the preset value threshold value.
According to an embodiment of the present disclosure, the image processing apparatus 700 may further include a degree of attention determining module, and a second candidate determining module.
A degree of attention determination module for determining a degree of attention for each of a plurality of candidate item objects; and
and the second candidate determining module is used for determining the candidate item object with the highest attention as the candidate item object meeting the preset condition.
According to an embodiment of the present disclosure, the response module 730 may include a response unit, and a rendering unit.
A response unit for determining target position information of the image to be processed based on the category information in response to the request for image processing;
and the rendering unit is used for rendering the target position information of the image to be processed based on the attribute characteristic information to obtain the target image.
According to an embodiment of the present disclosure, the attribute feature information includes at least one of: color category characteristic information, color model characteristic information and color texture characteristic information.
According to an embodiment of the present disclosure, the item identification information matched to the target item object includes at least one of: item link information of the target item object, item model information of the target item object, item value attribute value information of the target item object, item link information of an item similar to the target item object, item model information of an item similar to the target item object, and item value attribute value information of an item similar to the target item object.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
According to an embodiment of the present disclosure, an electronic device includes: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as described above.
According to an embodiment of the present disclosure, a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method as described above.
According to an embodiment of the disclosure, a computer program product comprising a computer program which, when executed by a processor, implements the method as described above.
FIG. 8 illustrates a schematic block diagram of an example electronic device 800 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the device 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the device 800 can also be stored. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
A number of components in the device 800 are connected to the I/O interface 805, including: an input unit 806, such as a keyboard, a mouse, or the like; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, or the like; and a communication unit 809 such as a network card, modem, wireless communication transceiver, etc. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server with a combined blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.
Claims (19)
1. An image processing method comprising:
identifying category information and attribute feature information of the target item object in the image data;
determining article identification information matched with the target article object based on the category information and the attribute characteristic information, and providing the article identification information for a user; and
in response to a request for image processing, performing image processing on the image to be processed according to the category information and the attribute feature information to obtain a target image.
2. The method of claim 1, wherein the identifying category information and attribute feature information of the target item object in the image data comprises:
in response to a request to identify the image data, identifying attribute feature information and affiliation location information of a target item object in the image data, wherein the affiliation location information is used to characterize location information of an object to which the target item object is affiliated;
based on the affiliate location information, category information for the target item object is determined.
3. The method of claim 1, wherein the identifying category information and attribute feature information of the target item object in the image data comprises:
acquiring a request for identifying the image data, wherein the request carries the category information of a target object;
determining attachment location information corresponding to the category information from the image data in response to the request, wherein the attachment location information is used to characterize location information of an object to which the target item object is attached;
identifying the attribute feature information of the target object from the image data based on the affiliated location information.
4. The method of claim 1, wherein the determining item identification information that matches the target item object based on the category information and the attribute feature information comprises:
screening candidate item objects matched with the category information and the attribute characteristic information from the item object set;
and under the condition that the candidate object is determined to meet the preset condition, determining the object identification information of the candidate object as the object identification information matched with the target object.
5. The method of claim 4, further comprising:
determining a value attribute value for the candidate item object; and
determining that the candidate item object satisfies the preset condition if it is determined that the value attribute value of the candidate item object is less than or equal to a preset value threshold.
6. The method of claim 4, further comprising:
determining a degree of interest for each of a plurality of candidate item objects;
and determining the candidate article object with the highest attention as the candidate article object meeting the preset condition.
7. The method of claim 1, wherein the performing image processing on the image to be processed in response to the request for image processing to obtain the target image comprises:
determining target position information of the image to be processed based on the category information in response to a request for image processing;
and rendering the target position information of the image to be processed based on the attribute feature information to obtain a target image.
8. The method of claim 1, wherein the attribute feature information comprises at least one of:
color category characteristic information, color model characteristic information and color texture characteristic information;
the item identification information that matches the target item object includes at least one of:
item link information of the target item object, item model information of the target item object, item value attribute value information of the target item object, item link information of an item similar to the target item object, item model information of an item similar to the target item object, and item value attribute value information of an item similar to the target item object.
9. An image processing apparatus comprising:
the identification module is used for identifying the category information and the attribute characteristic information of the target object in the image data;
the identification determining module is used for determining the item identification information matched with the target item object based on the category information and the attribute characteristic information and providing the item identification information for the user; and
and the response module is used for responding to the request for image processing, and performing image processing on the image to be processed according to the category information and the attribute characteristic information to obtain a target image.
10. The apparatus of claim 9, wherein the identification module comprises:
a first identification unit, configured to identify attribute feature information and attachment location information of a target item object in the image data in response to a request for identifying the image data, wherein the attachment location information is used to characterize location information of an object to which the target item object is attached;
a category determination unit, configured to determine the category information of the target item object based on the attachment location information.
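The category determination of claim 10 can be reduced to a lookup: the location an item is attached to (e.g. a facial part) implies its category. A minimal sketch, with an illustrative mapping the patent does not itself specify:

```python
# Hypothetical mapping from attachment location to item category.
LOCATION_TO_CATEGORY = {
    "lips": "lipstick",
    "eyebrows": "eyebrow pencil",
    "cheeks": "blush",
}

def determine_category(attachment_location: str) -> str:
    """Category determination unit: map the attachment location of the
    target item object to its category information."""
    return LOCATION_TO_CATEGORY.get(attachment_location, "unknown")
```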
11. The apparatus of claim 9, wherein the identification module comprises:
an acquisition unit, configured to acquire a request for identifying the image data, wherein the request carries the category information of the target item object;
a position determining unit configured to determine attachment position information corresponding to the category information from the image data in response to the request, wherein the attachment position information is used to represent position information of an object to which the target item object is attached;
a second identifying unit, configured to identify the attribute feature information of the target item object from the image data based on the attachment position information.
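Claim 11 reverses the flow of claim 10: the request already carries the category, the device looks up where that category attaches, then reads attribute features from that region. A hypothetical sketch in which detection on pixel data is stood in for by pre-summarized region colors:

```python
# Illustrative inverse mapping; real systems would derive this from a model.
CATEGORY_TO_LOCATION = {"lipstick": "lips", "blush": "cheeks"}

def identify_attribute_features(region_colors: dict, category: str) -> dict:
    """Given per-region dominant colors (a stand-in for image data) and the
    category carried by the request, return attribute feature information."""
    location = CATEGORY_TO_LOCATION[category]           # position determining unit
    return {"color_category": region_colors[location]}  # second identifying unit
```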
12. The apparatus of claim 9, wherein the identity determination module comprises:
a screening unit, configured to screen candidate item objects matching the category information and the attribute feature information from an item object set;
an identification determining unit, configured to determine the item identification information of the candidate item object as the item identification information matched with the target item object in a case that the candidate item object is determined to meet a preset condition.
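The screening unit of claim 12 amounts to filtering the item object set on both keys. A minimal sketch, assuming items are dictionaries with `category` and `attributes` fields (names chosen here for illustration):

```python
def screen_candidates(item_set, category, attributes):
    """Screening unit sketch: keep items whose category information and
    attribute feature information both match the target item object."""
    return [
        item for item in item_set
        if item["category"] == category and item["attributes"] == attributes
    ]
```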
13. The apparatus of claim 12, further comprising:
a value determination module, configured to determine a value attribute value of the candidate item object; and
a first candidate determining module, configured to determine that the candidate item object satisfies the preset condition when it is determined that the value attribute value of the candidate item object is less than or equal to a preset value threshold.
14. The apparatus of claim 12, further comprising:
a degree-of-attention determination module, configured to determine a degree of attention of each of a plurality of candidate item objects; and
a second candidate determining module, configured to determine the candidate item object with the highest degree of attention as the candidate item object meeting the preset condition.
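Claims 13 and 14 state two preset conditions: a value (e.g. price) at or below a threshold, and the highest degree of attention. A sketch combining both, under the assumption (not made explicit by the claims) that candidates carry `value` and `attention` fields:

```python
def select_matching_item(candidates, value_threshold):
    """Keep candidates whose value attribute is less than or equal to the
    preset value threshold (claim 13), then return the one with the highest
    degree of attention (claim 14), or None if nothing qualifies."""
    affordable = [c for c in candidates if c["value"] <= value_threshold]
    return max(affordable, key=lambda c: c["attention"], default=None)
```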
15. The apparatus of claim 9, wherein the response module comprises:
a response unit configured to determine target position information of the image to be processed based on the category information in response to a request for image processing;
a rendering unit, configured to render, based on the attribute feature information, a region of the image to be processed indicated by the target position information, to obtain the target image.
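The rendering unit of claim 15 can be illustrated by its simplest possible form, flat recoloring of the target region; a real implementation would blend colors and respect texture, which the claim leaves open:

```python
def render_region(image, target_box, color):
    """Rendering unit sketch: recolor the target region of the image to be
    processed with the identified attribute color. `image` is a list of
    rows of RGB tuples; `target_box` is (row_start, row_stop, col_start,
    col_stop), half-open ranges."""
    row0, row1, col0, col1 = target_box
    for r in range(row0, row1):
        for c in range(col0, col1):
            image[r][c] = color
    return image
```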
16. The apparatus of claim 9, wherein the attribute feature information comprises at least one of:
color category feature information, color model feature information, and color texture feature information;
the item identification information that matches the target item object includes at least one of:
item link information of the target item object, item model information of the target item object, item value attribute value information of the target item object, item link information of an item similar to the target item object, item model information of an item similar to the target item object, and item value attribute value information of an item similar to the target item object.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
18. A non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-8.
19. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110794124.1A CN113537043B (en) | 2021-07-14 | 2021-07-14 | Image processing method, device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113537043A true CN113537043A (en) | 2021-10-22 |
CN113537043B CN113537043B (en) | 2023-08-18 |
Family
ID=78127877
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110794124.1A Active CN113537043B (en) | 2021-07-14 | 2021-07-14 | Image processing method, device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113537043B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9916613B1 (en) * | 2014-06-26 | 2018-03-13 | Amazon Technologies, Inc. | Automatic color palette based recommendations for affiliated colors |
CN108846792A (en) * | 2018-05-23 | 2018-11-20 | 腾讯科技(深圳)有限公司 | Image processing method, device, electronic equipment and computer-readable medium |
US10242395B1 (en) * | 2015-04-30 | 2019-03-26 | Amazon Technologies, Inc. | Providing shopping links to items on a network page |
CN112819767A (en) * | 2021-01-26 | 2021-05-18 | 北京百度网讯科技有限公司 | Image processing method, apparatus, device, storage medium, and program product |
CN112905889A (en) * | 2021-03-03 | 2021-06-04 | 百度在线网络技术(北京)有限公司 | Clothing searching method and device, electronic equipment and medium |
Non-Patent Citations (4)
Title |
---|
JING LIAO et al.: "Visual attribute transfer through deep image analogy", ACM TRANSACTIONS ON GRAPHICS, vol. 36, no. 4, XP058372847, DOI: 10.1145/3072959.3073683 *
ZHOU JING: "Research on Personalized Clothing Recommendation Based on User Preference and Image Content", China Master's Theses Full-text Database, no. 2 *
ZHANG MENGYAN; HE RUHAN: "Clothing Label Attribute Recognition Based on an Improved Residual Neural Network", Journal of Shangqiu Normal University, no. 06 *
PI SIYUAN; TANG HONG; XIAO NANFENG: "Graspable Object Recognition Based on a Fully Convolutional Deep Learning Model", Journal of Chongqing University of Technology (Natural Science), no. 02 *
Also Published As
Publication number | Publication date |
---|---|
CN113537043B (en) | 2023-08-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10242396B2 (en) | Automatic color palette based recommendations for affiliated colors | |
US10083521B1 (en) | Content recommendation based on color match | |
US10019779B2 (en) | Browsing interface for item counterparts having different scales and lengths | |
EP4062987A1 (en) | Method and apparatus for generating virtual character | |
US9607010B1 (en) | Techniques for shape-based search of content | |
JP2020522072A (en) | Fashion coordination recommendation method and device, electronic device, and storage medium | |
US20180053234A1 (en) | Description information generation and presentation systems, methods, and devices | |
CN109409994A (en) | The methods, devices and systems of analog subscriber garments worn ornaments | |
CN106055710A (en) | Video-based commodity recommendation method and device | |
KR102115573B1 (en) | System, method and program for acquiring user interest based on input image data | |
US11037071B1 (en) | Cross-category item associations using machine learning | |
US10026176B2 (en) | Browsing interface for item counterparts having different scales and lengths | |
US11972466B2 (en) | Computer storage media, method, and system for exploring and recommending matching products across categories | |
CN116894711A (en) | Commodity recommendation reason generation method and device and electronic equipment | |
US11842457B2 (en) | Method for processing slider for virtual character, electronic device, and storage medium | |
US20210019567A1 (en) | Systems and methods for identifying items in a digital image | |
CN103279519B (en) | Articles search method and apparatus | |
CN106776898A (en) | A kind of method and device that information recommendation relative article is browsed according to user | |
CN113537043B (en) | Image processing method, device, electronic equipment and storage medium | |
CN112750004B (en) | Cold start recommendation method and device for cross-domain commodity and electronic equipment | |
CN111767925B (en) | Feature extraction and processing method, device, equipment and storage medium of article picture | |
CN111612571A (en) | Feature matching method, terminal and storage medium | |
CN112987932B (en) | Human-computer interaction and control method and device based on virtual image | |
WO2023207681A9 (en) | Method and apparatus for intelligent clothing matching, and electronic device and storage medium | |
KR102224931B1 (en) | Service providing apparatus and method for cleansing fashion related product information using neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||