WO2017157261A1 - Image search method, and method and device for acquiring a virtual character image - Google Patents


Info

Publication number
WO2017157261A1
WO2017157261A1 · PCT/CN2017/076466 · CN2017076466W
Authority
WO
WIPO (PCT)
Prior art keywords
color
image
clothing
character image
character
Prior art date
Application number
PCT/CN2017/076466
Other languages
English (en)
Chinese (zh)
Inventor
林清客
白博
陈茂林
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Publication of WO2017157261A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/5838Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/001Texturing; Colouring; Generation of texture or colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/40Filling a planar surface by adding surface attributes, e.g. colour or texture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering

Definitions

  • The present invention relates to the field of image processing technologies, and in particular to an image search method, a virtual character image acquisition method, and corresponding apparatuses.
  • Searching by image means finding pictures of a person that match a submitted picture, where the submitted picture is either an actual photograph of the person or a picture that describes the person's global and local features.
  • When an actual picture of the person cannot be obtained, a virtual character picture must be created and used to perform the search.
  • The virtual character is generated as an abstract description of the person, by selecting character feature templates and clothing templates.
  • Character features include height, face shape, facial features, hair style, skin color, body shape, and so on.
  • Clothing features include tops, pants, skirts, shoes, and the like.
  • This approach requires storing a large number of clothing templates to cover as many cases as possible, which wastes storage resources.
  • Moreover, even a large number of stored templates may not meet every requirement.
  • In addition, the RGB value of each pixel of the selected virtual character can only be a color from the basic color library, whereas colors in an actual scene are shifted by illumination, shadow, visual chromatic aberration, and the like, which causes a large deviation in the search results.
  • Embodiments of the present invention provide an image search method, a virtual character image acquisition method, and corresponding devices, which solve two problems of the prior art: the stored clothing templates cannot cover all cases, and color shifts caused by illumination, shadow, visual chromatic aberration, and the like lead to large deviations in the search results.
  • an embodiment of the present invention provides an image search method, including:
  • The color family library is obtained by clustering, in advance, the colors of the pixels in a plurality of actual character images; each resulting class corresponds to one color family, one color family corresponds to one basic color, and one color family includes a plurality of colors. An image matching the virtual character image is then searched for in the target image database using the plurality of colors included in the color family of each pixel.
  • All colors in a color family participate in image matching, and the colors in each family are obtained by clustering the colors found in actual character images. This reduces the influence of illumination, shadow, visual chromatic aberration, and the like on the difference between real persons and the virtual character image, improves the reliability of the data, and improves the accuracy of searching the target image database for an image matching the virtual character image.
  • obtaining the color family library can be implemented as follows:
  • the character feature is a physiological feature of the person, and the clothing feature is information about the clothing the person wears;
  • a clustering algorithm clusters the color information contained in the color blocks describing character features and the color blocks describing clothing features in each actual character image, yielding a plurality of classes, and a basic color is determined for the colors included in each class;
  • the colors included in each class make up one color family.
  • the clustering algorithm in the embodiment of the present invention may be a K-means algorithm, a K-Medoids algorithm, or the like.
  • Because the color family library is derived from actual character images, matching images using colors from the color family library is more accurate.
  • The color information contained in the color blocks describing character features and in the color blocks describing clothing features comprises red, green, and blue (RGB) color components, or hue, saturation, and lightness components.
  • An edge detection algorithm removes the color blocks of character ornaments from the mask (MASK) image of each actual character image's foreground, leaving the color blocks that describe character features and the color blocks that describe clothing features.
  • Clothing features include jackets, shirts, pants, shoes, skirts, and so on,
  • and the color of a garment can vary. Character features include gender, height, face shape, facial features, hairstyle, skin color, expression, body shape, and so on.
  • The filling algorithm can be a flood fill algorithm, a boundary fill algorithm, or the like.
  • Obtaining the generated virtual character image can be implemented as follows:
  • the generated virtual character image is obtained by filling, based on the filling algorithm, the clothing area corresponding to the position information to be adjusted.
  • This enables adjustment of the garment in multiple dimensions: when the provided clothing templates cannot meet a requirement, the garment can be adjusted adaptively.
  • The virtual character image may also be locked. That is, only when the virtual character image is in an adaptive adjustment state is the received indication information for clothing adjustment acted upon, by filling the clothing area corresponding to the position information to be adjusted to obtain the generated virtual character image.
  • If indication information for clothing feature adjustment is received while the virtual character image is in the locked state, adjustment of the clothing features in the virtual character image is prohibited.
  • an embodiment of the present invention provides a method for acquiring a virtual character image, including:
  • the indication information includes position information and direction information to be adjusted
  • the generated virtual character image is obtained by performing a filling process on the area included in the clothing corresponding to the position information to be adjusted based on the filling algorithm.
  • This enables adjustment of the garment in multiple dimensions: when the provided clothing templates cannot meet a requirement, the garment can be adjusted adaptively.
  • Filling, according to the filling algorithm, the clothing area corresponding to the position information to be adjusted so as to obtain the generated virtual character image can be implemented as follows:
  • the area included in the clothing corresponding to the position information to be adjusted is filled based on the filling algorithm to obtain the generated virtual character image.
  • The method may further include:
  • the virtual character image may be locked. That is, only when the virtual character image is in an adaptive adjustment state is the received indication information for clothing adjustment acted upon, by filling, based on the filling algorithm, the clothing area corresponding to the position information to be adjusted to obtain the generated virtual character image.
  • Otherwise, adjustment of the clothing features in the virtual character image is prohibited, which avoids erroneous operations when no clothing adjustment of the virtual character image is needed.
  • an embodiment of the present invention provides an image search apparatus, where the apparatus includes:
  • a receiver configured to acquire a generated virtual character image
  • a processor configured to determine the color of each pixel of the virtual character image received by the receiver, and to obtain, from an acquired color family library, the color family corresponding to the color of each pixel, where the color family library is obtained by clustering, in advance, the colors of the pixels in a plurality of actual character images, each class corresponds to one color family, one color family corresponds to one basic color, and one color family includes multiple colors; and to search the target image database for an image matching the virtual character image using the plurality of colors included in the color family of each pixel.
  • the receiver is further configured to acquire a plurality of actual character images
  • the processor is further configured to obtain the color family library by:
  • the character feature is a physiological feature of the person, and the clothing feature is information about the clothing the person wears;
  • a clustering algorithm clusters the color information contained in the color blocks describing character features and the color blocks describing clothing features in each actual character image, yielding a plurality of classes, and a basic color is determined for the colors included in each class;
  • the colors included in each class make up one color family.
  • The color information contained in the color blocks describing character features and in the color blocks describing clothing features comprises red, green, and blue (RGB) color components, or hue, saturation, and lightness components.
  • the processor is further configured to respectively acquire the color blocks describing character features and the color blocks describing clothing features included in each actual character image, by:
  • removing, with an edge detection algorithm, the color blocks of character ornaments from the mask (MASK) image of each actual character image's foreground, to obtain the color blocks that describe character features and the color blocks that describe clothing features.
  • the receiver is further configured to:
  • after the processor initially determines the virtual character image based on the clothing feature template and the character feature template, receive indication information for clothing adjustment of the virtual character image, where the indication information includes the position information to be adjusted and direction information;
  • the processor is further configured to:
  • the generated virtual character image is obtained by performing a filling process on the area included in the clothing corresponding to the position information to be adjusted based on the filling algorithm.
  • the embodiment of the present invention further provides a virtual character image acquiring apparatus, including:
  • a processor configured to initially determine a virtual character image based on the clothing feature template and the character feature template
  • a receiver configured to receive from the user indication information about a clothing feature adjustment in the virtual character image initially determined by the processor, where the indication information includes the position information to be adjusted and direction information;
  • the processor is further configured to perform a filling process on the area included in the clothing corresponding to the position information to be adjusted received by the receiver based on the filling algorithm to obtain the generated virtual character image.
  • the processor performs a filling process on the area included in the clothing corresponding to the position information to be adjusted based on the filling algorithm to obtain the generated virtual character image, specifically for:
  • the generated avatar image is obtained by performing a filling process on the region included in the clothing corresponding to the position information to be adjusted based on the filling algorithm.
  • when the receiver receives the indication information for clothing feature adjustment in the virtual character image, the processor is further configured to determine whether the virtual character image is in a locked state and, if so, to prohibit adjustment of the clothing features in the virtual character image.
  • an embodiment of the present invention provides an image search apparatus, including:
  • An image obtaining unit configured to acquire the generated virtual character image
  • a determining unit configured to determine a color of each pixel of the virtual character image acquired by the image acquiring unit
  • a color obtaining unit configured to respectively obtain, from a color family library, the color family corresponding to the color of each pixel, wherein the color family library is obtained by clustering, in advance, the colors of the pixels in a plurality of actual character images, each class corresponds to one color family, one color family corresponds to one basic color, and one color family includes multiple colors;
  • a matching unit configured to search the target image database for an image matching the virtual character image using a plurality of colors included in a color family of each pixel.
  • the acquiring unit is further configured to acquire a plurality of actual character images
  • the device also includes:
  • a color block obtaining unit configured to respectively acquire a color block describing a character feature included in each actual character image and a color block describing a clothing feature;
  • the character feature is a physiological feature of the person, and the clothing feature is information about the clothing the person wears;
  • a color family generating unit configured to cluster, using a clustering algorithm, the color information contained in the color blocks describing character features and the color blocks describing clothing features in each actual character image to obtain a plurality of classes, and to determine the basic color corresponding to the colors included in each class;
  • Each of these categories includes a number of colors that make up a color family.
  • The color information contained in the color blocks describing character features and in the color blocks describing clothing features comprises red, green, and blue (RGB) color components, or hue, saturation, and lightness components.
  • the color block acquiring unit is specifically configured to:
  • remove, with an edge detection algorithm, the color blocks of character ornaments from the mask (MASK) image of each actual character image's foreground, to obtain the color blocks that describe character features and the color blocks that describe clothing features.
  • the device further includes an image generating unit, configured to receive indication information for clothing adjustment of the virtual character image after the virtual character image is initially determined based on the clothing feature template and the character feature template,
  • where the indication information includes the position information to be adjusted and direction information;
  • the generated virtual character image is obtained by filling, based on the filling algorithm, the clothing area corresponding to the position information to be adjusted.
  • an embodiment of the present invention provides a virtual person image acquiring device, where the device includes:
  • a preliminary determining unit configured to initially determine a virtual character image based on the clothing feature template and the character feature template
  • a receiving unit configured to receive from the user indication information about a clothing feature adjustment in the virtual character image determined by the preliminary determining unit, where the indication information includes the position information to be adjusted and direction information;
  • a generating unit configured to perform a filling process on the area included in the clothing corresponding to the position information to be adjusted according to the filling algorithm to obtain the generated virtual character image.
  • the generating unit is specifically configured to:
  • the generated avatar image is obtained by performing a filling process on the region included in the clothing corresponding to the position information to be adjusted based on the filling algorithm.
  • the generating unit is further configured to: when the receiving unit receives the indication information for clothing feature adjustment in the virtual character image, if it is determined that the virtual character image is in a locked state, prohibit adjustment of the clothing features in the virtual character image.
  • All colors in a color family participate in image matching, and the colors in each family are obtained by clustering the colors found in actual character images. This reduces the influence of illumination, shadow, visual chromatic aberration, and the like on the difference between real persons and the virtual character image, improves the reliability of the data, and improves the accuracy of searching the target image database for an image matching the virtual character image.
  • FIG. 1 is a flowchart of an image search method according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of a color family library according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of a method for acquiring a color family library according to an embodiment of the present invention.
  • 4A-4B are schematic diagrams of a human-machine interaction interface for initially determining a virtual character image according to an embodiment of the present invention
  • FIG. 5 is a flowchart of a method for acquiring a virtual character image according to an embodiment of the present invention
  • FIG. 6 is a schematic diagram of an image search apparatus according to an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of another image search apparatus according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of a virtual character image acquiring apparatus according to an embodiment of the present invention.
  • FIG. 9 is a schematic diagram of another virtual character image acquiring apparatus according to an embodiment of the present invention.
  • Embodiments of the present invention provide an image search method, a virtual character image acquisition method, and corresponding devices, which solve two problems of the prior art: the stored clothing templates cannot cover all cases, and color shifts caused by illumination, shadow, visual chromatic aberration, and the like lead to large deviations in the search results.
  • The method and the device are based on the same inventive concept. Since their principles for solving the problem are similar, the implementations of the device and the method can refer to each other, and repeated descriptions are omitted.
  • The solution provided by the embodiments of the present invention can be implemented by a terminal device, such as a computer.
  • An image search method provided by an embodiment of the present invention is shown in FIG. 1. The method searches for matching images while accounting for the differences between the virtual character image and real scenes caused by light, shadow, visual color difference, and the like.
  • The method includes:
  • The generated virtual character image is acquired; the virtual character image can be generated through the human-computer interaction interface.
  • The color family library is obtained by clustering, in advance, the colors of the pixels in a plurality of actual character images; each class corresponds to one color family, one color family corresponds to one basic color, and one color family includes multiple colors, as in the example color family library shown in FIG. 2.
  • All colors in a color family participate in image matching, and the colors in each family are obtained by clustering the colors found in actual character images. This reduces the influence of illumination, shadow, visual chromatic aberration, and the like on the difference between real persons and the virtual character image, improves the reliability of the data, and improves the accuracy of searching the target image database for an image matching the virtual character image.
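The matching step described above can be sketched as follows. This is an illustrative pure-Python toy, not the patent's implementation: the function names, the two-family library, and the histogram-intersection score are all assumptions made for the example — the point is only that a lighting-shifted color still matches because its whole color family participates.

```python
# Hypothetical sketch of color-family-based matching (all names are invented).

def build_family_lookup(families):
    """Map every clustered color to the index of its color family."""
    lookup = {}
    for idx, colors in enumerate(families):
        for c in colors:
            lookup[c] = idx
    return lookup

def nearest_family(color, families):
    """Fall back to the family whose member is closest in RGB space."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min((dist(color, c), i) for i, cs in enumerate(families) for c in cs)
    return best[1]

def family_signature(image_pixels, families, lookup):
    """Represent an image as a histogram of color-family indices."""
    sig = {}
    for p in image_pixels:
        fam = lookup.get(p, nearest_family(p, families))
        sig[fam] = sig.get(fam, 0) + 1
    return sig

def match_score(sig_a, sig_b):
    """Histogram intersection over family counts (higher = more similar)."""
    return sum(min(sig_a.get(k, 0), sig_b.get(k, 0)) for k in sig_a)

# Two toy families, each holding several clustered shades.
families = [
    [(200, 0, 0), (220, 30, 30), (180, 10, 10)],   # family 0: red-ish
    [(0, 0, 200), (30, 30, 220), (10, 10, 180)],   # family 1: blue-ish
]
lookup = build_family_lookup(families)

virtual = [(200, 0, 0), (0, 0, 200)]               # idealized template colors
real = [(180, 10, 10), (30, 30, 220)]              # shifted by lighting/shadow
sig_v = family_signature(virtual, families, lookup)
sig_r = family_signature(real, families, lookup)
print(match_score(sig_v, sig_r))                   # both pixels still match: 2
```

Because both shifted colors fall inside the same families as the template colors, the score is unaffected by the illumination shift, which is the effect the embodiment attributes to using the whole color family in matching.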
  • obtaining the color family library can be implemented as follows, as shown in FIG. 3:
  • A1: Obtain a number of actual character images.
  • A2: Respectively acquire, from each actual character image, the color blocks describing character features and the color blocks describing clothing features; the character features are physiological features of the person, and the clothing features are information about the clothing the person wears.
  • the background area of each actual character image is separately removed to obtain a mask MASK image including the foreground image of each actual character image.
  • An edge detection algorithm removes the color blocks of character ornaments from the mask (MASK) image of each actual character image's foreground, leaving the color blocks that describe character features and the color blocks that describe clothing features.
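The text names only "an edge detection algorithm" without fixing one. As a hedged illustration, the following pure-Python toy applies a Sobel operator to a small grayscale grid to produce the kind of binary edge map that could be used to delineate and strip ornament blocks; the image data and threshold are invented for the example.

```python
# Hedged sketch: toy Sobel edge detection on a 2D grayscale grid.

def sobel_edge_map(img, threshold=100):
    """Return a binary edge map for a grayscale image (list of lists of ints)."""
    h, w = len(img), len(img[0])
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            edges[y][x] = 1 if (gx * gx + gy * gy) ** 0.5 >= threshold else 0
    return edges

# A 5x6 image with a dark/bright vertical boundary between columns 2 and 3.
img = [[0, 0, 0, 255, 255, 255] for _ in range(5)]
edges = sobel_edge_map(img)
print(edges[2])   # interior row marks the boundary columns: [0, 0, 1, 1, 0, 0]
```

In a real pipeline the edge map would be computed on the MASK foreground and used to trace the ornament boundaries, but that segmentation logic is not specified by the source.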
  • A3: Use a clustering algorithm to cluster the color information contained in the color blocks describing character features and the color blocks describing clothing features in each actual character image, obtaining a plurality of classes, and determine the basic color corresponding to the colors included in each class.
  • The colors included in each class make up one color family.
  • the clustering algorithm in the embodiment of the present invention may be a K-means algorithm, a K-Medoids algorithm, or the like.
  • The color information contained in the color blocks describing character features and in the color blocks describing clothing features comprises red, green, and blue (RGB) color components, or hue, saturation, and lightness components.
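Step A3 can be sketched with a minimal K-means pass over RGB colors. This is a hedged toy (deterministic seeding with the first k colors, invented sample data), not the embodiment's code; each resulting cluster plays the role of one color family, and its centroid plays the role of the family's basic color.

```python
# Hedged toy K-means over RGB tuples: clusters become color families,
# centroids become the families' basic colors.

def kmeans_color_families(colors, k, iters=10):
    centroids = list(colors[:k])        # simple deterministic seeding
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for c in colors:
            # assign each color to the nearest centroid (squared RGB distance)
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(c, centroids[i])))
            clusters[j].append(c)
        # recompute each centroid as the integer mean of its cluster
        centroids = [
            tuple(sum(ch) // len(cl) for ch in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters          # basic colors and their color families

# Toy data: red-ish and blue-ish pixels shifted by lighting/shadow.
colors = [(250, 10, 10), (10, 10, 250), (200, 0, 0),
          (0, 0, 200), (230, 40, 20), (20, 40, 230)]
basics, families = kmeans_color_families(colors, k=2)
print(sorted(len(f) for f in families))   # two families of 3 colors: [3, 3]
```

The shifted shades land in the same family as their "true" color, which is how clustering absorbs illumination and shadow variation; a K-Medoids variant, also named by the text, would pick an actual member color as the basic color instead of the mean.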
  • the garment of the initially determined virtual character image may be fine-tuned according to the difference between the virtual character image and the actual character's clothing.
  • the generated virtual character image is obtained by performing a filling process on the area included in the clothing corresponding to the position information to be adjusted based on the filling algorithm.
  • Clothing features include jackets, shirts, pants, shoes, skirts, and so on,
  • and the color of a garment can vary. Character features include gender, height, face shape, facial features, hairstyle, skin color, expression, body shape, and so on.
  • The filling algorithm can be a flood fill algorithm, a boundary fill algorithm, or the like; this embodiment of the present invention does not specifically limit it.
  • Receiving the indication information for clothing adjustment of the virtual character image may be implemented as follows:
  • the position information that the garment needs to adjust includes the location on the garment and the size of the adjustment, for example a width adjustment or a length adjustment.
  • The area of the garment to be adjusted is determined from the position indicated by the mouse, and that area is then filled based on the filling algorithm to obtain the generated virtual character image.
  • The user initially determines the virtual character image through the human-computer interaction interface, using the clothing feature templates and character feature templates displayed by the interface.
  • The user selects a jacket, T-shirt, pants, shoes, gender, and so on, on the interface; the bottoms can be pants or a skirt.
  • the user uses the mouse to drag the garment to be adjusted, such as the left and right spacing of the jacket, the length of the sleeve, and the length of the pants.
  • The terminal device monitors the event of the user dragging the clothing with the mouse, obtains the position information and direction information by which the clothing needs to be adjusted, and then fills the area of the clothing corresponding to the position information to be adjusted according to the flood fill algorithm to generate the virtual character image.
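The flood fill step above can be sketched on a small label grid. This is a hedged toy: the grid values standing in for clothing-template labels (0 = background, 1 = jacket) and the BFS traversal are assumptions for the example, since the source does not prescribe a data layout.

```python
from collections import deque

# Hedged toy flood fill: recolor the connected clothing region
# containing the user's drag position.

def flood_fill(grid, start, new_value):
    """Recolor the 4-connected region containing `start` in place."""
    h, w = len(grid), len(grid[0])
    y, x = start
    old = grid[y][x]
    if old == new_value:
        return grid
    queue = deque([start])
    while queue:
        y, x = queue.popleft()
        if 0 <= y < h and 0 <= x < w and grid[y][x] == old:
            grid[y][x] = new_value
            queue.extend([(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)])
    return grid

grid = [
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 0],
]
flood_fill(grid, (0, 1), 2)   # the user's drag lands inside the jacket region
print(grid)                   # the whole connected jacket area becomes 2
```

Only the connected region under the drag point is repainted, which is why a flood-style fill can resize or recolor one garment area without disturbing the rest of the template.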
  • This enables adjustment of the garment in multiple dimensions: when the provided clothing templates cannot meet a requirement, the garment can be adjusted adaptively.
  • The virtual character image may also be locked. That is, only when the virtual character image is in an adaptive adjustment state is the received indication information for clothing adjustment acted upon, by filling the clothing area corresponding to the position information to be adjusted to obtain the generated virtual character image.
  • If indication information for clothing feature adjustment is received while the virtual character image is in the locked state, adjustment of the clothing features in the virtual character image is prohibited.
  • An embodiment of the present invention further provides a method for acquiring a virtual character image. As shown in FIG. 5, the method includes:
  • S501: After initially determining the virtual character image based on the clothing feature template and the character feature template, receive from the user indication information for adjusting a clothing feature in the virtual character image, where the indication information includes the position information to be adjusted and direction information.
  • Clothing features include jackets, shirts, pants, shoes, skirts, and so on,
  • and the color of a garment can vary. Character features include gender, height, face shape, facial features, hairstyle, skin color, expression, body shape, and so on.
  • The filling algorithm can be a flood fill algorithm, a boundary fill algorithm, or the like; this embodiment of the present invention does not specifically limit it.
  • Receiving the indication information for clothing adjustment of the virtual character image may be implemented as follows:
  • the position information that the garment needs to adjust includes the location on the garment and the size of the adjustment, for example a width adjustment or a length adjustment.
  • The area of the garment to be adjusted is determined from the position indicated by the mouse, and that area is then filled based on the filling algorithm to obtain the generated virtual character image.
  • The user initially determines the virtual character image through the human-computer interaction interface, using the clothing feature templates and character feature templates displayed by the interface.
  • The user selects a jacket, T-shirt, pants, shoes, gender, and so on, on the interface.
  • The user uses the mouse to drag the garment to be adjusted, for example the left-right spacing of the jacket, the length of the sleeves, or the length of the pants. The terminal device monitors the mouse-drag event, obtains the position information and direction information by which the clothing needs to be adjusted, and then fills the area of the clothing corresponding to the position information to be adjusted according to the flood fill algorithm to generate the virtual character image.
  • In this way, the garment can be adjusted in multiple dimensions, and when the provided clothing templates cannot meet the demand, the garment can be adaptively adjusted.
  • the step of obtaining the virtual character image by performing the filling process, according to the filling algorithm, on the area included in the clothing corresponding to the position information to be adjusted may be implemented as follows:
  • the generated avatar image is obtained by performing a filling process on the region included in the clothing corresponding to the position information to be adjusted based on the filling algorithm.
  • the method may further include:
  • the virtual character image may be locked; that is, only when the virtual character image is in an adaptive adjustment state is the area included in the clothing corresponding to the position information to be adjusted filled, based on the filling algorithm, after the indication information for the clothing adjustment of the virtual character image is received, to obtain the generated virtual character image.
  • When the virtual character image is in a locked state, adjustment of the clothing features in the virtual character image is prohibited. This avoids erroneous operations that would otherwise occur when there is no need to adjust the clothing in the virtual character image.
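The lock check described above can be sketched as a simple guard placed before the fill step; the class and method names here are illustrative assumptions, not taken from the patent:

```python
class AvatarEditor:
    LOCKED, ADJUSTABLE = "locked", "adjustable"

    def __init__(self, state=ADJUSTABLE):
        self.state = state
        self.applied = []          # record of fill operations performed

    def on_adjust(self, position, direction):
        """Handle a clothing-adjustment indication from the user.

        When the image is locked, the indication is ignored, so
        accidental drags cannot modify the clothing.
        """
        if self.state == self.LOCKED:
            return False           # adjustment prohibited
        self.applied.append((position, direction))  # stand-in for the fill
        return True

editor = AvatarEditor()
editor.on_adjust((10, 20), "wider")      # adjustable: fill applied
editor.state = AvatarEditor.LOCKED
editor.on_adjust((10, 20), "longer")     # locked: ignored
```

The design point is simply that the lock is checked before any fill is attempted, so the locked state costs nothing and cannot partially modify the image.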
  • an embodiment of the present invention provides an image search apparatus, as shown in FIG. 6, including:
  • a determining unit 602 configured to determine a color of each pixel of the virtual character image acquired by the image acquiring unit 601;
  • the color obtaining unit 603 is configured to separately obtain the color family corresponding to the color of each pixel from the color family library, where the color family library is obtained by clustering in advance the colors corresponding to the pixels in a plurality of actual character images; each class corresponds to one color family, one color family corresponds to one basic color, and one color family includes multiple colors;
  • the matching unit 604 is configured to search for an image matching the virtual character image in the target image database by using a plurality of colors included in a color family of each pixel.
  • the image obtaining unit 601 is further configured to acquire a plurality of actual character images;
  • the device also includes:
  • a color block obtaining unit 605, configured to respectively acquire the color blocks describing the character features and the color blocks describing the clothing features contained in each actual character image; the character features are physiological features of the character, and the clothing features are information related to the clothing worn by the character;
  • a color family generating unit 606, configured to cluster, by using a clustering algorithm, the color information contained in the color blocks describing the character features and the color blocks describing the clothing features in each actual character image to obtain a plurality of classes, and to determine the basic color corresponding to the colors included in each class; the colors included in each class constitute one color family.
  • the color information contained in the color blocks describing the character features and in the color blocks describing the clothing features includes: red, green, and blue (RGB) color components, or hue, saturation, and lightness components.
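The class-to-color-family construction above can be illustrated with a tiny k-means over RGB triples — a sketch under stated assumptions only: the patent does not specify the clustering algorithm, the number of classes, or these sample colors.

```python
def kmeans_color_families(colors, k, iters=20):
    """Cluster RGB triples into k color families.

    Returns (centers, families): each center is the family's "basic
    color" (the cluster mean), and each family is the list of member
    colors, mirroring the class -> color-family mapping in the text.
    Deterministic, evenly spaced initial centers keep the sketch
    reproducible.
    """
    centers = [colors[i * len(colors) // k] for i in range(k)]
    for _ in range(iters):
        families = [[] for _ in range(k)]
        for c in colors:
            # assign each color to the nearest center (squared distance)
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(c, centers[j])))
            families[i].append(c)
        # recompute each center as the mean of its family
        centers = [tuple(sum(ch) / len(f) for ch in zip(*f)) if f else centers[i]
                   for i, f in enumerate(families)]
    return centers, families

# Hypothetical pixel colors sampled from actual character images:
# three reddish tones and three bluish tones.
colors = [(200, 10, 10), (210, 20, 5), (190, 0, 30),
          (20, 30, 220), (10, 40, 230), (0, 25, 210)]
centers, families = kmeans_color_families(colors, k=2)
# centers[0] is the basic color of the "red" family,
# centers[1] the basic color of the "blue" family.
```

The same sketch works unchanged on hue/saturation/lightness triples, since only the distance computation touches the components.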
  • the color block obtaining unit 605 is specifically configured to:
  • the color blocks of character ornaments contained in the mask (MASK) image of the foreground of each actual character image are removed by an edge detection algorithm, to obtain the color blocks describing the character features and the color blocks describing the clothing features.
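As a rough illustration of using edge strength to isolate ornament regions inside the foreground mask — a sketch only; the patent does not name a specific edge detector, and the grid and threshold here are hypothetical — one could mark pixels whose local intensity differences exceed a threshold:

```python
def gradient_edges(gray, threshold):
    """Mark pixels whose horizontal + vertical intensity differences
    exceed `threshold` as edges (a crude stand-in for a Sobel- or
    Canny-style edge detector).
    """
    rows, cols = len(gray), len(gray[0])
    edges = [[False] * cols for _ in range(rows)]
    for r in range(rows - 1):
        for c in range(cols - 1):
            gx = abs(gray[r][c + 1] - gray[r][c])   # horizontal difference
            gy = abs(gray[r + 1][c] - gray[r][c])   # vertical difference
            edges[r][c] = gx + gy > threshold
    return edges

# A flat "clothing" region with one bright "ornament" pixel:
gray = [
    [10, 10, 10],
    [10, 250, 10],
    [10, 10, 10],
]
edges = gradient_edges(gray, threshold=100)
```

The edge map would then be used to exclude the ornament's color block from the clustering input, so that ornaments do not contaminate the clothing color families.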
  • the device further includes an image generating unit 607, configured to receive indication information about the clothing adjustment of the virtual character image after the virtual character image is initially determined based on the clothing feature template and the character feature template.
  • the indication information includes location information to be adjusted and direction information;
  • the generated virtual character image is obtained by performing a filling process on the area included in the clothing corresponding to the position information to be adjusted based on the filling algorithm.
  • each functional unit in the embodiments of the present application may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software function module.
  • an image search device as shown in FIG. 7 includes a receiver 701 and a processor 702.
  • the processor 702 may be a central processing unit (CPU), a digital processing unit, or the like.
  • the image search device further includes a memory 703 for storing a program executed by the processor 702, and a processor 702 for executing a program stored by the memory 703.
  • the memory 703 is also used to store information such as a color family library, a target image database, a clothing feature template, and a character feature template.
  • the memory 703 may be disposed inside the image search device or may be disposed outside the image search device.
  • the image search device may further include an input/output interface 704 for writing a program and configuration information into the memory 703 through the input/output interface 704 to output the matched image.
  • the receiver 701, the memory 703, the processor 702, and the input/output interface 704 can be connected through a bus 705.
  • the manner of connection between other components is merely illustrative and not limited.
  • the bus can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is shown in Figure 7, but it does not mean that there is only one bus or one type of bus.
  • the memory 703 may be a volatile memory, such as a random-access memory (RAM); the memory 703 may also be a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); or the memory 703 may be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
  • the memory 703 may be a combination of the above memories.
  • a receiver 701 configured to acquire a generated virtual character image
  • a processor 702, configured to: determine the color of each pixel of the virtual character image received by the receiver 701; obtain, from the acquired color family library, the color family corresponding to the color of each pixel, where the color family library is obtained by clustering in advance the colors corresponding to the pixels in a plurality of actual character images, each class corresponds to one color family, one color family corresponds to one basic color, and one color family includes multiple colors; and search the target image database for an image matching the virtual character image by using the multiple colors included in the color family of each pixel.
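The matching idea — treating every color in a pixel's family as an acceptable match, rather than only the single drawn color — can be sketched as follows; the family data and the per-pixel scoring are illustrative assumptions, not the patent's exact search procedure:

```python
def family_of(color, families):
    """Return the set of colors in the family containing `color`,
    or a singleton set if the color belongs to no known family."""
    for fam in families:
        if color in fam:
            return fam
    return {color}

def match_score(virtual_pixels, target_pixels, families):
    """Fraction of virtual pixels whose color family contains the
    color of the corresponding target pixel, so lighting-induced
    shifts within a family still count as matches."""
    hits = sum(
        1 for v, t in zip(virtual_pixels, target_pixels)
        if t in family_of(v, families)
    )
    return hits / len(virtual_pixels)

# Hypothetical families: a "red" family and a "blue" family.
families = [{(200, 0, 0), (180, 20, 20)}, {(0, 0, 200), (20, 20, 230)}]
virtual = [(200, 0, 0), (0, 0, 200)]
# Target pixels shifted by lighting, but inside the same families:
target = [(180, 20, 20), (20, 20, 230)]
score = match_score(virtual, target, families)
```

A real search would rank every image in the target database by such a score; the point shown here is only that family membership, not exact color equality, decides a per-pixel match.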
  • the receiver 701 is further configured to acquire a plurality of actual character images;
  • the processor 702 is further configured to obtain the color family library by:
  • the character feature is a physiological feature of the character, and the clothing feature is related information of the clothing on the character;
  • a clustering algorithm is used to cluster the color information contained in the color blocks describing the character features and the color blocks describing the clothing features in each actual character image to obtain a plurality of classes, and the basic color corresponding to the colors included in each class is determined; the colors included in each class constitute one color family.
  • the color information contained in the color blocks describing the character features and in the color blocks describing the clothing features includes: red, green, and blue (RGB) color components, or hue, saturation, and lightness components.
  • the processor 702 is further configured to separately acquire, in the following manner, the color blocks describing the character features and the color blocks describing the clothing features contained in each actual character image:
  • the color blocks of character ornaments contained in the mask (MASK) image of the foreground of each actual character image are removed by an edge detection algorithm, to obtain the color blocks describing the character features and the color blocks describing the clothing features.
  • the receiver 701 is further configured to:
  • receive, after the processor 702 initially determines the virtual character image based on the clothing feature template and the character feature template, the indication information for the clothing adjustment of the virtual character image, where the indication information includes the location information to be adjusted and direction information;
  • the processor 702 is further configured to: perform a filling process on the area included in the clothing corresponding to the position information to be adjusted according to the filling algorithm to obtain the generated virtual character image.
  • an embodiment of the present invention provides a virtual person image acquiring device.
  • the device includes:
  • a preliminary determining unit 801 configured to initially determine a virtual character image based on the clothing feature template and the character feature template
  • the receiving unit 802 is configured to receive indication information about a clothing feature adjustment in the virtual character image determined by the preliminary determining unit 801, where the indication information includes location information to be adjusted and direction information;
  • the generating unit 803 is configured to perform a filling process on the area included in the clothing corresponding to the position information to be adjusted according to the filling algorithm to obtain the generated virtual character image.
  • the generating unit 803 is specifically configured to:
  • the generated avatar image is obtained by performing a filling process on the region included in the clothing corresponding to the position information to be adjusted based on the filling algorithm.
  • the generating unit 803 is further configured to: when the receiving unit 802 receives the indication information about the clothing feature adjustment in the virtual character image, prohibit adjustment of the clothing features in the virtual character image if it is determined that the virtual character image is in a locked state.
  • all colors in a color family participate in image matching, and the colors included in the color family are obtained by clustering the colors contained in a number of actual character images. This reduces the influence of illumination, shadow, visual chromatic aberration, and the like on the match between real character images and the virtual character image, improves the reliability of the data, and also improves the accuracy of searching the target image database for an image matching the virtual character image.
  • each functional unit in the embodiments of the present application may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software function module.
  • a virtual person image acquiring device as shown in FIG. 8 includes a receiver 901 and a processor 902.
  • the processor 902 may be a central processing unit (CPU), a digital processing unit, or the like.
  • the virtual character image acquiring apparatus further includes a memory 903 for storing a program executed by the processor 902, and the processor 902 is configured to execute a program stored by the memory 903.
  • the memory 903 is also used to store information such as a clothing feature template and a character feature template.
  • the memory 903 may be disposed inside the avatar image acquisition device or may be disposed outside the avatar image acquisition device.
  • the virtual character image obtaining means may further include an input/output interface 904 for writing a program and configuration information into the memory 903 through the input/output interface 904, and outputting the matched image.
  • the receiver 901, the memory 903, the processor 902, and the input/output interface 904 can be connected by a bus 905.
  • the manner of connection between other components is merely illustrative and not limited.
  • the bus can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is shown in Figure 9, but it does not mean that there is only one bus or one type of bus.
  • the memory 903 may be a volatile memory, such as a RAM; the memory 903 may also be a non-volatile memory, such as a ROM, a flash memory, an HDD, or an SSD; or the memory 903 may be any other medium that can be used to carry or store desired program code in the form of instructions or data structures.
  • the memory 903 may be a combination of the above memories.
  • the processor 902 is configured to initially determine a virtual character image based on the clothing feature template and the character feature template;
  • the receiver 901 is configured to receive indication information about a clothing feature adjustment in the virtual character image that is initially determined by the processor 902, where the indication information includes location information to be adjusted and direction information.
  • the processor 902 is further configured to perform a filling process on the area included in the clothing corresponding to the position information to be adjusted received by the receiver 901 based on the filling algorithm to obtain the generated virtual character image.
  • when performing the filling process on the area included in the clothing corresponding to the position information to be adjusted based on the filling algorithm to obtain the generated virtual character image, the processor 902 is specifically configured to:
  • the generated avatar image is obtained by performing a filling process on the region included in the clothing corresponding to the position information to be adjusted based on the filling algorithm.
  • when the receiver 901 receives the indication information about the adjustment of the clothing features in the virtual character image, the processor 902 is further configured to prohibit adjustment of the clothing features in the virtual character image if it determines that the virtual character image is in a locked state.
  • In this way, the garment can be adjusted in multiple dimensions, and when the provided clothing templates cannot meet the demand, the garment can be adaptively adjusted.
  • Embodiments of the present invention can be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the invention can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
  • The computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising an instruction device that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams. These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Library & Information Science (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Graphics (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an image search method, and to a virtual character image acquisition method and device, used to solve the prior-art problem of large deviations in search results caused by color deviations due to illumination, shadow, visual chromatic aberration, and the like. The image search method comprises: acquiring a generated virtual character image; determining the colors of the pixels of the virtual character image; respectively acquiring, from a color family library, the color families corresponding to the colors of the pixels, the color family library being obtained by clustering in advance the colors corresponding to the pixels in several actual character images, each class corresponding to one color family, one color family corresponding to one basic color, and one color family comprising a plurality of colors; and using the pluralities of colors included in the color families of the pixels to search a target image database for an image matching the virtual character image.
PCT/CN2017/076466 2016-03-14 2017-03-13 Procédé de recherche d'image, procédé et dispositif d'acquisition d'image de caractère virtuel WO2017157261A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610143234.0 2016-03-14
CN201610143234.0A CN107193816B (zh) 2016-03-14 2016-03-14 一种图像搜索方法、虚拟人物图像获取方法及装置

Publications (1)

Publication Number Publication Date
WO2017157261A1

Family

ID=59850611

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/076466 WO2017157261A1 (fr) 2016-03-14 2017-03-13 Procédé de recherche d'image, procédé et dispositif d'acquisition d'image de caractère virtuel

Country Status (2)

Country Link
CN (1) CN107193816B (fr)
WO (1) WO2017157261A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114004906A (zh) * 2021-10-29 2022-02-01 北京小米移动软件有限公司 图像配色方法、装置、存储介质及电子设备

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7792887B1 (en) * 2002-05-31 2010-09-07 Adobe Systems Incorporated Compact color feature vector representation
CN102663391A (zh) * 2012-02-27 2012-09-12 安科智慧城市技术(中国)有限公司 一种图像的多特征提取与融合方法及系统
CN103530903A (zh) * 2013-10-28 2014-01-22 智慧城市系统服务(中国)有限公司 一种虚拟试衣间的实现方法及实现系统
US8891902B2 (en) * 2010-02-16 2014-11-18 Imprezzeo Pty Limited Band weighted colour histograms for image retrieval
CN105069042A (zh) * 2015-07-23 2015-11-18 北京航空航天大学 基于内容的无人机侦察图像数据检索方法
CN105205171A (zh) * 2015-10-14 2015-12-30 杭州中威电子股份有限公司 基于颜色特征的图像检索方法

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5974192A (en) * 1995-11-22 1999-10-26 U S West, Inc. System and method for matching blocks in a sequence of images
ES2681432T3 (es) * 2011-08-05 2018-09-13 Rakuten, Inc. Dispositivo de determinación de color, sistema de determinación de color, procedimiento de determinación de color, medio de grabación de información y programa
CN102982350B (zh) * 2012-11-13 2015-10-28 上海交通大学 一种基于颜色和梯度直方图的台标检测方法
CN104809245A (zh) * 2015-05-13 2015-07-29 信阳师范学院 一种图像检索方法


Also Published As

Publication number Publication date
CN107193816B (zh) 2021-03-30
CN107193816A (zh) 2017-09-22


Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17765803

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 17765803

Country of ref document: EP

Kind code of ref document: A1