WO2019228144A1 - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
WO2019228144A1
WO2019228144A1 (PCT/CN2019/085599)
Authority
WO
WIPO (PCT)
Prior art keywords: dimensional, model, clothing, clothing image, image
Prior art date
Application number
PCT/CN2019/085599
Other languages
English (en)
French (fr)
Inventor
何进萍
Original Assignee
北京京东尚科信息技术有限公司
北京京东世纪贸易有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京京东尚科信息技术有限公司 and 北京京东世纪贸易有限公司
Priority to US17/044,181 (US11455773B2)
Publication of WO2019228144A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10 Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T7/00 Image analysis
    • G06T7/40 Analysis of texture
    • G06T7/50 Depth or shape recovery
    • G06T7/529 Depth or shape recovery from texture
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T2207/30124 Fabrics; Textile; Paper
    • G06T2210/16 Cloth
    • G06T2219/2021 Shape modification
    • Y02P90/30 Computing systems specially adapted for manufacturing

Definitions

  • Embodiments of the present application relate to the field of computer technology, and in particular, to an image processing method and device.
  • Image processing is a technology that analyzes images with image processing equipment to achieve a desired result. Typically, color images, grayscale images, and the like captured by photographing or scanning equipment are processed with methods such as image matching, image description, and image recognition to obtain a processed image.
  • Existing methods for processing three-dimensional clothing images generally apply existing image processing techniques to the texture of a clothing image to obtain a three-dimensional clothing image.
  • The embodiments of the present application provide an image processing method and device.
  • In a first aspect, an embodiment of the present application provides an image processing method.
  • The method includes: obtaining a two-dimensional clothing image, where the two-dimensional clothing image includes a style identifier of the clothing; selecting, from a pre-established set of three-dimensional clothing models, a three-dimensional clothing model that matches the style identifier, where the three-dimensional clothing model includes already-annotated hash points; annotating the acquired two-dimensional clothing image with hash points based on the pre-established coordinate mapping between two-dimensional clothing images and three-dimensional clothing models and on the hash points of the selected three-dimensional clothing model; and generating, based on the selected three-dimensional clothing model and the annotation result, a three-dimensional clothing image of the acquired two-dimensional clothing image.
  • In some embodiments, generating the three-dimensional clothing image of the acquired two-dimensional clothing image based on the selected three-dimensional clothing model and the annotation result includes: performing coordinate transformation on the hash points of the acquired two-dimensional clothing image to determine the coordinate information of the transformed hash points; generating, based on the transformed hash points, primitives with a preset shape, where a primitive includes a preset number of transformed hash points and the connection relationships between those hash points; rasterizing the primitives to obtain a fragment set of the primitives, where the fragments in the fragment set include color values and texture coordinate information; performing texture coordinate mapping on the fragment set to obtain the pixels of the selected three-dimensional clothing model; and generating the three-dimensional clothing image based on the obtained pixels.
  • In some embodiments, the fragments in the fragment set further include texture material information; and generating the three-dimensional clothing image based on the obtained pixels includes: determining light intensity information of the obtained pixels based on the texture material information and preset light source coordinate information; processing the obtained pixels based on light source color information and the obtained light intensity information; and generating the three-dimensional clothing image based on the processed pixels.
  • In some embodiments, after generating the three-dimensional clothing image, the method further includes: smoothing the texture of the three-dimensional clothing image.
  • In some embodiments, the set of three-dimensional clothing models is established by the following steps: obtaining a set of two-dimensional sample clothing images, where the set includes a two-dimensional sample clothing image sequence of at least one style; for the two-dimensional sample clothing image sequence of each of the at least one style, performing the following steps: extracting feature points from the two-dimensional sample clothing image sequence; constructing a fundamental matrix based on the extracted feature points; and establishing a three-dimensional clothing model based on the constructed fundamental matrix and the calibration parameters of a pre-calibrated camera, where the camera is the camera that captured the two-dimensional sample clothing image sequence; and generating the set of three-dimensional clothing models from the at least one established three-dimensional clothing model.
  • In some embodiments, after generating the three-dimensional clothing image, the method further includes: receiving body shape information; selecting, from a preset set of virtual three-dimensional models, a virtual three-dimensional model that matches the body shape information; and, based on the preset coordinate mapping between virtual three-dimensional models and three-dimensional clothing models, fitting the three-dimensional clothing image onto the selected virtual three-dimensional model and presenting it.
  • In a second aspect, an embodiment of the present application provides an image processing apparatus. The apparatus includes: an obtaining unit configured to obtain a two-dimensional clothing image, where the two-dimensional clothing image includes a style identifier of the clothing; a selecting unit configured to select, from a pre-established set of three-dimensional clothing models, a three-dimensional clothing model that matches the style identifier, where the three-dimensional clothing model includes already-annotated hash points; an annotating unit configured to annotate the acquired two-dimensional clothing image with hash points based on the pre-established coordinate mapping between two-dimensional clothing images and three-dimensional clothing models and on the hash points of the selected three-dimensional clothing model; and a generating unit configured to generate, based on the selected three-dimensional clothing model and the annotation result, a three-dimensional clothing image of the acquired two-dimensional clothing image.
  • In some embodiments, the generating unit includes: a coordinate transformation subunit configured to perform coordinate transformation on the hash points of the acquired two-dimensional clothing image and determine the coordinate information of the transformed hash points; a primitive generating subunit configured to generate primitives with a preset shape based on the transformed hash points, where a primitive includes a preset number of transformed hash points and the connection relationships between those hash points; a processing subunit configured to rasterize the primitives to obtain a fragment set of the primitives, where the fragments in the fragment set include color values and texture coordinate information; a texture coordinate mapping subunit configured to perform texture coordinate mapping on the fragment set to obtain the pixels of the selected three-dimensional clothing model; and a generating subunit configured to generate the three-dimensional clothing image based on the obtained pixels.
  • In some embodiments, the fragments in the fragment set further include texture material information; and the generating subunit is further configured to: determine light intensity information of the obtained pixels based on the texture material information and preset light source coordinate information; process the obtained pixels based on light source color information and the obtained light intensity information; and generate the three-dimensional clothing image based on the processed pixels.
  • In some embodiments, the image processing apparatus is further configured to smooth the texture of the three-dimensional clothing image.
  • In some embodiments, the set of three-dimensional clothing models is established by the following steps: obtaining a set of two-dimensional sample clothing images, where the set includes a two-dimensional sample clothing image sequence of at least one style; for the two-dimensional sample clothing image sequence of each of the at least one style, performing the following steps: extracting feature points from the two-dimensional sample clothing image sequence; constructing a fundamental matrix based on the extracted feature points; and establishing a three-dimensional clothing model based on the constructed fundamental matrix and the calibration parameters of a pre-calibrated camera, where the camera is the camera that captured the two-dimensional sample clothing image sequence; and generating the set of three-dimensional clothing models from the at least one established three-dimensional clothing model.
  • In some embodiments, the image processing apparatus is further configured to: receive body shape information; select, from a preset set of virtual three-dimensional models, a virtual three-dimensional model that matches the body shape information; and, based on the preset coordinate mapping between virtual three-dimensional models and three-dimensional clothing models, fit the three-dimensional clothing image onto the selected virtual three-dimensional model and present it.
  • In a third aspect, an embodiment of the present application provides a server.
  • The server includes: one or more processors; and a storage device on which one or more programs are stored; when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method described in any implementation of the first aspect.
  • In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the method described in any implementation of the first aspect is implemented.
  • The image processing method and device provided in the embodiments of the present application obtain a two-dimensional clothing image; select, from a pre-established set of three-dimensional clothing models, a three-dimensional clothing model that matches the style identifier of the acquired two-dimensional clothing image; annotate the acquired two-dimensional clothing image with texture coordinate points based on the coordinate mapping between two-dimensional clothing images and three-dimensional clothing models; and finally generate the three-dimensional clothing image from the annotation result and the selected three-dimensional clothing model, thereby improving both the speed of generating the three-dimensional clothing image and the accuracy of the generated three-dimensional clothing image.
  • FIG. 1 is an exemplary system architecture diagram to which the present application can be applied;
  • FIG. 2 is a flowchart of an embodiment of an image processing method according to the present application;
  • FIG. 3 is a schematic diagram of an application scenario of an image processing method according to the present application;
  • FIG. 4 is a flowchart of another embodiment of an image processing method according to the present application;
  • FIG. 5 is a schematic structural diagram of an embodiment of an image processing apparatus according to the present application;
  • FIG. 6 is a schematic structural diagram of a computer system suitable for implementing a server according to an embodiment of the present application.
  • FIG. 1 illustrates an exemplary system architecture 100 to which embodiments of the image processing method or image processing apparatus of the present application can be applied.
  • the system architecture 100 may include terminal devices 101, 102, and 103, a network 104, and a server 105.
  • the network 104 is a medium for providing a communication link between the terminal devices 101, 102, 103 and the server 105.
  • the network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
  • the user can use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send messages and the like.
  • the terminal devices 101, 102, and 103 may be hardware or software.
  • When the terminal devices 101, 102, and 103 are hardware, they can be various electronic devices that support an image capture function, including, but not limited to, still cameras, video cameras, webcams, smartphones, and tablet computers.
  • When the terminal devices 101, 102, and 103 are software, they can be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules or as a single piece of software or software module. This is not specifically limited here.
  • the server 105 may provide various services.
  • the server 105 may analyze and process data such as two-dimensional clothing images obtained from the terminal devices 101, 102, and 103, and generate processing results (such as three-dimensional clothing images).
  • the server 105 may be hardware or software.
  • When the server 105 is hardware, it can be implemented as a distributed server cluster composed of multiple servers or as a single server. When the server 105 is software, it can be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. This is not specifically limited here.
  • It should be noted that when the terminal devices themselves provide an image processing function, the acquired two-dimensional clothing image can be processed and a three-dimensional clothing image generated through that function locally; in that case the server 105 need not be provided, the image processing method provided in the embodiments of the present application may be executed by the terminal devices 101, 102, and 103, and the image processing apparatus is correspondingly provided in the terminal devices 101, 102, and 103. Conversely, when the two-dimensional clothing images are stored on the server 105, the terminal devices 101, 102, and 103 need not be provided; the image processing method may then be executed by the server 105, and the image processing apparatus is correspondingly provided in the server 105.
  • It should be understood that the numbers of terminal devices, networks, and servers in FIG. 1 are merely illustrative. Depending on implementation needs, there can be any number of terminal devices, networks, and servers.
  • With continued reference to FIG. 2, a flow 200 of an embodiment of an image processing method according to the present application is shown. The image processing method includes the following steps:
  • Step 201: Obtain a two-dimensional clothing image.
  • In this embodiment, the execution subject of the image processing method (for example, the server 105 shown in FIG. 1) may obtain the two-dimensional clothing image from a terminal device (for example, the terminal devices 101, 102, and 103 shown in FIG. 1) through a wired or wireless connection. Here, the terminal device includes, but is not limited to, a still camera, a video camera, a webcam, a smartphone, a tablet computer, and so on.
  • The two-dimensional clothing image may also be obtained locally by the execution subject, and may include a top image, a pants image, a T-shirt image, and the like.
  • Generally, clothing can include various categories, such as jeans, sweatpants, trench coats, and down jackets; the same category can include various styles, and the same style can come in different colors and patterns.
  • Here, clothing of the same style but different colors can be assigned the same style identifier in advance.
  • When obtaining the two-dimensional clothing image, the execution subject may further obtain the style identifier of the clothing presented in the image. The style identifier may include text describing the style of the clothing, and may also include numbers, letters, character strings, and the like.
  • Step 202: Select a three-dimensional clothing model that matches the style identifier from the pre-established set of three-dimensional clothing models.
  • In this embodiment, the execution subject may establish a set of three-dimensional clothing models in advance. Three-dimensional clothing models of different styles can be placed in the set, each created from the style characteristics of a particular style of clothing. Here, a three-dimensional clothing model is a mesh three-dimensional model created with three-dimensional reconstruction technology, so clothing of the same style but with different textures can be represented by the same three-dimensional clothing model.
  • Texture usually refers to the color on an object, and can also refer to the roughness of the object's surface; it is usually represented by color values.
  • Each three-dimensional clothing model can be assigned a style identifier, through which the execution subject can select from the set a three-dimensional clothing model matching the clothing presented in the acquired two-dimensional clothing image. Here, the style identifier may include text describing the style of the clothing, and may also include numbers, letters, character strings, and the like, represented in the same way as the style identifier included with the two-dimensional clothing image. The execution subject can therefore select, from the pre-established set, a three-dimensional clothing model having the same style as the clothing presented in the acquired two-dimensional clothing image.
  • The three-dimensional clothing models in the set may further include already-annotated hash points. Here, a hash point may be a point manually annotated on the three-dimensional clothing model, or a point generated in advance by the execution subject. Annotating hash points on the three-dimensional clothing model can speed up mapping the clothing image texture onto the model. The hash point information of a hash point may include, for example, object coordinate information. Object coordinates are usually coordinates whose origin is the center of the object.
  • The above set of three-dimensional clothing models can be created for the different clothing styles to be modeled using existing three-dimensional model creation technology (such as the Autodesk Maya modeling software).
  • In some optional implementations of this embodiment, the set of three-dimensional clothing models can also be established by the following steps:
  • First, the execution subject can obtain a set of two-dimensional sample clothing images. Here, the set includes a two-dimensional sample clothing image sequence of at least one style; such a sequence may include, for example, a front two-dimensional sample clothing image and a back two-dimensional sample clothing image of a sample garment.
  • Then, for the two-dimensional sample clothing image sequence of each of the at least one style, the execution subject may perform the following steps. First, feature points are extracted from the two-dimensional sample clothing image sequence. Here, a feature point can be a point in the image where brightness changes sharply, or a point of maximum curvature on an edge curve of the image, which differs significantly from its neighboring points. Feature points can be extracted with the existing SIFT (Scale-Invariant Feature Transform) algorithm. Next, a fundamental matrix is constructed from the extracted feature points using a linear method. Then, based on the calibration parameters of the pre-calibrated camera, the projection matrix of that camera can be determined, and the three-dimensional clothing model is obtained from the constructed fundamental matrix and the camera's projection matrix. Here, the pre-calibrated camera is the camera that captured the two-dimensional sample clothing image sequence, and it had been calibrated when the sequence was captured.
  • Finally, the set of three-dimensional clothing models is generated from the at least one established three-dimensional clothing model.
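  • As a concrete illustration of the feature-extraction and fundamental-matrix steps above, the following is a minimal sketch using OpenCV; it is not the patent's implementation. The image file names, the matcher settings, and the placeholder intrinsic matrix K are all assumptions.

```python
import cv2
import numpy as np

# Hypothetical sample images: two views of the same sample garment.
img1 = cv2.imread("front_view.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("back_view.jpg", cv2.IMREAD_GRAYSCALE)

# Extract SIFT feature points and descriptors from each view.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors between views and keep distinctive matches (ratio test).
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Fundamental matrix from the matched feature points (linear 8-point method).
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_8POINT)

# With the pre-calibrated intrinsics K (placeholder identity here), the
# essential matrix follows, from which a projection matrix can be recovered
# for triangulating points of the 3D clothing model.
K = np.eye(3)
E = K.T @ F @ K
```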
  • Step 203: Annotate the acquired two-dimensional clothing image with hash points based on the pre-established coordinate mapping between two-dimensional clothing images and three-dimensional clothing models and on the hash points of the selected three-dimensional clothing model.
  • In this embodiment, the execution subject may establish the coordinate mapping between two-dimensional clothing images and three-dimensional clothing models in advance. As an example, the coordinate mapping can be established as follows. First, existing surface texture splitting technology can be used to split the texture of the three-dimensional clothing model. Because the three-dimensional clothing model is a mesh model without a texture map and has already been annotated with hash points, the texture plan obtained by splitting is a scatter plot. Next, a mapping is established between the obtained scatter plot and the three-dimensional clothing model; this mapping is the coordinate mapping between the two-dimensional clothing image and the three-dimensional clothing model.
  • The execution subject may then annotate the acquired two-dimensional clothing image with hash points according to this pre-established coordinate mapping. As an example, based on the two-dimensional plane coordinates of each hash point in the scatter plot of the selected three-dimensional clothing model, the execution subject may mark the corresponding positions in the acquired two-dimensional clothing image.
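  • A minimal sketch of this annotation step follows, assuming each hash point of the selected model carries normalized (U, V) coordinates in [0, 1] taken from the scatter plot; mapping those to pixel rows and columns gives the positions to mark in the acquired two-dimensional clothing image. The function name and inputs are hypothetical.

```python
import numpy as np

def annotate_hash_points(uv_coords, image_width, image_height):
    """Map hash-point UV coordinates (in [0, 1]) from the model's scatter
    plot to pixel positions in the two-dimensional clothing image."""
    uv = np.asarray(uv_coords, dtype=np.float64)
    cols = uv[:, 0] * (image_width - 1)            # U addresses columns
    rows = (1.0 - uv[:, 1]) * (image_height - 1)   # V flipped: image rows grow downward
    return np.stack([rows, cols], axis=1)          # (N, 2) row/col annotation positions
```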
  • Step 204: Generate a three-dimensional clothing image of the acquired two-dimensional clothing image based on the selected three-dimensional clothing model and the annotation result.
  • In this embodiment, according to the annotation result of step 203, the execution subject may determine the color value, gray value, and the like at each annotated hash point of the two-dimensional clothing image. Through the pre-established coordinate mapping between two-dimensional clothing images and three-dimensional clothing models, these color values and gray values are set at the corresponding annotated hash points of the selected three-dimensional clothing model. Existing interpolation algorithms are then used to interpolate between the hash points to obtain the color values and gray values between them. Finally, a three-dimensional clothing image of the acquired two-dimensional clothing image is generated from the color values and gray values at each point of the resulting three-dimensional clothing model.
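  • The text does not tie the interpolation to a specific algorithm; the sketch below uses SciPy's griddata as one possible choice, linearly interpolating color values between the annotated hash points and falling back to nearest-neighbor values outside their convex hull. The function and its inputs are hypothetical.

```python
import numpy as np
from scipy.interpolate import griddata

def fill_between_hash_points(points, colors, height, width):
    """points: (N, 2) annotated hash-point (row, col) positions;
    colors: (N, C) color values read at those points. Returns an
    (height, width, C) array with values interpolated between points."""
    rows, cols = np.mgrid[0:height, 0:width]
    channels = []
    for c in range(colors.shape[1]):
        lin = griddata(points, colors[:, c], (rows, cols), method="linear")
        near = griddata(points, colors[:, c], (rows, cols), method="nearest")
        lin[np.isnan(lin)] = near[np.isnan(lin)]   # fill outside the convex hull
        channels.append(lin)
    return np.stack(channels, axis=-1)
```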
  • With continued reference to FIG. 3, FIG. 3 is a schematic diagram of an application scenario of the image processing method according to this embodiment.
  • In the application scenario of FIG. 3, after the server 301 obtains a two-dimensional clothing image bearing a "shirt" style identifier, it can select a three-dimensional clothing model 302 matching "shirt" from the pre-established set of three-dimensional clothing models. Here, the three-dimensional clothing model 302 is a mesh three-dimensional model without a texture map, and it includes already-annotated hash points.
  • The server 301 may then annotate the acquired two-dimensional clothing image with hash points according to the pre-established coordinate mapping between the three-dimensional clothing model 302 and the acquired two-dimensional clothing image and the hash points of the three-dimensional clothing model 302. Reference numeral 303 denotes the "shirt" two-dimensional clothing image annotated with hash points. Finally, a three-dimensional clothing image 304 of the acquired two-dimensional clothing image is generated from the hash point annotation result of the "shirt" two-dimensional clothing image and the three-dimensional clothing model 302.
  • The image processing method and device provided in the embodiments of the present application obtain a two-dimensional clothing image; select, from a pre-established set of three-dimensional clothing models, a three-dimensional clothing model that matches the style identifier of the acquired two-dimensional clothing image; annotate the acquired two-dimensional clothing image with texture coordinate points based on the coordinate mapping between two-dimensional clothing images and three-dimensional clothing models; and finally generate the three-dimensional clothing image from the annotation result and the selected three-dimensional clothing model, thereby improving both the speed of generating the three-dimensional clothing image and the accuracy of the generated three-dimensional clothing image.
  • With further reference to FIG. 4, a flow 400 of another embodiment of an image processing method according to the present application is shown. The image processing method includes the following steps:
  • Step 401: Obtain a two-dimensional clothing image.
  • In this embodiment, the execution subject of the image processing method (for example, the server 105 shown in FIG. 1) may obtain the two-dimensional clothing image from a terminal device (for example, the terminal devices 101, 102, and 103 shown in FIG. 1) through a wired or wireless connection. Here, the two-dimensional clothing image may also be obtained locally by the execution subject, and may include a top image, a pants image, a T-shirt image, and the like. Clothing of the same style but different colors can be assigned the same style identifier in advance. When obtaining the two-dimensional clothing image, the execution subject may further obtain the style identifier of the clothing presented in the image. The style identifier may include text describing the style of the clothing, and may also include numbers, letters, character strings, and the like.
  • Step 402: Select a three-dimensional clothing model that matches the style identifier from the pre-established set of three-dimensional clothing models.
  • In this embodiment, the execution subject may establish a set of three-dimensional clothing models in advance, in which three-dimensional clothing models of different styles can be placed. Here, a three-dimensional clothing model is a mesh three-dimensional model created with three-dimensional reconstruction technology, so clothing of the same style but with different textures can be represented by the same three-dimensional clothing model. Each three-dimensional clothing model can be assigned a style identifier, through which the execution subject can select from the set a three-dimensional clothing model matching the clothing presented in the acquired clothing image. Here, the style identifier may include text describing the style of the clothing, and may also include numbers, letters, character strings, and the like, represented in the same way as the style identifier of the acquired two-dimensional clothing image.
  • The three-dimensional clothing models in the set may further include already-annotated hash points. Here, the hash point information of a hash point may include, for example, object coordinate information; object coordinates are usually coordinates whose origin is the center of the object.
  • Step 403: Annotate the acquired two-dimensional clothing image with hash points based on the coordinate mapping between two-dimensional clothing images and three-dimensional clothing models and on the hash points of the selected three-dimensional clothing model.
  • In this embodiment, the execution subject may establish the coordinate mapping between two-dimensional clothing images and three-dimensional clothing models in advance, and may annotate the acquired two-dimensional clothing image with hash points according to this pre-established mapping. Here, the information of an annotated hash point of the two-dimensional clothing image may include object coordinate information, texture information, and the like. The object coordinate information of a hash point of the two-dimensional clothing image is the object coordinate information of the corresponding already-annotated hash point of the selected three-dimensional clothing model.
  • Texture usually refers to the color on an object, and can also refer to the roughness of the object's surface. Each color value is called a texture element, or texel, and each texel has a unique address in the texture. The address can be thought of as a column and row value, represented by U and V respectively, and the texture coordinates are the coordinates that map a point in the object coordinate system to a texel address. The texture information may include texture coordinate information, texture color information, and the like; here, the texture information of the hash points of the two-dimensional clothing image may include, for each hash point, the texel information and the texture coordinate information mapped to the selected three-dimensional clothing model.
  • Step 404: Perform coordinate transformation on the hash points of the acquired two-dimensional clothing image, and determine the coordinate information of the transformed hash points.
  • In this embodiment, the execution subject may perform coordinate transformation on the hash points of the acquired two-dimensional clothing image. Here, the coordinate transformation may include, for example, mapping a hash point from the object coordinate system to the world coordinate system to obtain its world coordinates, and then converting it from the world coordinate system to the screen coordinate system so that the three-dimensional clothing model can be displayed on the screen. The coordinate transformation may further include mapping the texture coordinates of the hash points to screen coordinates. The coordinate information of the transformed hash points is thereby determined. It is worth noting that these coordinate transformation methods are well known in the art and are not repeated here.
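  • A minimal sketch of this transformation chain, assuming conventional 4x4 model, view, and projection matrices supplied by the caller; the viewport mapping from normalized device coordinates (NDC) to screen pixels follows the usual graphics convention.

```python
import numpy as np

def transform_hash_points(points_obj, model, view, projection, viewport_w, viewport_h):
    """Object coords -> world -> clip -> screen, using 4x4 homogeneous
    matrices (placeholders here; any real renderer supplies them)."""
    n = points_obj.shape[0]
    homo = np.hstack([points_obj, np.ones((n, 1))])     # to homogeneous coordinates
    clip = homo @ (projection @ view @ model).T         # object space -> clip space
    ndc = clip[:, :3] / clip[:, 3:4]                    # perspective divide -> NDC
    x = (ndc[:, 0] + 1.0) * 0.5 * viewport_w            # NDC x -> screen column
    y = (1.0 - ndc[:, 1]) * 0.5 * viewport_h            # NDC y -> screen row (flipped)
    return np.stack([x, y, ndc[:, 2]], axis=1)          # keep NDC depth for later use
```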
  • Step 405: Generate primitives with a preset shape based on the coordinate information of the transformed hash points.
  • In this embodiment, the execution subject may use the transformed hash points as vertices and connect a preset number of adjacent hash points together to form primitives of a preset shape. Here, the preset shape may include, for example, a triangle, a quadrilateral, or another polygon. Each primitive also includes the connection relationships between the transformed hash points it contains; a connection relationship includes, for example, the number of other hash points connected to each hash point and the relative coordinate information between each hash point and the hash points connected to it. (One way to form such primitives is shown in the sketch after this paragraph.)
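  • The text does not fix how adjacent hash points are chosen; Delaunay triangulation over the screen coordinates is one common way to connect neighboring points into triangle primitives, sketched below under that assumption.

```python
import numpy as np
from scipy.spatial import Delaunay

def build_triangle_primitives(screen_points):
    """Connect neighboring coordinate-transformed hash points into triangle
    primitives. Each row of the returned array holds the indices of three
    connected hash points, i.e. one primitive's connection relationship."""
    tri = Delaunay(np.asarray(screen_points)[:, :2])  # triangulate in screen x, y
    return tri.simplices                              # shape (num_triangles, 3)
```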
  • Step 406: Rasterize the primitives to obtain a fragment set of the primitives.
  • In this embodiment, the execution subject may rasterize the primitives to obtain a fragment set of the primitives. Here, rasterization usually includes interpolating between the hash points of each primitive to obtain multiple interpolated points and their interpolated-point information. Each interpolated point, together with its information, can be called a fragment. The interpolated-point information may include, for example, color information and texture coordinate information.
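  • A minimal rasterization sketch for a single triangle primitive: fragments are produced at covered pixels, with per-vertex attributes (color values, texture coordinates) interpolated barycentrically, matching the interpolation described above. Counter-clockwise vertex order is assumed; names and inputs are illustrative.

```python
import numpy as np

def rasterize_triangle(v, attrs):
    """Rasterize one triangle primitive. `v` is a (3, 2) float array of
    screen-space vertices (counter-clockwise order assumed); `attrs` is a
    (3, k) array of per-vertex attributes such as color values and texture
    coordinates. Returns fragments as ((x, y), interpolated attributes)."""
    fragments = []
    (x0, y0), (x1, y1), (x2, y2) = v
    area = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)  # twice the signed area
    if area == 0:
        return fragments  # degenerate primitive, no pixels covered
    min_x, max_x = int(np.floor(v[:, 0].min())), int(np.ceil(v[:, 0].max()))
    min_y, max_y = int(np.floor(v[:, 1].min())), int(np.ceil(v[:, 1].max()))
    for y in range(min_y, max_y + 1):
        for x in range(min_x, max_x + 1):
            # Barycentric weights of the pixel with respect to the vertices.
            w0 = ((x1 - x) * (y2 - y) - (x2 - x) * (y1 - y)) / area
            w1 = ((x2 - x) * (y0 - y) - (x0 - x) * (y2 - y)) / area
            w2 = 1.0 - w0 - w1
            if w0 >= 0 and w1 >= 0 and w2 >= 0:  # pixel lies inside the triangle
                fragments.append(((x, y), w0 * attrs[0] + w1 * attrs[1] + w2 * attrs[2]))
    return fragments
```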
  • Step 407: Perform texture coordinate mapping on the fragment set to obtain the pixels at the texture coordinate points of the selected three-dimensional clothing model.
  • In this embodiment, the execution subject may determine the color value at each point of the three-dimensional clothing model according to the texture coordinate information and color information of each fragment, and may thus color the selected three-dimensional clothing model to obtain the pixels at each of its points. Here, the points of the three-dimensional clothing model include both the annotated hash points and the interpolated points obtained by interpolation.
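  • A minimal sketch of the texture coordinate mapping, assuming fragments carry interpolated (U, V) coordinates as produced above and that the annotated two-dimensional clothing image serves as the texture; each fragment's UV address selects the texel whose color becomes the model's pixel at that point.

```python
import numpy as np

def map_texture(fragments, texture):
    """Assign each fragment the texel addressed by its (U, V) texture
    coordinates; `texture` is the (H, W, 3) clothing image. Returns a
    dict from screen pixel (x, y) to its color value."""
    h, w = texture.shape[:2]
    pixels = {}
    for (x, y), attrs in fragments:
        u, v = float(attrs[0]), float(attrs[1])
        col = min(max(int(u * (w - 1)), 0), w - 1)          # U -> texel column
        row = min(max(int((1.0 - v) * (h - 1)), 0), h - 1)  # V -> texel row (flipped)
        pixels[(x, y)] = texture[row, col]
    return pixels
```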
  • Step 408: Generate a three-dimensional clothing image based on the obtained pixels.
  • In this embodiment, the execution subject may render the three-dimensional clothing model to generate a three-dimensional clothing image bearing the texture of the acquired two-dimensional clothing image.
  • In some optional implementations of this embodiment, the fragments in the fragment set may further include texture material information. Since the texture material of the clothing presented in the two-dimensional clothing image is rough, ambient light (such as sunlight) projected onto the clothing surface produces diffuse reflection.
  • The execution subject may therefore determine a texture material coefficient from the texture material information of the fragments, and may then simulate, for each pixel of the three-dimensional clothing model, the diffusely reflected intensity of the ambient light projected onto the model. Here, the diffusely reflected intensity of ambient light is usually the product of the texture material coefficient and the ambient light intensity.
  • The execution subject may also determine the position of a virtual light source relative to each pixel of the three-dimensional clothing model from the screen-coordinate-system coordinates of the virtual light source set in the three-dimensional scene, and may then determine the diffusely reflected intensity of the directional light at each pixel according to the Lambertian illumination model. Here, the Lambertian illumination model states that the intensity of diffusely reflected light is proportional only to the cosine of the angle between the direction of the incident light and the surface normal vector at the reflection point.
  • Processing the obtained pixels may include changing the color value that each pixel of the three-dimensional clothing model had before the light source was added. For example, the color value of the light source, the intensity value of the light source, and the no-light-source color value of each pixel of the three-dimensional clothing model may be multiplied together, and the result taken as the color value at that pixel.
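  • A minimal sketch combining the ambient term (material coefficient times ambient intensity) and the Lambertian directional term described above, i.e. roughly color = light_color x (k_a * I_a + k_d * I_l * max(0, N.L)) x base_color. All coefficients and positions are hypothetical inputs, not values taken from the patent.

```python
import numpy as np

def shade_pixel(base_color, normal, frag_pos, light_pos,
                light_color, light_intensity, k_ambient, k_diffuse, ambient_intensity):
    """Ambient term plus Lambertian diffuse term for one pixel.
    Colors are RGB arrays in [0, 1]; positions and normal are 3-vectors."""
    n = normal / np.linalg.norm(normal)
    l = light_pos - frag_pos
    l = l / np.linalg.norm(l)                       # direction toward the light
    cos_theta = max(float(np.dot(n, l)), 0.0)       # Lambert's cosine law
    intensity = k_ambient * ambient_intensity + k_diffuse * light_intensity * cos_theta
    return np.clip(light_color * intensity * base_color, 0.0, 1.0)
```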
  • Finally, the execution subject can generate the three-dimensional clothing image from the processed pixels.
  • In some optional implementations of this embodiment, the execution subject may further smooth the texture of the generated three-dimensional clothing image.
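  • The smoothing filter is left open in the text; a Gaussian blur over the rendered texture, as sketched below with OpenCV, is one simple choice.

```python
import cv2
import numpy as np

def smooth_texture(rendered_texture: np.ndarray) -> np.ndarray:
    """Smooth the rendered clothing texture with a small Gaussian kernel."""
    return cv2.GaussianBlur(rendered_texture, ksize=(5, 5), sigmaX=1.0)
```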
  • Step 409: Receive body shape information.
  • In this embodiment, the execution subject may also receive body shape information. Here, the body shape information may be size information for various parts of the body sent by the user through a terminal, such as waist circumference, shoulder width, and bust measurements; it may also be body proportion information or the like selected by the user through the terminal.
  • Step 410: Select a virtual three-dimensional model that matches the body shape information from a preset set of virtual three-dimensional models.
  • In this embodiment, the execution subject may compare the size data in the body shape information with the body size data of the preset virtual three-dimensional models in the preset set and, according to the comparison result, select as the match a preset virtual three-dimensional model whose size data differs from the received size data by less than a preset threshold.
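  • A minimal sketch of this matching step, assuming body shape information arrives as a fixed-order measurement vector (for example waist, shoulder width, bust, in cm) and each preset virtual model stores the same measurements; the threshold value and names are placeholders.

```python
import numpy as np

def match_virtual_model(user_measurements, model_measurements, threshold=5.0):
    """Pick the preset virtual 3D model whose stored body sizes differ
    least from the received body shape information, requiring the
    difference to stay under a preset threshold. Returns the model index
    or None when no model matches closely enough."""
    user = np.asarray(user_measurements, dtype=np.float64)
    models = np.asarray(model_measurements, dtype=np.float64)
    diffs = np.abs(models - user).max(axis=1)   # worst-case per-measurement gap
    best = int(np.argmin(diffs))
    return best if diffs[best] < threshold else None
```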
  • Step 411: Based on the preset coordinate mapping between virtual three-dimensional models and three-dimensional clothing models, fit the three-dimensional clothing image onto the selected virtual three-dimensional model and present it.
  • In this embodiment, the execution subject may fit the three-dimensional clothing image onto the selected virtual three-dimensional model according to the preset coordinate mapping between virtual three-dimensional models and three-dimensional clothing models. Here, this mapping may be a coordinate mapping between the virtual three-dimensional model and the three-dimensional clothing model in the screen coordinate system. Each point of the three-dimensional clothing image is thus mapped onto the preset virtual three-dimensional model, and the three-dimensional clothing image is presented by the virtual three-dimensional model.
  • Compared with the embodiment of FIG. 2, this embodiment describes the generation process of the three-dimensional clothing image in more detail, so that the texture of the acquired two-dimensional clothing image can be set on the selected three-dimensional model more accurately. At the same time, this embodiment presents the three-dimensional clothing image on a preset virtual three-dimensional model, so that the user can view the generated three-dimensional clothing image more intuitively, improving the visualization effect.
  • With further reference to FIG. 5, as an implementation of the method shown in the above figures, the present application provides an embodiment of an image processing apparatus. This apparatus embodiment corresponds to the method embodiment shown in FIG. 2, and the apparatus may be specifically applied to various electronic devices.
  • As shown in FIG. 5, the image processing apparatus 500 of this embodiment includes an obtaining unit 501, a selecting unit 502, an annotating unit 503, and a generating unit 504. The obtaining unit 501 is configured to obtain a two-dimensional clothing image, where the two-dimensional clothing image includes a style identifier of the clothing. The selecting unit 502 is configured to select, from a pre-established set of three-dimensional clothing models, a three-dimensional clothing model that matches the style identifier, where the three-dimensional clothing model includes already-annotated hash points. The annotating unit 503 is configured to annotate the acquired two-dimensional clothing image with hash points based on the pre-established coordinate mapping between two-dimensional clothing images and three-dimensional clothing models and on the hash points of the selected three-dimensional clothing model. The generating unit 504 is configured to generate, based on the selected three-dimensional clothing model and the annotation result, a three-dimensional clothing image of the acquired two-dimensional clothing image.
  • In this embodiment, for the specific processing of the obtaining unit 501, the selecting unit 502, the annotating unit 503, and the generating unit 504 and the technical effects they bring, reference may be made to the descriptions of steps 201, 202, 203, and 204 in the corresponding embodiment of FIG. 2; details are not repeated here.
  • In some optional implementations of this embodiment, the generating unit 504 includes: a coordinate transformation subunit (not shown) configured to perform coordinate transformation on the hash points of the acquired two-dimensional clothing image and determine the coordinate information of the transformed hash points; a primitive generating subunit (not shown) configured to generate primitives with a preset shape based on the transformed hash points, where a primitive includes a preset number of transformed hash points and the connection relationships between those hash points; a processing subunit (not shown) configured to rasterize the primitives to obtain a fragment set of the primitives, where the fragments in the fragment set include color values and texture coordinate information; a texture coordinate mapping subunit (not shown) configured to perform texture coordinate mapping on the fragment set to obtain the pixels of the selected three-dimensional clothing model; and a generating subunit (not shown) configured to generate a three-dimensional clothing image based on the obtained pixels.
  • In some optional implementations of this embodiment, the fragments in the fragment set further include texture material information; and the generating subunit (not shown) is further configured to: determine light intensity information of the obtained pixels based on the texture material information and preset light source coordinate information; process the obtained pixels based on light source color information and the obtained light intensity information; and generate the three-dimensional clothing image based on the processed pixels.
  • In some optional implementations of this embodiment, the image processing apparatus 500 is further configured to smooth the texture of the three-dimensional clothing image.
  • In some optional implementations of this embodiment, the set of three-dimensional clothing models is established by the following steps: obtaining a set of two-dimensional sample clothing images, where the set includes a two-dimensional sample clothing image sequence of at least one style; for the two-dimensional sample clothing image sequence of each of the at least one style, performing the following steps: extracting feature points from the two-dimensional sample clothing image sequence; constructing a fundamental matrix based on the extracted feature points; and establishing a three-dimensional clothing model based on the constructed fundamental matrix and the calibration parameters of a pre-calibrated camera, where the camera is the camera that captured the two-dimensional sample clothing image sequence; and generating the set of three-dimensional clothing models from the at least one established three-dimensional clothing model.
  • In some optional implementations of this embodiment, the image processing apparatus 500 is further configured to: receive body shape information; select, from a preset set of virtual three-dimensional models, a virtual three-dimensional model that matches the body shape information; and, based on the preset coordinate mapping between virtual three-dimensional models and three-dimensional clothing models, fit the three-dimensional clothing image onto the selected virtual three-dimensional model and present it.
  • Referring now to FIG. 6, a schematic structural diagram of a computer system 600 suitable for implementing a server (for example, the server 105 shown in FIG. 1) according to embodiments of the present application is shown.
  • the electronic device shown in FIG. 6 is only an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
  • As shown in FIG. 6, the computer system 600 includes a central processing unit (CPU) 601, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage portion 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the system 600. The CPU 601, the ROM 602, and the RAM 603 are connected to one another through a bus 604.
  • An input/output (I/O) interface 605 is also connected to the bus 604. The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage portion 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card, a modem, and the like.
  • The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as needed, so that a computer program read from it can be installed into the storage portion 608 as needed.
  • In particular, according to embodiments of the present disclosure, the process described above with reference to the flowcharts may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product that includes a computer program carried on a computer-readable medium, the computer program containing program code for performing the methods shown in the flowcharts. In such embodiments, the computer program may be downloaded and installed from a network through the communication section 609 and/or installed from the removable medium 611. When the computer program is executed by the central processing unit (CPU) 601, the above-described functions are performed.
  • It should be noted that the computer-readable medium in the present application may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • In the present application, a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in combination with an instruction execution system, apparatus, or device. A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. The computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless links, wires, optical fiber cables, RF, or any suitable combination of the foregoing.
  • Each block in a flowchart or block diagram may represent a module, a program segment, or a part of code that contains one or more executable instructions for implementing a specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings; for example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. Each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the units described in the embodiments of the present application may be implemented by software or hardware.
  • The described units may also be provided in a processor, which may, for example, be described as: a processor including an obtaining unit, a selecting unit, an annotating unit, and a generating unit. The names of these units do not in some cases constitute a limitation on the units themselves; for example, the obtaining unit may also be described as "a unit for obtaining a two-dimensional clothing image".
  • In another aspect, the present application also provides a computer-readable medium, which may be included in the apparatus described in the above embodiments, or may exist alone without being assembled into the apparatus. The computer-readable medium carries one or more programs; when the one or more programs are executed by the apparatus, the apparatus is caused to: obtain a two-dimensional clothing image, where the two-dimensional clothing image includes a style identifier of the clothing; select, from a pre-established set of three-dimensional clothing models, a three-dimensional clothing model that matches the style identifier, where the three-dimensional clothing model includes already-annotated hash points; annotate the acquired two-dimensional clothing image with hash points based on the pre-established coordinate mapping between two-dimensional clothing images and three-dimensional clothing models and on the hash points of the selected three-dimensional clothing model; and generate, based on the selected three-dimensional clothing model and the annotation result, a three-dimensional clothing image of the acquired two-dimensional clothing image.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the present application disclose an image processing method and device. A specific implementation of the method includes: obtaining a two-dimensional clothing image, where the two-dimensional clothing image includes a style identifier of the clothing; selecting, from a pre-established set of three-dimensional clothing models, a three-dimensional clothing model that matches the style identifier, where the three-dimensional clothing model includes already-annotated hash points; annotating the acquired two-dimensional clothing image with hash points based on the pre-established coordinate mapping between two-dimensional clothing images and three-dimensional clothing models and on the hash points of the selected three-dimensional clothing model; and generating, based on the selected three-dimensional clothing model and the annotation result, a three-dimensional clothing image of the acquired two-dimensional clothing image. This implementation improves the speed of generating three-dimensional clothing images.

Description

Image processing method and device
This patent application claims priority to Chinese Patent Application No. 201810549444.9, filed on May 31, 2018 by applicants 北京京东尚科信息技术有限公司 and 北京京东世纪贸易有限公司 under the invention title "图像处理方法和装置" ("Image processing method and device"); the entire content of that application is incorporated into the present application by reference.
TECHNICAL FIELD
The embodiments of the present application relate to the field of computer technology, and in particular to an image processing method and device.
BACKGROUND
Image processing is a technology that analyzes images with image processing equipment to achieve a desired result. Typically, color images, grayscale images, and the like captured by photographing or scanning equipment are processed with methods such as image matching, image description, and image recognition to obtain a processed image.
Existing methods for processing three-dimensional clothing images generally apply existing image processing techniques to the texture of a clothing image to obtain a three-dimensional clothing image.
SUMMARY
The embodiments of the present application provide an image processing method and device.
In a first aspect, an embodiment of the present application provides an image processing method. The method includes: obtaining a two-dimensional clothing image, where the two-dimensional clothing image includes a style identifier of the clothing; selecting, from a pre-established set of three-dimensional clothing models, a three-dimensional clothing model that matches the style identifier, where the three-dimensional clothing model includes already-annotated hash points; annotating the acquired two-dimensional clothing image with hash points based on the pre-established coordinate mapping between two-dimensional clothing images and three-dimensional clothing models and on the hash points of the selected three-dimensional clothing model; and generating, based on the selected three-dimensional clothing model and the annotation result, a three-dimensional clothing image of the acquired two-dimensional clothing image.
In some embodiments, generating the three-dimensional clothing image of the acquired two-dimensional clothing image based on the selected three-dimensional clothing model and the annotation result includes: performing coordinate transformation on the hash points of the acquired two-dimensional clothing image to determine the coordinate information of the transformed hash points; generating, based on the transformed hash points, primitives with a preset shape, where a primitive includes a preset number of transformed hash points and the connection relationships between those hash points; rasterizing the primitives to obtain a fragment set of the primitives, where the fragments in the fragment set include color values and texture coordinate information; performing texture coordinate mapping on the fragment set to obtain the pixels of the selected three-dimensional clothing model; and generating the three-dimensional clothing image based on the obtained pixels.
In some embodiments, the fragments in the fragment set further include texture material information; and generating the three-dimensional clothing image based on the obtained pixels includes: determining light intensity information of the obtained pixels based on the texture material information and preset light source coordinate information; processing the obtained pixels based on light source color information and the obtained light intensity information; and generating the three-dimensional clothing image based on the processed pixels.
In some embodiments, after generating the three-dimensional clothing image based on the obtained pixels, the method further includes: smoothing the texture of the three-dimensional clothing image.
In some embodiments, the set of three-dimensional clothing models is established by the following steps: obtaining a set of two-dimensional sample clothing images, where the set includes a two-dimensional sample clothing image sequence of at least one style; for the two-dimensional sample clothing image sequence of each of the at least one style, performing the following steps: extracting feature points from the two-dimensional sample clothing image sequence; constructing a fundamental matrix based on the extracted feature points; and establishing a three-dimensional clothing model based on the constructed fundamental matrix and the calibration parameters of a pre-calibrated camera, where the camera is the camera that captured the two-dimensional sample clothing image sequence; and generating the set of three-dimensional clothing models from the at least one established three-dimensional clothing model.
In some embodiments, after generating the three-dimensional clothing image of the acquired two-dimensional clothing image, the method further includes: receiving body shape information; selecting, from a preset set of virtual three-dimensional models, a virtual three-dimensional model that matches the body shape information; and, based on the preset coordinate mapping between virtual three-dimensional models and three-dimensional clothing models, fitting the three-dimensional clothing image onto the selected virtual three-dimensional model and presenting it.
In a second aspect, an embodiment of the present application provides an image processing apparatus. The apparatus includes: an obtaining unit configured to obtain a two-dimensional clothing image, where the two-dimensional clothing image includes a style identifier of the clothing; a selecting unit configured to select, from a pre-established set of three-dimensional clothing models, a three-dimensional clothing model that matches the style identifier, where the three-dimensional clothing model includes already-annotated hash points; an annotating unit configured to annotate the acquired two-dimensional clothing image with hash points based on the pre-established coordinate mapping between two-dimensional clothing images and three-dimensional clothing models and on the hash points of the selected three-dimensional clothing model; and a generating unit configured to generate, based on the selected three-dimensional clothing model and the annotation result, a three-dimensional clothing image of the acquired two-dimensional clothing image.
In some embodiments, the generating unit includes: a coordinate transformation subunit configured to perform coordinate transformation on the hash points of the acquired two-dimensional clothing image and determine the coordinate information of the transformed hash points; a primitive generating subunit configured to generate primitives with a preset shape based on the transformed hash points, where a primitive includes a preset number of transformed hash points and the connection relationships between those hash points; a processing subunit configured to rasterize the primitives to obtain a fragment set of the primitives, where the fragments in the fragment set include color values and texture coordinate information; a texture coordinate mapping subunit configured to perform texture coordinate mapping on the fragment set to obtain the pixels of the selected three-dimensional clothing model; and a generating subunit configured to generate a three-dimensional clothing image based on the obtained pixels.
In some embodiments, the fragments in the fragment set further include texture material information; and the generating subunit is further configured to: determine light intensity information of the obtained pixels based on the texture material information and preset light source coordinate information; process the obtained pixels based on light source color information and the obtained light intensity information; and generate the three-dimensional clothing image based on the processed pixels.
In some embodiments, the image processing apparatus is further configured to smooth the texture of the three-dimensional clothing image.
In some embodiments, the set of three-dimensional clothing models is established by the following steps: obtaining a set of two-dimensional sample clothing images, where the set includes a two-dimensional sample clothing image sequence of at least one style; for the two-dimensional sample clothing image sequence of each of the at least one style, performing the following steps: extracting feature points from the two-dimensional sample clothing image sequence; constructing a fundamental matrix based on the extracted feature points; and establishing a three-dimensional clothing model based on the constructed fundamental matrix and the calibration parameters of a pre-calibrated camera, where the camera is the camera that captured the two-dimensional sample clothing image sequence; and generating the set of three-dimensional clothing models from the at least one established three-dimensional clothing model.
In some embodiments, the image processing apparatus is further configured to: receive body shape information; select, from a preset set of virtual three-dimensional models, a virtual three-dimensional model that matches the body shape information; and, based on the preset coordinate mapping between virtual three-dimensional models and three-dimensional clothing models, fit the three-dimensional clothing image onto the selected virtual three-dimensional model and present it.
In a third aspect, an embodiment of the present application provides a server. The server includes: one or more processors; and a storage device on which one or more programs are stored; when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method described in any implementation of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the method described in any implementation of the first aspect is implemented.
The image processing method and device provided in the embodiments of the present application obtain a two-dimensional clothing image; select, from a pre-established set of three-dimensional clothing models, a three-dimensional clothing model that matches the style identifier of the acquired two-dimensional clothing image; annotate the acquired two-dimensional clothing image with texture coordinate points based on the coordinate mapping between two-dimensional clothing images and three-dimensional clothing models; and finally generate the three-dimensional clothing image from the annotation result and the selected three-dimensional clothing model, thereby improving both the speed of generating the three-dimensional clothing image and the accuracy of the generated three-dimensional clothing image.
BRIEF DESCRIPTION OF THE DRAWINGS
Other features, objects, and advantages of the present application will become more apparent upon reading the following detailed description of non-limiting embodiments made with reference to the accompanying drawings:
FIG. 1 is an exemplary system architecture diagram to which the present application can be applied;
FIG. 2 is a flowchart of an embodiment of an image processing method according to the present application;
FIG. 3 is a schematic diagram of an application scenario of an image processing method according to the present application;
FIG. 4 is a flowchart of another embodiment of an image processing method according to the present application;
FIG. 5 is a schematic structural diagram of an embodiment of an image processing apparatus according to the present application;
FIG. 6 is a schematic structural diagram of a computer system suitable for implementing a server according to an embodiment of the present application.
DETAILED DESCRIPTION
The present application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the relevant invention and do not limit that invention. It should also be noted that, for ease of description, only the parts related to the relevant invention are shown in the drawings.
It should be noted that, in the case of no conflict, the embodiments in the present application and the features in the embodiments can be combined with each other. The present application is described in detail below with reference to the drawings and in conjunction with the embodiments.
FIG. 1 shows an exemplary system architecture 100 to which embodiments of the image processing method or image processing apparatus of the present application can be applied.
As shown in FIG. 1, the system architecture 100 may include terminal devices 101, 102, and 103, a network 104, and a server 105. The network 104 serves as the medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired links, wireless communication links, or fiber optic cables.
A user can use the terminal devices 101, 102, and 103 to interact with the server 105 through the network 104 to receive or send messages and the like.
The terminal devices 101, 102, and 103 may be hardware or software. When they are hardware, they can be various electronic devices that support an image capture function, including but not limited to still cameras, video cameras, webcams, smartphones, and tablet computers. When they are software, they can be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules or as a single piece of software or software module. This is not specifically limited here.
The server 105 can provide various services. For example, the server 105 can analyze and process data such as the two-dimensional clothing images obtained from the terminal devices 101, 102, and 103, and generate a processing result (for example, a three-dimensional clothing image).
It should be noted that the server 105 may be hardware or software. When the server 105 is hardware, it can be implemented as a distributed server cluster composed of multiple servers or as a single server. When the server 105 is software, it can be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. This is not specifically limited here.
It should be noted that when the above terminal devices themselves have an image processing function through which the acquired two-dimensional clothing image can be processed and a three-dimensional clothing image generated, the server 105 need not be provided; the image processing method provided in the embodiments of the present application may then be executed by the terminal devices 101, 102, and 103, and the image processing apparatus is correspondingly provided in the terminal devices 101, 102, and 103. When the two-dimensional clothing images are stored in the server 105, the terminal devices 101, 102, and 103 need not be provided; the image processing method provided in the embodiments of the present application may then be executed by the server 105, and the image processing apparatus is correspondingly provided in the server 105.
It should be understood that the numbers of terminal devices, networks, and servers in FIG. 1 are merely illustrative. Depending on implementation needs, there can be any number of terminal devices, networks, and servers.
With continued reference to Fig. 2, a flow 200 of an embodiment of the image processing method according to the present application is shown. The image processing method includes the following steps:
Step 201: acquire a two-dimensional clothing image.
In this embodiment, the execution body of the image processing method (for example, the server 105 shown in Fig. 1) may acquire a two-dimensional clothing image from a terminal device (for example, the terminal devices 101, 102, 103 shown in Fig. 1) through a wired or wireless connection, where the terminal device includes but is not limited to a camera, a video camera, a webcam, a smartphone, a tablet computer, and the like. Here, the two-dimensional clothing image may also be acquired locally by the execution body. The two-dimensional clothing image may include a top image, a trousers image, a T-shirt image, and so on. In general, clothing may fall into various categories, such as jeans, sweatpants, trench coats, and down jackets; clothing of the same category may come in different styles, and clothing of the same style may come in different colors, patterns, and so on. Here, clothing of the same style but different colors may be assigned the same style identifier in advance. When acquiring the two-dimensional clothing image, the execution body may further acquire the style identifier of the clothing presented by the two-dimensional clothing image. The style identifier may include text describing the clothing style, and may also include digits, letters, character strings, and the like.
Step 202: select, from a pre-established set of three-dimensional clothing models, a three-dimensional clothing model matching the style identifier.
In this embodiment, the execution body may establish the set of three-dimensional clothing models in advance. Three-dimensional clothing models of different styles may be provided in the set. A three-dimensional clothing model may be created based on the style features of clothing of a certain style. Here, the three-dimensional clothing model is a mesh-like three-dimensional clothing model created based on three-dimensional reconstruction technology, so that clothing of the same style but different textures can be represented by the same three-dimensional clothing model. Texture usually refers to the color on an object, and may also refer to the roughness of the object's surface; it is usually embodied by color values. Each three-dimensional clothing model may be provided with a style identifier, through which the execution body may select from the set the three-dimensional clothing model matching the clothing presented by the acquired two-dimensional clothing image. Here, the style identifier may include text describing the clothing style, and may also include digits, letters, character strings, and the like, represented in the same way as the style identifier included in the two-dimensional clothing image. The execution body may thus select, from the pre-established set, a three-dimensional clothing model having the same style as the clothing presented by the acquired two-dimensional clothing image. A three-dimensional clothing model in the set may further include annotated scattered points. Here, the scattered points may be points annotated manually on the three-dimensional clothing model, or points generated in advance by the execution body. Annotating scattered points on the three-dimensional clothing model can speed up the mapping of the clothing image texture onto the model. The scattered point information of a scattered point may include, for example, object coordinate information; object coordinates are usually coordinates whose origin is the center of the object. The set of three-dimensional clothing models may be created for the different clothing styles to be modeled using existing three-dimensional model creation technology (for example, the Autodesk Maya modeling software).
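By way of a non-limiting illustration, the selection in step 202 can be prototyped as a lookup into a pre-built registry keyed by style identifier. The Python sketch below is illustrative only; the class name, fields, and registry contents are assumptions of this sketch and not part of the disclosure.

from dataclasses import dataclass, field

@dataclass
class ClothingModel3D:
    """Hypothetical container for one mesh-like 3D clothing model."""
    style_id: str                                          # e.g. "shirt"
    scattered_points: list = field(default_factory=list)  # annotated points

# Pre-established model set: one entry per clothing style.
MODEL_SET = {
    "shirt": ClothingModel3D("shirt", [(0.10, 0.20, 0.00)]),
    "jeans": ClothingModel3D("jeans", [(0.30, 0.50, 0.10)]),
}

def select_model(style_id: str) -> ClothingModel3D:
    """Select the 3D clothing model whose style identifier matches."""
    if style_id not in MODEL_SET:
        raise ValueError(f"no 3D clothing model for style {style_id!r}")
    return MODEL_SET[style_id]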
In some optional implementations of this embodiment, the set of three-dimensional clothing models may also be established through the following steps:
First, the execution body may acquire a set of two-dimensional sample clothing images. Here, the set includes two-dimensional sample clothing image sequences of at least one style, and a sequence may include a front two-dimensional sample clothing image, a back two-dimensional sample clothing image, and so on, of the sample clothing.
Then, for each style's two-dimensional sample clothing image sequence among the at least one style, the execution body may perform the following steps. First, feature points are extracted from the two-dimensional sample clothing image sequence. Here, a feature point may be a point where the brightness of the image changes sharply, or a point of maximal curvature on an image edge curve; such a point differs markedly from its neighboring points. Feature point extraction may use the existing SIFT (Scale-Invariant Feature Transform) algorithm. Next, a fundamental matrix is constructed from the extracted feature points using a linear method. Then, based on the calibration parameters of the pre-calibrated camera, the projection matrix of that camera can be determined, and the three-dimensional clothing model is obtained from the constructed fundamental matrix and the camera's projection matrix. Here, the pre-calibrated camera is the camera that captured the two-dimensional sample clothing image sequence, and it had been calibrated before the sequence was captured.
Finally, the set of three-dimensional clothing models is generated from the at least one established three-dimensional clothing model. A two-view sketch of this reconstruction follows.
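As a non-limiting sketch of the reconstruction steps above, the following Python code, assuming OpenCV and a pre-calibrated 3 x 3 intrinsic matrix K, extracts SIFT feature points from two sample views, estimates the fundamental matrix robustly, and triangulates the matched points; a full pipeline would repeat this across the whole image sequence and mesh the resulting points.

import cv2
import numpy as np

def reconstruct_two_view(img1, img2, K):
    """Sparse 3D garment points from two calibrated sample views.

    K is the pre-calibrated 3 x 3 camera intrinsic matrix."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Keep unambiguous matches only (Lowe's ratio test).
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = [m for m, n in matcher.knnMatch(des1, des2, k=2)
               if m.distance < 0.75 * n.distance]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Fundamental matrix from the extracted feature points.
    F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
    pts1 = pts1[inliers.ravel() == 1]
    pts2 = pts2[inliers.ravel() == 1]

    # Relative pose from the calibration parameters, then triangulation.
    E = K.T @ F @ K
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (pts4d[:3] / pts4d[3]).T          # N x 3 model points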
Step 203: annotate the acquired two-dimensional clothing image with scattered points based on the pre-established coordinate mapping relationship between two-dimensional clothing images and three-dimensional clothing models and on the scattered points of the selected three-dimensional clothing model.
In this embodiment, the execution body may establish the coordinate mapping relationship between two-dimensional clothing images and three-dimensional clothing models in advance. As an example, the coordinate mapping relationship may be established as follows. First, the three-dimensional clothing model may be texture-unwrapped using existing surface-texture unwrapping technology. Since the three-dimensional clothing model is a mesh-like three-dimensional model without texture mapping and with annotated scattered points, the texture plane obtained by unwrapping the model is a scatter diagram. Next, a mapping relationship is established between the obtained scatter diagram and the three-dimensional clothing model; this mapping relationship is the coordinate mapping relationship between two-dimensional clothing images and three-dimensional clothing models.
In this embodiment, the execution body may annotate the acquired two-dimensional clothing image with scattered points according to the pre-established coordinate mapping relationship between two-dimensional clothing images and three-dimensional clothing models. As an example, the execution body may annotate the corresponding positions of the acquired two-dimensional clothing image based on the two-dimensional plane coordinates of the scattered points in the scatter diagram of the selected three-dimensional clothing model.
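One hedged reading of this annotation step in code: if each scattered point of the selected model carries normalized (u, v) coordinates in the unwrapped texture plane, annotating the acquired image reduces to scaling those coordinates to the image's pixel grid. The data layout below is an assumption of this sketch, not prescribed by the disclosure.

import numpy as np

def annotate_scattered_points(image, uv_points):
    """Return the integer pixel positions on the 2D clothing image that
    correspond to the model's scattered points, given normalized (u, v)
    texture-plane coordinates in [0, 1]."""
    h, w = image.shape[:2]
    uv = np.asarray(uv_points, dtype=float)
    cols = np.clip(np.rint(uv[:, 0] * (w - 1)).astype(int), 0, w - 1)
    rows = np.clip(np.rint(uv[:, 1] * (h - 1)).astype(int), 0, h - 1)
    return np.stack([rows, cols], axis=1)   # (row, col) per scattered point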
Step 204: generate a three-dimensional clothing image of the acquired two-dimensional clothing image based on the selected three-dimensional clothing model and the annotation result.
In this embodiment, from the annotation result of step 203, the execution body may determine the color values, grayscale values, and so on at the annotated scattered points of the two-dimensional clothing image. Through the pre-established coordinate mapping relationship between two-dimensional clothing images and three-dimensional clothing models, these color and grayscale values are assigned to the corresponding annotated scattered points of the selected three-dimensional clothing model. Existing interpolation algorithms are then used to interpolate at the scattered points to obtain the color and grayscale values between them. The three-dimensional clothing image of the acquired two-dimensional clothing image is thus generated from the color and grayscale values at each point of the three-dimensional clothing model.
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the image processing method according to this embodiment. In the application scenario of Fig. 3, after acquiring a two-dimensional clothing image carrying the style identifier "shirt", the server 301 may select the three-dimensional clothing model 302 matching "shirt" from the pre-established set of three-dimensional clothing models. Here, the three-dimensional clothing model 302 is a mesh three-dimensional model without a texture map and further includes annotated scattered points. The server 301 may then annotate the acquired two-dimensional clothing image with scattered points according to the pre-established coordinate mapping relationship between the three-dimensional clothing model 302 and the acquired two-dimensional clothing image and according to the scattered points of the three-dimensional clothing model 302. Reference numeral 303 denotes the two-dimensional clothing image of the "shirt" annotated with scattered points. Finally, the three-dimensional clothing image 304 of the acquired two-dimensional clothing image is generated from the scattered point annotation result of the "shirt" two-dimensional clothing image and the three-dimensional clothing model 302.
The method provided by the above embodiment of the present application acquires a two-dimensional clothing image, selects from a pre-established set of three-dimensional clothing models a three-dimensional clothing model matching the style identifier of the acquired two-dimensional clothing image, annotates the acquired two-dimensional clothing image with texture coordinate points based on the coordinate mapping relationship between two-dimensional clothing images and three-dimensional clothing models, and finally generates a three-dimensional clothing image from the annotation result and the selected three-dimensional clothing model, thereby increasing both the speed of generating the three-dimensional clothing image and the accuracy of the generated three-dimensional clothing image.
With continued reference to Fig. 4, a flow 400 of another embodiment of the image processing method according to the present application is shown. The image processing method includes the following steps:
Step 401: acquire a two-dimensional clothing image.
In this embodiment, the execution body of the image processing method (for example, the server 105 shown in Fig. 1) may acquire a two-dimensional clothing image from a terminal device (for example, the terminal devices 101, 102, 103 shown in Fig. 1) through a wired or wireless connection. Here, the two-dimensional clothing image may also be acquired locally by the execution body. The two-dimensional clothing image may include a top image, a trousers image, a T-shirt image, and so on. Here, clothing of the same style but different colors may be assigned the same style identifier in advance. When acquiring the two-dimensional clothing image, the execution body may further acquire the style identifier of the clothing presented by the two-dimensional clothing image. The style identifier may include text describing the clothing style, and may also include digits, letters, character strings, and the like.
Step 402: select, from the pre-established set of three-dimensional clothing models, a three-dimensional clothing model matching the style identifier.
In this embodiment, the execution body may establish the set of three-dimensional clothing models in advance. Three-dimensional clothing models of different styles may be provided in the set. Here, the three-dimensional clothing model is a mesh-like three-dimensional clothing model created based on three-dimensional reconstruction technology, so that clothing of the same style but different textures can be represented by the same three-dimensional clothing model. Each three-dimensional clothing model may be provided with a style identifier, through which the execution body may select from the set the three-dimensional clothing model matching the clothing presented by the acquired clothing image. Here, the style identifier may include text describing the clothing style, and may also include digits, letters, character strings, and the like, represented in the same way as the style identifier of the acquired two-dimensional clothing image. A three-dimensional clothing model in the set may further include annotated scattered points. Here, the scattered point information of a scattered point may include, for example, object coordinate information; object coordinates are usually coordinates whose origin is the center of the object.
Step 403: annotate the acquired two-dimensional clothing image with scattered points based on the pre-established coordinate mapping relationship between two-dimensional clothing images and three-dimensional clothing models and on the scattered points of the selected three-dimensional clothing model.
In this embodiment, the execution body may establish the coordinate mapping relationship between two-dimensional clothing images and three-dimensional clothing models in advance, and may annotate the acquired two-dimensional clothing image with scattered points according to this pre-established coordinate mapping relationship. Here, the information of an annotated scattered point of the two-dimensional clothing image may include object coordinate information, texture information, and so on; the object coordinate information of a scattered point of the two-dimensional clothing image is the object coordinate information of the corresponding annotated scattered point of the selected three-dimensional clothing model. Texture usually refers to the color on an object, and may also refer to the roughness of the object's surface; it is usually embodied by color values, and each color value is called a texture element, or texel. Usually, every texel has a unique address in the texture, which can be thought of as a column and row value, denoted by U and V respectively. A texture coordinate is the coordinate obtained by mapping a texel's address into the object coordinate system; through texture coordinate information, texture processing can be applied to an object model. Texture information may include texture coordinate information, texture color information, and so on. Here, the texture information of the scattered points of the two-dimensional clothing image may include texel information and the texture coordinate information mapped to the scattered points of the selected three-dimensional clothing model.
Step 404: perform coordinate transformation on the scattered points of the acquired two-dimensional clothing image, and determine coordinate information of the transformed scattered points.
From the annotation result of step 403 for the scattered points of the two-dimensional clothing image, the execution body may perform coordinate transformation on the scattered points of the acquired two-dimensional clothing image. Here, the coordinate transformation may include, for example, mapping the scattered points from the object coordinate system to the world coordinate system to obtain their world coordinates, and then converting them from the world coordinate system to the screen coordinate system, so that the three-dimensional clothing model can be displayed on screen. The coordinate transformation may further include mapping the texture coordinates of the scattered points to screen coordinates. The coordinate information of the transformed scattered points is thus determined from the coordinate transformations. It is worth noting that these coordinate transformation methods are well-known existing techniques and are not repeated here; a minimal sketch follows.
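The transform chain of step 404 is the standard object-to-world-to-screen pipeline. The numpy sketch below assumes column-vector homogeneous coordinates and caller-supplied 4 x 4 model, view, and projection matrices; it is an illustration, not the prescribed implementation.

import numpy as np

def to_screen(points_obj, model_mat, view_mat, proj_mat, width, height):
    """Object coordinates -> world -> clip -> screen.

    points_obj: N x 3 array of object-space scattered points;
    model_mat/view_mat/proj_mat: 4 x 4 homogeneous transform matrices."""
    pts = np.asarray(points_obj, dtype=float)
    p = np.hstack([pts, np.ones((len(pts), 1))])        # N x 4 homogeneous
    clip = (proj_mat @ view_mat @ model_mat @ p.T).T    # object->world->clip
    ndc = clip[:, :3] / clip[:, 3:4]                    # perspective divide
    x = (ndc[:, 0] + 1.0) * 0.5 * width                 # viewport transform
    y = (1.0 - ndc[:, 1]) * 0.5 * height                # flip y for screen
    return np.stack([x, y], axis=1)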
Step 405: generate primitives having a preset shape based on the coordinate information of the transformed scattered points.
In this embodiment, from the coordinate information of the transformed scattered points determined in step 404, the execution body may take the transformed scattered points as vertices and connect a preset number of adjacent scattered points to form primitives of a preset shape. Here, the preset shape may include, for example, triangles, quadrilaterals, and polygons. One primitive of the preset shape may be generated, or multiple such primitives may be generated. Each primitive further includes the connection relationships between the transformed scattered points, for example the number of other scattered points connected to each scattered point, and the relative coordinate information between each scattered point and the other scattered points connected to it.
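When triangles are used as the preset shape, one way to connect adjacent transformed scattered points into primitives, an assumption of this sketch rather than the only option, is a Delaunay triangulation, whose index triples also encode the connection relationships between the points:

import numpy as np
from scipy.spatial import Delaunay

def build_triangle_primitives(screen_points):
    """Connect transformed scattered points into triangle primitives.

    Returns an M x 3 array of vertex indices; each row is one triangle,
    and the shared indices encode the connection relationships."""
    tri = Delaunay(np.asarray(screen_points, dtype=float))
    return tri.simplices

Each row of the returned array lists the three vertex indices of one triangle primitive, so adjacent primitives share edges through shared indices.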
Step 406: rasterize the primitives to obtain a fragment set of the primitives.
In this embodiment, for the primitives determined in step 405, the execution body may rasterize the primitives to obtain their fragment set. Here, rasterization usually includes interpolating between the scattered points in a primitive to obtain multiple interpolated points and their interpolated point information; each interpolated point together with its information may be called a fragment. The interpolated point information may include, for example, color information and texture coordinate information.
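The rasterization described above can be illustrated by interpolating per-vertex attributes (for example, color values and texture coordinates) at the pixel centers covered by one triangle, each interpolated point being one fragment. This is a teaching sketch assuming counter-clockwise vertex winding, not an optimized rasterizer.

import numpy as np

def rasterize_triangle(v, attrs):
    """v: three (x, y) screen coordinates; attrs: three equal-length
    attribute vectors (e.g. RGB color followed by texture coords).
    Yields (x, y, attr) fragments for pixels inside the triangle."""
    (x0, y0), (x1, y1), (x2, y2) = v
    a0, a1, a2 = (np.asarray(a, dtype=float) for a in attrs)
    area = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)
    if area == 0:                       # degenerate triangle: no pixels
        return
    for y in range(int(min(y0, y1, y2)), int(max(y0, y1, y2)) + 1):
        for x in range(int(min(x0, x1, x2)), int(max(x0, x1, x2)) + 1):
            # Barycentric weights of the pixel inside the triangle.
            w0 = ((x1 - x) * (y2 - y) - (x2 - x) * (y1 - y)) / area
            w1 = ((x2 - x) * (y0 - y) - (x0 - x) * (y2 - y)) / area
            w2 = 1.0 - w0 - w1
            if w0 >= 0 and w1 >= 0 and w2 >= 0:
                yield x, y, w0 * a0 + w1 * a1 + w2 * a2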
Step 407: perform texture coordinate mapping on the fragment set to obtain the pixels at the texture coordinate points of the selected three-dimensional clothing model.
In this embodiment, from the fragment set determined in step 406, the execution body may determine the color values at each point of the three-dimensional clothing model according to the texture coordinate information and color information of each fragment. The execution body may thus shade the selected three-dimensional clothing model and obtain the pixels at each point on the three-dimensional clothing model. Here, the points of the three-dimensional clothing model include both the annotated scattered points and the interpolated points obtained by interpolation.
Step 408: generate a three-dimensional clothing image based on the obtained pixels.
In this embodiment, from the pixels at each point of the three-dimensional clothing model determined in step 407, the execution body may render the three-dimensional clothing model, thereby generating a three-dimensional clothing image having the texture of the acquired two-dimensional clothing image.
In some optional implementations of this embodiment, a fragment in the fragment set may further include texture material information. Since the texture material of the clothing presented by the two-dimensional clothing image is rough, ambient light (for example, sunlight) striking the clothing surface produces diffuse reflection. The execution body may determine a texture material coefficient from the texture material information of the fragments in the fragment set, and may then simulate, for each pixel of the three-dimensional clothing model, the diffuse intensity of the ambient light cast onto the model; the diffuse intensity of ambient light is usually the product of the texture material coefficient and the ambient light intensity. The execution body may also determine the position of the virtual light source placed in the three-dimensional scene relative to each pixel of the three-dimensional clothing model from the light source's coordinates in the screen coordinate system, and may then determine the diffuse intensity of directional light at each pixel according to the Lambert lighting model. The Lambert lighting model states that the intensity of diffusely reflected light is proportional only to the cosine of the angle between the direction of the incident light and the surface normal at the reflection point. The diffuse intensity of directional light at a pixel can therefore be obtained from the intensity of the light source, the angle between the light source direction and the normal at the pixel, and the reflection coefficient of the texture material at that pixel. Finally, the sum of the ambient diffuse intensity and the directional diffuse intensity at a pixel is taken as that pixel's illumination intensity information. The execution body then processes the color values of the obtained pixels according to the illumination intensity information at each pixel and the color information of the light source. Here, the processing may include changing the color values that the pixels of the three-dimensional clothing model had before the light source was added; for example, the color value at each pixel may be computed as a weighted product of the light source's color value, the illumination intensity value, and the pixel's pre-lighting color value. Finally, the execution body may generate the three-dimensional clothing image from the processed pixels.
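A minimal per-pixel sketch of the lighting computation just described, combining an ambient diffuse term with the Lambert directional term proportional to max(0, cos theta); the material coefficient, ambient weight, and combination scheme are illustrative assumptions:

import numpy as np

def lambert_shade(base_color, light_color, normal, point, light_pos,
                  k_material=0.8, ambient=0.2):
    """Lit color of one pixel: ambient diffuse term plus the Lambert
    directional term, proportional to max(0, cos(theta)) between the
    incident light direction and the surface normal at the pixel."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    l = np.asarray(light_pos, dtype=float) - np.asarray(point, dtype=float)
    l = l / np.linalg.norm(l)
    cos_theta = max(0.0, float(n @ l))            # clamp back-facing light
    intensity = ambient + k_material * cos_theta  # total illumination
    lit = np.asarray(base_color, dtype=float) * np.asarray(light_color, dtype=float)
    return np.clip(lit * intensity, 0.0, 1.0)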
In some optional implementations of this embodiment, the execution body may also smooth the texture of the three-dimensional clothing image.
Step 409: receive body shape information.
In this embodiment, the execution body may also receive body shape information. Here, the body shape information may be measurements of various parts of the body sent by the user through a terminal, such as waist circumference, shoulder width, and chest circumference, or body proportion information selected by the user through the terminal.
Step 410: select, from the preset set of virtual three-dimensional mannequins, a virtual three-dimensional mannequin matching the body shape information.
In this embodiment, from the body shape information received in step 409, the execution body may compare the measurement data in the body shape information with the body measurement data of the preset virtual three-dimensional mannequins in the preset set and, according to the comparison result, select a preset virtual three-dimensional mannequin whose measurement deviation is less than a preset threshold as the virtual three-dimensional mannequin matching the body shape information.
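The matching of step 410 can be read as a nearest-neighbour search over measurement vectors with an acceptance threshold. The measurement keys and the threshold value below are assumptions of this sketch:

import numpy as np

def match_mannequin(body, mannequins, threshold=5.0):
    """body: dict of measurements in cm (e.g. waist, shoulder, chest);
    mannequins: iterable of (id, measurements dict) pairs.
    Returns the id of the closest mannequin if its largest per-key
    deviation is within the threshold, else None."""
    keys = sorted(body)
    target = np.array([body[k] for k in keys], dtype=float)
    best_id, best_err = None, float("inf")
    for mid, meas in mannequins:
        vec = np.array([meas[k] for k in keys], dtype=float)
        err = np.abs(vec - target).max()     # worst per-measurement gap
        if err < best_err:
            best_id, best_err = mid, err
    return best_id if best_err <= threshold else None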
Step 411: place the three-dimensional clothing image on the selected virtual three-dimensional mannequin and present it, based on the preset coordinate mapping relationship between virtual three-dimensional mannequins and three-dimensional clothing models.
In this embodiment, for the virtual three-dimensional mannequin selected in step 410, the execution body may place the three-dimensional clothing image on the selected virtual three-dimensional mannequin according to the preset coordinate mapping relationship between virtual three-dimensional mannequins and three-dimensional clothing models. Here, this preset coordinate mapping relationship may be the coordinate mapping relationship between the virtual three-dimensional mannequin and the three-dimensional clothing model in the screen coordinate system. Each point of the three-dimensional clothing image is thus mapped onto the preset virtual three-dimensional mannequin, and the three-dimensional clothing image is presented through the virtual three-dimensional mannequin.
As can be seen from Fig. 4, unlike the embodiment shown in Fig. 2, this embodiment discusses the generation of the three-dimensional clothing image in greater detail, so that the texture of the acquired two-dimensional clothing image can be applied to the selected three-dimensional model more accurately; at the same time, by presenting the three-dimensional clothing image on a preset virtual three-dimensional mannequin, this embodiment lets the user view the generated three-dimensional clothing image more intuitively, improving the visualization effect.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an image processing apparatus. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may specifically be applied in various electronic devices.
As shown in Fig. 5, the image processing apparatus 500 of this embodiment includes an acquisition unit 501, a selection unit 502, an annotation unit 503, and a generation unit 504. The acquisition unit 501 is configured to acquire a two-dimensional clothing image, the two-dimensional clothing image including a style identifier of the clothing. The selection unit 502 is configured to select, from a pre-established set of three-dimensional clothing models, a three-dimensional clothing model matching the style identifier, where the three-dimensional clothing model includes annotated scattered points. The annotation unit 503 is configured to annotate the acquired two-dimensional clothing image with scattered points based on the pre-established coordinate mapping relationship between two-dimensional clothing images and three-dimensional clothing models and on the scattered points of the selected three-dimensional clothing model. The generation unit 504 is configured to generate a three-dimensional clothing image of the acquired two-dimensional clothing image based on the selected three-dimensional clothing model and the annotation result.
In this embodiment, for the specific processing of the acquisition unit 501, selection unit 502, annotation unit 503, and generation unit 504 of the image processing apparatus 500 and the technical effects they bring, reference may be made to the descriptions of steps 201, 202, 203, and 204 in the embodiment corresponding to Fig. 2, which are not repeated here.
In some optional implementations of this embodiment, the generation unit 504 includes: a coordinate transformation subunit (not shown) configured to perform coordinate transformation on the scattered points of the acquired two-dimensional clothing image and determine coordinate information of the transformed scattered points; a primitive generation subunit (not shown) configured to generate, based on the transformed scattered points, primitives having a preset shape, a primitive including a preset number of transformed scattered points and the connection relationships between the scattered points; a processing subunit (not shown) configured to rasterize the primitives to obtain a fragment set of the primitives, a fragment in the fragment set including a color value and texture coordinate information; a texture coordinate mapping subunit (not shown) configured to perform texture coordinate mapping on the fragment set to obtain pixels of the selected three-dimensional clothing model; and a generation subunit (not shown) configured to generate the three-dimensional clothing image based on the obtained pixels.
In some optional implementations of this embodiment, a fragment in the fragment set further includes texture material information; and the generation subunit (not shown) is further configured to: determine illumination intensity information of the obtained pixels based on the texture material information and preset light source coordinate information; process the obtained pixels based on light source color information and the obtained illumination intensity information; and generate the three-dimensional clothing image based on the processed pixels.
In some optional implementations of this embodiment, the image processing apparatus 500 is further configured to smooth the texture of the three-dimensional clothing image.
In some optional implementations of this embodiment, the set of three-dimensional clothing models is established through the following steps: acquiring a set of two-dimensional sample clothing images, the set including two-dimensional sample clothing image sequences of at least one style, and for each style's two-dimensional sample clothing image sequence among the at least one style, performing the following steps: extracting feature points from the two-dimensional sample clothing image sequence; constructing a fundamental matrix based on the extracted feature points; and establishing a three-dimensional clothing model based on the constructed fundamental matrix and the calibration parameters of a pre-calibrated camera, where the camera is the camera that captured the two-dimensional sample clothing image sequence; and generating the set of three-dimensional clothing models based on the at least one established three-dimensional clothing model.
In some optional implementations of this embodiment, the image processing apparatus 500 is further configured to: receive body shape information; select, from a preset set of virtual three-dimensional mannequins, a virtual three-dimensional mannequin matching the body shape information; and place the three-dimensional clothing image on the selected virtual three-dimensional mannequin and present it, based on a preset coordinate mapping relationship between virtual three-dimensional mannequins and three-dimensional clothing models.
Referring now to Fig. 6, a schematic structural diagram of a computer system 600 adapted to implement the terminal device or server shown in Fig. 1 of embodiments of the present application is shown. The electronic device shown in Fig. 6 is merely an example and should not impose any limitation on the functions or scope of use of embodiments of the present application.
As shown in Fig. 6, the computer system 600 includes a central processing unit (CPU) 601, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage portion 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the system 600. The CPU 601, the ROM 602, and the RAM 603 are connected to one another through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a cathode-ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage portion 608 including a hard disk and the like; and a communication portion 609 including a network interface card such as a LAN card or a modem. The communication portion 609 performs communication processing via a network such as the Internet. A driver 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the driver 610 as needed, so that a computer program read from it is installed into the storage portion 608 as needed.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 609, and/or installed from the removable medium 611. When the computer program is executed by the central processing unit (CPU) 601, the above functions defined in the method of the present application are executed. It should be noted that the computer-readable medium of the present application may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example but not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, apparatus, or device. In the present application, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, capable of sending, propagating, or transmitting a program for use by or in combination with an instruction execution system, apparatus, or device. Program code contained on a computer-readable medium may be transmitted by any appropriate medium, including but not limited to: wireless, wire, optical cable, RF, or any suitable combination of the above.
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present application. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code, and the module, program segment, or portion of code contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in embodiments of the present application may be implemented by software or by hardware. The described units may also be provided in a processor; for example, a processor may be described as including an acquisition unit, a selection unit, an annotation unit, and a generation unit. The names of these units do not in some cases limit the units themselves; for example, the acquisition unit may also be described as "a unit for acquiring a two-dimensional clothing image".
In another aspect, the present application further provides a computer-readable medium, which may be included in the apparatus described in the above embodiments, or may exist alone without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: acquire a two-dimensional clothing image, the two-dimensional clothing image including a style identifier of the clothing; select, from a pre-established set of three-dimensional clothing models, a three-dimensional clothing model matching the style identifier, where the three-dimensional clothing model includes annotated scattered points; annotate the acquired two-dimensional clothing image with scattered points based on a pre-established coordinate mapping relationship between two-dimensional clothing images and three-dimensional clothing models and on the scattered points of the selected three-dimensional clothing model; and generate a three-dimensional clothing image of the acquired two-dimensional clothing image based on the selected three-dimensional clothing model and the annotation result.
The above description is only a preferred embodiment of the present application and an explanation of the technical principles employed. Those skilled in the art should understand that the scope of invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above inventive concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present application.

Claims (14)

  1. An image processing method, comprising:
    acquiring a two-dimensional clothing image, wherein the two-dimensional clothing image comprises a style identifier of clothing;
    selecting, from a pre-established set of three-dimensional clothing models, a three-dimensional clothing model matching the style identifier, wherein the three-dimensional clothing model comprises annotated scattered points;
    annotating the acquired two-dimensional clothing image with scattered points based on a pre-established coordinate mapping relationship between two-dimensional clothing images and three-dimensional clothing models and the scattered points of the selected three-dimensional clothing model; and
    generating a three-dimensional clothing image of the acquired two-dimensional clothing image based on the selected three-dimensional clothing model and an annotation result.
  2. The method according to claim 1, wherein the generating a three-dimensional clothing image of the acquired two-dimensional clothing image based on the selected three-dimensional clothing model and the annotation result comprises:
    performing coordinate transformation on the scattered points of the acquired two-dimensional clothing image, and determining coordinate information of the transformed scattered points;
    generating, based on the coordinate information of the transformed scattered points, primitives having a preset shape, wherein a primitive comprises a preset number of transformed scattered points and connection relationships between the scattered points;
    rasterizing the primitives to obtain a fragment set of the primitives, wherein a fragment in the fragment set comprises a color value and texture coordinate information;
    performing texture coordinate mapping on the fragment set to obtain pixels of the selected three-dimensional clothing model; and
    generating the three-dimensional clothing image based on the obtained pixels.
  3. The method according to claim 2, wherein a fragment in the fragment set further comprises texture material information; and
    the generating the three-dimensional clothing image based on the obtained pixels comprises:
    determining illumination intensity information of the obtained pixels based on the texture material information and preset light source coordinate information;
    processing the obtained pixels based on light source color information and the obtained illumination intensity information; and
    generating the three-dimensional clothing image based on the processed pixels.
  4. The method according to claim 2, wherein after the generating the three-dimensional clothing image based on the obtained pixels, the method further comprises:
    smoothing a texture of the three-dimensional clothing image.
  5. The method according to any one of claims 1-4, wherein the set of three-dimensional clothing models is established through the following steps:
    acquiring a set of two-dimensional sample clothing images, the set comprising two-dimensional sample clothing image sequences of at least one style, and for each style's two-dimensional sample clothing image sequence among the at least one style, performing the following steps: extracting feature points from the two-dimensional sample clothing image sequence; constructing a fundamental matrix based on the extracted feature points; and establishing a three-dimensional clothing model based on the constructed fundamental matrix and calibration parameters of a pre-calibrated camera, wherein the camera is the camera that captured the two-dimensional sample clothing image sequence; and
    generating the set of three-dimensional clothing models based on the at least one established three-dimensional clothing model.
  6. The method according to any one of claims 1-4, wherein after the generating a three-dimensional clothing image of the acquired two-dimensional clothing image, the method further comprises:
    receiving body shape information;
    selecting, from a preset set of virtual three-dimensional mannequins, a virtual three-dimensional mannequin matching the body shape information; and
    placing the three-dimensional clothing image on the selected virtual three-dimensional mannequin and presenting it, based on a preset coordinate mapping relationship between virtual three-dimensional mannequins and three-dimensional clothing models.
  7. An image processing apparatus, comprising:
    an acquisition unit configured to acquire a two-dimensional clothing image, wherein the two-dimensional clothing image comprises a style identifier of clothing;
    a selection unit configured to select, from a pre-established set of three-dimensional clothing models, a three-dimensional clothing model matching the style identifier, wherein the three-dimensional clothing model comprises annotated scattered points;
    an annotation unit configured to annotate the acquired two-dimensional clothing image with scattered points based on a pre-established coordinate mapping relationship between two-dimensional clothing images and three-dimensional clothing models and the scattered points of the selected three-dimensional clothing model; and
    a generation unit configured to generate a three-dimensional clothing image of the acquired two-dimensional clothing image based on the selected three-dimensional clothing model and an annotation result.
  8. The apparatus according to claim 7, wherein the generation unit comprises:
    a coordinate transformation subunit configured to perform coordinate transformation on the scattered points of the acquired two-dimensional clothing image and determine coordinate information of the transformed scattered points;
    a primitive generation subunit configured to generate, based on the transformed scattered points, primitives having a preset shape, wherein a primitive comprises a preset number of transformed scattered points and connection relationships between the scattered points;
    a processing subunit configured to rasterize the primitives to obtain a fragment set of the primitives, wherein a fragment in the fragment set comprises a color value and texture coordinate information;
    a texture coordinate mapping subunit configured to perform texture coordinate mapping on the fragment set to obtain pixels of the selected three-dimensional clothing model; and
    a generation subunit configured to generate the three-dimensional clothing image based on the obtained pixels.
  9. The apparatus according to claim 8, wherein a fragment in the fragment set further comprises texture material information; and
    the generation subunit is further configured to:
    determine illumination intensity information of the obtained pixels based on the texture material information and preset light source coordinate information;
    process the obtained pixels based on light source color information and the obtained illumination intensity information; and
    generate the three-dimensional clothing image based on the processed pixels.
  10. The apparatus according to claim 8, wherein the image processing apparatus is further configured to:
    smooth a texture of the three-dimensional clothing image.
  11. The apparatus according to any one of claims 7-10, wherein the set of three-dimensional clothing models is established through the following steps:
    acquiring a set of two-dimensional sample clothing images, the set comprising two-dimensional sample clothing image sequences of at least one style, and for each style's two-dimensional sample clothing image sequence among the at least one style, performing the following steps: extracting feature points from the two-dimensional sample clothing image sequence; constructing a fundamental matrix based on the extracted feature points; and establishing a three-dimensional clothing model based on the constructed fundamental matrix and calibration parameters of a pre-calibrated camera, wherein the camera is the camera that captured the two-dimensional sample clothing image sequence; and
    generating the set of three-dimensional clothing models based on the at least one established three-dimensional clothing model.
  12. The apparatus according to any one of claims 7-10, wherein the image processing apparatus is further configured to:
    receive body shape information;
    select, from a preset set of virtual three-dimensional mannequins, a virtual three-dimensional mannequin matching the body shape information; and
    place the three-dimensional clothing image on the selected virtual three-dimensional mannequin and present it, based on a preset coordinate mapping relationship between virtual three-dimensional mannequins and three-dimensional clothing models.
  13. A server, comprising:
    one or more processors; and
    a storage apparatus storing one or more programs, wherein
    the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-6.
  14. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1-6.