CN114140570A - Method and device for obtaining texture map - Google Patents

Method and device for obtaining texture map

Info

Publication number
CN114140570A
CN114140570A
Authority
CN
China
Prior art keywords
texture
image
component
furniture
component object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111148398.XA
Other languages
Chinese (zh)
Inventor
唐忠樑
徐宇航
许志强
唐立军
孙大运
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Meiping Meiwu Shanghai Technology Co ltd
Original Assignee
Meiping Meiwu Shanghai Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Meiping Meiwu Shanghai Technology Co ltd filed Critical Meiping Meiwu Shanghai Technology Co ltd
Priority to CN202111148398.XA priority Critical patent/CN114140570A/en
Publication of CN114140570A publication Critical patent/CN114140570A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/04: Texture mapping
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/06: Ray-tracing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/90: Determination of colour characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Graphics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Generation (AREA)

Abstract

The application provides a method and a device for obtaining a texture map. In the method, for a component object within a subject object in an image, a first material characteristic of the texture material of the component object is acquired; from a plurality of preset texture maps, which correspond to a plurality of texture materials, at least one target texture map is screened whose presented texture material has a second material characteristic matching the first material characteristic; and a texture map corresponding to the texture material of the component object is determined from the screened at least one target texture map. Through the method and the device, the efficiency and quality of obtaining the texture map corresponding to the texture material of the component object can be improved, and labor cost can be reduced.

Description

Method and device for obtaining texture map
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for obtaining a texture map.
Background
In home furnishing scenarios, there is a need to generate a three-dimensional virtual model of a furniture object from a two-dimensional image containing that object, so that prospective buyers can examine the furniture comprehensively through the model, improving its promotion effect and the like.
In the case where a user (e.g., a seller) needs a three-dimensional virtual model of a furniture object, the user may manually capture an image containing the furniture object, input the captured image into an electronic device, and control the electronic device to generate the three-dimensional virtual model of the furniture object from the image.
To make the texture material presented on the outer surface of the generated three-dimensional virtual model match, as closely as possible, the texture material actually present on the furniture object's outer surface, the electronic device may obtain a texture map of the furniture object's texture material from the image, generate a three-dimensional white model of the furniture object from the image, and map the texture map corresponding to the furniture object's texture material onto the white model to obtain the three-dimensional virtual model.
However, the quality of the obtained texture map is limited by the quality of the image taken by the user: if that image is of low quality (e.g., low resolution, blurred, angularly distorted, or unevenly lit), the texture map generated for the furniture object's texture material is also of low quality, e.g., distorted.
Since the photographic skill of ordinary users is generally non-professional and uneven, the situation in which a low-quality user-captured image of the furniture object yields a low-quality texture map is likely to occur, and in turn the generated three-dimensional virtual model of the furniture object is likely to be of low quality.
Disclosure of Invention
To solve the above technical problem, the present application provides a method and an apparatus for obtaining a texture map.
In a first aspect, the present application provides a method of obtaining a texture map, the method comprising: for a component object within a subject object in an image, acquiring a first material characteristic of the texture material of the component object; screening, from a plurality of preset texture maps corresponding to a plurality of texture materials, at least one target texture map whose presented texture material has a second material characteristic matching the first material characteristic; and determining, from the screened at least one target texture map, a texture map corresponding to the texture material of the component object.
In a second aspect, the present application provides a method of generating a three-dimensional virtual model of a furniture object, the method comprising: acquiring a captured target image, the target image containing the furniture object and the furniture object comprising at least one component object; acquiring a furniture image of the furniture object from the target image; acquiring a component image of the component object from the furniture image; acquiring, from the component image, a first material characteristic of the texture material of the component object; screening, from a plurality of preset texture maps corresponding to a plurality of texture materials, at least one target texture map whose presented texture material has a second material characteristic matching the first material characteristic; determining, from the screened at least one target texture map, a texture map corresponding to the texture material of the component object; and generating a three-dimensional virtual model of the furniture object according to the texture map.
In a third aspect, the present application provides an apparatus for obtaining a texture map, the apparatus comprising: a first acquisition module, configured to acquire, for a component object within a subject object in an image, a first material characteristic of the texture material of the component object; a first screening module, configured to screen, from a plurality of preset texture maps corresponding to a plurality of texture materials, at least one target texture map whose presented texture material has a second material characteristic matching the first material characteristic; and a first determining module, configured to determine, from the screened at least one target texture map, the texture map corresponding to the texture material of the component object.
In a fourth aspect, the present application provides an apparatus for generating a three-dimensional virtual model of a furniture object, the apparatus comprising: a third acquisition module, configured to acquire a captured target image, the target image containing the furniture object and the furniture object comprising at least one component object; a fourth acquisition module, configured to acquire a furniture image of the furniture object from the target image; a fifth acquisition module, configured to acquire a component image of the component object from the furniture image; a sixth acquisition module, configured to acquire, from the component image, a first material characteristic of the texture material of the component object; a second screening module, configured to screen, from a plurality of preset texture maps corresponding to a plurality of texture materials, at least one target texture map whose presented texture material has a second material characteristic matching the first material characteristic; a second determining module, configured to determine, from the screened at least one target texture map, the texture map corresponding to the texture material of the component object; and a generating module, configured to generate a three-dimensional virtual model of the furniture object according to the texture map.
In a fifth aspect, the present application provides an electronic device, comprising: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to perform the method of any of the preceding aspects.
In a sixth aspect, the present application provides a non-transitory computer-readable storage medium whose instructions, when executed by a processor of an electronic device, enable the electronic device to perform the method of any of the preceding aspects.
In a seventh aspect, the present application provides a computer program product whose instructions, when executed by a processor of an electronic device, enable the electronic device to perform the method of any of the preceding aspects.
Compared with the prior art, the method has the following advantages:
in the present application, for a component object within a subject object in an image, a first material characteristic of the texture material of the component object is acquired, and at least one target texture map whose presented texture material has a second material characteristic matching that first material characteristic is screened from a plurality of preset texture maps, the plurality of preset texture maps corresponding to a plurality of texture materials. A texture map corresponding to the texture material of the component object is then determined from the screened at least one target texture map.
On one hand, the process of obtaining the texture map corresponding to the texture material of the component object can proceed without human participation, which reduces labor cost and avoids the errors that human operators, owing to ordinary human limitations, inevitably introduce when taking part in that process.
On the other hand, the texture map corresponding to the texture material of the component object need not be generated in real time from the user-supplied image containing the subject object. Instead, the preset texture map matching that texture material is screened from the plurality of preset texture maps, for example the one whose presented material characteristic is most similar to the first material characteristic of the component object's texture material. Since the photographic skill of ordinary users is generally non-professional and uneven, generating the texture map directly from a user-captured image easily produces a low-quality texture map and, in turn, a low-quality three-dimensional virtual model of the furniture object; screening from preset texture maps avoids both outcomes.
Secondly, the preset texture maps of the present application may be generated from acquired high-quality (e.g., high-definition) images, for example images captured by professionals. Such images are typically of high resolution, sharp, free of angular distortion, and evenly lit, and the texture materials they present have high fineness and fidelity, so the preset texture maps generated from them are likewise of high quality.
Thus, the low texture-map quality that can result from generating the texture map directly from the user-supplied image containing the subject object is avoided, and the quality of the obtained texture map corresponding to the texture material of the component object in the subject object is improved.
In another aspect, the present application obtains texture maps at the granularity of the component objects within the subject object rather than at the granularity of the whole subject object, which is coarse and yields low-quality texture maps. Because the granularity here is finer, the quality (e.g., accuracy) of the generated three-dimensional virtual model of the subject object can be improved.
Drawings
Fig. 1 is a schematic view of a scenario of the present application.
FIG. 2 is a flow chart of the steps of a method of obtaining a texture map of the present application.
FIG. 3 is a flow chart of the steps of a method of obtaining material characteristics according to the present application.
FIG. 4 is a flow chart of the steps of a method of screening texture maps of the present application.
FIG. 5 is a flow chart of steps of a method of obtaining a texture map of the present application.
FIG. 6 is a flow chart of the steps of a method of generating a three-dimensional virtual model of a furniture object of the present application.
FIG. 7 is a block diagram of an apparatus for obtaining texture maps according to the present application.
Fig. 8 is a block diagram of an apparatus for generating a three-dimensional virtual model of a furniture object according to the present application.
Fig. 9 is a block diagram of a device of the present application.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, the present application is described in further detail with reference to the accompanying drawings and the detailed description.
The solution of the present application is illustrated schematically with reference to fig. 1. The scheme may be applied to an electronic device, which may include a terminal or a server; the terminal may include a mobile phone, a tablet computer, a notebook computer, a desktop computer, or the like.
In a case where the texture map corresponding to the texture material of a component object within a subject object needs to be acquired, a target image containing that component object may be acquired. A subject image of the subject object is then obtained from the target image, a component image of the component object is obtained from the subject image, and the diffuse reflection layer of the target image is obtained. Using the component image, the portion of that diffuse reflection layer belonging to the component object is extracted, and the material characteristics of the component object's texture material are acquired from it. Finally, matching target texture maps are screened from the plurality of preset texture maps according to those material characteristics.
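The stages above can be sketched as a small matching pipeline. This is a minimal illustration, not the disclosed implementation: all helper names are hypothetical, the "material characteristic" is reduced to a mean color, and matching uses a simple distance threshold in place of whatever similarity measure an actual embodiment would use.

```python
# Minimal sketch of the pipeline: diffuse patch -> feature -> screened maps.
# Real systems would use segmentation and intrinsic-decomposition models
# in place of these stubs; the helper names here are invented.

def extract_material_feature(diffuse_patch):
    """Reduce a component's diffuse-reflection patch (HxWx3 nested lists)
    to a toy feature vector: the per-channel mean color."""
    pixels = [px for row in diffuse_patch for px in row]
    n = len(pixels)
    return tuple(sum(px[c] for px in pixels) / n for c in range(3))

def screen_texture_maps(first_feature, preset_features, threshold=30.0):
    """Return ids of preset texture maps whose (second) material feature
    lies within `threshold` Euclidean distance of the first feature."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return [map_id for map_id, feat in preset_features.items()
            if dist(first_feature, feat) <= threshold]

# Toy usage: a uniformly red component patch against three preset maps.
patch = [[(200, 30, 30)] * 4 for _ in range(4)]
presets = {"oak": (180, 140, 90),
           "red_lacquer": (205, 25, 35),
           "steel": (120, 120, 125)}
candidates = screen_texture_maps(extract_material_feature(patch), presets)
```

Only the red-lacquer preset survives the screening here, since its feature lies close to the patch's mean color while the oak and steel presets do not.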
On one hand, the process of obtaining the texture map corresponding to the texture material of the component object can proceed without human participation, which reduces labor cost and avoids the errors that human operators, owing to ordinary human limitations, inevitably introduce when taking part in that process.
On the other hand, the texture map corresponding to the texture material of the component object need not be generated in real time from the user-supplied image containing the subject object. Instead, the preset texture map matching that texture material is screened from the plurality of preset texture maps, for example the one whose presented material characteristic is most similar to the first material characteristic of the component object's texture material. Since the photographic skill of ordinary users is generally non-professional and uneven, generating the texture map directly from a user-captured image easily produces a low-quality texture map and, in turn, a low-quality three-dimensional virtual model of the furniture object; screening from preset texture maps avoids both outcomes.
Secondly, the preset texture maps of the present application may be generated from acquired high-quality (e.g., high-definition) images, for example images captured by professionals. Such images are typically of high resolution, sharp, free of angular distortion, and evenly lit, and the texture materials they present have high fineness and fidelity, so the preset texture maps generated from them are likewise of high quality.
Thus, the low texture-map quality that can result from generating the texture map directly from the user-supplied image containing the subject object is avoided, and the quality of the obtained texture map corresponding to the texture material of the component object in the subject object is improved.
In another aspect, the present application obtains texture maps at the granularity of the component objects within the subject object rather than at the granularity of the whole subject object, which is coarse and yields low-quality texture maps. Because the granularity here is finer, the quality (e.g., accuracy) of the generated three-dimensional virtual model of the subject object can be improved.
Referring to fig. 2, a flowchart illustrating steps of a method for obtaining a texture map according to the present application is shown, which may specifically include the following steps:
in step S101, for a component object in a subject object in an image, a first material characteristic of a texture material of the component object is acquired.
In the present application, the image may be a two-dimensional image, for example a two-dimensional RGB (Red Green Blue) image; it may also be a PNG (Portable Network Graphics) or GIF (Graphics Interchange Format) image.
The image includes at least one subject object, which may be, for example, a three-dimensional object. In one example, the subject object is a furniture object, such as a stool, table, bedside table, bed, television cabinet, or wardrobe object. Of course, the present application is not limited thereto, and the subject object may also be another kind of object.
Each subject object includes at least one component object, and a subject object may include two or more. For example, where the subject object is a wardrobe object, the wardrobe includes component objects of different kinds, such as hardware objects, cabinet-door objects, side-panel objects, and chassis objects.
In one possible scenario, a user may need to obtain a three-dimensional virtual model of a subject object for display or promotion of the subject object, etc.
If the user needs a three-dimensional virtual model of the subject object, the user may capture an image containing the subject object, or download one from a network, and then input it into the electronic device. The electronic device generates the three-dimensional virtual model from the image: it obtains the texture map corresponding to the subject object's texture material from the image, generates a three-dimensional white model of the subject object, and maps the texture map onto the white model to obtain the three-dimensional virtual model.
Different component objects of the different subject objects on the market carry different texture materials on their outer surfaces; a texture material is characterized at least by the pattern of its texture and the color of its texture.
Therefore, to make the texture material presented on the outer surface of the generated three-dimensional virtual model match, as closely as possible, the texture material actually present on each component object of the subject object, the electronic device, after receiving the user-supplied image, needs to obtain the texture map corresponding to the texture material of each component object in the subject object. Then, when mapping texture onto the three-dimensional white model, the texture map corresponding to a component object's texture material is mapped at that component object's position on the white model.
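The per-component mapping described above can be pictured as a simple assignment from white-model components to chosen texture maps. The component names and map identifiers below are invented for the example; a real renderer would of course apply the maps geometrically rather than store a dictionary:

```python
# Hypothetical sketch: associate each component of the white model with
# the texture map chosen for its material.
def assemble_virtual_model(white_model_components, chosen_maps):
    """white_model_components: component names present on the 3D white
    model; chosen_maps: component name -> chosen texture map id.
    Returns the per-component texture assignment for the model."""
    return {comp: chosen_maps[comp] for comp in white_model_components}

model = assemble_virtual_model(
    ["cabinet_door", "side_panel", "hardware"],
    {"cabinet_door": "oak_grain_07",
     "side_panel": "oak_grain_07",
     "hardware": "brushed_steel_02"},
)
```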
To enable the electronic device to obtain the texture map corresponding to the texture material of a component object within a subject object in an image, in the present application the electronic device may acquire the first material characteristic of that texture material from the image. Step S102 is then performed.
For details, reference may be made to the embodiment shown in fig. 3 (the method of obtaining material characteristics), which is not repeated here.
Diffuse reflection is the phenomenon in which light striking an object's surface is scattered in all directions, and it therefore reveals physical characteristics of the surface such as its pattern and color. These physical characteristics can be described or represented by a diffuse reflection layer of the surface: the diffuse reflection layer of a component object represents the characteristics of that component's surface, and for the electronic device it may be represented by a matrix.
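As a rough illustration of how a diffuse reflection layer can be held as a matrix, the sketch below divides each pixel by its luminance so that shading variation is suppressed and the material's chromaticity remains. This is a crude stand-in for a real intrinsic-image decomposition, not the method of the disclosure:

```python
def approximate_diffuse_layer(image):
    """Very rough diffuse (albedo-like) estimate: divide each RGB pixel
    by its luminance, so shading is suppressed and the chromaticity of
    the material remains. `image` is HxWx3 nested lists, values 0-255."""
    out = []
    for row in image:
        out_row = []
        for r, g, b in row:
            # Rec. 601 luma weights; clamp to avoid division by zero.
            lum = max(1.0, 0.299 * r + 0.587 * g + 0.114 * b)
            out_row.append((r / lum, g / lum, b / lum))
        out.append(out_row)
    return out

# A bright and a shadowed pixel of the same red material end up with
# nearly identical chromaticity once shading is divided out.
img = [[(200, 40, 40), (100, 20, 20)]]
diffuse = approximate_diffuse_layer(img)
```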
In step S102, at least one target texture map, whose presented texture material has a second material characteristic matching the first material characteristic of the component object's texture material, is screened from the plurality of preset texture maps. The plurality of preset texture maps correspond to a plurality of texture materials.
Material characteristics represent the pattern, color, and so on of a texture material. For the electronic device, a material characteristic may be represented by a vector or a matrix; these are abstract representations, and the vector or matrix may be regarded as an encoding of the material characteristic, with different texture materials yielding different encodings.
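A hand-crafted example of such a vector encoding is sketched below: it packs a patch's mean intensity, variance, and mean horizontal gradient into a three-component vector, so that both tone and pattern (via gradient statistics) are represented. A practical system would likely use a learned embedding; this toy version only illustrates the idea of encoding material characteristics as vectors.

```python
def texture_feature_vector(gray_patch):
    """Encode a grayscale patch (HxW nested lists, values 0-255) as
    (mean intensity, intensity variance, mean horizontal gradient).
    A hand-crafted stand-in for a learned material encoding."""
    pixels = [p for row in gray_patch for p in row]
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    grads = [abs(row[i + 1] - row[i])
             for row in gray_patch for i in range(len(row) - 1)]
    grad_mean = sum(grads) / len(grads)
    return (mean, var, grad_mean)

# A flat patch and a striped patch get clearly different encodings:
# the stripes show up in the variance and gradient components.
flat = [[128] * 4 for _ in range(4)]
striped = [[0, 255, 0, 255] for _ in range(4)]
fv_flat = texture_feature_vector(flat)
fv_striped = texture_feature_vector(striped)
```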
The diffuse reflection layer represents the interaction of light with the physical characteristics of the material's surface, and intuitively reflects the material's color and texture.
In the present application, each preset texture map presents a different texture material, and accordingly a different material characteristic. If the feature similarity between the material characteristics of the texture materials presented by two texture maps is high, the two maps are said to match; if it is low, they do not match.
In this step, at least one target texture map may be screened from the plurality of preset texture maps according to the feature similarity between material characteristics; details are given in the embodiment shown in fig. 4 later and are not repeated here.
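One common way to realize screening by feature similarity (an illustrative choice; the text does not fix a particular similarity measure) is to rank the preset texture maps by cosine similarity between material-characteristic vectors and keep the top k:

```python
def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def screen_top_k(first_feature, preset_features, k=2):
    """Rank preset texture maps by similarity of their second material
    characteristic to the component's first material characteristic,
    and keep the k most similar map ids."""
    ranked = sorted(preset_features.items(),
                    key=lambda item: cosine_similarity(first_feature, item[1]),
                    reverse=True)
    return [map_id for map_id, _ in ranked[:k]]

# Invented feature vectors for three preset maps; the query feature is
# close to the two wood materials and far from the marble.
presets = {"walnut": (0.9, 0.1, 0.2),
           "pine": (0.8, 0.3, 0.1),
           "marble": (0.1, 0.9, 0.8)}
top = screen_top_k((0.85, 0.2, 0.15), presets, k=2)
```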
In step S103, a texture map corresponding to the texture material of the component object is determined from the at least one selected target texture map.
In one embodiment of the present application, where exactly one preset texture map is screened out in step S102, that map may be determined in step S103 as the texture map corresponding to the texture material of the component object.
In another embodiment of the present application, where two or more preset texture maps are screened out in step S102, one of them may be selected in step S103 as the texture map corresponding to the texture material of the component object.
For example, a user may participate in selecting one preset texture map from the two or more screened maps, the selected map serving as the texture map corresponding to the texture material of the component object; details are given in the embodiment shown in fig. 5 later and are not repeated here.
In the present application, for a component object within a subject object in an image, a first material characteristic of the texture material of the component object is acquired, and at least one target texture map whose presented texture material has a second material characteristic matching that first material characteristic is screened from a plurality of preset texture maps, the plurality of preset texture maps corresponding to a plurality of texture materials. A texture map corresponding to the texture material of the component object is then determined from the screened at least one target texture map.
On one hand, the process of obtaining the texture map corresponding to the texture material of the component object can be free from human participation, so that the labor cost can be reduced, and due to the physiological characteristics of people, people can inevitably make mistakes in the process of participating in the process of obtaining the texture map corresponding to the texture material of the component object.
On the other hand, a texture map corresponding to the texture material of the component object may be generated in real time without using an image including the subject object input by the user. Instead, among the plurality of preset texture maps, the preset texture map matched with the texture map corresponding to the texture material of the component object in the main object is screened, for example, the preset texture map in which the material characteristic of the presented texture material is very similar to the first material characteristic of the texture material of the component object in the main object is screened, so that the situation that "the quality of the texture map of the texture material of the furniture object generated due to the low quality of the image including the furniture object shot by the user is low" is easily generated due to the fact that the technology for shooting by the majority of users generally does not have a professional level and is uneven can be avoided, and the situation that the quality of the three-dimensional virtual model of the generated furniture object is low is further avoided.
Secondly, the preset texture maps of the present application may be generated according to the acquired high-quality (e.g., high definition, etc.) image, for example, the high-quality image may be an image taken by a professional, and the preset texture maps may be generated by using the image taken by the professional, where the high-quality image taken by the professional often does not have the problems of low resolution, blur, angle distortion, uneven illumination, and the like, and is usually a picture with high resolution, no blur, no angle distortion, and even illumination, and the like, and the texture material presented in the high-quality image taken by the professional has high fineness and high fidelity, so the preset texture map generated based on the high-quality (e.g., high definition, etc.) image may be the high-quality preset texture map.
Thus, the problem of low texture-map quality that may arise from "generating the texture map corresponding to the texture material of the component object in the subject object from the user-input image including the subject object" can be avoided, and the quality of the obtained texture map corresponding to the texture material of the component object in the subject object can be improved.
In another aspect, the present application obtains texture maps at the dimension (granularity) of a component object in the subject object, rather than at the dimension of the subject object as a whole, whose coarse granularity results in low-quality texture maps. Because the granularity of the texture map corresponding to the texture material of the component object is finer, the quality (e.g., accuracy) of the generated three-dimensional virtual model of the subject object can be improved.
In one embodiment of the present application, a component object includes a plurality of composite superimposed texture materials. In this case, in step S101, the first material characteristic of each texture material of the component object can be acquired from the image. Thus, in step S102, target texture maps whose presented second material characteristics respectively match each of the first material characteristics may be screened from the plurality of preset texture maps. Further, in step S103, the display level, on the component object, of each texture material included in the component object may be obtained, and the at least two screened target texture maps may be superimposed according to the display levels of their corresponding texture materials on the component object to obtain the texture map corresponding to the texture materials of the component object. This embodiment applies to situations in which the texture material of the component object is complex; even in such situations, the quality of the obtained texture map corresponding to the texture material of the component object in the subject object can still be improved.
The display levels of the texture materials differ as follows. Suppose the plurality of composite superimposed texture materials are displayed on a screen, and the direction perpendicular to the screen and pointing out of the screen is the positive direction of the Z axis. Different texture materials then have different coordinate values along the Z axis: the larger the Z-axis coordinate value of a texture material, the higher its display level, and the smaller the Z-axis coordinate value, the lower its display level.
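As a minimal, hypothetical sketch of the superimposition in step S103, the screened target texture maps can be blended back-to-front according to the Z-axis display level of their corresponding texture materials. The map names, Z values, and per-layer opacities below are illustrative, not from the patent.

```python
def superimpose_texture_maps(layers):
    """layers: list of (z_value, texture_map, alpha), where texture_map is a
    2D list of RGB tuples and alpha in [0, 1] is the layer's opacity.
    Layers with a larger Z value are drawn on top (higher display level)."""
    ordered = sorted(layers, key=lambda item: item[0])  # back-to-front
    h, w = len(ordered[0][1]), len(ordered[0][1][0])
    out = [[(0, 0, 0) for _ in range(w)] for _ in range(h)]
    for _, tex, alpha in ordered:
        for y in range(h):
            for x in range(w):
                out[y][x] = tuple(
                    round(alpha * t + (1 - alpha) * o)
                    for t, o in zip(tex[y][x], out[y][x]))
    return out

base = [[(200, 180, 150)]]   # 1x1 wood-grain layer, z = 0 (lower level)
gloss = [[(255, 255, 255)]]  # 1x1 varnish layer,    z = 1 (higher level)
composite = superimpose_texture_maps([(0, base, 1.0), (1, gloss, 0.5)])
```

The varnish layer, having the larger Z value, is blended over the wood-grain layer last.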
In one embodiment of the present application, referring to fig. 3, step S101 includes:
in step S201, a diffuse reflection layer of the component object is acquired from the image.
The diffuse reflection layer of the component object may be screened out of the diffuse reflection layer of the image. An example of a specific acquisition process may include:
2011. detecting position information of the component object in the image, and acquiring a diffuse reflection layer of the image.
In order to enable the electronic device to obtain the first material characteristic of the texture material of the component object according to the image, in the application, the electronic device may detect the position information of the component object in the image and obtain the diffuse reflection layer of the image. Then step S202 is performed.
In one embodiment of the present application, for detecting the position information of the component object in the image, a subject image of the subject object may be acquired from the image, and then the position of the component object in the image may be detected from the subject image of the subject object.
In another embodiment of the present application, in order to obtain the diffuse reflection layer of the image, a diffuse reflection layer obtaining model may be trained in advance, and then the diffuse reflection layer of the image is obtained based on the trained diffuse reflection layer obtaining model.
The diffuse reflection layer obtaining model may be obtained by training a sample data set including a "sample image and a labeled diffuse reflection layer of the sample image", and an example of a specific training process is given as follows, and may include:
11) Acquiring at least one sample data set, wherein the sample data set comprises a sample image and a labeled diffuse reflection layer of the sample image.
There may be a plurality of sample data sets, and the sample images included in different sample data sets may be different. A sample image in a sample data set may include a sample subject object, the sample subject object may include at least one sample component object, and a sample subject object may include a furniture object or the like.
The sample image may include a two-dimensional RGB image or the like.
12) And training the network parameters in the model by using the sample data set until the network parameters are converged to obtain the diffuse reflection layer acquisition model.
The model may include CNN (Convolutional Neural Networks), and the like, and of course, may also include other types of models, which are not limited in this application.
In an embodiment of the present application, a network structure of the diffuse reflection layer obtaining model at least includes: encoders and decoders, etc.
In the application, based on different actual requirements, the network structures of the diffuse reflection layer acquisition models can be different, and the diffuse reflection layer acquisition models with different network structures can be applied to different application scenes, that is, the network structures of the diffuse reflection layer acquisition models applicable to different application scenes are different.
In the training process, a sample image is input into the model so that the model processes it to obtain a predicted diffuse reflection layer of the sample image. The network parameters in the model are then adjusted by means of a loss function, based on the predicted diffuse reflection layer and the labeled diffuse reflection layer of the sample image, until the network parameters converge. Training is then complete, and the obtained diffuse reflection layer acquisition model can be used online.
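The predict/loss/adjust loop described above can be sketched in miniature. In this hypothetical example the "model" is a single scalar weight standing in for the encoder-decoder's network parameters, trained with a mean-squared-error loss to map sample image pixels to the labeled diffuse reflection values; all data and names are illustrative, not the patent's actual network.

```python
def train_diffuse_model(samples, lr=0.1, epochs=200):
    """samples: list of (image_pixels, labeled_diffuse_pixels).
    Returns the converged scalar parameter w of the toy model pred = w * x."""
    w = 0.0  # network parameter before training
    for _ in range(epochs):
        grad, n = 0.0, 0
        for image, label in samples:
            for x, y in zip(image, label):
                pred = w * x                # predicted diffuse reflection value
                grad += 2 * (pred - y) * x  # d(MSE)/dw for this pixel
                n += 1
        w -= lr * grad / n                  # adjust parameters via the loss
    return w

# Toy sample set: the "labeled" diffuse layer is 0.8 x the image intensity.
samples = [([0.2, 0.5, 1.0], [0.16, 0.40, 0.80])]
w = train_diffuse_model(samples)
```

The loop mirrors the patent's description: forward prediction, loss against the labeled layer, and parameter adjustment until convergence.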
Therefore, the image including the main object input by the user can be input into the diffuse reflection layer acquisition model obtained through training, so that the diffuse reflection layer acquisition model processes the image to obtain the diffuse reflection layer of the image, the diffuse reflection layer of the image is output, and the electronic equipment can acquire the diffuse reflection layer of the image output by the diffuse reflection layer acquisition model.
2012. Screening the diffuse reflection layer of the component object from the diffuse reflection layer of the image according to the position information of the component object in the image.
In this application, the size of the diffuse reflection layer of the image is the same as the size of the image, so that each pixel in the image corresponds to a portion of the diffuse reflection layer in the diffuse reflection layer of the image.
Therefore, the position information of the component object in the image can be understood as the position information of each pixel included in the component object. The diffuse reflection layer of the component object can thus be screened out of the diffuse reflection layer of the image according to that per-pixel position information.
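A hypothetical sketch of step 2012: because the diffuse reflection layer has the same size as the image, the component's per-pixel positions index directly into the layer. The function name and data below are illustrative.

```python
def screen_component_layer(diffuse_layer, component_pixels):
    """diffuse_layer: 2D list of values, one per image pixel.
    component_pixels: set of (row, col) positions belonging to the component.
    Returns a same-sized layer keeping only the component's pixels."""
    h, w = len(diffuse_layer), len(diffuse_layer[0])
    return [[diffuse_layer[r][c] if (r, c) in component_pixels else None
             for c in range(w)] for r in range(h)]

layer = [[0.1, 0.2], [0.3, 0.4]]
component = {(0, 1), (1, 1)}  # detected positions of the component's pixels
component_layer = screen_component_layer(layer, component)
```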
In step S202, a first material characteristic of a texture material of the component object is obtained according to the diffuse reflection layer of the component object.
The reference layer feature of the diffuse reflection layer of the component object may be expanded by using a global feature descriptor, and the first material characteristic of the texture material of the component object may then be acquired by using the obtained expanded layer feature. An example of a specific acquisition process is given as follows, and may include:
2021. Acquiring the reference layer feature of the diffuse reflection layer of the component object based on a feature extractor.
In the present application, the feature extractor includes ResNet (Residual Network), MobileNet (a lightweight convolutional neural network for mobile devices), or ShuffleNet (a lightweight convolutional neural network for mobile devices), each of which can quickly extract layer features.
In this application, the electronic device may input the diffuse reflection layer of the component object into the feature extractor, so that the feature extractor processes the diffuse reflection layer of the component object to obtain a reference layer feature of the diffuse reflection layer of the component object, and outputs the reference layer feature of the diffuse reflection layer of the component object.
The diffuse reflection layer of the component object may represent object characteristics, such as a style and a color, of the surface of the component object.
For electronic devices, the layer features of the diffuse reflection layer may be obtained by abstracting the diffuse reflection layer, and may be embodied in the form of a matrix or a vector, or the like. Layer characteristics can be understood in principle as encoding physical properties of the pattern or color embodied by the diffuse reflection layer.
The layer characteristics of different diffuse reflection layers are different, and therefore, vectors or matrixes used for representing the layer characteristics of the diffuse reflection layers are different.
The reference layer features may be embodied in a matrix or vector form, and the like.
2022. Expanding the reference layer feature of the diffuse reflection layer of the component object based on at least one global feature descriptor to obtain at least one expanded layer feature.
In this application, the global feature descriptor includes at least one of: a cascaded pyramid network, a pooling network, and the like.
The pooling network includes at least one of: a maximum pooling network, a minimum pooling network, a sum pooling network, an average pooling network, and the like.
In an embodiment, when there is one global feature descriptor, the reference layer feature of the diffuse reflection layer of the component object may be extended by using the global feature descriptor to obtain an extended layer feature.
In another embodiment, when there are two or more global feature descriptors, each of them may be used to expand the reference layer feature of the diffuse reflection layer of the component object, with each descriptor yielding one expanded layer feature. Two or more expanded layer features are thereby obtained.
In this embodiment, based on the reference layer feature extension of the diffuse reflection layer of the component object by the at least one global feature descriptor, extended layer features of different scales of the diffuse reflection layer of the component object can be obtained, and the comprehensiveness, accuracy, and the like of the first material feature of the diffuse reflection layer of the component object that is finally extracted can be improved.
2023. And acquiring a first material characteristic of the texture material of the component object at least according to the at least one expanded layer characteristic.
In an embodiment of the present application, if in step 2022 the reference layer feature is expanded based on one global feature descriptor to obtain one expanded layer feature, then in step 2023 the obtained expanded layer feature may be determined to be the first material characteristic of the texture material of the component object, or the expanded layer feature may be fused with the reference layer feature of the diffuse reflection layer of the component object to obtain the first material characteristic.
In another embodiment of the present application, if in step 2022 the reference layer feature is expanded based on two or more different global feature descriptors to obtain two or more different expanded layer features, then in step 2023 the obtained expanded layer features may be fused and the fused feature used as the first material characteristic of the texture material of the component object, or the expanded layer features may be fused together with the reference layer feature of the diffuse reflection layer of the component object to obtain the first material characteristic.
In one example, where the layer features are vectors, two or more layer features may be fused by connecting the vectors end to end in sequence to obtain one larger vector.
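Steps 2022 and 2023 can be sketched as follows: a small 2D reference feature map is expanded with two hypothetical global feature descriptors, max pooling and average pooling, and the expanded features are fused with the flattened reference feature by end-to-end concatenation. All data and names are illustrative.

```python
def global_max_pool(fmap):
    """Global maximum pooling descriptor over a 2D feature map."""
    return [max(max(row) for row in fmap)]

def global_avg_pool(fmap):
    """Global average pooling descriptor over a 2D feature map."""
    vals = [v for row in fmap for v in row]
    return [sum(vals) / len(vals)]

def first_material_feature(reference_fmap, descriptors):
    expanded = []
    for descriptor in descriptors:  # one expanded layer feature per descriptor
        expanded.extend(descriptor(reference_fmap))
    flat = [v for row in reference_fmap for v in row]
    return flat + expanded          # fusion: vectors connected end to end

fmap = [[0.0, 1.0], [2.0, 3.0]]     # reference layer feature
feature = first_material_feature(fmap, [global_max_pool, global_avg_pool])
```

Using descriptors at different scales in this way yields the multi-scale expanded features the text describes.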
In an embodiment of the present application, referring to fig. 4, step S102 may be implemented by a process including:
in step S301, a second texture feature of the texture material of each preset texture map is obtained.
Each of the preset texture maps can represent its own texture material; for example, at least the pattern, the color, and the like of that texture material can be represented.
In the application, a texture map library can be set in advance, a plurality of preset texture maps are set in the texture map library, and texture materials of the preset texture maps are different from each other pairwise.
The preset texture maps may include texture maps, prepared in advance by technicians, corresponding to texture materials frequently used in the market.
Then, after the texture map library is online, when a technician finds a texture map corresponding to a new texture material on the market, the found texture map corresponding to the new texture material may be added as a preset texture map to the texture map library for later use.
The second material characteristic of the texture material of each preset texture map may be obtained in real time with reference to the embodiment shown in fig. 3. For example, for any one preset texture map, the second material characteristic of its texture material may be generated in the manner of the embodiment shown in fig. 3, and the same applies to each of the other preset texture maps.
However, the inventors have found that the process of acquiring the second material characteristics of the texture material of each preset texture map in real time in the manner of the embodiment shown in fig. 3 takes a long time, resulting in low efficiency of acquiring the texture map corresponding to the texture material of the component object.
Therefore, in order to improve the efficiency of obtaining the texture map corresponding to the texture material of the component object, in the present application, the time consumed in the process of obtaining the second material characteristics of the texture material of each preset texture map may be reduced.
In order to reduce the time consumption of this process, in the present application, the second material characteristics of the texture materials of the preset texture maps in the texture map library may be generated in advance, and the generated second material characteristics may be stored.
And then, when the second material characteristics of the texture materials of the preset texture maps need to be acquired, the stored second material characteristics of the texture materials of the preset texture maps can be directly acquired.
For example, for any one preset texture map in the texture map library, the second material characteristic of its texture material may be generated in advance (refer to the embodiment shown in fig. 2), and the preset texture map and the second material characteristic may then be stored in the correspondence between texture maps and material characteristics of texture maps. The same operation is performed for each of the other preset texture maps.
Therefore, when the material characteristics of the texture material of each preset texture map need to be obtained, the second material characteristics corresponding to each preset texture map can be searched in the corresponding relation between the texture map and the material characteristics of the texture map, and therefore the second material characteristics of the texture material of each preset texture map can be obtained.
Statistics show that looking up the second material characteristic of each preset texture map in the correspondence between texture maps and material characteristics takes less time than generating it in real time according to the embodiment shown in fig. 3. The time consumed in obtaining the second material characteristics of the texture materials of the preset texture maps can therefore be reduced, and the efficiency of obtaining the texture map corresponding to the texture material of the component object can be improved.
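A hypothetical sketch of this precomputation: the second material characteristic of every preset texture map is generated once offline, stored in a correspondence (here a dict keyed by map name), and later looked up instead of recomputed. `compute_characteristic` is a trivial stand-in for the fig. 3 extraction pipeline; all names and values are illustrative.

```python
def compute_characteristic(texture_map):
    # Stand-in for the expensive feature-extraction pipeline of fig. 3.
    return [sum(texture_map) / len(texture_map), max(texture_map)]

preset_maps = {"oak": [0.2, 0.4], "marble": [0.8, 1.0]}

# Offline: build the texture-map -> material-characteristic correspondence.
characteristic_of = {name: compute_characteristic(tex)
                     for name, tex in preset_maps.items()}

# Online: a dictionary lookup replaces the real-time computation.
oak_feature = characteristic_of["oak"]
```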
In step S302, feature similarities between first material features of texture materials of the component object and second material features of the texture materials of the respective preset texture maps are obtained.
In an embodiment of the present application, for the texture material of any one preset texture map, a kNN (k-nearest-neighbor classification) algorithm may be used to calculate the feature similarity between the second material characteristic of that texture material and the first material characteristic of the texture material of the component object; of course, the feature similarity may also be calculated in other manners.
The same operation is performed for the texture material of each of the other preset texture maps, thereby obtaining the feature similarities between the first material characteristic of the texture material of the component object and the second material characteristics of the texture materials of the respective preset texture maps.
In step S303, at least one target texture map is selected from the plurality of preset texture maps according to the feature similarity.
For example, in one possible embodiment, the at least one target texture map may be screened from the plurality of preset texture maps in descending order of feature similarity.
That is, the preset texture maps may be sorted in descending order of the feature similarity between the second material characteristic of the texture material they present and the first material characteristic of the texture material of the component object, and the top-ranked maps may then be selected as the at least one target texture map, for example, 1 target texture map, 3 target texture maps, or 5 target texture maps.
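Steps S302 and S303 can be sketched with cosine similarity standing in for the feature-similarity measure (the patent mentions kNN; cosine similarity here is an illustrative choice) and a top-k selection in descending similarity order. All vectors and names are illustrative.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def screen_target_maps(first_feature, preset_features, k=1):
    """Rank preset maps by similarity between their second material
    characteristic and the component's first material characteristic,
    then keep the top k (descending feature similarity)."""
    ranked = sorted(preset_features.items(),
                    key=lambda item: cosine_similarity(first_feature, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

presets = {"oak": [0.9, 0.1], "marble": [0.0, 1.0]}
targets = screen_target_maps([1.0, 0.0], presets, k=1)
```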
In another embodiment of the present application, referring to fig. 5, step S103 includes:
in step S401, two or more target texture maps are displayed, respectively.
If two or more target texture maps are screened in step S102, one target texture map may be selected from them as the texture map corresponding to the texture material of the component object.
To select one target texture map from the two or more screened target texture maps, in one embodiment of the present application, one target texture map may be selected at random.
Alternatively, in another embodiment of the present application, the most frequently used target texture map may be selected from the two or more screened target texture maps.
Alternatively, in yet another embodiment of the present application, the user may participate in the selection, for example by manually selecting one target texture map from the two or more screened target texture maps.
In order to support that a user may manually select one target texture map from more than two screened target texture maps, in the present application, the electronic device may respectively display the more than two screened target texture maps, so that the user may manually select one target texture map from the more than two screened target texture maps.
In step S402, a manually selected target texture map among the two or more displayed target texture maps is determined.
In the present application, the two or more screened target texture maps may be respectively displayed on a screen of the electronic device. The user can then compare the differences among the texture materials of the displayed target texture maps, and compare each of them against the texture material of the component object of the subject object in the user-input image, so as to manually select the target texture map whose texture material differs least from that of the component object.
In step S403, a texture map corresponding to the texture material of the component object is acquired from the manually selected target texture map.
In one embodiment of the present application, the manually selected target texture map may be determined as the texture map corresponding to the texture material of the part object.
Referring to fig. 6, a flowchart illustrating steps of a method for generating a three-dimensional virtual model of a furniture object according to the present application is shown, and specifically, the method may include the following steps:
in step S501, a photographed target image is acquired, the target image including a furniture object including at least one component object;
in step S502, a furniture image of the furniture object is acquired from the target image;
in step S503, a component image of the component object is acquired from the furniture image;
in step S504, a first material characteristic of the texture material of the part object is acquired from the part image;
in step S505, at least one target texture map matching the second material characteristics of the presented texture material with the first material characteristics is screened from a plurality of preset texture maps, and the plurality of preset texture maps correspond to a plurality of texture materials;
in step S506, a texture map corresponding to the texture material of the component object is determined from the at least one filtered target texture map.
In step S507, a three-dimensional virtual model of the furniture object is generated from the texture map corresponding to the texture material of the component object.
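The data flow of steps S501 through S507 can be outlined as a pipeline skeleton. Every function body here is a trivial placeholder standing in for the corresponding step, not the patent's actual algorithm; all names and data are illustrative.

```python
def acquire_furniture_image(target_image):       # S502: crop furniture region
    return target_image["furniture"]

def acquire_component_image(furniture_image):    # S503: crop component region
    return furniture_image["component"]

def first_material_characteristic(component_image):  # S504: extract feature
    return component_image["feature"]

def screen_maps(first_feature, presets):         # S505: match preset maps
    return [name for name, feat in presets.items() if feat == first_feature]

def build_model(texture_map_names):              # S506-S507: texture the model
    return {"textures": texture_map_names}

target = {"furniture": {"component": {"feature": "wood"}}}
presets = {"oak": "wood", "marble": "stone"}

component = acquire_component_image(acquire_furniture_image(target))
feature = first_material_characteristic(component)
model = build_model(screen_maps(feature, presets))
```

Note that the component image is taken from the furniture image, not directly from the target image, which is the point argued in the passage below.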
In the present application, the component image of the component object is acquired directly from the furniture image of the furniture object, rather than from the target image.
The furniture image includes only the content of the furniture object, not the other content of the target image. In this way, no other complicated content interferes with the process of "acquiring a component image of a component object from a furniture image of the furniture object". Compared with acquiring the component image directly from the target image, acquiring it from the furniture image can therefore improve the quality of the resulting component image, for example, the degree of matching between the obtained component image and the edge contour of the component object in the target image. This in turn improves the quality of the generated three-dimensional virtual model of the furniture object, for example, the degree of matching between the generated model and the actual furniture object.
On the other hand, the process of acquiring the component image of the component object can proceed without human participation, which reduces labor cost. Moreover, owing to human physiological characteristics, human error is difficult to avoid when a person participates in acquiring the component image (for example, when manually selecting the edge contour of the component object of the furniture object in the target image).
In another aspect, the process of obtaining the texture map corresponding to the texture material of the component object can likewise proceed without human participation, which reduces labor cost; owing to human physiological characteristics, human error is similarly difficult to avoid when a person participates in obtaining the texture map.
In yet another aspect, a texture map corresponding to the texture material of the component object need not be generated in real time from the user-input image including the subject object. Instead, among the plurality of preset texture maps, the preset texture map that matches the texture material of the component object in the subject object is screened, for example, the preset texture map whose presented material characteristic is highly similar to the first material characteristic of the texture material of the component object. Because the photographing skill of ordinary users is generally uneven and below a professional level, generating a texture map directly from a user-shot image of the furniture object easily produces the situation that "the quality of the texture map of the texture material of the furniture object is low because the quality of the image including the furniture object shot by the user is low". The screening approach avoids this situation, and further avoids generating a low-quality three-dimensional virtual model of the furniture object.
Secondly, the preset texture maps of the present application may be generated from acquired high-quality (e.g., high-definition) images, for example, images taken by professionals. Such high-quality images usually do not suffer from low resolution, blur, angle distortion, or uneven illumination; they are typically pictures with high resolution, no blur, no angle distortion, and even illumination, and the texture material they present has high fineness and high fidelity. The preset texture map generated based on such a high-quality image is therefore itself a high-quality preset texture map.
Thus, the problem of low texture-map quality that may arise from "generating the texture map corresponding to the texture material of the component object in the subject object from the user-input image including the subject object" can be avoided, and the quality of the obtained texture map corresponding to the texture material of the component object in the subject object can be improved.
In another aspect, the present application obtains texture maps at the dimension (granularity) of a component object in the subject object, rather than at the dimension of the subject object as a whole, whose coarse granularity results in low-quality texture maps. Because the granularity of the texture map corresponding to the texture material of the component object is finer, the quality (e.g., accuracy) of the generated three-dimensional virtual model of the subject object can be improved.
It is noted that, for simplicity of description, the method embodiments are described as a series of acts, but those skilled in the art will appreciate that the present application is not limited by the order of acts described, as some steps may, in accordance with the present application, be performed in other orders or concurrently. Further, those skilled in the art will also appreciate that the embodiments described in the specification are exemplary, and the acts involved are not necessarily required by the present application.
Referring to fig. 7, a block diagram of an apparatus for obtaining a texture map according to the present application is shown, and the apparatus may specifically include the following modules:
a first obtaining module 11, configured to obtain, for a component object in a main object in an image, a first material characteristic of a texture material of the component object; the first screening module 12 is configured to screen at least one target texture map in which a second material characteristic of a presented texture material matches the first material characteristic from among a plurality of preset texture maps, where the plurality of preset texture maps correspond to a plurality of texture materials; a second obtaining module 13, configured to determine, from the at least one filtered target texture map, a texture map corresponding to the texture material of the component object.
In an optional implementation manner, the first obtaining module includes: a first acquisition unit, configured to acquire a diffuse reflection layer of the component object according to the image; and the second acquisition unit is used for acquiring the first material characteristics of the texture material of the component object according to the diffuse reflection layer of the component object.
In an optional implementation manner, the first obtaining unit includes: a detection subunit configured to detect position information of the component object in the image; the first acquiring subunit is used for acquiring the diffuse reflection layer of the image; and the screening subunit is used for screening the diffuse reflection layer of the component object from the diffuse reflection layer of the image according to the position information.
In an optional implementation manner, the second obtaining unit includes: a second obtaining subunit, configured to obtain, based on the feature extractor, a reference layer feature of the diffuse reflection layer of the component object; the expansion subunit is configured to expand the reference layer feature of the diffuse reflection layer of the component object based on at least one global feature descriptor to obtain at least one expanded layer feature; and the third acquiring subunit is configured to acquire the first material characteristic of the texture material of the component object at least according to the at least one expanded layer characteristic.
In an optional implementation manner, the first filtering module includes: a third obtaining unit, configured to obtain material characteristics of texture materials of each preset texture map; a fourth obtaining unit, configured to obtain feature similarities between the first material feature and material features of texture materials of the preset texture maps, respectively; and the screening unit is used for screening at least one target texture map according to the feature similarity in the plurality of preset texture maps.
In an optional implementation manner, the third obtaining unit is specifically configured to: look up, in a stored correspondence between texture maps and the material characteristics of their texture materials, the second material characteristic corresponding to each preset texture map.
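The look-up-and-match flow of the screening module can be sketched as follows. The cosine-similarity measure, the `preset_features` dictionary (the stored correspondence between texture maps and their material characteristics), and the `top_k`/`threshold` parameters are illustrative assumptions; the patent does not fix a particular similarity function.

```python
import numpy as np

def screen_target_maps(first_feature, preset_features, top_k=3, threshold=0.5):
    """Rank preset texture maps by cosine similarity between the component's
    first material feature and each map's precomputed second material feature,
    keeping the top-k matches above a threshold (both parameters assumed)."""
    q = first_feature / (np.linalg.norm(first_feature) + 1e-12)
    scored = []
    for map_id, feat in preset_features.items():     # correspondence table
        f = feat / (np.linalg.norm(feat) + 1e-12)
        sim = float(np.dot(q, f))                    # feature similarity
        if sim >= threshold:
            scored.append((map_id, sim))
    scored.sort(key=lambda kv: kv[1], reverse=True)  # best match first
    return scored[:top_k]
```

For example, a query feature close to a stored "oak" feature would rank that map first and exclude dissimilar maps below the threshold.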
In an optional implementation manner, the second obtaining module includes: a display unit, configured to display two or more target texture maps; a determination unit, configured to determine a manually selected target texture map among the displayed two or more target texture maps; and a fifth obtaining unit, configured to obtain the texture map corresponding to the texture material of the component object according to the manually selected target texture map.
In an alternative implementation, the part object includes a plurality of composite superimposed texture materials; the first obtaining module is specifically configured to: and respectively acquiring first material characteristics of each texture material of the component object according to the image.
In an optional implementation manner, the first screening module is specifically configured to: screen, from the plurality of preset texture maps, target texture maps whose presented material characteristics match the respective first material characteristics of the texture materials.
In an optional implementation manner, the second obtaining module is specifically configured to: acquire the display level, on the component object, of each texture material included in the component object; and superimpose the screened at least two preset texture maps according to the display levels of their corresponding texture materials on the component object, to obtain the texture map corresponding to the texture materials of the component object.
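The superimposition step for composite materials can be sketched as a simple back-to-front composite, ordering the screened maps by their display level on the component object. Straight-alpha "over" blending is an assumption here; the patent does not specify the blending model.

```python
import numpy as np

def composite_texture_maps(layers):
    """Overlay screened preset texture maps in display-level order
    (lowest level first) with straight alpha-over compositing.

    layers: list of (display_level, rgba) where rgba is HxWx4 in [0, 1].
    """
    ordered = sorted(layers, key=lambda kv: kv[0])   # bottom-most level first
    out = np.zeros_like(ordered[0][1])
    for _, rgba in ordered:
        a = rgba[..., 3:4]
        # new color = top color * alpha + accumulated color * (1 - alpha)
        out[..., :3] = rgba[..., :3] * a + out[..., :3] * (1.0 - a)
        out[..., 3:4] = a + out[..., 3:4] * (1.0 - a)
    return out
```

For instance, a half-transparent lacquer layer over an opaque wood-grain layer blends the two colors while keeping the result fully opaque.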
In the application, for a component object in a main object in an image, a first material characteristic of a texture material of the component object is obtained, and at least one target texture map in which a second material characteristic of the presented texture material is matched with the first material characteristic of the texture material of the component object is screened from a plurality of preset texture maps. The plurality of preset texture maps correspond to a plurality of texture materials. And determining a texture map corresponding to the texture material of the part object from the screened at least one target texture map.
On one hand, the texture map corresponding to the texture material of the component object can be obtained without human participation, which reduces labor cost; it also avoids the human error that, given human physiological limits, is unavoidable whenever people take part in the process.
On the other hand, a texture map corresponding to the texture material of the component object need not be generated in real time from a user-supplied image of the subject object. Instead, the preset texture map matching the texture material of the component object is screened from the plurality of preset texture maps, for example the preset texture map whose presented material characteristic is most similar to the first material characteristic of the texture material of the component object. Because most users do not shoot at a professional level and their skill varies widely, user-supplied images of furniture objects are often of low quality, and texture maps generated from them would be low in quality as well; screening from preset texture maps avoids this, and thereby also avoids generating a low-quality three-dimensional virtual model of the furniture object.
Further, the preset texture maps of the present application may be generated from acquired high-quality (e.g., high-definition) images, such as images taken by professionals. Such images are generally high-resolution, sharp, free of angle distortion, and evenly lit, and the texture materials they present have high fineness and fidelity, so the preset texture maps generated from them are themselves of high quality.
Thus, the low texture-map quality that may result from generating texture maps for the component objects directly from a user-supplied image of the subject object can be avoided, and the quality of the obtained texture maps corresponding to the texture materials of the component objects in the subject object can be improved.
In another aspect, the present application obtains texture maps at the granularity of the component objects within the subject object, rather than at the coarser granularity of the subject object as a whole, which tends to yield low-quality texture maps. Because the texture maps obtained here are finer-grained, the quality (e.g., accuracy) of the generated three-dimensional virtual model of the subject object can be improved.
Referring to fig. 8, a block diagram of an apparatus for generating a three-dimensional virtual model of a furniture object according to the present application is shown, and the apparatus may specifically include the following modules:
a third obtaining module 21, configured to obtain a captured target image, where the target image includes the furniture object, and the furniture object includes at least one component object; a fourth obtaining module 22, configured to obtain a furniture image of the furniture object according to the target image; a fifth acquiring module 23, configured to acquire a component image of the component object according to the furniture image; a sixth obtaining module 24, configured to obtain a first material characteristic of a texture material of the component object according to the component image; a second screening module 25, configured to screen at least one target texture map in which a second material characteristic of a presented texture material matches the first material characteristic from among a plurality of preset texture maps, where the plurality of preset texture maps correspond to a plurality of texture materials; a second determining module 26, configured to determine, from the filtered at least one target texture map, a texture map corresponding to the texture material of the component object. A generating module 27, configured to generate a three-dimensional virtual model of the furniture object according to the texture map.
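The module chain of Fig. 8 can be sketched as a pipeline in which each module is a pluggable callable. All five callables (`detect_furniture`, `detect_parts`, `extract_feature`, `screen_maps`, `build_model`) are hypothetical stand-ins for the detectors, matcher, and model generator the text describes.

```python
def generate_furniture_model(target_image, detect_furniture, detect_parts,
                             extract_feature, screen_maps, build_model):
    """End-to-end sketch of the module chain in Fig. 8; every callable is an
    assumed stand-in, not an API defined by the patent."""
    furniture_image = detect_furniture(target_image)        # modules 21-22
    texture_maps = {}
    for part_id, part_image in detect_parts(furniture_image).items():  # module 23
        feature = extract_feature(part_image)               # module 24
        candidates = screen_maps(feature)                   # module 25
        texture_maps[part_id] = candidates[0]               # module 26: best match
    return build_model(furniture_image, texture_maps)       # module 27
```

Keeping each stage injectable makes it easy to swap, say, the screening strategy without touching the rest of the pipeline.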
In the present application, the component image of the component object is acquired from the furniture image of the furniture object rather than directly from the target image.
The furniture image contains only the furniture object and none of the other content of the target image. As a result, no extraneous content interferes with the process of acquiring the component image of the component object from the furniture image. Compared with taking the component image directly from the target image, acquiring it from the furniture image therefore yields a higher-quality component image; for example, the obtained component image matches the edge contour of the component object in the target image more closely. This, in turn, improves the quality of the generated three-dimensional virtual model of the furniture object, for example its degree of matching with the actual furniture object.
On the other hand, the component image of the component object can be acquired without human participation, which reduces labor cost and avoids the human error that inevitably accompanies manual steps, such as hand-selecting the edge contour of a component object of the furniture object in the target image.
In another aspect, the texture map corresponding to the texture material of the component object can likewise be obtained without human participation, which reduces labor cost and avoids the human error that inevitably arises when people take part in the process.
In yet another aspect, a texture map corresponding to the texture material of the component object need not be generated in real time from a user-supplied image of the subject object. Instead, the preset texture map matching the texture material of the component object is screened from the plurality of preset texture maps, for example the preset texture map whose presented material characteristic is most similar to the first material characteristic of the texture material of the component object. Because most users do not shoot at a professional level and their skill varies widely, user-supplied images of furniture objects are often of low quality, and texture maps generated from them would be low in quality as well; screening from preset texture maps avoids this, and thereby also avoids generating a low-quality three-dimensional virtual model of the furniture object.
Further, the preset texture maps of the present application may be generated from acquired high-quality (e.g., high-definition) images, such as images taken by professionals. Such images are generally high-resolution, sharp, free of angle distortion, and evenly lit, and the texture materials they present have high fineness and fidelity, so the preset texture maps generated from them are themselves of high quality.
Thus, the low texture-map quality that may result from generating texture maps for the component objects directly from a user-supplied image of the subject object can be avoided, and the quality of the obtained texture maps corresponding to the texture materials of the component objects in the subject object can be improved.
In another aspect, the present application obtains texture maps at the granularity of the component objects within the subject object, rather than at the coarser granularity of the subject object as a whole, which tends to yield low-quality texture maps. Because the texture maps obtained here are finer-grained, the quality (e.g., accuracy) of the generated three-dimensional virtual model of the subject object can be improved.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The present application further provides a non-transitory readable storage medium storing one or more modules (programs) which, when applied to a device, cause the device to execute the instructions of the method steps in this application.
Embodiments of the present application provide one or more machine-readable media having instructions stored thereon which, when executed by one or more processors, cause an electronic device to perform the methods described in one or more of the above embodiments. In embodiments of the application, the electronic device includes servers, gateways, and sub-devices such as Internet-of-Things devices.
Embodiments of the present disclosure may be implemented as an apparatus, which may include electronic devices such as servers (clusters), terminal devices such as IoT devices, and the like, using any suitable hardware, firmware, software, or any combination thereof, for a desired configuration.
Fig. 9 schematically illustrates an example apparatus 1300 that can be used to implement various embodiments described herein.
For one embodiment, fig. 9 illustrates an example apparatus 1300 having one or more processors 1302, a control module (chipset) 1304 coupled to at least one of the processor(s) 1302, memory 1306 coupled to the control module 1304, non-volatile memory (NVM)/storage 1308 coupled to the control module 1304, one or more input/output devices 1310 coupled to the control module 1304, and a network interface 1312 coupled to the control module 1304.
Processor 1302 may include one or more single-core or multi-core processors, and processor 1302 may include any combination of general-purpose or special-purpose processors (e.g., graphics processors, application processors, baseband processors, etc.). In some embodiments, the apparatus 1300 may be a device such as the server or gateway described in the embodiments of the present application.
In some embodiments, apparatus 1300 may include one or more computer-readable media (e.g., memory 1306 or NVM/storage 1308) having instructions 1314 and one or more processors 1302, which in combination with the one or more computer-readable media, are configured to execute instructions 1314 to implement modules to perform actions described in this disclosure.
For one embodiment, control module 1304 may include any suitable interface controllers to provide any suitable interface to at least one of the processor(s) 1302 and/or any suitable device or component in communication with control module 1304.
The control module 1304 may include a memory controller module to provide an interface to the memory 1306. The memory controller module may be a hardware module, a software module, and/or a firmware module.
Memory 1306 may be used, for example, to load and store data and/or instructions 1314 for device 1300. For one embodiment, memory 1306 may comprise any suitable volatile memory, such as suitable DRAM. In some embodiments, the memory 1306 may comprise double data rate fourth-generation synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, control module 1304 may include one or more input/output controllers to provide an interface to NVM/storage 1308 and input/output device(s) 1310.
For example, NVM/storage 1308 may be used to store data and/or instructions 1314. NVM/storage 1308 may include any suitable non-volatile memory (e.g., flash memory) and/or may include any suitable non-volatile storage device(s) (e.g., one or more Hard Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives).
NVM/storage 1308 may include storage resources that are physically part of the device on which apparatus 1300 is installed, or it may be accessible by the device and need not be part of the device. For example, NVM/storage 1308 may be accessible over a network via input/output device(s) 1310.
Input/output device(s) 1310 may provide an interface for apparatus 1300 to communicate with any other suitable device; input/output device(s) 1310 may include a communication component, an audio component, a sensor component, and so on. The network interface 1312 may provide an interface for the device 1300 to communicate over one or more networks; the device 1300 may wirelessly communicate with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols, for example by accessing a communication-standard-based wireless network such as WiFi, 2G, 3G, 4G, or 5G, or a combination thereof.
For one embodiment, at least one of the processor(s) 1302 may be packaged together with logic for one or more controllers (e.g., memory controller modules) of the control module 1304. For one embodiment, at least one of the processor(s) 1302 may be packaged together with logic for one or more controllers of the control module 1304 to form a System In Package (SiP). For one embodiment, at least one of the processor(s) 1302 may be integrated on the same die with logic for one or more controller(s) of the control module 1304. For one embodiment, at least one of the processor(s) 1302 may be integrated on the same die with logic of one or more controllers of the control module 1304 to form a system on chip (SoC).
In various embodiments, apparatus 1300 may be, but is not limited to being: a server, a desktop computing device, or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.), among other terminal devices. In various embodiments, apparatus 1300 may have more or fewer components and/or different architectures. For example, in some embodiments, device 1300 includes one or more cameras, a keyboard, a Liquid Crystal Display (LCD) screen (including a touch screen display), a non-volatile memory port, multiple antennas, a graphics chip, an Application Specific Integrated Circuit (ASIC), and speakers.
An embodiment of the present application provides an electronic device, including: one or more processors; and one or more machine readable media having instructions stored thereon that, when executed by the one or more processors, cause the electronic device to perform a method as described in one or more of the present applications.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable information processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable information processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable information processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable information processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiment and all changes and modifications that fall within the true scope of the embodiments of the present application.
Finally, it should also be noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," and any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The method and apparatus for obtaining a texture map provided by the present application are described in detail above. Specific examples are used herein to explain the principles and implementation of the present application, and the description of the above embodiments is intended only to help in understanding the method and its core idea. Meanwhile, a person skilled in the art may, following the idea of the present application, vary the specific embodiments and the scope of application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. A method of obtaining a texture map, the method comprising:
for a component object in a main object in an image, acquiring a first material characteristic of a texture material of the component object;
screening, from a plurality of preset texture maps, at least one target texture map whose presented texture material has a second material characteristic matching the first material characteristic, wherein the plurality of preset texture maps correspond to a plurality of texture materials;
and determining a texture map corresponding to the texture material of the part object from the screened at least one target texture map.
2. The method of claim 1, wherein said obtaining a first material characteristic of a texture material of the part object from the image comprises:
acquiring a diffuse reflection layer of the component object according to the image;
and acquiring a first material characteristic of the texture material of the component object according to the diffuse reflection layer of the component object.
3. The method of claim 2, wherein said obtaining a diffuse reflection layer of said part object from said image comprises:
detecting position information of the component object in the image; and acquiring a diffuse reflection layer of the image;
and screening the diffuse reflection layer of the component object in the diffuse reflection layer of the image according to the position information.
4. The method according to claim 2, wherein the obtaining a first material characteristic of a texture material of the component object according to the diffuse reflection layer of the component object comprises:
acquiring reference layer characteristics of a diffuse reflection layer of the component object based on a characteristic extractor;
expanding the reference layer features of the diffuse reflection layer of the component object based on at least one global feature descriptor to obtain at least one expanded layer feature;
and acquiring a first material characteristic of the texture material of the component object at least according to at least one expanded layer characteristic.
5. The method of claim 1, wherein the part object comprises a plurality of composite superimposed texture materials;
obtaining a first material characteristic of a texture material of the part object from the image, including:
and respectively acquiring first material characteristics of each texture material of the component object according to the image.
6. A method of generating a three-dimensional virtual model of a furniture object, the method comprising:
acquiring a shot target image, wherein the target image comprises the furniture object, and the furniture object comprises at least one component object;
acquiring a furniture image of the furniture object according to the target image;
acquiring a component image of the component object according to the furniture image;
acquiring a first material characteristic of texture material of the part object according to the part image;
screening, from a plurality of preset texture maps, at least one target texture map whose presented texture material has a second material characteristic matching the first material characteristic, wherein the plurality of preset texture maps correspond to a plurality of texture materials;
determining a texture map corresponding to the texture material of the component object from the screened at least one target texture map;
and generating a three-dimensional virtual model of the furniture object according to the texture map.
7. An apparatus for obtaining a texture map, the apparatus comprising:
a first obtaining module, configured to obtain, for a component object in a main object in an image, a first material characteristic of a texture material of the component object;
a first screening module, configured to screen, from a plurality of preset texture maps, at least one target texture map whose presented texture material has a second material characteristic matching the first material characteristic, wherein the plurality of preset texture maps correspond to a plurality of texture materials;
and the first determining module is used for determining the texture map corresponding to the texture material of the part object from at least one screened target texture map.
8. An apparatus for generating a three-dimensional virtual model of a furniture object, the apparatus comprising:
the third acquisition module is used for acquiring a shot target image, wherein the target image comprises the furniture object, and the furniture object comprises at least one component object;
the fourth acquisition module is used for acquiring a furniture image of the furniture object according to the target image;
a fifth acquiring module, configured to acquire a component image of the component object according to the furniture image;
a sixth obtaining module, configured to obtain a first material characteristic of a texture material of the component object according to the component image;
a second screening module, configured to screen, from a plurality of preset texture maps, at least one target texture map whose presented texture material has a second material characteristic matching the first material characteristic, wherein the plurality of preset texture maps correspond to a plurality of texture materials;
the second determining module is used for determining a texture map corresponding to the texture material of the component object from at least one screened target texture map;
and the generating module is used for generating a three-dimensional virtual model of the furniture object according to the texture map.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method according to any of claims 1 to 6 are implemented when the processor executes the program.
10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
Application CN202111148398.XA, filed 2021-09-28 (priority date 2021-09-28): Method and device for obtaining texture map; status pending; published as CN114140570A.


Publications (1)

Publication Number Publication Date
CN114140570A 2022-03-04



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination