CN118365889B - Home decoration image processing method, apparatus, device, medium, and program product - Google Patents

Home decoration image processing method, apparatus, device, medium, and program product

Info

Publication number
CN118365889B
CN118365889B
Authority
CN
China
Prior art keywords
furniture
image
feature
home decoration
color
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410796872.7A
Other languages
Chinese (zh)
Other versions
CN118365889A (en)
Inventor
Xiang Haiming (向海明)
Liang Chao (梁超)
Chu Ying (初颖)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Zhizhu Perfect Home Technology Co ltd
Original Assignee
Wuhan Zhizhu Perfect Home Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Zhizhu Perfect Home Technology Co ltd filed Critical Wuhan Zhizhu Perfect Home Technology Co ltd
Priority to CN202410796872.7A
Publication of CN118365889A
Application granted
Publication of CN118365889B
Legal status: Active

Landscapes

  • Image Analysis (AREA)

Abstract

The application provides a home decoration image processing method, apparatus, device, medium, and program product. The method comprises the following steps: performing image segmentation processing on a home decoration image to obtain at least one furniture segmentation image and at least one non-furniture segmentation image; extracting non-furniture features from the non-furniture segmented image and furniture features from the furniture segmented image, wherein the non-furniture features comprise at least a color feature and the furniture features comprise at least a furniture style feature; and matching a corresponding non-furniture image based on the non-furniture features and a corresponding furniture image based on the furniture features, wherein the furniture image and the non-furniture image are used for acquiring home decoration layout information. The method achieves high accuracy in home decoration image analysis.

Description

Home decoration image processing method, apparatus, device, medium, and program product
Technical Field
The present application relates to the field of home decoration design technology, and in particular, to a home decoration image processing method, apparatus, device, medium, and program product.
Background
Analysis of home decoration images can be applied to interior design; for example, by analyzing home decoration images, images of similar style can be obtained, thereby providing interior design suggestions to users.
Existing methods for analyzing home decoration images mostly rely on traditional image analysis and processing techniques. Because home decoration images usually contain diverse objects, building structures, lighting conditions, and the like, traditional techniques generally struggle with such complex images, so the resulting analysis has low accuracy and cannot be applied effectively to interior design.
Disclosure of Invention
The application provides a home decoration image processing method, apparatus, device, medium, and program product, which are used to solve the problem of low accuracy in home decoration image analysis results.
In a first aspect, the present application provides a home decoration image processing method, including:
performing image segmentation processing on the home decoration image to obtain at least one furniture segmentation image and at least one non-furniture segmentation image;
extracting non-furniture features in the non-furniture segmented image and extracting furniture features in the furniture segmented image; wherein the non-furniture feature comprises at least a color feature, the furniture feature comprises at least a furniture style feature;
matching a corresponding non-furniture image based on the non-furniture feature and matching a corresponding furniture image based on the furniture feature; the furniture images and the non-furniture images are used for acquiring home decoration layout information;
extracting furniture style characteristics in the furniture segmentation image comprises the following steps:
dividing the furniture segmented image into at least one image block;
extracting space vectors of all the image blocks, and carrying out position coding on the space vectors of all the image blocks to obtain position space vectors corresponding to all the image blocks;
processing the position space vectors corresponding to the image blocks based on multi-head self-attention to obtain furniture style features of the furniture segmented image; wherein the furniture style feature is the feature from which the style class of the furniture segmented image is generated.
In one implementation, the extracting the spatial vector of each image block includes:
Flattening each image block into a one-dimensional vector to obtain a one-dimensional vector corresponding to each image block;
mapping the one-dimensional vector corresponding to each image block to a high-dimensional embedding space to obtain the space vector.
In one implementation manner, the image segmentation processing is performed on the home decoration image to obtain at least one furniture segmentation image and at least one non-furniture segmentation image, including:
Extracting features of the home decoration image to obtain a target feature image;
Classifying the feature data of each pixel in the target feature map based on preset categories to obtain the category of each pixel in the home decoration image; wherein the preset categories include furniture categories and non-furniture categories;
and dividing the home decoration image based on the area where the pixels of the furniture class are located, and dividing the home decoration image based on the area where the pixels of the non-furniture class are located, so as to correspondingly obtain the furniture division image and the non-furniture division image.
In one implementation manner, the feature extraction of the home decoration image to obtain a target feature image includes:
Performing coding processing on the home decoration image to obtain a coding feature map;
Decoding the coding feature map to obtain a decoding feature map;
and carrying out feature map fusion processing on the coding feature map and the decoding feature map to obtain a target feature image.
In one implementation, the non-furniture segmented image includes a wall segmented image and a floor segmented image, the non-furniture feature includes a color feature; the extracting non-furniture features in the non-furniture segmented image includes:
performing color clustering processing on the wall segmentation image to obtain a wall color feature; wherein the wall color feature comprises the color distribution and color proportions in the wall segmentation image;
performing color clustering processing on the floor segmentation image to obtain a floor color feature; wherein the floor color feature comprises the color distribution and color proportions in the floor segmentation image;
and taking the wall color feature and the floor color feature as the color features of the non-furniture segmented image.
In one implementation, after the matching of the corresponding non-furniture image based on the non-furniture feature and the matching of the corresponding furniture image based on the furniture feature, the method further comprises:
and mapping the furniture image and the non-furniture image to a preset three-dimensional room model to obtain a home decoration layout model.
In a second aspect, the present application provides a home decoration image processing apparatus, comprising: an image segmentation module, configured to perform image segmentation processing on a home decoration image to obtain at least one furniture segmentation image and at least one non-furniture segmentation image;
The feature extraction module is used for extracting non-furniture features in the non-furniture segmented image and extracting furniture features in the furniture segmented image; wherein the non-furniture feature comprises at least a color feature, the furniture feature comprises at least a furniture style feature;
The image matching module is used for matching corresponding non-furniture images based on the non-furniture features and matching corresponding furniture images based on the furniture features; the furniture images and the non-furniture images are used for acquiring home decoration layout information;
the feature extraction module includes:
An image block acquisition unit for dividing the furniture-divided image into at least one image block;
the space vector acquisition unit is used for extracting the space vector of each image block, and carrying out position coding on the space vector of each image block to obtain a position space vector corresponding to each image block;
the furniture style feature extraction unit is used for processing the position space vectors corresponding to the image blocks based on multi-head self-attention to obtain furniture style features of the furniture segmented image; wherein the furniture style feature is the feature from which the style class of the furniture segmented image is generated.
In a third aspect, the present application provides an electronic device, including: a processor and a memory; the memory is used for storing instructions; and the processor is configured to execute the instructions in the memory, so that the electronic device performs the home decoration image processing method according to the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium having stored therein computer-executable instructions which, when executed by a processor, implement the home decoration image processing method according to the first aspect.
In a fifth aspect, the present application provides a computer program product comprising a computer program which, when executed by a processor, implements the home decoration image processing method according to the first aspect.
According to the home decoration image processing method, apparatus, device, medium, and program product, the home decoration image is divided into a furniture segmented image and a non-furniture segmented image, and feature extraction is performed on each separately. The complex home decoration image can thus be analyzed accurately from the two directions of furniture and non-furniture, yielding high-precision non-furniture features of the non-furniture segmented image and furniture features of the furniture segmented image. A specific way of extracting the furniture style feature among the furniture features is provided, so that the furniture style feature is obtained accurately. Corresponding images are then matched accurately based on the non-furniture features and the furniture features, and home decoration layout information can be acquired efficiently and accurately.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
FIG. 1 is a diagram of an implementation environment according to an exemplary embodiment;
FIG. 2 is a flowchart of a home decoration image processing method according to an exemplary embodiment;
FIG. 3 is a schematic view of a furniture segmented image according to an exemplary embodiment;
FIG. 4 is a block diagram of a home decoration image processing apparatus according to an exemplary embodiment;
FIG. 5 is a block diagram of an electronic device according to an exemplary embodiment.
Specific embodiments of the present application have been shown by way of the above drawings and will be described in more detail below. The drawings and the written description are not intended to limit the scope of the inventive concepts in any way, but rather to illustrate the inventive concepts to those skilled in the art by reference to the specific embodiments.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the application. Rather, they are merely examples of apparatus and methods consistent with aspects of the application as detailed in the accompanying claims.
Indoor image analysis is critical in various applications such as interior design, virtual reality, real estate, and robot navigation. Conventional image analysis and processing methods generally have difficulty with complex indoor scenes, because such scenes contain diverse objects, lighting conditions, and building structures. Based on this, the present embodiment provides a home decoration image processing method, apparatus, device, medium, and program product that can be used for complex home decoration image analysis with high accuracy.
The home decoration image processing method provided by the application can be applied to the implementation environment shown in fig. 1, and as shown in fig. 1, the implementation environment can comprise a terminal and a server, wherein the terminal is in communication connection with the server.
It will be appreciated that fig. 1 shows one terminal and one server; in some embodiments, there may be one or more terminals, and in other embodiments, one or more servers.
In this embodiment, the terminal may receive the home decoration image to be processed, and send the home decoration image to the server, so that the server processes the home decoration image, and sends the processing result to the terminal, so as to display the processing result through the visualization structure carried by the terminal.
In some embodiments, after acquiring the home decoration image, the server performs image segmentation processing on it to obtain at least one furniture segmented image and at least one non-furniture segmented image; extracts non-furniture features from the non-furniture segmented image and furniture features from the furniture segmented image, wherein the non-furniture features comprise at least a color feature and the furniture features comprise at least a furniture style feature; and matches a corresponding non-furniture image based on the non-furniture features and a corresponding furniture image based on the furniture features, the furniture image and the non-furniture image being used for acquiring home decoration layout information. In this process, the furniture style feature is extracted by dividing the furniture segmented image into at least one image block; extracting the space vector of each image block and position-encoding it to obtain the position space vector corresponding to each image block; and processing the position space vectors corresponding to the image blocks based on multi-head self-attention to obtain the furniture style feature of the furniture segmented image, the furniture style feature being the feature from which the style class of the furniture segmented image is generated.
The furniture image and the non-furniture image may then be sent to the terminal for display through the visualization structure of the terminal.
In some embodiments, the terminal may be a wired or wireless terminal with a visualization structure; in other embodiments, the terminal may be an electronic device with a visualization structure, such as a mobile phone, a computer, a tablet, or a vehicle-mounted device.
In some embodiments, the server may be a physical server, a server cluster, a cloud server, or the like, without specific limitation herein.
It will be appreciated that the home decoration image processing apparatus is disposed on the server in fig. 1, but the implementation environment shown in fig. 1 in this embodiment is merely exemplary, and in other embodiments, the home decoration image processing method may be applied to other implementation environments, and the home decoration image processing apparatus may be disposed on other structures in other implementation environments, which is not limited herein.
Based on the implementation environment of fig. 1, the present embodiment proposes a home decoration image processing method, referring to fig. 2, where the home decoration image processing method is applied to the server of fig. 1, and the home decoration image processing method includes steps S210 to S250, which are specifically described as follows:
S210: and performing image segmentation processing on the home decoration image to obtain at least one furniture segmentation image and at least one non-furniture segmentation image.
In this embodiment, the home decoration image may be an image that has already been preprocessed; in some embodiments, the preprocessing may include resizing, normalization, and data augmentation to obtain the home decoration image.
In some embodiments, the home furnishing image may include walls, floors, furniture, and the like.
In one embodiment, the at least one furniture segmented image and the at least one non-furniture segmented image are obtained by semantically segmenting the home decoration image, which may be implemented via a neural network such as a U-Net network.
In this embodiment, this can be achieved by: extracting features of the home decoration image to obtain a target feature image; classifying the feature data of each pixel in the target feature map based on a preset category to obtain the category of each pixel in the home decoration map; wherein the preset categories include furniture categories and non-furniture categories; and dividing the home decoration image based on the area where the pixels of the furniture class are located, and dividing the home decoration image based on the area where the pixels of the non-furniture class are located, so as to correspondingly obtain a furniture division image and a non-furniture division image.
In one embodiment, a home decoration image I may be input to a neural network (e.g., U-Net) whose goal is to generate a class label map M of the same size as the home decoration image.
In this embodiment, the target feature image may be acquired in the neural network by: performing coding processing on the home decoration image to obtain a coding feature map; decoding the coding feature map to obtain a decoding feature map; and performing feature map fusion processing on the coding feature map and the decoding feature map to obtain the target feature image.
The encoder part of the U-Net neural network encodes the home decoration image to obtain a coding feature map. In one embodiment, the encoder comprises a plurality of convolution and pooling layers for extracting image features, and the convolution operation in a convolution layer is:
Y = X ∗ W + b
where X is the input feature map, W is the convolution kernel, b is the bias term, Y is the output feature map, and ∗ denotes the convolution operation.
The coding feature map then enters the decoder part for decoding processing to obtain a decoding feature map, and feature map fusion processing is performed on the coding feature map and the decoding feature map to obtain the target feature image. In an embodiment, the decoder part of the U-Net comprises a plurality of upsampling and convolution layers; the decoder fuses the encoder's coding feature maps with the decoder's decoding feature maps through skip connections (skip connections), so as to capture multi-scale context information. The upsampling operation may be implemented using either transposed convolution (Transposed Convolution) or bilinear interpolation (Bilinear Interpolation).
After the target feature map is obtained, the output layer of the U-Net maps the target feature map to a preset category by using a 1x1 convolution operation, and a category probability distribution of each pixel in the home decoration image is obtained by applying a pixel-level Softmax activation function, so that the category label map is obtained.
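As a concrete illustration of the encoder-decoder structure described above, the following is a minimal PyTorch sketch of a U-Net-style segmentation network with one skip connection, transposed-convolution upsampling, and a 1x1 class-mapping output layer. The channel widths, depth, and class count are illustrative assumptions, not the configuration used by the application.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Minimal U-Net-style network: one encoder stage, one decoder stage,
    a skip connection, and a 1x1 convolution mapping features to classes."""
    def __init__(self, in_ch=3, num_classes=2):
        super().__init__()
        self.enc = nn.Sequential(                          # encoder: convolution + ReLU
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)                        # pooling layer
        self.mid = nn.Sequential(
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)  # transposed-conv upsampling
        self.dec = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(32, num_classes, 1)          # 1x1 conv: features -> class scores

    def forward(self, x):
        e = self.enc(x)                                    # coding feature map
        m = self.mid(self.pool(e))
        d = self.up(m)                                     # decoding feature map
        d = self.dec(torch.cat([d, e], dim=1))             # skip connection fuses the two maps
        return self.head(d)                                # per-pixel class scores (pre-softmax)

logits = TinyUNet()(torch.randn(1, 3, 256, 256))           # -> shape (1, 2, 256, 256)
```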
The Softmax activation function in this embodiment is calculated as:
Softmax(x_i) = exp(x_i) / Σ_j exp(x_j)
where x_i is the raw score of the i-th preset category and Σ_j exp(x_j) is the sum of the exponentiated scores over all preset categories.
In this embodiment, the U-Net network used to acquire the class of each pixel in the home decoration image can use the cross-entropy loss as its loss function to measure the difference between the real class distribution and the predicted class distribution. The cross-entropy loss CE(y_true, y_pred) is calculated as:
CE(y_true, y_pred) = − Σ_i y_true_i · log(y_pred_i)
where y_true_i is the i-th element of the true probability distribution y_true, and y_pred_i is the i-th element of the predicted probability distribution y_pred. The loss is computed at the pixel level and then summed or averaged over the entire home decoration image.
During training, the loss function is minimized by a gradient descent algorithm (e.g., SGD or Adam), and the U-Net model is thereby trained to generate accurate pixel-level class labels.
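A minimal sketch of this training step, under the assumption that PyTorch is used: nn.CrossEntropyLoss applies the pixel-level softmax and cross-entropy and averages it over all pixels, and Adam performs the gradient-based update. A single 1x1 convolution stands in for the full U-Net, and all shapes and the learning rate are placeholders.

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 2, kernel_size=1)        # stand-in pixel classifier (use the U-Net here)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()               # pixel-wise softmax + cross-entropy, averaged

images = torch.randn(4, 3, 64, 64)            # dummy batch of home decoration images
labels = torch.randint(0, 2, (4, 64, 64))     # per-pixel classes: 0 = non-furniture, 1 = furniture

logits = model(images)                        # (4, num_classes, 64, 64)
loss = loss_fn(logits, labels)                # CE(y_true, y_pred) over all pixels
opt.zero_grad()
loss.backward()
opt.step()                                    # one gradient descent step
```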
After the categories of the pixels in the home decoration image are obtained, the home decoration image can be segmented based on the categories, namely, the corresponding areas of the pixels in the same category in the home decoration image are extracted to form segmented images of the corresponding categories.
Dividing the home decoration image based on the region where the pixels of the furniture class are positioned to obtain a furniture division image; and dividing the home decoration image based on the region where the pixels of the non-furniture class are located, and correspondingly obtaining a non-furniture divided image.
As shown in fig. 3, in an embodiment, the furniture segmented image is obtained by segmenting the region enclosed by the closed dashed line, i.e., the region where the pixels of the furniture class are located. The regions of the other classes, which lie outside the closed dashed line, are not shown in fig. 3. It should be understood that, in the segmented image, the regions of the other classes have no content and can be regarded as pure black (the subsequent color feature extraction on the furniture segmented image does not extract the black information of those regions), while the region where the pixels of the furniture class are located retains the original content of the corresponding region in the home decoration image.
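The region extraction described above can be sketched as follows with NumPy; the function name and class constant are hypothetical, and the zeroed-out pixels correspond to the pure-black portion mentioned above.

```python
import numpy as np

FURNITURE = 1  # hypothetical class index

def extract_class_region(image: np.ndarray, label_map: np.ndarray, cls: int) -> np.ndarray:
    """Keep the pixels whose predicted class is `cls`; all other regions are
    zeroed out (pure black), so later color extraction can ignore them."""
    segmented = np.zeros_like(image)
    mask = (label_map == cls)          # H x W boolean mask of the class region
    segmented[mask] = image[mask]      # copy the original content of that region
    return segmented

# e.g. furniture_img = extract_class_region(home_img, label_map, FURNITURE)
```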
S230: and extracting non-furniture features in the non-furniture segmented image, and extracting furniture features in the furniture segmented image, wherein the furniture features at least comprise furniture style features.
In this embodiment, the non-furniture feature comprises at least a color feature and the furniture feature comprises at least a furniture style feature.
For a non-furniture segmented image, only the color features of the non-furniture segmented image may be extracted, although in some embodiments, the style features of the non-furniture segmented image may also be extracted.
In one embodiment, the non-furniture categories may include multiple sub-categories, each corresponding to one type of item, such as a wall sub-category, a floor sub-category, a ceiling sub-category, and the like, although this is not specifically limited.
For different subcategories, when the image is segmented, non-furniture segmented images of the corresponding subcategories can be obtained, for example, the non-furniture segmented images comprise wall segmented images corresponding to the wall subcategories and floor segmented images corresponding to the floor subcategories.
In an embodiment, when extracting the non-furniture features of the non-furniture segmented images, color feature extraction is performed separately on the non-furniture segmented image of each sub-category; the color features may include the colors and their proportions in the segmented image.
In an embodiment, the non-furniture segmented image includes a wall segmented image corresponding to the wall sub-category and a floor segmented image corresponding to the floor sub-category. Color clustering processing is performed on the wall segmented image to obtain a wall color feature, where the wall color feature includes the color distribution and color proportions in the wall segmented image; color clustering processing is performed on the floor segmented image to obtain a floor color feature, where the floor color feature includes the color distribution and color proportions in the floor segmented image; and the wall color feature and the floor color feature are taken as the color features of the non-furniture segmented image.
In this embodiment, the color clustering processing of a non-furniture segmented image extracts all the color components in the image and their proportions, using color space conversion, clustering, and neighboring-pixel computation. The resulting color features provide valuable information about the dominant colors, color distribution, and color relationships in the image, and can be used to understand the overall color scheme and atmosphere of the indoor space.
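One way such color clustering could be sketched is with k-means from scikit-learn, as below. Clustering directly in RGB, k = 5 clusters, and the non-black pixel filter are assumptions of this sketch; the embodiment also mentions color space conversion and neighboring-pixel computation, which are omitted here.

```python
import numpy as np
from sklearn.cluster import KMeans

def dominant_colors(segment_rgb: np.ndarray, k: int = 5):
    """Cluster the pixel colors of a segmented image and return each cluster's
    mean color together with its proportion of the non-black pixels."""
    pixels = segment_rgb.reshape(-1, 3).astype(np.float32)
    pixels = pixels[pixels.sum(axis=1) > 0]                  # drop the masked-out black region
    km = KMeans(n_clusters=k, n_init=10).fit(pixels)
    ratios = np.bincount(km.labels_, minlength=k) / len(km.labels_)
    order = np.argsort(-ratios)                              # most dominant color first
    return [(km.cluster_centers_[i].round().astype(int).tolist(), float(ratios[i]))
            for i in order]

# e.g. wall_colors = dominant_colors(wall_segment)  # [([R, G, B], proportion), ...]
```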
Table 1 below shows partial color feature information obtained after color feature extraction of a non-furniture segmented image according to one embodiment.
TABLE 1
In some embodiments, the furniture feature may further include a color feature, that is, the color feature in the furniture segmented image is extracted, and the extraction manner of the color feature in the furniture segmented image may refer to the extraction manner of the color feature in the non-furniture feature, which is not described herein.
In some embodiments, furniture style features of the furniture segmented image may be extracted by a style extraction network.
In one embodiment, as shown in fig. 2, in step S230, furniture style features in the furniture segmented image are extracted, including steps S231 to S233, which are described in detail as follows:
Step S231: the furniture segmented image is segmented into at least one image block.
In this embodiment, a manner of extracting the style features of the furniture is provided, which may be implemented through a style extraction network, such as a Vision Transformer network.
In some embodiments, the style extraction network divides the furniture segmented image into image blocks of equal size, e.g., 16x16 pixels. The blocks are treated as elements of a sequence; for an HxW furniture image, the number of blocks is (H/16) x (W/16).
Step S232: and extracting the space vector of each image block, and carrying out position coding on the space vector of each image block to obtain the position space vector corresponding to each image block.
In this embodiment, each image block is flattened into a one-dimensional vector, obtaining the one-dimensional vector corresponding to each image block; the one-dimensional vector corresponding to each image block is then mapped to a high-dimensional embedding space to obtain the space vector.
Each image block is flattened into a one-dimensional vector and mapped to a high-dimensional embedding space. After flattening, each image block is a vector of length C×16×16, where C is the number of image channels. It is then mapped to a D-dimensional embedding space by a linear transformation; the space vector is obtained as follows:
X_j = Linear(P_j)
where P_j is the one-dimensional vector of the j-th image block, X_j is the corresponding space vector, and Linear is a linear mapping.
In order for the style extraction network to learn the relative positional relationships between image blocks, a position code must be added, by adding a fixed position vector to each space vector:
X_j' = X_j + Pos_j
where Pos_j is the position code of the j-th image block, and X_j' is the space vector with the position code added, i.e., the position space vector.
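Steps S231 and S232 together can be sketched as follows, assuming PyTorch; the 224x224 input size, embedding dimension D = 128, and the use of random fixed position codes are illustrative assumptions (the text only requires that a fixed position vector be added to each space vector).

```python
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    """Divide an image into 16x16 blocks, flatten each block to a one-dimensional
    vector P_j, map it linearly to a D-dimensional space vector X_j, and add a
    fixed position code Pos_j to obtain the position space vector X_j'."""
    def __init__(self, img_size=224, patch=16, in_ch=3, dim=128):
        super().__init__()
        n = (img_size // patch) ** 2                          # (H/16) x (W/16) blocks
        self.patch = patch
        self.proj = nn.Linear(in_ch * patch * patch, dim)     # X_j = Linear(P_j)
        self.register_buffer("pos", torch.randn(1, n, dim))   # fixed (non-trainable) Pos_j

    def forward(self, x):                                     # x: (B, C, H, W)
        B, C, H, W = x.shape
        p = self.patch
        x = x.unfold(2, p, p).unfold(3, p, p)                 # (B, C, H/p, W/p, p, p)
        x = x.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * p * p)  # flatten each block P_j
        return self.proj(x) + self.pos                        # X_j' = X_j + Pos_j

tokens = PatchEmbed()(torch.randn(1, 3, 224, 224))            # -> (1, 196, 128)
```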
Step S233: and processing the position space vectors corresponding to each image block based on the multi-head self-attention to obtain furniture style characteristics of the furniture segmented image.
In this embodiment, the encoder in the style extraction network receives the position space vectors and processes them based on multi-head self-attention to obtain the furniture style feature, i.e., the feature from which the style class of the furniture segmented image is generated; the style class of the segmented image can then be output through the classification structure of the style extraction network.
The encoder consists of multiple Transformer layers, each containing a multi-head self-attention (Multi-Head Self-Attention) module and a feed-forward neural network (Feed-Forward Neural Network) module.
Multi-head self-attention: the multi-headed self-attention module allows the model to learn the global features of the input sequence in different locations and different representation subspaces. The self-attention mechanism can be implemented by computing three vectors, query (Query), key (Key), and Value (Value).
Q = W_q X_j'
K = W_k X_j'
V = W_v X_j'
Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
where Q, K, and V are the Query, Key, and Value matrices; W_q, W_k, and W_v are learnable weight matrices; d_k is the dimension of the Q and K vectors; the attention weights are normalized by the softmax function so that they sum to 1; K^T is the transpose of K; and Attention denotes the attention mechanism.
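A single-head sketch of the computation these formulas define, assuming PyTorch; a multi-head module would run several such heads over different representation subspaces in parallel and concatenate their outputs. The dimensions are illustrative.

```python
import math
import torch
import torch.nn as nn

class SelfAttentionHead(nn.Module):
    """One attention head: Q = W_q X', K = W_k X', V = W_v X',
    Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    def __init__(self, dim=128, d_k=64):
        super().__init__()
        self.wq = nn.Linear(dim, d_k, bias=False)   # W_q
        self.wk = nn.Linear(dim, d_k, bias=False)   # W_k
        self.wv = nn.Linear(dim, d_k, bias=False)   # W_v

    def forward(self, x):                           # x: (B, n_blocks, dim)
        q, k, v = self.wq(x), self.wk(x), self.wv(x)
        scores = q @ k.transpose(-2, -1) / math.sqrt(k.size(-1))  # Q K^T / sqrt(d_k)
        return torch.softmax(scores, dim=-1) @ v    # weights sum to 1 per query position

out = SelfAttentionHead()(torch.randn(1, 196, 128))  # -> (1, 196, 64)
```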
In this embodiment, after passing through several Transformer encoder layers, the final feature vector is sent to a fully connected layer (the classification structure) of a style classifier, and a cross-entropy loss is computed against the real labels so that the style extraction network can be trained and optimized. The cross-entropy loss function measures the difference between the model's predicted probability distribution and the real probability distribution and is minimized with a gradient descent algorithm. During training, the style extraction network learns how to perform global self-attention over the input image to capture long-range dependencies and local image characteristics, so as to obtain accurate furniture style features.
It will be appreciated that, in addition to the furniture style feature and the color feature, the furniture features may include a shape feature, a material feature, a usage feature, a comprehensive feature, and the like. The shape feature reflects the shape of the furniture (rectangular, square, circular, elliptical, etc.); the material feature reflects the material of the furniture; the usage feature reflects the type of furniture (such as bed, table, or chair); and the comprehensive feature is a fusion of the shape, style, material, and usage features (excluding color). Which furniture features to acquire can be determined according to requirements. In some embodiments, if the requirement is merely to find furniture images of similar or identical style and color, the furniture style feature and the color feature may be acquired; if the requirement is to find furniture images of similar or identical material, color, and style, the furniture features include the material feature, the furniture style feature, and the color feature. This is confirmed according to the requirements on the desired furniture image.
S250: the corresponding non-furniture image is matched based on the non-furniture feature and the corresponding furniture image is matched based on the furniture feature.
The furniture image and the non-furniture image are used to obtain home layout information.
In this embodiment, after the non-furniture features are obtained, a non-furniture image can be matched in an image database. The non-furniture features of the matched non-furniture image are highly similar to those of the non-furniture segmented image; that is, the two have highly similar color features (e.g., similar color distribution and similar color proportions). This process may be implemented through an image processing neural network.
In another embodiment, the furniture image can be matched in the image database by the furniture features, the furniture image and the furniture segmented image have high/same color feature similarity, and the furniture style features of the two are similar, and the process can be realized by the image processing neural network.
Of course, if the furniture feature further includes a shape feature, a material feature, etc., then the matched furniture image is an image with high/same color feature similarity, high/same shape feature similarity, high/same material feature similarity, and high/same style feature similarity as the furniture segmented image.
In some embodiments, some furniture images with high similarity are obtained through furniture feature matching, and one or more images with highest similarity can be determined to be final furniture images based on the similarity between the features of the matched images and the furniture features.
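The matching step could be sketched as a nearest-neighbor search over feature vectors, as below. Cosine similarity is an assumption of this sketch, since the application does not fix a particular similarity measure, and the database layout is hypothetical.

```python
import numpy as np

def match_top_k(query: np.ndarray, db_features: np.ndarray, k: int = 3):
    """Rank database images by cosine similarity between the query feature
    vector and each stored feature vector; return the top-k indices and scores."""
    q = query / np.linalg.norm(query)
    db = db_features / np.linalg.norm(db_features, axis=1, keepdims=True)
    sims = db @ q                        # cosine similarity against every database image
    top = np.argsort(-sims)[:k]          # indices of the most similar images
    return top, sims[top]

# e.g. idx, scores = match_top_k(furniture_feature, furniture_feature_db)
```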
In this embodiment, after the furniture image and the non-furniture image are obtained, the furniture image and the non-furniture image may be mapped to a preset three-dimensional room model to obtain the home decoration layout model.
In this embodiment, the three-dimensional room model may include areas such as a bedroom, a kitchen, and a living room. Based on the furniture image and the non-furniture image, a home decoration layout model can be generated in the three-dimensional room model whose furniture style and colors are similar to those of the furniture segmented image and whose colors are similar to those of the non-furniture segmented image. If the furniture features include features other than the color feature and the furniture style feature, the resulting layout model also reflects the furniture effect determined by the corresponding feature; for example, with a material feature, the furniture material in the layout model is similar or identical to that of the furniture image.
In this embodiment, the user may select a home decoration image, and a furniture image and a non-furniture image are then acquired based on it, so that a home decoration layout model that accords with the user's preferences and requirements can be generated from the furniture image and the non-furniture image.
The home decoration image processing method can recommend complementary furniture or non-furniture items according to the color scheme and style of a home decoration image, facilitating interior design tasks. It can also estimate the value of real estate by analyzing its interior condition, layout, and design; improve the performance of robot navigation algorithms by providing a more accurate representation of the indoor environment; enhance virtual reality experiences by generating more realistic and immersive three-dimensional models of indoor spaces; and assist energy management and smart home automation by helping to analyze lighting conditions, occupancy, and usage patterns in indoor environments. The indoor image analysis method may be further improved by incorporating additional features or techniques, such as object detection, depth estimation, or three-dimensional reconstruction.
In some embodiments, a generative model may be used to produce an indoor effect model based on the home decoration image processing results, with the user entering corresponding text on the user interface, for example: "I want a Japanese-style bedroom in a cool color scheme."
The home decoration image processing in this embodiment can finally obtain design information of ceilings, floors, walls, and furniture (tables, chairs), and the design information includes information of furniture style (materials, uses, shapes, etc.), colors, etc.
After the design information is obtained, it is mapped to a three-dimensional model. A floor plan, which may be a CAD drawing, is selected and parsed into a vectorized format; the structure of the room is then presented in three dimensions (i.e., the preset three-dimensional room model is obtained), and the non-furniture and furniture designs in the preset three-dimensional room model are determined based on the design information (the furniture image and the non-furniture image), for example, where the bed is located and how the furniture is laid out.
According to the above technical solution, the home decoration image is divided into furniture-class regions and non-furniture-class regions, and feature extraction is performed on each separately, so that complex home decoration images can be processed and accurate non-furniture color features as well as furniture color and furniture style features are obtained. Similar images can then be matched based on the furniture-class and non-furniture-class features, and home decoration layout is realized effectively and accurately. By processing different objects separately, the process can be used for complex home decoration image analysis, has high accuracy, and can be effectively applied to interior design.
Fig. 4 is a block diagram of a home decoration image processing apparatus according to an exemplary embodiment; the home decoration image processing apparatus 400 includes:
An image segmentation module 410, configured to perform an image segmentation process on the home decoration image to obtain at least one furniture segmentation image and at least one non-furniture segmentation image;
a feature extraction module 430, configured to extract non-furniture features from the non-furniture segmented image and furniture features from the furniture segmented image; wherein the non-furniture features comprise at least a color feature, and the furniture features comprise at least a furniture style feature and/or a color feature;
an image matching module 450, configured to match a corresponding non-furniture image based on the non-furniture features and a corresponding furniture image based on the furniture features, the furniture image and the non-furniture image being used for acquiring home decoration layout information. For the case where the furniture features include the furniture style feature, the feature extraction module 430 includes:
an image block acquisition unit 431 for dividing the furniture-divided image into at least one image block;
A space vector obtaining unit 432, configured to extract a space vector of each image block, and perform position encoding on the space vector of each image block to obtain a position space vector corresponding to each image block;
a furniture style feature extraction unit 433, configured to process the position space vectors corresponding to the image blocks based on multi-head self-attention to obtain the furniture style features of the furniture segmented image; wherein the furniture style feature is the feature from which the style class of the furniture segmented image is generated.
In one implementation, the spatial vector acquisition unit includes:
a one-dimensional vector acquisition sub-unit, configured to flatten each image block into a one-dimensional vector to obtain the one-dimensional vector corresponding to each image block;
and a position coding sub-unit, configured to map the one-dimensional vector corresponding to each image block to a high-dimensional embedding space to obtain the space vector.
In one implementation, the image segmentation module includes:
the target feature image acquisition unit is used for carrying out feature extraction on the home decoration image to obtain a target feature image;
the classifying unit is used for classifying the characteristic data of each pixel in the target characteristic diagram based on a preset category so as to obtain the category of each pixel in the home decoration diagram; wherein the preset categories include furniture categories and non-furniture categories;
the image segmentation unit is used for segmenting the home decoration image based on the area where the pixels of the furniture class are located, and segmenting the home decoration image based on the area where the pixels of the non-furniture class are located, so that the furniture segmentation image and the non-furniture segmentation image are correspondingly obtained.
In one implementation, the target feature image acquisition unit includes:
an encoding sub-unit, configured to perform coding processing on the home decoration image to obtain a coding feature map;
a decoding sub-unit, configured to decode the coding feature map to obtain a decoding feature map;
and a feature fusion sub-unit, configured to perform feature map fusion processing on the coding feature map and the decoding feature map to obtain the target feature image.
In one implementation, the non-furniture segmented image includes a wall segmented image and a floor segmented image, and the non-furniture feature includes a color feature; the feature extraction module comprises:
a second color feature extraction unit, configured to perform color clustering processing on the wall segmentation image to obtain wall color features; wherein the wall color features include the color distribution and color proportions in the wall segmentation image;
a first color feature extraction unit, configured to perform color clustering processing on the floor segmentation image to obtain floor color features; wherein the floor color features include the color distribution and color proportions in the floor segmentation image;
and a color feature acquisition unit, configured to take the wall color features and the floor color features as the color features of the non-furniture segmented image.
In one implementation, the home decoration image processing apparatus further includes:
and the model generation module is used for mapping the furniture image and the non-furniture image to a preset three-dimensional room model so as to obtain a home decoration layout model.
The home decoration image processing apparatus provided in this embodiment may be used to execute the home decoration image processing method; its implementation principle and technical effects are similar and are not repeated here.
Fig. 5 is a block diagram of an electronic device according to an exemplary embodiment. Referring to fig. 5, the electronic device 500 may include a processor 51 and a memory 52, where the processor 51 and the memory 52 may communicate; illustratively, the processor 51 and the memory 52 communicate via a communication bus 53. The memory 52 is used for storing instructions, and the processor 51 is used for invoking the instructions in the memory to perform the home decoration image processing method shown in any of the method embodiments described above.
The processor may be a central processing unit (Central Processing Unit, CPU), another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of a method disclosed in connection with the present application may be executed directly by a hardware processor, or executed by a combination of hardware and software modules in a processor.
The application provides a computer-readable storage medium on which computer-executable instructions are stored; when executed by a processor, the computer-executable instructions implement the home decoration image processing method of any of the embodiments described above.
An embodiment of the present application provides a computer program product, which includes a computer program; when the computer program is executed by a processor, the home decoration image processing method described above is implemented.
Other embodiments of the application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (9)

1. A home decoration image processing method, characterized by comprising:
performing image segmentation processing on the home decoration image to obtain at least one furniture segmentation image and at least one non-furniture segmentation image;
extracting non-furniture features in the non-furniture segmented image and extracting furniture features in the furniture segmented image; wherein the non-furniture feature comprises at least a color feature, the furniture feature comprises at least a furniture style feature;
matching a corresponding non-furniture image based on the non-furniture feature and matching a corresponding furniture image based on the furniture feature; the furniture images and the non-furniture images are used for acquiring home decoration layout information;
extracting furniture style characteristics in the furniture segmentation image comprises the following steps:
dividing the furniture segmented image into at least one image block;
Flattening each image block into a one-dimensional vector to obtain a one-dimensional vector corresponding to each image block;
Mapping the one-dimensional vector corresponding to each image block to a high-dimensional embedding space to obtain a space vector of each image block;
Position coding is carried out on the space vector of each image block, so that a position space vector corresponding to each image block is obtained;
processing the position space vectors corresponding to the image blocks based on multi-head self-attention to obtain furniture style features of the furniture segmented image; wherein the furniture style feature is the feature from which the style class of the furniture segmented image is generated.
2. The method according to claim 1, wherein the performing an image segmentation process on the home decoration image to obtain at least one furniture segmentation image and at least one non-furniture segmentation image comprises:
Extracting features of the home decoration image to obtain a target feature image;
Classifying the feature data of each pixel in the target feature map based on preset categories to obtain the category of each pixel in the home decoration image; wherein the preset categories include furniture categories and non-furniture categories;
and dividing the home decoration image based on the area where the pixels of the furniture class are located, and dividing the home decoration image based on the area where the pixels of the non-furniture class are located, so as to correspondingly obtain the furniture division image and the non-furniture division image.
3. The method according to claim 2, wherein the feature extraction of the home decoration image to obtain a target feature image includes:
Performing coding processing on the home decoration image to obtain a coding feature map;
Decoding the coding feature map to obtain a decoding feature map;
and carrying out feature map fusion processing on the coding feature map and the decoding feature map to obtain a target feature image.
4. The method of claim 1, wherein the non-furniture segmented image comprises a wall segmented image and a floor segmented image, the non-furniture feature comprising a color feature; the extracting non-furniture features in the non-furniture segmented image includes:
performing color clustering processing on the wall segmentation image to obtain a wall color feature; wherein the wall color feature comprises the color distribution and color proportions in the wall segmentation image;
performing color clustering processing on the floor segmentation image to obtain a floor color feature; wherein the floor color feature comprises the color distribution and color proportions in the floor segmentation image;
and taking the wall color feature and the floor color feature as the color features of the non-furniture segmented image.
5. The method of claim 1, wherein after the matching of the corresponding non-furniture image based on the non-furniture feature and the matching of the corresponding furniture image based on the furniture feature, the method further comprises:
and mapping the furniture image and the non-furniture image to a preset three-dimensional room model to obtain a home decoration layout model.
6. A home decoration image processing apparatus, comprising:
The image segmentation module is used for carrying out image segmentation processing on the home decoration image to obtain at least one furniture segmentation image and at least one non-furniture segmentation image;
The feature extraction module is used for extracting non-furniture features in the non-furniture segmented image and extracting furniture features in the furniture segmented image; wherein the non-furniture feature comprises at least a color feature, the furniture feature comprises at least a furniture style feature;
The image matching module is used for matching corresponding non-furniture images based on the non-furniture features and matching corresponding furniture images based on the furniture features; the furniture images and the non-furniture images are used for acquiring home decoration layout information;
the feature extraction module includes:
An image block acquisition unit for dividing the furniture-divided image into at least one image block;
The space vector acquisition unit is used for respectively flattening each image block into a one-dimensional vector so as to obtain a one-dimensional vector corresponding to each image block; mapping the one-dimensional vector corresponding to each image block to a high-dimensional embedding space to obtain a space vector of each image block; and performing position coding on the space vector of each image block to obtain a position space vector corresponding to each image block;
the furniture style feature extraction unit is used for processing the position space vectors corresponding to the image blocks based on multi-head self-attention to obtain furniture style features of the furniture segmented image; wherein the furniture style feature is the feature from which the style class of the furniture segmented image is generated.
7. An electronic device, comprising: a processor and a memory; the memory is used for storing instructions; and the processor is configured to execute the instructions in the memory, so that the electronic device performs the home decoration image processing method according to any one of claims 1 to 5.
8. A computer-readable storage medium, wherein computer-executable instructions are stored in the computer-readable storage medium, which when executed by a processor, are adapted to implement the home decoration image processing method according to any one of claims 1 to 5.
9. A computer program product comprising a computer program which, when executed by a processor, implements the home decoration image processing method according to any one of claims 1 to 5.
CN202410796872.7A 2024-06-20 2024-06-20 Home decoration image processing method, apparatus, device, medium, and program product Active CN118365889B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410796872.7A CN118365889B (en) 2024-06-20 2024-06-20 Home decoration image processing method, apparatus, device, medium, and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410796872.7A CN118365889B (en) 2024-06-20 2024-06-20 Home decoration image processing method, apparatus, device, medium, and program product

Publications (2)

Publication Number Publication Date
CN118365889A (en) 2024-07-19
CN118365889B (en) 2024-09-24

Family

ID=91885395

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410796872.7A Active CN118365889B (en) 2024-06-20 2024-06-20 Home decoration image processing method, apparatus, device, medium, and program product

Country Status (1)

Country Link
CN (1) CN118365889B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113094801A (en) * 2021-04-30 2021-07-09 土巴兔集团股份有限公司 Decoration simulation image generation method, device, equipment and medium
CN113298105A (en) * 2020-07-22 2021-08-24 阿里巴巴集团控股有限公司 Image processing, displaying, acquiring and home decoration matching processing method, device and medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8989440B2 (en) * 2012-03-27 2015-03-24 Way Out Ip, Llc System and method of room decoration for use with a mobile device
US10956626B2 (en) * 2019-07-15 2021-03-23 Ke.Com (Beijing) Technology Co., Ltd. Artificial intelligence systems and methods for interior design
CN112116620B (en) * 2020-09-16 2023-09-22 北京交通大学 Indoor image semantic segmentation and coating display method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113298105A (en) * 2020-07-22 2021-08-24 阿里巴巴集团控股有限公司 Image processing, displaying, acquiring and home decoration matching processing method, device and medium
CN113094801A (en) * 2021-04-30 2021-07-09 土巴兔集团股份有限公司 Decoration simulation image generation method, device, equipment and medium

Also Published As

Publication number Publication date
CN118365889A (en) 2024-07-19

Similar Documents

Publication Publication Date Title
Gwak et al. Generative sparse detection networks for 3d single-shot object detection
WO2021175050A1 (en) Three-dimensional reconstruction method and three-dimensional reconstruction device
CN113449131A (en) Object image re-identification method based on multi-feature information capture and correlation analysis
CN112907602B (en) Three-dimensional scene point cloud segmentation method based on improved K-nearest neighbor algorithm
CN115063573B (en) Multi-scale target detection method based on attention mechanism
US11495055B1 (en) Pedestrian trajectory prediction method and system based on multi-interaction spatiotemporal graph network
CN115995039A (en) Enhanced semantic graph embedding for omni-directional location identification
WO2023142602A1 (en) Image processing method and apparatus, and computer-readable storage medium
CN114067041B (en) Material generation method and device of three-dimensional model, computer equipment and storage medium
CN115222998B (en) Image classification method
CN118097340B (en) Training method, system, equipment and medium for lane image segmentation model
CN114612902A (en) Image semantic segmentation method, device, equipment, storage medium and program product
CN116385660A (en) Indoor single view scene semantic reconstruction method and system
CN112668662B (en) Outdoor mountain forest environment target detection method based on improved YOLOv3 network
Li et al. Maskformer with improved encoder-decoder module for semantic segmentation of fine-resolution remote sensing images
CN117830537A (en) Weak supervision 3D scene graph generation method, device, equipment and medium
CN118365889B (en) Home decoration image processing method, apparatus, device, medium, and program product
CN117765258A (en) Large-scale point cloud semantic segmentation method based on density self-adaption and attention mechanism
CN117456104A (en) Indoor scene three-dimensional modeling method and system based on structural function analysis
CN117037102A (en) Object following method, device, computer equipment and storage medium
CN111597367A (en) Three-dimensional model retrieval method based on view and Hash algorithm
CN116977668A (en) Image recognition method, device, computer equipment and computer storage medium
CN114565773A (en) Method and device for semantically segmenting image, electronic equipment and storage medium
CN115995079A (en) Image semantic similarity analysis method and homosemantic image retrieval method
CN112417961A (en) Sea surface target detection method based on scene prior knowledge

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant