CN112365576B - Method, device and server for recommending a position of a garden component


Info

Publication number
CN112365576B
Authority
CN
China
Prior art keywords
component
fazenda
feature
garden
visual
Prior art date
Legal status
Active
Application number
CN202011251174.7A
Other languages
Chinese (zh)
Other versions
CN112365576A (en)
Inventor
刘丽娟
袁燚
胡志鹏
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202011251174.7A priority Critical patent/CN112365576B/en
Publication of CN112365576A publication Critical patent/CN112365576A/en
Application granted granted Critical
Publication of CN112365576B publication Critical patent/CN112365576B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a method, a device and a server for recommending a position of a garden component. The recommendation method comprises the following steps: obtaining a garden rendering graph and a garden relationship graph corresponding to a target virtual garden, wherein at least one existing garden component is placed in the target virtual garden, the garden rendering graph describes the visual state of the target virtual garden, and the garden relationship graph describes the spatial relationship of each existing garden component; performing feature extraction on the garden rendering graph through a feature extraction model to obtain visual features of the garden rendering graph; and determining, through a graph generation model, a recommended position of a newly added garden component in the target virtual garden based on the garden relationship graph and the visual features, wherein the graph generation model includes a visual perception message passing module based on a multi-head attention mechanism. The invention can effectively improve the rationality of the recommended position of the garden component.

Description

Method, device and server for recommending a position of a garden component
Technical Field
The invention relates to the technical field of deep learning, and in particular to a method, a device and a server for recommending a position of a garden component.
Background
Garden construction is a common gameplay feature in MMORPGs (Massively Multiplayer Online Role-Playing Games), in which players can build their virtual gardens with a high degree of freedom during the game. At present, in order to reduce the difficulty of building a virtual garden for the player, the related art proposes learning the layout of the virtual garden with graph neural network techniques and graph generation models (such as generative adversarial networks, variational autoencoders and autoregressive models), so as to recommend placement positions of new garden components to the player and thereby reduce the difficulty of building the virtual garden. However, when the layout of the virtual garden is learned with such graph neural network techniques, the learning difficulty is high and the learning effect is therefore poor; moreover, the connection relationships between the garden components in the virtual garden are not used efficiently during learning, which degrades the message passing effect and ultimately leads to poor rationality of the recommended placement positions.
Disclosure of Invention
Accordingly, the present invention is directed to a method, an apparatus and a server for recommending a position of a garden component, which can effectively improve the rationality of the recommended position of the garden component.
In a first aspect, an embodiment of the present invention provides a method for recommending a position of a garden component, including: obtaining a garden rendering graph and a garden relationship graph corresponding to a target virtual garden, wherein at least one existing garden component is placed in the target virtual garden, the garden rendering graph describes the visual state of the target virtual garden, and the garden relationship graph describes the spatial relationship of each existing garden component; performing feature extraction on the garden rendering graph through a feature extraction model to obtain visual features of the garden rendering graph; and determining, through a graph generation model, a recommended position of a newly added garden component in the target virtual garden based on the garden relationship graph and the visual features, wherein the graph generation model includes a visual perception message passing module based on a multi-head attention mechanism.
In one embodiment, the feature extraction model includes a plurality of visual feature extraction units, and the visual features include component local visual features and a garden global visual feature. The step of performing feature extraction on the garden rendering graph through the feature extraction model to obtain the visual features of the garden rendering graph includes: for each visual feature extraction unit, performing feature extraction on a designated feature map through the visual feature extraction unit to obtain the component local visual features and the garden global visual feature output by that visual feature extraction unit; taking the component local visual features output by a first designated visual feature extraction unit in the feature extraction model as the component local visual features of the garden rendering graph; and taking the garden global visual feature output by a second designated visual feature extraction unit in the feature extraction model as the garden global visual feature of the garden rendering graph.
In one embodiment, the step of determining, through the graph generation model, a recommended position of a newly added garden component in the target virtual garden based on the garden relationship graph and the visual features includes: obtaining, through the graph generation model, a component relationship distribution model corresponding to the target virtual garden based on the garden relationship graph, the component local visual features and the garden global visual feature, wherein the component relationship distribution model describes the placement rules among the existing garden components; generating a position recommendation heat map based on the component relationship distribution model; and determining the recommended position of the newly added garden component in the target virtual garden according to the heat probabilities represented in the position recommendation heat map.
In one embodiment, the graph generation model further comprises an adjacency matrix encoder and a relationship prediction unit, and includes a plurality of visual perception message passing modules based on the multi-head attention mechanism. The step of obtaining, through the graph generation model, the component relationship distribution model corresponding to the target virtual garden based on the garden relationship graph, the component local visual features and the garden global visual feature includes: acquiring an adjacency matrix, component attribute features and component relationship attribute features based on the garden relationship graph; encoding the adjacency matrix through the adjacency matrix encoder to obtain the component initial features of each existing garden component; performing iterative message passing through the visual perception message passing modules based on the component local visual features, the component attribute features, the component relationship attribute features and the component initial features to obtain a feature representation of each existing garden component in the garden relationship graph; and obtaining, through the relationship prediction unit, the component relationship distribution model corresponding to the target virtual garden based on the feature representation of each existing garden component.
In one embodiment, the step of obtaining, through the visual perception message passing module, a feature representation of each existing garden component in the garden relationship graph based on the component local visual features, the component attribute features, the component relationship attribute features and the component initial features includes: if the current iteration number is 1, performing iterative message passing through the visual perception message passing module based on the component local visual features, the component attribute features, the component relationship attribute features and the component initial features to obtain the feature representation of the current iteration; if the current iteration number is not 1, judging whether the current iteration number reaches a preset iteration number; and if the current iteration number does not reach the preset iteration number, performing iterative message passing through the visual perception message passing module based on the component local visual features, the component attribute features, the component relationship attribute features and the feature representation of the previous iteration to obtain the feature representation of the current iteration, until the current iteration number reaches the preset iteration number.
In one embodiment, each of the visual perception message passing modules comprises a visual conversion unit, a feature fusion unit and a multi-head attention unit; the step of performing iterative message transfer by the visual perception message transfer module based on the component local visual feature, the component attribute feature, the component relationship attribute feature and the feature representation of the previous iteration corresponding to the current iteration to obtain the feature representation of the current iteration includes: converting the component local visual features into visual feature vectors by the visual conversion unit; performing feature fusion on the visual feature vector and the feature representation of the previous iteration corresponding to the current iteration through the feature fusion unit to obtain a fusion feature vector; and obtaining the characteristic representation of the current iteration through the multi-head attention unit based on the fusion characteristic vector, the component attribute characteristic and the component relation attribute characteristic.
In one embodiment, the multi-head attention unit comprises a plurality of message feature conversion subunits, an attention parameter calculation subunit and a feature vector update subunit, wherein the parameters of each message feature conversion subunit are different. The step of obtaining, through the multi-head attention unit, a feature representation of the current iteration based on the fused feature vector, the component attribute features and the component relationship attribute features includes: for each message feature conversion subunit, obtaining, through that message feature conversion subunit, the message feature vector it outputs based on the fused feature vector, the component attribute features and the component relationship attribute features; calculating, through the attention parameter calculation subunit, an attention weight for the message feature vector output by each message feature conversion subunit; and obtaining, through the feature vector update subunit, the feature representation of the current iteration based on the weighted sum formed with the attention weights.
In one embodiment, the relationship prediction unit comprises hybrid multi-category subunits. The step of obtaining, through the relationship prediction unit, a component relationship distribution model corresponding to the target virtual garden based on the feature representation of each existing garden component includes: simulating, through the hybrid multi-category subunits, the spatial relationship between each existing garden component and the newly added garden component based on the feature representation of each existing garden component, and obtaining the component relationship distribution model corresponding to the target virtual garden.
In one embodiment, the training step of the feature extraction model and the map generation model includes: calculating a component relation loss value based on a preset component relation loss function and the component relation distribution model; calculating a global matching loss value based on a preset global matching loss function, the component local visual features and the feature representations of each of the existing garden components in the target virtual garden; calculating a total loss value according to the component relation loss value and the global matching loss value; and respectively updating the parameters of the feature extraction model and the parameters of the graph generation model by using the total loss value.
In one embodiment, the step of calculating a global matching loss value based on a preset global matching loss function, the component local visual features and the feature representations of each of the existing garden components in the target virtual garden comprises: obtaining a global feature pair based on the component local visual features and the feature representations of each of the existing garden components in the target virtual garden, wherein the global feature pair comprises a first global feature and a second global feature; calculating a cosine distance between the first global feature and the second global feature; calculating a probability of matching between the garden rendering graph and the garden relationship graph based on the cosine distance; and substituting the matching probability into the preset global matching loss function to obtain the global matching loss value.
In one embodiment, the step of calculating a total loss value from the component relationship loss value and the global matching loss value comprises: calculating a weight value of the component relation loss value and/or the global matching loss value based on the current iteration times; and carrying out weighted summation on the component relation loss value and the global matching loss value according to the weight value to obtain a total loss value.
In one embodiment, the step of obtaining a garden rendering graph and a garden relationship graph corresponding to the target virtual garden includes: obtaining garden information of the target virtual garden, wherein the garden information includes area description information of the target virtual garden and component description information of the existing garden components; rendering the target virtual garden based on the garden information to obtain the garden rendering graph corresponding to the target virtual garden; and extracting the spatial relationships among all the existing garden components in the target virtual garden based on the garden information to obtain the garden relationship graph corresponding to the target virtual garden.
In a second aspect, an embodiment of the present invention further provides a device for recommending a position of a garden component, including: a diagram obtaining module, configured to obtain a garden rendering graph and a garden relationship graph corresponding to a target virtual garden, wherein at least one existing garden component is placed in the target virtual garden, the garden rendering graph describes the visual state of the target virtual garden, and the garden relationship graph describes the spatial relationship of each existing garden component; a feature extraction module, configured to perform feature extraction on the garden rendering graph through a feature extraction model to obtain visual features of the garden rendering graph; and a position recommendation module, configured to determine, through a graph generation model, a recommended position of a newly added garden component in the target virtual garden based on the garden relationship graph and the visual features, wherein the graph generation model includes a visual perception message passing module based on a multi-head attention mechanism.
In a third aspect, an embodiment of the present invention further provides a server, including a processor and a memory; the memory has stored thereon a computer program which, when executed by the processor, performs the method according to any of the first aspects provided.
In a fourth aspect, embodiments of the present invention also provide a computer storage medium storing computer software instructions for use with any of the methods provided in the first aspect.
According to the method, device and server for recommending the position of a garden component provided above, a garden rendering graph and a garden relationship graph corresponding to a target virtual garden are first obtained, feature extraction is then performed on the garden rendering graph through a feature extraction model to obtain visual features of the garden rendering graph, and a recommended position of a newly added garden component in the target virtual garden is determined through a graph generation model based on the garden relationship graph and the visual features. At least one existing garden component is placed in the target virtual garden, the garden rendering graph describes the visual state of the target virtual garden, the garden relationship graph describes the spatial relationship of each existing garden component, and the graph generation model includes a visual perception message passing module based on a multi-head attention mechanism. Because the graph generation model contains the visual perception message passing module based on the multi-head attention mechanism, and the message passing mechanism in the graph generation model is optimized through this module, the graph generation model can make better use of the features contained in the garden relationship graph; performing position recommendation on this basis, according to the garden relationship graph and the visual features, can effectively improve the rationality of the recommended position of the newly added garden component.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for recommending a position of a garden component according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a visual perception message passing module according to an embodiment of the present invention;
FIG. 3 is a block diagram of a method for recommending a location of a garden component, according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating another method for recommending a location of a garden component according to an embodiment of the present invention;
FIG. 5a is a schematic diagram of a current scenario according to an embodiment of the present invention;
FIG. 5b is a schematic illustration of a garden rendering and a garden relationship diagram provided in an embodiment of the present invention;
FIG. 5c is a schematic diagram of an edge set according to an embodiment of the present invention;
FIG. 5d is a schematic diagram of a position recommendation heat map according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a recommendation device for a location of a garden component according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a server according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described in conjunction with the embodiments, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
At present, the related art proposes that the relationships between the garden components in a virtual garden can be learned with a graph generation model. However, during learning, the existing graph generation model applies an almost uniform transformation to the input features passed along from the original features through an attention-weight learning network, and learns a different weight for every feature dimension of every garden component, which increases the learning difficulty; on the other hand, it does not efficiently use the connection relationships between the garden components in the virtual garden, which degrades the message passing effect. In addition, the related art proposes that the visual features of the virtual garden can be used, but only the visual feature of each garden component is simply fed in; that is, the full-graph features and the full-image visual features are not well matched, which affects the rationality of the position recommendation. In view of this, the invention provides a method, a device and a server for recommending a position of a garden component, which can effectively improve the rationality of the recommended position of the garden component.
For the understanding of the present embodiment, a detailed description will be given of a method for recommending a position of a garden component according to an embodiment of the present invention, referring to a flowchart of a method for recommending a position of a garden component shown in fig. 1, the method mainly includes steps S102 to S106:
Step S102, a garden rendering graph and a garden relationship graph corresponding to the target virtual garden are obtained. At least one existing garden component is placed in the target virtual garden, the garden rendering graph describes the visual state of the target virtual garden, and the garden relationship graph describes the spatial relationship of each existing garden component. In one embodiment, the game may provide a virtual garden system that the player can design and build autonomously; the garden components are the resources used to build a garden in the garden system, and the player may freely select components and adjust their position and orientation to complete the placement of garden components, where an existing garden component may be understood as a garden component that has already been placed in the target virtual garden. The visual state may be understood as, for example, depth information of the existing garden components or detailed information of the water areas in the target virtual garden. In an alternative embodiment, a garden image of the target virtual garden may be acquired, so that the garden rendering graph and the garden relationship graph can be obtained by rendering the garden image or by extracting relationships from it.
Step S104, feature extraction is performed on the garden rendering graph through the feature extraction model to obtain visual features of the garden rendering graph. The input of the feature extraction model is the garden rendering graph, and its output is the visual features, which may include component local visual features and/or a garden global visual feature.
Step S106, a recommended position of a newly added garden component in the target virtual garden is determined through the graph generation model based on the garden relationship graph and the visual features. The graph generation model comprises a visual perception message passing module based on a multi-head attention mechanism, a newly added garden component may be understood as a garden component to be added to the target virtual garden, and the recommended position may be represented in the form of a probability or a heat map. In an alternative embodiment, the inputs of the graph generation model include an adjacency matrix, component attribute features and component relationship attribute features obtained based on the garden relationship graph, together with the visual features, and the output of the graph generation model includes a component relationship distribution model, so that a position recommendation heat map is generated based on the component relationship distribution model, on which the recommended position of the newly added garden component can be determined.
According to the method for recommending a position of a garden component provided by the embodiment of the present invention, the graph generation model contains a visual perception message passing module based on a multi-head attention mechanism, and the message passing mechanism in the graph generation model is optimized through this module, so that the graph generation model can make better use of the features contained in the garden relationship graph; performing position recommendation on this basis, according to the garden relationship graph and the visual features, can effectively improve the rationality of the recommended position of the newly added garden component.
In one embodiment, the embodiment of the present invention provides an implementation manner of obtaining a garden rendering map and a garden relationship map corresponding to a target virtual garden, which is referred to as steps a to c:
and a, obtaining the garden information of the target virtual garden. The area description information is used for representing whether each area in the target virtual fazenda prohibits placement of the fazenda component or not, and the component description information is used for describing information of the fazenda component, such as three-dimensional data, current coordinates, rotation angle and the like of the fazenda component. Alternatively, the above-described garden information may be represented in the form of an image, i.e., a garden image containing the area description information and the component description information may be acquired.
And b, rendering the target virtual garden based on the garden information to obtain a garden rendering graph corresponding to the target virtual garden. In one embodiment, the garden image may be rendered by a renderer from a top-down view to obtain a top-down garden rendering graph.
And c, extracting the spatial relationships among all the existing garden components in the target virtual garden based on the garden information to obtain a garden relationship graph corresponding to the target virtual garden. In one embodiment, a relationship extractor may be used to extract, from the garden image (i.e. the scene information), the spatial relationships between the discrete existing garden components in the target virtual garden, resulting in the garden relationship graph; a minimal sketch of such an extractor is given after these steps.
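As an illustration of step c, the following sketch builds a simple relationship graph from component coordinates. The component fields, the distance-based connection rule and the threshold are assumptions made for this example; the patent does not specify the extractor's internal rules.

```python
from dataclasses import dataclass
import math

@dataclass
class GardenComponent:
    component_id: int
    kind: str          # e.g. "fence", "tree", "fountain" (assumed component types)
    x: float
    y: float
    rotation: float    # placement angle in degrees

def build_relationship_graph(components, max_distance=5.0):
    """Connect two existing components with an edge when they lie close enough
    to be treated as spatially related (an assumed criterion)."""
    nodes = {c.component_id: c for c in components}
    edges = []
    for i, a in enumerate(components):
        for b in components[i + 1:]:
            d = math.hypot(a.x - b.x, a.y - b.y)
            if d <= max_distance:
                # edge attributes: relative offset and distance between the two components
                edges.append((a.component_id, b.component_id,
                              {"dx": b.x - a.x, "dy": b.y - a.y, "dist": d}))
    return nodes, edges
```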
In one implementation, the feature extraction model may be a visual feature extractor; specifically, the feature extraction model includes a plurality of visual feature extraction units. On this basis, the embodiment of the present invention provides an implementation of performing feature extraction on the garden rendering graph through the feature extraction model to obtain the visual features of the garden rendering graph, where the visual features include component local visual features and a garden global visual feature: the component local visual features characterize each existing garden component in the target virtual garden, and the garden global visual feature characterizes the target virtual garden as a whole. See (I) and (II) below:
and (I) for each visual feature extraction unit, performing feature extraction on the appointed feature map through the visual feature extraction unit to obtain the component local visual features and the garden global visual features output by the visual feature extraction unit. The designated feature map corresponding to the visual feature extraction unit positioned at the head end in the feature extraction model is a garden rendering map, and the designated feature maps corresponding to the rest visual feature extraction units in the feature extraction model are garden global visual features output by the previous visual feature extraction unit. In one embodiment, the first specified visual feature extraction unit in the feature extraction model may comprise a residual network and a feature downsampling subunit, and the second specified visual feature extraction unit in the feature extraction model may comprise a residual network. On the basis, the characteristic extraction is carried out on the specified characteristic diagram through a residual error network in the first specified visual characteristic extraction unit, so that the global visual characteristic of the garden output by the first specified visual characteristic extraction unit is obtained, the global visual characteristic of the garden is cut, and the cutting result passes through the characteristic downsampling subunit in the first specified visual characteristic extraction unit, so that the local visual characteristic of the component output by the first specified visual characteristic extraction unit can be obtained. In addition, the characteristic extraction is carried out on the appointed characteristic diagram through a residual error network in the second appointed visual characteristic extraction unit, so that the global visual characteristic of the garden output by the second appointed visual characteristic extraction unit can be obtained.
And (II), taking the component local visual features output by the first designated visual feature extraction unit in the feature extraction model as the component local visual features of the garden rendering graph, and taking the garden global visual feature output by the second designated visual feature extraction unit in the feature extraction model as the garden global visual feature of the garden rendering graph. In an alternative embodiment, any visual feature extraction unit in the feature extraction model may serve as the first designated visual feature extraction unit, and the visual feature extraction unit located at the end of the feature extraction model may serve as the second designated visual feature extraction unit; a sketch of such an extractor follows.
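The sketch below illustrates one way a residual feature extractor could produce both outputs: the last feature map is pooled into a garden-global feature, and per-component crops of that map are downsampled into component-local features. The block count, channel sizes, crop size and projection dimension are assumptions made for illustration, not values taken from the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        return F.relu(x + self.conv2(F.relu(self.conv1(x))))

class VisualFeatureExtractor(nn.Module):
    def __init__(self, channels=64, num_blocks=5, crop=7, local_dim=256):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, 3, padding=1)
        self.blocks = nn.ModuleList([ResidualBlock(channels) for _ in range(num_blocks)])
        self.crop = crop
        self.local_proj = nn.Linear(channels * crop * crop, local_dim)  # feature downsampling branch

    def forward(self, rendering, boxes):
        # rendering: (1, 3, H, W) top-down garden rendering graph
        # boxes: list of (x0, y0, x1, y1) pixel boxes around each existing component
        feat = self.stem(rendering)
        for block in self.blocks:
            feat = block(feat)
        global_feat = feat.mean(dim=(2, 3))                   # garden global visual feature
        locals_ = []
        for (x0, y0, x1, y1) in boxes:
            patch = feat[:, :, y0:y1, x0:x1]                  # crop around one component
            patch = F.adaptive_avg_pool2d(patch, self.crop)   # downsample the crop
            locals_.append(self.local_proj(patch.flatten(1)))
        local_feat = torch.cat(locals_, dim=0)                # component local visual features
        return local_feat, global_feat
```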
To facilitate an understanding of the foregoing step S106, the step of determining the recommended positions of the newly added garden components in the target virtual garden based on the garden relationship diagram and the visual features through the diagram generating model may be performed as follows in steps 1 to 3:
and step 1, obtaining a component relation distribution model corresponding to the target virtual fazenda based on the fazenda relation graph, the component local visual features and the fazenda global visual features through the graph generation model. The component relation distribution model is used for describing the placement rules among the existing garden components, such as four-sided symmetry rules, triangular symmetry rules, chained placement rules and the like, which are not limited in the embodiment of the invention. In one embodiment, the adjacency matrix, the component attribute feature and the component relationship attribute feature can be obtained based on the garden relationship graph, the adjacency matrix, the component attribute feature, the component relationship attribute feature, the component local visual feature and the garden global visual feature are used as inputs of a graph generation model, and a visual perception message transmission module in the graph generation model is utilized to optimize a message transmission process, so that a component relationship distribution model which is closer to the placement rules among all existing garden components in the target virtual garden is obtained.
And step 2, generating a position recommendation heat map based on the component relationship distribution model. In one embodiment, a location map generation unit may be used to generate the position recommendation heat map based on the component relationship distribution model.
And step 3, determining the recommended position of the newly added garden component in the target virtual garden according to the heat probabilities represented in the position recommendation heat map. In one embodiment, the coordinate positions corresponding to several heat probabilities may be selected as recommended positions of the newly added garden component in descending order of heat probability, or the coordinate position with the highest heat probability in the position recommendation heat map may be taken as the recommended position of the newly added garden component; a small selection sketch follows these steps.
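For step 3, selecting positions from the heat map amounts to a top-k lookup. The sketch below assumes the heat map is a 2-D tensor of probabilities over grid cells; the value of k is an assumption.

```python
import torch

def recommend_positions(heat_map: torch.Tensor, k: int = 3):
    """Return the k grid cells with the highest placement probability as
    (row, column, probability) tuples."""
    probs, idx = heat_map.flatten().topk(k)
    _, w = heat_map.shape
    return [(int(i) // w, int(i) % w, float(p)) for i, p in zip(idx, probs)]
```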
Considering that the message passing effect of graph generation models in the prior art is poor, the embodiment of the present invention provides a new graph generation model, which further comprises an adjacency matrix encoder, a plurality of visual perception message passing modules based on the multi-head attention mechanism, and a relationship prediction unit. The input of the adjacency matrix encoder is the adjacency matrix, and its output is the component initial features; the input of a visual perception message passing module includes one or more of the component local visual features, the component attribute features, the component relationship attribute features, and the component initial features or the feature representation of the previous iteration, and its output is the feature representation of the current iteration; the input of the relationship prediction unit is the feature representations, and its output is the component relationship distribution model.
On this basis, the embodiment of the present invention provides an implementation of obtaining, through the graph generation model, the component relationship distribution model corresponding to the target virtual garden based on the garden relationship graph, the component local visual features and the garden global visual feature, described in steps 1.1 to 1.4 below:
and 1.1, acquiring an adjacency matrix, component attribute characteristics and component relationship attribute characteristics based on the garden relationship diagram.
And step 1.2, encoding the adjacency matrix through the adjacency matrix encoder to obtain the component initial features of each existing garden component. The component initial features can be understood as the encoded adjacency matrix.
And step 1.3, performing iterative message passing through the visual perception message passing modules based on the component local visual features, the component attribute features, the component relationship attribute features and the component initial features to obtain the feature representation of each existing garden component in the garden relationship graph. In one embodiment, when the visual perception message passing module performs the first iteration of message passing, its inputs are the component local visual features, the component attribute features, the component relationship attribute features and the component initial features; when it performs the N-th iteration of message passing, its inputs are the component local visual features, the component attribute features, the component relationship attribute features and the feature representation of the previous iteration.
And step 1.4, obtaining, through the relationship prediction unit, the component relationship distribution model corresponding to the target virtual garden based on the feature representation of each existing garden component. In one embodiment, the relationship prediction unit includes hybrid multi-category subunits, which simulate the spatial relationship between each existing garden component and the newly added garden component based on the feature representation of each existing garden component, yielding the component relationship distribution model corresponding to the target virtual garden. In practical application, the embodiment of the present invention can learn a relationship distribution model with the relationship prediction unit and, based on this distribution model, predict the edges between the garden relationship graph of the target virtual garden and the newly added garden component. When learning the edges from the garden relationship graph of the target virtual garden to the newly added garden component, the message feature vector $m_{i,|V|+1}$ from each existing garden component to the newly added garden component is first obtained from the feature representations, where $h_i^{(R)}$ denotes the feature representation of the i-th existing garden component after the R-th update and $h_{|V|+1}^{(R)}$ denotes the feature representation of the (|V|+1)-th component after the R-th update; in this embodiment the target virtual garden is assumed to contain $|V|$ existing garden components in total, so the (|V|+1)-th component is the newly added garden component. The relationship distribution between the garden components is then simulated by the hybrid multi-category subunits, giving the component relationship distribution model

$$p\big(e_{i,|V|+1}\big)=\sum_{k=1}^{s} a_{k}\, p_{\theta_{k}}\big(e_{i,|V|+1}\mid m_{i,|V|+1}\big),$$

where s is the number of multi-category subunits, $a_k$ is the mixing coefficient of the k-th multi-category subunit, and $\theta_k$ parameterizes the component relationship distribution learned by the k-th multi-category subunit. A sketch of such a mixture head is given below.
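The following sketch reads the hybrid multi-category subunit as a mixture of s categorical distributions over a fixed set of spatial-relation classes. The layer shapes, the number of relation classes and the number of mixture components are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureRelationHead(nn.Module):
    def __init__(self, feat_dim=256, num_relations=16, num_mixtures=4):
        super().__init__()
        self.mixing = nn.Linear(feat_dim, num_mixtures)          # produces mixing coefficients a_k
        self.components = nn.ModuleList(
            [nn.Linear(feat_dim, num_relations) for _ in range(num_mixtures)])  # theta_k

    def forward(self, msg):
        # msg: (|V|, feat_dim) message features from each existing component
        #      to the candidate newly added component
        a = F.softmax(self.mixing(msg), dim=-1)                  # (|V|, s)
        comps = torch.stack(
            [F.softmax(head(msg), dim=-1) for head in self.components], dim=1)  # (|V|, s, R)
        return (a.unsqueeze(-1) * comps).sum(dim=1)              # mixture p(e_i | m_i)
```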
To facilitate an understanding of the foregoing step 1.3, embodiments of the present invention provide an implementation of a visual perception messaging module to obtain a characterization of each of the existing garden components in a garden relationship diagram, see steps 1.3.1 through 1.3.3 as follows:
and step 1.3.1, if the current iteration number is 1, carrying out iterative message transmission by a visual perception message transmission module based on the component local visual characteristics, the component attribute characteristics, the component relation attribute characteristics and the component initial characteristics to obtain the characteristic representation of the current iteration. In one embodiment, referring to a schematic structural diagram of a visual perception message passing module shown in fig. 2, the visual perception message passing module provided by the embodiment of the present invention may include a visual conversion unit f vis (-), feature fusion unit f cts (-), multi-head attention unitMulti-head attention unit->Comprising a message feature converting subunit->Attention parameter calculation subunit->And feature vector update subunit The number of the message feature conversion subunits is multiple, and parameters of each message feature conversion subunit are different. On the basis of fig. 2, when the visual perception message transmission module carries out first iteration message transmission, the local visual features of the component can be converted into visual feature vectors through the visual conversion unit, then the visual feature vectors and the initial features of the component are subjected to feature fusion through the feature fusion unit to obtain fusion feature vectors, and then the multi-head attention unit obtains feature representation of the first iteration message transmission based on the fusion feature vectors, component attribute features and component relation attribute features.
And step 1.3.2, if the current iteration number is not 1, judging whether the current iteration number meets the preset iteration number. In one embodiment, the preset iteration number R may be preset, and if the current iteration number is smaller than the preset iteration number, the iteration message passing is continued through the visual perception message passing module until the current iteration number is equal to the preset iteration number.
And step 1.3.3, if the current iteration number does not meet the preset iteration number, carrying out iterative message transmission by a visual perception message transmission module based on the component local visual characteristics, the component attribute characteristics, the component relation attribute characteristics and the characteristic representation of the previous iteration corresponding to the current iteration to obtain the characteristic representation of the current iteration until the current iteration number meets the preset iteration number. In specific implementation, see steps 1.3.3.1 to 1.3.3.3 below:
Step 1.3.3.1, converting the component local visual features into visual feature vectors through the visual conversion unit. In one embodiment, the visual conversion unit $f_{vis}(\cdot)$ converts the component local visual feature of the i-th existing garden component into a 1024-dimensional visual feature vector $v_i$.
Step 1.3.3.2, performing feature fusion on the visual feature vector and the feature representation of the previous iteration through the feature fusion unit to obtain a fused feature vector. In one embodiment, the visual feature vector $v_i$ and the feature representation $h_i^{(r-1)}$ of the previous iteration may be concatenated and then transformed by the feature fusion unit:

$$c_i = f_{cts}\big([v_i; h_i^{(r-1)}];\, W_{cts}\big),$$

where the feature fusion unit $f_{cts}(\cdot)$ may adopt a two-layer fully connected network structure, $W_{cts}$ denotes the network parameters of the feature fusion unit $f_{cts}$, and $c_i$ is the fused feature vector.
In step 1.3.3.3, the feature representation of the current iteration is obtained through the multi-head attention unit based on the fused feature vector, the component attribute features and the component relationship attribute features. In the iterative message passing process, messages are passed over the nodes and edges of the garden relationship graph (a node represents an existing garden component, and an edge represents the connection relationship between two existing garden components) based on the multi-head attention mechanism. In one embodiment, the step of obtaining the feature representation of the current iteration through the multi-head attention unit based on the fused feature vector, the component attribute features and the component relationship attribute features may be performed as steps 1.3.3.3.1 to 1.3.3.3.3 below:
Step 1.3.3.3.1, for each message feature conversion subunit, obtaining, through that message feature conversion subunit, the message feature vector it outputs based on the fused feature vector, the component attribute features and the component relationship attribute features. In one embodiment, the k-th message feature conversion subunit corresponds to the k-th attention head, i.e. the messages flowing along the edges of the garden relationship graph are delivered by the k-th attention mechanism. Under this attention head, the message carried by an edge is first assembled in spliced (concatenated) form from the fused feature vector, the component attribute features and the component relationship attribute features, and is then converted by the k-th message feature conversion subunit into the message feature vector $m^{k}$ output by that subunit.
Step 1.3.3.3.2, calculating, through the attention parameter calculation subunit, an attention weight for the message feature vector output by each message feature conversion subunit. In one embodiment, the attention parameter calculation subunit, using its network parameters, computes for each message feature vector $m^{k}$ a corresponding attention weight $\alpha^{k}$.
Step 1.3.3.3.3, obtaining, through the feature vector update subunit, the feature representation of the current iteration based on the weighted sum formed with the attention weights. In one embodiment, the weighted sum $\sum_{k}\alpha^{k} m^{k}$ of the message feature vectors is calculated, and the feature vector update subunit, using its own network parameters, then updates the feature representation with this weighted sum to obtain the feature representation of the current iteration. A sketch of one such iteration is given below.
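To make the message-passing loop concrete, here is a minimal sketch of one visually aware iteration with multi-head attention. All dimensions, the concatenation order, the use of a GRU cell as the feature vector update subunit, and the dictionary-based edge attributes are assumptions made for illustration and are not taken from the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisualMessagePassing(nn.Module):
    def __init__(self, node_dim=256, vis_dim=1024, edge_dim=32, heads=4):
        super().__init__()
        self.f_vis = nn.Linear(vis_dim, node_dim)                    # visual conversion unit
        self.f_cts = nn.Sequential(                                  # feature fusion unit (two FC layers)
            nn.Linear(node_dim * 2, node_dim), nn.ReLU(),
            nn.Linear(node_dim, node_dim))
        self.msg_heads = nn.ModuleList(                              # message feature conversion subunits
            [nn.Linear(node_dim * 2 + edge_dim, node_dim) for _ in range(heads)])
        self.attn = nn.Linear(node_dim, 1)                           # attention parameter calculation subunit
        self.update = nn.GRUCell(node_dim, node_dim)                 # feature vector update subunit

    def forward(self, h, local_vis, node_attr, edge_attr, edges):
        # h: (N, node_dim) feature representations from the previous iteration (or initial features)
        # local_vis: (N, vis_dim) component local visual features
        # node_attr: (N, node_dim) component attribute features (assumed already embedded)
        # edge_attr: dict {(i, j): tensor of shape (edge_dim,)} component relationship attribute features
        # edges: list of (i, j) index pairs taken from the garden relationship graph
        v = self.f_vis(local_vis)
        c = self.f_cts(torch.cat([v, h], dim=-1))                    # fused feature vectors
        agg = torch.zeros_like(h)
        for i, j in edges:
            msgs = torch.stack([head(torch.cat([c[i], node_attr[j], edge_attr[(i, j)]]))
                                for head in self.msg_heads])         # (heads, node_dim)
            w = F.softmax(self.attn(msgs).squeeze(-1), dim=0)        # attention weights over heads
            agg[j] = agg[j] + (w.unsqueeze(-1) * msgs).sum(dim=0)    # weighted message sum
        return self.update(agg, h)                                   # feature representation of this iteration
```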
In addition, the embodiment of the invention also provides a training step of the feature extraction model and the graph generation model, which is shown in the following (1) to (4):
(1) Calculating a component relationship loss value based on a preset component relationship loss function and the component relationship distribution model. In one embodiment, the component relationship loss value is obtained by maximizing the log posterior probability of the hybrid multi-category subunits, i.e. the component relationship loss function is the negative log-likelihood of the component relationship distribution model:

$$L_{rel} = -\sum_{i}\log p\big(e_{i,|V|+1}\big),$$

where $p(e_{i,|V|+1})$ denotes the component relationship distribution model.
(2) Calculating a global matching loss value based on a preset global matching loss function, the component local visual features and the feature representations of each existing garden component in the target virtual garden. In one embodiment, in order to enable the graph generation model to better perceive the garden global visual feature of the target virtual garden, the embodiment of the present invention proposes a global matching loss function. In specific implementation, for each pair consisting of a garden relationship graph $G_z$ and a garden rendering $I_z$, a global feature pair (comprising a first global feature $g_z^{G}$ and a second global feature $g_z^{I}$) may be obtained based on the component local visual features and the feature representations of each existing garden component in the target virtual garden; for example, such a global feature pair may be obtained by averaging the component local visual features and the feature representations of all existing garden components. The cosine distance between the first global feature and the second global feature is then calculated, and the probability of matching between the garden rendering graph and the garden relationship graph is computed from the cosine distance as

$$p_z = \frac{\exp\big(\gamma\cos(g_z^{G}, g_z^{I})\big)}{\sum_{b=1}^{Z}\exp\big(\gamma\cos(g_z^{G}, g_b^{I})\big)},$$

where $\gamma$ is a hyperparameter, Z denotes the number of virtual gardens included in the current loss calculation, and b indexes the b-th virtual garden in the current loss calculation. Substituting the matching probability into the preset global matching loss function yields the global matching loss value, the global matching loss function being

$$L_{gm} = -\frac{1}{Z}\sum_{z=1}^{Z}\log p_z.$$
(3) Calculating a total loss value from the component relationship loss value and the global matching loss value. In one embodiment, a weight value $\lambda(r, R)$ for the component relationship loss value and/or the global matching loss value may be calculated based on the current iteration number, where R denotes the preset iteration number and r denotes the r-th iteration of message passing; the component relationship loss value and the global matching loss value are then weighted and summed according to this weight value to obtain the total loss value. Assuming that the calculated weight value is the weight corresponding to the global matching loss value, the total loss value is computed as

$$L = L_{rel} + \lambda(r, R)\, L_{gm}.$$

A sketch of the two losses and their combination is given after this list.
(4) And updating the parameters of the feature extraction model and the parameters of the graph generation model respectively by using the total loss value.
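As a concrete reading of steps (1) to (3), the sketch below combines a negative-log-likelihood relation loss with a batch-wise matching loss. The value of gamma, the r/R weighting schedule, the batch layout and all tensor shapes are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def relation_loss(pred_dist, target_relations):
    # pred_dist: (E, R) mixture probabilities over relation classes for each predicted edge
    # target_relations: (E,) ground-truth relation class indices
    return F.nll_loss(torch.log(pred_dist + 1e-8), target_relations)

def global_matching_loss(graph_globals, image_globals, gamma=10.0):
    # graph_globals: (Z, D) averaged feature representations per virtual garden
    # image_globals: (Z, D) averaged component local visual features per virtual garden
    sim = F.cosine_similarity(graph_globals.unsqueeze(1), image_globals.unsqueeze(0), dim=-1)
    log_p = F.log_softmax(gamma * sim, dim=-1)        # match each relationship graph to its rendering
    return -log_p.diagonal().mean()

def total_loss(rel_loss, match_loss, r, R):
    lam = r / R                                       # assumed iteration-dependent weight
    return rel_loss + lam * match_loss
```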
For ease of understanding, referring to the frame diagram of a method for recommending a position of a garden component shown in fig. 3, a garden image (i.e. the garden information described above) of the target virtual garden is first obtained; the garden image is rendered by the renderer to obtain the garden rendering graph, and the relationship graph extractor is used to extract the spatial relationships in the garden image to obtain the garden relationship graph. The visual feature extractor is used to extract the component local visual features and the garden global visual feature of the garden rendering graph, where the visual feature extractor comprises 5 residual networks and 4 feature downsampling subunits (these numbers are examples and can be set according to actual requirements in practical applications). The adjacency matrix, node attribute features (i.e. the component attribute features) and edge attribute features (i.e. the component relationship attribute features) are obtained based on the garden relationship graph; the adjacency matrix is encoded by the adjacency matrix encoder to obtain the component initial features; iterative message passing is performed by each visual perception message passing module based on the component local visual features, the node attribute features, the edge attribute features and the component initial features to obtain the feature representations of all existing garden components in the garden relationship graph; the feature representations and the garden global visual feature are added element-wise, and an edge distribution model (i.e. the component relationship distribution model) is obtained by the edge prediction unit (i.e. the relationship prediction unit) based on the element-wise addition result, so that the location map generation unit generates a position recommendation heat map of the newly added component based on the edge distribution model.
Based on fig. 3 above, an embodiment of the present invention provides another method for recommending a position of a garden component; referring to the flowchart of another method for recommending a position of a garden component shown in fig. 4, the method mainly includes the following steps S402 to S428:
step S402, loading the current scene (i.e. the target virtual fazenda). Referring to fig. 5a, a schematic diagram of a current scenario is shown, where the current scenario includes a set of existing garden components in the target virtual garden and a forbidden placement area identifier, and a left-diagonal coverage area in fig. 5a identifies the existing garden components placed in the current scenario.
Step S404, a garden rendering is acquired based on the renderer.
Step S406, obtaining a garden relationship graph through the relationship graph extractor. Referring to the schematic illustration of a garden rendering and a garden relationship diagram shown in fig. 5b, the interconnections between the existing garden components characterize the relationships between them; fig. 5b is merely illustrative of these interconnections.
In step S408, the node feature vectors (i.e. the component initial features described above) of the garden relationship graph are initialized.
In step S410, it is determined whether the node feature vectors in the garden relationship graph need to be iteratively updated. If yes, step S412 is performed; if not, step S420 is performed.
In step S412, a visual feature map (i.e., the above-mentioned global visual feature of the garden) of the current scene is extracted by the visual feature extractor.
In step S414, the visual features of all the existing nodes (i.e., the above-mentioned existing garden components) in the current scene are clipped and converted, so as to obtain the visual feature map (i.e., the above-mentioned component local visual features) of the existing nodes.
In step S416, a weighted message vector is calculated based on the multi-headed attention mechanism.
In step S418, the node feature vectors in the garden relationship graph are updated.
In step S420, a message from the current scene to the new node (i.e. the above-mentioned new garden component) is calculated based on the node feature vector in the garden relationship diagram, and the distribution of the edges is calculated based on the message. An edge may be understood as a spatial communication relationship between a new node and each existing node.
Step S422, determine whether it is a training process. If yes, go to step S424; if not, step S426 is performed.
In step S424, the edge distribution loss function (i.e., the component relation loss function) and the global matching loss function are calculated, and the gradient is calculated, so as to update the parameters of the visual feature extractor and the graph generation model, respectively.
Step S426, sampling an edge set from the current scene to the new node based on the edge distribution model. Referring to the schematic diagram of an edge set shown in fig. 5c, the connecting lines between the new node and each existing garden component in fig. 5c form the edge set.
Step S428, deducing a position recommendation heat map for the new node based on the edge set. Referring to the schematic diagram of a position recommendation heat map shown in fig. 5d, the heat probability represented by the right-diagonal hatched area in fig. 5d is the highest, the heat probability represented by the cross-hatched area is the next highest, the heat probability represented by the vertically hatched area is lower, the heat probability represented by the horizontally hatched area is the lowest, and the heat probabilities sum to 1.
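The sketch below shows one assumed way to deduce such a heat map from a sampled edge set, using the hypothetical scene structure from the earlier sketch: each sampled relation votes with a Gaussian kernel around its existing component, the forbidden area is zeroed out, and the map is normalized so the probabilities sum to 1 (the kernel choice is not taken from the patent):

import numpy as np

def edges_to_heatmap(scene, edge_types, sigma=3.0):
    H, W = scene.grid_shape
    ys, xs = np.mgrid[0:H, 0:W]
    heat = np.zeros((H, W), dtype=np.float64)
    for comp, rel in zip(scene.components, edge_types):
        if rel == 0:                       # 0 = "no edge" to the new component in this sketch
            continue
        cx, cy = comp.position             # (x, y) centre of the related existing component
        heat += np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    heat[scene.forbidden_mask] = 0.0       # never recommend the forbidden placement area
    total = heat.sum()
    return heat / total if total > 0 else np.full((H, W), 1.0 / (H * W))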
In summary, the embodiment of the present invention introduces a visual perception message passing module based on a multi-head attention mechanism and a global matching loss function. The inputs of the visual perception message passing module based on the multi-head attention mechanism are the feature representation obtained in the previous iteration, the edge attribute features and the component local visual features, and its output is the feature representation of the current iteration. The inputs of the global matching loss function are the feature representations output by the graph generation model and the component local visual features of each existing garden component, and its output is the global matching loss value. By optimizing the message passing mechanism in the graph generation model, the embodiment of the present invention enables the graph generation model to better represent the features contained in the target virtual fazenda, thereby improving the position recommendation effect. In addition, the embodiment of the present invention adds a matching evaluation between the garden relation diagram and the garden rendering diagram, so that during training the output of the graph generation network is optimized to better incorporate visual features, which further improves the position recommendation effect.
For the method for recommending a position of a garden component provided in the foregoing embodiment, the embodiment of the present invention further provides a device for recommending a position of a garden component, referring to a schematic structural diagram of a device for recommending a position of a garden component shown in fig. 6, the device may include the following parts:
the diagram obtaining module 602 is configured to obtain a garden rendering diagram and a garden relationship diagram corresponding to the target virtual garden; wherein at least one existing garden component is placed within the target virtual garden, a garden rendering map describing the visual state of the target virtual garden, and a garden relationship map describing the spatial relationship of each existing garden component.
The feature extraction module 604 is configured to perform feature extraction on the garden rendering graph through the feature extraction model, so as to obtain visual features of the garden rendering graph.
A location recommendation module 606 for determining a recommended location of the newly added garden component in the target virtual garden based on the garden relationship diagram and the visual characteristics through the diagram generation model; wherein the graph generation model includes a visual perception messaging module based on a multi-headed attentiveness mechanism.
According to the recommendation device for the position of the fazenda component, the graph generation model comprises a visual perception message passing module based on a multi-head attention mechanism, and the message passing mechanism in the graph generation model is optimized through this module, so that the graph generation model can better utilize the features contained in the fazenda relation graph. On this basis, position recommendation is performed according to the fazenda relation graph and the visual features, which can effectively improve the rationality of the recommended position of the newly added fazenda component.
In one embodiment, the feature extraction model includes a plurality of visual feature extraction units; the visual features include component local visual features and garden global visual features; the feature extraction module 604 is further configured to: for each visual feature extraction unit, perform feature extraction on the designated feature map through the visual feature extraction unit to obtain the component local visual feature and the garden global visual feature output by that visual feature extraction unit; take the component local visual features output by the first designated visual feature extraction unit in the feature extraction model as the component local visual features of the garden rendering graph; and take the garden global visual feature output by the second designated visual feature extraction unit in the feature extraction model as the garden global visual feature of the garden rendering graph.
In one embodiment, the location recommendation module 606 is further configured to: obtaining a component relation distribution model corresponding to the target virtual fazenda based on the fazenda relation graph, the component local visual features and the fazenda global visual features through the graph generation model; the component relation distribution model is used for describing the placement rules among the existing garden components; generating a position recommendation heat map based on the component relation distribution model; and determining the recommended positions of the newly added garden components in the target virtual garden according to the heat probabilities represented in the position recommended heat map.
In one embodiment, the graph generation model further comprises an adjacency matrix encoder and a relationship prediction unit, and the number of visual perception message passing modules based on the multi-head attention mechanism is a plurality; the location recommendation module 606 is further configured to: acquire an adjacency matrix, component attribute features and component relationship attribute features based on the manor relationship graph; encode the adjacency matrix through the adjacency matrix encoder to obtain the initial features of each existing garden component; perform iterative message passing through the visual perception message passing modules based on the component local visual features, the component attribute features, the component relationship attribute features and the component initial features, so as to obtain feature representations of all the existing fazenda components in the fazenda relation diagram; and obtain, through the relationship prediction unit, a component relation distribution model corresponding to the target virtual fazenda based on the feature representation of each existing fazenda component.
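As an illustration of the adjacency matrix encoder, the sketch below pads each node's adjacency row to a fixed maximum node count and passes it through a small MLP to produce the component initial features; the maximum node count and the MLP shape are assumptions:

import torch
import torch.nn as nn

class AdjacencyEncoder(nn.Module):
    def __init__(self, max_nodes=64, dim=128):
        super().__init__()
        self.max_nodes = max_nodes
        self.mlp = nn.Sequential(nn.Linear(max_nodes, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, adjacency):                       # (N, N) adjacency matrix with N <= max_nodes
        n = adjacency.shape[0]
        rows = torch.zeros(n, self.max_nodes)
        rows[:, :n] = adjacency                         # each node is encoded from its padded adjacency row
        return self.mlp(rows)                           # (N, dim) component initial features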
In one embodiment, the location recommendation module 606 is further configured to: if the current iteration number is 1, perform iterative message passing through the visual perception message passing module based on the component local visual features, the component attribute features, the component relationship attribute features and the component initial features to obtain the feature representation of the current iteration; if the current iteration number is not 1, judge whether the current iteration number meets the preset iteration number; and if the current iteration number does not meet the preset iteration number, perform iterative message passing through the visual perception message passing module based on the component local visual features, the component attribute features, the component relationship attribute features and the feature representation of the previous iteration corresponding to the current iteration to obtain the feature representation of the current iteration, until the current iteration number meets the preset iteration number.
In one embodiment, each visual perception message passing module comprises a visual conversion unit, a characteristic fusion unit and a multi-head attention unit; the location recommendation module 606 is also configured to: converting the local visual features of the component into visual feature vectors by a visual conversion unit; the feature fusion unit is used for carrying out feature fusion on the visual feature vector and the feature representation of the previous iteration corresponding to the current iteration to obtain a fusion feature vector; and obtaining the characteristic representation of the current iteration through the multi-head attention unit based on the fusion characteristic vector, the component attribute characteristic and the component relation attribute characteristic.
In one embodiment, the multi-head attention unit comprises a message feature conversion subunit, an attention parameter calculation subunit and a feature vector updating subunit, wherein the number of the message feature conversion subunits is multiple, and parameters of each message feature conversion subunit are different; the location recommendation module 606 is also configured to: aiming at each message feature conversion subunit, obtaining a message feature vector output by the message feature conversion subunit through the message feature conversion subunit based on the fusion feature vector, the component attribute feature and the component relation attribute feature; respectively calculating attention weights for the message feature vectors output by each message feature conversion subunit through an attention parameter calculation subunit; and obtaining the characteristic representation of the current iteration through the characteristic vector updating subunit based on the weighted sum of the attention weights.
In one embodiment, the relationship prediction unit includes a hybrid multi-category subunit; the location recommendation module 606 is also configured to: and simulating the spatial relationship between each existing fazenda component and the newly added fazenda component by mixing the multi-category subunits based on the characteristic representation of each existing fazenda component, and obtaining a component relationship distribution model corresponding to the target virtual fazenda.
In one embodiment, the apparatus further includes a training module configured to: calculating a component relation loss value based on a preset component relation loss function and a component relation distribution model; calculating a global matching loss value based on a preset global matching loss function and component local visual characteristics and characteristic representations of each existing fazenda component in the target virtual fazenda; calculating a total loss value according to the component relation loss value and the global matching loss value; and updating the parameters of the feature extraction model and the parameters of the graph generation model respectively by using the total loss value.
In one embodiment, the training module is further configured to: obtain a global feature pair based on the component local visual features and the feature representations of each existing garden component in the target virtual garden, wherein the global feature pair comprises a first global feature and a second global feature; calculate a cosine distance between the first global feature and the second global feature; calculate a matching probability between the garden rendering graph and the garden relationship graph based on the cosine distance; and substitute the matching probability into a preset global matching loss function to obtain a global matching loss value.
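The sketch below shows the global matching loss in isolation under stated assumptions: one global feature is pooled (here by mean, an assumption) from the graph-side feature representations, the other from the component local visual features, their cosine similarity is mapped to a matching probability, and the probability enters a binary-cross-entropy-style loss:

import torch
import torch.nn.functional as F

def global_matching_loss(feature_reprs, local_visual_feats, is_match=True):
    graph_global = feature_reprs.mean(dim=0)            # first global feature, from the graph generation model
    visual_global = local_visual_feats.mean(dim=0)      # second global feature, from the component local visual features
    cos = torch.cosine_similarity(graph_global, visual_global, dim=0)
    match_prob = (cos + 1.0) / 2.0                      # map cosine similarity in [-1, 1] to a probability in [0, 1]
    target = torch.tensor(1.0 if is_match else 0.0)
    return F.binary_cross_entropy(match_prob.clamp(1e-6, 1 - 1e-6), target)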
In one embodiment, the training module is further configured to: calculate a weight value for the component relation loss value and/or the global matching loss value based on the current iteration number; and perform a weighted summation of the component relation loss value and the global matching loss value according to the weight value to obtain a total loss value.
In one embodiment, the graph acquisition module 602 is further configured to: obtain the fazenda information of the target virtual fazenda, wherein the fazenda information includes area description information of the target virtual fazenda and component description information of the existing fazenda components; render the target virtual fazenda based on the fazenda information to obtain a fazenda rendering diagram corresponding to the target virtual fazenda; and extract the spatial relationships among all the existing fazenda components in the target virtual fazenda based on the fazenda information to obtain a fazenda relation diagram corresponding to the target virtual fazenda.
The device provided by the embodiment of the present invention has the same implementation principle and technical effects as the foregoing method embodiment; for the sake of brevity, where the device embodiment does not mention a detail, reference may be made to the corresponding content in the foregoing method embodiment.
The embodiment of the invention provides a server, which specifically comprises a processor and a storage device; the storage means has stored thereon a computer program which, when executed by the processor, performs the method of any of the embodiments described above.
Fig. 7 is a schematic structural diagram of a server according to an embodiment of the present invention, where the server 100 includes: a processor 70, a memory 71, a bus 72 and a communication interface 73, said processor 70, communication interface 73 and memory 71 being connected by bus 72; the processor 70 is arranged to execute executable modules, such as computer programs, stored in the memory 71.
The memory 71 may include a high-speed random access memory (RAM, Random Access Memory), and may further include a non-volatile memory (non-volatile memory), such as at least one magnetic disk memory. The communication connection between the system network element and at least one other network element is achieved via at least one communication interface 73 (which may be wired or wireless), which may use the internet, a wide area network, a local area network, a metropolitan area network, etc.
The bus 72 may be an ISA bus, a PCI bus, an EISA bus, or the like. The bus may be classified as an address bus, a data bus, a control bus, etc. For ease of illustration, only one bi-directional arrow is shown in fig. 7, but this does not mean that there is only one bus or only one type of bus.
The memory 71 is configured to store a program, and the processor 70 executes the program after receiving an execution instruction. The method disclosed in any of the foregoing embodiments of the present invention may be applied to the processor 70 or implemented by the processor 70.
The processor 70 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits in hardware or by software instructions in the processor 70. The processor 70 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; it may also be a digital signal processor (Digital Signal Processing, DSP for short), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC for short), a field-programmable gate array (Field-Programmable Gate Array, FPGA for short), or another programmable logic device, discrete gate or transistor logic device, or discrete hardware component. The methods, steps and logical blocks disclosed in the embodiments of the present invention may be implemented or performed by the processor 70. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be embodied as being executed directly by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor. The software modules may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory or an electrically erasable programmable memory, or a register. The storage medium is located in the memory 71; the processor 70 reads the information in the memory 71 and, in combination with its hardware, performs the steps of the above method.
The computer program product of the readable storage medium provided by the embodiment of the present invention includes a computer-readable storage medium storing program code, where the program code includes instructions for executing the method described in the foregoing method embodiment; for the specific implementation, reference may be made to the foregoing method embodiment, which will not be repeated here.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, which includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disk.
Finally, it should be noted that the above examples are only specific embodiments of the present invention and are not intended to limit its protection scope. Although the present invention has been described in detail with reference to the foregoing examples, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, while remaining within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention and are intended to be included within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (13)

1. A method of recommending a location for a garden component, comprising:
obtaining a fazenda rendering diagram and a fazenda relation diagram corresponding to the target virtual fazenda; wherein at least one existing fazenda component is placed in the target virtual fazenda, the fazenda rendering diagram is used for describing the visual state of the target virtual fazenda, and the fazenda relation diagram is used for describing the spatial relation of each existing fazenda component;
Performing feature extraction on the garden rendering graph through a feature extraction model to obtain visual features of the garden rendering graph;
determining, by a graph generation model, a recommended location of a newly added garden component in the target virtual garden based on the garden relationship graph and the visual feature; wherein the graph generation model comprises a visual perception messaging module based on a multi-head attention mechanism;
the feature extraction model comprises a plurality of visual feature extraction units; the visual features include component local visual features and garden global visual features;
the step of extracting features of the garden rendering graph through the feature extraction model to obtain visual features of the garden rendering graph comprises the following steps:
for each visual feature extraction unit, performing feature extraction on the designated feature map through the visual feature extraction unit to obtain the component local visual feature and the garden global visual feature output by the visual feature extraction unit;
taking the component local visual features output by the first designated visual feature extraction unit in the feature extraction model as the component local visual features of the garden rendering graph; and taking the fazenda global visual feature output by the second designated visual feature extraction unit in the feature extraction model as the fazenda global visual feature of the fazenda rendering graph;
The step of determining a recommended location of a newly added garden component in the target virtual garden based on the garden relationship diagram and the visual features through a diagram generation model, comprising:
obtaining a component relation distribution model corresponding to the target virtual fazenda based on the fazenda relation graph, the component local visual features and the fazenda global visual features through a graph generation model; the component relation distribution model is used for describing the placement rules among the existing garden components;
generating a position recommendation heat map based on the component relation distribution model;
and determining the recommended position of the newly added garden component in the target virtual garden according to each heat probability represented in the position recommended heat map.
2. The method of claim 1, wherein the graph generation model further comprises an adjacency matrix encoder and a relationship prediction unit, the number of visual perception messaging modules based on a multi-headed attention mechanism being a plurality;
the step of obtaining the component relation distribution model corresponding to the target virtual fazenda based on the fazenda relation graph, the component local visual features and the fazenda global visual features through the graph generation model comprises the following steps:
Acquiring an adjacency matrix, component attribute features and component relationship attribute features based on the manor relationship graph;
the adjacent matrix is encoded by the adjacent matrix encoder, so that the initial component characteristics of each existing garden component are obtained;
iterative message transmission is carried out by the visual perception message transmission module based on the component local visual characteristics, the component attribute characteristics, the component relation attribute characteristics and the component initial characteristics, so that the characteristic representation of each existing fazenda component in the fazenda relation diagram is obtained;
and obtaining a component relation distribution model corresponding to the target virtual fazenda based on the characteristic representation of each existing fazenda component through the relation prediction unit.
3. The method of claim 2, wherein the step of performing iterative message transmission by the visual perception messaging module based on the component local visual features, the component attribute features, the component relationship attribute features, and the component initial features to obtain a feature representation of each of the existing fazenda components in the fazenda relationship graph comprises:
if the current iteration number is 1, carrying out iterative message transfer by the visual perception message transfer module based on the component local visual characteristics, the component attribute characteristics, the component relation attribute characteristics and the component initial characteristics to obtain the characteristic representation of the current iteration;
If the current iteration number is not 1, judging whether the current iteration number meets the preset iteration number or not;
and if the current iteration number does not meet the preset iteration number, performing iterative message transfer by the visual perception message transfer module based on the component local visual feature, the component attribute feature, the component relation attribute feature and the feature representation of the previous iteration corresponding to the current iteration to obtain the feature representation of the current iteration until the current iteration number meets the preset iteration number.
4. A method according to claim 3, wherein each of the visual perception messaging modules comprises a visual transformation unit, a feature fusion unit, a multi-headed attention unit;
the step of performing iterative message transfer by the visual perception message transfer module based on the component local visual feature, the component attribute feature, the component relationship attribute feature and the feature representation of the previous iteration corresponding to the current iteration to obtain the feature representation of the current iteration includes:
converting the component local visual features into visual feature vectors by the visual conversion unit;
Performing feature fusion on the visual feature vector and the feature representation of the previous iteration corresponding to the current iteration through the feature fusion unit to obtain a fusion feature vector;
and obtaining the characteristic representation of the current iteration through the multi-head attention unit based on the fusion characteristic vector, the component attribute characteristic and the component relation attribute characteristic.
5. The method of claim 4, wherein the multi-headed attention unit comprises a message feature conversion subunit, an attention parameter calculation subunit, and a feature vector update subunit, wherein the number of message feature conversion subunits is a plurality, and wherein the parameters of each message feature conversion subunit are different;
the step of obtaining, by the multi-head attention unit, a feature representation of a current iteration based on the fusion feature vector, the component attribute feature and the component relationship attribute feature includes:
aiming at each message feature conversion subunit, obtaining a message feature vector output by the message feature conversion subunit through the message feature conversion subunit based on the fusion feature vector, the component attribute feature and the component relation attribute feature;
Calculating attention weights respectively for the message feature vectors output by the message feature conversion subunits through the attention parameter calculation subunit;
and obtaining the characteristic representation of the current iteration through the characteristic vector updating subunit based on the weighted sum of the attention weights.
6. The method of claim 2, wherein the relationship prediction unit comprises a hybrid multi-category subunit;
the step of obtaining, by the relationship prediction unit, a component relationship distribution model corresponding to the target virtual garden based on the feature representation of each of the existing garden components, includes:
and simulating the spatial relationship between each existing fazenda component and the newly added fazenda component by the mixed multi-category subunit based on the characteristic representation of each existing fazenda component, and obtaining a component relationship distribution model corresponding to the target virtual fazenda.
7. The method of claim 2, wherein the training step of the feature extraction model and the map generation model comprises:
calculating a component relation loss value based on a preset component relation loss function and the component relation distribution model;
Calculating a global matching loss value based on a preset global matching loss function, the component local visual features and the feature representations of each of the existing garden components in the target virtual garden;
calculating a total loss value according to the component relation loss value and the global matching loss value;
and respectively updating the parameters of the feature extraction model and the parameters of the graph generation model by using the total loss value.
8. The method of claim 7, wherein the step of calculating a global matching loss value based on a pre-set global matching loss function, the component local visual characteristics and the characteristic representation of each of the existing garden components in the target virtual garden, comprises:
obtaining a global feature pair based on the component local visual features and the feature representations of each of the existing garden components in the target virtual garden; wherein the global feature pair comprises a first global feature and a second global feature;
calculating a cosine distance between the first global feature and the second global feature;
calculating a probability of matching between the fazenda rendering diagram and the fazenda relation diagram based on the cosine distance;
Substituting the matching probability into a preset global matching loss function to obtain a global matching loss value.
9. The method of claim 8, wherein the step of calculating a total loss value from the component relationship loss value and the global matching loss value comprises:
calculating a weight value of the component relation loss value and/or the global matching loss value based on the current iteration times;
and carrying out weighted summation on the component relation loss value and the global matching loss value according to the weight value to obtain a total loss value.
10. The method of claim 1, wherein the step of obtaining a garden rendering map and a garden relationship map corresponding to the target virtual garden comprises:
obtaining the fazenda information of the target virtual fazenda; wherein the garden information includes area description information of the target virtual garden and component description information of the existing garden components;
rendering the target virtual fazenda based on the fazenda information to obtain a fazenda rendering diagram corresponding to the target virtual fazenda;
and extracting the spatial relation among all the existing fazenda components in the target virtual fazenda based on the fazenda information, and obtaining a fazenda relation diagram corresponding to the target virtual fazenda.
11. A recommendation device for a location of a garden assembly, comprising:
the image acquisition module is used for acquiring a fazenda rendering image and a fazenda relation image corresponding to the target virtual fazenda; wherein at least one existing fazenda component is placed in the target virtual fazenda, the fazenda rendering diagram is used for describing the visual state of the target virtual fazenda, and the fazenda relation diagram is used for describing the spatial relation of each existing fazenda component;
the feature extraction module is used for carrying out feature extraction on the garden rendering graph through a feature extraction model to obtain visual features of the garden rendering graph;
a location recommendation module for determining a recommended location of a newly added garden component in the target virtual garden based on the garden relationship diagram and the visual characteristics through a diagram generation model; wherein the graph generation model comprises a visual perception messaging module based on a multi-head attention mechanism;
the feature extraction model comprises a plurality of visual feature extraction units; the visual features include component local visual features and garden global visual features;
the feature extraction module is further configured to:
for each visual feature extraction unit, performing feature extraction on the designated feature map through the visual feature extraction unit to obtain the component local visual feature and the garden global visual feature output by the visual feature extraction unit;
taking the component local visual features output by the first designated visual feature extraction unit in the feature extraction model as the component local visual features of the garden rendering graph; and taking the fazenda global visual feature output by the second designated visual feature extraction unit in the feature extraction model as the fazenda global visual feature of the fazenda rendering graph;
the location recommendation module is further configured to:
obtaining a component relation distribution model corresponding to the target virtual fazenda based on the fazenda relation graph, the component local visual features and the fazenda global visual features through a graph generation model; the component relation distribution model is used for describing the placement rules among the existing garden components;
generating a position recommendation heat map based on the component relation distribution model;
and determining the recommended position of the newly added garden component in the target virtual garden according to each heat probability represented in the position recommended heat map.
12. A server comprising a processor and a memory;
stored on the memory is a computer program which, when executed by the processor, performs the method of any one of claims 1 to 10.
13. A computer storage medium storing computer software instructions for use with the method of any one of claims 1 to 10.