CN118097083A - Effect graph generation method and device, electronic equipment and storage medium
- Publication number: CN118097083A
- Application number: CN202311458437.5A
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for image manipulation, e.g. dragging, rotation, expansion or change of colour
- G06F3/04847—Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
- G06T15/205—Image-based rendering
- G06T15/506—Illumination models
Abstract
The invention provides an effect graph generation method and apparatus, an electronic device, and a storage medium, and relates to the technical field of image processing. In response to a first selection operation on a model to be displayed and a second selection operation on a first scene graph, a reference graph is displayed on a display interface, the reference graph including a model graph to be displayed corresponding to the model to be displayed and the first scene graph at least partially located on the layer below the model graph to be displayed. In response to a generation operation on the reference graph, at least one effect graph is displayed on the display interface, where an effect graph of the at least one effect graph is generated by an AI drawing model with the reference graph as a reference. In this way, an effect graph of the model to be displayed in a relevant scene can be generated quickly, and the display effect of the model to be displayed is improved.
Description
Technical Field
The disclosure relates to the technical field of image processing, and in particular to an effect graph generation method and apparatus, an electronic device, and a storage medium.
Background
In the related art, in the design field, for example in the design of products such as doors and windows, providing a corresponding design tool to a user can lower the design threshold, so that the user can conveniently and quickly construct a door and window model of the required type and size. However, a bare door and window model has a limited display effect, and for users with little design experience, how to improve the display effect of the door and window model has gradually become a concern.
Disclosure of Invention
The disclosure provides an effect graph generation method and apparatus, an electronic device, and a storage medium, so as to improve the display effect of door and window models and other components.
In a first aspect, there is provided an effect graph generating method, including:
In response to a first selection operation on a model to be displayed and a second selection operation on a first scene graph, displaying a reference graph on a display interface, where the reference graph includes a model graph to be displayed corresponding to the model to be displayed and the first scene graph at least partially located on the layer below the model graph to be displayed;
In response to a generation operation on the reference graph, displaying at least one effect graph on the display interface, where an effect graph of the at least one effect graph is generated by an AI drawing model with the reference graph as a reference.
In a second aspect, there is provided an effect map generating apparatus including:
a first display unit, configured to display a reference graph on a display interface in response to a first selection operation on a model to be displayed and a second selection operation on a first scene graph, where the reference graph includes a model graph to be displayed corresponding to the model to be displayed and the first scene graph at least partially located on the layer below the model graph to be displayed;
and a second display unit, configured to display at least one effect graph on the display interface in response to a generation operation on the reference graph, where an effect graph of the at least one effect graph is generated by an AI drawing model with the reference graph as a reference.
In a third aspect, an electronic device is provided, comprising:
At least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the embodiments of the present disclosure.
In a fourth aspect, a non-transitory computer-readable storage medium storing computer instructions is provided, wherein the computer instructions are for causing the computer to perform a method according to any one of the embodiments of the present disclosure.
According to the technical solution of the disclosure, a reference graph is displayed on a display interface in response to a first selection operation on a model to be displayed and a second selection operation on a first scene graph, the reference graph including a model graph to be displayed corresponding to the model to be displayed and the first scene graph at least partially located on the layer below the model graph to be displayed; and in response to a generation operation on the reference graph, at least one effect graph is displayed on the display interface, where an effect graph of the at least one effect graph is generated by an AI drawing model with the reference graph as a reference. In this way, an effect graph of the model to be displayed in a relevant scene can be generated quickly, and the display effect of the model to be displayed is improved.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
In the drawings, the same reference numerals refer to the same or similar parts or elements throughout the several views unless otherwise specified. The figures are not necessarily drawn to scale. It is appreciated that these drawings depict only some embodiments provided according to the disclosure and are not to be considered limiting of its scope.
FIG. 1 is a block diagram of a system for applying an effect graph generation method provided by an embodiment of the present disclosure;
FIG. 2 is a flow chart of a method for generating an effect graph provided by another embodiment of the present disclosure;
fig. 3A to 3H are operation flowcharts of a display interface of an effect diagram generating method according to an embodiment of the present disclosure;
FIG. 4A is a diagram of a model to be displayed according to another embodiment of the present disclosure;
Fig. 4B is a scene color block diagram corresponding to a first scene graph according to another embodiment of the disclosure;
FIG. 4C is a reference color block diagram provided by another embodiment of the present disclosure;
FIG. 4D is a line drawing of the initial model corresponding to FIG. 4A;
FIGS. 5A-5D are flowcharts illustrating operations of a display interface of an effect diagram generating method according to another embodiment of the present disclosure;
FIG. 6 is a reference line drawing corresponding to the first scene graph of FIG. 5B;
fig. 7A to 7C are flowcharts of scene template uploading provided in an embodiment of the present disclosure;
FIG. 8 is a flowchart of a method for generating an effect graph according to another embodiment of the present disclosure;
FIG. 9 is a schematic block diagram of an effect graph generating apparatus provided by an embodiment of the present disclosure;
fig. 10 is a block diagram of an electronic device used to implement the effect diagram generation method of an embodiment of the present disclosure.
Detailed Description
The present disclosure will be described in further detail below with reference to the accompanying drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
In addition, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements, circuits, etc. well known to those skilled in the art have not been described in detail in order not to obscure the present disclosure.
In order to facilitate understanding of the effect graph generation method provided by the embodiments of the disclosure, the related technology of the embodiments of the disclosure is described below. The following related technology may be optionally combined with the technical solutions of the embodiments of the disclosure, and such combinations all fall within the protection scope of the embodiments of the disclosure.
The disclosed embodiments provide an effect graph display method and apparatus, an electronic device, and a storage medium. Specifically, the effect graph display method of the embodiments of the disclosure may be performed by an electronic device, where the electronic device may be a terminal, a server, or the like. The terminal may be a smart phone, tablet computer, notebook computer, intelligent voice interaction device, smart home appliance, wearable smart device, aircraft, intelligent vehicle-mounted terminal, or other device, and may further include a client, which may be an audio client, a video client, a browser client, an instant messaging client, an applet, or the like. The server may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a content delivery network (CDN), big data, and artificial intelligence platforms.
In the related art, taking the display of a door and window model as an example, after the door and window model is built with a design tool, a user without a design background can only show the model itself and cannot better present the effect of the door and window to other people such as clients.
In addition, to show an effect graph of the door and window to clients or others, a scene containing the door and window, for example an indoor living room scene, has to be constructed first, and the corresponding effect graph is then generated from that scene. This requires the user to have a design background and to be able to fully draw the other components in the scene, such as floors and furniture; the threshold is high and the process is time-consuming, which is not conducive to quickly presenting an effect graph.
To solve at least one of the above problems, the embodiments of the disclosure provide an effect graph generation method and apparatus, an electronic device, and a storage medium. A reference graph is displayed on a display interface in response to a first selection operation on a model to be displayed and a second selection operation on a first scene graph, the reference graph including a model graph to be displayed corresponding to the model to be displayed and the first scene graph at least partially located on the layer below the model graph to be displayed; and in response to a generation operation on the reference graph, at least one effect graph is displayed on the display interface, where an effect graph of the at least one effect graph is generated by an AI drawing model with the reference graph as a reference. An effect graph of the model to be displayed in a relevant scene can thus be generated quickly, improving the display effect of the model to be displayed.
Aspects of the disclosure are described below with reference to the drawings.
FIG. 1 is a block diagram of a system to which the effect graph generation method provided by an embodiment of the disclosure is applied. Referring to fig. 1, the system includes a terminal 110, a server 120, and the like; the terminal 110 and the server 120 are connected through a network, for example a wired or wireless connection.
The terminal 110 may be used to display a graphical user interface and to interact with a user through it: for example, the terminal downloads, installs, and runs a corresponding client; invokes and runs a corresponding applet; or presents a corresponding graphical user interface through a website. In the embodiments of the disclosure, the terminal 110 may display a reference graph on the display interface in response to a first selection operation on a model to be displayed and a second selection operation on a first scene graph, where the reference graph includes a model graph to be displayed corresponding to the model to be displayed and the first scene graph at least partially located on the layer below the model graph; the terminal 110 may display at least one effect graph on the display interface in response to a generation operation on the reference graph, where an effect graph of the at least one effect graph is generated by an AI drawing model with the reference graph as a reference. The server 120 may generate the effect graph from the reference graph.
Although the display interface is exemplified as a page of the application program, the display interface may be another page such as a web page. The application may be an application installed on a desktop, an application installed on a mobile terminal, an applet embedded in an application, or the like.
It should be noted that the above application scenario is only shown for the convenience of understanding the spirit and principles of the present disclosure, and embodiments of the present disclosure are not limited in any way in this respect. Rather, embodiments of the present disclosure may be applied to any scenario where applicable.
The following is a detailed description. It should be noted that the following description order of embodiments is not a limitation of the priority order of embodiments.
FIG. 2 is a flow chart of an effect graph generation method provided by another embodiment of the disclosure. Referring to fig. 2, an embodiment of the disclosure provides an effect graph generation method 200, which includes the following steps S201 to S202.
Step S201: in response to a first selection operation on a model to be displayed and a second selection operation on a first scene graph, display a reference graph on a display interface, where the reference graph includes a model graph to be displayed corresponding to the model to be displayed and the first scene graph at least partially located on the layer below the model graph to be displayed;
Step S202: in response to a generation operation on the reference graph, display at least one effect graph on the display interface, where an effect graph of the at least one effect graph is generated by an AI drawing model with the reference graph as a reference.
The model to be displayed may be a door and window model, such as a door component model or a window component model, or a furniture model, such as a sofa component.
The first scene graph may be a scene graph of a scene suitable for the model to be displayed, obtained by photographing and uploading, by adopting a scene template, or the like; it may show an indoor scene (furniture, wall surfaces, etc.) or an outdoor scene (buildings, etc.) around the model to be displayed. The first scene graph may be deleted or replaced, and its specifications may be set according to requirements.
It can be understood that the first selection operation may be used to indicate that the current model to be displayed is selected as a basis for generating the effect graph. Its form is not limited; it may be a click on the model to be displayed, or a click, drag, double-click, or similar operation on a related control or button. The first selection operation may be a single operation, such as a single click, or a series of operations.
Similarly, the second selection operation may be used to indicate that the first scene graph is selected as a basis for generating the effect graph, and its form is likewise not limited. After the first scene graph is selected, it may be displayed with the goal of showing the complete picture; if the first scene graph does not completely fill the canvas (the reserved interface area), the remaining area is filled with white or another color.
Taking a door and window model as an example, the model graph to be displayed may be a screenshot of the front view of the model; when the first selection operation is detected, the front view of the model to be displayed may be captured as the model graph to be displayed. For example, using the screenshot function of an EGS (Enggist Grandjean Software) engine, the indoor side of the model to be displayed may be "photographed" with an orthographic camera to generate a picture that just wraps the model, i.e., the model graph to be displayed.
After the first selection operation and the second selection operation are detected, a reference graph including the first scene graph and the model graph to be displayed may be displayed. It can be understood that in some embodiments the first scene graph is a background image that may be on a different layer from the model graph to be displayed, for example the layer just below it; that is, part of the first scene graph may be occluded by the model graph to be displayed, while the model graph to be displayed remains unoccluded.
Of course, in some embodiments, the first scene graph includes a foreground graph to be located on the layer above the model graph to be displayed and a background graph to be located on the layer below it.
It will be appreciated that when the first scene graph is provided by a preset scene template, it may include a foreground graph and a background graph, and the resulting reference graph may be divided into three layers (from top to bottom, each upper graph occluding the one below): the foreground graph, the model graph to be displayed, and the background graph.
Taking the model graph to be displayed as a window picture as an example, the foreground graph (the mask layer of the template picture) is overlaid on the window picture; it is typically a picture of foreground objects in the original image, such as curtains, curtain boxes, and furniture. The background graph is typically the part of the original image other than the mask layer and, being the background, is occluded by the window picture.
By providing both a background graph and a foreground graph, the spatial realism of the model to be displayed can be improved as much as possible, which also makes subsequent AI drawing processing more convenient; a minimal compositing sketch is given below.
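To illustrate the three-layer composition just described, the following is a minimal Pillow sketch. The file names, placement coordinates, and layer contents are assumptions for illustration, not part of the disclosed tool.

```python
from PIL import Image

# Illustrative inputs: a template background, the model graph to be
# displayed (window picture), and a foreground mask layer with transparency.
background = Image.open("template_background.png").convert("RGBA")
window = Image.open("window_model.png").convert("RGBA")
foreground = Image.open("template_foreground.png").convert("RGBA")

# Composite bottom-to-top so each upper layer occludes the one below:
# background -> window picture -> foreground (curtains, furniture, etc.).
reference = background.copy()
reference.alpha_composite(window, dest=(420, 180))  # user-chosen placement
reference.alpha_composite(foreground)               # same size as background
reference.save("reference.png")
```

Any compositing library would do equally well here; the point is only the fixed bottom-to-top layer order.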
The generation operation may be a click on a reference graph generation control or the like. It can be understood that when the generation operation is detected, the AI drawing model may generate at least one effect graph based on the reference graph.
It will be appreciated that the overall style and layout of an effect graph are similar to those of the reference graph, though local decoration, patterns, colors, and the like may differ somewhat. The effect graph may support downloading.
Because the model graph to be displayed in the reference graph is obtained from the model to be displayed, its display style is inconsistent with that of the first scene graph. If a reference graph obtained by directly splicing the two were shown to a client, the model would be hard to blend into the scene, the display would look stiff, and it would be far from the scene or effect of a real design. After the AI drawing model generates an effect graph with the reference graph as a reference, however, the model to be displayed (such as a window) and the first scene graph fuse much better in the resulting effect graph, which is closer to the real effect of the house after the window is installed; that is, the application effect of the model to be displayed is presented better.
Therefore, this embodiment can quickly generate an effect graph of the model to be displayed in a relevant scene from a photo or picture (the first scene graph), improving the display effect of the model to be displayed. The effect graph can be obtained without a design background, i.e., without the skills of drawing and rendering a scene, and the generation speed is high.
Figs. 3A to 3H are operation flowcharts of the display interface of the effect graph generation method provided by an embodiment of the disclosure; please refer to figs. 3A to 3H. This embodiment is described taking the model to be displayed as a window in a door and window assembly as an example.
As shown in fig. 3A, a design page 310 for the model to be displayed is shown. In this page 310, the user may display the model to be displayed 301 in the right area by selecting a model from the model library of the door and window series; of course, the shape of the door and window to be designed may also be selected through the outline shape list, shape partition list, and the like on the right.
When the model to be displayed 301 is completed, the AI (artificial intelligence) drawing control above the page may be clicked to expand a drop-down list, which may include modes such as AI scene template creation and AI live photo creation; these modes determine the source of the first scene graph. The first selection operation may be understood as a click on the AI drawing control, on AI scene template creation, or on AI live photo creation.
After the user selects AI scene template creation, the display interface may display, in full screen, the AI scene template creation page 320 shown in fig. 3B. This page displays the model graph to be displayed 302 corresponding to the model to be displayed 301; the model graph 302 may be shown in the left area of the page 320, while the right area allows the user to select a scene template.
It can be appreciated that in the design page 310 the model to be displayed 301 may be a three-dimensional model that can be viewed through 360 degrees, and after entering the AI scene template creation page 320, the model graph to be displayed 302 may be obtained by taking a planar orthographic view of the model.
When a scene template is selected, for example by clicking template 1 in fig. 3B (the second selection operation), the display interface appears as in fig. 3C: the first scene graph 303 corresponding to template 1 is displayed on the page 320, under the model graph to be displayed 302 as a background. That is, the first scene graph 303 and the model graph 302 may belong to different layers, with the layer of the first scene graph 303 below that of the model graph 302.
As shown in fig. 3D, if the model graph to be displayed 302 in fig. 3C is clicked, adjustable anchors 304 may be shown around it. The size and shape of the model graph 302 can be changed by moving the adjustable anchors 304, and its position can be changed by dragging the graph itself, thereby obtaining the reference graph 305 shown in fig. 3E. The various adjustments of the model graph 302 that yield the reference graph 305 may constitute a first adjustment operation, and the reference graph 305 includes the updated (post-adjustment) model graph to be displayed 306 and the first scene graph 303. Of course, in other embodiments the adjustable anchors may be shown directly on the interface of fig. 3B without first clicking the model graph 302.
If the reference graph 305 meets the display requirements, "next" in fig. 3E may be clicked, and the display interface opens the generation page 330 shown in fig. 3F. The page 330 can preview the reference graph, and clicking "generate now" (the generation operation) causes at least one effect graph 307 to be displayed in the page 330, as shown in fig. 3G. It can be understood that there may be one or more effect graphs 307; they are generated by the AI drawing model with the reference graph 305 as a reference, that is, generated from the reference graph 305 based on AIGC (AI-generated content) methods. The overall style and layout of an effect graph 307 are similar to those of the reference graph 305: for example, the living room scene, furniture, and window remain substantially consistent, while the style of specific furniture, lighting, or decoration details differ slightly, so displaying multiple effect graphs helps the user better understand the usage effect of the model to be displayed.
In addition, because the model graph to be displayed in the reference graph 305 is obtained from the model to be displayed, its display style is inconsistent with that of the first scene graph 303. If the reference graph 305 obtained by directly splicing the two were shown to a client, the model would be hard to blend into the scene, the display would look stiff, and it would be far from the scene or effect of a real design. By using the AI drawing model with the reference graph 305 as a reference, the model to be displayed (for example, a window) and the first scene graph fuse better in the resulting effect graph 307, which is closer to the real effect of the house after the window is installed, helping to present the usage effect of the model better.
Referring to fig. 3H, clicking one of the effect graphs displays it in full screen, and a comparison control 308 is shown on the effect graph. Dragging the comparison control adjusts the display area of the original model graph to be displayed (in the dashed frame) overlaid on the window in the effect graph, realizing a comparison between the model graph to be displayed and the window in the effect graph.
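The comparison control can be approximated offline by a split composite, as in the minimal sketch below. It assumes the pre-AI reference graph and the effect graph have the same size; the real control reveals the original model graph only within the window's dashed frame, which this simplification ignores.

```python
import numpy as np
from PIL import Image

effect = np.asarray(Image.open("effect.png").convert("RGB"))
reference = np.asarray(Image.open("reference.png").convert("RGB"))

def compare(split_x: int) -> Image.Image:
    # Pixels left of split_x come from the original reference graph,
    # pixels to the right from the AI-generated effect graph.
    out = effect.copy()
    out[:, :split_x] = reference[:, :split_x]
    return Image.fromarray(out)

compare(400).save("compare_at_400.png")  # slider dragged to x = 400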
It can be understood that in this embodiment, the display interface is jumped from the design page 310 to the AI key Jing Moban to create the page 320 and finally to the generated page 330 in the operation process, and in other embodiments, the page may not be jumped, or the number of jumped pages is different from that in this embodiment, specifically may be set according to the actual situation, and in addition, in this embodiment, the page layout, the control name, etc. may all have other implementation manners.
In some embodiments, the method 200 further includes: in response to a first adjustment operation on the model graph to be displayed, updating the model graph to be displayed in the reference graph based on the first adjustment operation.
As shown in fig. 3C, the position and size of the model graph to be displayed 302 in the first scene graph 303 are too large and do not meet the display requirements. At this time, the model graph 302 may be adjusted through the first adjustment operation to obtain the more reasonable reference graph 305 shown in fig. 3E, making the display of the model in the first scene graph more reasonable. It will be appreciated that the reference graph changes in real time as the model graph is adjusted, and when the generation operation is detected, the effect graph is generated with the current reference graph as a reference.
Of course, if the position and size of the model graph to be displayed 302 in the first scene graph 303 are appropriate, no adjustment is needed.
In some embodiments, as shown in fig. 3D, the corner points of the model graph to be displayed 302 are provided with adjustable anchors 304, and the first adjustment operation on the model graph includes a second adjustment operation on the adjustable anchors.
It will be appreciated that fig. 3D only shows the adjustable anchors corresponding to the four corner points of the model graph 302; in other embodiments the model graph may have more or fewer anchors, set according to its shape or the like. The adjustable anchors may also be presented in other forms, such as arrow graphics, and may be used to change the perspective of the model graph to be displayed.
In addition, the first adjustment operation may include dragging an adjustable anchor (the second adjustment operation), moving the model graph, and so on. For example, the user may adjust the shape of the door and window by dragging an adjustable anchor, and may drag the door and window directly to move it. When an adjustable anchor is adjusted, a perspective transformation may be simulated from the corner positions (performed with the open-source OpenCV cross-platform computer vision library), so that the "near-large, far-small" effect can be simulated; meanwhile, holding a shortcut key such as SHIFT allows proportional scaling.
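The anchor-driven perspective can be reproduced with OpenCV, which the paragraph above names. A minimal sketch; the dragged corner coordinates and file names are invented for illustration.

```python
import cv2
import numpy as np

window = cv2.imread("window_model.png")  # model graph before adjustment
h, w = window.shape[:2]

# Four corners before dragging, and their positions after the user drags
# the adjustable anchors (illustrative values).
src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
dst = np.float32([[30, 10], [w - 10, 40], [w - 20, h - 30], [15, h - 5]])

# The homography fitted to the four anchor points simulates the
# "near-large, far-small" perspective effect.
M = cv2.getPerspectiveTransform(src, dst)
warped = cv2.warpPerspective(window, M, (w, h))
cv2.imwrite("window_model_warped.png", warped)
```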
The adjustable anchors thus allow the model graph to be displayed to be adjusted directly and quickly.
In some embodiments, with continued reference to figs. 3A to 3G, step S201 of displaying a reference graph on the display interface in response to a first selection operation on the model to be displayed and a second selection operation on the first scene graph includes:
displaying, in response to a first selection operation on the model to be displayed, the model graph to be displayed of the reference graph on the display interface;
displaying, in response to a second selection operation on a target scene template in a scene template list, the first scene graph of the reference graph on the display interface, the first scene graph being the scene graph of the target scene template.
Figs. 3A to 3G show the flow of the effect graph generation method when a scene template is adopted. The first selection operation may be a click on the AI scene template creation control in fig. 3A, through which fig. 3B is shown. The second selection operation may be a click on a target scene template in the scene template list, through which the corresponding first scene graph is displayed in the left preview interface.
In some embodiments, before the first scene graph of the reference graph is displayed on the display interface in response to the second selection operation on the target scene template in the scene template list, the method further includes: displaying, in response to a third selection operation on a scene tag, a scene template list corresponding to the third selection operation on the display interface.
An optional scene template area may be provided on the right of fig. 3B, for example with template tags. Selecting a template tag through the third selection operation may request the corresponding scene template list from the backend and display the selectable list under the tag. A target scene template may then be determined by selecting a thumbnail in the list through the second selection operation, for example clicking template 1 to take it as the target scene template and rendering its picture into the left preview interface. Of course, in some implementations the scene template list may be shown in a popup after the template tag is clicked; this may be chosen according to the actual situation.
The first scene graph is the scene graph corresponding to the target scene template. It can be understood that scene templates may be uploaded by staff through a configuration backend: after a processed, high-quality rendering is uploaded, template tags such as style and space may be selected. A scene template may include a background graph and a foreground graph, or only a background graph, so that users can quickly select scene templates.
By providing the scene template list, this embodiment can offer the user scene graphs in a variety of selectable styles, making it convenient to quickly generate an effect graph from them.
FIG. 4A is a model graph to be displayed provided by another embodiment of the disclosure; fig. 4B is a scene color block diagram corresponding to a first scene graph provided by another embodiment of the disclosure; fig. 4C is a reference color block diagram provided by another embodiment of the disclosure; fig. 4D is a line drawing of the initial model corresponding to fig. 4A. Referring to figs. 4A to 4D, in an embodiment using a scene template, displaying at least one effect graph on the display interface in response to the generation operation on the reference graph may include:
in the case that the first scene graph is from the scene template list, determining a reference color block graph corresponding to the updated reference graph in response to a generation operation of the updated reference graph;
determining a model line drawing corresponding to the updated model drawing to be displayed;
and obtaining at least one effect graph by using the AI drawing model based on the model line graph and the reference color block graph.
In this embodiment, the model to be displayed is a door model, and the model graph to be displayed 401 is shown in fig. 4A. It can be understood that when the generation operation is detected, a corresponding reference color block diagram 403 may be generated based on the current reference graph; the reference color block diagram may be obtained through processing by a segmentation model, which segments the picture into multiple regions of different colors, each region corresponding to a category.
In some embodiments, determining the reference color block diagram corresponding to the updated reference graph includes: filling the updated model graph to be displayed to obtain a model color block diagram; determining, based on the first scene graph, the scene color block diagram corresponding to it; and superimposing the model color block diagram on the scene color block diagram to obtain the reference color block diagram.
Fig. 4A shows the initial model graph to be displayed 401. After the first adjustment operation, its shape is changed; the changed model graph is filled with a color block (the color may be chosen as #e6e6e6), yielding the model color block diagram 404 in fig. 4C (the model color block diagram contains no color blocks of fig. 4C other than 404), and the category of each color block, such as door body or glass door, can be labeled.
Meanwhile, the first scene graph (not shown) may be segmented with a segmentation model (for example, the seg ControlNet, a segmentation model among the image precise-control models, used with txt2img, the text-to-image model), obtaining the scene color block diagram 402 of fig. 4B.
Then, based on the position of the updated model graph to be displayed in the first scene graph, the model color block diagram 404 may be superimposed on the scene color block diagram 402 to obtain the reference color block diagram 403. The method is simple and easy to realize.
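A minimal sketch of this superposition step, assuming the segmentation output for the scene and a binary silhouette of the adjusted model graph are already available as full-canvas images (file names are illustrative):

```python
import cv2

# seg output for the first scene graph and the warped model silhouette,
# both already at full canvas size (illustrative file names).
scene_blocks = cv2.imread("scene_color_blocks.png")
model_mask = cv2.imread("model_silhouette.png", cv2.IMREAD_GRAYSCALE)

# Paint the model region with the fixed fill color #e6e6e6 (BGR order)
# over the scene color blocks to form the reference color block diagram.
reference_blocks = scene_blocks.copy()
reference_blocks[model_mask > 0] = (230, 230, 230)
cv2.imwrite("reference_color_blocks.png", reference_blocks)
```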
A line drawing corresponding to the updated model graph is then determined. In some embodiments, determining the model line drawing corresponding to the updated model graph to be displayed includes: acquiring the model graph before the update; performing line extraction on it to obtain an initial model line drawing corresponding to the pre-update model graph; and determining the model line drawing based on the initial model line drawing and the first adjustment operation.
Line extraction is performed on the model graph to be displayed 401 (the undeformed original), for example with the canny ControlNet (an edge extraction model among the image precise-control models) in txt2img, obtaining the initial model line drawing 405 corresponding to the model graph 401, as shown in fig. 4D. The initial model line drawing 405 may then be adjusted according to the first adjustment operation, i.e., warped to match the dragged anchor positions, to obtain the model line drawing.
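The edge-extraction step itself can be sketched with OpenCV's Canny detector (the preprocessor behind the canny ControlNet); the adjusted line drawing can then be obtained by applying the same homography as in the perspective sketch above. Thresholds and file names are assumptions.

```python
import cv2

model_img = cv2.imread("door_model.png", cv2.IMREAD_GRAYSCALE)  # undeformed original
edges = cv2.Canny(model_img, 100, 200)  # initial model line drawing, as in fig. 4D
cv2.imwrite("initial_model_lines.png", edges)
```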
The model line drawing 405 and the reference color block diagram 403 may then be superimposed and an effect graph obtained using the AI drawing model. Because the reference color block diagram 403 is obtained from the reference graph, the resulting effect graph also takes the reference graph as its reference. The AI drawing model may be any model in the related art capable of AI drawing.
This method generates the effect graph based on the reference graph; because the model line drawing corresponding to the initial model graph is referenced, the generation effect is better.
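The disclosure does not name a concrete AI drawing model. One open-source realization of "line drawing plus color block diagram as joint references" is a multi-ControlNet Stable Diffusion pipeline, sketched below with the Hugging Face diffusers library; the model IDs and the prompt are illustrative assumptions, not the patented implementation.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# One ControlNet conditioned on edges (the model line drawing) and one on
# segmentation color blocks (the reference color block diagram).
controlnets = [
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16),
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-seg", torch_dtype=torch.float16),
]
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnets, torch_dtype=torch.float16
).to("cuda")

effect = pipe(
    "modern living room with a large door, photorealistic interior",  # illustrative prompt
    image=[load_image("model_lines.png"), load_image("reference_color_blocks.png")],
    num_inference_steps=30,
).images[0]
effect.save("effect.png")
```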
FIGS. 5A to 5D are operation flowcharts of the display interface of an effect graph generation method provided by another embodiment of the disclosure. Referring to figs. 5A to 5D, step S201 of displaying a reference graph on the display interface in response to a first selection operation on the model to be displayed and a second selection operation on the first scene graph includes: displaying the reference graph on the display interface in response to the first selection operation on the model to be displayed and a second selection operation on a first scene graph from a local file.
In this embodiment, the first scene graph is a picture uploaded by the user; it may be a rendered scene graph, a bare-shell (undecorated) room photo, or the like, for example a photo of an unfinished indoor bare-shell room.
As shown in fig. 3A, after the user clicks AI live photo creation (the first selection operation), the AI live photo creation page 510 shown in fig. 5A may be displayed. At this time the display interface may not yet show the model graph to be displayed; an upload live photo control may be shown, clicking which pops up a file selection window, and a local picture may be selected and displayed as the first scene graph 502 in the page, i.e., fig. 5B is shown.
The second selection operation may be the selection of the first scene graph in the local file. In addition, in this embodiment, after the second selection operation is detected, the first scene graph 502 and the model graph to be displayed 501 may be displayed at the same time. Of course, the model graph 501 may instead be displayed after the first selection operation, as shown in fig. 5A, with the first scene graph 502 displayed after the second selection operation.
As shown in fig. 5C, the shape and size of the model graph can be adjusted according to the first adjustment operation to obtain the updated model graph to be displayed 503; it can be understood that the model graph 503 has been perspective-transformed through the adjustable anchors.
Clicking "next" in fig. 5C may display the generation page 520 of fig. 5D; the operations and effect graphs in fig. 5D may refer to figs. 3F to 3G and are not repeated here. It will be appreciated that a first scene graph based on a bare-shell photo or a line drawing can likewise be mapped to a realistic effect graph similar to fig. 3G.
With this method, the user can upload a picture or photo as the first scene graph from which the effect graph of the model to be displayed is generated, which is more convenient to use.
FIG. 6 is a reference line drawing corresponding to the first scene graph of fig. 5B. Referring to fig. 6, displaying at least one effect graph on the display interface in response to the generation operation on the reference graph in step S202 includes:
under the condition that the first scene graph is from a local file, determining a reference line graph corresponding to the updated reference graph in response to the generation operation of the updated reference graph;
determining a model line drawing corresponding to the updated model drawing to be displayed;
and obtaining at least one effect graph by using the AI drawing model based on the model line graph and the reference line graph.
In this embodiment, the first scene graph is uploaded by the user, and the corresponding reference line drawing 601 can be determined from the reference graph by line extraction, for example using the mlsd ControlNet (a straight-line extraction model among the image precise-control models) in txt2img, as shown in fig. 6. A model line drawing may then be generated based on the updated model graph, as in the embodiments described above, and an effect graph obtained with the AI drawing model based on the reference line drawing and the model line drawing. In this embodiment, even when the first scene graph is a bare-shell photo, a finished-decoration effect graph can still be obtained through the AI drawing model; the display speed is high, and the user can see the actual post-decoration effect starting from the bare shell, so the display effect and user experience are better.
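For the uploaded-photo path, the straight-line extraction can be sketched with the MLSD annotator from the controlnet_aux package, a common companion to the mlsd ControlNet; the checkpoint name is that package's published annotator repo, and the file names are illustrative assumptions.

```python
from PIL import Image
from controlnet_aux import MLSDdetector

mlsd = MLSDdetector.from_pretrained("lllyasviel/Annotators")
reference = Image.open("reference.png")  # reference graph from the uploaded photo
reference_lines = mlsd(reference)        # straight-line drawing, as in fig. 6
reference_lines.save("reference_lines.png")
```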
In some embodiments, step S201 of presenting a reference map on the display interface in response to the first selection operation of the model to be presented and the second selection operation of the first scene map includes:
responding to a first selection operation of a model to be displayed and a second selection operation of a first scene graph, and displaying a reference graph in a first page of a display interface;
The method further comprises the steps of: responding to the jump operation of the first page, and displaying a second page on a display interface;
in response to the generating operation of the reference graph, at least one effect graph is displayed on a display interface, including: at least one effect graph is presented on a second page in response to a generating operation of the reference graph.
As shown in figs. 3B to 3E, the AI scene template creation page 320 may serve as the first page; as shown in figs. 3F and 3G, the generation page 330 serves as the second page, and the jump operation on the first page may be a click on "next" in fig. 3E. Page jumps allow different underlying programs to be accessed, so effect graphs can be generated and displayed better.
In some embodiments, as in fig. 3G, the second page includes a generation control, an attribute display area 312 for showing selectable attributes of the effect graph, a reference graph display area 311 for showing the reference graph, and an effect graph display area 313 for showing the at least one effect graph.
Displaying at least one effect graph on the second page in response to the generation operation on the reference graph includes: displaying, in response to a trigger operation on the generation control and based on the selectable attributes selected in the attribute display area, at least one effect graph corresponding to the selected attributes in the effect graph display area.
The generation control may be the control bearing the "generate now" text. It will be appreciated that selectable attributes of the effect graph, such as style and description text, may be set on the second page; they serve as inputs to the AI drawing model and affect the resulting effect graph. It can be appreciated that one run of the AI drawing model outputs one effect graph; when multiple effect graphs are needed, the model may be run multiple times. The selectable attributes may also include an option controlling the degree of difference between the effect graph and the reference graph.
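Since each run of the drawing model yields one effect graph, producing N effect graphs can be sketched as N runs with different seeds. This fragment assumes the multi-ControlNet `pipe` and the two condition images from the earlier sketch; all names are illustrative.

```python
import torch
from diffusers.utils import load_image

def generate_effect_graphs(pipe, prompt: str, n: int = 4):
    # One pipeline run per requested effect graph, each with its own seed,
    # so the overall layout stays similar while details vary.
    conditions = [load_image("model_lines.png"), load_image("reference_color_blocks.png")]
    graphs = []
    for seed in range(n):
        generator = torch.Generator("cuda").manual_seed(seed)
        graphs.append(pipe(prompt, image=conditions, generator=generator).images[0])
    return graphs
```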
Of course, a different style may also be reselected on the display interface shown in fig. 3G and "generate now" clicked, so that a new set of effect graphs is displayed below those currently shown in the effect graph display area.
Fig. 7A to 7C are flowcharts of scene template uploading provided in an embodiment of the present disclosure; referring to fig. 7A to 7C, the method 200 further includes:
Displaying a configuration page 710 on a display interface;
In response to a fourth selection operation on the scene template upload control 701 in the configuration page 710, a scene template upload page 730 is displayed, which includes a template picture upload control 731, a template name input control 732, and an optional template tag control 733.
The configuration page 710 may let users upload their own scene templates; it may display various configuration controls, including the scene template upload control 701. Clicking the control 701 may display the scene template upload page 730, where a local image may be uploaded through the template picture upload control 731 by dragging or clicking, a scene template name may be entered through the template name input control 732, and template tags may be selected through the optional template tag control 733, so that scene templates can later be searched by tag.
Of course, after the scene template upload control 701 is clicked, the template scene page 720 may be displayed first, showing the scene templates already configured; the page shown in fig. 7C may then be displayed by clicking the template picture upload control 721 on that page.
In some embodiments, step S201, in response to a first selection operation of a model to be displayed and a second selection operation of the first scene graph, displays a reference graph on a display interface, including:
displaying a model to be displayed on the display interface, wherein the model to be displayed is a three-dimensional model of a component to be displayed;
responding to a first selection operation of the model to be displayed, intercepting a main projection view of the model to be displayed, and taking the main projection view as the model diagram to be displayed, wherein the main projection view is a view capable of reflecting the structure of the model to be displayed;
and responding to a second selection operation of the first scene graph, and displaying the reference graph on the display interface.
The component to be displayed may be a door and window component the user wants to show, and the model to be displayed is the three-dimensional model corresponding to it. As shown in fig. 3A, the model may be a three-dimensional model provided in a model library, or a three-dimensional model generated by the user in the design interface 310, and so on.
When the first selection operation is obtained, such as a click on a control in the AI drawing menu of the design interface 310, the model to be displayed may be captured: the main projection view, which best represents the structure of the model, may be taken, for example the front view of the model to be displayed 301 in fig. 3A, and the model graph to be displayed is obtained by capturing this orthographic projection of the three-dimensional model.
By capturing the main projection view of the model to be displayed, this embodiment can obtain the model graph to be displayed quickly.
FIG. 8 is a flowchart of an effect graph generation method provided by another embodiment of the disclosure. Referring to fig. 8, a caller (a door and window design tool, the application corresponding to fig. 3A) may initiate an AI drawing request; a door and window AIGC popup (the application corresponding to the first page) may request information related to the door and window screenshot (the model graph to be displayed). After receiving the request, the caller sends the currently selected door and window screenshot to the popup, which displays it in the preview area. The first scene graph may be obtained by fetching a template picture (scene template) or by the user uploading a picture, and is then rendered into the scene. The positions of the door and window corner points (the adjustable anchors) may be adjusted to change the shape and perspective angle of the door and window, after which the composited reference graph may be passed to MAAS (Model as a Service, the application corresponding to the second page), and MAAS may generate the effect graph using the backend.
In addition, in this embodiment, the template picture may be configured in advance through the door and window AIGC background, for example by uploading pictures (a background picture and, optionally, a foreground picture) and selecting template labels such as the style and the component type the picture is suited to, so that the corresponding template picture can then be provided.
Fig. 9 is a schematic block diagram of an effect diagram generating apparatus according to an embodiment of the present disclosure. Referring to fig. 9, the effect diagram generating apparatus 900 of the embodiment of the present disclosure includes:
the first display unit 901 is configured to display, on a display interface, a reference graph in response to a first selection operation of a model to be displayed and a second selection operation of a first scene graph, where the reference graph includes a model graph to be displayed corresponding to the model to be displayed and the first scene graph at least partially located below the model graph to be displayed;
and a second display unit 902, configured to display at least one effect graph on the display interface in response to a generation operation on the reference graph, where each effect graph in the at least one effect graph is generated by using the AI drawing model with the reference graph as a reference.
In some embodiments, apparatus 900 further comprises:
And the updating unit is used for responding to the first adjustment operation of the model diagram to be displayed and updating the model diagram to be displayed in the reference diagram based on the first adjustment operation.
In some embodiments, the corner points of the model diagram to be displayed are provided with adjustable anchor points, the first adjustment operation includes a second adjustment operation on the adjustable anchor points, and the updating unit is further configured to respond to this second adjustment operation.
In some embodiments, the first display unit 901 is further for:
responding to a first selection operation of the model to be displayed, and displaying the model diagram to be displayed in the reference diagram on a display interface;
In response to a second selection operation of the target scene template in the scene template list, a first scene graph in the reference graph is shown on the display interface, and the first scene graph is a scene graph of the target scene template.
In some embodiments, the apparatus further comprises:
And the third display unit is used for responding to the third selection operation of the scene label and displaying a scene template list corresponding to the third selection operation on the display interface.
In some embodiments, the first display unit 901 is further for:
and responding to the first selection operation of the model to be displayed and the second selection operation of the first scene graph in the local file, and displaying the reference graph on the display interface.
In some embodiments, the first display unit 901 is further for:
responding to a first selection operation of a model to be displayed and a second selection operation of a first scene graph, and displaying a reference graph in a first page of a display interface;
responding to the jump operation of the first page, and displaying a second page on a display interface;
the second display unit 902 is further configured to: at least one effect graph is presented on a second page in response to a generating operation of the reference graph.
In some embodiments, the second page includes a generate control, an attribute presentation area for presenting selectable attributes of the effect graph, a reference graph presentation area for presenting the reference graph, and an effect graph presentation area for presenting the at least one effect graph;
the second display unit 902 is further configured to:
in response to a trigger operation to the generate control, at least one effect graph corresponding to the selected selectable attribute is presented in the effect graph presentation area based on the selected selectable attribute in the attribute presentation area.
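How the selected attributes reach the drawing model is not specified in the disclosure; one hedged sketch assumes they are folded into a text prompt (the `build_prompt` helper and the attribute keys are hypothetical):

```python
def build_prompt(selected_attributes: dict[str, str]) -> str:
    """Fold the attributes chosen in the attribute presentation area into a text prompt."""
    parts = [f"{key}: {value}" for key, value in selected_attributes.items()]
    return "door and window effect rendering, " + ", ".join(parts)

# Usage: the generate control is triggered with two attributes selected.
print(build_prompt({"style": "modern", "material": "aluminum"}))
# -> door and window effect rendering, style: modern, material: aluminum
```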
In some embodiments, the first scene graph includes a foreground graph for being located at an upper level of the model graph to be displayed and a background graph for being located at a lower level of the model graph to be displayed.
In some embodiments, apparatus 900 further comprises:
the configuration unit is used for displaying a configuration page on the display interface; and responding to a fourth selection operation of a scene template uploading control in the configuration page, displaying the scene template uploading page, wherein the scene template uploading page comprises a template picture uploading control, a template name input control and an optional template label control.
In some embodiments, the second display unit 902 is further configured to:
in the case that the first scene graph is from the scene template list, determining a reference color block graph corresponding to the updated reference graph in response to a generation operation of the updated reference graph;
determining a model line drawing corresponding to the updated model drawing to be displayed;
and obtaining at least one effect graph by using the AI drawing model based on the model line graph and the reference color block graph.
In some embodiments, the second display unit 902 is further configured to:
Filling the updated model diagram to be displayed to obtain a model color block diagram;
determining a scene color block diagram corresponding to the first scene diagram based on the first scene diagram;
and superposing the model color block diagram and the scene color block diagram to obtain a reference color block diagram.
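As a sketch of this superposition, assuming the color block diagrams are plain raster images, Pillow's masked paste can fill the model silhouette with a solid block and lay it over the scene blocks (file names and colors are illustrative):

```python
from PIL import Image

def make_reference_color_blocks(model_mask: Image.Image, scene_blocks: Image.Image) -> Image.Image:
    """Fill the model silhouette with a solid color block and paste it over
    the scene color block diagram, yielding the reference color block diagram."""
    mask = model_mask.convert("L")                        # white = model region
    model_block = Image.new("RGBA", scene_blocks.size, (200, 60, 60, 255))
    result = scene_blocks.convert("RGBA")
    result.paste(model_block, (0, 0), mask)               # superpose through the mask
    return result

# Usage with two synthetic images of the same size.
scene = Image.new("RGBA", (256, 256), (60, 120, 200, 255))   # stand-in scene color blocks
mask = Image.new("L", (256, 256), 0)
mask.paste(255, (64, 64, 192, 192))                          # model occupies the center square
make_reference_color_blocks(mask, scene).save("reference_blocks.png")
```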
In some embodiments, the second display unit 902 is further configured to:
Acquiring a model diagram to be displayed before updating;
extracting the model diagram to be displayed before updating to obtain an initial model line diagram corresponding to the model diagram to be displayed before updating;
a model line drawing is determined based on the initial model line drawing and the first adjustment operation.
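One plausible realization of this line extraction, sketched with OpenCV under the assumption that the first adjustment operation moves corner anchor points: Canny edges give the initial model line diagram, and the adjustment is applied as a perspective warp from the original corners to the adjusted anchors (thresholds and coordinates are illustrative):

```python
import cv2
import numpy as np

def model_line_diagram(model_img: np.ndarray,
                       src_corners: np.ndarray,
                       dst_anchors: np.ndarray) -> np.ndarray:
    """Extract an initial line diagram, then warp it by the anchor adjustment."""
    edges = cv2.Canny(model_img, 50, 150)                        # initial model line diagram
    h, w = edges.shape[:2]
    m = cv2.getPerspectiveTransform(src_corners, dst_anchors)    # first adjustment as a warp
    return cv2.warpPerspective(edges, m, (w, h))

# Usage: nudge the top-right corner of a synthetic 200x200 rectangle drawing.
img = np.zeros((200, 200), dtype=np.uint8)
cv2.rectangle(img, (40, 40), (160, 160), 255, 2)
src = np.float32([[40, 40], [160, 40], [160, 160], [40, 160]])
dst = np.float32([[40, 40], [150, 50], [160, 160], [40, 160]])   # adjusted anchor point
lines = model_line_diagram(img, src, dst)
```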
In some embodiments, the second display unit 902 is further configured to:
under the condition that the first scene graph is from a local file, determining a reference line graph corresponding to the updated reference graph in response to the generation operation of the updated reference graph;
determining a model line drawing corresponding to the updated model drawing to be displayed;
and obtaining at least one effect graph by using the AI drawing model based on the model line graph and the reference line graph.
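The AI drawing model itself is left open by the disclosure; a ControlNet-conditioned Stable Diffusion pipeline from the diffusers library is one common way to generate images that follow a line diagram, sketched below. The public checkpoints named here are an assumption, not the disclosed system's model:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from PIL import Image

# A ControlNet trained on Canny edges conditions generation on the line reference.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16).to("cuda")

reference_lines = Image.open("reference_line_diagram.png")   # model + scene line diagram
effect_graphs = pipe(
    "photorealistic aluminum window in a modern living room",
    image=reference_lines,
    num_images_per_prompt=4,          # the "at least one effect graph"
).images
for i, graph in enumerate(effect_graphs):
    graph.save(f"effect_{i}.png")
```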
In some embodiments, the first display unit 901 is further for:
displaying a model to be displayed on the display interface, wherein the model to be displayed is a three-dimensional model of a component to be displayed;
responding to a first selection operation of the model to be displayed, intercepting a main projection view of the model to be displayed, and taking the main projection view as the model diagram to be displayed, wherein the main projection view is a view capable of reflecting the structure of the model to be displayed;
and responding to a second selection operation of the first scene graph, and displaying the reference graph on the display interface.
For descriptions of specific functions and examples of each module and sub-module of the apparatus in the embodiments of the present disclosure, reference may be made to the related descriptions of corresponding steps in the foregoing method embodiments, which are not repeated herein.
Fig. 10 is a block diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 10, the electronic device includes a memory 1010 and a processor 1020, where the memory 1010 stores a computer program executable on the processor 1020. There may be one or more memories 1010 and one or more processors 1020. The memory 1010 may store one or more computer programs that, when executed by the electronic device, cause the electronic device to perform the methods provided by the foregoing method embodiments. The electronic device may further include a communication interface 1030 for communicating with external devices and exchanging data.
If the memory 1010, the processor 1020, and the communication interface 1030 are implemented independently, they may be connected to each other and communicate with each other via a bus. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, or an Extended Industry Standard Architecture (EISA) bus, among others. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 10, but this does not mean there is only one bus or only one type of bus.
Alternatively, in a specific implementation, if the memory 1010, the processor 1020, and the communication interface 1030 are integrated on a single chip, the memory 1010, the processor 1020, and the communication interface 1030 may communicate with each other through internal interfaces.
It should be appreciated that the processor may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor or any conventional processor. It is noted that the processor may be a processor supporting the Advanced RISC Machines (ARM) architecture.
Further, optionally, the memory may include a read-only memory and a random access memory, and may further include a nonvolatile random access memory. The memory may be volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may include Read-Only Memory (ROM), Programmable ROM (PROM), Erasable Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory, among others. Volatile memory can include Random Access Memory (RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DR RAM).
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present disclosure are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another, for example from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, Bluetooth, microwave) means. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. Available media may be magnetic media (e.g., floppy disk, hard disk, magnetic tape), optical media (e.g., Digital Versatile Disc (DVD)), or semiconductor media (e.g., Solid State Disk (SSD)), and so on. It is noted that the computer-readable storage medium mentioned in the present disclosure may be a non-volatile storage medium, in other words, a non-transitory storage medium.
It will be appreciated by those of ordinary skill in the art that all or part of the steps of implementing the above embodiments may be implemented by hardware, or may be implemented by a program to instruct related hardware, and the program may be stored in a computer readable storage medium, where the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
In the description of embodiments of the present disclosure, a description of reference to the terms "one embodiment," "some embodiments," "examples," "particular examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
In the description of the embodiments of the present disclosure, unless otherwise indicated, "/" means or, for example, a/B may represent a or B. "and/or" herein is merely an association relationship describing an association object, and means that three relationships may exist, for example, a and/or B may mean: a exists alone, A and B exist together, and B exists alone.
In the description of the embodiments of the present disclosure, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more such features. In the description of the embodiments of the present disclosure, unless otherwise indicated, the meaning of "a plurality" is two or more.
The foregoing is illustrative of the present disclosure and is not to be construed as limiting thereof, but rather as various modifications, equivalent arrangements, improvements, etc., which are within the spirit and principles of the present disclosure.
Claims (24)
1. An effect graph generation method, comprising:
In response to a first selection operation on a model to be displayed and a second selection operation on a first scene graph, displaying a reference graph on a display interface, wherein the reference graph comprises a model graph to be displayed corresponding to the model to be displayed and the first scene graph at least partially located at the lower layer of the model graph to be displayed;
and in response to a generation operation on the reference graph, displaying at least one effect graph on the display interface, wherein each effect graph in the at least one effect graph is generated by using an AI drawing model with the reference graph as a reference.
2. The method of claim 1, further comprising:
And in response to a first adjustment operation on the model diagram to be displayed, updating the model diagram to be displayed in the reference diagram based on the first adjustment operation.
3. The method according to claim 2, wherein the corner points of the model diagram to be displayed are provided with adjustable anchor points, and the first adjustment operation on the model diagram to be displayed comprises: and a second adjustment operation for the adjustable anchor point.
4. The method of claim 1, wherein presenting the reference graph on the display interface in response to the first selection operation of the model to be displayed and the second selection operation of the first scene graph comprises:
Responding to a first selection operation of a model to be displayed, and displaying the model diagram to be displayed in a reference diagram on a display interface;
And responding to a second selection operation of a target scene template in a scene template list, and displaying the first scene graph in the reference graph on the display interface, wherein the first scene graph is the scene graph of the target scene template.
5. The method of claim 4, wherein before presenting the first scene graph in the reference graph on the display interface in response to the second selection operation of the target scene template in the scene template list, the method further comprises:
and responding to a third selection operation of the scene tag, and displaying the scene template list corresponding to the third selection operation on the display interface.
6. The method of claim 1, wherein presenting the reference graph on the display interface in response to the first selection operation of the model to be displayed and the second selection operation of the first scene graph comprises:
And responding to a first selection operation of the model to be displayed and a second selection operation of the first scene graph in the local file, and displaying a reference graph on a display interface.
7. The method of any of claims 1-6, wherein presenting a reference graph on the display interface in response to a first selection operation of a model to be displayed and a second selection operation of a first scene graph comprises:
Responding to a first selection operation of a model to be displayed and a second selection operation of a first scene graph, and displaying a reference graph in a first page of the display interface;
The method further comprises the steps of: responding to the jump operation of the first page, and displaying a second page on the display interface;
and responding to the generation operation of the reference graph, displaying at least one effect graph on the display interface, wherein the method comprises the following steps: and responding to the generation operation of the reference graph, and displaying the at least one effect graph on the second page.
8. The method of claim 7, wherein the second page includes a generate control, an attribute presentation area for presenting selectable attributes of the effect graph, a reference graph presentation area for presenting the reference graph, and an effect graph presentation area for presenting the at least one effect graph;
And in response to the operation of generating the reference graph, displaying the at least one effect graph on the second page, wherein the method comprises the following steps:
And responding to the triggering operation of the generation control, and displaying the at least one effect graph corresponding to the selected selectable attribute in the effect graph display area based on the selected selectable attribute in the attribute display area.
9. The method of any of claims 1-6, wherein the first scene graph includes a foreground graph for being located at an upper level of the model graph to be displayed and a background graph for being located at a lower level of the model graph to be displayed.
10. The method of any of claims 1-6, further comprising:
Displaying a configuration page on the display interface;
And responding to a fourth selection operation of a scene template uploading control in the configuration page, displaying a scene template uploading page, wherein the scene template uploading page comprises a template picture uploading control, a template name input control and an optional template label control.
11. The method according to claim 2 or 3, wherein presenting at least one effect graph on the display interface in response to the generation operation of the reference graph comprises:
in the case that the first scene graph is from a scene template list, determining a reference color block diagram corresponding to the updated reference graph in response to a generation operation on the updated reference graph;
Determining a model line drawing corresponding to the updated model drawing to be displayed;
And obtaining the at least one effect graph by using the AI drawing model based on the model line graph and the reference color block graph.
12. The method of claim 11, wherein determining a reference color block map corresponding to the updated reference map comprises:
filling the updated model diagram to be displayed to obtain a model color block diagram;
determining a scene color block diagram corresponding to the first scene diagram based on the first scene diagram;
and superposing the model color block diagram and the scene color block diagram to obtain the reference color block diagram.
13. The method of claim 11, wherein determining the model line drawing corresponding to the updated model drawing to be displayed comprises:
Acquiring the model diagram to be displayed before updating;
extracting the model diagram to be displayed before updating to obtain an initial model line diagram corresponding to the model diagram to be displayed before updating;
The model line drawing is determined based on the initial model line drawing and the first adjustment operation.
14. The method of claim 2, wherein presenting at least one effect graph at the display interface in response to the generating of the reference graph comprises:
Under the condition that the first scene graph is from a local file, determining a reference line graph corresponding to the updated reference graph in response to the generation operation of the updated reference graph;
Determining a model line drawing corresponding to the updated model drawing to be displayed;
And obtaining the at least one effect graph by using the AI drawing model based on the model line graph and the reference line graph.
15. The method of any of claims 1-6, wherein presenting the reference graph at the display interface in response to a first selection operation of the model to be presented and a second selection operation of the first scene graph comprises:
displaying a model to be displayed on the display interface, wherein the model to be displayed is a three-dimensional model of a component to be displayed;
responding to a first selection operation of the model to be displayed, intercepting a main projection view of the model to be displayed, and taking the main projection view as the model diagram to be displayed, wherein the main projection view is a view capable of reflecting the structure of the model to be displayed;
and responding to a second selection operation of the first scene graph, and displaying the reference graph on the display interface.
16. An effect map generating apparatus comprising:
the first display unit is used for responding to a first selection operation of a model to be displayed and a second selection operation of a first scene graph, and displaying a reference graph on a display interface, wherein the reference graph comprises a model graph to be displayed corresponding to the model to be displayed and the first scene graph at least partially positioned at the lower layer of the model graph to be displayed;
And the second display unit is used for responding to the generation operation of the reference graph and displaying at least one effect graph on the display interface, wherein the effect graph in the at least one effect graph is generated by using an AI drawing model and taking the reference graph as a reference.
17. The apparatus of claim 16, further comprising:
and the updating unit is used for responding to the first adjustment operation on the model diagram to be displayed and updating the model diagram to be displayed in the reference diagram based on the first adjustment operation.
18. The apparatus of claim 17, wherein the corner points of the model diagram to be displayed are provided with adjustable anchor points, the first adjustment operation includes a second adjustment operation on the adjustable anchor points, and the updating unit is further configured to respond to the second adjustment operation.
19. The apparatus of claim 16, wherein the first display unit is further configured to:
Responding to a first selection operation of a model to be displayed, and displaying the model diagram to be displayed in a reference diagram on a display interface;
And responding to a second selection operation of a target scene template in a scene template list, and displaying the first scene graph in the reference graph on the display interface, wherein the first scene graph is the scene graph of the target scene template.
20. The apparatus of claim 16, wherein the first display unit is further configured to:
And responding to a first selection operation of the model to be displayed and a second selection operation of the first scene graph in the local file, and displaying a reference graph on a display interface.
21. The apparatus of claim 17 or 18, wherein the second display unit is further configured to:
in the case that the first scene graph is from a scene template list, determining a reference color block diagram corresponding to the updated reference graph in response to a generation operation on the updated reference graph;
Determining a model line drawing corresponding to the updated model drawing to be displayed;
And obtaining the at least one effect graph by using the AI drawing model based on the model line graph and the reference color block graph.
22. The apparatus of claim 17, wherein the second display unit is further configured to:
Under the condition that the first scene graph is from a local file, determining a reference line graph corresponding to the updated reference graph in response to the generation operation of the updated reference graph;
Determining a model line drawing corresponding to the updated model drawing to be displayed;
And obtaining the at least one effect graph by using the AI drawing model based on the model line graph and the reference line graph.
23. An electronic device, comprising:
At least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-15.
24. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-15.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311458437.5A CN118097083A (en) | 2023-11-02 | 2023-11-02 | Effect graph generation method and device, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118097083A true CN118097083A (en) | 2024-05-28 |
Family
ID=91164368
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311458437.5A Pending CN118097083A (en) | 2023-11-02 | 2023-11-02 | Effect graph generation method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118097083A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||