CN113449356B - Indoor automatic layout method, indoor automatic layout device, computer equipment and computer-readable storage medium - Google Patents


Info

Publication number
CN113449356B
CN113449356B (application CN202111017772.2A)
Authority
CN
China
Prior art keywords
state information
target room
user instruction
target
indoor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111017772.2A
Other languages
Chinese (zh)
Other versions
CN113449356A (en)
Inventor
Inventor not disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Xumi Yuntu Space Technology Co Ltd
Original Assignee
Shenzhen Xumi Yuntu Space Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Xumi Yuntu Space Technology Co Ltd
Priority to CN202111017772.2A
Publication of CN113449356A
Application granted
Publication of CN113449356B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00: Computer-aided design [CAD]
    • G06F 30/10: Geometric CAD
    • G06F 30/12: Geometric CAD characterised by design entry means specially adapted for CAD, e.g. graphical user interfaces [GUI] specially adapted for CAD
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16: Sound input; Sound output
    • G06F 3/167: Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00: Computer-aided design [CAD]
    • G06F 30/10: Geometric CAD
    • G06F 30/13: Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Hardware Design (AREA)
  • Architecture (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Civil Engineering (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Structural Engineering (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure relates to the technical field of indoor automatic layout, and provides an indoor automatic layout method, an indoor automatic layout device, computer equipment, and a computer-readable storage medium. The method comprises the following steps: acquiring a first user instruction indicating a target room to be laid out; controlling the items in the target room to move based on the first user instruction and first state information of the target room; acquiring a second user instruction indicating that the items should stop moving; determining a score corresponding to second state information of the target room based on the second user instruction and the second state information; and, in response to the score being less than a score threshold, controlling at least one of the items to move to a target position. The interaction mode is simple, the user can watch the indoor items move in real time, and the layout design process is intuitive and highly visual, greatly improving the user experience.

Description

Indoor automatic layout method, indoor automatic layout device, computer equipment and computer-readable storage medium
Technical Field
The present disclosure relates to the field of indoor automatic layout technologies, and in particular, to an indoor automatic layout method and apparatus, a computer device, and a computer-readable storage medium.
Background
Indoor design produces an indoor layout scheme that is functional, comfortable, attractive, and able to meet the living requirements of users, based on material and technical means and indoor design requirements, according to the building's intended use, the indoor environment, and applicable standards. The traditional indoor design process comprises stages such as design preparation, scheme construction, construction drawing design, and design implementation; it demands a great deal of a designer's energy, the design cycle is long, customers have little independent choice, and the display of design effects is limited. Therefore, to overcome the shortcomings of the conventional indoor design scheme, adopting three-dimensional simulation technology for indoor design has become increasingly important.
In the current indoor design process, a designer usually designs for the indoor environment, produces an indoor design scheme, and provides it to the user. When the user is not satisfied with the design, the designer must modify the scheme after communicating with the user; this communication is inefficient and often ineffective, and the user experience is poor.
Disclosure of Invention
In view of this, the embodiments of the present disclosure provide an indoor automatic layout method, an indoor automatic layout device, a computer device, and a computer-readable storage medium, so as to solve the problems of low communication efficiency and poor user experience in the indoor design process in the prior art.
In a first aspect of the embodiments of the present disclosure, an indoor automatic layout method is provided, including:
obtaining a first user instruction, wherein the first user instruction comprises an instruction for a target room to be laid out;
controlling the object in the target room to move based on the first user instruction and first state information of the target room, wherein the first state information of the target room comprises the size of the target room, the size of the object in the target room and the current position of the object in the target room;
obtaining a second user instruction, wherein the second user instruction comprises an instruction to stop moving the article;
acquiring a score corresponding to the second state information of the target room based on the second user instruction and the second state information of the target room;
in response to the score being less than the score threshold, controlling at least one of the items to move to the target location.
In a second aspect of the embodiments of the present disclosure, an indoor automatic layout device is provided, including:
a first obtaining module configured to obtain a first user instruction, wherein the first user instruction includes an instruction for a target room to be laid out;
the mobile control module is configured to control the object in the target room to move based on a first user instruction and first state information of the target room, wherein the first state information of the target room comprises the size of the target room, the size of the object in the target room and the current position of the object in the target room;
a second obtaining module configured to obtain a second user instruction, wherein the second user instruction comprises an instruction to stop moving the item;
the score acquisition module is configured to determine a score corresponding to the second state information of the target room based on the second user instruction and the second state information of the target room;
a goal state acquisition module configured to control at least one of the items to move to a goal location in response to the score being less than a score threshold.
In a third aspect of the embodiments of the present disclosure, a computer device is provided, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the steps of the above method when executing the computer program.
In a fourth aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, which stores a computer program, which when executed by a processor, implements the steps of the above-mentioned method.
Compared with the prior art, the embodiments of the disclosure have the following beneficial effects: the items in the target room are moved automatically upon receiving the instruction indicating that the target room is to be laid out, and when the instruction indicating that the items in the target room should stop moving is received, the items are moved to target positions based on the second state information of the target room and the score. Throughout the indoor layout design process, the user only needs to input instructions to start and finish the layout; the interaction mode is simple, and no tedious communication with a designer is required. Meanwhile, the user can watch the indoor items move in real time, so the layout design process is intuitive and highly visual, greatly improving the user experience.
Drawings
To illustrate the technical solutions in the embodiments of the present disclosure more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. The drawings in the following description are only some embodiments of the present disclosure; those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a scenario diagram of an application scenario of an embodiment of the present disclosure;
fig. 2 is a flowchart of an indoor automatic layout method provided by an embodiment of the present disclosure;
fig. 3 is a schematic diagram of an automatic layout control model in an indoor automatic layout method provided by an embodiment of the present disclosure;
fig. 4 is a flowchart for controlling the movement of the object in the target room in the indoor automatic layout method provided by the embodiment of the disclosure;
FIG. 5 is a flowchart of an exemplary embodiment of an indoor automatic layout method provided by an embodiment of the present disclosure;
fig. 6 is a block diagram of an indoor automatic layout device provided in the embodiment of the present disclosure;
fig. 7 is a schematic diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the disclosed embodiments. However, it will be apparent to one skilled in the art that the present disclosure may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present disclosure with unnecessary detail.
To improve the indoor living environment, improve indoor functions, and meet living needs, users generally need interior design for houses, offices, and the like. The traditional indoor design process includes stages such as design preparation, scheme construction, construction drawing design, and design implementation; it requires a great deal of a designer's effort, the design cycle is long, the user has little autonomy and a low sense of participation during the process, and because the design effect cannot be shown in real time, its display is limited. To overcome these defects of the traditional indoor design scheme, indoor design using three-dimensional simulation technology has become increasingly popular in the market.
When indoor design is carried out, a designer usually designs for the indoor environment, produces a design scheme, and provides it to the user; the user communicates the parts of the design they are unsatisfied with, and the designer repeatedly modifies and adjusts the scheme according to that feedback. For example, when designing each room of a house, the positions of items in each room need to be adjusted, where the items may include movable objects such as furniture and home appliances. Because there is generally a gap between the appeal the user expresses and the appeal the designer understands, repeated communication is needed; the communication is inefficient and its effect is poor. Only after understanding the user's appeal can the designer change the positions of the items in the room and produce a revised scheme; during this period the user can neither participate in nor follow the design process in real time, nor see the whole process of the items' position changes. The user experience is poor, and it is difficult to reach the most satisfactory indoor design result.
This embodiment provides an indoor automatic layout method that lays out the positions of indoor items automatically. After the user issues an instruction to lay out a room, the items in the room move automatically, so the user can watch the moving process in real time and view the layout scheme as it changes. When the user sees a layout scheme close to satisfactory, the user can issue an instruction to stop moving the items; the indoor items are then automatically adjusted, starting from the positions at which the stop instruction was issued, to target positions that meet the scoring requirement, after which they stop moving. The room has then reached the target state, and the user obtains the target layout scheme. No communication with a designer is required throughout: the user only needs to issue a start instruction, can watch the whole process of the items moving automatically in real time, and issues a stop instruction to obtain a satisfactory indoor layout scheme. This improves indoor design efficiency and the design effect, and greatly enhances the user experience.
An indoor automatic layout method and apparatus according to an embodiment of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 is a scene schematic diagram of an application scenario according to an embodiment of the present disclosure. The application scenario may include terminal devices 101, 102, and 103, server 104, and network 105.
The terminal apparatuses 101, 102, and 103 may be hardware or software. When they are hardware, they may be various electronic devices having a display screen and supporting communication with the server 104, including but not limited to smartphones, tablet computers, laptop computers, desktop computers, and the like. The display screen may be a touch screen or a non-touch screen; in the latter case, the terminal device is further provided with an interactive device such as a mouse or keyboard. The terminal device may also be of other types, such as a smart TV, a VR (Virtual Reality) device, an XBOX game console, or a car screen, without limitation. When the terminal devices 101, 102, and 103 are software, they may be installed in the electronic devices described above, implemented as either multiple software modules or a single software module, which the embodiments of the present disclosure do not limit. Further, various applications may be installed on the terminal devices 101, 102, and 103, such as an indoor design application, a data processing application, an instant messaging tool, social platform software, a search application, or a shopping application.
The server 104 may be a server providing various services, for example, a backend server receiving a request sent by a terminal device establishing a communication connection with the server, and the backend server may receive and analyze the request sent by the terminal device and generate a processing result. The server 104 may be a server, may also be a server cluster composed of a plurality of servers, or may also be a cloud computing service center, which is not limited in this disclosure. The server 104 may be hardware or software. When the server 104 is hardware, it may be various electronic devices that provide various services to the terminal devices 101, 102, and 103. When the server 104 is software, it may be multiple software or software modules providing various services for the terminal devices 101, 102, and 103, or may be a single software or software module providing various services for the terminal devices 101, 102, and 103, which is not limited by the embodiment of the present disclosure.
The network 105 may be a wired network connected by coaxial cable, twisted pair, or optical fiber, or a wireless network that interconnects communication devices without wiring, for example Bluetooth, Near Field Communication (NFC), or infrared, which is not limited in the embodiments of the present disclosure.
A user can establish a communication connection with the server 104 via the network 105 through the terminal apparatuses 101, 102, and 103 to receive or transmit information. Specifically, an indoor design application is installed on the terminal device; while it runs, the terminal device interface displays the page of the target room to be designed. The page may be rendered with a visualization engine, including but not limited to UE (Unreal Engine), the V-Ray engine, and the like, to realize page visualization.
The user inputs a first user instruction for room layout through the terminal device. After the terminal device sends the first user instruction to the server 104, the server 104 acquires first state information of the target room, controls the items in the target room to move, and displays the whole moving process in real time through the terminal device so that the user can watch it. During the movement, the user may at any time input, through the terminal device, a second user instruction indicating that the items should stop moving. After the terminal device sends the second user instruction to the server 104, the server 104 obtains second state information of the target room and the corresponding score, and then controls the items to stop after moving to target positions based on the score and the second state information. The state information of the target room at that point is the target state information; it is displayed in real time through the terminal device, and the user can see the positions of the items in the target room when they stop moving.
It should be noted that the specific types, numbers and combinations of the terminal devices 101, 102 and 103, the server 104 and the network 105 may be adjusted according to the actual requirements of the application scenario, and the embodiment of the present disclosure does not limit this.
In other embodiments, the indoor automatic layout method provided in this embodiment may also be implemented only by the terminal device, and at this time, the terminal device is built with a plurality of software or software modules providing various services, or may be a single software or software module providing various services, which is not limited herein.
Fig. 2 is a flowchart of an indoor automatic layout method according to an embodiment of the present disclosure. The indoor automatic layout method of fig. 2 may be performed by the terminal device or the server of fig. 1. As shown in fig. 2, the indoor automatic layout method includes the following steps:
s201, a first user instruction is obtained, wherein the first user instruction indicates a target room to be laid out.
During indoor layout design, the indoor three-dimensional model to be laid out is displayed through the terminal device. A home typically includes a plurality of rooms, each with a plurality of items disposed in it; the items may be movable furniture, household appliances, or the like. Each room needs to be designed one by one: the selected room is the target room, the display screen of the terminal device shows the target room, and items in the target room can be added or deleted as required.
When a user needs to lay out a target room, a first user instruction is sent to the terminal device, the sending mode of the first user instruction can be set according to needs, for example, the terminal device is provided with a touch display screen, the user can click any position of the display screen, and the terminal device obtains the first user instruction based on the operation. For another example, a "start" button is displayed on the interface of the terminal device, and the user clicks the start button, so that the terminal device obtains the first user instruction based on the operation. For another example, the terminal device is provided with an image acquisition unit, when the user needs to lay out the target room, the user can make a corresponding gesture towards the terminal device, the image acquisition unit acquires the gesture of the user and analyzes the gesture, and when the gesture corresponding to the user is identified, the first user instruction is acquired. For another example, the terminal device is provided with a sound collection unit (e.g., a microphone or a microphone array) and a sound recognition unit, when a user needs to lay out a target room, the user can send a voice to the terminal device, the sound collection unit of the terminal device collects an audio and then performs voice recognition through the voice recognition unit, and when a corresponding voice instruction is recognized, the first user instruction is obtained. For another example, the terminal device is provided with an interactive device such as a mouse and a keyboard, and the user may also input the first user instruction to the terminal device by clicking a mouse or a keyboard button. Of course, in other embodiments, the user may also issue the first user instruction to the terminal device by other manners, which is not limited to the above-mentioned case.
S202, controlling the items in the target room to move based on the first user instruction and the first state information of the target room, wherein the first state information of the target room includes the size of the target room, the sizes of the items in the target room, and the current positions of the items in the target room.
When the target room is not laid out, each article in the target room is in an initial state, and at this time, initial state information of the target room, including the size of the target room, the size of the article in the target room, and the initial position, may be acquired. The size of the target room and the size of the object are three-dimensional information, and the size is determined to be a fixed value in the three-dimensional modeling process and cannot change along with the movement of the object; the position of the article is changed along with the movement of the article, the initial position of the article corresponds to the position before the article is moved for the first time, and the current position corresponds to the position at the current moment in the moving process.
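As a hedged illustration of the state information described above (fixed sizes, changing positions), the structure could be represented as follows; the type and field names are assumptions for illustration, not part of the disclosure:

```python
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    size: tuple      # (width, depth, height); fixed after 3-D modeling
    position: tuple  # (x, y, z); updated every time the item moves

@dataclass
class RoomState:
    room_size: tuple  # (width, depth, height) of the target room; fixed
    items: list       # the movable items currently in the room

    def snapshot(self):
        # The state information at the current instant: sizes plus positions.
        return {
            "room_size": self.room_size,
            "items": [(i.name, i.size, i.position) for i in self.items],
        }

bed = Item("bed", size=(2.0, 1.8, 0.5), position=(0.0, 0.0, 0.0))
state = RoomState(room_size=(5.0, 4.0, 3.0), items=[bed])
```

The first state information corresponds to a snapshot taken before the first move; later snapshots differ only in the items' positions.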
After the instruction indicating that the target room is to be laid out is obtained, the initial state of each item is its current state, and each item in the target room is controlled to move starting from that state. The movable directions include six directions (up, down, left, right, forward, and backward), and they can be restricted as required for a specific movement. For example, a freely movable item such as a bed, table, or stool can move in any direction; for an item whose movement is limited in some directions in practice, such as a wardrobe whose back is usually placed against a wall, forward and backward movement is disallowed. An item may move randomly or in a preset manner, which is not limited here; the step length of a single movement may be a fixed or random value; items may be moved one at a time or several simultaneously. The moving process of each item in the target room is displayed in real time through the terminal device, and the user can view the whole process. During the movement, if no new user instruction is received, the items in the target room may keep moving according to the preset manner, or stop moving when a termination condition is reached.
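A minimal sketch of a single movement step, assuming unit offsets for the six directions and a per-item set of allowed directions (e.g. a wardrobe kept against a wall cannot move forward or backward); the function and variable names are hypothetical:

```python
import random

# The six movement directions named above, as (x, y, z) unit offsets.
DIRECTIONS = {
    "up": (0, 0, 1), "down": (0, 0, -1),
    "left": (-1, 0, 0), "right": (1, 0, 0),
    "forward": (0, 1, 0), "backward": (0, -1, 0),
}

def move_item(position, allowed_directions, step=0.1):
    """Move one item a single step in a randomly chosen allowed direction."""
    direction = random.choice(sorted(allowed_directions))
    dx, dy, dz = DIRECTIONS[direction]
    x, y, z = position
    return (x + dx * step, y + dy * step, z + dz * step)

# A wardrobe whose back stays against the wall: no forward/backward moves.
wardrobe_dirs = {"up", "down", "left", "right"}
new_pos = move_item((1.0, 0.0, 2.0), wardrobe_dirs)
```

The step length here is fixed at 0.1, but as the description notes it could equally be random, and several items could be stepped in one iteration.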
S203, acquiring a second user instruction, wherein the second user instruction indicates that the article stops moving.
In the process of moving each article in the target room, if the user finds that the indoor layout scheme is close to satisfaction, the user can send a second user instruction for indicating each article to stop moving to the terminal device, and the sending mode of the second user instruction can be set according to needs.
Optionally, the second user instruction is issued in a manner consistent with the manner in which the first user instruction was issued. For example, when the first user instruction was issued by clicking the touch display of the terminal device, the second user instruction may be issued by clicking the touch display again. When the first user instruction was issued by clicking a "start" button on the display interface, the second user instruction may be issued by clicking a "stop" button. When the first user instruction was issued by a gesture, the second user instruction may be issued by another gesture. When the first user instruction was issued by a voice instruction, the second user instruction may also be issued by voice. When the first user instruction was sent through an interactive device of the terminal device, the second user instruction may also be sent through that device. Of course, the manner of issuing the second user instruction need not match that of the first: each may be any of the foregoing manners, the relationship between them is not limited, and the user's operation is therefore more flexible.
And S204, acquiring a score corresponding to the second state information of the target room based on the second user instruction and the second state information of the target room.
While each item in the target room moves, its position changes continuously, so the state information of the target room is updated continuously; each time an item moves, the state information is updated once. After the second user instruction is received, second state information of the target room (that is, the current state information at that moment) is acquired and processed through a preset scoring model to obtain the score corresponding to the second state of the target room. The score reflects how close the positions of the items in the target room are to an ideal layout, and is used to confirm whether the corresponding state meets the preset stop requirement.
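The disclosure does not fix the internals of the preset scoring model, so the following is only an illustrative stand-in that scores the second state information by penalising items whose extents leave the room; a real model could be learned or rule-based:

```python
def score_state(room_size, items):
    """Toy stand-in for the preset scoring model: start from a perfect
    score and penalise every item that sticks out of the room volume."""
    score = 1.0
    rw, rd, rh = room_size
    for size, position in items:
        w, d, h = size
        x, y, z = position
        if x < 0 or y < 0 or z < 0 or x + w > rw or y + d > rd or z + h > rh:
            score -= 0.25
    return max(score, 0.0)

inside = [((2.0, 1.8, 0.5), (0.0, 0.0, 0.0))]
outside = [((2.0, 1.8, 0.5), (4.0, 0.0, 0.0))]  # 2 m wide item at x = 4 m

full_marks = score_state((5.0, 4.0, 3.0), inside)   # no penalty
penalised = score_state((5.0, 4.0, 3.0), outside)   # one item out of bounds
```

Whatever its form, the scoring model maps the second state information to a single number that can be compared against the score threshold of S205.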
And S205, in response to the score being smaller than the score threshold value, controlling at least one of the items to move to the target position.
When the score meets the preset requirement, each item in the target room has been moved to its target position and no further movement is needed: the indoor layout of the target room is complete, and the current state information of the target room is the target state information, which includes the size of the target room, the sizes of the items, and the target positions. When the score does not meet the preset requirement, the position of at least one item in the target room needs to be adjusted; adjustment continues, starting from the state at the moment the stop instruction was received, until the score meets the preset requirement.
While the items in the target room move, the score corresponding to the state information of the target room can be computed continuously, whether or not the second user instruction has been obtained. The difference is that before the second user instruction is obtained, even if the score corresponding to the state information is computed and meets the preset requirement, the movement of the items is not adjusted based on it; only after the second user instruction is obtained is the movement adjusted based on the score, so as to control the items to move to their target positions.
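Putting S203 to S205 together, the control flow after the stop instruction can be sketched as a loop that keeps adjusting item positions until the score reaches the threshold; the function names, the toy state, and the `max_steps` safety valve are all assumptions for illustration:

```python
def settle_layout(state, score_fn, adjust_fn, threshold=0.9, max_steps=1000):
    """After the second user instruction: while the score is below the
    threshold, keep adjusting at least one item; stop once it is met."""
    for _ in range(max_steps):
        if score_fn(state) >= threshold:
            return state  # target state reached; items stop moving
        state = adjust_fn(state)
    return state  # give up after max_steps (an assumed safety valve)

# Toy stand-ins: the "state" is a single score and each adjustment improves it.
final = settle_layout(state=0.5, score_fn=lambda s: s,
                      adjust_fn=lambda s: s + 0.1)
```

In the described method, `state` would be the room's state information, `score_fn` the preset scoring model, and `adjust_fn` a step that moves at least one item.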
According to the technical solution provided by this embodiment of the disclosure, the articles in the target room are controlled to move automatically upon receiving the instruction indicating that the target room is to be laid out, and are controlled to move to their target positions based on the second state information of the target room and the score when the instruction indicating that the articles should stop moving is received. Throughout the indoor layout design process, the user only needs to input instructions to start and finish the layout; the interaction is simple and requires no tedious back-and-forth with a designer. Meanwhile, the user can watch the indoor articles move in real time, making the indoor layout design process more intuitive and highly visual, which greatly improves the user experience.
In addition, the present embodiment can control each article in the target room through the constructed automatic layout control model. Fig. 3 is a schematic diagram of an automatic layout control model provided in an embodiment of the present disclosure, and the automatic layout control model includes a state information encoder 31, a probability acquisition model 32, a movement executor 33, and a dynamic scoring agent 34.
The state information encoder 31 is a convolutional neural network comprising 3 convolutional layers, denoted convolutional layer 311, convolutional layer 312, and convolutional layer 313, whose parameters are, in order: (512 × 512 × 512 × 3), (256 × 256 × 256 × 16), and (128 × 128 × 128 × 64), where the first three numbers denote the three-dimensional size of the convolutional layer and the last denotes the number of convolution kernels (the same applies below). Of course, the state information encoder 31 may also include other structures such as input layers and output layers, which are not exhaustively listed here. The state information encoder 31 receives the current state information of the target room as input and outputs a coding vector corresponding to the current state information, with parameters (128 × 128 × 128 × 64), thereby encoding the current state information.
Each probability acquisition model 32 is a convolutional neural network; their number (denoted M) equals the number of articles, with each probability acquisition model 32 corresponding to one article. Each probability acquisition model 32 includes a basic structure, a self-attention mechanism structure, and an interactive structure. The basic structure includes 5 convolutional layers and 2 fully-connected layers. The 5 convolutional layers are denoted convolutional layer 321, convolutional layer 322, convolutional layer 323, convolutional layer 324, and convolutional layer 325, with parameters, in order: (128 × 128 × 128 × 64), (64 × 64 × 64 × 128), (32 × 32 × 32 × 256), (16 × 16 × 16 × 64), and (8 × 8 × 8 × 32). The 2 fully-connected layers are denoted fully-connected layer 326 and fully-connected layer 327, with parameters, in order: (16384, 128) and (128, 6), where 16384 represents the fully-connected layer's input size, 128 represents the number of its cores, and 6 indicates that the output is a 6-bit vector. The self-attention mechanism structure comprises 2 convolutional layers, denoted convolutional layer 328 and convolutional layer 329, with parameters: (64 × 64 × 64 × 128) and (32 × 32 × 32 × 256).
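As a sanity check on the listed dimensions, the input size 16384 of fully-connected layer 326 should equal the flattened output of convolutional layer 325 (8 × 8 × 8 with 32 kernels). A minimal sketch of this bookkeeping (plain Python; the names are illustrative and not part of the embodiment):

```python
# Layer sizes of the probability acquisition model's basic structure,
# copied from the embodiment: (spatial size, number of kernels/cores).
CONV_SHAPES = [
    (128, 64),   # convolutional layer 321
    (64, 128),   # convolutional layer 322
    (32, 256),   # convolutional layer 323
    (16, 64),    # convolutional layer 324
    (8, 32),     # convolutional layer 325
]
FC_SHAPES = [(16384, 128), (128, 6)]  # fully-connected layers 326, 327

def flattened_size(spatial: int, channels: int) -> int:
    """Element count of a (spatial x spatial x spatial x channels) feature map."""
    return spatial ** 3 * channels
```

Here `flattened_size(8, 32)` is 16384, matching the first fully-connected layer's input, and the final output dimension of 6 corresponds to one probability value per movement direction.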
The input to the basic structure is the coding vector output by the state information encoder 31; the output of convolutional layer 322 is the first feature map, and the output of convolutional layer 323 is the second feature map. The input of the self-attention mechanism structure is the first feature map, and its output is the attention feature map. The interactive structure has an input A, an input B, and an input C, and outputs a third feature map. Input A is the second feature map output by convolutional layer 323, input B is the attention feature map output by the self-attention mechanism structure, and input C is an interactive signal value of 1 (start) or 0 (stop); the interactive signal value corresponds to the acquired user instruction, which is either the first user instruction or the second user instruction. When the user instruction is the first user instruction, the interactive signal value is 1 and the interactive structure outputs the second feature map as the third feature map; when the user instruction is the second user instruction, the interactive signal value is 0 and the third feature map is obtained by multiplying input A by input B element-wise (that is, each element of input A is multiplied by the corresponding element of input B to obtain an element of the output vector). The parameters of the third feature map are (32 × 32 × 32 × 256). The input to convolutional layer 324 is the third feature map. The output of the basic structure is a probability vector whose 6 elements represent the probability values of moving upward, downward, leftward, rightward, forward, and backward, respectively.
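The interactive structure described above acts as a simple gate on the second feature map. A minimal sketch, with flat Python lists standing in for the (32 × 32 × 32 × 256) feature maps:

```python
def interactive_structure(second_feature_map, attention_feature_map, signal):
    """Gate behavior from the embodiment: interactive signal value 1
    (first user instruction, 'start') passes the second feature map through
    unchanged; signal value 0 (second user instruction, 'stop') returns the
    element-wise product of the second feature map and the attention
    feature map as the third feature map."""
    if signal == 1:
        return list(second_feature_map)
    return [a * b for a, b in zip(second_feature_map, attention_feature_map)]
```

With signal 0 the attention feature map thus weights each element of the second feature map, emphasizing the local features the attention mechanism focuses on.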
The input of the movement executor 33 is the probability vector output by the basic structure, and its output is the target moving direction corresponding to the maximum probability value, which is used to control the article to move in that direction. The distance of a single movement may be a fixed step length or a random value, preferably a fixed step length (e.g., 1 unit length, 2 unit lengths, etc.).
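The movement executor reduces to an argmax over the 6-element probability vector. A minimal sketch, using the fixed-step preference stated above (the default of 1 unit length is an assumption):

```python
# Direction order follows the probability vector's 6 elements.
DIRECTIONS = ("up", "down", "left", "right", "forward", "backward")

def movement_executor(probability_vector, step_length=1):
    """Return (target moving direction, distance): the direction with the
    maximum probability value, moved by a fixed step length."""
    index = max(range(len(DIRECTIONS)), key=lambda i: probability_vector[i])
    return DIRECTIONS[index], step_length
```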
The dynamic scoring agent 34 is a convolutional neural network including 4 convolutional layers and 2 fully-connected layers. The 4 convolutional layers are denoted convolutional layer 341, convolutional layer 342, convolutional layer 343, and convolutional layer 344, with parameters, in order: (128 × 128 × 128 × 64), (64 × 64 × 64 × 128), (32 × 32 × 32 × 256), and (16 × 16 × 16 × 64). The 2 fully-connected layers are denoted fully-connected layer 345 and fully-connected layer 346, with parameters, in order: (64, 32) and (32, 1). The input of the dynamic scoring agent 34 is the coding vector output by the state information encoder 31, and its output is the score of the current state information. The score range may be set as required, for example 0 to 100 points, 0 to 10 points, or another range.
Before the automatic layout control model is used to control article movement and obtain scores based on the state information of the target room, the initial automatic layout control model needs to be trained. First, the parameters of the state information encoder 31, the probability acquisition models 32, and the dynamic scoring agent 34 are initialized to random values. Then the training target score is set to a preset value (for example, 100 points), the automatic layout control model is trained with the samples of a data set, the error gradient between each sample's score and the target score is calculated at each training step, and the parameter weights of the state information encoder 31, the probability acquisition models 32, and the dynamic scoring agent 34 are updated continuously. After every complete pass over all samples in the data set, the average score of all samples is computed; when the average scores of two consecutive passes are greater than a score training threshold (which may be set to 90, 95, 98 points, etc.) and remain unchanged, training terminates and parameter updates stop, yielding the automatic layout control model.
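The termination rule for training described above (two consecutive dataset-average scores above the score training threshold and unchanged) can be sketched as:

```python
def should_stop_training(average_scores, score_training_threshold):
    """average_scores holds one dataset-wide average score per complete
    pass over the data set. Training stops when the two most recent
    averages both exceed the threshold and are equal to each other."""
    if len(average_scores) < 2:
        return False
    previous, last = average_scores[-2], average_scores[-1]
    return (last > score_training_threshold
            and previous > score_training_threshold
            and last == previous)
```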
Fig. 4 is a flowchart for controlling the movement of the object in the target room in the indoor automatic layout method provided by the embodiment of the disclosure. After receiving a first user instruction input by a user, the controlling the object in the target room to move in step S202 specifically includes:
S401, acquiring first state information of the target room based on the first user instruction and the initial state information of the target room. When the instruction indicating that the target room is to be laid out has just been received, the target room is in its initial state, that is, each article in the target room is at its initial position; this initial state is the current state of the target room, and the first state information is initialized to it. As the indoor layout process progresses, the current state of the target room is continuously updated.
S402, processing the first state information through the state information encoder to obtain a coding vector corresponding to the first state information. After the first state information of the target room is obtained, it is input to the state information encoder 31 as an input item of the automatic layout control model for processing, obtaining the coding vector corresponding to the first state information.
S403, processing the coding vector through the probability acquisition models to obtain probability values for the movement of each article in the target room in all directions. The number of probability acquisition models equals the number of articles, and each probability acquisition model is trained for its corresponding article. Specifically: the coding vector is input into the probability acquisition model, and feature extraction is performed sequentially by the first convolutional layer (convolutional layer 321) and the second convolutional layer (convolutional layer 322) to obtain the first feature map; feature extraction by the third convolutional layer (convolutional layer 323) then yields the second feature map. The first feature map is input into the self-attention mechanism structure for processing to obtain the attention feature map. The second feature map and the attention feature map are input into the interactive structure together with the interactive signal value; because the user instruction is the first user instruction, the interactive signal value is 1, and the third feature map output by the interactive structure is the second feature map. The third feature map is input into the fourth convolutional layer (convolutional layer 324), processed sequentially by the fourth convolutional layer (convolutional layer 324) and the fifth convolutional layer (convolutional layer 325), and passed through the fully-connected layers to output the probability vector.
In this process, the third feature map output by the interactive structure is the second feature map, so the coding vector corresponding to the current state of the article, once input into the probability acquisition model, is processed by the basic structure to output the probability vector, yielding the probability value of moving in each direction. For the M articles, probability vectors corresponding to the current states of the M articles can thus be obtained.
S404, determining the target moving direction of each article based on its probability values of moving in each direction, and controlling the article to move the target distance in the target moving direction. After the probability vector of an article is obtained, it is input into the movement executor 33, which identifies the maximum probability value in the 6-bit vector as the target probability value, thereby obtaining the target moving direction corresponding to the target probability value and controlling the article to move a fixed step length along that direction.
After each article moves a fixed step length along its target moving direction, its position changes and the corresponding state information changes as well. The state information of the target room after the movement (namely, the second state information) therefore needs to be acquired, the first state information is updated to the second state information, and the process returns to step S402 and repeats. In this way each article is controlled to move automatically, and the user can watch the real-time movement of each article in the target room on the display screen.
In one embodiment, while the articles in the target room are moving, if no second user instruction input by the user is received, the above process is repeated continuously to realize automatic movement, and the movement does not stop on its own.
In another embodiment, when no second user instruction input by the user is received while the articles in the target room are moving, the articles stop moving once a termination condition is met during the repeated movement steps. Specifically, after each fixed-step movement in the target moving direction, the state information of the moved target room (the second state information) is acquired, and the state information before the movement (the first state information) is compared with the state information after it (the second state information) to determine whether the difference between the two is smaller than a state threshold. If it is smaller than the state threshold, the state of the target room has not changed before and after the movement, the target room has reached its final state, and the articles need not move any more. If the difference is greater than or equal to the state threshold, the state of the target room has changed and the final state has not yet been reached, so the process returns to step S402 and repeats until the difference in the state information of the target room before and after movement is smaller than the state threshold.
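The termination condition in this embodiment compares state information before and after a move. A minimal sketch, representing state information as a flat list of item coordinates (a simplification; the full state also includes room and article sizes):

```python
def movement_converged(first_state_info, second_state_info, state_threshold):
    """Return True when the difference between the target room's state
    before and after the articles move is smaller than the state
    threshold, i.e. the target room has reached its final state."""
    difference = sum(abs(a - b)
                     for a, b in zip(first_state_info, second_state_info))
    return difference < state_threshold
```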
The user may input the second user instruction at any time during the movement of the articles. When the second user instruction is received, the currently executing movement of an article is not changed immediately; after steps S402 to S404 are executed, the state information of the target room is acquired and it is determined whether the second user instruction has been received. If it has not been received, it is determined whether the difference in the state information of the target room before and after the movement is smaller than the state threshold; if it has been received, the scoring step of step S203 is performed to obtain the score corresponding to the current state information of the target room. Obtaining the score is not limited to after the second user instruction is received; it may also be obtained throughout the automatic movement of the articles. The difference is that before the second user instruction is received, the movement of the articles is not adjusted based on the score, whereas after the second user instruction is received, the articles are moved based on the score.
After receiving a second user instruction input by the user, the step S205 of controlling the article to move to the target position includes:
judging whether the score corresponding to the current state information of the target room is greater than a score threshold, where the score threshold may be set as required, for example 80, 85, 90, 95, or 98;
if the score is smaller than the score threshold, the current state of the target room has not reached the optimal state. The first state information is updated to the second state information and the process returns to step S402, where the first state information is processed by the state information encoder to obtain the corresponding coding vector. Note that in step S403 the interactive signal value input to the interactive structure is now 0, so the third feature map output by the interactive structure is obtained by multiplying the second feature map by the attention feature map element-wise (each element of the second feature map is multiplied by the corresponding element of the attention feature map to obtain an element of the third feature map). Because the self-attention mechanism structure focuses on the required local features, the local features of the resulting third feature map are more pronounced, which facilitates focusing on and extracting features, makes the determination of the moving direction more accurate, speeds convergence, and makes the target position easier to reach quickly. The above process is repeated until the score is greater than or equal to the score threshold, meaning the current state of the target room has reached the optimal state and the articles need not move any more; at this point the articles in the target room are controlled to stop moving, the current state information of the target room is the target state information, and the user obtains an indoor layout design scheme.
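The score-driven loop of this step can be sketched as follows; `step` and `score` stand in for the probability-model-driven move of steps S402 to S404 and the dynamic scoring agent, and the iteration cap is an added safeguard not stated in the embodiment:

```python
def layout_until_score(state, step, score, score_threshold, max_iterations=10000):
    """Repeat move-and-score until the score reaches the threshold,
    then return the target state information."""
    for _ in range(max_iterations):
        if score(state) >= score_threshold:
            return state  # optimal state reached; articles stop moving
        state = step(state)  # first state info is updated to second state info
    return state
```

For example, with a toy state that improves by one unit per step, `layout_until_score(0, lambda s: s + 1, lambda s: s * 10, 90)` stops at the first state whose score reaches 90 (it returns 9).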
Further, in implementation, the terminal device displays the indoor three-dimensional model to be laid out, making the user's viewing and operation visual; therefore the indoor three-dimensional model needs to be constructed in advance. For a terminal device that can itself provide a three-dimensional model, after obtaining an indoor picture, the device can automatically construct the indoor three-dimensional model from the picture and establish the three-dimensional simulation environment. For other types of terminal devices, after an indoor picture is obtained, the indoor three-dimensional model needs to be constructed from the picture to obtain the three-dimensional models of each room and of the articles in each room.
The indoor three-dimensional model can be constructed by a three-dimensional simulation model, which comprises an attribute detection model, a convolutional neural network encoder, a convolutional neural network decoder, a key point identification model, and a three-dimensional construction model. The attribute detection model detects elements in the indoor picture using methods such as SSD (Single Shot MultiBox Detector) or Faster R-CNN detection to identify each article and its attribute information, which includes the article's size and category (such as bed, stool, wardrobe, etc.). The convolutional neural network encoder comprises 4 convolutional layers with parameters, in order: (160 × 160 × 128), (80 × 80 × 256), (40 × 40 × 512), and (20 × 20 × 512); its input is the indoor picture, and its output is an image coding vector with parameters (10 × 10 × 512). The convolutional neural network decoder comprises 2 convolutional layers with parameters, in order: (20 × 20 × 512) and (40 × 40 × 256); its input is the image coding vector output by the encoder, and its output is a civil-element semantic map, where the civil elements include walls, doors, windows, roofs, floors, and the like.
The key point identification model detects the key points of each civil element from the civil-element semantic map using methods such as SSD (Single Shot MultiBox Detector) or Faster R-CNN detection. The key points are the inflection points of each civil element (for each civil element, the vertices of its outer contour). The contour of each civil element is constructed from its key points, the dimensions of each room, such as length, width, and height, are calculated from those contours, and the attribute information of each civil element is formed by combining these with the civil element's category. The three-dimensional construction model builds a three-dimensional model of each room based on the attribute information of each civil element, builds a three-dimensional model of each article based on the article's attribute information, and combines the rooms' and articles' three-dimensional models to obtain the indoor three-dimensional model corresponding to the indoor picture.
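The room-size calculation from contour key points can be illustrated by taking the extents of the outer-contour vertices along each axis (a simplification; the embodiment first constructs the full contour from the key points):

```python
def room_dimensions(key_points):
    """key_points: (x, y, z) inflection points of a civil element's outer
    contour. Returns (length, width, height) as the extent along each axis."""
    xs, ys, zs = zip(*key_points)
    return max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs)
```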
Fig. 5 is a flowchart of an embodiment of the indoor automatic layout method provided herein. The description takes the indoor layout design of an ordinary house as an example: a house generally includes a plurality of rooms, each containing various items such as furniture and home appliances, and furniture is used as the example. The indoor automatic layout method comprises the following steps:
S501, constructing an indoor three-dimensional model of the house based on the acquired indoor pictures;
S502, acquiring a first user instruction, wherein the first user instruction indicates the target room to be laid out;
S503, acquiring first state information of the target room based on the first user instruction and the initial state information of the target room;
S504, processing the first state information through the state information encoder to obtain a coding vector corresponding to the first state information;
S505, processing the coding vector through the probability acquisition models to obtain probability values for the movement of each article in the target room in all directions;
S506, determining the target moving direction of each article based on those probability values, and controlling the article to move the target distance in the target moving direction;
S507, acquiring second state information of the target room after the movement;
S508, judging whether a second user instruction is received;
if the second user instruction is not received, then:
S509, comparing the state information of the target room before and after the articles move, and judging whether the difference between the two is smaller than a state threshold;
if the state is less than the state threshold value:
S510, controlling each article to stop moving, where the second state information of the target room after the movement is the target state information;
if not, returning to step S503;
if a second user instruction is received, then:
S511, obtaining the score corresponding to the current state information of the target room;
S512, judging whether the score is smaller than the score threshold;
if the score is greater than or equal to the score threshold, go to step S510;
if the score is smaller than the score threshold, the process returns to step S503.
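The flow of steps S503 to S512 can be sketched as one control loop. The callables stand in for the model components (encoder, probability acquisition models, movement executor, and dynamic scoring agent); the iteration cap is an added safeguard not stated in the embodiment:

```python
def automatic_layout(state, move_items, state_difference, score,
                     second_instruction_received, state_threshold,
                     score_threshold, max_iterations=10000):
    """Fig. 5 flow: move the articles (S504-S506) and acquire the new
    state (S507); if no second user instruction was received, stop when
    the state change is below the state threshold (S509-S510); otherwise
    stop when the score reaches the score threshold (S511-S512)."""
    for _ in range(max_iterations):
        previous_state, state = state, move_items(state)
        if second_instruction_received():                  # S508
            if score(state) >= score_threshold:            # S511-S512
                return state                               # S510: target state
        elif state_difference(previous_state, state) < state_threshold:  # S509
            return state                                   # S510
    return state
```

Toy usage: with a numeric state that saturates at 5, the loop stops either when the state no longer changes (no second instruction) or when the score function crosses the threshold.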
All the above optional technical solutions may be combined arbitrarily to form optional embodiments of the present application, and are not described herein again.
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods. For details not disclosed in the embodiments of the apparatus of the present disclosure, refer to the embodiments of the method of the present disclosure.
Fig. 6 is a schematic view of an indoor automatic layout device according to an embodiment of the present disclosure. As shown in fig. 6, the indoor automatic layout apparatus includes:
a first obtaining module 601 configured to obtain a first user instruction, where the first user instruction instructs a target room to perform layout;
a movement control module 602 configured to control the object in the target room to move based on the first user instruction and first status information of the target room, wherein the first status information of the target room includes a size of the target room, a size of the object in the target room, and a current location thereof;
a second obtaining module 603 configured to obtain a second user instruction, wherein the second user instruction comprises an instruction to stop the movement of the item;
the score obtaining module 604 is configured to determine a score corresponding to the second state information of the target room based on the second user instruction and the second state information of the target room;
a target location acquisition module 605 configured to control at least one of the items to move to a target location in response to the score being less than the score threshold.
According to the technical solution provided by this embodiment of the disclosure, the articles in the target room are controlled to move automatically upon receiving the instruction indicating that the target room is to be laid out, and are controlled to move to their target positions based on the second state information of the target room and the score when the instruction indicating that the articles should stop moving is received. Throughout the indoor layout design process, the user only needs to input instructions to start and finish the layout; the interaction is simple and requires no tedious back-and-forth with a designer. Meanwhile, the user can watch the indoor articles move in real time, making the indoor layout design process more intuitive and highly visual, which greatly improves the user experience.
In some embodiments, the movement control module 602 is specifically configured to: acquiring first state information of a target room based on a first user instruction and the initial state information of the target room; processing the first state information through a state information encoder to obtain a coding vector corresponding to the first state information; processing the coding vector through a probability acquisition model to obtain probability values of the movement of the object in the target room to all directions; and determining the target moving direction of the article based on the probability values of the movement of the article to all directions, and controlling the article to move the target distance to the target moving direction.
In some embodiments, the indoor automatic layout apparatus further comprises:
and the three-dimensional model building module 600 is configured to build an indoor three-dimensional model based on the acquired indoor pictures, wherein the indoor three-dimensional model comprises a three-dimensional model of each room and a three-dimensional model of articles in each room.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present disclosure.
Fig. 7 is a schematic diagram of a computer device 7 provided by the embodiment of the present disclosure. As shown in fig. 7, the computer device 7 of this embodiment includes: a processor 701, a memory 702, and a computer program 703 stored in the memory 702 and executable on the processor 701. The steps in the various method embodiments described above are implemented when the computer program 703 is executed by the processor 701. Alternatively, the processor 701 implements the functions of each module/unit in each device embodiment described above when executing the computer program 703.
Illustratively, the computer program 703 may be partitioned into one or more modules/units, which are stored in the memory 702 and executed by the processor 701 to accomplish the present disclosure. One or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 703 in the computer device 7.
The computer device 7 may be a desktop computer, a notebook, a palmtop computer, a cloud server, or another computing device. The computer device 7 may include, but is not limited to, the processor 701 and the memory 702. Those skilled in the art will appreciate that fig. 7 is merely an example of the computer device 7 and does not constitute a limitation of it; the device may include more or fewer components than shown, combine certain components, or use different components. For example, the computer device may also include input/output devices, network access devices, buses, and the like.
The Processor 701 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The storage 702 may be an internal storage unit of the computer device 7, for example, a hard disk or a memory of the computer device 7. The memory 702 may also be an external storage device of the computer device 7, such as a plug-in hard disk provided on the computer device 7, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like. Further, the memory 702 may also include both an internal storage unit of the computer device 7 and an external storage device. The memory 702 is used to store computer programs and other programs and data required by the computer device. The memory 702 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules, so as to perform all or part of the functions described above. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
In the embodiments provided in the present disclosure, it should be understood that the disclosed apparatus/computer device and method may be implemented in other ways. For example, the apparatus/computer device embodiments described above are merely illustrative: the division of modules or units is only a division of logical functions, and other divisions may be made in actual implementation; multiple units or components may be combined or integrated into another system; or some features may be omitted or not implemented. In addition, the shown or discussed mutual coupling, direct coupling, or communication connection may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in another form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on such an understanding, the present disclosure may implement all or part of the flow of the methods in the above embodiments by instructing related hardware through a computer program, which may be stored in a computer-readable storage medium; when executed by a processor, the computer program implements the steps of the above method embodiments. The computer program may comprise computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be subject to suitable additions or subtractions as required by legislative and patent practice within the jurisdiction; for example, in some jurisdictions, computer-readable media may not include electrical carrier signals or telecommunications signals.
The above embodiments are only intended to illustrate the technical solutions of the present disclosure, not to limit them. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced equivalently; such modifications and substitutions do not cause the substance of the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present disclosure, and are intended to be included within the scope of the present disclosure.

Claims (15)

1. An indoor automatic layout method is characterized by comprising the following steps:
obtaining a first user instruction, wherein the first user instruction instructs that a target room be laid out;
controlling items in the target room to move based on the first user instruction and first state information of the target room, wherein the first state information comprises the size of the target room and the sizes and current positions of the items in the target room;
obtaining a second user instruction, wherein the second user instruction instructs the items to stop moving;
determining a score corresponding to second state information of the target room based on the second user instruction and the second state information of the target room;
in response to the score being less than a score threshold, controlling at least one of the items to move to a target location;
wherein the controlling items in the target room to move based on the first user instruction and the first state information of the target room comprises:
obtaining an encoding vector corresponding to the first state information based on the first user instruction and initial state information of the target room;
processing the encoding vector through a probability acquisition model to acquire probability values of an item in the target room moving in each direction;
and determining a target moving direction of the item based on the probability values of the item moving in each direction, and controlling the item to move a target distance in the target moving direction.
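The final three steps of claim 1 form one decision cycle: encode the room state, score each candidate direction, and move an item. The sketch below illustrates that cycle under assumed simplifications; the direction set, step distance, state encoding, and the linear "probability acquisition model" are all hypothetical stand-ins, not the patented model.

```python
import numpy as np

DIRECTIONS = ["north", "east", "south", "west"]  # hypothetical move set
STEP = 0.5  # hypothetical target distance

def encode_state(room_size, item_sizes, item_positions):
    """Flatten the first state information into a fixed-length encoding vector."""
    return np.concatenate([np.ravel(room_size),
                           np.ravel(item_sizes),
                           np.ravel(item_positions)]).astype(float)

def move_probabilities(code_vec, weights):
    """Stand-in probability acquisition model: one linear layer plus softmax."""
    logits = weights @ code_vec
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def layout_step(room_size, item_sizes, item_positions, weights):
    """One claim-1 cycle: encode the state, score each direction, pick the move."""
    code = encode_state(room_size, item_sizes, item_positions)
    probs = move_probabilities(code, weights)
    direction = DIRECTIONS[int(np.argmax(probs))]
    return direction, STEP

rng = np.random.default_rng(0)
weights = rng.normal(size=(len(DIRECTIONS), 10))  # 10 = 2 room dims + 4 item sizes + 4 positions
direction, distance = layout_step([4.0, 3.0],
                                  [[1.0, 0.5], [0.8, 0.8]],
                                  [[1.0, 1.0], [2.0, 2.0]],
                                  weights)
```

In the claimed method this cycle repeats until the score threshold or state threshold of the later claims is met; here a single step is shown for clarity.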
2. The method according to claim 1, wherein the obtaining an encoding vector corresponding to the first state information based on the first user instruction and the initial state information of the target room comprises:
acquiring the first state information of the target room based on the first user instruction and the initial state information of the target room;
and processing the first state information through a state information encoder to obtain the encoding vector corresponding to the first state information.
3. The method according to claim 1, wherein the processing the encoding vector through a probability acquisition model to acquire probability values of an item in the target room moving in each direction comprises:
sequentially processing the encoding vector through a first convolutional layer and a second convolutional layer of a convolutional neural network to extract a first feature map;
processing the first feature map through a third convolutional layer of the convolutional neural network to extract a second feature map;
processing the first feature map through an attention mechanism structure of the convolutional neural network to obtain an attention feature map;
acquiring a third feature map based on the second feature map, the attention feature map, and a user instruction, wherein the user instruction comprises the first user instruction or the second user instruction;
and sequentially processing the third feature map through a fourth convolutional layer and a fifth convolutional layer of the convolutional neural network, and outputting a probability vector through a fully connected layer, wherein each element in the probability vector corresponds to the probability value of the item moving in one direction.
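The five-conv-layer pipeline of claim 3 can be traced with a toy single-channel implementation. Everything below is illustrative: the kernel sizes, the softmax-over-positions attention, the crop-and-add fusion with the user-instruction signal, and the four-direction output are assumptions, not the patent's actual network.

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D convolution of a single-channel map x with kernel k."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def direction_probs(code_map, kernels, fc_w, instruction_signal):
    """Toy version of the claim-3 pipeline; layer shapes and the fusion rule are illustrative."""
    f1 = conv2d(conv2d(code_map, kernels[0]), kernels[1])  # first + second conv -> first feature map
    f2 = conv2d(f1, kernels[2])                            # third conv -> second feature map
    attn = softmax(f1.ravel()).reshape(f1.shape)           # attention map over the first feature map
    ah, aw = f2.shape                                      # crop attention to match the second feature map
    f3 = f2 + attn[:ah, :aw] * instruction_signal          # fuse with user-instruction signal -> third feature map
    f4 = conv2d(conv2d(f3, kernels[3]), kernels[4])        # fourth + fifth conv
    return softmax(fc_w @ f4.ravel())                      # fully connected layer -> probability vector

rng = np.random.default_rng(1)
kernels = [rng.normal(size=(3, 3)) * 0.1 for _ in range(5)]
probs = direction_probs(rng.normal(size=(12, 12)), kernels, rng.normal(size=(4, 4)), 0.5)
```

With a 12x12 encoded map and 3x3 kernels, the five valid convolutions shrink the map to 2x2, whose four values feed the fully connected layer to give one probability per direction.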
4. The method according to claim 3, wherein after the step of determining a score corresponding to the second state information of the target room based on the second user instruction and the second state information of the target room, the method further comprises:
controlling the items in the target room to stop moving in response to the score being greater than or equal to the score threshold.
5. The method according to claim 4, wherein the step of controlling at least one of the items to move to a target location in response to the score being less than the score threshold comprises:
updating the first state information to the second state information, and returning to the step of processing the first state information through the state information encoder to obtain the encoding vector corresponding to the first state information.
6. The method according to claim 2, wherein after the step of controlling the item to move the target distance in the target moving direction, the method further comprises:
acquiring second state information of the target room after the item is moved, wherein the second state information comprises the size of the target room, the sizes of the items in the target room, and the positions of the items after they are moved;
judging whether the difference between the second state information and the first state information of the target room is smaller than a state threshold;
and if the difference between the second state information and the first state information is greater than or equal to the state threshold, updating the first state information to the second state information, and returning to the step of processing the first state information through the state information encoder to obtain the encoding vector corresponding to the first state information.
7. The method according to claim 6, wherein after the step of controlling the item to move the target distance in the target moving direction, the method further comprises:
if the difference between the second state information and the first state information is smaller than the state threshold, controlling the items in the target room to stop moving.
8. The method according to claim 6, wherein after the step of acquiring the second state information of the target room after the item is moved, the method further comprises:
judging whether a second user instruction is acquired;
if the second user instruction is acquired, proceeding to the step of determining a score corresponding to the second state information of the target room based on the second user instruction and the second state information of the target room;
and if the second user instruction is not acquired, proceeding to the step of judging whether the difference between the second state information and the first state information of the target room is smaller than the state threshold.
9. The method according to any one of claims 1 to 8, wherein before the step of obtaining the first user instruction, the method further comprises:
constructing an indoor three-dimensional model based on an acquired indoor picture, wherein the indoor three-dimensional model comprises a three-dimensional model of each room and a three-dimensional model of the items in each room.
10. The method according to claim 9, wherein the step of constructing an indoor three-dimensional model based on the acquired indoor picture comprises:
detecting the acquired indoor picture, and identifying each item in the indoor picture and attribute information of the item, wherein the attribute information comprises the size and category of the item;
identifying the indoor picture to acquire attribute information of each civil engineering element in the indoor picture;
building a three-dimensional model of each room in the indoor picture based on the attribute information of the civil engineering elements;
constructing a three-dimensional model of each item based on the attribute information of the item;
and constructing an indoor three-dimensional model corresponding to the indoor picture based on the three-dimensional models of the rooms and the three-dimensional models of the items.
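The assembly step of claim 10 — combining per-room models and per-item models into one indoor model — can be sketched with a minimal data model. The record fields (`name`, `room`, `category`, `size`) are hypothetical placeholders for the attribute information the claim describes, not a real API of the disclosed system.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ItemModel:
    category: str
    size: Tuple[float, float, float]  # width, depth, height

@dataclass
class RoomModel:
    name: str
    size: Tuple[float, float, float]
    items: List[ItemModel]

def build_indoor_model(element_attrs, item_attrs):
    """Assemble the indoor model from civil-engineering-element and item attributes."""
    rooms = [RoomModel(name=e["name"], size=e["size"], items=[]) for e in element_attrs]
    by_name = {r.name: r for r in rooms}
    for a in item_attrs:  # attach each detected item to the room it was found in
        by_name[a["room"]].items.append(ItemModel(a["category"], a["size"]))
    return rooms

model = build_indoor_model(
    [{"name": "living room", "size": (5.0, 4.0, 2.8)}],
    [{"room": "living room", "category": "sofa", "size": (2.0, 0.9, 0.8)}],
)
```

The point of the sketch is the separation the claim relies on: room geometry comes from the civil engineering elements, item geometry from the detected items, and the indoor model is simply their composition.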
11. The method according to claim 10, wherein the step of identifying the indoor picture to acquire the attribute information of each civil engineering element in the indoor picture comprises:
processing the indoor picture through a convolutional neural network encoder to obtain an image encoding vector corresponding to the indoor picture;
processing the image encoding vector through a convolutional neural network decoder to obtain a civil engineering element semantic map corresponding to the indoor picture;
and detecting key points of each civil engineering element in the civil engineering element semantic map through a key point identification model, obtaining the contour of each civil engineering element according to the key points, and acquiring the attribute information of each civil engineering element, wherein the attribute information comprises the size and category of the civil engineering element.
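The last step of claim 11 — turning detected key points into a contour and size attributes — is a small geometric computation. The sketch below assumes the key points are the corners of a roughly rectangular element (a wall, window, or door in plan view); the angular-sort contour ordering and the bounding-box size are illustrative choices, not the patent's method.

```python
import numpy as np

def element_attributes(keypoints, category):
    """Derive a civil engineering element's contour and size from its detected key points."""
    pts = np.asarray(keypoints, dtype=float)
    # order the corners around the centroid so they form a simple polygon (the contour)
    centre = pts.mean(axis=0)
    order = np.argsort(np.arctan2(pts[:, 1] - centre[1], pts[:, 0] - centre[0]))
    contour = pts[order]
    # size as the axis-aligned bounding box of the key points
    width, height = pts.max(axis=0) - pts.min(axis=0)
    return {"category": category, "contour": contour, "size": (width, height)}

# hypothetical key points of a 2.4 m x 1.2 m window detected in the semantic map
attrs = element_attributes([(0, 0), (2.4, 0), (2.4, 1.2), (0, 1.2)], "window")
```

In the claimed pipeline the key point identification model supplies `keypoints` and the semantic map supplies `category`; this function only shows how contour and size attributes follow from them.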
12. An indoor automatic layout device, comprising:
a first obtaining module configured to obtain a first user instruction, wherein the first user instruction instructs that a target room be laid out;
a movement control module configured to control items in the target room to move based on the first user instruction and first state information of the target room, wherein the first state information of the target room comprises the size of the target room and the sizes and current positions of the items in the target room;
a second obtaining module configured to obtain a second user instruction, wherein the second user instruction instructs the items to stop moving;
a score acquisition module configured to determine a score corresponding to second state information of the target room based on the second user instruction and the second state information of the target room;
a target location acquisition module configured to control at least one of the items to move to a target location in response to the score being less than a score threshold;
wherein the movement control module is specifically configured to: obtain an encoding vector corresponding to the first state information based on the first user instruction and initial state information of the target room; process the encoding vector through a probability acquisition model to acquire probability values of an item in the target room moving in each direction; and determine a target moving direction of the item based on the probability values, and control the item to move a target distance in the target moving direction.
13. The device according to claim 12, further comprising:
a three-dimensional model building module configured to build an indoor three-dimensional model based on an acquired indoor picture, wherein the indoor three-dimensional model comprises a three-dimensional model of each room and a three-dimensional model of the items in each room.
14. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any one of claims 1 to 11 when executing the computer program.
15. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 11.
CN202111017772.2A 2021-09-01 2021-09-01 Indoor automatic layout method, indoor automatic layout device, computer equipment and computer-readable storage medium Active CN113449356B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111017772.2A CN113449356B (en) 2021-09-01 2021-09-01 Indoor automatic layout method, indoor automatic layout device, computer equipment and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111017772.2A CN113449356B (en) 2021-09-01 2021-09-01 Indoor automatic layout method, indoor automatic layout device, computer equipment and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN113449356A CN113449356A (en) 2021-09-28
CN113449356B true CN113449356B (en) 2021-12-07

Family

ID=77819237

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111017772.2A Active CN113449356B (en) 2021-09-01 2021-09-01 Indoor automatic layout method, indoor automatic layout device, computer equipment and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN113449356B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113987647A (en) * 2021-10-29 2022-01-28 广联达科技股份有限公司 Method, device and equipment for generating combined component and readable storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111882645A (en) * 2020-06-23 2020-11-03 北京城市网邻信息技术有限公司 Furniture display method and device
CN112651062A (en) * 2019-12-09 2021-04-13 江苏艾佳家居用品有限公司 Room furniture automatic layout method based on pure attention network

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107368911A (en) * 2017-06-11 2017-11-21 杭州巨梦科技有限公司 A kind of furniture layout method based on placement regulation
CN109740243B (en) * 2018-12-29 2022-07-08 江苏艾佳家居用品有限公司 Furniture layout method and system based on piece-by-piece reinforcement learning technology
CN110363853B (en) * 2019-07-15 2020-09-01 贝壳找房(北京)科技有限公司 Furniture placement scheme generation method, device and equipment and storage medium
CN110598252A (en) * 2019-08-07 2019-12-20 广东博智林机器人有限公司 Furniture layout method and device, electronic equipment and storage medium
CN110826135A (en) * 2019-11-05 2020-02-21 广东博智林机器人有限公司 Home arrangement method and device, neural network construction method and storage medium
CN111553012A (en) * 2020-04-28 2020-08-18 广东博智林机器人有限公司 Home decoration design method and device, electronic equipment and storage medium
CN111709061B (en) * 2020-06-18 2023-12-01 如你所视(北京)科技有限公司 Automatic indoor article placement processing method, device and equipment and storage medium
CN111709062B (en) * 2020-06-18 2023-07-18 如你所视(北京)科技有限公司 Method, device, equipment and medium for acquiring item placement scheme scoring

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112651062A (en) * 2019-12-09 2021-04-13 江苏艾佳家居用品有限公司 Room furniture automatic layout method based on pure attention network
CN111882645A (en) * 2020-06-23 2020-11-03 北京城市网邻信息技术有限公司 Furniture display method and device

Also Published As

Publication number Publication date
CN113449356A (en) 2021-09-28

Similar Documents

Publication Publication Date Title
CN113505429B (en) Indoor design scheme acquisition method and device, computer equipment and storage medium
US10846520B2 (en) Simulated sandtray system
WO2021093416A1 (en) Information playback method and device, computer readable storage medium, and electronic device
CN111527525A (en) Mixed reality service providing method and system
CN110910503B (en) Simulation method and device for air conditioning environment
KR102442139B1 (en) Apparatus, method and computer-readable storage medium for generating digital twin space based on arranging 3d model of drawing object
US20090240359A1 (en) Realistic Audio Communication in a Three Dimensional Computer-Generated Virtual Environment
CN103778538A (en) Furniture simulation layout method and furniture simulation layout system
KR20120123330A (en) Camera navigation for presentations
CN102270275A (en) Method for selection of an object in a virtual environment
CN108257203B (en) Home decoration effect graph construction rendering method and platform
CN104922906B (en) Action executes method and apparatus
CN111968247B (en) Method and device for constructing three-dimensional house space, electronic equipment and storage medium
KR102463112B1 (en) Simulation Sandbox System
JP2023525173A (en) Conversational AI platform with rendered graphical output
WO2020238022A1 (en) Three-dimensional space view display method, apparatus, and storage medium
CN113449356B (en) Indoor automatic layout method, indoor automatic layout device, computer equipment and computer-readable storage medium
CN115063552A (en) Intelligent home layout method and device, intelligent home layout equipment and storage medium
Sun et al. Enabling participatory design of 3D virtual scenes on mobile devices
CN108513090B (en) Method and device for group video session
CN113656877B (en) Multi-layer house type model generation method, device, medium and electronic equipment
KR20200078816A (en) Facility education and training system using virtual reality using the created 3d model
CN106294903A (en) Interior space planning system and method
CA2924696C (en) Interactive haptic system for virtual reality environment
TWI799195B (en) Method and system for implementing third-person perspective with a virtual object

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant