CN111506940B - Furniture, ornament and lamp integrated intelligent layout method based on 3D structured light - Google Patents

Furniture, ornament and lamp integrated intelligent layout method based on 3D structured light

Info

Publication number
CN111506940B
Authority
CN
China
Prior art keywords
furniture
scene
structured light
module
neural network
Prior art date
Legal status
Active
Application number
CN201911283712.8A
Other languages
Chinese (zh)
Other versions
CN111506940A (en)
Inventor
陈旋
吕成云
邸新汉
Current Assignee
Jiangsu Aijia Household Products Co Ltd
Original Assignee
Jiangsu Aijia Household Products Co Ltd
Priority date
Filing date
Publication date
Application filed by Jiangsu Aijia Household Products Co Ltd filed Critical Jiangsu Aijia Household Products Co Ltd
Priority to CN201911283712.8A
Publication of CN111506940A
Application granted
Publication of CN111506940B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/08 Construction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/40 Document-oriented image-based pattern recognition
    • G06V 30/42 Document-oriented image-based pattern recognition based on the type of document
    • G06V 30/422 Technical drawings; Geographical maps
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B 20/00 Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B 20/40 Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Business, Economics & Management (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Circuit Arrangement For Electric Light Sources In General (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a furniture, ornament and lamp integrated intelligent layout method based on 3D structured light, which relates to the technical field of intelligent furniture layout and specifically comprises the following steps: 1) segmenting and collecting the acquired furniture scene 3D structured light data; 2) performing semantic segmentation on the 3D furniture scene structured light data; 3) automatically and intelligently arranging furniture in the 3D furniture scene after semantic segmentation; 4) arranging 3D ornaments and 3D lamps after the 3D furniture has been arranged. This 3D structured light furniture, ornament and lamp integrated intelligent layout method realizes an integrated layout of 3D furniture, 3D ornaments and 3D lamps in a 3D scene, and adds the recognition of height information to the recognition of the 3D house-type diagram, which aids the recognition and understanding of information such as the 3D roof, the 3D wall and the 3D roof height. This in turn helps to lay out the 3D lamps and 3D ornaments reasonably on the 3D wall, 3D roof and 3D suspended ceiling.

Description

Furniture, ornament and lamp integrated intelligent layout method based on 3D structured light
Technical Field
The invention relates to the technical field of intelligent furniture layout, in particular to an integrated intelligent layout method for furniture, ornaments and lamps based on 3D structured light.
Background
Existing home layout methods mainly depend on manual design by designers or the intervention of partial artificial-intelligence algorithms; owing to individual differences among designers and the immaturity of the intelligent algorithms, the layout results are often unattractive: the furniture layout style is monotonous, the layout is incomplete, and laying out the furniture takes too long, so ordinary users cannot experience an intelligent furniture layout effect in real time. The 3D intelligent layout system that integrates 3D structured light furniture, ornaments and lamps realizes the effect of laying out 3D furniture, lamps and ornaments directly in a 3D scene.
Disclosure of Invention
The invention aims to solve the technical problem of providing an integrated intelligent layout method for furniture, ornaments and lamps based on 3D structured light that addresses the defects of the background art. The method aids the recognition and understanding of information such as 3D roofs, 3D walls and 3D roof heights, which in turn helps to lay out 3D lamps and 3D ornaments reasonably on the 3D wall, 3D roof and 3D suspended ceiling.
The invention adopts the following technical scheme for solving the technical problems:
an integrated intelligent layout method for furniture, ornaments and lamps based on 3D structured light specifically comprises the following steps:
step 1, acquiring furniture scene 3D structured light data, including furniture scene 3D point cloud data and RGB-D depth map data;
step 2, receiving the 3D structured light data of the furniture scene with a neural network model and generating 3D semantic segmentation information, wherein the neural network comprises a general convolutional network CNN and the sequence networks RNN and LSTM;
step 3, constructing a 3D furniture intelligent layout system with a neural network model, wherein the system is composed of a reinforcement learning model and a common CNN convolution model, and the implementations include the deep reinforcement learning model DQN;
step 4, after the 3D furniture has been laid out in the 3D scene by the neural network model, continuing to lay out 3D ornaments and 3D lamps in the 3D scene, wherein the neural network model includes the deep reinforcement learning model DQN;
and step 5, selecting among the recorded layout states and taking the TopN as the final result, wherein N can be specified according to the user's requirements.
As a further preferable scheme of the 3D structured light based furniture, ornament and lamp integrated intelligent layout method of the present invention, the step 2 is specifically as follows:
step 2.1, extracting scene 3D point cloud characteristics through a first characteristic extraction module;
2.2, extracting the RGB-D data characteristics of the scene through a second characteristic extraction module;
2.3, fusing the characteristics of 2.1 and 2.2 through a characteristic fusion module;
step 2.4, obtaining various segmentation information of the whole room through a semantic segmentation module, wherein the segmentation information comprises a door a, a window b, a wall c, a roof d and a suspended ceiling e;
and 2.5, obtaining the 3D room categories, including a 3D bedroom a, a 3D kitchen b, a 3D guest dining room c, a 3D study d and a 3D toilet e, through the semantic segmentation results and the results of 2.3 and 2.4.
As a further preferable scheme of the 3D structured light-based furniture, ornament and lamp integrated intelligent layout method, step 2.1 and step 2.2 are realized by a high-dimensional neural network feature extraction module.
As a further preferable scheme of the 3D structured light based furniture, ornament and lamp integrated intelligent layout method of the present invention, the step 2.3 employs a general multilayer CNN convolutional neural network.
As a further preferable scheme of the 3D structured light based furniture, ornament and lamp integrated intelligent layout method, the step 2.4 adopts a 3D scene semantic segmentation neural network.
As a further preferable scheme of the 3D structured light based furniture, ornament and lamp integrated intelligent layout method of the present invention, the step 3 is specifically as follows:
step 3.1, obtaining the result of semantic segmentation of the whole 3D scene of the house type, wherein the result comprises a 3D door x, a 3D wall y, a 3D window z, a 3D roof m and a 3D ceiling n;
step 3.2, obtaining the semantic segmentation of the whole 3D scene based on room type by using a general CNN module, wherein the semantic segmentation comprises a 3D bedroom a, a 3D kitchen b, a 3D guest dining room c, a 3D study d and a 3D toilet e;
3.3, intelligently distributing 3D furniture of each 3D room by using a depth tree and reinforcement learning combination module;
3.3.1, selecting a corresponding 3D furniture type for the corresponding 3D space by using a depth tree module, specifically selecting a 3D bed, a 3D storage cabinet and a 3D desk for a 3D bedroom;
and 3.3.2, automatically distributing the selected 3D furniture for the corresponding 3D space by using a reinforcement learning module.
Advantageous effects
According to the 3D structured light-based furniture, ornament and lamp integrated intelligent layout method, an integrated layout of 3D furniture, 3D ornaments and 3D lamps is realized in a 3D scene, and the recognition of height information is added to the recognition of the 3D house-type diagram, which aids the recognition and understanding of information such as the 3D roof, the 3D wall and the 3D roof height. This in turn helps to lay out the 3D lamps and 3D ornaments reasonably on the 3D wall, 3D roof and 3D suspended ceiling.
Drawings
FIG. 1 is a schematic diagram of an integrated intelligent layout method of furniture, ornaments and lamps based on 3D structured light according to the invention;
FIG. 2 is the 3D scene semantic segmentation and classification subsystem of the present invention;
FIG. 3 is the 3D furniture automatic layout subsystem of the present invention;
FIG. 4 is the 3D ornament and 3D lamp automatic layout subsystem of the present invention;
FIG. 5 is a schematic diagram of a 3D point cloud extraction network in the 3D scene semantic segmentation subsystem according to the present invention;
FIG. 6 is a schematic diagram of an RGB-D information extraction network in the 3D scene semantic segmentation subsystem according to the present invention;
FIG. 7 is a depth tree module of the 3D furniture automation layout subsystem of the present invention;
FIG. 8 is a schematic diagram of a 3D scene semantic segmentation neural network according to the present invention.
Detailed Description
The technical scheme of the invention is further explained in detail by combining the attached drawings:
the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
An intelligent layout method based on the integration of 3D structured light furniture, ornaments and lamps, as shown in fig. 1, specifically comprises the following steps:
Step 1: acquire and collect the furniture scene 3D structured light data, including but not limited to furniture scene 3D point cloud data O(1), RGB-D depth map data O(2) and the like, where O(1) and O(2) are the 3D scene information of the whole house type.
Step 2: receive the multi-dimensional 3D scene data O(1), O(2) and the like with a neural network model and generate the 3D semantic segmentation information. FIG. 2 shows the 3D scene semantic segmentation and classification subsystem, and FIG. 3 shows the 3D furniture automatic layout subsystem;
one implementation is as follows:
and 2.1, extracting scene 3D point cloud characteristics through a characteristic extraction module 1.
And 2.2, extracting the RGB-D data characteristics of the scene through the characteristic extraction module 2.
And 2.3, fusing the characteristics of 2.1 and 2.2 through a characteristic fusion module.
And 2.4, obtaining various segmentation information of the door a, the window b, the wall c, the roof d, the suspended ceiling e and the like of the whole room through a semantic segmentation module.
And 2.5, obtaining 3D various room types such as a 3D bedroom a, a 3D kitchen b, a 3D guest dining room c, a 3D study room D, a 3D toilet e and the like through the semantic segmentation result and the 2.3 and 2.4 results.
Steps 2.1 and 2.2 implement the feature extraction modules including, but not limited to, a high-dimensional neural network, such as the PointCNN network shown in fig. 5.
Wherein the step 2.3 implementation includes, but is not limited to, the use of a generic multi-layer CNN convolutional neural network.
Step 2.4 adopts, but is not limited to, a 3D scene semantic segmentation neural network, such as the PointNet network shown in fig. 8; FIG. 6 is a schematic diagram of the RGB-D information extraction network in the 3D scene semantic segmentation subsystem.
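The two feature-extraction branches and the fusion module of step 2 can be sketched as follows. This is a minimal numpy illustration under stated assumptions, not the patent's actual networks: a shared per-point map followed by max pooling stands in for a PointNet/PointCNN-style point cloud encoder, global pixel pooling for the RGB-D branch, and a single projection for the multi-layer CNN fusion; all shapes and weights here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def point_cloud_features(points, w):
    """PointNet-style encoder sketch: shared per-point linear map + ReLU,
    then order-invariant max pooling over the point dimension."""
    per_point = np.maximum(points @ w, 0.0)   # (n_points, d_feat)
    return per_point.max(axis=0)              # (d_feat,)

def rgbd_features(rgbd, w):
    """RGB-D encoder sketch: global average over pixels, then a linear map."""
    pooled = rgbd.reshape(-1, rgbd.shape[-1]).mean(axis=0)  # (4,) = RGB + depth
    return np.maximum(pooled @ w, 0.0)

def fuse(f_points, f_rgbd, w):
    """Feature fusion sketch: concatenate both branches and project
    (stands in for the multi-layer CNN fusion module)."""
    return np.concatenate([f_points, f_rgbd]) @ w

# Hypothetical shapes: 1024 xyz points, a 32x32 RGB-D image, 16-d branch features.
points = rng.standard_normal((1024, 3))
rgbd = rng.standard_normal((32, 32, 4))
w_pts, w_img = rng.standard_normal((3, 16)), rng.standard_normal((4, 16))
w_fuse = rng.standard_normal((32, 8))
fused = fuse(point_cloud_features(points, w_pts), rgbd_features(rgbd, w_img), w_fuse)
```

The max pooling makes the point branch invariant to the ordering of the input points, which is the property that makes PointNet-style encoders suitable for raw point clouds.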
Step 3: construct the 3D furniture intelligent layout system with a neural network model. The 3D space intelligent layout system is composed of a reinforcement learning model and a CNN convolution model; implementations include but are not limited to DQN and the like.
Step 3.1: extract the features of each kind of 3D furniture; the 3D furniture feature extraction module extracts the features h(1), …, h(n) of the 3D furniture models Obj(1), …, Obj(n).
And 3.2, taking the result of the step 3.1, the result of the step 2.4 and the result of the step 2.5 as input, and obtaining an automatic layout scheme of the 3D furniture to be laid out in each 3D room through the depth tree model.
Implementations include, but are not limited to, DeepForest and RNNTree.
The depth tree recommendation model takes the features h(1), …, h(n) as input and outputs the 3D furniture combination categories g(1), …, g(n), where each of g(1), …, g(n) is represented by a leaf node.
If realized with DeepForest, the specific implementation is as shown in fig. 7:
x is the contact operation of the input:
Forest consists of two completely random tree forests; each completely random tree forest comprises 1000 completely random trees. Each tree is generated by randomly selecting a feature to split at each node, and a tree grows until every leaf node contains only examples of the same class or contains no more than 10 examples; finally, the class probability of each 3D furniture combination for each room is output.
Step 3.3: lay out the 3D furniture combination classes recommended in step 3.2 at the appropriate locations in the 3D room; the neural network constructions include but are not limited to DQN and the like.
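A completely random tree as described above (random feature, random threshold, grow until the node is pure or holds at most 10 examples) can be sketched in a few lines. This is an illustrative toy, assuming made-up two-class data and far fewer trees than the 1000 per forest stated above:

```python
import numpy as np

rng = np.random.default_rng(42)
NUM_CLASSES = 2  # hypothetical number of 3D furniture combination classes

def grow(X, y, min_leaf=10):
    """Grow one completely random tree: each split uses a randomly chosen
    feature and a random threshold; growth stops when the node is pure or
    holds no more than min_leaf examples."""
    if len(np.unique(y)) == 1 or len(y) <= min_leaf:
        return ("leaf", np.bincount(y, minlength=NUM_CLASSES) / len(y))
    feat = rng.integers(X.shape[1])
    thresh = rng.uniform(X[:, feat].min(), X[:, feat].max())
    left = X[:, feat] <= thresh
    if left.all() or not left.any():          # degenerate split -> make a leaf
        return ("leaf", np.bincount(y, minlength=NUM_CLASSES) / len(y))
    return ("split", feat, thresh,
            grow(X[left], y[left], min_leaf), grow(X[~left], y[~left], min_leaf))

def predict_proba(tree, x):
    """Route a sample to its leaf and return the class-probability vector."""
    if tree[0] == "leaf":
        return tree[1]
    _, feat, thresh, left, right = tree
    return predict_proba(left if x[feat] <= thresh else right, x)

# Hypothetical toy data: two furniture-combination classes in a 3-d feature space.
X = np.vstack([rng.normal(0, 0.5, (60, 3)), rng.normal(3, 0.5, (60, 3))])
y = np.array([0] * 60 + [1] * 60)
forest = [grow(X, y) for _ in range(25)]      # 25 trees here, 1000 in the text
proba = np.mean([predict_proba(t, np.array([3.0, 3.0, 3.0])) for t in forest], axis=0)
```

Averaging the leaf distributions across the forest gives the per-class probability output described above; a query near the class-1 cluster receives a high class-1 probability.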
Taking the DQN model as an example, as shown in fig. 4:
The state module consists of the state of each 3D space, including but not limited to the position (x, y, z) of each piece of 3D furniture, the orientation (c_1, c_2, c_3) of each piece of 3D furniture, the number n of furniture items already laid out in the room's 3D model, and the spatial proportion b occupied by the furniture already laid out in the room's 3D model.
The environment module gives a 3D semantic state diagram D of each piece of furniture in the room's 3D model.
The reward module is composed of a multilayer convolutional network, and a function for calculating the difference between the current state S and the target state S' is the most rewarded, and is shown in formula 2:
R o (s, s') formula 2
The behavior module gives the probability that the agent is performing each action in one state, as shown in equation 3 below:
π(a|s) = p(a|s)   (formula 3)
where the policy π is given by the agent module.
The strategy selection and the evaluation of the states s, s' are determined by the following state value function V and action value function Q, as shown in formulas 4 and 5:
V_π(s) = E_π[ Σ_k γ^k r_(t+k+1) | s_t = s ]   (formula 4)
Q_π(s, a) = E_π[ Σ_k γ^k r_(t+k+1) | s_t = s, a_t = a ]   (formula 5)
In each iteration, the state module gives the state after the training algorithm has executed, the environment module gives the state diagram of the room environment, the behavior module takes an action, the reward module computes the return value, and the agent then updates its policy. After N iterations the optimal policy, i.e., the optimal strategy for the 3D-space home layout, is obtained.
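The state/environment/behavior/reward loop described above is easiest to see in tabular Q-learning, the simple ancestor of DQN. The sketch below is a deliberately toy stand-in under stated assumptions: a hypothetical 1-D room of five grid cells in which the agent slides one furniture item left or right and is rewarded only on reaching the wall cell; the real system would use a DQN over the 3D state described above.

```python
import numpy as np

rng = np.random.default_rng(1)

N_POS, ACTIONS = 5, (-1, +1)   # toy 1-D room; actions: move left / move right
GOAL = 4                        # hypothetical target: the cell against the wall
alpha, gamma, eps, episodes = 0.5, 0.9, 0.2, 500

Q = np.zeros((N_POS, len(ACTIONS)))
for _ in range(episodes):
    s = 0
    while s != GOAL:
        # epsilon-greedy behavior module: explore sometimes, else act greedily
        a = rng.integers(2) if rng.random() < eps else int(Q[s].argmax())
        s2 = min(max(s + ACTIONS[a], 0), N_POS - 1)   # environment transition
        r = 1.0 if s2 == GOAL else 0.0                # reward module
        # one-step Q-learning update (the tabular ancestor of the DQN loss)
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

# greedy policy after training: action index 1 means "move right" toward the wall
greedy = [int(Q[s].argmax()) for s in range(N_POS - 1)]
```

After enough iterations the greedy policy moves right from every cell, which is the "optimal strategy after N iterations" behavior the paragraph above describes, in miniature.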
After the 3D furniture has been laid out in the 3D room scene, 3D lamps, 3D ornaments and the like continue to be laid out; this may be implemented by a neural network, including but not limited to DQN.
Taking the DQN model as an example, the implementation is similar to step 3.3, with the difference that the 3D space state additionally includes the 3D lamp position (x, y, z) and orientation (c_1, c_2, c_3), the 3D ornament position (x, y, z) and orientation (c_1, c_2, c_3), the proportion b of the roof/ceiling occupied by lamps, and the proportion b of the wall occupied by ornaments in the room layout.
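The extended 3D-space state enumerated above might be packaged as follows; the class and field names are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class LayoutState:
    """Sketch of the extended 3D-space state: furniture, lamp and ornament
    poses plus the occupancy proportions described above."""
    furniture_pose: list          # per item: ((x, y, z), (c1, c2, c3))
    lamp_pose: list               # per 3D lamp: position and orientation
    ornament_pose: list           # per 3D ornament: position and orientation
    n_furniture: int = 0          # items already laid out in the room model
    furniture_ratio: float = 0.0  # spatial proportion b taken by furniture
    lamp_ratio: float = 0.0       # roof/ceiling proportion taken by lamps
    ornament_ratio: float = 0.0   # wall proportion taken by ornaments

    def as_vector(self):
        """Flatten to a feature vector a DQN-style network could consume."""
        flat = [v
                for pose in (*self.furniture_pose, *self.lamp_pose, *self.ornament_pose)
                for point in pose for v in point]
        return flat + [self.n_furniture, self.furniture_ratio,
                       self.lamp_ratio, self.ornament_ratio]

# Hypothetical example: one bed and one ceiling lamp already placed.
state = LayoutState(
    furniture_pose=[((1.0, 0.5, 0.0), (0.0, 0.0, 1.0))],
    lamp_pose=[((2.0, 2.0, 2.8), (0.0, 0.0, -1.0))],
    ornament_pose=[],
    n_furniture=1, furniture_ratio=0.18, lamp_ratio=0.05, ornament_ratio=0.0)
vec = state.as_vector()
```

Each pose contributes six numbers (position plus orientation), and the four scalar proportions are appended at the end of the vector.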
Step 4, after 3D furniture is well distributed in the 3D scene by using the neural network model, 3D ornaments and 3D lamps are continuously distributed in the 3D scene; the neural network model comprises a depth enhancement model DQN;
and 5, selecting the recorded layout state, and selecting TopN as a final result, wherein the TopN can be specified according to the requirements of the user.
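The TopN selection of step 5 over the recorded layout states can be sketched as follows; the scores and layout identifiers here are hypothetical stand-ins for the states and return values recorded during training:

```python
import heapq

def select_top_n(layout_states, n):
    """Pick the n highest-scoring recorded layout states (step 5 sketch).

    layout_states: list of (score, layout) pairs, where score is a float
    return value assigned by the reward module (hypothetical here)."""
    return heapq.nlargest(n, layout_states, key=lambda pair: pair[0])

# Hypothetical recorded states: (score, layout identifier)
recorded = [(0.62, "layout-A"), (0.91, "layout-B"),
            (0.45, "layout-C"), (0.88, "layout-D")]
best = select_top_n(recorded, 2)   # user-specified N = 2
```

`heapq.nlargest` avoids sorting the full list, which matters when many candidate layout states have been recorded.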
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The above embodiments are only for illustrating the technical idea of the present invention, and the protection scope of the present invention is not limited thereby, and any modifications made on the basis of the technical scheme according to the technical idea of the present invention fall within the protection scope of the present invention. While the embodiments of the present invention have been described in detail, the present invention is not limited to the above embodiments, and various changes can be made without departing from the spirit of the present invention within the knowledge of those skilled in the art.

Claims (6)

1. An intelligent layout method based on integration of 3D structured light furniture, ornaments and lamps is characterized by comprising the following steps:
step 1, acquiring furniture scene 3D structured light data, including furniture scene 3D point cloud data and RGB-D depth map data;
step 2, receiving the 3D structured light data of the furniture scene with a neural network model and generating 3D semantic segmentation information, wherein the neural network comprises a general convolutional network CNN and the sequence networks RNN and LSTM;
step 3, constructing a 3D furniture intelligent layout system with a neural network model, wherein the system is composed of a reinforcement learning model and a common CNN convolution model, and the implementations include the deep reinforcement learning model DQN;
step 4, after the 3D furniture has been laid out in the 3D scene by the neural network model, continuing to lay out 3D ornaments and 3D lamps in the 3D scene, wherein the neural network model includes the deep reinforcement learning model DQN;
and step 5, selecting among the recorded layout states and taking the TopN as the final result, wherein N can be specified according to the user's requirements.
2. The 3D structured light-based furniture, ornament and lamp integrated intelligent layout method according to claim 1, wherein the step 2 specifically comprises the following steps:
step 2.1, extracting scene 3D point cloud characteristics through a first characteristic extraction module;
2.2, extracting the RGB-D data characteristics of the scene through a second characteristic extraction module;
2.3, fusing the characteristics of 2.1 and 2.2 through a characteristic fusion module;
step 2.4, obtaining various segmentation information of the whole room through a semantic segmentation module, wherein the segmentation information comprises a door a, a window b, a wall c, a roof d and a suspended ceiling e;
and 2.5, obtaining the 3D room categories, including a 3D bedroom a, a 3D kitchen b, a 3D guest dining room c, a 3D study d and a 3D toilet e, through the semantic segmentation results and the results of 2.3 and 2.4.
3. The 3D structured light based furniture, ornament and lamp integrated intelligent layout method according to claim 2, wherein step 2.1 and step 2.2 are realized by a high-dimensional neural network feature extraction module.
4. The integrated intelligent layout method for furniture, ornaments and lamps based on 3D structured light according to claim 2, characterized in that the step 2.3 adopts a general multilayer CNN convolutional neural network.
5. The 3D structured light based furniture, ornaments and lamp integrated intelligent layout method according to claim 2, characterized in that step 2.4 employs a 3D scene semantic segmentation neural network.
6. The integrated intelligent layout method for furniture, ornaments and lamps based on 3D structured light according to claim 1, wherein the step 3 is specifically as follows:
step 3.1, obtaining the result of semantic segmentation of the whole 3D scene of the house type, wherein the result comprises a 3D door x, a 3D wall y, a 3D window z, a 3D roof m and a 3D ceiling n;
step 3.2, obtaining the semantic segmentation of the whole 3D scene based on room type by using a general CNN module, wherein the semantic segmentation comprises a 3D bedroom a, a 3D kitchen b, a 3D guest dining room c, a 3D study d and a 3D toilet e;
3.3, intelligently distributing 3D furniture of each 3D room by using a depth tree and reinforcement learning combination module;
3.3.1, selecting a corresponding 3D furniture type for the corresponding 3D space by using a depth tree module, specifically selecting a 3D bed, a 3D storage cabinet and a 3D desk for a 3D bedroom;
and 3.3.2, automatically distributing the selected 3D furniture for the corresponding 3D space by using a reinforcement learning module.
CN201911283712.8A 2019-12-13 2019-12-13 Furniture, ornament and lamp integrated intelligent layout method based on 3D structured light Active CN111506940B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911283712.8A CN111506940B (en) 2019-12-13 2019-12-13 Furniture, ornament and lamp integrated intelligent layout method based on 3D structured light


Publications (2)

Publication Number Publication Date
CN111506940A CN111506940A (en) 2020-08-07
CN111506940B (en) 2022-08-12

Family

ID=71874049

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911283712.8A Active CN111506940B (en) 2019-12-13 2019-12-13 Furniture, ornament and lamp integrated intelligent layout method based on 3D structured light

Country Status (1)

Country Link
CN (1) CN111506940B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112800501A (en) * 2021-01-15 2021-05-14 珠海新势力创建筑设计有限公司 Method and device for automatically generating ceiling lamp model based on room information
CN113052971B (en) * 2021-04-09 2022-06-10 杭州群核信息技术有限公司 Neural network-based automatic layout design method, device and system for indoor lamps and storage medium

Citations (2)

Publication number Priority date Publication date Assignee Title
CN108596102A (en) * 2018-04-26 2018-09-28 北京航空航天大学青岛研究院 Indoor scene object segmentation grader building method based on RGB-D
CN110059690A (en) * 2019-03-28 2019-07-26 广州智方信息科技有限公司 Floor plan semanteme automatic analysis method and system based on depth convolutional neural networks




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: 211100 floor 5, block a, China Merchants high speed rail Plaza project, No. 9, Jiangnan Road, Jiangning District, Nanjing, Jiangsu (South Station area)
Applicant after: JIANGSU AIJIA HOUSEHOLD PRODUCTS Co.,Ltd.
Address before: 211100 No. 18 Zhilan Road, Science Park, Jiangning District, Nanjing City, Jiangsu Province
Applicant before: JIANGSU AIJIA HOUSEHOLD PRODUCTS Co.,Ltd.
GR01 Patent grant