CN117579804A - AR-based prefabricated building component pre-layout experience method and device - Google Patents

AR-based prefabricated building component pre-layout experience method and device

Info

Publication number
CN117579804A
Authority
CN
China
Prior art keywords
real scene
building component
video stream
component
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311546210.6A
Other languages
Chinese (zh)
Other versions
CN117579804B (en)
Inventor
杭世杰
朱东烽
王震
王慧英
邝东华
凌锋
黄炳森
李远东
陈超彬
梁家钊
胡少晖
欧国通
丁芷姗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GUANGDONG CONSTRUCTION VOCATIONAL TECHNOLOGY INSTITUTE
Guangdong Yuncheng Architectural Technology Co ltd
Original Assignee
GUANGDONG CONSTRUCTION VOCATIONAL TECHNOLOGY INSTITUTE
Guangdong Yuncheng Architectural Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by GUANGDONG CONSTRUCTION VOCATIONAL TECHNOLOGY INSTITUTE, Guangdong Yuncheng Architectural Technology Co ltd filed Critical GUANGDONG CONSTRUCTION VOCATIONAL TECHNOLOGY INSTITUTE
Priority to CN202311546210.6A priority Critical patent/CN117579804B/en
Publication of CN117579804A publication Critical patent/CN117579804A/en
Application granted granted Critical
Publication of CN117579804B publication Critical patent/CN117579804B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/156Mixing image signals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/10Geometric CAD
    • G06F30/13Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/08Construction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/111Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • H04N13/117Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/275Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • H04N13/279Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals the virtual viewpoint locations being selected by the viewers or determined by tracking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/816Monomedia components thereof involving special video data, e.g. 3D video

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • Business, Economics & Management (AREA)
  • Structural Engineering (AREA)
  • Health & Medical Sciences (AREA)
  • Civil Engineering (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Human Computer Interaction (AREA)
  • Architecture (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to the technical field of layout experience and discloses an AR-based prefabricated building component pre-layout experience method and device. The method comprises: receiving a pre-layout experience instruction for a prefabricated building component, and parsing the instruction to obtain a building component set for generating the prefabricated building component; confirming the position of a virtual prefabricated building component in a real scene to obtain the virtual prefabricated building component position; obtaining the real scene position; shooting the real scene with a camera based on the real scene position to obtain a real scene video stream; fusing the virtual prefabricated building component into the real scene video stream to obtain a real scene video stream comprising the virtual prefabricated building component; optimizing that video stream to obtain an optimized scene video stream; and sending the optimized scene video stream to the initiating terminal of the pre-layout experience instruction. The invention can solve the problem of a poor pre-layout experience for prefabricated building components.

Description

AR-based prefabricated building component pre-layout experience method and device
Technical Field
The invention relates to the technical field of layout experience, in particular to an AR-based prefabricated building component pre-layout experience method, an AR-based prefabricated building component pre-layout experience device, electronic equipment and a computer-readable storage medium.
Background
With the development of cities, green and recyclable economies have become increasingly well known, and the prefabricated (assembled) building was born in this context. Owing to its advantages of high safety, high resource utilization, few on-site procedures and easy recycling, the prefabricated building has been widely promoted in the construction industry; accordingly, a pre-layout experience method for prefabricated building components is particularly important.
At present, the pre-layout experience method for prefabricated building components is as follows: a building model of the prefabricated building component and the position of the component in the real scene are acquired, and the experience of the prefabricated building component is completed based on that position in the real scene.
Although this method can realize the pre-layout experience of prefabricated building components, it considers neither the illumination factors in the environment and the materials of the building model, nor the setting of multiple experience points from which the prefabricated building components can be experienced, resulting in a poor pre-layout experience.
Disclosure of Invention
The invention provides an AR-based prefabricated building component pre-layout experience method, an AR-based prefabricated building component pre-layout experience device and a computer-readable storage medium, and mainly aims to solve the problem that the prefabricated building component pre-layout experience is poor.
In order to achieve the above purpose, the invention provides an AR-based prefabricated building component pre-layout experience method, comprising the following steps:
receiving a pre-layout experience instruction of an assembled building component, and analyzing the pre-layout experience instruction to obtain a building component set for generating the assembled building component, wherein the building component set comprises a floor component, a wallboard component, a stair component and a balcony component;
generating a virtual assembly type building component based on the building component set, confirming the position of the virtual assembly type building component in a real scene to obtain the position of the virtual assembly type building component, and taking the position of the virtual assembly type building component as a circle center to obtain the position information of the real scene to obtain the position of the real scene;
shooting a real scene by using a camera based on the real scene position to obtain a real scene video stream;
fusing the virtual assembly type building components into a real scene video stream to obtain the real scene video stream comprising the virtual assembly type building components;
and optimizing the real scene video stream comprising the virtual prefabricated building component to obtain an optimized scene video stream, and sending the optimized scene video stream to the initiating terminal of the pre-layout experience instruction to complete the pre-layout experience of the prefabricated building component.
Optionally, the obtaining the position information of the real scene by taking the position of the virtual prefabricated building component as the circle center, to obtain the real scene position, includes:
constructing a reference coordinate system, acquiring feature coordinates of the position of the virtual prefabricated building component based on the reference coordinate system to obtain identification coordinates, acquiring camera coordinates of the camera, and calculating the pose of the camera based on the camera coordinates and the identification coordinates to obtain a camera pose;
and acquiring the position information of the real scene based on the camera pose to obtain the real scene position.
Optionally, the camera pose is calculated based on the camera coordinates and the identification coordinates according to the following formula:

$$\begin{pmatrix} x_2 \\ y_2 \\ z_2 \end{pmatrix} = R \begin{pmatrix} x_1 \\ y_1 \\ z_1 \end{pmatrix} + P$$

where $(x_1, y_1, z_1)$ represents the camera coordinates and $(x_2, y_2, z_2)$ represents the identification coordinates; $R$ is the rotation matrix converting camera coordinates into identification coordinates, and $P$ is the translation matrix converting camera coordinates into identification coordinates; the camera pose consists of the rotation matrix and the translation matrix.
Optionally, the obtaining the position information of the real scene based on the camera pose, to obtain the real scene position, includes:
fitting a first reference circle and a second reference circle which are formed by taking the positions of the virtual assembly type building components as circle centers, wherein the first reference circle is a minimum circle which meets the requirement of shooting a real scene by using a camera, the second reference circle is a circle which meets the requirement of shooting the real scene by using the camera, and the radius of the second reference circle is a preset radius;
dividing a first reference circle uniformly based on a preset first average value and a preset first starting position to obtain a plurality of first real scene acquisition points, and dividing a second reference circle based on a preset second average value and a preset second starting position to obtain a plurality of second real scene acquisition points;
and acquiring the position information of the real scene based on the plurality of first real scene acquisition points, the plurality of second real scene acquisition points and the camera pose to obtain the real scene position.
Optionally, the generating a virtual building element based on the set of building elements includes:
identifying one or more target components from the set of building components;
sequentially extracting target components from the one or more target components, and performing the following operations on each extracted target component:
Acquiring the size information of the extracted target member to obtain a generated size;
virtual building elements are generated based on the generated dimensions.
Optionally, the determining the position of the virtual fabricated building element in the real scene, to obtain the position of the virtual fabricated building element, includes:
a virtual building component position is obtained based on the extracted target component.
Optionally, the shooting the real scene with the camera to obtain the real scene video stream includes:
acquiring a perspective projection matrix of a camera, and converting camera coordinates into screen coordinates based on the perspective projection matrix;
and acquiring the real scene video stream based on the real scene position and the screen coordinates.
Optionally, the perspective projection matrix is as follows:
$$T = \begin{pmatrix} f c_x & 0 & x_0 \\ 0 & f c_y & y_0 \\ 0 & 0 & 1 \end{pmatrix}$$

where $T$ represents the perspective projection matrix, $c_x$ is the scale factor of the perspective projection matrix in the x-axis direction, $f$ is the focal length of the camera, $c_y$ is the scale factor of the perspective projection matrix in the y-axis direction, and $(x_0, y_0)$ represents the position in the reference coordinate system at the time of camera shooting.
Optionally, the optimizing the real scene video stream including the virtual fabricated building element, to obtain the optimized scene video stream includes:
Obtaining preset parameters of the virtual assembly type building component to obtain rendering parameters, wherein the rendering parameters comprise: the material of the virtual assembly type building component and the color of the virtual assembly type building component;
acquiring the light wave wavelength at the real scene position, and constructing the brightness relation of a reference point by using the light wave wavelength, the brightness relation of the reference point being:

$$I = \int_{\gamma_{1(\min)}}^{\gamma_{1(\max)}} l_s\,\alpha(\theta_1,\theta_0)\,\beta_s(\theta_1)\cos(\theta_1)\,d\gamma + \int_{\gamma_{2(\min)}}^{\gamma_{2(\max)}} l_t\,\alpha(\theta_2,\theta_3)\,\beta_t(\theta_2)\cos(\theta_2)\,d\gamma$$

where $I$ represents the brightness of the reference point; $l_s$ represents the incident intensity of sunlight at the reference point and $l_t$ the incident intensity of sky light at the reference point; $\gamma_{1(\min)}$, $\gamma_{1(\max)}$ are the minimum and maximum of the solid angle of sunlight relative to the reference point, and $\gamma_{2(\min)}$, $\gamma_{2(\max)}$ the minimum and maximum of the solid angle of sky light relative to the reference point; $\alpha(\theta_1,\theta_0)$ and $\alpha(\theta_2,\theta_3)$ are the bidirectional reflectance distribution functions of sunlight and sky light at the reference point; $\beta_s(\theta_1)$ and $\beta_t(\theta_2)$ are the shading functions of the reference point for sunlight and sky light; $\gamma$ is the solid angle; $\cos(\theta_1)$, $\cos(\theta_2)$ are the cosines of the angles between the sunlight and sky light directions and the normal at the reference point; $\theta_1$, $\theta_0$ are the incidence and reflection angles of sunlight, and $\theta_2$, $\theta_3$ the incidence and reflection angles of sky light;
and obtaining estimated illumination for rendering the virtual assembly building component according to the brightness relation, and optimizing the real scene video stream by using the estimated illumination and rendering parameters to obtain an optimized scene video stream.
In order to solve the above problems, the present invention further provides an AR-based prefabricated building element pre-layout experience device, the device comprising:
the system comprises an experience instruction receiving module, a control module and a control module, wherein the experience instruction receiving module is used for receiving a pre-layout experience instruction of an assembled building component, analyzing the pre-layout experience instruction and obtaining a building component set for generating the assembled building component, and the building component set comprises a floor component, a wallboard component, a stair component and a balcony component;
the real scene position confirming module is used for generating a virtual assembly type building component based on the building component set, confirming the position of the virtual assembly type building component in a real scene to obtain the position of the virtual assembly type building component, and taking the position of the virtual assembly type building component as a circle center to obtain the position information of the real scene to obtain the position of the real scene;
the video stream acquisition module is used for shooting a real scene by using a camera based on the real scene position to obtain a real scene video stream;
fusing the virtual assembly type building components into a real scene video stream to obtain the real scene video stream comprising the virtual assembly type building components;
the video stream optimizing and experiencing module is used for optimizing a real scene video stream comprising the virtual assembly building component, obtaining an optimized scene video stream, and sending the optimized scene video stream to an initiating terminal of the pre-layout experience instruction to complete the pre-layout experience of the assembly building component.
In order to solve the above-mentioned problems, the present invention also provides an electronic apparatus including:
a memory storing at least one instruction; and
and a processor that executes the instructions stored in the memory to implement the above AR-based prefabricated building component pre-layout experience method.
In order to solve the above problems, the present invention further provides a computer-readable storage medium, in which at least one instruction is stored, the at least one instruction being executed by a processor in an electronic device to implement the above AR-based prefabricated building component pre-layout experience method.
In order to solve the problems in the background art, the embodiment of the invention receives a pre-layout experience instruction for a prefabricated building component and parses the instruction to obtain a building component set for generating the prefabricated building component, wherein the building component set comprises a floor component, a wallboard component, a stair component and a balcony component. A virtual prefabricated building component is generated based on the building component set, the position of the virtual prefabricated building component in the real scene is confirmed to obtain the virtual prefabricated building component position, and the position information of the real scene is obtained by taking the virtual prefabricated building component position as the circle center, yielding the real scene position. Based on the real scene position, a camera shoots the real scene to obtain a real scene video stream. The virtual prefabricated building component is fused into the real scene video stream to obtain a real scene video stream comprising the virtual prefabricated building component; this video stream is optimized to obtain an optimized scene video stream, which is sent to the initiating terminal of the pre-layout experience instruction to complete the pre-layout experience of the prefabricated building component. Therefore, the AR-based prefabricated building component pre-layout experience method, device, electronic device and computer-readable storage medium of the invention can solve the problem of a poor prefabricated building component pre-layout experience.
Drawings
FIG. 1 is a schematic flow diagram of an AR-based prefabricated building component pre-layout experience method according to an embodiment of the present invention;
FIG. 2 is a functional block diagram of an AR-based prefabricated building component pre-layout experience device according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device for implementing the AR-based prefabricated building component pre-layout experience method according to an embodiment of the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The embodiment of the application provides an AR-based prefabricated building component pre-layout experience method and device. The execution subject of the method includes, but is not limited to, at least one of a server, a terminal, and other devices that can be configured to execute the method provided by the embodiment of the application. In other words, the method may be performed by software or hardware installed in a terminal device or a server device, where the software may be a blockchain platform. The server side includes, but is not limited to: a single server, a server cluster, a cloud server or a cloud server cluster, and the like.
Referring to fig. 1, a schematic flow diagram of an AR-based prefabricated building component pre-layout experience method according to an embodiment of the present invention is shown. In this embodiment, the method includes:
s1, receiving a pre-layout experience instruction of an assembled building component, and analyzing the pre-layout experience instruction to obtain a building component set for generating the assembled building component, wherein the building component set comprises a floor component, a wallboard component, a stair component and a balcony component.
It should be appreciated that AR stands for augmented reality, that is, the enhancement of reality.
It should be explained that the pre-layout experience instruction is issued by a layout instruction operator according to the actual structure of the building, in order to meet a customer's requirement for a pre-layout experience of the prefabricated building. A prefabricated building component is a building component produced by transferring a large amount of the on-site work of the traditional construction mode to a factory for processing. In addition, through the pre-layout experience instruction and AR technology, the installation position, post-installation effect, installation layout and the like of a not-yet-installed prefabricated building can be adjusted.
Further, the pre-layout experience instruction contains the components the customer requires for the assembly experience, and the building component set required by the customer can be obtained by parsing the instruction. The technology for parsing the pre-layout experience instruction is optional; a command parser is selected here, and other parsing technologies can achieve the same effect, which are not described herein.
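The patent leaves the instruction format and the command parser unspecified; the sketch below assumes, purely for illustration, a JSON-encoded instruction whose `components` field lists the requested component types and dimensions.

```python
import json

# Component types named in the patent's building component set.
KNOWN_TYPES = {"floor", "wallboard", "stair", "balcony"}

def parse_prelayout_instruction(raw: str) -> list[dict]:
    """Parse a pre-layout experience instruction into a building component set.

    The JSON schema used here is a hypothetical stand-in for the patent's
    unspecified instruction format.
    """
    instruction = json.loads(raw)
    components = []
    for entry in instruction.get("components", []):
        if entry.get("type") not in KNOWN_TYPES:
            raise ValueError(f"unknown component type: {entry.get('type')!r}")
        components.append({"type": entry["type"], "size_m": entry.get("size_m")})
    return components

# Example: a client requests a balcony component 3 m wide, 1.2 m deep, 1.1 m high.
demo = '{"components": [{"type": "balcony", "size_m": [3.0, 1.2, 1.1]}]}'
print(parse_prelayout_instruction(demo))
```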
It should be understood that the floor component is the floor element of the prefabricated building (optionally, a laminated slab is selected as the floor component), the wallboard component is the wall panel element, the stair component is the stair element, and the balcony component is the balcony element of the prefabricated building. In addition, the components included in the building component set all follow the model used for the experience: the shape of the model is consistent with the pre-construction model of the building, and the proportions of the pre-laid prefabricated building are the same as the actual proportions.
Illustratively, a construction company receives a request from client Xiao to build a balcony within a delimited area, and before construction the balcony of the building needs to be pre-laid out within that area. The layout instruction operator Wang in the company therefore issues a pre-layout experience instruction, which defines the types of balcony components agreed upon with client Xiao, and a balcony experience scene is built according to the building form agreed with the client; in addition, the component proportions in the experience scene are the same as those in the actual building.
S2, generating a virtual assembly type building component based on the building component set, confirming the position of the virtual assembly type building component in a real scene to obtain the position of the virtual assembly type building component, and obtaining the position information of the real scene by taking the position of the virtual assembly type building component as the circle center to obtain the position of the real scene.
It should be explained that the virtual prefabricated building components are building components constructed with software, for example, modeled in 3ds Max.
It is to be appreciated that the generating a virtual building element based on the set of building elements includes:
identifying one or more target components from the set of building components;
sequentially extracting target components from the one or more target components, and performing the following operations on each extracted target component:
acquiring the size information of the extracted target member to obtain a generated size;
virtual building elements are generated based on the generated dimensions.
It should be appreciated that the target components are extracted from the building component set and are the components needed to generate the virtual prefabricated building model; since the quantity and size of components of the same type may differ, the extracted target components must be modeled one by one. The size information is the size required when constructing the virtual building component.
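The patent models components in software such as 3ds Max; as a minimal stand-in for that step, the sketch below generates an axis-aligned box mesh (eight corner vertices) from a component's generated size, which is enough to place and scale a placeholder component in the scene.

```python
import numpy as np

def box_mesh(width: float, depth: float, height: float) -> np.ndarray:
    """Return the 8 corner vertices of an axis-aligned box of the given size.

    A placeholder for the patent's modeled component; real component geometry
    would be far richer.
    """
    xs, ys, zs = (0.0, width), (0.0, depth), (0.0, height)
    return np.array([[x, y, z] for x in xs for y in ys for z in zs])

# A wallboard component 2.8 m x 0.2 m x 3.0 m (illustrative dimensions).
print(box_mesh(2.8, 0.2, 3.0))
```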
It can be appreciated that the determining the position of the virtual building elements in the real scene, to obtain the position of the virtual building elements, includes:
a virtual building component position is obtained based on the extracted target component.
It will be appreciated that the locations where different kinds of virtual building components are to be installed differ; the virtual building component position is the location in the real scene where the generated virtual building component is to be installed.
It can be understood that the obtaining the position information of the real scene by using the position of the virtual assembly building component as the center of a circle to obtain the position of the real scene includes:
constructing a reference coordinate system, acquiring feature coordinates of the position of the virtual prefabricated building component based on the reference coordinate system to obtain identification coordinates, acquiring camera coordinates of the camera, and calculating the pose of the camera based on the camera coordinates and the identification coordinates to obtain a camera pose;
and acquiring the position information of the real scene based on the camera pose to obtain the real scene position.
It should be appreciated that the reference coordinate system is an arbitrarily specified coordinate system which, once specified, is invariant and unique; optionally, the world coordinate system is selected as the reference coordinate system. The feature coordinates are coordinates capable of describing the position of the prefabricated building. For example, if the circle center of the prefabricated building position is defined as the origin of the reference coordinate system and the foundation of the building is a rectangular area, the feature coordinates are the coordinates of the four vertices of the foundation, so the number of feature coordinates is an integer greater than or equal to 4. The camera coordinate system takes the camera optical center as its origin: the x-axis is horizontal, the y-axis is vertical, and the z-axis points in the direction the camera observes and changes as the camera moves. By converting between the reference coordinate system and the camera coordinate system, tracking and registration of the prefabricated building can be achieved. The camera coordinates are the coordinates of the camera in the reference coordinate system. In addition, the process by which the system acquires the sensor pose in real time from changes of the target position in the real scene, re-establishes a spatial coordinate system from the user's current viewing angle, and renders the virtual scene to the accurate position in the real environment is called tracking, while the process of accurately positioning the virtual scene in the real environment is called registration. The real scene position is the position of the experiencer in the real scene corresponding to the position and angle of the virtual prefabricated building component in the optimized scene video stream.
Further, the camera pose is calculated based on the camera coordinates and the identification coordinates according to the following formula:

$$\begin{pmatrix} x_2 \\ y_2 \\ z_2 \end{pmatrix} = R \begin{pmatrix} x_1 \\ y_1 \\ z_1 \end{pmatrix} + P$$

where $(x_1, y_1, z_1)$ represents the camera coordinates and $(x_2, y_2, z_2)$ represents the identification coordinates; $R$ is the rotation matrix converting camera coordinates into identification coordinates, and $P$ is the translation matrix converting camera coordinates into identification coordinates; the camera pose consists of the rotation matrix and the translation matrix.
It should be explained that the camera pose represents the position and angle of the camera; the position and angle of the virtual prefabricated building component are calibrated against it for the user experience, and the component is rendered differently based on the camera pose to give the user a better impression. The position of the camera is an acquisition point among the plurality of first and second real scene acquisition points, and the angle of the camera is a preset experience angle.
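The patent only states that the pose (R, P) maps camera coordinates onto identification coordinates; it does not say how R and P are solved. One standard way, sketched below under that assumption, is a least-squares Kabsch/Procrustes fit over at least three corresponding 3D points (here four, matching the four foundation vertices).

```python
import numpy as np

def fit_pose(cam_pts: np.ndarray, ident_pts: np.ndarray):
    """Fit R, P such that ident ≈ R @ cam + P (least-squares, Kabsch algorithm).

    cam_pts, ident_pts: (N, 3) arrays of corresponding points, N >= 3.
    """
    c_cam, c_id = cam_pts.mean(axis=0), ident_pts.mean(axis=0)
    H = (cam_pts - c_cam).T @ (ident_pts - c_id)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    P = c_id - R @ c_cam
    return R, P

# Four foundation vertices expressed in both frames (synthetic example).
cam = np.array([[0.0, 0, 0], [4, 0, 0], [4, 3, 0], [0, 3, 0]])
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
ident = cam @ R_true.T + np.array([1.0, 2.0, 0.5])
R, P = fit_pose(cam, ident)
assert np.allclose(ident, cam @ R.T + P, atol=1e-8)
```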
It should be explained that the obtaining the position information of the real scene based on the camera pose, to obtain the real scene position, includes:
fitting a first reference circle and a second reference circle which are formed by taking the positions of the virtual assembly type building components as circle centers, wherein the first reference circle is a minimum circle which meets the requirement of shooting a real scene by using a camera, the second reference circle is a circle which meets the requirement of shooting the real scene by using the camera, and the radius of the second reference circle is a preset radius;
dividing the first reference circle uniformly based on a preset first average value and a preset first starting position to obtain a plurality of first real scene acquisition points, and dividing the second reference circle based on a preset second average value and a preset second starting position to obtain a plurality of second real scene acquisition points;
and acquiring the position information of the real scene based on the plurality of first real scene acquisition points, the plurality of second real scene acquisition points and the camera pose to obtain the real scene position.
It should be understood that, optionally, the first reference circle may be obtained by fitting, with a pre-trained neural network, satellite images acquired with the prefabricated building position as the circle center; the second reference circle is acquired in the same manner as the first and achieves the same technical effect, which is not described herein.
It is appreciated that by dividing the first reference circle and the second reference circle, the virtual prefabricated building component can be experienced from different locations. The first average value and the second average value are the division counts used when the first and second reference circles are uniformly divided, respectively; the first and second real scene acquisition points are the positions corresponding to each division point; and the first and second starting positions are the starting positions of the respective divisions, which can be used together with the reference coordinate system. The real scene position is the position of the camera when shooting in the real scene.
For example, suppose the radius of the first reference circle is 5 m, the preset radius of the second reference circle is 6 m, the first and second average values are both 4, and the first and second starting positions are both on the x-axis of the reference coordinate system. Dividing the first and second reference circles on this premise yields 4 first real scene acquisition points and 4 second real scene acquisition points, respectively, with a central angle of 90 degrees between adjacent acquisition points; acquisition of the real scene is completed at these acquisition points.
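A minimal sketch of the division just described: each reference circle is split evenly from a start angle into its division count, yielding the real scene acquisition points. The 5 m / 6 m radii and the count of 4 mirror the example above; the function names are illustrative.

```python
import math

def acquisition_points(cx, cy, radius, count, start_angle_rad=0.0):
    """Evenly divide a reference circle into `count` acquisition points."""
    step = 2 * math.pi / count                 # 90 degrees for count = 4
    return [(cx + radius * math.cos(start_angle_rad + i * step),
             cy + radius * math.sin(start_angle_rad + i * step))
            for i in range(count)]

center = (0.0, 0.0)                            # virtual component position
first = acquisition_points(*center, radius=5.0, count=4)   # first reference circle
second = acquisition_points(*center, radius=6.0, count=4)  # second reference circle
print(first, second, sep="\n")
```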
It may be appreciated that the acquiring the position information of the real scene based on the plurality of first real scene acquisition points and the plurality of second real scene acquisition points and the camera pose includes:
the method comprises the steps of sequentially extracting target real scene acquisition points from a plurality of first real scene acquisition points and a plurality of second real scene acquisition points, and executing the following operations on the extracted target real scene acquisition points: and calculating based on the real acquisition point of the target and the camera gesture to obtain the position information. In addition, the actual application of the real scene acquisition point needs to be combined with the real scene situation. For example: the stair component is preinstalled at the corner, and the corner is 90 degrees angles, and then only 25% of all the first real scene acquisition points and the second real scene acquisition points can be used for acquiring real scene video streams.
Further, the shooting the real scene with the camera to obtain the real scene video stream includes:
acquiring a perspective projection matrix of a camera, and converting camera coordinates into screen coordinates based on the perspective projection matrix;
and acquiring the real scene video stream based on the real scene position and the screen coordinates.
It should be explained that the perspective projection matrix is as follows:
$$T = \begin{pmatrix} f c_x & 0 & x_0 \\ 0 & f c_y & y_0 \\ 0 & 0 & 1 \end{pmatrix}$$

where $T$ represents the perspective projection matrix, $c_x$ is the scale factor of the perspective projection matrix in the x-axis direction, $f$ is the focal length of the camera, $c_y$ is the scale factor of the perspective projection matrix in the y-axis direction, and $(x_0, y_0)$ represents the position in the reference coordinate system at the time of camera shooting.
It should be understood that the screen coordinates are the coordinates, relative to the reference coordinates, of the position at which the captured video is presented on the screen. The main purpose of converting camera coordinates into screen coordinates is to realize the two-dimensional representation of a three-dimensional object in computer graphics, which helps improve the user experience.
Illustratively, based on the identification position of the virtual prefabricated building component and the perspective projection matrix, the point corresponding to the identification position in the reference coordinate system is converted into a point in the camera coordinate system, and the converted point is then converted into screen coordinates through the position and orientation of the camera, thereby achieving accurate display of the image of the virtual prefabricated building component on the screen.
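Following the matrix T above, a camera-space point is projected to screen (pixel) coordinates by applying T and dividing by depth. The sketch assumes the reconstructed form of T, with f·c_x and f·c_y on the diagonal and (x_0, y_0) as the offset; the numeric values are illustrative.

```python
import numpy as np

def projection_matrix(f, c_x, c_y, x0, y0):
    """Perspective projection matrix T, as reconstructed from the patent's
    symbol definitions (f: focal length; c_x, c_y: axis scale factors;
    (x0, y0): reference position at the time of shooting)."""
    return np.array([[f * c_x, 0.0,     x0],
                     [0.0,     f * c_y, y0],
                     [0.0,     0.0,     1.0]])

def to_screen(T, point_cam):
    """Project a 3D camera-coordinate point to 2D screen coordinates."""
    u, v, w = T @ point_cam
    return u / w, v / w                        # perspective divide by depth

# 35 mm focal length, ~28000 px/m sensor scale, 1920x1080 principal point.
T = projection_matrix(f=0.035, c_x=28000, c_y=28000, x0=960, y0=540)
print(to_screen(T, np.array([0.5, -0.2, 4.0])))   # a point 4 m ahead of the camera
```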
S3, shooting a real scene with a camera based on the real scene position to obtain a real scene video stream.
It should be explained that the real scene video stream refers to the transmission of video data of the real scene; that is, it reflects, in video form, the natural environment in which the virtual prefabricated building component is located.
S4, fusing the virtual prefabricated building component into the real scene video stream to obtain a real scene video stream comprising the virtual prefabricated building component.
It should be appreciated that, optionally, an image processor may be selected to fuse the virtual prefabricated building component into the real scene video stream, obtaining the real scene video stream comprising the virtual prefabricated building component; other methods can achieve the same effect and are not described herein.
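The patent delegates fusion to an image processor without detailing it. As one illustrative realization, the sketch below alpha-blends a rendered RGBA image of the virtual component into a real scene frame at its projected screen position (NumPy only; a production system would render via the camera pose and projection above).

```python
import numpy as np

def fuse(frame: np.ndarray, overlay_rgba: np.ndarray, top: int, left: int) -> np.ndarray:
    """Alpha-blend a rendered RGBA overlay (the virtual component) into a
    3-channel frame at (top, left). Shapes: frame (H, W, 3), overlay (h, w, 4)."""
    h, w = overlay_rgba.shape[:2]
    roi = frame[top:top + h, left:left + w].astype(np.float32)
    alpha = overlay_rgba[..., 3:4].astype(np.float32) / 255.0
    blended = alpha * overlay_rgba[..., :3] + (1.0 - alpha) * roi
    out = frame.copy()
    out[top:top + h, left:left + w] = blended.astype(np.uint8)
    return out

frame = np.zeros((1080, 1920, 3), np.uint8)            # one real scene frame
overlay = np.zeros((200, 300, 4), np.uint8)
overlay[..., 1], overlay[..., 3] = 180, 255            # opaque green placeholder
fused = fuse(frame, overlay, top=400, left=800)
```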
S5, optimizing the real scene video stream comprising the virtual prefabricated building component to obtain an optimized scene video stream, and sending the optimized scene video stream to the initiating terminal of the pre-layout experience instruction to complete the pre-layout experience of the prefabricated building component.
It should be appreciated that the optimizing a real-scene video stream comprising virtual fabricated building elements results in an optimized-scene video stream comprising:
Obtaining preset parameters of the virtual assembly type building component to obtain rendering parameters, wherein the rendering parameters comprise: the material of the virtual assembly type building component and the color of the virtual assembly type building component;
acquiring the light wave wavelength at the real scene position, and constructing the brightness relation of a reference point by using the light wave wavelength, the brightness relation of the reference point being:

$$I = \int_{\gamma_{1(\min)}}^{\gamma_{1(\max)}} l_s\,\alpha(\theta_1,\theta_0)\,\beta_s(\theta_1)\cos(\theta_1)\,d\gamma + \int_{\gamma_{2(\min)}}^{\gamma_{2(\max)}} l_t\,\alpha(\theta_2,\theta_3)\,\beta_t(\theta_2)\cos(\theta_2)\,d\gamma$$

where $I$ represents the brightness of the reference point; $l_s$ represents the incident intensity of sunlight at the reference point and $l_t$ the incident intensity of sky light at the reference point; $\gamma_{1(\min)}$, $\gamma_{1(\max)}$ are the minimum and maximum of the solid angle of sunlight relative to the reference point, and $\gamma_{2(\min)}$, $\gamma_{2(\max)}$ the minimum and maximum of the solid angle of sky light relative to the reference point; $\alpha(\theta_1,\theta_0)$ and $\alpha(\theta_2,\theta_3)$ are the bidirectional reflectance distribution functions of sunlight and sky light at the reference point; $\beta_s(\theta_1)$ and $\beta_t(\theta_2)$ are the shading functions of the reference point for sunlight and sky light; $\gamma$ is the solid angle; $\cos(\theta_1)$, $\cos(\theta_2)$ are the cosines of the angles between the sunlight and sky light directions and the normal at the reference point; $\theta_1$, $\theta_0$ are the incidence and reflection angles of sunlight, and $\theta_2$, $\theta_3$ the incidence and reflection angles of sky light;
and obtaining estimated illumination for rendering the virtual assembly building component according to the brightness relation, and optimizing the real scene video stream by using the estimated illumination and rendering parameters to obtain an optimized scene video stream.
It can be understood that the reference point is a point on a real building in a frame image of the real scene video stream; after the illumination effect on the real building is acquired through the reference point, it is applied to the rendering of the virtual building, improving the user's sense of experience. The rendering parameters are the parameters used for rendering effects on the building component; optionally, they are configured in 3ds Max. The purpose of optimizing the real scene video stream is to improve the user's experience of the prefabricated building pre-layout; since the material and color of the virtual prefabricated building component influence the prefabricated building actually produced, they are indispensable factors when optimizing the real scene video stream. For example, if stainless steel is selected as the material of a certain prefabricated building component and green paint with a specular reflection effect is coated on the outside, green light will be reflected off the prefabricated building and then affect the experience of the other prefabricated building components. Optionally, the estimated illumination for rendering the building component is obtained from the brightness relation by a base image decomposition algorithm, which is prior art and is not described in detail herein. The optimized scene video stream is a video comprising the material of the virtual building component, the color of the virtual building component, and the estimated illumination.
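The brightness relation above integrates the sun and sky contributions over their solid angle ranges. The sketch below evaluates it by a simple Riemann sum; the BRDF α, shading functions β, the mapping from solid angle to incidence angle, and all numeric values are placeholder assumptions standing in for the patent's unspecified models.

```python
import numpy as np

def reference_brightness(l_s, l_t, sun_range, sky_range, theta_of_gamma,
                         alpha, beta_s, beta_t, theta0=0.3, theta3=0.4, n=1000):
    """Riemann-sum evaluation of I = ∫ l_s α β_s cosθ1 dγ + ∫ l_t α β_t cosθ2 dγ.

    theta_of_gamma maps a solid angle sample to an incidence angle; it and the
    default reflection angles theta0/theta3 are illustrative assumptions.
    """
    def term(intensity, g_min, g_max, refl_angle, beta):
        gammas = np.linspace(g_min, g_max, n)
        thetas = theta_of_gamma(gammas)
        vals = intensity * alpha(thetas, refl_angle) * beta(thetas) * np.cos(thetas)
        return np.mean(vals) * (g_max - g_min)     # Riemann sum over dγ
    return (term(l_s, *sun_range, theta0, beta_s) +
            term(l_t, *sky_range, theta3, beta_t))

# Placeholder Lambertian-like BRDF and constant (unshadowed) shading functions;
# 6.8e-5 sr roughly matches the solid angle subtended by the sun.
I = reference_brightness(
    l_s=1000.0, l_t=200.0, sun_range=(0.0, 6.8e-5), sky_range=(0.0, np.pi),
    theta_of_gamma=lambda g: np.minimum(g, np.pi / 2),
    alpha=lambda ti, tr: np.full_like(ti, 1.0 / np.pi),
    beta_s=lambda t: np.ones_like(t), beta_t=lambda t: np.ones_like(t))
print(I)
```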
Further, the estimating the illumination in the real scene based on the real scene position to obtain the estimated illumination includes:
and fitting the illumination under the real scene by using a pre-trained illumination fitting device to obtain estimated illumination.
It should be explained that the illumination fitter is a tool for estimating the illumination information of the virtual prefabricated building component to be added, using the illumination information of the video frames in the real scene video stream. Optionally, it is a tool obtained by training a deep learning network on collected photographs together with their illumination intensity and ambient illumination intensity; the illumination conditions in the collected photographs can be fitted by the illumination fitter, and the fitted illumination conditions are then used to optimize the real scene video stream comprising the virtual prefabricated building component. Other technologies can achieve the same effect and are not described in detail herein.
It should be appreciated that the optimized scene video stream is the video stream obtained after rendering the real scene video stream, and it can enhance the user experience. The optimized scene video stream is sent to the initiating terminal of the pre-layout experience instruction; for example, if the initiating terminal is an applet, the optimized scene video stream is sent to the applet after the real scene video stream has been optimized. Other technologies can achieve the same effect and are not described herein.
In order to solve the problems in the background art, the embodiment of the invention receives a pre-layout experience instruction for a prefabricated building component and parses the instruction to obtain a building component set for generating the prefabricated building component, wherein the building component set comprises a floor component, a wallboard component, a stair component and a balcony component. A virtual prefabricated building component is generated based on the building component set, the position of the virtual prefabricated building component in the real scene is confirmed to obtain the virtual prefabricated building component position, and the position information of the real scene is obtained by taking the virtual prefabricated building component position as the circle center, yielding the real scene position. Based on the real scene position, a camera shoots the real scene to obtain a real scene video stream. The virtual prefabricated building component is fused into the real scene video stream to obtain a real scene video stream comprising the virtual prefabricated building component; this video stream is optimized to obtain an optimized scene video stream, which is sent to the initiating terminal of the pre-layout experience instruction to complete the pre-layout experience of the prefabricated building component. Therefore, the AR-based prefabricated building component pre-layout experience method, device, electronic device and computer-readable storage medium of the invention can solve the problem of a poor prefabricated building component pre-layout experience.
FIG. 2 is a functional block diagram of an AR-based prefabricated building element pre-layout experience device according to an embodiment of the present invention.
The AR-based prefabricated building element pre-layout experience device 100 of the present invention may be installed in an electronic device. Depending on the functionality implemented, the AR-based fabricated building element pre-layout experience device 100 may include an experience instruction receiving module 101, a real scene location confirmation module 102, a video stream acquisition module 103, and a video stream optimization and experience module 104. The module of the invention, which may also be referred to as a unit, refers to a series of computer program segments, which are stored in the memory of the electronic device, capable of being executed by the processor of the electronic device and of performing a fixed function.
The experience instruction receiving module 101 is configured to receive a pre-layout experience instruction of an assembled building component, analyze the pre-layout experience instruction, and obtain a building component set for generating the assembled building component, where the building component set includes a floor component, a wallboard component, a stair component, and a balcony component;
the real scene position confirmation module 102 is configured to generate a virtual assembly type building element based on the building element set, confirm a position of the virtual assembly type building element in a real scene to obtain a position of the virtual assembly type building element, and obtain position information of the real scene by using the position of the virtual assembly type building element as a circle center to obtain a position of the real scene;
The video stream obtaining module 103 is configured to capture a real scene with a camera based on a real scene position, so as to obtain a real scene video stream;
fusing the virtual assembly type building components into a real scene video stream to obtain the real scene video stream comprising the virtual assembly type building components;
the video stream optimizing and experiencing module 104 is configured to optimize a real scene video stream including a virtual building component to obtain an optimized scene video stream, and send the optimized scene video stream to an initiating end of a pre-layout experience instruction to complete pre-layout experience of the building component.
Fig. 3 is a schematic structural diagram of an electronic device for implementing the method and apparatus for experiencing pre-layout of AR-based fabricated building elements according to an embodiment of the present invention.
The electronic device 1 may comprise a processor 10, a memory 11, a bus 12 and a communication interface 13, and may further comprise a computer program stored in the memory 11 and executable on the processor 10, such as an AR-based prefabricated building element pre-layout experience program.
The memory 11 includes at least one type of readable storage medium, including flash memory, a mobile hard disk, a multimedia card, a card memory (e.g., SD or DX memory, etc.), a magnetic memory, a magnetic disk, an optical disk, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device 1, such as a removable hard disk of the electronic device 1. The memory 11 may in other embodiments also be an external storage device of the electronic device 1, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the electronic device 1. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device 1. The memory 11 may be used not only for storing application software installed in the electronic device 1 and various types of data, such as codes of an AR-based prefabricated building component pre-layout experience program, but also for temporarily storing data that has been output or is to be output.
The processor 10 may be comprised of integrated circuits in some embodiments, for example, a single packaged integrated circuit, or may be comprised of multiple integrated circuits packaged with the same or different functions, including one or more central processing units (Central Processing unit, CPU), microprocessors, digital processing chips, graphics processors, combinations of various control chips, and the like. The processor 10 is a Control Unit (Control Unit) of the electronic device, connects the respective components of the entire electronic device using various interfaces and lines, executes or executes programs or modules (e.g., AR-based building block pre-layout experience program, etc.) stored in the memory 11, and invokes data stored in the memory 11 to perform various functions of the electronic device 1 and process data.
The bus may be a peripheral component interconnect standard (peripheral component interconnect, PCI) bus or an extended industry standard architecture (extended industry standard architecture, EISA) bus, among others. The bus may be classified as an address bus, a data bus, a control bus, etc. The bus is arranged to enable a connection communication between the memory 11 and at least one processor 10 etc.
Fig. 3 shows only an electronic device with components, it being understood by a person skilled in the art that the structure shown in fig. 3 does not constitute a limitation of the electronic device 1, and may comprise fewer or more components than shown, or may combine certain components, or may be arranged in different components.
For example, although not shown, the electronic device 1 may further include a power source (such as a battery) for supplying power to each component, and preferably, the power source may be logically connected to the at least one processor 10 through a power management device, so that functions of charge management, discharge management, power consumption management, and the like are implemented through the power management device. The power supply may also include one or more of any of a direct current or alternating current power supply, recharging device, power failure detection circuit, power converter or inverter, power status indicator, etc. The electronic device 1 may further include various sensors, bluetooth modules, wi-Fi modules, etc., which will not be described herein.
Further, the electronic device 1 may also comprise a network interface, optionally the network interface may comprise a wired interface and/or a wireless interface (e.g. WI-FI interface, bluetooth interface, etc.), typically used for establishing a communication connection between the electronic device 1 and other electronic devices.
The electronic device 1 may optionally further comprise a user interface, which may be a Display, an input unit, such as a Keyboard (Keyboard), or a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch, or the like. The display may also be referred to as a display screen or display unit, as appropriate, for displaying information processed in the electronic device 1 and for displaying a visual user interface.
It should be understood that the embodiments described are for illustrative purposes only and do not limit the scope of the patent application to this configuration.
The AR-based prefabricated building component pre-layout experience program stored in the memory 11 of the electronic device 1 is a combination of instructions that, when executed by the processor 10, can implement the following:
receiving a pre-layout experience instruction for a prefabricated building component, and parsing the pre-layout experience instruction to obtain a building component set for generating the prefabricated building component, wherein the building component set comprises a floor component, a wallboard component, a stair component and a balcony component;
generating a virtual prefabricated building component based on the building component set, confirming the position of the virtual prefabricated building component in a real scene to obtain the virtual prefabricated building component position, and acquiring position information of the real scene with the virtual prefabricated building component position as the circle center to obtain the real scene position;
photographing the real scene with a camera based on the real scene position to obtain a real scene video stream;
fusing the virtual prefabricated building component into the real scene video stream to obtain a real scene video stream comprising the virtual prefabricated building component;
and optimizing the real scene video stream comprising the virtual prefabricated building component to obtain an optimized scene video stream, and sending the optimized scene video stream to the initiating terminal of the pre-layout experience instruction to complete the pre-layout experience of the prefabricated building component.
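Read purely as orientation, the five instruction steps above might be organized as the following skeleton; every function and data structure here is a hypothetical placeholder standing in for the disclosed processing, not an implementation of it.

```python
# A minimal, illustrative skeleton of the five instruction steps above.
# Every name and stub here is a hypothetical placeholder that only echoes
# the data flow; none of it is the disclosed implementation.

from dataclasses import dataclass

@dataclass
class Instruction:
    sender: str   # initiating terminal of the pre-layout experience instruction
    payload: str

def parse_instruction(instruction):
    # Step 1: derive the building component set (floor, wallboard, stair, balcony).
    return ["floor", "wallboard", "stair", "balcony"]

def generate_virtual_components(component_set):
    # Step 2a: one virtual component (with an assumed size) per set entry.
    return [{"type": t, "size": (3.0, 0.2, 2.8)} for t in component_set]

def confirm_position(components):
    # Step 2b: confirmed position of the virtual components in the real scene.
    return (0.0, 0.0, 0.0)

def acquire_scene_positions(center):
    # Step 2c: real scene acquisition positions around the circle center.
    return [center]

def capture(positions):
    # Step 3: stand-in for the camera's real scene video stream.
    return [f"frame@{p}" for p in positions]

def fuse(components, stream):
    # Step 4: overlay the virtual components onto each frame.
    return [(frame, components) for frame in stream]

def optimize(fused_stream):
    # Step 5: illumination estimation and rendering would happen here.
    return fused_stream

def pre_layout_experience(instruction):
    components = generate_virtual_components(parse_instruction(instruction))
    positions = acquire_scene_positions(confirm_position(components))
    return optimize(fuse(components, capture(positions)))

print(pre_layout_experience(Instruction("terminal-1", "pre-layout"))[0][0])
```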
Specifically, for the implementation of the above instructions by the processor 10, reference may be made to the description of the related steps in the embodiments corresponding to Figs. 1 to 3, which is not repeated here.
Further, if the modules/units integrated in the electronic device 1 are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. The computer-readable storage medium may be volatile or non-volatile. For example, the computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, or a read-only memory (ROM).
The present invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor of an electronic device, can implement the following:
receiving a pre-layout experience instruction for a prefabricated building component, and parsing the pre-layout experience instruction to obtain a building component set for generating the prefabricated building component, wherein the building component set comprises a floor component, a wallboard component, a stair component and a balcony component;
generating a virtual prefabricated building component based on the building component set, confirming the position of the virtual prefabricated building component in a real scene to obtain the virtual prefabricated building component position, and acquiring position information of the real scene with the virtual prefabricated building component position as the circle center to obtain the real scene position;
photographing the real scene with a camera based on the real scene position to obtain a real scene video stream;
fusing the virtual prefabricated building component into the real scene video stream to obtain a real scene video stream comprising the virtual prefabricated building component;
and optimizing the real scene video stream comprising the virtual prefabricated building component to obtain an optimized scene video stream, and sending the optimized scene video stream to the initiating terminal of the pre-layout experience instruction to complete the pre-layout experience of the prefabricated building component.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus, device and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; the division into modules is only a logical functional division, and other divisions are possible in actual implementation.
The modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical units: they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional modules in the embodiments of the present invention may be integrated in one processing unit, may each exist alone physically, or two or more of them may be integrated in one unit. The integrated unit can be realized in the form of hardware, or in the form of hardware plus software functional modules.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from its spirit or essential characteristics.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description; all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claims concerned.
Blockchain is a novel application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks generated in association by cryptographic methods, each data block containing a batch of network transaction information used to verify the validity of the information (anti-counterfeiting) and to generate the next block. A blockchain may comprise a blockchain underlying platform, a platform product service layer, an application service layer, and so on.
Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude the plural. Multiple units or means recited in the apparatus claims may also be implemented by one unit or means through software or hardware. Terms such as "first" and "second" are used to denote names and do not indicate any particular order.
Finally, it should be noted that the above embodiments merely illustrate the technical solution of the present invention and do not limit it. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art will understand that modifications and equivalent substitutions may be made to the technical solution of the present invention without departing from its spirit and scope.

Claims (10)

1. An AR-based prefabricated building component pre-layout experience method, characterized in that the method comprises the following steps:
receiving a pre-layout experience instruction for a prefabricated building component, and parsing the pre-layout experience instruction to obtain a building component set for generating the prefabricated building component, wherein the building component set comprises a floor component, a wallboard component, a stair component and a balcony component;
generating a virtual prefabricated building component based on the building component set, confirming the position of the virtual prefabricated building component in a real scene to obtain the virtual prefabricated building component position, and acquiring position information of the real scene with the virtual prefabricated building component position as the circle center to obtain the real scene position;
photographing the real scene with a camera based on the real scene position to obtain a real scene video stream;
fusing the virtual prefabricated building component into the real scene video stream to obtain a real scene video stream comprising the virtual prefabricated building component;
and optimizing the real scene video stream comprising the virtual prefabricated building component to obtain an optimized scene video stream, and sending the optimized scene video stream to the initiating terminal of the pre-layout experience instruction to complete the pre-layout experience of the prefabricated building component.
2. The AR-based prefabricated building component pre-layout experience method according to claim 1, wherein the acquiring position information of the real scene with the virtual prefabricated building component position as the circle center to obtain the real scene position comprises:
constructing a reference coordinate system, acquiring the feature coordinates of the virtual prefabricated building component position based on the reference coordinate system to obtain identification coordinates, acquiring the camera coordinates of the camera, and calculating the pose of the camera based on the camera coordinates and the identification coordinates to obtain the camera pose;
and acquiring position information of the real scene based on the camera pose to obtain the real scene position.
3. The AR-based prefabricated building component pre-layout experience method according to claim 2, wherein the camera pose is calculated based on the camera coordinates and the identification coordinates according to the following formula:
$$\begin{pmatrix} x_2 \\ y_2 \\ z_2 \end{pmatrix} = R \begin{pmatrix} x_1 \\ y_1 \\ z_1 \end{pmatrix} + P$$
wherein $(x_1, y_1, z_1)$ represents the camera coordinates and $(x_2, y_2, z_2)$ represents the identification coordinates; $R$ is the rotation matrix for converting the camera coordinates into the identification coordinates, and $P$ is the translation matrix for converting the camera coordinates into the identification coordinates; the camera pose is composed of the rotation matrix and the translation matrix.
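The claim states that $R$ and $P$ are computed from the camera and identification coordinates but does not prescribe a solver. One standard choice, assumed here purely for illustration, is SVD-based rigid alignment (the Kabsch method) over several corresponding point pairs; the function name `estimate_pose` and the test values are hypothetical.

```python
# Sketch: recover the rotation matrix R and translation P such that
# identification ≈ R @ camera + P, from N corresponding point pairs.
# The SVD (Kabsch) approach below is an assumed, common choice; the
# patent only states that R and P are computed.

import numpy as np

def estimate_pose(camera_pts, ident_pts):
    # camera_pts, ident_pts: (N, 3) arrays of corresponding coordinates.
    c_mean = camera_pts.mean(axis=0)
    i_mean = ident_pts.mean(axis=0)
    H = (camera_pts - c_mean).T @ (ident_pts - i_mean)  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))              # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    P = i_mean - R @ c_mean
    return R, P

# Tiny check with a known pose: rotate 90 degrees about z, shift by (1, 2, 3).
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
P_true = np.array([1.0, 2.0, 3.0])
cam = np.random.default_rng(0).random((5, 3))
ident = cam @ R_true.T + P_true
R, P = estimate_pose(cam, ident)
assert np.allclose(R, R_true) and np.allclose(P, P_true)
```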
4. The AR-based prefabricated building component pre-layout experience method according to claim 2, wherein the acquiring position information of the real scene based on the camera pose to obtain the real scene position comprises:
fitting a first reference circle and a second reference circle centered on the virtual prefabricated building component position, wherein the first reference circle is the minimum circle from which the real scene can be photographed with the camera, and the second reference circle is a circle from which the real scene can be photographed with the camera and whose radius is a preset radius;
uniformly dividing the first reference circle based on a preset first average value and a preset first starting position to obtain a plurality of first real scene acquisition points, and dividing the second reference circle based on a preset second average value and a preset second starting position to obtain a plurality of second real scene acquisition points;
and acquiring position information of the real scene based on the plurality of first real scene acquisition points, the plurality of second real scene acquisition points and the camera pose to obtain the real scene position.
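As an illustration of the uniform division in this claim, the sketch below places evenly spaced acquisition points on two circles centered on the virtual component position; the radii, point counts, and start angles are assumed example values rather than the preset values the claim refers to.

```python
# Sketch: evenly divide two reference circles centered on the virtual
# component position into real scene acquisition points. Radii, counts,
# and start angles are illustrative assumptions.

import math

def circle_points(center, radius, count, start_angle=0.0):
    cx, cy = center
    step = 2.0 * math.pi / count  # uniform angular division
    return [
        (cx + radius * math.cos(start_angle + k * step),
         cy + radius * math.sin(start_angle + k * step))
        for k in range(count)
    ]

center = (0.0, 0.0)  # virtual component position, plan view
first_circle = circle_points(center, radius=2.0, count=8)    # minimum viable circle
second_circle = circle_points(center, radius=5.0, count=12)  # preset-radius circle

for p in first_circle[:3]:
    print(f"({p[0]:+.2f}, {p[1]:+.2f})")
```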
5. The AR-based prefabricated building component pre-layout experience method according to claim 1, wherein the generating a virtual prefabricated building component based on the building component set comprises:
identifying one or more target components from the building component set;
sequentially extracting target components from the one or more target components, and performing the following operations on each extracted target component:
acquiring the size information of the extracted target component to obtain a generated size;
generating a virtual prefabricated building component based on the generated size.
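The per-component loop of this claim can be pictured as follows; the `VirtualComponent` type, the `is_target` flag, and the size triples are illustrative assumptions, not structures defined by the disclosure.

```python
# Sketch of the loop in claim 5: identify the target members, extract them
# sequentially, read each member's size information, and generate a virtual
# component of that size. All field names are assumed for illustration.

from dataclasses import dataclass

@dataclass
class VirtualComponent:
    kind: str
    width: float
    depth: float
    height: float

def generate_virtual(component_set):
    targets = [c for c in component_set if c["is_target"]]  # identify targets
    virtual = []
    for member in targets:                                  # extract sequentially
        w, d, h = member["size"]                            # the generated size
        virtual.append(VirtualComponent(member["kind"], w, d, h))
    return virtual

components = [
    {"kind": "floor",     "is_target": True,  "size": (6.0, 6.0, 0.15)},
    {"kind": "wallboard", "is_target": True,  "size": (3.0, 0.2, 2.8)},
    {"kind": "balcony",   "is_target": False, "size": (2.0, 1.2, 0.15)},
]
print(generate_virtual(components))
```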
6. The AR-based prefabricated building component pre-layout experience method according to claim 5, wherein the confirming the position of the virtual prefabricated building component in the real scene to obtain the virtual prefabricated building component position comprises:
obtaining the virtual prefabricated building component position based on the extracted target component.
7. The AR-based prefabricated building component pre-layout experience method according to claim 3, wherein the photographing the real scene with a camera to obtain a real scene video stream comprises:
acquiring a perspective projection matrix of the camera, and converting the camera coordinates into screen coordinates based on the perspective projection matrix;
and acquiring the real scene video stream based on the real scene position and the screen coordinates.
8. The AR-based prefabricated building component pre-layout experience method according to claim 7, wherein the perspective projection matrix is as follows:
$$T = \begin{pmatrix} f\,c_x & 0 & x_0 \\ 0 & f\,c_y & y_0 \\ 0 & 0 & 1 \end{pmatrix}$$
wherein $T$ represents the perspective projection matrix; $c_x$ and $c_y$ are the scale factors of the perspective projection matrix in the x-axis and y-axis directions; $f$ is the focal length of the camera; and $(x_0, y_0)$ represents the position in the reference coordinate system at the time of camera shooting.
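To illustrate how the matrix $T$ above maps a camera-space point to screen coordinates, the following sketch applies $T$ with a homogeneous divide; the focal length, scale factors, and reference position are assumed example values.

```python
# Sketch: convert a camera-space point to screen coordinates using the
# perspective projection matrix T in the form reconstructed above. The
# numeric values of f, c_x, c_y, and (x_0, y_0) are assumptions.

import numpy as np

f, c_x, c_y = 0.035, 20000.0, 20000.0  # focal length (m), pixels per metre
x0, y0 = 640.0, 360.0                  # reference position (image centre)

T = np.array([
    [f * c_x, 0.0,     x0],
    [0.0,     f * c_y, y0],
    [0.0,     0.0,     1.0],
])

def to_screen(point_cam):
    # Homogeneous projection: apply T, then divide by the third component.
    u, v, w = T @ np.asarray(point_cam, dtype=float)
    return u / w, v / w

print(to_screen((0.5, -0.2, 4.0)))  # a point 4 m in front of the camera
```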
9. The AR-based prefabricated building component pre-layout experience method according to claim 1, wherein the optimizing the real scene video stream comprising the virtual prefabricated building component to obtain an optimized scene video stream comprises:
obtaining preset parameters of the virtual prefabricated building component to obtain rendering parameters, wherein the rendering parameters comprise the material of the virtual prefabricated building component and the color of the virtual prefabricated building component;
acquiring the light wavelength at the real scene position, and constructing a brightness relation for a reference point from the light wavelength, the brightness relation of the reference point being as follows:
$$I = \int_{\gamma_{1(\min)}}^{\gamma_{1(\max)}} l_s\, \alpha(\theta_1, \theta_0)\, \beta_s(\theta_1)\, \cos(\theta_1)\, \mathrm{d}\gamma + \int_{\gamma_{2(\min)}}^{\gamma_{2(\max)}} l_t\, \alpha(\theta_2, \theta_3)\, \beta_t(\theta_2)\, \cos(\theta_2)\, \mathrm{d}\gamma$$
wherein $I$ represents the brightness of the reference point; $l_s$ represents the incident intensity of sunlight at the reference point and $l_t$ the incident intensity of sky light; $\gamma_{1(\min)}$ and $\gamma_{1(\max)}$ are the minimum and maximum solid angles of sunlight relative to the reference point, and $\gamma_{2(\min)}$ and $\gamma_{2(\max)}$ those of sky light; $\alpha(\theta_1, \theta_0)$ and $\alpha(\theta_2, \theta_3)$ are the bidirectional reflectance distribution functions of sunlight and sky light at the reference point; $\beta_s(\theta_1)$ and $\beta_t(\theta_2)$ are the shading functions of the reference point with respect to sunlight and sky light; $\gamma$ is the solid angle; $\cos(\theta_1)$ and $\cos(\theta_2)$ are the cosines of the angles between sunlight or sky light and the normal at the reference point; $\theta_1$ and $\theta_0$ are the incidence and reflection angles of sunlight, and $\theta_2$ and $\theta_3$ the incidence and reflection angles of sky light.
and obtaining, from the brightness relation, an estimated illumination for rendering the virtual prefabricated building component, and optimizing the real scene video stream using the estimated illumination and the rendering parameters to obtain the optimized scene video stream.
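To make the brightness relation concrete, the sketch below evaluates the sunlight and skylight integrals numerically under strong simplifying assumptions (a constant Lambertian BRDF, constant shading terms, and an incidence angle identified with the solid-angle variable); every numeric value is illustrative.

```python
# Sketch: numerically evaluate the reference-point brightness I as the sum
# of the sunlight and skylight solid-angle integrals in the relation above.
# The constant BRDF/shading terms and all bounds are assumptions made only
# so the example runs.

import math

def integrate(f, lo, hi, n=1000):
    # Simple midpoint rule over the solid-angle interval [lo, hi].
    step = (hi - lo) / n
    return step * sum(f(lo + (k + 0.5) * step) for k in range(n))

l_s, l_t = 1.2e3, 2.0e2                # incident intensities of sun and sky light
alpha = 1.0 / math.pi                  # Lambertian BRDF, assumed constant
beta_sun, beta_sky = 0.9, 0.7          # shading functions treated as constants

def sun_term(gamma):
    theta1 = gamma                     # incidence angle tied to solid angle (assumed)
    return l_s * alpha * beta_sun * math.cos(theta1)

def sky_term(gamma):
    theta2 = gamma
    return l_t * alpha * beta_sky * math.cos(theta2)

I = (integrate(sun_term, 0.0, 0.1)           # sunlight: narrow solid angle
     + integrate(sky_term, 0.0, math.pi / 2))  # skylight: the whole sky dome
print(f"estimated reference-point brightness I ~ {I:.2f}")
```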
10. An AR-based prefabricated building component pre-layout experience device, characterized in that the device comprises:
an experience instruction receiving module, configured to receive a pre-layout experience instruction for a prefabricated building component and parse the pre-layout experience instruction to obtain a building component set for generating the prefabricated building component, wherein the building component set comprises a floor component, a wallboard component, a stair component and a balcony component;
a real scene position confirming module, configured to generate a virtual prefabricated building component based on the building component set, confirm the position of the virtual prefabricated building component in a real scene to obtain the virtual prefabricated building component position, and acquire position information of the real scene with the virtual prefabricated building component position as the circle center to obtain the real scene position;
a video stream acquisition module, configured to photograph the real scene with a camera based on the real scene position to obtain a real scene video stream;
a component fusion module, configured to fuse the virtual prefabricated building component into the real scene video stream to obtain a real scene video stream comprising the virtual prefabricated building component;
and a video stream optimizing and experiencing module, configured to optimize the real scene video stream comprising the virtual prefabricated building component to obtain an optimized scene video stream, and send the optimized scene video stream to the initiating terminal of the pre-layout experience instruction to complete the pre-layout experience of the prefabricated building component.
CN202311546210.6A 2023-11-17 2023-11-17 AR-based prefabricated building component pre-layout experience method and device Active CN117579804B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311546210.6A CN117579804B (en) 2023-11-17 2023-11-17 AR-based prefabricated building component pre-layout experience method and device

Publications (2)

Publication Number Publication Date
CN117579804A true CN117579804A (en) 2024-02-20
CN117579804B CN117579804B (en) 2024-05-14

Family

ID=89894924

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311546210.6A Active CN117579804B (en) 2023-11-17 2023-11-17 AR-based prefabricated building component pre-layout experience method and device

Country Status (1)

Country Link
CN (1) CN117579804B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140306996A1 (en) * 2013-04-15 2014-10-16 Tencent Technology (Shenzhen) Company Limited Method, device and storage medium for implementing augmented reality
US20160180593A1 (en) * 2014-07-02 2016-06-23 Huizhou Tcl Mobile Communication Co., Ltd. Wearable device-based augmented reality method and system
KR101912396B1 (en) * 2017-06-13 2018-10-26 주식회사 아이닉스 Apparatus and Method for Generating Image at any point-view based on virtual camera
US20200286289A1 (en) * 2017-09-06 2020-09-10 XYZ Reality Limited Displaying a virtual image of a building information model
US20210183161A1 (en) * 2019-12-13 2021-06-17 Hover, Inc. 3-d reconstruction using augmented reality frameworks
WO2022040920A1 (en) * 2020-08-25 2022-03-03 南京翱翔智能制造科技有限公司 Digital-twin-based ar interactive system and method
WO2022040970A1 (en) * 2020-08-26 2022-03-03 南京翱翔信息物理融合创新研究院有限公司 Method, system, and device for synchronously performing three-dimensional reconstruction and ar virtual-real registration
WO2022227191A1 (en) * 2021-04-28 2022-11-03 平安科技(深圳)有限公司 Inactive living body detection method and apparatus, electronic device, and storage medium
US20230345196A1 (en) * 2020-12-31 2023-10-26 Huawei Administration Building, Bantian Augmented reality interaction method and electronic device

Also Published As

Publication number Publication date
CN117579804B (en) 2024-05-14

Similar Documents

Publication Publication Date Title
CN108764048B (en) Face key point detection method and device
US10339384B2 (en) Construction photograph integration with 3D model images
WO2021098454A1 (en) Region of concern detection method and apparatus, and readable storage medium and terminal device
US10791268B2 (en) Construction photograph integration with 3D model images
CN110109535A (en) Augmented reality generation method and device
KR101181967B1 (en) 3D street view system using identification information.
Fathi et al. A videogrammetric as-built data collection method for digital fabrication of sheet metal roof panels
EP4050305A1 (en) Visual positioning method and device
US20240071016A1 (en) Mixed reality system, program, mobile terminal device, and method
US20230351724A1 (en) Systems and Methods for Object Detection Including Pose and Size Estimation
Franz et al. Real-time collaborative reconstruction of digital building models with mobile devices
CN110222651A (en) A kind of human face posture detection method, device, terminal device and readable storage medium storing program for executing
WO2014026021A1 (en) Systems and methods for image-based searching
CN109214350A (en) A kind of determination method, apparatus, equipment and the storage medium of illumination parameter
CN112053440A (en) Method for determining individualized model and communication device
CN113379748A (en) Point cloud panorama segmentation method and device
CN111207672B (en) AR (augmented reality) measuring method
CN115527000B (en) Method and device for batch monomalization of unmanned aerial vehicle oblique photography model
CN117579804B (en) AR-based prefabricated building component pre-layout experience method and device
CN115578432B (en) Image processing method, device, electronic equipment and storage medium
WO2023038369A1 (en) Semantic three-dimensional (3d) building augmentation
CN116642490A (en) Visual positioning navigation method based on hybrid map, robot and storage medium
CN113628284B (en) Pose calibration data set generation method, device and system, electronic equipment and medium
CN113869218B (en) Face living body detection method and device, electronic equipment and readable storage medium
CN113177975B (en) Depth calculation method and three-dimensional modeling method based on spherical screen camera and laser radar

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant