CN112907751B - Virtual decoration method, system, equipment and medium based on mixed reality - Google Patents

Virtual decoration method, system, equipment and medium based on mixed reality

Info

Publication number
CN112907751B
CN112907751B · CN202110308327.5A
Authority
CN
China
Prior art keywords
decoration
dimensional model
target scene
virtual
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110308327.5A
Other languages
Chinese (zh)
Other versions
CN112907751A (en)
Inventor
Cui Yan (崔岩)
Current Assignee
China Germany Artificial Intelligence Institute Co ltd
China Germany Zhuhai Artificial Intelligence Institute Co ltd
4Dage Co Ltd
Original Assignee
China Germany Artificial Intelligence Institute Co ltd
China Germany Zhuhai Artificial Intelligence Institute Co ltd
4Dage Co Ltd
Priority date
Filing date
Publication date
Application filed by China Germany Artificial Intelligence Institute Co ltd, China Germany Zhuhai Artificial Intelligence Institute Co ltd, 4Dage Co Ltd filed Critical China Germany Artificial Intelligence Institute Co ltd
Priority to CN202110308327.5A
Publication of CN112907751A
Application granted
Publication of CN112907751B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10 Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a virtual decoration method, system, equipment, and medium based on mixed reality. Video streams are acquired at different positions of a target scene, and a plurality of image information of the target scene is obtained from the video streams. Depth information and point cloud information are calculated from the plurality of image information, and the video streams at the different positions are tracked, so as to range the target scene. Distance information of the target scene is obtained from the ranging, and a virtual three-dimensional model of the target scene is constructed from the distance information and the image information. Decoration data are acquired and rendered in real time together with the virtual three-dimensional model to obtain a decoration three-dimensional model, which is matched and marked against the virtual three-dimensional model of the target scene and then superimposed in real time onto the real target scene through mixed reality. Because the decoration three-dimensional model is superimposed in real time onto the real target scene, the user sees the desired decoration effect in place, and the mixed-reality superimposition makes that effect more realistic, helping users to view houses and plan decoration.

Description

Virtual decoration method, system, equipment and medium based on mixed reality
Technical Field
The application relates to the technical field of computer vision, in particular to a virtual decoration method, system, equipment and medium based on mixed reality.
Background
With the continuous development of China's economy, living standards have gradually improved, and more and more people purchase or build houses. Because decoration preferences differ from household to household, and although decoration design companies can provide users with three-dimensional design drawings, the delivered results of many decoration companies fall far short of the design drawings the users expected and cannot meet customer demands. Moreover, because every user's house type is different, many virtual decoration models cannot truly reflect the decoration effect owing to size deviations or visual deviations.
Secondly, most houses are sold as blank (unfinished) units or show units, so users cannot truly experience the living effect of the house type when viewing it, and the house-viewing experience is poor.
A method that allows a decoration design to truly reproduce the decoration effect is therefore needed.
Disclosure of Invention
Based on this, it is necessary to provide a virtual decoration method, system, device, and medium based on mixed reality, to solve the technical problem in the prior art of how to make a decoration three-dimensional model truly reproduce the decoration effect.
In a first aspect, a virtual decoration method based on mixed reality is provided, including:
acquiring video streams of different positions of a target scene through a spherical screen camera, and acquiring a plurality of image information of the target scene according to the video streams;
calculating depth information corresponding to different positions by utilizing the plurality of image information, calculating point cloud information of the corresponding positions according to the depth information, and tracking a video stream of the current position according to the point cloud information so as to realize the ranging of a target scene;
obtaining distance information of a target scene based on the distance measurement, and constructing a virtual three-dimensional model of the target scene according to the distance information and the image information;
acquiring decoration data, and rendering in real time according to the virtual three-dimensional model of the target scene and the decoration data to obtain a decoration three-dimensional model;
matching and marking the decoration three-dimensional model and the virtual three-dimensional model of the target scene;
and overlaying the decoration three-dimensional model into a real target scene in real time through mixed reality.
Preferably, calculating depth information and point cloud information corresponding to different positions by using the plurality of image information and tracking the video streams, so as to achieve ranging of the target scene, includes:
the plurality of image information is provided with overlapped characteristic areas, at least a first position and a second position are obtained according to the characteristic areas, coordinates of the first position and the second position in different image information are calculated, pixel coordinate information of the first position and the second position is marked through the coordinates, and depth information of different positions in the image information is calculated by combining gray information and parallax information of the characteristic areas;
and acquiring point cloud information of a corresponding position according to the depth information, tracking a video stream of the current position according to the point cloud information, extracting the point cloud information and the video stream of a specific area in the target scene, and performing ranging of the target scene according to the point cloud information and the video stream of the specific area.
Preferably, the obtaining distance information of the target scene based on the ranging, and constructing a virtual three-dimensional model of the target scene according to the distance information and the image information, includes:
and establishing a digital model according to the distance information, performing loop detection on the digital model, and mapping the image information to the digital model to obtain a virtual three-dimensional model of the target scene.
Preferably, the obtaining decoration data, and rendering in real time according to the virtual three-dimensional model of the target scene and the decoration data to obtain a decoration three-dimensional model, includes:
the decoration data comprise two-dimensional decoration design drawings, the two-dimensional decoration design drawings and the virtual three-dimensional model of the target scene are matched and marked according to the specific area, so that parameters of the two-dimensional decoration design drawings are adjusted to correspond to parameters of the virtual three-dimensional model of the target scene in real time;
identifying a characteristic object in a two-dimensional decoration design drawing, and digitally modeling the two-dimensional decoration design drawing and the characteristic object according to parameters of a virtual three-dimensional model of the target scene;
and rendering the two-dimensional decoration design drawing and the characteristic object subjected to digital modeling in real time according to the decoration style to obtain a decoration three-dimensional model.
Preferably, the obtaining decoration data, and rendering in real time according to the virtual three-dimensional model of the target scene and the decoration data to obtain a decoration three-dimensional model, includes:
the decoration data comprise various decoration models, and a user self-defines the decoration models to be placed in a virtual three-dimensional model of the target scene;
identifying parameters of the decoration model, carrying out matching marking on the decoration model and the virtual three-dimensional model of the target scene, and positioning and placing the decoration model in the virtual three-dimensional model of the target scene according to the matching marking;
and rendering the decoration model and the virtual three-dimensional model of the target scene with the decoration model in real time to obtain a decoration three-dimensional model.
Preferably, the matching and marking the decoration three-dimensional model and the virtual three-dimensional model of the target scene includes:
detecting whether the position information of a plurality of specific areas in the decoration three-dimensional model is matched with the position information of the specific areas of the virtual three-dimensional model of the target scene;
if the three-dimensional model is matched with the virtual three-dimensional model of the target scene, performing direction marking binding based on the position information of a plurality of specific areas of the virtual three-dimensional model of the target scene;
if the three-dimensional model is not matched with the virtual three-dimensional model of the target scene, carrying out local loop detection on the three-dimensional model of the decoration and the virtual three-dimensional model of the target scene, and adjusting parameters of the three-dimensional model of the decoration in real time until the three-dimensional model of the decoration is matched with the position information of a plurality of specific areas of the virtual three-dimensional model of the target scene, and carrying out direction mark binding.
Preferably, the real-time overlaying of the decorated three-dimensional model into a real target scene by mixed reality comprises:
and acquiring positioning information corresponding to the specific areas in the real scene, acquiring shooting tracks of the ball curtain camera shooting target scene, and superposing the decoration three-dimensional model according to the positioning information and the shooting tracks.
In a second aspect, there is provided a mixed reality-based virtual finishing system, comprising:
the data acquisition unit is used for acquiring video streams at different positions of the target scene and obtaining a plurality of image information of the target scene from the video streams, and for calculating depth information and point cloud information corresponding to the different positions from the plurality of image information and tracking the video streams, so as to range the target scene;
the first model unit is used for obtaining distance information of the target scene based on the ranging and constructing a virtual three-dimensional model of the target scene according to the distance information and the image information;
the second model unit is used for acquiring decoration data and rendering in real time according to the virtual three-dimensional model of the target scene and the decoration data to obtain a decoration three-dimensional model;
the data output unit is used for matching and marking the decoration three-dimensional model with the virtual three-dimensional model of the target scene, and for superimposing the decoration three-dimensional model onto the real target scene in real time through mixed reality.
In a third aspect, a virtual decoration device based on mixed reality is provided, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the above virtual decoration method based on mixed reality when executing the computer program.
In a fourth aspect, a computer readable storage medium is provided, storing a computer program which, when executed by a processor, implements the steps of the above-described mixed reality based virtual finishing method.
The implementation of the embodiment of the application has the following beneficial effects:
the application provides a virtual decoration method, a system, equipment and a medium based on mixed reality, which are characterized in that video streams at different positions of a target scene are obtained through a spherical screen camera, and a plurality of image information of the target scene are obtained according to the video streams; calculating depth information, point cloud information and video streams corresponding to different positions by utilizing a plurality of image information so as to realize the ranging of a target scene; obtaining distance information of a target scene based on distance measurement, and constructing a virtual three-dimensional model of the target scene according to the distance information and the image information; acquiring decoration data, and rendering the decoration data in real time according to the virtual three-dimensional model of the target scene to obtain a decoration three-dimensional model; matching and marking the decoration three-dimensional model and a virtual three-dimensional model of the target scene; and overlapping the decoration three-dimensional model into a real target scene in real time through mixed reality. According to the method, the virtual three-dimensional model of the target scene is obtained by obtaining the video stream of the target scene, and the decoration data are combined to obtain the decoration three-dimensional model, so that the decoration three-dimensional model is superimposed in real time into the real target scene, the real target scene can obtain the decoration effect required by a user, the decoration effect is more real through a superposition mode of mixed reality, and the method can help the user to better see a house and decorate.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings required by the embodiments or the description of the prior art are briefly introduced below. The drawings in the following description are obviously only some embodiments of the application; a person skilled in the art may obtain other drawings from them without inventive effort.
Wherein:
FIG. 1 is a schematic flow chart of an implementation of a mixed reality-based virtual decoration method in one embodiment;
FIG. 2 is a block diagram of a mixed reality based virtual finishing system in one embodiment;
fig. 3 is an internal structural diagram of a virtual finishing device based on mixed reality in one embodiment.
Detailed Description
The embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort fall within the scope of the application.
In an embodiment, a virtual decoration method based on mixed reality is provided. The execution subject of the method is a device capable of implementing it, which may include a server and a terminal; the terminal includes mobile terminals such as mobile phones and tablet computers.
As shown in fig. 1, the virtual decoration method based on mixed reality according to the embodiment of the present application specifically includes:
step 101, obtaining video streams of different positions of a target scene through a spherical screen camera, and obtaining a plurality of image information of the target scene according to the video streams;
the spherical screen camera is specifically a multi-view spherical screen camera, including but not limited to a binocular, a trinocular or an octal spherical screen camera, and the use of the multi-view spherical screen camera can enable a photographed video stream to reflect a real space scene of a target scene, and when the spherical screen camera is used, the photographing axis of the multi-view spherical screen camera is kept on a horizontal line for photographing. And recording current position information of the dome camera at shooting time, recording next position information when moving to the next position, and recording moving tracks of the dome camera.
Furthermore, a plurality of image information of key frames in the target scene is extracted from the captured video stream, and the target scene is ranged by processing this image information, which reduces the processor's computational load and allows ranging and digital modeling at a higher speed.
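The key-frame interception described above can be sketched as a simple sampling rule: keep a frame only when it differs enough from the last kept key frame. This is a minimal illustration under assumed details — the difference metric (mean absolute pixel difference) and the threshold are not specified by the patent:

```python
def frame_difference(a, b):
    """Mean absolute pixel difference between two frames,
    represented here as equal-length lists of floats."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def select_keyframes(frames, diff_threshold=0.3):
    """Keep the first frame, then every frame whose difference from
    the last kept key frame exceeds the threshold (assumed metric)."""
    if not frames:
        return []
    keyframes = [frames[0]]
    for frame in frames[1:]:
        if frame_difference(keyframes[-1], frame) > diff_threshold:
            keyframes.append(frame)
    return keyframes
```

Near-duplicate frames are dropped, so later depth and point-cloud computation runs on far fewer images than the raw stream contains.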
In other embodiments, the feature points in the video stream may be identified from the video stream information captured by the multi-view spherical screen camera and spatially stitched to obtain a three-dimensional model of the target scene.
Step 102, calculating depth information corresponding to different positions by using the plurality of image information, calculating point cloud information of the corresponding positions according to the depth information, and tracking a video stream of the current position according to the point cloud information so as to realize the ranging of a target scene; the method specifically comprises the following steps:
the method comprises the steps that a plurality of image information are provided with overlapped characteristic areas, at least a first position and a second position are obtained according to the characteristic areas, coordinates of the first position and the second position in different image information are calculated, pixel coordinate information of the first position and pixel coordinate information of the second position are marked through the coordinates, and depth information of different positions in the image information is calculated by combining gray level information and parallax information of the characteristic areas; according to the identified overlapped characteristic areas and the coordinate information of a plurality of positions, the positions of different characteristic points in the characteristic areas are completely positioned, so that the position information among the different characteristic points is found, the pixel coordinate information is marked, the depth information is obtained through the analysis and calculation of the pixel coordinate information and the gray information, parallax information and other information of the characteristic areas, the more accurate calculation of the depth information can be ensured, and the deviation of the later digital modeling caused by the calculation error of the depth information is avoided.
In this embodiment, taking either image of an image pair as the reference image, the disparity map is an image of the same size as the reference image whose element values are the disparity values.
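The standard stereo relation behind turning such a disparity map into depth is Z = f·B/d, where f is the focal length in pixels, B the baseline between the two viewpoints, and d the disparity. A minimal sketch (the function and parameter names are illustrative, not from the patent):

```python
def disparity_to_depth(disparity_px, focal_length_px, baseline_m):
    """Classic stereo triangulation: depth Z = f * B / d.
    Larger disparity means the point is closer to the camera."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px
```

For example, with a 1000 px focal length and a 0.1 m baseline, a 100 px disparity corresponds to a depth of 1 m, and halving the disparity doubles the depth.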
Further, point cloud information of the corresponding position is obtained from the depth information, the video stream of the current position is tracked according to the point cloud information, the point cloud information and video stream of specific areas in the target scene are extracted, and the target scene is ranged from the point cloud information and video stream of those specific areas; specifically, the mapping relation between the point cloud information of a specific area and the corresponding position in the video stream is tracked to range the target scene.
The point cloud information and the video stream are trained with deep machine learning to obtain the mapping relation between them, and the point cloud information and video stream of specific areas in the target scene are extracted, where a specific area refers to a distinctive position such as a corner, a window, or a door. Using the recognition of these specific areas as positioning reference points, the distance and size information of the target scene is extracted from the mapping relation between the point cloud information and the video stream, so as to realize ranging. This approach can capture images directly from the video stream to obtain depth information of the environment, with a simple algorithm and low computational cost; by training a machine-learning model on the point cloud and video-stream information and ranging through the tracked mapping relation, it can adapt to automatic ranging of larger scenes and reduces the computational load.
Step 103, obtaining distance information of a target scene based on the distance measurement, and constructing a virtual three-dimensional model of the target scene according to the distance information and the image information;
and establishing a digital model according to the distance information, performing loop detection on the digital model, and mapping the image information to the digital model to obtain a virtual three-dimensional model of the target scene.
For a plurality of images or video streams, the feature points (pixel points in an image) of each single image are extracted with SIFT descriptors, the position information of each feature point and its neighborhood is analyzed, and the feature points are constrained according to their neighborhoods.
The currently recorded position of the spherical screen camera is compared with its past positions to detect whether the distances are close; if the distance between them is detected to be within a certain threshold range, the spherical screen camera is considered to have returned to a place it has already visited, and closed-loop detection is started.
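The closed-loop trigger described here — declaring a loop when the camera returns near a previously recorded position — can be sketched as a proximity test over the recorded track. The radius and the minimum gap (to avoid matching the immediately preceding positions) are assumed parameters, not values from the patent:

```python
import math

def detect_loop_closure(track, radius=0.5, min_gap=5):
    """Return the index of an earlier track position that the current
    (last) position has come back within `radius` metres of, skipping
    the most recent `min_gap` positions; None if no loop is detected."""
    current = track[-1]
    for i, past in enumerate(track[:-min_gap]):
        if math.dist(current, past) <= radius:
            return i
    return None
```

When the function returns an index, the mapping pipeline can start closed-loop detection and correct accumulated drift between the two visits.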
When mapping the image information, the digital model is mapped according to the feature points and their neighborhoods while adjacent images are checked, thereby obtaining the virtual three-dimensional model of the target scene.
Step 104, acquiring decoration data, and rendering in real time according to the virtual three-dimensional model of the target scene and the decoration data to obtain a decoration three-dimensional model; the method specifically comprises the following steps:
the decoration data comprise two-dimensional decoration design drawings, the two-dimensional decoration design drawings and the virtual three-dimensional model of the target scene are matched and marked according to the specific area, so that parameters of the two-dimensional decoration design drawings are adjusted to correspond to parameters of the virtual three-dimensional model of the target scene in real time;
identifying a characteristic object in a two-dimensional decoration design drawing, and digitally modeling the two-dimensional decoration design drawing and the characteristic object according to parameters of a virtual three-dimensional model of the target scene;
and rendering the two-dimensional decoration design drawing and the characteristic object subjected to digital modeling in real time according to the decoration style to obtain a decoration three-dimensional model.
Matching and marking according to the specific areas specifically means marking the specific positions annotated in the two-dimensional decoration design drawing, such as corners, windows, and doors, in mutual correspondence with the virtual three-dimensional model of the target scene. For example, when the sizes deviate, deviation correction is started automatically: if the dimensional parameters of the two are inconsistent, the virtual three-dimensional model of the target scene is checked against the video stream, and when the model's dimensions deviate, they are corrected automatically; because the position information of the specific areas or feature points can be detected, this correction is fast. If the virtual three-dimensional model of the target scene agrees with the video stream, the parameters of the two-dimensional decoration design drawing, such as its dimensions, are adjusted automatically instead, so that the decoration three-dimensional model stays consistent with the virtual three-dimensional model of the target scene.
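The automatic size-deviation correction can be illustrated as rescaling the two-dimensional plan by the ratio of a reference length measured in the scene model to the same length in the plan, e.g. a wall between two marked corners. This is a sketch under that assumption; the patent does not specify the correction formula:

```python
def rescale_plan(plan_points, plan_ref_len, model_ref_len):
    """Scale 2-D plan coordinates so that a reference length in the
    plan (e.g. a wall between two marked corners) matches the length
    measured in the virtual three-dimensional model of the scene."""
    factor = model_ref_len / plan_ref_len
    return [(x * factor, y * factor) for x, y in plan_points]
```

After rescaling, the plan's specific areas (corners, windows, doors) line up dimensionally with their counterparts in the scene model.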
The characteristic objects are the furniture items of the decoration design marked in the two-dimensional decoration design drawing, such as sofas, tea tables, and beds. By recognizing the characteristic objects, they are digitally modeled with parameters such as distances and dimensions adapted to the virtual three-dimensional model of the target scene.
In other embodiments, the decoration data may further include various decoration models, and the user may place custom-selected decoration models in the virtual three-dimensional model of the target scene. For example, already-modeled sofas, tea tables, chairs, and the like are stored in a database, and the user can select them for the virtual three-dimensional model of the target scene as needed, so as to achieve the desired effect.
The parameters of the decoration model are identified, the decoration model is matched and marked with the virtual three-dimensional model of the target scene, and the decoration model is positioned and placed in the virtual three-dimensional model according to the matching marks. To ensure that a decoration model chosen by the user fits the virtual three-dimensional model of the target scene well, the model's parameters are identified along with the spatial position where the user places it. For example, when a sofa is placed in the virtual three-dimensional model, the system judges that the decoration model is a sofa to be placed on the ground and identifies the corresponding ground position as the sofa's placement reference point. If a curtain is identified, it is fixed near the upper end of a window, and the specific area of the window in the target scene is acquired as the reference point for fixing the curtain.
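The placement-reference logic above (a sofa snaps to the floor, a curtain to the top of a window) can be sketched as a lookup from furniture category to a scene anchor. The category names, rule table, and default are illustrative assumptions, not the patent's actual rules:

```python
# Hypothetical rules mapping a furniture category to its scene anchor.
ANCHOR_RULES = {
    "sofa": "floor",
    "bed": "floor",
    "curtain": "window_top",
    "lamp": "ceiling",
}

def placement_reference(category, scene_anchors):
    """Return the reference point at which a decoration model of the
    given category should be placed, using the scene's detected
    specific areas (corners, windows, doors, ...)."""
    anchor = ANCHOR_RULES.get(category, "floor")  # assumed default: floor
    if anchor not in scene_anchors:
        raise KeyError(f"scene has no detected anchor '{anchor}'")
    return scene_anchors[anchor]
```

The returned point serves as the datum from which the model is positioned and then matched and marked against the virtual three-dimensional model.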
And rendering the decoration model and the virtual three-dimensional model of the target scene with the decoration model in real time to obtain a decoration three-dimensional model.
Step 105, matching and marking the decoration three-dimensional model with the virtual three-dimensional model of the target scene;
In this embodiment, it is detected whether the position information of several specific areas in the decoration three-dimensional model matches the position information of the corresponding specific areas of the virtual three-dimensional model of the target scene.
If they match, direction-mark binding is performed based on the position information of the several specific areas of the virtual three-dimensional model of the target scene, so as to ensure that the decoration three-dimensional model is consistent with the virtual three-dimensional model in direction and size.
If they do not match, local loop detection is performed on the decoration three-dimensional model and the virtual three-dimensional model of the target scene, and the parameters of the decoration three-dimensional model are adjusted in real time until it matches the position information of the several specific areas of the virtual three-dimensional model, after which direction-mark binding is performed. Local loop detection adjusts the unmatched positions of the decoration three-dimensional model, and the correction can be targeted at the position information of the specific areas and feature points, so that the decoration three-dimensional model can be matched to the virtual three-dimensional model of the target scene; correcting only the specific areas and feature points greatly reduces the number of corrections needed and improves the efficiency of matching and marking.
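The match test and the adjustment on mismatch can be sketched as a per-area distance check followed by a crude translation correction. The centroid alignment is a stand-in for the patent's local loop detection, whose details are not specified; the tolerance is an assumption:

```python
import math

def areas_match(deco_pts, scene_pts, tol=0.05):
    """True if every marked specific area of the decoration model lies
    within `tol` of the corresponding area in the scene model."""
    return all(math.dist(a, b) <= tol for a, b in zip(deco_pts, scene_pts))

def align_by_centroid(deco_pts, scene_pts):
    """Crude correction: translate the decoration-model points so the
    two centroids coincide (stand-in for local loop detection)."""
    n, dims = len(deco_pts), len(deco_pts[0])
    shift = tuple(
        sum(s[i] for s in scene_pts) / n - sum(d[i] for d in deco_pts) / n
        for i in range(dims)
    )
    return [tuple(p[i] + shift[i] for i in range(dims)) for p in deco_pts]
```

A decoration model whose specific areas are uniformly offset from the scene model fails the check, is translated into place, and then passes, after which direction-mark binding would proceed.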
And 106, overlaying the decoration three-dimensional model into a real target scene in real time through mixed reality.
Positioning information corresponding to the several specific areas in the real scene is acquired, the shooting track along which the spherical screen camera shot the target scene is acquired, and the decoration three-dimensional model is superimposed according to the positioning information and the shooting track. The decoration three-dimensional model is projected into the real target scene by a projection unit, so that it presents the decoration effect within the real house type and the user can experience the effect realistically. The projection unit and the spherical screen camera can be integrated in the same piece of equipment and placed in a room of the target scene; after the target scene is shot synchronously, the current target scene can be modeled rapidly, and the user can simultaneously select a decoration style or different decoration models on a smart device such as a mobile phone or computer to obtain the corresponding decoration three-dimensional model. While the positioning information of the several specific areas in the real scene is being shot and collected, the mixed-reality technology combines the shooting track to superimpose the decoration three-dimensional model in real time onto the corresponding room.
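Superimposing the decoration three-dimensional model according to the camera's positioning information amounts to a rigid transform from model coordinates into scene coordinates. For brevity, this sketch uses only a translation plus a rotation about the vertical axis (yaw), which is an assumption, not the patent's full pose model:

```python
import math

def overlay_points(model_pts, cam_pos, cam_yaw):
    """Transform 3-D decoration-model points into scene coordinates
    given a camera position (x, y, z) and heading `cam_yaw` (radians);
    rotation is applied about the vertical axis only."""
    c, s = math.cos(cam_yaw), math.sin(cam_yaw)
    return [
        (cam_pos[0] + c * x - s * y,
         cam_pos[1] + s * x + c * y,
         cam_pos[2] + z)
        for x, y, z in model_pts
    ]
```

With the camera pose read off the recorded shooting track at each instant, the same transform keeps the decoration model registered to the real room as the viewpoint moves.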
The user can view the effect of the decoration three-dimensional model superimposed on the real house layout through MR glasses; alternatively, a plurality of projection units can be arranged within the house to perform spatial stereoscopic projection, so that the user can view the superimposed mixed-reality effect with the naked eye.
Furthermore, when the user views the virtual decoration, user-selectable decoration models are presented synchronously in the picture, and the user's action gestures are located and collected. Through the user's action instructions, different decoration models are inserted into or deleted from the decoration three-dimensional model presented through mixed reality, and the background updates and re-renders the decoration three-dimensional model in real time upon receiving those instructions. The user can thus experience the house decoration effect realistically in the mixed-reality scene and adjust it in real time according to personal preference, giving a better user experience.
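The gesture-driven update loop could look like the following sketch; the gesture names and the render stub are hypothetical, standing in for the real recognizer and renderer:

```python
class DecorationScene:
    """Hypothetical scene holder: recognized gesture commands insert
    or delete decoration models and trigger a (stubbed) re-render."""

    def __init__(self):
        self.models = {}        # model_id -> decoration model data
        self.render_count = 0   # how many real-time re-renders ran

    def _render(self):
        self.render_count += 1  # stand-in for real-time re-rendering

    def apply_gesture(self, gesture, model_id, model=None):
        if gesture == "insert":
            self.models[model_id] = model
        elif gesture == "delete":
            self.models.pop(model_id, None)
        else:
            return  # unrecognized gesture: no scene update, no render
        self._render()
```

Each recognized gesture mutates the scene and immediately re-renders, which is the behaviour the paragraph above describes.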
The method can also be implemented through an APP installed on a mobile phone or computer, on which the user performs the method steps such as controlling the spherical screen camera to collect the video stream of the target scene. Further, the user can invite other users into the picture on the APP, so that different users share the mixed-reality picture of the decoration three-dimensional model superimposed on the real house layout, allowing family and friends to view the house together and discuss the decoration.
In this method, a virtual three-dimensional model of the target scene is obtained from the video stream of the target scene and combined with decoration data to obtain a decoration three-dimensional model, which is then superimposed in real time into the real target scene. The real target scene thus acquires the decoration effect the user requires, the mixed-reality superposition makes that effect more realistic, and the method helps the user better view and decorate a house.
As shown in fig. 2, a virtual decoration system based on mixed reality is provided, comprising:
a data acquisition unit 201, configured to acquire video streams at different positions of a target scene and obtain a plurality of image information of the target scene from the video streams; and to calculate depth information and point cloud information corresponding to the different positions from the plurality of image information and track the video streams, so as to realize ranging of the target scene;
a first model unit 202, configured to obtain distance information of the target scene based on the ranging and to construct a virtual three-dimensional model of the target scene from the distance information and the image information;
a second model unit 203, configured to obtain decoration data, and render in real time according to the virtual three-dimensional model of the target scene and the decoration data to obtain a decoration three-dimensional model;
a data output unit 204, configured to match the decoration three-dimensional model with a virtual three-dimensional model of the target scene; and overlaying the decoration three-dimensional model into a real target scene in real time through mixed reality.
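Wiring the four units of fig. 2 together might look like this minimal sketch, with each stage injected as a callable standing in for one unit (all names are illustrative):

```python
def run_pipeline(video_streams, decoration_data,
                 acquire, build_scene, build_decoration, overlay):
    """Data flow of fig. 2: acquisition -> first model -> second
    model -> output. Each stage is passed in as a callable so the
    sketch stays independent of any concrete implementation."""
    images, ranging = acquire(video_streams)                      # unit 201
    scene_model = build_scene(ranging, images)                    # unit 202
    deco_model = build_decoration(scene_model, decoration_data)   # unit 203
    return overlay(deco_model, scene_model)                       # unit 204
```

Usage with trivial stand-ins shows the intended hand-offs between units:

```python
result = run_pipeline(
    ["stream"], {"style": "modern"},
    acquire=lambda vs: (["img"], [1.0]),
    build_scene=lambda rng, imgs: {"rng": rng, "imgs": imgs},
    build_decoration=lambda scene, data: {"scene": scene, "data": data},
    overlay=lambda deco, scene: ("overlaid", deco["data"]["style"]))
```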
The mixed reality-based virtual decoration system is used to implement the mixed reality-based virtual decoration method described above; content common to both is not repeated here.
Fig. 3 illustrates an internal structure diagram of a mixed reality-based virtual decoration device in one embodiment. As shown in fig. 3, the device includes a processor, a memory, and a communication interface connected through a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and may also store a computer program which, when executed by the processor, causes the processor to implement the mixed reality-based virtual decoration method. The internal memory may likewise store a computer program which, when executed by the processor, causes the processor to perform the method. Those skilled in the art will appreciate that the structure shown in fig. 3 is merely a block diagram of the portion of the structure relevant to the present arrangement and does not limit the devices to which the present arrangement may be applied; a particular mixed reality-based virtual decoration device may include more or fewer components than shown, combine certain components, or arrange the components differently.
In one embodiment, the mixed reality-based virtual decoration method provided by the present application may be implemented as a computer program executable on a mixed reality-based virtual decoration device as shown in fig. 3. The memory of the device may store the program modules constituting the mixed reality-based virtual decoration system, such as the data acquisition unit 201, the first model unit 202, the second model unit 203, and the data output unit 204.
A mixed reality-based virtual decoration device comprises a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the mixed reality-based virtual decoration method.
In one embodiment, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, causes the processor to perform the steps of the mixed reality-based virtual decoration method described above.
It should be noted that the above-mentioned virtual decoration method based on mixed reality, virtual decoration system based on mixed reality, virtual decoration device based on mixed reality and computer readable storage medium belong to one general inventive concept, and the content in the embodiments of the virtual decoration method based on mixed reality, virtual decoration system based on mixed reality, virtual decoration device based on mixed reality and computer readable storage medium may be mutually applicable.
Those skilled in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing relevant hardware; the program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the above method embodiments. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include Read-Only Memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations are described; nevertheless, any combination of these technical features that involves no contradiction should be considered within the scope of this description.
The foregoing examples illustrate only a few embodiments of the application and are described in relative detail, but they are not to be construed as limiting the scope of the application. It should be noted that those skilled in the art can make several variations and modifications without departing from the concept of the application, all of which fall within its scope. Accordingly, the scope of protection of the present application shall be determined by the appended claims.

Claims (10)

1. A virtual decoration method based on mixed reality, comprising:
acquiring video streams of different positions of a target scene through a spherical screen camera, and acquiring a plurality of image information of the target scene according to the video streams;
calculating depth information corresponding to different positions by utilizing the plurality of image information, calculating point cloud information of the corresponding positions according to the depth information, and tracking a video stream of the current position according to the point cloud information so as to realize the ranging of a target scene;
obtaining distance information of a target scene based on the distance measurement, and constructing a virtual three-dimensional model of the target scene according to the distance information and the image information;
acquiring decoration data, and rendering in real time according to the virtual three-dimensional model of the target scene and the decoration data to obtain a decoration three-dimensional model;
matching and marking the decoration three-dimensional model and the virtual three-dimensional model of the target scene;
and overlaying the decoration three-dimensional model into a real target scene in real time through mixed reality.
2. The mixed reality-based virtual decoration method of claim 1, wherein calculating depth information corresponding to different positions using the plurality of image information, obtaining point cloud information of the corresponding positions according to the depth information, tracking a video stream of a current position according to the point cloud information to achieve ranging of a target scene, comprising:
wherein the plurality of image information have overlapping characteristic areas, at least a first position and a second position are obtained from the characteristic areas, coordinates of the first position and the second position in different image information are calculated, pixel coordinate information of the first position and the second position is marked by the coordinates, and depth information of different positions in the image information is calculated in combination with grayscale information and parallax information of the characteristic areas;
and acquiring point cloud information of a corresponding position according to the depth information, tracking a video stream of the current position according to the point cloud information, extracting the point cloud information and the video stream of a specific area in the target scene, and performing ranging of the target scene according to the point cloud information and the video stream of the specific area.
3. The method of claim 2, wherein the obtaining distance information of the target scene based on the ranging, and constructing a virtual three-dimensional model of the target scene based on the distance information and the image information, comprises:
and establishing a digital model according to the distance information, carrying out loop detection on the digital model, and mapping the image information to the digital model to obtain a virtual three-dimensional model of the target scene.
4. The virtual decoration method based on mixed reality according to claim 3, wherein the obtaining decoration data, and rendering in real time according to the virtual three-dimensional model of the target scene and the decoration data, obtains a decoration three-dimensional model, comprises:
the decoration data comprise two-dimensional decoration design drawings, the two-dimensional decoration design drawings and the virtual three-dimensional model of the target scene are matched and marked according to the specific area, so that parameters of the two-dimensional decoration design drawings are adjusted to correspond to parameters of the virtual three-dimensional model of the target scene in real time;
identifying a characteristic object in a two-dimensional decoration design drawing, and digitally modeling the two-dimensional decoration design drawing and the characteristic object according to parameters of a virtual three-dimensional model of the target scene;
and rendering the two-dimensional decoration design drawing and the characteristic object subjected to digital modeling in real time according to the decoration style to obtain a decoration three-dimensional model.
5. The virtual decoration method based on mixed reality according to claim 3, wherein the obtaining decoration data, and rendering in real time according to the virtual three-dimensional model of the target scene and the decoration data, obtains a decoration three-dimensional model, comprises:
the decoration data comprise various decoration models, and a user self-defines the decoration models to be placed in a virtual three-dimensional model of the target scene;
identifying parameters of the decoration model, carrying out matching marking on the decoration model and the virtual three-dimensional model of the target scene, and positioning and placing the decoration model in the virtual three-dimensional model of the target scene according to the matching marking;
and rendering the decoration model and the virtual three-dimensional model of the target scene with the decoration model in real time to obtain a decoration three-dimensional model.
6. The mixed reality-based virtual decoration method of claim 2, wherein matching the decorated three-dimensional model to the virtual three-dimensional model of the target scene comprises:
detecting whether the position information of a plurality of specific areas in the decoration three-dimensional model is matched with the position information of the specific areas of the virtual three-dimensional model of the target scene;
if they match, performing direction-mark binding based on the position information of the plurality of specific areas of the virtual three-dimensional model of the target scene;
if they do not match, performing local loop detection on the decoration three-dimensional model and the virtual three-dimensional model of the target scene, and adjusting parameters of the decoration three-dimensional model in real time until the decoration three-dimensional model matches the position information of the plurality of specific areas of the virtual three-dimensional model of the target scene, and then performing direction-mark binding.
7. The mixed reality-based virtual decoration method of claim 6, wherein overlaying the decorated three-dimensional model in real time into a real target scene via mixed reality comprises:
acquiring positioning information corresponding to the plurality of specific areas in the real scene, acquiring the shooting track along which the spherical screen camera shoots the target scene, and superimposing the decoration three-dimensional model according to the positioning information and the shooting track.
8. A mixed reality-based virtual decoration system, comprising:
a data acquisition unit, configured to acquire video streams at different positions of a target scene and obtain a plurality of image information of the target scene from the video streams; and to calculate depth information and point cloud information corresponding to the different positions from the plurality of image information and track the video streams, so as to realize ranging of the target scene;
a first model unit, configured to obtain distance information of the target scene based on the ranging and to construct a virtual three-dimensional model of the target scene from the distance information and the image information;
a second model unit, configured to acquire decoration data and render in real time according to the virtual three-dimensional model of the target scene and the decoration data to obtain a decoration three-dimensional model;
a data output unit, configured to match and mark the decoration three-dimensional model with the virtual three-dimensional model of the target scene, and to superimpose the decoration three-dimensional model in real time into the real target scene through mixed reality.
9. A mixed reality-based virtual decoration device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the mixed reality-based virtual decoration method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the mixed reality-based virtual decoration method according to any one of claims 1 to 7.
CN202110308327.5A 2021-03-23 2021-03-23 Virtual decoration method, system, equipment and medium based on mixed reality Active CN112907751B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110308327.5A CN112907751B (en) 2021-03-23 2021-03-23 Virtual decoration method, system, equipment and medium based on mixed reality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110308327.5A CN112907751B (en) 2021-03-23 2021-03-23 Virtual decoration method, system, equipment and medium based on mixed reality

Publications (2)

Publication Number Publication Date
CN112907751A CN112907751A (en) 2021-06-04
CN112907751B true CN112907751B (en) 2023-11-24

Family

ID=76106010

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110308327.5A Active CN112907751B (en) 2021-03-23 2021-03-23 Virtual decoration method, system, equipment and medium based on mixed reality

Country Status (1)

Country Link
CN (1) CN112907751B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113256801A (en) * 2021-06-11 2021-08-13 山东能之源核电科技有限公司 Three-dimensional reverse model visualization system for radiation field
CN114049439A (en) * 2021-11-12 2022-02-15 广东鉴面智能科技有限公司 AR technology-based equipment simulation installation method and system
CN114898354A (en) * 2022-03-24 2022-08-12 中德(珠海)人工智能研究院有限公司 Measuring method and device based on three-dimensional model, server and readable storage medium
CN117240831A (en) * 2022-06-08 2023-12-15 中兴通讯股份有限公司 Three-dimensional model transmission method and device, storage medium and program product thereof
CN115600267B (en) * 2022-12-14 2023-04-18 深圳奥雅设计股份有限公司 Computer vision analysis method and system for urban public space design
CN116243795B (en) * 2023-02-20 2024-06-21 南方科技大学 Mixed reality-based object grabbing method and mixed reality equipment
CN116091708B (en) * 2023-04-11 2023-08-01 深圳朗生整装科技有限公司 Decoration modeling method and system based on big data


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108022306A (en) * 2017-12-30 2018-05-11 华自科技股份有限公司 Scene recognition method, device, storage medium and equipment based on augmented reality
CN108648276A (en) * 2018-05-17 2018-10-12 上海宝冶集团有限公司 A kind of construction and decoration design method, device, equipment and mixed reality equipment
WO2019242262A1 (en) * 2018-06-19 2019-12-26 亮风台(上海)信息科技有限公司 Augmented reality-based remote guidance method and device, terminal, and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A real-time depth-of-field rendering algorithm based on luminance and depth information; Zhao Dongyang; Chen Yimin; Li Qiming; Liu Yan; Huang Chen; Xu Sheng; Zhou Mingzhu; Journal of System Simulation (Issue 08); full text *

Also Published As

Publication number Publication date
CN112907751A (en) 2021-06-04

Similar Documents

Publication Publication Date Title
CN112907751B (en) Virtual decoration method, system, equipment and medium based on mixed reality
US11995902B2 (en) Facial signature methods, systems and software
US11721067B2 (en) System and method for virtual modeling of indoor scenes from imagery
KR102146398B1 (en) Three dimensional content producing apparatus and three dimensional content producing method thereof
CN111768496B (en) Image processing method, device, server and computer readable storage medium
CN109615703B (en) Augmented reality image display method, device and equipment
EP3798801A1 (en) Image processing method and apparatus, storage medium, and computer device
EP3242275B1 (en) Using photo collections for three dimensional modeling
CN104835138B (en) Make foundation drawing picture and Aerial Images alignment
CN107484428B (en) Method for displaying objects
CN117115256A (en) image processing system
JP5015126B2 (en) Image generation method, image authentication method, image generation apparatus, image authentication apparatus, program, and recording medium
CN106981078B (en) Sight line correction method and device, intelligent conference terminal and storage medium
KR102187143B1 (en) Three dimensional content producing apparatus and three dimensional content producing method thereof
CN113689578A (en) Human body data set generation method and device
CN113298708A (en) Three-dimensional house type generation method, device and equipment
KR102118937B1 (en) Apparatus for Service of 3D Data and Driving Method Thereof, and Computer Readable Recording Medium
CN116168076A (en) Image processing method, device, equipment and storage medium
JP2002032741A (en) System and method for three-dimensional image generation and program providing medium
JP2002032743A (en) System and method for three-dimensional image generation and program providing medium
CN108269278B (en) Scene modeling method and device
US11620765B2 (en) Automatic detection of a calibration object for modifying image parameters
CN106611440B (en) Method and device for extracting live-action picture
US20230410451A1 (en) Augmented reality implement apparatus and method using mobile scanned object model scaling
WO2024099319A1 (en) Virtual video image generation method and apparatus, and device and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant