CN116012564A - Equipment and method for intelligent fusion of three-dimensional model and live-action photo - Google Patents

Equipment and method for intelligent fusion of a three-dimensional model and live-action photos

Info

Publication number
CN116012564A
CN116012564A (application CN202310060142.6A)
Authority
CN
China
Prior art keywords
three-dimensional model, live-action photo, digital
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310060142.6A
Other languages
Chinese (zh)
Other versions
CN116012564B (en)
Inventor
胡家杰
常炜
奚优芬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo Aitengpai Digital Technology Co ltd
Original Assignee
Ningbo Aitengpai Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo Aitengpai Intelligent Technology Co ltd
Priority to CN202310060142.6A
Publication of CN116012564A
Application granted
Publication of CN116012564B
Legal status: Active

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a system and method for intelligently fusing a three-dimensional model with a live-action photo, comprising the following steps: transmitting a live-action photo that is to be fused with the digitized three-dimensional model of a real object to the intelligent fusion system, which analyzes, calculates and stores the photo's related information; presenting the live-action photo on the system's processing interface; retrieving a suitable three-dimensional model from resources inside or outside the system; transferring the three-dimensional model onto the processing interface displaying the live-action photo; and rotating, moving and scaling the three-dimensional model as required, while the system intelligently displays the size, pose and perspective relationship of the real object represented by the model within the real scene. The invention creatively provides a system and method for intelligent fusion of three-dimensional models and live-action photos, so that ordinary people without professional experience can fuse a three-dimensional model into a live-action photo simply, quickly and accurately, with natural, realistic and seamless results.

Description

Equipment and method for intelligent fusion of three-dimensional model and live-action photo
Technical Field
The invention belongs to the field of rendering synthesis and production, and in particular relates to techniques for integrating the digitized three-dimensional model of a real object into a live-action photo in a simple, natural, fitting and intelligent way. It can be widely applied in industries such as architectural design, home-decoration design, environmental design and exhibition design; it is a good tool for designers to express ideas freely and communicate fully with clients, and a good helper for ordinary people without professional training who want to try DIY design.
Background
The fusion of a digitized three-dimensional model of a real object with a live-action photo of a scene is already widely used in architectural design, home-decoration design, exhibition design and similar fields. The currently popular approach is to use industrial design software such as Rhino, 3DMax or solidThinking. With the help of such software, an excellent architectural or interior rendering can present, as if real, a building or decoration effect that does not yet exist but could be realized in the future. However, to use this software to fuse the digitized three-dimensional model of an object with the live-action photo of a scene, the operator must be professionally trained and master the software before an ideal rendering can be drawn. Even so, a highly skilled professional spends a great deal of time and effort every time such a fusion is completed. Frustratingly, renderings produced with great care often still look awkward when finished; not because the producer lacks skill, but because, before fusing the three-dimensional model with the live-action photo, they must first calibrate coordinates on the photo by eye and analyze the spatial perspective relationship it depicts. Since human eyesight is far less accurate than a measuring instrument, a small human error eventually grows into a large deviation that spoils the viewing effect of the rendering.
In general, the method rendering producers use to calibrate coordinates and analyze perspective is to find calibration references in the live-action photo, such as the corner line where two adjacent walls meet, or the contour lines of indoor furniture, according to the building structure or the objects in the photo. Many times, however, these references are misleading: it is hard to notice that the wall used to calibrate coordinates leans slightly, or that the furniture stands on uneven ground. Worse still, because the photographer lacked experience, many photos offer no clear and reliable reference lines at all, so the fusion of the digitized three-dimensional model with the live-action photo is built on an incorrect basis from the very beginning.
A further problem that is easily overlooked is that almost all photos carry some lens distortion. The direct consequence is that, when a three-dimensional model is fused into a live-action photo, if models at different positions in the photo cannot adjust their pose, size and perspective according to the photo's distortion pattern, they will not sit properly on the photo, and the final fused result cannot reach an ideal level.
Facing these problems, the applicant often wondered: since fusing the digitized three-dimensional models of real objects with live-action photos is something society needs, why is there no intelligent software that makes it simpler and easier? Since design is an important way to beautify life, why not let ordinary people without professional training integrate the digitized three-dimensional model of a real object into a live-action photo easily, quickly and accurately, with natural and realistic results?
In recent years, with the development of the construction, home-decoration and exhibition industries, and the popularization of intelligent electronic devices such as computers, notebooks, tablets and mobile phones, people have become increasingly enthusiastic about DIY design or cooperating with designers on building and home-decoration projects, and the need for a simple, easy-to-learn, convenient and practical method of fusing digitized three-dimensional models with live-action photos has become more urgent. For example, when a young couple has just taken delivery of a new apartment, they might simply take some photos of it and go home to discuss the furnishing with family and friends; or a factory owner dissatisfied with the layout of part of the main workshop could take a few photos of that area and study a new arrangement together with his staff or the equipment supplier. In such cases, an intelligent system that fuses digitized three-dimensional models with live-action photos plays a great role: with it, one can drag the digitized three-dimensional model of a suitable piece of furniture or equipment into the live-action photo, place it freely, and see the effect of the new arrangement. Even if the final apartment design or workshop layout is done by a professional designer, intelligent software that reflects the result in real time helps the designer communicate better.
In fact, quite a few professionals in the art have, like the applicant, wanted to solve similar problems. For example, the invention patent "A three-dimensional model and photo fusion method" (CN107292954B) discloses a technology for fusing the digitized three-dimensional model of a building with a photo of the building scene, but that patent requires recording the camera parameters of the environment photo when the live-action photo is taken, and also requires setting virtual camera parameters in the three-dimensional scene when outputting the image of the three-dimensional model, including "sequentially selecting, for each shooting place M, a photo Pc centered on the position where the construction project is to be built; setting the parameters of a virtual camera in the three-dimensional scene using the camera parameters corresponding to the photo Pc, including X, Y, Z, camera azimuth Ori, camera pitch angle Pitch, camera roll angle Roll and camera focal length F; setting the three-dimensional scene graph parameters so that the length and width of the image are consistent with the size of the photo Pc…". So many complex manual operations leave the applicant in deep doubt: can this method achieve accurate fusion of a three-dimensional model and a live-action photo at all? Even if it can, the operation is very unfriendly. Not to mention an ordinary person; even a professional could not necessarily achieve a perfect fusion of the digitized three-dimensional model of a real object with a live-action photo by this method, and it certainly does not allow an operator, once the model is placed on the photo, to move and rotate it at will while its size, pose and perspective relationship are adjusted automatically according to its position.
The invention patent application CN106846446A discloses a method for fusing a real environment with an architectural rendering, but according to that method a worker must photograph the target area from fixed points to obtain a number of original images while recording the initial coordinates of the shooting points; stitch the original images into a panorama of the target area; select one or more panoramas, obtain the camera parameters of each image, select a certain number of control points in the images, record their pixel coordinates, measure the corresponding space coordinates of the control points in the field, and solve the actual coordinates of the shooting points and the actual shooting angles by single-image space resection… Obviously, this is a set of procedures that an ordinary person without professional training cannot operate, and the achievable effect is far from what the applicant wants.
The applicant studied the prior art intensively and found that, although many people in the field are interested in fusing digitized three-dimensional models of real objects with live-action photos, no one has so far really posed the applicant's question, namely: how can an ordinary person without professional training fuse the digitized three-dimensional model of a real object into a live-action photo simply, quickly and accurately? Naturally, no one has proposed a solution to it either.
The AR technology that has grown popular in recent years seems closer to the problem the applicant wants to solve, for example ARKit from Apple and ARCore from Google, which help mobile phone users add the digitized three-dimensional model of a real object to the scene captured by the phone using augmented reality. But the AR applications represented by these two frameworks must be demonstrated in open spaces, because they cannot prevent the imported model from occluding or colliding with objects in the real scene. A demonstration that looks great in a wide space becomes difficult in a coffee shop or a classroom, so such applications cannot achieve the ideal, seamless blending of a digitized three-dimensional model into a live-action photo that the applicant envisages, nor can they import several three-dimensional models into a photo at once and move them freely while adjusting their size, pose and perspective in real time according to their positions in the photo.
To avoid the difficulties and errors encountered by other technicians in the field and truly solve the problem, the applicant kept studying and testing, and finally, through a unique technical idea, a distinctive algorithm system and a series of effective techniques, creatively achieved accurate pre-analysis and pre-processing of the live-action photo by a computer software system. Without manual assistance, and without knowing the working state of the camera in advance, the system can accurately analyze, calculate and obtain the key information of the live-action photo, including: the camera pose and shooting parameters at the time the photo was taken, the spatial dimensions of the photographed scene, the semantic features of each object in the photo, and the spatial relationships between the objects. Moreover, while analyzing the photo, the software system can intelligently recognize each object in it and intelligently segment each recognized object from the others.
Based on these research results and other related technologies, the applicant realized the inventive concept: an ordinary person without professional training uploads one or more live-action photos of a scene, to be fused with the digitized three-dimensional model of a real object, to the intelligent fusion system, and the photos are presented on a fusion processing interface; the digitized three-dimensional models of one or more objects to be fused into the photo are retrieved from the system's model library and sent to the central processing unit; after a dialog box showing the models pops up on the fusion interface, the models are sent, or directly dragged, onto the photo; the models can then be moved or rotated on the photo at will, and on each move or rotation the central processing unit analyzes, calculates and converts the three-dimensional model to a two-dimensional projection in real time, according to the actual size of the real object the model represents, the model's position in the photo, and the related information retrieved from the information storage unit, so that model and photo fuse naturally, showing the size, pose and perspective relationship of the represented object within the real scene. This invention is certainly a major breakthrough in the art.
Disclosure of Invention
To meet people's growing material and spiritual needs and let more people enjoy the convenience and fun of design, the invention provides a system and method for intelligently fusing a three-dimensional model with a live-action photo, allowing ordinary people without professional training to fuse the digitized three-dimensional model of a real object into a live-action photo simply, quickly and accurately. The system for intelligent fusion of a digitized three-dimensional model and a live-action photo comprises:
an intelligent image analysis unit, an information storage unit, a central processing unit, a digitized three-dimensional model database, and an information input/output unit. Specifically:
the intelligent image analysis unit is a software system developed on the basis of deep learning, computational geometry and related disciplines. Its main task is to analyze, calculate and obtain all information related to the live-action photo, including: the camera pose, shooting parameters and photo distortion parameters at the time of shooting, the spatial dimensions of the photographed scene, the semantic features of each object in the photo, and the spatial relationships between the objects. Its functions further include intelligently recognizing each object in the live-action photo and segmenting each recognized object from the others;
the information storage unit is a digital information storage module whose main task is to store, for later use, the information analyzed and processed by the intelligent image analysis unit and the central processing unit;
the central processing unit is specially programmed software whose functions include fusing the digitized three-dimensional model of a real object into the live-action photo of a scene with high fidelity, achieving the effect of the object having been photographed in that scene. It also provides a processing interface for the fusion, which displays the digitized three-dimensional models and live-action photos taking part in the fusion as well as the fused rendering. The central processing unit and the processing interface support various human-machine interactions: instructions are issued through keyboard, mouse or touch-screen controls to accomplish tasks including searching, selecting, inspecting and measuring the digitized three-dimensional model of a real object; fusing the model with the live-action photo; and reviewing and appreciating the resulting rendering;
the digitized three-dimensional model database is a digital storage module dedicated to the candidate models of real objects to be fused with live-action photos. It additionally builds an archive for each model to collect and store its background information, including: the manufacturer of the physical object, the provider of its three-dimensional model, and the object's size, purpose and technical specifications;
the information input/output unit is the whole system's interface for exchanging information with the outside. Its functions include inputting information related to the fusion, such as digitized three-dimensional models of real objects, live-action photos and background information of the objects, and outputting information related to the fusion, including the fused photos, records of the fusion process, animated demonstrations, and the models and photos that took part in the fusion. The input/output unit can be connected to communication devices carrying such information, including mobile phones, notebook computers, desktop computers, tablets, inkjet or laser printers, 3D printers and optical disk recorders.
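Purely as an illustration of how these five units might be organized in code (every class, field and method name below is hypothetical, not taken from the patent's implementation), a minimal Python sketch:

```python
from dataclasses import dataclass

@dataclass
class PhotoAnalysis:
    """Output of the intelligent image analysis unit (illustrative fields)."""
    camera_pose: object        # rotation and translation at shooting time
    intrinsics: object         # focal length, principal point, distortion
    scene_dimensions: object   # estimated size of the photographed space
    objects: list              # per-object semantic label and segmentation mask

class IntelligentFusionSystem:
    def __init__(self, model_db, storage, io_unit):
        self.model_db = model_db   # digitized three-dimensional model database
        self.storage = storage     # information storage unit
        self.io_unit = io_unit     # information input/output unit

    def analyze(self, photo) -> PhotoAnalysis:
        """Intelligent image analysis unit: recover pose, intrinsics and
        distortion, estimate scene dimensions, segment every object."""
        raise NotImplementedError

    def fuse(self, photo, analysis, model, placement):
        """Central processing unit: project `model` into `photo` at
        `placement`, honoring size, pose and perspective."""
        raise NotImplementedError
```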
The method for intelligent fusion of a three-dimensional model and a live-action photo specifically comprises the following steps:
one or more live-action photos of a scene, to be fused with the digitized three-dimensional model of a real object, are transmitted to the intelligent fusion system, which intelligently analyzes, calculates and stores the related information, including: the camera pose, camera parameters and photo distortion parameters at the time of shooting, the spatial dimensions of the photographed scene, the semantic features of each object in the photo, and the spatial relationships between the objects; while analyzing the photo, the intelligent image analysis unit also intelligently recognizes each object in it and segments each recognized object from the others.
The analysis and calculation results are stored in the system's information storage unit for later use.
The live-action photo is presented on the system's processing interface.
If the operator wants to know the spatial dimensions between objects in the live-action photo, or between an object and the site, a starting point can be freely chosen and the distance measured.
One or more digitized three-dimensional models of suitable objects are retrieved and called up from resources inside or outside the system.
The digitized three-dimensional model is transferred to the processing interface where the live-action photo is presented.
If the operator wants to know the spatial dimensions between the digitized three-dimensional model, once placed in the photo, and the objects there (including objects represented by other models), a starting point can be freely chosen and the distance measured.
The digitized three-dimensional model is rotated, moved and scaled as required and arranged at a suitable position in the live-action photo. Meanwhile, according to the photo's related information and the model's position in it, the intelligent system intelligently analyzes, calculates and performs the conversion from the three-dimensional model to a two-dimensional projection, so that the model and the photo fuse naturally and the size, pose and perspective relationship of the represented real object in the scene are shown.
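At its geometric core, the conversion from three-dimensional model to two-dimensional projection described above is a standard pinhole-camera projection. The following NumPy sketch is illustrative only, not the patent's actual code; K, R and t stand for the intrinsics and camera pose assumed to have been recovered by the intelligent image analysis unit, and lens distortion is omitted:

```python
import numpy as np

def project_model(points_3d, K, R, t):
    """Project Nx3 world-space model vertices to pixel coordinates with
    the pinhole model x = K [R | t] X (lens distortion omitted)."""
    X_cam = points_3d @ R.T + t        # world frame -> camera frame
    x = X_cam @ K.T                    # camera frame -> image plane
    return x[:, :2] / x[:, 2:3]        # perspective divide by depth

# Example: the 8 corners of a 1 m cube placed 5 m in front of the camera.
K = np.array([[1000.0, 0.0, 640.0],    # assumed intrinsics
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 5.0])
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                dtype=float)
print(project_model(cube, K, R, t))    # pixel positions of the corners
```

Moving or rotating the model on the photo amounts to updating its world transform and re-running this projection, which is why size, pose and perspective stay consistent at every position.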
When the digitized three-dimensional model of one or more real objects is transferred onto the live-action photo, the system can intelligently analyze and calculate the distance between it and the entities originally present in the photo. These distances can be displayed on the processing interface as required.
The entities include: roofs, walls, ground, water surfaces and other physical objects.
As required, the operator can freely choose a starting point and measure distances, to learn the spatial dimensions between objects (including those represented by three-dimensional models) in the rendering after the model has been preliminarily fused with the photo.
If the system detects that a digitized three-dimensional model transferred onto the photo touches or intrudes into an entity originally in the photo, it gives a prompt in real time.
If, when fused into the live-action photo, the digitized three-dimensional model should be partly inserted behind objects in the photo, the central processing unit intelligently identifies the parts of those objects that would occlude the model in the combined virtual-real scene as foreground. As required, the central processing unit segments this foreground and overlays it on the two-dimensional projection of the model, so that the model is fused into the photo with high fidelity.
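The occlusion handling just described amounts to compositing the model's rendering into the photo and then re-overlaying the segmented foreground. A minimal sketch, assuming the rendered model image, its alpha matte and the photo's foreground mask are already available (all names illustrative):

```python
import numpy as np

def composite_with_occlusion(photo, model_render, model_alpha, fg_mask):
    """Paste the rendered model into the photo, then re-cover it with the
    photo's segmented foreground so occluding objects stay in front.
    All inputs are float arrays in [0, 1]; the masks are HxWx1."""
    fused = model_alpha * model_render + (1.0 - model_alpha) * photo
    return fg_mask * photo + (1.0 - fg_mask) * fused
```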
When the digitized three-dimensional model is dragged between two objects originally in the photo, the central processing unit intelligently calculates whether the gap between them can hold the real object the model represents. If the gap is insufficient, it raises an alarm so that the fused photo does not contradict reality; if the gap is too large, it issues a reminder.
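The gap check reduces to comparing the measured clearance between the two objects with the known width of the real object the model represents. A trivial sketch; the slack threshold is an assumption, since the patent does not specify one:

```python
def check_gap_fit(gap_width_mm, object_width_mm, slack_mm=100):
    """Flag gaps that are too small or suspiciously large for the object."""
    margin = gap_width_mm - object_width_mm
    if margin < 0:
        return f"alarm: object is {-margin} mm too wide for the gap"
    if margin > slack_mm:
        return f"reminder: gap leaves a large margin of {margin} mm"
    return "ok"
```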
If the digitized three-dimensional model overlaps an object in the photo when fused, the central processing unit can intelligently identify that object and, as required, remove it to leave room for the model, so that the model is fused into the photo with high fidelity.
If the digitized three-dimensional model differs slightly from the scene in hue, brightness or similar attributes, the central processing unit can intelligently adjust the model's hue and brightness according to those of the photo, so that the model blends into the photo more naturally.
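The patent does not disclose how this adjustment is computed; a classical stand-in is channel-wise color-statistics transfer in Lab space (Reinhard et al.), sketched below with OpenCV:

```python
import cv2
import numpy as np

def match_tone(model_region_bgr, photo_bgr):
    """Shift the rendered model's color statistics toward the photo's,
    channel by channel in Lab space (Reinhard-style color transfer)."""
    src = cv2.cvtColor(model_region_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    ref = cv2.cvtColor(photo_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    for c in range(3):
        s_mean, s_std = src[..., c].mean(), src[..., c].std() + 1e-6
        r_mean, r_std = ref[..., c].mean(), ref[..., c].std()
        src[..., c] = (src[..., c] - s_mean) * (r_std / s_std) + r_mean
    return cv2.cvtColor(np.clip(src, 0, 255).astype(np.uint8),
                        cv2.COLOR_LAB2BGR)
```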
If the user has not decided what to put in a certain part of the scene, that part can be outlined as required with a tool frame provided by the system, which then intelligently searches the adapted digitized three-dimensional models of real objects it provides and calls up a suitable one.
If several live-action photos of the same scene, shot from different angles, are input into the system, then once the digitized three-dimensional models of real objects have been arranged on one photo, the other photos can be fused automatically following the arrangement logic of the first, showing the fusion effect from the different shooting angles of the same scene.
To achieve the functions above, the system uses software specially developed and programmed for this application, which draws on related techniques in artificial intelligence and machine vision, including:
(1) deep learning, computer vision and computational 3D geometry, whose roles include:
identifying the objects in the live-action photo and understanding the spatial relationships between them;
analyzing and determining objects of fixed, known size in the photo;
reconstructing a three-dimensional model of an object in the photo from one or more live-action photos;
calculating, from one or more live-action photos, the perspective relationship between the photo and the three-dimensional world containing the objects in it;
for any two points in the live-action photo, finding their corresponding points in the three-dimensional world using the camera pose recovered for the shot, comparing them against a reference object of fixed size, and calculating the distance between the two points (see the sketch after this list);
intelligently analyzing the degree of deformation of contour lines in the photo;
using deep-learning-based image restoration (deep image inpainting): a deep neural network is trained on the image features around the area to be restored and generates a local restoration patch that fuses naturally with the whole image;
using deep learning to establish the connection between low-definition and high-definition photos of the same object, helping blurred photos become clear, i.e. super-resolution image generation (super resolution).
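One way the two-point distance measurement above could be realized (a sketch under stated assumptions, not the patent's algorithm): back-project each pixel through the recovered camera and intersect the viewing ray with an assumed ground plane z = 0, then take the Euclidean distance; the fixed-size reference object is assumed to have already fixed the metric scale of K, R and t:

```python
import numpy as np

def pixel_to_ground(u, v, K, R, t):
    """Back-project pixel (u, v) onto the world ground plane z = 0,
    given intrinsics K and camera pose (R, t) with x_cam = R X + t."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray, camera frame
    ray_w = R.T @ ray_cam                               # ray in world frame
    origin = -R.T @ t                                   # camera centre in world frame
    s = -origin[2] / ray_w[2]                           # intersect with z = 0
    return origin + s * ray_w

def ground_distance(p1, p2, K, R, t):
    """Metric distance between two image points lying on the ground."""
    a = pixel_to_ground(*p1, K, R, t)
    b = pixel_to_ground(*p2, K, R, t)
    return float(np.linalg.norm(a - b))
```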
(2) a matting (segmentation) algorithm, whose roles include:
separating out each object in the live-action photo;
using the position of the three-dimensional model's projection in the live-action photo to judge which of the two, an object in the photo or the projected model, occludes the other;
intelligently identifying the objects in the live-action photo and finding the outer contour of each object;
combining the distance-measurement method above to calculate the maximum clearance of each outer contour in every direction (a matting sketch follows this list).
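A minimal matting sketch for these roles, using OpenCV's GrabCut as a stand-in for whatever segmentation network the system actually uses; the rectangle prompt is assumed to come from the object recognizer:

```python
import cv2
import numpy as np

def cut_out_object(photo_bgr, rect):
    """Separate the object inside rect = (x, y, w, h) from the photo with
    GrabCut, then recover its outer contour for clearance measurements."""
    mask = np.zeros(photo_bgr.shape[:2], np.uint8)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(photo_bgr, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                  255, 0).astype(np.uint8)
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    outer = max(contours, key=cv2.contourArea) if contours else None
    return fg, outer    # binary matte and the object's outer contour
```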
(3) image generation algorithms, including:
a generative adversarial network (GAN) or a diffusion model, which takes the features of the live-action photo as input and iteratively adjusts the image features of the three-dimensional model's projection until the two are fused to the point of being indistinguishable in hue and brightness;
automatically applying the colors, lines and artistic style of one photo to another, achieving the effect of style transfer (sketched below).
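As an illustration of the style-transfer item, a compact sketch of Gatys-style optimization against a pretrained VGG-19 (PyTorch and torchvision assumed; the layer indices and weights are illustrative choices, and this is not the patent's GAN or diffusion pipeline):

```python
import torch
import torch.nn.functional as F
from torchvision import models

def gram(feat):
    """Gram matrix of a 1xCxHxW feature map, used as a style statistic."""
    _, c, h, w = feat.shape
    f = feat.view(c, h * w)
    return (f @ f.t()) / (c * h * w)

def style_transfer(content, style, steps=300, style_weight=1e6):
    """Optimize a copy of `content` (1x3xHxW, ImageNet-normalized) so its
    VGG feature statistics match `style`, after Gatys et al. (2016)."""
    vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()
    for p in vgg.parameters():
        p.requires_grad_(False)
    layers = {1, 6, 11, 20, 29}           # ReLU layers tapped for the losses

    def feats(x):
        out = []
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i in layers:
                out.append(x)
        return out

    with torch.no_grad():
        c_feats = feats(content)
        s_grams = [gram(f) for f in feats(style)]
    img = content.clone().requires_grad_(True)
    opt = torch.optim.Adam([img], lr=0.02)
    for _ in range(steps):
        opt.zero_grad()
        fs = feats(img)
        loss = F.mse_loss(fs[2], c_feats[2])              # content term
        loss = loss + style_weight * sum(
            F.mse_loss(gram(f), g) for f, g in zip(fs, s_grams))
        loss.backward()
        opt.step()
    return img.detach()
```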
(4) image smoothing methods (image smoothing), whose effects include evening out uneven light, color and tone, and reducing image noise; the methods used include:
a. interpolation;
b. linear smoothing;
c. convolution (all three sketched below).
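A sketch of the three listed methods with OpenCV; kernel sizes and factors are illustrative:

```python
import cv2

def smooth_by_convolution(image_bgr, ksize=5, sigma=1.5):
    """c. convolution: a Gaussian kernel evens out tonal jumps and noise."""
    return cv2.GaussianBlur(image_bgr, (ksize, ksize), sigma)

def smooth_linear(image_bgr, ksize=5):
    """b. linear smoothing: plain box (mean) filter."""
    return cv2.blur(image_bgr, (ksize, ksize))

def resample_by_interpolation(image_bgr, factor=2.0):
    """a. interpolation: bilinear resampling, e.g. before blending."""
    return cv2.resize(image_bgr, None, fx=factor, fy=factor,
                      interpolation=cv2.INTER_LINEAR)
```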
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. In the drawings:
FIG. 1 shows a flow chart of the intelligent fusion of a digitized three-dimensional model of a real object with a live-action photo according to the invention;
FIG. 2 shows a schematic diagram of the overall operation of the system of the invention for intelligent fusion of digitized three-dimensional models and live-action photos, in which the rectangular modules denote the constituent units of the intelligent fusion system, the circular modules denote the system's extended functions and external resources, and the oval modules denote the information transferred and exchanged between the constituent units.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
As is well known, the biggest difficulty in perfectly fusing the digitized three-dimensional model of a real object into a live-action photo is that the model must precisely obey the geometric logic of the real scene the photo depicts; otherwise, even if picture-processing tools such as PS blend the model with the photo in color, tone and atmosphere, its incongruous presence in the photo is still plainly exposed. Clearly, manually adjusting the size, pose and perspective of the model until it satisfies the geometric logic of the photo is very difficult: even a trained designer can hardly judge the spatial perspective of a live-action photo with the naked eye, let alone estimate the distortion parameters inherent in most photos. For this reason, professional designers often resort to PS or other disguises so that a single model looks fused into the photo, but when several models must be scattered across one photo, telltale flaws are generally unavoidable. The invention therefore proposes the innovative idea of analyzing and calculating the photo's camera pose, camera parameters, distortion parameters and other information by intelligent image analysis. Its biggest difference from earlier similar technologies is that this analysis depends neither on presupplied parameters such as camera coordinates nor on the positioning and orientation information from the sensors carried by a camera or mobile phone, so an independent software system can complete the fusion of model and photo entirely offline, away from the site, and long after the shot. This gives the production of fused renderings the greatest flexibility, freedom and simplicity, so that ordinary people without professional training can fuse the digitized three-dimensional model of a real object into a live-action photo simply, quickly and accurately.
The method of the invention comprises two main stages: (A) when the system receives a live-action photo to be fused with the digitized three-dimensional model of a real object; and (B) when the model is fused with the photo. The two stages are described below.
A. When the system receives a live-action photo to be fused with the digitized three-dimensional model of a real object
A1. One or more live-action photos to be fused with the digitized three-dimensional model are obtained, either to match a suitable existing scene to the models used for fusion or to show the models' presence in the photo. When acquiring the photos, their definition should match that of the models as far as possible: many models of real objects are produced by high-definition scanning equipment, and combining such high-fidelity images with low-definition photos easily produces a sense of incongruity;
A2. The live-action photos can be shot with any device meeting medium-to-high photo-quality requirements, including cameras, video cameras and mobile phones, and transmitted via the Internet, a data cable, Bluetooth or similar means to a computer, laptop, virtual server, tablet, mobile phone or other device on which the intelligent fusion system is installed. Shooting and transmission may happen at different places and times; on a mobile communication device carrying the system, one can also shoot directly after starting the program;
A3. When one or more live-action photos to be fused with the digitized three-dimensional model of a real object are transmitted to the intelligent fusion system of the invention, the system's intelligent image analysis unit intelligently analyzes, calculates and obtains all information related to the photos, including: the camera pose, shooting parameters and photo distortion parameters at the time of shooting, the spatial dimensions of the photographed scene, the semantic features of each object in the photo, and the spatial relationships between the objects; while analyzing, the unit also intelligently recognizes each object in the photo and segments each recognized object from the others;
A4. The intelligently analyzed and calculated information is stored in the system's information storage unit for later use.
B. When the digitized three-dimensional model of the object is fused with the live-action photo
B1. The live-action photo to be fused with the digitized three-dimensional model of a real object is presented on the system's fusion processing interface;
B2. One or more digitized three-dimensional models to be intelligently fused with the photo are retrieved from the system's model database. These models can be produced by three-dimensional scanning or three-dimensional color scanning, including: (1) scanning the object with a three-dimensional scanner; (2) photographing the object with a binocular depth camera and generating the model from the measured depth information; (3) shooting a continuous series of photos or a video with an ordinary monocular camera and reconstructing the model with a photogrammetry algorithm, using the correspondences between the overlapping parts of the photos and the camera projection geometry of each shot (a two-view sketch follows);
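For method (3), a minimal two-view sketch of the photogrammetric idea with OpenCV: ORB feature matching, essential-matrix pose recovery, and triangulation. A real pipeline would use many views plus bundle adjustment, and the reconstruction scale here is arbitrary:

```python
import cv2
import numpy as np

def two_view_points(img1, img2, K):
    """Reconstruct a sparse 3D point cloud from two overlapping photos
    taken by a monocular camera with known intrinsics K."""
    orb = cv2.ORB_create(4000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    E, inliers = cv2.findEssentialMat(p1, p2, K, cv2.RANSAC, 0.999, 1.0)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=inliers)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera at origin
    P2 = K @ np.hstack([R, t])                          # relative second pose
    X = cv2.triangulatePoints(P1, P2, p1.T, p2.T)       # homogeneous 4xN
    return (X[:3] / X[3]).T                             # Nx3 point cloud
```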
The digitized three-dimensional model used for fusion can be one obtained by scanning a real object, or a model of an imagined object created through creativity, imagination and three-dimensional modeling;
the system can also connect, via the Internet, a data cable or Bluetooth, to external devices such as three-dimensional scanners, databases and mobile communication devices, and directly obtain the models to be fused with the photo;
B3. During retrieval, the intelligent fusion system supports not only intelligent display of the candidate models but also human-machine interaction: the operator can call up any candidate model on the fusion interface, study its details by rotating, zooming and moving it, and measure any point on the model with the measuring tools the system provides;
B4. The retrieved models to be fused are transferred onto the one or more live-action photos presented on the main fusion interface;
B5. The digitized three-dimensional model is rotated, moved and scaled as required and arranged at a suitable position in the photo, while the system, according to the photo's related information and the model's position, intelligently converts the model to a two-dimensional projection, so that model and photo fuse naturally and the size, pose and perspective relationship of the represented real object in the scene are shown;
B6. When the models of one or more real objects are transferred onto the photo, the system intelligently analyzes and calculates their distances to the entities in the photo, which can be displayed on the processing interface as required;
the entities include: roofs, walls, ground, water surfaces and other physical objects;
B7. If the system detects that a model transferred onto the photo touches or intrudes into an entity originally in the photo, it gives a real-time prompt;
B8. If the operator wants to know the spatial dimensions between objects in the photo into which models have been imported (including the objects the models represent), a starting point can be freely chosen and the distance measured;
B9. If the model should be partly inserted behind objects in the photo, the central processing unit intelligently identifies the occluding parts of those objects as the photo's foreground and, as required, segments this foreground and overlays it on the model's two-dimensional projection, fusing the model into the photo with high fidelity;
B10. When the model is dragged between two objects in the photo, the central processing unit intelligently calculates whether the gap can hold the represented real object; if the gap is insufficient it raises an alarm so the fused photo does not contradict reality, and if the gap is too large it issues a reminder;
B11. If the model overlaps an object in the photo when fused, the central processing unit can intelligently identify that object and, as required, remove it to leave room for the model, fusing the model into the photo with high fidelity;
B12. If the model differs slightly from the scene in hue, brightness or similar attributes, the central processing unit intelligently adjusts it according to the photo's hue and brightness so that it blends in more naturally;
B13. If the user has not decided what to put in a certain part of the scene, that part can be outlined with a tool frame provided by the system, which then intelligently searches its adapted models and selects a suitable one to fill into the photo;
B14. If several photos of the same scene shot from different angles are input, then after the models have been arranged on one photo, the others automatically generate fused photos following the arrangement logic of the first, showing the fusion effect from the different shooting angles.
The inventive method of intelligently fusing the digitized three-dimensional model of a real object with a live-action photo can be put into practice in many ways; two of them are described in detail below:
example 1
A community plans to place several children's play facilities on an empty site. After discussion they decide to order products from a certain playground-equipment manufacturer. The manufacturer, however, offers 20 products, while a first estimate suggests the small site can hold only about five or six. The community staff naturally find it hard to choose, so they take several live-action photos of the site and send them to the manufacturer, asking for help. After receiving the photos, the manufacturer proceeds as follows:
S1. The live-action photos of the community's empty site are transmitted to the intelligent fusion system;
S2. The system's intelligent image analysis unit intelligently analyzes, calculates and obtains all information related to the photos, including: the camera pose, shooting parameters and photo distortion parameters at the time of shooting, the spatial dimensions of the photographed scene, the semantic features of each object and the spatial relationships between the objects; while analyzing, it also recognizes each object in the photos and segments it from the others;
S3. The intelligently analyzed and calculated information is stored in the system's information storage unit for later use;
S4. The photos of the community site are displayed on the system's fusion processing interface;
S5. The digitized three-dimensional models of all play facilities produced by the factory are entered into the system's model database;
S6. Suitable facilities are selected from the model database and sent to the interface displaying the photos of the community site;
S7. The imported facility models are moved and rotated on the interface, while the system, according to the photos' related information and each model's position, intelligently converts the models to two-dimensional projections, fusing them naturally into the photos and showing the size, pose and perspective relationship of the represented facilities in the scene;
S8. After a set of facility models has been arranged on one photo of the site, the system automatically fuses the photos taken from other angles following the arrangement logic of the first, showing the fusion effect from different shooting angles;
S9. The factory prepares several rendering schemes based on experience and shares the designs with the community staff through the intelligent fusion system;
S10. On the system's processing interface, the community staff can view the facility arrangements suggested by the factory, measure the spatial relationships and exact sizes between objects with the system's measuring tools, move the models to see the effect of position changes, call up new models from the database to add to or replace those in the rendering, and feed their comments back to the factory. On every move and adjustment, the system instantly converts each model to a two-dimensional projection according to its position in the photo and the photo's related information, keeping the fusion natural and the size, pose and perspective of the represented objects correct;
S11. Communicating fully through the intelligent fusion system, the factory and the community finally agree on a rendering of the facility arrangement.
Example 2
A housewife wants to rearrange one side of her living room, originally a set of low cabinets and a wall arranged around a large television set. With the growth of the Internet, the family watches television less and less, so she wants to refurnish that side, but the family members have different ideas: some suggest a row of bookcases plus a flower stand; some suggest a treadmill and a floor-standing speaker; others suggest a freezer and a pair of sofas. To unify the family's opinions, she decides to use the intelligent fusion system for three-dimensional models and live-action photos provided by the invention to produce several renderings for the family to consider. The steps are as follows:
S1. The housewife photographs the side of the living room to be refurnished and sends the photo to a home-decoration design company;
S2. Following her wishes, the design company obtains candidate digitized three-dimensional models of furniture and appliances from furniture and home-appliance suppliers, and stores them in the model database of the intelligent fusion system;
S3. The design company sends the living-room photo to the intelligent fusion system, where it is presented on the fusion processing interface;
S4. The design company drafts several fusion schemes based on her preliminary wishes;
S5. For each scheme, the design company sends the relevant models onto the living-room photo and arranges them according to her wishes. On every move and adjustment, the system instantly converts each model to a two-dimensional projection according to its position in the photo and the photo's related information, keeping the fusion natural and the size, pose and perspective of the represented objects correct;
S6. When a model sent onto the photo overlaps a real object originally in it, the system intelligently processes the original object's image as required, to display the new arrangement better;
S7. Through the intelligent fusion system, the design company shows the housewife a rendering of each family member's idea for the new arrangement;
S8. On the system's processing interface, the housewife can view the fused layouts conceived by the designer, measure the spatial relationships and exact sizes of objects in the renderings with the measuring tools, move models to see the changed effect, and call up new furniture or appliance models from the database or external resources to add to or replace those in the rendering, refining her ideas as she goes. On every move and adjustment, the system converts each model to a two-dimensional projection according to the photo's related information and the model's position in it, keeping the fusion natural and the size, pose and perspective of the represented objects correct;
S9. After comparing and adjusting the schemes and weighing everyone's opinions, the final rendering of the living-room arrangement is completed with the design company's help.
It should be noted that:
The structure required to construct such a system is apparent from the above description. In addition, the invention is not directed to any particular programming language; it should be appreciated that the teachings described herein may be implemented in a variety of programming languages, and the above description of specific languages is provided to disclose the enablement and best mode of the invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the above description of exemplary embodiments, various features of the invention are sometimes grouped together in a single embodiment or its description for the purpose of streamlining the disclosure and aiding the understanding of one or more of the inventive aspects. The disclosed method, however, should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects may lie in less than all features of a single foregoing disclosed embodiment. The claims following the detailed description are therefore expressly incorporated into it, with each claim standing on its own as a separate embodiment of the invention.
Those skilled in the art will appreciate that modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component and, furthermore, they may be divided into a plurality of sub-modules or sub-units or sub-components. Any combination of all features disclosed in this specification, and all processes or units of any method or apparatus so disclosed, may be employed, except that at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features, but not others, included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the following claims, any of the claimed embodiments can be used in any combination.
Various component embodiments of the invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some or all of the components in an apparatus according to an embodiment of the present invention may be implemented in practice using a microprocessor or digital signal processor (DSP). The present invention may also be implemented as an apparatus or device program (e.g., a computer program and a computer program product) for performing a part or all of the methods described herein. Such a program embodying the present invention may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware.
The present invention is not limited to the above-mentioned embodiments, and any changes or substitutions that can be easily understood by those skilled in the art within the technical scope of the present invention are intended to be included in the scope of the present invention.

Claims (4)

1. A system and method for intelligent fusion of a three-dimensional model and a live-action photo, characterized by comprising the following steps:
transmitting one or more live-action photos of a scene, to be fused with the digitized three-dimensional model of a real object, to the intelligent system for fusing digitized three-dimensional models and live-action photos, the intelligent fusion system analyzing, calculating, and storing related information, including: the camera pose at the time the photo was taken, camera parameters, photo distortion parameters, the size and spatial layout of the scene captured in the photo, the semantic features of each object in the photo together with the corresponding independent images segmented from the whole photo, and the spatial relationships between the objects;
presenting the live-action photo on a processing interface of the intelligent fusion system;
searching for and retrieving the adapted digitized three-dimensional models of one or more physical objects from resources internal or external to the system;
transmitting the digitized three-dimensional model of the real object to the processing interface presenting the live-action photo;
rotating, moving, and scaling the digitized three-dimensional model as needed and arranging it at a proper position in the live-action photo;
the system intelligently analyzes and calculates, according to the related information of the live-action photo and the position of the digitized three-dimensional model within it, the conversion from the three-dimensional model to a two-dimensional projection, so that the digitized three-dimensional model and the live-action photo are naturally fused and the size, pose, and perspective relationships of the represented real object are displayed as if it actually existed in the real scene.
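The "photo distortion parameters" recited above matter because the model's ideal pinhole projection and the lens-distorted photo must share a single camera model before they can fuse seamlessly. As a hedged illustration, the sketch below removes lens distortion from a photo with OpenCV, assuming Brown-Conrady coefficients (k1, k2, p1, p2, k3) have already been recovered by the analysis step; the file names and all numeric values are placeholders, not data from the patent.

```python
import cv2
import numpy as np

# Intrinsics and distortion coefficients as the analysis step might recover
# them (placeholder values).
K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])
dist = np.array([-0.12, 0.03, 0.001, 0.0005, 0.0])  # k1, k2, p1, p2, k3

photo = cv2.imread("living_room.jpg")        # hypothetical input photo
undistorted = cv2.undistort(photo, K, dist)  # remap to the ideal pinhole view
cv2.imwrite("living_room_undistorted.jpg", undistorted)
```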
2. The system for intelligent fusion of a three-dimensional model and a live-action photo according to claim 1, comprising: the intelligent fusion system for fusing the digitized three-dimensional model of a real object into a live-action photo is composed of an intelligent image analysis unit, an information storage unit, a central processing unit, a digitized three-dimensional model database, and an information input and output unit;
the intelligent image analysis unit is specially programmed software whose main tasks are to analyze, calculate, and acquire all information related to the live-action photo, including: the camera pose at the time the photo was taken, shooting parameters, photo distortion parameters, the size and spatial layout of the scene captured in the photo, the semantic features of the objects in the photo, and the spatial relationships of the objects; its functions further include intelligently identifying each object in the live-action photo and segmenting it from the other objects;
the information storage unit is a digital information storage module whose tasks include: storing the information analyzed and processed by the intelligent image analysis unit and the central processing unit for later use;
the central processing unit is specially programmed software whose functions include: fusing the digitized three-dimensional model of a real object into a live-action photo of a scene with high realism, so as to achieve the pictorial effect of the real object actually existing in the scene of the photo. The central processing unit is connected to a processing interface to realize picture display and human-computer interaction. The displayed objects include: the digitized three-dimensional model, the live-action photo, the fusion process of the digitized three-dimensional model and the live-action photo, and the fused effect drawing. The human-computer interaction functions cover all the specific steps involved in fusing the three-dimensional model with the live-action photo. The processing interface also supports multiple operation methods, including issuing instructions through keyboard, mouse, and touch-screen controls to accomplish various tasks, including: searching, selecting, observing, and measuring the digitized three-dimensional model of a real object, fusing the digitized three-dimensional model with the live-action photo, and examining and appreciating the resulting effect drawing;
the digitized three-dimensional model database is an information storage module dedicated to storing candidate digitized three-dimensional models of objects to be fused with live-action photos, and is further characterized in that a dedicated archive is created for each digitized three-dimensional model of a physical object to collect and store the background information associated with it, including: the producer of the physical object, the provider of its three-dimensional model, the size of the physical object, the purpose of the physical object, and the technical specifications of the physical object;
the information input and output unit is the interface through which the whole system exchanges information, and its functions include: inputting information related to the fusion of the three-dimensional model and the live-action photo, and outputting various information related to the fusion, including the fused photo, a record of the fusion process, an animated demonstration of the fusion, the digitized three-dimensional model of the real object participating in the fusion, and the live-action photo participating in the fusion. The input and output unit may be connected to related communication devices, including: a cell phone, a notebook computer, a desktop computer, a tablet computer, an inkjet or laser printer, a 3D printer, and an optical disc recorder.
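Read as a software architecture, the five units of claim 2 map naturally onto cooperating modules. The skeleton below is a speculative Python sketch of that decomposition, not code from the patent; every class, field, and method name is invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class PhotoInfo:
    """Output of the intelligent image analysis unit for one photo."""
    camera_pose: list                            # R and t at shooting time
    intrinsics: list                             # focal lengths, principal point
    distortion: list                             # lens distortion coefficients
    objects: dict = field(default_factory=dict)  # label -> segmented sub-image

@dataclass
class ModelRecord:
    """One archived entry in the digitized three-dimensional model database."""
    mesh_path: str
    producer: str        # producer of the physical object
    provider: str        # provider of the three-dimensional model
    real_size_mm: tuple  # real-world dimensions of the object
    purpose: str

class FusionSystem:
    """Minimal composition of the five units named in claim 2."""

    def __init__(self):
        self.storage: dict = {}   # information storage unit
        self.model_db: dict = {}  # digitized three-dimensional model database

    def analyze_photo(self, photo_id, photo):
        # Stands in for the intelligent image analysis unit; a real system
        # would estimate pose, intrinsics, distortion, and segmentation here.
        info = PhotoInfo(camera_pose=[], intrinsics=[], distortion=[])
        self.storage[photo_id] = info
        return info

    def fuse(self, photo_id, model_id, position):
        # Central processing unit: project the model into the photo at
        # `position` using the stored PhotoInfo (see the earlier sketch).
        raise NotImplementedError
```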
3. The method for intelligently fusing a three-dimensional model with a live-action photo according to claim 1, further comprising:
the operation modes for intelligent fusion of the digitized three-dimensional model of a real object with a live-action photo include: online operation performed via a connection to the Internet, and offline operation performed only on a local computer or other similar device.
4. The method for intelligently fusing a three-dimensional model with a live-action photo according to claim 1, further comprising:
the digitized three-dimensional models of real objects used for fusion with live-action photos include: digitized three-dimensional models obtained by scanning actually existing objects, digitized three-dimensional models of actually existing objects obtained by other methods, and digitized three-dimensional models of fictitious objects created by means such as creativity, imagination, and three-dimensional modeling.
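Whatever its origin under claim 4, a digitized model must carry true physical scale before fusion, since claim 2's archive records the real object's dimensions and the fused image must show the object at its real size. A small sketch, assuming the open-source trimesh library and hypothetical file names, rescales a scanned mesh to a known real-world height:

```python
import trimesh

# Load a scanned model; scanners often emit meshes in arbitrary units.
mesh = trimesh.load("sofa_scan.glb", force="mesh")

# Rescale so the mesh's vertical extent matches the real object's known
# height, e.g. a sofa that is 0.85 m tall (value is a placeholder).
real_height_m = 0.85
mesh.apply_scale(real_height_m / mesh.extents[2])

mesh.export("sofa_scaled.glb")
```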
CN202310060142.6A 2023-01-17 2023-01-17 Equipment and method for intelligent fusion of three-dimensional model and live-action photo Active CN116012564B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310060142.6A CN116012564B (en) 2023-01-17 2023-01-17 Equipment and method for intelligent fusion of three-dimensional model and live-action photo

Publications (2)

Publication Number Publication Date
CN116012564A 2023-04-25
CN116012564B (en) 2023-10-20

Family

ID=86021004

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310060142.6A Active CN116012564B (en) 2023-01-17 2023-01-17 Equipment and method for intelligent fusion of three-dimensional model and live-action photo

Country Status (1)

Country Link
CN (1) CN116012564B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2938845B1 (en) * 1998-03-13 1999-08-25 三菱電機株式会社 3D CG live-action image fusion device
CN103226830A (en) * 2013-04-25 2013-07-31 北京大学 Automatic matching correction method of video texture projection in three-dimensional virtual-real fusion environment
CN107292954A (en) * 2017-06-21 2017-10-24 重庆市勘测院 A kind of threedimensional model and photo fusion method
US20210248822A1 (en) * 2018-04-23 2021-08-12 The Regents Of The University Of Colorado, A Body Corporate Mobile And Augmented Reality Based Depth And Thermal Fusion Scan
CN108648272A (en) * 2018-04-28 2018-10-12 上海激点信息科技有限公司 Three-dimensional live acquires modeling method, readable storage medium storing program for executing and device
CN110060331A (en) * 2019-03-14 2019-07-26 杭州电子科技大学 Three-dimensional rebuilding method outside a kind of monocular camera room based on full convolutional neural networks
US20200342652A1 (en) * 2019-04-25 2020-10-29 Lucid VR, Inc. Generating Synthetic Image Data for Machine Learning
CN112396686A (en) * 2019-07-31 2021-02-23 鸿富锦精密电子(天津)有限公司 Three-dimensional scene engineering simulation and live-action fusion system and method
WO2022040970A1 (en) * 2020-08-26 2022-03-03 南京翱翔信息物理融合创新研究院有限公司 Method, system, and device for synchronously performing three-dimensional reconstruction and ar virtual-real registration
CN115294207A (en) * 2022-06-30 2022-11-04 南京南邮信息产业技术研究院有限公司 Fusion scheduling system and method for smart campus monitoring video and three-dimensional GIS model

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
GUAN Yilong; WU Hongbo; ZHANG Liming; GUO Min: "3D Digital Campus Modeling and Implementation Based on MapGIS and a Real-Scene Spatial Information Model", Urban Geotechnical Investigation & Surveying, no. 03, pages 29-33 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118113402A (en) * 2024-03-15 2024-05-31 宁波艾腾湃数字技术有限公司 Interactive article display method and system combining three-dimensional model and video
CN118200663A (en) * 2024-03-15 2024-06-14 宁波艾腾湃数字技术有限公司 Interactive video playing method combined with digital three-dimensional model display

Also Published As

Publication number Publication date
CN116012564B (en) 2023-10-20

Similar Documents

Publication Publication Date Title
US10755485B2 (en) Augmented reality product preview
US11367250B2 (en) Virtual interaction with three-dimensional indoor room imagery
US11640672B2 (en) Method and system for wireless ultra-low footprint body scanning
GB2564745B (en) Methods for generating a 3D garment image, and related devices, systems and computer program products
CN116012564B (en) Equipment and method for intelligent fusion of three-dimensional model and live-action photo
US9420253B2 (en) Presenting realistic designs of spaces and objects
US10154246B2 (en) Systems and methods for 3D capturing of objects and motion sequences using multiple range and RGB cameras
US10628666B2 (en) Cloud server body scan data system
US20190130649A1 (en) Clothing Model Generation and Display System
US20180144237A1 (en) System and method for body scanning and avatar creation
Gruber et al. The city of sights: Design, construction, and measurement of an augmented reality stage set
EP3398353A1 (en) A method for generating a customized/personalized head related transfer function
Tang et al. AR interior designer: Automatic furniture arrangement using spatial and functional relationships
CN117333644A (en) Virtual reality display picture generation method, device, equipment and medium
CN110349269A (en) A kind of target wear try-in method and system
WO2018182938A1 (en) Method and system for wireless ultra-low footprint body scanning
Fadzli et al. A systematic literature review: Real-time 3D reconstruction method for telepresence system
CN116664770A (en) Image processing method, storage medium and system for shooting entity
CN116486018A (en) Three-dimensional reconstruction method, apparatus and storage medium
CN114445171A (en) Product display method, device, medium and VR equipment
Hou et al. Real-time markerless facial motion capture of personalized 3D real human research
Sorokin et al. Deep learning in tasks of interior objects recognition and 3D reconstruction
WO2020075185A1 (en) Automatic furniture and electronic equipment recommender
Mutalib et al. PRO-VAS: utilizing AR and VSLAM for mobile apps development in visualizing objects
小川航平 et al. A Study on Embodied Expressions in Remote Teleconference

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240226

Address after: Room 104, Building 2, No. 657 Liuting Street, Haishu District, Ningbo City, Zhejiang Province, 315010

Patentee after: Ningbo Aitengpai Digital Technology Co.,Ltd.

Country or region after: China

Address before: Room 101-69, Hon Hai Commercial Building, Free Trade Zone, Ningbo City, Zhejiang Province, 315201

Patentee before: NINGBO AITENGPAI INTELLIGENT TECHNOLOGY Co.,Ltd.

Country or region before: China
