CN111179406A - Product model display method and system - Google Patents


Info

Publication number
CN111179406A
CN111179406A
Authority
CN
China
Prior art keywords
virtual
azimuth
product model
viewing
virtual product
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811333543.XA
Other languages
Chinese (zh)
Inventor
王珏 (Wang Jue)
王琦琛 (Wang Qichen)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Yunshen Intelligent Technology Co ltd
Original Assignee
Shanghai Yunshen Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Yunshen Intelligent Technology Co., Ltd.
Priority to CN201811333543.XA
Publication of CN111179406A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 15/00 3D [three-dimensional] image rendering
    • G06T 15/06 Ray-tracing
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a method and a system for displaying a product model, wherein the method comprises the following steps: a smart device imports a virtual product initial model from a virtual engine, wherein the virtual product initial model is a three-dimensional layout scene graph; the smart device performs image processing on the virtual product initial model using a resource library to obtain a virtual product model; and the smart device delivers the virtual product model, according to a delivery list, to the corresponding display platform end for projection display. The invention realizes the reprocessing of a virtual product model from a virtual engine and the delivery of the processed virtual product model.

Description

Product model display method and system
Technical Field
The invention relates to the field of projection display, in particular to a method and a system for displaying a product model.
Background
Unreal Engine 4 is an editable virtual engine, and a merchant can use Unreal Engine 4 to present striking product effects.
Many decoration companies on the market have developed applications based on Unreal Engine 4, such as Kujiale, "Impersonator" and "Love Nest", for editing virtual product models of their respective products. However, in the prior art these applications cannot reprocess one another's virtual product models, and the resulting virtual product model cannot be displayed in a physical scene.
The invention provides a product model display method and a product model display system based on the method.
Disclosure of Invention
The invention aims to provide a product model display method and a product model display system, which are used for reprocessing a virtual product model in a virtual engine and putting the processed virtual product model.
The technical scheme provided by the invention is as follows:
the invention provides a method for displaying a product model, which comprises the following steps: a smart device imports a virtual product initial model from a virtual engine, wherein the virtual product initial model is a three-dimensional layout scene graph; the smart device performs image processing on the virtual product initial model using a resource library to obtain a virtual product model; and the smart device delivers the virtual product model, according to a delivery list, to the corresponding display platform end for projection display.
Preferably, the method further comprises the following steps: and the intelligent equipment exports the processed virtual product model to a virtual engine.
Preferably, the method further comprises the following steps: the intelligent equipment acquires configuration information of the virtual product model, wherein the configuration information comprises product information of the virtual product model and display platform information; and generating a release list of each display platform according to the configuration information.
Preferably, the step in which the smart device delivers the virtual product model, according to the delivery list, to the corresponding display platform end for projection display specifically comprises: the smart device delivers the virtual product model to the corresponding display platform according to the delivery list; the display platform end obtains position reference information corresponding to a viewing position, calculates a plurality of azimuth viewing angles of the viewing position by combining the position reference information, generates a virtual picture of the virtual product model for each azimuth according to that azimuth's viewing angle, fuses the virtual pictures of the plurality of azimuths into a scene picture of the virtual scene as viewed from the viewing position, and then projects the scene picture.
Preferably, the step of calculating a plurality of azimuth viewing angles of the viewing position by combining the position reference information specifically includes: calculating an azimuth viewing angle in one azimuth of the viewing position by combining the position reference information; calculating azimuth viewing angles of adjacent azimuths of the rest azimuths in the plurality of azimuths according to the calculated azimuth viewing angles and the angle relation between the adjacent azimuths; or respectively calculating a plurality of azimuth viewing angles of the watching position by combining the position reference information and the viewing angle calculation formulas of all azimuths.
The invention also provides a product model display system, which comprises the intelligent equipment and a display platform end:
the smart device includes: the import module is used for importing a virtual product initial model from the virtual engine; the virtual product initial model is a three-dimensional layout scene graph; the processing module is used for processing the image of the virtual product initial model by using a resource library to obtain a virtual product model; and the releasing module is used for releasing the virtual product model to a corresponding display platform end for projection display according to the releasing list.
Preferably, the smart device further includes: and the export module is used for exporting the processed virtual product model to the virtual engine.
Preferably, the smart device further includes: the information acquisition module is used for acquiring configuration information of the virtual product model, wherein the configuration information comprises product information of the virtual product model and display platform information; and the list generation module is used for generating a release list of each display platform according to the configuration information.
Preferably, the delivery module is used for delivering the virtual product model to the corresponding display platform according to the delivery list; the display platform comprises: the positioning module, used for acquiring position reference information corresponding to the viewing position; the calculation module, used for calculating a plurality of azimuth viewing angles of the viewing position by combining the position reference information; the picture generation module, used for generating a virtual picture of the virtual product model for each azimuth according to that azimuth's viewing angle, and for fusing the virtual pictures of the plurality of azimuths into a scene picture of the virtual scene as viewed from the viewing position; and the projection module, used for projecting the scene picture.
Preferably, the calculation module is further configured to calculate an azimuth viewing angle in one azimuth of the viewing position by combining the position reference information; calculating azimuth viewing angles of adjacent azimuths of the rest azimuths in the plurality of azimuths according to the calculated azimuth viewing angles and the angle relation between the adjacent azimuths; or; and the system is used for respectively calculating a plurality of azimuth viewing angles of the watching position by combining the position reference information and the viewing angle calculation formulas of all azimuths.
The product model display method and the product model display system provided by the invention can bring at least one of the following beneficial effects:
the virtual product model in the virtual engine can be reprocessed, such as image material rendering, lighting rendering, mapping and the like, and the processed virtual product model can be put on a corresponding display platform for display, so that the product can be promoted. The display platform end can also adjust the displayed virtual picture according to the position of the user, so that the virtual picture watched by the user is more real, and the watching experience of the user is improved.
Drawings
The above features, technical features, advantages and modes of realization of a method and system for displaying a product model will be further described in the following detailed description of preferred embodiments in a clearly understandable manner by referring to the accompanying drawings.
FIG. 1 is a flow chart of one embodiment of a method of displaying a product model of the present invention;
FIG. 2 is a flow chart of another embodiment of a method for displaying a product model of the present invention;
FIG. 3 is a schematic view of the viewing angle at various orientations of a viewpoint/viewing position in accordance with the present invention;
FIG. 4 is a schematic view of the viewing angle at various orientations of another viewpoint/viewing position in accordance with the present invention;
FIG. 5 is a schematic view of a perspective of another viewpoint/viewing position in various orientations of the present invention;
FIG. 6 is a schematic diagram of cropping in a direction in front of a viewpoint/viewing position in accordance with the present invention;
FIG. 7 is a schematic view of cropping at a view point/viewing position rear orientation in accordance with the present invention;
FIG. 8 is a schematic diagram of cropping in the left-hand side of a viewpoint/viewing position in accordance with the present invention;
FIG. 9 is a schematic diagram of cropping in the right side orientation of a viewpoint/viewing position in the present invention;
FIG. 10 is a schematic diagram of a product model display system according to an embodiment of the present invention.
The reference numbers illustrate:
11-an import module, 12-a processing module, 13-a delivery module, 14-an export module, 15-an information acquisition module and 16-a list generation module;
21-positioning module, 22-calculating module, 23-picture generating module and 24-projecting module.
Detailed Description
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the following description will be made with reference to the accompanying drawings. It is obvious that the drawings in the following description are only some examples of the invention, and that for a person skilled in the art, other drawings and embodiments can be derived from them without inventive effort.
For the sake of simplicity, the drawings only schematically show the parts relevant to the present invention, and they do not represent the actual structure as a product. In addition, in order to make the drawings concise and understandable, components having the same structure or function in some of the drawings are only schematically illustrated or only labeled. In this document, "one" means not only "only one" but also a case of "more than one".
The invention provides an embodiment of a product model display method, as shown in fig. 1, comprising the steps of:
S101, a smart device imports a virtual product initial model from a virtual engine, wherein the virtual product initial model is a three-dimensional layout scene graph;
S102, the smart device performs image processing on the virtual product initial model using a resource library to obtain a virtual product model;
S103, the smart device delivers the virtual product model, according to a delivery list, to the corresponding display platform end for projection display.
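As a rough sketch (not the patent's actual implementation; every name, data structure and the sample scene below are illustrative assumptions), steps S101 to S103 can be modeled as an import/process/deliver pipeline:

```python
# Illustrative sketch of steps S101-S103; all names are assumed.
from dataclasses import dataclass, field

@dataclass
class ProductModel:
    name: str
    scene_graph: dict                       # the 3D layout scene graph
    applied_effects: list = field(default_factory=list)

def import_initial_model(engine_export: dict) -> ProductModel:
    """S101: import the initial model exported by an external virtual engine."""
    return ProductModel(name=engine_export["name"],
                        scene_graph=engine_export["scene"])

def process_with_resource_library(model: ProductModel, library: dict) -> ProductModel:
    """S102: apply material, lighting and map rendering from a resource library."""
    for effect in ("material", "lighting", "map"):
        if effect in library:
            model.applied_effects.append(library[effect])
    return model

def deliver(model: ProductModel, delivery_list: list) -> dict:
    """S103: deliver the processed model to every platform on the delivery list."""
    return {platform: model for platform in delivery_list}

export = {"name": "nordic-kitchen", "scene": {"cabinets": 4}}
model = process_with_resource_library(import_initial_model(export),
                                      {"material": "oak", "lighting": "warm"})
targets = deliver(model, ["Shanghai-Xujiahui", "Nanjing"])
# targets maps each display platform to the processed model
```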
Unreal Engine 4 is an editable virtual engine, and many decoration companies on the market have developed applications based on it, such as Kujiale, "Impersonator" and "Love Nest", for editing virtual product models of their respective products, such as decoration effect pictures of a kitchen, or effect pictures of cabinets and furniture.
However, these decoration companies generally send the finished virtual products directly to customers. This is point-to-point display: it has no wide reach and little publicity value for the merchant.
Therefore, the invention also designs a virtual engine, based on Unreal Engine 4, on the smart device. The smart device imports a virtual product initial model from other virtual engines (such as Kujiale, "Impersonator", "Love Nest" and the like), for example a three-dimensional layout scene graph of a Nordic-style kitchen. Then the smart device applies a pre-stored resource library to perform image processing on the imported virtual product initial model; the processing includes image material rendering, lighting rendering, map rendering, and the like.
The invention can set up a plurality of display platform ends in each city; for example, display platform ends can be set up at Xujiahui in Shanghai, Toyobo in Nanjing, and so on. The merchant selects the display platforms on which the virtual product model should be delivered, the smart device generates a delivery list from the merchant's selection, and the virtual product model is then delivered according to that list.
For example, the smart device imports a three-dimensional layout scene graph of a Nordic-style kitchen from another virtual engine. The smart device then applies the resource library to render the materials of the cabinets in the scene graph and the overall light color of the picture, finally obtaining the processed virtual product model, which is delivered according to the delivery list.
The present invention also provides an embodiment of a method for displaying a product model, as shown in fig. 2, including:
s201, the intelligent device imports a virtual product initial model from a virtual engine, wherein the virtual product initial model is a three-dimensional layout scene graph.
S202, the intelligent equipment acquires configuration information of the virtual product model, wherein the configuration information comprises product information of the virtual product model and display platform information; and generating a release list of each display platform according to the configuration information.
In this embodiment, after the virtual product initial model is imported, configuration information of the virtual product initial model also needs to be acquired, for example, as shown in table 1 below:
Table 1 (reproduced as an image in the original publication)
According to the configuration information, the smart device can generate a delivery list corresponding to the virtual product initial model, and can later deliver the processed virtual product model to the corresponding display platforms according to that delivery list.
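A minimal sketch of the list generation, assuming configuration entries that pair each product with the display platforms the merchant selected (the field names and sample values are illustrative, not from the patent):

```python
# Group products by display platform to obtain one delivery list per
# platform; the field names ("product", "platforms") are assumptions.
def build_delivery_lists(config_entries):
    lists = {}
    for entry in config_entries:
        for platform in entry["platforms"]:
            lists.setdefault(platform, []).append(entry["product"])
    return lists

config = [
    {"product": "nordic-kitchen", "platforms": ["Shanghai-Xujiahui", "Nanjing"]},
    {"product": "oak-cabinet", "platforms": ["Shanghai-Xujiahui"]},
]
delivery = build_delivery_lists(config)
# delivery["Shanghai-Xujiahui"] == ["nordic-kitchen", "oak-cabinet"]
```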
S203, the intelligent device uses a resource library to process the image of the virtual product initial model to obtain a virtual product model.
S204, the smart device delivers the virtual product model, according to the delivery list, to the corresponding display platform end for projection display.
S205, the smart device delivers the virtual product model to the corresponding display platform according to the delivery list;
S206, the display platform end obtains position reference information corresponding to a viewing position, calculates a plurality of azimuth viewing angles of the viewing position by combining the position reference information, generates a virtual picture of the virtual product model for each azimuth according to that azimuth's viewing angle, fuses the virtual pictures of the plurality of azimuths into a scene picture of the virtual scene as viewed from the viewing position, and then projects the scene picture.
In the prior art, the virtual product model is mostly projected directly. For example, if the virtual product model is a three-dimensional layout scene graph of a kitchen design, the prior art projects only a front view, or a view from one particular direction, so the projected picture presents only a flat effect with no realism or stereoscopic feel.
In this embodiment, the virtual product model is not projected directly. Instead, after a viewer enters the viewing space, the viewer's viewing position is obtained from a mobile terminal the viewer carries. The mobile terminal performs indoor positioning; it can be a mobile phone, a tablet computer, a smart wristband or another device the viewer uses every day with an indoor positioning function integrated, or a dedicated handheld terminal with integrated indoor positioning can be produced.
Then, the display platform end can calculate a plurality of azimuth viewing angles of the viewing position by combining the position reference information. Specifically, at different positions a person's viewing angle differs in each azimuth: viewing the same object in the same azimuth from different positions presents different pictures, because the viewing angle toward the object changes. The position information of the viewing position comprises X-axis, Y-axis and Z-axis coordinate information, from which a plurality of azimuth viewing angles can be calculated, for example: the azimuth viewing angles to the front, the rear, the left, the right, above, and below.
Secondly, the display platform end generates, for each azimuth, the virtual picture of the virtual product model according to that azimuth's viewing angle. Specifically, the virtual scene is an integral picture: it may be a furnished home scene in an apartment, a display scene of a show flat, or a display scene of a commodity. The virtual scene is cropped in three-dimensional space: after the azimuth viewing angles of the viewing position are calculated, combining the front azimuth viewing angle crops the virtual scene into the front virtual picture; in the same way the rear, left-side, right-side, upper and lower virtual pictures can be obtained.
Finally, the virtual pictures of the plurality of azimuths are fused into the scene picture of the virtual product model as viewed from the viewing position, and the scene picture is projected. Specifically, after the front, rear, left-side and right-side virtual pictures are obtained, they are seamlessly spliced and fused into the complete scene picture observed at the viewing position.
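As a toy illustration of the splicing step (real fusion would also blend colors across the seams; this only shows the stitching order, with each azimuth picture reduced to a matrix of pixel rows):

```python
# Concatenate the left, front and right azimuth pictures row by row
# into one panoramic scene picture; a toy stand-in for seamless fusion.
def stitch(left, front, right):
    assert len(left) == len(front) == len(right)   # all pictures same height
    return [l + f + r for l, f, r in zip(left, front, right)]

left  = [[1, 1], [1, 1]]     # 2x2 picture cut at the left-side viewing angle
front = [[2, 2], [2, 2]]     # 2x2 picture cut at the front viewing angle
right = [[3, 3], [3, 3]]     # 2x2 picture cut at the right-side viewing angle
scene = stitch(left, front, right)
# scene == [[1, 1, 2, 2, 3, 3], [1, 1, 2, 2, 3, 3]]
```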
In this embodiment, when the position reference information corresponding to the viewing position is obtained, the position reference information may be two types of position information:
in the first type, the position reference information is virtual position information:
converting the viewing position information in the viewing space into virtual position information in the virtual scene according to the corresponding relation between the space coordinates of the viewing space and the virtual coordinates of the virtual scene; and using the virtual position information as position reference information;
specifically, under the condition of real-time rendering, viewing position information is converted into virtual position information, and the calculation of the azimuth angle of view and the generation of a virtual picture are completed through the virtual position information. The essence of real-time rendering is the real-time computation and output of graphics data.
In the second type, the position reference information is position pixel information:
converting viewing position information in the viewing space into position pixel information in the virtual scene according to the corresponding relation between the space coordinates of the viewing space and the picture pixels of the virtual scene; and the positional pixel information is used as positional reference information.
Specifically, under the condition of offline rendering, viewing position information is converted into position pixel information, and the calculation of the azimuth viewing angle and the generation of a virtual picture are completed through the position pixel information.
Wherein, the scene model of the virtual scene and the space model of the viewing space are in a specific proportional relationship; the viewing space is shown in fig. 6. The specific proportional relation is 1:1.
In this embodiment, the plurality of orientations may be four orientations, such as front, rear, left and right; or six orientations, such as front, rear, left, right, up and down; or two orientations, such as up and down.
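The two conversions described above can be sketched as follows; the scene origin and the pixels-per-metre density are illustrative assumptions, while the 1:1 scale comes from the text:

```python
# Convert a viewing-space position to (a) virtual-scene coordinates for
# real-time rendering and (b) position pixels for offline rendering.
SCALE = 1.0   # the scene model and space model are in a 1:1 relation

def to_virtual_position(view_pos, scene_origin=(0.0, 0.0, 0.0)):
    """Viewing-space coordinates -> virtual-scene coordinates."""
    return tuple(o + SCALE * p for o, p in zip(scene_origin, view_pos))

def to_position_pixels(view_pos, pixels_per_metre=100):
    """Viewing-space coordinates -> position pixels in the scene picture."""
    x, y, _z = view_pos
    return (round(x * pixels_per_metre), round(y * pixels_per_metre))

virtual = to_virtual_position((2.0, 3.0, 1.5))   # == (2.0, 3.0, 1.5)
pixels  = to_position_pixels((2.0, 3.0, 1.5))    # == (200, 300)
```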
A plurality of azimuth viewing angles is calculated for each viewing position; the same azimuth has a different azimuth viewing angle at different viewing positions, and different azimuth viewing angles produce different virtual pictures in the same azimuth. Seamlessly splicing the virtual pictures of the plurality of azimuths forms the complete scene picture at that viewing position. Multi-screen fusion imaging is thereby realized whose viewing angle follows the viewer's position, so the viewer's perspective is kept updated in real time and the displayed three-dimensional scene picture is updated in time; the presented stereoscopic scene picture is not distorted by a change of viewing position.
By adjusting the scene picture of the virtual product model in real time according to the viewer's position, the projected scene picture better matches the viewer's perspective, so the picture is more realistic and has stereoscopic depth.
Preferably, the step of calculating a plurality of azimuth viewing angles of the viewing position by combining the position reference information specifically includes:
A1. calculating an azimuth viewing angle in one azimuth of the viewing position by combining the position reference information;
Specifically, when a plurality of azimuth viewing angles need to be calculated, for example the azimuth viewing angles of the front, rear, left and right azimuths, the front azimuth viewing angle can be calculated with its viewing-angle formula. As shown in fig. 6, the front azimuth viewing angle is FOV, with FOV = 2θ and tan θ = (L1/2 + s)/y, where L1 is the width of the viewing space, s is the lateral offset from the center position of the viewing space, and y is the viewing distance to the front within the viewing space.
A2. Calculating azimuth viewing angles of adjacent azimuths of the rest azimuths in the plurality of azimuths according to the calculated azimuth viewing angles and the angle relation between the adjacent azimuths;
specifically, the azimuth angle between the front azimuth viewing angle and the left or right azimuth viewing angle is a fixed angle of 180 degrees, and after the front azimuth viewing angle is calculated, the front azimuth viewing angle is subtracted from the fixed angle of 180 degrees, so that the azimuth viewing angle of the left or right azimuth can be obtained.
As shown in fig. 6, the azimuth angle between the front azimuth viewing angle and the right-side azimuth viewing angle is a fixed 180°, so the right-side azimuth viewing angle equals 180° minus the front azimuth viewing angle. The front and rear azimuth viewing angles are equal, and the circumferential angle around the viewpoint O is 360°, so once the right-side azimuth viewing angle ∠AOB is known, the left-side azimuth viewing angle can be calculated.
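The relations of method A can be sketched as follows, taking the front azimuth viewing angle as already computed and applying the fixed angle relations stated above:

```python
# Derive the remaining azimuth viewing angles (in degrees) from the
# front one: front + right = 180, rear = front, and the four angles
# close the 360-degree circumference around the viewpoint.
def derive_adjacent_fovs(front):
    right = 180.0 - front
    rear = front
    left = 360.0 - front - rear - right
    return {"front": front, "rear": rear, "left": left, "right": right}

fovs = derive_adjacent_fovs(90.0)
# a centred viewer gets 90 degrees in every azimuth
```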
Alternatively, B: a plurality of azimuth viewing angles of the viewing position are calculated respectively by combining the position reference information with the viewing-angle formula of each azimuth.
Specifically, when a plurality of azimuth viewing angles need to be calculated, for example, the azimuth viewing angles of the front, rear, left and right azimuths; the front azimuth viewing angle can be calculated by utilizing a viewing angle calculation formula of the front azimuth viewing angle; the rear azimuth viewing angle can be calculated by utilizing a viewing angle calculation formula of the rear azimuth viewing angle; the left-side azimuth viewing angle can be calculated by using a viewing angle calculation formula of the left-side azimuth viewing angle; the right-side azimuth viewing angle can be calculated by using a viewing angle calculation formula of the right-side azimuth viewing angle.
As shown in fig. 6, the front azimuth viewing angle is FOV, with FOV = 2θ and tan θ = (L1/2 + s)/y, where L1 is the width of the viewing space, s is the lateral offset from the center position of the viewing space, and y is the viewing distance to the front within the viewing space.
As shown in fig. 7, the rear azimuth viewing angle is FOV, with FOV = 2θ and tan θ = (L1/2 + s)/(L2 − y), where L2 is the length of the viewing space.
When the position information of the viewpoint o is known, the azimuth viewing angles of all azimuths can be calculated, and the azimuth viewing angles corresponding to the left and right sides of the viewpoint o can be calculated by a formula, which is not described herein again.
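Method B, restricted to the front (fig. 6) and rear (fig. 7) viewing-angle formulas described above, can be sketched as:

```python
import math

def fov_front(L1, s, y):
    """Front azimuth viewing angle: FOV = 2*theta, tan(theta) = (L1/2 + s)/y."""
    return 2 * math.degrees(math.atan((L1 / 2 + s) / y))

def fov_rear(L1, L2, s, y):
    """Rear azimuth viewing angle: tan(theta) = (L1/2 + s)/(L2 - y)."""
    return 2 * math.degrees(math.atan((L1 / 2 + s) / (L2 - y)))

# A viewer at the midpoint of a 4 m wide, 6 m long viewing space sees
# equal front and rear azimuth viewing angles:
assert math.isclose(fov_front(4.0, 0.0, 3.0), fov_rear(4.0, 6.0, 0.0, 3.0))
```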
Further preferably, the generating a virtual image corresponding to each orientation of the virtual product model according to the orientation view angle of each orientation specifically includes: when the azimuth visual angles of the front azimuth and the rear azimuth in the plurality of azimuth visual angles are equal and the azimuth visual angles of the left azimuth and the right azimuth are not equal, virtual scene pictures corresponding to the left azimuth visual angle and/or the right azimuth visual angle are cut in the virtual scene; and/or; and calculating cutting areas corresponding to the front view angle and/or the rear view angle and/or the upper view angle and/or the lower view angle respectively, and cutting out corresponding virtual scene pictures in the virtual scene according to the cutting areas and the view angles corresponding to the cutting areas.
Specifically, after the plurality of azimuth viewing angles are calculated, whether two equal azimuth viewing angles exist in the plurality of azimuth viewing angles is analyzed, and if two equal azimuth viewing angles exist, whether two azimuths corresponding to the two equal azimuth viewing angles are opposite azimuths is analyzed.
When the azimuth viewing angles of the front and rear azimuths are analyzed to be equal (the front and rear azimuths being opposite, as shown in fig. 6 and fig. 7), then, according to the actual display requirements, the virtual scene picture corresponding to the left-side viewing angle may be cut from the virtual scene, or the one corresponding to the right-side viewing angle, or both, or neither of the normal left-side and right-side pictures may be cut at all.
Specifically, the viewing position is a central position, and as shown in fig. 3, when the azimuth viewing angles of all the two opposite azimuths are equal, the virtual scene picture cut from the virtual scene at the central position in each azimuth is a normal picture.
Further preferably, the generating a virtual screen corresponding to each orientation of the virtual product model according to the orientation view angle of each orientation specifically includes: when the azimuth angles of the front azimuth and the rear azimuth in the azimuth angles are not equal and the azimuth angles of the left azimuth and the right azimuth are equal, virtual scene pictures corresponding to the front azimuth angle and/or the rear azimuth angle are cut out from the virtual scene;
specifically, after the plurality of azimuth viewing angles are calculated, whether two equal azimuth viewing angles exist in the plurality of azimuth viewing angles is analyzed, and if two equal azimuth viewing angles exist, whether two azimuths corresponding to the two equal azimuth viewing angles are opposite azimuths is analyzed.
When the azimuth viewing angles of the left-side and right-side azimuths are analyzed to be equal (the left and right azimuths being opposite, as shown in figs. 4 and 5 and in figs. 8 and 9), then, according to the actual display requirements, the virtual scene picture corresponding to the front viewing angle may be cut from the virtual scene, or the one corresponding to the rear viewing angle, or both, or neither of the normal front and rear pictures may be cut at all.
Further preferably, the generating a virtual image corresponding to each orientation of the virtual product model according to the orientation view angle of each orientation specifically includes: and calculating cutting areas corresponding to the left visual angle and/or the right visual angle and/or the upper visual angle and/or the lower visual angle respectively, and cutting out a corresponding virtual scene picture in the virtual scene according to the cutting areas and the visual angles corresponding to the cutting areas.
Specifically, when the left and right azimuth viewing angles are analyzed to be equal, the pictures corresponding to the left, right, upper and lower azimuth viewing angles are no longer normal pictures, and cutting is required.
According to the actual display requirements, one or more azimuths are selected from the left, right, upper and lower azimuth viewing angles, and the virtual scene picture corresponding to each selected azimuth is cut out.
Further preferably, generating the virtual extension picture corresponding to the virtual scene according to the azimuth viewing angle of each azimuth specifically includes: when neither the azimuth viewing angles of the left and right azimuths nor those of the front and rear azimuths are equal, calculating the cutting region corresponding to each azimuth viewing angle, and cutting the corresponding virtual scene picture out of the virtual scene according to each azimuth viewing angle and its cutting region.
Specifically, when the azimuth viewing angles of the left and right azimuths are analyzed to be unequal and those of the front and rear azimuths are also unequal, the pictures corresponding to the front, rear, left, right, upper and lower viewing angles are no longer normal pictures, and cutting is required.
According to the actual display requirements, one or more azimuths are selected from the front, rear, left, right, upper and lower azimuth viewing angles, and the virtual scene picture corresponding to each selected azimuth is cut out.
Further preferably, generating the virtual extension picture corresponding to the virtual scene according to the azimuth viewing angle of each azimuth specifically includes: when the X coordinate in the position reference information lies on the X-axis center line and the Y coordinate does not lie on the Y-axis center line, cutting the corresponding virtual scene picture out of the virtual scene according to the azimuth viewing angle corresponding to the X axis; and/or calculating the cutting regions corresponding to the azimuth viewing angles associated with the coordinates on the remaining axes of the position reference information, and cutting the corresponding virtual scene pictures out of the virtual scene according to each cutting region and its associated azimuth viewing angle.
Specifically, the X-axis center line is the straight line located at 1/2 of the width of the viewing space and parallel to the Y axis. If the viewing space is 4 m long and 2 m wide, the X-axis center line is the straight line at the 1 m width position, parallel to the Y axis.
When the viewing space is expressed in pixels, with a specification 800 dp long and 400 dp wide, the X-axis center line is the straight line at the 200 dp width position, parallel to the Y axis.
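The center-line test described above can be sketched as follows; the helper names `x_center_line` and `on_x_center_line` are illustrative, not from the patent, and the tolerance parameter is an assumption:

```python
def x_center_line(space_width):
    """The X-axis center line sits at 1/2 of the viewing-space width."""
    return space_width / 2

def on_x_center_line(x, space_width, tol=1e-9):
    """True when an X coordinate lies on the center line, within tolerance.
    Works for either unit used in the text: metres or dp."""
    return abs(x - x_center_line(space_width)) <= tol

# Metric example from the text: a space 4 m long and 2 m wide has its
# X-axis center line at the 1 m width position.
assert on_x_center_line(1.0, 2.0)
# Pixel example: an 800 dp x 400 dp space has its center line at 200 dp.
assert on_x_center_line(200, 400)
assert not on_x_center_line(150, 400)
```

The same check applies to the Y axis with the length of the viewing space substituted for its width.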
When the X coordinate information in the position reference information is 1 m or 200 dp, and the X axis corresponds to the front and rear azimuths, then according to the actual display requirements a virtual scene picture corresponding to the front azimuth viewing angle may be cut out of the virtual scene, a picture corresponding to the rear azimuth viewing angle may be cut out, pictures corresponding to both the front and rear azimuth viewing angles may be cut out, or, alternatively, the normal front and rear pictures of the virtual scene may be left uncut.
Further preferably, when the X coordinate information in the position reference information is on the X-axis center line and the Y coordinate information in the position reference information is not on the Y-axis center line, a clipping region corresponding to each of the orientation views corresponding to the coordinate information on the remaining axes in the position reference information is calculated, and a corresponding virtual scene screen is clipped in the virtual scene according to the clipping region and the orientation view corresponding to the clipping region.
Specifically, when the position reference information includes Y coordinate information and Z coordinate information, if the Y axis corresponds to the left and right azimuths, the Z axis corresponds to the upper and lower azimuths.
The frames corresponding to the left visual angle, the right visual angle, the upper visual angle and the lower visual angle are no longer normal frames, and the normal frames need to be cut.
And according to the requirement of the actual display condition, cutting out the virtual scene picture corresponding to each direction after selecting a plurality of directions from the left direction view angle, the right direction view angle, the upper direction view angle and the lower direction view angle.
Further preferably, when the X coordinate information in the position reference information is not on the X-axis center line and the Y coordinate information is on the Y-axis center line, the corresponding virtual scene picture is cut out of the virtual scene according to the azimuth viewing angle corresponding to the Y axis;
When the Y coordinate information in the position reference information is 2 m or 400 dp, and the Y axis corresponds to the left and right azimuths, then according to the actual display requirements a virtual scene picture corresponding to the left azimuth viewing angle may be cut out of the virtual scene, a picture corresponding to the right azimuth viewing angle may be cut out, pictures corresponding to both the left and right azimuth viewing angles may be cut out, or, alternatively, the normal left and right pictures of the virtual scene may be left uncut.
When the X coordinate information in the position reference information is not on the X-axis central line and the Y coordinate information in the position reference information is on the Y-axis central line, respectively calculating cutting areas corresponding to the orientation visual angles corresponding to the coordinate information on the residual axes in the position reference information, and cutting a corresponding virtual scene picture in the virtual scene according to the cutting areas and the orientation visual angles corresponding to the cutting areas.
Specifically, when the position reference information includes X coordinate information and Z coordinate information, if the X axis corresponds to the front and rear two directions, the Z axis corresponds to the upper and lower two directions.
The pictures corresponding to the front view angle, the rear view angle, the upper view angle and the lower view angle are no longer normal pictures, and the normal pictures need to be cut.
According to the requirement of an actual display condition, after selecting a plurality of azimuths from a front azimuth visual angle, a rear azimuth visual angle, an upper azimuth visual angle and a lower azimuth visual angle, cutting out virtual scene pictures corresponding to all azimuths.
When the X coordinate information is not on the X-axis central line and the Y coordinate information is not on the Y-axis central line, respectively calculating cutting areas corresponding to all the azimuth viewing angles; and cutting out a corresponding virtual scene picture in the virtual scene according to each azimuth visual angle and the cutting area.
Specifically, when the X coordinate information in the position reference information is not on the X-axis central line and the Y coordinate information in the position reference information is not on the Y-axis central line, the frames corresponding to the front view angle, the rear view angle, the left view angle, the right view angle, the upper view angle, and the lower view angle are no longer normal frames, and the normal frames need to be cut.
According to the requirement of an actual display condition, after selecting a plurality of azimuths from a front visual angle, a rear visual angle, a left visual angle, a right visual angle, an upper visual angle and a lower visual angle, cutting out virtual scene pictures corresponding to all azimuths.
When the cutting area corresponding to each azimuth viewing angle is calculated, two calculation schemes are provided:
the first calculation scheme is as follows:
calculating a view angle picture parameter corresponding to each azimuth according to the azimuth view angle and the position reference information corresponding to each azimuth;
Specifically, with the azimuth viewing angle known and the viewing distance contained in the position reference information, the viewing-angle frame width in each azimuth at the viewing position can be calculated; for example, a viewing-angle frame width of 600 dp. This frame width serves as the viewing-angle frame parameter.
And calculating the cutting area corresponding to each direction according to the visual angle picture parameter and the viewing space parameter corresponding to each direction.
Specifically, after the viewing-angle frame width (600 dp) for each azimuth has been calculated, and since the picture width of the viewing space in each azimuth is fixed (400 dp), the cutting region for each azimuth is obtained by subtracting the fixed picture width (400 dp) from the viewing-angle frame width (600 dp).
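The subtraction step of the first scheme can be sketched as below; `clip_width` is a hypothetical helper, and the handling of a frame narrower than the picture width is an assumption not covered by the text:

```python
def clip_width(view_frame_width_dp, space_picture_width_dp):
    """Scheme 1: the region to cut is the part of the viewing-angle frame
    that exceeds the fixed picture width of the viewing space."""
    extra = view_frame_width_dp - space_picture_width_dp
    if extra <= 0:
        return 0  # frame already fits within the picture width; nothing to cut
    return extra

# Figures from the text: a 600 dp viewing-angle frame against a fixed
# 400 dp picture width leaves a 200 dp region to cut.
assert clip_width(600, 400) == 200
assert clip_width(380, 400) == 0
```

The same helper would be applied once per selected azimuth, each azimuth contributing its own frame width.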
The second calculation scheme is as follows:
and analyzing the position deviation information of the position reference information relative to the preset position information, and calculating the corresponding cutting area by combining the position deviation information.
Specifically, as shown in fig. 6, the width to be cut from the virtual frame corresponding to the front azimuth viewing angle is 2s; the front viewing angle is FOV, where FOV = 2θ and tan θ = (L1/2 + s)/y. Here L1 is the width of the viewing space, s is the lateral offset from the center of the viewing space, and y is the viewing distance straight ahead within the viewing space.
As shown in fig. 7, the width to be cut from the virtual frame corresponding to the rear azimuth viewing angle is likewise 2s; the rear viewing angle is FOV, where FOV = 2θ and tan θ = (L1/2 + s)/(L2 - y). Here L2 is the length of the viewing space, s is the lateral offset from the center of the viewing space, and y is the viewing distance straight ahead within the viewing space.
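A minimal sketch of the second scheme, using the fig. 6 relation FOV = 2θ with tan θ = (L1/2 + s)/y; the function names are illustrative only:

```python
import math

def front_fov_deg(L1, s, y):
    """Front-azimuth viewing angle in degrees: FOV = 2*theta,
    tan(theta) = (L1/2 + s)/y, per fig. 6."""
    return 2 * math.degrees(math.atan((L1 / 2 + s) / y))

def clip_width_from_offset(s):
    """Scheme 2: the width to cut from the virtual frame is 2*s,
    twice the lateral offset from the center of the viewing space."""
    return 2 * s

# A centred viewer (s = 0) needs no cutting; an offset of 0.3 m cuts 0.6 m.
assert clip_width_from_offset(0.0) == 0.0
assert clip_width_from_offset(0.3) == 0.6
# Sanity check of the fig. 6 formula: L1 = 2 m, s = 0, y = 1 m gives 90 degrees.
assert abs(front_fov_deg(2.0, 0.0, 1.0) - 90.0) < 1e-9
```

The rear azimuth follows the same pattern with the denominator replaced by (L2 - y), as in fig. 7.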
The invention provides an embodiment of a product model display system, which includes an intelligent device and a display platform end:
the smart device includes:
an importing module 11, configured to import a virtual product initial model from a virtual engine; the virtual product initial model is a three-dimensional layout scene graph;
the processing module 12 is electrically connected with the importing module 11 and is used for processing the image of the virtual product initial model by using a resource library to obtain a virtual product model;
and the releasing module 13 is electrically connected with the processing module 12 and is used for releasing the virtual product model to a corresponding display platform end for projection display according to a releasing list.
The invention also deploys a virtual engine, built on Unreal Engine 4, on the intelligent device. The intelligent device imports a virtual product initial model from other virtual engines (such as Kujiale and similar home-design tools); for example, a three-dimensional layout scene graph of a Nordic-style kitchen can be imported. The intelligent device then applies a pre-stored resource library to perform image processing on the imported virtual product initial model; the processing includes image material rendering, lighting rendering, map rendering, and the like.
The invention sets up a plurality of display platform ends in each city; for example, display platform ends can be placed at locations such as Xujiahui in Shanghai and Nanjing Road. The merchant selects the specific display platforms on which the model is to be released, the intelligent device generates a release list from the merchant's selection, and the virtual product model is then released according to that list.
For example, the intelligent device imports a three-dimensional layout scene graph of a Nordic-style kitchen from another virtual engine. Applying the resource library, the intelligent device renders the materials of the cabinets in the scene graph and the overall light color of the picture, finally obtaining the processed virtual product model, which is then released according to the release list. Through this scheme, the virtual product model can be put onto each display platform, improving the publicity of the product.
As shown in fig. 10, the present invention further provides another embodiment of a product model display system, which includes an intelligent device and a display platform end:
the smart device includes:
an importing module 11, configured to import a virtual product initial model from a virtual engine; the virtual product initial model is a three-dimensional layout scene graph;
an information obtaining module 15, configured to obtain configuration information of the virtual product model, where the configuration information includes product information of the virtual product model and display platform information;
and the list generating module 16 is electrically connected with the information acquiring module 15 and is used for generating a release list of each display platform according to the configuration information.
The processing module 12 is configured to perform image processing on the virtual product initial model by using a resource library to obtain a virtual product model;
and the export module 14 is electrically connected with the processing module 12 and is used for exporting the processed virtual product model to a virtual engine.
The putting module 13 is further configured to put the virtual product model onto a corresponding display platform according to a putting list;
the display platform comprises:
a positioning module 21, configured to obtain position reference information corresponding to the viewing position;
a calculating module 22 electrically connected to the positioning module 21, for calculating a plurality of azimuth viewing angles of the viewing position by combining the position reference information;
a picture generating module 23, electrically connected to the computing module 22, configured to generate a virtual picture corresponding to each orientation of the virtual product model according to an orientation viewing angle of each orientation, and fuse the virtual pictures in multiple orientations into a scene picture in which the virtual product model is viewed at the viewing position;
and a projection module 24 electrically connected to the image generation module 23, for projecting the scene image.
In this embodiment, after the virtual product initial model is imported, configuration information of the virtual product initial model also needs to be acquired, for example, as shown in table 1 below:
TABLE 1
(Table 1 is reproduced as an image in the original publication.)
According to the configuration information, the intelligent device generates the release list corresponding to the virtual product initial model, and from that list determines to which display platform ends the processed virtual product model is released.
In the prior art, during projection, a virtual product model is mostly directly projected. For example, the virtual product model designs a three-dimensional layout scene graph for kitchen decoration, and during projection, the prior art only projects a front view or a view in a certain direction, so that the projected picture only presents a planar effect and has no sense of reality and stereoscopic impression.
In this embodiment, when the virtual product model is displayed by projection, the model is not projected directly; instead, after a viewer enters the viewing space, the viewer's viewing position is obtained with a mobile terminal carried by the viewer. The mobile terminal performs the indoor positioning and can be a mobile phone, a tablet computer, a smart wristband or the like, integrating an indoor positioning function into a device the viewer already uses daily; alternatively, a dedicated hand-held terminal or the like with an integrated indoor positioning function can be produced.
Then, a plurality of azimuth viewing angles of the viewing position are calculated in combination with the position reference information. Specifically, at different positions a person's perspective in each azimuth differs: the pictures presented when viewing the same object in the same azimuth from different positions are different, because the perspective viewing angle changes with position. The position information of the viewing position includes X-axis, Y-axis and Z-axis coordinate information, from which a plurality of azimuth viewing angles can be calculated, for example the azimuth viewing angles directly ahead, directly behind, to the left, directly above and directly below.
Next, the display platform end generates the virtual picture corresponding to each azimuth of the virtual product model according to the azimuth viewing angle of that azimuth. Specifically, the virtual scene is an integral picture, such as the furnished home scene of a suite or a display scene of a commodity. The virtual scene is cut in three-dimensional space: after the azimuth viewing angles of the viewing position have been calculated, combining the front azimuth viewing angle, the virtual scene is cut into the front virtual picture in three-dimensional space; in the same way the virtual pictures directly behind, to the left, directly above and directly below can be obtained.
Finally, the virtual pictures of the plurality of azimuths are fused into the scene picture in which the virtual product model is viewed at the viewing position, and that scene picture is projected. Specifically, after the virtual pictures of the front, rear, left and right sides are obtained, they are seamlessly spliced and fused into the complete scene picture observed at the viewing position.
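The fusion step is described only as seamless splicing; the following deliberately simplified sketch tracks the seam offsets of per-azimuth frames. The fixed splice order and the dict layout are assumptions, and a real implementation would blend pixel data rather than widths:

```python
def fuse_scene_picture(frames):
    """Splice per-azimuth frames (name -> width in dp) into one scene strip,
    recording where each azimuth's frame starts (the seam offsets)."""
    order = ["front", "right", "rear", "left"]  # assumed splice order
    offsets, x = {}, 0
    for name in order:
        offsets[name] = x   # seam position of this azimuth's frame
        x += frames[name]
    return offsets, x       # seam positions and total fused width

offsets, total = fuse_scene_picture(
    {"front": 400, "right": 400, "rear": 400, "left": 400})
assert total == 1600
assert offsets["rear"] == 800
```

Equal 400 dp frames are used here only to make the seam arithmetic obvious; with unequal azimuth viewing angles the frame widths, and hence the seams, would differ.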
In this embodiment, when the position reference information corresponding to the viewing position is obtained, the position reference information may be two types of position information:
in the first type, the position reference information is virtual position information:
converting the viewing position information in the viewing space into virtual position information in the virtual scene according to the corresponding relation between the space coordinates of the viewing space and the virtual coordinates of the virtual scene; and using the virtual position information as position reference information;
specifically, under the condition of real-time rendering, viewing position information is converted into virtual position information, and the calculation of the azimuth angle of view and the generation of a virtual picture are completed through the virtual position information. The essence of real-time rendering is the real-time computation and output of graphics data.
In the second type, the position reference information is position pixel information:
converting viewing position information in the viewing space into position pixel information in the virtual scene according to the corresponding relation between the space coordinates of the viewing space and the picture pixels of the virtual scene; and the positional pixel information is used as positional reference information.
Specifically, under the condition of offline rendering, viewing position information is converted into position pixel information, and the calculation of the azimuth viewing angle and the generation of a virtual picture are completed through the position pixel information.
The scene model of the virtual scene and the space model of the viewing space stand in a specific proportional relationship; the viewing space is shown in fig. 6. Here the specific proportional relationship is 1:1.
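The two conversion paths described above (to virtual coordinates for real-time rendering, to picture pixels for offline rendering) can be sketched as follows; both helper names and the `dp_per_metre` factor are assumptions for illustration:

```python
def to_virtual_position(view_pos, scale=1.0):
    """Real-time path: viewing-space coordinates map to virtual-scene
    coordinates at the stated 1:1 scale (kept as a parameter for generality)."""
    x, y, z = view_pos
    return (x * scale, y * scale, z * scale)

def to_pixel_position(view_pos_m, dp_per_metre):
    """Offline path: viewing-space metres map to picture pixels.
    dp_per_metre is an assumed conversion factor, not given in the text."""
    x, y, z = view_pos_m
    return (x * dp_per_metre, y * dp_per_metre, z * dp_per_metre)

# With the 1:1 relation, coordinates pass through unchanged.
assert to_virtual_position((1.0, 2.0, 0.5)) == (1.0, 2.0, 0.5)
# A mapping consistent with the text's examples (4 m -> 800 dp, i.e. 200 dp/m).
assert to_pixel_position((1.0, 2.0, 0.5), 200) == (200, 400, 100)
```

Either result then serves as the position reference information from which the azimuth viewing angles are calculated.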
in this embodiment, the plurality of orientations may be four orientations, such as front, rear, left, and right; or six orientations, such as front, back, left, right, up, down; two orientations are also possible, such as up and down.
A plurality of azimuth viewing angles are calculated at different viewing positions; the same azimuth has different azimuth viewing angles at different positions, and for different azimuth viewing angles the virtual picture generated in the same azimuth differs. The virtual pictures of the plurality of azimuths are seamlessly spliced into the complete scene picture for the same viewing position. Multi-screen fused imaging is thus achieved whose viewing angle follows the viewer's position, so the viewer's viewing angle is kept updated in real time and the displayed stereoscopic scene picture is updated in time; the presented stereoscopic scene picture is not distorted by changes of the viewing position.
By adjusting the scene picture of the virtual product model to the viewer's position in real time, the projected scene picture better matches the viewer's visual angle, making the picture more realistic and stereoscopic.
Preferably, the calculation module is further configured to calculate an azimuth viewing angle in one azimuth of the viewing position by combining the position reference information; calculating azimuth viewing angles of adjacent azimuths of the rest azimuths in the plurality of azimuths according to the calculated azimuth viewing angles and the angle relation between the adjacent azimuths;
specifically, the azimuth angle between the front azimuth viewing angle and the left or right azimuth viewing angle is a fixed angle of 180 degrees, and after the front azimuth viewing angle is calculated, the front azimuth viewing angle is subtracted from the fixed angle of 180 degrees, so that the azimuth viewing angle of the left or right azimuth can be obtained.
As shown in fig. 6, the azimuth angle between the front azimuth viewing angle and the right-side azimuth viewing angle is a fixed 180°, so the right-side azimuth viewing angle equals 180° minus the front azimuth viewing angle. The front and rear azimuth viewing angles are equal, and the circumferential angle at viewpoint o is 360°; therefore, once the right-side azimuth viewing angle ∠aob is known, the left-side azimuth viewing angle can be calculated.
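The adjacent-azimuth route can be sketched under the fig. 6 conditions: the front view and an adjacent side view sum to 180°, front equals rear in this configuration, and the four horizontal views close the 360° circle at viewpoint o. This is a sketch of that special configuration, not a general formula, and the function name is illustrative:

```python
def azimuth_views_from_front(front_fov_deg):
    """Derive the remaining horizontal azimuth viewing angles from the
    front one, using the fixed-angle relations stated for fig. 6."""
    right = 180.0 - front_fov_deg           # front + right = fixed 180 deg
    rear = front_fov_deg                    # front and rear equal here
    left = 360.0 - front_fov_deg - rear - right  # circle closes at 360 deg
    return {"front": front_fov_deg, "rear": rear, "left": left, "right": right}

views = azimuth_views_from_front(90.0)
assert views == {"front": 90.0, "rear": 90.0, "left": 90.0, "right": 90.0}
assert abs(sum(views.values()) - 360.0) < 1e-9
```

Note that under these relations left always equals right, consistent with a viewer positioned symmetrically between the two side walls.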
Or the calculating module 22 is further configured to calculate a plurality of azimuth viewing angles of the viewing position respectively by combining the position reference information and a viewing angle calculating formula of each azimuth.
Specifically, when a plurality of azimuth viewing angles need to be calculated, for example, the azimuth viewing angles of the front, rear, left and right azimuths; the front azimuth viewing angle can be calculated by utilizing a viewing angle calculation formula of the front azimuth viewing angle; the rear azimuth viewing angle can be calculated by utilizing a viewing angle calculation formula of the rear azimuth viewing angle; the left-side azimuth viewing angle can be calculated by using a viewing angle calculation formula of the left-side azimuth viewing angle; the right-side azimuth viewing angle can be calculated by using a viewing angle calculation formula of the right-side azimuth viewing angle.
As shown in fig. 6, the front azimuth viewing angle is FOV, where FOV = 2θ and tan θ = (L1/2 + s)/y. Here L1 is the width of the viewing space, s is the lateral offset from the center of the viewing space, and y is the viewing distance straight ahead within the viewing space.
As shown in fig. 7, the rear azimuth viewing angle is FOV, where FOV = 2θ and tan θ = (L1/2 + s)/(L2 - y). Here L2 is the length of the viewing space, s is the lateral offset from the center of the viewing space, and y is the viewing distance straight ahead within the viewing space.
When the position information of viewpoint o is known, the azimuth viewing angles of all azimuths can be calculated; the azimuth viewing angles corresponding to the left and right sides of viewpoint o can likewise be calculated by formula, and are not described again here.
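The fig. 6 and fig. 7 formulas can be checked numerically; in particular, at the mid-depth position (y = L2/2) the front and rear viewing angles coincide, matching the equal-FOV case discussed earlier. Function names are illustrative:

```python
import math

def front_fov(L1, s, y):
    # FOV = 2*theta, tan(theta) = (L1/2 + s)/y  (fig. 6)
    return 2 * math.atan((L1 / 2 + s) / y)

def rear_fov(L1, L2, s, y):
    # FOV = 2*theta, tan(theta) = (L1/2 + s)/(L2 - y)  (fig. 7)
    return 2 * math.atan((L1 / 2 + s) / (L2 - y))

# Midway along the depth axis (y = L2/2) the front and rear views coincide.
L1, L2, s = 2.0, 4.0, 0.5
assert abs(front_fov(L1, s, L2 / 2) - rear_fov(L1, L2, s, L2 / 2)) < 1e-12
# Moving closer to the front wall widens the front view and narrows the rear one.
assert front_fov(L1, s, 1.0) > rear_fov(L1, L2, s, 1.0)
```

The left- and right-side formulas, which the text omits, would follow the same pattern with the roles of L1 and L2 and of the offsets exchanged.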
It should be noted that the above embodiments can be freely combined as necessary. The foregoing is only a preferred embodiment of the present invention; for those skilled in the art, various modifications and refinements can be made without departing from the principle of the present invention, and these modifications and refinements should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A method for displaying a product model is characterized by comprising the following steps:
the method comprises the steps that intelligent equipment introduces a virtual product initial model from a virtual engine, wherein the virtual product initial model is a three-dimensional layout scene graph;
the intelligent equipment uses a resource library to perform image processing on the virtual product initial model to obtain a virtual product model;
and the intelligent equipment throws the virtual product model to a corresponding display platform end for projection display according to the throwing list.
2. The method for displaying a product model according to claim 1, further comprising:
and the intelligent equipment exports the processed virtual product model to a virtual engine.
3. The method for displaying a product model according to claim 1, further comprising:
the intelligent equipment acquires configuration information of the virtual product initial model, wherein the configuration information comprises product information of the virtual product initial model and display platform information; and generating a release list of each display platform according to the configuration information.
4. The method for displaying the product model according to claim 1, wherein the intelligent device puts the virtual product model on a corresponding display platform end for projection display according to a putting list, and the method specifically comprises the following steps:
the intelligent equipment puts the virtual product model to a corresponding display platform according to a putting list;
the display platform end obtains position reference information corresponding to a watching position, calculates a plurality of azimuth viewing angles of the watching position by combining the position reference information, generates a virtual picture corresponding to each azimuth of the virtual product model according to the azimuth viewing angle of each azimuth, fuses the virtual pictures of the plurality of azimuths into a scene picture of the virtual product model watched at the watching position, and then projects the scene picture.
5. The method as claimed in claim 4, wherein the step of calculating a plurality of azimuth viewing angles of the viewing position by combining the position reference information specifically comprises:
calculating an azimuth viewing angle in one azimuth of the viewing position by combining the position reference information; calculating azimuth viewing angles of adjacent azimuths of the rest azimuths in the plurality of azimuths according to the calculated azimuth viewing angles and the angle relation between the adjacent azimuths;
or;
and respectively calculating a plurality of azimuth viewing angles of the watching position by combining the position reference information and the viewing angle calculation formulas of all azimuths.
6. The product model display system is characterized by comprising intelligent equipment and a display platform end:
the smart device includes:
the import module is used for importing a virtual product initial model from the virtual engine; the virtual product initial model is a three-dimensional layout scene graph;
the processing module is used for processing the image of the virtual product initial model by using a resource library to obtain a virtual product model;
and the releasing module is used for releasing the virtual product model to a corresponding display platform end for projection display according to the releasing list.
7. The system for displaying a product model according to claim 6, wherein the smart device further comprises:
and the export module is used for exporting the processed virtual product model to the virtual engine.
8. The system for displaying a product model according to claim 6, wherein the smart device further comprises:
the information acquisition module is used for acquiring configuration information of the virtual product model, wherein the configuration information comprises product information of the virtual product model and display platform information;
and the list generation module is used for generating a release list of each display platform according to the configuration information.
9. The system for displaying a product model as recited in claim 6, wherein:
the putting module is used for putting the virtual product model to a corresponding display platform according to a putting list;
the display platform comprises:
the positioning module is used for acquiring position reference information corresponding to the watching position;
the calculation module is used for calculating a plurality of azimuth viewing angles of the watching position by combining the position reference information;
the image generation module is used for generating a virtual image corresponding to each direction of the virtual product model according to the direction visual angle of each direction, and fusing the virtual images of a plurality of directions into a scene image for watching the virtual product model at the watching position;
and the projection module is used for projecting the scene picture.
10. The system for displaying a product model according to claim 9, wherein:
the calculation module is further configured to calculate the azimuth viewing angle for one azimuth of the viewing position from the position reference information, and to derive the azimuth viewing angles of the remaining azimuths, one adjacent azimuth at a time, from the calculated azimuth viewing angle and the angular relation between adjacent azimuths;
or, to calculate the plurality of azimuth viewing angles of the viewing position separately, from the position reference information and a viewing-angle calculation formula for each azimuth.
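Both calculation strategies recited in claim 10 can be sketched as below. The atan2-based viewing-angle formula, the four-azimuth layout, and the 90-degree relation between adjacent azimuths are assumptions for illustration; the claim does not fix any particular formula.

```python
import math
from typing import List

def base_azimuth_angle(x: float, y: float) -> float:
    """Azimuth viewing angle (degrees) of the viewing position toward a
    reference azimuth, from position reference (x, y); illustrative."""
    return math.degrees(math.atan2(x, y)) % 360.0

def angles_from_adjacent(x: float, y: float, n: int = 4) -> List[float]:
    """First branch of claim 10: compute one azimuth viewing angle,
    then derive the remaining azimuths from the fixed angular relation
    (360/n degrees) between adjacent azimuths."""
    step = 360.0 / n
    base = base_azimuth_angle(x, y)
    return [(base + i * step) % 360.0 for i in range(n)]

def angles_per_formula(x: float, y: float, n: int = 4) -> List[float]:
    """Second branch of claim 10: evaluate a per-azimuth formula
    independently for each azimuth (here, the same atan2 formula with
    that azimuth's own angular offset rotated in)."""
    step = 360.0 / n
    return [(math.degrees(math.atan2(x, y)) + i * step) % 360.0 for i in range(n)]

print(angles_from_adjacent(0.0, 1.0))  # [0.0, 90.0, 180.0, 270.0]
```

The two branches agree by construction in this toy setup; in practice each azimuth's formula would reflect the pose of its own display surface, making the per-formula branch the more general one.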
CN201811333543.XA 2018-11-09 2018-11-09 Product model display method and system Pending CN111179406A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811333543.XA CN111179406A (en) 2018-11-09 2018-11-09 Product model display method and system


Publications (1)

Publication Number Publication Date
CN111179406A true CN111179406A (en) 2020-05-19

Family

ID=70646024

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811333543.XA Pending CN111179406A (en) 2018-11-09 2018-11-09 Product model display method and system

Country Status (1)

Country Link
CN (1) CN111179406A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115359741A (en) * 2022-08-23 2022-11-18 重庆亿海腾模型科技有限公司 Method for displaying effect of automobile model projection lamp

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104574005A (en) * 2015-02-15 2015-04-29 蔡耿新 Advertisement display management system and method integrating augmented reality, somatosensory interaction and green-screen keying
CN107077216A (en) * 2016-12-19 2017-08-18 深圳市阳日电子有限公司 Picture display method and mobile terminal
CN107193372A (en) * 2017-05-15 2017-09-22 杭州隅千象科技有限公司 Projection method from rectangular planes at multiple arbitrary positions to a variable projection centre
WO2018187927A1 (en) * 2017-04-11 2018-10-18 SZ DJI Technology Co., Ltd. Vision simulation system for simulating operations of a movable platform



Similar Documents

Publication Publication Date Title
CN106210861B (en) Method and system for displaying bullet screen
US20160284132A1 (en) Apparatus and method for providing augmented reality-based realistic experience
JP3992629B2 (en) Image generation system, image generation apparatus, and image generation method
US10403045B2 (en) Photorealistic augmented reality system
WO2015123775A1 (en) Systems and methods for incorporating a real image stream in a virtual image stream
US20150325052A1 (en) Image superposition of virtual objects in a camera image
JP2014032443A (en) Image processing device, image processing method, and image processing program
KR20110136029A (en) System for operating augmented reality store using virtual display stand
Suenaga et al. A practical implementation of free viewpoint video system for soccer games
JP3947132B2 (en) Image composition display method, image composition display program, and recording medium recording this image composition display program
Zollmann et al. Passive-active geometric calibration for view-dependent projections onto arbitrary surfaces
JP2023053039A (en) Information processing apparatus, information processing method and program
CN107111998A (en) Generation and display of actual-size interactive objects
KR20150022064A (en) Mirror-world-based sale support system for interactive online store products
CN111179407A (en) Virtual scene creating method, virtual scene projecting system and intelligent equipment
CN111179406A (en) Product model display method and system
KR101484736B1 (en) Mirror-world-based shopping-mall system for interactive online store products
CN111050145B (en) Multi-screen fusion imaging method, intelligent device and system
CN111045286A (en) Double-folding-screen-field-based projection method and system, and double-folding screen field
CN109299989A (en) Virtual reality dressing system
CN111182278B (en) Projection display management method and system
Lai et al. Exploring manipulation behavior on video see-through head-mounted display with view interpolation
Shin et al. Color correction using 3D multi-view geometry
KR101520444B1 (en) Apparatus and method for providing real size image
CN103295503A (en) Digital bulletin system and digital bulletin method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination