CN111182288A - Space object imaging method and system - Google Patents

Space object imaging method and system

Info

Publication number
CN111182288A
CN111182288A (application CN201811333542.5A; granted publication CN111182288B)
Authority
CN
China
Prior art keywords
azimuth
virtual scene
virtual
picture
scene
Prior art date
Legal status
Granted
Application number
CN201811333542.5A
Other languages
Chinese (zh)
Other versions
CN111182288B (en)
Inventor
王珏 (Wang Jue)
王琦琛 (Wang Qichen)
Current Assignee
Shanghai Hongxing Cloud Computing Technology Co ltd
Original Assignee
Shanghai Yunshen Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Yunshen Intelligent Technology Co ltd filed Critical Shanghai Yunshen Intelligent Technology Co ltd
Priority to CN201811333542.5A
Publication of CN111182288A
Application granted
Publication of CN111182288B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3141Constructional details thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3179Video signal processing therefor

Abstract

The invention discloses a space object imaging method and system. The method comprises the following steps: analyzing the virtual space position information of a virtual object placed in a virtual scene; building, according to the virtual scene and the virtual space position information, a real object display platform for displaying the virtual scene; capturing the virtual scene picture corresponding to each orientation in the virtual scene, and analyzing the virtual coordinate information of the virtual object's picture within each virtual scene picture; projecting the virtual object picture of each orientation onto the corresponding side of the real object display model, by combining the virtual coordinate information of the virtual object's picture with the positional relationship of the real object display model in the scene display space model; and splicing the virtual scene pictures of all orientations to form a virtual display scene on the scene display platform. The invention can simulate a real physical scene and bring a realistic experience to users.

Description

Space object imaging method and system
Technical Field
The invention belongs to the technical field of projection, and particularly relates to a space object imaging method and system.
Background
With the improvement of people's living standards, users' demands on commodities keep growing.
When a user plans interior decoration, a decoration company generally designs several decoration effect drawings according to the structure of the user's house for the user to choose from; however, these effect drawings are all two-dimensional plans and can hardly give the user the feel of the real thing. To give users an intuitive decoration experience, many decoration companies fit out sample rooms for users to experience. This approach is costly, and each sample room offers only a single style, so it cannot meet users' diversified demands.
Disclosure of Invention
The invention aims to provide a space object imaging method and a space object imaging system.
The technical scheme provided by the invention is as follows:
the invention provides a space object imaging method, comprising the following steps: analyzing the virtual space position information of a virtual object placed in a virtual scene, the virtual scene being formed by simulating a real scene; building, according to the virtual scene and the virtual space position information, a real object display platform for displaying the virtual scene, the real object display platform comprising a scene display space model and a real object display model; capturing the virtual scene picture corresponding to each orientation in the virtual scene, and analyzing the virtual coordinate information of the virtual object's picture in each virtual scene picture; projecting the virtual object picture in the virtual scene picture of each orientation onto the corresponding side of the real object display model, by combining the virtual coordinate information of the virtual object's picture with the positional relationship of the real object display model in the scene display space model; and splicing the virtual scene pictures of all orientations to form a virtual display scene on the scene display platform.
Further preferably, before capturing the virtual scene picture corresponding to each orientation in the virtual scene, the method further includes: acquiring position reference information corresponding to a viewing position; and calculating a plurality of azimuth viewing angles of the viewing position in combination with the position reference information. Capturing the virtual scene pictures corresponding to the respective orientations in the virtual scene then specifically includes: capturing, in combination with the azimuth viewing angle of each azimuth, the virtual scene picture corresponding to that azimuth in the virtual scene.
Further preferably, calculating the plurality of azimuth viewing angles of the viewing position in combination with the position reference information specifically includes: calculating the azimuth viewing angle in one azimuth of the viewing position in combination with the position reference information, and calculating the azimuth viewing angles of the remaining azimuths from the calculated azimuth viewing angle and the angular relations between adjacent azimuths; or, calculating the plurality of azimuth viewing angles of the viewing position separately, in combination with the position reference information and the viewing-angle calculation formula of each azimuth.
Further preferably, capturing, in combination with the azimuth viewing angle of each azimuth, the virtual scene picture corresponding to each azimuth in the virtual scene specifically includes: when, among the plurality of azimuth viewing angles, the azimuth viewing angles of the front and rear azimuths are equal and those of the left and right azimuths are not, capturing in the virtual scene the virtual scene picture corresponding to the left and/or right azimuth viewing angle; and/or calculating the cropping areas corresponding to the front and/or rear and/or upper and/or lower azimuth viewing angles, and cropping the corresponding virtual scene pictures in the virtual scene according to the cropping areas and the azimuth viewing angles they correspond to.
Further preferably, capturing, in combination with the azimuth viewing angle of each azimuth, the virtual scene picture corresponding to each azimuth in the virtual scene specifically includes: when, among the plurality of azimuth viewing angles, the azimuth viewing angles of the front and rear azimuths are not equal and those of the left and right azimuths are equal, capturing in the virtual scene the virtual scene picture corresponding to the front and/or rear azimuth viewing angle; and/or calculating the cropping areas corresponding to the left and/or right and/or upper and/or lower azimuth viewing angles, and cropping the corresponding virtual scene pictures in the virtual scene according to the cropping areas and the azimuth viewing angles they correspond to.
Further preferably, capturing, in combination with the azimuth viewing angle of each azimuth, the virtual scene picture corresponding to each azimuth in the virtual scene specifically includes: when the azimuth viewing angles of the left and right azimuths are not equal and those of the front and rear azimuths are not equal, calculating the cropping area corresponding to each azimuth viewing angle, and cropping the corresponding virtual scene picture in the virtual scene according to each azimuth viewing angle and its cropping area.
Further preferably, capturing, in combination with the azimuth viewing angle of each azimuth, the virtual scene picture corresponding to each azimuth in the virtual scene specifically includes: when the X coordinate in the position reference information is on the X-axis center line and the Y coordinate is not on the Y-axis center line, capturing in the virtual scene the virtual scene picture corresponding to the azimuth viewing angle that corresponds to the X axis; and/or calculating the cropping areas corresponding to the azimuth viewing angles that correspond to the coordinates on the remaining axes in the position reference information, and cropping the corresponding virtual scene pictures in the virtual scene according to the cropping areas and the azimuth viewing angles they correspond to.
Further preferably, capturing, in combination with the azimuth viewing angle of each azimuth, the virtual scene picture corresponding to each azimuth in the virtual scene specifically includes: when the X coordinate in the position reference information is not on the X-axis center line and the Y coordinate is on the Y-axis center line, capturing in the virtual scene the virtual scene picture corresponding to the azimuth viewing angle that corresponds to the Y axis; and/or calculating the cropping areas corresponding to the azimuth viewing angles that correspond to the coordinates on the remaining axes in the position reference information, and cropping the corresponding virtual scene pictures in the virtual scene according to the cropping areas and the azimuth viewing angles they correspond to.
Further preferably, capturing, in combination with the azimuth viewing angle of each azimuth, the virtual scene picture corresponding to each azimuth in the virtual scene specifically includes: when the X coordinate is not on the X-axis center line and the Y coordinate is not on the Y-axis center line, calculating the cropping area corresponding to each azimuth viewing angle, and cropping the corresponding virtual scene picture in the virtual scene according to each azimuth viewing angle and its cropping area.
Further preferably, calculating the cropping area corresponding to an azimuth viewing angle specifically includes: calculating the viewing-angle picture parameter corresponding to each azimuth according to the azimuth viewing angle and the position reference information of that azimuth; and calculating the cropping area corresponding to each azimuth according to its viewing-angle picture parameter and the viewing space parameter.
Further preferably, calculating the cropping area corresponding to an azimuth viewing angle specifically includes: analyzing the positional deviation of the position reference information relative to preset position information, and calculating the corresponding cropping area in combination with the positional deviation.
Further preferably, the method further comprises: generating a plurality of orthogonal cameras bound to one another, each orthogonal camera being perpendicular to the plane corresponding to its orientation; and capturing the virtual scene picture corresponding to each orientation in the virtual scene with the orthogonal cameras.
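Purely as an illustration (the representation and names below are hypothetical, not from the patent), an orthogonal camera amounts to an orthographic projection: capturing an orientation's picture simply drops the coordinate perpendicular to that orientation's plane, with no perspective foreshortening. A minimal Python sketch:

```python
# Hypothetical sketch of an orthographic capture: each camera is
# perpendicular to the plane of its orientation, so projecting a 3D
# point is just dropping the axis that the camera looks along.

AXIS_FOR_ORIENTATION = {
    "front": 1, "rear": 1,   # cameras on the Y axis drop y
    "left": 0, "right": 0,   # cameras on the X axis drop x
    "up": 2, "down": 2,      # cameras on the Z axis drop z
}

def orthographic_capture(points, orientation):
    """Project 3D points (x, y, z) onto the picture plane of one orientation."""
    drop = AXIS_FOR_ORIENTATION[orientation]
    return [tuple(c for i, c in enumerate(p) if i != drop) for p in points]

# e.g. the front camera maps (x, y, z) -> (x, z): no perspective scaling.
print(orthographic_capture([(1.0, 2.0, 0.5)], "front"))  # [(1.0, 0.5)]
```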
Further preferably, the acquiring of the position reference information corresponding to the viewing position specifically includes:
converting viewing position information in a viewing space into virtual position information in a virtual scene according to a corresponding relation between a space coordinate of the viewing space and a virtual coordinate of the virtual scene; and using the virtual position information as position reference information;
or;
converting viewing position information in a viewing space into position pixel information in a virtual scene according to a corresponding relation between a space coordinate of the viewing space and a picture pixel of the virtual scene; and using the position pixel information as position reference information.
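To make the two conversions concrete, here is a minimal sketch (illustrative only; the function names are assumptions, and the pixel ratio follows the document's later example of a 4 m by 2 m viewing space pictured at 800 dp by 400 dp, i.e. 200 dp per metre):

```python
# Minimal sketch, assuming the viewing space and the virtual scene are
# axis-aligned and related by constant per-axis scale factors.

def to_virtual_coords(view_pos, scale=(1.0, 1.0, 1.0)):
    """Viewing-space position (metres) -> virtual-scene coordinates."""
    return tuple(v * s for v, s in zip(view_pos, scale))

def to_pixel_coords(view_pos, metres_to_dp=200.0):
    """Viewing-space position (metres) -> picture pixels, using the
    document's example ratio of a 4 m x 2 m space to an 800 dp x 400 dp
    picture (i.e. 200 dp per metre)."""
    return tuple(v * metres_to_dp for v in view_pos)

print(to_pixel_coords((1.0, 2.0)))  # (200.0, 400.0), i.e. 1 m -> 200 dp
```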
The invention also provides a space object imaging system applying the above space object imaging method, comprising an intelligent device and a projection device. The intelligent device includes: an analysis module, which calculates a plurality of azimuth viewing angles of the viewing position in combination with the position reference information; a processing module, which builds, according to the virtual scene and the virtual space position information, a real object display platform for displaying the virtual scene, the real object display platform comprising a scene display space model and a real object display model; a picture capture module, which captures the virtual scene picture corresponding to each orientation in the virtual scene; the analysis module is further used to analyze the virtual coordinate information of the virtual object's picture in each virtual scene picture; and a control module, which controls the projection device to project the virtual object picture of each orientation onto the corresponding side of the real object display model, by combining the virtual coordinate information of the virtual object's picture with the positional relationship of the real object display model in the scene display space model, and to splice the virtual scene pictures of all orientations into a virtual display scene on the scene display platform.
Compared with the prior art, the space object imaging method and system provided by the invention have the following beneficial effects:
according to the invention, a virtual scene can be simulated on the intelligent equipment according to actual requirements, then a real object display platform is built according to the virtual scene, and then a virtual scene picture is projected on the built real object display platform, so that a real object effect can be simulated, and a real experience feeling is brought to a user.
The invention can also adjust the virtual scene picture of each direction in the virtual scene according to the position of the user, so that the displayed virtual scene picture accords with the real picture, and the sense of reality of the projected picture is improved.
Drawings
The foregoing features, technical features, advantages and implementations of a method and system for imaging an object in space will be further described in the following detailed description of preferred embodiments in a clearly understandable manner, in conjunction with the accompanying drawings.
FIG. 1 is a schematic flow chart diagram of one embodiment of a method for imaging an object in space of the present invention;
FIG. 2 is a schematic flow chart diagram of another embodiment of a method of imaging an object in space according to the present invention;
FIG. 3 is a schematic view of the viewing angles at various orientations of a viewpoint/viewing position according to the present invention;
FIG. 4 is a schematic view of the viewing angles at various orientations of another viewpoint/viewing position according to the present invention;
FIG. 5 is a schematic view of the viewing angles at various orientations of yet another viewpoint/viewing position according to the present invention;
FIG. 6 is a schematic diagram of cropping in a direction in front of a viewpoint/viewing position in accordance with the present invention;
FIG. 7 is a schematic view of cropping at a view point/viewing position rear orientation in accordance with the present invention;
FIG. 8 is a schematic diagram of cropping in the left-hand side of a viewpoint/viewing position in accordance with the present invention;
FIG. 9 is a schematic diagram of cropping in the right side orientation of a viewpoint/viewing position in the present invention;
FIG. 10 is a block diagram schematically illustrating the structure of an imaging system for an object in space according to the present invention;
the reference numbers illustrate:
1-intelligent equipment, 11-analysis module, 12-picture interception module, 13-control module and 2-projection equipment.
Detailed Description
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the following description will be made with reference to the accompanying drawings. It is obvious that the drawings in the following description are only some examples of the invention, and that for a person skilled in the art, other drawings and embodiments can be derived from them without inventive effort.
For the sake of simplicity, the drawings only schematically show the parts relevant to the present invention, and they do not represent the actual structure of a product. In addition, to keep the drawings concise and understandable, components having the same structure or function are in some drawings only schematically illustrated or only partially labeled. In this document, "a" or "one" covers not only the case of "only one" but also the case of "more than one".
The present invention provides an embodiment of a method for imaging a spatial object, as shown in fig. 1, comprising:
S101, analyzing the virtual space position information of a virtual object placed in the virtual scene; the virtual scene is formed by simulating a real scene;
First, a user can simulate the virtual scene to be projected on the intelligent device and set up a spatial coordinate system in the virtual scene. For example, a virtual scene of a kitchen can be simulated, containing a refrigerator, a range hood, a gas stove, an electric cooker, a microwave oven, cabinets and the like. The virtual space position information of each virtual object placed in the virtual scene, i.e. its three-dimensional coordinates in the virtual space, can then be analyzed by the intelligent device.
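Purely as an illustration of the kind of data step S101 yields (the object names and numbers below are hypothetical), the simulated kitchen can be represented as follows:

```python
# Hypothetical representation of the simulated kitchen scene: each
# virtual object carries its virtual-space position (3D coordinates)
# and size, which step S101 extracts for later use.

virtual_scene = {
    "refrigerator": {"position": (0.2, 0.0, 0.0), "size": (0.7, 0.7, 1.8)},
    "range_hood":   {"position": (2.0, 0.0, 1.6), "size": (0.9, 0.5, 0.4)},
    "gas_stove":    {"position": (2.0, 0.0, 0.8), "size": (0.8, 0.5, 0.1)},
}

def virtual_space_positions(scene):
    """Analyse the virtual-space position information of every object."""
    return {name: obj["position"] for name, obj in scene.items()}

print(virtual_space_positions(virtual_scene))
```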
S102, building a real object display platform for displaying the virtual scene according to the virtual scene and the virtual space position information; the object display platform comprises a scene display space model and an object display model;
According to the simulated virtual scene, the user builds a physical display platform in real space. When building it, the virtual scene and the virtual space position information must be combined: the proportions in the physical scene must be consistent with the proportions in the virtual scene, and each physical display model can be made to match the shape and size of its virtual object in the virtual scene. For example, the physical display model of the refrigerator in the virtual scene can be made as a cuboid of the same shape and size as the refrigerator.
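Since the proportions must match, each physical model's dimensions follow from its virtual object's dimensions and one fixed scale ratio. A minimal sketch, assuming a 1:1 ratio (the function name and the example figures are illustrative):

```python
# Minimal sketch: derive physical display-model dimensions from a
# virtual object's size, keeping the physical-to-virtual ratio constant
# for the whole platform (here assumed to be 1:1).

def physical_model_size(virtual_size, ratio=1.0):
    """Virtual object size -> physical model size: same shape, scaled."""
    return tuple(d * ratio for d in virtual_size)

# A refrigerator modelled as a 0.7 m x 0.7 m x 1.8 m cuboid stays a
# cuboid of the same shape and size at ratio 1.0.
print(physical_model_size((0.7, 0.7, 1.8)))
```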
S103, capturing virtual scene pictures corresponding to all directions in the virtual scene, and analyzing virtual coordinate information of the picture of the virtual object in the virtual scene pictures;
In the virtual scene of the kitchen constructed in this embodiment there are six orientations: front, rear, left, right, up and down. In this step, a virtual scene picture needs to be captured in each of the six orientations, and the virtual coordinate information of the virtual objects (the refrigerator, range hood, gas stove, electric cooker, microwave oven, cabinets and the like) in each virtual scene picture is analyzed.
S104, projecting the virtual object picture in the virtual scene picture corresponding to each direction on the side corresponding to the real object display model by combining the virtual coordinate information of the picture of the virtual object in the virtual scene picture and the position relation of the real object display model in the scene display space model;
in the physical display platform, a plurality of projection devices are needed to project the virtual object picture in the virtual scene picture in each direction on the corresponding side of the physical display model. The method is equivalent to projecting on a constructed physical display platform, and each physical display model presents the effect of a virtual object in a virtual scene.
S105, splicing the virtual scene pictures corresponding to all the directions to form a virtual display scene on the scene display platform.
Through projection in multiple orientations, the virtual scene of the kitchen can be projected onto the physical display platform of the kitchen, giving the user a realistic experience. The physical display platform built in this embodiment consists of reusable models and moulds, so the cost is low. Besides the spatial imaging of a kitchen described in this embodiment, the object of spatial imaging may also be a single object or another scene, which are not listed here one by one.
As shown in fig. 2, the present invention also provides another embodiment of a method for imaging an object in space, comprising:
S201, analyzing the virtual space position information of a virtual object placed in the virtual scene; the virtual scene is formed by simulating a real scene;
s202, building a real object display platform for displaying the virtual scene according to the virtual scene and the virtual space position information; the object display platform comprises a scene display space model and an object display model;
s203, acquiring position reference information corresponding to the watching position, and calculating a plurality of azimuth viewing angles of the watching position by combining the position reference information;
specifically, when a customer experiences, the mobile terminal which can be carried with the customer acquires the watching position of a watcher; the mobile terminal can complete indoor positioning. The mobile terminal can be a mobile phone, a tablet personal computer, an intelligent bracelet and the like, and integrates an indoor positioning function on equipment frequently used by a viewer at ordinary times; or a hand-held terminal and the like can be specially produced, and the indoor positioning function is integrated.
S204, combining the azimuth visual angle of each azimuth, capturing a virtual scene picture corresponding to each azimuth in the virtual scene, and analyzing the virtual coordinate information of the picture of the virtual object in the virtual scene picture;
In the above embodiment, the virtual scene projected onto the real objects does not change, and this projection method has some defects in its effect. When a person looks at an object and the viewing direction changes, what is seen changes too. For example, when looking at a drum washing machine from directly in front, one can only see its front face, i.e. the circular drum door and the bottom of the drum; when looking at it from a 45° angle at the side, one sees an ellipse-like drum door and the side wall of the drum. Therefore, if the projected picture is kept unchanged, it appears very rigid and unrealistic.
Therefore, when projecting onto the real objects, the position reference information of the user's viewing position and the azimuth viewing angles of the user need to be acquired, so that each virtual scene picture is re-cropped in the virtual scene. The projected picture then has correct perspective and looks more realistic. For the projection of the drum washing machine: when the user stands directly in front of it, the projection device projects a circular drum door and drum bottom; when the user stands at a 45° angle to the side, the projection device projects an ellipse-like drum door and the drum side wall.
S205, projecting the virtual object picture in the virtual scene picture corresponding to each direction on the side corresponding to the real object display model by combining the virtual coordinate information of the picture of the virtual object in the virtual scene picture and the position relation of the real object display model in the scene display space model;
s206, splicing the virtual scene pictures corresponding to all the directions to form a virtual display scene on the scene display platform.
The present invention also provides another embodiment of a method of imaging a spatial object, comprising:
S301, analyzing the virtual space position information of a virtual object placed in the virtual scene; the virtual scene is formed by simulating a real scene;
s302, according to the virtual scene and the virtual space position information, a real object display platform for displaying the virtual scene is built; the object display platform comprises a scene display space model and an object display model;
s303, acquiring position reference information corresponding to the watching position; calculating an azimuth viewing angle in one azimuth of the viewing position by combining the position reference information;
Specifically, when a plurality of azimuth viewing angles need to be calculated, for example the four azimuths of front, rear, left and right, the forward azimuth viewing angle can be calculated with the viewing-angle calculation formula of the forward azimuth. As shown in FIG. 6, the forward azimuth viewing angle is FOV, where FOV = 2θ and tan θ = (L1/2 + s)/y; here L1 is the width of the viewing space, s is the lateral offset from the center position of the viewing space, and y is the viewing distance to the front within the viewing space.
S304, calculating azimuth viewing angles of adjacent azimuths of the rest azimuths in the plurality of azimuths according to the calculated azimuth viewing angles and the angle relation between the adjacent azimuths;
specifically, the azimuth angle between the front azimuth viewing angle and the left or right azimuth viewing angle is a fixed angle of 180 degrees, and after the front azimuth viewing angle is calculated, the front azimuth viewing angle is subtracted from the fixed angle of 180 degrees, so that the azimuth viewing angle of the left or right azimuth can be obtained.
As shown in FIG. 6, the forward azimuth viewing angle and the right azimuth viewing angle sum to a fixed angle of 180°, so the azimuth viewing angle of the right azimuth equals 180° minus the forward azimuth viewing angle. The forward and rear azimuth viewing angles are equal, and the full angle around the viewpoint o is 360°, so once the right azimuth viewing angle ∠AOB is known, the left azimuth viewing angle can be calculated.
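The following sketch (illustrative, with angles in degrees) walks through S303 and S304 under the stated assumption that the front and rear azimuth viewing angles are equal: the front angle comes from the formula of FIG. 6, the right angle is 180° minus the front angle, and the left angle follows from the 360° full angle around the viewpoint:

```python
import math

# Sketch of S303/S304, assuming the viewpoint lies so that the front and
# rear azimuth viewing angles are equal. Angles are in degrees.

def front_fov(L1, s, y):
    """Forward azimuth viewing angle: FOV = 2*theta, tan(theta) = (L1/2 + s)/y."""
    return 2 * math.degrees(math.atan((L1 / 2 + s) / y))

def adjacent_fovs(front):
    """Derive the remaining azimuths from the front one, per the patent's
    relations: rear = front; right = 180 - front; the four angles around
    the viewpoint sum to 360, which gives the left angle."""
    rear = front
    right = 180.0 - front
    left = 360.0 - front - rear - right
    return rear, left, right

f = front_fov(L1=2.0, s=0.0, y=1.0)   # centred viewer in a 2 m wide space
print(f, adjacent_fovs(f))            # 90.0 (90.0, 90.0, 90.0)
```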
Or, S305, calculating the plurality of azimuth viewing angles of the viewing position separately, in combination with the position reference information and the viewing-angle calculation formula of each azimuth.
Specifically, when a plurality of azimuth viewing angles need to be calculated, for example, the azimuth viewing angles of the front, rear, left and right azimuths; the front azimuth viewing angle can be calculated by utilizing a viewing angle calculation formula of the front azimuth viewing angle; the rear azimuth viewing angle can be calculated by utilizing a viewing angle calculation formula of the rear azimuth viewing angle; the left-side azimuth viewing angle can be calculated by using a viewing angle calculation formula of the left-side azimuth viewing angle; the right-side azimuth viewing angle can be calculated by using a viewing angle calculation formula of the right-side azimuth viewing angle.
As shown in FIG. 6, the forward azimuth viewing angle is FOV, where FOV = 2θ and tan θ = (L1/2 + s)/y; here L1 is the width of the viewing space, s is the lateral offset from the center position of the viewing space, and y is the viewing distance to the front within the viewing space.
As shown in FIG. 7, the rear azimuth viewing angle is FOV, where FOV = 2θ and tan θ = (L1/2 + s)/(L2 - y); here L2 is the length of the viewing space, s is the lateral offset from the center position of the viewing space, and y is the viewing distance to the front within the viewing space.
When the position information of the viewpoint o is known, the azimuth viewing angles of all azimuths can be calculated; the azimuth viewing angles corresponding to the left and right sides of the viewpoint o can likewise be calculated by formula, which is not repeated here.
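A minimal sketch of the direct computation in S305 (angles in degrees): the front and rear angles use the formulas of FIGS. 6 and 7; the left and right formulas are assumed analogues with the axes swapped, since the patent states they exist but does not spell them out:

```python
import math

def fov_deg(half_span, distance):
    """FOV = 2*theta with tan(theta) = half_span / distance, in degrees."""
    return 2 * math.degrees(math.atan(half_span / distance))

def azimuth_fovs(L1, L2, s, y):
    """Sketch of S305 for viewpoint o in a viewing space of width L1 and
    length L2, with lateral offset s from the centre line and distance y
    to the front wall. Front and rear follow the patent's formulas; the
    left/right formulas are assumed analogues (axes swapped)."""
    front = fov_deg(L1 / 2 + s, y)
    rear  = fov_deg(L1 / 2 + s, L2 - y)
    depth_half = L2 / 2 + abs(L2 / 2 - y)    # farther edge along the length
    left  = fov_deg(depth_half, L1 / 2 + s)  # distance to the left wall
    right = fov_deg(depth_half, L1 / 2 - s)  # distance to the right wall
    return front, rear, left, right

print(azimuth_fovs(L1=2.0, L2=4.0, s=0.0, y=2.0))
# centred viewer: front = rear and left = right, and the four sum to 360
```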
S306, combining the azimuth visual angle of each azimuth, capturing a virtual scene picture corresponding to each azimuth in the virtual scene, and analyzing the virtual coordinate information of the picture of the virtual object in the virtual scene picture;
further preferably, the combining the azimuth viewing angle of each azimuth, the capturing the virtual scene picture corresponding to each azimuth in the virtual scene specifically includes: when the azimuth visual angles of the front azimuth and the rear azimuth in the plurality of azimuth visual angles are equal and the azimuth visual angles of the left azimuth and the right azimuth are not equal, virtual scene pictures corresponding to the left azimuth visual angle and/or the right azimuth visual angle are cut in the virtual scene; and/or; and calculating cutting areas corresponding to the front view angle and/or the rear view angle and/or the upper view angle and/or the lower view angle respectively, and cutting out corresponding virtual scene pictures in the virtual scene according to the cutting areas and the view angles corresponding to the cutting areas.
Specifically, after the plurality of azimuth viewing angles are calculated, whether two equal azimuth viewing angles exist in the plurality of azimuth viewing angles is analyzed, and if two equal azimuth viewing angles exist, whether two azimuths corresponding to the two equal azimuth viewing angles are opposite azimuths is analyzed.
When the azimuth viewing angles of the front and rear opposite azimuths are analyzed to be equal, as shown in fig. 6 and 7, the front and rear opposite azimuths are opposite; according to the actual display condition, a virtual scene picture corresponding to the left-side visual angle can be cut out from the virtual scene, a virtual scene picture corresponding to the right-side visual angle can be cut out from the virtual scene, virtual scene pictures corresponding to the left-side visual angle and the right-side visual angle can be cut out from the virtual scene, and the virtual scene pictures can not cut out normal pictures of the left-side and the right-side in the virtual scene.
Specifically, the viewing position is a central position, and as shown in fig. 3, when the azimuth viewing angles of all the two opposite azimuths are equal, the virtual scene picture cut from the virtual scene at the central position in each azimuth is a normal picture.
Further preferably, capturing, in combination with the azimuth viewing angle of each azimuth, the virtual scene picture corresponding to each azimuth in the virtual scene specifically includes: when, among the plurality of azimuth viewing angles, the azimuth viewing angles of the front and rear azimuths are not equal and those of the left and right azimuths are equal, capturing in the virtual scene the virtual scene picture corresponding to the front and/or rear azimuth viewing angle;
Specifically, after the plurality of azimuth viewing angles are calculated, it is analyzed whether two equal azimuth viewing angles exist among them; if so, it is analyzed whether the two azimuths corresponding to the two equal azimuth viewing angles are opposite azimuths.
When the azimuth viewing angles of the opposite left and right azimuths are analyzed to be equal, as shown in FIGS. 4 and 5 and in FIGS. 8 and 9, then, according to the actual display conditions, the virtual scene picture corresponding to the front azimuth viewing angle may be captured in the virtual scene, or the one corresponding to the rear azimuth viewing angle, or both; the normal pictures of the front and rear azimuths are no longer captured directly from the virtual scene.
Further preferably, capturing, in combination with the azimuth viewing angle of each azimuth, the virtual scene picture corresponding to each azimuth in the virtual scene specifically includes: calculating the cropping areas corresponding to the left and/or right and/or upper and/or lower azimuth viewing angles, and cropping the corresponding virtual scene pictures in the virtual scene according to the cropping areas and the azimuth viewing angles they correspond to.
Specifically, when the left and right azimuth viewing angles are analyzed to be equal, the pictures corresponding to the left, right, upper and lower azimuth viewing angles are no longer normal pictures, and the normal pictures need to be cropped.
According to the actual display conditions, after selecting several azimuths among the left, right, upper and lower azimuth viewing angles, the virtual scene picture corresponding to each selected azimuth is cropped out.
Further preferably, capturing, in combination with the azimuth viewing angle of each azimuth, the virtual scene picture corresponding to each azimuth in the virtual scene specifically includes: when the azimuth viewing angles of the left and right azimuths are not equal and those of the front and rear azimuths are not equal, calculating the cropping area corresponding to each azimuth viewing angle, and cropping the corresponding virtual scene picture in the virtual scene according to each azimuth viewing angle and its cropping area.
Specifically, when the azimuth viewing angles of the opposite left and right azimuths are analyzed to be unequal, and those of the opposite front and rear azimuths are also unequal, the pictures corresponding to the front, rear, left, right, upper and lower azimuth viewing angles are no longer normal pictures, and the normal pictures need to be cropped.
According to the actual display conditions, after selecting several azimuths among the front, rear, left, right, upper and lower azimuth viewing angles, the virtual scene picture corresponding to each selected azimuth is cropped out.
Further preferably, capturing, in combination with the azimuth viewing angle of each azimuth, the virtual scene picture corresponding to each azimuth in the virtual scene specifically includes: when the X coordinate in the position reference information is on the X-axis center line and the Y coordinate is not on the Y-axis center line, capturing in the virtual scene the virtual scene picture corresponding to the azimuth viewing angle that corresponds to the X axis; and/or calculating the cropping areas corresponding to the azimuth viewing angles that correspond to the coordinates on the remaining axes in the position reference information, and cropping the corresponding virtual scene pictures in the virtual scene according to the cropping areas and the azimuth viewing angles they correspond to.
Specifically, the X-axis center line is the straight line at half the width of the viewing space, parallel to the Y axis; if the viewing space is 4 m long and 2 m wide, the X-axis center line is the line at a width of 1 m, parallel to the Y axis.
Likewise, when the viewing space is expressed in pixels, with a specification of 800 dp long and 400 dp wide, the X-axis center line is the line at a width of 200 dp, parallel to the Y axis.
When the X coordinate in the position reference information is 1 m or 200 dp, and the X axis corresponds to the front and rear azimuths, then, according to the actual display conditions, the virtual scene picture corresponding to the front azimuth viewing angle may be captured in the virtual scene, or the one corresponding to the rear azimuth viewing angle, or both; the normal pictures of the front and rear azimuths are no longer captured directly from the virtual scene.
Further preferably, when the X coordinate in the position reference information is on the X-axis center line and the Y coordinate is not on the Y-axis center line, the cropping areas corresponding to the azimuth viewing angles that correspond to the coordinates on the remaining axes are calculated, and the corresponding virtual scene pictures are cropped in the virtual scene according to the cropping areas and the azimuth viewing angles they correspond to.
Specifically, when the position reference information includes Y coordinate information and Z coordinate information, if the Y axis corresponds to the left and right azimuths, the Z axis corresponds to the upper and lower azimuths.
The pictures corresponding to the left, right, upper and lower azimuth viewing angles are no longer normal pictures, and the normal pictures need to be cropped.
According to the actual display conditions, after selecting several azimuths among the left, right, upper and lower azimuth viewing angles, the virtual scene picture corresponding to each selected azimuth is cropped out.
Further preferably, when the X coordinate in the position reference information is not on the X-axis center line and the Y coordinate is on the Y-axis center line, the virtual scene picture corresponding to the azimuth viewing angle that corresponds to the Y axis is captured in the virtual scene;
When the Y coordinate in the position reference information is 2 m or 400 dp, and the Y axis corresponds to the left and right azimuths, then, according to the actual display conditions, the virtual scene picture corresponding to the left azimuth viewing angle may be captured in the virtual scene, or the one corresponding to the right azimuth viewing angle, or both; the normal pictures of the left and right azimuths are no longer captured directly from the virtual scene.
When the X coordinate in the position reference information is not on the X-axis center line and the Y coordinate is on the Y-axis center line, the cropping areas corresponding to the azimuth viewing angles that correspond to the coordinates on the remaining axes are calculated, and the corresponding virtual scene pictures are cropped in the virtual scene according to the cropping areas and the azimuth viewing angles they correspond to.
Specifically, when the position reference information includes X coordinate information and Z coordinate information, if the X axis corresponds to the front and rear two directions, the Z axis corresponds to the upper and lower two directions.
The pictures corresponding to the front, rear, upper and lower azimuth viewing angles are no longer normal pictures, and the normal pictures need to be cropped.
According to the actual display conditions, after selecting several azimuths among the front, rear, upper and lower azimuth viewing angles, the virtual scene picture corresponding to each selected azimuth is cropped out.
When the X coordinate information is not on the X-axis central line and the Y coordinate information is not on the Y-axis central line, respectively calculating cutting areas corresponding to all the azimuth viewing angles; and cutting out a corresponding virtual scene picture in the virtual scene according to each azimuth visual angle and the cutting area.
Specifically, when the X coordinate in the position reference information is not on the X-axis center line and the Y coordinate is not on the Y-axis center line, the pictures corresponding to the front, rear, left, right, upper and lower azimuth viewing angles are no longer normal pictures, and the normal pictures need to be cropped.
According to the actual display conditions, after selecting several azimuths among the front, rear, left, right, upper and lower azimuth viewing angles, the virtual scene picture corresponding to each selected azimuth is cropped out.
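Summarizing the position cases above, a small illustrative helper (the names are hypothetical; a real implementation would compare floats with a tolerance) decides which azimuths still yield normal pictures and which need a computed cropping area:

```python
def views_needing_crop(x, y, x_center, y_center):
    """Sketch of the case analysis above: returns the azimuths whose
    pictures are no longer normal and need a computed cropping area.
    x_center / y_center are the centre lines of the viewing space."""
    on_x = (x == x_center)   # X coordinate on the X-axis centre line
    on_y = (y == y_center)   # Y coordinate on the Y-axis centre line
    if on_x and on_y:
        return []            # central position: all normal pictures
    if on_x:                 # front/rear (X axis) captured directly
        return ["left", "right", "up", "down"]
    if on_y:                 # left/right (Y axis) captured directly
        return ["front", "rear", "up", "down"]
    return ["front", "rear", "left", "right", "up", "down"]

# e.g. a 4 m x 2 m space: centre lines at x = 1 m, y = 2 m
print(views_needing_crop(1.0, 1.5, 1.0, 2.0))  # left/right/up/down crop
```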
When the cutting area corresponding to each azimuth viewing angle is calculated, two calculation schemes are provided:
the first calculation scheme is as follows:
calculating a view angle picture parameter corresponding to each azimuth according to the azimuth view angle and the position reference information corresponding to each azimuth;
specifically, under the condition that the azimuth viewing angle is known, the position reference information contains the viewing distance; a view angle picture width at each azimuth at the viewing position can be calculated, for example, the view angle picture width is 600 dp; the view frame width is used as a view frame parameter.
And calculating the cutting area corresponding to each direction according to the visual angle picture parameter and the viewing space parameter corresponding to each direction.
Specifically, after the viewing-angle picture width (600 dp) corresponding to each azimuth is calculated, and given that the picture width of the viewing space in each azimuth is fixed (400 dp), the cropping area corresponding to each azimuth is obtained by subtracting the picture width of the viewing space (400 dp) from the viewing-angle picture width (600 dp).
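A one-line rendering of the first scheme, using the document's example numbers (a 600 dp viewing-angle picture width against the fixed 400 dp picture width of the viewing space; the function name is illustrative):

```python
def cropping_area(view_picture_width, space_picture_width=400):
    """First scheme: the cropping area is the excess of the viewing-angle
    picture width over the fixed picture width of the viewing space."""
    return view_picture_width - space_picture_width

print(cropping_area(600))  # 200 dp of picture width to crop away
```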
The second calculation scheme is as follows:
and analyzing the position deviation information of the position reference information relative to the preset position information, and calculating the corresponding cutting area by combining the position deviation information.
Specifically, as shown in FIG. 6, the virtual picture corresponding to the forward azimuth viewing angle has a width of 2s to be cropped; the forward azimuth viewing angle is FOV, where FOV = 2θ and tan θ = (L1/2 + s)/y; here L1 is the width of the viewing space, s is the lateral offset from the center position of the viewing space, and y is the viewing distance to the front within the viewing space.
As shown in FIG. 7, the virtual picture corresponding to the rear azimuth viewing angle likewise has a width of 2s to be cropped; the rear azimuth viewing angle is FOV, where FOV = 2θ and tan θ = (L1/2 + s)/(L2 - y); here L2 is the length of the viewing space, s is the lateral offset from the center position of the viewing space, and y is the viewing distance to the front within the viewing space.
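A sketch of the second scheme (illustrative; the preset position is assumed to be the centre of the viewing space): the deviation s of the viewing position from the preset position directly gives a crop width of 2s for the front and rear pictures, per FIGS. 6 and 7:

```python
def crop_width(position_ref_x, preset_x):
    """Second scheme: the positional deviation s of the viewing position
    relative to the preset (centre) position gives a crop width of 2*s
    for the virtual pictures of the front and rear azimuths."""
    s = abs(position_ref_x - preset_x)  # lateral offset from the preset
    return 2 * s

print(crop_width(1.3, 1.0))  # viewer 0.3 m off-centre -> crop 0.6 m wide
```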
S307, combining the virtual coordinate information of the virtual object picture in the virtual scene picture and the position relation of the real object display model in the scene display space model, projecting the virtual object picture in the virtual scene picture corresponding to each direction on the side corresponding to the real object display model;
s308, splicing the virtual scene pictures corresponding to all the directions to form a virtual display scene on the scene display platform.
The present invention also provides an embodiment of a space object imaging system, as shown in FIG. 10, comprising an intelligent device 1 and a projection device 2:
the smart device 1 includes:
an analysis module 11, configured to calculate a plurality of azimuth viewing angles of the viewing position by combining the position reference information;
a picture intercepting module 12 that intercepts virtual scene pictures corresponding to each orientation in the virtual scene;
the analysis module is further used for analyzing the virtual coordinate information of the picture of the virtual object in the virtual scene picture;
the control module 13 is configured to control the projection device 2 to project a virtual object picture in a virtual scene picture corresponding to each orientation on a side corresponding to the physical display model, in combination with virtual coordinate information of the virtual object picture in the virtual scene picture and a position relationship of the physical display model in the scene display space model; and splicing the virtual scene pictures corresponding to all the directions to form a virtual display scene on the scene display platform.
In this embodiment, a user can simulate the virtual scene to be projected on the intelligent device and set up a spatial coordinate system in the virtual scene. For example, a virtual scene of a kitchen can be simulated, containing a refrigerator, a range hood, a gas stove, an electric cooker, a microwave oven, cabinets and the like. The virtual space position information of each virtual object placed in the virtual scene, i.e. its three-dimensional coordinates in the virtual space, can then be analyzed by the intelligent device.
According to the simulated virtual scene, the user builds a physical display platform in real space. When building it, the virtual scene and the virtual space position information must be combined: the proportions in the physical scene must be consistent with the proportions in the virtual scene, and each physical display model can be made to match the shape and size of its virtual object in the virtual scene. For example, the physical display model of the refrigerator in the virtual scene can be made as a cuboid of the same shape and size as the refrigerator.
In the virtual scene of the kitchen constructed in this embodiment there are six orientations: front, rear, left, right, up and down. A virtual scene picture needs to be captured in each of the six orientations, and the virtual coordinate information of the virtual objects (the refrigerator, range hood, gas stove, electric cooker, microwave oven, cabinets and the like) in each virtual scene picture is analyzed.
In the physical display platform, a plurality of projection devices are needed to project the virtual object picture in the virtual scene picture in each direction on the corresponding side of the physical display model. The method is equivalent to projecting on a constructed physical display platform, and each physical display model presents the effect of a virtual object in a virtual scene.
Through projection in multiple orientations, the virtual scene of the kitchen can be projected onto the physical display platform of the kitchen, giving the user a realistic experience. The physical display platform built in this embodiment consists of reusable models and moulds, so the cost is low. Besides the spatial imaging of a kitchen described in this embodiment, the object of spatial imaging may also be a single object or another scene, which are not listed here one by one.
In addition, in this embodiment, the intelligent device may also acquire the position reference information corresponding to the viewing position, calculate a plurality of azimuth viewing angles of the viewing position in combination with the position reference information, and then capture, in combination with the azimuth viewing angle of each azimuth, the virtual scene picture corresponding to each azimuth in the virtual scene. For details, refer to the above embodiments; they are not repeated here.
It should be noted that the above embodiments can be freely combined as needed. The foregoing is only a preferred embodiment of the present invention; for those skilled in the art, various modifications and improvements can be made without departing from the principle of the present invention, and these modifications and improvements should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A method of imaging a spatial object, comprising:
analyzing the virtual space position information of a virtual object placed in the virtual scene; the virtual scene is formed by simulating a real scene;
building a real object display platform for displaying the virtual scene according to the virtual scene and the virtual space position information; the object display platform comprises a scene display space model and an object display model;
intercepting virtual scene pictures corresponding to all directions in the virtual scene, and analyzing virtual coordinate information of the picture of the virtual object in the virtual scene pictures;
projecting the virtual object picture in the virtual scene picture corresponding to each direction on the side corresponding to the real object display model by combining the virtual coordinate information of the picture of the virtual object in the virtual scene picture and the position relation of the real object display model in the scene display space model;
and splicing the virtual scene pictures corresponding to all the directions to form a virtual display scene on the scene display platform.
2. The method according to claim 1, further comprising, before capturing the virtual scene pictures corresponding to the respective orientations in the virtual scene:
acquiring position reference information corresponding to a viewing position; calculating a plurality of azimuth viewing angles of the viewing position by combining the position reference information;
the capturing of the virtual scene pictures corresponding to the respective orientations in the virtual scene specifically includes:
capturing, in combination with the azimuth viewing angle of each azimuth, the virtual scene picture corresponding to that azimuth in the virtual scene.
3. The method according to claim 2, wherein said calculating a plurality of azimuth viewing angles of said viewing position in combination with said position reference information specifically comprises:
calculating an azimuth viewing angle in one azimuth of the viewing position by combining the position reference information; calculating azimuth viewing angles of adjacent azimuths of the rest azimuths in the plurality of azimuths according to the calculated azimuth viewing angles and the angle relation between the adjacent azimuths;
or;
and respectively calculating a plurality of azimuth viewing angles of the watching position by combining the position reference information and the viewing angle calculation formulas of all azimuths.
4. The method according to claim 2, wherein said capturing a virtual scene picture corresponding to each azimuth in the virtual scene in combination with the azimuth viewing angle of each azimuth specifically comprises:
when the azimuth visual angles of the front azimuth and the rear azimuth in the plurality of azimuth visual angles are equal and the azimuth visual angles of the left azimuth and the right azimuth are not equal, virtual scene pictures corresponding to the left azimuth visual angle and/or the right azimuth visual angle are cut in the virtual scene;
and/or;
and calculating cutting areas corresponding to the front view angle and/or the rear view angle and/or the upper view angle and/or the lower view angle respectively, and cutting out corresponding virtual scene pictures in the virtual scene according to the cutting areas and the view angles corresponding to the cutting areas.
5. The method according to claim 2, wherein said capturing a virtual scene picture corresponding to each azimuth in the virtual scene in combination with the azimuth viewing angle of each azimuth specifically comprises:
when the azimuth angles of the front azimuth and the rear azimuth in the azimuth angles are not equal and the azimuth angles of the left azimuth and the right azimuth are equal, virtual scene pictures corresponding to the front azimuth angle and/or the rear azimuth angle are cut out from the virtual scene;
and/or;
and calculating cutting areas corresponding to the left visual angle and/or the right visual angle and/or the upper visual angle and/or the lower visual angle respectively, and cutting out a corresponding virtual scene picture in the virtual scene according to the cutting areas and the visual angles corresponding to the cutting areas.
6. The method according to claim 2, wherein said capturing a virtual scene picture corresponding to each azimuth in the virtual scene in combination with the azimuth viewing angle of each azimuth specifically comprises:
when the azimuth visual angles of the left azimuth and the right azimuth are not equal, and the azimuth visual angles of the front azimuth and the rear azimuth are not equal, respectively calculating a cutting area corresponding to each azimuth visual angle;
and cutting out a corresponding virtual scene picture in the virtual scene according to each azimuth visual angle and the cutting area.
7. The method according to claim 2, wherein said capturing a virtual scene picture corresponding to each azimuth in the virtual scene in combination with the azimuth viewing angle of each azimuth specifically comprises:
when the X coordinate information in the position reference information is on the X-axis center line and the Y coordinate information is not on the Y-axis center line, capturing in the virtual scene the virtual scene picture corresponding to the azimuth viewing angle that corresponds to the X axis;
and/or;
and respectively calculating cutting areas corresponding to the orientation visual angles corresponding to the coordinate information on the residual axes in the position reference information, and cutting out corresponding virtual scene pictures in the virtual scene according to the cutting areas and the orientation visual angles corresponding to the cutting areas.
8. The method according to claim 2, wherein said capturing a virtual scene picture corresponding to each azimuth in the virtual scene in combination with the azimuth viewing angle of each azimuth specifically comprises:
when the X coordinate in the position reference information does not lie on the X-axis center line and the Y coordinate in the position reference information lies on the Y-axis center line, capturing the corresponding virtual scene pictures directly in the virtual scene according to the azimuth viewing angles corresponding to the Y axis;
and/or
calculating the cropping regions corresponding to the azimuth viewing angles associated with the coordinates on the remaining axes of the position reference information, and capturing the corresponding virtual scene pictures in the virtual scene according to each cropping region and its corresponding azimuth viewing angle.
9. The method according to claim 2, wherein said capturing a virtual scene picture corresponding to each azimuth in the virtual scene in combination with the azimuth viewing angle of each azimuth specifically comprises:
when the X coordinate is not on the X-axis center line and the Y coordinate is not on the Y-axis center line, calculating the cropping region corresponding to each azimuth viewing angle;
and capturing the corresponding virtual scene picture in the virtual scene according to each azimuth viewing angle and its cropping region.
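For illustration only: claims 7 to 9 can be read as a per-azimuth decision between a direct cut (when the viewing position sits on the center line relevant to that azimuth, so the view is symmetric) and an explicitly computed cropping region. The sketch below encodes that decision; the mapping of axes to azimuths is an assumed interpretation, as are all names.

```python
def capture_plan(x, y, width, depth, tol=1e-6):
    """Per-azimuth capture strategy: 'direct cut' when the viewing
    position lies on the relevant center line, otherwise an explicit
    cropping-region computation (cf. claims 7-9)."""
    on_x_center = abs(x - width / 2.0) <= tol   # X on the X-axis center line
    on_y_center = abs(y - depth / 2.0) <= tol   # Y on the Y-axis center line
    plan = {}
    # Assumption: a centered X makes the front/rear views symmetric,
    # and a centered Y makes the left/right views symmetric.
    for az in ("front", "rear"):
        plan[az] = "direct cut" if on_x_center else "compute cropping region"
    for az in ("left", "right"):
        plan[az] = "direct cut" if on_y_center else "compute cropping region"
    return plan

# Claim 9's case: neither coordinate is on its center line, so every
# azimuth needs its own cropping region.
print(capture_plan(1.0, 1.5, 4.0, 4.0))
```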
10. A space object imaging system applying the space object imaging method according to any one of claims 1 to 9, the system comprising an intelligent device and a projection device, wherein the intelligent device comprises:
an analysis module, configured to calculate a plurality of azimuth viewing angles of the viewing position in combination with the position reference information, and further configured to analyze the virtual coordinate information of the picture of the virtual object in each virtual scene picture;
a picture interception module, configured to capture the virtual scene picture corresponding to each azimuth in the virtual scene; and
a control module, configured to control the projection device, in combination with the virtual coordinate information of the picture of the virtual object in the virtual scene picture and the positional relation of the real object display model within the scene display space model, to project the virtual object picture in the virtual scene picture corresponding to each azimuth onto the corresponding side of the real object display model, and to splice the virtual scene pictures corresponding to the azimuths into a virtual display scene on the scene display platform.
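For illustration only: a structural sketch of the module split recited in claim 10. All class, method, and parameter names are hypothetical stand-ins for the analysis, picture interception, and control modules and the projection device they drive; none come from the patent itself.

```python
class AnalysisModule:
    def azimuth_viewing_angles(self, position_reference):
        """First analysis duty: per-azimuth viewing angles (claims 2-3)."""
        raise NotImplementedError

    def virtual_coordinates(self, scene_picture):
        """Second analysis duty: locate the virtual object's picture
        inside a captured virtual scene picture."""
        raise NotImplementedError

class PictureInterceptionModule:
    def capture(self, virtual_scene, viewing_angles):
        """Per-azimuth capture/cropping of virtual scene pictures
        (claims 4-9)."""
        raise NotImplementedError

class ControlModule:
    def __init__(self, projection_device):
        self.projection_device = projection_device

    def project_and_splice(self, pictures, object_coords, model_pose):
        # Project each azimuth's virtual object picture onto the
        # matching side of the real object display model, so the
        # spliced azimuths form one virtual display scene.
        for azimuth, picture in pictures.items():
            self.projection_device.project(
                picture,
                side=azimuth,
                coords=object_coords[azimuth],
                pose=model_pose,
            )
```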
CN201811333542.5A 2018-11-09 2018-11-09 Space object imaging method and system Active CN111182288B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811333542.5A CN111182288B (en) 2018-11-09 2018-11-09 Space object imaging method and system

Publications (2)

Publication Number Publication Date
CN111182288A (en) 2020-05-19
CN111182288B (en) 2021-07-23

Family

ID=70621989

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811333542.5A Active CN111182288B (en) 2018-11-09 2018-11-09 Space object imaging method and system

Country Status (1)

Country Link
CN (1) CN111182288B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN2906809Y (en) * 2006-03-24 2007-05-30 上海科技馆 Multimedia ecologic scene simulation system
CN102306088A (en) * 2011-06-23 2012-01-04 北京北方卓立科技有限公司 Solid projection false or true registration device and method
CN202196562U (en) * 2011-05-12 2012-04-18 西安灵境科技有限公司 Historic site recovering phantom landscape platform
CN103177475A (en) * 2013-03-04 2013-06-26 腾讯科技(深圳)有限公司 Method and system for showing streetscape maps
CN106611403A (en) * 2016-05-25 2017-05-03 北京数科技有限公司 Image mosaicing method and apparatus
CN107193372A (en) * 2017-05-15 2017-09-22 杭州隅千象科技有限公司 From multiple optional position rectangle planes to the projecting method of variable projection centre

Also Published As

Publication number Publication date
CN111182288B (en) 2021-07-23

Similar Documents

Publication Publication Date Title
CN107193372B (en) Projection method from multiple rectangular planes at arbitrary positions to variable projection center
US8189035B2 (en) Method and apparatus for rendering virtual see-through scenes on single or tiled displays
AU2018249563B2 (en) System, method and software for producing virtual three dimensional images that appear to project forward of or above an electronic display
US10296080B2 (en) Systems and methods to simulate user presence in a real-world three-dimensional space
CN111182288B (en) Space object imaging method and system
CN108492381A (en) A kind of method and system that color in kind is converted into 3D model pinup pictures
CN110060349B (en) Method for expanding field angle of augmented reality head-mounted display equipment
CN108205823A (en) MR holographies vacuum experiences shop and experiential method
JP6168597B2 (en) Information terminal equipment
CN111179407A (en) Virtual scene creating method, virtual scene projecting system and intelligent equipment
CN111050148A (en) Three-folding-screen-site-based projection method and system and three-folding-screen site
CN111045286A (en) Projection method and system based on double-folding screen field and double-folding screen field
CN111050145B (en) Multi-screen fusion imaging method, intelligent device and system
JP2020530218A (en) How to project immersive audiovisual content
EP3848894B1 (en) Method and device for segmenting image, and storage medium
JP2005293197A (en) Image processing device and method, and image display system
Wang et al. An intelligent screen system for context-related scenery viewing in smart home
CN111050146B (en) Single-screen imaging method, intelligent equipment and system
CN111050156A (en) Projection method and system based on four-fold screen field and four-fold screen field
Yu et al. Projective Bisector Mirror (PBM): Concept and Rationale
CN111050147A (en) Projection method and system based on five-fold screen field and five-fold screen field
CN111050144A (en) Projection method and system based on six-fold screen field and six-fold screen field
Siddek Depth-level based camouflaging using RGB-D sensor
CN111176593A (en) Projection method and system for extended picture
CN111182278B (en) Projection display management method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220905

Address after: 201508 1st floor, No. 1000, Tingwei Road, Jinshan District, Shanghai (Bay area science and Innovation Center)

Patentee after: Shanghai Hongxing Cloud Computing Technology Co.,Ltd.

Address before: 200000 da-001, 4th floor, 518 Linyu Road, Pudong New Area, Shanghai

Patentee before: SHANGHAI YUNSHEN INTELLIGENT TECHNOLOGY Co.,Ltd.
