CN111050146B - Single-screen imaging method, intelligent equipment and system

Single-screen imaging method, intelligent equipment and system

Info

Publication number
CN111050146B
CN111050146B CN201811185501.6A
Authority
CN
China
Prior art keywords
azimuth
viewing
scene
picture
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811185501.6A
Other languages
Chinese (zh)
Other versions
CN111050146A (en)
Inventor
王珏
王琦琛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Hongxing Cloud Computing Technology Co ltd
Original Assignee
Shanghai Yunshen Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Yunshen Intelligent Technology Co ltd filed Critical Shanghai Yunshen Intelligent Technology Co ltd
Priority to CN201811185501.6A priority Critical patent/CN111050146B/en
Publication of CN111050146A publication Critical patent/CN111050146A/en
Application granted granted Critical
Publication of CN111050146B publication Critical patent/CN111050146B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3179Video signal processing therefor

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a single-screen imaging method, intelligent equipment and a system, belonging to the technical field of image processing. The method of single-screen imaging comprises: acquiring position reference information corresponding to a viewing position; calculating an azimuth viewing angle of one azimuth of the viewing position by combining the position reference information; generating a corresponding scene picture from the virtual scene according to the azimuth viewing angle; and projecting the scene picture. The viewing angle of the displayed stereoscopic scene changes as the viewer's position changes, so the viewer's viewing angle is updated in real time and the displayed stereoscopic scene picture is refreshed in time; the presented stereoscopic scene picture is therefore not distorted by changes in the viewing position.

Description

Single-screen imaging method, intelligent equipment and system
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a single-screen imaging method, intelligent equipment and a system.
Background
At present, when commodities or environments (such as furniture displays, floor-plan displays and the like) are presented in the form of 3D projection (such as 3D holographic projection) in commercial, performance and similar settings, the 3D commodity or environment pictures are generated on the assumption that the user stands at a fixed viewing position, so the viewing angle is also fixed. When the user moves to a different viewing position, the user's viewing angle changes, and the commodity or environment in the 3D picture appears distorted.
Therefore, the prior art has the defect that, in such 3D projection displays, the viewing angle is fixed and cannot follow the viewer's position, so the viewer's viewing angle is not kept updated in real time. When a viewer watches from another position, the displayed stereoscopic scene picture exhibits distortion and similar artifacts, which degrades the viewing experience and prevents the viewer from genuinely understanding the commodity or environment.
Disclosure of Invention
The invention aims to provide a single-screen imaging method, intelligent equipment and a system in which the viewing angle changes with the position of the viewer, the viewer's viewing angle is kept updated in real time, and the displayed stereoscopic scene picture is refreshed in time; the presented stereoscopic scene picture is thus not distorted by changes in the viewing position.
The technical scheme provided by the invention is as follows:
the invention provides a single-screen imaging method, which comprises the following steps: acquiring position reference information corresponding to a viewing position; calculating an azimuth viewing angle of one azimuth of the viewing position by combining the position reference information; generating a corresponding scene picture by the virtual scene according to the azimuth viewing angle; and projecting the scene picture.
Further preferably, generating the corresponding scene picture from the virtual scene according to the azimuth viewing angle specifically includes: when the X coordinate information in the position reference information is on the X-axis central line and the Y coordinate information is not on the Y-axis central line, and the azimuth corresponding to the azimuth viewing angle is the same as the azimuth corresponding to the X axis, cutting the virtual scene into the corresponding scene picture according to the azimuth viewing angle corresponding to the X axis; when the azimuth corresponding to the azimuth viewing angle is different from the azimuth corresponding to the X axis, calculating a cutting area corresponding to the azimuth viewing angle, and cutting the corresponding scene picture out of the virtual scene according to the cutting area and the azimuth viewing angle corresponding to the cutting area.
Further preferably, generating the corresponding scene picture from the virtual scene according to the azimuth viewing angle specifically includes: when the X coordinate information in the position reference information is not on the X-axis central line and the Y coordinate information is on the Y-axis central line, and the azimuth corresponding to the azimuth viewing angle is the same as the azimuth corresponding to the Y axis, cutting the virtual scene into the corresponding scene picture according to the azimuth viewing angle corresponding to the Y axis; when the azimuth corresponding to the azimuth viewing angle is different from the azimuth corresponding to the Y axis, calculating a cutting area corresponding to the azimuth viewing angle, and cutting the corresponding scene picture out of the virtual scene according to the cutting area and the azimuth viewing angle corresponding to the cutting area.
Further preferably, generating the corresponding scene picture from the virtual scene according to the azimuth viewing angle specifically includes: when the X coordinate information is not on the X-axis central line and the Y coordinate information is not on the Y-axis central line, calculating a cutting area corresponding to the azimuth viewing angle; and cutting the corresponding scene picture out of the virtual scene according to the cutting area and the azimuth viewing angle corresponding to the cutting area.
Further preferably, calculating the cutting area corresponding to the azimuth viewing angle specifically includes: calculating viewing-angle picture parameters corresponding to the azimuth viewing angle according to the azimuth viewing angle and the position reference information; and calculating the cutting area corresponding to the azimuth viewing angle according to the viewing-angle picture parameters and the viewing space parameters corresponding to the azimuth viewing angle.
Further preferably, calculating the cutting area corresponding to the azimuth viewing angle specifically includes: analyzing the position deviation information of the position reference information relative to preset position information, and calculating the corresponding cutting area by combining the position deviation information.
Preferably, the method further comprises: generating an orthogonal camera, the orthogonal camera being perpendicular to the plane corresponding to its azimuth; the orthogonal camera is used for intercepting, in the virtual scene, the scene picture corresponding to the azimuth viewing angle.
Further preferably, the acquiring of the position reference information corresponding to the viewing position specifically includes: converting viewing position information in a viewing space into virtual position information in a virtual scene according to a corresponding relation between a space coordinate of the viewing space and a virtual coordinate of the virtual scene; and using the virtual position information as position reference information;
or;
converting viewing position information in a viewing space into position pixel information in a virtual scene according to a corresponding relation between a space coordinate of the viewing space and a picture pixel of the virtual scene; and using the position pixel information as position reference information.
Further preferably, the proportional relationship between the scene model of the virtual scene and the spatial model of the viewing space is 1:1.
The present invention also provides an intelligent device, comprising: an acquisition module, used for acquiring position reference information corresponding to the viewing position; a calculation module, used for calculating an azimuth viewing angle of one azimuth of the viewing position by combining the position reference information; and a picture generation module, used for generating a corresponding scene picture from the virtual scene according to the azimuth viewing angle, and projecting the scene picture.
The invention also provides a single-screen imaging system, which comprises intelligent equipment, projection equipment and a picture presentation device: the smart device includes an acquisition module, used for acquiring position reference information corresponding to the viewing position; a calculation module, used for calculating an azimuth viewing angle of one azimuth of the viewing position by combining the position reference information; and a picture generation module, used for generating a corresponding scene picture from the virtual scene according to the azimuth viewing angle and projecting the scene picture. The projection equipment projects the scene picture onto the picture presentation device, and a viewing space is formed inside the picture presentation device.
Further preferably, the system further comprises: a mobile terminal, used for acquiring the viewing position within the viewing space of the picture presentation device.
Compared with the prior art, the single-screen imaging method, intelligent equipment and system provided by the invention have the following beneficial effects:
the viewing angle changes with the position of the viewer, the viewer's viewing angle is kept updated in real time, and the displayed three-dimensional scene picture is refreshed in time; the presented stereoscopic scene picture is not distorted by changes in the viewing position.
Drawings
The foregoing features, technical features, advantages and implementations of a method, smart device and system for single-screen imaging will be further described in the following detailed description of preferred embodiments in a clearly understandable manner, in conjunction with the accompanying drawings.
FIG. 1 is a schematic flow chart of a method of single screen imaging according to the present invention;
FIG. 2 is a schematic flow chart of yet another single-screen imaging method of the present invention;
FIG. 3 is a diagram of a screen display apparatus according to the present invention;
FIG. 4 is a schematic view of the viewing angle at a position in front of a viewpoint/viewing position in the present invention;
FIG. 5 is a schematic view of a viewing angle in a forward direction from another viewpoint/viewing position in the present invention;
FIG. 6 is a schematic view of a perspective in a forward direction from yet another viewpoint/viewing position in accordance with the present invention;
FIG. 7 is a schematic view of cropping in a direction in front of a viewpoint/viewing position in accordance with the present invention;
FIG. 8 is a schematic view of cropping in a direction in front of yet another viewpoint/viewing position in accordance with the present invention;
FIG. 9 is a block diagram illustrating the structure of an intelligent device of the present invention;
FIG. 10 is a block diagram schematically illustrating the construction of a single-screen imaging system in accordance with the present invention;
The reference numbers illustrate:
10-Mobile terminal
20-intelligent device 21-acquisition module 22-calculation module
23-Picture Generation Module
30-projection device
40-picture presentation device
Detailed Description
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the following description will be made with reference to the accompanying drawings. It is obvious that the drawings in the following description are only some examples of the invention, and that for a person skilled in the art, other drawings and embodiments can be derived from them without inventive effort.
For the sake of simplicity, the drawings only schematically show the parts relevant to the present invention, and they do not represent the actual structure as a product. In addition, in order to make the drawings concise and understandable, components having the same structure or function in some of the drawings are only schematically illustrated or only labeled. In this document, "one" means not only "only one" but also a case of "more than one".
According to an embodiment of the present invention, as shown in fig. 1, a method of single-screen imaging includes:
S10, acquiring position reference information corresponding to the viewing position;
Specifically, after the viewer enters the viewing space, the viewing position of the viewer in the viewing space is obtained by the mobile terminal 10 carried by the viewer; the mobile terminal 10 is capable of indoor positioning. The mobile terminal 10 may be a mobile phone, a tablet computer, a smart bracelet or the like, i.e. an indoor positioning function integrated into a device the viewer already uses daily; alternatively, a dedicated hand-held terminal or the like with an integrated indoor positioning function may be produced.
S20, calculating an azimuth viewing angle of one azimuth of the viewing position by combining the position reference information;
Specifically, at different positions, a person's perspective viewing angle in each azimuth also differs; that is, at different positions, the pictures presented when watching the same object in the same azimuth are different. The different pictures are seen because the perspective viewing angle changes with the position from which the object is viewed.
The position information of the viewing position comprises X-axis, Y-axis and Z-axis coordinate information, and an azimuth viewing angle can be calculated from this position information; for example: the azimuth viewing angle straight ahead, the azimuth viewing angle directly above, and the azimuth viewing angle directly below.
S30, generating a corresponding scene picture from the virtual scene according to the azimuth viewing angle; and projecting the scene picture;
Specifically, the virtual scene is an integral picture; it may be a furnished home scene within an apartment, a display scene of a model housing unit, or a display scene of a commodity. The virtual scene is cut in three-dimensional space: after the azimuth viewing angle of the viewing position has been calculated, the virtual scene is cut, in combination with the forward azimuth viewing angle, into the scene picture straight ahead; the scene pictures directly above and directly below are obtained in the same way.
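By way of illustration only, the following Python sketch summarizes one pass of steps S10-S30; the Positioner/Scene/Projector protocols and the fov_of callback are hypothetical stand-ins for the modules described above, not an API defined by this disclosure.

```python
# Minimal sketch of one S10-S30 pass; the protocols below are assumed
# stand-ins for the acquisition, calculation, picture-generation and
# projection steps, not an API defined by this disclosure.

from typing import Protocol, Tuple

Position = Tuple[float, float, float]  # (x, y, z) in the viewing space

class Positioner(Protocol):
    def viewing_position(self) -> Position: ...  # S10, e.g. via the mobile terminal 10

class Scene(Protocol):
    def cut(self, azimuth: str, fov: float, pos: Position): ...  # S30

class Projector(Protocol):
    def project(self, azimuth: str, picture) -> None: ...

def imaging_pass(positioner: Positioner, scene: Scene, projector: Projector, fov_of) -> None:
    """fov_of(azimuth, pos) implements S20 for one azimuth."""
    pos = positioner.viewing_position()            # S10
    for azimuth in ("front", "up", "down"):        # the azimuths named above
        picture = scene.cut(azimuth, fov_of(azimuth, pos), pos)  # S20 + S30
        projector.project(azimuth, picture)        # project the scene picture
```

Running this pass in a loop keeps the viewing angle synchronized with the viewer's position, which is the real-time update described above.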
In this embodiment, when the position reference information corresponding to the viewing position is obtained, the position reference information may be two types of position information:
in the first type, the position reference information is virtual position information:
converting the viewing position information in the viewing space into virtual position information in the virtual scene according to the corresponding relation between the space coordinates of the viewing space and the virtual coordinates of the virtual scene; and using the virtual position information as position reference information;
specifically, under the condition of real-time rendering, viewing position information is converted into virtual position information, and the calculation of the azimuth angle of view and the generation of a scene picture are completed through the virtual position information. The essence of real-time rendering is the real-time computation and output of graphics data.
In the second type, the position reference information is position pixel information:
converting viewing position information in the viewing space into position pixel information in the virtual scene according to the corresponding relation between the space coordinates of the viewing space and the picture pixels of the virtual scene; and the position pixel information is used as the position reference information.
Specifically, under the condition of offline rendering, viewing position information is converted into position pixel information, and the calculation of the azimuth viewing angle and the generation of a scene picture are completed through the position pixel information.
The scene model of the virtual scene and the space model of the viewing space are in a specific proportional relationship; the specific proportion is 1:1. The viewing space is shown in fig. 3.
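As a concrete illustration of the two conversions, here is a minimal Python sketch under the stated 1:1 scale; the 200 dp-per-metre factor is an assumption derived from the worked example below (a 4 m x 2 m viewing space rendered as an 800 dp x 400 dp picture), not a value fixed by this disclosure.

```python
# Sketch of the two position-reference conversions; the 1:1 scale is
# stated above, while DP_PER_METER = 200 is an assumption derived from
# the worked example (4 m x 2 m space, 800 dp x 400 dp picture).

from typing import Tuple

Position = Tuple[float, float, float]
DP_PER_METER = 200.0

def to_virtual_position(view_pos: Position) -> Position:
    """First type (real-time rendering): viewing-space coordinates map
    1:1 onto virtual-scene coordinates."""
    return view_pos  # identity mapping under the 1:1 scale

def to_position_pixels(view_pos: Position) -> Position:
    """Second type (offline rendering): viewing-space coordinates map
    onto picture pixels (dp) of the virtual scene."""
    x, y, z = view_pos
    return (x * DP_PER_METER, y * DP_PER_METER, z * DP_PER_METER)
```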
In this embodiment, different viewing positions give the same azimuth different azimuth viewing angles, and different scene pictures are generated for the same azimuth according to those different azimuth viewing angles. Moreover, the viewing angle of the stereoscopic scene changes along with the position of the viewer, so the viewer's viewing angle is kept updated in real time and the displayed stereoscopic scene picture is refreshed in time; the presented stereoscopic scene picture is therefore not distorted by changes in the viewing position.
According to another embodiment of the present invention, as shown in fig. 2, a method of single-screen imaging includes:
S10, acquiring position reference information corresponding to the viewing position;
S20, calculating an azimuth viewing angle of one azimuth of the viewing position by combining the position reference information;
Specifically, when the azimuth viewing angle is calculated, as shown in fig. 7, the forward azimuth viewing angle is the FOV, where FOV = 2θ and tan θ = (L1/2 + s)/y; here L1 is the width of the viewing space, s is the offset from the center position of the viewing space, and y is the viewing distance straight ahead within the viewing space.
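As a numeric check of this formula, a short Python sketch follows; the example values (a 2 m wide viewing space, a 0.3 m offset, a 1.5 m viewing distance) are illustrative assumptions only.

```python
import math

def forward_fov(L1: float, s: float, y: float) -> float:
    """FOV = 2*theta with tan(theta) = (L1/2 + s)/y (front azimuth)."""
    return 2.0 * math.atan((L1 / 2.0 + s) / y)

# Illustrative values: 2 m wide viewing space, 0.3 m off-centre, 1.5 m viewing distance.
fov = forward_fov(L1=2.0, s=0.3, y=1.5)
print(f"forward FOV = {math.degrees(fov):.1f} degrees")  # ~81.8 degrees
```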
S21, generating an orthogonal camera, the orthogonal camera being perpendicular to the plane corresponding to its azimuth; and using the orthogonal camera to intercept, in the virtual scene, the scene picture corresponding to each azimuth.
S31, when the X coordinate information in the position reference information is on the X-axis central line and the Y coordinate information in the position reference information is not on the Y-axis central line, under the condition that the azimuth corresponding to the azimuth viewing angle is the same as the azimuth corresponding to the X axis; cutting the virtual scene into corresponding scene pictures according to the azimuth view angle corresponding to the X axis; and projecting the scene picture.
Specifically, the X-axis central line is the line at one half of the width of the viewing space, parallel to the Y axis; if the viewing space is 4 m long and 2 m wide, the X-axis central line is the straight line at the 1 m mark of the width, parallel to the Y axis.
Alternatively, when the viewing space is expressed in pixels, with a specification of 800 dp in length and 400 dp in width, the X-axis central line is the straight line at the 200 dp mark of the width, parallel to the Y axis.
When the X coordinate information in the position reference information is 1 m or 200 dp, and the front azimuth corresponding to the X axis is required by the actual display situation, the scene picture corresponding to the front azimuth viewing angle can be cut directly out of the virtual scene, as shown in fig. 5 and 6.
S32, when the X coordinate information in the position reference information is on the X-axis central line and the Y coordinate information in the position reference information is not on the Y-axis central line, under the condition that the azimuth corresponding to the azimuth viewing angle is different from the azimuth corresponding to the X axis; calculating a cutting area corresponding to the orientation visual angle, and cutting out a corresponding scene picture in the virtual scene according to the cutting area and the orientation visual angle corresponding to the cutting area; and projecting the scene picture.
Specifically, when the position reference information includes X coordinate information, Y coordinate information and Z coordinate information, if the Y axis corresponds to the front azimuth, then the Z axis corresponds to the upper and lower azimuths.
The pictures corresponding to the front, upper and lower viewing angles are then no longer the normal pictures, and must be cropped from the normal pictures.
Specifically, taking the viewing position as the center position, as shown in fig. 4, the scene picture cut from the virtual scene in the front direction at the center position is the normal picture.
S33, when the X coordinate information in the position reference information is not on the X-axis central line and the Y coordinate information in the position reference information is on the Y-axis central line, under the condition that the azimuth corresponding to the azimuth viewing angle is the same as the azimuth corresponding to the Y axis; cutting the virtual scene into corresponding scene pictures according to the azimuth view angle corresponding to the Y axis; and projecting the scene picture.
When the Y coordinate information in the position reference information is 2 m or 400 dp, and the front azimuth corresponding to the Y axis is required by the actual display situation, the scene picture corresponding to the front azimuth viewing angle can be cut directly out of the virtual scene.
S34, when the X coordinate information in the position reference information is not on the X-axis central line and the Y coordinate information in the position reference information is on the Y-axis central line, under the condition that the azimuth corresponding to the azimuth viewing angle is different from the azimuth corresponding to the Y axis; calculating a cutting area corresponding to the orientation visual angle, and cutting out a corresponding scene picture in the virtual scene according to the cutting area and the orientation visual angle corresponding to the cutting area; and projecting the scene picture.
Specifically, when the position reference information includes X coordinate information and Z coordinate information, if the X axis corresponds to the front azimuth, then the Z axis corresponds to the upper and lower azimuths.
The pictures corresponding to the front, upper and lower viewing angles are then no longer the normal pictures, and must be cropped from the normal pictures.
S35, when the X coordinate information is not on the X-axis central line and the Y coordinate information is not on the Y-axis central line, calculating a cutting area corresponding to the orientation visual angle; cutting out a corresponding scene picture in the virtual scene according to the cutting area and the azimuth visual angle corresponding to the cutting area; and projecting the scene picture.
Specifically, when the X coordinate information in the position reference information is not on the X-axis central line and the Y coordinate information in the position reference information is not on the Y-axis central line, then whether the front azimuth corresponds to the X axis or to the Y axis, the pictures corresponding to the front, upper and lower viewing angles are no longer the normal pictures, and must be cropped from the normal pictures.
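The case analysis of steps S31-S35 can be condensed into a short Python sketch; the central-line values follow the 4 m x 2 m worked example, and direct_cut, crop_cut and cutting_area are placeholder stand-ins rather than functions defined by this disclosure.

```python
# Sketch of the S31-S35 case analysis for the worked 4 m x 2 m viewing space.
# direct_cut / crop_cut / cutting_area are placeholder stand-ins for the
# scene-cutting operations described above, not real API calls.

X_CENTRAL, Y_CENTRAL = 1.0, 2.0   # central lines of the 2 m width / 4 m length
EPS = 1e-6

def direct_cut(view):        return ("direct", view)
def crop_cut(view, area):    return ("cropped", view, area)
def cutting_area(view, pos): return ("area-for", view, pos)

def generate_scene_picture(pos, azimuth_axis, view):
    """azimuth_axis is the axis ('x' or 'y') whose front azimuth is wanted;
    view bundles the azimuth viewing angle."""
    x, y, _ = pos
    on_x = abs(x - X_CENTRAL) < EPS   # X coordinate on the X-axis central line?
    on_y = abs(y - Y_CENTRAL) < EPS   # Y coordinate on the Y-axis central line?
    if on_x and not on_y:
        return direct_cut(view) if azimuth_axis == "x" else \
               crop_cut(view, cutting_area(view, pos))        # S31 / S32
    if on_y and not on_x:
        return direct_cut(view) if azimuth_axis == "y" else \
               crop_cut(view, cutting_area(view, pos))        # S33 / S34
    return crop_cut(view, cutting_area(view, pos))            # S35
```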
In this embodiment, when the scene picture corresponding to an azimuth viewing angle is cut directly out of the virtual scene, the orthogonal camera intercepts the scene picture corresponding to that azimuth viewing angle in the virtual scene by combining its own azimuth viewing angle with the position reference information.
When the corresponding scene picture is cut out of the virtual scene according to a cutting area and the azimuth viewing angle corresponding to the cutting area, the orthogonal camera intercepts the scene picture corresponding to that azimuth viewing angle in the virtual scene by combining its own azimuth viewing angle, the position reference information and the cutting area.
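One possible realization of this interception, given as an assumption rather than the disclosed implementation, is an orthographic ("orthogonal") camera locked perpendicular to the plane of its azimuth; scene.cut and picture.crop below are hypothetical calls.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Rect = Tuple[float, float, float, float]  # (x, y, width, height) cutting area

@dataclass
class OrthogonalCamera:
    azimuth: str  # "front", "up" or "down"; the view axis stays
                  # perpendicular to the plane of this azimuth

    def intercept(self, scene, view_angle: float, pos,
                  cutting_area: Optional[Rect] = None):
        # Whole-azimuth picture for this viewing angle and position
        # (scene.cut is a hypothetical call, per the lead-in).
        full = scene.cut(self.azimuth, view_angle, pos)
        if cutting_area is None:
            return full                  # viewing position on the central line
        return full.crop(cutting_area)   # off-centre: keep only the cutting area
```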
When the cutting area corresponding to an azimuth viewing angle is calculated, two calculation schemes are available:
The first calculation scheme is as follows:
calculating viewing-angle picture parameters corresponding to the azimuth viewing angle according to the azimuth viewing angle and the position reference information;
Specifically, with the azimuth viewing angle known and the viewing distance contained in the position reference information, the viewing-angle picture width at each azimuth of the viewing position can be calculated; for example, a viewing-angle picture width of 600 dp. This width is used as the viewing-angle picture parameter.
The cutting area corresponding to the azimuth viewing angle is then calculated from the viewing-angle picture parameters and the viewing space parameters corresponding to that azimuth viewing angle.
Specifically, after the viewing-angle picture width (600 dp) corresponding to each azimuth has been calculated, and given that the picture width of the viewing space in each azimuth is fixed (400 dp), the cutting area corresponding to each azimuth is obtained by subtracting the fixed picture width (400 dp) from the viewing-angle picture width (600 dp).
The second calculation scheme is as follows:
and analyzing the position deviation information of the position reference information relative to the preset position information, and calculating the corresponding cutting area by combining the position deviation information.
Specifically, as shown in fig. 7, when the azimuth corresponding to the front azimuth viewing angle is the same as the azimuth corresponding to the X-axis, the width of the scene picture corresponding to the front azimuth viewing angle to be cut is 2 s; the forward azimuth angle is FOV, and the FOV is 2 & lttheta & gt; tan θ ═ L 12+ s)/y; where L1 is the width of the viewing space, s is the lateral offset from the center position of the viewing space, and y is the viewing distance directly in front within the viewing space.
As shown in fig. 8, when the azimuth corresponding to the front azimuth viewing angle is the same as the azimuth corresponding to the Y-axis, the width of the scene picture corresponding to the front azimuth viewing angle to be cut is 2P; the rear azimuth angle is FOV, and the FOV is 2 & lt alpha; tan α ═ L 22+ p)/x; where L2 is the length of the viewing space, p is the vertical offset from the center position of the viewing space, and x is the viewing distance directly in front of the viewing space.
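The second scheme thus reads the cut width directly from the positional offset: 2s laterally (fig. 7) or 2p longitudinally (fig. 8). The following Python sketch mirrors the two formulas; the example values are illustrative assumptions.

```python
import math

def cut_width_x_front(L1: float, s: float, y: float):
    """Front azimuth along the X axis (fig. 7): cut width 2s,
    FOV = 2*atan((L1/2 + s)/y)."""
    return 2.0 * s, 2.0 * math.atan((L1 / 2.0 + s) / y)

def cut_width_y_front(L2: float, p: float, x: float):
    """Front azimuth along the Y axis (fig. 8): cut width 2p,
    FOV = 2*atan((L2/2 + p)/x)."""
    return 2.0 * p, 2.0 * math.atan((L2 / 2.0 + p) / x)

w, fov = cut_width_x_front(L1=2.0, s=0.3, y=1.5)
print(f"cut width {w:.1f} m, FOV {math.degrees(fov):.1f} deg")  # 0.6 m, ~81.8 deg
```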
According to an embodiment provided by the present invention, as shown in fig. 9, an intelligent device includes:
an obtaining module 21, configured to obtain position reference information corresponding to a viewing position;
a calculating module 22, configured to calculate an azimuth viewing angle of an azimuth of the viewing position by combining the position reference information;
the picture generation module 23 is configured to generate a corresponding scene picture from the virtual scene according to the azimuth viewing angle; and projecting the scene picture.
In addition to the above, the present embodiment further includes the following contents:
the generating of the corresponding scene picture by the virtual scene according to the azimuth viewing angle specifically includes:
when the X coordinate information in the position reference information is on the X-axis central line and the Y coordinate information is not on the Y-axis central line, under the condition that the azimuth corresponding to the azimuth viewing angle is the same as the azimuth corresponding to the X axis; cutting the virtual scene into corresponding scene pictures according to the azimuth view angle corresponding to the X axis;
when the X coordinate information in the position reference information is on the X-axis central line and the Y coordinate information is not on the Y-axis central line, under the condition that the azimuth corresponding to the azimuth viewing angle is different from the azimuth corresponding to the X axis; and calculating a cutting area corresponding to the orientation visual angle, and cutting out a corresponding scene picture in the virtual scene according to the cutting area and the orientation visual angle corresponding to the cutting area.
When the X coordinate information in the position reference information is not on the X-axis central line and the Y coordinate information is on the Y-axis central line, under the condition that the azimuth corresponding to the azimuth viewing angle is the same as the azimuth corresponding to the Y axis; cutting the virtual scene into corresponding scene pictures according to the azimuth view angle corresponding to the Y axis;
when the X coordinate information in the position reference information is not on the X-axis central line and the Y coordinate information is on the Y-axis central line, under the condition that the azimuth corresponding to the azimuth viewing angle is different from the azimuth corresponding to the Y axis; and calculating a cutting area corresponding to the orientation visual angle, and cutting out a corresponding scene picture in the virtual scene according to the cutting area and the orientation visual angle corresponding to the cutting area.
When the X coordinate information is not on the X-axis central line and the Y coordinate information is not on the Y-axis central line, calculating a cutting area corresponding to the azimuth viewing angle; and cutting out a corresponding scene picture in the virtual scene according to the cutting area and the azimuth visual angle corresponding to the cutting area.
One mode, the calculating the cropping area corresponding to the orientation view specifically includes:
calculating visual angle picture parameters corresponding to the azimuth visual angle according to the azimuth visual angle and the position reference information;
and calculating a cutting area corresponding to the azimuth viewing angle according to the viewing angle picture parameters and the viewing space parameters corresponding to the azimuth viewing angle.
In another mode, the calculating the cropping area corresponding to the orientation view specifically includes:
and analyzing the position deviation information of the position reference information relative to the preset position information, and calculating the corresponding cutting area by combining the position deviation information.
An orthogonal camera is generated, the orthogonal camera being perpendicular to the plane corresponding to its azimuth; the orthogonal camera is used for intercepting, in the virtual scene, the scene picture corresponding to the azimuth viewing angle.
The acquiring of the position reference information corresponding to the viewing position specifically includes:
converting viewing position information in a viewing space into virtual position information in a virtual scene according to a corresponding relation between a space coordinate of the viewing space and a virtual coordinate of the virtual scene; and using the virtual position information as position reference information;
or;
converting viewing position information in a viewing space into position pixel information in a virtual scene according to a corresponding relation between a space coordinate of the viewing space and a picture pixel of the virtual scene; and using the position pixel information as position reference information.
The specific proportional relation between the scene model of the virtual scene and the space model of the viewing space is 1:1.
According to an embodiment provided by the present invention, as shown in fig. 10, a single-screen imaging system includes a mobile terminal 10, a smart device 20, a projection device 30, and a picture presenting apparatus 40:
a mobile terminal 10 for acquiring a viewing position in a viewing space;
the smart device 20 includes:
an obtaining module 21, configured to obtain position reference information corresponding to a viewing position;
the calculation module 22 is connected with the acquisition module 21 and is used for calculating an azimuth viewing angle of one azimuth of the viewing position by combining the position reference information;
the picture generation module 23 is connected with the calculation module 22 and is used for generating a corresponding scene picture from the virtual scene according to the azimuth viewing angle; projecting the scene picture;
the projection apparatus 30 projects a scene picture on the picture presentation device 40, and a viewing space is formed inside the picture presentation device 40.
In addition to the above, the present embodiment further includes the following contents:
the generating of the corresponding scene picture by the virtual scene according to the azimuth viewing angle specifically includes:
when the X coordinate information in the position reference information is on the X-axis central line and the Y coordinate information is not on the Y-axis central line, under the condition that the azimuth corresponding to the azimuth viewing angle is the same as the azimuth corresponding to the X axis; cutting the virtual scene into corresponding scene pictures according to the azimuth view angle corresponding to the X axis;
when the X coordinate information in the position reference information is on the X-axis central line and the Y coordinate information is not on the Y-axis central line, under the condition that the azimuth corresponding to the azimuth viewing angle is different from the azimuth corresponding to the X axis; and calculating a cutting area corresponding to the orientation visual angle, and cutting out a corresponding scene picture in the virtual scene according to the cutting area and the orientation visual angle corresponding to the cutting area.
When the X coordinate information in the position reference information is not on the X-axis central line and the Y coordinate information is on the Y-axis central line, under the condition that the azimuth corresponding to the azimuth viewing angle is the same as the azimuth corresponding to the Y axis; cutting the virtual scene into corresponding scene pictures according to the azimuth view angle corresponding to the Y axis;
when the X coordinate information in the position reference information is not on the X-axis central line and the Y coordinate information is on the Y-axis central line, under the condition that the azimuth corresponding to the azimuth viewing angle is different from the azimuth corresponding to the Y axis; and calculating a cutting area corresponding to the orientation visual angle, and cutting out a corresponding scene picture in the virtual scene according to the cutting area and the orientation visual angle corresponding to the cutting area.
When the X coordinate information is not on the X-axis central line and the Y coordinate information is not on the Y-axis central line, calculating a cutting area corresponding to the azimuth viewing angle; and cutting out a corresponding scene picture in the virtual scene according to the cutting area and the azimuth visual angle corresponding to the cutting area.
One mode, the calculating the cropping area corresponding to the orientation view specifically includes:
calculating visual angle picture parameters corresponding to the azimuth visual angle according to the azimuth visual angle and the position reference information;
and calculating a cutting area corresponding to the azimuth viewing angle according to the viewing angle picture parameters and the viewing space parameters corresponding to the azimuth viewing angle.
In another mode, the calculating the cropping area corresponding to the orientation view specifically includes:
and analyzing the position deviation information of the position reference information relative to the preset position information, and calculating the corresponding cutting area by combining the position deviation information.
An orthogonal camera is generated, the orthogonal camera being perpendicular to the plane corresponding to its azimuth; the orthogonal camera is used for intercepting, in the virtual scene, the scene picture corresponding to the azimuth viewing angle.
The acquiring of the position reference information corresponding to the viewing position specifically includes:
converting viewing position information in a viewing space into virtual position information in a virtual scene according to a corresponding relation between a space coordinate of the viewing space and a virtual coordinate of the virtual scene; and using the virtual position information as position reference information;
or;
converting viewing position information in a viewing space into position pixel information in a virtual scene according to a corresponding relation between a space coordinate of the viewing space and a picture pixel of the virtual scene; and using the position pixel information as position reference information.
The specific proportional relation between the scene model of the virtual scene and the space model of the viewing space is 1:1.
The intelligent device 20 may be a computer, and the picture presentation device 40 may be a purpose-built viewing room, a cubic space model enclosed by several folded screens/wall panels or the like, or several surfaces formed by several folded screens/wall panels or the like.
By contrast, the existing multi-screen fusion splicing imaging technology optimizes the output mode of the design software during production: the virtual cameras in the three-dimensional software are modularized and bound to one another, so that adjusting the settings of any one camera adjusts the others accordingly, which simplifies part of the workflow. However, the viewing angle remains fixed and cannot follow the viewer in real time, no fixed algorithm can be used, and a recalculation is required for every new production, which is inefficient and unsuitable for mass production.
In the prior art, first, generating virtual coordinates in real time from hardware detection of the real space is a mature technology widely applied across industries; second, real-scale multi-screen fusion imaging has also been applied in many places, but always with a fixed viewing angle; third, real-time rendering is likewise a very mature technology. The distinguishing feature of the present method is that it fuses these three relatively mature technologies through a specific algorithm: the real-time coordinates are converted into the origin of a real-time imaging viewing angle, the viewing angle is calculated, and the real-time rendered images are spliced into a complete spatial image.
It should be noted that the above embodiments can be freely combined as necessary. The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and decorations can be made without departing from the principle of the present invention, and these modifications and decorations should also be regarded as the protection scope of the present invention.

Claims (11)

1. A method of single screen imaging, comprising:
acquiring position reference information corresponding to a viewing position;
calculating an azimuth viewing angle of one azimuth of the viewing position by combining the position reference information;
generating a corresponding scene picture by the virtual scene according to the azimuth viewing angle; projecting the scene picture;
when the X coordinate information in the position reference information is on the X-axis central line and the Y coordinate information is not on the Y-axis central line, under the condition that the azimuth corresponding to the azimuth viewing angle is the same as the azimuth corresponding to the X axis, cutting the virtual scene into corresponding scene pictures according to the azimuth viewing angle corresponding to the X axis;
when the azimuth corresponding to the azimuth viewing angle is different from the azimuth corresponding to the X axis, calculating a cutting area corresponding to the azimuth viewing angle, and cutting out a corresponding scene picture in the virtual scene according to the cutting area and the azimuth viewing angle corresponding to the cutting area.
2. The method of claim 1, wherein the generating of the corresponding scene picture by the virtual scene according to the azimuth viewing angle specifically comprises:
when the X coordinate information in the position reference information is not on the X-axis central line and the Y coordinate information is on the Y-axis central line, under the condition that the azimuth corresponding to the azimuth viewing angle is the same as the azimuth corresponding to the Y axis, cutting the virtual scene into corresponding scene pictures according to the azimuth viewing angle corresponding to the Y axis;
when the azimuth corresponding to the azimuth viewing angle is different from the azimuth corresponding to the Y axis, calculating a cutting area corresponding to the azimuth viewing angle, and cutting out a corresponding scene picture in the virtual scene according to the cutting area and the azimuth viewing angle corresponding to the cutting area.
3. The method of claim 1, wherein the generating of the corresponding scene picture by the virtual scene according to the azimuth viewing angle specifically comprises:
when the X coordinate information is not on the X-axis central line and the Y coordinate information is not on the Y-axis central line, calculating a cutting area corresponding to the azimuth viewing angle; and cutting out a corresponding scene picture in the virtual scene according to the cutting area and the azimuth viewing angle corresponding to the cutting area.
4. The method of claim 2 or 3, wherein the calculating of the cutting area corresponding to the azimuth viewing angle specifically comprises:
calculating viewing-angle picture parameters corresponding to the azimuth viewing angle according to the azimuth viewing angle and the position reference information;
and calculating the cutting area corresponding to the azimuth viewing angle according to the viewing-angle picture parameters and the viewing space parameters corresponding to the azimuth viewing angle.
5. The method of claim 2 or 3, wherein the calculating of the cutting area corresponding to the azimuth viewing angle specifically comprises:
and analyzing the position deviation information of the position reference information relative to the preset position information, and calculating the corresponding cutting area by combining the position deviation information.
6. The method of single-screen imaging according to any one of claims 1-3, further comprising:
generating an orthogonal camera which is perpendicular to the plane corresponding to its azimuth; the orthogonal camera is used for intercepting, in the virtual scene, the scene picture corresponding to the azimuth viewing angle.
7. The method according to any one of claims 1 to 3, wherein the acquiring of the position reference information corresponding to the viewing position specifically comprises:
converting viewing position information in a viewing space into virtual position information in a virtual scene according to a corresponding relation between a space coordinate of the viewing space and a virtual coordinate of the virtual scene; and using the virtual position information as position reference information;
or;
converting viewing position information in a viewing space into position pixel information in a virtual scene according to a corresponding relation between a space coordinate of the viewing space and a picture pixel of the virtual scene; and using the position pixel information as position reference information.
8. The method of claim 7, wherein:
the specific proportional relation between the scene model of the virtual scene and the space model of the viewing space is 1:1.
9. An intelligent device applying the single-screen imaging method according to any one of claims 1-8, comprising:
the acquisition module is used for acquiring position reference information corresponding to the viewing position;
the calculation module is used for calculating an azimuth viewing angle of one azimuth of the viewing position by combining the position reference information;
the picture generation module is used for generating a corresponding scene picture from the virtual scene according to the azimuth viewing angle; and projecting the scene picture.
10. A system applying the method for single-screen imaging according to any one of claims 1 to 8, characterized by comprising an intelligent device, a projection device and a picture presentation device:
the smart device includes:
the acquisition module is used for acquiring position reference information corresponding to the viewing position;
the calculation module is used for calculating an azimuth viewing angle of one azimuth of the viewing position by combining the position reference information;
the picture generation module is used for generating a corresponding scene picture from the virtual scene according to the azimuth viewing angle; projecting the scene picture;
the projection equipment projects the scene picture on a picture presentation device, and a viewing space is formed inside the picture presentation device.
11. The system of claim 10, further comprising:
and the mobile terminal is used for acquiring the viewing position within the viewing space of the picture presenting device.
CN201811185501.6A 2018-10-11 2018-10-11 Single-screen imaging method, intelligent equipment and system Active CN111050146B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811185501.6A CN111050146B (en) 2018-10-11 2018-10-11 Single-screen imaging method, intelligent equipment and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811185501.6A CN111050146B (en) 2018-10-11 2018-10-11 Single-screen imaging method, intelligent equipment and system

Publications (2)

Publication Number Publication Date
CN111050146A CN111050146A (en) 2020-04-21
CN111050146B true CN111050146B (en) 2022-04-15

Family

ID=70229089

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811185501.6A Active CN111050146B (en) 2018-10-11 2018-10-11 Single-screen imaging method, intelligent equipment and system

Country Status (1)

Country Link
CN (1) CN111050146B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN206350095U (en) * 2016-05-30 2017-07-21 青岛量子智能科技有限公司 A kind of three-dimensional filming system dynamically tracked based on human body
CN107193372A (en) * 2017-05-15 2017-09-22 杭州隅千象科技有限公司 From multiple optional position rectangle planes to the projecting method of variable projection centre

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101072366B (en) * 2007-05-24 2010-08-11 上海大学 Free stereo display system based on light field and binocular vision technology
WO2013054746A1 (en) * 2011-10-11 2013-04-18 シャープ株式会社 Optical system
JP6644371B2 (en) * 2014-12-17 2020-02-12 マクセル株式会社 Video display device
CN106154707B (en) * 2016-08-29 2018-01-05 广州大西洲科技有限公司 virtual reality projection imaging method and system
CN108616731B (en) * 2016-12-30 2020-11-17 艾迪普科技股份有限公司 Real-time generation method for 360-degree VR panoramic image and video


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Immersive virtual reality visualization display system based on big data technology (基于大数据技术的沉浸式虚拟现实可视化展示系统); Zhou An et al.; Beita Software (北塔软件); 2016-12-30; pp. 20-25 *

Also Published As

Publication number Publication date
CN111050146A (en) 2020-04-21

Similar Documents

Publication Publication Date Title
EP3057066B1 (en) Generation of three-dimensional imagery from a two-dimensional image using a depth map
CN107193372B (en) Projection method from multiple rectangular planes at arbitrary positions to variable projection center
EP2396767B1 (en) Methods and systems for determining the pose of a camera with respect to at least one object of a real environment
CN107341832B (en) Multi-view switching shooting system and method based on infrared positioning system
US8189035B2 (en) Method and apparatus for rendering virtual see-through scenes on single or tiled displays
CN101189643A (en) 3D image forming and displaying system
US10567649B2 (en) Parallax viewer system for 3D content
US20080246757A1 (en) 3D Image Generation and Display System
AU2018249563B2 (en) System, method and software for producing virtual three dimensional images that appear to project forward of or above an electronic display
US20140306954A1 (en) Image display apparatus and method for displaying image
CN111179407A (en) Virtual scene creating method, virtual scene projecting system and intelligent equipment
KR20080034419A (en) 3d image generation and display system
Mulligan et al. Stereo-based environment scanning for immersive telepresence
CN111050148A (en) Three-folding-screen-site-based projection method and system and three-folding-screen site
CN111050145B (en) Multi-screen fusion imaging method, intelligent device and system
CN111045286A (en) Projection method and system based on double-folding screen field and double-folding screen field
CN111050146B (en) Single-screen imaging method, intelligent equipment and system
KR20120119774A (en) Stereoscopic image generation method, device and system using circular projection and recording medium for the same
JP7394566B2 (en) Image processing device, image processing method, and image processing program
CN111131726B (en) Video playing method, intelligent device and system based on multi-screen fusion imaging
De Sorbier et al. Depth camera based system for auto-stereoscopic displays
JP2005092363A (en) Image generation device and image generation program
CN111050156A (en) Projection method and system based on four-fold screen field and four-fold screen field
CN111050144A (en) Projection method and system based on six-fold screen field and six-fold screen field
CN111050147A (en) Projection method and system based on five-fold screen field and five-fold screen field

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220907

Address after: 201508 1st floor, No. 1000, Tingwei Road, Jinshan District, Shanghai (Bay area science and Innovation Center)

Patentee after: Shanghai Hongxing Cloud Computing Technology Co.,Ltd.

Address before: 200000 da-001, 4th floor, 518 Linyu Road, Pudong New Area, Shanghai

Patentee before: SHANGHAI YUNSHEN INTELLIGENT TECHNOLOGY Co.,Ltd.