CN108093245B - Multi-screen fusion method, system, device and computer readable storage medium - Google Patents


Publication number
CN108093245B
CN108093245B (application CN201711387141.3A)
Authority
CN
China
Prior art keywords: projection, screen, screens, sub, cones
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711387141.3A
Other languages
Chinese (zh)
Other versions
CN108093245A (en)
Inventor
方希旺
俞蔚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Kelan Information Technology Co ltd
Original Assignee
Zhejiang Kelan Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Kelan Information Technology Co ltd
Priority to CN201711387141.3A
Publication of CN108093245A
Application granted
Publication of CN108093245B
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3141Constructional details thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The invention discloses a multi-screen fusion method, system, device and computer-readable storage medium for projection scenes in which the screens meet at an included angle. The method comprises the following steps: setting an observation point, obtaining the panoramic view cone formed by the observation point and all screens, and cutting the panoramic view cone into N sub-view cones in one-to-one correspondence with the screens, where the included angle between the sight lines of adjacent sub-view cones is complementary to the included angle between the corresponding adjacent screens, and N is a positive integer; configuring a context three-dimensional display environment for each of the N sub-view cones according to its display-area information; and receiving a projection instruction input by the user and completing the projection of each screen in the context three-dimensional display environment according to that instruction. With this method, adjacent screens continue each other's field of view and space, the large projection area composed of multiple screens forms one coherent three-dimensional projection, the screen joints show no folding, and the visual effect is that of a single large display.

Description

Multi-screen fusion method, system, device and computer readable storage medium
Technical Field
The invention relates to the technical field of graphics, in particular to a multi-screen fusion method, a multi-screen fusion system, a multi-screen fusion device and a computer-readable storage medium.
Background
With the continuous development of scientific-computing visualization technology, three-dimensional display is widely used across social life — digital cities, traffic monitoring, real-estate development, military applications, scenic-area planning, film and television production, and so on — and visual data are generally shown on a screen for people to observe, analyze and reimagine. Some special environments now require three-dimensional display in which visual data must be presented on a huge screen, or on several huge screens simultaneously, and the screens may even be placed at an included angle rather than in one plane. For projection scenes in which the screens meet at an angle, the prior art fuses the angled screens poorly: the folding at the screen joints is conspicuous, and the display cannot be taken in at a glance as a single planar space on one screen.
Therefore, how to provide a solution to the above technical problem is a problem that needs to be solved by those skilled in the art.
Disclosure of Invention
The invention aims to provide a multi-screen fusion method, system, device and computer-readable storage medium in which adjacent screens continue each other's field of view and space, the large projection area composed of multiple screens forms one coherent three-dimensional projection, the screen joints show no folding, and the visual effect is that of a single large display.
In order to solve the above technical problem, the present invention provides a multi-screen fusion method, which is applied to a projection scene with an included angle between screens, and includes:
setting an observation point, acquiring a panoramic view cone formed by the observation point and all screens, and dividing the panoramic view cone into N sub view cones which are in one-to-one correspondence with the screens, wherein an included angle formed by the sight lines of the adjacent sub view cones is complementary with an included angle formed by the adjacent screens corresponding to the adjacent sub view cones, and N is a positive integer;
respectively configuring context three-dimensional display environments of the N sub-view cones according to the display area information of the N sub-view cones;
and receiving a projection instruction input by a user, and completing the projection of each screen in the context three-dimensional display environment according to the projection instruction.
Preferably, the process of respectively configuring the context three-dimensional display environment of the N sub-view cones according to the display area information of the N sub-view cones specifically includes:
respectively determining parameters of glFrustum () according to the display area information of the N sub-view cones;
and calling glFrustum () to respectively configure the context three-dimensional display environment of the N sub-view cones.
Preferably, the parameters of glFrustum() are specifically the up, down, left, right, near and far information of the sub-view cone.
Preferably, the process of completing the projection of each screen in the context three-dimensional display environment according to the projection instruction specifically includes:
resolving the projection instruction into a drawing command in the context three-dimensional display environment;
finishing the drawing of the projection content corresponding to each screen according to the drawing command;
and projecting the drawn projection content to a corresponding screen.
Preferably, the drawing command is a hybrid drawing command.
Preferably, when the projection of each of the screens is controlled by a projection device corresponding to each of the screens in a one-to-one correspondence, after the configuring the contextual three-dimensional display environment of the N sub-view cones, and before receiving a projection instruction input by a user, the method further includes:
and configuring a network communication environment, and synchronizing the space coordinates of the observation points acquired by the projection devices.
Preferably, the receiving a projection instruction input by a user, and the process of completing the projection of each screen in the context three-dimensional display environment according to the projection instruction specifically includes:
setting a projection device corresponding to a main screen selected by a user as a main projection device;
the main projection device receives the projection instruction input by the user and sends the projection instruction to other projection devices;
and each projection device respectively completes the projection of each screen under the context three-dimensional display environment according to the projection instruction.
In order to solve the above technical problem, the present invention further provides a multi-screen fusion system, which is applied to a projection scene with an included angle between screens, and includes:
the setting unit is used for setting an observation point, acquiring a panoramic view cone formed by the observation point and all screens, and dividing the panoramic view cone into N sub view cones which are in one-to-one correspondence with the screens, wherein an included angle formed by the sight lines of the adjacent sub view cones is complementary with an included angle formed by the adjacent screens corresponding to the adjacent sub view cones, and N is a positive integer;
the configuration unit is used for respectively configuring context three-dimensional display environments of the N sub-view cones according to the display area information of the N sub-view cones;
and the projection unit is used for receiving a projection instruction input by a user and completing projection of each screen in the context three-dimensional display environment according to the projection instruction.
In order to solve the above technical problem, the present invention further provides a multi-screen fusion apparatus applied to a projection scene with an included angle between screens, including:
a memory for storing a computer program;
and the processor is used for realizing the steps of any one of the multi-screen fusion methods when the computer program is executed.
In order to solve the above technical problem, the present invention further provides a computer-readable storage medium, which is applied to a projection scene with an included angle between screens, wherein a computer program is stored on the computer-readable storage medium, and when being executed by a processor, the computer program implements the steps of any one of the multi-screen fusion methods described above.
The invention provides a multi-screen fusion method applied to projection scenes in which the screens meet at an included angle, comprising: setting an observation point, obtaining the panoramic view cone formed by the observation point and all screens, and cutting it into N sub-view cones in one-to-one correspondence with the screens, where the included angle between the sight lines of adjacent sub-view cones is complementary to the included angle between the corresponding adjacent screens, and N is a positive integer; configuring a context three-dimensional display environment for each of the N sub-view cones according to its display-area information; and receiving a projection instruction input by the user and completing the projection of each screen in the context three-dimensional display environment according to that instruction.
For projection scenes in which the screens meet at an included angle, where the prior art fuses the angled screens poorly, this application cuts the panoramic view cone formed by the observation point and all screens into sub-view cones in one-to-one correspondence with the screens, and completes the projection of each screen in the context three-dimensional display environment of its own sub-view cone. Adjacent screens therefore continue each other's field of view and space, the large projection area composed of multiple screens forms one coherent three-dimensional projection, and the screen joints show no folding, so that an observer perceives the screens at a glance as one planar space — the visual effect of a single large display.
The invention also provides a multi-screen fusion system, a multi-screen fusion device and a computer readable storage medium, which have the same beneficial effects as the method.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed for the prior art and the embodiments are briefly described below. The drawings described here cover only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart illustrating a multi-screen fusion method according to the present invention;
FIG. 2 is a schematic diagram of a two-screen cone segmentation method according to the present invention;
FIG. 3 is a diagram illustrating a multi-screen fusion effect for implementing the two-screen view frustum shown in FIG. 2;
fig. 4 is a schematic structural diagram of a multi-screen fusion system provided by the present invention.
Detailed Description
The core of the invention is to provide a multi-screen fusion method, system, device and computer-readable storage medium in which adjacent screens continue each other's field of view and space, the large projection area composed of multiple screens forms one coherent three-dimensional projection, the screen joints show no folding, and the visual effect is that of a single large display.
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a process flow diagram of a multi-screen fusion method provided in the present invention, the method is applied to a projection scene with an included angle between screens, and includes:
step S11: setting an observation point, acquiring a panoramic view cone formed by the observation point and all screens, and cutting the panoramic view cone into N sub view cones which are in one-to-one correspondence with the screens, wherein an included angle formed by the sight lines of adjacent sub view cones is complementary with an included angle formed by adjacent screens corresponding to the adjacent sub view cones, and N is a positive integer;
it should be noted that, in the present application, the observation point is set in advance by the user according to the observation requirement, and only needs to be set once, and does not need to be reset unless modified according to the actual situation. Since the multi-screen fusion of the present application is performed based on the viewpoint, the visual effect near the viewpoint is the best.
Specifically, after the observation point is set, when the panoramic view cone formed by the observation point and all screens is cut into sub-view cones in one-to-one correspondence with the screens, the included angle between the sight lines of adjacent sub-view cones must be complementary to the included angle between the corresponding adjacent screens, in the sense that the two angles sum to 180 degrees. This ensures that the content spliced across the screens matches what the original large panoramic view cone would present, so no projection content is lost.
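To make that angle relation concrete, the sketch below (an illustration, not part of the patent) models two screens meeting at a chosen included angle and takes each sub-view cone's sight line perpendicular to its screen; the sight-line angle then always comes out as 180 degrees minus the screen angle, so 90-degree screens give 90-degree sight lines.

```python
import math

def gaze_angle_for_screens(screen_angle_deg):
    """Two screens meet at the given included angle; each sub-view cone's
    sight line is taken perpendicular to its screen (an idealization for
    illustration). Returns the included angle of the two sight lines in
    degrees -- always the supplement of the screen angle."""
    half = math.radians(screen_angle_deg) / 2.0
    # Screen directions, laid out symmetrically about the y axis.
    s1 = (math.sin(half), math.cos(half))
    s2 = (-math.sin(half), math.cos(half))
    # Sight lines: each screen direction rotated 90 degrees toward the viewer.
    n1 = (-s1[1], s1[0])
    n2 = (s2[1], -s2[0])
    dot = n1[0] * n2[0] + n1[1] * n2[1]
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))
```

For the 90-degree two-screen example later in the description, `gaze_angle_for_screens(90.0)` is 90 degrees, matching the requirement that the two angles sum to 180.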
Step S12: respectively configuring context three-dimensional display environments of the N sub-view cones according to the display area information of the N sub-view cones;
specifically, scientific computing visualization is to convert large-scale data generated by scientific and engineering computing and the like into graphics and images by using the principles and methods of computer graphics or general graphics, and to express the graphics and images in an intuitive form. Specifically, the present application may configure a context three-dimensional display environment by using an Open Graphics Library (OpenGL), where the context three-dimensional display environment is specifically an OpenGL context three-dimensional display environment, and the steps include: initializing a rendering context environment, wherein the rendering process comprises viewpoint transformation- > model transformation- > projection transformation- > viewport transformation- > presentation; reading the display area information of each sub-view cone; and calling a setting function to respectively set the context three-dimensional display environment of each sub-view cone according to the display area information of the sub-view cone.
Step S13: and receiving a projection instruction input by a user, and completing projection of each screen in a context three-dimensional display environment according to the projection instruction.
Specifically, the projection of each screen is completed in the context three-dimensional display environment of its own sub-view cone, so that all screens appear to share one viewing space: between two adjacent angled screens the field of view genuinely extends, the space is continued rather than merely juxtaposed, and the screen joints show essentially no folding. In addition, the screens of the present application may show the graphics and image content by projection or may display it directly. It should be noted that the projection of all screens may be controlled by a single projection device, or each screen may be controlled by its own projection device in one-to-one correspondence; the user chooses according to the actual use requirement.
On the basis of the above-described embodiment:
as a preferred embodiment, the process of respectively configuring the contextual three-dimensional display environment of the N sub-view cones according to the display area information of the N sub-view cones specifically includes:
respectively determining parameters of glFrustum () according to the display area information of the N sub-view cones;
and calling glFrustum() to respectively configure the context three-dimensional display environments of the N sub-view cones.
Specifically, perspective projection models observing an object through a transparent plane set between the viewer and the object, called the picture plane (the projection plane). The position of the eye is the viewpoint (the projection center); a line from the viewpoint to a point on the object is a sight line (a projection line); the intersection of each sight line with the picture plane is the perspective projection of that point, and connecting these projected points yields the perspective image of the object. The view cones from the viewpoint in this application clearly use perspective projection.
In the OpenGL graphics interface, perspective projection is expressed in two forms: symmetric, realized with the interface gluPerspective, and asymmetric, realized with the interface glFrustum. Since the sub-view cones produced by the division are in general asymmetric, the present application calls glFrustum() to configure the context three-dimensional display environment. Of course, if the complete panoramic view cone happens to be spliced from symmetric sub-view cones, gluPerspective() can be called directly instead, which is simpler and more convenient.
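The symmetric form is just a special case of the asymmetric one, which the small conversion below makes explicit (an illustration; the function name is the author's, not an OpenGL API): gluPerspective-style parameters map to glFrustum bounds that are mirror-symmetric about zero.

```python
import math

def perspective_to_frustum(fovy_deg, aspect, near, far):
    """Express gluPerspective(fovy, aspect, near, far) as the equivalent
    symmetric glFrustum bounds (left, right, bottom, top, near, far)."""
    top = near * math.tan(math.radians(fovy_deg) / 2.0)
    right = top * aspect
    return -right, right, -top, top, near, far
```

An off-center sub-cone (say bottom = -5, top = 3) has no such fovy/aspect representation, which is why glFrustum is the interface used here.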
As a preferred embodiment, the parameters of glFrustum() are specifically the up, down, left, right, near and far information of the sub-view cone.
Specifically, the parameters of glFrustum() are the up, down, left, right, near and far information of the sub-view cone, that is, the width and height of the display area together with the near and far clipping distances.
As a preferred embodiment, the process of completing the projection of each screen in the context three-dimensional display environment according to the projection instruction is specifically as follows:
analyzing the projection instruction into a drawing command in a context three-dimensional display environment;
finishing the drawing of the projection content corresponding to each screen according to the drawing command;
and projecting the drawn projection content to a corresponding screen.
Specifically, to improve performance, the method draws only the objects that intersect the view cones. The received projection instruction input by the user is parsed into drawing commands in the context three-dimensional display environment of each sub-view cone, and the projection content corresponding to each screen is then drawn according to those commands, so that the drawn content projected on the screens together matches the content to be projected for the panoramic view cone formed by the observation point and all screens.
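"Draw only the objects that intersect the view cone" is ordinary frustum culling; a hedged sketch (not from the patent) for glFrustum-style bounds in camera space follows. The side planes pass through the eye at the origin, so each reduces to a plane through the origin determined by the near-rectangle edge.

```python
import math

def sphere_in_frustum(center, radius, left, right, bottom, top, near, far):
    """Conservative sphere-vs-frustum test in camera space (eye at the
    origin, looking down -z), with the frustum given by glFrustum-style
    bounds. Returns False when the sphere lies entirely outside some
    plane, i.e. the object can be skipped before drawing."""
    x, y, z = center
    # Each plane as (a, b, c, d) with inward normal: ax + by + cz + d >= 0 inside.
    planes = [
        (near, 0.0, left, 0.0),     # left plane, through the eye
        (-near, 0.0, -right, 0.0),  # right plane
        (0.0, near, bottom, 0.0),   # bottom plane
        (0.0, -near, -top, 0.0),    # top plane
        (0.0, 0.0, -1.0, -near),    # near plane z = -near
        (0.0, 0.0, 1.0, far),       # far plane z = -far
    ]
    for a, b, c, d in planes:
        norm = math.sqrt(a * a + b * b + c * c)
        if (a * x + b * y + c * z + d) / norm < -radius:
            return False  # entirely outside this plane: cull
    return True
```

Running this per sub-view cone before issuing drawing commands keeps each screen's draw list to the objects its cone can actually see.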
As a preferred embodiment, the drawing commands are embodied as hybrid drawing commands.
Specifically, the visualization of a three-dimensional data scene can be realized with hybrid rendering, the combination of surface rendering and volume rendering. Surface rendering first constructs intermediate geometric primitives (such as isosurfaces) from the three-dimensional data scene and then draws the surfaces with conventional computer-graphics techniques. Volume rendering needs no intermediate geometric primitives: the two-dimensional image on the screen is generated directly from the three-dimensional data scene. Hybrid rendering exploits volume rendering's ability to reflect the overall information and characteristics of the data while using surface rendering to show clear, important interfaces, combining the advantages of both methods — for example, representing bones by surface rendering and structures such as muscles and blood vessels by volume rendering.
As a preferred embodiment, when the projection of each screen is controlled by the projection devices corresponding to the screens one to one, after the context three-dimensional display environment of the N sub-view cones is configured, before receiving a projection instruction input by a user, the method further includes:
and configuring a network communication environment, and synchronizing the space coordinates of the observation points acquired by each projection device.
Specifically, the projection of all screens can be controlled by a single projection device, which is simple, efficient and economical with projection-device resources. Alternatively, each screen can be controlled by its own projection device in one-to-one correspondence, which makes it convenient to adjust the projection content of each screen. When each projection device controls its own screen, the spatial coordinates of the observation point held by the devices may fall out of sync. Therefore, after the context three-dimensional display environments of the sub-view cones are configured and before the user's projection instruction is received, a network communication environment is configured and the observation-point coordinates held by the projection devices are synchronized. This can be implemented with sockets over TCP (Transmission Control Protocol), although other approaches may also be used.
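A minimal sketch of that synchronization, assuming plain TCP sockets with JSON-encoded coordinates (the function names, port handling and one-peer layout are illustrative, not from the patent):

```python
import json
import socket
import threading

def serve_viewpoint(coords, host="127.0.0.1"):
    """Master side: publish the observation-point coordinates to one peer
    over TCP and return the bound port. A real deployment would add message
    framing, reconnection and support for many peers."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))          # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def handle():
        conn, _ = srv.accept()
        with conn:
            conn.sendall(json.dumps(coords).encode("utf-8"))
        srv.close()

    threading.Thread(target=handle, daemon=True).start()
    return port

def fetch_viewpoint(port, host="127.0.0.1"):
    """Projection-device side: fetch the shared observation-point coordinates."""
    chunks = []
    with socket.create_connection((host, port)) as s:
        while True:
            buf = s.recv(4096)
            if not buf:
                break
            chunks.append(buf)
    return json.loads(b"".join(chunks).decode("utf-8"))
```

Each projection device then configures its sub-view cone from the same fetched coordinates, so no device renders against a stale observation point.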
As a preferred embodiment, the process of receiving a projection instruction input by a user and completing projection of each screen in a context three-dimensional display environment according to the projection instruction specifically includes:
setting a projection device corresponding to a main screen selected by a user as a main projection device;
the main projection device receives a projection instruction input by a user and sends the projection instruction to other projection devices;
and each projection device respectively completes projection of each screen under the context three-dimensional display environment according to the projection instruction.
Specifically, when each projection device controls a corresponding screen, one screen corresponds to one projection device, and the imaging of one sub view cone is the content displayed by one projection device, where the projection device may be a computer. In order to simplify the control steps, the projection device corresponding to the main screen selected by the user is set as the main projection device, the main projection device only needs to receive the projection instruction input by the user, then the main projection device and other projection devices carry out reliable TCP communication, and the received projection instruction is automatically sent to other projection devices, so that the function of controlling the use conditions of all screens by only controlling the main projection device is realized.
For the convenience of understanding of the present application, the following describes a multi-screen fusion method provided by the present application with reference to specific examples:
referring to fig. 2, fig. 2 is a schematic diagram of a two-screen cone segmentation method according to the present invention.
In this example there are two screens of the same size, a vertical screen and a horizontal (depression) screen, each 12 m long and 8 m wide, meeting at an included angle of 90 degrees. The vertical distance belowVertical from the observation point down to the depression screen is 5 m, and the horizontal distance frontVertical from the observation point to the vertical screen is 6 m. The observation point is laterally centered, that is, its distances to the left and right boundaries of the vertical and depression screens are equal.
The panoramic view cone formed by the observation point with the vertical screen and the depression screen is spliced from two sub-view cones, and the sight line of the sub-view cone for the vertical screen and that of the sub-view cone for the depression screen must form a 90-degree included angle, complementary to the 90-degree included angle between the two screens, to avoid losing projection content. The two sub-view cones are clearly not symmetric, so the application calls the glFrustum interface to set the glFrustum(left, right, bottom, top, nearVal, farVal) parameters of each sub-view cone.
Sub-view cone projected onto the vertical screen: right = -left = length/2 (the observation point is laterally centered), top = width - belowVertical, bottom = -belowVertical.
Sub-view cone projected onto the depression screen: right = -left = length/2 (laterally centered), top = frontVertical, bottom = frontVertical - width. In both cases nearVal need only be smaller than farVal; the pair is set to satisfy that relation and does not otherwise enter the calculation here.
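Plugging in the example's numbers makes the bounds concrete. The snippet below is an illustration under one reading of the (garbled) distances, an assumption: belowVertical = 5 m is the eye height above the depression screen, frontVertical = 6 m the horizontal distance to the vertical screen.

```python
# Two-screen example: both screens 12 m x 8 m, meeting at 90 degrees.
length, width = 12.0, 8.0
belowVertical, frontVertical = 5.0, 6.0

# Vertical (wall) screen sub-cone: gaze horizontal, eye laterally centered;
# the wall extends belowVertical below eye level and width - belowVertical above.
wall = dict(left=-length / 2, right=length / 2,
            bottom=-belowVertical, top=width - belowVertical)

# Depression (floor) screen sub-cone: gaze straight down, "up" toward the wall;
# frontVertical of floor lies wall-ward of the nadir, the rest behind it.
floor = dict(left=-length / 2, right=length / 2,
             bottom=frontVertical - width, top=frontVertical)
```

Under this reading the wall cone gets bounds (-6, 6, -5, 3) and the floor cone (-6, 6, -2, 6) — both visibly asymmetric, consistent with the choice of glFrustum over gluPerspective.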
The coordinate spaces of the observation point on the two screens are synchronized, and the gaze directions of the two sub-view cones are fixed 90 degrees apart. The depression screen is set as the main screen, the computer projecting to it is the main computer, and the main computer connects to the computer projecting to the vertical screen. A third-party control application connects to the computer of the main screen and controls the programs of the computers for the depression and vertical screens by entering commands. Of course, the third-party control application may be omitted, and the programs of both computers may be controlled directly from the computer of the main screen.
Referring to fig. 3, fig. 3 illustrates the multi-screen fusion effect achieved for the two-screen view cones shown in fig. 2. Evidently the application realizes continuation of field of view and space between the vertical screen and the depression screen, with no folding at the screen joint, producing the visual effect of a single large display and thereby serving related industries and society better and more conveniently.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a multi-screen fusion system provided in the present invention, the system is applied to a projection scene with an included angle between screens, and includes:
a setting unit 1, configured to set an observation point, acquire a panoramic view cone formed by the observation point and all screens, and divide the panoramic view cone into N sub-view cones in one-to-one correspondence with the screens, wherein the included angle formed by the sight lines of adjacent sub-view cones is complementary to the included angle formed by the corresponding adjacent screens, and N is a positive integer;
the configuration unit 2 is used for respectively configuring context three-dimensional display environments of the N sub-view cones according to the display area information of the N sub-view cones;
and the projection unit 3 is used for receiving a projection instruction input by a user and completing projection of each screen in a context three-dimensional display environment according to the projection instruction.
For the details of the system provided by the present invention, please refer to the method embodiments above, which are not repeated herein.
The invention also provides a multi-screen fusion device, which is applied to a projection scene with included angles between screens, and comprises the following components:
a memory for storing a computer program;
and the processor is used for realizing the steps of any one multi-screen fusion method when executing the computer program.
For the introduction of the apparatus provided by the present invention, please refer to the above method embodiments, which are not described herein again.
The invention also provides a computer readable storage medium, which is applied to a projection scene with an included angle between screens, wherein a computer program is stored on the computer readable storage medium, and when being executed by a processor, the computer program realizes the steps of any one of the multi-screen fusion methods.
For the introduction of the computer-readable storage medium provided by the present invention, please refer to the above method embodiments, which are not repeated herein.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The system, the device and the computer readable storage medium disclosed by the embodiment correspond to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
It should also be noted that, in the present specification, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. A multi-screen fusion method is applied to a projection scene with included angles between screens, and is characterized by comprising the following steps:
setting an observation point, acquiring a panoramic view cone formed by the observation point and all screens, and dividing the panoramic view cone into N sub view cones which are in one-to-one correspondence with the screens, wherein an included angle formed by the sight lines of the adjacent sub view cones is complementary with an included angle formed by the adjacent screens corresponding to the adjacent sub view cones, and N is a positive integer;
respectively determining parameters of glFrustum () according to the display area information of the N sub-view cones;
calling glFrustum () to respectively configure the context three-dimensional display environment of the N sub-view cones;
and receiving a projection instruction input by a user, and completing the projection of each screen in the context three-dimensional display environment according to the projection instruction.
2. A multi-screen fusion method as recited in claim 1, wherein the parameters of glFrustum() are specifically the up, down, left, right, near, and far information of the sub-view cones.
3. A multi-screen fusion method as recited in claim 1, wherein the process of completing the projection of each screen in the contextual three-dimensional display environment according to the projection instruction is specifically:
resolving the projection instruction into a drawing command in the context three-dimensional display environment;
finishing the drawing of the projection content corresponding to each screen according to the drawing command;
and projecting the drawn projection content to a corresponding screen.
4. A multi-screen fusion method as recited in claim 3, wherein the draw command is specifically a hybrid draw command.
5. A multi-screen fusion method as recited in any one of claims 1-4, wherein when the projection of each of the screens is controlled by its respective projection device, the method further comprises, after configuring the contextual three-dimensional display environment of the N sub-view cones and before receiving a user-input projection instruction:
and configuring a network communication environment, and synchronizing the space coordinates of the observation points acquired by the projection devices.
6. A multi-screen fusion method as recited in claim 5, wherein the receiving of the projection instruction input by the user, and the process of completing the projection of each screen in the contextual three-dimensional display environment according to the projection instruction specifically includes:
setting a projection device corresponding to a main screen selected by a user as a main projection device;
the main projection device receives the projection instruction input by the user and sends the projection instruction to other projection devices;
and each projection device respectively completes the projection of each screen under the context three-dimensional display environment according to the projection instruction.
7. A multi-screen fusion system is applied to projection scenes with included angles between screens, and is characterized by comprising the following components:
the setting unit is used for setting an observation point, acquiring a panoramic view cone formed by the observation point and all screens, and dividing the panoramic view cone into N sub view cones which are in one-to-one correspondence with the screens, wherein an included angle formed by the sight lines of the adjacent sub view cones is complementary with an included angle formed by the adjacent screens corresponding to the adjacent sub view cones, and N is a positive integer;
a configuration unit, configured to determine parameters of glFrustum() according to the display region information of the N sub-view cones, and to call glFrustum() to respectively configure the context three-dimensional display environment of the N sub-view cones;
and the projection unit is used for receiving a projection instruction input by a user and completing projection of each screen in the context three-dimensional display environment according to the projection instruction.
8. A multi-screen fusion device is applied to a projection scene with included angles between screens, and is characterized by comprising:
a memory for storing a computer program;
a processor for implementing the steps of the multi-screen fusion method according to any one of claims 1 to 6 when executing the computer program.
9. A computer-readable storage medium applied to a projection scene with an included angle between screens, wherein the computer-readable storage medium has a computer program stored thereon, and the computer program, when executed by a processor, implements the steps of the multi-screen fusion method according to any one of claims 1 to 6.
CN201711387141.3A 2017-12-20 2017-12-20 Multi-screen fusion method, system, device and computer readable storage medium Active CN108093245B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711387141.3A CN108093245B (en) 2017-12-20 2017-12-20 Multi-screen fusion method, system, device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN108093245A CN108093245A (en) 2018-05-29
CN108093245B true CN108093245B (en) 2020-05-05

Family

ID=62177573

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711387141.3A Active CN108093245B (en) 2017-12-20 2017-12-20 Multi-screen fusion method, system, device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN108093245B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108920121A (en) * 2018-07-20 2018-11-30 重庆宝力优特科技有限公司 Control method, device and computer readable storage medium based on multi-screen terminal
CN110298922B (en) * 2019-07-04 2023-05-12 浙江科澜信息技术有限公司 Three-dimensional model simplification method, device and equipment
CN110913200B (en) * 2019-10-29 2021-09-28 北京邮电大学 Multi-view image generation system and method with multi-screen splicing synchronization

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101132535A (en) * 2007-09-12 2008-02-27 浙江大学 Multi-projection large screen split-joint method based on rotating platform
CN101236485A (en) * 2008-01-28 2008-08-06 国电信息中心 Multi-screen 3-D in-phase display process, device and system
CN101291251A (en) * 2008-05-09 2008-10-22 国网信息通信有限公司 Synchronized control method and system for multicomputer
CN101334891A (en) * 2008-08-04 2008-12-31 北京理工大学 Multichannel distributed plotting system and method
CN101794065A (en) * 2009-02-02 2010-08-04 中强光电股份有限公司 System for displaying projection
CN104298065A (en) * 2014-05-07 2015-01-21 浙江大学 360-degree three-dimensional display device and method based on splicing of multiple high-speed projectors
CN106445340A (en) * 2016-09-21 2017-02-22 青岛海信电器股份有限公司 Method and device for displaying stereoscopic image by double-screen terminal
CN106445339A (en) * 2016-09-21 2017-02-22 青岛海信电器股份有限公司 Three-dimensional image display method and device for double-screen terminal

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104751506B (en) * 2013-12-25 2017-10-27 艾迪普(北京)文化科技股份有限公司 A kind of Cluster Rendering method and apparatus for realizing three-dimensional graphics images


Similar Documents

Publication Publication Date Title
US10812780B2 (en) Image processing method and device
WO2019228188A1 (en) Method and apparatus for marking and displaying spatial size in virtual three-dimensional house model
WO2018188499A1 (en) Image processing method and device, video processing method and device, virtual reality device and storage medium
WO2017092303A1 (en) Virtual reality scenario model establishing method and device
CN108939556B (en) Screenshot method and device based on game platform
CN110072087B (en) Camera linkage method, device, equipment and storage medium based on 3D map
CN108093245B (en) Multi-screen fusion method, system, device and computer readable storage medium
CN112230836B (en) Object moving method and device, storage medium and electronic device
CN108133454B (en) Space geometric model image switching method, device and system and interaction equipment
WO2023207963A1 (en) Image processing method and apparatus, electronic device, and storage medium
US11651556B2 (en) Virtual exhibition space providing method for efficient data management
CN114998063B (en) Immersion type classroom construction method, system and storage medium based on XR technology
US11783445B2 (en) Image processing method, device and apparatus, image fitting method and device, display method and apparatus, and computer readable medium
CN104915994A (en) 3D view drawing method and system of three-dimensional data
CN109741431B (en) Two-dimensional and three-dimensional integrated electronic map frame
KR102107706B1 (en) Method and apparatus for processing image
CN111007997A (en) Remote display method, electronic device and computer-readable storage medium
CN113206993A (en) Method for adjusting display screen and display device
WO2020215789A1 (en) Virtual paintbrush implementing method and apparatus, and computer readable storage medium
KR102237519B1 (en) Method of providing virtual exhibition space using 2.5 dimensionalization
CN116860112B (en) Combined scene experience generation method, system and medium based on XR technology
CN109885172B (en) Object interaction display method and system based on Augmented Reality (AR)
CN114428573B (en) Special effect image processing method and device, electronic equipment and storage medium
CN109949396A (en) A kind of rendering method, device, equipment and medium
CN110427724A (en) Based on WebGL three-dimensional fire architecture model visualization method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant