CN109272478A - Screen projection method and device and related equipment

Screen projection method and device and related equipment

Info

Publication number
CN109272478A
Authority
CN
China
Prior art keywords
projector
screen
projection
pixel
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811099366.3A
Other languages
Chinese (zh)
Other versions
CN109272478B (en)
Inventor
吕毅
王志伟
黄涛
吴斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huaqiang Infante (shenzhen) Intelligent Technology Co
Original Assignee
Huaqiang Infante (shenzhen) Intelligent Technology Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huaqiang Infante (shenzhen) Intelligent Technology Co
Priority to CN201811099366.3A
Publication of CN109272478A
Application granted
Publication of CN109272478B
Legal status: Active


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 - Image mosaicing, e.g. composing plane images from plane sub-images

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Transforming Electric Information Into Light Information (AREA)
  • Projection Apparatus (AREA)

Abstract

The invention discloses a screen projection method and device and related equipment. The scheme obtains the theoretical pixel coordinates and the actual pixel coordinates of mark pixels on the projector display image, or the projection coordinates and the true coordinates of projection pixels on the screen, and adjusts the projector parameters so that the two obtained sets of coordinates approach each other, thereby determining the projector parameters and establishing a coordinate mapping relation between the pixels of the original image displayed by the projector and the projection pixels on the screen. The scheme places the screen coordinates and the projection pixel coordinates in one-to-one correspondence, ensures that the shot picture is projected onto the screen completely, avoids stretching and fitting the picture on the screen, preserves the authenticity of the picture, and solves the distortion problem of the prior art. It can further solve problems such as picture deformation and ghosting caused by poor splicing and fusion in dome screen projection in the prior art.

Description

Screen projection method and device and related equipment
Technical Field
The invention relates to the technical field of image processing, in particular to a screen projection method and device and related equipment.
Background
In fields such as theme parks, specialty films, science and technology exhibitions and driving simulation, dome screens, circular (ring) screens and screens of other forms are commonly used to project and play video images. Dome screen projection is a projection mode in which one or more projections (channels) are projected onto a dome screen to form a projection picture. Circular screen projection is a projection mode in which one or more projections (channels) are projected onto a circular screen to form a projection picture.
In prior-art screen projection, especially non-planar projection such as dome screen projection, the projection picture is often fitted and stretched on the screen so that it covers the whole screen. This distorts the picture, and the resulting projection picture is no longer a faithful reproduction of the original footage.
In addition, projection with two or more lenses is increasingly widely used because, compared with the traditional single-projection mode, it brings a large improvement in definition and brightness; the key difficulty lies in handling picture deformation and the splicing and fusion of overlapping pictures.
In existing multi-projection splicing and fusion, a theoretical relation between the projectors and the screen is generally established in three-dimensional image processing software such as Maya to render the film and complete the splicing and fusion. However, such software cannot reflect lens shift, and the theoretical projector positions and screen model surface are difficult to match exactly with the real ones; the pictures are therefore likely to be misaligned, causing problems such as picture distortion and ghosting.
Disclosure of Invention
The embodiment of the invention provides a screen projection method, a screen projection device and related equipment, which are mainly used for solving the problem of picture distortion in screen projection and solving the problems of deformation and double images in picture splicing and fusion.
In order to solve the technical problems, the following technical scheme is adopted: in a first aspect, a method for screen projection is provided, which includes the following steps: acquiring actual position coordinates of projection pixels on a screen, which correspond to marking pixels on a display image of the projector; calculating theoretical pixel coordinates of the marked pixels on a display image of the projector according to the actual position coordinates of the projection pixels; acquiring actual pixel coordinates of the marked pixels on the display image of the projector; adjusting the parameters of the projector to enable the theoretical pixel coordinate of the marked pixel to be close to the actual pixel coordinate; and establishing a coordinate mapping relation between the pixels of the original image displayed by the projector and the projection pixels on the screen based on the adjusted parameters of the projector.
In a second aspect, a method for screen projection is provided, which includes the following steps: acquiring pixel coordinates of marked pixels on a display image of the projector; calculating the projection coordinates of the corresponding projection pixels on the screen according to the pixel coordinates of the marking pixels; acquiring real coordinates of corresponding projection pixels on a screen; adjusting parameters of a projector to enable the projection coordinate of the same projection pixel to be close to the real coordinate; and establishing a coordinate mapping relation between the pixels of the original image displayed by the projector and the projection pixels on the screen based on the adjusted parameters of the projector.
In a third aspect, a screen projection apparatus is provided, including the following modules: the acquisition module is used for acquiring the actual position coordinates of the projection pixels on the screen, which correspond to the mark pixels on the display image of the projector; the calculation module is used for calculating theoretical pixel coordinates of the mark pixel on a display image of the projector according to the actual position coordinates of the projection pixel; the acquisition module is also used for acquiring the actual pixel coordinates of the mark pixels on the display image of the projector; the adjusting module is used for enabling the theoretical pixel coordinate of the marked pixel to be close to the actual pixel coordinate by adjusting the parameters of the projector; and the mapping module is used for establishing a coordinate mapping relation between the pixels of the original image displayed by the projector and the projection pixels on the screen based on the adjusted projector parameters.
In a fourth aspect, a screen projection apparatus is provided, which includes the following modules: the acquisition module is used for acquiring the pixel coordinates of the marked pixels on the display image of the projector; the calculation module is used for calculating the projection coordinates of the corresponding projection pixels on the screen according to the pixel coordinates of the marking pixels; the acquisition module is also used for acquiring the real coordinates of the corresponding projection pixels on the screen; the adjusting module is used for enabling the projection coordinate of the same projection pixel to be close to the real coordinate by adjusting the parameters of the projector; and the mapping module is used for establishing a coordinate mapping relation between the pixels of the original image displayed by the projector and the projection pixels on the screen based on the adjusted projector parameters.
In a fifth aspect, a computer device is provided, the computer device comprising a processor, a memory, a bus, and a communication interface; the memory is configured to store a computer program, the processor is connected to the memory through the bus, and when the computer device runs, the processor executes the computer program stored in the memory, so as to cause the computer device to perform the screen projection method according to the first aspect or the second aspect of the present invention.
A sixth aspect provides a computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computer device comprising a processor, cause the computer device to perform a screen projection method as described in the first or second aspect of the invention.
The projector display image described herein is an image/screen that is input to the projector and displayed on the display chip of the projector, and therefore may also be referred to as a projector input image.
According to the technical scheme, the embodiment of the invention has the following advantages:
the method and the device can calculate the coordinates of the corresponding projection pixels on the screen according to the pixel coordinates of the mark pixels on the display image of the projector, or calculate the coordinates of the corresponding mark pixels on the display image of the projector according to the coordinates of the projection pixels on the screen, namely, the theoretical pixel coordinates and the actual pixel coordinates of the mark pixels on the display image of the projector can be obtained, or the projection coordinates and the real coordinates of the projection pixels on the screen can be obtained. Then, the parameters of the projector are adjusted to enable the obtained two coordinates to be close to each other, so that the better parameters of the projector are determined, and further the coordinate mapping relation between the pixels of the original image displayed by the projector and the projected pixels on the screen is established.
According to the scheme of the invention, because a coordinate mapping relation is established between the pixels of the original image displayed by the projector and the projection pixels on the screen, the screen coordinates and the projection pixel coordinates can be placed in one-to-one correspondence, so that the shot picture can be projected onto the screen completely, stretching and fitting of the picture on the screen are avoided, the authenticity of the picture is ensured, and the picture distortion problem in the prior art is solved.
Furthermore, for the splicing area, the same reasoning ensures that pixels with the same coordinates overlap, so the spliced picture is also true to the original; no fitting, stretching or covering is needed, and the problems of ghosting and deformation are avoided. Meanwhile, the pixel brightness can be attenuated precisely according to the positional relation of the overlapping pixels, so that the brightness of the splicing area is uniform and consistent with the brightness of the whole picture. High-precision image splicing and fusion can therefore be achieved, solving the problems of picture deformation, ghosting, inconsistent brightness and the like caused by poor projection splicing and fusion in the prior art.
Drawings
Fig. 1 is a schematic flowchart of a screen projection method according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of another screen projection method according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a screen projection apparatus according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of another screen projection apparatus according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a computer device according to an embodiment of the present invention;
FIG. 6 is a schematic flow chart of a multi-projector dome screen projection method according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a world coordinate system of a dome screen and a projector coordinate system according to an embodiment of the present invention;
FIG. 8 is a diagram illustrating a relationship between a projector coordinate system and an image coordinate system in an exemplary application scenario of the present invention;
FIG. 9 is a schematic diagram illustrating a correspondence relationship between an original image and a dome screen generated in an application scenario example of the present invention;
FIG. 10 is a diagram illustrating an example of an application scenario of the present invention for performing stitching and fusion processing on an original image;
fig. 11 is a schematic structural diagram of a multi-projector dome screen projection apparatus according to an embodiment of the present invention;
FIG. 12 is a schematic flow chart of a two-projector dome screen projection method according to an embodiment of the invention;
FIG. 13 is a schematic diagram showing a relationship between a world coordinate system of a ball screen and a coordinate system of a projector according to an embodiment of the present invention;
FIG. 14 is a diagram illustrating a relationship between a projector coordinate system and an image coordinate system according to an exemplary application scenario of the present invention;
FIG. 15 is a schematic diagram illustrating a correspondence relationship between an original image and a dome screen generated in an application scenario example of the present invention;
FIG. 16 is a diagram illustrating a process of stitching and fusing original images according to an exemplary application scenario of the present invention;
FIG. 17 is a schematic structural diagram of a two-billiard-curtain projection arrangement according to an embodiment of the present invention;
FIG. 18 is a flow chart illustrating a method for multi-circular screen projection according to an embodiment of the present invention;
FIG. 19 is a schematic diagram of the relationship between the world coordinate system and the projector coordinate system of an application example of the present invention;
FIG. 20 is a diagram illustrating a relationship between a projector coordinate system and an image coordinate system according to an exemplary embodiment of the present invention;
FIG. 21 is a diagram illustrating a correspondence relationship between an original image and a circular screen generated in an application scenario example of the present invention;
FIG. 22 is a diagram illustrating stitching and fusing of original images according to an exemplary application scenario;
fig. 23 is a schematic structural diagram of a multi-projector circular screen projection apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The following are detailed descriptions of the respective embodiments.
Referring to fig. 1, an embodiment of the invention provides a screen projection method for solving the splicing and fusion problem of multiple projections. The screen is also called a projection screen or curtain, and may include regular and irregular screens such as flat screens, arc screens, dome screens and ring screens. Multiple projection means that multiple lenses are used to project images onto the screen. The method may comprise the steps of:
11. acquiring actual position coordinates of projection pixels on a screen, which correspond to marking pixels on a display image of the projector;
12. calculating theoretical pixel coordinates of the marking pixels on the display image of the projector according to the actual position coordinates of the projection pixels;
13. acquiring actual pixel coordinates of the marked pixels on the display image of the projector;
14. adjusting the parameters of the projector to enable the theoretical pixel coordinate of the marked pixel to be close to the actual pixel coordinate;
15. and establishing a coordinate mapping relation between the pixels of the original image displayed by the projector and the projection pixels on the screen based on the adjusted parameters of the projector.
Optionally, in some embodiments, obtaining the actual position coordinates of the projection pixels on the screen corresponding to the mark pixels on the projector display image in step 11 may include: drawing a plurality of mark points on the projector display image, and measuring with a three-dimensional measuring instrument the actual position coordinates of the projection pixels on the screen corresponding to the mark points; obtaining the actual pixel coordinates of the mark pixels on the projector display image in step 13 may include: recording the actual pixel coordinates of the drawn mark points.
Optionally, in some embodiments, the method may further include: pre-establishing a coordinate conversion relation between a mark pixel on a display image of a projector and a projection pixel on a screen; the calculating the theoretical pixel coordinate of the mark pixel on the display image of the projector according to the actual position coordinate of the projection pixel comprises the following steps: and performing calculation by adopting a theoretical derivation mode, for example, performing calculation based on the coordinate conversion relation.
Optionally, in some embodiments, establishing the coordinate transformation relation between a mark pixel on the projector display image and a projection pixel on the screen may include: establishing a world coordinate system based on the screen, and establishing, based on any projector, a projector coordinate system based on the projector, an image coordinate system based on the plane of the projector display chip, and a pixel coordinate system of the image displayed on the display chip; and establishing conversion relations according to the positional relations between the coordinate systems, thereby establishing the coordinate transformation relation between mark pixels on the projector display image and projection pixels on the screen.
Optionally, in some embodiments, establishing the world coordinate system based on the screen may include: if the screen is a dome screen, establishing the world coordinate system with the sphere center of the dome screen as the origin and the vertically downward direction as the Y axis; if the screen is a circular screen, establishing the world coordinate system with the point on the center axis of the circular screen's arc at half the screen height as the origin, the Z axis pointing horizontally toward the middle of the circular screen, and the Y axis pointing vertically downward. Establishing the following coordinate systems based on any projector may include: establishing a projector coordinate system with the optical center of the projector as the origin, establishing a two-dimensional image coordinate system with the center of the display chip as the origin based on the plane of the projector display chip, and establishing a pixel coordinate system with a pixel at one vertex of the display image as the origin based on the plane of the projector display chip. Establishing the conversion relations according to the positional relations between the coordinate systems may include: establishing a first conversion relation between the world coordinate system and the projector coordinate system according to the distance and rotation angle of the projector relative to the origin of the world coordinate system; establishing a second conversion relation between the projector coordinate system and the image coordinate system according to the principle of projection optics; and establishing a third conversion relation between the image coordinate system and the pixel coordinate system according to the size of the display chip and the size of a pixel.
Optionally, in some embodiments, the projector parameters, specifically the projector position parameters, may include the angles α, β, γ by which the projector rotates about the X, Y, Z axes of the world coordinate system at the origin, the translation amounts along the Xc, Yc, Zc axes of the projector coordinate system after the rotation, denoted Tx, Ty, Tz, and the offsets Cx, Cy of the projector lens center along the u, v axes of the image coordinate system.
Optionally, in some embodiments, adjusting the projector parameters in step 14 so that the theoretical pixel coordinates of the marked pixels are close to the actual pixel coordinates may include: calculating, for any mark point, the distance between its theoretical pixel coordinate and its actual pixel coordinate, the theoretical pixel coordinate being related to the projector parameters; summing the distances calculated for the plurality of mark points; and determining the projector parameters at which the calculated sum approaches 0.
Optionally, in some embodiments, the method further includes: generating a planar original image through three-dimensional modeling and simulated projection; acquiring pixel areas on the original image respectively corresponding to projection areas and splicing areas of different projectors on the screen according to the coordinate mapping relation; and performing image splicing and fusion processing based on the determined pixel regions respectively corresponding to the projection regions and the splicing regions of different projectors on the original image.
Optionally, in some embodiments, the screen is specifically a dome, the method is applied to a plurality of projectors to project images on the same dome, and before generating an original image of a plane through three-dimensional modeling and simulated projection, the method further includes: drawing a spherical screen grid image, converting the spherical screen grid image into a two-dimensional image displayed by the projector according to the coordinate mapping relation and the calculated position parameter of the projector, and drawing a planar grid image; and projecting the plane grid image to a projector for displaying, verifying the difference between the real spherical screen and the theoretical spherical screen, correcting world coordinates according to a verification result, and further correcting the coordinate mapping relation. Further, the generating of the original image of the plane by three-dimensional modeling and simulated projection may include: a dome screen model is established through three-dimensional modeling, and a circular original image on a two-dimensional plane is generated by placing a virtual equidistant fisheye camera at the center of a sphere of the dome screen model.
Optionally, in some embodiments, the screen is specifically a circular screen and the method is applied where a plurality of projectors project images onto the same circular screen; before generating the planar original image through three-dimensional modeling and simulated projection, the method further includes: obtaining the initial parameters of the projectors and the angle of the circular screen occupied by each projector; defining the angle of the splicing area according to the area of projection overlap; generating a circular-screen mathematical model as a coordinate dot matrix with a certain number of rows and columns at a set interval, and connecting the dot matrix to form an initial grid; and adjusting the projector parameters so that the vertical height of the grid projected on the screen matches the height of the real screen, each angle line of the grid falls at the corresponding position on the screen, and the grids of adjacent projection splicing areas overlap and match. Further, generating the planar original image through three-dimensional modeling and simulated projection may include: establishing a circular screen model through three-dimensional modeling, setting a virtual camera at the audience observation point of the circular screen model, projecting the image acquired by the virtual camera onto the circular screen model, and unrolling the image displayed on the circular screen model to generate a rectangular original image on a two-dimensional plane.
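By way of illustration, the following is a minimal sketch of generating such a coordinate dot matrix for a circular (cylindrical) screen in Python. The screen radius, height, angular span and lattice density are illustrative assumptions; the world-frame orientation follows the convention stated above (origin on the arc's center axis at half the screen height, Z toward the middle of the screen, Y vertically downward).

```python
import numpy as np

def circular_screen_lattice(radius, height, angle_span_deg, n_cols, n_rows):
    """Generate an n_rows x n_cols lattice of 3D points on a circular (cylindrical) screen.

    World frame: origin on the screen arc's center axis at half the screen height,
    Z pointing horizontally toward the middle of the screen, Y pointing vertically down.
    """
    angles = np.radians(np.linspace(-angle_span_deg / 2, angle_span_deg / 2, n_cols))
    heights = np.linspace(-height / 2, height / 2, n_rows)
    points = np.empty((n_rows, n_cols, 3))
    for i, y in enumerate(heights):
        points[i, :, 0] = radius * np.sin(angles)   # X: left-right along the arc
        points[i, :, 1] = y                         # Y: vertical (down positive)
        points[i, :, 2] = radius * np.cos(angles)   # Z: toward the middle of the screen
    return points

# Example (assumed values): a 120-degree circular screen, radius 6 m, height 4 m, 13 x 7 grid
grid = circular_screen_lattice(radius=6.0, height=4.0, angle_span_deg=120, n_cols=13, n_rows=7)
```

Connecting neighboring lattice points row by row and column by column then yields the initial grid described above.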
Optionally, in some embodiments, the performing image stitching and fusing processing may include: performing brightness attenuation processing on a splicing area on an original image by using an alpha channel; and respectively processing the original image and the alpha channel thereof by affine transformation, and performing fusion processing on the processed images to obtain the images to be projected by the corresponding projector.
In the foregoing, an embodiment of the present invention provides a screen projection method. The method adopts a mathematical model reduction method to determine a coordinate conversion relation, and can calculate the coordinates of corresponding marking pixels on a display image of the projector according to the coordinates of the projection pixels on the screen through a coordinate conversion mode, namely, the theoretical pixel coordinates and the actual pixel coordinates of the marking pixels on the display image of the projector can be obtained. Then, the parameters of the projector are adjusted to enable the obtained two coordinates to be close to each other, so that the better parameters of the projector are determined, and further the coordinate mapping relation between the pixels of the original image displayed by the projector and the projected pixels on the screen is established. Based on the established coordinate mapping relation, the projection areas of different projectors on the screen and the pixel areas on the original image corresponding to the image splicing areas can be accurately determined; therefore, high-precision image splicing and fusion processing is carried out, image deformation or double images are avoided, the image reduction degree is improved, and the problems of image deformation, double images and the like caused by poor projection splicing and fusion in the prior art are solved.
Referring to fig. 2, based on the same principle as the method embodiment shown in fig. 1, another embodiment of the present invention further provides another screen projection method, including the following steps:
21. acquiring pixel coordinates of marked pixels on a display image of the projector;
22. calculating the projection coordinates of the corresponding projection pixels on the screen according to the pixel coordinates of the marking pixels;
23. acquiring real coordinates of corresponding projection pixels on a screen;
24. adjusting parameters of a projector to enable the projection coordinate of the same projection pixel to be close to the real coordinate;
25. and establishing a coordinate mapping relation between the pixels of the original image displayed by the projector and the projection pixels on the screen based on the adjusted parameters of the projector.
The method of this embodiment is the same as the principle of the embodiment shown in fig. 1, except that the embodiment shown in fig. 1 is calculated based on the pixel coordinates of the image displayed by the projector, and the embodiment shown in fig. 2 is calculated based on the coordinates of the corresponding projection pixel on the screen. Therefore, for further implementation details of the embodiment of fig. 2, reference may be made to the description of the embodiment of fig. 1, and detailed description is not repeated herein.
In order to better implement the above-mentioned aspects of the embodiments of the present invention, the following also provides related devices for implementing the above-mentioned aspects cooperatively.
Referring to fig. 3, an embodiment of the present invention provides a screen projection apparatus, including the following modules:
the acquiring module 31 is configured to acquire actual position coordinates of a projection pixel on the screen, which correspond to a mark pixel on a display image of the projector;
a calculating module 32, configured to calculate theoretical pixel coordinates of the mark pixel on a display image of the projector according to the actual position coordinates of the projection pixel;
the obtaining module 31 is further configured to obtain actual pixel coordinates of the mark pixel on the display image of the projector;
the adjusting module 33 is configured to adjust the projector parameters so that the theoretical pixel coordinates of the marked pixels are close to the actual pixel coordinates;
and the mapping module 34 is configured to establish a coordinate mapping relationship between pixels of an original image displayed by the projector and projection pixels on the screen based on the adjusted projector parameters.
The screen projection apparatus may be an image processing device, such as a computer device. It can be understood that the functions of each functional module of the apparatus can be specifically implemented according to the method in the embodiment shown in fig. 1, and the specific implementation process thereof can refer to the related description in the above method embodiment, which is not described herein again.
Referring to fig. 4, another embodiment of the present invention further provides a screen projection apparatus, including the following modules:
an obtaining module 41, configured to obtain pixel coordinates of a mark pixel on a display image of the projector;
the calculating module 42 is configured to calculate projection coordinates of corresponding projection pixels on the screen according to the pixel coordinates of the mark pixels;
the obtaining module 41 is further configured to obtain real coordinates of corresponding projection pixels on the screen;
an adjusting module 43, configured to adjust a projector parameter so that the projection coordinate of the same projection pixel is close to the real coordinate;
and the mapping module 44 is configured to establish a coordinate mapping relationship between pixels of an original image displayed by the projector and projection pixels on the screen based on the adjusted projector parameters.
The screen projection apparatus may be an image processing device, such as a computer device. It can be understood that the functions of each functional module of the apparatus can be specifically implemented according to the method in the embodiment shown in fig. 2, and the specific implementation process thereof can refer to the related description in the above method embodiment, which is not described herein again.
Referring to fig. 5, an embodiment of the present invention further provides a computer device, where the computer device 50 includes a processor 51, a memory 52, a bus 53 and a communication interface 54; the memory 52 is configured to store a computer program, the processor 51 is connected to the memory 52 through the bus 53, and when the computer device 50 runs, the processor 51 executes the computer program stored in the memory 52, so that the computer device 50 executes the screen projection method according to the embodiment of fig. 1 or fig. 2.
An embodiment of the present invention also provides a computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computer device comprising a processor, cause the computer device to perform a screen projection method as described above in the embodiment of fig. 1 or fig. 2.
The above is a basic introduction to the technical solutions of the embodiments of the invention.
The screen described herein may be a ring screen, a dome screen or a screen of another form. Multiple projection may use two or more lenses. To facilitate understanding of the present invention, the following describes the invention in detail for different projection modes, such as dome screen projection and circular screen projection.
Multi-projector dome screen projection
Multi-projector dome screen projection refers to a projection mode in which a plurality of projectors project onto one dome screen to form a projection picture. Compared with traditional single fisheye projection, multi-projection dome screen projection brings a large improvement in definition and brightness; the key difficulty lies in handling picture deformation and the splicing and fusion of overlapping pictures. Machine-vision splicing and fusion is currently used for multi-projector dome screen projection: a depth camera composed of multiple cameras collects fixed image mark points to obtain the relation between the dome screen and the projection picture, and the positions of the dome screen and the projection picture are simulated in software to crop and deform the image and complete the splicing and fusion. However, such a system depends on camera machine vision, is not accurate enough, and is likely to cause problems such as picture misalignment, picture distortion and ghosting.
Referring to fig. 6, an embodiment of the present invention provides a method in which multiple projectors project images onto the same dome screen, so as to solve the above technical problem. Here, "dome screen" is short for a spherical curtain or spherical screen. Projection means projecting an image onto the dome screen; the image may be a video picture. The dome may be a full sphere or a partial sphere, such as a hemisphere. The projectors may use fisheye projection, and the lenses may be fisheye lenses.
As shown in fig. 6, the method may include:
61. drawing a plurality of mark points on a display image of each projector, and recording actual pixel coordinates (x, y) of the mark points;
62. acquiring three-dimensional coordinates (X, Y, Z) corresponding to a mark point on a dome projection image measured and recorded by using a three-dimensional measuring instrument, and moving an origin point of a world coordinate system from the three-dimensional measuring instrument to the spherical center position of the dome to obtain converted world coordinates (Xa, Ya, Za);
63. acquiring a coordinate conversion relation between the world coordinate of any point on the spherical screen and the pixel coordinate of a corresponding point on the display image of the projector;
64. calculating theoretical pixel coordinates (c, r) of the mark points on the display image corresponding to the world coordinates (Xa, Ya, Za) of the mark points according to the coordinate conversion relation;
65. calculating position parameters for each projector, so that the actual pixel coordinates (x, y) of a plurality of mark points on a display image of the projector are closest to the theoretical pixel coordinates (c, r);
66. generating a planar original image through three-dimensional modeling and simulated projection, determining the corresponding relation between pixel coordinates (U, V) of the original image and pixel coordinates (c, r) of a corresponding projector display image, and further determining the coordinate mapping relation between pixels on the original image and projection pixels on a screen;
67. acquiring pixel areas on the original image respectively corresponding to the projection areas and the splicing areas of different projectors on the spherical screen according to the coordinate mapping relation;
68. and performing image splicing and fusion processing based on the determined pixel regions respectively corresponding to the projection regions and the splicing regions of different projectors on the original image.
The method establishes the coordinate conversion relation between the position coordinate of any point on the spherical screen and the pixel coordinate of the corresponding point on the display image of the projector based on the relative position relation between the spherical screen and the projector. Due to the difference between the actual coordinate and the theoretical coordinate, the theoretical pixel coordinate and the actual coordinate of the mark point obtained by calculation may be different, and based on the difference, the position parameter of the projector when the two are closest can be calculated as the optimal solution, so as to determine the relative position relationship between the projector and the spherical screen.
In addition, the projector display chip is planar, the displayed picture is also planar, a spherical screen model can be established through three-dimensional modeling, and a circular original image on the plane corresponding to the three-dimensional spherical screen model is generated through simulated projection. The original image can be output to a projector after being processed, displayed on a display chip of the projector and projected to a spherical screen. The original image and the display image of the projector have a determined corresponding relationship, and the pixel coordinates on the original image and the pixel coordinates on the display image have a one-to-one corresponding relationship. The correspondence is determined and can be directly realized by a person skilled in the art.
The projector display image described herein is an image that is input to the projector and displayed on the display chip of the projector, and therefore may also be referred to as a projector input image. According to the technical scheme, the mapping relation between the input image of the projector and the projected image on the screen is established, and the coordinates of the screen and the coordinates on the projection pixels can be in one-to-one correspondence, so that the shot picture can be completely projected onto the screen, the picture stretching fitting and the like on the screen are avoided, the authenticity of the picture is ensured, the picture distortion problem in screen projection is solved, and the deformation and ghost problems in picture splicing and fusion can be solved.
Based on the coordinate conversion relation and the corresponding relation, a one-to-one coordinate mapping relation between points on the spherical screen and points on the original image can be established, so that pixel areas on the original image, which correspond to projection areas and splicing areas of different projectors on the spherical screen respectively, are obtained.
Finally, according to the pixel regions corresponding to different projector regions and the pixel regions corresponding to the splicing regions determined on the original image, the original image can be subjected to processing such as splicing and fusion, and an image or a video picture which can be output to a projector for projection is obtained.
Optionally, in an implementation manner, before the step 66, a correction step may be further included, that is: drawing a spherical screen grid image, converting the spherical screen grid image into a two-dimensional image displayed by the projector according to the coordinate mapping relation and the calculated position parameter of the projector, and drawing a planar grid image; and projecting the plane grid image to a projector for displaying, verifying the difference between the real spherical screen and the theoretical spherical screen, correcting world coordinates according to a verification result, and further correcting the coordinate mapping relation.
Optionally, in an implementation manner, step 63 may include: establishing a world coordinate system based on the spherical screen, and establishing the following coordinate systems based on any projector: the method comprises the following steps of establishing a conversion relation among a projector coordinate system based on a projector, an image coordinate system based on a projector display chip plane and a pixel coordinate system for displaying an image on a display chip; and according to the conversion relation among the coordinate systems, obtaining the coordinate mapping relation between the position coordinate of any point on the spherical screen and the pixel coordinate of the corresponding point on the display image of the projector through coordinate conversion.
Optionally, in an implementation: the establishing of the world coordinate system based on the spherical screen comprises the following steps: establishing a world coordinate system by taking the spherical center of the spherical screen as an origin and taking the vertical direction as a Y axis; the establishing of the following coordinate systems based on any projector comprises: establishing a projector coordinate system by taking the optical center of the projector as an origin based on the projector, establishing a two-dimensional image coordinate system by taking the center of a display chip as the origin based on the plane of the display chip of the projector, and establishing a pixel coordinate system by taking a pixel at one vertex of a display image as the origin based on the plane of the display chip of the projector; the establishing of the conversion relation between the coordinate systems comprises the following steps: establishing a first conversion relation between a world coordinate system and a projector coordinate system according to the distance and the rotation angle of the projector relative to the origin of the world coordinate system; establishing a second conversion relation between the projector coordinate system and the image coordinate system according to the projection optical principle; and establishing a third conversion relation between the image coordinate system and the pixel coordinate system according to the size of the display chip and the size of the pixel.
Optionally, in an implementation manner, the obtaining, according to a conversion relationship between coordinate systems, a coordinate mapping relationship between a position coordinate of any point on the spherical screen and a pixel coordinate of a corresponding point on the projector display image through coordinate conversion includes: any point on the spherical screen is represented by world coordinates (Xa, Ya, Za); obtaining corresponding projector coordinates (Xc, Yc, Zc) according to the first conversion relation; obtaining corresponding image coordinates (u, v) according to the second conversion relation; and obtaining corresponding pixel coordinates (c, r) according to the third conversion relation.
Wherein,
u = f*Xc/Zc, v = f*Yc/Zc
R0 is the radius of the dome screen; Rx, Ry and Rz are the rotation matrices corresponding to the angles α, β and γ by which the projector rotates about the X, Y and Z axes of the world coordinate system at the origin; T is the translation matrix of the projector relative to the world coordinate system; f is the focal length of the projector; Cx and Cy are the offsets of the projector lens center along the u and v axes of the image coordinate system; and dx and dy are the length and width of each pixel on the projector display chip.
Optionally, in an implementation manner, step 65 may include: calculating the distance Δx between the actual pixel coordinate (x, y) of any mark point and the corresponding theoretical pixel coordinate (c, r), where Δx = sqrt((x - c)^2 + (y - r)^2); computing the sum of Δx over the m mark points, where m is the number of mark points; and computing the projector position parameters α, β, γ, T, Cx and Cy at which this sum approaches 0.
Optionally, in an implementation manner, step 66 may include: establishing a dome screen model through three-dimensional modeling, and placing a virtual equidistant fisheye camera at the sphere center of the dome screen model to generate a circular original image on a two-dimensional plane, wherein the pixel coordinates (U, V) of the original image have a determined correspondence with the pixel coordinates (c, r) of the corresponding projector display image.
Optionally, in an implementation manner, step 67 may include: acquiring projection areas and splicing areas of different projectors on a spherical screen; acquiring (c, r) dot matrixes of pixel areas on the projector display image corresponding to projection areas and splicing areas of different projectors on the spherical screen based on the calculated position parameters of the projectors according to the coordinate mapping relation; and according to the corresponding relation, further acquiring (U, V) dot matrixes of pixel areas on the original image, which correspond to the projection areas and the splicing areas of different projectors respectively.
Optionally, in an implementation manner, the performing image stitching and fusing processing in step 68 may include: performing brightness attenuation processing on a splicing area on the original image by using an alpha channel; and processing the original image by using affine transformation to obtain an image to be projected by the corresponding projector.
Optionally, in an implementation manner, processing the original image by affine transformation to obtain the image to be projected by the corresponding projector may include: warping the image dot matrix of the original image onto the pixel dot matrix of the projector display chip by affine transformation to generate a first image to be projected by the projector; warping the image dot matrix of the alpha channel of the original image onto the pixel dot matrix of the projector display chip by affine transformation to generate a second image to be projected by the projector; and fusing the first image and the second image.
The above-described scheme may be embodied, for example, in an image processing apparatus, such as a computer apparatus.
In order to better understand the technical solutions provided by the embodiments of the present invention, the following description is given by taking an implementation mode in a specific scenario as an example.
Determining the projector position parameters relative to the dome screen
Calibration software can be used to draw a lattice of marker pixels (x, y) on each projector display image. Since the light path is reversible, the projector can be regarded as an inverse camera. Following the camera calibration principle, at least 9 calibration points can be used; to reduce gross errors and speed up the subsequent computer evolutionary algorithm, a 3 x 4 marker lattice distributed uniformly over the projector picture is preferred. The image is then projected onto the dome screen, and the three-dimensional coordinates (X, Y, Z) of the corresponding marker points on the dome screen are measured and recorded, in measurement order, with a three-dimensional measuring instrument.
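As a minimal sketch of the 3 x 4 marker lattice described above, the snippet below spreads the marker pixels uniformly over the projector display image; the resolution and the margin fraction are illustrative assumptions.

```python
import numpy as np

def marker_lattice(width, height, rows=3, cols=4, margin=0.1):
    """Return (x, y) pixel coordinates of a rows x cols marker lattice,
    spread uniformly over the projector display image with a fractional margin."""
    xs = np.linspace(margin * width, (1 - margin) * width, cols)
    ys = np.linspace(margin * height, (1 - margin) * height, rows)
    return np.array([(x, y) for y in ys for x in xs])  # 12 points for a 3 x 4 lattice

markers = marker_lattice(1920, 1200)  # resolution chosen for illustration only
```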
The sphere center O and the radius R0 of the dome screen are calculated. When the origin of the world coordinate system is moved to the position of the center O and the coordinates in the moved world coordinate system are denoted (Xa, Ya, Za), the points on the dome screen satisfy:
Xa^2 + Ya^2 + Za^2 = R0^2
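The text does not spell out how O and R0 are computed from the measured marker coordinates; a common choice is a linear least-squares sphere fit, sketched below under that assumption, followed by shifting the world origin to O.

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit: solve ||p - O||^2 = R0^2 for center O and radius R0.

    Expanding gives 2*X*Ox + 2*Y*Oy + 2*Z*Oz + k = X^2 + Y^2 + Z^2,
    with k = R0^2 - |O|^2, which is linear in (Ox, Oy, Oz, k).
    """
    p = np.asarray(points, dtype=float)
    A = np.c_[2 * p, np.ones(len(p))]
    b = (p ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

def to_dome_world(points, center):
    """Shift measured coordinates so the world origin sits at the sphere center O,
    giving the (Xa, Ya, Za) coordinates used in the text."""
    return np.asarray(points, dtype=float) - center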
A projector coordinate system can be established with the light-emitting point of the projector, namely the optical center, as the origin; the z axis is defined along the light-emitting direction through the lens center, namely the optical axis direction, and the y axis points vertically downward when the projector is normally placed. The projector coordinate system moves with, and is bound to, the projector. The projector coordinate system of a projector at any position in space can be obtained by rotating and then translating a starting pose that coincides exactly with the world coordinate system.
Referring to fig. 7, the angles by which the projector rotates about the X, Y, Z axes of the world coordinate system at the origin can be defined as α, β and γ, with the three corresponding rotation matrices denoted Rx, Ry and Rz; the translation amounts along the projector's own coordinate axes Xc, Yc, Zc after the rotation are denoted Tx, Ty and Tz, and the corresponding translation matrix is denoted T. The conversion between a point (Xc, Yc, Zc) in the projector coordinate system and a point (Xa, Ya, Za) of the world coordinate system on the dome screen is then:
(Xc, Yc, Zc)^T = R * (Xa, Ya, Za)^T + T
where R is the combined rotation matrix formed from Rx, Ry and Rz, and T = (Tx, Ty, Tz)^T.
as shown in fig. 8, the plane of the display chip of the projector can be used as a two-dimensional image coordinate system, the focal length of the projector is f, the directions of the u and v axes of the image coordinate system are the same as the directions of the Xc axis and the Yc axis of the projector coordinate system, and the center of the display chip is the origin of the image coordinate system. Since all the light rays emitted from the projector pass through the optical center, i.e., the origin of the projector coordinate system, a coordinate mapping relationship of the image coordinates (u, v) of the pixel position of the light-emitting point on the display chip and the projector coordinates (Xc, Yc, Zc) of the projected point can be established as follows:
u=f*Xc/Zc v=f*Yc/Zc
as shown in FIG. 8, it can be defined that the pixel at the top left corner of the displayed image is (0, 0), the center of the lens is at the center of the image, i.e., the pixel center when the lens is not shifted, and the shift amount of the lens center along the u and v axes of the image coordinate system is Cx and Cy. The size of the display chip and the total number of pixels of the image are known, and the length and width dx, dy of each pixel on the display chip can be obtained. Therefore, the corresponding relation between the image coordinate (u, v) of any point on the display image on the display chip and the pixel coordinate (c, r) of the display image can be established:
as described above, the three-dimensional coordinates (Xa, Ya, Za) corresponding to the single projector marker pixel (x, y) are converted to obtain the corresponding pixel coordinates (c, r). The closer (x, y) is the actual pixel coordinate, (c, r) is the theoretical pixel coordinate, and the closer (x, y) and (c, r) are, the closer the screen generated by the virtual projector position indicating the parameter setting is to the reality.
The degree to which the two points are close can be expressed as Δx:
Δx = sqrt((x - c)^2 + (y - r)^2)
The focal length f, the size of the display chip and the resolution of the projector are known. Using a computer evolutionary algorithm, α, β, γ, Tx, Ty, Tz, Cx and Cy can be calculated so that the sum of Δx over all mark points approaches 0, yielding an array t. The resulting array t contains the parameters α, β, γ, Tx, Ty, Tz, Cx, Cy, which are the optimal position parameters of the corresponding projector.
Parameter checking and spherical screen model correction
Points on the dome screen can be generated at certain latitude and longitude intervals, and points sharing the same latitude or longitude can be connected into curves. According to the projector position represented by the array t obtained in the previous step, these three-dimensional points and lines on the dome screen are converted into the points and lines of the two-dimensional image displayed by the projector, so that a planar grid image is drawn; the grid image is projected by the projector so that the calculation result can be observed and verified. If the projected grid deviates from the real dome screen, the deviation is corrected so that the virtual dome screen model matches the real dome screen model: a dome screen latitude/longitude point is selected, the correction range is denoted as range, the correction depth at that point is denoted as Fd, the distance between any point on the dome screen and that point is denoted as LF, and the correction curve is denoted as K; the new world coordinates (Xa1, Ya1, Za1) of the points within the correction range are then obtained from Fd, LF, range and the correction curve K.
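A minimal sketch of generating the verification grid points follows; the 10-degree spacing, the hemispherical extent and the orientation (dome above the center, with the world Y axis pointing down) are assumptions. Each ring of points can then be pushed through a world-to-pixel conversion such as the earlier world_to_pixel sketch to draw the planar grid image.

```python
import numpy as np

def dome_grid_points(R0, lat_step_deg=10, lon_step_deg=10):
    """Points on a hemispherical dome, grouped by latitude so that points sharing a
    latitude (or longitude) can be connected into the curves of the verification grid."""
    lats = np.radians(np.arange(0, 90 + 1, lat_step_deg))   # 0 = rim, 90 = top of dome
    lons = np.radians(np.arange(0, 360, lon_step_deg))
    grid = {}
    for lat in lats:
        ring = []
        for lon in lons:
            Xa = R0 * np.cos(lat) * np.cos(lon)
            Za = R0 * np.cos(lat) * np.sin(lon)
            Ya = -R0 * np.sin(lat)   # Y points downward, so the dome top has negative Y
            ring.append((Xa, Ya, Za))
        grid[float(np.degrees(lat))] = ring
    return grid
```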
generating and deforming original image
As shown in fig. 9, a dome screen model can be established in three-dimensional rendering software through three-dimensional modeling, and a virtual equidistant fisheye camera placed at the sphere center of the dome screen model generates the original image, which may be a circular planar image. A point on the original image is denoted (U, V), and each point on the virtual dome screen has a unique coordinate mapping to a point (U, V). When the array t of a projector is determined, that is, the position of the projector is determined, each pixel of the image displayed by the projector also has a unique mapped point on the dome screen according to the coordinate mapping relation described above, so the mapping between the projector pixel (c, r) and the original image coordinates can be established. Therefore, a coordinate mapping relation among points on the dome screen, pixels of the projected image and pixels of the original image can be established. According to this coordinate mapping relation, the pixel areas on the original image corresponding to the projection areas and splicing areas of different projectors on the dome screen can be obtained.
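The correspondence between a dome point and the circular original image follows the equidistant fisheye model (image radius proportional to the angle from the optical axis). The sketch below assumes the virtual fisheye camera sits at the sphere center, looks along the negative Y axis toward the top of the dome, and covers a 180-degree field of view that exactly fills the circular image; these framing choices are assumptions.

```python
import numpy as np

def dome_point_to_original_uv(Xa, Ya, Za, image_size):
    """Map a dome point (Xa, Ya, Za), with the world origin at the sphere center,
    to (U, V) pixel coordinates of the circular equidistant-fisheye original image."""
    half = image_size / 2.0
    r3d = np.sqrt(Xa**2 + Ya**2 + Za**2)
    theta = np.arccos(np.clip(-Ya / r3d, -1.0, 1.0))   # angle from the optical axis (-Y)
    phi = np.arctan2(Za, Xa)                           # azimuth around the axis
    rho = half * theta / (np.pi / 2)                   # equidistant model: radius ~ theta
    U = half + rho * np.cos(phi)
    V = half + rho * np.sin(phi)
    return U, V
```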
As shown in fig. 10, the projector display image may be dotted according to a specific width and height interval to generate a pixel (c, r) lattice that is uniformly distributed over the entire projector display image, and the lattices are mapped to the original image to obtain corresponding image (U, V) lattices.
Affine transformation can be used to warp the picture defined by the image (U, V) lattice onto the pixel lattice of the image displayed by the projector, and the output picture can be used as the image to be projected by the projector. However, in the directly output image, the brightness of the spliced area is higher than that of other areas, so brightness attenuation processing is required.
Brightness attenuation of projector projection splicing overlapping picture area
The alpha channel may be used to attenuate the brightness of the stitched area of the original image. Specifically, an alpha channel value is calculated for each pixel of the original image region of each projector to attenuate the brightness of the image stitching region. Every point of the original image picture area has a corresponding world coordinate point (Xa, Ya, Za), and the pixel points obtained by transforming and mapping that world point into each projector are denoted (c1, r1), (c2, r2) … (cn, rn). The projector resolution is denoted W x H. For a pixel (ci, ri) on projector i, the distance function from the pixel to the projector image boundary is denoted F, and di is:
di = F(ci, ri)
If, for some projector i, (ci, ri) does not fall within the range (0, 0) - (W, H), it means that (Xa, Ya, Za) is not within the display range of projector i, and di is set directly to 0. After all the dn values of the image point are obtained, then, taking the first projector as an example, the alpha channel value w1 on the original image is the ratio of d1 to the sum of all the dn values.
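A minimal sketch of this weighting, assuming F is simply the distance in pixels to the nearest edge of the projector image; the function and variable names are illustrative.

```python
def edge_distance(c, r, W, H):
    """Distance F from pixel (c, r) to the nearest boundary of a W x H projector image."""
    return min(c, W - 1 - c, r, H - 1 - r)

def blend_weights(pixels, W, H):
    """pixels: list of (ci, ri) positions of one world point in each projector.

    Returns the alpha value wi for each projector, wi = di / sum(dn).
    """
    d = []
    for (c, r) in pixels:
        if 0 <= c <= W - 1 and 0 <= r <= H - 1:
            d.append(edge_distance(c, r, W, H))
        else:
            d.append(0.0)  # the world point is not visible to this projector
    total = sum(d)
    return [di / total if total > 0 else 0.0 for di in d]
```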
After the alpha channel value of the original image of each projector is obtained, an alpha channel image at the projector's resolution can be obtained by warping it with the same affine deformation. The deformed original image and the alpha channel image are then fused to obtain the projection splicing fused image required by the projector, and all sequence frames are processed and packaged to complete the film fusion production.
In order to better implement the above-mentioned embodiments, the following also provides relevant devices for cooperatively implementing the above-mentioned embodiments.
Referring to fig. 11, an embodiment of the present invention provides a multiple dome screen projection apparatus, which is applied to multiple projectors to project images onto the same dome screen, and the apparatus may include:
a marking module 111, configured to draw a plurality of marking points on each projector display image, and record actual pixel coordinates (x, y) of the marking points;
a first obtaining module 112, configured to obtain three-dimensional coordinates (X, Y, Z) corresponding to a mark point on the dome projection image measured and recorded by using the three-dimensional measurement apparatus, and move an origin of the world coordinate system from the three-dimensional measurement apparatus to a spherical center position of the dome, so as to obtain converted world coordinates (Xa, Ya, Za);
the mapping module 113 is configured to obtain a coordinate transformation relationship between a world coordinate of any point on the dome screen and a pixel coordinate of a corresponding point on the projector display image through coordinate transformation;
a coordinate calculation module 114, configured to calculate theoretical pixel coordinates (c, r) on the display image corresponding to the world coordinates (Xa, Ya, Za) of the marker points according to the coordinate conversion relationship;
a parameter calculating module 115, configured to calculate a position parameter for each projector, so that actual pixel coordinates (x, y) of a plurality of mark points on a display image of the projector are closest to theoretical pixel coordinates (c, r);
the image module 116 is used for generating a planar original image through three-dimensional modeling and simulated projection, determining the corresponding relation between the pixel coordinates (U, V) of the original image and the pixel coordinates (c, r) of the corresponding projector display image, and further determining the coordinate mapping relation between the pixels on the original image and the projection pixels on the screen;
a second obtaining module 117, configured to obtain, according to the coordinate mapping relationship and based on the calculated position parameter of the projector, pixel areas on the original image, where the projection areas and the stitching areas of different projectors on the spherical screen correspond to each other;
and the processing module 118 is configured to perform image stitching and fusion processing based on pixel regions respectively corresponding to the projection regions and the stitching regions of different projectors, which are determined on the original image.
It can be understood that the functions of the functional modules of the multiple dome screen projection apparatuses of this embodiment can be specifically implemented according to the method in the foregoing method embodiment, and the specific implementation process thereof can refer to the related description in the foregoing method embodiment, and is not described herein again.
According to the scheme of this embodiment, the coordinate mapping relation between the spherical screen and the image displayed by the projector is obtained through coordinate conversion, and by drawing mark points and comparing their actual pixel coordinates with the theoretical pixel coordinates obtained by back-calculation, the position parameters of the projector are calculated and determined; image splicing and fusion processing is then carried out on this basis, so that the splicing and fusion of the projected images can be realized with high precision, the image restoration degree is improved, and the problems of image deformation, ghost images and the like caused by poor splicing and fusion in prior-art spherical screen projection are solved.
Compared with the existing machine vision method, the only instrument required by the scheme of the invention is the three-dimensional measuring instrument; no matching equipment such as an industrial camera is needed, so the equipment installation time is saved and the cost is lower.
When some of the projectors drift due to external factors, the method can generate new deformed images simply by re-collecting the data of the affected projectors and combining it with the original data, whereas the prior art needs to recalculate all the projections.
In the preferred embodiment, the three-dimensional world coordinates can be converted into points on the two-dimensional image according to the relative position relationship to draw the grid image in real time, verification and correction are performed according to the calculation result, and the world coordinates can be adjusted and corrected in real time to achieve an ideal condition.
(II) Two-projector dome screen projection
Two-projector dome screen projection is a projection mode in which two projectors project onto a spherical screen to form one projection picture. Compared with traditional single-fisheye projection, dome screen projection with two fisheye projectors improves definition and brightness; the key points are how to handle the influence of fisheye lens distortion on the picture, the deformation of the picture displayed on the spherical screen, and the splicing and fusion of the overlap region. Existing splicing and fusion of two fisheye projection films on a spherical screen generally renders the film by establishing a theoretical relation between the projectors and the screen in three-dimensional Maya software. However, the distortion and displacement of the fisheye lens cannot be reflected in Maya, and the mismatch between the theoretical and actual projector positions and the spherical screen model surface makes the pictures fail to coincide, resulting in picture deformation and ghost images. Therefore, the pictures are usually made to coincide using morphing software. Such morphing deforms the picture with a curve generated by a specific mathematical algorithm; the deformation trend is difficult to control, and the spherical screen picture is prone to distortion and ghosting.
Referring to fig. 12, an embodiment of the present invention provides a two-projector dome screen projection method, which is applied to two projectors projecting images onto the same dome screen.
As shown in fig. 12, the method may include:
121. determining the projection range of each of the two projectors on the spherical screen, and ensuring that the projection images of the two projectors respectively cover a half of the spherical screen and have a splicing area with a certain width;
122. drawing a plurality of mark points on a display image of each projector, and recording actual pixel coordinates (x, y) of the mark points;
123. acquiring a coordinate conversion relation between the world coordinate of any point on the spherical screen and the pixel coordinate of a corresponding point on the display image of the projector;
124. according to world coordinates (Xa, Ya, Za) corresponding to the mark points on the projection image of the spherical screen, calculating theoretical pixel coordinates (c, r) on the corresponding display image according to the coordinate conversion relation;
125. calculating position parameters for each projector, so that the actual pixel coordinates (x, y) of a plurality of mark points on a display image of the projector are closest to the theoretical pixel coordinates (c, r);
126. generating a planar original image through three-dimensional modeling and simulated projection, determining the corresponding relation between pixel coordinates (U, V) of the original image and pixel coordinates (c, r) of a corresponding projector display image, and further determining the coordinate mapping relation between pixels on the original image and projection pixels on a screen;
127. according to the mapping relation, acquiring pixel areas on the original image respectively corresponding to projection areas and splicing areas of different projectors on the dome screen based on the calculated position parameters of the projectors, wherein the splicing areas on the dome screen are regular areas divided according to latitudes;
128. and performing image splicing and fusion processing based on the determined pixel regions respectively corresponding to the projection regions and the splicing regions of different projectors on the original image.
The method comprises the steps of respectively establishing different coordinate systems and conversion relations among the different coordinate systems based on the relative position relations of the spherical screen and the projector, and obtaining the mapping relation between the position coordinates of any point on the spherical screen and the pixel coordinates of the corresponding point on the display image of the projector through coordinate conversion by utilizing the conversion relations.
According to this mapping relationship, the theoretical pixel coordinate on the display image can be calculated from the world coordinate of a mark point on the dome screen. Because of measurement and modelling errors, the calculated theoretical pixel coordinate of a mark point may differ from its actual pixel coordinate; based on this difference, the projector position parameters that bring the two closest together can be calculated as the optimal solution, thereby determining the relative position relationship between the projector and the spherical screen.
In addition, the projector display chip is planar, the displayed picture is also planar, a spherical screen model can be established through three-dimensional modeling, and a circular original image on the plane corresponding to the three-dimensional spherical screen model is generated through simulated projection. The original image can be output to a projector after being processed, displayed on a display chip of the projector and projected to a spherical screen. The original image and the display image of the projector have a determined corresponding relationship, and the pixel coordinates on the original image and the pixel coordinates on the display image have a one-to-one corresponding relationship. The correspondence is determined and can be directly realized by a person skilled in the art.
Based on the mapping relation and the corresponding relation, the one-to-one corresponding relation between the points on the spherical screen and the points on the original image can be established, so that the pixel areas on the original image, which correspond to the projection areas and the splicing areas of different projectors on the spherical screen respectively, are obtained.
Finally, according to the pixel regions corresponding to different projector regions and the pixel regions corresponding to the splicing regions determined on the original image, the original image can be subjected to processing such as splicing and fusion, and an image or a video picture which can be output to a projector for projection is obtained.
Optionally, in an implementation manner, the method further includes: three-dimensional coordinates (X, Y, Z) corresponding to a mark point on the dome projection image recorded by measurement using the three-dimensional measuring instrument are acquired, and the origin of the world coordinate system is moved from the three-dimensional measuring instrument to the position of the center of the dome to obtain converted world coordinates (Xa, Ya, Za) which are functions of the radius, longitude and latitude of the dome.
Optionally, in an implementation manner, before step 126, the method further includes: generating a spherical screen grid image according to the longitude and latitude, converting the spherical screen grid image into a two-dimensional image displayed by a projector according to the mapping relation and the calculated position parameter of the projector, and drawing a planar grid image; projecting the plane grid image to a projector for displaying, so that the plane grid image can be completely displayed on the projector display image, and the projection can correctly cover the spherical screen; and verifying the difference between the real spherical screen and the theoretical spherical screen, and correcting the world coordinate according to the verification result so as to correct the mapping relation.
Optionally, in an implementation manner, step 123 may include: establishing a world coordinate system based on the spherical screen, and establishing the following coordinate systems based on any projector: the method comprises the following steps of establishing a conversion relation among a projector coordinate system based on a projector, an image coordinate system based on a projector display chip plane and a pixel coordinate system for displaying an image on a display chip; and according to the conversion relation among the coordinate systems, obtaining the mapping relation between the position coordinate of any point on the spherical screen and the pixel coordinate of the corresponding point on the display image of the projector through coordinate conversion.
Optionally, in an implementation manner, the establishing a world coordinate system based on a spherical screen includes: establishing a world coordinate system by taking the spherical center of the spherical screen as an origin and taking the vertical direction as a Y axis; the establishing of the following coordinate systems based on any projector comprises: establishing a projector coordinate system by taking the optical center of the projector as an origin based on the projector, establishing a two-dimensional image coordinate system by taking the center of a display chip as the origin based on the plane of the display chip of the projector, and establishing a pixel coordinate system by taking a pixel at one vertex of a display image as the origin based on the plane of the display chip of the projector; the establishing of the conversion relation between the coordinate systems comprises the following steps: establishing a first conversion relation between a world coordinate system and a projector coordinate system according to the distance and the rotation angle of the projector relative to the origin of the world coordinate system; establishing a second conversion relation between the projector coordinate system and the image coordinate system according to the projection optical principle; and establishing a third conversion relation between the image coordinate system and the pixel coordinate system according to the size of the display chip and the size of the pixel.
Optionally, in an implementation manner, the obtaining, according to a conversion relationship between coordinate systems, a mapping relationship between a position coordinate of any point on the spherical screen and a pixel coordinate of a corresponding point on the display image of the projector through coordinate conversion includes: any point on the spherical screen is represented by world coordinates (Xa, Ya, Za); obtaining corresponding projector coordinates (Xc, Yc, Zc) according to the first conversion relation; obtaining corresponding image coordinates (u, v) according to the second conversion relation; and obtaining corresponding pixel coordinates (c, r) according to the third conversion relation.
Wherein,
Xa = R0*cos(n), Ya = R0*sin(n)*cos(e), Za = R0*sin(n)*sin(e)
R0 represents the radius of the dome screen; e and n are respectively the longitude and latitude on the dome screen; Rx, Ry and Rz are the rotation matrices corresponding to the angles α, β and γ by which the projector rotates around the X, Y and Z axes of the world coordinate system at the origin; T is the translation matrix of the projector relative to the world coordinate system; f is the focal length of the projector; θ is the included angle between the line connecting the point (Xc, Yc, Zc) with the origin of the projector coordinate system and the Z axis of the projector coordinate system; Rd is the lens model parameter and k is the lens distortion parameter; (u', v') are the image coordinates calculated directly from the coordinate conversion relation, and (u, v) are the final image coordinates obtained after the lens distortion is taken into account; F(k) is the lens distortion influence function, for which a theoretical or empirical formula can be adopted, and its form is not limited here.
Optionally, in an implementation manner, step 125 may include: calculating the distance Δx between the actual pixel coordinate (x, y) of each mark point and its corresponding theoretical pixel coordinate (c, r); computing the sum of Δx over all m mark points, where m is the number of mark points; and computing the projector position parameters that make this sum approach 0, the position parameters comprising α, β, γ, T, Cx and Cy.
Optionally, in an implementation manner, step 126 includes: establishing a dome screen model through three-dimensional modeling, and placing a virtual equidistant fisheye camera at the sphere center of the dome screen model to generate a circular original image on a two-dimensional plane, wherein the pixel coordinates (U, V) of the original image have a determined correspondence with the pixel coordinates (c, r) of the corresponding projector display image.
Optionally, in an implementation manner, step 127 may include: acquiring projection areas and splicing areas of different projectors on a spherical screen; acquiring (c, r) dot matrixes of pixel areas on the projector display image corresponding to projection areas and splicing areas of different projectors on the spherical screen based on the calculated position parameters of the projectors according to the mapping relation; and according to the corresponding relation, further acquiring (U, V) dot matrixes of pixel areas on the original image, which correspond to the projection areas and the splicing areas of different projectors respectively.
Optionally, in an implementation manner, the performing image stitching and fusing processing in step 128 may include: performing brightness attenuation processing on a splicing area on the original image by using an alpha channel; and processing the original image by using affine transformation to obtain an image to be projected by the corresponding projector.
Optionally, in an implementation manner, the processing of the original image by affine transformation to obtain the image to be projected by the corresponding projector may include: warping the image dot matrix of the original image onto the pixel dot matrix of the display chip of the projector by affine transformation to generate a first image to be projected by the projector; warping the image dot matrix of the alpha channel of the original image onto the pixel dot matrix of the display chip of the projector by affine transformation to generate a second image to be projected by the projector; and fusing the first image and the second image.
The above-described scheme may be embodied, for example, in an image processing apparatus, such as a computer apparatus.
In order to better understand the technical solutions provided by the embodiments of the present invention, the following description is given by taking an implementation mode in a specific scenario as an example.
Determination of the projector's position parameters relative to the dome screen
For the two-fisheye dome screen projection, the picture of a single projector should cover half of the dome screen, and the width of the splicing band region should meet the requirement.
The calibration software can be used to mark the picture area required for the pictures of the two projectors, i.e. a plurality of pixel coordinate (x, y) lattices uniformly distributed over the hemispherical picture and the splicing band. According to the principle that the light path is reversible, the projector can be regarded as a reverse camera. According to the camera calibration principle, at least 9 calibration points are needed; in order to reduce gross errors and speed up the subsequent computer evolutionary algorithm, 27 mark points uniformly distributed on the projector picture are selected. The three-dimensional coordinate data (X, Y, Z) of the mark points can be measured and recorded in sequence with a three-dimensional space measuring instrument (three-dimensional measuring instrument for short). The sphere center O and the radius R0 of the spherical screen can be calculated from the measured three-dimensional data. Moving the origin of the world coordinate system to the position of the sphere center, the points on the spherical screen can be expressed as follows.
the points on the dome screen are defined by longitude and latitude (e, n), and the following are provided:
Xa = R0*cos(n); Ya = R0*sin(n)*cos(e); Za = R0*sin(n)*sin(e)
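A minimal sketch of this conversion and of a longitude/latitude lattice on the dome, as later used for drawing the verification grid; the angular units (radians), step sizes and the 0-90 degree latitude range assumed for a hemispherical dome are illustrative assumptions.

```python
import numpy as np

def dome_point(R0, e, n):
    """World coordinates of the dome screen point at longitude e and latitude n (radians),
    with the origin at the sphere center: Xa = R0*cos(n), Ya = R0*sin(n)*cos(e),
    Za = R0*sin(n)*sin(e)."""
    return np.array([R0 * np.cos(n),
                     R0 * np.sin(n) * np.cos(e),
                     R0 * np.sin(n) * np.sin(e)])

def dome_grid(R0, lon_step_deg=10.0, lat_step_deg=10.0):
    """Lattice of dome points at fixed longitude/latitude intervals (step sizes and the
    hemispherical latitude range are assumptions)."""
    lons = np.deg2rad(np.arange(0.0, 360.0, lon_step_deg))
    lats = np.deg2rad(np.arange(0.0, 90.0 + lat_step_deg, lat_step_deg))
    return np.array([[dome_point(R0, e, n) for e in lons] for n in lats])
```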
A projector coordinate system can be established with the light emitting point of the projector, i.e. the optical center, as the origin; the z axis is defined along the light emitting direction toward the center of the lens, i.e. the optical axis direction, and the y axis points vertically downward when the projector is placed upright. This coordinate system follows and is bound to the projector. The projector coordinate system of a projector at any position in space can be obtained by rotating and then translating a starting position that coincides completely with the world coordinate system.
As shown in fig. 13, the angles by which the projector rotates around the X, Y and Z axes of the world coordinate system at the origin are defined as α, β and γ, the three corresponding rotation matrices are denoted Rx, Ry and Rz, and the translation amounts of the projector's own coordinate system along Xc, Yc and Zc after rotation are denoted Tx, Ty and Tz, with the corresponding translation matrix denoted T. The conversion relationship between a point (Xc, Yc, Zc) in the projector coordinate system and a dome point (Xa, Ya, Za) in the world coordinate system is then as follows:
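A sketch of one plausible form of this conversion, assuming the standard rigid-body composition Pc = Rz Ry Rx Pa + T; the composition order and the sign convention of the translation are assumptions, not the exact expression of the figure.

```python
import numpy as np

def rotation_matrices(alpha, beta, gamma):
    """Rotations about the world X, Y and Z axes by angles alpha, beta, gamma (radians)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rx, Ry, Rz

def world_to_projector(Pa, alpha, beta, gamma, T):
    """Map a world point Pa = (Xa, Ya, Za) into the projector coordinate system.

    Assumed form: Pc = Rz @ Ry @ Rx @ Pa + T, with T = (Tx, Ty, Tz)."""
    Rx, Ry, Rz = rotation_matrices(alpha, beta, gamma)
    return Rz @ Ry @ Rx @ np.asarray(Pa) + np.asarray(T)
```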
as shown in fig. 14, when the chip plane is taken as a two-dimensional image coordinate system, the focal length of the projector is f, the direction of the U, V axis is the same as the projector coordinate system, the chip center is taken as the origin, (Xc, Yc, Zc) and the connection line of the projector origin and the central optical axis (i.e. the Z axis of the projector coordinate system) form an included angle θ, the lens model is Rd, the lens distortion k is, and the uniform solid angle fisheye is taken as an example, then (Xc, Yc, Zc) projected on the chip has:
as shown in fig. 14, it can be defined that the pixel at the top left corner of the display image is (0, 0), the center of the lens is at the center of the image, i.e. the pixel center when the lens is not shifted, and the shift amount of the lens center along the u and v axes of the image coordinate system is Cx and Cy. The size of the display chip and the total number of pixels of the image are known, and the length and width dx, dy of each pixel on the display chip can be obtained. Therefore, the corresponding relation between the image coordinate (u, v) of any point on the display image on the display chip and the pixel coordinate (c, r) of the display image can be established:
as described above, the three-dimensional coordinates (Xa, Ya, Za) corresponding to the single projector marker pixel (x, y) are converted to obtain the corresponding pixel coordinates (c, r). The closer (x, y) is the actual pixel coordinate, (c, r) is the theoretical pixel coordinate, and the closer (x, y) and (c, r) are, the closer the screen generated by the virtual projector position indicating the parameter setting is to the reality.
The degree to which the two points are close can be expressed as the distance Δx between (x, y) and (c, r).
when the focal length f, the size and the resolution of the display chip of the projector are known, and the lens model Rd is known, α, β, gamma, Tx, Ty, Tz, Cx and Cy can be calculated by using a computer evolution algorithm to obtain an array t, so that the array t is obtainedThe obtained array t comprises the above parameters α, β, γ, Tx, Ty, Tz, Cx, Cy, which are the optimal position parameters of the corresponding projector.
Parameter checking and spherical screen model correction
Points on the spherical screen are generated according to longitude and latitude (e, n) and connected with curves drawn through points that share the same latitude (or longitude). According to the array t obtained in the previous step, these three-dimensional points and lines are converted into the points and lines of the two-dimensional image displayed by the projector, and the result is projected by the projector so that the calculation result can be observed and verified.
Because the real spherical screen deviates from the theoretical sphere, the grid in a deviating area appears misaligned. To correct the deviation so that the virtual dome screen model matches the real dome screen model, a dome screen longitude and latitude point is selected; the correction range is denoted as range, the correction depth at the point is denoted as Fd, the distance between any point on the dome screen and the selected point is denoted as LF, and the correction curve is denoted as K. The new world coordinate points (Xa1, Ya1, Za1) within the correction range then become:
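A sketch of one plausible form of this correction, in which points inside the range are pushed radially by a depth that falls off smoothly with LF according to a curve K; the radial direction, the cosine falloff and the parameter handling are assumptions rather than the formula referenced above.

```python
import numpy as np

def correct_dome_point(P, P0, rng, Fd):
    """Move dome point P (world coords) radially by up to Fd, with a falloff K(LF) that is
    1 at the selected point P0 and 0 at distance rng (assumed raised-cosine curve K)."""
    P, P0 = np.asarray(P, float), np.asarray(P0, float)
    LF = np.linalg.norm(P - P0)
    if LF >= rng:
        return P                                    # outside the correction range: unchanged
    K = 0.5 * (1.0 + np.cos(np.pi * LF / rng))      # assumed correction curve K
    direction = P / np.linalg.norm(P)               # radial direction from the sphere center
    return P + Fd * K * direction                   # (Xa1, Ya1, Za1)
```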
generating and deforming original image
As shown in fig. 15, a three-dimensional modeling method may be adopted to establish a dome screen model in three-dimensional rendering software, and an original image generated by shooting with a virtual equidistant fisheye camera is placed at the center of the dome screen model, where the original image may be a circular planar image. A point on the original image is denoted (U, V), and every point on the virtual spherical screen has a unique mapping relation with a point (U, V). When the array t of the projector is determined, that is, when the position of the projector is determined, each pixel on the image displayed by the projector also has a unique mapping point on the dome screen according to the mapping relationship described above, so a mapping between the projector pixel (c, r) and the original image coordinate can be established. Thus, the mapping relation among the points on the dome screen, the pixels on the projected image and the pixels on the original image can be established. According to this mapping relation, the pixel areas on the original image corresponding to the projection areas and the splicing areas of different projectors on the spherical screen can be obtained.
As shown in fig. 16, the projector display image may be dotted according to a specific width and height interval to generate a pixel (c, r) lattice that is uniformly distributed over the entire projector display image, and the lattice may be mapped to the original image to obtain a corresponding image (U, V) lattice.
According to the longitude and latitude interval (for example, 1 degree) of the longitude and latitude area where its grid is located, each projection is divided into an (X, Y, Z) lattice with a certain number of rows and columns; the lattice is converted into the corresponding (c, r) lattice using the parameters of the respective projection, and that lattice is in turn mapped to the original image to obtain the corresponding image (U, V) lattice.
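A sketch of how one projector's lattice could be built, reusing the conversion chain sketched earlier; the mapping of a dome point to the circular original image assumes an ideal equidistant fisheye virtual camera at the sphere center (image radius proportional to the latitude angle), which is an assumption about the virtual camera rather than the exact formula used. The resulting lattices are exactly the input expected by the build_remap sketch above.

```python
import numpy as np

def dome_to_original(e, n, img_radius, img_center, n_max=np.pi / 2):
    """Map dome longitude/latitude (e, n) to original-image pixel (U, V), assuming an
    ideal equidistant fisheye: the image radius grows linearly with latitude."""
    rho = img_radius * (n / n_max)
    return img_center[0] + rho * np.cos(e), img_center[1] + rho * np.sin(e)

def build_lattices(lat_lon_grid, proj_params, f, dx, dy, W, H, img_radius, img_center):
    """For every (e, n) node of one projector's grid, compute the matching projector
    pixel (c, r) and original-image pixel (U, V)."""
    alpha, beta, gamma, T, Cx, Cy, R0 = proj_params
    proj_pts, orig_pts = [], []
    for e, n in lat_lon_grid:
        Pa = dome_point(R0, e, n)
        Xc, Yc, Zc = world_to_projector(Pa, alpha, beta, gamma, T)
        u, v = projector_to_image(Xc, Yc, Zc, f)
        proj_pts.append(image_to_pixel(u, v, dx, dy, Cx, Cy, W, H))
        orig_pts.append(dome_to_original(e, n, img_radius, img_center))
    return np.array(proj_pts), np.array(orig_pts)
```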
Affine transformation can then be used to warp the picture content from the image (U, V) lattice onto the pixel lattice of the image displayed by the projector, and the output picture can be used as the image to be projected by the projector. However, in the directly output image, the brightness of the spliced area is higher than that of other areas, and therefore brightness attenuation processing is required.
Brightness attenuation of projector projection splicing overlapping picture area
An alpha (alpha) channel may be used to attenuate the intensity of the stitched region of the original image.
Specifically, an alpha channel value is calculated for each pixel of the original image to attenuate the brightness of the image splicing region. Suppose the latitude range of the splicing band is n1-n2 (n2 > n1). Because the splicing region is a regular region divided according to latitude, all points at the same latitude can be given the same brightness. Taking the projection picture portion inside n1 as an example, the alpha channel value of a pixel point within the splicing band (with latitude value ni) is:
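The weighting formula appears as a figure; a plausible sketch is a ramp that equals 1 at the inner edge n1 and 0 at the outer edge n2. A linear ramp is assumed here; the circular-screen embodiment later uses a trigonometric curve, which could be substituted.

```python
def dome_splice_alpha(ni, n1, n2):
    """Alpha value for a pixel at latitude ni inside the splice band n1..n2, for the
    projector whose own picture lies on the n1 side (assumed linear ramp)."""
    if ni <= n1:
        return 1.0   # fully inside this projector's own region
    if ni >= n2:
        return 0.0   # fully inside the other projector's region
    return (n2 - ni) / (n2 - n1)
```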
after the alpha channel value of the original image of each projector is obtained, the original image is deformed by affine deformation to obtain an alpha channel image of the projector resolution, and the deformed image is added with the alpha channel image to obtain the required projector projection splicing fusion image, as shown in fig. 16. And processing and packaging all the sequence frames to complete the film fusion production.
The method of the embodiment of the present invention is described in detail above with reference to specific application scenarios.
In order to better implement the above-mentioned embodiments, the following also provides relevant devices for cooperatively implementing the above-mentioned embodiments.
Referring to fig. 17, according to an embodiment of the present invention, there is provided a two-projector dome screen projection apparatus for two projectors to project images onto the same dome screen, the apparatus including:
the determining module 171 is configured to determine the projection ranges of the two projectors on the dome screen, and to ensure that the projection images of the two projectors respectively cover a half of the dome screen and have a splicing area with a certain width;
a marking module 172, configured to draw a plurality of marking points on each projector display image, and record actual pixel coordinates (x, y) of the marking points;
the mapping module 173 is configured to obtain a coordinate transformation relationship between the world coordinate of any point on the spherical screen and the pixel coordinate of the corresponding point on the display image of the projector through coordinate transformation;
a coordinate calculation module 174, configured to calculate, according to the world coordinates (Xa, Ya, Za) corresponding to the mark point on the spherical screen projection image, theoretical pixel coordinates (c, r) on the corresponding display image according to the coordinate conversion relationship;
a parameter calculating module 175, configured to calculate a position parameter for each projector, so that actual pixel coordinates (x, y) of a plurality of mark points on a display image of the projector are closest to theoretical pixel coordinates (c, r);
the image generation module 176 is used for generating a planar original image through three-dimensional modeling and simulated projection, determining the corresponding relation between the pixel coordinates (U, V) of the original image and the pixel coordinates (c, r) of the corresponding projector display image, and further determining the coordinate mapping relation between the pixels on the original image and the projection pixels on the screen;
the image acquisition module 177 is configured to acquire pixel regions on the original image, which correspond to projection regions and splicing regions of different projectors on the spherical screen, respectively, based on the calculated position parameters of the projectors according to the mapping relationship, where the splicing regions on the spherical screen are regular regions divided according to latitudes;
and the processing module 178 is configured to perform image stitching and fusion processing based on the determined pixel regions respectively corresponding to the projection regions and the stitching regions of different projectors on the original image.
It can be understood that the functions of the functional modules of the splicing and fusing device of the two projection spherical screens in this embodiment can be specifically implemented according to the method in the foregoing method embodiment, and the specific implementation process thereof can refer to the related description in the foregoing method embodiment, and will not be described herein again.
According to this scheme, the coordinate mapping relation between the spherical screen and the image displayed by the projector is obtained through coordinate conversion, and by drawing mark points and comparing their actual pixel coordinates with the theoretical pixel coordinates obtained by back-calculation, the position parameters of the projector are calculated and determined; image splicing and fusion processing is then carried out on this basis, so that the splicing and fusion of the projected images can be realized with high precision, the image restoration degree is improved, and the problems of image deformation, ghost images and the like caused by poor splicing and fusion in prior-art spherical screen projection are solved. Compared with the prior art, the scheme of the invention can generate an accurate image film after deformation and fusion and can compensate for the influence of lens distortion.
In the preferred embodiment, the projector position is determined by using the three-dimensional measuring instrument and the picture pixel marking software, so that a virtual projector position producing the same picture as the real one can be generated, and the relation between the dome screen geometry and the fisheye lens distortion can be restored.
In the preferred embodiment, the three-dimensional world coordinates can be converted into points on the two-dimensional image according to the relative position relationship to draw the grid image in real time, verification and correction are performed according to the calculation result, and the world coordinates can be adjusted and corrected in real time to achieve an ideal condition.
(III) Multi-projector circular screen projection
Multi-projector circular screen projection refers to a projection picture formed by multiple projections (channels) projected onto a circular screen, and covers both two-dimensional and three-dimensional projection systems. The key point is how to handle the picture deformation and the splicing and fusion of the overlapped pictures. Existing splicing and fusion of multiple projection pictures on a circular screen generally renders the film by establishing a theoretical relation between the projectors and the circular screen in three-dimensional image processing software such as Maya. However, the lens shift cannot be reflected in Maya, and the mismatch between the theoretical and actual projector positions and the screen model surface makes the pictures fail to coincide, resulting in image distortion and ghosting. Another way is to make the pictures coincide using morphing software. Such morphing deforms the picture with a curve from a specific mathematical algorithm, but the deformation trend is difficult to control, and the picture in the splicing area is prone to distortion, ghosting and uneven brightness.
Referring to fig. 18, an embodiment of the present invention provides a multi-circular-screen projection method, which is applied to a plurality of projectors projecting images onto the same circular screen. Here, the circular screen is an annular screen, also referred to as a ring screen for short. The projection projects an image onto the circular screen, and the image may be a video picture.
As shown in fig. 18, the method may include:
181. Acquiring the coordinate conversion relation between the position coordinate of any point on the circular screen and the pixel coordinate of the corresponding point on the display image of the projector.
In this context, a mathematical model method is used, based on the relative position relationship between the circular screen and the projector, to respectively establish different coordinate systems and the conversion relationship between the different coordinate systems, and a mapping relationship between the position coordinate of any point on the circular screen and the pixel coordinate of the corresponding point on the display image of the projector can be obtained through coordinate conversion.
182. And determining pixel areas on the display image respectively corresponding to the projection areas and the splicing areas of different projectors on the circular screen according to the coordinate conversion relation.
The projector is adjusted to project to the circular screen, the projection areas and the splicing areas of different projectors on the circular screen can be obtained by using a three-dimensional measuring instrument to assist in measurement calculation and the like, and the areas can be represented by position coordinates on the circular screen. According to the mapping relationship obtained in the previous step, the position coordinates can be converted into pixel coordinates, so as to obtain corresponding pixel coordinates on the display image, that is, obtain the corresponding pixel area on the display image.
183. Generating a planar original image through three-dimensional modeling and simulated projection, determining a coordinate mapping relation between pixels on the original image and projection pixels on a screen based on the corresponding relation between the original image and a display image and the coordinate conversion relation, and further determining pixel areas on the original image respectively corresponding to projection areas and splicing areas of different projectors.
The projector display chip is planar, the displayed picture is also planar, a circular screen model can be established through three-dimensional modeling, and a planar original image corresponding to the circular screen model is generated through simulated projection. The original image can be output to a projector after being processed, displayed on a display chip of the projector and projected to a circular screen. Because the original image needs to be displayed on the projector, the original image and the display image of the projector have a certain corresponding relationship, and the pixel coordinates on the original image and the pixel coordinates on the display image can have a one-to-one corresponding relationship. The specific formula representation of the corresponding relationship is not limited herein, but the corresponding relationship is determined and can be directly realized by a person skilled in the art. And further determining the coordinate mapping relation between the pixels on the original image and the projected pixels on the screen.
In some embodiments, coordinates of a corresponding mark pixel on the display image of the projector may be calculated according to coordinates of a projection pixel on the screen through a coordinate conversion method, that is, theoretical pixel coordinates and actual pixel coordinates of the mark pixel on the display image of the projector may be obtained. Then, the parameters of the projector are adjusted to enable the obtained two coordinates to be close to each other, so that the better parameters of the projector are determined, and further the coordinate mapping relation between the pixels of the original image displayed by the projector and the projected pixels on the screen is established. Or, the projection coordinates and the position coordinates of the projection pixels on the circular screen can be obtained through a coordinate conversion mode, and then the parameters of the projector are adjusted to enable the obtained two coordinates to be close to each other, so that better parameters of the projector are determined, and further the coordinate mapping relation between the pixels of the original image displayed by the projector and the projection pixels on the screen is established.
184. And performing image splicing and fusion processing based on the determined pixel regions respectively corresponding to the projection regions and the splicing regions of different projectors on the original image.
Finally, according to the pixel regions corresponding to different projector regions and the pixel regions corresponding to the splicing regions determined on the original image, the original image can be subjected to processing such as splicing and fusion, and an image or a video picture which can be output to a projector for projection is obtained.
Optionally, in an implementation manner, step 181 may specifically include:
establishing a world coordinate system based on the circular screen, and establishing the following coordinate systems based on any projector: the method comprises the following steps of establishing a conversion relation among a projector coordinate system based on a projector, an image coordinate system based on a projector display chip plane and a pixel coordinate system for displaying an image on a display chip; and according to the conversion relation among the coordinate systems, obtaining the mapping relation between the position coordinate of any point on the circular screen and the pixel coordinate of the corresponding point on the display image of the projector through coordinate conversion.
Optionally, in an implementation manner, the establishing a world coordinate system based on a circular screen includes: taking a circle center line of the radian of the circular screen at a half of the height of the circular screen as an original point, taking a Z axis horizontally facing the middle of the circular screen, and taking a Y axis vertically downwards as a world coordinate system; the establishing of the following coordinate systems based on any projector comprises: establishing a projector coordinate system by taking the optical center of the projector as an origin based on the projector, establishing a two-dimensional image coordinate system by taking the center of a display chip as the origin based on the plane of the display chip of the projector, and establishing a pixel coordinate system by taking a pixel at one vertex of a display image as the origin based on the plane of the display chip of the projector; the establishing of the conversion relation between the coordinate systems comprises the following steps: establishing a first conversion relation between a world coordinate system and a projector coordinate system according to the distance and the rotation angle of the projector relative to the origin of the world coordinate system; establishing a second conversion relation between the projector coordinate system and the image coordinate system according to the projection optical principle; and establishing a third conversion relation between the image coordinate system and the pixel coordinate system according to the size of the display chip and the size of the pixel.
Optionally, in an implementation manner, the obtaining, according to a conversion relationship between coordinate systems, a mapping relationship between a position coordinate of any point on the circular screen and a pixel coordinate of a corresponding point on the display image of the projector through coordinate conversion includes: expressing the position coordinates (theta, h) of any point on the circular screen by world coordinates (X, Y, Z); obtaining corresponding projector coordinates (Xc, Yc, Zc) according to the first conversion relation; obtaining corresponding image coordinates (u, v) according to the second conversion relation; and obtaining corresponding pixel coordinates (c, r) according to the third conversion relation. Wherein,
u=f*Xc/Zc,v=f*Yc/Zc
θ represents the included angle between the perpendicular from the point to the Y axis of the world coordinate system and the X axis of the world coordinate system; h represents the height of the point relative to the bottom end of the circular screen; r represents the distance between the point and the Y axis of the world coordinate system; H represents the height of the circular screen. Rx, Ry and Rz are the rotation matrices for the projector rotating about the X, Y and Z axes of the world coordinate system at the origin, and T is the translation matrix of the projector relative to the world coordinate system. f is the focal length of the projector; Cx and Cy are the offsets of the center of the projector lens along the x and y axes of the image coordinate system; dx, dy are the length and width of each pixel on the projector display chip.
Optionally, in an implementation manner, step 182 may include: adjusting the parameters of the projectors to project on the circular screen, acquiring projection areas and splicing areas of different projectors on the circular screen, and generating a (θ, h) dot matrix with a certain number of rows and columns on the circular screen according to a set interval, wherein (θ, h) is a position coordinate on the circular screen; and acquiring the (c, r) dot matrix of the pixel area on the display image corresponding to the (θ, h) dot matrix according to the mapping relation, wherein (c, r) is a pixel coordinate on the display image.
Optionally, in an implementation manner, step 183 may include: establishing a circular screen model through three-dimensional modeling, arranging a virtual camera at an audience observation point of the circular screen model, projecting an image acquired by the virtual camera onto the circular screen model, expanding an image plane displayed on the circular screen model to obtain an original image, and determining the corresponding relation between pixel coordinates (U, V) of the original image and pixel coordinates (c, r) of an image displayed by a projector; and acquiring (U, V) lattices of pixel areas on the original image, which respectively correspond to (theta, h) lattices of projection areas and splicing areas of different projectors according to the mapping relation and the corresponding relation.
Optionally, in an implementation manner, step 184 may include: performing brightness attenuation processing on a splicing area on the original image by using an alpha channel; and processing the original image by using affine transformation to obtain an image to be projected by the corresponding projector.
The above-described scheme may be embodied, for example, in an image processing apparatus, such as a computer apparatus.
In order to better understand the technical solutions provided by the embodiments of the present invention, the following description is given by taking an implementation mode in a specific scenario as an example.
First, a world coordinate system is established based on the circular screen. As shown in fig. 19, the world coordinate system may be established with the center point of the circular screen's arc at half the screen height as the origin, the Z axis horizontally facing the middle of the circular screen, and the Y axis pointing vertically downward. The lattice range of the circular screen model can be determined in the world coordinate system from the real radius R, height H and radian Φ of the circular screen. In practice the circular screen may bulge like a waist drum; in this embodiment the bulge is described by the half length a and the depth b of the waist drum. The actual radius r of any point on the circular screen, i.e. the distance from the point to the Y axis of the world coordinate system, depends on the height value h of the point; this distance is at most R and at least R - b.
Herein, the angle between the perpendicular from any point on the circular screen to the Y axis of the world coordinate system and the X axis of the world coordinate system is defined as the angle θ of that point on the circular screen; that is, the Z axis direction corresponds to θ = 90°. Each point on the circular screen can then be uniquely determined by the position coordinates (θ, h), where h is the height of the point on the circular screen, i.e. the height of the point relative to the bottom end of the circular screen.
A point (θ, h) on the circular screen can then be expressed in world coordinates as:
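The expression is given as a figure; the sketch below follows the axis conventions above (Y pointing down, Z toward the middle of the screen, θ = 90° along the Z axis) and assumes a simple cosine waist-drum profile of half-length a and depth b centered at half height, so that the point radius varies between R - b and R. The profile shape is an assumption.

```python
import numpy as np

def circular_screen_point(theta, h, R, H, a, b):
    """World coordinates of the circular screen point (theta, h).

    theta: angle from the world X axis (theta = 90 degrees lies along the Z axis)
    h:     height of the point above the bottom of the screen; the origin sits at H/2
    Assumed waist-drum profile: the radius dips from R to R - b over a half-length a
    around mid-height, so r = R - b * bump(h)."""
    dh = h - H / 2.0                                            # height relative to the origin
    bump = np.cos(np.pi * dh / (2.0 * a)) if abs(dh) < a else 0.0
    r = R - b * bump                                            # distance from the point to the Y axis
    X = r * np.cos(theta)
    Z = r * np.sin(theta)
    Y = -dh                                                     # Y axis points vertically downward
    return np.array([X, Y, Z])
```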
Then, as shown in fig. 19, a projector coordinate system may be defined with the optical center (light emitting point) of the projector as the origin and the central light emitting direction (the optical axis) as the Z axis, i.e. the Z axis is perpendicular to the display chip of the projector. The angles by which the projector rotates around the X, Y and Z axes of the world coordinate system at the origin are defined as α, β and γ respectively, the corresponding three rotation matrices are denoted Rx, Ry and Rz, the translation amounts of the projector's own coordinate system along Xc, Yc and Zc after rotation are denoted Tx, Ty and Tz, and the translation matrix is denoted T. The first conversion relationship between a point (Xc, Yc, Zc) in the projector coordinate system and a point (X, Y, Z) in the world coordinate system can then be established.
then, an image coordinate system is defined. As shown in fig. 20, the plane of the projector display chip may be used as a two-dimensional image coordinate system, the u-axis and v-axis directions of the image coordinate system may be the same as the Xc-axis and Yc-axis directions of the projector coordinate system, respectively, and the center of the display chip may be used as the origin of the image coordinate system. Since the light rays emitted by the projector all pass through the optical center, namely the origin of the projector coordinate system, and the focal length of the projector is recorded as f, a second conversion relationship between the image coordinates (u, v) of the pixel position of the light-emitting point on the display chip and the points (Xc, Yc, Zc) of the projected point on the image coordinate system can be established:
u=f*Xc/Zc, v=f*Yc/Zc
then, a pixel coordinate system is defined. It is defined that a pixel at one vertex of the display image on the display chip, for example, the upper left corner, is the origin (0, 0) of the pixel coordinate system, and the c-axis and r-axis of the pixel coordinate system are the same as the u-axis and v-axis directions of the image coordinate system, respectively. When the lens is not shifted, the lens center is at the center of the displayed image, i.e., the pixel center, and the shift amount of the lens center along the u-axis and v-axis of the image coordinate system is denoted by Cx and Cy. The chip size and the total number of pixels of the image are known, and the length and width dx, dy of each pixel on the chip can be obtained. A third transformation of the image coordinates (u, v) to the pixel coordinates (c, r) can thus be established for any point on the chip:
therefore, according to the conversion relation between the coordinate system and the coordinate system established in the steps, the position coordinates of any point on the ring screen can be converted into the pixel coordinates on the display image, namely, the dot matrix on the ring screen can be accurately reflected on the display image. Optionally, the points on the circular screen with the same height h and the same angle θ may be connected by lines, that is, the circular screen mesh of the current relative position parameter may be drawn.
Second, circular screen projection is carried out and the pixel lattice of the display image corresponding to the lattice of the projection image on the circular screen is obtained. The projector parameters can be adjusted with the aid of calculations from the three-dimensional measuring instrument so that lines of the same height h are displayed horizontally on the circular screen, and the screen waist-drum parameters can be adjusted so that lines of the same angle θ are displayed vertically. Lines of the same angle θ and height h in the splicing area of the projection images coincide, and the splicing angle and height range of the same overlapping area are exactly the same. Each projection generates a (θ, h) lattice with a certain number of rows and columns, at fixed angle and height intervals, over the portion of the circular screen it owns, and the corresponding (c, r) lattice is obtained through the coordinate conversion above.
Then, a planar original image is generated through three-dimensional modeling and simulated projection. As shown in fig. 21, three-dimensional modeling may be performed in three-dimensional rendering software to establish a circular screen model with screen height H, radius R and radian Φ; a virtual camera is placed at the audience observation point of the circular screen model, the picture acquired by the virtual camera through the circular screen display area is re-projected onto the circular screen model, and the image plane on the screen model is unrolled to obtain the original image. Because the original image and the display image of the projector have a determined correspondence, the (θ, h) lattice also has a determined mapping on the original image, and the pixel lattice mapped onto the original image is denoted (U, V).
Finally, image splicing and fusion processing is carried out. The alpha channel may be used to attenuate the brightness of the spliced area of the original image. Because the splicing regions are set, when the grids are spliced, as regular rectangles of equal width and height (the splicing bands), the alpha channel values of the image pixels (U, V) corresponding to the same angle θ within a splicing band can be set equal. Suppose the angle values of the left and right edges of the splicing band are θ1 and θ2. To reduce the Mach band effect, the transparency gradient across adjacent splicing bands is set to follow a trigonometric-function curve. Let θi be the angle value corresponding to a pixel in the splicing band; taking the projection picture on the θ1 side as an example, the alpha channel value wi within the splicing band is:
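The exact curve is given as a figure; a common trigonometric blend consistent with the description is the raised-cosine ramp below, whose specific form is an assumption. Note that the weights computed from the two sides of the band sum to 1, which keeps the overall brightness constant across the overlap.

```python
import numpy as np

def splice_alpha(theta_i, theta1, theta2):
    """Alpha value wi for a splice-band pixel at angle theta_i, for the projector whose
    own picture lies on the theta1 side; assumed raised-cosine (trigonometric) ramp."""
    t = (theta_i - theta1) / (theta2 - theta1)   # 0 at theta1, 1 at theta2
    t = np.clip(t, 0.0, 1.0)
    return 0.5 * (1.0 + np.cos(np.pi * t))       # 1 at theta1, 0 at theta2
```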
the luminance reduction process for the alpha channel is completed as described above. The brightness and range of the individual projections in the original image are already determined. And the points of (U, V) are uniformly distributed in the projection original image area and are in one-to-one correspondence with the points (c, r) on the corresponding projection display image. Affine transformation can be used for affine corresponding the original image to the triangular areas of the two adjacent points corresponding to (c, r) according to (U, V) and the triangular images of the two adjacent points, as shown in fig. 22, the whole area is affine and output, and then the corresponding image which can be output to a projector for projection, namely the sequence frame with fused film, is obtained.
The method of the embodiment of the present invention is described in detail above with reference to specific application scenarios.
In order to better implement the above embodiments, relevant devices for cooperatively implementing them are also provided below.
Referring to fig. 23, an embodiment of the invention provides a multi-projector circular-screen projection apparatus, applied where a plurality of projectors project images onto the same circular screen. The apparatus may comprise:
the obtaining module 231 is configured to obtain a coordinate conversion relationship between a position coordinate of any point on the circular screen and a pixel coordinate of a corresponding point on the display image of the projector;
a determining module 232, configured to determine, according to the coordinate conversion relationship, the pixel areas on the display image that correspond respectively to the projection areas and splicing areas of the different projectors on the circular screen;
the image module 233 is configured to generate a planar original image through three-dimensional modeling and simulated projection, determine a coordinate mapping relationship between pixels on the original image and projection pixels on a screen based on a correspondence relationship between the original image and a display image and the coordinate conversion relationship, and further determine pixel areas on the original image corresponding to projection areas and splicing areas of different projectors;
and the processing module 234 is configured to perform image splicing and fusion processing based on the pixel regions, determined on the original image, that correspond respectively to the projection regions and splicing regions of the different projectors.
It can be understood that the functions of the functional modules of the multi-projector circular-screen projection apparatus in this embodiment can be implemented according to the method in the foregoing method embodiment; for the specific implementation process, reference may be made to the related description in the foregoing method embodiment, which is not repeated here.
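For orientation, the four modules can be read as stages of one pipeline. A minimal structural sketch, assuming a Python implementation with hypothetical class and method names (the patent does not prescribe any API):

    # Illustrative skeleton of the apparatus in fig. 23; all names are hypothetical.

    class RingScreenProjectionApparatus:
        def obtain_coordinate_conversion(self, screen, projectors):
            """Module 231: relate any point on the circular screen to display-image pixel coordinates."""

        def determine_display_regions(self, conversion):
            """Module 232: find each projector's projection and splicing areas on its display image."""

        def build_original_image_mapping(self, conversion):
            """Module 233: generate the planar original image and map its pixels to screen projection pixels."""

        def splice_and_fuse(self, original, regions):
            """Module 234: attenuate splicing bands via alpha and warp/fuse into per-projector frames."""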
In this scheme, the coordinate mapping relation is obtained, the pixel areas on the original image corresponding to the projection areas and image splicing areas of the different projectors on the circular screen are determined, and image splicing and fusion processing is carried out; high-precision splicing and fusion of the projected images can thus be achieved, and the fidelity of the restored picture is improved.
In this scheme, the calculation result can be verified and corrected by drawing a circular-screen grid image, the circular-screen parameters can be adjusted according to the calculation result, and picture deformation caused by screen deformation can be accurately reduced by setting these parameters.
By contrast, in existing splicing and fusion technology the screen grid image is generally generated in advance, and the projector must then be physically moved to align the grids.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention. Again, the terms "including" and "having," and any variations thereof, in the description and claims of this invention and the above-described drawings are intended to cover non-exclusive inclusions.
The principles and embodiments of the present invention have been described herein using specific examples, which are set forth only to help understand the method and its core ideas of the present invention, and not to limit the same; those of ordinary skill in the art will understand that: according to the idea of the present invention, modifications may be made to the technical solutions described in the above embodiments, or some technical features may be equivalently replaced, and these modifications or replacements may not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the present invention.

Claims (21)

1. A screen projection method is characterized by comprising the following steps:
acquiring actual position coordinates of projection pixels on a screen, which correspond to marking pixels on a display image of the projector;
calculating theoretical pixel coordinates of the marked pixels on a display image of the projector according to the actual position coordinates of the projection pixels;
acquiring actual pixel coordinates of the marked pixels on the display image of the projector;
adjusting the parameters of the projector to enable the theoretical pixel coordinate of the marked pixel to be close to the actual pixel coordinate;
and establishing a coordinate mapping relation between the pixels of the original image displayed by the projector and the projection pixels on the screen based on the adjusted parameters of the projector.
2. The screen projection method of claim 1, wherein the obtaining actual position coordinates of projection pixels on the screen corresponding to marking pixels on the display image of the projector comprises:
measuring and recording, with a three-dimensional measuring instrument, the actual position coordinates of the corresponding projection pixels on the screen.
3. The screen projection method of claim 1, wherein the obtaining actual pixel coordinates of the marking pixels on the display image of the projector comprises:
drawing a plurality of mark points on the display image of the projector, and recording the actual pixel coordinates of the mark points.
4. The screen projection method of claim 1, further comprising: pre-establishing a coordinate conversion relation between a mark pixel on a display image of a projector and a projection pixel on a screen;
the calculating the theoretical pixel coordinate of the mark pixel on the display image of the projector according to the actual position coordinate of the projection pixel comprises: calculating based on the coordinate conversion relation.
5. The screen projection method of claim 4, wherein the establishing a coordinate transformation relationship between the mark pixel on the display image of the projector and the projection pixel on the screen comprises:
a world coordinate system is established based on a screen, and the following coordinate systems are established based on any projector: the system comprises a projector coordinate system based on a projector, an image coordinate system based on a projector display chip plane, and a pixel coordinate system for displaying an image on a display chip;
and establishing a conversion relation according to the position relation between the coordinate systems, and further establishing a coordinate conversion relation between the mark pixels on the display image of the projector and the projection pixels on the screen.
6. The screen projection method of claim 5,
the establishing of the world coordinate system based on the screen comprises: if the screen is a dome screen, establishing the world coordinate system with the center of the dome screen as the origin and the vertically downward direction as the Y axis; if the screen is a circular screen, establishing the world coordinate system with the point on the center line of the circular screen's arc at half the height of the circular screen as the origin, the Z axis pointing horizontally toward the middle of the circular screen, and the Y axis pointing vertically downward;
the establishing of the following coordinate systems based on any projector comprises: establishing a projector coordinate system by taking the optical center of the projector as an origin based on the projector, establishing a two-dimensional image coordinate system by taking the center of a display chip as the origin based on the plane of the display chip of the projector, and establishing a pixel coordinate system by taking a pixel at one vertex of a display image as the origin based on the plane of the display chip of the projector;
the establishing of the conversion relation according to the position relation between the coordinate systems comprises the following steps: establishing a first conversion relation between a world coordinate system and a projector coordinate system according to the distance and the rotation angle of the projector relative to the origin of the world coordinate system; establishing a second conversion relation between the projector coordinate system and the image coordinate system according to the projection optical principle; and establishing a third conversion relation between the image coordinate system and the pixel coordinate system according to the size of the display chip and the size of the pixel.
7. A screen projection method as claimed in claim 5 wherein the projector parameters comprise:
angles α, β and γ of the projector's rotation about the X, Y and Z axes of the world coordinate system at the origin; translation amounts Tx, Ty and Tz along the Xc, Yc and Zc axes of the projector coordinate system after the rotation; offsets Cx and Cy of the projector's lens center along the u and v axes of the image coordinate system; and the focal length f of the projector.
8. The screen projection method of claim 1, wherein the adjusting the projector parameters to make the theoretical pixel coordinates of the mark pixel close to the actual pixel coordinates comprises:
calculating, for any one mark point, the distance between its theoretical pixel coordinate and its actual pixel coordinate, the theoretical pixel coordinate being related to the projector parameters;
summing the distances calculated for the plurality of mark points;
and taking the projector parameters for which the calculated sum approaches 0.
9. The screen projection method of claim 1, further comprising:
generating a planar original image through three-dimensional modeling and simulated projection;
acquiring pixel areas on the original image respectively corresponding to projection areas and splicing areas of different projectors on the screen according to the coordinate mapping relation;
and performing image splicing and fusion processing based on the determined pixel regions respectively corresponding to the projection regions and the splicing regions of different projectors on the original image.
10. The screen projection method of claim 9, wherein the screen is specifically a dome, the method is applied to a plurality of projectors to project images onto the same dome, and before generating the original image of the plane by three-dimensional modeling and simulated projection, the method further comprises:
drawing a spherical screen grid image, converting the spherical screen grid image into a two-dimensional image displayed by the projector according to the coordinate mapping relation and the calculated position parameter of the projector, and drawing a planar grid image; and projecting the plane grid image to a projector for displaying, verifying the difference between the real spherical screen and the theoretical spherical screen, correcting world coordinates according to a verification result, and further correcting the coordinate mapping relation.
11. A screen projection method as claimed in claim 10, wherein the generating of the planar original image by three-dimensional modeling and simulated projection comprises: a dome screen model is established through three-dimensional modeling, and a circular original image on a two-dimensional plane is generated by placing a virtual equidistant fisheye camera at the center of a sphere of the dome screen model.
12. The screen projection method of claim 9, wherein the screen is specifically a ring screen, and the method is applied to a plurality of projectors to project images onto the same ring screen, and before generating the original image of the plane through three-dimensional modeling and simulated projection, the method further comprises:
the method comprises the steps of obtaining initial parameters of projectors and the angle of a circular screen occupied by each projector, defining the angle of a splicing area according to the area of projection overlapping, generating a coordinate dot matrix circular screen mathematical model with a certain row number and a certain column number according to a set interval, connecting the coordinate dot matrix circular screen mathematical model to form an initial grid, adjusting the parameters of the projectors to enable the vertical height of the grid projected on the screen to be matched with the height of a real screen, enabling each angle line of the grid to be distributed at the position corresponding to the screen, and enabling the grids of adjacent projection splicing areas to be overlapped in an inosculating manner.
13. A screen projection method as claimed in claim 12, wherein the generating of the planar original image by three-dimensional modeling and simulated projection comprises:
the method comprises the steps of establishing a circular screen model through three-dimensional modeling, setting a virtual camera at an audience observation point of the circular screen model, projecting an image acquired by the virtual camera onto the circular screen model, and expanding an image plane displayed on the circular screen model to generate a rectangular original image on a two-dimensional plane.
14. The screen projection method according to any of claims 1 to 13, wherein the performing image stitching fusion processing includes:
performing brightness attenuation processing on a splicing area on an original image by using an alpha channel;
and respectively processing the original image and the alpha channel thereof by affine transformation, and performing fusion processing on the processed images to obtain the images to be projected by the corresponding projector.
15. A screen projection method is characterized by comprising the following steps:
acquiring pixel coordinates of marked pixels on a display image of the projector;
calculating the projection coordinates of the corresponding projection pixels on the screen according to the pixel coordinates of the marking pixels;
acquiring real coordinates of corresponding projection pixels on a screen;
adjusting parameters of a projector to enable the projection coordinate of the same projection pixel to be close to the real coordinate;
and establishing a coordinate mapping relation between the pixels of the original image displayed by the projector and the projection pixels on the screen based on the adjusted parameters of the projector.
16. A screen projection device is characterized by comprising the following modules:
the acquisition module is used for acquiring the actual position coordinates of the projection pixels on the screen, which correspond to the mark pixels on the display image of the projector;
the calculation module is used for calculating theoretical pixel coordinates of the mark pixel on a display image of the projector according to the actual position coordinates of the projection pixel;
the acquisition module is also used for acquiring the actual pixel coordinates of the mark pixels on the display image of the projector;
the adjusting module is used for enabling the theoretical pixel coordinate of the marked pixel to be close to the actual pixel coordinate by adjusting the parameters of the projector;
and the mapping module is used for establishing a coordinate mapping relation between the pixels of the original image displayed by the projector and the projection pixels on the screen based on the adjusted projector parameters.
17. A screen projection device is characterized by comprising the following modules:
the acquisition module is used for acquiring the pixel coordinates of the marked pixels on the display image of the projector;
the calculation module is used for calculating the projection coordinates of the corresponding projection pixels on the screen according to the pixel coordinates of the marking pixels;
the acquisition module is further used for acquiring the real coordinates of the corresponding projection pixels on the screen;
the adjusting module is used for enabling the projection coordinate of the same projection pixel to be close to the real coordinate by adjusting the parameters of the projector;
and the mapping module is used for establishing a coordinate mapping relation between the pixels of the original image displayed by the projector and the projection pixels on the screen based on the adjusted projector parameters.
18. A computer device, comprising a processor, a memory, a bus, and a communication interface; the memory is used for storing a computer program, the processor is connected with the memory through the bus, and when the computer device runs, the processor executes the computer program stored in the memory to enable the computer device to execute the screen projection method according to claim 1.
19. A computer device, comprising a processor, a memory, a bus, and a communication interface; the memory is used for storing a computer program, the processor is connected with the memory through the bus, and when the computer device runs, the processor executes the computer program stored in the memory to enable the computer device to execute the screen projection method according to claim 15.
20. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computer device comprising a processor, cause the computer device to perform the screen projection method of claim 1.
21. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computer device comprising a processor, cause the computer device to perform the screen projection method of claim 15.
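The parameter adjustment recited in claims 1, 8 and 15, driving the summed distance between theoretical and actual coordinates toward zero, is in effect a least-squares fit of the projector parameters. A minimal sketch of such a fit, assuming SciPy and a simplified pinhole stand-in for the coordinate conversion of claims 5 to 7 (not the patent's exact model):

    import numpy as np
    from scipy.optimize import least_squares

    # Illustrative sketch: fit projector parameters by minimising the distances between the
    # theoretical pixel coordinates predicted from the parameters and the measured actual
    # pixel coordinates of the mark points. The pinhole model below is a simplified stand-in.

    def project(params, Pw):
        ax, ay, az, tx, ty, tz, cx, cy, f = params
        Rx = np.array([[1, 0, 0], [0, np.cos(ax), -np.sin(ax)], [0, np.sin(ax), np.cos(ax)]])
        Ry = np.array([[np.cos(ay), 0, np.sin(ay)], [0, 1, 0], [-np.sin(ay), 0, np.cos(ay)]])
        Rz = np.array([[np.cos(az), -np.sin(az), 0], [np.sin(az), np.cos(az), 0], [0, 0, 1]])
        Pc = Rz @ Ry @ Rx @ Pw + np.array([tx, ty, tz])                    # world -> projector frame
        return np.array([f * Pc[0] / Pc[2] + cx, f * Pc[1] / Pc[2] + cy])  # -> pixel (c, r)

    def residuals(params, screen_points, measured_pixels):
        pred = np.array([project(params, p) for p in screen_points])
        return (pred - measured_pixels).ravel()   # per-coordinate differences, squared and summed by least_squares

    # screen_points: measured positions of the projection pixels (three-dimensional measuring instrument);
    # measured_pixels: actual pixel coordinates of the mark points on the display image.
    # fit = least_squares(residuals, x0=np.zeros(9), args=(screen_points, measured_pixels))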
CN201811099366.3A 2018-09-20 2018-09-20 Screen projection method and device and related equipment Active CN109272478B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811099366.3A CN109272478B (en) 2018-09-20 2018-09-20 Screen projection method and device and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811099366.3A CN109272478B (en) 2018-09-20 2018-09-20 Screen projection method and device and related equipment

Publications (2)

Publication Number Publication Date
CN109272478A true CN109272478A (en) 2019-01-25
CN109272478B CN109272478B (en) 2022-03-25

Family

ID=65197710

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811099366.3A Active CN109272478B (en) 2018-09-20 2018-09-20 Screen projection method and device and related equipment

Country Status (1)

Country Link
CN (1) CN109272478B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020009699A1 (en) * 2000-04-28 2002-01-24 Kenichi Hyodo Data receiving device and image forming apparatus using same
CN102385238A (en) * 2010-09-03 2012-03-21 深圳华强数码电影有限公司 Implementation method and system for projecting and showing of ball screen
CN105787920A (en) * 2014-12-26 2016-07-20 秦永进 Dome screen demarcating method, demarcating system and control device
CN107121888A (en) * 2017-07-13 2017-09-01 广西临届数字科技有限公司 The method of ball-screen projection projection
CN107635120A (en) * 2017-09-19 2018-01-26 南京乐飞航空技术有限公司 A kind of method of multiple channel ball curtain Geometry rectification and Fusion Edges
CN108227348A (en) * 2018-01-24 2018-06-29 长春华懋科技有限公司 Geometric distortion auto-correction method based on high-precision vision holder

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110035275A (en) * 2019-03-27 2019-07-19 苏州华恒展览设计营造有限公司 City panorama dynamic display system and method based on large screen fusion projection
CN110035275B (en) * 2019-03-27 2021-01-15 苏州华恒展览设计营造有限公司 Urban panoramic dynamic display system and method based on large-screen fusion projection
CN109873997A (en) * 2019-04-03 2019-06-11 贵安新区新特电动汽车工业有限公司 Projected picture correcting method and device
CN110636275A (en) * 2019-09-24 2019-12-31 深圳魔秀文化科技有限公司 Immersive projection system and method
CN110942498A (en) * 2019-11-06 2020-03-31 天津大学 Method for establishing HUD system image warping deformation model
CN111061421A (en) * 2019-12-19 2020-04-24 北京澜景科技有限公司 Picture projection method and device and computer storage medium
CN111932686A (en) * 2020-09-09 2020-11-13 南昌虚拟现实研究院股份有限公司 Mapping relation determining method and device, readable storage medium and computer equipment
CN111932686B (en) * 2020-09-09 2021-01-01 南昌虚拟现实研究院股份有限公司 Mapping relation determining method and device, readable storage medium and computer equipment
CN111935468A (en) * 2020-09-24 2020-11-13 歌尔股份有限公司 Method and device for detecting deviation of projection center and computer readable storage medium
CN111935468B (en) * 2020-09-24 2021-01-22 歌尔股份有限公司 Method and device for detecting deviation of projection center and computer readable storage medium
CN114531579A (en) * 2020-11-23 2022-05-24 华强方特(深圳)科技有限公司 Circular screen projection method, system and equipment
WO2022116194A1 (en) * 2020-12-04 2022-06-09 中国科学院深圳先进技术研究院 Panoramic presentation method and device therefor
CN112672122B (en) * 2020-12-15 2022-05-24 深圳市普汇智联科技有限公司 Method and system for calibrating projection and camera mapping relation errors
CN112672122A (en) * 2020-12-15 2021-04-16 深圳市普汇智联科技有限公司 Method and system for calibrating projection and camera mapping relation errors
CN112734639A (en) * 2020-12-28 2021-04-30 南京欣威视通信息科技股份有限公司 Image display splicing method and system
CN114727075A (en) * 2021-01-06 2022-07-08 成都极米科技股份有限公司 Projection control method and device, projection equipment and storage medium
CN114727075B (en) * 2021-01-06 2023-09-08 成都极米科技股份有限公司 Projection control method and device, projection equipment and storage medium
CN112734860B (en) * 2021-01-15 2021-09-21 中国传媒大学 Arc-screen prior information-based pixel-by-pixel mapping projection geometric correction method
CN112734860A (en) * 2021-01-15 2021-04-30 中国传媒大学 Arc-screen prior information-based pixel-by-pixel mapping projection geometric correction method
CN113810673B (en) * 2021-09-24 2023-05-30 当趣网络科技(杭州)有限公司 Projector uniformity testing method and device and computer readable storage medium
CN113810673A (en) * 2021-09-24 2021-12-17 当趣网络科技(杭州)有限公司 Projector uniformity testing method and device and computer readable storage medium
CN114666556A (en) * 2022-02-23 2022-06-24 深圳华侨城文化旅游科技集团有限公司 Method, system, equipment and storage medium for fusing back projection spherical screen edges
CN114463475A (en) * 2022-04-08 2022-05-10 山东捷瑞数字科技股份有限公司 Multi-camera rendering image fusion method based on edge correction
CN114463475B (en) * 2022-04-08 2022-07-19 山东捷瑞数字科技股份有限公司 Edge correction-based multi-camera rendering image fusion method
CN115002431A (en) * 2022-05-20 2022-09-02 广景视睿科技(深圳)有限公司 Projection method, control device and projection system
CN115002431B (en) * 2022-05-20 2023-10-27 广景视睿科技(深圳)有限公司 Projection method, control device and projection system

Also Published As

Publication number Publication date
CN109272478B (en) 2022-03-25

Similar Documents

Publication Publication Date Title
CN109272478B (en) Screen projection method and device and related equipment
CN110336987B (en) Projector distortion correction method and device and projector
US20220286654A1 (en) Projector Keystone Correction Method, Apparatus And System, And Readable Storage Medium
CN108257183B (en) Camera lens optical axis calibration method and device
WO2018076154A1 (en) Spatial positioning calibration of fisheye camera-based panoramic video generating method
US8311366B2 (en) System and method for calibrating and adjusting a projected image of a projection apparatus
CN110191326B (en) Projection system resolution expansion method and device and projection system
US9892488B1 (en) Multi-camera frame stitching
CN105654502A (en) Panorama camera calibration device and method based on multiple lenses and multiple sensors
CN113808220A (en) Calibration method and system of binocular camera, electronic equipment and storage medium
KR20160034847A (en) System and method for calibrating a display system using a short throw camera
CN112734860B (en) Arc-screen prior information-based pixel-by-pixel mapping projection geometric correction method
CN107527336B (en) Lens relative position calibration method and device
CN110335307B (en) Calibration method, calibration device, computer storage medium and terminal equipment
CN110099267A (en) Trapezoidal correcting system, method and projector
CN107358577B (en) Rapid splicing method of cubic panoramic image
CN111429531A (en) Calibration method, calibration device and non-volatile computer-readable storage medium
CN106534670B (en) It is a kind of based on the panoramic video generation method for connecting firmly fish eye lens video camera group
CN111461963B (en) Fisheye image stitching method and device
CN109785390B (en) Method and device for image correction
US11380063B2 (en) Three-dimensional distortion display method, terminal device, and storage medium
CN112055186A (en) Geometric correction method, system, equipment and storage medium for multi-projection image splicing
CN114071103A (en) Adaptive left-right trapezoidal correction method for projector
CN112118435A (en) Multi-projection fusion method and system for special-shaped metal screen
CN109785225B (en) Method and device for correcting image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant