WO2023035965A1 - Imaging device, method and equipment - Google Patents

Imaging device, method and equipment

Info

Publication number
WO2023035965A1
Authority
WO
WIPO (PCT)
Prior art keywords
spherical
imaging
image
display
display screen
Prior art date
Application number
PCT/CN2022/114901
Other languages
English (en)
French (fr)
Inventor
吴本华
吴剑飞
Original Assignee
淮北康惠电子科技有限公司
Priority date
Filing date
Publication date
Priority claimed from CN202111066693.0A (CN115798363A)
Priority claimed from CN202111068298.6A (CN115811608A)
Priority claimed from CN202122207099.0U (CN215494532U)
Priority claimed from CN202111066699.8A (CN115811607A)
Priority claimed from CN202122207069.XU (CN215499282U)
Priority claimed from CN202122200889.6U (CN215449878U)
Priority claimed from CN202122200821.8U (CN215818329U)
Priority claimed from CN202122207067.0U (CN215417490U)
Priority to US18/293,699 (US20240348766A1)
Priority to CN202280074857.0A (CN118235420A)
Priority to EP22866433.0A (EP4358507A1)
Application filed by 淮北康惠电子科技有限公司
Publication of WO2023035965A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3141Constructional details thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/254Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating objects
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B13/00Optical objectives specially designed for the purposes specified below
    • G02B13/06Panoramic objectives; So-called "sky lenses" including panoramic objectives having reflecting surfaces
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B3/00Simple or compound lenses
    • G02B3/02Simple or compound lenses with non-spherical faces
    • G02B3/08Simple or compound lenses with non-spherical faces with discontinuous faces, e.g. Fresnel lens
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B5/00Optical elements other than lenses
    • G02B5/18Diffraction gratings
    • G02B5/1876Diffractive Fresnel lenses; Zone plates; Kinoforms
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B21/00Projectors or projection-type viewers; Accessories therefor
    • G03B21/54Accessories
    • G03B21/56Projection screens
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B25/00Viewers, other than projection viewers, giving motion-picture effects by persistence of vision, e.g. zoetrope
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B37/00Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B37/00Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe
    • G03B37/06Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe involving anamorphosis
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/243Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/293Generating mixed stereoscopic images; Generating mixed monoscopic and stereoscopic images, e.g. a stereoscopic image overlay window on a monoscopic image background

Definitions

  • The invention relates to the technical field of imaging and image display, and in particular to an imaging device, method and equipment.
  • The traditional panoramic image is a wide-area-source or local-source panoramic image file stitched together, after spherical processing, from multiple planar images captured and output by one or more cameras with planar image sensors.
  • 3D modeling projects the object onto planar projection surfaces in the X, Y and Z directions and calculates the corresponding values for each.
  • Although the projection process takes tangential and radial deformation into account, it does not seriously consider the loss of light quanta.
  • Shooting with a camera that has a flat photosensitive surface is a real physical process, and the light reaching different parts of the flat photosensitive surface suffers different degrees of light-quantum loss.
  • The imaging method based on a plane projection surface, whether it uses a linear imaging model, a nonlinear imaging model, or a combination of the two, exhibits deformation. Imaging under the linear imaging model is also accompanied by blurring, and in some cases or regions the deformation is very serious. Most existing solutions merely alleviate it by changing the focal length; they cannot eliminate it.
  • Current standard camera lenses and telephoto lenses mostly use a linear imaging model.
  • The telephoto lens mainly shoots distant scenes.
  • The imaging method based on a planar projection surface cannot obtain images of the spatial scene along the longitudinal coordinate that is parallel to and coincident with the imaging axis.
  • It cannot realize the complete projection of continuous points, lines and surfaces; only the approximate projection of a single point or a short line segment can be realized. It is therefore impossible to obtain a complete image of three-dimensional space, which is also why it cannot obtain a realistic image with a three-dimensional effect.
  • The present invention aims to provide an imaging device, method and equipment.
  • The technical solution adopted by the present invention includes an imaging device comprising an imaging component whose imaging surface is a spherical structure; at every point where the light emitted by the image source meets the spherically formed image surface, the included angle between the light and the imaging surface is 90°, and several imaging units are distributed on the imaging surface.
  • An image source with a spherical structure is also included; during imaging, the light emitted by the image source is irradiated onto the imaging surface of the imaging component, and the image source adjusts the direction and angle of the incident light so that all light finally irradiated onto the imaging surface is perpendicular to the corresponding position on the surface of the imaging surface.
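  • The perpendicular-incidence condition has a simple geometric reading: on a spherical imaging surface, a ray meets the surface at 90° exactly when its direction is parallel to the surface normal, i.e. to the radius through the point of incidence. The following minimal sketch (an illustration only, not code from the patent) checks this condition numerically.

```python
import numpy as np

def incidence_angle_deg(surface_point, sphere_center, ray_direction):
    """Angle (degrees) between an incident ray and the spherical imaging
    surface at surface_point; 90 means the ray is perpendicular to the surface."""
    normal = np.asarray(surface_point, float) - np.asarray(sphere_center, float)
    normal /= np.linalg.norm(normal)                 # radial surface normal
    d = np.asarray(ray_direction, float)
    d /= np.linalg.norm(d)
    angle_to_normal = np.degrees(np.arccos(np.clip(abs(np.dot(d, normal)), 0.0, 1.0)))
    return 90.0 - angle_to_normal

# A ray aimed at the sphere centre meets the spherical imaging surface at 90 degrees
center = np.array([0.0, 0.0, 0.0])
point_on_surface = np.array([0.0, 0.0, 1.0])
print(incidence_angle_deg(point_on_surface, center, center - point_on_surface))  # 90.0
```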
  • The model of the imaging method further includes an optical lens combination and an auxiliary lens, both located on the light path. By changing the properties and layout of the optical lens combination, the direction and path of the light emitted by the image source change, and its distance and position relative to the imaging surface change accordingly, so that the imaging surface can be placed at a specified position as required; the auxiliary lens further fine-tunes the incident light so that the light irradiated onto every part of the imaging surface is precisely perpendicular to the corresponding position on the surface of the imaging surface.
  • The optical axis of the auxiliary lens coincides with the symmetry axis of the imaging surface, and the spherical structure types of the imaging surface, the image source and the auxiliary lens are the same or match each other.
  • the imaging surface and/or the image source and/or the auxiliary lens is a structure with a flat surface on one side and a spherical surface on the other, or a structure with both surfaces being spherical; the spherical surface refers to a concave spherical surface or a convex spherical surface structure.
  • The spherical structure of the imaging surface and/or the image source and/or the auxiliary lens is a conventional spherical structure, a Fresnel spherical structure, or a combined structure of multiple Fresnel spherical structures. The conventional spherical structure is one of a conventional positive spherical structure, a conventional ellipsoidal structure and a conventional paraboloidal structure; the Fresnel spherical structure is one of a Fresnel positive spherical structure, a Fresnel ellipsoidal structure and a Fresnel paraboloidal structure.
  • The imaging units are arranged on the surface of the imaging surface along meridians, with the included angles between adjacent meridians equal and the distances between imaging units on the same meridian equal; or they are arranged along latitude lines, horizontal lines or a helix, with the spacing of the imaging units on the same latitude/horizontal/helical line equal and equal to the spacing between adjacent latitude/horizontal/helical lines.
  • Spacing here refers to the distance along the surface of the imaging surface.
  • Alternatively, the imaging units are distributed at equal spacing on the surface of the imaging surface without using any point, line or plane as a reference.
  • An imaging unit refers to a photosensitive unit on the photosensitive surface of a camera image sensor, a display pixel of a display screen, or an image pixel of an image.
  • the present invention also provides an imaging method based on any of the above-mentioned imaging devices.
  • S1: the angles between all parts of the image surface formed by the spherical surface and the light emitted by the image source, at their intersections with the imaging surface, are all 90°.
  • S2: matrix processing is performed on the imaging units of the imaging surface to form a virtual row-column matrix, and image pixel values are read from or written to the virtual row-column matrix.
  • S3: an image acquisition device whose imaging surface is a convex spherical surface directly receives the virtual matrix formed by the real-world scene; alternatively, a concave spherical or convex spherical imaging surface indirectly receives the real-world scene through a convex spherical image source and outputs the image file corresponding to the virtual matrix, which is restored and displayed by a display device whose display surface is a concave spherical surface;
  • the imaging surface directly receives the virtual matrix formed by the real world scene, and the concave s
  • The imaging units in S2 are read in one of the following ways.
  • S2.1 Virtual-imaging-unit supplementation of latitude/horizontal lines: the imaging units distributed along latitude/horizontal lines on the imaging surface are taken as the actual imaging units, and the number of actual imaging units on the longest latitude/horizontal line is taken as the reference number. Virtual imaging units are used to supplement the other latitude/horizontal lines that fall short of the reference number, so that each latitude/horizontal line, counting its actual imaging units plus the supplementary virtual imaging units, holds an equal number of imaging units reaching the reference number and is regarded as one row. The rows obtained in this way are taken as the rows, and the number of latitude/horizontal lines carrying all the imaging units on the imaging surface is taken as the columns, forming a virtual row-column matrix.
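  • As an illustration of S2.1 (a sketch only, not code from the patent; the VIRTUAL placeholder is an assumption), the snippet below pads each latitude/horizontal line of actual imaging-unit values up to the length of the longest line, producing a rectangular virtual row-column matrix.

```python
VIRTUAL = None  # assumed placeholder for a supplementary virtual imaging unit

def pad_lines_to_matrix(lines):
    """S2.1 sketch: 'lines' is a list of latitude/horizontal lines, each a list
    of actual imaging-unit values. Shorter lines are padded with virtual units
    up to the reference number (the longest line), giving a row-column matrix."""
    reference = max(len(line) for line in lines)     # units on the longest line
    return [list(line) + [VIRTUAL] * (reference - len(line)) for line in lines]

# Example: three latitude lines carrying 5, 3 and 1 actual imaging units
matrix = pad_lines_to_matrix([[1, 2, 3, 4, 5], [6, 7, 8], [9]])
# -> 3 rows x 5 columns; the missing positions are filled with virtual units
```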
  • S2.2 Mutual supplementation of adjacent latitude/horizontal lines: the imaging units distributed along latitude/horizontal lines on the imaging surface are taken as the actual imaging units, and a given number of actual imaging units is taken as the reference value. Starting from one latitude/horizontal line, actual imaging units are virtually picked line by line and point by point; if the reference value is not reached, picking continues from the next adjacent latitude/horizontal line until the reference value is reached, and the result is recorded as the first virtual row. The actual imaging units remaining on that latitude/horizontal line are included in the virtual picking of the next virtual row, and so on, until the actual imaging units of the last latitude/horizontal line on the imaging surface have all been picked; if the last pick fails to reach the reference value, virtual imaging units are used to supplement it. Finally, the rows obtained in this way are taken as the rows and the total number of rows as the columns, forming a virtual row-column matrix.
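  • A minimal sketch of the S2.2 idea (illustrative only; the VIRTUAL placeholder is an assumption): units are picked point by point across adjacent lines, every group of 'reference' units becomes one virtual row, leftovers roll into the next row, and the final short row is topped up with virtual units.

```python
VIRTUAL = None  # assumed placeholder for a supplementary virtual imaging unit

def chain_lines_to_matrix(lines, reference):
    """S2.2 sketch: walk the latitude/horizontal lines in order, picking imaging
    units point by point; every 'reference' units form one virtual row, and the
    final short row is supplemented with virtual units."""
    flat = [unit for line in lines for unit in line]   # line by line, point by point
    rows = []
    for start in range(0, len(flat), reference):
        row = flat[start:start + reference]
        row += [VIRTUAL] * (reference - len(row))      # supplement the last row only
        rows.append(row)
    return rows

# Rows of length 4 built from three adjacent latitude/horizontal lines
matrix = chain_lines_to_matrix([[1, 2, 3], [4, 5], [6, 7, 8, 9]], reference=4)
# -> [[1, 2, 3, 4], [5, 6, 7, 8], [9, VIRTUAL, VIRTUAL, VIRTUAL]]
```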
  • S2.3 Block method: the imaging surface, on which the imaging units are distributed along meridians, latitude lines, horizontal lines, a spiral, or at equal spacing without a reference object, is divided into one or more blocks of equal or unequal area.
  • The principle of division is that each block contains an equal number of imaging units, equal to the reference value.
  • Blocks that fall short of the reference value are supplemented with virtual imaging units.
  • The equal number of imaging units in a block is regarded as one virtual row; the rows obtained in this way are taken as the rows, and the number of all blocks is taken as the columns, forming a virtual row-column matrix.
  • S2.4 Virtual-meridian cutting method: any meridian passing through the center point of the spherical structure of the imaging surface is taken as a virtual meridian. Using a diameter of the sphere passing through its center point as the axis of rotation, the virtual meridian rotates clockwise or counterclockwise, and the equal number of imaging units (arranged at equal spacing along latitude lines, horizontal lines, helical lines, or without a reference object) that the virtual meridian sweeps across on the imaging surface during a preset period of time is taken as one virtual row. The rows obtained in this way are taken as the rows, and the number of virtual rows obtained by rotating the virtual meridian through one full circle is taken as the columns, forming a virtual row-column matrix.
  • S2.5 Meridian method: the imaging units are distributed on the imaging surface along meridians.
  • Each meridian, carrying the same number of imaging units, is regarded as one row, and the number of all meridians is regarded as the columns, forming a virtual row-column matrix.
  • S2.6 Spiral method: the imaging units laid out along a spiral are divided into several equal parts. Starting from the first imaging unit at the starting point of the spiral, a fixed number of imaging units is selected as one virtual row, continuing until the last imaging unit on the spiral is reached; the equal-sized groups of selected imaging units are taken as the virtual rows, and the number of all virtual rows is taken as the virtual columns, forming a virtual row-column matrix.
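  • A sketch of S2.5 and S2.6 (illustrative only; function names and the virtual placeholder are assumptions): in S2.5 each meridian simply becomes a row, and in S2.6 the units along the spiral are cut into equal groups, with the last group padded if it falls short.

```python
def meridians_to_matrix(meridians):
    """S2.5 sketch: every meridian carries the same number of imaging units and
    is taken as one virtual row; the count of meridians gives the other dimension."""
    if len({len(m) for m in meridians}) != 1:
        raise ValueError("S2.5 assumes an equal unit count on every meridian")
    return [list(m) for m in meridians]

def spiral_to_matrix(spiral_units, row_length, virtual=None):
    """S2.6 sketch: the units along the spiral, taken from its starting point, are
    cut into equal groups of row_length; a short final group is supplemented
    with virtual imaging units."""
    rows = [list(spiral_units[i:i + row_length])
            for i in range(0, len(spiral_units), row_length)]
    rows[-1] += [virtual] * (row_length - len(rows[-1]))
    return rows
```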
  • S3.1 outputs an image data set file in the form of the original matrix.
  • S3.2 performs spherical restoration and stitching on the pixel coordinates or pixels in the virtual row-column matrix and outputs a spherical image file or a planar image file.
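  • S3.2 leaves the choice between a spherical and a planar output open. One common way to flatten spherical pixel coordinates is an equirectangular mapping; the sketch below is an illustration under that assumption, not the patent's prescribed algorithm, and bins pixels given as (longitude, latitude, value) triples into a planar raster.

```python
import numpy as np

def spherical_to_planar(pixels, width, height):
    """Illustrative equirectangular flattening of spherical pixel data.
    'pixels' is an iterable of (lon_deg, lat_deg, value) with lon in [-180, 180]
    and lat in [-90, 90]; the result is a height x width planar image array."""
    image = np.zeros((height, width), dtype=float)
    for lon, lat, value in pixels:
        col = int((lon + 180.0) / 360.0 * (width - 1))
        row = int((90.0 - lat) / 180.0 * (height - 1))   # north pole maps to row 0
        image[row, col] = value
    return image

# Three spherical pixels placed on a 180 x 360 planar grid
planar = spherical_to_planar([(0.0, 0.0, 1.0), (-180.0, 90.0, 2.0), (90.0, -45.0, 3.0)],
                             width=360, height=180)
```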
  • The present invention also provides an image sensor, including the above-mentioned imaging device, in which the imaging component of the imaging device is embodied as the photosensitive element of the image sensor, the imaging surface as the photosensitive surface, and the imaging units on the imaging surface as photosensitive units. The image sensor also includes a matrix generator connected to the photosensitive element, a data reader connected to the matrix generator, and an image processor connected to the data reader. When the image sensor works, following the imaging method above, the matrix generator processes the non-matrix-arranged photosensitive units on the photosensitive surface through its built-in logic circuit to generate a matrix-arranged virtual matrix; the photosensitive data acquired from outside by the photosensitive units addressed through the virtual matrix is read by the data reader and sent to the image processor, and the image processor processes the input data and outputs a corresponding image file.
  • The image sensor is packaged independently or together with the auxiliary lens. When packaged together with the auxiliary lens, one side of the auxiliary lens faces the light-receiving port and the other side faces away from the light-receiving port, toward the photosensitive surface of the photosensitive element, and the focal point of the auxiliary lens coincides with the center of the spherical photosensitive surface.
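  • The sensor description amounts to a three-stage pipeline: matrix generator, then data reader, then image processor. The sketch below shows that data flow schematically; the class and method names and the stand-in sampling callback are assumptions for illustration, not the patent's implementation.

```python
class MatrixGenerator:
    """Maps non-matrix-arranged photosensitive units to a virtual row-column matrix."""
    def __init__(self, unit_layout_lines):
        self.lines = unit_layout_lines                 # e.g. units grouped by latitude line

    def virtual_matrix(self, virtual=None):
        reference = max(len(line) for line in self.lines)
        return [list(line) + [virtual] * (reference - len(line)) for line in self.lines]

class DataReader:
    """Reads the photosensitive data addressed through the virtual matrix."""
    def read(self, matrix, sample):
        return [[None if unit is None else sample(unit) for unit in row] for row in matrix]

class ImageProcessor:
    """Turns the read data into an output image file (here just a small dict)."""
    def process(self, data):
        return {"format": "spherical-raw", "rows": len(data), "data": data}

# Assumed usage: units on two latitude lines, sampled by a stand-in function
generator = MatrixGenerator([[0, 1, 2], [3, 4]])
reader, processor = DataReader(), ImageProcessor()
image_file = processor.process(reader.read(generator.virtual_matrix(),
                                           sample=lambda unit_id: unit_id * 10))
```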
  • The present invention also provides a camera. The camera includes a body and a lens, and a shutter, a built-in auxiliary lens, an image sensor as described above, and an image data processing and output module are arranged from front to back in the dark chamber inside the body.
  • the front end of the lens is provided with a viewfinder lens, and the inside of the lens barrel of the lens is provided with a lens combination;
  • The internal components of the body and the lens components are combined according to the layout of any of the imaging devices described above; in the camera, the imaging component of the imaging device is embodied as the image sensor, the imaging surface as the photosensitive surface of the image sensor, and the image source as the viewfinder lens, and any of the above-mentioned imaging methods is implemented.
  • The built-in auxiliary lens is a spherical lens.
  • The focal point of the built-in auxiliary lens coincides with the spherical center point of the spherical photosensitive surface.
  • The built-in auxiliary lens cooperates with the lens combination so that all light is irradiated perpendicularly onto the photosensitive surface of the image sensor.
  • The image data processing and output module processes the image data acquired from the image sensor into file outputs of various formats, or outputs a synthesized spherical image, or synthesizes a spherical image and then converts it into a planar image for output.
  • the present invention also provides a method for shooting and making panoramic images.
  • A camera with a convex spherical viewfinder lens takes a certain point in its space as the center point, with the viewfinder lens facing away from the center point, and photographs the scenes covering all directions around the center point to obtain images of several spherical frames; the images of these spherical frames are stitched into a panoramic image file of a complete spherical frame and saved or output. The panoramic image covers the wide-area scene in all directions around the central point and is referred to here as a wide-area-source panoramic image; it is displayed on a display screen whose display surface is a concave spherical surface.
  • A camera with a concave spherical viewfinder lens takes a certain point in its space as the center point, with the viewfinder lens facing the center point, and photographs the scenes in all directions between the camera and that space point to obtain images of several spherical frames; the images of these spherical frames are stitched into a panoramic image file of a complete spherical frame and saved or output. The panoramic image covers the local range in all directions between the camera and the space point and is referred to here as a local-source panoramic image; it is displayed on a display screen whose display surface is a convex spherical surface.
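  • Because every frame from such a camera already carries spherical pixel coordinates, stitching the frames of a panoramic rig reduces to writing each frame's pixels into one shared spherical address space. The sketch below is an illustration under assumed per-frame (longitude, latitude) keys, not the patent's stitching algorithm; overlaps are simply overwritten rather than blended.

```python
def stitch_spherical_frames(frames):
    """Merge spherical frames into one panoramic map keyed by (lon_deg, lat_deg).
    Each frame is a dict {(lon_deg, lat_deg): value}; pixels from later frames
    overwrite earlier ones at the same spherical coordinate in this sketch."""
    panorama = {}
    for frame in frames:
        panorama.update(frame)
    return panorama

# Two spherical frames covering different longitude ranges around one centre point
front = {(0.0, 0.0): "a", (10.0, 0.0): "b"}
back = {(170.0, 0.0): "c", (180.0, 0.0): "d"}
full_sphere = stitch_spherical_frames([front, back])
```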
  • The present invention also provides a display screen, including the above-mentioned imaging device, in which the imaging component of the imaging device is embodied as the image display component of the display screen, the imaging surface as the image display surface, and the imaging units on the imaging surface as display pixels; alternatively, the display screen displays an image file of a spherical picture captured and output by a camera or shooting device using the above-mentioned imaging method.
  • The display screen also includes an image processing module and a scanning module, and the scanning module is connected to the display pixels on the display surface as well as to the image processing module.
  • The display-pixel matrixer in the scanning module contains one or more of the matrix-generation logic circuits or program instructions for display pixels described above; when the display screen is working, the display-pixel matrixer matrixes, in advance, the display pixels that are not arranged in a matrix on the display surface by means of this logic circuit or these program instructions.
  • The image-pixel matrixer in the image processing module contains one or more of the matrix-generation logic circuits or program instructions for image pixels described above, and its matrix type is the same as the matrix type of the display-pixel matrixer in the scanning module. When the display screen displays images, the image-judgment program in the image processing module sends a spherical image data set file whose pixels are already matrixed directly to the matcher in the scanning module, while a spherical image file whose image pixels are not matrixed is first matrixed by the image-pixel matrixer.
  • The matcher matches the image-pixel matrix with the display-pixel matrix type in the scanning module; after the match succeeds, the scanning module scans and writes the image-pixel data into the corresponding display pixels on the display surface according to the matrix correspondence, realizing the image display.
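  • A schematic sketch of the matcher and scan step described above (the function names and the write callback are assumptions for illustration): the image-pixel matrix is accepted only when its shape matches the display-pixel matrix, after which each image value is written to the display pixel at the same row-column position.

```python
def matrices_match(image_matrix, display_matrix):
    """Match test used by this sketch: same number of rows and same row lengths."""
    return [len(r) for r in image_matrix] == [len(r) for r in display_matrix]

def scan_write(image_matrix, display_matrix, write_pixel):
    """Scan the matched matrices and write every image value into the display
    pixel occupying the same virtual row/column position."""
    if not matrices_match(image_matrix, display_matrix):
        raise ValueError("image-pixel matrix type does not match the display-pixel matrix")
    for image_row, display_row in zip(image_matrix, display_matrix):
        for value, pixel in zip(image_row, display_row):
            if pixel is not None:                    # skip supplementary virtual pixels
                write_pixel(pixel, value)

# Assumed usage: display pixels identified by ids, written through a stand-in callback
frame_buffer = {}
scan_write([[10, 20], [30, 40]], [["p00", "p01"], ["p10", "p11"]],
           write_pixel=lambda pixel_id, value: frame_buffer.update({pixel_id: value}))
```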
  • The present invention also provides a fan display screen, including any one of the above-mentioned imaging devices, in which the imaging component is embodied as the fan of the fan display screen and the imaging surface as the side of the fan blades facing the audience.
  • The imaging units on the imaging surface are embodied as the lamp beads on the outer surface of the fan blades. The fan display screen also includes a control mechanism comprising a control board and a driving motor, the driving end of the driving motor being connected to the fan blades.
  • Each fan blade is a bar with an arc structure, the arc being a spherical structural member cut from a conventional spherical surface or a Fresnel spherical surface by two parallel planes a relatively small distance apart.
  • The control board executes the above-mentioned imaging method to realize display and imaging of the picture; alternatively, the display screen displays an image file of a spherical picture captured and output by a camera or shooting device using the above-mentioned imaging method.
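  • The fan display works as a persistence-of-vision device: at every rotation angle of a blade, the control board lights the blade's column of lamp beads with the image column that belongs to that angle. A minimal sketch of that lookup follows; the one-column-per-degree layout and the function name are assumptions for illustration, not the patent's control logic.

```python
def column_for_angle(image_columns, blade_angle_deg):
    """Pick the image column to light at the blade's current rotation angle.
    'image_columns' holds one column of pixel values per angular step of a full
    revolution; the blade angle is wrapped into [0, 360) before the lookup."""
    steps = len(image_columns)
    index = int(round(blade_angle_deg % 360.0 / 360.0 * steps)) % steps
    return image_columns[index]

# Assumed usage: 360 columns (one per degree), each driving 8 lamp beads on the blade
columns = [[angle] * 8 for angle in range(360)]
print(column_for_angle(columns, blade_angle_deg=725.4))   # column for roughly 5 degrees
```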
  • The present invention also provides a projection device, including any of the above-mentioned imaging devices, in which the imaging component is embodied as the projected display screen of the projection device and the imaging surface as the image display surface of the projected display screen.
  • The imaging units are the reflective particles coated on, or the projection-receiving units arranged on, the image display surface of the projected display screen, and the image source is embodied as a projection host.
  • The projection host is a point-like image-signal-particle projector, an arc-shaped image-signal-particle projector, or a spherical image-signal-particle projector.
  • The image-signal-particle emitters of the point-like and arc-shaped image-signal-particle projectors are driven by the driving device connected to them and, following the commands given by the action commander, project the image-signal particles onto the spherical projection screen, and/or the above-mentioned imaging method is executed to realize the picture display.
  • The present invention also provides a glasses-type panoramic display device, which includes a glasses frame, a display screen, earphones, a file processing module and a control handle. The image display surface of the display screen is a concave or convex spherical structure, the spherical structure being a conventional spherical structure, a Fresnel spherical structure or a combination of Fresnel spherical structures; the display screen is arranged in the frame of the glasses and sits directly in front of the viewer's eyes when worn.
  • There are one or two display screens. When there are two, the display screens respectively display the same area of the same frame of a panoramic image file captured and synthesized by a single-view camera, or pictures, taken from different angles, of the same area of the same frame of a panoramic image file captured and synthesized by a dual-view camera.
  • The present invention also provides a cinema, including a projection room, a viewing platform, a sound system and a display screen. The display screen is any one of the above-mentioned display screens, and the projection room is provided with a concave spherical display screen whose display surface is a concave spherical surface, or a convex spherical display screen whose display surface is a convex spherical surface, or both a concave spherical display screen and a convex spherical display screen.
  • When the cinema is provided with a concave spherical display screen in the projection room, the display screen is set on one side of the projection room and the viewing platform is a platform or slope located on the other side of the projection room. When the display surface of the display screen is a complete spherical surface, or is close to a complete spherical surface except for a small spherical vacancy removed where it meets the projection room floor, the display screen is set in the upper middle space of the projection room and fixed to the inner wall of the projection room through a fixing frame, and the viewing platform is a platform or slope located in the lower middle of the display surface and completely enclosed by the display surface.
  • When the cinema is provided with a convex spherical display screen in the projection room and the display surface of the display screen is a relatively small part of a complete sphere, the display screen is set on one side of the projection room and the viewing platform is a platform or slope on the other side of the projection room. When the display surface of the display screen is a complete spherical surface, or is close to a complete spherical surface except for a small spherical vacancy removed where it meets the projection room floor, the display screen is set on the floor in the middle-lower part of the projection room, and the viewing platform is a platform or slope surrounding the convex spherical display screen.
  • When the projection room is provided with two spherical display screens, one is a concave spherical display screen and the other is a convex spherical display screen. The display surfaces of both are complete spherical surfaces, or are close to complete spherical surfaces except for a small spherical vacancy where they meet the projection room floor, and the diameter of the display surface of the concave spherical display screen is greater than that of the convex spherical display screen.
  • The concave spherical display screen is set in the upper middle space of the projection room and fixed to the wall of the projection room through a fixing frame; the convex spherical display screen is set in the middle-lower part of the projection room, close to or on the floor, located in the lower middle of the interior of the concave spherical display screen and completely enclosed by its display surface.
  • The viewing platform is located in the zone between the horizontal plane passing through the center of the spherical display surface of the convex spherical display screen and the horizontal plane passing through the center of the spherical display surface of the concave spherical display screen, or in a zone slightly taller or slightly shorter than that height, on or close to the annular belt of the display surface of the concave spherical display screen, and the seats of the viewing platform face the display surface of the convex spherical display screen.
  • The concave spherical display screen and the convex spherical display screen respectively display the wide-area-source panoramic image file and the local-source panoramic image file captured and output in the same field.
  • The beneficial effects of the present invention are as follows. By using a spherical projection surface, straight lines in the X, Y and Z dimensions of the Cartesian coordinate system can be projected completely, continuously and accurately onto the spherical surface, so that a complete and accurate image of the scene is obtained; the obtained image is displayed by a display device of the same structure with the corresponding inverse spherical display surface, presenting a highly realistic three-dimensional picture fully consistent with the scene, and thereby overcoming the historical defect of imaging methods based on a plane projection surface, which struggle to obtain and display high-fidelity three-dimensional scene pictures.
  • VR display no longer suffers from the defects of existing VR devices such as small depth of field, screen-door effect, dizziness, narrow field of view and difficulty focusing.
  • Compared with an imaging method based on a flat projection surface, in the imaging method based on a spherical projection surface the light traveling from the objective lens to the projection surface is rarely lost along the way, and the light projected onto every part of the projection surface is perpendicular to it. Since light perpendicular to the projection surface carries the largest light-quantum value, the resulting image definition is the highest; and because all rays are perpendicular, all parts of the image are equally sharp. The overall clarity of an image obtained with a spherical projection surface is therefore much higher than that obtained with a planar projection surface.
  • Because the incident light is all perpendicular to the spherical projection surface, the image light reflected by the external scene at a point of given longitude and latitude on the objective lens, after reaching the projection surface, appears on its surface at the coordinate position of the same longitude and latitude, or at the coordinate point composed of the same longitude value and the latitude value of equal absolute value but negative sign.
  • Every position on the projection surface corresponds exactly to the corresponding position on the objective lens, so the shape of the formed picture is also exactly the same.
  • The output image is displayed by a display device whose spherical display surface has the corresponding inverse structure, and the picture is exactly the same as the original appearance of the scene without any deformation, thereby fundamentally eliminating the long-standing, hard-to-eradicate deformation defect of existing planar-projection-surface imaging methods.
  • The image obtained by the imaging method based on the spherical projection surface is itself a spherical image, and the pixel coordinates of the image are coordinate values in spherical-coordinate form.
  • When spherical images obtained by this method are stitched into a spherical VR panoramic image, there is none of the misalignment, picture damage or chromatic aberration found in planar image stitching; all parts of the picture can be stitched accurately, and the quality of the stitched VR panoramic image is much higher than the stitching result obtained from images based on a planar projection surface.
  • The imaging method based on the spherical projection surface solves many problems that are difficult for the flat-projection-surface imaging method and is well suited to the use, docking and fusion of high-fidelity, high-definition, highly stereoscopic realistic images, virtual-reality images and mixed-reality images. The application and extensible application of this imaging method are of great significance to technological upgrading across the entire imaging field.
  • Fig. 1 is a schematic diagram of the imaging device of Embodiment 1; in the figure, A is the imaging device or model with a concave spherical imaging surface and B is the imaging device or model with a convex spherical imaging surface;
  • Fig. 2 is a schematic diagram of the imaging device of Embodiment 2;
  • A is a convex lens imaging device;
  • B is a concave lens imaging device;
  • Fig. 3 is a schematic diagram of the imaging device of Embodiment 3;
  • A is an imaging device with a concave spherical imaging surface;
  • B is an imaging device with a convex spherical imaging surface;
  • Fig. 4 is a schematic diagram of the imaging device of Embodiment 4;
  • Fig. 5 is a schematic diagram of the imaging device of Embodiment 5;
  • Fig. 6 is a schematic diagram of an imaging device using a convex spherical surface as a transparent objective lens to capture a wide-area scene;
  • Fig. 7 is a schematic diagram of an imaging device using a concave spherical surface as a transparent objective lens to capture a scene of local range;
  • Fig. 8 is a schematic diagram of a conventional positive spherical structure;
  • Fig. 9 is a schematic diagram of a conventional ellipsoidal structure;
  • Fig. 10 is a schematic diagram of a conventional paraboloidal structure;
  • Fig. 11 is a schematic diagram of a Fresnel positive concave spherical structure
  • Fig. 12 is a schematic diagram of a Fresnel positive convex spherical structure
  • Fig. 13 is a schematic cross-sectional view of a Fresnel ellipsoid structure
  • Fig. 14 is a schematic cross-sectional view of a Fresnel paraboloid structure
  • Fig. 15 is a schematic cross-sectional view of a structure composed of a plurality of Fresnel concave spherical surfaces
  • Fig. 16 is a schematic cross-sectional view of a structure composed of a plurality of Fresnel convex spheres
  • Fig. 17 is a cross-sectional schematic diagram of a concave Fresnel concave spherical structure
  • Fig. 18 is a schematic cross-sectional view of an inner convex Fresnel convex spherical structure
  • Fig. 19 is a schematic diagram of virtual imaging of the imaging method
  • Fig. 20 is a schematic diagram of the layout of imaging units along horizontal lines;
  • Fig. 21 is a schematic diagram of the layout of imaging units along latitude lines;
  • Fig. 22 is a schematic diagram of the layout of imaging units along a helix;
  • Fig. 23 is a schematic diagram of the layout of imaging units along meridians;
  • Fig. 24 is a schematic diagram of scanning by the method in which virtual imaging units supplement the actual imaging units of a latitude-line layout;
  • Fig. 25 is a schematic diagram of scanning by the method in which virtual imaging units supplement the actual imaging units of a horizontal-line layout;
  • Fig. 26 is a schematic diagram of block method scanning
  • Fig. 27 is a schematic diagram of sectoral block method scanning
  • Fig. 28 is a schematic diagram of virtual meridian cutting method scanning
  • Fig. 29 is a schematic diagram of scanning by the line scanning method. Reference numerals in Figs. 1-29: 1. Imaging component; 2. Imaging surface; 3. Image source; 4. Optical lens combination; 5. Auxiliary lens;
  • FIG. 30 is an overall structural diagram of a spherical image sensor in this embodiment.
  • Fig. 31 is a schematic diagram of spherical image sensors with four structures independently packaged
  • Figure 32 is a schematic diagram of spherical image sensors of four structures installed together with auxiliary lenses; see Figures 31-33: 18-1. Photosensitive element; 18-2. Photosensitive surface; 18-3. Matrix generator; 18-4. Data virtual matrix; 18-5. Data reader; 18-6. Image processor; 18-7. Image sensor packaging shell;
  • Fig. 33 is a schematic structural diagram of the camera of the invention of this embodiment.
  • Fig. 34 is a schematic diagram of the camera of this embodiment adopting different shapes and structures
  • Fig. 35 is a schematic structural diagram of two cameras combined into a 3D camera
  • Fig. 36 is a schematic structural diagram of two cameras combined into a panoramic camera
  • Fig. 37 is a schematic structural diagram of four cameras combined into a 3D panoramic camera
  • Fig. 38 is a schematic diagram of a panoramic shooting device and a panoramic shooting method thereof for a camera with a single convex spherical viewfinder lens;
  • 39 is a schematic diagram of a panoramic shooting device for a camera with multiple convex spherical viewfinder lenses
  • Fig. 40 is a schematic diagram of a panoramic shooting device and a panoramic shooting method thereof for a camera with a single concave spherical viewfinder lens;
  • Fig. 41 is a schematic diagram of a panoramic shooting device for a camera with multiple concave spherical viewfinder lenses
  • FIG. 42 is a schematic diagram of a method for shooting a panoramic image of a local source by a panoramic shooting device
  • Fig. 43 is a schematic diagram of a panorama shooting device shooting a wide-area source panorama image and a local-area source panorama image simultaneously in the same scene;
  • Fig. 44 is a schematic structural view of the spherical display screen of Embodiment 21;
  • Fig. 45 is a schematic diagram of the working principle of the spherical display screen of the present invention.
  • 46-49 are schematic diagrams showing the shape of the display surface of the display screen of the present invention.
  • Fig. 50 is a schematic diagram of the display screen of the fan machine in embodiment 22;
  • Figure 51 is a working principle diagram of the display screen of the fan machine
  • Figure 52 is a schematic diagram of various shapes of the fan blades of the fan display screen
  • Figures 53-56 are schematic diagrams of the distribution of lamp beads on the fan blades of the fan display screen
  • Figure 57 is a schematic diagram of the projection device of Embodiment 23;
  • Fig. 58 is a schematic diagram of a static projection source of the projection host
  • Figures 59 and 60 are schematic diagrams of dynamic projection sources of the projection host
  • Fig. 61 is a schematic diagram of VR glasses in embodiment 24; in the figure: 24-1, glasses frame; 24-2, spherical display screen; 24-3, earphone; 24-4, file processing module;
  • Fig. 62 is a schematic diagram of a panoramic theater in which the display surface of the concave spherical display screen is a relatively small part of a complete spherical surface;
  • Fig. 63 is a schematic diagram of a panoramic theater in which the display surface of the concave spherical display screen is nearly a complete sphere;
  • Fig. 64 is a schematic diagram of a panoramic cinema in which the display surface of a convex spherical display screen is a relatively small part of a complete sphere;
  • Fig. 65 is a schematic diagram of a panoramic theater in which the display surface of the convex spherical display screen is close to a complete sphere;
  • Fig. 66 is a schematic diagram of a panoramic cinema having both a concave spherical display screen and a convex spherical display screen, and the display surfaces of the two display screens are nearly complete spheres.
  • Fig. 67 is a schematic diagram of a panoramic cinema having both a concave spherical display screen and a convex spherical display screen, and the display surfaces of both display screens are complete spherical surfaces.
  • Example 1: As shown in Fig. 1, an imaging device comprises an imaging component 1 whose imaging surface 2 is a spherical structure; the included angle between every part of the spherical imaging surface 2 and the light emitted by the image source, at its intersection with the imaging surface 2, is uniformly 90°, and several imaging units are distributed on the imaging surface 2.
  • When the imaging component 1 is the image sensor of a camera or a photographic film, the imaging surface 2 is the photosensitive surface of the image sensor or of the film, the imaging units are the photosensitive units on the photosensitive surface of the image sensor or the photosensitive particles coated on the photosensitive surface of the film, and the light emitted by the image source is the direct light of the external scene or the indirect light passing through the camera lens, which is perpendicular to the imaging surface 2.
  • When the imaging device is a self-luminous spherical image restoration and display device, the imaging component 1 is a display screen, the imaging surface 2 is the display surface of the display screen, the imaging units are the display pixels on the display surface, and the light emitted by the image source is the light emitted by the display pixels on the display surface, which is perpendicular to the imaging surface 2.
  • When the imaging device is a projection-type spherical image restoration and display device, the imaging component 1 is a projection screen, the imaging surface 2 is the image display surface of the projection screen, the imaging units are the reflective particles coated on the image display surface of the projection screen, and the light emitted by the image source is the light projected by the projection host onto the projection screen, which is incident perpendicular to the imaging surface 2.
  • When the imaging component 1 is the page or frame surface of an image file, the imaging surface 2 is the picture of that page or frame, the imaging units are the image pixels of the picture, and the light emitted by the image source is the outgoing light perpendicular to the image frame.
  • The imaging device of this embodiment is also the most basic imaging model of the spherical imaging method.
  • Example 2: As shown in Fig. 2, an imaging device comprises an imaging component 1 whose imaging surface 2 is a spherical structure and an image source 3 of spherical structure. The spherical center point of the image surface 2 formed by the imaging component 1 coincides with the focal point of the image source 3, the optical axis of the image source 3 coincides with the central symmetry axis of the imaging surface 2, and the spherical structure types of the imaging surface 2 and the image source 3 are opposite, that is: when the imaging surface 2 is a concave spherical structure, the image source 3 is a convex spherical surface; when the imaging surface 2 is a convex spherical structure, the image source 3 is a concave spherical surface.
  • Example 3: As shown in Fig. 3, an imaging device comprises an imaging component 1 whose imaging surface 2 is a spherical structure and an image source 3 of spherical structure, the optical axis of the image source 3 coinciding with the central symmetry axis of the imaging surface 2. It also includes an optical lens combination 4 and an auxiliary lens 5 located on the light path; the optical lens combination 4 turns the incident light into parallel light directed at the auxiliary lens 5, and the focal point of the auxiliary lens 5 coincides with the sphere center of the imaging surface 2 while their symmetry axes coincide. In this way, all rays finally projected onto all parts of the spherical surface of the imaging surface 2 are perpendicular to the imaging surface 2.
  • This embodiment is an improvement of Embodiment 2.
  • the imaging device of this layout allows the imaging component 1 to be placed at any position on the optical axis, and is no longer limited to the position where the center of the sphere of the imaging surface 2 coincides with the focal point of the image source 3 .
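  • The geometry of Example 3 can be checked numerically: an ideal thin lens turns a ray arriving parallel to the axis into a ray directed at its focal point, and if that focal point coincides with the sphere center of imaging surface 2, the refracted ray travels along a radius and therefore meets the spherical surface at 90°. The sketch below works under ideal thin-lens assumptions and is an illustration, not the patent's optical design.

```python
import numpy as np

def incidence_on_sphere_deg(ray_height, focal_point, sphere_center, radius):
    """Ideal thin lens in the plane z = 0: a ray arriving parallel to the axis at
    height 'ray_height' leaves the lens toward 'focal_point'. Returns the angle
    (degrees) between that outgoing ray and the spherical imaging surface it hits."""
    start = np.array([ray_height, 0.0, 0.0])
    focal_point = np.asarray(focal_point, float)
    center = np.asarray(sphere_center, float)
    d = focal_point - start
    d /= np.linalg.norm(d)                            # outgoing ray direction
    oc = start - center
    # Nearer intersection of the ray with the sphere: |start + t*d - center| = radius
    b, c = 2.0 * np.dot(d, oc), np.dot(oc, oc) - radius ** 2
    t = (-b - np.sqrt(b * b - 4.0 * c)) / 2.0
    hit = start + t * d
    normal = (hit - center) / radius                  # radial surface normal
    angle_to_normal = np.degrees(np.arccos(np.clip(abs(np.dot(d, normal)), 0.0, 1.0)))
    return 90.0 - angle_to_normal

# Focal point placed at the sphere centre (0, 0, 5): the parallel ray hits at 90 degrees
print(incidence_on_sphere_deg(0.7, focal_point=[0.0, 0.0, 5.0],
                              sphere_center=[0.0, 0.0, 5.0], radius=2.0))  # ~90.0
```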
  • Example 4: As shown in Fig. 4, the imaging device of this structure places a mirror on the light path on the basis of Embodiment 3, so that the general direction of the light beam is changed; in this way not only the distance between the imaging surface 2 of the imaging component 1 and the image source 3, but also their relative azimuth, can be adjusted flexibly.
  • Example 5: As shown in Fig. 5, in the imaging device of this structure the image source 3 or the optical lens combination 4 emits non-parallel light onto the auxiliary lens 5, and the focal point of the auxiliary lens 5 does not necessarily coincide with the sphere center of the imaging surface 2.
  • The auxiliary lens 5 is set at a suitable position on the common axis, so that the light is further adjusted by the auxiliary lens 5 and the final incident rays, when projected onto the imaging surface 2, are all perpendicular to the corresponding parts of the spherical surface of the imaging surface 2.
  • The imaging device of this structure can reduce the loss of light quanta during the transmission of light and further improve the quality of the acquired image.
  • Embodiments 1-5 are imaging devices built on a spherically formed image surface, and the image of the original scene they form is spherical; the imaging devices are therefore collectively referred to as spherical imaging devices, and the imaging model based on these devices is referred to as the spherical imaging model for short.
  • The spherically formed image surface 2, the image source 3 and the auxiliary lens 5 are each a structure with one flat surface and one spherical surface, or a structure in which both surfaces are spherical; the spherical surface is a concave or convex spherical structure.
  • the light emitted or received by the imaging unit on the imaging surface 2 is perpendicular to the spherical surface of the imaging surface 2 where the imaging unit is located.
  • Embodiments 2-5 are improvements of Embodiment 1 by different methods. There are still many other ways to improve the imaging method based on spherical image formation; different imaging models can be produced under different improvements to meet the needs of various occasions and realize imaging effects for various purposes.
  • The spherical imaging devices of Embodiments 2-5 are not suitable as imaging models for self-luminous spherical display screens, for the spherical image sensor itself, for simple cameras in which the spherical image sensor directly receives the light of the external scene, or for the pictures of spherical image files themselves. When they are applied to a camera with a spherical image sensor, the image source 3 is the viewfinder lens; when they are applied to a projection display device with a spherical projection surface, the image source 3 is a projection host that emits image beams or particle beams.
  • Embodiment 2-5 can be used as a common spherical imaging model of a camera in which a spherical image sensor is combined with a lens.
  • the light rays projected onto all parts of the spherical photosensitive surface 2 of the spherical image sensor of the camera are perpendicular to the spherical surface.
  • The spherical image obtained by the spherical imaging device is restored and displayed on a screen with the corresponding spherical display surface; while retaining the original high definition, all parts of the whole picture are free from deformation and blur, with high fidelity and a strong three-dimensional effect.
  • the light rays projected by the projection host onto all parts of the projection screen are perpendicular to the surface of the spherical projection screen, and the focusing degree of light quanta obtained by each part of the surface of the projection screen reaches the maximum value. While making the picture quality of the whole picture reach the best state, it also makes all parts without deformation and blurring, with high fidelity and strong three-dimensional effect.
  • The front viewfinder lens of a camera adopting the convex-spherical-structure image source 3 imaging model is a convex lens, which is suitable for image shooting of wide-area-source scenery, and the field of view can be calculated as a positive field-of-view angle value;
  • the audience sees a deformed mirror image when viewing from the front of the display surface; viewed from the back side of the display surface, what is seen is a clear, three-dimensional original image without deformation, but the display screen must then be a transparent screen that can be viewed from both sides;
  • when the display devices with concave spherical structures shown in Figs. 57, 62 and 63 perform the restored display, the viewer sees a clear, three-dimensional original image without deformation when viewing from the front of the display surface.
  • The front-end viewfinder lens of a camera adopting the concave-spherical-structure image source 3 imaging model is a concave lens, which is suitable for image shooting of local-source scenery within a limited range, such as stage scenes or objects; when shooting, the stage or object is placed at the focal point of the concave spherical front-end transparent viewfinder lens, and the captured image is then the clearest. The angle of view can be calculated as a corresponding negative field-of-view angle value. When the display devices with concave spherical structures shown in Figs. 47, 49, 50, 57, 62 and 63 perform the restored display, the audience watching from the front of the display surface sees a distorted mirror image, while what is seen from the other side is a clear, three-dimensional original image without deformation, but the display screen must then be a transparent screen that can be viewed from both sides; when the display surface is a display with a convex spherical structure as shown in Figs. 46
  • The spherical image files captured and output by a spherical-image-sensor camera whose viewfinder lens is a convex lens are generally displayed on a display device whose display surface is a concave spherical surface, and the spherical image files captured and output by a spherical-image-sensor camera whose viewfinder lens is a concave lens are generally displayed on a display device whose display surface is a convex spherical surface.
  • The spherical structure of the imaging surface 2, the image source 3 and the auxiliary lens 5 includes a conventional spherical structure, a Fresnel spherical structure, or a combined structure of multiple Fresnel spherical structures.
  • The conventional spherical structure includes a conventional positive spherical structure, a conventional ellipsoidal structure and a conventional paraboloidal structure.
  • The Fresnel spherical structure includes a Fresnel positive spherical structure, a Fresnel ellipsoidal structure and a Fresnel paraboloidal structure.
  • the imaging units are only distributed on the arc surface of the Fresnel spherical structural surface, and no imaging units are distributed on the vertical section of the Fresnel structural surface.
  • the Fresnel spherical structure includes a conventional Fresnel spherical structure, an inner concave Fresnel concave spherical structure, and an outer convex Fresnel convex spherical structure.
  • The conventional Fresnel spherical structure has a flat surface on one side and a Fresnel curved surface on the other side.
  • The concave Fresnel concave spherical structure is a structure in which a conventional Fresnel concave spherical surface is recessed inward from the center of its Fresnel-curved side.
  • The convex Fresnel convex spherical structure is a structure in which a conventional Fresnel convex spherical surface is recessed inward from the center of its flat side.
  • Different spherical structures can be used to meet different needs and purposes. For example, if the imaging surface 2 or image source 3 adopts a Fresnel spherical structure, thin and light imaging equipment can be produced based on this imaging method; if the imaging surface 2 or image source 3 adopts an ellipsoidal spherical structure, the imaging equipment produced based on this imaging method can obtain, or restore and display, more distant scene images as well as wider lateral scenes; if the imaging surface 2 or image source 3 adopts a paraboloidal structure, the imaging equipment based on this imaging method not only can obtain or restore more distant scene images, but also obtains larger photon values for wide-area lateral scenes, so that acquiring or
  • the imaging method of the spherical imaging devices of Embodiments 1-5 may be a physical imaging method or a virtual imaging method; as shown in Figure 19, when the imaging method is a virtual imaging method, the imaging part 1 is a virtual imaging part, the imaging surface 2 is a virtual imaging surface, the image source 3 is a virtual image source, the optical lens combination 4 is a virtual optical lens combination, and the auxiliary lens 5 is a virtual auxiliary lens; virtual imaging is the act of acquiring or displaying a spherical image with this imaging method in a virtual-world scene; that is, an image design and production tool provides a function for acquiring or displaying images in a virtual scene based on this imaging method.
  • the imaging units on the imaging surface 2 are arranged according to certain rules, which not only facilitates the reading or writing of the values of the imaging units and improves the efficiency of reading and writing, but also allows flexible selection and implementation of different imaging unit layouts for different application objects.
  • Example 6 As shown in FIG. 20, a layout method of imaging units: the imaging units are laid out in horizontal lines on the imaging surface 2, starting from one edge of the spherical imaging surface 2; the spacing between adjacent imaging units on the same horizontal line is equal, and the spacing between adjacent horizontal lines is equal and equal to the spacing between adjacent imaging units on the same horizontal line, where spacing refers to the distance between two points or two lines measured along the spherical surface.
  • Example 7 As shown in Figure 21, a layout method of imaging units: the imaging units on the imaging surface 2 are laid out along latitude lines with the center of the spherical imaging surface 2 as the pole; the spacing between adjacent imaging units on the same latitude line is equal, and the spacing between adjacent latitude lines is equal and equal to the spacing between adjacent imaging units on the same latitude line, where spacing refers to the distance between two points or two lines measured along the spherical surface (a coordinate sketch of this layout follows below).
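The latitude layout of Example 7 can be pictured as concentric circles about the pole of the spherical imaging surface, with equal arc spacing both along each circle and between circles. The short Python sketch below generates imaging-unit coordinates in this way; the function name, the hemispherical cap limit and the chosen spacing are illustrative assumptions, not part of the original disclosure.

```python
import numpy as np

def latitude_layout(radius, spacing, max_polar_angle=np.pi / 2):
    """Place imaging-unit centres on a spherical cap along latitude circles:
    adjacent circles are `spacing` apart along the sphere surface, and units
    on the same circle are also `spacing` apart (arc length)."""
    units = []
    n_rings = int(radius * max_polar_angle / spacing)        # number of latitude circles
    for i in range(n_rings + 1):
        theta = i * spacing / radius                         # polar angle of this circle
        circumference = 2 * np.pi * radius * np.sin(theta)   # length of the latitude circle
        n_units = max(1, int(circumference / spacing))       # units that fit on it
        for k in range(n_units):
            phi = 2 * np.pi * k / n_units                    # azimuth of the unit
            units.append((radius * np.sin(theta) * np.cos(phi),
                          radius * np.sin(theta) * np.sin(phi),
                          radius * np.cos(theta)))
    return np.array(units)

print(latitude_layout(radius=10.0, spacing=0.5).shape)       # (number_of_units, 3)
```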
  • Example 8 As shown in FIG. 22, a layout method of imaging units: one or more spiral lines are marked, starting from the center point of the imaging surface 2 (or a point close to it) and ending at a point on the edge of the imaging surface 2; the center points of the imaging units are placed on the spiral line(s), and the spacing between adjacent imaging units on the same spiral is equal and equal to the spacing between adjacent spirals; spacing here refers to the distance between two points or two lines measured along the spherical surface (see the sketch after this item).
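Example 8's spiral layout can likewise be sketched numerically: a single spiral is traced from the pole of the spherical cap to its edge and unit centres are dropped every time the accumulated arc length reaches the chosen spacing. The helper below is a rough illustration under assumed parameters (number of turns, fine sampling step); it is not taken from the patent itself.

```python
import numpy as np

def spiral_layout(radius, spacing, turns=40, max_polar_angle=np.pi / 2):
    """Place imaging-unit centres along one spiral running from the pole of a
    spherical cap to its edge, dropping a unit each time the accumulated arc
    length reaches `spacing` (chords of fine steps approximate the arc)."""
    def point(t):                                   # t in [0, 1] along the spiral
        theta = t * max_polar_angle                 # polar angle grows towards the edge
        phi = 2 * np.pi * turns * t                 # azimuth winds around the axis
        return radius * np.array([np.sin(theta) * np.cos(phi),
                                  np.sin(theta) * np.sin(phi),
                                  np.cos(theta)])
    units, last, travelled = [point(0.0)], point(0.0), 0.0
    steps = turns * 1000                            # fine sampling of the spiral
    for i in range(1, steps + 1):
        p = point(i / steps)
        travelled += np.linalg.norm(p - last)
        last = p
        if travelled >= spacing:
            units.append(p)
            travelled = 0.0
    return np.array(units)

print(spiral_layout(radius=10.0, spacing=0.5).shape)
```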
  • Example 9 As shown in FIG. 23, a layout method of imaging units: with the center of the imaging surface 2 as the pole, the imaging surface is divided into a number of meridians at equal angular intervals, and imaging units are arranged at equal intervals along each meridian.
  • Example 10 The imaging units are not referenced to any point, line, or surface on the imaging surface, and are unconditionally distributed on the surface of the imaging surface at equal intervals.
  • In the imaging-unit layouts of Embodiments 6-10, compared with the meridian layout, the clarity of each part of the picture is more uniform on an imaging surface 2 whose imaging units are arranged along horizontal lines, latitude lines or spiral lines; compared with the horizontal-line layout, the latitude-line and spiral layouts have a smaller difficulty coefficient, and the imaging units are easier to read and write and can be read faster; therefore, the latitude-line layout and the spiral layout can serve as the more commonly used layout methods for spherical imaging surfaces 2 (the photosensitive surface of an image sensor, the display surface of a display device, or the picture of an image file page).
  • Embodiment 10 can be used as a relatively simple and easy layout method, for example, it is applied to the brushing layout of the imaging unit on the negative film of the spherical image sensor camera and the projection screen of the spherical projection projection device.
  • Example 11 When reading or writing the value of the image unit to the imaging unit on the imaging surface 2, the imaging unit is matrixed before scanning and reading, which can simplify the reading and writing algorithm and improve the reading and writing efficiency.
  • a matrix scanning method: the imaging units distributed along latitude/horizontal lines on the imaging surface 2 are taken as the actual imaging units, and the number of actual imaging units on the longest latitude/horizontal line is taken as the reference number; the other latitude/horizontal lines, which fall short of the reference number, are supplemented with virtual imaging units so that the sum of actual and supplementary virtual imaging units on each of them reaches the reference number, and each latitude/horizontal line that reaches the reference number of imaging units is regarded as one row; the rows obtained in this way are taken as the rows, and the number of latitude/horizontal lines of all imaging units on the imaging surface 2 as the columns, to form a virtual row-column matrix on which scanning is performed (a minimal sketch of this padding step follows below).
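A minimal sketch of the padding step of Example 11, assuming each latitude/horizontal line is already available as a list of actual unit values: shorter lines are filled up to the longest line's reference count with virtual units (marked here with -1, an arbitrary placeholder) so that a rectangular virtual row-column matrix can be scanned.

```python
import numpy as np

VIRTUAL = -1.0   # placeholder value marking a virtual (padding) imaging unit

def pad_to_virtual_matrix(latitude_lines):
    """Build the virtual row-column matrix: the longest latitude/horizontal
    line sets the reference number, and every shorter line is topped up with
    virtual units so each line becomes one full row of the matrix."""
    reference = max(len(line) for line in latitude_lines)
    matrix = np.full((len(latitude_lines), reference), VIRTUAL)
    for i, line in enumerate(latitude_lines):
        matrix[i, :len(line)] = line
    return matrix

# three latitude lines holding 2, 5 and 3 actual imaging-unit values
lines = [[0.1, 0.2], [0.3, 0.4, 0.5, 0.6, 0.7], [0.8, 0.9, 1.0]]
print(pad_to_virtual_matrix(lines))   # shape (3, 5); -1 entries are virtual units
```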
  • Example 12 A matrix scanning method: the imaging units distributed along latitude/horizontal lines on the imaging surface 2 are taken as the actual imaging units, and a given number of actual imaging units is taken as the reference value; starting from one latitude/horizontal line, the actual imaging units are virtually picked line by line and point by point; if the number of actual imaging units virtually picked from the starting latitude/horizontal line reaches the reference value, they are recorded as the first virtual row, and if not, picking continues from the next adjacent latitude/horizontal line until the reference value is reached, at which point the picked units are recorded as the first virtual row, while the remaining actual imaging units of that latitude/horizontal line are carried into the picking of the next virtual row; this continues until all actual imaging units of the last latitude/horizontal line on the imaging surface 2 have been virtually picked, and when the last group of picked actual imaging units fails to reach the reference value, it is supplemented with virtual imaging units; finally, the rows obtained in this way are taken as the rows and the total number of rows as the columns, to form a virtual row-column matrix on which scanning is performed (a sketch of this bookkeeping follows below).
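Example 12 can be illustrated as packing the actual units, traversed line by line and point by point, into fixed-length virtual rows, with leftovers spilling into the next row and the last short row padded with virtual units. The snippet below is a simplified sketch of that bookkeeping, with -1 again standing in for a virtual unit.

```python
VIRTUAL = -1   # placeholder for a virtual imaging unit

def pack_fixed_rows(latitude_lines, reference):
    """Pick actual units line by line and point by point into virtual rows of
    exactly `reference` units; units left over on a line spill into the next
    virtual row, and the final short row is padded with virtual units."""
    flat = [u for line in latitude_lines for u in line]        # traversal order
    rows = []
    for start in range(0, len(flat), reference):
        row = flat[start:start + reference]
        rows.append(row + [VIRTUAL] * (reference - len(row)))  # pad only the last row
    return rows

lines = [[1, 2], [3, 4, 5, 6, 7], [8, 9, 10]]
print(pack_fixed_rows(lines, reference=4))
# [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, -1, -1]]
```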
  • Example 13 As shown in Figures 24 and 25, a matrix scanning method: the imaging surface 2, on which the imaging units are distributed at equal intervals in the meridian, latitude, horizontal-line, spiral or no-reference manner, is divided into one or more blocks of equal or unequal area;
  • the principle of division is that the number of imaging units in each block is equal and equal to the reference value;
  • when the number of imaging units in a block is less than the reference value, virtual imaging units are added to reach the reference value;
  • the equal number of imaging units of each block is regarded as one virtual row; the rows obtained in this way are taken as the rows, and the number of all blocks as the columns, to form a virtual row-column matrix on which scanning is performed;
  • the scanning method shown in FIG. 25 is a matrix scanning method using equal-area sector blocks.
  • Example 14 As shown in Figure 26, a matrix scanning method: any meridian passing through the center point of the spherical structure on the imaging surface 2 is taken as a virtual meridian; the virtual meridian rotates clockwise or counterclockwise about the diameter line that is perpendicular to the spherical surface and passes through the sphere center; the equal number of imaging units (distributed at equal intervals along latitude lines, horizontal lines, spirals, or without a reference object) that the virtual meridian sweeps across on the imaging surface 2 during a preset time period, no longer than the time of one full rotation, is regarded as one virtual row; the rows obtained in this way are taken as the rows, and the number of virtual rows obtained in one full rotation of the virtual meridian as the columns, to form a virtual row-column matrix on which scanning is performed (see the sketch after this item).
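The rotating virtual-meridian scan of Example 14 can be approximated by binning units according to the azimuth sector the meridian sweeps during each equal time slice; every sector then becomes one virtual row. The sketch below assumes the units' azimuth angles are known and uses an arbitrary number of steps per revolution.

```python
import numpy as np

def meridian_sweep_matrix(azimuths, values, steps_per_rev=180):
    """Bin imaging units by the azimuth sector swept by the rotating virtual
    meridian during each equal time slice; each sector's units form one
    virtual row, and the rows of one full revolution are stacked as the matrix."""
    sector_width = 2 * np.pi / steps_per_rev
    rows = [[] for _ in range(steps_per_rev)]
    for az, v in zip(azimuths, values):
        rows[int((az % (2 * np.pi)) // sector_width)].append(v)
    width = max(len(r) for r in rows)
    return [r + [-1] * (width - len(r)) for r in rows]        # pad with virtual units

az = np.random.uniform(0.0, 2 * np.pi, size=5000)             # assumed unit azimuths
matrix = meridian_sweep_matrix(az, list(range(5000)))
print(len(matrix), len(matrix[0]))                            # rows x reference width
```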
  • Example 15 As shown in Figure 27, a matrix scanning method uses each meridian with the same number of imaging units distributed on the imaging surface 2 in the form of meridians as a row, and the number of all meridians as columns to form a virtual matrix of rows and columns for scanning. .
  • Example 16 A matrix scanning method: the imaging units laid out by the spiral method are divided into several groups of equal size; starting from the first imaging unit at the starting point of the spiral, that number of imaging units is selected as one virtual row, and selection continues until the last imaging unit on the spiral; the selected equal-sized groups of imaging units are taken as the virtual rows, and the number of all virtual rows as the virtual columns, to form a row/virtual-column matrix on which scanning is performed.
  • Example 17 A matrix scanning method: after the virtual row-column matrix is obtained by any of the methods of Embodiments 12-16, the imaging units are sampled at alternating intervals; the odd-numbered group forms one matrix and the even-numbered group forms another, and the two matrices are matched to receive image matrix data of two different viewing angles of the same picture, so as to display dual-viewing-angle images and videos (a sketch of this interval sampling follows below).
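Example 17's interval sampling is easy to show on an already-built virtual matrix: odd-position and even-position units are separated into two sub-matrices, one per viewing angle. The short sketch below samples every other column; the exact interleaving pattern is an assumption.

```python
import numpy as np

def split_dual_view(matrix):
    """Interval sampling of a virtual row/column matrix: odd-position units go
    to one sub-matrix and even-position units to the other, so each can be
    matched to one viewpoint of a dual-view image."""
    return matrix[:, 0::2], matrix[:, 1::2]   # every other unit of every row

m = np.arange(24).reshape(4, 6)
left, right = split_dual_view(m)
print(left.shape, right.shape)                # (4, 3) (4, 3)
```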
  • Example 18 The image sensor uses the photoelectric conversion function of the photoelectric device to convert the light image on the photosensitive surface into an electrical signal proportional to the light image.
  • the photosensitive surfaces of existing image sensor products are mainly flat, and the acquired images generally suffer from blurring and distortion; when such images are used in the VR field, the stitching of VR pictures suffers from problems such as chromatic aberration, misalignment and picture loss.
  • an image sensor is a spherical image sensor based on the spherical imaging model of Embodiment 1, and includes an image sensor package shell 18-7; the package shell 18-7 contains a photosensitive element 18-1, a matrix generator 18-3 connected to the photosensitive element 18-1, a data reader 18-5 connected to the matrix generator 18-3, and an image information processor 18-6 connected to the data reader 18-5;
  • the photosensitive surface 18-2 of the photosensitive element 18-1 is part of a complete spherical surface, half of a complete sphere being the common structure for a spherical image sensor; photosensitive units are arranged on the photosensitive surface 18-2 according to any one of the layout methods of Embodiments 6-10, with the light-receiving surface of each photosensitive unit parallel to the tangent plane of the spherical surface at its location; when the image sensor is mounted in a camera, all scene light projected onto the photosensitive surface 18-2 must be perpendicular to the light-receiving surfaces of the photosensitive units at every position of the photosensitive surface 18-2 (a sketch of the sensor's data path follows below).
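The data path named above (photosensitive units → matrix generator 18-3 → data reader 18-5 → image information processor 18-6) can be mocked up in a few lines to show how the stages hand data to one another. Everything below, including the function names, the -1 virtual-unit marker and the toy per-latitude "image" assembly, is illustrative only and does not reflect real sensor firmware.

```python
import numpy as np

def matrix_generator(latitude_lines):
    """Arrange non-matrix photosensitive-unit values into a virtual matrix by
    padding every latitude line to the longest one with virtual units (-1)."""
    width = max(len(line) for line in latitude_lines)
    return np.array([line + [-1.0] * (width - len(line)) for line in latitude_lines])

def data_reader(virtual_matrix):
    """Scan the virtual matrix row by row, skipping the virtual padding."""
    for row in virtual_matrix:
        yield row[row >= 0]

def image_processor(scanned_rows):
    """Assemble the scanned values into a simple per-latitude data set that a
    file writer could serialise into a spherical image file."""
    return [np.asarray(r) for r in scanned_rows]

latitudes = [[0.2, 0.4], [0.1, 0.5, 0.9, 0.3], [0.6, 0.7, 0.8]]
image = image_processor(data_reader(matrix_generator(latitudes)))
print([len(r) for r in image])   # [2, 4, 3] actual units recovered per latitude
```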
  • the photosensitive surface 18-2 may respectively be a conventional concave spherical surface, a conventional convex spherical surface, a Fresnel concave spherical surface, a Fresnel convex spherical surface, or a combination of Fresnel concave spherical surfaces, giving the corresponding types of image sensor;
  • the spherical image sensor can be packaged separately as shown in Figure 31, or, as shown in Figure 32, packaged in the package shell 18-7 together with a spherical lens 18-8 having one flat side and one spherical side; when packaged together, the flat side of the spherical lens faces the light-receiving opening and the spherical side faces away from it, and the focal point of the spherical lens coincides with the spherical center of the spherical image sensor;
  • when installed inside a camera, a separately packaged image sensor with a half-spherical photosensitive surface, or an image sensor packaged together with a half-spherical photosensitive surface and a hemispherical lens, can be used as the common structural types of a spherical image sensor camera.
  • This embodiment can be used as a general-purpose or standard spherical image sensor and applied to various spherical image sensor cameras or video cameras, which helps reduce or even eliminate blurring and distortion of the captured picture, improve picture clarity, improve the effect of VR picture stitching, and output image playback files better suited to various spherical display screens.
  • Example 19 At present, the image sensors of cameras all have planar structures, or variant structures based on planar structures, which bring the following problems: the captured picture is prone to blurring and deformation; it is difficult to achieve high and uniform definition over the whole picture; the depth of field is insufficient and there is essentially no three-dimensional effect; shooting high-definition, wide-angle images demands high shooting skill and repeated shots; and shooting VR images relies on VR forming and stitching software whose results still suffer from misalignment, chromatic aberration, picture loss and other defects.
  • a camera is a camera based on the spherical imaging model of Embodiment 3, including a body 19-1 and a lens 19-9; the camera obscura 19-5 inside the body 19-1 is provided, from front to back, with a shutter 19-4, an auxiliary lens 19-6, an image sensor 19-7 and an image data processing module 19-8;
  • a viewfinder lens 19-2 is provided at the front end of the lens 19-9, and a lens combination 19-3 is provided inside the lens barrel;
  • the image sensor 19-7 is the image sensor with a spherical photosensitive surface of Embodiment 18;
  • the auxiliary lens 19-6 is a spherical lens; when the image sensor 19-7 is packaged together with a spherical lens, the auxiliary lens 19-6 is located inside the package shell of the image sensor 19-7; otherwise the auxiliary lens 19-6 is located outside the image sensor 19-7 within the body 19-1; the focal point of the auxiliary lens 19-6 coincides with the spherical center point of the photosensitive surface of the image sensor 19-7, and its central axis coincides with the central axis of the image sensor 19-7;
  • the auxiliary lens 19-6 cooperates with the lens combination 19-3 so that all light entering the camera falls perpendicularly on every photosensitive unit of the photosensitive surface of the image sensor 19-7, so that distant and near scenes, at the center and at the edge of the picture, are all clear and undistorted (a small geometric sketch of this condition follows below);
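The perpendicular-incidence condition stated above follows from simple geometry when the auxiliary lens is idealised as a thin lens whose focal point coincides with the sphere centre of the photosensitive surface: every refracted ray then travels along a radius and meets the spherical surface at 90°. The sketch below only checks that claim numerically; the thin-lens idealisation and all names are assumptions.

```python
import numpy as np

def perpendicular_incidence(lens_point, focal_point, sphere_center, radius):
    """Return |cos(angle)| between the refracted ray and the outward surface
    normal at the point where the ray meets the spherical photosensitive
    surface; 1.0 means the ray is parallel to the normal, i.e. 90° incidence."""
    direction = focal_point - lens_point
    direction = direction / np.linalg.norm(direction)
    # when the focal point coincides with the sphere centre, the ray passes
    # through the centre and pierces the sphere at centre +/- radius*direction
    hit = sphere_center + radius * direction
    normal = (hit - sphere_center) / radius               # outward normal at the hit point
    return float(abs(direction @ normal))

# rays arriving at different points of the lens all satisfy the condition
for p in [np.array([0.3, 0.2, 2.0]), np.array([-0.4, 0.1, 2.0])]:
    print(perpendicular_incidence(p, np.zeros(3), np.zeros(3), radius=1.0))   # 1.0
```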
  • the image data processing module 19-8 is used to process the image data sets acquired from the image sensor 19-7 into image files in various formats or synthetic spherical images.
  • the auxiliary lens 19-6 adopts a structure in which both sides are spherical, or adopts a structure in which the side facing the image sensor 19-7 is spherical and the side facing the shutter 19-4 is a plane.
  • Figure 34 shows different types of cameras produced by selecting different shapes and structures for the image sensor, auxiliary lens, and viewfinder lens to meet the needs of various scenes.
  • This camera is a common structure of a spherical image sensor camera.
  • by combining this camera with the spherical imaging models of Embodiments 1, 2, 4 and 5 and with extended models based on the spherical imaging models of Embodiments 2-5, and by adding or removing parts or adjusting the camera structure, more types of cameras can be produced; in addition, the camera of this embodiment can be combined with other equipment, or with itself, to produce cameras or shooting equipment for more diverse purposes, such as a panoramic camera composed of two cameras as shown in FIG. 37, a dual-view 3D camera composed of two cameras as shown in FIG. 38, and a dual-view 3D panoramic camera composed of four cameras as shown in FIG. 39.
  • the light-receiving condition of the photosensitive surface of the spherical image sensor camera is greatly improved, which significantly improves the field of view and the overall picture quality, reduces the difficulty of shooting large-field-of-view pictures, simplifies the VR image synthesis process, and improves the efficiency and quality of VR image synthesis; when the images directly output by the camera and the synthesized VR images are played on the corresponding spherical display screen, the displayed picture is clearer, with little or no distortion, high fidelity and a stronger three-dimensional effect.
  • Example 20 An existing panoramic image is a wide-area-source or local-source panoramic image file stitched, after spherical processing, from multiple planar images captured and output by one or more planar image sensor cameras; the captured images exhibit distortion, blurring, stretching and collapse, and the stitched spherical panoramic image suffers from shortcomings such as picture misalignment, damage and unnatural color transitions.
  • a panoramic image shooting method comprising a panoramic image shooting device, the device including a spherical image sensor camera 20-1, a camera carrier 20-2, and an image processing system 20-3;
  • in panoramic image shooting equipment in which a single spherical image sensor camera with a convex spherical viewfinder lens is arranged on the camera carrier 20-2, a certain point on the device itself or in the space where it is located is taken as the center point; the shooting port faces away from the center point and the camera shoots while rotating, and the image processing system 20-3 stitches the captured partial-spherical pictures covering all directions into a panoramic image file of a complete spherical picture for saving or output;
  • in panoramic image shooting equipment in which multiple spherical image sensor cameras with convex spherical viewfinder lenses are mounted on the camera carrier 20-2, a certain point on the device itself or in the space where it is located is taken as the center point; the shooting ports of the multiple cameras 20-1 face away from the center point and are set at bearing points on a real-spherical or virtual-spherical bearing surface of the camera carrier 20-2; when the multiple cameras 20-1 can cover the scenes in all directions beyond the center point, the cameras 20-1 shoot in a stationary state, and when they can only cover the scenes in some directions beyond the center point, the cameras 20-1 shoot while rotating around the center point; the image processing system 20-3 stitches the captured partial-spherical pictures covering all directions into a panoramic image file of a complete spherical picture for saving or output;
  • a spherical image sensor camera with a convex spherical viewfinder lens takes a certain point in the shooting space as the center point, and the scenes in all directions shot with the shooting port facing away from that center point are scenes of a wide-area range; the panoramic image output by this method is referred to here as a wide-area source panoramic image for short.
  • in panoramic image shooting equipment in which a single spherical image sensor camera with a concave spherical viewfinder lens is provided on the camera carrier 20-2, a certain point in the scene being photographed is taken as the center point; the shooting port of the camera 20-1 faces the center point, and the camera rotates and shoots on a real or virtual spherical surface whose radius is larger than the radius of the space occupied by the photographed object.
  • in panoramic image shooting equipment in which multiple spherical image sensor cameras with concave spherical viewfinder lenses are provided on the camera carrier 20-2, the shooting ports of the cameras 20-1 face the center point and are arranged at bearing points on the real-spherical or virtual-spherical bearing surface of the camera carrier 20-2; when the multiple cameras 20-1 can cover the scenes in all directions of the photographed object, the cameras 20-1 shoot in a stationary state, and when they can only cover the scenes in some directions, the cameras 20-1 rotate around the center point while shooting; the image processing system 20-3 stitches the captured partial-spherical pictures covering all directions into a panoramic image file of a complete spherical picture for output; with a certain point in the space as the center point, the scenes in all directions shot with the shooting ports facing the center point are scenes of the limited range between the center point and the virtual or real sphere on which the cameras are located.
  • the panoramic image output by this shooting method is referred to here as a local source panoramic image for short.
  • the wide-area source panoramic image shooting method and the local-area source panoramic image shooting method are adopted simultaneously for panoramic image shooting, and the corresponding panoramic image files respectively output are respectively displayed on different independent panoramic image display devices to display.
  • in the panoramic image output by shooting equipment composed of spherical image sensor cameras according to this embodiment, the original image output by the spherical image sensor camera has higher definition than the original image output by a planar image sensor camera, with no deformation and almost no blurring; since the original image captured and output by the spherical image sensor camera is already a spherical image, no spherical processing is needed when synthesizing a panoramic image, so the picture destruction and damage that occur when flat images are spherically processed do not arise; the spherical images are stitched directly into a spherical panoramic image, the original coordinates of the pixels are not damaged, and the low-coincidence problem of the spliced parts of planar images does not occur; as a result, the panoramic image captured and output by this method is of higher quality and is produced more efficiently (a toy illustration of this direct spherical re-indexing follows below).
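The claimed advantage, that spherical captures can be stitched by re-indexing pixels on a shared sphere rather than by warping flat images, can be illustrated with a toy example in which each capture already carries spherical (polar angle, azimuth) pixel coordinates plus the camera's azimuth offset on the carrier. The data layout and the first-value-wins overlap rule below are assumptions made for the sketch.

```python
def stitch_spherical(captures):
    """Stitch spherical captures into one panorama by re-indexing pixels on a
    shared sphere: each capture is (azimuth_offset_deg, {(theta_deg, phi_deg): value}),
    so a pixel only needs its azimuth shifted by the camera's mounting offset."""
    panorama = {}
    for offset, pixels in captures:
        for (theta, phi), value in pixels.items():
            panorama.setdefault((theta, (phi + offset) % 360), value)  # first value wins on overlap
    return panorama

cam_a = (0,   {(45, 10): 0.8, (45, 350): 0.5})
cam_b = (180, {(45, 10): 0.6})                  # camera facing the opposite direction
print(len(stitch_spherical([cam_a, cam_b])))    # 3 distinct directions on the sphere
```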
  • Example 21 The current OLED or liquid crystal display is a flat display or a display that is based on a flat display and does not deviate from the basic physical structure and display method of the flat display. It is difficult for the display to achieve a three-dimensional display effect;
  • a spherical screen can also improve the three-dimensional effect of the display to a certain extent, but owing to the constraints of the traditional imaging method, pixel layout, scanning method, image files and image processing methods, the three-dimensional effect of such displays is still not obvious, and the display screen also suffers from uneven resolution, deformation and slow response.
  • a display screen with a spherical display surface is a display device based on the spherical imaging model of Embodiment 1, including an outer shell 21-6, the front of which carries a display member 21-1 whose display surface 21-2 has a spherical structure; display pixels 21-3 are arranged on the display surface 21-2 according to any one of the layout methods of Embodiments 6-10, and the light emitted by each display pixel 21-3 when energized is perpendicular to the display surface 21-2 at the position of that pixel; the display pixels 21-3 are connected to a scanning module 21-4, whose display pixel matrixer contains one or more of the display-pixel matrix-generation logic circuits or program instructions of Embodiments 11-16; when the display screen is working, the display pixel matrixer uses these logic circuits or program instructions in advance to matrixize the non-matrix-arranged display pixels on the display surface into a display pixel matrix ready for use;
  • the image pixel matrixer in the image processing module 21-5 contains one or more of the image-pixel matrix-generation logic circuits or program instructions of Embodiments 11-16, and their matrix type is the same as that of the display-pixel matrix-generation logic circuits or program instructions in the scanning module;
  • when the display screen displays an image, the image judging program in the image processing module 21-5 sends an already-matrixed spherical image data set file directly to the matcher in the scanning module 21-4; a spherical image file whose image pixels have not been matrixed is matrixed by the image pixel matrixer, and the matrixed spherical image data is then sent to the matcher in the scanning module 21-4; a planar image is first converted into a spherical image by the image converter in the image processing module, then matrixed by the image pixel matrixer and sent to the matcher in the scanning module 21-4; after the matcher matches the image pixel matrix with the display pixel matrix type in the scanning module, the scanning module 21-4 scans and writes the image pixel data to the corresponding display pixels on the display surface according to the matching matrix, realizing image display (a minimal sketch of this match-and-write step follows below).
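A minimal sketch of the matcher plus scan-write step described above, under the simplifying assumption that "matching matrix types" reduces to comparing matrix shapes: the image pixel matrix is accepted only if it matches the display pixel matrix, then written row by row.

```python
def scan_write(image_matrix, display_shape):
    """Accept the image pixel matrix only when its row/column form matches the
    display pixel matrix, then write the pixel values row by row to the
    corresponding display pixels (a sketch, not real display firmware)."""
    rows, cols = len(image_matrix), len(image_matrix[0])
    if (rows, cols) != display_shape:
        raise ValueError("image pixel matrix does not match the display pixel matrix")
    display = [[None] * cols for _ in range(rows)]
    for i in range(rows):                        # scan row by row
        for j in range(cols):
            display[i][j] = image_matrix[i][j]   # write this pixel's value
    return display

screen = scan_write([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]], display_shape=(2, 3))
print(screen)
```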
  • the display screen whose display surface is a concave spherical surface (as shown in FIGS. 47 and 49) is used to display the images directly output by the spherical image sensor camera whose viewfinder lens 19-2 is convex spherical, the wide-area source panoramic images stitched from images shot and output by such spherical image sensor cameras, and the wide-area source panoramic images stitched from images shot and output by planar image sensor cameras;
  • the display screen whose display surface is a convex spherical surface (as shown in FIGS. 46 and 48) is used to display the images directly output by the spherical image sensor camera whose viewfinder lens 19-2 is concave spherical, and the panoramic images stitched from images shot and output by such cameras.
  • the display surface of the display screen of this embodiment is a conventional spherical surface as shown in Figures 46 and 47, a Fresnel spherical surface as shown in Figures 48 and 49, or a combination of Fresnel spherical surfaces; a display surface with a Fresnel spherical structure allows a light and thin spherical display screen to be made, and when such a light and thin spherical screen is applied to notebook computers and mobile phones, it provides the high-definition three-dimensional effect of a spherical screen while remaining easy to carry.
  • the display screen of this embodiment based on the spherical imaging model not only displays a clearer picture without distortion, but also presents a stronger three-dimensional effect, and the three-dimensional picture can be viewed with the naked eye;
  • in a VR device using the spherical screen of this embodiment, since the matching degree between the VR image and the spherical screen is higher, the clarity of the displayed picture is improved, the viewing angle and depth of field of the picture are also significantly improved, and the graininess of the picture and the difficulty of focusing are significantly reduced; if panoramic images output by shooting equipment built from spherical image sensor cameras of the spherical imaging model are also played, the quality of the picture displayed by the VR device is raised to a still higher level.
  • Example 22 The display screen of the existing fan machine adopts a hollowed-out picture, and the display screen is placed at a certain height from the ground, and the hollowed-out picture played is suspended in the air, so as to give people the impression of imaging in the air. But the three-dimensional effect of the existing fan machine display screen is still limited, and its playback object is single, and the application scene is relatively narrow.
  • an arc-shaped rod fan display screen is a display device based on the spherical imaging model of Embodiment 1, including a control mechanism 22-1 and a fan blade 22-4; a control mechanism 22-1 Including the control board 22-3 and the driving motor 22-2, the driving end of the driving motor 22-2 is connected with the fan blade 22-4, and the fan blade 22-4 is a bar with an arc structure; the fan blade 22-4 is a group or multiple groups, it is driven by the connected drive motor 22-2 to rotate during operation, and the rotation is spherical; on the arc-shaped outer surface of the fan blade 22-4 facing the audience, a lamp bead 22-5 is placed.
  • the lamp beads 22-5 are electrically connected to the control board 22-3, and the light emitted by each lamp bead 22-5 when energized is perpendicular to the arc surface of the arc-shaped rod at the bead's location; the image processing module in the control board 22-3 parses the input video into image values consisting of pixel coordinate values, color values and gray-scale values, and sends those image values to the lamp beads 22-5 at the corresponding positions on the fan blade 22-4; on receiving an image value, a lamp bead 22-5 lights up with the corresponding color and gray-scale brightness, and combined with the rotation of the fan blades, the picture is displayed (a sketch of this per-rotation-step drive follows below).
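The fan-display drive described above can be sketched as turning one frame, already arranged in rotating-scan form (rows are blade angular positions, columns are beads along the arc), into per-step lighting commands. The frame shape, step count and bead count below are arbitrary assumptions made for illustration.

```python
import numpy as np

def frame_to_bead_commands(frame, n_beads, steps_per_rev):
    """Turn one frame in rotating-scan form (shape: steps_per_rev x n_beads x 3,
    RGB) into a list of (blade angle, per-bead RGB) commands, one per angular
    step of the rotating arc blade."""
    frame = np.asarray(frame)
    assert frame.shape == (steps_per_rev, n_beads, 3)
    commands = []
    for step in range(steps_per_rev):
        angle = 2 * np.pi * step / steps_per_rev          # blade position for this slice
        commands.append((angle, frame[step]))             # RGB values for every bead
    return commands

frame = np.random.randint(0, 256, size=(180, 64, 3))      # toy 180-step, 64-bead frame
cmds = frame_to_bead_commands(frame, n_beads=64, steps_per_rev=180)
print(len(cmds), cmds[0][1].shape)                        # 180 (64, 3)
```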
  • the shape of the fan blade 22-4 is as shown in Figure 52: two parallel planes spaced a relatively small distance apart intercept a conventional spherical surface or a Fresnel spherical surface, and the variously shaped spherical structural members obtained between the parallel planes are used as the blades.
  • when the lamp beads 22-5 are arranged on the surface of the fan blade 22-4 in an arc, the image processing module in the control board 22-3 transmits the image signal to the lamp beads 22-5 according to the matrix scanning method of Embodiment 14, and the beads light up accordingly to present the picture;
  • when the lamp beads 22-5 are arranged on the surface of the fan blade 22-4 in a fan (sector) shape as shown in Figures 54 and 56, the image processing module in the control board 22-3 transmits the image signal to the lamp beads 22-5 according to the matrix scanning method of Embodiment 13, and the beads light up accordingly to present the picture;
  • This embodiment is a change of the display screen of Embodiment 21, which has a stronger three-dimensional effect than the existing fan display screen, and plays a wider range of image files, and can directly play various spherical images and indirectly play flat images and videos At the same time, the application scenarios are wider. It can not only be used for visual drainage of merchants, but also can be applied to VR devices, which is beyond the reach of existing fan machine displays.
  • a spherical image projection device is a display device based on the spherical imaging model of Embodiment 2; the projection device includes a projection host 23-1, a projected display screen 23-2 whose display surface is spherical, and a projection host bracket 23-3; the projection host 23-1 is either a static projection source host or a dynamic projection source host; the image signal particles emitted by the static projection source host form a spherical surface, while the image signal particles emitted by the dynamic projection source host are arc-shaped or point-shaped, and the arc-shaped or point-shaped image signal particles form a spherical image-signal-particle surface through motion and are scanned and projected onto the spherical display surface of the projected display screen 23-2;
  • the static projection source host includes a spherical light source 23-102, a spherical transparent projection screen 23-101 wrapped around the spherical light source 23-102, and an image processing system 23-103; the sphere centers of the spherical light source 23-102, the spherical transparent projection screen 23-101 and the projected display screen 23-2 are located at the same point; during operation, the spherical light source 23-102 lights up, and the spherical image video played on the spherical transparent projection screen 23-101 is projected onto the projected display screen 23-2;
  • the dynamic projection source host includes an image signal particle emission gun 23-104, an action device 23-105, an action instruction system 23-106 and an image processing system 23-103; as shown in Figure 59, the image signal particle emission gun 23-104 may be an emission device with multiple emission holes, or multiple groups of emission holes, facing the projected display screen 23-2 and arranged in an arc on an arc-shaped rod fan blade; as shown in Figure 60, the image signal particle emission gun 23-104 may also be a point-source emission device with one emission hole or one group of emission holes: when there is a single emission hole, the image particles it emits are of a single primary color, and when there is a group of emission holes, the image particles emitted are of multiple primary colors; the image signal particles emitted by the image signal particle emission gun 23-104 are visible-light particles or non-visible-light particles;
  • during operation, the image processing system 23-103 sends the scanning action to the action instruction system 23-106 and at the same time sends the image pixel color values to the emission gun 23-104; the action device 23-105 drives the emission gun 23-104 to move, and the emission gun 23-104 emits image signal particles corresponding to the image color values onto the projected display screen 23-2 for spherical display, the incidence angle between the image signal particles and every part of the display screen 23-2 being 90°, thereby presenting the picture.
  • the image screen projected by the projection device of this embodiment has higher definition and stronger three-dimensional effect than the existing planar image projection device, and the three-dimensional effect can be seen with the naked eye. At the same time, it has a wider film source than the existing spherical projection device. Easier to popularize.
  • a glasses-type panoramic display device also known as VR glasses; includes: glasses frame 24-1, spherical display screen 24-2, earphone 24-3, file processing module 24-4 and control handle, etc.
  • there are two spherical display screens 24-2; the two screens display the same area of the same frame, at the same viewing angle, of a panoramic image file shot and synthesized by a single-view camera, or respectively display the same area of the same frame, from different viewing angles, of a panoramic image file shot and synthesized by a dual-view camera.
  • the spherical display screen 24-2 is the screen of any one of the display devices in Embodiments 21, 22, and 23, wherein thin and light VR glasses can be produced by using the display screen of Embodiment 21.
  • panoramic VR glasses whose display surface is a concave spherical surface are used to play wide-area source panoramic image files.
  • panoramic VR glasses whose display surface is a convex spherical surface are used to play local source panoramic image files.
  • when the picture does not change correspondingly, the viewer can virtually toggle the picture through the control handle, and the picture rotates in the direction corresponding to the toggle.
  • the VR glasses of this embodiment, using the spherical imaging model and a spherical screen, display a larger viewing angle, higher definition, less deformation and blurring, and better picture quality; and because a spherical screen does not need to be used together with a convex lens as a flat display screen does, the picture is free of the graininess, defocus and focusing difficulty caused by pixels being magnified by a convex lens, so the picture quality displayed by the VR device is raised to a still higher level; this fundamentally resolves the key defects that have hindered the development of existing VR glasses, such as low definition, narrow field of view and defocus, and will greatly promote the development and popularization of VR and of the metaverse based on VR display technology.
  • Example 25 The three-dimensional effect of the picture projected in the existing 3D stereoscopic theater is still relatively low, the degree of immersion is small, and in most cases, it is necessary to wear two-color polarized glasses to watch, and wearing two-color polarized glasses will greatly reduce the viewing brightness of the picture.
  • a theater includes: a projection room 25-1, a concave spherical display screen 25-2 whose display surface is a concave spherical surface, a sound system 25-3, and a viewing platform 25-4;
  • the concave spherical display screen 25-2 is arranged on one side of the projection room 25-1, and the viewing platform 25-4 is a slope on the other side of the projection room 25-1; the slope faces the display surface of the display screen 25-2, and the upper edge of the audience viewing area on the slope is lower than the top of the concave spherical display screen 25-2 while its lower edge is higher than the bottom of the screen;
  • a theater of this structure is used to display and project the spherical image files directly output by spherical image sensor cameras with convex spherical viewfinder lenses, and wide-area source panoramic image and video files.
  • when the display surface of the concave spherical display screen 25-2 is a complete spherical surface, or is close to a complete sphere apart from the small spherical vacancy where it meets the projection room floor, the concave spherical display screen 25-2 is set in the upper space of the middle of the projection room 25-1 and connected to the inner wall of the projection room 25-1 by a fixing frame; the viewing platform 25-4, located in the middle area of the projection room floor, is completely wrapped by the display surface, and the audience seats on the viewing platform 25-4 can adjust the viewing angle up, down, left and right; a theater of this structure is used to display wide-area source panoramic image files.
  • the picture presented by the theater in this embodiment is a stereoscopic panoramic image that can be seen with the naked eye, extends to the outside space, has a high sense of depth, and is wide-area.
  • a theater includes: a projection room 25-1, a convex spherical display screen 25-5 whose display surface is a convex spherical surface, a sound system 25-3, and a viewing platform 25-4.
  • the display surface of the convex spherical display screen 25-5 is a relatively small part of a complete spherical surface, it is arranged on one side in the projection room 25-1, and the viewing platform 25-4 is arranged on the other side in the projection room 25-1.
  • a theater of this structure is used to display the spherical image files directly output by the spherical image sensor camera whose viewfinder lens 19-2 is concave spherical, as well as wide-area source panoramic image files.
  • when the display surface of the convex spherical display screen 25-5 is a complete spherical surface, or is close to a complete sphere apart from the spherical vacancy of the part in contact with the ground, the convex spherical display screen 25-5 is arranged on the floor at the lower middle position of the projection room 25-1, the viewing platform 25-4 is a slope surrounding the convex spherical display screen 25-5, and the viewing area on the slope lies within the vertical height range between the center of the display surface of the convex spherical display screen 25-5 and the highest point of that display surface.
  • the theater with this structure is suitable for projecting panoramic image videos of local sources.
  • the theater of this embodiment presents a panoramic image of a limited picture range, with a strong stereoscopic effect visible to the naked eye, that protrudes at one side or at the center of the projection room 25-1 and contains no surrounding-environment scenery.
  • a theater includes a projection room 25-1, a concave spherical display screen 25-2 with a concave spherical display surface, a convex spherical display screen 25-5 with a convex spherical display surface, a sound system 25-3 and a viewing platform 25-4, wherein the diameter of the display surface of the concave spherical display screen 25-2 is greater than that of the convex spherical display screen 25-5;
  • when the display surfaces of the concave spherical display screen 25-2 and the convex spherical display screen 25-5 are complete spherical surfaces, or are close to complete spheres apart from the small spherical vacancy where the concave screen meets the projection room floor, the concave spherical display screen 25-2 is arranged in the middle of the projection room 25-1 near the upper space and fixed to the inner wall of the projection room by a fixing frame, while the convex spherical display screen 25-5 is arranged in the middle lower part of the projection room 25-1, close to or on the floor, at the lower middle position inside the concave spherical display screen 25-2 so that it is completely wrapped by the display surface of the concave spherical display screen; the viewing platform lies in the region between the horizontal plane passing through the sphere center of the display surface of the convex spherical display screen 25-5 and the horizontal plane passing through the sphere center of the display surface of the concave spherical display screen 25-2, on or near the ring-shaped band of the concave display surface, with its seats facing the display surface of the convex spherical display screen 25-5;
  • when the display surfaces of the concave spherical display screen 25-2 and the convex spherical display screen 25-5 are each a relatively small part of a complete spherical surface, the two display screens are arranged on the same side of the projection room, and the viewing platform is a platform or slope located on the other side of the projection room.
  • in this way, the audience can watch both the wide-area, high-depth, strongly three-dimensional picture covering the entire surrounding environment and the virtual three-dimensional characters and their local pictures that share that wide-area picture with the audience and protrude in front of them; the audience and the virtual people and objects are thus in the same virtual world, achieving a more immersive mixed-reality visual experience.
  • the concave spherical display screen 25-2 and the convex spherical display screen 25-5 of Embodiments 25-27 may be the self-luminous display screen of Embodiment 21, the projection device of Embodiment 23, or the fan display screen of Embodiment 22; because the fan display screen is difficult to scale up, it can only be used in miniature panoramic theaters.


Abstract

An imaging device, method and apparatus, comprising an imaging member whose imaging surface has a spherical structure; at every part of the spherical imaging surface, the angle between the surface and the light emitted by the image source, at the point where the light meets the imaging surface, is 90°, and a number of imaging units are distributed over the imaging surface. By using the spherical projection surface, the image of a scene is acquired completely, integrally and accurately, and the acquired image is displayed on a display device having a correspondingly reversed spherical display surface of the same structure, presenting a highly realistic three-dimensional picture fully consistent with the scene, thereby overcoming the defect that imaging methods based on planar projection surfaces have difficulty acquiring and presenting highly realistic three-dimensional scene pictures.

Description

成像装置、方法及设备 技术领域
本发明涉及成像显像技术领域,具体地说是一种成像装置、方法及设备。
背景技术
当前,人们使用的成像方法都是建立在平面投影面上的推演和延伸。传统的全景图像是采用一部或多部平面图像传感器相机拍摄输出的多张平面图像球面化处理后缝合而成的广域源或局域源全景图像文件,过程中,存在画面变形/虚化/拉伸/坍缩/损伤等现象,拼接成球面形全景图像时还存在画面错位、色彩过渡僵硬不自然等缺陷。
3D建模是把物体分别投影到X、Y、Z三个方向的平面投影面上进行各种对应数值的计算,投影过程中虽然考虑到了切向和径向变形但并没有深度考虑光量子的损失;采用平面感光面的相机的拍摄是一种实拍行为,光线到达平面感光面上不同部位的光量子出现不同程度的损失就凸显出来。线性成像模型下,距离投影面中心点越远的投影面表面部位,光量子到达后的损失越大直至损失殆尽,画面也就越来越模糊直至完全没有画面;非线性成像模型下,距离投影面中心点越远的投影面表面部位,光线是以坍缩的形式到达投影面,所以导致投影面的受光面积越来越小,直至为零;形成的图像被显示还原成不坍缩的画面时,坍缩越严重的部位画面越模糊,直至完全看不清楚。这一现象是基于平面形投影面的成像方法永远存在,无法消除致命缺陷。
基于平面投影面的成像方法,不管是线性成像模型、还是非线性成像模型、还是线性与非线性结合的成像模型,都存在变形现象。线性成像模型下的成像还伴随着虚化,某些情况或者某些部位变形还非常严重,现有的解决办法多数是通过改变焦距进行缓解,但无法根除。目前的相机标准镜头和长焦镜头多采用的是线性成像模型,长焦镜头主要拍摄远处景物,远处景物在纵向轴上两点的距离变化引起的角变量(画面某一点与中心轴的夹角)变化不大,所以造成的影像变形的程度不大、不明显;标准镜头拍摄中等距离的景物,此情况下景物在纵向轴上两点的距离变化引起的角变量开始明显,变形也随之明显。不管哪种镜头,近景拍摄,纵向距离差导致的角变量会非常大,所以拍摄的画面就变形非常严重。广角和鱼眼镜头,景物在纵向轴上两点的距离变化引起的角变量非常大,所以基于平面投影面的成像方法不得不使用非线性模型或者线性和非 线性模型结合的方法来缓解这种严重变形的现象,但这样的方法不仅不能从根本上消除变形缺陷,还依然会带来画面清晰度上的缺陷。
基于平面形投影面的成像方法完全不具备从与成像轴平行和重合的纵向坐标上的空间景物获取影像的功能,直角坐标系的纵向坐标上的空间景物,基于平面形投影面的成像方法无法实现连续点线面的完整投影,只能实现单点和短线段的近似投影,所以无法获取立体空间的完整影像,这也是其无法获取真正有立体效果影像的原因。
发明内容
本实用新型为解决现有的问题,旨在提供一种成像装置、方法及设备。
为了达到上述目的,本发明采用的技术方案中,包括一种成像装置,包括成像面为球面形结构的成像件,所述球面形成像面所有部位与像源所发出的光在成像面交汇处的夹角均为90°,成像面上分布有若干成像单元。
在一些实施例中,还包括球面形结构的影像源;成像时,影像源发出的光照射到成像件的成像面上,且所述影像源对射入的光线进行方向和角度调整,使最终照射到所述成像面上的所有光线均垂直于成像面表面的各部位对应位置。在一些实施例中,所述成像方法的模型中还包括光学镜片组合和辅助透镜,所述光学镜片组合和辅助透镜位于光线的路径上,通过改变所述光学镜片组合的属性和布局,使影像源发出的光线的方向和路径发生变化,到达成像面的距离和位置对应发生改变,从而使得成像面可以按照需求放置在指定位置;所述辅助透镜对成像面的入射光线作进一步精准调整,以使得照射到所述成像面上所有部位的入射光线均精准垂直于成像面表面的各部位对应位置。
在一些实施例中,所述辅助透镜与所述成像面的对称轴重合,成像面、影像源和辅助透镜的球面形结构类型相同或相互匹配。
在一些实施例中,所述成像面和/或影像源和/或辅助透镜为一面平面、另一面球面的结构,或者两面均为球面的结构;该球面指凹球面或者凸球面结构。在一些实施例中,成像面和/或影像源和/或辅助透镜的球面结构为常规球面结构、或菲涅尔球面结构、或多个菲涅尔球面结构体的结合形结构;常规球面结构为常规正球面结构、常规椭球面结构、常规抛物面结构中的一种;菲涅尔球面结构为菲涅尔正球面结构、菲涅尔椭球面结构、菲涅尔抛物面结构中的一种。在一些实施例中,所述成像面上的成像单元以经线的方式布局在所述成像面的表面,各所述经线的夹角相等,同一条经线上的所述成像单元的间距相等;或 以纬线/横线/螺旋线的方式布局在所述成像面的表面,同一条纬线/横线/螺旋线上的所述成像单元的间距相等且等于相邻纬线/横线/螺旋线之间的间距,螺旋线方式布局成像单元的成像面上为多条螺线时,多条螺旋线之间的间距相等且等于成像单元的间距;此处的间距指的是沿成像面表面的间距;或不以任何点、线、面作参照对象的成像单元等间距分布于成像面的表面上;所述成像单元指的是相机图像传感器感光面上的感光单元、或显示屏幕的显示像素、或图像画面的图像像素;
本发明还提供一种基于上述任一成像装置的成像方法,S1,球面形成像面所有部位与像源所发出的光在成像面交汇处的夹角均为90°,S2,对成像面上的成像单元进行矩阵化处理,形成虚拟行列矩阵,对所述虚拟行列矩阵进行图像像素数值读取或写入;S3,凸球面形成像面的图像获取设备直接接收现实世界景象形成的虚拟矩阵、凹球面形或凸球面形的成像面通过凸球面影像源间接接收现实世界景象形成并输出的对应虚拟矩阵的图像文件,用观看面为凹球面形显示面的显示设备显示还原;凹球面形的成像面直接接收现实世界景象形成的虚拟矩阵、凹球面形或凸球面形的成像面通过凹球面影像源间接接收现实世界景象形成并输出的对应虚拟矩阵的图像文件,用观看面为凸球面形显示面的显示设备显示还原。
在一些实施例中,S2中读取成像单元方式是:S2.1虚拟成像单元补充纬线/横线实际成像单元法,将成像面上呈纬线/横线分布的成像单元作为实际成像单元,以最长纬线/横线上的实际成像单元数量为基准数,对其它纬线/横线上不足基准数用虚拟成像单元进行补充,以使其它纬线/横线上实际成像单元与补充的虚拟成像单元之和达到基准数,将达到基准数的相等数量成像单元的纬线/横线作为一行;将上述方式获得的行作为行,把成像面上所有成像单元的纬线/横线数量作为列,组成虚拟行列矩阵。
在一些实施例中,S2.2相邻纬线/横线成像单元互相补充法,将成像面上呈纬线/横线分布的成像单元作为实际成像单元,以给定数量的实际成像单元数量为参照值,将其中一条纬线/横线作为起始线,逐线逐点进行虚拟摘取实际成像单元,若起始纬线/横线上虚拟摘取的实际成像单元数量达到参照值,则作为一条虚拟行记入第一行,若未达到参照值,则从相邻的下一条纬线/横线上继续虚拟摘取成像单元,直到达到参照值时作为一条虚拟行记入第一行,而纬线/横线上虚拟摘取剩余的实际成像单元则计入下一个虚拟行的虚拟摘取工作中去,以此 类推,直至成像面上的最后一条纬线/横线的实际成像单元全部虚拟摘取完,且当最后一次虚拟摘取的实际成像单元未能达到参照值时,则用虚拟成像单元补充;最后;将上述方式获得的行作为行,以行的总数量为列,组成虚拟行列矩阵。
在一些实施例中,S2.3区块法将成像单元以经线、纬线、横线、螺旋线、无参照对象等间距分布方式分布的成像面分成一个或多个等面积或不等面积的区块,划分的原则为每个区块内的成像单元数量相等且等于参照值,当成像面内区块的成像单元数量少于参照值时,用虚拟成像单元补充达到参照值,将每个区块的相等数量的成像单元视为一虚拟行;将上述方式获得的行作为行,所有区块的数量作为列,形成虚拟行列矩阵。
在一些实施例中,S2.4虚拟经线切割法,将成像面上经过球面结构中心点的任意一条经线作为虚拟经线,所述虚拟经线以垂直于球面且经过球面中心点的直径线为旋转轴,进行顺时针或逆时针旋转,把提前设定的一个时间段下虚拟经线切割成像面上以纬线方式、横线方式、螺旋线方式、无参照对象等间距分布布局的相等数量的成像单元作为一条虚拟行;将上述方式获得的行作为行,把虚拟经线旋转一圈所获得的虚拟行的数量作为列,组成虚拟行列矩阵。
在一些实施例中,S2.5经线法,将成像单元以经线方式分布于成像面。的成像单元数量相等的每一条经线作为一行,所有经线数量作为列,形成虚拟行列矩阵。
在一些实施例中,S2.6把采用螺旋线布局法的成像单元分成等数量的若干份,从所述螺旋线起点的第一个成像单元开始选择该数量的成像单元作为一个虚拟行,选择至螺旋线上的最后一个成像单元为止;以选择的所述等数量的成像单元作为虚拟行,以全部虚拟行的行数作为虚拟列,形成行虚拟列矩阵。
或S2.7隔点取样法,在S2获得虚拟行列矩阵后,对成像单元进行隔点取样,奇数组组成一个矩阵、偶数组组成一个矩阵,两个矩阵分别匹配接收同一幅画面的不同视角图像矩阵数据,用于显示双视角图像视频。
在一些实施例中,S3.1输出原始矩阵形式的图像数据集文件,或S3.2将虚拟行列矩阵中的像素坐标或像素进行球面还原缝合,输出球面形图像文件或者平面图像文件。
本发明还提供一种图像传感器,包括上述的成像装置,其中成像装置中的成像件在所述图像传感器中体现为感光件,成像面体现为感光面,成像面上的成像 单元体现为感光单元;或者,所述图像传感器还包括与感光件相连的矩阵生成器,以及与矩阵生成器连接的数据读取器、与数据读取器连接的图像处理器;所述图像传感器工作时,执行上述的成像方法,矩阵生成器将所述感光件的感光面上非矩阵排列的感光单元经矩阵生成器内置的逻辑电路处理后生成矩阵化排列的虚拟矩阵,所述虚拟矩阵上的感光单元从外界获取的感光数据被数据读取器读取后传送给图像处理器,图像处理器对输入的数据进行处理,输出对应的图像文件。
在一些实施例中,所述图像传感器独立封装、或者与辅助透镜封装在一起;所述图像传感器辅助透镜封装在一起时,辅助透镜的一面朝向感光口,另一面背向感光口并朝向图像传感器感光件的感光面,且所述辅助透镜的焦点与球面形感光面的球心重合。
本发明还提供一种相机,所述相机包括机身和镜头,所述机身内部的暗箱内由前往后设有快门、内置辅助透镜、如上述的图像传感器、图像数据处理输出模块;所述镜头的前端设有取景透镜,镜头的镜筒内部设有透镜组合;所述机身内部件结合镜头部件按照如上述任一所述成像装置的方式布局,所述成像装置中的成像件在所述相机中体现为图像传感器,成像面体现为图像传感器的感光面,影像源体现为取景透镜;并执行上述任一所述的成像方法;内置的辅助透镜采用球形透镜;所述内置辅助透镜的焦点与图像传感器的球面形感光面的球心点重合,内置辅助透镜的中心轴线与图像传感器的中心轴线重合,内置辅助透镜用于与透镜组合配合使所有光线均垂直照射在图像传感器感光面上;所述图像数据处理输出模块用于将从图像传感器上获取的图像数据处理成各种格式的文件输出、或者合成球面图像输出,或者合成球面图像再转化成平面图像输出。
本发明还提供一种全景图像的拍摄制作方法,凸球面形取景透镜的相机以其所在空间的某一点为中心点,取景透镜背向着所述中心点,拍摄覆盖所述中心点以外所有方向的景象获得若干张球面形画面的图像,把所述若干张球面形画面的图像拼接成完整球面画面的全景图像文件保存或输出;所述全景图像为包有所述中心点以外所有方向的广域范围的景象,这里简称为广域源全景图像;所述图像在显示面为凹球面显示屏幕上显示。
凹球面形取景透镜的相机在其所以空间的某一点为中心点,取景透镜面向所述中心点,拍摄从所述相机到所述空间点之间范围内所有方向景象获得若干张球 面形画面的图像,把所述若干张球面形画面的图像被拼接成完整球面形画面的全景图像文件保存或输出;所述全景图像包含有所述相机与所述空间点之间所有方向的局域范围的景象,这里简称为局域源全景图像。所述图像在显示面为凸球面显示屏幕上显示。
本发明还提供一种显示屏,包括上述的成像装置,其中成像装置的成像件在所述显示屏中体现为图像显示件,成像面体现为图像显示面,成像面上的成像单元体现为显示像素,或并所述显示屏显示来源于上述成像方法的相机或拍摄设备拍摄输出的球面形画面的图像文件。
所述显示屏还包括图像处理模组和扫描模组,所述扫描模组一边与显示面上的显示像素相连,一边与图像处理模组相连;扫描模组内的显示像素矩阵化器内包含上述的一种或者多种显示像素的矩阵生成逻辑电路或者程序指令,显示屏工作时,所述显示像素矩阵化器提前通过矩阵生成逻辑电路或者程序指令把显示面上非矩阵排列的显示像素提前矩阵化;所述图像处理模组内的图像像素矩阵化器内包含上述的一种或多种图像像素的矩阵生成逻辑电路或者程序指令,且所述图像像素的矩阵生成逻辑电路或者程序指令的矩阵类型与扫描模组内的显示像素的矩阵生成逻辑电路或者程序指令的矩阵类型相同;所述显示屏显示图像时,图像处理模组中的图像判断程序把已经被矩阵化的球面形图像数据集文件直接输送给扫描模组内的匹配器、把图像像素未被矩阵化的球面形图像文件通过图像像素矩阵化器矩阵化并将矩阵化的球面形图像数据输送给扫描模组内的匹配器、把平面图像通过图像处理模组内的图像转化器转化成球面形图像后再通过图像像素矩阵化器矩阵化后传送给扫描模组内的匹配器,所述匹配器把图像像素矩阵与扫描模组内的显示像素矩阵类型进行匹配,匹配成功后,扫描模组按照对应矩阵把图像像素的数据扫描写入显示面上的对应显示像素,实现图像显示。
本发明还提供一种扇机显示屏,包括上述任一的成像装置,其中,所述成像件在所述扇机显示屏中体现为扇机,成像面体现为扇机的扇叶面向观众的外表面,成像面上的成像单元体现为扇叶外表面上的灯珠;还包括控制机构,控制机构包括控制板与驱动电机,所述驱动电机的驱动端与扇机的扇叶相连;所述扇叶为弧形结构的杆件,所述弧形结构为两条间距相对较小的平行面截取常规球面或菲涅尔球面获得的所述平行面之间的球面状结构件,在扇叶面向观众的一面的外表面置放有灯珠,所述扇机和灯珠与控制板电性连接,灯珠发出的光 垂直于扇叶表面灯珠所在部位的面,扇机带动扇叶旋转,控制板执行上述的成像方法,实现画面的显示成像;或并所述显示屏显示来源于上述成像方法的相机或拍摄设备拍摄输出的球面形画面的图像文件。
本发明还提供一种投影装置,包括上述任一的成像装置;其中,在所述成像装置中,成像件在所述投影装置中体现为被投显示屏幕,成像面体现为被投显示屏幕的图像显示面,成像单元为被投显示屏幕图像显示面上涂刷的反光颗粒或者设置的被投单元,图像源体现为投影主机;所述投影主机为点状图像信号粒子投影器、弧线状图像信号粒子投影器、球面状图像信号粒子投影器;所述点状图像信号粒子投影器、弧线状图像信号粒子投影器的图像信号粒子的发射装置在与其连接的驱动装置驱动下,按照动作指令器给出的指令运动把图像信号粒子投射到球面形投影屏幕上,和/或执行上述的成像方法,实现画面显示。本发明还提供一种眼镜式全景显示设备,包括眼镜架、显示屏、耳机、文件处理模块及控制手柄;所述显示屏的图像显示面为凹球面结构或凸球面形结构,所述球面形结构为常规球面形结构或者菲涅尔球面形结构或者菲涅尔球面形结构的组合,其设置于眼镜架的镜框内位于佩戴时观看者眼睛的正前方;所述显示屏为一片或二片,当为二片时,所述显示屏分别显示单视角相机拍摄并合成的全景图像文件的同一幅画面的同一区域画面,或者双视角相机拍摄并合成的全景图像文件的同一幅画面的同一区域不同视角的画面。
本发明还提供一种影院,包括放映室、观看台、音响和显示屏;所述显示屏为上述任一所述显示屏,且放映室设置有一块显示面为凹球面的凹球面显示屏,或者为一块显示面为凸球面的凸球面显示屏,或者为一块显示面为凹球面的凹球面显示屏和一块显示面为凸球面的凹球面显示屏。
放映室内设置有一块凹球面显示屏的所述影院,当所述显示屏的显示面为完整球面的相对较少一部分时,所述显示屏设置于放映室内的一侧,观看台为位于放映室内另一侧的平台或斜坡;当所述显示屏的其显示面为完整球面或者除去其与放映室地面交汇处的小部分球面空缺、其他部位的球面为接近于完整球面时,所述显示屏设置在放映室中间位置的上部空间,并通过固定架固定于放映室的内墙壁上,观看台为位于显示面的内部中间靠下位置的平台或斜坡并被显示面完全包裹;
放映室内设置有一块凸球面显示屏的所述影院,当所述显示屏的显示面为完整球面的相对较少一部分时,所述显示屏设置于放映室内的一侧,观看台为位于 放映室内另一侧的平台或斜坡面;当所述显示屏的显示面为完整球面或者除去其与放映室地面交汇处的小部分球面空缺、其他部位的球面为接近于完整球面时,所述显示屏设置于放映室的中间下部的地面上,观看台为环绕于凸球面显示屏四周的平台或斜坡;
放映室内设置有两块球面显示屏、且一块为凹球面显示屏和一块为凹球面显示屏的所述影院,所述凹球面显示屏的显示面和凸球面显示屏的显示面为完整球面或者除去其与放映室地面交汇处的小部分球面空缺、其他部位的球面为接近于完整球面,且所述凹球面显示屏显示面的直径大于凸球面显示屏显示面的直径,凹球面显示屏设置于放映室中间靠近上部空间位置并通过固定架固定在放映室内墙墙壁上,所述凸球面显示屏设置于放映室中间下部靠近地面位置或设置于地面上,且其位于凹球面显示屏的内部中间靠下的位置被凹球面显示屏的显示面完全包裹;观看台位于经过凸球面显示屏显示面球心的水平面与经过凹球面显示屏显示面球心的水平面之间的区域或上下高度稍大于或稍小于该区域高度的区域,并处于或靠近凹球面显示屏显示面的环形圈带上,观看台的座椅面向凸球面显示屏的显示面。所述凹球面显示屏、凸球面显示屏分别显示同一场下拍摄并输出的广域源全景图像文件和局域源全景图像文件。
本发明的有益效果是:通过使用的球面形投影面,直角坐标系的X、Y、Z三个维度的直线都可以完整、连续、精准的投射到球曲面上,从而完全、完整、准确地获取景物的影像,获取的影像使用同样结构的对应反向的球面形显示面的显示设备显示,呈现出与景物完全一致的高度逼真的立体画面,从而解决了基于平面投影面的成像方法难以获取和显现高逼真度立体场景画面的历史缺陷。同时也给立体显示和VR显示提供了正确的立体显示方法,使立体显示的效果更逼真更清晰,VR显示不再存在现有VR设备存在的景深小、纱门效应、眩晕、视场角窄、对焦难等缺陷。
与基于平面投影面的成像方法相比,基于球面形投影面的成像方法,从物镜到达投影面的光线,不仅在中途很少损失,而且投射到投影面上所有部位的光线都是垂直于投影面的。由于垂直于投影面的光线的光量子值最大,所以获取的画面清晰度最高;又由于全部垂直,所以画面所有部位都清晰而且清晰度一致。如此,基于球面形投影面的成像方法获取的图像整体清晰度远大于基于平面投影面的成像方法获取的图像。
基于球面形投影面的成像方法的成像装置,入射光均全部垂直于球面形投射 面,外界景物在物镜的某一经纬度值的点上反应的影像光线,到达投影面后,是在投影面的相同经纬度的坐标位置或者同经度值和同绝对值维度的负数组成的坐标点位置呈现出来的,投影面的该位置与物镜对应的位置形状完全相同,所以形成画面形状也是完全一样的,成像后输出的图像采用相同球面形对应反向结构的显示面的显示设备显示,画面与景物的原形原貌完全一样,无丝毫变形;从而从根本上摆脱了现有平面投射面成像方法长期存在的难以根除的变形缺陷。
由于基于球面形投影面的成像方法获取的影像,影像本身就是球面形图像,组成图像的像素坐标为球坐标形式的坐标值,用以球面形投影面的成像方法获取的球面形图像缝合成同为球面形的VR全景图像,不会出现平面形图像缝合所存在的错位、画损乃至色差现象,画面所有部位都可以精准对接缝合,缝合成的VR全景图像画面品质远高于基于平面形投影面方法获取图像的缝合效果。基于球面形投影面的成像方法解决了平面形投影面成像方法难以解决的诸多难题,对于高逼真度、高清晰度、高立体感还原现实影像以及虚拟现实和混合现实的影像使用、对接、融合都能完美解决;该成像方法的应用和延展性应用,对人类整个成像领域技术的升级,极具意义。
附图说明
图1为实施例1成像装置的示意图;图中A为凹球面结构成像面的成像装置或模型,B为凸球面结构成像面的成像装置或模型;
图2为实施例2成像装置的示意图;
图中A为凸透镜成像装置,B为凹透镜成像装置。
图3为实施例3成像装置的示意图;
图中A为凹球面结构成像面的成像装置,B为凸球面结构成像面的成像装置。
图4为实施例4成像装置的示意图;
图5为实施例5成像装置的示意图;
图6为以凸球面作为透明物镜的成像装置获取广域范围景物景象的示意图;
图7为以凹球面作为透明物镜的成像装置获取局域范围景物景象的示意图;
图8为常规正球面结构的示意图;
图9为常规椭球面结构的示意图;
图10为常规抛物面结构的示意图;
图11为菲涅尔正凹球面结构的示意图;
图12为菲涅尔正凸球面结构的示意图;
图13为菲涅尔椭球面结构的横截面示意图;
图14为菲涅尔抛物面结构的横截面示意图;
图15为多个菲涅尔凹球面组合而成的结构的横截面示意图;
图16为多个菲涅尔凸球面组合而成的结构的横截面示意图;
图17为内凹式菲涅尔凹球面结构的横截面示意图;
图18为内凸式菲涅尔凸球面结构的横截面示意图;
图19为本成像方法的虚拟成像的示意图;
图20为成像单元以横线方式布局的示意图;
图21为成像单元以纬线方式布局的示意图;
图22为成像单元以螺旋线方式布局的示意图;
图23为成像单元以经线方式布局的示意图;
图24为虚拟成像单元补充纬线布局的实际成像单元法扫描的示意图;
图25为虚拟成像单元补充横线布局的实际成像单元法扫描的示意图;
图26为区块法扫描的示意图;
图27为扇形区块法扫描的示意图;
图28为虚拟经线切割法扫描的示意图;
图29为经线扫描法扫描的示意图;参见图1-29:1、成像件;2、成像面;3、影像源;4、光学镜片组合;5、辅助透镜;
图30为本实施例球面形图像传感器的整体架构图;
图31为独立封装的四种结构的球面形图像传感器示意图;
图32为与辅助透镜装在一起的四种结构的球面形图像传感器示意图;参见图31-33:18-1、感光件;18-2、感光面;18-3、矩阵生成器;18-4、数据虚拟矩阵;18-5、数据读取器;18-6、图像处理器;18-7、图像传感器封装壳;
图33为本实施例发明的相机结构示意图;
图34为采用不同形状结构的本实施例相机的示意图;
图35为两个相机组合成3D相机的结构示意图;
图36为两个相机组合成全景相机的结构示意图;
图37为四个相机组合成3D全景相机的结构示意图;
参见图33-37:19-1、机身;21-2、取景透镜;19-3、透镜组合;19-4、快门; 19-5、暗箱;19-6、内置辅助透镜;19-7、图像传感器;19-8、图像数据处理输出模块;19-9、镜头;19-10、偏振滤片;
图38为带有单个凸球面取景镜头的相机的全景拍摄设备及其全景拍摄方法示意图;
图39为带有多个凸球面取景镜头的相机的全景拍摄设备的示意图;
图40为带有单个凹球面取景镜头的相机的全景拍摄设备及其全景拍摄方法示意图;
图41为带有多个凹球面取景镜头的相机的全景拍摄设备的示意图;
图42为全景拍摄装置拍摄局域源全景图像的方法示意图;
图43为全景拍摄装置在同一场景下同时拍摄广域源全景图像和局域源全景图像的示意图;
参见图38-43:20-1、相机;20-2、相机承载体;20-3、图像处理系统;
图44为实施例21的球面显示屏的结构示意图;
图45为本发明的球面显示屏的工作原理示意图;
图46-49为本发明显示屏的显示面形状示意图;
参见图44-49:21-1、显示件;21-2、显示面;21-3、显示像素;21-4、扫描模组;21-5、图像处理模组、21-6、壳件;
图50为实施例22扇机显示屏的示意图;
图51为扇机显示屏的工作原理图;
图52为扇机显示屏的扇叶的各种形状示意图;
图53-56为扇机显示屏的扇叶上灯珠分布方式的示意图;
参见图50-56:22-1、控制机构;22-2、驱动电机;22-3、控制板;22-4、扇叶;22-5、灯珠;22-6、处理模块;22-7、数据线;
图57为实施例23投影装置的示意图;
图58为投影主机的静态投影源示意图;
图59、60为投影主机的动态投影源示意图;
参见图57-60:23-1、投影主机;23-2、被投显示屏幕;23-3、支架;23-101、透明投屏;23-102、球形光源;23-103、图像处理系统;23-104、图像信号粒子发射枪;23-105、动作装置,23-106、动作指令系统;
图61为实施例24的VR眼镜示意图;图中:24-1、眼镜架;24-2、球面形显示屏;24-3、耳机;24-4、文件处理模块;
图62为凹球面显示屏的显示面为完整球面相对较少一部分的全景影院示意图;
图63为凹球面显示屏的显示面为接近完整球面的全景影院示意图;
图64为凸球面显示屏的显示面为完整球面相对较少一部分的全景影院示意图;
图65为凸球面显示屏的显示面为接近完整球面的全景影院示意图;
图66为同时拥有凹球面显示屏和凸球面显示屏、且两个显示屏的显示面均为接近完整球面的全景影院示意图。
图67为同时拥有凹球面显示屏和凸球面显示屏、且两个显示屏的显示面均为为完整球面的的相对较少一部分的全景影院示意图。
参见图62-67:25-1、放映室;25-2、显示面为凹球面的凹球面显示屏;25-3、音响;25-4、观看台;25-5、显示面为凸球面的凸球面显示屏;
具体实施方式
下面结合附图对本申请作进一步详细描述。
实施例1。如图1所示,一种成像装置,包括成像面2为球面形结构的成像件1,所述成像面2球形面所有部位与像源所发出的光在成像面2交汇处的夹角均为90°,成像面2上分布有若干成像单元。
当所述成像装置为球面形图像获取拍摄装置时,成像件1为相机的图像传感器或者相机底片,成像面2为图像传感器的感光面或者为相机底片的感光面;成像单元为图像传感器感光面上的感光单元或者底片感光面上涂刷的感光颗粒,像源发出的光线为外部景物的直接光线或经过相机镜头的间接光线,其相对于成像面2为垂直射入的光。
当所述成像装置为自发光球面形图像还原显示装置时,成像件1为显示屏,成像面2为显示屏的显示面,成像单元为显示面上的显示像素,像源发出的光线为显示面上的显示像素发出的光线,其相对于成像面2为垂直射出的光。
当所述成像装置为投影式球面形图像还原显示装置时,成像件1为投影屏幕,成像面2为投影屏幕的图像显示面,成像单元为投影屏幕的图像显示面上涂刷的反光颗粒;像源发出的光线为投影主机投射到投影屏幕上的光线,其相对于成像面2为垂直射入的光。
当基于所述成像装置的模型体现为球面形图像文件时,所述成像件1为图像文件的页面或帧面,成像面2为图像文件页面或帧面的画面,成像单元为画面的图像像素,像源发出的光线为垂直于图像画面的射出光。
本实施例成像装置也是球面形图像成像法最基础的成像模型。
实施例2。如图2所示,一种成像装置,包括成像面2为球面形结构的成像件1和球面形结构的影像源3,成像件1的球面形成像面2的球心点与影像源3的焦点重合,影像源3的光轴与成像面2的中心对称轴重合,成像面2和影像源3的球面结构类型相反,即:当成像面2为凹球面结构时,影像源3为凸球面;当成像面2为凸球面结构时,影像源3为凹球面。
实施例3。如图3所示,一种成像装置,包括成像面2为球面形结构的成像件1和球面形结构的影像源3,影像源3的光轴与成像面2的中心对称轴与重合;还包括光学镜片组合4和辅助透镜5,光学镜片组合4和辅助透镜5位于光线的路径上,光学镜片组合4把入射光线变成平行光射向辅助透镜5,辅助透镜5的焦点与成像面2的球心重合、对称轴重合;这样使得最终投射到成像面2球形面所有部位上的所有光线均垂直于成像面2。本实施例为实施例2的改进,这种布局方式的成像装置使得成像件1可以放在光轴的任意位置,而不再局限于成像面2的球心与影像源3焦点重合的位置。
实施例4。如图4所示,此结构的成像装置是在实施例3的基础上,在光线的路径上放置反光镜,使得光束总方向发生变化,从而使得成像件1的成像面2不仅与影像源3的距离可以进行灵活调整,还可以在方位上进行灵活调整。实施例5。如图5所示,此结构的成像装置从影像源3或光学镜片组合4出来照射到辅助透镜5的光线为非平行光,辅助透镜5的焦点与成像面2的球心并不一定重合,但同在一条轴线上,辅助透镜5在共同的轴线上设置到合适的位置,使得光线通过辅助透镜5的进一步调整,最终使投射到成像面2上时的入射角均垂直于成像面2球面的所有对应部位。这种结构的成像装置可以减少光线在传输途中的光量子损失,进一步提高获取影像的质量。
实施例1-5为建立在球面形成像面上的成像装置,形成的原始图像的画面为球面形,故把所述成像装置统一简称为球面成像装置,基于该成像装置的成像模型简称为球面成像模型。
如图1-5所示,实施例1-5模型中球面形成像面2、影像源3和辅助透镜5为一面平面、另一面球面的结构、或者两面均为球面的结构,球面结构为凹球面结构或者凸球面结构,成像面2上的成像单元发出或接收的光均垂直于成像单元所在的成像面2部位的球面。
实施例2-5为基于实施例1的不同方式的改进,基于球面形成像面的成像方法还有很多改进方法,不同的改进方法下可以制作出不同的成像模型,以满足多 种场合需求,实现多种目的的成像效果。
实施例2-5球面成像装置不适用于自发光球面显示屏和球面形图像传感器自身、球面形图像传感器直接接收外部镜像光线的简易相机、以及球面图像文件画面本身的成像模型,适用于球面形图像传感器与透镜结合的相机和球面形投影面的投影显示放映装置的成像;当实施例2-5的球面成像装置为球面形图像传感器与透镜结合的相机时,所述图像源3为相机前端的取景透镜,当为球面形投影面的投影显示放映装置时,所述图像源3为发射图像光束或粒子束的投影主机。
实施例2-5可以作为球面形图像传感器与透镜结合的相机的常用球面成像模型,该模型下的相机,投射到相机球面形图像传感器的球面形感光面2上所有部位的光线均垂直于球面形感光面的表面,使得感光面所有部位接收到的光量子达到最高值,从而成像面2各个部位获得的影像清晰度最高且一致,实现整幅画的画质达到最佳状态,清晰度远高于平面成像面2成像方法获得的影像清晰度,该球面成像装置获得的球面形画面的影像,用与之对应的球面形结构显示面的屏幕还原显示,其在保留了原有高清晰度的同时,整幅画面所有部位无变形、无虚化,保真度高、立体效果强。
同理,该球面成像装置模型下的投影放映装置,投影主机投射到投影屏幕上所有部位的光线均垂直于球面形投影屏幕的表面,投影屏幕表面各部位获得光量子的聚焦程度均达到最大值,使得整幅画的画质达到最佳状态的同时,还使得所有部位无变形、无虚化,保真度高,立体效果强。
其中,如图6所示,实施例2-5模型中,采用凸球面结构影像源3成像模型的相机的前端取景镜头为凸透镜,适用于广域源景物的影像拍摄,视场角可以以正视场角值进行计算;用显示面如图46、48、64、65所示的凸球面形结构的显示设备进行还原显示时,观众从该显示面的正面观看,看到的是变形的镜像画面,从该显示面的背面观看,看到的是不变形的清晰立体的本像画面,但所述显示屏必须为双面都可以观看的透明屏幕;用显示面如图47、49、50、57、62、63所示的凹球面形结构的显示设备进行还原显示时,观众从该显示面的正面观看,看到是不变形的清晰立体的本像画面。
如图7所示,实施例2-5模型中,采用凹球面结构影像源3成像模型的相机的前端取景镜头为凹透镜,适用于有限范围的局域源景物的影像拍摄,比如拍摄舞台场景、物品,拍摄时,把舞台或者物品居于凹球面形前端取景透明透镜的 焦点位置,所拍摄的影像画面最为清晰;视场角可以对应以负视场角进行计算;用显示面如图47、49、50、57、62、63所示凹球面形结构的显示设备进行显示还原时,观众从该显示面的正面观看,看到的是变形的镜像画面,从该显示面的背面观看,看到的是不变形的清晰立体的本像画面,但所述显示屏必须为双面都可以观看的透明屏幕;用显示面为如图46、48、、64、65所示凸球面形结构的显示设备还原显示时,观众从该显示面的正面即看到的是不变形的清晰立体的本像画面。
所以,取景镜头为凸透镜的球面形图像传感器相机拍摄输出的球面行图像文件一般采用显示面为凹球面的显示设备来显示,取景镜头为凹透镜的球面形图像传感器相机拍摄输出的球面行图像文件一般采用显示面为凸球面的显示设备来显示。
如图8-18所示，进一步改进在于，实施例1-5模型中，成像面2、影像源3和辅助透镜5的球面结构包括常规球面结构、或菲涅尔球面结构、或多个菲涅尔球面结构体的结合形结构。其中，常规球面结构包括常规正球面结构、常规椭球面结构、常规抛物面结构;菲涅尔球面结构包括菲涅尔正球面结构、菲涅尔椭球面结构、菲涅尔抛物面结构。当成像面2为菲涅尔球面时，成像单元只分布在菲涅尔球面结构面的弧面部位，菲涅尔结构面的竖切面部位不分布成像单元。
如图17、18所示，需要进一步说明的是，菲涅尔球面结构包括常规菲涅尔球面结构和内凹式菲涅尔凹球面结构、外凸式菲涅尔凸球面结构。常规菲涅尔球面结构为一面为平整的水平面，一面为菲涅尔曲面，内凹式菲涅尔凹球面结构为常规菲涅尔凹球面从菲涅尔曲面一面的中心向内凹陷的结构，外凸式菲涅尔凸球面结构为常规菲涅尔凸球面从平整水平面的一面的中心向内凹陷的结构。采用不同的球面结构，可以达到不同的需求和目的;比如，成像面2或影像源3采用菲涅尔球面结构，可以制作出轻薄型的基于该成像方法的成像设备，成像面2或影像源3采用椭圆形球面结构，制作出来的基于该成像方法的成像设备，具备获取或还原显示更深远的景物影像的能力，而且具备获取或还原显示更宽域的侧向景象的能力;成像面2或影像源3采用抛物面结构的基于该成像方法的成像设备，在具备获取或还原更深远的景物影像能力的同时，还具备获取或还原显示宽域侧向场景更大光量子值的能力，使获取或还原显示的侧向影像画面更清晰。
实施例1-5球面成像装置的成像方法为实物成像法和虚拟成像法，如图19所示，当该成像方法为虚拟成像法时，成像件1为虚拟成像件、成像面2为虚拟成像面、影像源3为虚拟影像源、光学镜片组合4为虚拟光学镜片组合、辅助透镜5为虚拟辅助透镜;虚拟成像为在虚拟世界场景中采取该成像方法获取或者显示球面形图像的行为。也就是说，在图像设计制作工具当中，提供一种基于该成像方法的在虚拟场景中获取影像或者显示影像的功能。
在成像面2上的成像单元按照一定规律进行布局,不仅便于成像单元数值的读取或写入,提高读写效率,而且针对不同的应用对象,可以灵活选择和实施不同的成像单元布局方式。
实施例6。如图20所示,一种成像单元的布局方法,所述成像单元在成像面2上以球面形成像面2的一边边缘为起始布局横线,同一横线上相邻成像单元的间距相等,相邻横线的间距相等且等于同一横线上相邻成像单元的间距,此处间距指的是两点或两条线之间沿球表面的距离。
实施例7。如图21所示,一种成像单元的布局方法,所述成像单元在成像面2上以球面形成像面2的中心为极点进行纬线布局,同一纬线上相邻成像单元的间距相等,相邻纬线的间距相等且等于同一纬线上相邻成像单元的间距,此处间距指的是两点或两条线之间沿球表面的距离。
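以实施例7的纬线布局为例，下面给出一个在半球形成像面上按"相邻纬线的间距相等、同一纬线上相邻成像单元的间距相等且两者相等"生成成像单元三维坐标的示意性Python草图(编辑补充的假设性示例，半径、间距等参数均为示例值，并非本申请的实现)。

import numpy as np

def latitude_layout(radius=1.0, spacing=0.1):
    """在以成像面中心为极点的半球面上按纬线方式布局成像单元，
    相邻纬线的弧长间距与同一纬线上相邻单元的弧长间距均约等于 spacing。"""
    units = []
    n_lat = int((np.pi / 2 * radius) / spacing)        # 从极点到半球边缘的纬线条数
    for i in range(n_lat + 1):
        polar = i * spacing / radius                   # 极角 = 经向弧长 / 半径
        ring_r = radius * np.sin(polar)                # 该纬线圈的半径
        n_units = max(1, int(round(2 * np.pi * ring_r / spacing)))
        for k in range(n_units):
            az = 2 * np.pi * k / n_units               # 方位角
            units.append((ring_r * np.cos(az), ring_r * np.sin(az), radius * np.cos(polar)))
    return units

print(len(latitude_layout()))                          # 布局出的成像单元总数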
实施例8。如图22所示，一种成像单元的布局方法：以成像面2的中心点或接近中心点的点为起始点、以成像面2边缘的某一点为终点标记一条或多条螺旋线，将成像单元的中心点放置在所述螺旋线上，同一条螺旋线上的相邻成像单元的间距相等且等于相邻螺旋线的间距;此处间距指的是两点或两条线之间沿球表面的距离。
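对应实施例8的螺旋线布局，下面给出一个先在球冠形成像面上生成等螺距螺旋线、再沿螺旋线按等弧长放置成像单元的示意性Python草图(编辑补充的假设性示例，采样方式与参数均为示例约定)。

import numpy as np

def spiral_layout(radius=1.0, turn_spacing=0.1, unit_spacing=0.1, cap_angle=np.pi / 2):
    """实施例8布局思路的一种示意性实现：螺旋线相邻圈的间距约为 turn_spacing，
    同一条螺旋线上相邻成像单元的间距约为 unit_spacing。"""
    s_max = radius * cap_angle                         # 从中心极点到成像面边缘的经向弧长
    n_turns = s_max / turn_spacing                     # 螺旋圈数
    t = np.linspace(0.0, 1.0, 20000)                   # 密集采样螺旋参数
    polar = t * cap_angle                              # 极角由中心0增大到球冠边缘
    azimuth = t * n_turns * 2 * np.pi
    pts = np.stack([radius * np.sin(polar) * np.cos(azimuth),
                    radius * np.sin(polar) * np.sin(azimuth),
                    radius * np.cos(polar)], axis=1)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1) # 相邻采样点间的弦长
    arc = np.concatenate([[0.0], np.cumsum(seg)])      # 沿螺旋线的累计弧长
    targets = np.arange(0.0, arc[-1], unit_spacing)    # 等弧长取点位置
    return pts[np.searchsorted(arc, targets)]

print(spiral_layout().shape)                           # (成像单元数, 3)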
实施例9。如图23所示,一种成像单元的布局方法:所述成像单元以成像面2中心位置为极点,分成若干数量的等夹角的经线,在每条经线上等间距布置有成像单元。
实施例10。所述成像单元在成像面上不以任何点、线、面为参照,无条件等间距分布在成像面的表面上。
实施例6-10的成像单元布局中，相对于经线布局，成像单元以横线、纬线、螺旋线布局的成像面2上的画面各部位的清晰度一致;相对于横线布局，纬线布局、螺旋线布局的难度系数更小，成像单元的读写更容易、读取速度更快;所以，纬线布局、螺旋线布局可以作为球面形成像面2(图像传感器的感光面、显示设备的显示面、图像文件页画面)较为常用的布局方法。实施例10则可以作为一种相对简单且较为容易的布局方法，比如，应用于球面形图像传感器相机底片、球面形投影放映设备投影屏幕表面上的成像单元的涂刷式布局。
实施例11。在向成像面2上的成像单元实施图像单元数值的读取或写入时,对成像单元进行矩阵化处理后再进行扫描读写,可简化读写算法、提高读写效率。
如图24、25所示,一种矩阵化扫描方法,将成像面2上呈纬线/横线分布的成像单元作为实际成像单元,以最长纬线/横线上的实际成像单元数量为基准数,对其它纬线/横线上不足基准数用虚拟成像单元进行补充,以使其它纬线/横线上实际成像单元与补充的虚拟成像单元之和达到基准数,将达到基准数的相等数量成像单元的纬线/横线作为一行;将上述方式获得的行作为行,把成像面2上所有成像单元的纬线/横线数量作为列,组成虚拟行列矩阵,实施扫描。
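实施例11的"以最长纬线/横线为基准数、其余行用虚拟成像单元补足"的矩阵化过程，可用如下示意性Python草图表达(编辑补充的假设性示例，其中用-1作为虚拟成像单元的占位标记，并非本申请规定的取值)。

import numpy as np

VIRTUAL = -1   # 虚拟成像单元的占位标记(示例假设值)

def pad_rows_to_matrix(rows):
    """以最长纬线/横线上的实际成像单元数为基准数，其余行补虚拟单元，组成虚拟行列矩阵。
    rows 为各条纬线/横线上实际成像单元数值的列表。"""
    base = max(len(r) for r in rows)                   # 基准数
    matrix = np.full((len(rows), base), VIRTUAL, dtype=float)
    for i, r in enumerate(rows):
        matrix[i, :len(r)] = r
    return matrix

# 示例：三条纬线上的实际成像单元数分别为1、4、8
rows = [[10], [11, 12, 13, 14], [20, 21, 22, 23, 24, 25, 26, 27]]
print(pad_rows_to_matrix(rows))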
实施例12。一种矩阵化扫描方法,将成像面2上呈纬线/横线分布的成像单元作为实际成像单元,以给定数量的实际成像单元数量为参照值,将其中一条纬线/横线作为起始线,逐线逐点进行虚拟摘取实际成像单元,若起始纬线/横线上虚拟摘取的实际成像单元数量达到参照值,则作为一条虚拟行记入第一行,若未达到参照值,则从相邻的下一条纬线/横线上继续虚拟摘取成像单元,直到达到参照值时作为一条虚拟行记入第一行,而纬线/横线上虚拟摘取剩余的实际成像单元则计入下一个虚拟行的虚拟摘取工作中去,以此类推,直至成像面2上的最后一条纬线/横线的实际成像单元全部虚拟摘取完,且当最后一次虚拟摘取的实际成像单元未能达到参照值时,则用虚拟成像单元补充;最后将上述方式获得的行作为行,以行的总数量为列,组成虚拟行列矩阵,实施扫描。
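实施例12按给定参照值跨纬线/横线"虚拟摘取"成像单元组行的做法，逻辑上等价于把成像单元按纬线顺序展开后按固定长度重排、末行不足时补虚拟单元，下面给出一个示意性Python草图(编辑补充的假设性示例)。

import numpy as np

VIRTUAL = -1   # 虚拟成像单元的占位标记(示例假设值)

def regroup_to_matrix(rows, ref):
    """以参照值 ref 为虚拟行长度，按纬线/横线顺序逐点摘取实际成像单元，
    跨行补足，最后一行不足参照值时用虚拟成像单元补齐。"""
    flat = [v for r in rows for v in r]                # 按纬线顺序展开全部实际成像单元
    n_rows = -(-len(flat) // ref)                      # 向上取整得到虚拟行数
    flat += [VIRTUAL] * (n_rows * ref - len(flat))     # 末行补虚拟单元
    return np.array(flat, dtype=float).reshape(n_rows, ref)

rows = [[10], [11, 12, 13, 14], [20, 21, 22, 23, 24, 25, 26, 27]]
print(regroup_to_matrix(rows, ref=5))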
实施例13。如图24、25所示,一种矩阵化扫描方法,将成像单元以经线、纬线、横线、螺旋线、无参照对象等间距分布方式分布的成像面2分成一个或多个等面积或不等面积的区块,划分的原则为每个区块内的成像单元数量相等且等于参照值,当成像面2内区块的成像单元数量少于参照值时,用虚拟成像单元补充达到参照值,将每个区块的相等数量的成像单元视为一虚拟行;将上述方式获得的行作为行,所有区块的数量作为列,形成虚拟行列矩阵,实施扫描;其中,图25所示的扫描方法为等面积的扇区区块的矩阵化扫描法。
实施例14。如图26所示，一种矩阵化扫描方法，将成像面2上经过球面结构中心点的任意一条经线作为虚拟经线，所述虚拟经线以垂直于球面且经过球面中心点的直径线为旋转轴，进行顺时针或逆时针旋转，把提前设定的一个时间段且该时间段小于等于该虚拟经线旋转一周的时间下虚拟经线切割成像面2上以纬线方式、横线方式、螺旋线方式、无参照对象等间距分布布局的相等数量的成像单元作为一条虚拟行;将上述方式获得的行作为行，把虚拟经线旋转一圈所获得的虚拟行的数量作为列，组成虚拟行列矩阵，实施扫描。
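实施例14的虚拟经线旋转切割，可理解为按方位角把成像单元划入等时间段(即等方位角区间)，每个时间段内切割到的成像单元构成一条虚拟行；下面是一个示意性Python草图(编辑补充的假设性示例，当各时间段内的成像单元数不完全相等时，沿用前述用虚拟单元补齐的做法)。

import numpy as np

def meridian_sweep_matrix(azimuths, values, n_slices):
    """azimuths 为各成像单元的方位角(弧度)，values 为对应的成像单元数值，
    n_slices 为虚拟经线旋转一周划分的时间段数；返回虚拟行列矩阵。"""
    order = np.argsort(azimuths)                       # 按被虚拟经线切割到的先后排序
    az, vals = np.asarray(azimuths)[order], np.asarray(values)[order]
    edges = np.linspace(0.0, 2 * np.pi, n_slices + 1)  # 各时间段对应的方位角区间
    rows = [vals[(az >= edges[i]) & (az < edges[i + 1])].tolist() for i in range(n_slices)]
    width = max(len(r) for r in rows)
    return np.array([r + [-1] * (width - len(r)) for r in rows], dtype=float)  # -1为虚拟单元

rng = np.random.default_rng(0)
az = rng.uniform(0.0, 2 * np.pi, 50)                   # 50个成像单元的方位角(示例数据)
print(meridian_sweep_matrix(az, np.arange(50), n_slices=8).shape)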
实施例15。如图27所示,一种矩阵化扫描方法,将成像单元以经线方式分布于成像面2上的成像单元数量相等的每一条经线作为一行,所有经线数量作为列,形成虚拟行列矩阵,实施扫描。
实施例16。一种矩阵化扫描方法,把采用螺旋线布局法的成像单元分成等数量的若干份,从所述螺旋线起点的第一个成像单元开始选择该数量的成像单元作为一个虚拟行,选择至螺旋线上的最后一个成像单元为止;以选择的所述等数量的成像单元作为虚拟行,以全部虚拟行的行数作为虚拟列,形成行虚拟列矩阵,实施扫描。
实施例17。一种矩阵化扫描方法,按照实施例12-16方法获得虚拟行列矩阵后,对成像单元进行隔点取样,奇数组组成一个矩阵、偶数组组成一个矩阵,两个矩阵分别匹配接收同一幅画面的不同视角图像矩阵数据,用于显示双视角图像视频。
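实施例17的隔点取样可以有多种具体取法，下面给出一种棋盘式取样的示意性Python草图(编辑补充的假设性示例)：奇数组、偶数组各自组成一个矩阵，分别接收同一画面两个视角的图像矩阵数据。

import numpy as np

def split_dual_view(matrix):
    """对虚拟行列矩阵隔点取样，返回奇数组矩阵与偶数组矩阵(空位以nan占位)。"""
    m = np.asarray(matrix)
    i, j = np.indices(m.shape)
    odd_mask = (i + j) % 2 == 0                        # 一种隔点取样方式(示例约定)
    return np.where(odd_mask, m, np.nan), np.where(~odd_mask, m, np.nan)

odd, even = split_dual_view(np.arange(16).reshape(4, 4))
print(odd)                                             # 奇数组矩阵，用于其中一个视角
print(even)                                            # 偶数组矩阵，用于另一个视角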
实施例18。图像传感器是利用光电器件的光电转换功能将感光面上的光像转换为与光像成相应比例关系的电信号。现有的图像传感器产品的感光面以平面为主,所获取的图像普遍存在画面模糊虚化、畸变现象,拍摄图像应用到VR领域,VR画面缝合存在色差、错位、画损等问题。
如图30、并结合31、32所示，一种图像传感器，为基于实施例1球面成像模型的球面形图像传感器，包括图像传感器封装壳18-7，封装壳18-7内部包括感光件18-1、与感光件18-1相连的矩阵生成器18-3、与矩阵生成器18-3连接的数据读取器18-5、与数据读取器18-5连接的图像信息处理器18-6;感光件18-1的感光面18-2为完整球面的一部分，二分之一完整球面的结构为球面形图像传感器的常用结构;感光面18-2上按照实施例6-10中任意一种布局方式布局有感光单元，所述感光单元的受光面与其所在感光面部位球面的切面平行;所述图像传感器安装在相机上时，必须使投射到感光面18-2上的景物光线全部垂直于感光面18-2上所有部位的感光单元受光面，所述图像传感器工作时，矩阵生成器18-3把感光面18-2上非矩阵布局的感光单元经矩阵生成器18-3内置的包含实施例11-16中的一种或多种成像单元矩阵化方法的逻辑电路或处理程序处理生成对应的虚拟矩阵18-4，数据读取器18-5按照虚拟矩阵18-4把感光面18-2上感光单元获取的图像信息读取后传送给图像信息处理器18-6，图像信息处理器18-6把输入进来的图像信息处理成图像数字信号，输出图像像素布局与感光单元布局方式相同、图像像素矩阵形式与数据读取器18-5读取图像信息时使用的矩阵方式相同的图像数字信号集。
图31、32中的A、B、C、D、E分别是感光面18-2为常规凹球面、常规凸球面、菲涅尔凹球面、菲涅尔凸球面、菲涅尔凹球面组合的图像传感器;为满足各种相机的需求，所述球面形图像传感器既可以如图31所示单独封装，也可以如图32所示把球面形图像传感器与一面为平面、一面为球面的球形透镜18-8一起封装在封装壳18-7内;封装时，球形透镜平面的一面朝向感光口，球面的一面背向感光口，且所述球形透镜的焦点与球面图像传感器的球心重合;其作为相机部件安装于相机内部时，设置在感光口与射入光线垂直的位置。其中，二分之一完整球面形状感光面的单独封装的图像传感器，以及二分之一完整球面形状感光面的图像传感器与二分之一球面透镜组合封装的图像传感器，可以作为球面形图像传感器相机的常用结构类型。
本实施例可以作为通用型或标准型球面形图像传感器，应用在各种球面形图像传感器相机或者摄像头上，有助于减少甚至消除拍摄画面的虚化、畸变现象，提高画面清晰度，提高VR画面缝合的效果，以及输出更适配于各种球面显示屏的图像播放文件。
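上述"矩阵生成器—数据读取器—图像信息处理器"的读出流程，可用如下示意性Python草图表达(编辑补充的假设性示例，类名、接口与回调均为编辑假设，仅示意数据流向，并非本申请图像传感器的实现)。

import numpy as np

class SphericalSensorReader:
    """示意球面形图像传感器的读出流程：先把非矩阵布局的感光单元组织成虚拟矩阵，
    再按该矩阵逐行读取感光数据。"""

    def __init__(self, unit_ids_per_latitude):
        # 矩阵生成器：此处采用实施例11的补齐方式生成虚拟矩阵(元素为感光单元编号)
        base = max(len(r) for r in unit_ids_per_latitude)
        self.virtual_matrix = [r + [None] * (base - len(r)) for r in unit_ids_per_latitude]

    def read(self, sample_unit):
        """数据读取器：按虚拟矩阵逐行读取感光单元数值，虚拟单元读出0。
        sample_unit(unit_id) 为读取单个感光单元数值的回调(假设接口)。"""
        return np.array([[sample_unit(u) if u is not None else 0.0 for u in row]
                         for row in self.virtual_matrix])

# 示例：三条纬线上的感光单元编号，读出值用编号的平方来模拟
reader = SphericalSensorReader([[0], [1, 2, 3, 4], [5, 6, 7, 8, 9, 10]])
print(reader.read(lambda u: float(u) ** 2))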
实施例19。目前相机的图像传感器均是平面形结构或以平面形结构为基础的变化型结构，其存在：拍摄画面易出现虚化、变形现象;难以实现画面全局的较高和一致的清晰度;画面景深不足，且基本无立体效果;针对高清、大视角画面拍摄，对拍摄技术要求高，以及需要反复拍摄调试;拍制VR画面，需要依赖VR成形缝合软件，且缝合效果依然存在错位、色差、画损等缺陷。
如图33并结合34所示，一种相机，为基于实施例3球面成像模型的相机，包括机身19-1和镜头19-9，机身19-1内部的暗箱19-5内由前往后设有快门19-4、辅助透镜19-6、图像传感器19-7和图像数据处理模块19-8;镜头19-9的最前端设有取景透镜19-2，镜头19-9的镜筒内部设有透镜组合19-3;图像传感器19-7为实施例18的球面形感光面的图像传感器;辅助透镜19-6为球形透镜，当图像传感器19-7为与球形透镜封装在一起的部件时，辅助透镜19-6位于图像传感器19-7的封装壳内，当图像传感器19-7为独自封装的部件时，辅助透镜19-6位于图像传感器19-7的外部、机身19-1的内部;辅助透镜19-6的焦点与图像传感器19-7感光面的球心点重合、中心轴线与图像传感器19-7的中心轴线重合;
辅助透镜19-6用于与透镜组合19-3配合使所有进入相机内部的光线均垂直照射在图像传感器19-7感光面上的所有感光单元,从而使得不管远景还是近景、画面中心还是画面边缘的景物均清晰不变形;图像数据处理模块19-8用于将从图像传感器19-7上获取的图像数据集处理成各种格式的图像文件或合成球面图像。
所述辅助透镜19-6采用双面均为球面的结构,或者采用朝向图像传感器19-7的一面为球面,朝向快门19-4的一面为平面的结构。
图34是图像传感器、辅助透镜、取景透镜选用不同形状结构产生的不同类型的相机,以满足各种场景需要。
本相机为球面形图像传感器相机的常用结构，在本相机基础上，结合实施例1、2、4、5球面成像模型以及基于实施例2-5球面成像模型的扩展模型，增加或减少部件或调整相机结构，可以制作更多类型的相机;另外，本实施例相机与其它设备搭配或者自身组合，可以制作出更多样化用途的相机或拍摄设备，如图37所示的由两个相机组成的一种全景相机、如图38所示的由两个相机组成的一种双视角3D相机、如图39所示的由四个相机组成的一种双视角3D全景相机。
相对于目前的平面形图像传感器相机，球面形图像传感器相机的图像传感器感光面最佳受光面得以大幅提升，从而显著提高了拍摄画面的视场范围和画面整体品质，降低了大视场角画面拍摄的难度，简化了VR图像合成流程、提高了VR图像合成的效率和质量，且所述相机直接输出的图像以及合成的VR图像在对应的球面形显示屏上播放，呈现出的画面是变形程度很小或零变形、逼真度高、立体感更强、全局更清晰的影像。
实施例20。现有的全景图像是采用一部或多部平面图像传感器相机拍摄输出的多张平面图像球面化处理后缝合而成的广域源或局域源全景图像文件,拍摄图像画面存在变形/虚化/拉伸/坍缩的现象,拼接成的球面形全景图像存在画面错位、损伤、色彩过渡僵硬不自然等不足之处。
一种全景图像的拍摄方法,包括全景图像拍摄设备,所述设备包括球面形图像传感器相机20-1、相机承载体20-2、图像处理系统20-3;
如图38所示，相机承载体20-2上设有一部凸球面形取景透镜的球面形图像传感器相机的全景图像拍摄设备，在自身驱动装置或者人力操控下，所述相机20-1以设备自身上的某一点或者其所在空间的某一点为中心点，拍摄口背对着所述中心点实施旋转拍摄，图像处理系统20-3把拍摄到的覆盖所有方向的多张画面形状为部分球面形状的图像拼接成完整球面形画面的全景图像文件保存、或输出;
如图39所示,相机承载体20-2上设置有多部凸球面形取景透镜的球面形图像传感器相机的全景图像拍摄设备,以设备自身上的某一点或者设备所在空间的某一点为中心点,多部所述相机20-1的拍摄口面背向所述中心点设置在所述相机承载体20-2实物球面形或者虚拟球面形的承载面承载点上,且当所述多部相机20-1能够覆盖所述中心点以外所有方向的景象时,所述相机20-1在静止不动的状态下实施拍摄;当所述多部相机20-1只能覆盖所述中心点以外部分方向的景象时,所述相机20-1以所述中心点为中心旋转拍摄;图像处理系统20-3把所述相机20-1拍摄到的覆盖所有方向的多张画面形状为部分球面的图像拼接成完整球面形画面的全景图像文件保存、或输出;
凸球面形取景透镜的球面形图像传感器相机以拍摄空间的某一点为中心点，拍摄口背向着所述中心点拍摄的所有方向景象为来自于广域范围的景象，该方法拍摄输出的全景图像这里简称为广域源全景图像。
如图40、42所示,相机承载体20-2上设置有一部凹球面形取景透镜的球面形图像传感器相机的全景图像拍摄设备,以拍摄对象场景中的某一点为中心点,所述相机20-1拍摄口朝向所述中心点,在半径大于拍摄对象所占空间半径的实物球面上或者虚拟球面上进行旋转拍摄,图像处理系统20-3把拍摄到的覆盖拍摄对象的所有方向场景的多张画面形状为部分球面形状的图像拼接成完整球面形画面的全景图像文件输出;
如图41所示，相机承载体20-2上设置有多部凹球面形取景透镜的球面形图像传感器相机的全景图像拍摄设备，以拍摄对象所在空间中的某一点为中心点，多部所述相机20-1的拍摄口面朝向所述中心点布局在所述相机承载体20-2的实物球面形或者虚拟球面形的承载面的承载点上，当所述多部相机20-1能够覆盖拍摄对象的所有方向的景象时，所述相机20-1在静止不动状态下实施拍摄;当所述多部相机20-1只能覆盖拍摄对象的部分方向的景象时，所述相机20-1以所述中心点为中心旋转拍摄;图像处理系统20-3把拍摄到的覆盖所有方向的多张画面形状为部分球面的图像拼接成完整球面形画面的全景图像文件输出;
凹球面形取景透镜的球面形图像传感器相机以拍摄空间的某一点为中心点，拍摄口朝向着所述中心点拍摄的所有方向景象为来自所述中心点与相机所在的虚拟球面或者实物球面之间有限范围的景象，该方法拍摄输出的全景图像这里简称为局域源全景图像。
如图43所示,同一场景下,同时采取广域源全景图像拍摄法和局域源全景图像拍摄法进行全景图像拍摄,并分别输出的对应全景图像文件分别在不同的独立全景图像显示设备上进行显示。
本实施例采用球面形图像传感器相机组成的拍摄设备输出的全景图像，一方面球面形图像传感器拍摄输出的原始图像本身比平面图像传感器相机输出的原始图像的清晰度高，且无变形虚化现象或变形虚化现象极小，另一方面，球面图像传感器相机拍摄输出的原始图像原本就是球面形图像，合成全景图像时，无需球面化处理的环节，也就没有如平面形图像球面化处理过程中的画面破坏、损伤的现象，且原本就为球面形图像缝合成球面形全景图像，拼接时，像素原坐标不会发生破坏，不存在平面形图像拼接部位吻合度低的现象;从而使得该方法拍摄输出的全景图像的画质更高，合成效率也更高。
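上述"原本就是球面形图像、拼接时无需球面化处理"的思路，可用如下示意性Python草图说明(编辑补充的假设性示例，局部图像的数据形式与重叠处取平均的处理方式均为编辑假设)。

def stitch_spherical(partials):
    """把若干局部球面图像按球面坐标直接合并为全景图像；
    partials 为若干幅局部图像，每幅为 {(纬度索引, 经度索引): 像素值} 的字典。"""
    merged, counts = {}, {}
    for img in partials:
        for coord, value in img.items():
            merged[coord] = merged.get(coord, 0.0) + value   # 坐标已在球面上，直接按坐标累加
            counts[coord] = counts.get(coord, 0) + 1
    return {c: merged[c] / counts[c] for c in merged}        # 重叠处取平均(示例处理方式)

# 示例：两幅有部分重叠的局部球面图像
a = {(0, 0): 1.0, (0, 1): 2.0, (1, 0): 3.0}
b = {(0, 1): 2.0, (1, 1): 4.0, (2, 0): 5.0}
print(stitch_spherical([a, b]))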
实施例21。目前的OLED或液晶显示屏为平面显示屏或者为在平面显示屏基础上且未脱离平面显示屏基础物理结构和显示方法的变化的显示屏,该显示屏不易实现立体的显示效果;现在市面上的球面屏也能一定程度提升显示的立体效果,但由于受到传统成像方法、像素布局、扫描方式、图像文件及图像处理方式的制约,不仅显示立体效果依然不明显,而且还存在显示画面分辨率不均匀、变形、响应速度慢的现象。
如图44并结合图45所示，一种球面形显示面的显示屏，为基于实施例1球面成像模型的显示设备，包括外部的壳件21-6，位于壳件21-6前部的显示面21-2呈球面形结构的显示件21-1;在显示面21-2上，按照实施例6-10中任意一种布局方式布局有显示像素21-3，且显示像素21-3在其通电时发出的光垂直于显示面21-2各显示像素所在的部位;显示像素21-3与扫描模组21-4相连，扫描模组21-4的显示像素矩阵化器内包含实施例11-16中的一种或者多种显示像素的矩阵生成逻辑电路或者程序指令，所述显示屏工作时，显示像素矩阵化器通过矩阵生成逻辑电路或者程序指令把显示面上非矩阵排列的显示像素提前矩阵化、形成显示像素矩阵待用;
图像处理模组21-5内的图像像素矩阵化器内包含实施例11-16中的一种或者多种图像像素的矩阵生成逻辑电路或者程序指令，且所述图像像素的矩阵生成逻辑电路或者程序指令的矩阵类型与扫描模组内的显示像素的矩阵生成逻辑电路或者程序指令的矩阵类型相同;
所述显示屏显示图像时，图像处理模组21-5中的图像判断程序把已经被矩阵化的球面形图像数据集文件直接输送给扫描模组21-4内的匹配器、把图像像素未被矩阵化的球面形图像文件通过图像像素矩阵化器矩阵化并将矩阵化的球面形图像数据输送给扫描模组21-4内的匹配器、把平面图像通过图像处理模组内的图像转化器转化成球面形图像后再通过图像像素矩阵化器矩阵化后传送给扫描模组21-4内的匹配器，匹配成功后，扫描模组21-4按照对应矩阵把图像像素的数据扫描写入显示面21-2上对应的显示像素21-3，实现图像显示。其中，显示面为凹球面(比如图47、49所示)的显示屏用于显示取景透镜19-2为凸球面形的球面形图像传感器相机拍摄直接输出的图像以及取景透镜19-2为凸球面形的球面形图像传感器相机拍摄输出的图像缝合而成的广域源全景图像、平面形图像传感器相机拍摄输出的图像球面化缝合而成的广域源全景图像。
显示面为凸球面(比如图46、48所示)的显示屏用于显示取景透镜19-2为凹球面形的球面形图像传感器相机拍摄直接输出的图像以及取景透镜19-2为凹球面形的球面形图像传感器相机拍摄输出的图像缝合而成的局域源全景图像、平面形图像传感器相机拍摄输出的图像球面化缝合而成的局域源全景图像;本实施例显示屏的显示面为如图46、47所示的常规球面、或者为如图48、49所示的菲涅尔球面,或菲涅尔球面的组合;其中显示面采用菲涅尔球面结构,可以制作出轻薄型的球面显示屏,所述轻薄型球面屏应用到笔记本电脑、手机上,在实现球面屏带来的高清立体效果的同时,还便于携带。
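实施例21中图像判断程序对三类输入图像的分流逻辑，可用如下示意性Python草图表达(编辑补充的假设性示例，函数名与参数均为编辑假设的占位接口)。

def route_image(image, is_spherical, is_matrixed, matrixer, to_spherical, matcher):
    """已矩阵化的球面图像直接送匹配器；未矩阵化的球面图像先矩阵化；
    平面图像先转化为球面图像、再矩阵化，最后送匹配器。"""
    if is_spherical and is_matrixed:
        return matcher(image)
    if is_spherical:
        return matcher(matrixer(image))
    return matcher(matrixer(to_spherical(image)))

# 示例：用占位函数演示三类输入的流向
matrixer = lambda img: ("已矩阵化", img)
to_spherical = lambda img: ("已球面化", img)
matcher = lambda img: ("已匹配", img)
print(route_image("球面形且已矩阵化的图像", True, True, matrixer, to_spherical, matcher))
print(route_image("平面图像", False, False, matrixer, to_spherical, matcher))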
相对于基于平面成像模型的平面显示屏，基于球面成像模型的本实施例显示屏，不仅显示的画面更清晰、无变形，而且呈现出的立体效果更强烈、裸眼即可观看到立体画面;采用所述本实施例球面屏的VR设备，由于球面屏与球面形画面的VR图像匹配度更高，从而显示画面清晰度得以提高的同时，画面的观看视场角和景深也得以明显提高，画面颗粒感、对焦难度显著降低，如果配合播放来自球面成像模型的球面形图像传感器相机的拍摄设备输出的全景图像，所述VR设备显示的画面品质会更上一层楼，达到较高的境界。
实施例22。现有扇机显示屏采用镂空画面播放方式，显示屏置于距地面一定高度，播放的镂空画面悬于空中，从而给人以空中成像的观感。但现有扇机显示屏的立体效果仍然有限，且其播放对象单一、应用场景较窄。
如图50并结合图51所示，一种弧形杆扇机显示屏，为基于实施例1球面成像模型的显示设备，包括控制机构22-1和扇叶22-4;控制机构22-1包括控制板22-3与驱动电机22-2，驱动电机22-2的驱动端与扇叶22-4相连，扇叶22-4为弧形结构的杆件;扇叶22-4为一组或多组，运行时其被连接的驱动电机22-2带动发生旋转，旋转呈球面状;在扇叶22-4面向观众的一面的弧形外表面上置放有灯珠22-5，灯珠22-5与控制板22-3电性连接，灯珠22-5通电时所发出的光垂直于弧形杆该灯珠所在部位的弧面，控制板22-3内的图像处理模块将输入的视频解析成像素坐标值、颜色值、灰阶值的图像数值，并把图像数值传送给扇叶22-4上对应位置的灯珠22-5，灯珠22-5接收到图像数值后点亮，发出对应颜色和灰阶的亮度，结合扇叶旋转，实现画面的显示。
基于弧形杆扇叶22-4旋转形成常规球面或菲涅尔球面的要求，扇叶22-4的形状为如图52所示用两条间距相对较小的平行线截取常规球面或菲涅尔球面所获得的、位于所述平行线之间的球面状结构的多种对应形状结构件。
灯珠22-5如图53、55所示按照弧线方式布局在扇叶22-4的表面上,控制板22-3内的图像处理模块按照实施例14的矩阵化扫描方式把图像信号传送给灯珠22-5,灯珠对应点亮,呈现画面;灯珠22-5如图54、56所示按照扇形方式布局在扇叶22-4的表面上,控制板22-3内的图像处理模块按照实施例13的矩阵化扫描方式把图像信号传送给灯珠22-5,灯珠对应点亮,呈现画面;
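扇机显示屏把一幅球面形画面转换为"旋转时隙×灯珠序号"的点亮数值表的过程，可用如下示意性Python草图说明(编辑补充的假设性示例，画面取值接口、时隙数与灯珠数均为示例假设)。

import numpy as np

def frame_to_bead_schedule(frame, n_beads, n_slots):
    """frame(polar, azimuth) 返回画面在该方向上的亮度(假设接口)；
    n_beads 为弧形杆上的灯珠数，n_slots 为扇叶旋转一周划分的时隙数；
    返回每个时隙下各灯珠应点亮的数值表。"""
    schedule = np.zeros((n_slots, n_beads))
    for s in range(n_slots):
        azimuth = 2 * np.pi * s / n_slots              # 该时隙扇叶所处的方位角
        for b in range(n_beads):
            polar = np.pi * b / (n_beads - 1)          # 灯珠沿弧形杆对应的极角
            schedule[s, b] = frame(polar, azimuth)
    return schedule

# 示例：亮度随极角变化的测试画面
print(frame_to_bead_schedule(lambda p, a: np.cos(p) ** 2, n_beads=8, n_slots=12).shape)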
本实施例是实施例21显示屏的一种变化,比现有扇机显示屏呈现画面的立体效果更强,播放的图像文件更广泛,可直接播放各种球面形、间接播放平面形图像视频文件,同时应用场景更广泛,不仅可用于商家视觉引流,还可以应用到VR设备上,这是现有扇机显示屏所做不到的。
实施例23。如图57所示，一种球面图像投影装置，为基于实施例2球面成像模型的显示设备，所述投影装置包括投影主机23-1，显示面为球面形状的被投显示屏幕23-2，投影主机支架23-3;所述投影主机23-1为静态投影源型主机、动态投影源型主机;静态投影源型主机发射出去的图像信号粒子为球面状，动态投影源型主机发射出去的图像信号粒子为弧线状或者点状，所述弧线状或者点状图像信号粒子通过运动形成球面状图像信号粒子面并扫描投射到被投显示屏幕23-2的球面形显示面上;如图58所示，静态投影源型主机包括球形光源23-102、包裹在球形光源23-102外部的球形透明投屏23-101，图像处理系统23-103;球形光源23-102、球形透明投屏23-101和被投显示屏幕23-2的球心位于同一点上;工作时，球形光源23-102点亮，把球形透明投屏23-101播放的球面形图像视频投射到被投球面形显示屏幕23-2上，且所述图像视频光线与显示屏幕23-2所有部位的入射角均为90°，呈现画面。
如图59、60所示，动态投影源型主机包括图像信号粒子发射枪23-104、动作装置23-105、动作指令系统23-106、图像处理系统23-103;其中，如图59所示的图像信号粒子发射枪23-104为以弧线方式布局在弧形杆扇叶上面朝被投显示屏幕23-2方向的多个或多组发射孔的发射装置，如图60所示的图像信号粒子发射枪23-104为一个或一组发射孔的点状发射源发射装置，当发射孔为一个时，所述孔发射的图像粒子为单基色，当发射孔为多个且组成一组时，所述孔发射的图像粒子为多基色;所述图像信号粒子发射枪23-104发射出的图像信号粒子为可见光粒子或者非可见光粒子;当为非可见光粒子时，所述粒子通过激发被投显示屏幕23-2显示面表面上的显示单元或显示粒子，而产生对应图像色值的可见光实现显像;工作时，图像处理系统23-103把扫描动作指令发送给动作指令系统23-106，同时把图像像素色值发送给发射枪23-104，动作装置23-105带动发射枪23-104运动，发射枪23-104把对应图像色值的图像信号粒子发射到被投显示屏幕23-2的球面形显示面上，且所述图像信号粒子与显示屏幕23-2所有部位的入射角均为90°，呈现画面。
本实施例投影装置投射的图像画面比现有平面图像投影装置的清晰度更高、立体感更强、且裸眼即可看到立体效果,同时比现有球面形投影装置的片源更广,更便于推广普及。
实施例24。如图61所示，一种眼镜式全景显示设备，又称VR眼镜;包括：眼镜架24-1、球面形显示屏24-2、耳机24-3、文件处理模块24-4以及控制手柄等部件;球面形显示屏24-2为一片或者二片，设置于眼镜架24-1的镜框内并位于佩戴时观看者眼睛的正前方，其图像显示面为凹球面形结构或凸球面形结构;当球面形显示屏24-2为二片时，两片显示屏显示单视角相机拍摄并合成的全景图像文件的同一幅画面的同一区域同一视角的画面，或者分别显示双视角相机拍摄并合成的全景图像文件的同一幅画面的同一区域不同视角的画面。
所述球面显示屏24-2为实施例21、22、23中的任一显示设备的屏幕，其中，使用实施例21显示屏可以制作出轻薄型的VR眼镜。
采用显示面为凹球面的显示屏的所述全景VR眼镜用于播放广域源全景图像文件，观看者观看时转动头部，画面发生与转动方向相同、转动角度相等的转动改变;采用显示面为凸球面的显示屏的所述全景VR眼镜，用于播放局域源全景图像文件，观看者观看时转动头部，画面不发生对应变化，观看者通过控制手柄虚拟拨动画面，所述画面发生对应拨动方向的转动。
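观看广域源全景图像时"头部转动多少、画面同向等角转动"的交互关系，可用如下示意性Python草图表达(编辑补充的假设性示例，角度约定与参数名均为编辑假设)。

import math

def rotate_view(center_azimuth, center_polar, d_yaw, d_pitch):
    """头部水平转动 d_yaw、俯仰转动 d_pitch(弧度)后，返回新的画面中心方向。"""
    azimuth = (center_azimuth + d_yaw) % (2 * math.pi)         # 画面中心随头部同向等角转动
    polar = min(max(center_polar + d_pitch, 0.0), math.pi)     # 俯仰角限制在球面范围内
    return azimuth, polar

print(rotate_view(0.0, math.pi / 2, d_yaw=0.3, d_pitch=-0.1))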
与采用平面显示屏的现有VR眼镜相比，采用球面成像模型和方法的球面屏的本实施例VR眼镜显示的图像画面的视场角更大，清晰度更高，变形度和虚化程度更小、画质更好，且因球面屏无需像平面显示屏那样与凸透镜配合使用，所以所述画面不再存在因像素被凸透镜放大造成的颗粒感和画面散焦、画面对焦困难的现象，如果配合播放来自于球面成像模型的球面形图像传感器相机的拍摄设备输出的全景图像，所述VR设备显示的画面品质会更上一层楼，达到较高的境界;从而从根本上解决了阻碍现有VR眼镜发展的清晰度低、视场角窄、散焦等关键性缺陷及难题，为VR以及以VR显示为基础技术的元宇宙的发展和普及起到很大的助推作用。
实施例25。现有3D立体影院放映的画面立体效果仍然相对较低,沉浸度较小,而且多数情况下需要佩戴双色偏光眼镜观看,而佩戴双色偏光眼镜观看会大幅降低画面的观看亮度。
如图62、63所示,一种影院,包括:放映室25-1、显示面为凹球面的凹球面显示屏25-2、音响25-3、观看台25-4;
当凹球面显示屏25-2的显示面为完整球面的相对较少一部分时，凹球面显示屏25-2设置于放映室25-1内的一侧，观看台25-4为设置于放映室25-1内另一侧的斜坡，所述斜坡的坡面面向显示屏25-2的显示面，斜坡面上的观众观看区上部低于、下部高于凹球面显示屏25-2;该结构的影院用于显示放映取景透镜19-2为凸球面形的球面形图像传感器相机直接输出的球面形图像文件以及广域源全景图像视频文件。
当凹球面显示屏25-2的显示面为完整球面、或者除去其与放映室地面交汇处的小部分球面空缺、其他部位的球面为接近于完整球面时，凹球面显示屏25-2设置于放映室25-1的中间位置的上部空间并通过固定架与放映室25-1的内墙墙壁相连固定，其显示面把位于放映室25-1的地面中间区域的观看台25-4完全包裹，观看台25-4上的观众座椅可以上下左右调节观看角度;该结构的影院用于显示广域源全景图像文件。
本实施例影院呈现出来的画面为裸眼即可看到、向外部空间延伸的、高纵深感的、广域范围的立体全景影像。
实施例26。如图64、65所示，一种影院，包括：放映室25-1、显示面为凸球面的凸球面显示屏25-5、音响25-3、观看台25-4。当凸球面显示屏25-5的显示面为完整球面的相对较少一部分时，其设置于放映室25-1内的一侧，观看台25-4为设置于放映室25-1内另一侧的斜坡，所述斜坡的坡面面向凸球面显示屏25-5的显示面，斜坡面上的观众观看区上部低于、下部高于凸球面显示屏25-5;该结构影院用于显示取景透镜19-2为凹球面形的球面形图像传感器相机直接输出的球面形图像文件和局域源全景图像文件。
当凸球面显示屏25-5的显示面为完整球面或者除去与地面接触部分的球面空缺外、其他部位的球面接近完整球面时，凸球面显示屏25-5设置于放映室25-1的中间下部位置的地面上，观看台25-4为环绕于凸球面显示屏25-5四周的斜坡，斜坡面上的观看区为凸球面显示屏25-5显示面的球心与凸球面显示屏25-5显示面最高点之间垂直高度差之间的区域。该结构的影院适用于放映局域源全景图像视频。
本实施例影院展现出来的为裸眼即可看到的、凸立于观影厅25-1一侧或中央位置的无外部环境景象的、有限范围画面的、高立体效果的全景影像。
实施例27。如图66、67所示,一种影院,包括:放映室25-1、显示面为凹球面的凹球面显示屏25-2、显示面为凸球面的凸球面显示屏25-5、音响25-3、观看台25-4。其中,凹球面显示屏25-2的显示面直径大于凸球面显示屏25-5的显示面直径;
如图66所示,当凹球面显示屏25-2的显示面和凸球面显示屏25-5为完整球面或者除去其与放映室地面交汇处的小部分球面空缺、其他部位的球面为接近于完整球面时,凹球面显示屏25-2设置于放映室25-1中间靠近上部空间并通过固定架固定在放映室内墙墙壁上的位置;凸球面显示屏25-5设置于放映室25-1中间下部靠近地面或居于地面上的位置,并位于凹球面显示屏25-2的内部中间靠下的位置被凹球面显示屏的显示面完全包裹;观看台25-4位于经过凸球面显示屏25-5显示面球心的水平面与经过凹球面显示屏25-2显示面球心的水平面之间的区域且靠近凹球面显示屏25-2显示面上的环形圈带上,观看台25-4的座椅面向凸球面显示屏25-5的显示面。
如图67所示，当凹球面显示屏25-2的显示面和凸球面显示屏25-5的显示面为完整球面的相对较少一部分时，所述凹球面显示屏25-2和凸球面显示屏25-5设置于放映室内的同一侧，观看台为位于放映室内另一侧的平台或斜坡。
本实施例影院让观众既能观看到涵盖所有外围广域环境的、高纵深强立体画面，又能观看到与观众同处于该广域画面中的、凸立于观众面前的、虚拟立体人物及其局域画面;如此，观众与虚拟人和物同在一个虚拟世界中，实现更高沉浸度的混合现实视觉体验。
实施例25-27的凹球面显示屏25-2和凸球面显示屏25-5为实施例21的自发光显示屏，或者为实施例23的投影装置，或者为实施例22的扇机显示屏;由于扇机显示屏很难实现大尺寸显示，所以其只能用于微型全景影院。
上面结合附图及实施例描述了本发明的实施方式,实施例给出的结构并不构成对本发明的限制,本领域内熟练的技术人员可依据需要做出调整,在所附权利要求的范围内做出各种变形或修改均在保护范围内。

Claims (20)

  1. 一种成像装置,特征在于:包括成像面(2)为球面形结构的成像件(1),所述球面形成像面(2)所有部位与像源所发出的光在成像面(2)交汇处的夹角均为90°,成像面(2)上分布有若干成像单元。
  2. 根据权利要求1所述的一种成像装置,其特征在于:还包括球面形结构的影像源(3);成像时,所述影像源(3)发出的光照射到成像件(1)的成像面(2)上,且所述影像源(3)对射入的光线进行方向和角度调整,使最终照射到所述成像面(2)上的所有光线均垂直于成像面(2)表面的各部位对应位置。
  3. 根据权利要求1所述的一种成像装置,其特征在于:所述成像方法的模型中还包括光学镜片组合(4)和辅助透镜(5),所述光学镜片组合(4)和辅助透镜(5)位于光线的路径上,通过改变所述光学镜片组合(4)的属性和布局,使影像源(3)发出的光线的方向和路径发生变化,到达成像面(2)的距离和位置对应发生改变,从而使得成像面(2)可以按照需求放置在指定位置;所述辅助透镜(5)对成像面(2)的入射光线作进一步精准调整,以使得照射到所述成像面(2)上所有部位的入射光线均精准垂直于成像面(2)表面的各部位对应位置。
  4. 根据权利要求3所述的一种成像装置,其特征在于:所述辅助透镜(5)与所述成像面(2)的对称轴重合,成像面(2)、影像源(3)和辅助透镜(5)的球面形结构类型相同或相互匹配。
  5. 根据权利要求4所述的一种成像装置,其特征在于:所述成像面(2)和/或影像源(3)和/或辅助透镜(5)为一面平面、另一面球面的结构,或者两面均为球面的结构;该球面指凹球面或者凸球面结构。
  6. 根据权利要求5所述的一种成像装置,其特征在于:所述成像面(2)和/或影像源(3)和/或辅助透镜(5)的球面结构为常规球面结构、或菲涅尔球面结构、或多个菲涅尔球面结构体的结合形结构;其中,常规球面结构为常规正球面结构、常规椭球面结构、常规抛物面结构中的一种;菲涅尔球面结构为菲涅尔正球面结构、菲涅尔椭球面结构、菲涅尔抛物面结构中的一种。
  7. 根据权利要求1-6任一所述的一种成像装置，其特征在于：所述成像面(2)上的成像单元以经线的方式布局在所述成像面(2)的表面，各所述经线的夹角相等，同一条经线上的所述成像单元的间距相等;或以纬线/横线/螺旋线的方式布局在所述成像面(2)的表面，同一条纬线/横线/螺旋线上的所述成像单元的间距相等且等于相邻纬线/横线/螺旋线之间的间距，螺旋线方式布局成像单元的成像面上为多条螺旋线时，多条螺旋线之间的间距相等且等于成像单元的间距;此处的间距指的是沿成像面(2)表面的间距;或不以任何点、线、面作参照对象的成像单元等间距分布于成像面(2)的表面上;所述成像单元指的是相机图像传感器感光面上的感光单元、或显示屏幕的显示像素、或图像画面的图像像素。
  8. 一种基于权利要求1-7任一所述成像装置的成像方法,其特征在于:
    S1，所述球面形成像面(2)所有部位与像源所发出的光在成像面(2)交汇处的夹角均为90°;
    S2,对成像面(2)上的成像单元进行矩阵化处理,形成虚拟行列矩阵,对所述虚拟行列矩阵进行图像像素数值读取或写入;
    S3,凸球面形成像面(2)的图像获取设备直接接收现实世界景象形成的虚拟矩阵、凹球面形或凸球面形的成像面(2)通过凸球面影像源(3)间接接收现实世界景象形成并输出的对应虚拟矩阵的图像文件,用观看面为凹球面形显示面的显示设备显示还原;凹球面形的成像面(2)直接接收现实世界景象形成的虚拟矩阵、凹球面形或凸球面形的成像面(2)通过凹球面影像源(3)间接接收现实世界景象形成并输出的对应虚拟矩阵的图像文件,用观看面为凸球面形显示面的显示设备显示还原。
  9. 根据权利要求8所述的一种成像方法，其特征在于：S2中读取成像单元方式是：S2.1虚拟成像单元补充纬线/横线实际成像单元法，将成像面(2)上呈纬线/横线分布的成像单元作为实际成像单元，以最长纬线/横线上的实际成像单元数量为基准数，对其它纬线/横线上不足基准数用虚拟成像单元进行补充，以使其它纬线/横线上实际成像单元与补充的虚拟成像单元之和达到基准数，将达到基准数的相等数量成像单元的纬线/横线作为一行;将上述方式获得的行作为行，把成像面(2)上所有成像单元的纬线/横线数量作为列，组成虚拟行列矩阵;
    或S2.2相邻纬线/横线成像单元互相补充法，将成像面(2)上呈纬线/横线分布的成像单元作为实际成像单元，以给定数量的实际成像单元数量为参照值，将其中一条纬线/横线作为起始线，逐线逐点进行虚拟摘取实际成像单元，若起始纬线/横线上虚拟摘取的实际成像单元数量达到参照值，则作为一条虚拟行记入第一行，若未达到参照值，则从相邻的下一条纬线/横线上继续虚拟摘取成像单元，直到达到参照值时作为一条虚拟行记入第一行，而纬线/横线上虚拟摘取剩余的实际成像单元则计入下一个虚拟行的虚拟摘取工作中去，以此类推，直至成像面(2)上的最后一条纬线/横线的实际成像单元全部虚拟摘取完，且当最后一次虚拟摘取的实际成像单元未能达到参照值时，则用虚拟成像单元补充;最后，将上述方式获得的行作为行，以行的总数量为列，组成虚拟行列矩阵;
    或S2.3区块法,将成像单元以经线、纬线、横线、螺旋线、无参照对象等间距分布方式分布的成像面(2)分成一个或多个等面积或不等面积的区块,划分的原则为每个区块内的成像单元数量相等且等于参照值,当成像面(2)内区块的成像单元数量少于参照值时,用虚拟成像单元补充达到参照值,将每个区块的相等数量的成像单元视为一虚拟行;将上述方式获得的行作为行,所有区块的数量作为列,形成虚拟行列矩阵;
    或S2.4虚拟经线切割法,将成像面(2)上经过球面结构中心点的任意一条经线作为虚拟经线,所述虚拟经线以垂直于球面且经过球面中心点的直径线为旋转轴,进行顺时针或逆时针旋转,把提前设定的一个时间段下虚拟经线切割成像面(2)上以纬线方式、横线方式、螺旋线方式、无参照对象等间距分布布局的相等数量的成像单元作为一条虚拟行;将上述方式获得的行作为行,把虚拟经线旋转一圈所获得的虚拟行的数量作为列,组成虚拟行列矩阵;
    或S2.5经线法,将成像单元以经线方式分布于成像面(2)上的成像单元数量相等的每一条经线作为一行,所有经线数量作为列,形成虚拟行列矩阵;
    或S2.6把采用螺旋线布局法的成像单元分成等数量的若干份,从所述螺旋线起点的第一个成像单元开始选择该数量的成像单元作为一个虚拟行,选择至螺旋线上的最后一个成像单元为止;以选择的所述等数量的成像单元作为虚拟行,以全部虚拟行的行数作为虚拟列,形成行虚拟列矩阵;
    或S2.7隔点取样法,在S2获得虚拟行列矩阵后,对成像单元进行隔点取样,奇数组组成一个矩阵、偶数组组成一个矩阵,两个矩阵分别匹配接收同一幅画面的不同视角图像矩阵数据,用于显示双视角图像视频;
  10. 根据权利要求8所述的一种成像方法，其特征在于：S3.1输出原始矩阵形式的图像数据集文件，或S3.2将虚拟行列矩阵中的像素坐标或像素进行球面还原缝合，输出球面形图像文件或者平面图像文件。
  11. 一种图像传感器,其特征在于:包括如权利要求1,或如权利要求1和6、或如权利要求1和7,或如权利要求1、6和7的成像装置,所述成像装置中的成像件(1)在所述图像传感器中体现为感光件,成像面(2)体现为感光面,成像面(2)上的成像单元体现为感光单元;或还包括与感光件(1)相连的矩阵生成器,以及与矩阵生成器连接的数据读取器、与数据读取器连接的图像处理器;所述图像传感器工作时,执行如权利要求10的成像方法,矩阵生成器将所述感光件(1)的感光面(2)上非矩阵排列的感光单元经矩阵生成器内置的逻辑电路处理后生成矩阵化排列的虚拟矩阵,所述虚拟矩阵上的感光单元从外界获取的感光数据被数据读取器读取后传送给图像处理器,图像处理器对输入的数据进行处理,输出对应的图像文件。
  12. 根据权利要求11所述的一种图像传感器,其特征在于:所述图像传感器独立封装、或者与辅助透镜封装在一起;所述图像传感器和辅助透镜封装在一起时,辅助透镜的一面朝向感光口,另一面背向感光口并朝向图像传感器感光件(1)的感光面(2),且所述辅助透镜的焦点与球面形感光面(2)的球心重合。
  13. 一种相机,其特征在于:包括机身和镜头,所述机身内部的暗箱内由前往后设有快门、内置辅助透镜、如权利要求11或12所述的图像传感器、图像数据处理输出模块;所述镜头的前端设有取景透镜,镜头的镜筒内部设有透镜组合;所述机身内部件结合镜头部件按照如权利要求1-7任一所述成像装置的方式布局,并执行如权利要求8-10任一所述的成像方法;所述成像装置中的成像件(1)在所述相机中体现为图像传感器,成像面(2)体现为图像传感器的感光面,影像源(3)体现为取景透镜;内置的辅助透镜采用球形透镜;所述内置辅助透镜的焦点与图像传感器的球面形感光面的球心点重合,内置辅助透镜的中心轴线与图像传感器的中心轴线重合,内置辅助透镜用于与透镜组合配合使所有光线均垂直照射在图像传感器感光面上;所述图像数据处理输出模块用于将从图像传感器上获取的图像数据处理成各种格式的文件输出、或者合成球面图像输出,或者合成球面图像再转化成平面图像输出。
  14. 一种全景图像的拍摄制作方法,其特征在于:凸球面形取景透镜的权利要求13所述的相机以其所在空间的某一点为中心点,取景透镜背向着所述中心点, 拍摄覆盖所述中心点到相机之间区域以外所有方向的景象获得若干张球面形画面的图像,把所述若干张球面形画面的图像拼接成完整球面形画面的全景图像文件保存或输出;所述全景图像为包含所述中心点到相机之间区域以外所有方向的广域范围的景象,这里简称为广域源全景图像;所述图像在显示面为凹球面的球面形成像模型的显示屏幕上显示。
    凹球面形取景透镜的权利要求13所述的相机以其所在空间的某一点为中心点,取景透镜面向所述中心点,拍摄从所述相机到所述中心点之间区域范围内所有方向的景象获得若干张球面形画面的图像,把所述若干张球面形画面的图像拼接成完整球面形画面的全景图像文件保存或输出;所述全景图像为包含所述相机与所述中心点之间所有方向的局域范围的景象,这里简称为局域源全景图像;所述图像在显示面为凸球面的球面形成像模型的显示屏幕上显示。
  15. 一种显示屏,其特征在于:包括如权利要求1,或如权利要求1和6、或如权利要求1和7,或如权利要求1、6和7的成像装置,所述成像装置的成像件(1)在所述显示屏中体现为图像显示件,成像面(2)体现为图像显示面,成像面上的成像单元体现为显示像素;或并所述显示屏显示来源于权利要求8、9、10、14任一所述成像方法输出的球面形画面的图像文件。
  16. 根据权利要求15所述的显示屏,其特征在于:还包括图像处理模组和扫描模组,扫描模组一边与显示面(2)上的显示像素相连,一边与图像处理模组相连;扫描模组内的显示像素矩阵化器内包含权利要求9中的一种或者多种显示像素的矩阵生成逻辑电路或者程序指令;所述显示屏工作时,所述显示像素矩阵化器提前通过矩阵生成逻辑电路或者程序指令把显示面上非矩阵排列的显示像素提前矩阵化;所述图像处理模组内的图像像素矩阵化器内包含权利要求9中的一种或多种图像像素的矩阵生成逻辑电路或者程序指令,且所述图像像素的矩阵生成逻辑电路或者程序指令的矩阵类型与扫描模组内的显示像素的矩阵生成逻辑电路或者程序指令的矩阵类型相同;所述显示屏显示图像时,图像处理模组中的图像判断程序把已经被矩阵化的球面形图像数据集文件直接输送给扫描模组内的匹配器、把图像像素未被矩阵化的球面形图像文件通过图像像素矩阵化器矩阵化并将矩阵化的球面形图像数据输送给扫描模组内的匹配器、把平面图像通过图像处理模组内的图像转化器转化成球面形图像后再通过图像像素矩阵化器矩阵化 后传送给扫描模组内的匹配器,所述匹配器把图像像素矩阵与扫描模组内的显示像素矩阵类型进行匹配,匹配成功后,扫描模组按照对应矩阵把图像像素的数据扫描写入显示面上的对应显示像素,实现图像显示。
  17. 一种扇机显示屏,其特征在于:包括如权利要求1,或如权利要求1和6、或如权利要求1和7,或如权利要求1、6和7的成像装置,所述成像装置的成像件(1)在所述扇机显示屏中体现为扇机,成像面(2)体现为扇机的扇叶面向观众的外表面,成像面上的成像单元体现为扇叶外表面上的灯珠;或还包括控制机构,控制机构包括控制板与驱动电机,所述驱动电机的驱动端与扇机的扇叶相连;所述扇叶为弧形结构的杆件,扇叶面向观众的一面的弧形外表面上设置有灯珠,所述扇机和灯珠与控制板电性连接,灯珠发出的光垂直于扇叶表面灯珠所在部位的面,扇机带动扇叶旋转,控制板执行如权利要求8、9任一所述的成像方法,实现画面的显示成像;或并所述扇机显示屏显示来源于权利要求8、9、10、14任一所述成像方法输出的球面形画面的图像文件。
  18. 一种投影装置,其特征在于,包括如权利要求1-7任一所述的成像装置;其中,所述成像装置的成像件(1)在所述投影装置中体现为被投显示屏幕,成像面(2)体现为被投显示屏幕的图像显示面,成像单元为被投显示屏幕图像显示面上涂刷的反光颗粒或者设置的被投单元,图像源(3)体现为投影主机;所述投影主机为点状图像信号粒子投影器、弧线状图像信号粒子投影器、球面状图像信号粒子投影器;所述点状图像信号粒子投影器、弧线状图像信号粒子投影器的图像信号粒子的发射装置在与其连接的驱动装置驱动下,按照动作指令器给出的指令运动把图像信号粒子投射到球面形投影屏幕上,和/或执行如权利要求8-9所述的成像方法,实现画面显示;或并所述投影装置显示来源于权利要求8、9、10、14任一所述成像方法输出的球面形画面的图像文件。
  19. 一种眼镜式全景显示设备,其特征在于,包括眼镜架、显示屏、耳机、文件处理模块及控制手柄;所述显示屏为权利要求15-18任一所述显示屏,其设置于眼镜架的镜框内位于佩戴时观看者眼睛的正前方;所述显示屏为一片或二片,当为二片时,所述显示屏分别显示单视角镜头相机拍摄并合成的全景图像文件的同一幅画面的同一区域画面,或者双视角相机拍摄并合成的全景图像文件的同一幅画面的同一区域不同视角的画面。
  20. 一种影院,其特征在于,包括放映室、观看台、音响和显示屏;所述显示屏为一块显示面为凹球面的凹球面显示屏,或者为一块显示面为凸球面的凸球面显示屏,或者为一块显示面为凹球面的凹球面显示屏和一块显示面为凸球面的凸球面显示屏;所述显示屏为权利要求15-18任一所述显示屏;
    当放映室设置的凹球面屏或者凸球面屏为完整球面的相对较少一部分时,所述显示屏设置于放映室内的一侧,观看台为位于放映室内另一侧的平台或斜坡;
    当放映室内设置一块凹球面显示屏且所述显示屏为完整球面或者除去其与放映室地面交汇处的小部分球面空缺、其他部位的球面为接近于完整球面时,所述显示屏设置在放映室中间位置的上部空间,并通过固定架固定于放映室的内墙壁上,观看台为位于显示面内部中间靠下位置的平台或环形斜坡并被完全包围在凹球面显示面内部;
    当放映室内设置一块凸球面显示屏且所述显示屏为完整球面或者除去其与放映室地面交汇处的小部分球面空缺、其他部位的球面为接近于完整球面时,所述显示屏设置于放映室的中间下部的地面上,观看台为环绕于凸球面显示屏周围的平台或斜坡;
    当放映室内设置有两块球面显示屏，所述两块显示屏为一块凹球面显示屏、一块凸球面显示屏，且所述凹球面显示屏和凸球面显示屏的显示面均为完整球面或除去其与放映室地面交汇处的小部分球面空缺、其他部位的球面为接近于完整球面时，所述凹球面显示屏显示面的直径大于凸球面显示屏显示面的直径，所述凹球面显示屏设置于放映室中间靠近上部空间位置并通过固定架固定在放映室内墙墙壁上，所述凸球面显示屏设置于放映室中间下部靠近地面位置或设置于地面上，并位于凹球面显示屏的内部中间靠下的位置被凹球面显示屏的显示面完全包围;观看台为位于经过凸球面显示屏显示面球心的水平面与经过凹球面显示屏显示面球心的水平面之间的区域或上下高度稍大于或稍小于该区域高度的区域，并处于或靠近凹球面显示屏显示面的环形圈带上，观看台的座椅面向凸球面显示屏的显示面。所述凹球面显示屏、凸球面显示屏分别对应显示放映同一场景下拍摄并输出的广域源全景图像文件和局域源全景图像文件。
PCT/CN2022/114901 2021-09-13 2022-08-25 成像装置、方法及设备 WO2023035965A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US18/293,699 US20240348766A1 (en) 2021-09-13 2022-08-25 Imaging apparatus and method, and device
CN202280074857.0A CN118235420A (zh) 2021-09-13 2022-08-25 成像装置、方法及设备
EP22866433.0A EP4358507A1 (en) 2021-09-13 2022-08-25 Imaging apparatus, method and device

Applications Claiming Priority (16)

Application Number Priority Date Filing Date Title
CN202111066699.8 2021-09-13
CN202122200821.8 2021-09-13
CN202111068298.6 2021-09-13
CN202122200889.6 2021-09-13
CN202111068298.6A CN115811608A (zh) 2021-09-13 2021-09-13 一种图像传感器
CN202122207099.0U CN215494532U (zh) 2021-09-13 2021-09-13 一种投影装置
CN202111066699.8A CN115811607A (zh) 2021-09-13 2021-09-13 一种成像方法
CN202122207067.0 2021-09-13
CN202122207069.XU CN215499282U (zh) 2021-09-13 2021-09-13 一种超广角搭载式相机
CN202111066693.0A CN115798363A (zh) 2021-09-13 2021-09-13 一种显示屏
CN202122200889.6U CN215449878U (zh) 2021-09-13 2021-09-13 一种影院
CN202122207069.X 2021-09-13
CN202122200821.8U CN215818329U (zh) 2021-09-13 2021-09-13 一种相机
CN202111066693.0 2021-09-13
CN202122207067.0U CN215417490U (zh) 2021-09-13 2021-09-13 一种扇机显示屏
CN202122207099.0 2021-09-13

Publications (1)

Publication Number Publication Date
WO2023035965A1 true WO2023035965A1 (zh) 2023-03-16

Family

ID=85506100

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/114901 WO2023035965A1 (zh) 2021-09-13 2022-08-25 成像装置、方法及设备

Country Status (3)

Country Link
US (1) US20240348766A1 (zh)
EP (1) EP4358507A1 (zh)
WO (1) WO2023035965A1 (zh)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1852400A (zh) * 2006-05-30 2006-10-25 丁瀚文 模拟人眼视效的影像拍摄成像系统
US20140253677A1 (en) * 2013-03-05 2014-09-11 Apple Inc. Small form factor high-resolution camera
US20150341578A1 (en) * 2014-05-20 2015-11-26 Google Inc. Curved Image Sensor for a Curved Focal Surface
CN105629639A (zh) * 2016-01-27 2016-06-01 秦皇岛视听机械研究所 一种基于超大视场角鱼眼镜头的球型显示系统
WO2018227424A1 (zh) * 2017-06-14 2018-12-20 李程 头戴式显示器及其显示屏、头戴支架和视频
CN212906862U (zh) * 2020-06-24 2021-04-06 城市之光科技股份有限公司 一种球形旋转3d显示装置
CN215417490U (zh) * 2021-09-13 2022-01-04 淮北康惠电子科技有限公司 一种扇机显示屏
CN215449878U (zh) * 2021-09-13 2022-01-07 淮北康惠电子科技有限公司 一种影院
CN215499282U (zh) * 2021-09-13 2022-01-11 淮北康惠电子科技有限公司 一种超广角搭载式相机
CN215818329U (zh) * 2021-09-13 2022-02-11 淮北康惠电子科技有限公司 一种相机


Also Published As

Publication number Publication date
EP4358507A1 (en) 2024-04-24
US20240348766A1 (en) 2024-10-17


Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 2022866433

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2022866433

Country of ref document: EP

Effective date: 20240116

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22866433

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 202280074857.0

Country of ref document: CN