CN111142825B - Multi-screen visual field display method and system and electronic equipment - Google Patents


Info

Publication number: CN111142825B (granted publication of CN111142825A)
Application number: CN201911399512.9A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 金巧慧, 高峰, 曾炫, 李翠云
Original and current assignee: Hangzhou Tuobaba Technology Co., Ltd.
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Prior art keywords: display screen, camera, image, display, reference object

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14: Digital output to display device; cooperation and interconnection of the display device with other functional units
    • G06F 3/1423: controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06F 3/1446: display composed of modules, e.g. video walls
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/08: Projecting images onto non-planar surfaces, e.g. geodetic screens
    • G06T 3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4038: Image mosaicing, e.g. composing plane images from plane sub-images

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The embodiment of the invention provides a multi-screen visual field display method, system and electronic device, comprising the following steps: collecting the coordinates of a user's viewing point relative to the display screens, of which there are a plurality; establishing, in a virtual scene, a display screen reference object system corresponding to the display screens according to pre-acquired data of the plurality of display screens; establishing in the virtual scene a plurality of cameras corresponding to the user's viewing point; and displaying the image shot by each camera on the display screen corresponding to that camera. The invention can correct distortion in the spliced image, reduce erroneous perspective relationships in the image, and improve the user's experience.

Description

Multi-screen visual field display method and system and electronic equipment
Technical Field
The present invention relates to the field of multimedia technologies, and in particular to a multi-screen visual field display method and system, and an electronic device.
Background
To improve user experience, more and more multimedia interaction systems adopt a "multi-screen splicing" display scheme, which expands the user's visual range and greatly improves immersion. However, the traditional multi-screen spliced display scheme is limited both by the splicing angle of the display screens and by the position of the user's viewing point: if the splicing angle is too large, or the viewing point deviates too far from the center of the image, the image seen by the user exhibits severe distortion, erroneous perspective relationships and the like, so that the user cannot accurately perceive the sense of space and distance in the 3D image, and the experience is poor.
Disclosure of Invention
Accordingly, the present invention is directed to a multi-screen visual field display method, system and electronic device, so as to correct distortion in the spliced image, reduce erroneous perspective relationships in the image, and improve the user's experience.
In a first aspect, an embodiment of the present invention provides a multi-screen visual field display method, comprising: collecting the coordinates of a user's viewing point relative to the display screens, of which there are a plurality; establishing, in a virtual scene, a display screen reference object system corresponding to the display screens according to pre-acquired data of the plurality of display screens; establishing in the virtual scene a plurality of cameras corresponding to the user's viewing point; and displaying the image shot by each camera on the display screen corresponding to that camera.
In one embodiment, before the step of displaying the image shot by each camera on the display screen corresponding to that camera, the method further comprises: if a change in the coordinates of the user's viewing point relative to the display screens is detected, updating the coordinates of each camera.
In one embodiment, the step of displaying the image shot by each camera on the display screen corresponding to that camera comprises: determining, based on the data of the display screens and the coordinates of the user's viewing point relative to each display screen, the lateral offset and the longitudinal offset between the image shot by each camera and the display screen reference object mapping image corresponding to that camera; determining the FOV angle of each camera; processing the image shot by each camera using a matrix algorithm based on the FOV angle, the lateral offset and the longitudinal offset, and intercepting the image mapped by each display screen reference object; and displaying the intercepted image on the display screen corresponding to the display screen reference object.
In one embodiment, the step of intercepting the image mapped by each display screen reference object using a matrix algorithm based on the FOV angle, the lateral offset and the longitudinal offset comprises: calculating the intercepted image of each display screen reference object map according to the following formula:
where offsetX is the lateral offset between the image shot by the camera and the corresponding display screen reference object mapping image; offsetY is the longitudinal offset between the image shot by the camera and the corresponding display screen reference object mapping image; NearPlane is the distance from the near clipping plane of the viewing frustum to the camera in computer graphics; and halfFOV is half of the FOV angle.
In one embodiment, the data of the display screen includes at least: the size, the placement angle and the placement position of each display screen.
In one embodiment, the position of each camera is the same as the coordinates of the user's viewing point relative to each display screen, and each camera faces perpendicularly toward its corresponding display screen reference object.
In a second aspect, an embodiment of the present invention provides a multi-screen visual field display system, comprising: a data acquisition module, configured to collect the coordinates of a user's viewing point relative to the display screens, of which there are a plurality; a display screen reference object establishing module, configured to establish, in a virtual scene, a display screen reference object system corresponding to the display screens according to pre-acquired data of the plurality of display screens; a camera establishing module, configured to establish in the virtual scene a plurality of cameras corresponding to the user's viewing point; and an image processing module, configured to display the image shot by each camera on the display screen corresponding to that camera.
In one embodiment, an image processing module includes: the first computing unit is used for determining the transverse offset and the longitudinal offset of the images shot by each camera and the display screen reference object mapping images corresponding to the cameras based on the data of the display screens and the coordinates of the user viewing points relative to each display screen; a second calculation unit for determining FOV angles of the respective cameras; the image intercepting unit is used for processing the images shot by each camera by adopting a matrix algorithm based on the FOV angle, the transverse offset and the longitudinal offset and intercepting the images mapped by each display screen reference object; and the image display unit is used for displaying the intercepted image on the display screen corresponding to the display screen reference object.
In a third aspect, an embodiment of the present invention provides an electronic device comprising a processor and a memory storing computer executable instructions executable by the processor to perform the steps of the method of any one of the first aspects described above.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium having a computer program stored thereon which, when executed by a processor, performs the steps of the method of any one of the first aspects provided above.
The embodiments of the invention provide a multi-screen visual field display method, system and electronic device, which can collect the coordinates of a user's viewing point relative to the display screens (of which there are a plurality) and the data of the plurality of display screens, establish, from the collected data combined with a virtual scene, a display screen reference object system corresponding to the display screens and a plurality of cameras corresponding to the user's viewing point (each camera perpendicular to its display screen reference object), and display the image shot by each camera, after processing, on the display screen corresponding to that camera. With the method provided by this embodiment, the spliced image displayed by the display screens can be corrected according to the data of the display screens and the data of the user's viewing point, so that image distortion is eliminated, erroneous perspective relationships in the image are reduced, and the user's experience is improved.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings described below are merely some embodiments of the present invention; a person skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic flowchart of a multi-screen visual field display method according to an embodiment of the present invention;
Fig. 2 is a top view of a placement example of display screen reference objects and cameras in a virtual scene according to an embodiment of the present invention;
Fig. 3 is a schematic flowchart of another multi-screen visual field display method according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of the offset between a camera image and a display screen reference object mapping image according to an embodiment of the present invention;
Fig. 5 is a top view of a camera facing perpendicularly toward a display screen reference object according to an embodiment of the present invention;
Fig. 6 is a schematic diagram comparing the multi-screen display effects of a conventional scheme and the present scheme according to an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of a multi-screen visual field display system according to an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
At present, in multi-screen spliced display schemes, the image seen by the user can exhibit severe distortion, erroneous perspective relationships and the like, so that the user cannot accurately perceive the sense of space and distance in the 3D image. In view of this, the multi-screen visual field display method, system and electronic device provided by the embodiments of the present invention can correct distortion in the spliced image, reduce erroneous perspective relationships in the image, and improve the user's experience.
For ease of understanding the present embodiment, the multi-screen visual field display method disclosed in the embodiment of the present invention is first described in detail. Referring to the flowchart of a multi-screen visual field display method shown in fig. 1, the method is executed by an electronic device capable of displaying images in a spliced manner, and comprises steps S101 to S104:
step S101: and acquiring coordinates of a user's view point relative to the display screen.
There are a plurality of display screens, and the plurality of display screens can be spliced at any angle. The display screen may be any device or medium that can display images and be spliced, including but not limited to projectors, projection walls, projection screens, two-dimensional displays, three-dimensional displays, and the like.
The coordinates of the user's viewing point relative to the display screens may be fixed or may change in real time. The coordinates may be obtained by measuring the relative distance, relative offset and relative height between the user's viewing point and each display screen, and then converting these data into spatial coordinates. The relative distance, relative offset, relative height and similar data between the user's viewing point and each display screen can be collected manually with a measuring tool, or collected by a positioning instrument or eye tracker worn by the user and then uploaded for processing.
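The patent leaves the conversion from the measured quantities to spatial coordinates implicit; a minimal sketch is given below, where the coordinate convention (x = lateral offset, y = height, z = distance along the screen normal) is an assumption for illustration only:

```python
def viewpoint_coordinates(rel_distance, rel_offset, rel_height):
    """Convert the measured relative distance, relative offset and relative
    height of the user's viewing point (all relative to one display screen)
    into an (x, y, z) coordinate: x = lateral offset in the screen plane,
    y = height relative to the screen center, z = distance along the normal."""
    return (rel_offset, rel_height, rel_distance)

# One coordinate per display screen, e.g. for a hypothetical three-screen setup:
viewpoints = [viewpoint_coordinates(d, o, h)
              for d, o, h in [(2.0, -0.5, 0.1), (2.0, 0.0, 0.1), (2.0, 0.5, 0.1)]]
```

The same conversion applies whether the measurements come from a manual measuring tool or from a worn positioning instrument.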
Step S102: and according to the data of the plurality of display screens acquired in advance, a display screen reference object system corresponding to the display screens is established in the virtual scene.
In one specific application, the data of the display screens includes at least the size, placement angle and placement position of each display screen, which can be collected manually with measuring instruments and then stored in the 3D software for later use. The size of a display screen may include its length and width; the placement angle may include the splicing angle between the display screens and the angle with the horizontal plane; the placement position can be recorded by choosing an origin of coordinates in reality (the origin can be set flexibly without affecting the final result) and then measuring the relative distance, relative offset and relative height between each display screen and that origin.
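The quantities listed above can be organized per screen; the field names and example values below are assumptions, not data from the patent:

```python
# Hypothetical layout of the pre-acquired display screen data. The fields map
# to the quantities the patent lists: size (width/height), placement angle
# (yaw_deg, the splicing angle about the vertical axis), and placement
# position (relative offset/height/distance from the chosen real-world origin).
screens = [
    {"width": 2.0, "height": 1.2, "yaw_deg":  45.0, "position": (-1.5, 0.0, 1.0)},  # left screen
    {"width": 2.0, "height": 1.2, "yaw_deg":   0.0, "position": ( 0.0, 0.0, 2.0)},  # middle screen
    {"width": 2.0, "height": 1.2, "yaw_deg": -45.0, "position": ( 1.5, 0.0, 1.0)},  # right screen
]
```

Substituting such records into the 3D software yields display screen reference objects identical to the real screens.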
Further, the virtual scene may be the image content displayed through the plurality of display screens, and this image content may be a virtual scene previously created or restored in 3D software. By substituting the collected data of each display screen into the 3D software, a plurality of display screen reference objects corresponding to the display screens can be simulated and established in the virtual scene; the display screen reference objects are completely consistent with the real display screens, i.e. identical in size, placement angle and placement position.
Referring to fig. 2, an exemplary top view of the placement of display screen reference objects and cameras in a virtual scene, three display screen reference objects (a left screen reference object, a middle screen reference object and a right screen reference object) and three cameras are illustrated. The left screen reference object and the right screen reference object are parallel to each other, and each is perpendicular to the middle screen reference object; the three cameras each face perpendicularly toward their corresponding display screen reference objects.
It will be appreciated that fig. 2 is only a schematic illustration for ease of understanding and should not be construed as limiting. In practical applications, the number of display screen reference objects and cameras may be more or fewer than in fig. 2, the placement angle between the display screens may be larger or smaller, and the positions of the cameras may differ from fig. 2.
Step S103: a plurality of cameras corresponding to user viewing points are established in the virtual scene.
In one specific application, the collected coordinates of the user's viewing point relative to the display screens may be substituted into the 3D software to simulate and create, in the virtual scene, a plurality of cameras corresponding to the user's viewing point. The position of each camera is the same as the coordinates of the user's viewing point relative to each display screen, and each camera faces perpendicularly toward its corresponding display screen reference object.
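The camera construction described above can be sketched as follows; the screen-normal convention (yaw 0 means the screen faces the viewer straight on) is an assumption:

```python
import math

def screen_normal(yaw_deg):
    """Unit normal of a vertically mounted screen rotated yaw_deg about the
    vertical axis (yaw 0 means the screen faces straight back at the viewer)."""
    r = math.radians(yaw_deg)
    return (math.sin(r), 0.0, math.cos(r))

def make_cameras(viewpoints, screens):
    """One virtual camera per display screen reference object: each camera is
    positioned at the user's viewing-point coordinates relative to that screen
    and oriented along the screen's normal, i.e. it faces the display screen
    reference object perpendicularly, as the patent requires."""
    return [{"position": vp, "forward": screen_normal(s["yaw_deg"])}
            for vp, s in zip(viewpoints, screens)]
```

In practice the 3D engine's own camera objects would replace these plain dictionaries.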
Step S104: based on the image shot by each camera, the image is displayed on the display screen corresponding to the camera.
In a specific application, a matrix algorithm may be used to perform secondary processing on the image shot by each camera, intercept the image mapped by the display screen reference object, and display the processed image on the display screen corresponding to that camera (i.e., the display screen corresponding to that display screen reference object).
The multi-screen visual field display method provided by the embodiment of the invention can collect the coordinates of the user's viewing point relative to the display screens (of which there are a plurality) and the data of the plurality of display screens, establish, from the collected data combined with the virtual scene, a display screen reference object system corresponding to the display screens and a plurality of cameras corresponding to the user's viewing point (each camera perpendicular to its display screen reference object), and display the image shot by each camera, after processing, on the display screen corresponding to that camera. With the method provided by this embodiment, the spliced image displayed by the display screens can be corrected according to the data of the display screens and the data of the user's viewing point, so that image distortion is eliminated, erroneous perspective relationships in the image are reduced, and the user's experience is improved.
Further, for the situation where the coordinates of the user's viewing point relative to the display screens may change in real time, in order to better correct the image distortion and erroneous perspective relationships caused by the new viewing point, the embodiment of the present invention provides another multi-screen visual field display method. Referring to the flowchart of another multi-screen visual field display method shown in fig. 3, the method mainly comprises the following steps S301 to S305:
step S301: and acquiring coordinates of a user's view point relative to the display screen.
Step S302: and according to the data of the plurality of display screens acquired in advance, a display screen reference object system corresponding to the display screens is established in the virtual scene.
Step S303: a plurality of cameras corresponding to user viewing points are established in the virtual scene.
Step S304: if it is detected that the coordinates of the user's point of view with respect to the display screen change, the coordinates of each camera are updated.
A change in the coordinates of the user's viewing point can be detected by the positioning instrument or eye tracker worn by the user; when a coordinate change is detected, the newly measured coordinates of the user's viewing point relative to the display screens are uploaded, and the cameras in the virtual scene can synchronize these data in real time to update the coordinates of each camera.
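The detect-and-synchronize loop of step S304 can be sketched as below; `read_tracker` is a hypothetical stand-in for the real tracking-device interface, which the patent does not specify:

```python
def track_viewpoint(read_tracker, cameras, on_change):
    """Poll the worn positioning instrument / eye tracker and, whenever the
    per-screen viewing-point coordinates change, push the new coordinates to
    every virtual camera (step S304). read_tracker() returns a list with one
    coordinate per screen, or None when the tracker stops."""
    last = None
    while True:
        coords = read_tracker()
        if coords is None:               # tracker disconnected / stopped
            break
        if coords != last:               # viewing point moved
            for cam, c in zip(cameras, coords):
                cam["position"] = c      # update the camera in the virtual scene
            on_change(coords)
        last = coords
```

A real implementation would run this on the tracker's update rate rather than a tight poll.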
Step S305: based on the image shot by each camera, the image is displayed on the display screen corresponding to the camera.
Further, after step S305 is performed, the process returns to step S304, and changes in the user's viewing point continue to be detected in real time, so as to continuously correct the image distortion and erroneous perspective relationships caused by each new viewing point.
In one embodiment, to better illustrate how the present invention corrects image distortion and erroneous perspective relationships in the displayed image, the above step S104 may be performed according to the following steps b1 to b4:
step b1: and determining the transverse offset and the longitudinal offset of the image shot by each camera and the display screen reference object mapping image corresponding to the camera based on the data of the display screen and the coordinates of the user view point relative to each display screen.
For better understanding, the embodiment of the invention provides a schematic diagram of the offset between the camera image and the display screen reference object mapping image. Referring to fig. 4, offsetX is the lateral offset between the camera image and the display screen reference object mapping image, and offsetY is the longitudinal offset between the camera image and the display screen reference object mapping image.
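Fig. 4 is not reproduced in this text. Under the assumption that the offsets are simply the viewing point's in-plane displacement from the screen center (an interpretation of the figure, not an explicit formula in the patent), they might be computed as:

```python
def image_offsets(viewpoint, screen_center):
    """Lateral (offsetX) and longitudinal (offsetY) offset between the camera
    image and the display screen reference object mapping image, read as the
    viewing point's displacement from the screen center within the screen
    plane. Both arguments are (x, y, z) in the same screen-aligned frame."""
    vx, vy, _ = viewpoint
    sx, sy, _ = screen_center
    return (vx - sx, vy - sy)
```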
Step b2: the FOV angle of each camera is determined.
The FOV angle is the field-of-view angle of the camera, and its size determines the camera's field of view. Further, the embodiment of the present invention provides a top view of a camera facing perpendicularly toward its display screen reference object. Referring to fig. 5, the camera and the display screen reference object are illustrated, where halfFOV is half of the FOV angle, a is the perpendicular distance from the camera (i.e., the user's viewing point) to the display screen reference object, and b is the lateral distance from the camera to the far edge of the display screen reference object, which can be taken as the offset of the viewing point (i.e., the lateral offset between the camera image and the display screen reference object mapping image) plus half the width of the display screen reference object.
Accordingly, the FOV angle of each camera can be determined from the above data according to the following formula: tan(halfFOV) = b / a, that is, FOV = 2 · arctan(b / a).
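From the definitions of a, b and halfFOV above, the computation can be sketched as:

```python
import math

def fov_angle_deg(a, offset_x, screen_width):
    """FOV from the fig. 5 geometry: tan(halfFOV) = b / a, with
    b = |offsetX| + screen_width / 2 (lateral distance from the camera to the
    far edge of the display screen reference object) and a the perpendicular
    distance from the camera to the reference object. Returns degrees."""
    b = abs(offset_x) + screen_width / 2.0
    return 2.0 * math.degrees(math.atan(b / a))
```

Note that a viewing point offset from the screen center (offsetX nonzero) widens the required FOV, which is exactly why the off-center case needs correction.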
step b3: based on the angle of the FOV, the transverse offset and the longitudinal offset, processing the images shot by each camera by adopting a matrix algorithm, and intercepting the images mapped by the reference objects of each display screen.
Specifically, the intercepted image of each display screen reference object map may be calculated according to the following formula:
where NearPlane is the distance from the near clipping plane of the viewing frustum to the camera in computer graphics.
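The matrix formula itself appears only as an image in the original text and is not reproduced here. The quantities it names (halfFOV, NearPlane, offsetX, offsetY) match a standard off-axis (asymmetric-frustum) perspective projection, so a sketch under that assumption, with the offsets taken as fractions of the symmetric near-plane half-extents (a unit convention the patent does not state), might look like:

```python
import math

def off_axis_projection(half_fov_deg, aspect, near, far, offset_x, offset_y):
    """Asymmetric (off-axis) perspective frustum: the near-plane window is
    shifted by the lateral/longitudinal offsets so the camera intercepts
    exactly the region mapped onto its display screen reference object."""
    t = near * math.tan(math.radians(half_fov_deg))   # symmetric half-height
    r = t * aspect                                    # symmetric half-width
    left, right = (-1.0 + offset_x) * r, (1.0 + offset_x) * r
    bottom, top = (-1.0 + offset_y) * t, (1.0 + offset_y) * t
    # OpenGL glFrustum-style row-major 4x4 projection matrix
    return [
        [2 * near / (right - left), 0.0, (right + left) / (right - left), 0.0],
        [0.0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0.0],
        [0.0, 0.0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0.0, 0.0, -1.0, 0.0],
    ]
```

With zero offsets this reduces to the ordinary symmetric perspective projection; nonzero offsets skew the frustum toward the screen region, which is the "interception" effect described in step b3.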
Step b4: and displaying the intercepted image on a display screen corresponding to the display screen reference object.
To better illustrate the above method, the embodiment of the present invention provides a schematic diagram comparing the multi-screen display effect of the conventional scheme with that of the present scheme. Referring to fig. 6, with the same number of display screens and the same splicing angle between them, the image displayed by the conventional scheme is distorted while the image displayed by the present scheme is not.
It will be appreciated that fig. 5 and 6 are merely illustrative illustrations for ease of understanding and should not be considered limiting.
The method provided by the embodiment of the present invention can, according to data such as the size, position and angle of each display screen and the user's viewing point, combined with the virtual scene in a 3D engine, perform secondary processing on the image of each display screen, thereby correcting distortion and erroneous perspective relationships and restoring the real perspective relationships one-to-one, so that the user can accurately perceive the sense of space and distance in the 3D image, visual fatigue and dizziness are greatly reduced, and the user experience is improved.
For the multi-screen visual field display method provided by the foregoing embodiments, an embodiment of the present invention further provides a multi-screen visual field display system. Referring to the schematic structural diagram of a multi-screen visual field display system shown in fig. 7, the system may include the following parts:
the data acquisition module 701 is configured to acquire coordinates of a user's view point relative to the display screen; wherein, the display screen is a plurality of.
The display screen reference object establishing module 702 is configured to establish a display screen reference object system corresponding to a display screen in a virtual scene according to data of a plurality of display screens acquired in advance.
The camera establishment module 703 is configured to establish a plurality of cameras corresponding to the user's view points in the virtual scene.
And the image processing module 704 is used for displaying the images shot by each camera on a display screen corresponding to the camera.
The multi-screen visual field display system provided by the embodiment of the invention can collect the coordinates of the user's viewing point relative to the display screens (of which there are a plurality) and the data of the plurality of display screens, establish, from the collected data combined with the virtual scene, a display screen reference object system corresponding to the display screens and a plurality of cameras corresponding to the user's viewing point (each camera perpendicular to its display screen reference object), and display the image shot by each camera, after processing, on the display screen corresponding to that camera. The system provided by this embodiment can correct the spliced image displayed by the display screens according to the data of the display screens and the data of the user's viewing point, so that image distortion is eliminated, erroneous perspective relationships in the image are reduced, and the user's experience is improved.
In one embodiment, the multi-screen view display system further includes an updating module configured to update the coordinates of each camera if a change in the coordinates of the user's viewpoint with respect to the display screen is detected.
In one embodiment, the image processing module 704 includes:
a first calculation unit, configured to determine, based on the data of the display screens and the coordinates of the user's viewing point relative to each display screen, the lateral offset and the longitudinal offset between the image shot by each camera and the display screen reference object mapping image corresponding to that camera;
a second calculation unit, configured to determine the FOV angle of each camera;
an image intercepting unit, configured to process the image shot by each camera using a matrix algorithm based on the FOV angle, the lateral offset and the longitudinal offset, and intercept the image mapped by each display screen reference object; and
an image display unit, configured to display the intercepted image on the display screen corresponding to the display screen reference object.
In one embodiment, the image intercepting unit is further configured to calculate the intercepted image of each display screen reference object map according to the following formula:
where offsetX is the lateral offset between the image shot by the camera and the corresponding display screen reference object mapping image; offsetY is the longitudinal offset between the image shot by the camera and the corresponding display screen reference object mapping image; NearPlane is the distance from the near clipping plane of the viewing frustum to the camera in computer graphics; and halfFOV is half of the FOV angle.
The system provided by the embodiment of the present invention has the same implementation principle and technical effects as those of the foregoing method embodiment, and for the sake of brevity, reference may be made to the corresponding content in the foregoing method embodiment where the system embodiment is not mentioned.
The embodiment of the invention further provides an electronic device comprising a processor and a storage device; the storage device stores a computer program which, when run by the processor, performs the method of any of the above embodiments.
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, where the electronic device 100 includes: a processor 80, a memory 81, a bus 82 and a communication interface 83, the processor 80, the communication interface 83 and the memory 81 being connected by the bus 82; the processor 80 is arranged to execute executable modules, such as computer programs, stored in the memory 81.
The memory 81 may include a high-speed random access memory (RAM, Random Access Memory), and may further include a non-volatile memory, such as at least one magnetic disk memory. The communication connection between the system network element and at least one other network element is implemented via at least one communication interface 83 (which may be wired or wireless), and may use the Internet, a wide area network, a local area network, a metropolitan area network, etc.
The bus 82 may be an ISA bus, a PCI bus, an EISA bus, or the like, and may be classified into an address bus, a data bus, a control bus, etc. For ease of illustration, only one bi-directional arrow is shown in FIG. 8, but this does not mean that there is only one bus or only one type of bus.
The memory 81 is configured to store a program, and the processor 80 executes the program after receiving an execution instruction; the method performed by the flow-process-defined system disclosed in any of the foregoing embodiments of the present invention may be applied to the processor 80 or implemented by the processor 80.
The processor 80 may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method may be performed by integrated logic circuits in hardware, or by instructions in software, within the processor 80. The processor 80 may be a general-purpose processor, including a central processing unit (CPU, Central Processing Unit), a network processor (NP, Network Processor), etc.; it may also be a digital signal processor (DSP, Digital Signal Processor), an application specific integrated circuit (ASIC, Application Specific Integrated Circuit), a field-programmable gate array (FPGA, Field-Programmable Gate Array) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be performed directly by a hardware decoding processor, or by a combination of hardware and software modules within a decoding processor. The software modules may be located in a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, a register, or another storage medium well known in the art. The storage medium is located in the memory 81, and the processor 80 reads the information in the memory 81 and, in combination with its hardware, performs the steps of the method described above.
The computer program product of the readable storage medium provided by the embodiment of the present invention includes a computer-readable storage medium storing program code; the program code includes instructions for executing the method described in the foregoing method embodiment. For the specific implementation, reference may be made to the foregoing method embodiment, which is not repeated here.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash disk, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, an optical disk, or other media capable of storing program code.
Finally, it should be noted that the above examples are merely specific embodiments of the present invention, intended to illustrate rather than limit its technical solution; the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently substituted, within the technical scope disclosed herein; such modifications, changes or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention and are intended to be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (8)

1. A method for displaying a multi-screen view, comprising:
collecting coordinates of a user viewing point relative to a display screen, wherein a plurality of the display screens are provided;
establishing, in a virtual scene, a display screen reference object system corresponding to the display screens according to data of the plurality of display screens acquired in advance;
establishing, in the virtual scene, a plurality of cameras corresponding to the user viewing point;
displaying, on the display screen corresponding to each camera, the image captured by the camera;
wherein the step of displaying, on the display screen corresponding to each camera, the image captured by the camera comprises the following steps: determining a lateral offset and a longitudinal offset between the image captured by each camera and the display screen reference object mapping image corresponding to the camera, based on the data of the display screens and the coordinates of the user viewing point relative to each display screen; determining the FOV angle of each of said cameras; processing the image captured by each camera with a matrix algorithm based on the FOV angle, the lateral offset and the longitudinal offset, and intercepting the image mapped by each display screen reference object; and displaying the intercepted image on the display screen corresponding to the display screen reference object.
2. The method according to claim 1, wherein the step of displaying on the display screen corresponding to each camera based on the image captured by the camera further comprises:
and if the coordinates of the user viewing point relative to the display screen are detected to change, updating the coordinates of each camera.
3. The method of claim 1, wherein the step of intercepting the image mapped by each of the display screen references using a matrix algorithm based on the FOV angle, the lateral offset, and the longitudinal offset comprises:
calculating the image intercepted for the display screen reference object mapping according to the following formula:
wherein offsetX is the lateral offset between the image captured by the camera and the corresponding image mapped by the display screen reference object; offsetY is the longitudinal offset between the image captured by the camera and the corresponding image mapped by the display screen reference object; NearPlane is the distance from the near clipping plane of the view frustum to the camera in computer graphics; halfFOV is half of the FOV angle; and b represents the lateral distance of the camera from the far edge of the display screen reference object.
4. The method of claim 1, wherein the data of the display screen comprises at least: the size, the placement angle and the placement position of each display screen.
5. A method of multi-screen visual field display as claimed in claim 1, wherein each of said cameras is positioned at the same coordinates relative to each display screen as said user viewing point, and each of said cameras faces perpendicular to its corresponding display screen reference object.
6. A multi-screen visual field display system, comprising:
the data acquisition module is used for collecting coordinates of a user viewing point relative to a display screen, wherein a plurality of the display screens are provided;
the display screen reference object establishing module is used for establishing, in a virtual scene, a display screen reference object system corresponding to the display screens according to data of the plurality of display screens acquired in advance;
the camera establishing module is used for establishing, in the virtual scene, a plurality of cameras corresponding to the user viewing point;
the image processing module is used for displaying the image captured by each camera on the display screen corresponding to the camera;
the image processing module includes: a first calculation unit, used for determining a lateral offset and a longitudinal offset between the image captured by each camera and the display screen reference object mapping image corresponding to the camera, based on the data of the display screens and the coordinates of the user viewing point relative to each display screen;
a second calculation unit, used for determining the FOV angle of each camera;
an image interception unit, used for processing the image captured by each camera with a matrix algorithm based on the FOV angle, the lateral offset and the longitudinal offset, and for intercepting the image mapped by each display screen reference object; and
an image display unit, used for displaying the intercepted image on the display screen corresponding to the display screen reference object.
7. An electronic device comprising a processor and a memory, the memory storing computer executable instructions executable by the processor, the processor executing the computer executable instructions to implement the steps of the method of any one of claims 1 to 5.
8. A computer readable storage medium having stored thereon a computer program, characterized in that the computer program when executed by a processor performs the steps of the method of any of the preceding claims 1 to 5.
CN201911399512.9A 2019-12-27 2019-12-27 Multi-screen visual field display method and system and electronic equipment Active CN111142825B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911399512.9A CN111142825B (en) 2019-12-27 2019-12-27 Multi-screen visual field display method and system and electronic equipment


Publications (2)

Publication Number Publication Date
CN111142825A CN111142825A (en) 2020-05-12
CN111142825B true CN111142825B (en) 2024-04-16

Family

ID=70522038

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911399512.9A Active CN111142825B (en) 2019-12-27 2019-12-27 Multi-screen visual field display method and system and electronic equipment

Country Status (1)

Country Link
CN (1) CN111142825B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115243029A (en) * 2022-09-22 2022-10-25 苏州域光科技有限公司 Image display method, device, equipment, system and storage medium

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103119547A (en) * 2010-09-29 2013-05-22 高通股份有限公司 Image synchronization for multiple displays
CN103678221A (en) * 2013-12-27 2014-03-26 广东威创视讯科技股份有限公司 Method and mobile device for acquiring operation information of spliced-wall system
CN104536714A (en) * 2015-01-07 2015-04-22 深圳市众进思创科技开发有限公司 Method and system for split screen display of display information in equipment room
CN105282532A (en) * 2014-06-03 2016-01-27 天津拓视科技有限公司 3D display method and device
CN106133674A (en) * 2014-01-17 2016-11-16 奥斯特豪特集团有限公司 Perspective computer display system
CN107392853A (en) * 2017-07-13 2017-11-24 河北中科恒运软件科技股份有限公司 Double-camera video frequency merges distortion correction and viewpoint readjustment method and system
WO2018068719A1 (en) * 2016-10-12 2018-04-19 腾讯科技(深圳)有限公司 Image stitching method and apparatus
CN108415174A (en) * 2018-02-05 2018-08-17 上海溯石文化传播有限公司 The method that multi-screen splicing formula abnormal shape screen theaters realize bore hole 3D viewing effects
CN108429905A (en) * 2018-06-01 2018-08-21 宁波视睿迪光电有限公司 A kind of bore hole 3D display method, apparatus, electronic equipment and storage medium
CN108629830A (en) * 2018-03-28 2018-10-09 深圳臻迪信息技术有限公司 A kind of three-dimensional environment method for information display and equipment
CN109640070A (en) * 2018-12-29 2019-04-16 上海曼恒数字技术股份有限公司 A kind of stereo display method, device, equipment and storage medium
CN109729338A (en) * 2018-11-28 2019-05-07 利亚德光电股份有限公司 Show processing method, the device and system of data
CN109840949A (en) * 2017-11-29 2019-06-04 深圳市掌网科技股份有限公司 Augmented reality image processing method and device based on optical alignment
CN110099268A (en) * 2019-05-28 2019-08-06 吉林大学 The blind area perspectiveization display methods of color Natural matching and viewing area natural fusion
CN110544208A (en) * 2019-09-06 2019-12-06 深圳市泰沃德自动化技术有限公司 Industrial-grade image splicing method and system
CN110610454A (en) * 2019-09-18 2019-12-24 上海云绅智能科技有限公司 Method and device for calculating perspective projection matrix, terminal device and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100097444A1 (en) * 2008-10-16 2010-04-22 Peter Lablans Camera System for Creating an Image From a Plurality of Images
US9264702B2 (en) * 2013-08-19 2016-02-16 Qualcomm Incorporated Automatic calibration of scene camera for optical see-through head mounted display
US20150370322A1 (en) * 2014-06-18 2015-12-24 Advanced Micro Devices, Inc. Method and apparatus for bezel mitigation with head tracking
US10503457B2 (en) * 2017-05-05 2019-12-10 Nvidia Corporation Method and apparatus for rendering perspective-correct images for a tilted multi-display environment


Also Published As

Publication number Publication date
CN111142825A (en) 2020-05-12

Similar Documents

Publication Publication Date Title
US10559090B2 (en) Method and apparatus for calculating dual-camera relative position, and device
US11050994B2 (en) Virtual reality parallax correction
US9519968B2 (en) Calibrating visual sensors using homography operators
US10909719B2 (en) Image processing method and apparatus
JPWO2018235163A1 (en) Calibration apparatus, calibration chart, chart pattern generation apparatus, and calibration method
CN111860489A (en) Certificate image correction method, device, equipment and storage medium
KR102450236B1 (en) Electronic apparatus, method for controlling thereof and the computer readable recording medium
CN106570907B (en) Camera calibration method and device
CN113029128B (en) Visual navigation method and related device, mobile terminal and storage medium
CN112686877A (en) Binocular camera-based three-dimensional house damage model construction and measurement method and system
CN105719586A (en) Transparent display method and device
JP2016201668A (en) Image processing apparatus, image processing method, and program
CN103617615A (en) Radial distortion parameter obtaining method and obtaining device
CN109785390B (en) Method and device for image correction
US20180020203A1 (en) Information processing apparatus, method for panoramic image display, and non-transitory computer-readable storage medium
CN113256742A (en) Interface display method and device, electronic equipment and computer readable medium
KR101148508B1 (en) A method and device for display of mobile device, and mobile device using the same
CN111142825B (en) Multi-screen visual field display method and system and electronic equipment
CN110740315B (en) Camera correction method and device, electronic equipment and storage medium
JP6168597B2 (en) Information terminal equipment
US10339702B2 (en) Method for improving occluded edge quality in augmented reality based on depth camera
KR20110025083A (en) Apparatus and method for displaying 3d image in 3d image system
CN111968245B (en) Three-dimensional space marking line display method and device, electronic equipment and storage medium
CN112825198B (en) Mobile tag display method, device, terminal equipment and readable storage medium
CN112634418B (en) Method and device for detecting mold penetrating visibility of human body model and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant