CN111766951A - Image display method and apparatus, computer system, and computer-readable storage medium

Info

Publication number: CN111766951A
Authority: CN (China)
Prior art keywords: virtual, camera, image, screens, real
Legal status: Granted; Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number: CN202010900440.8A
Other languages: Chinese (zh)
Other versions: CN111766951B (en)
Inventors: 殷元江, 徐立, 马添翼
Current and original assignee: Beijing 7d Vision Technology Co ltd (the listed assignees may be inaccurate)
Priority and filing date: 2020-09-01
Publication dates: 2020-10-13 (CN111766951A); 2021-02-02 (CN111766951B, grant)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00: Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01: Indexing scheme relating to G06F3/01
    • G06F2203/012: Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Abstract

An embodiment of the present disclosure discloses an image display method and apparatus. The image display method includes: constructing, based on a spatial positioning technique, a spatial coordinate system of the three-dimensional real space in which a virtual scene image is to be displayed, and determining position coordinates and attitude coordinates of a camera in the spatial coordinate system; constructing at least two virtual screens respectively corresponding to at least two real screens, based on size information of the at least two real screens and their position information in the spatial coordinate system; binding the position coordinates and attitude coordinates of the camera to a virtual camera, and projecting image portions of the virtual scene image onto the virtual screen regions, on the at least two virtual screens, that the virtual camera faces, based on the position coordinates and attitude coordinates of the virtual camera; and rendering the projected image portions and outputting them to the corresponding real screen regions on the at least two real screens for display.

Description

Image display method and apparatus, computer system, and computer-readable storage medium
Technical Field
The present disclosure relates to the field of computer vision technologies, and in particular, to an image display method and apparatus, a computer system, and a computer-readable storage medium.
Background
The Cave Automatic Virtual Environment (CAVE) system is a virtual reality display system applicable to any virtual simulation field with immersion requirements. As a viewer moves within a CAVE system, the system automatically displays the correct stereoscopic perspective image on each screen according to the viewer's position and posture.
In current CAVE systems, each screen can display a correct stereoscopic perspective image for only one camera; it cannot simultaneously display correct stereoscopic perspective images for multiple cameras. When multiple cameras capture the display content of each screen in a CAVE system, the images captured by every camera other than the single camera at the optimal viewpoint are stretched or distorted.
Disclosure of Invention
According to a first aspect of the present disclosure, an embodiment of the present disclosure provides an image display method for a cave-shaped automatic virtual environment system, including: constructing, based on a spatial positioning technique, a spatial coordinate system of the three-dimensional real space in which a virtual scene image is to be displayed, and determining position coordinates and attitude coordinates of a camera in the spatial coordinate system; constructing at least two virtual screens respectively corresponding to at least two real screens, based on size information of the at least two real screens and their position information in the spatial coordinate system; binding the position coordinates and attitude coordinates of the camera to a virtual camera, and projecting image portions of the virtual scene image onto the virtual screen regions, on the at least two virtual screens, that the virtual camera faces, based on the position coordinates and attitude coordinates of the virtual camera; and rendering the projected image portions and outputting them to the corresponding real screen regions on the at least two real screens for display.
According to a second aspect of the present disclosure, an embodiment of the present disclosure provides an image display apparatus for a cave-shaped automatic virtual environment system, including: a coordinate system construction unit configured to construct, based on a spatial positioning technique, a spatial coordinate system of the three-dimensional real space in which a virtual scene image is to be displayed, and to determine position coordinates and attitude coordinates of the camera in the spatial coordinate system; a screen construction unit configured to construct at least two virtual screens respectively corresponding to at least two real screens, based on size information of the at least two real screens and their position information in the spatial coordinate system; an image projection unit configured to bind the position coordinates and attitude coordinates of the camera to a virtual camera, and to project image portions of the virtual scene image onto the virtual screen regions, on the at least two virtual screens, that the virtual camera faces, based on the position coordinates and attitude coordinates of the virtual camera; and an image rendering unit configured to render the image portions and output the rendered image portions to the corresponding real screen regions on the at least two real screens for display.
According to a third aspect of the present disclosure, an embodiment of the present disclosure discloses a computer system, including: a processor; and a memory storing a computer program which, when executed by the processor, causes the processor to execute the above-described image display method.
According to a fourth aspect of the present disclosure, an embodiment of the present disclosure discloses a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to execute the above-described image display method.
According to one or more embodiments of the present disclosure, by binding the position coordinates and attitude coordinates of each camera to a virtual camera, projecting image portions of the virtual scene image onto the virtual screen regions that the virtual camera faces on the at least two virtual screens, and rendering the projected image portions and outputting them to the corresponding real screen regions on the at least two real screens for display, a correct stereoscopic perspective image can be displayed for each camera even when multiple cameras simultaneously photograph the content displayed in different screen regions of the CAVE system.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the disclosure and, together with the description, serve to explain them. The illustrated embodiments are for purposes of illustration only and do not limit the scope of the claims. Throughout the drawings, identical reference numbers designate similar, but not necessarily identical, elements.
Fig. 1 is a flowchart illustrating an image display method according to an embodiment of the present disclosure;
fig. 2 is a block diagram illustrating an image display apparatus according to an embodiment of the present disclosure;
FIG. 3 is a block diagram illustrating an example CAVE system to which image display methods and apparatus according to embodiments of the present disclosure may be applied;
FIG. 4 is a schematic diagram illustrating the display of different image portions on the left and right LED screens, respectively, of the exemplary CAVE system shown in FIG. 3; and
FIG. 5 is a block diagram illustrating an exemplary computer system that can be used to implement embodiments of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and embodiments. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and do not restrict it. It should also be noted that, for convenience of description, only the portions related to the relevant invention are shown in the drawings.
It should be noted that the embodiments of the present disclosure, and the features of those embodiments, may be combined with one another in the absence of conflict. Unless the context clearly indicates otherwise, where the number of an element is not specifically limited, there may be one or more of that element. In addition, the numbering of steps and functional modules in the present disclosure serves only to identify them; it does not limit the execution order of the steps or the connection relationships between the modules.
For ease of understanding, the structure and operation of the CAVE system are first briefly described with reference to the accompanying drawings. Fig. 3 is a schematic structural diagram illustrating an example CAVE system 300 to which the image display method and apparatus according to embodiments of the present disclosure may be applied. As shown in fig. 3, the CAVE system 300 includes infrared cameras 302-1 to 302-6, a Power over Ethernet (PoE) switch 304, a control server 306, rendering servers 308-1 to 308-3, a screen splicer 310, and left, right, and ground Light-Emitting Diode (LED) screens 312-1 to 312-3, wherein:
the infrared cameras 302-1 to 302-6 capture real-time images of the three-dimensional real space in which virtual scene images are to be displayed, and transmit the captured real-time images to the control server 306 and the rendering servers 308-1 to 308-3 through the PoE switch 304;
the control server 306 constructs a spatial coordinate system of the three-dimensional real space based on the real-time images from the infrared cameras 302-1 to 302-6, determines the position coordinates and attitude coordinates of the observer in the constructed spatial coordinate system, and transmits the origin and axis information of the spatial coordinate system, together with the observer's position and attitude coordinates, to the rendering servers 308-1 to 308-3;
the rendering servers 308-1 to 308-3 render the virtual scene image based on the real-time images from the infrared cameras 302-1 to 302-6, the origin and axis information of the spatial coordinate system, and the observer's position and attitude coordinates in the spatial coordinate system, and transmit the rendered virtual scene image to the screen splicer 310; and
the screen splicer 310 divides the rendered virtual scene image into a plurality of image blocks and transmits them to the corresponding screens among the left, right, and ground LED screens 312-1 to 312-3 for display.
Here, the virtual scene image may be divided into left, right, and ground image portions, each rendered by one of the rendering servers 308-1 to 308-3 and displayed by the corresponding one of the left, right, and ground LED screens. Specifically, the left image portion is rendered by rendering server 308-1 and displayed by the left LED screen; the right image portion is rendered by rendering server 308-2 and displayed by the right LED screen; and the ground image portion is rendered by rendering server 308-3 and displayed by the ground LED screen.
To address the problem that, in current CAVE systems, each screen can provide a correct stereoscopic perspective image for only one camera (equivalent to a single observer) and cannot simultaneously provide correct stereoscopic perspective images for multiple cameras, the present disclosure provides an image display method and apparatus for the CAVE system.
Fig. 1 is a flowchart illustrating an image display method 100 for a CAVE system according to an embodiment of the present disclosure. As shown in fig. 1, the image display method 100 includes: step S102, constructing, based on a spatial positioning technique, a spatial coordinate system of the three-dimensional real space in which a virtual scene image is to be displayed, and determining position coordinates and attitude coordinates of a camera in the spatial coordinate system; step S104, constructing at least two virtual screens respectively corresponding to at least two real screens, based on size information of the at least two real screens and their position information in the spatial coordinate system; step S106, binding the position coordinates and attitude coordinates of the camera to a virtual camera, and projecting image portions of the virtual scene image onto the virtual screen regions, on the at least two virtual screens, that the virtual camera faces, based on the position coordinates and attitude coordinates of the virtual camera; and step S108, rendering the projected image portions and outputting them to the corresponding real screen regions on the at least two real screens for display.
Here, by binding the position coordinates and attitude coordinates of each camera to a virtual camera, projecting image portions of the virtual scene image onto the virtual screen regions that the virtual camera faces, and rendering the projected image portions and outputting them to the corresponding real screen regions for display, a correct stereoscopic perspective image can be displayed for each camera even when multiple cameras simultaneously photograph the display content of different screen regions of the CAVE system. A per-frame pipeline in this spirit is sketched below.
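By way of illustration only, the per-frame flow of steps S102, S106, and S108 can be summarized as the following minimal sketch (all object and method names here, such as tracker.get_camera_pose or region_faced_by, are hypothetical stand-ins and not part of the patent):

```python
def display_frame(tracker, virtual_camera, virtual_screens, renderer, scene):
    # S102: read the camera's position and attitude coordinates in the
    # spatial coordinate system from the spatial positioning system.
    position, attitude = tracker.get_camera_pose()

    # S106, first half: bind the real camera's pose to the virtual camera.
    virtual_camera.position = position
    virtual_camera.attitude = attitude

    # The virtual screens were built once in S104 from the real screens'
    # size and position information.
    for screen in virtual_screens:
        # S106, second half: find the virtual screen region that the
        # virtual camera currently faces, if any.
        region = screen.region_faced_by(virtual_camera)
        if region is None:
            continue
        # S108: render the projected image portion and output it to the
        # corresponding real screen region for display.
        image = renderer.render(scene, virtual_camera, region)
        screen.real_screen.show(image, region)
```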
In some embodiments, the spatial coordinate system may be constructed, and the camera's position and attitude coordinates determined, based on infrared positioning. Specifically, constructing the spatial coordinate system of the three-dimensional real space based on infrared positioning includes: acquiring the real-time images of the three-dimensional real space captured by each infrared camera; calculating the motion differences, across the scanned field, of the marker points in the acquired real-time images (the marker points correspond to infrared-reflective markers mounted on the camera); and determining the axes and origin of the spatial coordinate system from the calculated motion differences, thereby constructing the spatial coordinate system of the three-dimensional real space. While the spatial coordinate system is being constructed, the position coordinates and attitude coordinates of the camera in that coordinate system can be determined from the marker points in the acquired real-time images, for example as sketched below.
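The patent does not name a specific algorithm for recovering the pose from the marker points. One standard choice, assumed here purely for illustration, is rigid (Kabsch/SVD) alignment of the camera's known marker layout against the marker positions triangulated from the infrared images:

```python
import numpy as np

def rigid_pose_from_markers(model_pts, observed_pts):
    """Kabsch/SVD alignment: find R, t such that observed ~= R @ model + t.

    model_pts: (N, 3) marker coordinates in the camera rig's own frame;
    observed_pts: (N, 3) the same markers triangulated in the spatial
    coordinate system. Both layouts are assumptions for illustration.
    """
    mc, oc = model_pts.mean(axis=0), observed_pts.mean(axis=0)
    H = (model_pts - mc).T @ (observed_pts - oc)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T        # attitude (rotation)
    t = oc - R @ mc                                # position
    return R, t
```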
Infrared positioning offers very high positioning accuracy and, provided the frame rate of the infrared cameras is sufficiently high, very low latency. Where the requirements on positioning accuracy or latency are less stringent, the spatial coordinate system of the three-dimensional real space may instead be constructed, and the camera's position and attitude coordinates determined, based on laser positioning or visible-light positioning.
In many application scenarios of CAVE systems, at least one of the orientation and the angle of the at least two real screens may be set according to the specifics of the virtual scene image or of the three-dimensional real space in which it is to be displayed. Accordingly, in some embodiments, the at least two virtual screens may further be constructed based on at least one of orientation information and angle information of the at least two real screens in the spatial coordinate system. In this way, even in application scenarios where the orientation or angle of the real screens is not fixed, virtual screens that fully correspond to the real screens can be constructed, so that correct stereoscopic perspective images can still be displayed on the real screens. A minimal representation of such a virtual screen is sketched below.
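As one way of making this concrete (a minimal sketch; the field layout is an assumption, not the patent's data model), a virtual screen can be represented by its center, orientation vectors, and physical size in the spatial coordinate system, from which its corner coordinates follow directly:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class VirtualScreen:
    center: np.ndarray  # screen center in the spatial coordinate system
    right: np.ndarray   # unit vector along the screen's width (orientation)
    up: np.ndarray      # unit vector along the screen's height (angle/tilt)
    width: float        # physical width, same units as the coordinate system
    height: float       # physical height

    def corners(self) -> np.ndarray:
        # Corner order: lower-left, lower-right, upper-right, upper-left.
        w = self.right * self.width / 2.0
        h = self.up * self.height / 2.0
        return np.stack([self.center - w - h, self.center + w - h,
                         self.center + w + h, self.center - w + h])
```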
In some embodiments, the image display method 100 may further include: step S110, determining zoom and focus parameters of the camera and binding them to the virtual camera; and step S112, determining the size of the image portion based on the zoom and focus parameters of the virtual camera. By sizing the image portion according to the camera's shooting range, as many image portions as possible can be displayed on the at least two real screens without the image portions interfering with one another, so that the shooting requirements of as many cameras as possible can be met.
In some embodiments, the zoom and focus parameters of the camera may be determined based on checkerboard lens calibration, which obtains them in a comparatively simple and accurate way. It will be appreciated that the zoom and focus parameters may also be determined by other calibration approaches as required. One concrete realization is sketched below.
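The patent names only "checkerboard lens calibration". A common concrete realization, assumed here, is OpenCV's chessboard-based calibration: the focal length in the recovered camera matrix characterizes the lens at its current zoom and focus setting, so repeating the procedure across settings yields the zoom and focus parameters to bind to the virtual camera:

```python
import cv2
import numpy as np

def calibrate_from_checkerboard(images, pattern=(9, 6), square=0.025):
    """Estimate camera intrinsics from checkerboard photos.

    `pattern` is the count of inner corners per row and column, and
    `square` the square size in meters; both are illustrative values.
    """
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square
    obj_pts, img_pts, size = [], [], None
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
    # K holds fx, fy (focal lengths, the zoom-dependent parameters) and
    # cx, cy; dist holds the lens distortion coefficients.
    rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    return K, dist
```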
In summary, by constructing a plurality of virtual cameras respectively corresponding to a plurality of cameras and a plurality of virtual screens respectively corresponding to a plurality of real screens, projecting different image portions of the virtual scene image onto different virtual screen regions based on the position and attitude of each virtual camera, and rendering those image portions and outputting them to the corresponding real screen regions for display, a correct stereoscopic perspective image can be displayed for each camera when multiple cameras simultaneously photograph the display content of different screen regions of the CAVE system.
Fig. 2 is a block diagram illustrating an image display apparatus 200 according to an embodiment of the present disclosure. As shown in fig. 2, the image display apparatus 200 may include a coordinate system construction unit 202, a screen construction unit 204, an image projection unit 206, and an image rendering unit 208, wherein: the coordinate system construction unit 202 is configured to construct, based on a spatial positioning technique, a spatial coordinate system of the three-dimensional real space in which a virtual scene image is to be displayed, and to determine position coordinates and attitude coordinates of the camera in the spatial coordinate system; the screen construction unit 204 is configured to construct at least two virtual screens respectively corresponding to at least two real screens, based on size information of the at least two real screens and their position information in the spatial coordinate system; the image projection unit 206 is configured to bind the position coordinates and attitude coordinates of the camera to a virtual camera, and to project image portions of the virtual scene image onto the virtual screen regions, on the at least two virtual screens, that the virtual camera faces, based on the position coordinates and attitude coordinates of the virtual camera; and the image rendering unit 208 is configured to render the image portions and output them to the corresponding real screen regions on the at least two real screens for display.
In some implementations of the present embodiment, the coordinate system construction unit 202 may be further configured to construct the spatial coordinate system, and to determine the position coordinates and attitude coordinates of the camera, based on infrared positioning.
In some implementations of the present embodiment, the screen construction unit 204 may be further configured to construct the at least two virtual screens further based on at least one of orientation information and angle information of the at least two real screens in the spatial coordinate system.
In some implementations of the present embodiment, the image projection unit 206 may be further configured to determine zoom and focus parameters of the camera, bind them to the virtual camera, and determine the size of the image portion based on the zoom and focus parameters of the virtual camera.
In some implementations of the present embodiment, the image projection unit 206 may be further configured to determine the zoom and focus parameters of the camera based on checkerboard lens calibration.
For the technical effects of the image display apparatus 200 and its functional units, reference may be made to the related description in the embodiment corresponding to fig. 1, which is not repeated here.
It will be understood that, in the CAVE system 300 shown in fig. 3, the image display method 100 may be performed, and the image display apparatus 200 implemented, jointly by the control server 306 and at least two of the rendering servers 308-1 to 308-3. For example, steps S102 to S104 and steps S110 to S112 of the image display method 100 may be performed by the control server 306; step S106 may be performed jointly by the control server 306 and at least two of the rendering servers 308-1 to 308-3 (e.g., the control server 306 binds the position coordinates and attitude coordinates of the camera to the virtual camera, while at least two of the rendering servers project the image portions of the virtual scene image onto the virtual screen regions that the virtual camera faces); and step S108 may be performed by at least two of the rendering servers 308-1 to 308-3. Similarly, the coordinate system construction unit 202 and the screen construction unit 204 of the image display apparatus 200 may be implemented by the control server 306, the image projection unit 206 may be implemented jointly by the control server 306 and at least two of the rendering servers 308-1 to 308-3, and the image rendering unit 208 may be implemented by at least two of the rendering servers 308-1 to 308-3.
Specifically, after binding the camera's information to the virtual camera, the control server 306 may transmit the virtual camera's information to the rendering servers 308-1 to 308-3, for example by local area network broadcast (a minimal sketch of which is given below). In addition, after constructing the at least two virtual screens respectively corresponding to the at least two real screens, the control server 306 may transmit the information of each virtual screen to the corresponding one of the rendering servers 308-1 to 308-3. For example, the control server 306 may construct two virtual screens corresponding to the left and right LED screens 312-1 and 312-2, transmit the information of the virtual screen corresponding to the left LED screen 312-1 to the rendering server 308-1, and transmit the information of the virtual screen corresponding to the right LED screen 312-2 to the rendering server 308-2.
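The patent says only that the information may be sent by local area network broadcast. As one illustrative realization (the UDP transport, port number, and JSON payload layout are all assumptions), the control server could push the bound virtual-camera state to every rendering server like this:

```python
import json
import socket

def broadcast_camera_state(position, attitude, zoom, focus, port=9999):
    """Broadcast the virtual camera's bound parameters on the local network."""
    payload = json.dumps({
        "position": position,  # e.g. [x, y, z] in the spatial coordinate system
        "attitude": attitude,  # e.g. rotation as a quaternion [w, x, y, z]
        "zoom": zoom,
        "focus": focus,
    }).encode("utf-8")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(payload, ("255.255.255.255", port))
    sock.close()
```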
After receiving the virtual camera's information and the information of the virtual screen corresponding to the left LED screen 312-1, the rendering server 308-1 may, according to the position, attitude, and shooting range of the virtual camera, project the part of the left image portion of the virtual scene image that falls within the virtual camera's shooting range onto the screen region, on the virtual screen corresponding to the left LED screen 312-1, that the virtual camera faces, render the projected image portion, and output it to the corresponding screen region on the left LED screen 312-1 for display. Similarly, after receiving the virtual camera's information and the information of the virtual screen corresponding to the right LED screen 312-2, the rendering server 308-2 may project the part of the right image portion within the virtual camera's shooting range onto the screen region, on the virtual screen corresponding to the right LED screen 312-2, that the virtual camera faces, render it, and output it to the corresponding screen region on the right LED screen 312-2 for display. The underlying off-axis projection is sketched below.
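The patent does not spell out the projection mathematics. For a tracked camera viewing a fixed screen plane, one standard formulation (assumed here, following Kooima's "Generalized Perspective Projection") builds an off-axis frustum from the screen corners and the camera position; this is what keeps the perspective correct as the camera moves:

```python
import numpy as np

def off_axis_projection(eye, pa, pb, pc, near, far):
    """Off-axis frustum for a screen with corners pa (lower-left),
    pb (lower-right), and pc (upper-left), viewed from `eye`; all points
    are in the spatial coordinate system."""
    vr = pb - pa; vr /= np.linalg.norm(vr)            # screen right axis
    vu = pc - pa; vu /= np.linalg.norm(vu)            # screen up axis
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)   # screen normal
    va, vb, vc = pa - eye, pb - eye, pc - eye
    d = -np.dot(va, vn)                               # eye-to-screen distance
    left = np.dot(vr, va) * near / d
    right = np.dot(vr, vb) * near / d
    bottom = np.dot(vu, va) * near / d
    top = np.dot(vu, vc) * near / d
    return np.array([
        [2 * near / (right - left), 0, (right + left) / (right - left), 0],
        [0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0],
    ])
```

The full method additionally rotates the view into the screen's basis (vr, vu, vn) and translates it by the camera position; only the frustum construction is shown here.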
FIG. 4 is a schematic diagram illustrating the display of different image portions on the left and right LED screens, respectively, of the example CAVE system shown in FIG. 3. As can be seen from FIG. 4, different image portions can be displayed on the left and right LED screens 312-1 and 312-2, respectively, for two cameras to capture.
In summary, in a CAVE system to which the image display method 100 and the image display apparatus 200 according to the embodiments of the present disclosure are applied, the position coordinates and attitude coordinates of the camera can be determined by infrared positioning, and its zoom and focus parameters by checkerboard lens calibration; the position coordinates, attitude coordinates, and zoom and focus parameters of the camera are bound to the virtual camera, so that all parameters of the virtual camera track those of the camera in real time; image portions of the virtual scene image are projected, by lens projection, onto the virtual screen regions, on the two virtual screens respectively corresponding to the left and right LED screens, that the virtual camera faces; and the projected image portions are rendered and output to the corresponding real screen regions on the left and right LED screens for display. When the camera moves while shooting in the three-dimensional real space displaying the virtual scene image, the corresponding real screen regions on the left and right LED screens change to follow the camera's motion, ensuring that the display content within the camera's shooting range remains a correct stereoscopic perspective image.
FIG. 5 is a block diagram illustrating an exemplary computer system that can be used to implement embodiments of the present disclosure. A computer system 500 suitable for implementing embodiments of the present disclosure is described below in conjunction with fig. 5. It should be appreciated that the computer system 500 illustrated in FIG. 5 is only one example and should not impose any limitations on the scope of use or functionality of embodiments of the disclosure.
As shown in fig. 5, computer system 500 may include a processing device (e.g., central processing unit, graphics processor, etc.) 501 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage device 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data necessary for the operation of the computer system 500 are also stored. The processing device 501, the ROM 502, and the RAM 503 are connected to each other by a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touchpad, camera, accelerometer, gyroscope, etc.; an output device 507 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; a storage device 508 including, for example, a Flash memory (Flash Card); and a communication device 509. The communication means 509 may allow the computer system 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 illustrates a computer system 500 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 5 may represent one device or may represent multiple devices as desired.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure provides a computer-readable storage medium storing a computer program containing program code for executing the image display method 100 shown in fig. 1. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 508, or installed from the ROM 502. The computer program realizes the above-described functions defined in the system of the embodiment of the present disclosure when executed by the processing apparatus 501.
It should be noted that the computer readable medium described in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In embodiments of the present disclosure, however, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (Radio Frequency), etc., or any suitable combination of the foregoing.
The computer-readable medium may be embodied in the computer system 500, or may exist separately without being incorporated into the computer system 500. The computer-readable medium carries one or more programs which, when executed by the computer system, cause the computer system to: construct, based on a spatial positioning technique, a spatial coordinate system of the three-dimensional real space in which a virtual scene image is to be displayed, and determine position coordinates and attitude coordinates of a camera in the spatial coordinate system; construct at least two virtual screens respectively corresponding to at least two real screens, based on size information of the at least two real screens and their position information in the spatial coordinate system; bind the position coordinates and attitude coordinates of the camera to a virtual camera, and project image portions of the virtual scene image onto the virtual screen regions, on the at least two virtual screens, that the virtual camera faces, based on the position coordinates and attitude coordinates of the virtual camera; and render the projected image portions and output them to the corresponding real screen regions on the at least two real screens for display.
Computer program code for carrying out operations of embodiments of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The described units may also be provided in a processor and may be described as: a processor including a coordinate system construction unit, a screen construction unit, an image projection unit, and an image rendering unit. The names of these units do not, in some cases, constitute a limitation on the units themselves.
The foregoing description is merely a description of the preferred embodiments of the present disclosure and of the principles of the technology employed. Those skilled in the art will appreciate that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept defined above, for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (12)

1. An image display method for a cave-shaped automatic virtual environment system, comprising:
constructing, based on a spatial positioning technique, a spatial coordinate system of a three-dimensional real space in which a virtual scene image is to be displayed, and determining position coordinates and attitude coordinates of a camera in the spatial coordinate system;
constructing at least two virtual screens respectively corresponding to at least two real screens, based on size information of the at least two real screens and position information of the at least two real screens in the spatial coordinate system;
binding the position coordinates and the attitude coordinates of the camera to a virtual camera, and projecting image portions of the virtual scene image, based on the position coordinates and the attitude coordinates of the virtual camera, onto virtual screen regions, on the at least two virtual screens, that the virtual camera faces; and
rendering the image portions and outputting the rendered image portions to real screen regions corresponding to the virtual screen regions on the at least two real screens for display.
2. The image display method according to claim 1, wherein the spatial coordinate system is constructed, and the position coordinates and attitude coordinates of the camera are determined, based on an infrared positioning technique.
3. The image display method of claim 1, wherein the at least two virtual screens are further constructed based on at least one of orientation information and angle information of the at least two real screens in the spatial coordinate system.
4. The image display method according to claim 1, further comprising:
determining zoom and focus parameters of the camera, and binding the zoom and focus parameters of the camera to the virtual camera; and
determining a size of the image portion based on zoom and focus parameters of the virtual camera.
5. The image display method according to claim 1, wherein zoom and focus parameters of the camera are determined based on checkerboard lens calibration.
6. An image display device for a cave-shaped automatic virtual environment system, comprising:
a coordinate system construction unit configured to construct a spatial coordinate system of a three-dimensional real space in which a virtual scene image is to be displayed based on a spatial positioning technique, and determine position coordinates and attitude coordinates of a camera in the spatial coordinate system;
a screen construction unit configured to construct at least two virtual screens respectively corresponding to at least two real screens based on size information of the at least two real screens and position information of the at least two real screens in the spatial coordinate system;
an image projection unit configured to bind the position coordinates and the attitude coordinates of the camera to a virtual camera, and to project image portions of the virtual scene image, based on the position coordinates and the attitude coordinates of the virtual camera, onto virtual screen regions, on the at least two virtual screens, that the virtual camera faces; and
an image rendering unit configured to render the image portions and output the rendered image portions to real screen regions corresponding to the virtual screen regions on the at least two real screens for display.
7. The image display apparatus according to claim 6, wherein the coordinate system construction unit is further configured to construct the spatial coordinate system and determine position coordinates and attitude coordinates of the camera based on an infrared positioning technique.
8. The image display device according to claim 6, wherein the screen construction unit is further configured to construct the at least two virtual screens further based on at least one of orientation information and angle information of the at least two real screens in the spatial coordinate system.
9. The image display device of claim 6, wherein the image projection unit is further configured to determine zoom and focus parameters of the camera, bind the zoom and focus parameters of the camera to the virtual camera, and determine the size of the image portion based on the zoom and focus parameters of the virtual camera.
10. The image display device of claim 6, wherein the image projection unit is further configured to determine zoom and focus parameters of the camera based on checkerboard lens calibration.
11. A computer system, comprising:
a processor; and
a memory storing a computer program that, when executed by the processor, causes the processor to perform the method of any of claims 1-5.
12. A computer-readable storage medium storing a computer program which, when executed by a processor of a computer system, causes the computer system to perform the method of any one of claims 1-5.
Application CN202010900440.8A (priority date 2020-09-01, filing date 2020-09-01): Image display method and apparatus, computer system, and computer-readable storage medium. Status: Active. Granted as CN111766951B (en).

Priority Applications (1)

Application CN202010900440.8A (priority date 2020-09-01, filing date 2020-09-01), granted as CN111766951B (en): Image display method and apparatus, computer system, and computer-readable storage medium

Publications (2)

CN111766951A, published 2020-10-13
CN111766951B (en), published 2021-02-02

Family ID: 72729208

Family Applications (1)

Application CN202010900440.8A (Active), granted as CN111766951B (en): Image display method and apparatus, computer system, and computer-readable storage medium

Country Status (1)

CN: CN111766951B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040125205A1 (en) * 2002-12-05 2004-07-01 Geng Z. Jason System and a method for high speed three-dimensional imaging
CN1897715A (en) * 2006-05-31 2007-01-17 北京航空航天大学 Three-dimensional vision semi-matter simulating system and method
CN106296683A (en) * 2016-08-09 2017-01-04 深圳迪乐普数码科技有限公司 A kind of generation method of virtual screen curtain wall and terminal
CN107341832A (en) * 2017-04-27 2017-11-10 北京德火新媒体技术有限公司 A kind of various visual angles switching camera system and method based on infrared location system
CN111476876A (en) * 2020-04-02 2020-07-31 北京七维视觉传媒科技有限公司 Three-dimensional image rendering method, device and equipment and readable storage medium

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113810612A (en) * 2021-09-17 2021-12-17 上海傲驰广告文化集团有限公司 Analog live-action shooting method and system
CN114520903A (en) * 2022-02-17 2022-05-20 阿里巴巴(中国)有限公司 Rendering display method, device, storage medium and computer program product
CN114520903B (en) * 2022-02-17 2023-08-08 阿里巴巴(中国)有限公司 Rendering display method, rendering display device, electronic equipment and storage medium
CN115760964A (en) * 2022-11-10 2023-03-07 亮风台(上海)信息科技有限公司 Method and equipment for acquiring screen position information of target object
CN115760964B (en) * 2022-11-10 2024-03-15 亮风台(上海)信息科技有限公司 Method and equipment for acquiring screen position information of target object
CN116433769A (en) * 2023-04-21 2023-07-14 北京优酷科技有限公司 Space calibration method, device, electronic equipment and storage medium
CN116524022A (en) * 2023-04-28 2023-08-01 北京优酷科技有限公司 Offset data calculation method, image fusion device and electronic equipment
CN116524022B (en) * 2023-04-28 2024-03-26 神力视界(深圳)文化科技有限公司 Offset data calculation method, image fusion device and electronic equipment
CN116320364A (en) * 2023-05-25 2023-06-23 四川中绳矩阵技术发展有限公司 Virtual reality shooting method and display method based on multi-layer display
CN116723303A (en) * 2023-08-11 2023-09-08 腾讯科技(深圳)有限公司 Picture projection method, device, equipment and storage medium
CN116723303B (en) * 2023-08-11 2023-12-05 腾讯科技(深圳)有限公司 Picture projection method, device, equipment and storage medium

Also Published As

CN111766951B (en), published 2021-02-02


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant
PE01: Entry into force of the registration of the contract for pledge of patent right
    Denomination of invention: Image display method and device, computer system and computer readable storage medium
    Effective date of registration: 2022-06-24
    Granted publication date: 2021-02-02
    Pledgee: Industrial Bank Co.,Ltd. Beijing Dongcheng sub branch
    Pledgor: BEIJING 7D VISION TECHNOLOGY Co.,Ltd.
    Registration number: Y2022980008816