CN112351266B - Three-dimensional visual processing method, device, equipment, display system and medium - Google Patents

Three-dimensional visual processing method, device, equipment, display system and medium

Info

Publication number
CN112351266B
CN112351266B CN202011169542.3A
Authority
CN
China
Prior art keywords
display screen
display
dimensional
preset
view
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011169542.3A
Other languages
Chinese (zh)
Other versions
CN112351266A (en)
Inventor
高志生
张安全
崔志斌
朱运兰
葛耀旭
田卧龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhengzhou J&T Hi Tech Co Ltd
Original Assignee
Zhengzhou J&T Hi Tech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhengzhou J&T Hi Tech Co Ltd filed Critical Zhengzhou J&T Hi Tech Co Ltd
Priority to CN202011169542.3A
Publication of CN112351266A
Application granted
Publication of CN112351266B
Legal status: Active (current)
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N13/365 Image reproducers using digital micromirror devices [DMD]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/243 Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/14 Systems for two-way working
    • H04N7/15 Conference systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The application provides a three-dimensional visual processing method, a device, equipment, a display system and a medium, and relates to the technical field of display equipment. The method is applied to computer equipment connected with a plurality of display screens, and a preset included angle exists between adjacent display screens in the plurality of display screens; the method can calculate the size information and position information of a viewfinder of a virtual camera corresponding to each display screen in a virtual three-dimensional scene corresponding to a physical scene according to the acquired physical screen parameters of each display screen and the coordinates of a preset viewing angle origin; framing from the virtual three-dimensional scene according to the size information and the position information of the viewfinder, and determining a three-dimensional view picture corresponding to each display screen; splicing the three-dimensional visual pictures corresponding to the plurality of display screens to obtain a target visual picture; the target view picture is sent to each display screen, so that real view finding can be performed according to the physical size of the display screen, and the display effect can be improved when the corresponding three-dimensional view picture is displayed through each display screen.

Description

Three-dimensional visual processing method, device, equipment, display system and medium
Technical Field
The present application relates to the field of display device technologies, and in particular, to a three-dimensional view processing method, apparatus, device, display system, and medium.
Background
Multi-channel splicing is a technology that can display a plurality of dynamic pictures on a plurality of screens, realizing the function of multi-window splicing; it is widely applied in fields such as monitoring and commanding, dispatching systems, military, meteorology, aviation, video conferencing, and inquiry systems.
The existing multi-channel splicing technology mainly comprises the following two application modes: one is to use a projector to project a picture onto a curtain; another is to use a liquid crystal display (or other type of electronic display) for the picture display.
However, the existing multi-channel splicing technology processes data in a relatively simple way: the projection picture is merely scaled in proportion to the display equipment, so there is the technical problem of a poor display effect.
Disclosure of Invention
An object of the present application is to provide a method, an apparatus, a device, a display system, and a medium for processing a three-dimensional view, which can perform real view finding according to the physical size of a display screen, so that when a corresponding three-dimensional view is displayed on each display screen, adaptive display can be performed, and the display effect is improved.
In order to achieve the above purpose, the technical solutions adopted in the embodiments of the present application are as follows:
in a first aspect, an embodiment of the present invention provides a three-dimensional view processing method, which is applied to a computer device connected with a plurality of display screens, where adjacent display screens in the plurality of display screens have a preset included angle; the method comprises the following steps:
acquiring physical screen parameters of each display screen and coordinates of a preset view angle origin, wherein the preset view angle origin is a view angle point preset in a physical scene where the plurality of display screens are located;
calculating the size information and position information of a viewfinder of a virtual camera corresponding to each display screen in a virtual three-dimensional scene corresponding to the physical scene according to the parameters of the physical screen and the coordinates of the preset viewing angle origin;
framing from the virtual three-dimensional scene according to the size information and the position information of the viewfinder, and determining a three-dimensional view picture corresponding to each display screen;
splicing the three-dimensional visual pictures corresponding to the display screens to obtain a target visual picture;
and sending the target visual scene picture to each display screen, so that each display screen displays the three-dimensional visual scene picture corresponding to each display screen in the target visual scene picture.
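A minimal, runnable sketch of the five steps above (all names — `ScreenParams`, `compute_viewfinder`, `render_view`, `process` — are hypothetical illustrations, not part of the patent, and the framing step is reduced to a placeholder):

```python
from dataclasses import dataclass

@dataclass
class ScreenParams:
    screen_id: int
    width: float     # physical screen width
    height: float    # physical screen height
    center: tuple    # screen-center coordinates in the physical scene

def compute_viewfinder(screen, origin):
    """Viewfinder size equals the physical screen size; its position is the
    screen center expressed relative to the preset view angle origin."""
    size = (screen.width, screen.height)
    position = tuple(c - o for c, o in zip(screen.center, origin))
    return size, position

def render_view(screen, size, position):
    """Placeholder for framing the virtual three-dimensional scene through
    the virtual camera's viewfinder."""
    return {"id": screen.screen_id, "size": size, "position": position}

def process(screens, origin):
    # Compute each viewfinder, frame per screen, then splice by screen id
    # (a naive stand-in for splicing by arrangement position).
    frames = [render_view(s, *compute_viewfinder(s, origin)) for s in screens]
    return sorted(frames, key=lambda f: f["id"])
```

The property the method relies on is visible in `compute_viewfinder`: the viewfinder inherits the physical screen size instead of scaling a fixed-proportion picture to fit the screen.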
In an alternative embodiment, the physical screen parameters include: the size information of each display screen and the central point parameter of each display screen;
the calculating of the size information and the position information of the viewfinder of the virtual camera of each display screen in the virtual three-dimensional scene corresponding to the physical scene according to the parameters of the physical screen and the preset view angle origin comprises:
and calculating the size information and the position information of the viewfinder according to the size information, the central point parameter and the coordinates of the preset viewing angle origin.
In an alternative embodiment, the center point parameter comprises: the vertical distance, the horizontal distance and the height distance of the center point of each display screen;
the vertical distance is the vertical distance from the central point of each display screen to a preset auxiliary line, the preset auxiliary line is an auxiliary line which passes through the preset visual angle original point and is perpendicular to an extension line of the positive direction of the visual angle, and the positive direction of the visual angle is the visual angle orientation of the preset visual angle original point in the physical scene; the horizontal distance is the horizontal distance from the center point of each display screen to the extension line of the positive direction of the viewing angle; the height distance is the height difference between the center point of each display screen and the preset view angle origin.
In an alternative embodiment, the physical screen parameters further include: the included angle between each display screen and the positive direction of the visual angle;
the finding a view from the virtual three-dimensional scene according to the size information and the position information of the finder window and determining the three-dimensional view picture corresponding to each display screen comprises the following steps:
calculating the viewfinder rotation information of each display screen corresponding to the virtual camera according to the viewfinder size information, the viewfinder position information and the included angle;
and framing is carried out from the virtual three-dimensional scene according to the viewfinder rotation information, and a three-dimensional view picture corresponding to each display screen is determined.
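As one way to picture the rotation step: the virtual camera can be turned about the vertical axis by the included angle between its display screen and the positive viewing direction. The sketch below assumes z as the up axis and uses a hypothetical function name; the patent fixes neither convention.

```python
import math

def viewfinder_yaw(angle_deg):
    """3x3 rotation about the vertical (z) axis by the included angle between
    a display screen and the positive viewing direction."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [[c,  -s,  0.0],
            [s,   c,  0.0],
            [0.0, 0.0, 1.0]]
```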
In an optional implementation manner, the calculating, according to the size information of the finder window, the position information of the finder window, and the included angle, the rotation information of the finder window of the virtual camera corresponding to each display screen includes:
and processing a preset projection matrix of each display screen corresponding to the virtual camera according to the size information of the viewfinder, the position information of the viewfinder and the included angle to obtain a target projection matrix, wherein the target projection matrix is used for representing the size information of the viewfinder, the position information of the viewfinder and the rotation information of the viewfinder.
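The patent does not give the concrete form of the target projection matrix. One common realization of a viewfinder whose extents match a physical screen is an off-axis (asymmetric-frustum) perspective projection; the sketch below uses the OpenGL-style frustum matrix as an assumed stand-in, with hypothetical helper names:

```python
import numpy as np

def off_axis_projection(l, r, b, t, n, f):
    """OpenGL-style asymmetric frustum. l/r/b/t are the viewfinder extents
    projected onto the near plane; n/f are the near/far clip distances."""
    return np.array([
        [2 * n / (r - l), 0.0,             (r + l) / (r - l),  0.0],
        [0.0,             2 * n / (t - b), (t + b) / (t - b),  0.0],
        [0.0,             0.0,            -(f + n) / (f - n), -2 * f * n / (f - n)],
        [0.0,             0.0,            -1.0,                0.0],
    ])

def screen_frustum(width, height, cx, cy, eye_dist, near, far):
    """Frustum for a physical screen of size width x height whose center is
    offset (cx, cy) from the viewing axis, seen from distance eye_dist."""
    s = near / eye_dist  # scale physical extents down to the near plane
    return off_axis_projection((cx - width / 2) * s, (cx + width / 2) * s,
                               (cy - height / 2) * s, (cy + height / 2) * s,
                               near, far)
```

A screen centered on the viewing axis yields a symmetric frustum (the third-column skew terms vanish); an off-center screen yields the asymmetric frustum that keeps the rendered picture geometrically correct from the preset view angle origin.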
In an optional embodiment, the splicing the three-dimensional view pictures corresponding to the plurality of display screens to obtain a target view picture includes:
and splicing the three-dimensional visual pictures corresponding to the display screens according to the arrangement positions of the display screens in the physical scene to obtain the target visual picture.
In a second aspect, an embodiment of the present invention provides a three-dimensional view processing apparatus, which is applied to a computer device connected with a plurality of display screens, where adjacent display screens in the plurality of display screens have a preset included angle; the three-dimensional visual processing device comprises:
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring physical screen parameters of each display screen and coordinates of a preset view angle origin, and the preset view angle origin is a view angle point preset in a physical scene where a plurality of display screens are located;
the calculation module is used for calculating the size information and the position information of the view finder of each virtual camera corresponding to each display screen in the virtual three-dimensional scene corresponding to the physical scene according to the physical screen parameters and the preset view origin;
the view finding module is used for finding a view from the virtual three-dimensional scene according to the size information and the position information of the view finding window and determining a three-dimensional view picture corresponding to each display screen;
the splicing module is used for splicing the three-dimensional visual pictures corresponding to the display screens to obtain a target visual picture;
and the transmission module is used for sending the target view picture to each display screen, so that each display screen displays the three-dimensional view picture corresponding to each display screen in the target view picture.
In an alternative embodiment, the physical screen parameters include: the size information of each display screen and the central point parameter of each display screen; the calculation module is specifically configured to calculate the size information of the finder window and the position information of the finder window according to the size information, the center point parameter, and the coordinates of the preset view origin.
In an alternative embodiment, the center point parameter comprises: the vertical distance, the horizontal distance and the height distance of the center point of each display screen;
the vertical distance is the perpendicular distance from the center point of each display screen to a preset auxiliary line, where the preset auxiliary line is a line that passes through the preset view angle origin and is perpendicular to the extension line of the positive viewing direction, and the positive viewing direction is the viewing orientation of the preset view angle origin in the physical scene; the horizontal distance is the horizontal distance from the center point of each display screen to the extension line of the positive viewing direction; the height distance is the height difference between the center point of each display screen and the preset view angle origin.
In an alternative embodiment, the physical screen parameters further include: the included angle between each display screen and the positive direction of the visual angle; the view finding module is specifically configured to calculate view finding window rotation information of each display screen corresponding to the virtual camera according to the view finding window size information, the view finding window position information, and the included angle;
and framing is carried out from the virtual three-dimensional scene according to the viewfinder rotation information, and a three-dimensional view picture corresponding to each display screen is determined.
In an optional implementation manner, the view finding module is specifically configured to process a preset projection matrix of each virtual camera corresponding to the display screen according to the view finding window size information, the view finding window position information, and the included angle, so as to obtain a target projection matrix, where the target projection matrix is used to represent the view finding window size information, the view finding window position information, and the view finding window rotation information.
In an optional embodiment, the splicing module is specifically configured to splice three-dimensional view pictures corresponding to a plurality of display screens according to arrangement positions of the display screens in the physical scene, so as to obtain the target view picture.
In a third aspect, an embodiment of the present invention provides a computer device, including: the display device comprises a processor, a storage medium and a bus, wherein the processor is connected with a plurality of display screens, and adjacent display screens in the plurality of display screens form preset included angles; the storage medium stores machine-readable instructions executable by the processor, when a computer device runs, the processor communicates with the storage medium through a bus, and the processor executes the machine-readable instructions to execute the steps of the three-dimensional visual processing method according to any one of the foregoing embodiments.
In a fourth aspect, an embodiment of the present invention provides a three-dimensional view display system, including: at least one computer device, and a plurality of display screens connected to each computer device; a preset included angle exists between adjacent display screens in the plurality of display screens;
the display screens connected with the computer devices are arranged in the same physical scene; each of the computer devices is configured to execute the three-dimensional view processing method according to any one of the foregoing embodiments.
In an optional embodiment, each of the display screens is a tiled display screen obtained by tiling a plurality of sub-display screens.
In a fifth aspect, an embodiment of the present invention provides a storage medium, on which a computer program is stored, where the computer program is executed by a processor to execute the steps of the three-dimensional scene processing method according to any one of the foregoing embodiments.
The beneficial effect of this application is:
in the three-dimensional view processing method, the three-dimensional view processing device, the three-dimensional view processing equipment, the three-dimensional view processing display system and the three-dimensional view processing medium, the method can be applied to computer equipment connected with a plurality of display screens, and adjacent display screens in the plurality of display screens have preset included angles; the method comprises the following steps: acquiring physical screen parameters of each display screen and coordinates of a preset view angle origin; calculating the size information and position information of a viewfinder of each display screen corresponding to a virtual camera in a virtual three-dimensional scene corresponding to the physical scene according to the parameters of the physical screen and the coordinates of a preset viewing angle origin; framing from the virtual three-dimensional scene according to the size information and the position information of the viewfinder, and determining a three-dimensional view picture corresponding to each display screen; splicing the three-dimensional visual pictures corresponding to the plurality of display screens to obtain a target visual picture; in the process, because the size information and the position information of the viewfinder of each display screen corresponding to the virtual camera can be acquired according to the physical screen parameters of each display screen and the coordinates of the preset viewing angle original point, the real framing can be realized according to the physical size of the display screen, so that the adaptive display can be carried out when the corresponding three-dimensional viewfinder picture is displayed through each display screen, and the display effect is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
Fig. 1 is a schematic flow chart of a three-dimensional scene processing method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram including a plurality of display screens according to an embodiment of the present application;
fig. 3 is a schematic flow chart of another three-dimensional scene processing method according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of another display screen including a plurality of display screens according to an embodiment of the present application;
fig. 5 is a schematic flowchart of another three-dimensional scene processing method according to an embodiment of the present application;
fig. 6 is a schematic functional block diagram of a three-dimensional view processing apparatus according to an embodiment of the present disclosure;
fig. 7 is a system architecture diagram of a three-dimensional visual display system according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
Multi-channel visual scene splicing adopts a multi-channel projection display system to project a virtual reality application onto a large screen, with the projection picture subjected to edge fusion processing so that the brightness and color of the whole picture remain uniform. Compared with an ordinary standard projection system, it has a larger display size, a wider field of view, more display content, and higher display resolution, and the visual effect is more impactful and immersive.
The existing multi-channel splicing technology mainly comprises the following two application modes: one is to use a projector to project a picture onto a curtain; another is to use a liquid crystal display (or other type of electronic display) for the picture display.
However, the existing multi-channel splicing technology processes data in a relatively simple way: the projection picture is merely scaled in proportion to the display equipment, and real framing cannot be performed according to the physical size of the display equipment, so there is the technical problem of a poor display effect. In addition, when a picture is displayed through display equipment based on the existing multi-channel splicing technology, generally only a flat display screen or an arc screen is supported, and when included angles exist among a plurality of pieces of display equipment, the picture at the boundary exhibits a hard fold.
In view of this, an embodiment of the present application provides a three-dimensional view processing method, which may be applied to a computer device connected with multiple display screens, where adjacent display screens in the multiple display screens have a preset included angle. In addition, by applying the embodiment of the application, when the preset included angles exist among the plurality of display screens, the pictures at the boundary of each display screen can be in smooth transition, the phenomenon of rigid folding is avoided, and the display effect is further improved.
Fig. 1 is a schematic flow diagram of a three-dimensional view processing method provided in an embodiment of the present application, and fig. 2 is a schematic structural diagram of a computer device connected with a plurality of display screens, where adjacent display screens in the plurality of display screens have a preset included angle. As shown in fig. 2, the plurality of display screens may include a first display screen 11, a second display screen 12, a third display screen 13, and a fourth display screen 15; the first display screen 11 and the second display screen 12 may form a preset included angle γ1, the second display screen 12 and the third display screen 13 may form a preset included angle γ3, and the preset included angles may differ from one another. Alternatively, each display screen may be a liquid crystal display screen or another type of electronic display screen, and the plurality of display screens may be connected with the computer device in a wired or wireless manner, which is not limited herein. As shown in fig. 1, the method may include:
s101, acquiring physical screen parameters of each display screen and coordinates of a preset view angle origin, wherein the preset view angle origin is a view angle point preset in a physical scene where the plurality of display screens are located.
In some embodiments, the physical screen parameters of a display screen may be obtained by reading the configuration information of the display screen, or may be further calculated from the read configuration information. Optionally, the screen center point parameters of the display screen may include the coordinates of the screen's center point, and the like, which is not limited herein.
The preset view origin is a view point preset in a physical scene where the multiple display screens are located, that is, a preset observation point, for example, when the physical scene where the multiple display screens are located is a certain conference room, the preset view origin is a view point preset in the conference room (for example, may be a position point where an observer is located), as shown in fig. 2, the preset view origin may be an a position in the diagram, an arrow of the preset view origin may indicate an observation direction or a view direction, and correspondingly, a coordinate of the preset view origin is a coordinate position of the observation point.
S102, calculating the size information and position information of the view window of each display screen corresponding to the virtual camera in the virtual three-dimensional scene corresponding to the physical scene according to the parameters of the physical screen and the coordinates of the preset view angle original point.
According to the physical scene where the plurality of display screens are located, a virtual scene can be modeled to establish a corresponding virtual three-dimensional scene. The virtual three-dimensional scene may include a plurality of virtual cameras, the number of which equals the number of display screens. After the physical screen parameters and the coordinates of the preset view angle origin are obtained, the size information and the position information of the viewfinder of the virtual camera corresponding to each display screen in the virtual three-dimensional scene can be calculated, where the viewfinder position information represents the framing position of the virtual camera. It can be understood that the viewfinder size information of the virtual camera corresponding to each display screen can be calculated according to the physical screen parameters of that display screen, and the viewfinder position information can be calculated according to the coordinates of the preset view angle origin.
S103, framing is performed from the virtual three-dimensional scene according to the size information and the position information of the viewfinder, and a three-dimensional view picture corresponding to each display screen is determined.
After the size information and the position information of the viewfinder of the virtual camera corresponding to each display screen are acquired, framing can be performed from the virtual three-dimensional scene through each virtual camera, thereby determining the three-dimensional view picture corresponding to each display screen. Because the viewfinder size information and position information are obtained according to the physical screen parameters of each display screen and the coordinates of the preset view angle origin, the viewfinder of the virtual camera can have the same size as the physical screen of the display screen, so real framing can be performed according to the physical size of the display screen.
In some embodiments, the three-dimensional view picture corresponding to each display screen may include a picture identifier, and the picture identifier may be the same as the identifier of the display screen, but not limited thereto, as long as the corresponding relationship between each display screen and each three-dimensional view picture may be identified.
And S104, splicing the three-dimensional visual pictures corresponding to the plurality of display screens to obtain a target visual picture.
Based on the above content, after the three-dimensional view picture corresponding to each display screen is obtained, the three-dimensional view pictures corresponding to the plurality of display screens can be spliced, and the target view picture is obtained through splicing. In some embodiments, when performing the splicing, the three-dimensional view picture corresponding to each display screen may be spliced to the position corresponding to the target view picture according to the identifier of the display screen, but not limited thereto.
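A minimal sketch of the splicing step, assuming each three-dimensional view picture arrives as an H x W x 3 pixel array keyed by its screen identifier and that identifiers encode the left-to-right arrangement (both are assumptions for illustration, not stated by the patent):

```python
import numpy as np

def stitch_frames(frames):
    """Splice per-screen frames into one target view picture, ordered by
    screen identifier (assumed to match the arrangement positions of the
    display screens in the physical scene)."""
    ordered = [frame for _, frame in sorted(frames.items())]
    return np.hstack(ordered)  # concatenate along the width axis
```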
And S105, sending the target visual image to each display screen, so that each display screen displays the three-dimensional visual image corresponding to each display screen in the target visual image.
Based on the above embodiment, after obtaining the target view picture, the computer device may send the target view picture to each display screen, and after receiving the target view picture, the display screen may display the corresponding three-dimensional view picture. In some embodiments, when each display screen displays the three-dimensional view picture corresponding to each display screen in the target view picture, the display screen may display the three-dimensional view picture according to a corresponding relationship between each display screen and each three-dimensional view picture.
In summary, the three-dimensional view processing method provided by the embodiment of the present application may be applied to a computer device connected with a plurality of display screens, where adjacent display screens in the plurality of display screens have a preset included angle. The method includes: acquiring physical screen parameters of each display screen and coordinates of a preset viewing angle origin; calculating, according to the physical screen parameters and the coordinates of the preset viewing angle origin, the size information and the position information of the viewfinder of the virtual camera corresponding to each display screen in the virtual three-dimensional scene corresponding to the physical scene; framing from the virtual three-dimensional scene according to the size information and the position information of the viewfinder, and determining the three-dimensional view picture corresponding to each display screen; and splicing the three-dimensional view pictures corresponding to the plurality of display screens to obtain a target view picture. In this process, because the size information and the position information of the viewfinder of the virtual camera corresponding to each display screen are obtained according to the physical screen parameters of each display screen and the coordinates of the preset viewing angle origin, real framing can be performed according to the physical size of the display screen, so that adaptive display can be achieved when the corresponding three-dimensional view picture is displayed on each display screen, improving the display effect.
Fig. 3 is a schematic flowchart of another three-dimensional view processing method according to an embodiment of the present application. Optionally, the physical screen parameters may include: the size information of each display screen and the center point parameter of each display screen. The size information of a display screen may include its length information and width information, and the center point parameter of a display screen indicates parameters related to the position of its center point. As shown in fig. 3, the calculating, according to the physical screen parameters and the preset viewing angle origin, the size information and the position information of the viewfinder of the virtual camera corresponding to each display screen in the virtual three-dimensional scene corresponding to the physical scene includes:
S201, calculating the size information and the position information of the viewfinder according to the size information, the center point parameter and the coordinates of the preset viewing angle origin.
The size information of the viewfinder can be determined according to the size information of the display screen, and the position information of the viewfinder can be calculated according to the center point parameter of the display screen and the coordinates of the preset viewing angle origin. Framing is then performed from the virtual three-dimensional scene based on the size information and the position information of the viewfinder, so that when the three-dimensional view picture corresponding to each display screen is determined, real framing is performed according to the physical size of each display screen. It should be noted that, for a virtual camera in the virtual three-dimensional scene, the viewfinder may be a rectangular window through which a view is taken from the virtual three-dimensional scene to obtain a three-dimensional view picture. It can be understood that when the size information and the position information of the viewfinder change, the resulting three-dimensional view picture also changes.
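The relationship described above can be sketched as follows, assuming a 1:1 mapping between physical units and virtual-scene units. The field names and the placement of the viewfinder along the view axis are illustrative assumptions, not the embodiment's definitive data layout.

```python
from dataclasses import dataclass

@dataclass
class Viewfinder:
    width: float   # taken directly from the physical screen width
    height: float  # taken directly from the physical screen height
    x: float       # horizontal offset from the view-direction axis
    y: float       # height offset from the preset viewing angle origin
    z: float       # distance along the positive viewing direction

def compute_viewfinder(screen_width, screen_height,
                       vertical_dist, horizontal_dist, height_dist):
    """vertical_dist:   center point to the preset auxiliary line,
       horizontal_dist: center point to the view-direction extension line,
       height_dist:     center-point height minus view-origin height."""
    # Size equals the physical screen size; position comes from the
    # center point parameters relative to the preset viewing angle origin.
    return Viewfinder(width=screen_width, height=screen_height,
                      x=horizontal_dist, y=height_dist, z=vertical_dist)

vf = compute_viewfinder(4.0, 2.5, vertical_dist=3.0,
                        horizontal_dist=1.5, height_dist=0.2)
print(vf)
```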
Optionally, the center point parameters may include: the vertical distance, the horizontal distance and the height distance of the center point of each display screen;
The vertical distance is the vertical distance from the center point of each display screen to a preset auxiliary line, where the preset auxiliary line passes through the preset viewing angle origin and is perpendicular to the extension line of the positive viewing angle direction, and the positive viewing angle direction is the viewing angle orientation of the preset viewing angle origin in the physical scene, that is, the direction of the observation viewing angle; the horizontal distance is the horizontal distance from the center point of each display screen to the extension line of the positive viewing angle direction; and the height distance is the height difference between the center point of each display screen and the preset viewing angle origin.
For the view angle positive direction, if the user is located at the preset view angle origin in the physical scene, the view angle direction of the user may be the view angle positive direction. In some embodiments, assuming that the display screen and the preset viewing angle origin are in the same horizontal plane, the height difference between the center point of the display screen and the preset viewing angle origin may be obtained by calculating a difference between a first distance between the center point of the display screen and the ground and a second distance between the preset viewing angle origin and the ground. It is understood that the predetermined viewing angle origin may have a certain height from the ground, for example, the average height of the observer (for example, 1.7 m), but not limited thereto.
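A worked example of the height-distance calculation described above, using the 1.7 m observer height mentioned as an example; the numeric values are purely illustrative.

```python
def height_distance(center_to_ground_m, origin_to_ground_m=1.7):
    # Height distance = (distance from screen center to ground)
    #                 - (distance from preset viewing angle origin to ground).
    return center_to_ground_m - origin_to_ground_m

# A screen whose center sits 1.9 m above the ground, viewed from an
# assumed 1.7 m eye height, has a height distance of about 0.2 m.
print(round(height_distance(1.9), 2))
```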
Fig. 4 is a schematic structural diagram of another arrangement including a plurality of display screens according to an embodiment of the present application. The present application is described below with reference to this schematic diagram. As shown in fig. 4, the plurality of display screens includes a fifth display screen 110 and a sixth display screen 120, where the preset included angle between the fifth display screen 110 and the sixth display screen 120 is α, the center points of the fifth and sixth display screens are O1 and O2 respectively, and the preset viewing angle origin is A. The preset auxiliary line L1 is an auxiliary line passing through the preset viewing angle origin A and perpendicular to the positive viewing angle direction, which may be the direction indicated by the arrow in the figure. The vertical distance M1 corresponding to the center point of the fifth display screen is the vertical distance from the center point O1 of the fifth display screen 110 to the preset auxiliary line L1; the horizontal distance M2 of the center point of the fifth display screen is the horizontal distance from the center point O1 to the extension line of the positive viewing angle direction; and the height distance of the center point of the fifth display screen is the height difference between the center point O1 and the preset viewing angle origin A (not shown in the figure). For the calculation of the center point parameters of the sixth display screen, reference may be made to the corresponding description of the fifth display screen, which is not repeated here.
It can be seen that, based on the central point parameter of each display screen, the distance information of each display screen in each direction relative to the preset viewing angle origin can be determined, and then, when the real framing is subsequently performed based on the size information of each display screen and the central point parameter of each display screen, the accuracy of the framing position can be ensured, and the display effect of the corresponding three-dimensional view picture displayed by each display screen is improved.
Fig. 5 is a schematic flowchart of another three-dimensional view processing method according to an embodiment of the present application. Optionally, the physical screen parameters further include: an included angle between each display screen and the positive viewing angle direction. As shown in fig. 5, the above-mentioned framing from the virtual three-dimensional scene according to the size information and the position information of the viewfinder and determining the three-dimensional view picture corresponding to each display screen includes:
s301, calculating the viewfinder rotation information of each display screen corresponding to the virtual camera according to the viewfinder size information, the viewfinder position information and the included angle.
S302, framing is conducted from the virtual three-dimensional scene according to the rotation information of the viewfinder, and a three-dimensional view picture corresponding to each display screen is determined.
After the included angle between each display screen and the positive viewing angle direction is obtained, a target included angle corresponding to each display screen can be calculated. The target included angle and the included angle may be complementary angles, that is, the sum of the two angles is 90°. As shown in fig. 4, the target included angle corresponding to the fifth display screen is β, and the target included angle corresponding to the sixth display screen is 0.
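The complementary-angle relation above can be expressed as a one-line calculation; the degree-based convention is an assumption chosen for illustration.

```python
def target_angle(screen_angle_deg):
    # The target included angle and the screen's included angle with the
    # positive viewing direction are complementary: they sum to 90 degrees.
    return 90.0 - screen_angle_deg

print(target_angle(90.0))  # a screen at 90 deg has a target angle of 0.0
print(target_angle(30.0))  # a screen at 30 deg has a target angle of 60.0
```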
After the target included angle is obtained, the viewfinder rotation information of the virtual camera corresponding to each display screen can be calculated according to the viewfinder size information, the viewfinder position information and the included angle. The viewfinder can then be rotated according to the rotation information, framing can be performed from the virtual three-dimensional scene with the rotated viewfinder, and the three-dimensional view picture corresponding to each display screen can be determined.
By rotating the viewfinder, the rotation angle of the viewfinder can be kept the same as that of the physical screen of the corresponding display screen. Then, when the three-dimensional view picture corresponding to each display screen in the target view picture is displayed on each display screen, the pictures at the boundaries between display screens transition smoothly, avoiding a hard-fold phenomenon and further improving the display effect.
Optionally, the calculating, according to the viewfinder size information, the viewfinder position information, and the included angle, viewfinder rotation information of each display screen corresponding to the virtual camera includes:
and processing a preset projection matrix of each display screen corresponding to the virtual camera according to the size information, the position information and the included angle of the viewfinder to obtain a target projection matrix, wherein the target projection matrix is used for representing the size information, the position information and the rotation information of the viewfinder.
Each virtual camera may correspond to a corresponding preset projection matrix, the preset projection matrix may be an initial projection matrix, and a plurality of preset projection matrices corresponding to a plurality of virtual cameras may be the same or different, which is not limited herein. After the size information, the position information and the included angle of the viewfinder are obtained, a preset projection matrix of the virtual camera corresponding to each display screen can be processed to obtain a target projection matrix, so that the size information, the position information and the rotation information of the viewfinder can be represented through the target projection matrix.
It can be understood that, based on the target projection matrix, objects in the viewing volume can be mapped for display; that is, images in the virtual three-dimensional scene can be mapped to a two-dimensional plane for display.
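A minimal sketch of this mapping: a 3D point in homogeneous coordinates is multiplied by a 4×4 projection matrix and divided by the resulting w component to land on a two-dimensional plane. The plain perspective matrix used here is an illustrative stand-in; the embodiment's target projection matrix would additionally encode the viewfinder's size, position, and rotation.

```python
def project(matrix, point):
    """Map a 3D point (x, y, z) to 2D via a row-major 4x4 projection matrix."""
    x, y, z = point
    vec = (x, y, z, 1.0)                    # homogeneous coordinates
    out = [sum(matrix[r][c] * vec[c] for c in range(4)) for r in range(4)]
    w = out[3] or 1.0
    return (out[0] / w, out[1] / w)         # perspective divide

d = 2.0                                     # assumed focal distance
perspective = [
    [d, 0, 0, 0],
    [0, d, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 1, 0],                           # w = z  ->  divide by depth
]
print(project(perspective, (1.0, 2.0, 4.0)))  # (0.5, 1.0)
```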
Optionally, the splicing the three-dimensional view pictures corresponding to the plurality of display screens to obtain a target view picture includes:
and splicing the three-dimensional visual pictures corresponding to the plurality of display screens according to the arrangement positions of the plurality of display screens in the physical scene to obtain a target visual picture.
When the three-dimensional view pictures corresponding to the plurality of display screens are spliced, they can be spliced according to the arrangement positions of the plurality of display screens in the physical scene; that is, the three-dimensional view picture corresponding to each display screen is placed according to the arrangement position of that display screen in the physical scene, and the target view picture is finally obtained.
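A minimal sketch of splicing by arrangement position, assuming a simple left-to-right arrangement and a row-of-strings "picture" format chosen purely for illustration.

```python
def stitch(pictures, order):
    """pictures: screen id -> list of equal-height rows;
       order: screen ids in their left-to-right physical arrangement."""
    height = len(pictures[order[0]])
    # Concatenate the screens' rows side by side, row by row.
    return ["".join(pictures[sid][r] for sid in order) for r in range(height)]

frames = {"left": ["AA", "AA"], "right": ["BB", "BB"]}
print(stitch(frames, ["left", "right"]))  # ['AABB', 'AABB']
```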
Fig. 6 is a functional module schematic diagram of a three-dimensional view processing apparatus according to an embodiment of the present disclosure, where the three-dimensional view processing apparatus may be applied to a computer device connected with a plurality of display screens, and adjacent display screens in the plurality of display screens have a preset included angle; the basic principle and the technical effect of the device are the same as those of the corresponding method embodiments, and for the sake of brief description, the corresponding contents in the method embodiments may be referred to for the parts not mentioned in this embodiment. As shown in fig. 6, the three-dimensional view processing apparatus 200 includes:
the acquiring module 210 is configured to acquire a physical screen parameter of each display screen and coordinates of a preset view angle origin, where the preset view angle origin is a view angle point preset in a physical scene where the plurality of display screens are located;
the calculating module 220 is configured to calculate, according to the physical screen parameters and a preset view origin, size information and position information of a view window of a virtual camera corresponding to each display screen in a virtual three-dimensional scene corresponding to the physical scene;
a view finding module 230, configured to find a view from the virtual three-dimensional scene according to the size information and the position information of the view finding window, and determine a three-dimensional view picture corresponding to each display screen;
the splicing module 240 is configured to splice three-dimensional view pictures corresponding to the multiple display screens to obtain a target view picture;
the transmission module 250 is configured to send the target view picture to each display screen, so that each display screen displays a three-dimensional view picture corresponding to each display screen in the target view picture.
In an alternative embodiment, the physical screen parameters include: size information of each display screen and a central point parameter of each display screen; the calculating module 220 is specifically configured to calculate size information and position information of the finder window according to the size information, the center point parameter, and the coordinates of the preset viewing angle origin.
In an alternative embodiment, the center point parameters include: the vertical distance, the horizontal distance and the height distance of the center point of each display screen; the vertical distance is the vertical distance from the central point of each display screen to a preset auxiliary line, the preset auxiliary line is an auxiliary line which passes through a preset visual angle original point and is perpendicular to an extension line of the positive direction of the visual angle, and the positive direction of the visual angle is the visual angle orientation of the preset visual angle original point in a physical scene; the horizontal distance is the horizontal distance from the center point of each display screen to the extension line of the positive direction of the visual angle; the height distance is the height difference between the center point of each display screen and the preset viewing angle origin.
In an alternative embodiment, the physical screen parameters further include: the included angle between each display screen and the positive direction of the visual angle; the view finding module is specifically used for calculating the view finding window rotation information of each display screen corresponding to the virtual camera according to the size information, the position information and the included angle of the view finding window;
and framing from the virtual three-dimensional scene according to the viewfinder rotation information, and determining a three-dimensional view picture corresponding to each display screen.
In an optional embodiment, the view finding module 230 is specifically configured to process a preset projection matrix of the virtual camera corresponding to each display screen according to the size information of the view finding window, the position information of the view finding window, and the included angle, to obtain a target projection matrix, where the target projection matrix is used to represent the size information of the view finding window, the position information of the view finding window, and the rotation information of the view finding window.
In an optional embodiment, the splicing module 240 is specifically configured to splice three-dimensional view pictures corresponding to the multiple display screens according to arrangement positions of the multiple display screens in a physical scene, so as to obtain a target view picture.
Fig. 7 is a system architecture diagram of a three-dimensional view display system according to an embodiment of the present application, as shown in fig. 7, the three-dimensional view display system includes: at least one computer device, and a plurality of display screens connected to each computer device; a preset included angle exists between adjacent display screens in the plurality of display screens; the display screens connected to each computer device are disposed in the same physical scene, the multiple display screens may correspond to the same preset viewing angle origin a, the direction indicated by the arrow may be a viewing angle positive direction, and each computer device 310 is configured to execute the three-dimensional view processing method in the foregoing embodiment.
As shown in fig. 7, the three-dimensional view display system may include a first computer device 311 and a second computer device 312, where the first computer device 311 is connected to a first display screen 321 and a second display screen 322, and the second computer device 312 is connected to a third display screen 325 and a fourth display screen 326. It can be understood that, because the first display screen 321, the second display screen 322, the third display screen 325 and the fourth display screen 326 connected to the two computer devices are disposed in the same physical scene and correspond to the same preset viewing angle origin A, even though there is a preset included angle between the second display screen 322 and the third display screen 325 in the figure, the pictures at the boundaries between display screens can still transition smoothly, avoiding a hard-fold phenomenon and further improving the display effect.
Optionally, each display screen is a spliced display screen obtained by splicing a plurality of sub-display screens.
As shown in fig. 7, taking the first display screen 321 as an example, the sub-display screens 331 included in the first display screen 321 may be liquid crystal display screens (or other types of electronic display screens) that are identical in category, shape and size, or may be display screens that differ in at least one of category, shape and size; the present application is not limited herein. It can be understood that each display screen may also be a single separate display screen, and the present application is not limited thereto.
The above-mentioned apparatus is used for executing the method provided by the foregoing embodiment, and the implementation principle and technical effect are similar, which are not described herein again.
These modules may be one or more integrated circuits configured to implement the above methods, for example: one or more Application Specific Integrated Circuits (ASICs), one or more Digital Signal Processors (DSPs), or one or more Field Programmable Gate Arrays (FPGAs), among others. As another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a Central Processing Unit (CPU) or another processor capable of calling program code. As yet another example, these modules may be integrated together and implemented in the form of a system-on-a-chip (SoC).
Fig. 8 is a schematic structural diagram of a computer device according to an embodiment of the present application. As shown in fig. 8, the computer apparatus may include: the display device comprises a processor 510, a storage medium 520 and a bus 530, wherein the processor 510 is connected with a plurality of display screens, and adjacent display screens in the plurality of display screens form a preset included angle; the storage medium 520 stores machine-readable instructions executable by the processor 510, and when the computer device is operated, the processor 510 communicates with the storage medium 520 through the bus, and the processor 510 executes the machine-readable instructions to perform the steps of the above-described method embodiments. The specific implementation and technical effects are similar, and are not described herein again.
Optionally, the present application further provides a storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the computer program performs the steps of the above method embodiments. The specific implementation and technical effects are similar, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is only a logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device) or a processor (processor) to execute some steps of the methods according to the embodiments of the present application. And the aforementioned storage medium includes: a U disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It is noted that, in this document, relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application; various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application. It should be noted that like reference numbers and letters refer to like items in the figures; thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.

Claims (11)

1. A three-dimensional visual processing method is characterized in that the method is applied to computer equipment connected with a plurality of display screens, and a preset included angle exists between adjacent display screens in the plurality of display screens; the method comprises the following steps:
acquiring physical screen parameters of each display screen and coordinates of a preset view angle origin, wherein the preset view angle origin is a view angle point preset in a physical scene where the plurality of display screens are located;
calculating the size information and position information of a viewfinder of a virtual camera corresponding to each display screen in a virtual three-dimensional scene corresponding to the physical scene according to the parameters of the physical screen and the coordinates of the preset viewing angle origin;
framing from the virtual three-dimensional scene according to the size information and the position information of the viewfinder, and determining a three-dimensional view picture corresponding to each display screen;
splicing the three-dimensional visual pictures corresponding to the display screens to obtain a target visual picture;
and sending the target visual scene picture to each display screen, so that each display screen displays the three-dimensional visual scene picture corresponding to each display screen in the target visual scene picture.
2. The method of claim 1, wherein the physical screen parameters comprise: the size information of each display screen and the central point parameter of each display screen;
calculating the size information and position information of a viewfinder of a virtual camera of each display screen in a virtual three-dimensional scene corresponding to the physical scene according to the parameters of the physical screen and the preset view angle origin, wherein the calculation comprises the following steps:
and calculating the size information and the position information of the viewfinder according to the size information, the central point parameter and the coordinates of the preset viewing angle origin.
3. The method of claim 2, wherein the center point parameter comprises: the vertical distance, the horizontal distance and the height distance of the center point of each display screen;
the vertical distance is the vertical distance from the central point of each display screen to a preset auxiliary line, the preset auxiliary line is an auxiliary line which passes through the preset visual angle original point and is perpendicular to an extension line of a visual angle positive direction, and the visual angle positive direction is the visual angle orientation of the preset visual angle original point in the physical scene; the horizontal distance is the horizontal distance from the center point of each display screen to the extension line of the positive direction of the viewing angle; the height distance is the height difference between the center point of each display screen and the preset view angle origin.
4. The method of claim 3, wherein the physical screen parameters further comprise: the included angle between each display screen and the positive direction of the visual angle;
the finding a view from the virtual three-dimensional scene according to the size information and the position information of the finding window and determining the three-dimensional view picture corresponding to each display screen comprises the following steps:
calculating the viewfinder rotation information of each display screen corresponding to the virtual camera according to the viewfinder size information, the viewfinder position information and the included angle;
and framing from the virtual three-dimensional scene according to the viewfinder rotation information, and determining a three-dimensional view picture corresponding to each display screen.
5. The method according to claim 4, wherein calculating the viewfinder rotation information of the virtual camera corresponding to each display screen according to the viewfinder size information, the viewfinder position information and the included angle comprises:
and processing a preset projection matrix of each display screen corresponding to the virtual camera according to the size information of the viewfinder, the position information of the viewfinder and the included angle to obtain a target projection matrix, wherein the target projection matrix is used for representing the size information of the viewfinder, the position information of the viewfinder and the rotation information of the viewfinder.
6. The method according to any one of claims 1 to 5, wherein the splicing the three-dimensional view frames corresponding to the plurality of display screens to obtain a target view frame comprises:
and splicing the three-dimensional visual pictures corresponding to the display screens according to the arrangement positions of the display screens in the physical scene to obtain the target visual picture.
7. A three-dimensional view processing apparatus, applied to a computer device connected to a plurality of display screens, wherein a preset included angle exists between adjacent display screens among the plurality of display screens; the three-dimensional view processing apparatus comprises:
an acquisition module, configured to acquire physical screen parameters of each display screen and coordinates of a preset view angle origin, wherein the preset view angle origin is a view angle point preset in a physical scene where the plurality of display screens are located;
a calculation module, configured to calculate, according to the physical screen parameters and the coordinates of the preset view angle origin, viewfinder size information and viewfinder position information of a virtual camera corresponding to each display screen in a virtual three-dimensional scene corresponding to the physical scene;
a framing module, configured to frame a view from the virtual three-dimensional scene according to the viewfinder size information and the viewfinder position information, and determine a three-dimensional view picture corresponding to each display screen;
a splicing module, configured to splice the three-dimensional view pictures corresponding to the plurality of display screens to obtain a target view picture;
and a transmission module, configured to send the target view picture to each display screen, so that each display screen displays, in the target view picture, the three-dimensional view picture corresponding to that display screen.
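The five modules of the apparatus claim form a simple pipeline: acquisition, calculation, framing, splicing, transmission. A toy sketch of that control flow follows; every callable here is a hypothetical placeholder standing in for a module, not the patented code:

```python
def process_views(screens, acquire, calculate, frame, splice, transmit):
    """Pipeline mirroring the apparatus modules of the claim:
    acquisition -> calculation -> framing -> splicing -> transmission."""
    view_frames = {}
    for screen in screens:
        params, origin = acquire(screen)        # physical parameters + view origin
        size, pos = calculate(params, origin)   # viewfinder size/position
        view_frames[screen] = frame(size, pos)  # 3D view picture per screen
    target = splice(view_frames)                # stitched target picture
    for screen in screens:
        transmit(screen, target)                # each screen shows its part
    return target
```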
8. A computer device, comprising: a processor, a storage medium and a bus, wherein the processor is connected to a plurality of display screens, and a preset included angle exists between adjacent display screens among the plurality of display screens; the storage medium stores machine-readable instructions executable by the processor; when the computer device runs, the processor and the storage medium communicate via the bus, and the processor executes the machine-readable instructions to perform the steps of the three-dimensional view processing method according to any one of claims 1 to 6.
9. A three-dimensional view display system, comprising: at least one computer device, and a plurality of display screens connected to each computer device, wherein a preset included angle exists between adjacent display screens among the plurality of display screens;
the display screens connected to the computer devices are arranged in the same physical scene; and each computer device is configured to perform the three-dimensional view processing method according to any one of claims 1 to 6.
10. The system of claim 9, wherein each of the display screens is a spliced display screen formed by splicing a plurality of sub-display screens.
11. A storage medium, characterized in that the storage medium has a computer program stored thereon which, when executed by a processor, performs the steps of the three-dimensional view processing method according to any one of claims 1 to 6.
CN202011169542.3A 2020-10-27 2020-10-27 Three-dimensional visual processing method, device, equipment, display system and medium Active CN112351266B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011169542.3A CN112351266B (en) 2020-10-27 2020-10-27 Three-dimensional visual processing method, device, equipment, display system and medium

Publications (2)

Publication Number Publication Date
CN112351266A CN112351266A (en) 2021-02-09
CN112351266B true CN112351266B (en) 2023-03-24

Family

ID=74358489

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011169542.3A Active CN112351266B (en) 2020-10-27 2020-10-27 Three-dimensional visual processing method, device, equipment, display system and medium

Country Status (1)

Country Link
CN (1) CN112351266B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112799628B (en) * 2021-03-18 2021-09-10 卡莱特云科技股份有限公司 Virtual LED box body orientation determining method and device, computer equipment and storage medium
CN113674433B (en) * 2021-08-25 2024-06-28 先壤影视制作(上海)有限公司 Mixed reality display method and system
CN113568700B (en) * 2021-09-22 2022-01-11 卡莱特云科技股份有限公司 Display picture adjusting method and device, computer equipment and storage medium
CN115243029A (en) * 2022-09-22 2022-10-25 苏州域光科技有限公司 Image display method, device, equipment, system and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102238411A (en) * 2011-06-29 2011-11-09 浙江大学 Image display method for reflecting three-dimensional display
WO2017152803A1 (en) * 2016-03-09 2017-09-14 腾讯科技(深圳)有限公司 Image processing method and device
JP2018151696A (en) * 2017-03-09 2018-09-27 株式会社岩根研究所 Free viewpoint movement display apparatus
CN109729338A (en) * 2018-11-28 2019-05-07 利亚德光电股份有限公司 Show processing method, the device and system of data

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Development of an application software platform for a large-screen seamless tiled display system; Yu Lingyun et al.; Journal of Computer Applications; 2008-09-01 (Issue 09); full text *

Also Published As

Publication number Publication date
CN112351266A (en) 2021-02-09

Similar Documents

Publication Publication Date Title
CN112351266B (en) Three-dimensional visual processing method, device, equipment, display system and medium
CN109242961B (en) Face modeling method and device, electronic equipment and computer readable medium
WO2019228188A1 (en) Method and apparatus for marking and displaying spatial size in virtual three-dimensional house model
CN110072087B (en) Camera linkage method, device, equipment and storage medium based on 3D map
JP2005339313A (en) Method and apparatus for presenting image
EP3474236A1 (en) Image processing device
US11086927B2 (en) Displaying objects based on a plurality of models
CN112714266B (en) Method and device for displaying labeling information, electronic equipment and storage medium
CN104427230A (en) Reality enhancement method and reality enhancement system
Sajadi et al. Scalable Multi‐view Registration for Multi‐Projector Displays on Vertically Extruded Surfaces
CN114697623A (en) Projection surface selection and projection image correction method and device, projector and medium
CN114782648A (en) Image processing method, image processing device, electronic equipment and storage medium
CN114928718A (en) Video monitoring method and device, electronic equipment and storage medium
CN108765582B (en) Panoramic picture display method and device
RU2735066C1 (en) Method for displaying augmented reality wide-format object
CN112330785A (en) Image-based urban road and underground pipe gallery panoramic image acquisition method and system
CN112017242A (en) Display method and device, equipment and storage medium
EP3177005B1 (en) Display control system, display control device, display control method, and program
CN109885172B (en) Object interaction display method and system based on Augmented Reality (AR)
CN109115238B (en) Map display method, device and equipment
CN109801351B (en) Dynamic image generation method and processing device
CN114428573B (en) Special effect image processing method and device, electronic equipment and storage medium
CN108510433B (en) Space display method and device and terminal
CN113177975A (en) Depth calculation method and three-dimensional modeling method based on dome camera and laser radar
CN112565730A (en) Roadside sensing method and device, electronic equipment, storage medium and roadside equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant