CN111833243B - Data display method, mobile terminal and storage medium


Info

Publication number
CN111833243B
Authority
CN
China
Prior art keywords
image
angle
window
projection matrix
navigation
Prior art date
Legal status
Active
Application number
CN202010991247.XA
Other languages
Chinese (zh)
Other versions
CN111833243A (en)
Inventor
张凯 (Zhang Kai)
Current Assignee
Wuhan Zhongke Tongda High New Technology Co Ltd
Original Assignee
Wuhan Zhongke Tongda High New Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Wuhan Zhongke Tongda High New Technology Co Ltd filed Critical Wuhan Zhongke Tongda High New Technology Co Ltd
Priority to CN202010991247.XA
Publication of CN111833243A
Application granted
Publication of CN111833243B

Classifications

    • G06T3/047
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/001 Texturing; Colouring; Generation of texture or colour
    • G06T3/067

Abstract

The embodiments of the present application provide a data display method, a mobile terminal and a storage medium, relating to the technical field of smart cities. The method comprises the following steps: the CPU copies image data to obtain first image data and second image data; the first GPU renders the first image data into a first image under a large viewing angle and displays the first image; the second GPU generates a second image under a small viewing angle from the second image data and displays the second image; the third GPU determines the three-dimensional navigation area in the first image model that corresponds to the second image, processes it to obtain a navigation image, and displays the navigation image in the first window; the CPU acquires the user's control operation on the first window based on the navigation image, converts the control operation into an angle in three-dimensional space, and updates the second projection matrix corresponding to the second window according to the angle; the second GPU and the third GPU then update the second image and the navigation image, respectively, according to the updated second projection matrix. The embodiments of the present application improve the efficiency of understanding the image data and reduce the power consumption of the mobile terminal.

Description

Data display method, mobile terminal and storage medium
Technical Field
The application relates to the technical field of smart cities, in particular to a data display method, a mobile terminal and a storage medium.
Background
Traditional video surveillance mainly displays 2D planar pictures, but with the development of computer technology the advantages of fisheye images in the surveillance industry have become increasingly obvious. A traditional planar camera can only monitor the scene at a single position, whereas a fisheye camera, with its much wider viewing angle, can monitor a far larger field of view; a site that originally required several planar cameras can therefore be covered by a single fisheye camera, which greatly reduces hardware cost.
However, because of this wide viewing angle, the captured fisheye image (image data) is usually heavily distorted and is typically displayed as a circle, so it is difficult to understand for anyone but professional technicians, which has hindered the popularization and development of fisheye-image applications.
Disclosure of Invention
The embodiment of the application provides a data display method, a mobile terminal and a storage medium, which can improve the processing efficiency of the mobile terminal on image data acquired by a fisheye camera, reduce the power consumption of the mobile terminal and improve the user experience.
The embodiments of the present application provide a data display method suitable for a mobile terminal communicating with a fisheye camera, the mobile terminal comprising a central processing unit, a memory, a first graphics processor, a second graphics processor and a third graphics processor; the data display method comprises the following steps:
the central processing unit reads the image data collected by the fisheye camera from the memory;
the central processing unit copies the image data to obtain first image data and second image data, transmits the first image data to the first graphics processor, and transmits the second image data to the second graphics processor;
the first graphics processor renders the first image data into a first image under a large viewing angle according to a first projection matrix and a first image model, and displays the first image in a first window of a data display interface;
the second graphics processor generates a second image under a small viewing angle according to a second projection matrix, a second image model and the second image data, and displays the second image in a second window, wherein the second projection matrix is different from the first projection matrix, the second image model is the same as the first image model, the first projection matrix comprises information for controlling the first image model, information of a first virtual camera and information of the perspective projection of the first virtual camera, and the second projection matrix comprises information for controlling the second image model, information of a second virtual camera and information of the perspective projection of the second virtual camera;
the third graphics processor determines the three-dimensional navigation area in the first image model that corresponds to the second image according to the second projection matrix and the first image model;
the third graphics processor processes the three-dimensional navigation area in a preset manner to obtain a navigation image, so that the navigation image is highlighted in the first window, wherein the navigation image represents the position of the second image within the first image;
the central processing unit acquires the control operation performed by the user on the first window based on the navigation image;
the central processing unit converts the control operation into an angle in three-dimensional space;
the central processing unit updates the second projection matrix corresponding to the second window according to the angle and transmits it to the second graphics processor and the third graphics processor;
the second graphics processor updates the second image under the small viewing angle according to the updated second projection matrix, the second image model and the second image data, and refreshes the display in the second window;
the third graphics processor updates the three-dimensional navigation area in the first image model corresponding to the second image according to the updated second projection matrix and the first image model;
and the third graphics processor processes the three-dimensional navigation area in the preset manner to obtain an updated navigation image, so that the updated navigation image is highlighted in the first window.
An embodiment of the present application further provides a mobile terminal, where the mobile terminal includes: one or more central processors; a memory; one or more graphics processors, and one or more computer programs, wherein the central processor is connected to the memory and the graphics processors, the one or more computer programs being stored in the memory and configured to be executed by the central processor and the graphics processors to perform the data presentation method.
An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the steps in the data presentation method are implemented.
The image data collected by the fisheye camera is processed and displayed by the central processing unit and the graphics processors of the mobile terminal. Specifically, the central processing unit copies the image data to obtain first image data and second image data, transmits the first image data to the first graphics processor, and transmits the second image data to the second graphics processor; the first graphics processor renders the first image data into a first image under a large viewing angle according to the first projection matrix and the first image model, and displays the first image in the first window; the second graphics processor generates a second image under a small viewing angle according to the second projection matrix, the second image model and the second image data, and displays the second image in the second window. Because the image data is processed by the graphics processors to obtain images under different viewing angles, the power consumption of the mobile terminal is reduced and the efficiency of processing the image data is improved. The third graphics processor then determines the three-dimensional navigation area in the first image model that corresponds to the second image according to the second projection matrix and the first image model, processes the three-dimensional navigation area in a preset manner to obtain a navigation image, and highlights the navigation image in the first window. After the navigation image is obtained, the central processing unit acquires the user's control operation on the first window based on the navigation image, converts the control operation into an angle in three-dimensional space, and updates the second projection matrix corresponding to the second window according to that angle. The updated second projection matrix is transmitted to the second graphics processor and the third graphics processor; the second graphics processor updates the second image under the small viewing angle according to the second projection matrix, the second image model and the second image data, and refreshes the display in the second window; the third graphics processor then updates the three-dimensional navigation area in the first image model corresponding to the second image according to the second projection matrix and the first image model, processes it in the preset manner to update the navigation image, and highlights the updated navigation image in the first window. In this way the second projection matrix, the second image and the navigation image are all updated according to the user's control operation on the first window based on the navigation image, so the position of the second image within the first image is updated in real time, the region the user is viewing is adjusted in real time, and the user is guided to quickly find the region of interest. This increases the speed at which the user locates the region of interest in the image data and improves the user experience. In addition, the second image displayed in the second window also provides a detailed view of the image data.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1a is a schematic diagram of a system scenario of a data presentation method according to an embodiment of the present application;
fig. 1b is a schematic structural diagram of a mobile terminal provided in an embodiment of the present application;
fig. 1c is a schematic structural diagram of a mobile terminal according to an embodiment of the present application;
FIG. 2 is a schematic flow chart diagram illustrating a data presentation method according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of image data acquired by a fisheye camera provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of vertex coordinates and texture coordinates provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of an imaging principle of perspective projection provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of a data presentation interface provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of determining an angle at which a control point is located on a first window provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of determining an orientation of a second virtual camera provided by an embodiment of the present application;
fig. 9a and 9b are another schematic flow chart diagram of a data presentation method provided in the embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application provides a data display method, a mobile terminal and a storage medium. The mobile terminal includes, but is not limited to, a smart phone, a tablet computer, a notebook computer, a smart robot, a wearable device, a vehicle-mounted terminal, and the like.
Please refer to fig. 1a, which is a schematic view illustrating a data display system according to an embodiment of the present disclosure. The data display system comprises a fisheye camera and a mobile terminal. The number of the fisheye cameras can be one or more, the number of the mobile terminals can also be one or more, and the fisheye cameras and the mobile terminals can be directly connected or can be connected through a network. The fisheye camera and the mobile terminal can be connected in a wired mode or a wireless mode. The fisheye camera in the embodiment of fig. 1a is connected to the mobile terminal through a network, where the network includes network entities such as a router and a gateway.
The fisheye camera captures initial image data, i.e. the raw image shot by the fisheye camera, and sends it to the mobile terminal; the mobile terminal receives the initial image data and stores it in the memory. In one case the initial image data is used directly as the image data collected by the fisheye camera and stored in the memory; in another case the initial image data is first corrected, and the corrected result is stored in the memory as the image data collected by the fisheye camera. Finally, the image data is processed by the processor and the graphics processors and displayed.
Specifically, the mobile terminal includes a processor 101, which is the control center of the mobile terminal. The processor 101 includes one or more central processing units (CPUs) 1011 and at least one graphics processing unit (GPU) connected to the central processing unit 1011. As shown in fig. 1b, the graphics processors include a first graphics processor (first GPU) 1012, a second graphics processor (second GPU) 1013 and a third graphics processor (third GPU) 1014. The mobile terminal also includes a memory 102 of one or more computer-readable storage media, connected to the central processing unit 1011. It should be noted that in the prior art a mobile terminal has either no graphics processor or a single graphics processor. The embodiments of the present application improve the hardware of the mobile terminal by providing at least three graphics processors that can execute in parallel, which greatly improves data-processing efficiency; processing the data on graphics processors rather than on the CPU also greatly improves accuracy and greatly reduces the power consumption of the mobile terminal. The three graphics processors in the embodiments of the present application may be three module units executing in parallel on one piece of hardware, or three separate pieces of hardware.
The central processing unit 1011 connects the various parts of the entire mobile terminal using various interfaces and lines, and performs the various functions of the mobile terminal and processes data by running or executing software programs (computer programs) and/or modules stored in the memory 102 and calling data stored in the memory 102, such as the image data, thereby monitoring the mobile terminal as a whole. Optionally, the central processing unit may include one or more processing cores; preferably, it may integrate an application processor, which mainly handles the operating system, user interfaces, application programs and the like, and a modem processor, which mainly handles wireless communication. It can be understood that the modem processor need not be integrated into the central processing unit. The graphics processors are mainly used to accelerate work on the data transmitted by the central processing unit, such as rendering.
The memory 102 may be used to store software programs (computer programs) and modules, and the processor 101 executes various functional applications and data processing by operating the software programs and modules stored in the memory 102. The memory 102 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created according to use of the mobile terminal, image data collected by the fisheye camera, and the like. Further, the memory 102 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory 102 may also include a memory controller to provide the processor 101 access to the memory 102.
As shown in fig. 1c, the mobile terminal may further include, in addition to the processor 101 and the memory 102: a Radio Frequency (RF) circuit 103, a power supply 104, an input unit 105, and a display unit 106. Those skilled in the art will appreciate that the mobile terminal architecture shown in the figures is not intended to be limiting of mobile terminals and may include more or fewer components than those shown, or some of the components may be combined, or a different arrangement of components. Wherein:
the RF circuit 103 may be used for receiving and transmitting signals during information transmission and reception, and in particular, for receiving downlink information of a base station and then processing the received downlink information by one or more processors 101; in addition, data relating to uplink is transmitted to the base station. In general, the RF circuitry 103 includes, but is not limited to, an antenna, at least one Amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuitry 103 may also communicate with networks and other devices via wireless communications. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Message Service (SMS), and the like.
The mobile terminal further includes a power supply 104 (e.g., a battery) for supplying power to the various components, and preferably, the power supply 104 is logically connected to the processor 101 via a power management system, so that functions of managing charging, discharging, and power consumption are implemented via the power management system. The power supply 104 may also include any component of one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
The mobile terminal may further include an input unit 105, and the input unit 105 may be used to receive input numeric or character information and generate a keyboard, mouse, joystick, optical or trackball signal input in relation to user settings and function control. Specifically, in one particular embodiment, the input unit 105 may include a touch-sensitive surface as well as other input devices. The touch-sensitive surface, also referred to as a touch display screen or a touch pad, may collect touch operations by a user (e.g., operations by a user on or near the touch-sensitive surface using a finger, a stylus, or any other suitable object or attachment) thereon or nearby, and drive the corresponding connection device according to a predetermined program. Alternatively, the touch sensitive surface may comprise two parts, a touch detection means and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 101, and can receive and execute commands sent by the processor 101. In addition, touch sensitive surfaces may be implemented using various types of resistive, capacitive, infrared, and surface acoustic waves. The input unit 105 may include other input devices in addition to the touch-sensitive surface. In particular, other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The mobile terminal may also include a display unit 106, and the display unit 106 may be used to display information input by the user or provided to the user, as well as various graphical user interfaces of the mobile terminal, which may be made up of graphics, text, icons, video, and any combination thereof. The Display unit 106 may include a Display panel, and optionally, the Display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch-sensitive surface may cover the display panel, and when a touch operation is detected on or near the touch-sensitive surface, the touch operation is transmitted to the processor 101 to determine the type of the touch event, and then the processor 101 provides a corresponding visual output on the display panel according to the type of the touch event. Although in the figures the touch sensitive surface and the display panel are shown as two separate components to implement input and output functions, in some embodiments the touch sensitive surface may be integrated with the display panel to implement input and output functions.
Although not shown, the mobile terminal may further include a camera (note that the camera here is different from a virtual camera described below, and the camera here refers to hardware), a bluetooth module, and the like, which are not described herein again. Specifically, in this embodiment, the processor 101 in the mobile terminal loads the executable file corresponding to the process of one or more computer programs into the memory 102 according to the corresponding instructions, and the processor 101 runs the computer program stored in the memory 102, thereby implementing the steps in any data presentation method described below. Therefore, the beneficial effects that can be achieved by any data presentation method described below can also be achieved, and specific reference is made to the corresponding description of the data presentation method below.
Fig. 2 is a schematic flow chart of a data display method according to an embodiment of the present application. The data display method is operated in the mobile terminal, and comprises the following steps:
and 201, reading the image data collected by the fisheye camera from the memory by the central processing unit.
Because the viewing angle of a fisheye camera is wider, the image it captures contains more information than an image captured by a planar camera. The shooting range of a fisheye camera is approximately a hemisphere, and the resulting image is approximately circular; if the viewing angle of the fisheye camera is 180 degrees, the shooting range is exactly a hemisphere and the resulting image appears as a circle on the two-dimensional plane. In the embodiments of the present application the viewing angle of the fisheye camera is taken to be 180 degrees.
Fig. 3 is a schematic diagram of initial image data directly acquired by the fisheye camera provided in the embodiment of the present application, and a middle circular area is an initial image captured by the fisheye camera. In fig. 3, the fisheye camera faces the sky, and the captured image includes the sky, buildings, trees, and the like around the position where the fisheye camera is located.
In step 201, the image data acquired by the fisheye camera may be understood as initial image data directly acquired by the fisheye camera, such as the initial image data shown in fig. 3. Initial image data directly acquired by the fisheye camera can be sent to the mobile terminal through a network, and as shown in fig. 1a, the mobile terminal receives the initial image data directly acquired by the fisheye camera and stores the initial image data in the memory. Initial image data directly collected by the fisheye camera can also be sent to other terminals, and then sent to the mobile terminal in the embodiment of the application through the network by the other terminals, and received by the mobile terminal and stored in the memory.
Thus, in step 201, the central processing unit reads the image data acquired by the fisheye camera from the memory, that is, reads the initial image data directly acquired by the fisheye camera.
In some cases, in order to achieve a better display effect, the initial image data directly acquired by the fisheye camera needs to be further processed. Specifically, step 201 includes: the central processing unit acquires initial image data shot by the fisheye camera from the memory; and the central processing unit performs distortion correction on the initial image data based on a calibration result of data calibration on the fisheye camera, and takes the image data after distortion correction as the image data acquired by the fisheye camera.
A fisheye camera manufacturer needs to calibrate the fisheye camera before mass production and provides a calibration interface; after purchasing the camera, the user inputs calibration parameters through this interface to calibrate it. The main purpose of data calibration is to obtain the parameters of the fisheye lens so that the circular area in the initial image data shown in fig. 3 can be located. Because of hardware differences between fisheye cameras, the position of the circular area in the initial image data differs from camera to camera.
After the fisheye camera has been calibrated, distortion correction is applied to the initial image data according to the calibration result, for example using the longitude-latitude method or another method, and the corrected image data is used as the image data in step 201. The purpose of distortion correction is to reduce or eliminate the distortion of the original fisheye image, for example converting the circular initial image shown in fig. 3 into a 2:1 rectangular image.
Further, the corrected image data is converted into texture units for subsequent texture mapping.
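The patent does not fix a particular correction formula; the sketch below illustrates one common variant of the longitude-latitude method under the assumption of an equidistant 180-degree lens, with the circle centre (cx, cy) and radius r taken from the calibration step described above. The Gray8 type and all parameter names are illustrative.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Minimal sketch of a longitude-latitude (equirectangular) correction for a
// 180-degree fisheye frame.  The equidistant lens model used here is an
// illustrative assumption, not the patent's exact formula.
struct Gray8 { std::vector<std::uint8_t> px; int w; int h; };

Gray8 unwrapFisheye(const Gray8& fish, float cx, float cy, float r,
                    int outW, int outH)              // outW : outH == 2 : 1
{
    const float kPi = 3.14159265f;
    Gray8 out{std::vector<std::uint8_t>(outW * outH), outW, outH};
    for (int y = 0; y < outH; ++y) {
        // zenith angle: 0 at the top row (circle centre), pi/2 at the bottom row (circle rim)
        float zenith = float(y) / (outH - 1) * kPi / 2.0f;
        float rho = r * zenith / (kPi / 2.0f);        // equidistant model: radius ~ zenith angle
        for (int x = 0; x < outW; ++x) {
            float azimuth = float(x) / outW * 2.0f * kPi;
            int sx = int(cx + rho * std::cos(azimuth));
            int sy = int(cy + rho * std::sin(azimuth));
            if (sx >= 0 && sx < fish.w && sy >= 0 && sy < fish.h)
                out.px[y * outW + x] = fish.px[sy * fish.w + sx];
        }
    }
    return out;
}
```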
202, copying the image data to obtain a first image data and a second image data, and transmitting the first image data to the first graphic processor and the second image data to the second graphic processor.
After the central processing unit reads the image data, it copies the image data into two copies, obtaining first image data and second image data. The first image data is transmitted to the first graphics processor so that the first graphics processor processes it; the second image data is transmitted to the second graphics processor so that the second graphics processor processes it.
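A minimal sketch of this copy-and-dispatch step is shown below, assuming each graphics processor is driven by its own worker thread with a small render queue; the Frame and RenderQueue types are illustrative and not named by the patent.

```cpp
#include <cstdint>
#include <deque>
#include <mutex>
#include <vector>

// Sketch of step 202: the CPU copies the decoded frame into two buffers and
// hands one copy to each GPU's render queue (assumed architecture).
struct Frame { std::vector<std::uint8_t> rgba; int w = 0; int h = 0; };

class RenderQueue {
public:
    void push(Frame f) {                       // called on the CPU thread
        std::lock_guard<std::mutex> lock(m_);
        frames_.push_back(std::move(f));
    }
private:
    std::mutex m_;
    std::deque<Frame> frames_;                 // drained by the GPU worker thread
};

void dispatchFrame(const Frame& decoded, RenderQueue& firstGpu, RenderQueue& secondGpu)
{
    firstGpu.push(decoded);    // first image data  -> first GPU  (large viewing angle)
    secondGpu.push(decoded);   // second image data -> second GPU (small viewing angle)
}
```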
And 203, the first graphics processor renders the first image data into a first image under a large visual angle according to the first projection matrix and the first image model, and displays the first image in a first window of the data display interface.
In a virtual scene, a coordinate system of an object is generally required to be constructed, and a model is established in the coordinate system of the object (commonly called modeling). In the embodiment of the application, a first image model is established, and the first image model is spherical; in other cases, different shapes of image models may be accommodated depending on the particular use scenario.
In the following, the first image model is taken to be a sphere. It can be understood simply as a sphere formed by dividing the model into n circles along the longitude direction and allocating m points to each circle, for example n = 180 and m = 30. Note that the larger n and m are, the smoother the resulting sphere.
The first image model built with OpenGL consists of a set of points, each represented as [(x, y, z) (u, v)], where (x, y, z) are the vertex coordinates and (u, v) are the texture coordinates. The vertex coordinates (x, y, z) are three-dimensional space coordinates that determine the shape of the object; (u, v) are two-dimensional coordinates that determine where the texture is sampled from the texture unit. Note that, for uniformity, the vertex coordinates and texture coordinates are normalized, e.g. the vertex coordinates are mapped to [-1, 1] and the texture coordinates to [0, 1]. Note also that the vertex coordinates and texture coordinates use different coordinate systems.
Fig. 4 is a schematic diagram of vertex coordinates and texture coordinates, in which A, B, C and D are four points on the model whose vertex and texture coordinates are A [(-1, -1, 0) (0, 0.5)], B [(1, -1, 0) (0.5, 0.5)], C [(-1, 0, 0) (0, 1)] and D [(1, 0, 0) (0.5, 1)].
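A sketch of how such a spherical image model might be generated is given below; the n/m layout follows the description above, while the exact vertex ordering and parameterization are illustrative assumptions.

```cpp
#include <cmath>
#include <vector>

// Minimal sketch of building the spherical image model: n circles along the
// longitude direction, m points per circle, every point stored as [(x, y, z) (u, v)].
struct ModelVertex { float x, y, z; float u, v; };

std::vector<ModelVertex> buildSphereModel(int n = 180, int m = 30, float radius = 1.0f)
{
    const float kPi = 3.14159265f;
    std::vector<ModelVertex> verts;
    verts.reserve((n + 1) * (m + 1));
    for (int i = 0; i <= n; ++i) {            // longitude circles
        float theta = 2.0f * kPi * i / n;     // 0 .. 2*pi around the sphere
        for (int j = 0; j <= m; ++j) {        // points on one circle
            float phi = kPi * j / m;          // 0 .. pi from pole to pole
            ModelVertex v;
            v.x = radius * std::sin(phi) * std::cos(theta);   // vertex coordinates in [-1, 1]
            v.y = radius * std::cos(phi);
            v.z = radius * std::sin(phi) * std::sin(theta);
            v.u = float(i) / n;               // texture coordinates in [0, 1]
            v.v = float(j) / m;
            verts.push_back(v);
        }
    }
    return verts;
}
```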
After the model is built, a projection matrix can be constructed. In a virtual scene, a coordinate system in which an object (or a model, which is displayed as an object after texture mapping on the model) is located is called an object coordinate system, and a camera coordinate system is a three-dimensional coordinate system established with a focus center of a camera as an origin and corresponds to a world coordinate system. The virtual camera, the object, etc. are all in the world coordinate system. The relationships among the virtual camera, the object, the model in the world coordinate system, the wide angle and the elevation angle of the virtual camera, the distance from the lens to the near plane and the far plane, and the like are all embodied in the projection matrix.
The first projection matrix comprises information for controlling the first image model, information of the first virtual camera and information of perspective projection of the first virtual camera.
The first projection matrix may be determined as follows: the CPU acquires the preset initial parameters of the first virtual camera, including the position of the first virtual camera (the information of the first virtual camera), its Euler angles, the distance from the lens of the first virtual camera to the projection plane (also called the near plane), the distance from the lens to the far plane (the information of the perspective projection of the first virtual camera), and so on; the CPU then determines the first projection matrix from these initial parameters, for example by passing them to the corresponding functions of the GLM (OpenGL Mathematics) library and computing the matrix with those functions. The first projection matrix determined from the preset initial parameters of the first virtual camera can also be understood as the initial first projection matrix. In the embodiments of the present application, since no rotation control is applied to the first image model and the information of the first virtual camera does not change, the initial first projection matrix never changes, and the first projection matrix is always the initial first projection matrix.
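As a hedged illustration, the sketch below builds such a projection matrix with GLM by combining a model matrix (control of the image model), a view matrix (virtual camera information) and a perspective matrix (perspective projection information); the concrete parameter values and function layout are assumptions, since the patent only states that GLM functions are used.

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Minimal sketch of deriving the first projection matrix from the virtual
// camera's initial parameters with GLM; parameter values are illustrative.
glm::mat4 buildProjectionMatrix(glm::vec3 cameraPos,   // position of the virtual camera
                                glm::vec3 target,      // point the camera looks at
                                float fovDeg,          // field of view (perspective info)
                                float aspect,
                                float nearPlane,       // lens-to-near-plane distance
                                float farPlane)        // lens-to-far-plane distance
{
    glm::mat4 model = glm::mat4(1.0f);                              // controls the image model
    glm::mat4 view  = glm::lookAt(cameraPos, target,
                                  glm::vec3(0.0f, 1.0f, 0.0f));     // virtual camera info
    glm::mat4 proj  = glm::perspective(glm::radians(fovDeg),
                                       aspect, nearPlane, farPlane); // perspective projection
    return proj * view * model;    // the combined matrix passed to the shaders
}

// Example: a first virtual camera placed outside the sphere, looking at its centre.
// glm::mat4 firstProjection =
//     buildProjectionMatrix({0.0f, 0.0f, 3.0f}, {0.0f, 0.0f, 0.0f}, 45.0f, 1.0f, 0.1f, 100.0f);
```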
Fig. 5 is a schematic diagram of the imaging principle of perspective projection provided in the embodiments of the present application. The distance from the lens to the near plane 11 is the distance between point 0 and point 1, and the distance from the lens to the far plane 12 is the distance between point 0 and point 2. The position of the virtual camera can be simply understood as the coordinates of point 0 in the world coordinate system.
The first image model and the first projection matrix may be determined in advance, i.e. before step 203 is executed, and then simply retrieved when step 203 runs; they may also be determined during the execution of step 203, i.e. determined first when step 203 is executed. The following takes the case where they are determined in advance as an example.
Step 203 includes: the CPU obtains the first projection matrix and the first image model; the CPU sends the first projection matrix and the first image model to the first GPU; the first GPU renders the first image data into a first image under a large viewing angle according to the first projection matrix and the first image model. Specifically, the vertices of the first image model are sent to the vertex shader, the texture coordinates of the first image model are passed to the fragment shader, the texture unit corresponding to the texture coordinates is determined from the first image data, and the first GPU renders the first image under the large viewing angle.
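The sketch below illustrates what such a render pass could look like on the first GPU with plain OpenGL calls; the uniform name uProjection, the GL loader and the triangle layout are assumptions for illustration.

```cpp
#include <glad/glad.h>                     // any GL loader; an assumption of this sketch
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// Minimal sketch of the first GPU's render pass: the projection matrix goes to
// the vertex shader as a uniform, the corrected frame is bound as the texture
// sampled by the fragment shader, and the sphere model is drawn.
void renderFirstImage(GLuint program, GLuint sphereVao, GLsizei vertexCount,
                      GLuint frameTexture, const glm::mat4& firstProjection)
{
    glUseProgram(program);
    glUniformMatrix4fv(glGetUniformLocation(program, "uProjection"),
                       1, GL_FALSE, glm::value_ptr(firstProjection));
    glActiveTexture(GL_TEXTURE0);          // texture unit holding the image data
    glBindTexture(GL_TEXTURE_2D, frameTexture);
    glBindVertexArray(sphereVao);          // vertices + texture coordinates of the model
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);   // assumes the model was expanded into triangles
}
```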
The large viewing angle refers to a viewing angle at which at least complete image data can be seen in the field of view after rendering. It can be simply understood that a large viewing angle is a viewing angle at which the first virtual camera is placed farther outside the first image model, so that the complete planar image corresponding to the first image model is seen within the field of view. The large view angle is essentially the view angle corresponding to the placement of the first image model into the viewing frustum of the first virtual camera. As shown in fig. 5, the viewing frustum is a trapezoidal region between the proximal plane 11 and the distal plane 12. It is to be understood that at large viewing angles the first image model is entirely within the viewing cone of the first virtual camera. In this step, the first image at a large viewing angle is obtained, so that the user can understand the content of the image data as a whole.
After the first GPU has produced the first image under the large viewing angle, the first image is displayed in the first window of the data display interface.
The data display interface comprises at least one first window and at least one second window. Referring to fig. 6, fig. 6 is a schematic view of a data display interface provided in an embodiment of the present application. The data presentation interface 20 comprises a first window 21 on the left side of the data presentation interface and two second windows 22 on the right side of the first window 21. The bottom layer in the first window 21 shows a first image. As can be seen from fig. 6, the obtained first image corresponds/matches the image data. The first window and/or the second window may be present on the data presentation interface 20 in the form of a display control, for example, the first window includes at least one display control, and the second window includes one display control.
And 204, the second graphics processor generates a second image under a small viewing angle according to the second projection matrix, the second image model and the second image data, and displays the second image in the second window, wherein the second projection matrix is different from the first projection matrix and the second image model is the same as the first image model.
Note that, just as there is an initial first projection matrix, there is also an initial second projection matrix. The initial first and second projection matrices can be understood as the default projection matrices used when the data display interface is opened or refreshed. The initial first image and the initial second image are determined from the initial first projection matrix with the first image model and from the initial second projection matrix with the second image model respectively, i.e. they are the images shown after the data display interface is opened and before any control operation is performed. The initial first image and the initial second image are therefore images under the default viewing angles corresponding to the default projection matrices.
The second projection matrix comprises information for controlling the second image model, information of the second virtual camera and information of perspective projection of the second virtual camera.
The initial second projection matrix may be determined as follows: acquire the preset initial parameters of the second virtual camera, including the position of the second virtual camera (the information of the second virtual camera), its Euler angles, the distance from the lens of the second virtual camera to the near plane, the distance from the lens to the far plane (the information of the perspective projection of the second virtual camera), and so on; then determine the initial second projection matrix from these initial parameters. The initial second projection matrix may be determined in advance. Note that the initial first projection matrix differs from the initial second projection matrix, and correspondingly the first projection matrix differs from the second projection matrix. The projection matrix obtained after updating the initial second projection matrix is used as the second projection matrix; the second projection matrix can be updated by a control operation on the second window or by a control operation on the first window. In the embodiments of the present application the second projection matrix is updated by a control operation on the first window. A control operation on the first or second window updates the information on the rotation control of the second image model and the information of the second virtual camera.
Wherein the second image model may be predetermined. In this embodiment, the second image model is the same as the first image model, and the first image model can be directly obtained as the second image model.
The step of generating the second image under the small viewing angle according to the second projection matrix, the second image model and the second image data includes: the CPU obtains the second image model; the CPU transmits the second image model to the second GPU; and the second GPU generates the second image under the small viewing angle according to the second projection matrix, the second image model and the second image data. Specifically, the vertices of the second image model are transmitted to the vertex shader, the texture coordinates of the second image model are passed to the fragment shader, the texture unit corresponding to the texture coordinates is determined from the second image data, and the second GPU renders the second image under the small viewing angle.
The small viewing angle refers to a viewing angle at which only local image data is visible in the field of view after rendering. It can be simply understood as the viewing angle obtained by placing the second virtual camera inside the second image model, so that only a local planar image corresponding to the second image model is projected into the field of view. Obtaining the second image under the small viewing angle in this step lets the user understand the content of the image data locally, which improves the efficiency of understanding it.
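The sketch below illustrates one way a small viewing angle could be realized, assuming the second virtual camera sits at the centre of the sphere and looks outward along a direction given by two angles; the angle convention and the 60-degree field of view are illustrative, not values fixed by the patent.

```cpp
#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Minimal sketch of a "small viewing angle": the second virtual camera is placed
// at the centre of the sphere model and looks outward, so only part of the sphere
// falls inside its frustum.  (The model matrix is assumed to be identity here.)
glm::mat4 buildSecondProjection(float yawRad, float pitchRad,
                                float aspect, float nearPlane, float farPlane)
{
    glm::vec3 eye(0.0f);                                    // inside the sphere
    glm::vec3 dir(std::cos(pitchRad) * std::cos(yawRad),    // viewing direction from the angles
                  std::sin(pitchRad),
                  std::cos(pitchRad) * std::sin(yawRad));
    glm::mat4 view = glm::lookAt(eye, eye + dir, glm::vec3(0.0f, 1.0f, 0.0f));
    glm::mat4 proj = glm::perspective(glm::radians(60.0f), aspect, nearPlane, farPlane);
    return proj * view;    // second projection matrix used for the second window
}
```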
The obtained second image is displayed in a second window of the data display interface. As shown in fig. 6, if there is one second window 22 on the data display interface, the second image is displayed in that window. If there are several second windows 22, the generated second image is displayed in the second window corresponding to the control operation, and the second images in the other second windows remain unchanged. The small viewing angles of the individual second windows may differ, so at the same moment each second window may display a different second image.
In the above steps, the first window of the data display interface displays the first image under the large viewing angle and the second window displays the second image under the small viewing angle, so planar images of the image data under different viewing angles are obtained. The image data can thus be understood from different viewing angles, which makes its content easier to grasp and improves the efficiency of understanding it.
The first image and the second image are projections of the same image model (the first image model and the second image model are identical) under a large and a small viewing angle, mapped with the same texture (the image data). The first image under the large viewing angle gives an overall understanding of the image data, while the second image under the small viewing angle gives a local understanding and thus shows its details. When a control operation is performed on a window of the data display interface (the first window or a second window), the second image keeps changing. Moreover, the second image model is a sphere, covering 360 degrees with no boundary, so the second image easily repeats, i.e. it wraps around while the window is being controlled. Therefore, when controlling the window the user needs to know which part of the first image the second image currently displayed in the second window corresponds to, so as to locate the region of interest more quickly.
205, the third graphics processor determines, according to the second projection matrix and the first image model, the three-dimensional navigation area within the first image model that corresponds to the second image.
The first image or the second image determined based on the projection matrix (the first projection matrix and the second projection matrix, respectively) and the image model (the first image model and the second image model, respectively) as described above is an image obtained by the imaging principle of perspective projection. As shown in fig. 5, the projection of a point in the image model between the near plane 11 and the far plane 12 can be seen in our field of view.
According to the imaging principle of perspective projection, what is visible in the field of view is obtained by multiplying the vertices of the image model by the projection matrix, normalizing and clipping the vertices that fall on the near plane, and finally displaying them by texture mapping. Therefore, to determine the three-dimensional navigation area in the first image model that corresponds to the second image, i.e. the position of the second image within the first image, the problem can be reversed: determine which vertices of the first image model can be projected onto the near plane of the second projection matrix; once these vertices are determined, take the area they cover as the three-dimensional navigation area and highlight the texture coordinates corresponding to that area. Which vertices of the first image model can be projected onto the near plane of the second projection matrix is in turn determined by the second projection matrix and the first image model.
Specifically, the step of determining the three-dimensional navigation area in the first image model corresponding to the second image according to the second projection matrix and the first image model includes: the CPU obtains the first image model and sends it to the third GPU; the third GPU determines, from the vertices of the first image model, the navigation vertices that project onto the near plane corresponding to the second projection matrix, according to the second projection matrix and the first image model; the area corresponding to the navigation vertices, i.e. the area where they are located, is taken as the three-dimensional navigation area in the first image model corresponding to the second image.
Navigation vertices are the vertices of the first image model that can be projected onto the near plane of the second projection matrix; after rendering, these vertices yield the navigation image, which represents the position of the second image within the first image. Specifically, the step in which the third GPU determines the navigation vertices from the vertices of the first image model according to the second projection matrix and the first image model includes: the third GPU computes the projected coordinates of the vertices of the first image model according to the second projection matrix, i.e. each vertex of the first image model is multiplied by the second projection matrix to obtain its projected coordinates; the third GPU then determines, from these projected coordinates, which vertices project into the near plane corresponding to the second projection matrix. Concretely, the third GPU checks whether each vertex's projected coordinates fall within the range of the near plane corresponding to the second projection matrix; if so, the vertex is a navigation vertex, otherwise it is a non-navigation vertex. Navigation vertices are visible to the user after projection onto the near plane of the second projection matrix, whereas non-navigation vertices are not.
Specifically, if the first image model is divided into 180 circles by longitude with 30 points per circle, the number of vertices is 180 × 30. The third GPU treats all the vertex coordinates as one matrix and multiplies the second projection matrix by this vertex-coordinate matrix to obtain the projected coordinates of every vertex. If a vertex's projected coordinates lie within the range of the near plane corresponding to the second projection matrix, it is a navigation vertex; otherwise it is a non-navigation vertex. It can be understood that once the second projection matrix is determined, the range of its near plane is also determined. If, for projected coordinates (x1, y1, z1), x1 and y1 both lie in [-1, 1], i.e. -1 ≤ x1 ≤ 1 and -1 ≤ y1 ≤ 1, the projected coordinates are within the range of the near plane corresponding to the second projection matrix. After the navigation vertices are determined, the area corresponding to them is taken as the three-dimensional navigation area in the first image model corresponding to the second image. Note that z1 does not need to be checked here, because the near plane is two-dimensional, so all z-axis coordinates are equal; z1 can later be used as the depth of field to achieve the effect of nearer objects appearing larger and farther objects smaller.
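A CPU-style sketch of this test is shown below for clarity (the patent performs it on the third GPU, as discussed next); the perspective divide by w is a standard OpenGL detail added here as an assumption beyond the simplified description above.

```cpp
#include <glm/glm.hpp>
#include <vector>

// Minimal sketch of the navigation-vertex test: every vertex of the first image
// model is multiplied by the second projection matrix, and the vertex is kept
// when its projected x and y fall inside the near-plane range [-1, 1].
std::vector<bool> findNavigationVertices(const std::vector<glm::vec3>& modelVertices,
                                         const glm::mat4& secondProjection)
{
    std::vector<bool> isNavigation(modelVertices.size(), false);
    for (std::size_t i = 0; i < modelVertices.size(); ++i) {
        glm::vec4 clip = secondProjection * glm::vec4(modelVertices[i], 1.0f);
        if (clip.w <= 0.0f)
            continue;                         // behind the camera: not visible
        float x = clip.x / clip.w;            // normalized device coordinates
        float y = clip.y / clip.w;
        if (x >= -1.0f && x <= 1.0f && y >= -1.0f && y <= 1.0f)
            isNavigation[i] = true;           // projects onto the near plane -> navigation vertex
    }
    return isNavigation;
}
```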
It can be understood that multiplying the first projection matrix by the vertices of the first image model determines the vertices projected onto the near plane of the first projection matrix, which after clipping and rendering yield the first image; multiplying the second projection matrix by the vertices of the second image model determines the vertices projected onto the near plane of the second projection matrix, which after clipping and rendering yield the second image. Therefore, after the second projection matrix is multiplied by the first image model, the navigation vertices obtained are exactly the vertices of the first image model that correspond to the second image (projecting these vertices with the second projection matrix would yield the second image).
Put more simply: outside the first image model, the first image is obtained by multiplying the first projection matrix by the vertices of the first image model and then clipping and rendering; inside the second image model, the second image is obtained by multiplying the second projection matrix by the vertices of the second image model and then clipping and rendering; hence, multiplying the (inner) second projection matrix by the first image model reveals which vertices of the first image model can be projected onto the near plane of the second projection matrix, and these vertices are taken as the navigation vertices.
Note that the determination of the three-dimensional navigation area in the first image model corresponding to the second image from the second projection matrix and the first image model described above is performed by the third GPU. The third GPU computes the projected coordinates of the vertices of the first image model in matrix form, which greatly increases the processing speed and reduces the power consumption of the mobile terminal. If the CPU were used instead, it would have to traverse every vertex of the first image model, i.e. 180 × 30 vertices, computing the projected coordinates of each vertex from the second projection matrix one by one, which greatly reduces processing speed and increases the power consumption of the mobile terminal. Moreover, the CPU is inefficient at floating-point computation and produces larger errors, whereas the GPU is designed for floating-point operations, so it is both faster and considerably more accurate. In the fragment shader of the third GPU, the vertices and texture coordinates of the first image model and the second projection matrix can be passed in together, and whether a vertex of the first image model is a navigation vertex can be determined there (with the transparency value then adjusted directly), eliminating the heavy floating-point work the CPU would need to decide whether a vertex is a navigation vertex, improving processing efficiency and reducing the power consumption of the mobile terminal.
And 206, the third graphics processor processes the three-dimensional navigation area in a preset manner to obtain a navigation image, so that the navigation image is highlighted in the first window, wherein the navigation image represents the position of the second image within the first image.
Specifically, after the third GPU determines the navigation vertices, it determines the texture coordinates corresponding to them; the third GPU then processes the three-dimensional navigation area in the preset manner according to these texture coordinates, so that the navigation image is highlighted in the first window. The navigation image represents the position of the second image within the first image.
Note that if the CPU were used for this processing, then after determining the navigation vertices and their texture coordinates the CPU would have to copy the texture coordinates to the third GPU so that the GPU could process the three-dimensional navigation area according to them and highlight it in the first window. With the scheme of the embodiments of the present application, the third GPU determines the navigation vertices and the corresponding texture coordinates itself, so no texture coordinates need to be copied; this saves a large amount of CPU-to-GPU transfer time, further improves processing efficiency, and further reduces the power consumption of the mobile terminal.
The step in which the third GPU processes the three-dimensional navigation area in a preset manner according to the texture coordinates to obtain a navigation image and displays the navigation image in the first window in a highlighted manner includes: the third GPU acquires a three-dimensional navigation area preset texture and a first preset transparency, where the three-dimensional navigation area preset texture is a preset color or a preset picture; and the third GPU renders the three-dimensional navigation area according to the three-dimensional navigation area preset texture, the first preset transparency and the texture coordinates to obtain the navigation image, and displays the navigation image in the first window in a highlighted manner. Specifically, the texture corresponding to the texture coordinates is set to the three-dimensional navigation area preset texture, the transparency of that texture is set to the first preset transparency, and the third GPU renders the three-dimensional navigation area according to the texture so set. The three-dimensional navigation area is thus rendered with the three-dimensional navigation area preset texture and displayed at the first preset transparency, which highlights the navigation image and hence the position of the second image in the first image.
Further, the area outside the three-dimensional navigation area is taken as a non-three-dimensional navigation area. In this case, the step in which the third GPU processes the three-dimensional navigation area in a preset manner according to the texture coordinates to obtain a navigation image and displays the navigation image in the first window in a highlighted manner includes:
the third GPU acquires a three-dimensional navigation area preset texture, a first preset transparency and a second preset transparency, wherein the second preset transparency is smaller than the first preset transparency and the three-dimensional navigation area preset texture is a preset color or a preset picture; the third GPU renders the three-dimensional navigation area according to the three-dimensional navigation area preset texture, the first preset transparency and the texture coordinates to obtain a navigation image, and displays the navigation image in the first window in a highlighted manner; and the third GPU renders the non-three-dimensional navigation area at the second preset transparency. Rendering the three-dimensional navigation area according to the three-dimensional navigation area preset texture, the first preset transparency and the texture coordinates specifically includes: setting the texture corresponding to the texture coordinates to the three-dimensional navigation area preset texture, setting the transparency of that texture to the first preset transparency, and having the third GPU render the three-dimensional navigation area according to the texture so set, so that the three-dimensional navigation area is rendered with the three-dimensional navigation area preset texture and displayed at the first preset transparency.
It is understood that, in order to highlight the navigation image, the rendered navigation image is displayed on top of the first image. In order not to block the region of the first image corresponding to the non-three-dimensional navigation area and to improve the display effect, the second preset transparency is set to be less than 0.8; for example, the second preset transparency may be set to 0. In order to highlight the navigation image, the first preset transparency may be set within the range (0, 1); and in order not to completely cover the area of the first image corresponding to the navigation image, so as to improve the user experience, the first preset transparency may be set to 0.8. The preset color may be set to red to highlight the navigation image.
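These transparency choices can be sketched in plain OpenGL ES calls as follows (a hedged example: the program handle navProgram and the uniform names match the shader sketch given earlier and are assumptions, not the patent's identifiers). Standard alpha blending is what lets the first image underneath show through the 0.8-alpha navigation overlay while the 0-alpha remainder stays invisible:

    #include <GLES3/gl3.h>

    // Illustrative sketch: apply the preset colour (red) and the two preset
    // transparencies to the navigation overlay and enable alpha blending so
    // that the first image below remains partially visible.
    void setNavigationHighlight(GLuint navProgram) {
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glUseProgram(navProgram);
        // preset colour red + first preset transparency 0.8 for the navigation area
        glUniform4f(glGetUniformLocation(navProgram, "uNavColor"), 1.0f, 0.0f, 0.0f, 0.8f);
        // second preset transparency 0 for the non-three-dimensional navigation area
        glUniform4f(glGetUniformLocation(navProgram, "uNonNavColor"), 0.0f, 0.0f, 0.0f, 0.0f);
    }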
As shown in the left diagram of fig. 6, the rendered navigation image 23 and the rendered non-three-dimensional navigation area are located above the first image. Since the current first preset transparency is not 1, the partial area of the first image located below the navigation image 23 can be seen through the navigation image 23; this partial area coincides with the second image. Since the second preset transparency is 0, the rendered non-three-dimensional navigation area is transparent and cannot be seen by the human eye.
In some other cases, the area outside the three-dimensional navigation area is likewise taken as a non-three-dimensional navigation area, and the step in which the third GPU processes the three-dimensional navigation area in a preset manner according to the texture coordinates to obtain a navigation image and displays the navigation image in the first window in a highlighted manner includes:
the third GPU acquires a three-dimensional navigation area preset texture, a first preset transparency, a non-three-dimensional navigation area preset texture and a second preset transparency, wherein the second preset transparency is smaller than the first preset transparency, the three-dimensional navigation area preset texture is a first preset color or a first preset picture, and the non-three-dimensional navigation area preset texture is a second preset color or a second preset picture; the third GPU renders the three-dimensional navigation area according to the three-dimensional navigation area preset texture, the first preset transparency and the texture coordinates to obtain a navigation image, and displays the navigation image in the first window in a highlighted manner; and the third GPU renders the non-three-dimensional navigation area according to the non-three-dimensional navigation area preset texture and the second preset transparency.
The third GPU renders the three-dimensional navigation area according to the three-dimensional navigation area preset texture, the first preset transparency and the texture coordinates as follows: the third GPU sets the texture corresponding to the texture coordinates to the three-dimensional navigation area preset texture and sets the transparency of that texture to the first preset transparency; it then renders the three-dimensional navigation area according to the texture so set, so that the three-dimensional navigation area is rendered with the three-dimensional navigation area preset texture and displayed at the first preset transparency. The third GPU renders the non-three-dimensional navigation area according to the non-three-dimensional navigation area preset texture and the second preset transparency as follows: the texture corresponding to the non-three-dimensional navigation area is set to the non-three-dimensional navigation area preset texture and its transparency is set to the second preset transparency; the non-three-dimensional navigation area is then rendered according to the texture so set, so that it is rendered with the non-three-dimensional navigation area preset texture and displayed at the second preset transparency. The settings of the first preset transparency and the second preset transparency can refer to the description above; the three-dimensional navigation area preset texture and the non-three-dimensional navigation area preset texture may be the same or different. To highlight the navigation image, the non-three-dimensional navigation area is rendered with the non-three-dimensional navigation area preset texture at the second preset transparency.
In the embodiment, the three-dimensional navigation area and the non-three-dimensional navigation area are distinguished, the navigation image is further highlighted, and the user experience is improved.
It should be noted that there may be multiple implementation scenarios for the step in which the third GPU processes the three-dimensional navigation area in a preset manner according to the texture coordinates to obtain a navigation image and displays the navigation image in the first window in a highlighted manner.
For example, in one implementation scenario, there is only one display control in the first window, through which both the navigation image (and the rendered non-three-dimensional navigation area) and the first image are displayed. The display control includes two texture units: a first texture unit and a second texture unit. Specifically, before the step of displaying the first image in the first window of the data display interface, the method further includes: acquiring the first texture unit and the second texture unit in the display control of the first window, the second texture unit being disposed on the first texture unit. The step of displaying the first image in the first window of the data display interface then includes: displaying the first image within the first texture unit in the display control of the first window. The step of highlighting the navigation image within the first window includes: highlighting the navigation image (and the rendered non-three-dimensional navigation area) within the second texture unit in the display control of the first window. It should be noted that, in this case, while the step of processing the three-dimensional navigation area in the preset manner to obtain the navigation image and highlighting it in the second texture unit of the first window display control is executed, the step of rendering the first image data into the first image at the large viewing angle according to the first projection matrix and the first image model and displaying it in the first texture unit of the first window display control is also executed synchronously. It will be appreciated that, because the first image and the navigation image are displayed in a single display control, the first image and the navigation image (and the non-three-dimensional navigation area) must be rendered together; if only the navigation image (and the non-three-dimensional navigation area) were rendered, the first image would not be displayed in the first window, defeating the purpose of the present application.
As another implementation scenario, there are two display controls in the first window, for example a first display control and a second display control. The first display control is used to display the first image, and the second display control is used to display the navigation image (and the processed non-three-dimensional navigation area). Specifically, before the step of displaying the first image in the first window of the data display interface, the method further includes: acquiring the first display control and the second display control in the first window, the second display control being disposed over the first display control. The step of displaying the first image in the first window of the data display interface then includes: displaying the first image in the first display control of the first window of the data display interface. The step of highlighting the navigation image within the first window includes: highlighting the navigation image (and the rendered non-three-dimensional navigation area) in the second display control of the first window. In this way, the first image and the navigation image (and the rendered non-three-dimensional navigation area) are displayed through two separate display controls and processed separately, which improves processing efficiency: when the three-dimensional navigation area is processed, only the content displayed on the second display control needs to be rendered and the content on the first display control does not, so the power consumption of the mobile terminal is reduced and the processing efficiency and speed are improved.
By highlighting the navigation image, the user can clearly know, from the navigation image, the position within the first image displayed in the first window of the second image displayed in the second window, and thus establish the association between the images at different viewing angles. This further improves the efficiency of understanding the image data content, makes it easy for the user to adjust the viewed area, guides the user to quickly find the area of interest, increases the speed at which the user locates the area of interest in the image data, and improves the user experience. In addition, the second image displayed in the second window also provides a detailed display of the image data.
At this point, the first image and the navigation image are displayed in the first window of the data display interface, and the second image is displayed in the second window of the data display interface.
207, the central processing unit acquires the control operation of the user on the first window based on the navigation image.
Since the navigation image indicates the position in the first image to which the second image corresponds, the user can perform a control operation based on the navigation image in the first window of the data presentation interface. The control operation may be a sliding touch operation performed by the user on the navigation image of the first window, or may be performed in another manner. The effect of the control operation can be described briefly as follows: after the user touches and slides on the navigation image of the first window, the second projection matrix of the second window changes, the second image therefore changes, and the navigation image of the first window changes accordingly. It appears as if the navigation image on the first window were being controlled directly.
In the embodiment of the present application, a control operation by a slide touch operation is described as an example.
The events of the control operation corresponding to the sliding touch operation on the first window include slide events, a click event, and the like. The click event is used to stop the accelerator (inertia) effect introduced by the control operation on the second window; it should be understood that the control operation on the first window does not involve this accelerator processing. The slide events are used to track the various states during the finger slide and include BeginDrag, DragMove, EndDrag, DragCancel, and the like. BeginDrag corresponds to touchesBegan and is understood as a finger-press event; DragMove corresponds to touchesMoved and is understood as a finger-movement event; EndDrag corresponds to touchesEnded and is understood as a finger-lift event; DragCancel corresponds to touchesCancelled and is understood as an unexpected-interrupt event, such as an interruption caused by an incoming call.
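A minimal sketch of this mapping, assuming a hypothetical TouchPhase enumeration for the platform's touch phases (all names here are illustrative, not from the patent):

    enum class TouchPhase { Began, Moved, Ended, Cancelled };
    enum class SlideEvent { BeginDrag, DragMove, EndDrag, DragCancel };

    // Map a platform touch phase to the slide events described above.
    SlideEvent mapTouchPhase(TouchPhase phase) {
        switch (phase) {
            case TouchPhase::Began:     return SlideEvent::BeginDrag;   // finger press
            case TouchPhase::Moved:     return SlideEvent::DragMove;    // finger movement
            case TouchPhase::Ended:     return SlideEvent::EndDrag;     // finger lift
            case TouchPhase::Cancelled: return SlideEvent::DragCancel;  // unexpected interrupt, e.g. a call
        }
        return SlideEvent::DragCancel;
    }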
It should be noted that, in the embodiment of the present application, the control operation of the user on the first window based on the navigation image is acquired while the control operation has not yet ended. That is, the control operation is acquired during the user's sliding on the first window; once the user's control operation ends, the flow ends and the corresponding steps in the embodiment of the present application are no longer executed.
And 208, converting the control operation into an angle in a three-dimensional space by the central processing unit.
For the mobile terminal, the screen corresponds to a two-dimensional coordinate system: the height direction (vertical direction) corresponds to the y axis and the width direction (horizontal direction) corresponds to the x axis. The position coordinates corresponding to the sliding touch operation therefore generally include an x-axis coordinate and a y-axis coordinate, both of which are physical coordinates on the screen. The origin (0, 0) is at the upper left corner of the screen, and the screen coordinate system of the mobile terminal has no z axis.
In the image model, the rotation of a model in openGL can only be performed around the base axes. The base axes include a first base axis, a second base axis and a third base axis, which in the embodiment of the present application correspond to the X axis, Y axis and Z axis of the three-dimensional coordinate system, respectively. That is, a Z axis is introduced in openGL, and (0, 0, 0) corresponds to the midpoint of the first image in the first window or the midpoint of the second window. In the present embodiment, the object coordinate system is a right-handed coordinate system, and its base axes coincide with those of the world coordinate system.
The core of determining the second projection matrix in the embodiment of the present application is determining it from the user's control operation on the first window based on the navigation image: the control operation of the user's gesture sliding on the screen of the mobile terminal is converted into corresponding angles, the angles are sent to the second window, and the second window determines the second projection matrix from the received angles. The angles include the rotation angle of the second image model about the third base axis (Z axis) and the pitch angle, about the first base axis (X axis), of the second virtual camera corresponding to the second projection matrix.
Specifically, step 208 includes: acquiring a central coordinate of a central point corresponding to a first image in a first window; acquiring a control coordinate of a control point corresponding to the control operation; and converting the control operation into an angle in a three-dimensional space according to the central coordinate and the control coordinate.
As can be seen from fig. 6 and the above description, the first image is an image at a large viewing angle obtained by pasting the first image data in its entirety as texture units onto a half sphere, and it is displayed on the two-dimensional plane as a circle. Since the midpoint of the first image in the first window corresponds to the origin of the three-dimensional coordinate axes in openGL, the center of the first image in the first window is taken as the center point in the two-dimensional coordinate system of the screen in order to convert the control operation into an angle. It is understood that the center point of the first image is generally the center point of the first window, but this is not always the case, which is why the center point of the first image, rather than that of the first window, is used here.
The center coordinate and the control coordinate are both coordinates in the two-dimensional coordinate system of the screen, with the upper left corner as the origin. The center coordinate can be calculated in advance: let the width of the first window be windows_width and its height be windows_height; if the center point of the first image is the center point of the first window, the center coordinate is (windows_width/2, windows_height/2); if not, the center coordinate is determined from the pixels of the first image or calculated in some other manner.
And converting the control operation into an angle in a three-dimensional space according to the central coordinate and the control coordinate, namely converting the coordinate in the two-dimensional coordinate system into the angle in the three-dimensional space, so as to control the second image through the control operation and achieve the effect of changing the display of the navigation image in the first image.
How to determine the corresponding angle according to the control coordinate corresponding to the control point slid by the gesture and the center coordinate corresponding to the center point is the core of converting the control operation into the angle in the three-dimensional space in the embodiment of the present application.
For the rotation angle of the second image model about the third base axis (Z axis), the step of converting the control operation into an angle in three-dimensional space according to the center coordinate and the control coordinate specifically includes: determining, according to the center coordinate and the control coordinate, the angle on the first window of the straight line formed by the control point and the center point; and taking this angle as the rotation angle of the second image model corresponding to the control operation.
Fig. 7 is a schematic diagram for determining the angle at which a control point is located on the first window according to an embodiment of the present application. In the figure, point A is the center point with coordinates (x0, y0), and point B is the control point with coordinates (x, y). Note that, since point A and point B are on the first window and their coordinates are screen coordinates in two-dimensional space, both are expressed with the upper left corner of the screen as the origin. Since the first image is displayed as a circle and, in openGL, (0, 0, 0) corresponds to the midpoint of the first image in the first window, the first image is regarded as a clock face in order to convert the coordinates in the two-dimensional coordinate system into an angle in three-dimensional space more conveniently and quickly: the 0-point (12-point) direction corresponds to 0 or 360 degrees, the 3-point direction to 90 degrees, the 6-point direction to 180 degrees, and the 9-point direction to 270 degrees.
The angle at which the straight line formed by the control point and the center point is located on the first window is understood to be the angle between that straight line and the straight line corresponding to the 0-point direction (the 0-degree direction) in the first image. Specifically, the step of determining this angle according to the center coordinate and the control coordinate includes: determining the quadrant in which the control point lies according to the center coordinate and the control coordinate; and determining, according to that quadrant, the center coordinate and the control coordinate, the angle of the straight line formed by the control point and the center point on the first window. The quadrants are formed with point A as the center and are used in the embodiment of the present application to calculate this angle.
Here, 0 to 90 degrees correspond to the first quadrant, 90 to 180 degrees to the second quadrant, 180 to 270 degrees to the third quadrant, and 270 to 360 degrees to the fourth quadrant, and the quadrant in which the control point lies is determined from the center coordinate and the control coordinate. For example, if x > x0 and y < y0, the control point is in the first quadrant, and the angle of the straight line formed by the control point and the center point on the first window is the angle between that straight line and the straight line corresponding to the 0-point direction, namely arctan((x − x0)/(y0 − y)); the expression may equivalently be written in terms of a cosine, a sine, or the like. If x < x0 and y > y0, the control point is in the third quadrant, and the angle of the straight line formed by the control point and the center point on the first window is the angle between that straight line and the straight line corresponding to the 6-point direction (180-degree direction), namely arctan((x0 − x)/(y − y0)), plus the angle between the straight line corresponding to the 6-point direction (180-degree direction) and the straight line corresponding to the 0-point direction, that is, 180° + arctan((x0 − x)/(y − y0)). The calculation in the other quadrants is similar; in the fourth quadrant, for instance, the angle of the straight line formed by the control point and the center point on the first window is 270° + arctan((y0 − y)/(x0 − x)). As shown in fig. 7, if the control point is in the second quadrant, the angle of the straight line formed by the control point and the center point on the first window is the angle between that straight line and the 3-point direction (90-degree direction) plus the angle between the straight line corresponding to the 3-point direction (90-degree direction) and the straight line corresponding to the 0-point direction, that is, 90° + arctan((y − y0)/(x − x0)).
The angle of the straight line formed by the control point and the center point on the first window is taken as the rotation angle of the second image model corresponding to the control operation. This angle is an absolute angle; roll, yaw and pitch all represent absolute angles, and they are therefore used in the embodiment of the present application to denote the corresponding absolute angles. Here, pitch denotes rotation about the Y axis, also called the yaw angle; yaw denotes rotation about the X axis, also called the pitch angle; and roll denotes rotation about the Z axis, also called the roll angle. The control operation essentially changes the roll angle roll and the pitch angle yaw, while the yaw angle pitch is always fixed; its default value is 90 degrees, which ensures that the second virtual camera always faces the direction pointed to by the Z axis. The rotation angle of the second image model corresponding to the control operation is represented by roll.
It will be appreciated that the second image model is spherical, and one rotation of the sphere corresponds to 360 degrees; the first image is likewise displayed as a circle, and sliding the control operation through a single turn around the center point of the first image also corresponds exactly to 360 degrees. Moreover, the object coordinate system is a right-handed coordinate system, so sliding one full circle around the center point of the first image is equivalent to one full rotation around the Z axis in the three-dimensional space of the object coordinate system. Therefore, by taking the angle at which the straight line formed by the control point and the center point is located on the first window as the rotation angle of the second image model corresponding to the control operation, the control operation of the user on the two-dimensional plane (the first window) is converted into the rotation angle of the second image model, namely its rotation angle about the third base axis (Z axis).
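The quadrant-by-quadrant calculation above can be collapsed into a single expression; the following sketch (function and variable names are illustrative) computes the clock-face angle in degrees from the center point A(x0, y0) and the control point B(x, y) in screen coordinates, and agrees with the per-quadrant formulas:

    #include <cmath>

    // Roll angle: the clockwise angle, in degrees, between the 0-point (12 o'clock)
    // direction and the line from A(x0, y0) to B(x, y), in screen coordinates
    // (origin at the top-left, y pointing down). 3 o'clock = 90°, 6 o'clock = 180°,
    // 9 o'clock = 270°.
    float rollAngleOnFirstWindow(float x0, float y0, float x, float y) {
        const float kPi = 3.14159265f;
        float dx = x - x0;
        float dy = y - y0;                         // positive dy points towards 6 o'clock
        float deg = std::atan2(dx, -dy) * 180.0f / kPi;
        if (deg < 0.0f) deg += 360.0f;             // map (-180, 180] onto [0, 360)
        return deg;
    }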
For the pitch angle, about the first base axis (X axis), of the second virtual camera corresponding to the second projection matrix, the step of converting the control operation into an angle in three-dimensional space according to the center coordinate and the control coordinate specifically includes: acquiring the radius corresponding to the second image and the maximum pitch angle corresponding to the second virtual camera; determining the control distance from the control point to the center point according to the center coordinate and the control coordinate; and determining, according to the radius and the maximum pitch angle, the pitch angle of the second virtual camera corresponding to the control distance of the control operation.
The radius corresponding to the second image is the radius of the sphere corresponding to the first image model or the second image model, denoted r. The pitch angle includes an elevation angle, by which the second virtual camera tilts upward, and a depression angle, by which it tilts downward. The maximum value of the elevation angle is (180° − θ)/2, where θ is the Euler angle, and its minimum value is 0. The Euler angle is generally preset to 30 degrees and is the included angle between the line from the upper surface of the viewing frustum to the lens of the second virtual camera and the line from the lower surface of the viewing frustum to the lens of the second virtual camera. The maximum and minimum values of the depression angle coincide with those of the elevation angle but differ in direction. That is, the maximum pitch angle max is (180° − θ)/2 and the minimum pitch angle min is 0.
The control distance from the control point to the center point is determined from the center coordinate of the center point and the control coordinate of the control point: given the center coordinate (x0, y0) and the control coordinate (x, y), the control distance m from the control point to the center point is m = √((x − x0)² + (y − y0)²). Determining, according to the radius and the maximum pitch angle, the pitch angle of the second virtual camera corresponding to the control distance of the control operation includes: multiplying the maximum pitch angle by the control distance and dividing by the radius. Specifically, as shown in formula (1):

a = max × m / r    (1)

where a denotes the pitch angle of the second virtual camera corresponding to the control distance m, i.e., the pitch angle of the second virtual camera about the first base axis (X axis), max is the maximum pitch angle (180° − θ)/2, r is the radius, and θ is the Euler angle. The pitch angle a calculated in this way is an absolute angle.
After the pitch angle of the second virtual camera corresponding to the control distance is determined, the direction of the pitch angle also needs to be determined. The direction of the pitch angle of the second virtual camera may be determined from the control coordinate and the center coordinate: if (y0 − y) is negative, the pitch angle is determined to be downward, i.e., a depression angle; if (y0 − y) is positive, the pitch angle is determined to be upward, i.e., an elevation angle.
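Formula (1) together with the direction test can be sketched as follows (an illustrative helper, not the patent's code; returning a signed value is a convenience here, whereas the description above determines the direction separately):

    #include <cmath>

    // Pitch angle of the second virtual camera: the sliding distance m is scaled
    // by max / r, and the sign of (y0 - y) distinguishes the elevation angle
    // (positive, camera tilts up) from the depression angle (negative, tilts down).
    float pitchAngleOfSecondCamera(float x0, float y0, float x, float y,
                                   float r, float maxPitch) {
        float m = std::sqrt((x - x0) * (x - x0) + (y - y0) * (y - y0));  // control distance
        float a = maxPitch * m / r;                                      // formula (1)
        return (y0 - y) >= 0.0f ? a : -a;
    }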
It should be noted that the calculated angle, including the rotation angle of the second image model on the Z-axis of the third base axis and the pitch angle of the second virtual camera corresponding to the second projection matrix on the X-axis of the first base axis, is obtained on the first window based on the control operation of the navigation image. Whereas the second projection matrix corresponds to the projection matrix of the second window. Therefore, the calculated angle needs to be sent to the second window, so that the second window updates the corresponding second projection matrix according to the angle.
209, the central processing unit updates the second projection matrix corresponding to the second window according to the angle, and transmits the second projection matrix to the second graphic processor and the third graphic processor.
The projection matrices (including the first projection matrix and the second projection matrix) are MVP matrices, where MVP = perspective × view × model. The model matrix corresponds to the operation matrix of the second image model and mainly handles the rotation of the second image model about the X, Y and Z axes; it contains the information for controlling the second image model. The view matrix (also referred to as the view-angle matrix) mainly corresponds to the position point and the orientation of the second virtual camera (i.e., the pose of the second virtual camera). The perspective matrix corresponds to information such as the Euler angle, the near plane and the far plane of the second virtual camera and can be understood as the perspective-projection information of the second virtual camera.
How the angles map onto the second projection matrix is likewise a core of determining the second projection matrix in the embodiment of the present application: when the user performs a control operation on the first window based on the navigation image, the rotation angle of the second image model about the third base axis (Z axis) corresponding to the control operation adjusts the model matrix, and the pitch angle of the second virtual camera about the first base axis (X axis) adjusts the view matrix.
Specifically, the step of updating the second projection matrix corresponding to the second window according to the angles includes: updating the model matrix according to the rotation angle of the second image model corresponding to the control operation; updating the view matrix according to the pitch angle of the second virtual camera corresponding to the control operation; and updating the second projection matrix corresponding to the second window according to the model matrix, the view matrix and the perspective matrix, where the perspective matrix remains unchanged.
First, how the model matrix is updated according to the rotation angle of the second image model corresponding to the control operation. As described above, this rotation angle is represented by roll and is an absolute angle. The rotation angle roll can therefore be converted into radians and the rotate function called to perform the rotation, yielding the model matrix; for example, model = glm::rotate(glm::radians(roll), glm::vec3(0.0f, 0.0f, 1.0f)) * model, where glm::radians is the degree-to-radian conversion function.
Second, how the view matrix is updated according to the pitch angle of the second virtual camera corresponding to the control operation. The pose of a virtual camera is typically determined by three parameters: camera_pos, the position point of the virtual camera; camera_front, the orientation of the virtual camera; and camera_up, which is perpendicular to the orientation of the virtual camera. After initialization of the data display interface and before any control operation is performed on a window, camera_pos, camera_front and camera_up all hold their initial values. camera_pos keeps its initial value unchanged, for example the very center inside the second image model. When the user performs a control operation on the first window based on the navigation image, camera_front changes and camera_up changes with it, so the view matrix changes.
Specifically, the step of updating the view matrix corresponding to the pitch angle of the second virtual camera according to the control operation includes: taking the pitch angle of the control operation corresponding to the second virtual camera as the pitch angle of the second virtual camera, and acquiring the yaw angle of the second virtual camera; updating the orientation vector of the second virtual camera according to the yaw angle and the pitch angle; and updating the view matrix according to the orientation vector.
Fig. 8 is a schematic diagram of determining the orientation of the second virtual camera according to an embodiment of the present disclosure. Point C is the position camera_pos of the second virtual camera, and CD is the orientation camera_front of the second virtual camera, where the coordinates of point D are (x, y, z). Note that the orientation camera_front lies on the ray CD, and the length of CD may be any value; for ease of calculation it is assumed that the length of CD is 1, and the yaw angle pitch and the pitch angle yaw are known. The coordinates of point D may then be calculated according to formula (2), formula (3) and formula (4), giving the value of the orientation camera_front of the second virtual camera:

x = cos(yaw) × cos(pitch)    (2)

y = sin(yaw)    (3)

z = cos(yaw) × sin(pitch)    (4)
after the orientation camera _ front of the second virtual camera is calculated, the value of camera _ up may be further calculated.
Since camera _ front and camera _ up define a plane and the control operation corresponds to tilting up and down about the y-axis, the point of (0,1, 0) must be on the plane defined by camera _ front and camera _ up. A transition vector up _ help may be set to help calculate the value of camera up. Let up _ help be (0,1, 0).
And obtaining a right vector right of the second virtual camera by using the transition vector up _ help and the calculated orientation camera _ front of the second virtual camera, specifically, cross-multiplying the transition vector up _ help and the calculated orientation vector camera _ front of the second virtual camera, and then normalizing to obtain the right vector right, wherein the obtained right vector right is perpendicular to the orientation camera _ front of the second virtual camera according to the principle of cross-multiplication. Such as glm:: vec3 right = glm:: normaize (glm:: cross (up _ help, camera _ front)), where glm:: cross represents a cross product. And then obtaining a value of the camera _ up by using the right vector right and the calculated orientation vector camera _ front of the second virtual camera, specifically, cross-multiplying the orientation vector camera _ front of the second virtual camera and the right vector right, and then normalizing to obtain the value of the camera _ up. Such as camera _ up = glm:: normal (glm:: cross (camera _ front, right)). According to the principle of cross multiplication, the resulting camera _ up is perpendicular to the orientation camera _ front of the second virtual camera.
After the camera _ pos, the camera _ front and the camera _ up are obtained, the camera _ pos, the camera _ front and the camera _ up are used to determine the view matrix. Specifically, the lookup at function is called to realize, view = glm:, lookup at (camera _ pos, camera _ front, camera _ up), and the view matrix can be obtained.
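Putting the above steps together, a sketch of the view-matrix update in the document's glm style (the function name is illustrative, and formulas (2) to (4) are used in the reconstructed form given above; camera_pos is the origin at the very center of the second image model, so camera_front can be passed to lookAt directly as the target point):

    #include <cmath>
    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // "pitch" is the fixed 90-degree angle about the Y axis, "yaw" is the pitch
    // angle of the second virtual camera obtained from the control operation.
    glm::mat4 updateViewMatrix(float pitchDeg /* fixed at 90 */, float yawDeg) {
        glm::vec3 camera_pos(0.0f, 0.0f, 0.0f);
        glm::vec3 camera_front;
        camera_front.x = std::cos(glm::radians(yawDeg)) * std::cos(glm::radians(pitchDeg)); // (2)
        camera_front.y = std::sin(glm::radians(yawDeg));                                    // (3)
        camera_front.z = std::cos(glm::radians(yawDeg)) * std::sin(glm::radians(pitchDeg)); // (4)

        glm::vec3 up_help(0.0f, 1.0f, 0.0f);                                 // transition vector
        glm::vec3 right     = glm::normalize(glm::cross(up_help, camera_front));
        glm::vec3 camera_up = glm::normalize(glm::cross(camera_front, right));
        return glm::lookAt(camera_pos, camera_front, camera_up);
    }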
The second projection matrix is then generated from the updated view matrix, the updated model matrix and the perspective matrix, so that the second projection matrix corresponding to the second window is updated. In this way, the control operation of the user on the first window based on the navigation image is converted into angles, and the second projection matrix corresponding to the second window is updated according to those angles; the second projection matrix is thus updated through the control operation.
Two threads are used in the process of updating the second projection matrix corresponding to the second window according to the control operation of the user on the first window. One is the main (ui) thread, which captures gestures, for example sliding events such as BeginDrag, DragMove, EndDrag and DragCancel, and determines the corresponding angles from the gesture sliding. The other is the gl thread, which has a refresh rate of 60 frames per second and generates the second projection matrix according to the angles so as to update the second projection matrix corresponding to the second window.
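One simple way to hand the angles from the ui thread to the gl thread is sketched below; this is a design assumption for illustration, not the patent's code:

    #include <atomic>

    // The ui thread stores the latest roll / pitch pair; the gl thread, refreshing
    // at 60 frames per second, reads it when rebuilding the second projection matrix.
    struct ControlAngles {
        std::atomic<float> roll{0.0f};   // rotation of the second image model about the Z axis
        std::atomic<float> pitch{0.0f};  // pitch of the second virtual camera about the X axis
    };

    // ui thread: called while the slide gesture is in progress
    void onDragMove(ControlAngles& angles, float roll, float pitch) {
        angles.roll.store(roll, std::memory_order_relaxed);
        angles.pitch.store(pitch, std::memory_order_relaxed);
    }

    // gl thread: called once per frame before updating the second projection matrix
    void onGlFrame(const ControlAngles& angles, float& roll, float& pitch) {
        roll  = angles.roll.load(std::memory_order_relaxed);
        pitch = angles.pitch.load(std::memory_order_relaxed);
    }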
And after updating the second projection matrix, transmitting the second projection matrix to a second GPU and a third GPU, so that the second GPU and the third GPU respectively perform different processing according to the second projection matrix.
And 210, updating the second image under the small visual angle according to the second projection matrix, the second image model and the second image data by the second graphic processor, and updating and displaying the second image in a second window.
Specifically, the step of updating the second image at the small viewing angle according to the second projection matrix and the second image model, and the second image data includes: the CPU obtains a second image model; the CPU transmits the second image model to a second GPU; and the second GPU updates a second image under the small visual angle according to the second projection matrix, the second image model and the second image data. Specifically, the CPU transmits a vertex in the second image model to the vertex shader, copies a texture coordinate in the second image model to the fragment shader, determines a texture unit corresponding to the texture coordinate according to the second image data, and performs rendering using the second GPU to update the second image at the small viewing angle.
Specifically, please refer to the above description corresponding to the step of generating the second image under the small viewing angle according to the second projection matrix, the second image model, and the second image data, which is not repeated herein.
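As an illustrative sketch of the per-update work on the second GPU (the function, uniform and handle names and the GLES3 binding are assumptions, not the patent's code): the refreshed second projection matrix is uploaded as a uniform and the sphere of the second image model is redrawn with the second image data bound as its texture.

    #include <GLES3/gl3.h>
    #include <glm/glm.hpp>
    #include <glm/gtc/type_ptr.hpp>

    void updateSecondImage(GLuint program, const glm::mat4& secondMvp,
                           GLuint sphereVao, GLsizei vertexCount, GLuint texture) {
        glUseProgram(program);
        glUniformMatrix4fv(glGetUniformLocation(program, "uMvp"),
                           1, GL_FALSE, glm::value_ptr(secondMvp));   // second projection matrix
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, texture);   // second image data as the texture unit
        glBindVertexArray(sphereVao);            // vertices / texture coords of the second image model
        glDrawArrays(GL_TRIANGLES, 0, vertexCount);
    }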
211, the third graphics processor updates the three-dimensional navigation area in the first image model corresponding to the second image according to the second projection matrix and the first image model.
Specifically, the step of updating the three-dimensional navigation area in the first image model corresponding to the second image according to the second projection matrix and the first image model includes: the third GPU detects whether the first image model exists or not, and if not, the first image model is obtained through the CPU; sending the first image model to a third GPU; the third GPU determines a navigation vertex projected to a near plane corresponding to the second projection matrix from the vertex of the first image model according to the second projection matrix and the first image model; and taking the area corresponding to the navigation vertex as a three-dimensional navigation area in the first image model corresponding to the second image.
Specifically, the step of determining, from the vertices of the first image model and according to the second projection matrix, the navigation vertices projected onto the near plane corresponding to the second projection matrix includes the following steps: the third GPU determines the projected coordinates of the vertices of the first image model according to the second projection matrix, that is, the vertices of the first image model are multiplied by the second projection matrix to obtain the projected coordinates of each vertex; and the third GPU determines, from these projected coordinates, the navigation vertices that project onto the near plane corresponding to the second projection matrix.
For a specific implementation principle, please refer to the above description of determining the three-dimensional navigation area in the first image model corresponding to the second image according to the second projection matrix and the first image model, which is not repeated herein.
And 212, processing the three-dimensional navigation area by the third graphic processor in a preset mode to obtain an updated navigation image so as to display the updated navigation image in the first window in a highlighted mode.
Specifically, after the updated navigation vertex is determined by the third GPU, the texture coordinate corresponding to the updated navigation vertex is determined; and the third GPU processes the three-dimensional navigation area in a preset mode according to the texture coordinates to obtain an updated navigation image so as to display the navigation image in the first window in a protruding mode.
The third GPU processes the three-dimensional navigation area in a preset manner to obtain an updated navigation image, and performs the step of highlighting the updated navigation image in the first window, so as to refer to the above description that the third GPU processes the three-dimensional navigation area in a preset manner according to the texture coordinates to obtain the navigation image, and performs the step of highlighting the navigation image in the first window, which is not described herein again.
It should be noted that steps 204 and 205 above, and likewise steps 210 and 211, can be executed serially or in parallel; they are preferably executed in parallel to improve processing efficiency.
Through the above scheme, the control operation of the user on the first window based on the navigation image is converted into angles in three-dimensional space, so that a control operation in two-dimensional space is converted into the rotation angle of the second image model and the pitch angle of the second virtual camera in three-dimensional space, thereby achieving control of the second image in the second window. At the same time, the position in the first image corresponding to the second image currently displayed in the second window is highlighted, so that the user can clearly know, from the navigation image, where the second image displayed in the second window lies within the first image displayed in the first window, and can establish the association between the images at different viewing angles. This further improves the efficiency of understanding the image data content, makes it easy for the user to adjust the viewed area, guides the user to quickly find the area of interest, increases the speed at which the user locates the area of interest in the image data, and improves the user experience. In addition, the second image displayed in the second window also provides a detailed display of the image data. The data display method in the embodiment of the present application can thus be applied to more application scenarios.
Fig. 9a and 9b are schematic flow charts of a data presentation method provided in an embodiment of the present application. Please refer to the data display method of the present application in conjunction with fig. 9a and 9 b.
As shown in fig. 9a, when the data presentation interface is opened/refreshed, the CPU acquires the set initial parameters of the first virtual camera, the set initial parameters of the second virtual camera, the first image model, the second image model, and the image data acquired by the fisheye camera; the CPU copies the image data to obtain first image data and second image data, transmits the first image data to the first GPU and transmits the second image data to the second GPU; the CPU determines an initial first projection matrix according to initial parameters of the first virtual camera and transmits the first projection matrix to the first GPU; the CPU determines an initial second projection matrix according to the initial parameters of the second virtual camera and transmits the second projection matrix to a second GPU and a third GPU; the first GPU determines an initial first image under a large visual angle according to the initial first projection matrix, the first image model and the first image data, and displays the initial first image in a first window of a data display interface; the second GPU generates an initial second image under a small visual angle according to the initial second projection matrix, the second image model and the second image data, and displays the initial second image in a second window of the data display interface; and the third GPU determines a three-dimensional navigation area corresponding to the initial second image in the first image model according to the initial second projection matrix and the first image model, processes the three-dimensional navigation area in a preset mode to obtain a navigation image, and displays the navigation image in the first window in a protruding mode. The above is the corresponding steps in opening/refreshing the data display interface.
Then, as shown in fig. 9b, the CPU acquires a control operation of the user on the first window based on the navigation image using the ui thread; and the CPU converts the control operation into an angle in a three-dimensional space by using the ui thread, updates a second projection matrix corresponding to the second window according to the angle by using the gl thread, wherein the second projection matrix is the updated projection matrix, and transmits the second projection matrix to the second GPU and the third GPU. And the second GPU updates a second image under the small visual angle according to the second projection matrix, the second image model and the second image data, and updates and displays the second image in a second window, wherein the second image is an updated image. And the third GPU updates the three-dimensional navigation area in the first image model corresponding to the second image according to the second projection matrix and the first image model, processes the three-dimensional navigation area in a preset mode to obtain an updated navigation image, and displays the navigation image in the first window in a protruding mode. It is understood that, after the data presentation interface is opened, when a control operation based on the navigation image on the first window is detected, the second projection matrix is determined according to the control operation, so as to update the second image presented in the second window and update the navigation image in the first window.
It should be noted that fig. 9a and 9b together illustrate the entire flow of the data presentation method. For details of each step, please refer to the description of the corresponding step above, which is not repeated herein.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions (computer programs) which are stored in a computer-readable storage medium and loaded and executed by a processor, or by related hardware controlled by the instructions (computer programs). To this end, an embodiment of the present application provides a storage medium, in which a plurality of instructions are stored, and the instructions can be loaded by a processor to execute the steps of any embodiment of the data presentation method provided in the embodiment of the present application.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the storage medium can execute the steps in any data presentation method embodiment provided in the present application, the beneficial effects that can be achieved by any data presentation method provided in the present application embodiment can be achieved, which are detailed in the foregoing embodiments and will not be described herein again.
The data display method, the mobile terminal and the storage medium provided by the embodiment of the present application are introduced in detail above, and a specific example is applied in the present application to explain the principle and the implementation of the present application, and the description of the above embodiment is only used to help understanding the method and the core idea of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. A data display method, characterized in that the method is applicable to a mobile terminal which communicates with a fisheye camera, wherein the mobile terminal comprises a central processing unit, a memory, a first graphic processor, a second graphic processor and a third graphic processor; the data display method comprises the following steps:
the central processing unit reads image data collected by the fisheye camera from the memory;
copying the image data to obtain first image data and second image data, and transmitting the first image data to the first graphics processor and transmitting the second image data to the second graphics processor;
the first image data is rendered into a first image under a large visual angle by the first graphic processor according to a first projection matrix and a first image model, and the first image is displayed in a first window of a data display interface;
the second graphic processor generates a second image under a small visual angle according to a second projection matrix, a second image model and the second image data, and displays the second image in a second window of the data display interface, wherein the second projection matrix is different from the first projection matrix, the second image model is the same as the first image model, the first projection matrix comprises information for controlling the first image model, information for a first virtual camera and information for perspective projection of the first virtual camera, and the second projection matrix comprises information for controlling the second image model, information for a second virtual camera and information for perspective projection of the second virtual camera;
the third graphic processor determines that the second image corresponds to a three-dimensional navigation area in the first image model according to the second projection matrix and the first image model;
the third graphic processor processes the three-dimensional navigation area in a preset mode to obtain a navigation image so as to display the navigation image in the first window in a protruding mode, wherein the navigation image represents the position of the second image in the first image;
the central processing unit acquires control operation of a user on a first window based on the navigation image;
the central processing unit converts the control operation into an angle in a three-dimensional space;
the central processing unit updates a second projection matrix corresponding to the second window according to the angle and transmits the second projection matrix to the second graphic processor and the third graphic processor;
the second graphic processor updates a second image under a small visual angle according to the second projection matrix, the second image model and the second image data, and updates and displays the second image in the second window;
the third graphic processor updates a three-dimensional navigation area corresponding to the second image in the first image model according to the second projection matrix and the first image model;
and the third graphic processor processes the three-dimensional navigation area in a preset mode to obtain an updated navigation image so as to display the updated navigation image in the first window in a highlighted mode.
2. The data presentation method of claim 1, wherein said step of converting said control operation into an angle in three-dimensional space comprises:
acquiring a central coordinate of a central point corresponding to the first image in the first window;
acquiring a control coordinate of a control point corresponding to the control operation;
and converting the control operation into an angle in a three-dimensional space according to the central coordinate and the control coordinate.
3. The data presentation method of claim 2, wherein the angle comprises a pitch angle of a second virtual camera corresponding to the second projection matrix, and the step of converting the control operation into an angle in three-dimensional space according to the center coordinate and the control coordinate comprises:
acquiring a radius corresponding to the second image and a maximum pitch angle corresponding to a second virtual camera;
determining the control distance from the control point to the central point according to the central coordinate and the control coordinate;
and determining, based on the radius and the maximum pitch angle, the pitch angle of the second virtual camera corresponding to the control distance of the control operation.
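As a sketch of the distance-to-pitch mapping recited in claim 3 (assuming a linear mapping clamped to the image radius, which the claim does not mandate):

```python
import math

def control_to_pitch(center_xy, control_xy, radius, max_pitch_deg):
    """Map the distance between the control point and the window centre to a pitch angle.

    The linear mapping and the clamping to the image radius are assumptions; claim 3 only
    recites that the pitch angle is determined from the radius and the maximum pitch angle.
    """
    dx = control_xy[0] - center_xy[0]
    dy = control_xy[1] - center_xy[1]
    control_distance = math.hypot(dx, dy)
    ratio = min(control_distance / radius, 1.0)   # taps outside the image saturate at the edge
    return ratio * max_pitch_deg

# A tap halfway between the centre and the edge of the navigation image:
print(control_to_pitch((540, 540), (540, 810), radius=540, max_pitch_deg=90.0))  # 45.0
```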
4. The data display method of claim 2, wherein the angle comprises a rotation angle of the second image model, and the step of converting the control operation into an angle in three-dimensional space according to the central coordinate and the control coordinate comprises:
determining the angle of a straight line formed by the control point and the central point on the first window according to the central coordinate and the control coordinate;
and taking the angle as the rotation angle of the second image model corresponding to the control operation.
5. The data display method of claim 4, wherein the step of determining the angle of a straight line formed by the control point and the central point on the first window according to the central coordinate and the control coordinate comprises:
determining a quadrant where the control point is located according to the central coordinate and the control coordinate;
and determining the angle of a straight line formed by the control point and the central point on the first window according to the quadrant, the central coordinate and the control coordinate.
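A compact way to realise claims 4 and 5 is the two-argument arctangent, which folds the quadrant test into a single call; the zero-angle direction (positive x-axis) and the sign convention are assumptions:

```python
import math

def control_to_rotation(center_xy, control_xy):
    """Angle, in degrees, of the line from the window centre to the control point.

    atan2 resolves the quadrant directly; an explicit quadrant test followed by atan,
    as recited in claim 5, gives the same result.
    """
    dx = control_xy[0] - center_xy[0]
    dy = control_xy[1] - center_xy[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0   # normalise to [0, 360)

print(control_to_rotation((540, 540), (810, 540)))  # 0.0  (control point to the right of the centre)
print(control_to_rotation((540, 540), (540, 810)))  # 90.0 (control point below the centre in pixel coordinates)
```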
6. The data display method of claim 1, wherein the angle comprises a rotation angle of the second image model and a pitch angle of the second virtual camera corresponding to the second projection matrix, and the step of updating the second projection matrix corresponding to the second window according to the angle comprises:
updating a model matrix according to the rotation angle of the second image model corresponding to the control operation;
updating a view matrix according to the pitch angle of the second virtual camera corresponding to the control operation;
and updating the second projection matrix corresponding to the second window according to the model matrix, the view matrix and the perspective matrix.
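Reusing the perspective and look_at helpers from the sketch after claim 1, the update in claim 6 can be sketched as a model matrix rebuilt from the rotation angle followed by the usual perspective * view * model product; rotating the image model about the vertical (y) axis is an assumption:

```python
import numpy as np

def rotation_y(angle_deg):
    """Model matrix rotating the second image model about the vertical axis.
    Rotating about the y-axis is an assumption; claim 6 only recites a rotation angle."""
    a = np.radians(angle_deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[  c, 0.0,   s, 0.0],
                     [0.0, 1.0, 0.0, 0.0],
                     [ -s, 0.0,   c, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def update_second_projection(rotation_deg, view_matrix, perspective_matrix):
    """Second projection matrix = perspective * view * model (column-vector convention)."""
    model_matrix = rotation_y(rotation_deg)
    return perspective_matrix @ view_matrix @ model_matrix
```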
7. The data display method of claim 6, wherein the step of updating the view matrix according to the pitch angle of the second virtual camera corresponding to the control operation comprises:
taking the pitch angle corresponding to the control operation as the pitch angle of the second virtual camera, and acquiring the yaw angle of the second virtual camera;
updating an orientation vector of the second virtual camera according to the yaw angle and the pitch angle;
updating the view matrix according to the orientation vector.
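The yaw/pitch-to-orientation step in claim 7 is typically the spherical parameterisation below; the axis convention (yaw about y, pitch about x) and the reuse of the look_at helper from the sketch after claim 1 are assumptions:

```python
import numpy as np

def orientation_from_yaw_pitch(yaw_deg, pitch_deg):
    """Orientation (front) vector of the second virtual camera from its yaw and pitch."""
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    front = np.array([np.cos(pitch) * np.cos(yaw),
                      np.sin(pitch),
                      np.cos(pitch) * np.sin(yaw)])
    return front / np.linalg.norm(front)

def update_view_matrix(camera_pos, yaw_deg, pitch_deg):
    """Rebuild the view matrix from the orientation vector
    (look_at as defined in the sketch after claim 1)."""
    front = orientation_from_yaw_pitch(yaw_deg, pitch_deg)
    return look_at(camera_pos, camera_pos + front, np.array([0.0, 1.0, 0.0]))
```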
8. The data display method of claim 1, wherein, before the step of displaying the first image in the first window of the data display interface, the method further comprises:
acquiring a first display control and a second display control in the first window;
disposing the second display control over the first display control;
the step of displaying the first image in the first window of the data display interface comprises: displaying the first image in the first display control of the first window of the data display interface;
the step of displaying the navigation image in the first window in a highlighted manner comprises: displaying the navigation image in a highlighted manner on the second display control of the first window.
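As a rough stand-in for the two stacked display controls of claim 8, the following uses two matplotlib layers: the base layer shows the first image, and a transparent overlay carries only the highlighted navigation area, so the overlay can be redrawn on every update without re-rendering the image underneath. The image sizes, highlight rectangle and colours are arbitrary placeholders, and matplotlib is only a proxy for the mobile UI framework.

```python
import numpy as np
import matplotlib.pyplot as plt

# Lower layer: the rendered first image (placeholder data here).
first_image = np.random.rand(360, 640, 3)

# Upper layer: fully transparent except for a translucent highlight over the navigation area.
overlay = np.zeros((360, 640, 4))
overlay[120:240, 240:400] = (1.0, 0.0, 0.0, 0.35)

fig, ax = plt.subplots()
ax.imshow(first_image)   # "first display control"
ax.imshow(overlay)       # "second display control" stacked on top
ax.set_axis_off()
plt.show()
```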
9. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a central processing unit and a graphics processor, implements the steps of the data display method according to any one of claims 1 to 8.
10. A mobile terminal, characterized in that the mobile terminal comprises: one or more central processors; a memory; one or more graphics processors; and one or more computer programs, wherein the central processor is connected to the memory and the graphics processor, and the one or more computer programs are stored in the memory and configured to be executed by the central processor and the graphics processor to perform the data display method according to any one of claims 1 to 8.
CN202010991247.XA 2020-09-20 2020-09-20 Data display method, mobile terminal and storage medium Active CN111833243B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010991247.XA CN111833243B (en) 2020-09-20 2020-09-20 Data display method, mobile terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010991247.XA CN111833243B (en) 2020-09-20 2020-09-20 Data display method, mobile terminal and storage medium

Publications (2)

Publication Number Publication Date
CN111833243A CN111833243A (en) 2020-10-27
CN111833243B (en) 2020-12-01

Family

ID=72918519

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010991247.XA Active CN111833243B (en) 2020-09-20 2020-09-20 Data display method, mobile terminal and storage medium

Country Status (1)

Country Link
CN (1) CN111833243B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112181230A (en) * 2020-09-30 2021-01-05 武汉中科通达高新技术股份有限公司 Data display method and device and electronic equipment
CN112529769B (en) * 2020-12-04 2023-08-18 威创集团股份有限公司 Method and system for adapting two-dimensional image to screen, computer equipment and storage medium
CN112765706B (en) * 2020-12-31 2024-02-20 杭州群核信息技术有限公司 Home decoration material moving method and device, computer equipment and storage medium

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5338174B2 (en) * 2008-07-28 2013-11-13 富士通株式会社 Panorama photographing apparatus and method, camera unit equipped with panoramic photographing apparatus
US9071752B2 (en) * 2012-09-25 2015-06-30 National Chiao Tung University Scene imaging method using a portable two-camera omni-imaging device for human-reachable environments
CN105208323B (en) * 2015-07-31 2018-11-27 深圳英飞拓科技股份有限公司 A kind of panoramic mosaic picture monitoring method and device
CN107438152B (en) * 2016-05-25 2023-04-07 中国民用航空总局第二研究所 Method and system for quickly positioning and capturing panoramic target by motion camera
CN107734244B (en) * 2016-08-10 2019-07-05 深圳看到科技有限公司 Panorama movie playback method and playing device
CN106570938A (en) * 2016-10-21 2017-04-19 哈尔滨工业大学深圳研究生院 OPENGL based panoramic monitoring method and system
CN106454138A (en) * 2016-12-07 2017-02-22 信利光电股份有限公司 Panoramic zoom camera
US10789671B2 (en) * 2016-12-28 2020-09-29 Ricoh Company, Ltd. Apparatus, system, and method of controlling display, and recording medium
CN106989730A (en) * 2017-04-27 2017-07-28 上海大学 A kind of system and method that diving under water device control is carried out based on binocular flake panoramic vision
CN110956583B (en) * 2018-09-26 2022-05-10 华为技术有限公司 Spherical image processing method and device and server
CN111199177A (en) * 2018-11-20 2020-05-26 中山大学深圳研究院 Automobile rearview pedestrian detection alarm method based on fisheye image correction
CN210867975U (en) * 2020-01-17 2020-06-26 深圳市华世联合科技有限公司 Intelligent display system

Also Published As

Publication number Publication date
CN111833243A (en) 2020-10-27

Similar Documents

Publication Publication Date Title
EP3955212A1 (en) Shadow rendering method and apparatus, computer device and storage medium
CN111833243B (en) Data display method, mobile terminal and storage medium
US10055879B2 (en) 3D human face reconstruction method, apparatus and server
US20200302670A1 (en) Image processing method, electronic device, and storage medium
CN111813290B (en) Data processing method and device and electronic equipment
CN111932664B (en) Image rendering method and device, electronic equipment and storage medium
CN112017133B (en) Image display method and device and electronic equipment
CN110033503B (en) Animation display method and device, computer equipment and storage medium
WO2021004412A1 (en) Handheld input device, and method and apparatus for controlling display position of indication icon thereof
EP3618006B1 (en) Image processing method and apparatus
CN112150560B (en) Method, device and computer storage medium for determining vanishing point
CN110853128A (en) Virtual object display method and device, computer equipment and storage medium
CN112308767B (en) Data display method and device, storage medium and electronic equipment
CN112308768B (en) Data processing method, device, electronic equipment and storage medium
CN112181230A (en) Data display method and device and electronic equipment
CN112308766B (en) Image data display method and device, electronic equipment and storage medium
CN112306344B (en) Data processing method and mobile terminal
CN115797535A (en) Three-dimensional model texture mapping method and related device
CN112184543B (en) Data display method and device for fisheye camera
CN112184801A (en) Data display method for fisheye camera and mobile terminal
CN109842722B (en) Image processing method and terminal equipment
CN114093020A (en) Motion capture method, motion capture device, electronic device and storage medium
CN112308757A (en) Data display method and mobile terminal
WO2019061712A1 (en) Color filter substrate, display screen, and terminal
CN117618893A (en) Scene special effect processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant