CN112181230A - Data display method and device and electronic equipment - Google Patents

Data display method and device and electronic equipment

Info

Publication number
CN112181230A
Authority
CN
China
Prior art keywords
image
angle
window
projection matrix
navigation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011063142.4A
Other languages
Chinese (zh)
Inventor
张凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Zhongke Tongda High New Technology Co Ltd
Original Assignee
Wuhan Zhongke Tongda High New Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Zhongke Tongda High New Technology Co Ltd filed Critical Wuhan Zhongke Tongda High New Technology Co Ltd
Priority to CN202011063142.4A priority Critical patent/CN112181230A/en
Publication of CN112181230A publication Critical patent/CN112181230A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04815 Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 Indexing scheme relating to G06F 3/00 - G06F 3/048
    • G06F 2203/048 Indexing scheme relating to G06F 3/048
    • G06F 2203/04803 Split screen, i.e. subdividing the display area or the window area into separate subareas

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Remote Sensing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An embodiment of the application provides a data display method and device and an electronic device, relating to the technical field of smart cities. The method comprises the following steps: acquiring a first image and a navigation image displayed in a first window of a data display interface, and a second image displayed in a second window; acquiring a control operation performed by a user on the first window of the data display interface; converting the control operation into an angle in three-dimensional space; updating the second projection matrix according to the angle to obtain an updated second projection matrix; updating the second image under a small viewing angle according to the updated second projection matrix, the second image model and the image data, and displaying the second image, which improves the efficiency of understanding the content of the image data; and updating the three-dimensional navigation area in the first image model corresponding to the second image according to the second projection matrix and the first image model, processing the three-dimensional navigation area to obtain a navigation image, and displaying the navigation image in the first window, which improves the speed at which the user locates a region of interest in the image data.

Description

Data display method and device and electronic equipment
Technical Field
The application relates to the technical field of smart cities, in particular to a data display method and device and electronic equipment.
Background
Traditional video surveillance mainly displays 2D planar pictures, but with advances in computer technology the advantages of fisheye images in the surveillance industry have become increasingly apparent. A traditional planar camera can only monitor the scene at a single position, whereas a fisheye camera, owing to its wider viewing angle, can monitor a much wider field of view; a site that originally required several planar cameras can therefore be covered by a single fisheye camera, which greatly reduces hardware cost.
Because the fisheye camera has such a wide viewing angle, the fisheye image (image data) it captures usually exhibits severe distortion, and the captured fisheye image is typically displayed as a circle. As a result the fisheye image is difficult to understand except by skilled technicians, which prevents fisheye imaging from being widely adopted and developed.
Disclosure of Invention
The embodiments of the application provide a data display method and device and an electronic device, which can improve the efficiency of understanding image data content and increase the speed at which a user locates a region of interest in the image data.
The embodiment of the application provides a data display method, which comprises the following steps:
acquiring image data acquired by a fisheye camera;
processing the image data into a first image under a large viewing angle according to a first projection matrix and a first image model, and displaying the first image in a first window of a data display interface;
generating a second image under a small viewing angle according to a second projection matrix, a second image model and the image data, and displaying the second image in a second window of the data display interface, wherein the first projection matrix is different from the second projection matrix, and the first image model is the same as the second image model;
determining a three-dimensional navigation area in the first image model corresponding to the second image according to the second projection matrix and the first image model;
processing the three-dimensional navigation area in a preset manner to obtain a navigation image, so as to display the navigation image in the first window in a highlighted manner, wherein the navigation image represents the position of the second image within the first image;
acquiring control operation of a user on the first window based on the navigation image;
converting the control operation into an angle in a three-dimensional space;
updating a second projection matrix corresponding to the second window according to the angle;
updating the second image under a small viewing angle according to the second projection matrix, the second image model and the image data, and updating and displaying the second image in the second window;
updating the three-dimensional navigation area in the first image model corresponding to the second image according to the second projection matrix and the first image model;
and processing the three-dimensional navigation area in a preset manner to update the navigation image, and displaying the navigation image in the first window in a highlighted manner.
An embodiment of the present application further provides a data display device, including:
the image acquisition module is used for acquiring image data acquired by the fisheye camera;
the first processing and displaying module is used for processing the image data into a first image under a large viewing angle according to the first projection matrix and the first image model, and displaying the first image in a first window of a data display interface;
the second processing and displaying module is used for generating a second image under a small viewing angle according to a second projection matrix, a second image model and the image data, and displaying the second image in a second window of the data display interface, wherein the first projection matrix is different from the second projection matrix, and the first image model is the same as the second image model;
the area determining module is used for determining a three-dimensional navigation area, corresponding to the second image, in the first image model according to the second projection matrix and the first image model;
the third processing and displaying module is used for processing the three-dimensional navigation area in a preset manner to obtain a navigation image, so as to display the navigation image in the first window in a highlighted manner, wherein the navigation image represents the position of the second image within the first image;
the operation acquisition module is used for acquiring control operation of a user on the first window based on the navigation image;
the angle conversion module is used for converting the control operation into an angle in a three-dimensional space;
the matrix updating module is used for updating a second projection matrix corresponding to the second window according to the angle;
the second processing and displaying module is further configured to update a second image under a small viewing angle according to the second projection matrix, the second image model and the image data, and update and display the second image in the second window;
the area updating module is used for updating a three-dimensional navigation area in the first image model corresponding to the second image according to the second projection matrix and the first image model;
and the third processing and displaying module is further used for processing the three-dimensional navigation area in a preset manner to update the navigation image, and displaying the navigation image in the first window in a highlighted manner.
An embodiment of the present application further provides an electronic device, where the electronic device includes:
one or more processors; a memory; and one or more computer programs, wherein the processor is coupled to the memory, the one or more computer programs being stored in the memory and configured to be executed by the processor to perform the data presentation method described above.
An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the steps in any data presentation method are implemented.
According to the embodiment of the application, image data collected by a fisheye camera is processed into a first image under a large viewing angle according to a first projection matrix and a first image model, and the first image is displayed in the first window; a second image under a small viewing angle is generated according to a second projection matrix, a second image model and the image data, and is displayed in the second window; and the three-dimensional navigation area in the first image model corresponding to the second image is processed to obtain a navigation image, which is displayed in the first window. A control operation performed by the user on the first window based on the navigation image is acquired, converted into an angle in three-dimensional space, and the second projection matrix corresponding to the second window is updated according to that angle; in this way a control operation on the first window is converted into the second projection matrix that drives the second window. The second image under the small viewing angle is then updated according to the second projection matrix, the second image model and the image data, and is updated and displayed in the second window. On the one hand, the second projection matrix is obtained from the control operation on the first window, so the display of the second image in the second window is controlled through operations on the first window; on the other hand, the first image and the second image are planar images of the image data under different viewing angles, so the image data can be understood from different viewing angles, which helps the user understand the content of the image data and improves understanding efficiency. The three-dimensional navigation area in the first image model corresponding to the second image is updated according to the second projection matrix and the first image model, and is processed in a preset manner to update the navigation image, which is displayed prominently in the first window. The second projection matrix is thus updated in real time according to the user's control operation on the first window based on the navigation image, so the second image and the navigation image are updated in real time, and the data display interface is refreshed in real time in response to that operation. From the navigation image the user can clearly see where the second image displayed in the second window lies within the first image displayed in the first window, establishing the association between images at different viewing angles; this further improves the efficiency of understanding the image data content, makes it easy for the user to adjust the viewed area and find the area of concern, increases the speed at which the user locates a region of interest in the image data, and improves user experience. In addition, the second image under the small viewing angle displayed in the second window provides a detailed view of the image data.
The data display method in the embodiment of the application can therefore be applied in a wide range of application scenarios.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a system scenario diagram of a data presentation method provided in an embodiment of the present application;
FIG. 2 is a schematic flow chart of a data presentation method according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of image data acquired by a fisheye camera provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of vertex coordinates and texture coordinates provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of an imaging principle of perspective projection provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of a data presentation interface provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of determining an angle at which a control point is located on a first window provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of determining an orientation of a second virtual camera provided by an embodiment of the present application;
fig. 9a and 9b are another schematic flow chart of a data presentation method provided in an embodiment of the present application;
FIG. 10 is a schematic block diagram of a data presentation device provided by an embodiment of the present application;
fig. 11 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiments of the application provide a data display method and device, an electronic device and a storage medium. Any of the data display devices provided in the embodiments of the application can be integrated in an electronic device. The electronic device includes, but is not limited to, a smart phone, a tablet computer, a notebook computer, a smart television, a smart robot, a Personal Computer (PC), a wearable device, a server computer, a vehicle-mounted terminal, and the like.
Please refer to fig. 1, which is a schematic view illustrating a data display system according to an embodiment of the present disclosure. The data display system comprises a fisheye camera and electronic equipment. The number of the fisheye cameras can be one or more, the number of the electronic equipment can also be one or more, and the fisheye cameras and the electronic equipment can be directly connected or can be connected through a network. The fisheye camera and the electronic device can be connected in a wired mode or a wireless mode. The fisheye camera in the embodiment of fig. 1 is connected to the electronic device through a network, where the network includes network entities such as a router and a gateway.
The fisheye camera captures initial image data of a fisheye image and sends the captured initial image data to the electronic device. The electronic device receives the initial image data captured by the fisheye camera; in one case the received initial image data is used directly as the image data collected by the fisheye camera, and in another case the received initial image data is first corrected to obtain the image data collected by the fisheye camera, which is then processed and displayed accordingly.
Fig. 2 is a schematic flow chart of a data display method according to an embodiment of the present application. The data display method runs in the electronic equipment, and comprises the following steps:
and 101, acquiring image data acquired by the fisheye camera.
Because the fisheye camera has a wider viewing angle, an image captured by it contains more information than an image captured by a planar camera. The coverage of a fisheye camera is approximately a hemisphere and the resulting image is roughly circular; if the viewing angle of the fisheye camera is 180 degrees, the coverage is exactly a hemisphere and the captured image appears as a circle on the two-dimensional plane.
Fig. 3 is a schematic diagram of initial image data directly acquired by the fisheye camera provided in the embodiment of the present application, and a middle circular area is an initial image captured by the fisheye camera. In fig. 3, the fisheye camera faces the sky, and the captured image includes the sky, buildings, trees, and the like around the position where the fisheye camera is located.
The initial image data directly captured by the fisheye camera may be stored in advance in the electronic device, so that it can be obtained directly from the electronic device; it may be obtained from another electronic device over a network; or it may be obtained in real time from the fisheye camera over a network, as shown in fig. 1. In these cases, acquiring the image data acquired by the fisheye camera in step 101 can be understood as acquiring the initial image data directly captured by the fisheye camera, i.e. the initial image data shown in fig. 3.
In some cases, in order to achieve a better display effect, the initial image data directly acquired by the fisheye camera needs to be further processed. Specifically, step 101 includes: calibrating data of the fisheye camera; acquiring initial image data shot by a fisheye camera; and correcting the acquired initial image data according to the result of data calibration, and taking the corrected image data as the image data acquired by the fisheye camera.
The fisheye camera manufacturer calibrates the fisheye camera before mass production and provides a calibration interface; after purchasing the camera, a user enters calibration parameters through this interface to calibrate the camera. The main purpose of data calibration is to obtain the parameters of the fisheye lens so that the circular area in the initial image data shown in fig. 3 can be located. Because of hardware differences between fisheye cameras, the position of the circular area within the image differs from camera to camera.
After the fisheye camera has been calibrated, the initial image data is corrected according to the calibration result. For example, the longitude-latitude method, or another method, may be used to correct the initial image data, and the corrected image data is taken as the image data collected by the fisheye camera. The purpose of the correction is to reduce or eliminate the distortion in the initial image data, for example by converting the circular image shown in fig. 3 into a 2:1 rectangular image.
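By way of illustration only, the following is a minimal sketch of one possible longitude-latitude correction, assuming an equidistant 180-degree fisheye model and a circle centre and radius obtained from data calibration; the patent does not specify the exact model, so the formula and parameter names here are assumptions.

```cpp
#include <cmath>
#include <vector>

// Lookup table mapping each pixel of the corrected 2:1 rectangular image back
// to a position inside the calibrated circular fisheye area.
struct PixelMap {
    std::vector<float> srcX, srcY;   // size outW * outH, row-major
};

// outW is expected to be 2 * outH (the 2:1 rectangle mentioned in the text).
// (cx, cy) and radius are the centre and radius of the fisheye circle found by
// data calibration; an equidistant 180-degree fisheye model is assumed here.
PixelMap buildLongitudeLatitudeMap(int outW, int outH,
                                   float cx, float cy, float radius) {
    PixelMap map;
    map.srcX.resize(static_cast<size_t>(outW) * outH);
    map.srcY.resize(static_cast<size_t>(outW) * outH);
    const float kPi = 3.14159265358979f;

    for (int v = 0; v < outH; ++v) {
        // Polar angle from the optical axis: 0 at the top row, 90 degrees at the rim.
        float theta = (kPi / 2.0f) * (static_cast<float>(v) + 0.5f) / outH;
        float r = radius * theta / (kPi / 2.0f);        // equidistant projection
        for (int u = 0; u < outW; ++u) {
            // Azimuth around the optical axis, one full turn across the width.
            float azimuth = 2.0f * kPi * (static_cast<float>(u) + 0.5f) / outW;
            map.srcX[static_cast<size_t>(v) * outW + u] = cx + r * std::cos(azimuth);
            map.srcY[static_cast<size_t>(v) * outW + u] = cy + r * std::sin(azimuth);
        }
    }
    return map;
}
// The corrected image is then produced by sampling (e.g. bilinearly) the
// initial fisheye image at (srcX, srcY) for every output pixel.
```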
Further, the corrected image data is converted into texture units for subsequent texture mapping.
Step 102: processing the image data into a first image under a large viewing angle according to the first projection matrix and the first image model, and displaying the first image in a first window of a data display interface.
In a virtual scene, a coordinate system of an object is generally required to be constructed, and a model is established in the coordinate system of the object (commonly called modeling). In the embodiment of the application, a first image model is established, and the first image model is spherical; in other cases, different shapes of image models may be accommodated depending on the particular use scenario.
In the following description the first image model is taken to be a sphere. It can be understood simply as a sphere formed by dividing the model into n circles according to longitude and allocating m points to each circle, for example n = 180 and m = 30. It should be noted that the larger n and m are, the rounder the resulting sphere.
The first image model built with OpenGL includes a plurality of points, each point being represented by [(x, y, z) (u, v)], where (x, y, z) are the vertex coordinates and (u, v) are the texture coordinates. The vertex coordinates (x, y, z) are three-dimensional space coordinates that determine the shape of the object; (u, v) are two-dimensional coordinates that determine where the texture unit samples the texture. It should be noted that, for uniformity, the vertex coordinates and the texture coordinates are normalized, e.g. the vertex coordinates are mapped onto [-1, 1] and the texture coordinates onto [0, 1]. It should also be noted that the vertex coordinates and the texture coordinates use different coordinate systems.
Fig. 4 is a schematic diagram showing vertex coordinates and texture coordinates. A, B, C and D are four points on the model, and their vertex coordinates and texture coordinates are A [(-1, -1, 0) (0, 0.5)], B [(1, -1, 0) (0.5, 0.5)], C [(-1, 0, 0) (0, 1)] and D [(1, 0, 0) (0.5, 1)], respectively.
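As an illustration of the sphere model described above, a minimal sketch of generating n rings of m points each, with vertex coordinates on the unit sphere (already within [-1, 1]) and texture coordinates in [0, 1], might look as follows; the exact tessellation and index layout used by the application are not specified, so this is an assumption.

```cpp
#include <cmath>
#include <vector>

// One model point: vertex position (x, y, z) and texture coordinate (u, v),
// matching the [(x, y, z) (u, v)] layout described above.
struct ModelPoint {
    float x, y, z;   // vertex coordinates, unit sphere so already in [-1, 1]
    float u, v;      // texture coordinates in [0, 1]
};

// Builds a sphere from n rings (e.g. n = 180) of m points each (e.g. m = 30).
// Larger n and m give a rounder sphere, as noted in the text.
std::vector<ModelPoint> buildSphereModel(int n, int m) {
    std::vector<ModelPoint> points;
    points.reserve(static_cast<size_t>(n + 1) * (m + 1));
    const float kPi = 3.14159265358979f;

    for (int i = 0; i <= n; ++i) {
        float phi = kPi * i / n;                     // polar angle 0..pi
        for (int j = 0; j <= m; ++j) {
            float theta = 2.0f * kPi * j / m;        // azimuth 0..2*pi
            ModelPoint p;
            p.x = std::sin(phi) * std::cos(theta);
            p.y = std::cos(phi);
            p.z = std::sin(phi) * std::sin(theta);
            p.u = static_cast<float>(j) / m;         // texture coordinates in [0, 1]
            p.v = static_cast<float>(i) / n;
            points.push_back(p);
        }
    }
    return points;                                   // triangle indices omitted
}
```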
After the model is built, a projection matrix can be constructed. In a virtual scene, a coordinate system in which an object (or a model, which is displayed as an object after texture mapping on the model) is located is called an object coordinate system, and a camera coordinate system is a three-dimensional coordinate system established with a focus center of a camera as an origin and corresponds to a world coordinate system. The virtual camera, the object, etc. are all in the world coordinate system. The relationships among the virtual camera, the object, the model in the world coordinate system, the wide angle and the elevation angle of the virtual camera, the distance from the lens to the near plane and the far plane, and the like are all embodied in the projection matrix.
The first projection matrix comprises information for controlling the first image model, information of the first virtual camera and information of perspective projection of the first virtual camera.
The first projection matrix may be determined as follows: acquiring the set initial parameters of the first virtual camera, where the initial parameters include the position of the first virtual camera (information of the first virtual camera), its Euler angles, the distance from the lens of the first virtual camera to the projection plane (also called the near plane) and the distance from the lens to the far plane (information of the perspective projection of the first virtual camera), and so on; and determining the first projection matrix from these initial parameters. For example, a mathematical library can be used: the initial parameters of the first virtual camera are passed to the corresponding function of the GLM (OpenGL Mathematics) library and the first projection matrix is calculated by that function. It should be noted that the first projection matrix determined from the set initial parameters of the first virtual camera can also be understood as the initial first projection matrix. In the embodiment of the present application, since the first image model is not subjected to rotation control and the information of the first virtual camera does not change, the initial first projection matrix never changes, and the first projection matrix is therefore always the initial first projection matrix.
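A minimal sketch of building such an initial projection matrix with the GLM library might look as follows; the field of view, aspect ratio, near/far distances and camera position shown are placeholder values, not the parameters used by the application.

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Builds a combined projection * view matrix from illustrative virtual-camera
// parameters: position, orientation and the near/far plane distances.
glm::mat4 buildProjectionMatrix() {
    // Perspective part: vertical field of view, aspect ratio of the window,
    // distance to the near (projection) plane and to the far plane.
    glm::mat4 projection = glm::perspective(glm::radians(90.0f),  // field of view
                                            1.0f,                 // aspect ratio
                                            0.1f,                 // near plane
                                            100.0f);              // far plane

    // View part: virtual-camera position, the point it looks at, and its up axis.
    glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 0.0f, 3.0f),     // camera position
                                 glm::vec3(0.0f, 0.0f, 0.0f),     // look-at target
                                 glm::vec3(0.0f, 1.0f, 0.0f));    // up direction

    // What the text calls the "projection matrix" combines both; a model
    // transform could also be multiplied in if the model were rotated.
    return projection * view;
}
```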
Fig. 5 is a schematic diagram of the imaging of perspective projection provided in the embodiment of the present application. The distance from the lens to the near plane 11 is the distance between point 0 and point 1, and the distance from the lens to the far plane 12 is the distance between point 0 and point 2. The position of the virtual camera includes information such as the coordinates of point 0 in the world coordinate system and the lens orientation of the virtual camera.
The first image model and the first projection matrix described above may be predetermined. In step 102, a Central Processing Unit (CPU) may be directly utilized to obtain the first image model and the first projection matrix, and process the image data into a first image under a large viewing angle according to the first projection matrix and the first image model. Or in the process of executing step 102, the CPU may determine the first image model and the first projection matrix, and then process the image data into the first image under the large viewing angle according to the first projection matrix and the first image model.
Specifically, step 102 includes: copying the first projection matrix, the image data and the first image model from the CPU to a Graphics Processing Unit (GPU), so that the GPU processes the image data into the first image under the large viewing angle according to the first projection matrix, the first image model and the image data. In particular, the vertices of the first image model are passed to a vertex shader by the CPU, the texture coordinates of the first image model are copied to a fragment shader, the texture unit corresponding to the texture coordinates is determined from the image data, and the GPU renders the first image under the large viewing angle.
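A minimal CPU-side sketch of such an upload and draw call in OpenGL is shown below; the program, VAO, texture and uniform names (uProjection, uImage) are illustrative assumptions, and the vertex and fragment shaders that consume them are omitted.

```cpp
#include <glad/glad.h>   // or another OpenGL function loader
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// Copies the projection matrix and texture unit to the shaders and renders the
// sphere model; the GPU's vertex shader multiplies each vertex by the matrix
// and the fragment shader samples the fisheye texture at the texture coords.
void drawImage(GLuint program, GLuint vao, GLuint texture,
               GLsizei indexCount, const glm::mat4& projectionMatrix) {
    glUseProgram(program);

    // Upload the projection matrix as a uniform ("uProjection" is illustrative).
    GLint loc = glGetUniformLocation(program, "uProjection");
    glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(projectionMatrix));

    // Bind the texture unit holding the (corrected) fisheye image data.
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, texture);
    glUniform1i(glGetUniformLocation(program, "uImage"), 0);

    // Issue the draw call; rasterisation and texture mapping run on the GPU.
    glBindVertexArray(vao);
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);
    glBindVertexArray(0);
}
```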
The large viewing angle is essentially the viewing angle obtained when the first image model is placed inside the viewing frustum of the first virtual camera. As shown in fig. 5, the viewing frustum is the trapezoidal region between the near plane 11 and the far plane 12. It should be understood that, under the large viewing angle, the first image model lies entirely within the viewing frustum of the first virtual camera. Because the first image model is a sphere, half of the sphere is visible within the field of view; the image data, used as a texture unit, is mapped completely onto this half of the sphere, giving the first image under the large viewing angle. Alternatively, it can be understood simply as the viewing angle at which the complete planar image corresponding to the first image model is seen in the field of view when the first virtual camera is placed far outside the first image model. Obtaining the first image under the large viewing angle in this step allows the user to understand the content of the image data as a whole.
After the first image is obtained, it is displayed in the first window of the data display interface. The data display interface comprises at least one first window and at least one second window. Referring to fig. 6, fig. 6 is a schematic view of the data display interface provided in an embodiment of the present application. The data display interface 20 comprises a first window 21 on its left side and two second windows 22 to the right of the first window 21. The bottom layer of the first window 21 shows the first image. As can be seen from fig. 6, the obtained first image corresponds to/matches the image data. The first window and/or the second window may exist on the data display interface 20 in the form of display controls, for example the first window includes at least one display control and each second window includes one display control; the first window and/or the second window may also be formed on the data display interface 20 in other ways.
Step 103: generating a second image under a small viewing angle according to the second projection matrix, the second image model and the image data, and displaying the second image in a second window of the data display interface, wherein the first projection matrix is different from the second projection matrix, and the first image model is the same as the second image model.
It should be noted that, just as there is an initial first projection matrix, there is also an initial second projection matrix. The initial first and second projection matrices can be understood as the default projection matrices used by the data display interface when it is opened or refreshed. The initial first image and the initial second image can be determined from the initial first projection matrix and the first image model, and from the initial second projection matrix and the second image model, respectively; that is, they are the images displayed after the data display interface is opened and before any control operation has been performed. The initial first image and the initial second image are images under the default viewing angles corresponding to these default projection matrices.
The second projection matrix comprises information for controlling the second image model, information of the second virtual camera and information of perspective projection of the second virtual camera.
The initial second projection matrix may be determined as follows: acquiring the set initial parameters of the second virtual camera, where the initial parameters include the position of the second virtual camera (information of the second virtual camera), its Euler angles, the distance from the lens of the second virtual camera to the near plane and the distance from the lens to the far plane (information of the perspective projection of the second virtual camera), and so on; and determining the initial second projection matrix from these initial parameters. The initial second projection matrix may be predetermined. It should be noted that the initial first projection matrix is different from the initial second projection matrix, and correspondingly the first projection matrix is different from the second projection matrix. The projection matrix obtained after updating the initial second projection matrix is taken as the second projection matrix; the second projection matrix can be updated both through control operations on the second window and through control operations on the first window. A control operation on the first window or the second window updates the information on the rotation control of the second image model and the information of the second virtual camera.
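The conversion of a control operation into an angle is described later with reference to fig. 7 and fig. 8. Purely as an illustration of how the second projection matrix might be rebuilt from such an angle, the following sketch assumes the second virtual camera sits at the sphere centre and only its orientation (a yaw/pitch pair) changes; this is an assumption for illustration, not the patent's exact procedure, and the field-of-view value is a placeholder.

```cpp
#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Rebuilds the second projection matrix after a control operation has been
// converted into yaw/pitch angles (radians). The small field of view and the
// camera-at-centre assumption are illustrative, not taken from the patent.
glm::mat4 updateSecondProjectionMatrix(float yaw, float pitch) {
    // Direction the second virtual camera looks at, derived from the angles.
    glm::vec3 direction(std::cos(pitch) * std::sin(yaw),
                        std::sin(pitch),
                        std::cos(pitch) * std::cos(yaw));

    glm::mat4 projection = glm::perspective(glm::radians(60.0f), // small viewing angle
                                            1.0f, 0.1f, 100.0f);
    glm::mat4 view = glm::lookAt(glm::vec3(0.0f),                // centre of the sphere
                                 direction,                      // updated orientation
                                 glm::vec3(0.0f, 1.0f, 0.0f));
    return projection * view;
}
```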
In the embodiment of the present application, the second image model is the same as the first image model, and the first image model can be directly obtained. In the step of generating the second image under the small viewing angle based on the second projection matrix and the second image model and the image data, the CPU may be directly used to obtain the second projection matrix and the second image model which are calculated in advance, and generate the second image under the small viewing angle based on the second projection matrix and the second image model and the image data.
The step of generating a second image under a small viewing angle according to the second projection matrix, the second image model and the image data includes: copying the second projection matrix, the image data and the second image model from the CPU to the GPU, so that the GPU generates the second image under the small viewing angle according to the second projection matrix, the second image model and the image data. In particular, the vertices of the second image model are passed to a vertex shader by the CPU, the texture coordinates of the second image model are copied to a fragment shader, the texture coordinates corresponding to the vertices of the second image model that can be projected under the updated second projection matrix are determined, the texture unit corresponding to those texture coordinates is determined from the image data, and the GPU renders the second image under the small viewing angle.
The small viewing angle refers to a viewing angle at which only a local part of the image data is visible in the field of view after rendering. It can be understood simply as the viewing angle of the local planar image corresponding to the second image model that is projected into the field of view when the second virtual camera is placed inside the second image model. Obtaining the second image under the small viewing angle in this step lets the user understand the content of the image data locally (under a small viewing angle), improving the efficiency of understanding the image data content.
The obtained second image is displayed in a second window of the data display interface. As shown in fig. 6, if there is only one second window 22 on the data display interface, the second image is displayed in that second window. If there are several second windows 22 on the data display interface, the second image is displayed in the second window corresponding to the control operation. Among the several second windows, each second window corresponds to a different small viewing angle, and the second images displayed in them are therefore different.
In the above steps, the first window on the data display interface displays the first image under the large viewing angle and the second window displays the second image under the small viewing angle, so that planar images of the image data under different viewing angles are obtained; the image data can thus be understood from different viewing angles, which helps the user understand its content and improves the efficiency of understanding the image data content.
The first image and the second image are projections of the same image model (the first image model and the second image model are identical) under a large viewing angle and a small viewing angle respectively, and are mapped with the same texture (the image data). The image data is understood as a whole through the first image under the large viewing angle and locally through the second image under the small viewing angle, which provides a detailed view of the image data. When control operations are performed on the windows of the data display interface (including the first window and the second window), the second image under the small viewing angle changes continuously. Moreover, because the second image model is a sphere, covering 360 degrees with no boundary, the second image easily repeats, i.e. it is easy to rotate past the same content while controlling the second window. Therefore, when controlling a window, the user needs to know which part of the first image the second image currently displayed in the second window corresponds to, so as to locate the region of interest more quickly.
Step 104: determining a three-dimensional navigation area in the first image model corresponding to the second image according to the second projection matrix and the first image model.
The first image or the second image determined based on the projection matrix (the first projection matrix and the second projection matrix, respectively) and the image model (the first image model and the second image model, respectively) is an image obtained by the imaging principle of perspective projection. As shown in fig. 5, the projection of a point in the image model between the near plane 11 and the far plane 12 can be seen in our field of view.
According to the imaging principle of perspective projection, what is visible in the field of view is obtained by multiplying the vertices of the image model by the projection matrix; the vertices that land on the near plane are normalized, clipped and finally displayed through texture mapping. Therefore, to determine the three-dimensional navigation area in the first image model that corresponds to the second image, the problem can be reversed into: determining which vertices of the first image model can be projected onto the near plane of the second projection matrix, taking the area corresponding to those vertices as the three-dimensional navigation area, and highlighting the texture coordinates corresponding to that area. Which vertices of the first image model can be projected onto the near plane of the second projection matrix can in turn be determined from the second projection matrix and the first image model.
Specifically, the step of determining a three-dimensional navigation area in the first image model corresponding to the second image according to the second projection matrix and the first image model includes: according to the second projection matrix and the first image model, determining a navigation vertex projected to a near plane corresponding to the second projection matrix from the vertexes of the first image model; and taking the area corresponding to the navigation vertex as a three-dimensional navigation area in the first image model corresponding to the second image. The area corresponding to the navigation vertex is understood as the area where the navigation vertex is located.
Navigation vertices are understood as vertices of the first image model that can be projected into the near plane of the second projection matrix. Specifically, the step of determining, from the vertices of the first image model, the navigation vertices projected into the near plane corresponding to the second projection matrix according to the second projection matrix and the first image model may be performed by the CPU, and specifically includes the following steps: traversing each vertex in the first image model; and determining, for each vertex, whether it is a navigation vertex projected into the near plane corresponding to the second projection matrix.
Wherein the step of determining from each vertex a navigation vertex projected into the near plane corresponding to the second projection matrix comprises: determining the coordinate of each vertex after projection according to the second projection matrix, for example, multiplying the vertex in the first image model by the second projection matrix to obtain the coordinate of each vertex after projection; detecting whether the coordinates are in the range of the near plane corresponding to the second projection matrix; if yes, determining the vertex as a navigation vertex; if not, the vertex is determined to be a non-navigation vertex. Wherein the navigation vertices are visible to the user after being projected onto the near-plane of the second projection matrix, and the non-navigation vertices are not visible to the user after being projected.
Specifically, if the first image model is divided into 180 circles according to longitude with 30 points allocated to each circle, the CPU traverses every vertex in the first image model, i.e. 180 × 30 vertices in total, and for each vertex determines whether it is a navigation vertex according to the second projection matrix and the vertex. Specifically, the second projection matrix is multiplied by the vertex coordinates to obtain the projected coordinates of the vertex; if the projected coordinates lie within the range of the near plane corresponding to the second projection matrix, the vertex is determined to be a navigation vertex, otherwise it is determined to be a non-navigation vertex. It can be understood that once the second projection matrix is determined, the range of its near plane is also determined. If, for a projected coordinate (x1, y1, z1), x1 and y1 lie in the range [-1, 1], i.e. -1 ≤ x1 ≤ 1 and -1 ≤ y1 ≤ 1, the projected coordinate is determined to be within the range of the near plane corresponding to the second projection matrix. After the navigation vertices have been determined, the area corresponding to the navigation vertices is taken as the three-dimensional navigation area in the first image model corresponding to the second image. It should be noted that z1 does not need to be checked here, because the near plane is two-dimensional and all projected z-axis coordinates are equal; the projected z1 coordinate can subsequently be used as the depth of field, to achieve the effect that near objects appear large and distant ones appear small.
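A minimal sketch of the traversal described above is given below; note that with a perspective projection matrix the product is a homogeneous coordinate, so the sketch divides by the w component before applying the [-1, 1] test, a standard step that the text leaves implicit.

```cpp
#include <glm/glm.hpp>
#include <vector>

// Returns the indices of the vertices of the first image model that project
// into the near plane of the second projection matrix (the navigation vertices).
std::vector<size_t> findNavigationVertices(const std::vector<glm::vec3>& firstModelVertices,
                                           const glm::mat4& secondProjectionMatrix) {
    std::vector<size_t> navigationVertices;
    for (size_t i = 0; i < firstModelVertices.size(); ++i) {
        // Multiply the vertex by the second projection matrix ...
        glm::vec4 clip = secondProjectionMatrix * glm::vec4(firstModelVertices[i], 1.0f);
        if (clip.w <= 0.0f) continue;                 // behind the camera
        float x = clip.x / clip.w;                    // perspective divide
        float y = clip.y / clip.w;
        // ... and keep it if the projected x and y fall inside [-1, 1].
        if (x >= -1.0f && x <= 1.0f && y >= -1.0f && y <= 1.0f) {
            navigationVertices.push_back(i);          // navigation vertex
        }
    }
    return navigationVertices;  // their region is the 3D navigation area
}
```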
It can be understood that multiplying the first projection matrix by the vertices of the first image model determines the vertices projected onto the near plane of the first projection matrix, which after clipping, rendering and so on become the first image; multiplying the second projection matrix by the vertices of the second image model determines the vertices projected onto the near plane of the second projection matrix, which after rendering become the second image; therefore, after the second projection matrix is multiplied by the first image model, the navigation vertices obtained are the vertices of the first image model that correspond to the second image (projecting these vertices with the second projection matrix would yield the second image).
Put more simply: from outside the first image model, the first image is obtained by multiplying the first projection matrix by the vertices of the first image model and then clipping, rendering and so on; from inside the second image model, the second image is obtained by multiplying the second projection matrix by the vertices of the second image model and then clipping, rendering and so on; then, by multiplying this interior second projection matrix by the first image model, it can be derived which vertices of the first image model can be projected onto the near plane of the second projection matrix, and those vertices are used as the navigation vertices.
Step 105: processing the three-dimensional navigation area in a preset manner to obtain a navigation image, and displaying the navigation image in the first window in a highlighted manner.
When the step of determining, from the vertices of the first image model, the navigation vertices projected into the near plane corresponding to the second projection matrix is executed by the CPU (central processing unit), the step of processing the three-dimensional navigation area in a preset manner to obtain the navigation image and displaying it prominently in the first window specifically includes: determining the texture coordinates corresponding to the navigation vertices; and copying these texture coordinates to the GPU, so that the GPU processes the three-dimensional navigation area in the preset manner according to the texture coordinates to obtain the navigation image and displays it in the first window in a highlighted manner. The navigation image represents the position of the second image within the first image.
It should be noted that, if the CPU is used to process the three-dimensional navigation area, after the CPU determines the navigation vertex and the texture coordinate corresponding to the navigation vertex, the texture coordinate needs to be copied to the GPU, so that the GPU processes the three-dimensional navigation area according to the texture coordinate in a preset manner to obtain a navigation image, and the navigation image is prominently displayed in the first window.
The step of processing the three-dimensional navigation area in a preset manner according to the texture coordinates to obtain the navigation image and displaying it in the first window in a highlighted manner may include: acquiring a preset texture for the three-dimensional navigation area, the preset texture being a preset color or a preset picture; and rendering the three-dimensional navigation area with the GPU according to the preset texture and the texture coordinates to obtain the navigation image, which is displayed prominently in the first window. Specifically, the texture corresponding to the texture coordinates is set to the preset texture of the three-dimensional navigation area, and the GPU renders the three-dimensional navigation area according to the set texture (i.e. the preset texture). The three-dimensional navigation area is thus rendered with the preset color or preset picture, which highlights the navigation image.
Further, this step may include: acquiring a preset texture for the three-dimensional navigation area and a first preset transparency, the preset texture being a preset color or a preset picture; and rendering the three-dimensional navigation area with the GPU according to the preset texture, the first preset transparency and the texture coordinates to obtain the navigation image, which is displayed prominently in the first window. Specifically, the texture corresponding to the texture coordinates is set to the preset texture of the three-dimensional navigation area and its transparency is set to the first preset transparency, and the GPU renders the three-dimensional navigation area according to the set texture. The three-dimensional navigation area is thus rendered with the preset texture and displayed with the first preset transparency, which highlights the navigation image.
Further, taking the region outside the three-dimensional navigation area as the non-three-dimensional navigation area, the step of processing the three-dimensional navigation area in a preset manner according to the texture coordinates to obtain the navigation image and displaying it in the first window in a highlighted manner may include:
acquiring a preset texture for the three-dimensional navigation area, a first preset transparency and a second preset transparency, where the second preset transparency is smaller than the first preset transparency and the preset texture is a preset color or a preset picture; rendering the three-dimensional navigation area with the GPU according to the preset texture, the first preset transparency and the texture coordinates to obtain the navigation image, the rendered three-dimensional navigation area being highlighted in the first window; and rendering the non-three-dimensional navigation area with the GPU to the second preset transparency. Rendering the three-dimensional navigation area according to the preset texture, the first preset transparency and the texture coordinates specifically means: setting the texture corresponding to the texture coordinates to the preset texture of the three-dimensional navigation area, setting its transparency to the first preset transparency, and rendering the three-dimensional navigation area with the GPU according to the set texture, so that it is rendered with the preset texture and displayed with the first preset transparency.
It can be understood that, if the three-dimensional navigation area is rendered after the first image, the rendered navigation image is displayed on top of the first image. So as not to block the region of the first image corresponding to the non-three-dimensional navigation area and to improve the display effect, the second preset transparency is set to less than 0.8, for example 0. To highlight the navigation image, the first preset transparency may be set within (0, 1); so as not to completely cover the area of the first image corresponding to the navigation image, and to improve the user experience, the first preset transparency may be set to 0.8. The preset color may be set to red to highlight the navigation image.
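A minimal sketch of one way to realise these transparencies with standard OpenGL alpha blending is shown below; the shader uniform name uAlpha and the separation of the overlay into two vertex arrays are illustrative assumptions.

```cpp
#include <glad/glad.h>   // or another OpenGL function loader

// Draws the navigation overlay above the first image with standard alpha
// blending: ~0.8 highlights the navigation area without fully hiding the
// first image, while 0.0 leaves the non-navigation area fully transparent.
void drawNavigationOverlay(GLuint overlayProgram,
                           GLuint navVao, GLsizei navIndexCount,
                           GLuint otherVao, GLsizei otherIndexCount) {
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glUseProgram(overlayProgram);
    GLint alphaLoc = glGetUniformLocation(overlayProgram, "uAlpha");

    // Navigation area: preset texture/colour, first preset transparency (0.8).
    glUniform1f(alphaLoc, 0.8f);
    glBindVertexArray(navVao);
    glDrawElements(GL_TRIANGLES, navIndexCount, GL_UNSIGNED_INT, nullptr);

    // Non-navigation area: second preset transparency (0.0, i.e. invisible).
    glUniform1f(alphaLoc, 0.0f);
    glBindVertexArray(otherVao);
    glDrawElements(GL_TRIANGLES, otherIndexCount, GL_UNSIGNED_INT, nullptr);

    glBindVertexArray(0);
    glDisable(GL_BLEND);
}
```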
As shown in the left part of fig. 6, the rendered navigation image 23 and the rendered non-three-dimensional navigation area lie above the first image. Since the current first preset transparency is not 1, the part of the first image beneath the navigation image 23 can be seen through it; that part of the first image coincides with the second image. Since the second preset transparency is 0, the rendered non-three-dimensional navigation area is fully transparent and invisible to the human eye.
In some other cases, taking the area outside the three-dimensional navigation area as the non-three-dimensional navigation area, the step of processing the three-dimensional navigation area in a preset manner according to the texture coordinates to obtain the navigation image and displaying it in the first window in a highlighted manner includes:
acquiring a preset texture for the three-dimensional navigation area, a first preset transparency, a preset texture for the non-three-dimensional navigation area and a second preset transparency, where the second preset transparency is smaller than the first preset transparency, the preset texture of the three-dimensional navigation area is a first preset color or a first preset picture, and the preset texture of the non-three-dimensional navigation area is a second preset color or a second preset picture; rendering the three-dimensional navigation area with the GPU according to its preset texture, the first preset transparency and the texture coordinates to obtain the navigation image, which is displayed prominently in the first window; and rendering the non-three-dimensional navigation area in the first window with the GPU according to its preset texture and the second preset transparency.
The rendering of the three-dimensional navigation area by using the GPU according to the preset texture of the three-dimensional navigation area, the first preset transparency and the texture coordinates includes: setting the texture corresponding to the texture coordinates to the preset texture of the three-dimensional navigation area, and setting the transparency of the texture corresponding to the texture coordinates to the first preset transparency; and rendering the three-dimensional navigation area according to the set texture, so that the three-dimensional navigation area is rendered as the preset texture of the three-dimensional navigation area and displayed with the first preset transparency. The rendering of the non-three-dimensional navigation area by using the GPU according to the preset texture of the non-three-dimensional navigation area and the second preset transparency includes: setting the texture corresponding to the non-three-dimensional navigation area to the preset texture of the non-three-dimensional navigation area, and setting the transparency of that texture to the second preset transparency; and rendering the non-three-dimensional navigation area according to the set texture, so that the non-three-dimensional navigation area is rendered as the preset texture of the non-three-dimensional navigation area and displayed with the second preset transparency. The settings of the first preset transparency and the second preset transparency may refer to the description above; the preset texture of the three-dimensional navigation area and the preset texture of the non-three-dimensional navigation area may be the same or different. In this way, the navigation image is highlighted, while the non-three-dimensional navigation area is rendered with its own preset texture at the second preset transparency.
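As a concrete illustration of the two rendering variants above, the following is a minimal C++/OpenGL sketch in which the navigation area and the non-navigation area are drawn as two blended batches over the first image, each with its own preset color and transparency passed through a uniform. The program handle, buffer handles, vertex counts and the "uColor" uniform name are assumptions made for the sketch and do not come from the original text.

```cpp
#include <GLES2/gl2.h>   // assumption: an OpenGL ES 2.0 context; desktop OpenGL works the same way

// Minimal sketch: draw the three-dimensional navigation area and the non-three-dimensional
// navigation area as two blended batches over the first image.
void drawNavigationOverlay(GLuint program,
                           GLuint navVbo, GLsizei navVertexCount,
                           GLuint otherVbo, GLsizei otherVertexCount) {
    glEnable(GL_BLEND);                                 // blend the overlay with the first image below it
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glUseProgram(program);
    GLint colorLoc = glGetUniformLocation(program, "uColor");

    // Three-dimensional navigation area: preset color (e.g. red) at the first preset
    // transparency (e.g. 0.8), so the first image still shows through the navigation image.
    glBindBuffer(GL_ARRAY_BUFFER, navVbo);
    glUniform4f(colorLoc, 1.0f, 0.0f, 0.0f, 0.8f);
    glDrawArrays(GL_TRIANGLES, 0, navVertexCount);

    // Non-three-dimensional navigation area: second preset transparency (e.g. 0), i.e.
    // fully transparent, so it does not block the rest of the first image.
    glBindBuffer(GL_ARRAY_BUFFER, otherVbo);
    glUniform4f(colorLoc, 1.0f, 0.0f, 0.0f, 0.0f);
    glDrawArrays(GL_TRIANGLES, 0, otherVertexCount);
}
```

With the second preset transparency set to 0, the non-navigation batch contributes nothing visually, matching the behaviour described above.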
In the embodiment, the three-dimensional navigation area and the non-three-dimensional navigation area are distinguished, the navigation image is further highlighted, and the user experience is improved.
It should be noted that there may be a plurality of implementation scenes in the step of processing the three-dimensional navigation area by the GPU in a preset manner according to the texture coordinates to obtain the navigation image, and displaying the navigation image in the first window in a highlighted manner.
For example, in one implementation scenario, there is only one display control in the first window, and both the navigation image (and the rendered non-three-dimensional navigation area) and the first image are displayed through this display control. For example, the display control includes two texture units: a first texture unit and a second texture unit, where the first texture unit is used to display the first image, the second texture unit is used to display the navigation image (and the rendered non-three-dimensional navigation area), and the second texture unit is located above the first texture unit. Specifically, before the step of displaying the first image in the first window of the data display interface, the method further includes: acquiring the first texture unit and the second texture unit in the display control of the first window; and disposing the second texture unit on the first texture unit. Thus, the step of displaying the first image in the first window of the data display interface includes: displaying the first image within the first texture unit in the display control of the first window. The step of highlighting the navigation image within the first window includes: highlighting the navigation image (and the rendered non-three-dimensional navigation area) within the second texture unit in the display control of the first window. It should be noted that, in this case, while the step of processing the three-dimensional navigation area in the preset manner to obtain the navigation image and highlighting it in the second texture unit of the first-window display control is executed, the step of rendering the first image data into the first image under the large viewing angle according to the first projection matrix and the first image model and displaying the first image in the first texture unit of the first-window display control is also executed synchronously. It can be understood that, because the first image and the navigation image are displayed in a single display control, the first image and the navigation image (and the non-three-dimensional navigation area) need to be rendered simultaneously; if only the navigation image (and the non-three-dimensional navigation area) were rendered, the first image would not be displayed in the first window, thus defeating the purpose of the present application. In this way, when the three-dimensional navigation area is processed in the preset manner, the three-dimensional navigation area (and the non-three-dimensional navigation area) in the second texture unit is rendered, and the first image corresponding to the first texture unit is also rendered.
As another implementation scenario, there are two display controls in the first window, the first display control for displaying the first image and the second display control for displaying the navigation image (and the processed non-three-dimensional navigation area). Specifically, before the step of displaying the first image in the first window of the data display interface, the method further includes: acquiring a first display control and a second display control in a first window; the second display control is disposed over the first display control. Thus, the step of displaying the first image in the first window of the data display interface includes: and displaying the first image in a first display control of a first window of the data display interface. The step of highlighting the navigation image within the first window comprises: the navigation image (and rendered non-three-dimensional navigation area) is highlighted in the second display control of the first window. In this way, the first image and the navigation image (and the rendered non-three-dimensional navigation area) are displayed through the two display controls respectively, and are processed separately, so that the processing efficiency is improved. If the three-dimensional navigation area is processed, only the content displayed on the second display control needs to be rendered, and the content displayed on the first display control does not need to be rendered, so that the consumption of electronic equipment is reduced, and the processing efficiency and speed are improved.
Therefore, the navigation image is highlighted, and the user can clearly know, from the navigation image, which position in the first image displayed in the first window the second image displayed in the second window corresponds to, so that the association between the images at different viewing angles is established. This further improves the efficiency of understanding the content of the image data, makes it convenient for the user to adjust the viewed area, guides the user to quickly find the area of interest, increases the speed at which the user locates the area of interest in the image data, and improves the user experience. In addition, the second image displayed through the second window also realizes the detailed display of the image data.
At this point, the first image and the navigation image are displayed in the first window of the data display interface, and the second image is displayed in the second window of the data display interface, wherein the navigation image indicates the position in the first image to which the second image corresponds.
And 106, acquiring a control operation of the user on the first window based on the navigation image.
Since the navigation image indicates the position in the first image to which the second image corresponds, the user can perform a control operation based on the navigation image in the first window of the data display interface. The control operation may be performed by the user performing a sliding touch operation on the navigation image of the first window, or may be performed in another manner. Here, the effect of the control operation is briefly described: after the user touches and slides on the navigation image of the first window, the second image in the second window changes, and the navigation image in the first window therefore also changes; it appears as if the navigation image on the first window is being directly controlled.
In the embodiment of the present application, a control operation by a slide touch operation is described as an example.
The events of the control operation corresponding to the sliding touch operation on the first window include a slide event, a click event and the like. The click event is used to stop the accelerator introduced by the control operation on the second window; it can be understood that the control operation on the first window does not involve accelerator-related processing. The slide event is used to handle the various situations that occur while the finger slides. The slide events include BeginDrag, DragMove, EndDrag, DragCancel and the like. BeginDrag corresponds to touchesBegan and is understood as a finger press event; DragMove corresponds to touchesMoved and is understood as a finger movement event; EndDrag corresponds to touchesEnded and is understood as a finger lift event; DragCancel corresponds to touchesCancelled and is understood as an unexpected interrupt event, such as an interrupt caused by an incoming call.
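The correspondence between the platform touch callbacks and the slide events can be written down as a small mapping; the following sketch uses assumed enum and function names purely for illustration.

```cpp
// Illustrative mapping between the platform touch callbacks and the slide events
// used by the control operation; the enum and function names are assumptions.
enum class TouchPhase { Began, Moved, Ended, Cancelled };
enum class SlideEvent { BeginDrag, DragMove, EndDrag, DragCancel };

SlideEvent toSlideEvent(TouchPhase phase) {
    switch (phase) {
        case TouchPhase::Began:     return SlideEvent::BeginDrag;   // touchesBegan: finger press
        case TouchPhase::Moved:     return SlideEvent::DragMove;    // touchesMoved: finger movement
        case TouchPhase::Ended:     return SlideEvent::EndDrag;     // touchesEnded: finger lift
        case TouchPhase::Cancelled: return SlideEvent::DragCancel;  // touchesCancelled: e.g. an incoming call
    }
    return SlideEvent::DragCancel;
}
```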
And 107, converting the control operation into an angle in a three-dimensional space.
For the electronic device, the screen corresponds to a two-dimensional coordinate system: the height direction (vertical direction) corresponds to the y-axis, and the width direction (horizontal direction) corresponds to the x-axis. Therefore, the position coordinates corresponding to the sliding touch operation generally include an x-axis coordinate and a y-axis coordinate, and the x-axis and y-axis coordinates on the screen are physical coordinates. The origin (0, 0) is at the upper left corner of the screen, and the coordinate axes of the screen of the electronic device do not include a z-axis.
In the image model, the rotation of the model in openGL can only be performed about the base axes; the base axes include a first base axis, a second base axis and a third base axis, which in the embodiment of the present application correspond to the x-axis, the y-axis and the z-axis of the three-dimensional coordinate system, respectively. That is, the z-axis is introduced in openGL, and (0, 0, 0) corresponds to the midpoint of the first image in the first window or the midpoint of the second image in the second window. In this embodiment, the object coordinate system is a right-hand coordinate system, and the base axes of the object coordinate system coincide with the base axes of the world coordinate system.
How the control operation performed by the user on the first window based on the navigation image determines the second projection matrix is a core of determining the second projection matrix in the embodiment of the present application; that is, the gesture-sliding control operation of the user on the screen of the electronic device is converted into a corresponding angle. The angle includes the rotation angle of the second image model about the third base axis (the z-axis) and the pitch angle, about the first base axis (the x-axis), of the second virtual camera corresponding to the second projection matrix.
Specifically, step 107 includes: acquiring a central coordinate of a central point of a first image in a first window; acquiring a control coordinate of a control point corresponding to the control operation; and converting the control operation into an angle in a three-dimensional space according to the central coordinate and the control coordinate.
As is apparent from fig. 6 and the above description, the first image is an image under the large viewing angle obtained by pasting the image data as texture onto the entire hemisphere, and it is visually displayed on the two-dimensional plane as a circle. Since the midpoint of the first image corresponds to the origin of the three-dimensional coordinate axes in openGL, the center of the first image is taken as the center point in the two-dimensional coordinate system corresponding to the screen in order to convert the control operation into an angle. It can be understood that, in general, the center point of the first image is the center point of the first window, but there may be cases where it is not the center point of the first window; therefore, it is the center point of the first image, rather than that of the first window, that is acquired here.
The center coordinate and the control coordinate are both coordinates, in the two-dimensional coordinate system corresponding to the screen, with the upper left corner as the origin. The center coordinate can be calculated in advance. Let the width corresponding to the first window be windows_width and the height corresponding to the first window be windows_height. If the center point of the first image is the center point of the first window, the center coordinate is (windows_width/2, windows_height/2); if the center point of the first image is not the center point of the first window, the center coordinate is determined according to the pixel values of the first image, or calculated in another manner.
And converting the control operation into an angle in a three-dimensional space according to the central coordinate and the control coordinate, namely converting the coordinate in the two-dimensional coordinate system into the angle in the three-dimensional space so as to control the display of the second image and the navigation image through the control operation.
How to determine the corresponding angle according to the control coordinate of the control point slid by the gesture and the center coordinate of the center point is the core of converting the control operation into an angle in a three-dimensional space in the embodiment of the present application.
For the rotation angle of the second image model about the third base axis (the z-axis), the step of converting the control operation into an angle in the three-dimensional space according to the center coordinate and the control coordinate specifically includes: determining, according to the center coordinate and the control coordinate, the angle at which the straight line formed by the control point and the center point lies on the first window; and taking that angle as the rotation angle in the second image model corresponding to the control operation.
Fig. 7 is a schematic diagram for determining the angle at which a control point lies on the first window according to an embodiment of the present application. In the figure, point A is the center point with coordinates (x0, y0), and point B is a control point with coordinates (x, y). Note that, since point A and point B are on the first window and their coordinates are coordinates on the screen in the two-dimensional space, the coordinates of point A and point B take the upper left corner of the screen as the origin. Since the first image is circular and, in openGL, (0, 0, 0) corresponds to the midpoint of the first image in the first window, the first image is regarded as a clock face in order to convert the coordinates in the two-dimensional coordinate system into an angle in the three-dimensional space more conveniently and rapidly: the angle corresponding to the 0-point or 12-point direction is 0 degrees or 360 degrees, the angle corresponding to the 3-point direction is 90 degrees, the angle corresponding to the 6-point direction is 180 degrees, and the angle corresponding to the 9-point direction is 270 degrees.
The angle at which the straight line formed by the control point and the center point lies on the first window is understood as the angle between that straight line and the straight line corresponding to the 0-point direction (the 0-degree direction) in the first image. Specifically, the step of determining, according to the center coordinate and the control coordinate, the angle at which the straight line formed by the control point and the center point lies on the first window includes: determining the quadrant in which the control point is located according to the center coordinate and the control coordinate; and determining, according to the quadrant, the center coordinate and the control coordinate, the angle at which the straight line formed by the control point and the center point lies on the first window. The quadrant is a quadrant formed with point A as the center, and in the embodiment of the present application the quadrant is used to calculate the angle at which the straight line formed by the control point and the center point lies on the first window.
Here, 0 to 90 degrees is the first quadrant, 90 to 180 degrees is the second quadrant, 180 to 270 degrees is the third quadrant, and 270 to 360 degrees is the fourth quadrant. The quadrant in which the control point is located is determined according to the center coordinate and the control coordinate. For example, if x > x0 and y > y0, the control point is determined to be in the first quadrant, and the angle at which the straight line formed by the control point and the center point lies on the first window is the angle between that straight line and the straight line corresponding to the 0-point direction, namely arctan(|x - x0| / |y - y0|); it may also be expressed by a cosine angle, a sine angle, or the like. For another example, if x0 > x and y0 > y, the control point is determined to be in the third quadrant, and the angle at which the straight line formed by the control point and the center point lies on the first window is the sum of the angle arctan(|x - x0| / |y - y0|) between that straight line and the straight line corresponding to the 6-point direction (the 180-degree direction) and the angle between the straight line corresponding to the 6-point direction (the 180-degree direction) and the straight line corresponding to the 0-point direction, namely arctan(|x - x0| / |y - y0|) + 180°.

As shown in fig. 7, when the control point is in the second quadrant, the angle at which the straight line formed by the control point and the center point lies on the first window is the sum of the angle between that straight line and the 3-point direction (the 90-degree direction) and the angle between the straight line corresponding to the 3-point direction (the 90-degree direction) and the straight line corresponding to the 0-point direction, namely arctan(|x - x0| / |y - y0|) + 90°.
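Putting these quadrant rules together, a sketch of the angle conversion might look as follows; the first- and third-quadrant cases follow the text, while the tests used for the second and fourth quadrants are assumptions extrapolated from them, and the function name is illustrative.

```cpp
#include <cmath>

// Sketch of converting the control point (x, y) and the center point (x0, y0) into the
// absolute "clock face" angle of the embodiment (boundary cases are left untreated).
double rollAngleDegrees(double x, double y, double x0, double y0) {
    const double kPi = 3.14159265358979323846;
    // Equivalent to arctan(|x - x0| / |y - y0|), but well-defined when y == y0.
    const double base = std::atan2(std::fabs(x - x0), std::fabs(y - y0)) * 180.0 / kPi;
    if (x > x0 && y > y0) return base;           // first quadrant:  0 to 90 degrees
    if (x > x0 && y < y0) return base + 90.0;    // second quadrant: 90 to 180 degrees (assumed test)
    if (x < x0 && y < y0) return base + 180.0;   // third quadrant:  180 to 270 degrees
    return base + 270.0;                         // fourth quadrant: 270 to 360 degrees (assumed test)
}
```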
The angle at which the straight line formed by the control point and the center point lies on the first window is taken as the rotation angle in the second image model corresponding to the control operation. This angle is an absolute angle; roll, yaw and pitch all represent absolute angles, and therefore roll, yaw and pitch are used in the embodiment of the present application to represent the corresponding absolute angles. Here pitch represents rotation about the y-axis, also called the yaw angle; yaw represents rotation about the x-axis, also called the pitch angle; and roll represents rotation about the z-axis, also called the roll angle. The control operation of the user on the second window essentially changes the roll angle roll and the pitch angle yaw, while the yaw angle pitch is always fixed and unchanged; the default yaw angle pitch is 90 degrees, which ensures that the second virtual camera always faces the direction pointed to by the z-axis. The rotation angle in the second image model corresponding to the control operation is represented by roll.
It will be appreciated that the second image model is spherical, a rotation of the sphere corresponding to 360 degrees, and the first image is also circular, and the control operation slides a rotation around the centre point of the first image, again exactly 360 degrees. On the other hand, the object coordinate system adopts a right-hand coordinate system, and the control operation slides one circle around the central point of the first image, namely, one circle of rotation is equivalent to one circle of rotation around the z-axis in the three-dimensional space of the object coordinate system. Therefore, the angle at which the straight line formed by the control point and the center point is located on the first window corresponds to the rotation angle in the second image model as a control operation, so that the control operation of the user on the basis of the two-dimensional plane is converted to correspond to the rotation angle in the second image model, that is, the rotation angle of the second image model on the z-axis of the third base axis.
For the pitch angle, about the first base axis (the x-axis), of the second virtual camera corresponding to the second projection matrix, the step of converting the control operation into an angle in the three-dimensional space according to the center coordinate and the control coordinate specifically includes: acquiring the radius corresponding to the second image and the maximum pitch angle corresponding to the second virtual camera; determining the control distance from the control point to the center point according to the center coordinate and the control coordinate; and determining, according to the radius and the maximum pitch angle, the pitch angle of the second virtual camera to which the control distance of the control operation corresponds.
The radius corresponding to the second image is the radius of the sphere corresponding to the first image model or the second image model, and is denoted by r. The pitch angle includes an elevation angle by which the second virtual camera is shifted upward and a depression angle by which the second virtual camera is shifted downward. The maximum value of the elevation angle is 90° - θ/2 and the minimum value is 0, where θ is the Euler angle. The Euler angle is generally preset to 30 degrees and is the included angle between the straight line from the upper surface of the viewing pyramid to the lens of the second virtual camera and the straight line from the lower surface of the viewing pyramid to the lens of the second virtual camera. The maximum and minimum values of the depression angle coincide with those of the elevation angle; only the direction differs. That is, the maximum pitch angle max is 90° - θ/2 and the minimum pitch angle min is 0.
The control distance from the control point to the center point is determined according to the center coordinate of the center point and the control coordinate of the control point. The center coordinate of the center point is known to be (x0, y0) and the control coordinate of the control point is (x, y), so the control distance m from the control point to the center point is m = √((x - x0)² + (y - y0)²).
Determining, according to the radius and the maximum pitch angle, the pitch angle of the second virtual camera to which the control distance of the control operation corresponds includes: multiplying the maximum pitch angle by the control distance and dividing by the radius to obtain the pitch angle of the second virtual camera to which the control distance of the control operation corresponds, as shown in formula (1):

a = max × m / r    (1)

where a represents the pitch angle of the second virtual camera to which the control distance m corresponds, i.e. the pitch angle of the second virtual camera about the first base axis (the x-axis); max = 90° - θ/2 is the maximum pitch angle, r is the radius, and θ is the Euler angle. The pitch angle a of the second virtual camera calculated from the control distance m is an absolute angle.
After the pitch angle of the second virtual camera corresponding to the control distance is determined, the direction of the pitch angle also needs to be determined. The direction of the pitch angle of the second virtual camera may be determined from the control coordinate and the center coordinate. If (x - x0) is negative, the direction of the pitch angle is determined to be downward, i.e. a depression angle; if (x - x0) is positive, the direction of the pitch angle is determined to be upward, i.e. an elevation angle.
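A sketch of formula (1) together with the direction test might look as follows; the struct, function and parameter names are illustrative, and fovDegrees stands for the Euler angle of the viewing pyramid (30 degrees in the text), so the maximum pitch angle is 90° - fov/2.

```cpp
#include <cmath>

// Sketch of formula (1) plus the direction test for the pitch angle of the second virtual camera.
struct PitchResult {
    double degrees;     // pitch angle a of the second virtual camera (absolute value)
    bool   isElevation; // true: elevation angle (upward); false: depression angle (downward)
};

PitchResult pitchFromControl(double x, double y, double x0, double y0,
                             double radius, double fovDegrees = 30.0) {
    const double m        = std::sqrt((x - x0) * (x - x0) + (y - y0) * (y - y0)); // control distance
    const double maxPitch = 90.0 - fovDegrees / 2.0;                              // maximum pitch angle
    const double a        = maxPitch * m / radius;                                // formula (1): a = max * m / r
    return { a, (x - x0) > 0.0 };                                                 // positive (x - x0): elevation
}
```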
It should be noted that the calculated angle, including the rotation angle of the second image model on the z-axis of the third base axis and the pitch angle of the second virtual camera corresponding to the second projection matrix on the x-axis of the first base axis, is obtained on the first window based on the control operation of the navigation image. Whereas the second projection matrix corresponds to the projection matrix of the second window. Therefore, the calculated angle needs to be sent to the second window, so that the second window updates the corresponding second projection matrix according to the angle.
And 108, updating a second projection matrix corresponding to the second window according to the angle.
The projection matrices (including the first projection matrix and the second projection matrix) correspond to MVP matrices, where MVP denotes the product of the model matrix, the view matrix and the perspective matrix. The model matrix corresponds to an operation matrix of the second image model; it mainly handles the rotation of the second image model about the x, y and z axes and contains the information for controlling the second image model. The view matrix mainly corresponds to the position point, the orientation and the like of the second virtual camera (i.e., the pose of the second virtual camera). The perspective matrix corresponds to information such as the Euler angle, the near plane and the far plane of the second virtual camera, and can be understood as the perspective-projection information of the second virtual camera.
How the angle corresponds to the second projection matrix is also a core of determining the second projection matrix in the embodiment of the present application: when the user performs a control operation on the first window based on the navigation image, the rotation angle of the second image model about the third base axis (the z-axis) corresponding to the control operation is used to adjust the model matrix accordingly, and the pitch angle of the second virtual camera about the first base axis (the x-axis) is used to adjust the view matrix accordingly.
Specifically, step 108 includes: updating the model matrix according to the rotation angle in the second image model corresponding to the control operation; updating the view matrix according to the pitch angle of the second virtual camera corresponding to the control operation; and updating the second projection matrix corresponding to the second window according to the model matrix, the view matrix and the perspective matrix, wherein the perspective matrix remains unchanged.
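Assuming GLM is used, as in the glm:: calls later in this section, assembling the second projection matrix from its three parts can be sketched as follows; the 30-degree Euler angle and the near/far plane values are illustrative assumptions.

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Sketch of assembling the second projection matrix. Only the model and view matrices
// change with the control operation; the perspective matrix is built once and kept.
glm::mat4 buildSecondProjection(const glm::mat4& model, const glm::mat4& view, float aspect) {
    const glm::mat4 perspective = glm::perspective(glm::radians(30.0f), aspect, 0.1f, 100.0f);
    return perspective * view * model;  // MVP, applied right to left to every vertex
}
```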
How the model matrix is updated according to the rotation angle in the second image model corresponding to the control operation: as described above, the rotation angle in the second image model corresponding to the control operation is an absolute angle represented by roll. Therefore, the rotation angle roll can be converted into radians, and the rotate function can then be called to perform the rotation and obtain the model matrix, for example model = glm::rotate(glm::radians(roll), glm::vec3(0.0f, 0.0f, 1.0f)) * model, where glm::radians is the degree-to-radian conversion function.
How the view matrix is updated according to the pitch angle of the second virtual camera corresponding to the control operation: typically, the pose of a virtual camera is determined by three parameters: camera_pos, the position point of the virtual camera; camera_front, the orientation of the virtual camera; and camera_up, which is perpendicular to the orientation of the virtual camera. After initialization of the data display interface and before any control operation is performed on the second window, camera_pos, camera_front and camera_up all take initial values. camera_pos keeps its initial value unchanged, for example camera_pos is set to the very center inside the second image model. When the user performs a control operation on the first window based on the navigation image, camera_front changes and camera_up changes accordingly, so the view matrix changes.
Specifically, the step of updating the view matrix corresponding to the pitch angle of the second virtual camera according to the control operation includes: taking the pitch angle of the control operation corresponding to the second virtual camera as the pitch angle of the second virtual camera, and acquiring the yaw angle of the second virtual camera; updating the orientation vector of the second virtual camera according to the yaw angle and the pitch angle; and updating the view matrix according to the orientation vector.
Fig. 8 is a schematic diagram of determining the orientation of the second virtual camera according to an embodiment of the present disclosure. Point C is the position camera_pos of the second virtual camera, and CD is the orientation camera_front of the second virtual camera, where the coordinates of point D are (x, y, z). It should be noted that the orientation camera_front of the second virtual camera lies on the ray CD, and the length of CD may be any value. For ease of calculation, the length of CD is assumed to be 1, and the yaw angle pitch and the pitch angle yaw are known. The coordinates of point D can be calculated according to formulas (2), (3) and (4), thereby obtaining the value of the orientation camera_front of the second virtual camera.
x=CD×cos(yaw)×cos(pitch) (2)
y=CD×sin(yaw) (3)
z=CD×cos(yaw)×sin(pitch) (4)
After the orientation camera_front of the second virtual camera is calculated, the value of camera_up may be further calculated.
Since camera_front and camera_up define a plane and the control operation corresponds to tilting up and down about the y-axis, the point (0, 1, 0) must lie on the plane defined by camera_front and camera_up. An auxiliary vector up_help may therefore be set to help calculate the value of camera_up; let up_help be (0, 1, 0).
A right vector right of the second virtual camera is obtained by using the auxiliary vector up_help and the calculated orientation camera_front of the second virtual camera. Specifically, the auxiliary vector up_help is cross-multiplied with the calculated orientation vector camera_front of the second virtual camera and the result is normalized to obtain the right vector right; according to the principle of the cross product, the right vector right is perpendicular to the orientation camera_front of the second virtual camera. For example, glm::vec3 right = glm::normalize(glm::cross(up_help, camera_front)), where glm::cross represents the cross product. The value of camera_up is then obtained by using the right vector right and the calculated orientation vector camera_front of the second virtual camera. Specifically, the orientation vector camera_front of the second virtual camera is cross-multiplied with the right vector right and the result is normalized to obtain camera_up, for example camera_up = glm::normalize(glm::cross(camera_front, right)). According to the principle of the cross product, the resulting camera_up is perpendicular to the orientation camera_front of the second virtual camera.
After camera_pos, camera_front and camera_up are obtained, they are used to determine the view matrix. Specifically, the lookAt function is called, and view = glm::lookAt(camera_pos, camera_front, camera_up) is obtained, i.e. the view matrix.
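The whole view-matrix update can be gathered into one sketch, assuming GLM and keeping the text's naming (the variable called pitch holds the azimuth, the variable called yaw holds the pitch). camera_pos is assumed to sit at the center of the second image model (the origin), so camera_pos + camera_front plays the role of the lookAt target, matching the call above.

```cpp
#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Sketch of the complete view-matrix update from the yaw angle pitch and pitch angle yaw.
glm::mat4 updateViewMatrix(float yawDeg, float pitchDeg, const glm::vec3& camera_pos) {
    const float yaw   = glm::radians(yawDeg);
    const float pitch = glm::radians(pitchDeg);

    // Formulas (2) to (4) with CD = 1.
    const glm::vec3 camera_front(std::cos(yaw) * std::cos(pitch),   // x = CD * cos(yaw) * cos(pitch)
                                 std::sin(yaw),                     // y = CD * sin(yaw)
                                 std::cos(yaw) * std::sin(pitch));  // z = CD * cos(yaw) * sin(pitch)

    const glm::vec3 up_help(0.0f, 1.0f, 0.0f);                                   // auxiliary vector
    const glm::vec3 right     = glm::normalize(glm::cross(up_help, camera_front));
    const glm::vec3 camera_up = glm::normalize(glm::cross(camera_front, right));

    return glm::lookAt(camera_pos, camera_pos + camera_front, camera_up);
}
```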
The second projection matrix is generated according to the updated view matrix, the updated model matrix and the perspective matrix, so that the second projection matrix corresponding to the second window is updated. In this way, the control operation of the user on the first window based on the navigation image is converted into an angle, and the second projection matrix corresponding to the second window is updated according to that angle; that is, the second projection matrix is updated through the control operation.
In the process of updating the second projection matrix corresponding to the second window according to the control operation of the user on the first window, two threads are used, namely a first thread and a second thread, corresponding respectively to the first projection matrix and the second projection matrix. The first thread is the main (ui) thread and is used for capturing gestures, such as capturing slide events like BeginDrag, DragMove, EndDrag and DragCancel, and determining the corresponding angles according to the gesture sliding. The second thread is the gl thread, with a refresh rate of 60 frames per second; the gl thread generates the second projection matrix according to the angle so as to update the second projection matrix corresponding to the second window. The first thread and the second thread perform their processing separately, which improves the data processing efficiency.
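A minimal sketch of this two-thread split is given below, assuming the angles are simply shared through atomics; the names and the 16 ms period are illustrative assumptions, not from the original text.

```cpp
#include <atomic>
#include <chrono>
#include <thread>

// The ui thread writes the angles computed from the gesture; the gl thread reads them
// at roughly 60 frames per second and rebuilds the second projection matrix.
std::atomic<float> g_roll{0.0f};   // rotation of the second image model about the z-axis
std::atomic<float> g_pitch{0.0f};  // pitch of the second virtual camera about the x-axis

void onGestureAngles(float roll, float pitch) {  // called on the first (ui) thread
    g_roll.store(roll);
    g_pitch.store(pitch);
}

void glLoop(const std::atomic<bool>& running) {  // runs on the second (gl) thread
    while (running.load()) {
        const float roll  = g_roll.load();
        const float pitch = g_pitch.load();
        (void)roll; (void)pitch;
        // rebuildSecondProjectionMatrix(roll, pitch);  // update model/view, then MVP
        // renderSecondWindowAndNavigation();           // redraw second image and navigation image
        std::this_thread::sleep_for(std::chrono::milliseconds(16));  // ~60 fps refresh
    }
}
```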
And 109, updating the second image under the small visual angle according to the updated second projection matrix, the second image model and the image data, and updating and displaying the second image in the second window.
Specifically, step 109 includes: and copying the updated second projection matrix, the image data and the second image model into the GPU through the CPU, so as to update the second image under the small visual angle by utilizing the GPU according to the updated second projection matrix, the updated second image model and the image data. Specifically, a vertex in the second image model is transmitted to a vertex shader through a CPU, a texture coordinate in the second image model is transmitted to a fragment shader, the texture coordinate corresponding to the vertex which can be projected to an updated second projection matrix in the second image model is determined, a texture unit corresponding to the texture coordinate is determined according to image data, then a GPU is used for rendering, and the second image under a small visual angle is updated.
Specifically, please refer to the above description corresponding to the step of generating the second image under the small viewing angle according to the second projection matrix and the second image model and the image data, which is not repeated herein.
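On the GL side, updating the second image essentially amounts to re-uploading the updated second projection matrix and redrawing the sphere mesh of the second image model, textured with the image data; a minimal sketch, with assumed program, uniform and handle names, is given below.

```cpp
#include <GLES2/gl2.h>
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// Sketch of the GL side of step 109: upload the updated MVP and redraw the second image mesh.
void redrawSecondImage(GLuint program, const glm::mat4& updatedMvp, GLsizei indexCount) {
    glUseProgram(program);
    const GLint mvpLoc = glGetUniformLocation(program, "uMvp");
    glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, glm::value_ptr(updatedMvp)); // vertex shader multiplies each vertex by uMvp
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);        // fragment shader samples the fisheye texture
}
```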
And 110, updating the three-dimensional navigation area in the first image model corresponding to the second image according to the updated second projection matrix and the first image model.
Specifically, step 110 includes: according to the updated second projection matrix and the first image model, determining a navigation vertex projected to a near plane corresponding to the second projection matrix from the vertex of the first image model; and taking the area corresponding to the navigation vertex as a three-dimensional navigation area in the first image model corresponding to the second image. The area corresponding to the navigation vertex is understood as the area where the navigation vertex is located.
Navigation vertices are understood as vertices of the first image model that can be projected into the near plane of the second projection matrix. Specifically, the step of determining, from the vertices of the first image model, the navigation vertices projected into the near plane corresponding to the second projection matrix according to the second projection matrix and the first image model may be performed by the CPU, and specifically includes the following steps: traversing each vertex in the first image model; and determining, from these vertices, the navigation vertices projected into the near plane corresponding to the second projection matrix.
Wherein the step of determining from each vertex a navigation vertex projected into the near plane corresponding to the second projection matrix comprises: determining the coordinate of each vertex after projection according to the second projection matrix, for example, multiplying the vertex in the first image model by the second projection matrix to obtain the coordinate of each vertex after projection; detecting whether the coordinates are in the range of the near plane corresponding to the second projection matrix; if yes, determining the vertex as a navigation vertex; if not, the vertex is determined to be a non-navigation vertex. Wherein the navigation vertices are visible to the user after being projected onto the near-plane of the second projection matrix, and the non-navigation vertices are not visible to the user after being projected.
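A CPU-side sketch of this test is given below; the clip-space containment check is one reasonable reading of "in the range of the near plane" and is an assumption, as are the function and variable names.

```cpp
#include <vector>
#include <glm/glm.hpp>

// A vertex of the first image model is treated as a navigation vertex when, after
// multiplication by the updated second projection matrix (MVP), it falls inside the
// clip volume, i.e. it would be visible through the near plane of the second virtual camera.
std::vector<int> findNavigationVertices(const std::vector<glm::vec3>& vertices,
                                        const glm::mat4& secondMvp) {
    std::vector<int> navigationIndices;
    for (int i = 0; i < static_cast<int>(vertices.size()); ++i) {
        const glm::vec4 clip = secondMvp * glm::vec4(vertices[i], 1.0f);  // project the vertex
        if (clip.w > 0.0f &&
            clip.x >= -clip.w && clip.x <= clip.w &&
            clip.y >= -clip.w && clip.y <= clip.w &&
            clip.z >= -clip.w && clip.z <= clip.w) {
            navigationIndices.push_back(i);                               // navigation vertex
        }
    }
    return navigationIndices;  // the region these vertices cover is the three-dimensional navigation area
}
```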
For a specific implementation principle, please refer to the above description of determining the three-dimensional navigation area in the first image model corresponding to the second image according to the second projection matrix and the first image model, which is not repeated herein.
And 111, processing the three-dimensional navigation area in a preset mode to update the navigation image, and displaying the navigation image in a first window in a highlighted mode.
Specifically, step 111 includes: determining texture coordinates corresponding to the navigation vertexes; and copying the texture coordinates into the GPU, so that the GPU processes the three-dimensional navigation area in a preset mode according to the texture coordinates to update the navigation image, and displaying the navigation image in the first window in a highlighted mode. The navigation image represents a position of the second image within the first image.
The step of processing the three-dimensional navigation area in the preset manner according to the texture coordinate to update the navigation image and highlighting the navigation image in the first window is described above, and the description corresponding to the step of processing the three-dimensional navigation area in the preset manner according to the texture coordinate to obtain the navigation image and highlighting the navigation image in the first window is omitted here.
In this embodiment, the image data collected by the fisheye camera is acquired; the image data is processed into the first image under the large viewing angle according to the first projection matrix and the first image model, and the first image is displayed in the first window; the second image under the small viewing angle is generated according to the second projection matrix, and the second image is displayed in the second window; the three-dimensional navigation area in the first image model corresponding to the second image is processed to obtain the navigation image, and the navigation image is displayed in the first window. The control operation of the user on the first window based on the navigation image is then acquired, the control operation is converted into an angle in the three-dimensional space, and the second projection matrix corresponding to the second window is updated according to the angle; in this way, the control operation of the user on the first window based on the navigation image is converted into an angle in the three-dimensional space, the second projection matrix is generated according to the angle, and the control operation on the first window is thus converted into the second projection matrix corresponding to the second window. The second image under the small viewing angle is updated according to the second projection matrix, the second image model and the image data, and the updated second image is displayed in the second window. On the one hand, the second projection matrix is obtained through the control operation on the first window, so that the display of the second image in the second window is controlled through the control operation on the first window; on the other hand, the obtained first image and second image are planar images of the image data under different viewing angles, so that the image data can be understood from different viewing angles, which facilitates the user's understanding of the content of the image data and improves the efficiency of understanding the content of the image data. Then, the three-dimensional navigation area in the first image model corresponding to the second image is updated according to the second projection matrix and the first image model, the three-dimensional navigation area is processed in the preset manner to update the navigation image, and the navigation image is highlighted in the first window, so that the user can clearly know, from the navigation image, which position in the first image displayed in the first window the second image displayed in the second window corresponds to, thereby establishing the association between the images at different viewing angles, further improving the efficiency of understanding the content of the image data, making it convenient for the user to adjust the viewed area, guiding the user to quickly find the area of interest, increasing the speed at which the user locates the area of interest in the image data, and improving the user experience. In addition, the second image under the small viewing angle displayed through the second window also realizes the detailed display of the image data. The data display method in the embodiment of the present application can therefore be applied to more application scenarios.
Fig. 9a and 9b are schematic flow charts of a data presentation method provided in an embodiment of the present application. Please refer to the data display method of the present application in conjunction with fig. 9a and 9 b.
As shown in fig. 9a, when the data presentation interface is opened/refreshed, the set initial parameters of the first virtual camera and the set initial parameters of the second virtual camera, the first image model and the second image model, and the image data collected by the fisheye camera are acquired; determining an initial first projection matrix according to initial parameters of the first virtual camera, and determining an initial second projection matrix according to initial parameters of the second virtual camera; determining an initial first image under a large viewing angle according to the initial first projection matrix, the first image model and the image data, and displaying the initial first image in a first window of a data display interface; generating an initial second image under a small visual angle according to the initial second projection matrix, the second image model and the image data, and displaying the initial second image in a second window of the data display interface; and determining a three-dimensional navigation area corresponding to the initial second image in the first image model according to the initial second projection matrix and the first image model, processing the three-dimensional navigation area in a preset mode to obtain a navigation image, and displaying the navigation image in a first window in a highlighted mode. The above is the corresponding steps in opening/refreshing the data display interface.
Then, as shown in fig. 9b, when a control operation of the user on the first window of the data presentation interface based on the navigation image is detected, the control operation is converted into an angle in the three-dimensional space, and the second projection matrix is updated according to the angle. And updating a second image under the small visual angle according to the updated second projection matrix, the second image model and the image data, and displaying the second image in a second window, wherein the second image is an updated image. And updating the three-dimensional navigation area in the first image model corresponding to the second image according to the updated second projection matrix and the first image model, processing the three-dimensional navigation area in a preset mode to update the navigation image, and displaying the navigation image in the first window in a highlighted mode. It is understood that, after the data presentation interface is opened, when a control operation based on the navigation image in the first window is detected, the second projection matrix is updated according to the control operation to update the second image presented in the second window, and the navigation image is updated.
It should be noted that fig. 9a and 9b together illustrate the entire flow of the data presentation method. For details of each step, please refer to the description of the corresponding step above, which is not repeated herein.
According to the method described in the above embodiments, this embodiment will be further described from the perspective of a data presentation apparatus, which may be specifically implemented as an independent entity or integrated in an electronic device.
Fig. 10 is a schematic structural diagram of a data display device according to an embodiment of the present application. The apparatus may include an image acquisition module 201, a first processing presentation module 202, a second processing presentation module 203, an area determination module 204, a third processing presentation module 205, an operation acquisition module 206, an angle conversion module 207, a matrix update module 208, and an area update module 209.
And the image acquisition module 201 is used for acquiring image data acquired by the fisheye camera.
The image acquisition module 201 is specifically configured to perform data calibration on the fisheye camera; acquiring initial image data shot by a fisheye camera; and correcting the initial image data according to the result of the data calibration to obtain the image data acquired by the fisheye camera.
The first processing and displaying module 202 is configured to process the image data into a first image under a large viewing angle according to the first projection matrix and the first image model, and display the first image in a first window of the data display interface.
The first processing and displaying module 202, when executing the step of processing the image data into the first image under the large viewing angle according to the first projection matrix and the first image model, specifically executes: the first projection matrix, the image data and the first image model are copied into the GPU through the CPU, and the image data are processed into a first image under a large visual angle through a graphic processor according to the first projection matrix, the first image model and the image data.
And the second processing and displaying module 203 is configured to generate a second image under a small viewing angle according to the second projection matrix, the second image model and the image data, and display the second image in a second window of the data display interface, where the first projection matrix is different from the second projection matrix, and the first image model is the same as the second image model.
The second processing and displaying module 203 specifically executes, when executing the step of generating the second image under the small viewing angle according to the second projection matrix, the second image model, and the image data: copying, by the CPU, the second projection matrix, the image data, and the second image model into the GPU to generate a second image at a small viewing angle from the second projection matrix, the second image model, and the image data.
And the region determining module 204 is configured to determine, according to the second projection matrix and the first image model, a three-dimensional navigation region in the first image model corresponding to the second image.
The region determining module 204 is specifically configured to determine, according to the second projection matrix and the first image model, a navigation vertex projected into a near plane corresponding to the second projection matrix from vertices of the first image model; and taking the area corresponding to the navigation vertex as a three-dimensional navigation area in the first image model corresponding to the second image. The area corresponding to the navigation vertex is understood as the area where the navigation vertex is located.
The third processing and displaying module 205 is configured to process the three-dimensional navigation area in a preset manner to obtain a navigation image, so that the navigation image is highlighted in the first window, and the navigation image represents a position of the second image in the first image.
And an operation acquiring module 206, configured to acquire a control operation of the user on the first window based on the navigation image. Specifically, the operation acquiring module 206 is specifically configured to acquire, through the first thread, a control operation of the user on the first window based on the navigation image.
And an angle conversion module 207 for converting the control operation into an angle in a three-dimensional space.
Specifically, the angle conversion module 207 is specifically configured to convert the control operation into an angle in a three-dimensional space through the first thread.
The angle conversion module 207 specifically includes a coordinate obtaining unit and an angle conversion unit. The coordinate acquisition unit is used for acquiring the central coordinate of the central point of the first image in the first window; and acquiring the control coordinates of the control points corresponding to the control operation. And the angle conversion unit is used for converting the control operation into an angle in a three-dimensional space according to the central coordinate and the control coordinate.
The angle includes the pitch angle of the second virtual camera corresponding to the second projection matrix, and the angle conversion unit is specifically configured to: acquire the radius corresponding to the second image and the maximum pitch angle corresponding to the second virtual camera; determine the control distance from the control point to the center point according to the center coordinate and the control coordinate; and determine, according to the radius and the maximum pitch angle, the pitch angle of the second virtual camera to which the control distance of the control operation corresponds.
The angle comprises a rotation angle of the second image model, and the angle conversion unit is specifically configured to: determining a quadrant where the control point is located according to the central coordinate and the control coordinate; determining the angle of a straight line formed by the control point and the central point on the first window according to the quadrant and the central coordinate and the control coordinate; the angle corresponds to the angle of rotation in the second image model as a control operation.
And a matrix updating module 208, configured to update the second projection matrix corresponding to the second window according to the angle.
Specifically, the matrix updating module 208 is specifically configured to update, by the second thread, the second projection matrix corresponding to the second window according to the angle.
The matrix updating module 208 specifically includes a model updating unit, a view updating unit and a matrix updating unit. The model updating unit is configured to update the model matrix according to the rotation angle in the second image model corresponding to the control operation. The view updating unit is configured to update the view matrix according to the pitch angle of the second virtual camera corresponding to the control operation. The matrix updating unit is configured to update the second projection matrix corresponding to the second window according to the model matrix, the view matrix and the perspective matrix.
The view updating unit is specifically configured to: take the pitch angle of the second virtual camera corresponding to the control operation as the pitch angle of the second virtual camera, and acquire the yaw angle of the second virtual camera; update the orientation vector of the second virtual camera according to the yaw angle and the pitch angle; and update the view matrix according to the orientation vector.
And the area updating module 209 is configured to update the three-dimensional navigation area in the first image model corresponding to the second image according to the second projection matrix and the first image model.
The third processing and displaying module 205 updates the navigation image by processing the three-dimensional navigation area in a preset manner, and displays the navigation image in the first window in a highlighted manner.
In specific implementation, the above units may be implemented as independent entities, or may be combined arbitrarily and implemented as one or several entities. For the specific implementation processes of the above apparatus and of each unit, and the beneficial effects achieved, reference may be made to the corresponding descriptions in the foregoing method embodiments; for convenience and brevity of description, details are not repeated here.
An electronic device according to an embodiment of the present application is further provided, as shown in fig. 11, which shows a schematic structural diagram of the electronic device according to an embodiment of the present application, specifically:
the electronic device may include components such as a processor 901 of one or more processing cores, memory 902 of one or more computer-readable storage media, Radio Frequency (RF) circuitry 903, a power supply 904, an input unit 905, and a display unit 906. Those skilled in the art will appreciate that the electronic device configurations shown in the figures do not constitute limitations of the electronic device, and may include more or fewer components than shown, or some components in combination, or a different arrangement of components. Wherein:
the processor 901 is a control center of the electronic device, and includes a central processing unit and a graphics processing unit, and the central processing unit is connected to the graphics processing unit. The cpu connects various parts of the entire electronic device through various interfaces and lines, and executes various functions of the electronic device and processes data by operating or executing software programs and/or modules stored in the memory 902 and calling data stored in the memory 902, thereby integrally monitoring the electronic device. Optionally, the central processor may include one or more processing cores; preferably, the central processor may integrate an application processor and a modem processor, wherein the application processor mainly handles operating systems, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the central processor. The graphic processor is mainly used for processing data transmitted by the central processing unit, such as rendering and the like.
The memory 902 may be used to store software programs (computer programs) and modules, and the processor 901 executes various functional applications and data processing by operating the software programs and modules stored in the memory 902. The memory 902 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created according to use of the electronic device, and the like. Further, the memory 902 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory 902 may also include a memory controller to provide the processor 901 access to the memory 902.
The RF circuit 903 may be used for receiving and transmitting signals during information transmission and reception, and in particular, for processing downlink information of a base station after being received by one or more processors 901; in addition, data relating to uplink is transmitted to the base station. In general, RF circuitry 903 includes, but is not limited to, an antenna, at least one Amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuitry 903 may also communicate with networks and other devices via wireless communications. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Message Service (SMS), and the like.
The electronic device further includes a power supply 904 (e.g., a battery) for supplying power to the various components. Preferably, the power supply 904 is logically connected to the processor 901 via a power management system, so that charging, discharging, and power consumption management functions are implemented via the power management system. The power supply 904 may also include one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, or any other such component.
The electronic device may further include an input unit 905, which may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. Specifically, in one particular embodiment, the input unit 905 may include a touch-sensitive surface as well as other input devices. The touch-sensitive surface, also referred to as a touch display screen or a touch pad, may collect touch operations by a user on or near it (e.g., operations by the user on or near the touch-sensitive surface using a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connected device according to a preset program. Optionally, the touch-sensitive surface may comprise two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 901, and can receive and execute commands sent by the processor 901. In addition, the touch-sensitive surface may be implemented using resistive, capacitive, infrared, surface acoustic wave, or other technologies. Besides the touch-sensitive surface, the input unit 905 may include other input devices. In particular, other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, and the like.
The electronic device may also include a display unit 906, which may be used to display information input by or provided to the user as well as the various graphical user interfaces of the electronic device; these interfaces may be made up of graphics, text, icons, video, and any combination thereof. The display unit 906 may include a display panel, which may optionally be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like. Further, the touch-sensitive surface may cover the display panel; when a touch operation is detected on or near the touch-sensitive surface, it is transmitted to the processor 901 to determine the type of the touch event, and the processor 901 then provides a corresponding visual output on the display panel according to the type of the touch event. Although in the figures the touch-sensitive surface and the display panel are shown as two separate components to implement input and output functions, in some embodiments the touch-sensitive surface may be integrated with the display panel to implement input and output functions.
Although not shown, the electronic device may further include a camera (note that the camera here is different from the virtual camera described above, and the camera here refers to hardware), a bluetooth module, and the like, which are not described herein again. Specifically, in this embodiment, the processor 901 in the electronic device loads an executable file corresponding to a process of one or more application programs into the memory 902 according to the following instructions, and the processor 901 runs the application programs stored in the memory 902, so as to implement various functions as follows:
acquiring image data acquired by a fisheye camera; processing the image data into a first image at a large viewing angle according to the first projection matrix and the first image model, and displaying the first image in a first window of a data display interface; generating a second image at a small viewing angle according to the second projection matrix, the second image model and the image data, and displaying the second image in a second window of the data display interface, wherein the first projection matrix is different from the second projection matrix, and the first image model is the same as the second image model; determining a three-dimensional navigation area in the first image model corresponding to the second image according to the second projection matrix and the first image model, processing the three-dimensional navigation area in a preset manner to obtain a navigation image, and displaying the navigation image in the first window in a highlighted manner, wherein the navigation image represents the position of the second image in the first image; acquiring a control operation of a user on the first window based on the navigation image; converting the control operation into an angle in a three-dimensional space; updating the second projection matrix corresponding to the second window according to the angle; updating the second image at the small viewing angle according to the second projection matrix, the second image model and the image data, and updating the display of the second image in the second window; updating the three-dimensional navigation area in the first image model corresponding to the second image according to the second projection matrix and the first image model; and processing the three-dimensional navigation area in the preset manner to update the navigation image, and displaying the navigation image in the first window in a highlighted manner.
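As a rough illustration of the dual-window setup described above, the following sketch builds two different projection matrices over the same image model: one for the large-viewing-angle first window and one for the small-viewing-angle second window. It is illustrative only; the patent does not specify its matrix conventions, so a standard OpenGL-style perspective matrix is assumed here, and the field-of-view values are hypothetical.

import numpy as np

def perspective(fov_y_deg, aspect, near, far):
    # Standard OpenGL-style perspective projection matrix (column-vector convention).
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

# The same image model (for example, a sphere onto which the fisheye frame is
# textured) is used in both windows; only the projection matrices differ.
first_projection = perspective(fov_y_deg=120.0, aspect=1.0, near=0.1, far=100.0)   # large viewing angle, first window
second_projection = perspective(fov_y_deg=45.0, aspect=1.0, near=0.1, far=100.0)   # small viewing angle, second window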
The electronic device can implement the steps in any embodiment of the data presentation method provided in the embodiments of the present application, and can therefore achieve the beneficial effects achievable by any data presentation method provided in the embodiments of the present application; these are detailed in the foregoing embodiments and are not repeated here.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions (computer programs), which can be stored in a computer-readable storage medium and loaded and executed by a processor, or by related hardware controlled by those instructions. To this end, an embodiment of the present application provides a storage medium in which a plurality of instructions are stored, and the instructions can be loaded by a processor to execute the steps of any embodiment of the data presentation method provided in the embodiments of the present application.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the storage medium can be executed to perform the steps in any embodiment of the data presentation method provided in the embodiments of the present application, the beneficial effects achievable by any data presentation method provided in the embodiments of the present application can be achieved; these are detailed in the foregoing embodiments and are not repeated here.
The data display method, the data display device, the electronic device, and the storage medium provided in the embodiments of the present application are described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help in understanding the method and its core idea. Meanwhile, those skilled in the art may, according to the idea of the present application, make changes to the specific embodiments and the scope of application. In summary, the content of this specification should not be construed as limiting the present application.
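The step of converting a control operation into an angle in a three-dimensional space, described above and detailed further in claims 3, 7 and 8 below, can be sketched as follows: the rotation angle comes from the direction of the control point relative to the center point of the first image, and the pitch angle comes from scaling the control distance against the image radius and the maximum pitch angle of the second virtual camera. The use of atan2 for the quadrant handling, the clamping, and the specific numbers are assumptions for illustration, not details taken from the patent.

import math

def control_to_angles(center_xy, control_xy, image_radius, max_pitch_deg):
    # Offset of the control point from the center point of the first image.
    dx = control_xy[0] - center_xy[0]
    dy = control_xy[1] - center_xy[1]
    # Rotation angle of the straight line joining the center point and the
    # control point; atan2 resolves the quadrant directly.
    rotation_deg = math.degrees(math.atan2(dy, dx)) % 360.0
    # Pitch angle: scale the control distance against the image radius,
    # clamped to the maximum pitch angle of the second virtual camera.
    distance = math.hypot(dx, dy)
    pitch_deg = min(distance / image_radius, 1.0) * max_pitch_deg
    return rotation_deg, pitch_deg

# Example: a control point 150 px right of and 150 px below the center of a
# first image with a 300 px radius and a 90 degree maximum pitch.
print(control_to_angles((400, 300), (550, 450), 300, 90.0))  # approximately (45.0, 63.64)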

Claims (10)

1. A method for displaying data, comprising:
acquiring image data acquired by a fisheye camera;
processing the image data into a first image at a large viewing angle according to a first projection matrix and a first image model, and displaying the first image in a first window of a data display interface;
generating a second image at a small viewing angle according to a second projection matrix, a second image model and the image data, and displaying the second image in a second window of the data display interface, wherein the first projection matrix is different from the second projection matrix, and the first image model is the same as the second image model;
determining a three-dimensional navigation area in the first image model corresponding to the second image according to the second projection matrix and the first image model;
processing the three-dimensional navigation area in a preset manner to obtain a navigation image, so as to display the navigation image in the first window in a highlighted manner, wherein the navigation image represents the position of the second image in the first image;
acquiring a control operation of a user on the first window based on the navigation image;
converting the control operation into an angle in a three-dimensional space;
updating a second projection matrix corresponding to the second window according to the angle;
updating the second image at the small viewing angle according to the second projection matrix, the second image model and the image data, and updating and displaying the second image in the second window;
updating a three-dimensional navigation area in the first image model corresponding to the second image according to the second projection matrix and the first image model;
and processing the three-dimensional navigation area in the preset manner to update the navigation image, and displaying the navigation image in the first window in a highlighted manner.
2. The data presentation method of claim 1, wherein the method comprises:
acquiring, through a first thread, the control operation of the user on the first window based on the navigation image, and converting the control operation into an angle in a three-dimensional space; and
updating, through a second thread, the second projection matrix corresponding to the second window according to the angle.
3. The data presentation method of claim 1, wherein the step of converting the control operation into an angle in a three-dimensional space comprises:
acquiring a central coordinate of a central point of the first image in the first window;
acquiring a control coordinate of a control point corresponding to the control operation;
and converting the control operation into an angle in a three-dimensional space according to the central coordinate and the control coordinate.
4. The method of claim 1, wherein the angle comprises a rotation angle of the second image model corresponding to the control operation and a pitch angle of a second virtual camera corresponding to the control operation, and the step of updating the second projection matrix corresponding to the second window according to the angle comprises:
updating a model matrix of the second image model according to the rotation angle corresponding to the control operation;
updating a view matrix of the second virtual camera according to the pitch angle corresponding to the control operation;
and updating the second projection matrix corresponding to the second window according to the model matrix, the view matrix and a perspective matrix.
5. A data presentation device, comprising:
the image acquisition module is used for acquiring image data acquired by the fisheye camera;
the first processing and displaying module is used for processing the image data into a first image at a large viewing angle according to the first projection matrix and the first image model, and displaying the first image in a first window of a data display interface;
the second processing and displaying module is used for generating a second image at a small viewing angle according to a second projection matrix, a second image model and the image data, and displaying the second image in a second window of the data display interface, wherein the first projection matrix is different from the second projection matrix, and the first image model is the same as the second image model;
the area determining module is used for determining a three-dimensional navigation area, corresponding to the second image, in the first image model according to the second projection matrix and the first image model;
the third processing and displaying module is used for processing the three-dimensional navigation area in a preset manner to obtain a navigation image, so as to display the navigation image in the first window in a highlighted manner, wherein the navigation image represents the position of the second image in the first image;
the operation acquisition module is used for acquiring control operation of a user on the first window based on the navigation image;
the angle conversion module is used for converting the control operation into an angle in a three-dimensional space;
the matrix updating module is used for updating a second projection matrix corresponding to the second window according to the angle;
the second processing and displaying module is further configured to update the second image at the small viewing angle according to the second projection matrix, the second image model and the image data, and to update and display the second image in the second window;
the area updating module is used for updating a three-dimensional navigation area in the first image model corresponding to the second image according to the second projection matrix and the first image model;
and the third processing and displaying module is further used for processing the three-dimensional navigation area in the preset manner to update the navigation image and display the navigation image in the first window in a highlighted manner.
6. The data presentation device of claim 5, wherein the angle conversion module comprises:
a coordinate acquiring unit, configured to acquire a center coordinate of a center point of the first image in the first window; acquiring a control coordinate of a control point corresponding to the control operation;
and the angle conversion unit is used for converting the control operation into an angle in a three-dimensional space according to the central coordinate and the control coordinate.
7. The data presentation device of claim 6, wherein the angle comprises a pitch angle of a second virtual camera corresponding to the second projection matrix, and the angle conversion unit is specifically configured to:
acquiring a radius corresponding to the second image and a maximum pitch angle corresponding to a second virtual camera;
determining the control distance from the control point to the central point according to the central coordinate and the control coordinate;
and determining, based on the radius and the maximum pitch angle, the pitch angle of the second virtual camera corresponding to the control distance of the control operation.
8. The data presentation device of claim 6, wherein the angle comprises a rotation angle of the second image model, and the angle conversion unit is specifically configured to:
determining a quadrant where the control point is located according to the central coordinate and the control coordinate;
determining the angle of a straight line formed by the control point and the central point on the first window according to the quadrant and the central coordinate and the control coordinate;
and taking the angle as the rotation angle of the second image model corresponding to the control operation.
9. The data presentation device of claim 5, wherein the angle comprises a rotation angle of the second image model corresponding to the control operation and a pitch angle of the second virtual camera of the second projection matrix corresponding to the control operation, and the matrix updating module comprises:
a model updating unit, used for updating a model matrix of the second image model according to the rotation angle corresponding to the control operation;
a view updating unit, used for updating a view matrix of the second virtual camera according to the pitch angle corresponding to the control operation;
and a matrix updating unit, used for updating the second projection matrix corresponding to the second window according to the model matrix, the view matrix and the perspective matrix.
10. An electronic device, comprising: one or more processors; a memory; and one or more computer programs, wherein the processor is coupled to the memory, and the one or more computer programs are stored in the memory and configured to be executed by the processor to perform the data presentation method of any one of claims 1 to 4.
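Claims 4 and 9 above recite updating the second projection matrix from a model matrix driven by the rotation angle, a view matrix driven by the pitch angle of the second virtual camera, and a perspective matrix. A hypothetical sketch of that composition follows; the rotation axes and the multiplication order are assumptions, since the patent does not fix these conventions.

import numpy as np

def rotation_z(deg):
    # Model rotation about the vertical axis of the image model (assumed to be z).
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[c, -s, 0.0, 0.0],
                     [s, c, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def rotation_x(deg):
    # Camera pitch about the horizontal axis (assumed to be x).
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[1.0, 0.0, 0.0, 0.0],
                     [0.0, c, -s, 0.0],
                     [0.0, s, c, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def update_second_projection(rotation_deg, pitch_deg, perspective_matrix):
    model = rotation_z(rotation_deg)   # model matrix from the rotation angle
    view = rotation_x(pitch_deg)       # view matrix from the pitch angle of the second virtual camera
    # Column-vector convention: the projection is applied last.
    return perspective_matrix @ view @ model

# Usage with the angles from the earlier sketch; np.eye(4) stands in for the
# perspective matrix (in practice, the matrix built for the second window).
second_projection = update_second_projection(45.0, 63.64, np.eye(4))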
CN202011063142.4A 2020-09-30 2020-09-30 Data display method and device and electronic equipment Pending CN112181230A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011063142.4A CN112181230A (en) 2020-09-30 2020-09-30 Data display method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011063142.4A CN112181230A (en) 2020-09-30 2020-09-30 Data display method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN112181230A true CN112181230A (en) 2021-01-05

Family

ID=73949270

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011063142.4A Pending CN112181230A (en) 2020-09-30 2020-09-30 Data display method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112181230A (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111833243A (en) * 2020-09-20 2020-10-27 武汉中科通达高新技术股份有限公司 Data display method, mobile terminal and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117218244A (en) * 2023-11-07 2023-12-12 武汉博润通文化科技股份有限公司 Intelligent 3D animation model generation method based on image recognition
CN117218244B (en) * 2023-11-07 2024-02-13 武汉博润通文化科技股份有限公司 Intelligent 3D animation model generation method based on image recognition

Similar Documents

Publication Publication Date Title
US12056813B2 (en) Shadow rendering method and apparatus, computer device, and storage medium
CN111833243B (en) Data display method, mobile terminal and storage medium
US10055879B2 (en) 3D human face reconstruction method, apparatus and server
US20200302670A1 (en) Image processing method, electronic device, and storage medium
CN111813290B (en) Data processing method and device and electronic equipment
US20160232707A1 (en) Image processing method and apparatus, and computer device
CN112017133B (en) Image display method and device and electronic equipment
CN111932664A (en) Image rendering method and device, electronic equipment and storage medium
CN110033503B (en) Animation display method and device, computer equipment and storage medium
CN108701372B (en) Image processing method and device
CN112150560A (en) Method and device for determining vanishing point and computer storage medium
CN110853128A (en) Virtual object display method and device, computer equipment and storage medium
CN112308768B (en) Data processing method, device, electronic equipment and storage medium
CN108335259B (en) Image processing method, image processing equipment and mobile terminal
CN112308767B (en) Data display method and device, storage medium and electronic equipment
CN112181230A (en) Data display method and device and electronic equipment
CN117274475A (en) Halo effect rendering method and device, electronic equipment and readable storage medium
CN116091744A (en) Virtual three-dimensional object display method and head-mounted display device
CN112308766A (en) Image data display method and device, electronic equipment and storage medium
CN112306344B (en) Data processing method and mobile terminal
CN112184543B (en) Data display method and device for fisheye camera
CN112308757B (en) Data display method and mobile terminal
CN114093020A (en) Motion capture method, motion capture device, electronic device and storage medium
CN112184801A (en) Data display method for fisheye camera and mobile terminal
CN112150554B (en) Picture display method, device, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination