CN112017133A - Image display method and device and electronic equipment


Info

Publication number
CN112017133A
Authority
CN
China
Prior art keywords
image
transition
display control
position information
projection matrix
Prior art date
Legal status
Granted
Application number
CN202011136153.0A
Other languages
Chinese (zh)
Other versions
CN112017133B (en)
Inventor
张凯 (Zhang Kai)
Current Assignee
Wuhan Zhongke Tongda High New Technology Co Ltd
Original Assignee
Wuhan Zhongke Tongda High New Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Wuhan Zhongke Tongda High New Technology Co Ltd filed Critical Wuhan Zhongke Tongda High New Technology Co Ltd
Priority to CN202011136153.0A priority Critical patent/CN112017133B/en
Publication of CN112017133A publication Critical patent/CN112017133A/en
Application granted granted Critical
Publication of CN112017133B publication Critical patent/CN112017133B/en

Classifications

    • G06T5/80
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Abstract

The embodiment of the application provides an image display method and device and electronic equipment, relating to the technical field of smart cities. The method comprises the following steps: processing image data acquired by a fisheye camera into a first image under a large viewing angle according to a first projection matrix and an image model, and displaying the first image; when a first switching operation based on the first image is detected, acquiring first position information, second position information and a first preset time, determining a first transition projection matrix corresponding to the virtual camera moving from the first position information to the second position information within the first preset time, and determining and displaying a first transition image; and when the virtual camera moves to the second position information, generating and displaying a second image under a small viewing angle according to the second projection matrix. The method and device improve the efficiency of understanding the content of the image data, achieve smooth switching between images displayed under different viewing angles, and improve user experience.

Description

Image display method and device and electronic equipment
Technical Field
The application relates to the technical field of smart cities, in particular to an image display method and device and electronic equipment.
Background
In traditional video monitoring, 2D plane pictures are mainly displayed, but with the rise of computer technology, the advantages of fisheye images in the monitoring industry have become more and more obvious. A traditional plane camera can only monitor the scene at one position, whereas a fisheye camera can monitor a much wider field of view because of its wider viewing angle, so a site that originally required several plane cameras can be covered by a single fisheye camera, greatly saving hardware cost.
Because the fisheye camera has a wide viewing angle, the fisheye image (image data) obtained by shooting often has great distortion and is usually displayed as a circle. As a result, the fisheye image is hard to understand except by professional technicians, and its application cannot be well popularized and developed.
Disclosure of Invention
The embodiment of the application provides an image display method and device and electronic equipment, which can switch between different images of image data shot by a fisheye camera displayed under different viewing angles, and improve the efficiency of understanding the content of the image data.
The embodiment of the application provides an image display method, which comprises the following steps:
acquiring image data, a first projection matrix, a second projection matrix and an image model which are acquired by a fisheye camera;
processing the image data into a first image under a large viewing angle according to the first projection matrix and the image model, and displaying the first image in a first display control of a data display interface;
when a first switching operation based on the first image is detected, acquiring first position information of a corresponding virtual camera in the first projection matrix, second position information of a corresponding virtual camera in the second projection matrix, and a first preset time; acquiring a second display control and a third display control;
displaying the second display control and the third display control at a preset position of the data display interface, and displaying the first image on the second display control;
determining a first transition projection matrix corresponding to the virtual camera moving from the first position information to the second position information within a first preset time;
generating a first transition image corresponding to the first transition projection matrix according to the first transition projection matrix, the image model and the image data, and displaying the first transition image in the first display control;
determining first transition location information of the first transition image in the first image, the first transition location information representing a location of the first transition image in the first image, and highlighting the first transition location information on the third display control;
when the virtual camera moves to the second position information, generating a second image under a small viewing angle according to the second projection matrix, the image model and the image data, and displaying the second image in the first display control, so that the first image under the large viewing angle displayed in the first display control is switched to the second image under the small viewing angle;
determining target location information of the second image in the first image, and highlighting the target location information on the third display control, the target location information representing a location of the second image in the first image.
An embodiment of the present application further provides an image display apparatus, including:
the first acquisition module is used for acquiring image data, a first projection matrix, a second projection matrix and an image model which are acquired by the fisheye camera;
the first processing and displaying module is used for processing the image data into a first image under a large viewing angle according to the first projection matrix and the image model and displaying the first image in a first display control of a data display interface;
a second obtaining module, configured to obtain, when a first switching operation based on the first image is detected, first position information of a corresponding virtual camera in the first projection matrix, second position information of a corresponding virtual camera in the second projection matrix, and a first preset time; acquiring a second display control and a third display control;
the display processing module is used for displaying the second display control and the third display control at a preset position of the data display interface and displaying the first image on the second display control;
the matrix determination module is used for determining a first transition projection matrix corresponding to the virtual camera moving from the first position information to the second position information within a first preset time;
the second processing and displaying module is used for generating a first transition image under a first transition view angle corresponding to the first transition projection matrix according to the first transition projection matrix, the image model and the image data, and displaying the first transition image in the first display control;
a first determining and presenting module, configured to determine first transition position information of a first transition image in the first image when the first transition image is generated, and highlight the first transition position information on the third display control, where the first transition position information represents a position of the first transition image in the first image;
a third processing and displaying module, configured to generate a second image under a small viewing angle according to the second projection matrix, the image model, and the image data when the virtual camera moves to the second position information, and display the second image in the first display control, so that the first image under the large viewing angle displayed by the first display control is switched to the second image under the small viewing angle;
and the second determining and displaying module is used for determining target position information of the second image in the first image when the second image is generated, and highlighting the target position information on the third display control, wherein the target position information represents the position of the second image in the first image.
An embodiment of the present application further provides an electronic device, where the electronic device includes:
one or more processors; a memory; and one or more computer programs, wherein the processor is coupled to the memory, the one or more computer programs being stored in the memory and configured to be executed by the processor to perform the image presentation method described above.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of any of the image display methods described above.
According to the embodiment of the application, image data acquired by a fisheye camera is processed into a first image under a large viewing angle through a first projection matrix and an image model, and the first image is displayed in a first display control. When a first switching operation based on the first image is detected, first position information of the virtual camera in the first projection matrix, second position information of the virtual camera in the second projection matrix and a first preset time are acquired; a first transition projection matrix corresponding to the virtual camera moving from the first position information to the second position information within the first preset time is determined, and a first transition image displayed in the first display control is determined according to the first transition projection matrix. When the virtual camera moves to the second position information, a second image under a small viewing angle is generated according to the second projection matrix and displayed in the first display control. In this way, the first image under the large viewing angle is switched, over the first preset time, to the second image under the small viewing angle in the same first display control, realizing switching between different images (the first image and the second image) of the image data displayed under different viewing angles. On the one hand, the first image under the large viewing angle and the second image under the small viewing angle can be displayed in the same display control at the same size, which improves the efficiency of understanding the content of the image data; on the other hand, the transition images make the switching between images under different viewing angles smooth, so the user experiences a smooth switching process, improving user experience.
In addition, when the first switching operation based on the first image is detected, a second display control and a third display control are obtained, so that the first image is displayed through the second display control during the smooth switching; first transition position information of the first transition image in the first image and target position information of the second image in the first image are determined and displayed through the third display control. In this way, during the smooth switching, the position in the first image of the first transition image or second image currently displayed in the first display control can be clearly known, helping the user quickly locate which part of the first image (the whole) the currently displayed first transition image or second image (a partial image) corresponds to, improving user experience.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and those skilled in the art can obtain other drawings based on these drawings without creative effort.
FIG. 1 is a scene schematic diagram of an image display system provided in an embodiment of the present application;
FIG. 2 is a schematic flowchart of an image display method provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of image data directly acquired by a fisheye camera provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of the imaging principle of perspective projection provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of the model under a large viewing angle provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of the model under a small viewing angle provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of determining a first transition image provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of determining an orientation vector provided by an embodiment of the present application;
FIG. 9 is another schematic flowchart of an image display method provided in an embodiment of the present application;
FIGS. 10a to 10d are schematic diagrams of data display interfaces provided by embodiments of the present application;
FIG. 11 is another schematic flowchart of an image display method provided in an embodiment of the present application;
FIG. 12 is a schematic structural diagram of an image display apparatus provided in an embodiment of the present application;
FIG. 13 is another schematic structural diagram of an image display apparatus provided in an embodiment of the present application;
FIG. 14 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments given herein without creative effort shall fall within the protection scope of the present application.
The embodiments of the application provide an image display method and device, electronic equipment and a storage medium. Any image display device provided by the embodiments of the application can be integrated in electronic equipment. The electronic equipment includes, but is not limited to, a smart phone, a tablet computer, a notebook computer, a smart television, a smart robot, a Personal Computer (PC), a wearable device, a server computer, a vehicle terminal, and the like.
Please refer to fig. 1, which is a scene diagram of an image display system according to an embodiment of the present application. The image display system comprises a fisheye camera and electronic equipment. There may be one or more fisheye cameras and one or more pieces of electronic equipment, and the fisheye camera and the electronic equipment can be connected directly or through a network, in either a wired or a wireless way. In the embodiment of fig. 1, the fisheye camera is connected to the electronic equipment through a network, where the network includes network entities such as routers and gateways.
The fisheye camera shoots initial image data of a fisheye image and sends the shot initial image data to the electronic equipment. The electronic equipment receives the initial image data shot by the fisheye camera; in one case, the received initial image data is directly used as the image data collected by the fisheye camera, and in another case, the received initial image data is corrected to obtain the image data collected by the fisheye camera. The image data is then processed into corresponding images, which are displayed.
Fig. 2 is a schematic flowchart of an image display method according to an embodiment of the present application. The image display method runs on the electronic equipment and comprises the following steps:
101, acquiring image data, a first projection matrix, a second projection matrix and an image model collected by a fisheye camera.
Because the viewing angle of the fisheye camera is wider, an image shot by the fisheye camera contains more information than an image shot by a plane camera. The shooting range of the fisheye camera is approximately a hemisphere, and the obtained image is presented as an approximate circle; if the viewing angle of the fisheye camera is 180 degrees, the shooting range is exactly a hemisphere, and the obtained image is presented on the two-dimensional plane as a circle.
Fig. 3 is a schematic diagram of initial image data directly acquired by the fisheye camera provided in the embodiment of the present application, and a middle circular area is an initial image captured by the fisheye camera. In fig. 3, the fisheye camera faces the sky, and the captured image includes the sky, buildings, trees, and the like around the position where the fisheye camera is located.
Initial image data directly acquired by the fisheye camera can be pre-stored in the electronic equipment so as to directly acquire the initial image data acquired by the fisheye camera from the electronic equipment; the initial image data acquired by the fisheye camera can be acquired from other electronic equipment through a network; the captured initial image data can also be acquired in real time from the fisheye camera through a network, as shown in fig. 1. In this way, in step 101, acquiring the image data acquired by the fisheye camera may be understood as acquiring initial image data directly acquired by the fisheye camera, that is, the initial image data shown in fig. 3.
In some cases, in order to achieve a better display effect, the initial image data directly acquired by the fisheye camera needs to be further processed. Specifically, acquiring the image data collected by the fisheye camera in step 101 includes: performing data calibration on the fisheye camera; acquiring the initial image data shot by the fisheye camera; and correcting the acquired initial image data according to the result of the data calibration, taking the corrected image data as the image data collected by the fisheye camera.
The fisheye camera manufacturer calibrates the fisheye camera before mass production and provides a calibration interface; after purchasing the fisheye camera, the user inputs calibration parameters through the calibration interface to calibrate the fisheye camera. The main purpose of data calibration is to obtain the parameters of the fisheye lens needed to find the circular area in the initial image data shown in fig. 3. Due to hardware differences between fisheye cameras, the position of the circular area in the initial image data differs from camera to camera.
After data calibration of the fisheye camera, the initial image data is corrected according to the result of the data calibration. For example, the initial image data is corrected by the longitude-latitude method or by another method, and the corrected image data is used as the image data acquired in step 101. The purpose of the correction is to reduce or eliminate distortion in the initial image data, such as converting the image of the circular area shown in fig. 3 into a 2:1 rectangular image.
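As an illustration of the longitude-latitude idea, the following C++ sketch builds a pixel lookup map that unwraps the circular fisheye area into a rectangular image. The equidistant 180-degree lens model and the parameter names (cx, cy, R for the calibrated circle) are assumptions for the example, not details taken from the patent.

#include <cmath>
#include <vector>

struct SrcPos { float x, y; };

// Build a lookup map that unwraps the circular fisheye area into a
// longitude-latitude rectangle. Assumed lens model: equidistant 180-degree
// lens, with circle center (cx, cy) and radius R taken from the data
// calibration result.
std::vector<SrcPos> buildUnwrapMap(int dstW, int dstH,
                                   float cx, float cy, float R) {
    const float PI = 3.14159265358979f;
    std::vector<SrcPos> map(static_cast<std::size_t>(dstW) * dstH);
    for (int j = 0; j < dstH; ++j) {
        // Angle from the optical axis: 0 at the circle center, 90 degrees at the rim.
        float theta = (j + 0.5f) / dstH * (PI / 2.0f);
        float r = R * theta / (PI / 2.0f);              // equidistant lens: r proportional to theta
        for (int i = 0; i < dstW; ++i) {
            float lon = (i + 0.5f) / dstW * 2.0f * PI;  // longitude around the axis
            map[static_cast<std::size_t>(j) * dstW + i] =
                { cx + r * std::cos(lon), cy + r * std::sin(lon) };
        }
    }
    return map;  // sample the fisheye image at map[j * dstW + i], e.g. bilinearly
}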
Further, the corrected image data is converted into texture units for subsequent texture mapping.
The image model in the embodiment of the present application refers to a virtual image model established in a virtual scene. The image model in the embodiment of the application is spherical; in other cases, image models of different shapes may be adopted depending on the particular usage scenario. In the following, the image model is taken to be a sphere, which can be simply understood as a sphere formed by dividing the model into n circles according to longitude and allocating m points to each circle, such as n = 180, m = 30, and so on. It should be noted that the larger n and m are, the more rounded the formed sphere is. A minimal sketch of such a spherical image model follows.
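The sketch below generates n rings with m points per ring; the vertex layout and the texture-coordinate convention (mapping onto a 2:1 corrected image) are illustrative assumptions.

#include <cmath>
#include <vector>

// Each vertex carries a position on the unit sphere and the texture
// coordinate used later to sample the corrected (rectangular) image data.
struct Vertex {
    float px, py, pz;   // position on the unit sphere
    float u, v;         // texture coordinate into the corrected image
};

std::vector<Vertex> buildSphereModel(int n /*rings*/, int m /*points per ring*/) {
    std::vector<Vertex> verts;
    verts.reserve(static_cast<std::size_t>(n) * m);
    const float PI = 3.14159265358979f;
    for (int ring = 0; ring < n; ++ring) {
        float lat = PI * ring / (n - 1);            // 0 .. pi from pole to pole
        for (int k = 0; k < m; ++k) {
            float lon = 2.0f * PI * k / m;          // 0 .. 2*pi around the sphere
            Vertex vert;
            vert.px = std::sin(lat) * std::cos(lon);
            vert.py = std::cos(lat);
            vert.pz = std::sin(lat) * std::sin(lon);
            vert.u  = lon / (2.0f * PI);            // maps onto the 2:1 texture
            vert.v  = lat / PI;
            verts.push_back(vert);
        }
    }
    return verts;  // a real renderer would also build a triangle index buffer
}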
In a virtual scene, the coordinate system in which an object is located (or a model; a model is displayed as an object after texture mapping) is called the object coordinate system, and the camera coordinate system is a three-dimensional coordinate system established with the focus center of the virtual camera as the origin; both correspond to the world coordinate system. The virtual camera, the object, etc. are all in the world coordinate system. The relationships among the virtual camera, the object, the image model, the wide angle, the elevation angle, the distances from the lens to the near plane and the far plane, and so on in the world coordinate system are reflected in the projection matrix.
The projection matrix in the embodiment of the present application refers to an MVP matrix, MVP = perspective × view × model. The MVP matrix comprises information for controlling the image model, information of the virtual camera, and information of the perspective projection of the virtual camera, corresponding respectively to the model matrix, the view matrix and the perspective matrix. The model matrix corresponds to operations on the image model, such as rotating the image model about the X, Y or Z axis; in the embodiment of the application the image model does not change, i.e., the model matrix does not change. The view matrix mainly corresponds to the position point, orientation, etc. of the virtual camera. The perspective matrix corresponds to information such as the Euler angle, the near plane and the far plane of the virtual camera, and is understood as the information of the perspective projection of the virtual camera. The information of the perspective projection of the virtual camera does not change in the embodiment of the application; in other cases it may be adjusted according to the actual situation, such as adjusting the distance from the lens of the virtual camera to the near plane, so that the perspective matrix changes. The embodiment of the present application is described taking the case where neither the model matrix nor the perspective matrix changes.
The first projection matrix and the second projection matrix may be pre-calculated or may be determined in real time as needed. Taking the first projection matrix as an example, it can be determined as follows: acquire the set first parameters of the virtual camera, where the first parameters comprise the information of the virtual camera, the information for controlling the image model, and the information of the perspective projection of the virtual camera (the Euler angle, the distance from the lens of the virtual camera to the near plane, the distance from the lens of the virtual camera to the far plane, etc.); then determine the first projection matrix from the first parameters of the virtual camera. The first projection matrix is determined from the first parameters using a mathematics library; for example, the first parameters of the virtual camera are input into the corresponding functions of the GLM (OpenGL Mathematics) library, which calculate the first projection matrix. When the second projection matrix is determined, the set second parameters of the virtual camera are acquired, and the second projection matrix is determined from the second parameters. The information of the image model and the information of the perspective projection of the virtual camera in the first parameters and the second parameters are the same; the only difference is the information of the virtual camera, which includes the position point of the virtual camera, the pitch angle of the virtual camera, and the like.
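The following sketch shows how such an MVP matrix could be assembled with GLM, as the text suggests. The function name and parameter values are illustrative; the model matrix is left as the identity because the image model does not change in this embodiment.

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Build a projection (MVP) matrix from a set of camera parameters.
glm::mat4 buildMvp(const glm::vec3& cameraPos,
                   const glm::vec3& cameraFront,
                   const glm::vec3& cameraUp,
                   float fovDegrees, float aspect,
                   float nearPlane, float farPlane) {
    glm::mat4 model = glm::mat4(1.0f);   // image model is not transformed here
    glm::mat4 view  = glm::lookAt(cameraPos, cameraPos + cameraFront, cameraUp);
    glm::mat4 persp = glm::perspective(glm::radians(fovDegrees),
                                       aspect, nearPlane, farPlane);
    return persp * view * model;         // MVP = perspective * view * model
}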
Fig. 4 is a schematic diagram of the imaging principle of perspective projection provided in the embodiment of the present application. The distance from the lens of the virtual camera to the first near plane 11 is the distance between point 0 and point 1, and the distance from the lens of the virtual camera to the first far plane 12 is the distance between point 0 and point 2. The position point of the virtual camera can be simply understood as the coordinate of point 0 in the world coordinate system.
And 102, processing the image data into a first image under a large viewing angle according to the first projection matrix and the image model, and displaying the first image in a first display control of a data display interface.
Specifically, the first projection matrix, the image data and the image model are copied to a Graphics Processing Unit (GPU), so that the GPU processes the image data into the first image under the large viewing angle according to the first projection matrix, the image model and the image data. Specifically, the CPU transmits the vertices of the image model to the vertex shader and copies the texture coordinates of the image model to the fragment shader; the texture units corresponding to the texture coordinates are determined according to the image data, and the GPU renders the first image under the large viewing angle.
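An OpenGL-flavored sketch of this render step is given below. The uniform name "uMvp", the GL loader header and the buffer setup are assumptions for illustration; only the overall upload, bind and draw pattern reflects the description above.

#include <glad/glad.h>            // or any other GL function loader
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// Upload the projection matrix as a uniform, bind the texture unit holding
// the corrected image data, and draw the sphere model.
void drawFrame(GLuint program, GLuint vao, GLuint texture,
               GLsizei vertexCount, const glm::mat4& mvp) {
    glUseProgram(program);
    glUniformMatrix4fv(glGetUniformLocation(program, "uMvp"),
                       1, GL_FALSE, glm::value_ptr(mvp));
    glActiveTexture(GL_TEXTURE0);          // texture unit with the image data
    glBindTexture(GL_TEXTURE_2D, texture);
    glBindVertexArray(vao);                // vertices plus texture coordinates
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);
}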
The large viewing angle refers to a viewing angle at which at least the complete image data can be seen in the field of view after rendering. It can be simply understood that, at the large viewing angle, the virtual camera is placed farther away, outside the image model, so that the complete planar image corresponding to the image model is seen in the field of view. The large viewing angle is essentially the viewing angle at which the whole image model is placed inside the viewing frustum of the virtual camera. As shown in fig. 4, the viewing frustum is the trapezoidal region between the first near plane 11 and the first far plane 12. It will be appreciated that at the large viewing angle the image model is entirely within the viewing frustum of the virtual camera; as shown in fig. 5, the image model 20 is within the viewing frustum. In this step, the first image under the large viewing angle is obtained, so that the user can understand the content of the image data as a whole.
103, when a first switching operation based on the first image is detected, acquiring first position information of a corresponding virtual camera in the first projection matrix, second position information of a corresponding virtual camera in the second projection matrix, and a first preset time.
The first switching operation is a switching operation set in advance. The first switching operation may be a zoom-in operation detected on the first image, with two fingers sliding in opposite directions; it may also be a double-click operation, a multiple continuous touch operation, or the like; it may also be a trigger of a first switching control (switching from the large viewing angle to the small viewing angle) on the data display interface. The first switching operation may be triggered by a corresponding touch operation of the user on the data display interface, by voice, or in other ways.
When the first switching operation is triggered on the first image, first position information of the corresponding virtual camera in the first projection matrix, second position information of the corresponding virtual camera in the second projection matrix and first preset time are obtained.
The position information of the virtual camera shown in fig. 5 is taken as the first position information 21, and the first position information 21 includes information such as the first position point of the virtual camera (e.g., point 0 in fig. 4) and the first pitch angle of the virtual camera. As can be seen from fig. 5, the first position point is located outside the image model, at a distance from it. The image model 20 is located in the viewing frustum formed by the second near plane 22 and the second far plane 23.
The second position information of the corresponding virtual camera in the second projection matrix includes information such as the second position point of the virtual camera and the second pitch angle of the virtual camera, where the second pitch angle may be set to 90° − Euler angle / 2. As shown in fig. 6, in the second projection matrix the position information of the virtual camera is taken as the second position information 31, which includes information such as the second position point and the second pitch angle of the virtual camera. In the second projection matrix, the second position point is located at the center of the image model, i.e., the spherical center of the sphere. As can be seen in fig. 6, the second position point is located inside the image model 20, at the position of the sphere center. A viewing frustum is formed between the third near plane 32 and the third far plane 33.
The first preset time is the time required to switch from the first image under the large viewing angle to the second image under the small viewing angle; for example, the first preset time is set to 1 second, 1.5 seconds, and so on.
The first near plane 11, the second near plane 22 and the third near plane 32 may represent the same near plane or different near planes, and the first far plane 12, the second far plane 23 and the third far plane 33 may represent the same far plane or different far planes. The embodiment of the present application takes as an example the case where the first near plane 11, the second near plane 22 and the third near plane 32 represent the same near plane, and the first far plane 12, the second far plane 23 and the third far plane 33 represent the same far plane.
And 104, determining a first transition projection matrix corresponding to the movement of the virtual camera from the first position information to the second position information within a first preset time.
Specifically, determining interpolation position information corresponding to the fact that the virtual camera moves from the first position information to the second position information within a first preset time; and determining a corresponding first transition projection matrix according to the interpolation position information. Wherein each interpolated position information determines a corresponding first transition projection matrix.
The interpolation position information corresponding to the movement of the virtual camera from the first position information to the second position information within the first preset time is determined through preset interpolation functions. Since the position information includes a position point and a pitch angle, the position point and the pitch angle are processed separately.
Specifically, the step of determining the interpolation position information corresponding to the movement of the virtual camera from the first position information to the second position information within the first preset time includes: determining an interpolation position point corresponding to the virtual camera moving from the first position point to the second position point within a first preset time; determining an interpolation pitch angle corresponding to the movement of the virtual camera from the first pitch angle to the second pitch angle within a first preset time; and taking the interpolation position points and the interpolation pitch angles which correspond one to one as interpolation position information.
Specifically, the interpolation position points corresponding to the movement of the virtual camera from the first position point to the second position point within the first preset time are determined through a preset position interpolation function. As shown in fig. 7, the first preset time t is divided into t1, t2, t3, t4 and t5 on the time axis, and the determined interpolation position points include 5 points P1, P2, P3, P4 and P5. That is, the interpolation position point determined for time t1 is P1, the interpolation position point determined for time t2 is P2, ..., and the interpolation position point determined for time t5 is P5.
Specifically, the interpolation pitch angles corresponding to the movement of the virtual camera from the first pitch angle to the second pitch angle within the first preset time are determined through a preset angle interpolation function. The number of determined interpolation pitch angles is the same as the number of determined interpolation position points, and they correspond one to one. As shown in fig. 7, the determined interpolation pitch angles include 5 angles θ1, θ2, θ3, θ4 and θ5; the interpolation pitch angle corresponding to interpolation position point P1 is θ1, ..., and the interpolation pitch angle corresponding to interpolation position point P5 is θ5.
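The following sketch illustrates one possible pair of interpolation functions: simple linear interpolation of the position point and of the pitch angle over the first preset time. The linear form is an assumption for illustration; the description above only requires some preset interpolation functions, and any easing curve could serve instead.

#include <glm/glm.hpp>

struct CameraPose {
    glm::vec3 position;   // interpolation position point (P1 .. P5 in fig. 7)
    float pitch;          // interpolation pitch angle (θ1 .. θ5 in fig. 7)
};

// Linearly interpolate both quantities at a fraction of the preset time.
CameraPose interpolatePose(const glm::vec3& startPos, const glm::vec3& endPos,
                           float startPitch, float endPitch,
                           float elapsed, float presetTime) {
    float t = glm::clamp(elapsed / presetTime, 0.0f, 1.0f);
    CameraPose pose;
    pose.position = glm::mix(startPos, endPos, t);     // position interpolation
    pose.pitch    = glm::mix(startPitch, endPitch, t); // pitch interpolation
    return pose;
}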
A corresponding first transition projection matrix is determined from each piece of interpolation position information: each piece of interpolation position information determines one first transition projection matrix. As shown in fig. 7, the determined first transition projection matrices are M1, M2, M3, M4 and M5, where M1 is determined from P1 and θ1. It should be noted that the dotted lines in fig. 7 only separate the corresponding data for display, to avoid the confusion of their being connected together, and have no other role.
Specifically, the step of determining a corresponding first transition projection matrix according to each interpolation position information includes: determining a visual angle matrix according to the interpolation position point and the interpolation pitch angle in each interpolation position information; acquiring a model matrix and a perspective matrix; and determining a corresponding first transition projection matrix according to the model matrix, the perspective matrix and the visual angle matrix.
In the embodiment of the present application, the world coordinate system is taken to be a right-hand coordinate system. The first pitch angle, the interpolation pitch angle, the second pitch angle, and so on are absolute angles, and roll, yaw and pitch also represent absolute angles, so roll, yaw and pitch are used to represent the corresponding absolute angles. Here, pitch represents rotation about the Y axis, also called the yaw angle; yaw represents rotation about the X axis, also called the pitch angle; and roll represents rotation about the Z axis, also called the roll angle. In the embodiment of the present application, neither the yaw angle pitch nor the roll angle roll changes; what changes is the pitch angle yaw. The default yaw angle pitch is 90 degrees, which ensures that the virtual camera always faces the direction pointed to by the Z axis.
How is the view angle matrix determined from the interpolation position point and interpolation pitch angle in each piece of interpolation position information? The pose of the virtual camera is determined by three parameters: camera_pos, the position point of the virtual camera; camera_front, the orientation of the virtual camera; and camera_up, which is perpendicular to the orientation of the virtual camera. For the first position information, the values of camera_pos, camera_front, camera_up, etc. are one set of determined values; for the second position information they are another set of determined values; for a piece of interpolation position information, camera_pos is the interpolation position point, which is already determined, and the interpolation pitch angle is also already determined. When moving from the first position information to the second position information, camera_pos changes and the pitch angle of the virtual camera also changes, resulting in a change of the view angle matrix.
Specifically, the step of determining the view angle matrix according to the interpolated position point and the interpolated pitch angle in each piece of interpolated position information includes: taking the interpolated pitch angle as the pitch angle of the virtual camera, and acquiring the yaw angle of the virtual camera; updating the orientation vector of the virtual camera according to the interpolation position point, the yaw angle and the pitch angle; the view angle matrix is updated according to the orientation vector.
Fig. 8 is a schematic diagram of determining the orientation of the virtual camera according to an embodiment of the present application. Point A is the position camera_pos of the virtual camera (e.g., an interpolation position point), and AB is the orientation camera_front of the virtual camera, where the coordinates of point B are (x, y, z). It should be noted that the virtual camera is oriented along the ray AB, so the length of AB may be any value; for ease of calculation, the length of AB is assumed to be 1, and the yaw angle pitch and pitch angle yaw (e.g., the interpolation pitch angle) are known. The coordinates of point B may be calculated according to formula (1), formula (2) and formula (3), thereby obtaining the value of the orientation camera_front of the virtual camera.
x = cos(yaw) × cos(pitch)    (1)

y = sin(yaw)    (2)

z = cos(yaw) × sin(pitch)    (3)
After the orientation camera_front of the virtual camera is calculated, the value of camera_up may be further calculated.
Since camera_front and camera_up define a plane, and the control operation corresponds to tilting up and down about the Y axis, the point (0, 1, 0) must lie in the plane defined by camera_front and camera_up. A transition vector up_help may be set to help calculate the value of camera_up; let up_help be (0, 1, 0).
The right vector right of the virtual camera is obtained from the transition vector up_help and the calculated orientation vector camera_front of the virtual camera. Specifically, the transition vector up_help is cross-multiplied with the orientation vector camera_front and the result is normalized to obtain the right vector right; by the principle of the cross product, the obtained right vector right is perpendicular to the orientation camera_front of the virtual camera. For example: glm::vec3 right = glm::normalize(glm::cross(up_help, camera_front)), where glm::cross represents the cross product. Then the value of camera_up is obtained from the right vector right and the orientation vector camera_front: specifically, the orientation vector camera_front is cross-multiplied with the right vector right and the result is normalized, e.g., camera_up = glm::normalize(glm::cross(camera_front, right)). By the principle of the cross product, the resulting camera_up is perpendicular to the orientation camera_front of the virtual camera.
After camera_pos, camera_front and camera_up are obtained, the view angle matrix is determined from them. Specifically, the lookAt function is called: view = glm::lookAt(camera_pos, camera_pos + camera_front, camera_up), which yields the view angle matrix (glm::lookAt expects a target point, so the orientation vector is added to the position).
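Putting the above steps together, a minimal sketch of the per-interpolation view-matrix update might look as follows. Angle units (radians) and the degree conversion are assumptions; formulas (1) to (3) appear as the camera_front computation.

#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Compute camera_front from the fixed yaw angle "pitch" and the interpolated
// pitch angle "yaw" (the naming convention of this description), derive
// camera_up via the up_help trick, and build the view angle matrix.
glm::mat4 updateViewMatrix(const glm::vec3& camera_pos,
                           float pitch /* fixed yaw angle, default 90 deg */,
                           float yaw   /* interpolated pitch angle */) {
    glm::vec3 camera_front(std::cos(yaw) * std::cos(pitch),   // formula (1)
                           std::sin(yaw),                     // formula (2)
                           std::cos(yaw) * std::sin(pitch));  // formula (3)
    camera_front = glm::normalize(camera_front);

    glm::vec3 up_help(0.0f, 1.0f, 0.0f);                      // transition vector
    glm::vec3 right = glm::normalize(glm::cross(up_help, camera_front));
    glm::vec3 camera_up = glm::normalize(glm::cross(camera_front, right));

    // lookAt expects a target point, hence position + orientation.
    return glm::lookAt(camera_pos, camera_pos + camera_front, camera_up);
}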
Thus, the view angle matrix is determined according to the interpolation position points and the interpolation pitch angles in the interpolation position information, and each piece of interpolation position information determines one view angle matrix. After the visual angle matrix is determined, a model matrix and a perspective matrix are obtained; and determining a corresponding first transition projection matrix according to the model matrix, the perspective matrix and the visual angle matrix. Therefore, the corresponding first transition projection matrix in the first display control is determined according to the interpolation position information.
From the time the first switching operation based on the first image is detected to the time the first transition projection matrices are determined, two threads are involved. One is the main (UI) thread, used to capture the gesture to determine whether it is the first switching operation, to acquire the first position information, the second position information and the first preset time, to determine the interpolation position information, and so on. The other is the GL thread, with a refresh rate of 60 frames per second, which generates the first transition projection matrices from the interpolation position information. The two threads are processed separately to improve the efficiency and speed of processing.
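As an illustration only (the description gives no code for this thread split), the following sketch shows one way the UI thread could hand the latest interpolated pose to the GL thread; it reuses CameraPose and updateViewMatrix from the sketches above, and the synchronization scheme is an assumption.

#include <atomic>
#include <mutex>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

struct SharedPose {
    std::mutex lock;
    CameraPose pose;               // written by the UI thread
    std::atomic<bool> dirty{false};
};

// UI thread: called while handling the switching gesture.
void onUiTick(SharedPose& shared, const CameraPose& interpolated) {
    std::lock_guard<std::mutex> guard(shared.lock);
    shared.pose = interpolated;
    shared.dirty = true;
}

// GL thread: called once per frame (about every 16.7 ms at 60 fps).
void onGlFrame(SharedPose& shared) {
    CameraPose pose;
    {
        std::lock_guard<std::mutex> guard(shared.lock);
        if (!shared.dirty) return;   // nothing new to render this frame
        pose = shared.pose;
        shared.dirty = false;
    }
    glm::mat4 view = updateViewMatrix(pose.position,
                                      glm::radians(90.0f), pose.pitch);
    // ... combine with the model and perspective matrices, then render.
    (void)view;
}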
And 105, generating a first transition image corresponding to the first transition projection matrix according to the first transition projection matrix, the image model and the image data, and displaying the first transition image in the first display control.
Specifically, the step of generating the first transition image corresponding to the first transition projection matrix according to the first transition projection matrix, the image model and the image data includes: copying the first transition projection matrix, the image data and the image model into the GPU, so as to generate the first transition image under the transition view angle corresponding to the first transition projection matrix. Specifically, the CPU transmits the vertices of the image model to the vertex shader and copies the texture coordinates of the image model to the fragment shader; the texture units corresponding to the texture coordinates are determined according to the image data, and the GPU renders the first transition image under the transition view angle.
The transition view angle refers to the view angle corresponding to each interpolation position point and each interpolation pitch angle while the virtual camera moves from the first position information to the second position information, i.e., while its position point moves from the first position point to the second position point and its pitch angle moves from the first pitch angle to the second pitch angle. The interpolation position points may include some interpolation position points outside the image model and some inside the image model.
106, when the virtual camera moves to the second position information, generating a second image under a small viewing angle according to the second projection matrix, the image model and the image data, and displaying the second image in the first display control, so that the first image under the large viewing angle displayed by the first display control is switched to the second image under the small viewing angle.
Specifically, the step of generating the second image under the small viewing angle according to the second projection matrix, the image model and the image data includes: copying the second projection matrix, the image data and the image model into the GPU, so as to generate the second image under the small viewing angle. Specifically, the CPU transmits the vertices of the image model to the vertex shader and copies the texture coordinates of the image model to the fragment shader; the texture units corresponding to the texture coordinates are determined according to the image data, and the GPU renders the second image under the small viewing angle.
The small viewing angle refers to a viewing angle at which local image data can be seen in the field of view after rendering. It can be understood that, at the small viewing angle, the virtual camera is placed inside the image model, as seen in fig. 6, so that a corresponding local planar image of the projected image model is seen. In this step, the second image under the small viewing angle is obtained, so that the user can understand the content of the image data locally (at the small viewing angle), improving the efficiency of understanding the content of the image data; moreover, the second image under the small viewing angle can be displayed in the whole first display control, so that the user sees a larger second image, improving user experience.
In this way, the first image displayed in the first display control under the large viewing angle is switched to the second image under the small viewing angle, and the whole switching process from the first image to the second image can be seen through the first transition images, realizing smooth switching between images displayed under different viewing angles and improving user experience.
The first image and the second image are projections of the same image model under the large viewing angle and the small viewing angle, obtained using the same texture (image data) mapping. The first image under the large viewing angle lets the user understand the image data as a whole, and the second image under the small viewing angle lets the user understand it locally, realizing detailed display of the image data. However, when the user sees the second image, it is not known which part of the (whole) first image the currently displayed second image corresponds to, degrading the user experience.
Fig. 9 is a schematic flowchart of an image displaying method according to an embodiment of the present application. The image display method is applied to electronic equipment, and comprises the following steps:
and 201, acquiring image data, a first projection matrix, a second projection matrix and an image model collected by a fisheye camera.
And 202, processing the image data into a first image under a large viewing angle according to the first projection matrix and the image model, and displaying the first image in a first display control of the data display interface.
203, when a first switching operation based on a first image is detected, acquiring first position information of a corresponding virtual camera in a first projection matrix, second position information of a corresponding virtual camera in a second projection matrix, and a first preset time; and acquiring a second display control and a third display control.
The second display control is displayed on the first display control, the third display control is displayed on the second display control, and the second display control and the third display control are both smaller than the first display control. Their sizes are set smaller than that of the first display control to prevent them from blocking the image displayed on the first display control on the one hand, and to highlight the image displayed on the first display control on the other.
And 204, displaying a second display control and a third display control at a preset position of the data display interface, and displaying the first image on the second display control.
In order to reduce the influence of the second display control on the image displayed by the first display control, the second display control is displayed at a preset position of the data display interface. For example, if the first display control occupies the whole screen of the electronic device, the second display control is displayed at a corner of the first display control, and correspondingly the preset position is that corner of the first display control; it may also be another location of the data display interface.
And 205, determining a first transition projection matrix corresponding to the movement of the virtual camera from the first position information to the second position information within a first preset time.
And 206, generating a first transition image corresponding to the first transition projection matrix according to the first transition projection matrix, the image model and the image data, and displaying the first transition image in the first display control.
And 207, determining first transition position information of the first transition image in the first image, and highlighting the first transition position information on the third display control.
Wherein the first transition position information indicates a position of the first transition image in the first image.
Specifically, the step of determining first transition position information of the first transition image in the first image includes: determining a transition position area in the image model corresponding to the first transition image according to the first transition projection matrix and the image model; and processing the transition position area in a preset mode to obtain a transition position image, wherein the transition position image represents first transition position information of the first transition image in the first image, namely the transition position image represents the position of the first transition image in the first image.
It is to be understood that the first image, the first transition image, or the second image determined according to the projection matrix (the first projection matrix, the first transition projection matrix, and the second projection matrix, respectively) and the image model in the embodiment of the present application is an image obtained by the imaging principle of perspective projection. As shown in fig. 4, the projection of a point in the image model between the first near plane 11 and the first far plane 12 can be seen in our field of view.
According to the imaging principle of perspective projection, the part visible in the field of view consists of the vertices of the image model multiplied by the projection matrix, normalized and clipped on the near plane, and finally displayed by texture mapping. Therefore, to determine the transition position area in the image model corresponding to the first transition image, the problem can be transformed, by reverse thinking, into: determining which vertices of the image model can be projected onto the near plane of the first transition projection matrix. After these vertices are determined, the area corresponding to them is taken as the transition position area, and the texture coordinates corresponding to the transition position area are highlighted. Further, which vertices of the image model can be projected onto the near plane of the first transition projection matrix can be determined by the first transition projection matrix and the image model.
Specifically, the step of determining a transition position region in the image model corresponding to the first transition image according to the first transition projection matrix and the image model includes: determining a transition vertex projected to a near plane corresponding to the first transition projection matrix from the vertexes of the image model according to the first transition projection matrix and the image model; and taking the region corresponding to the transition vertex as a transition position region in the image model corresponding to the first transition image. The region corresponding to the transition vertex is understood as the region in which the transition vertex is located. The transition position region refers to a three-dimensional region formed by transition vertices in the image model, i.e., a three-dimensional region formed by three-dimensional points.
Transition vertices are understood as the vertices of the image model that can be projected onto the near plane of the first transition projection matrix. Specifically, the step of determining, from the vertices of the image model, the transition vertices projected onto the near plane corresponding to the first transition projection matrix according to the first transition projection matrix and the image model may be performed by the CPU, and specifically includes: traversing each vertex of the image model, and determining from them the transition vertices projected onto the near plane corresponding to the first transition projection matrix.
The step of determining from each vertex the transition vertices projected onto the near plane corresponding to the first transition projection matrix includes: determining the projected coordinate of each vertex according to the first transition projection matrix, for example, multiplying each vertex of the image model by the first transition projection matrix to obtain its projected coordinate; detecting whether the projected coordinate of each vertex is within the range of the near plane corresponding to the first transition projection matrix; if so, determining that the vertex is a transition vertex; if not, determining that the vertex is a non-transition vertex. The projection of a transition vertex onto the near plane of the first transition projection matrix is visible to the user, while the projection of a non-transition vertex is not.
Specifically, if the image model is divided into 180 circles according to longitude and 30 vertices are allocated to each circle, the CPU traverses each vertex in the image model, that is, 180 × 30 vertices in total, and for each vertex determines whether it is a transition vertex according to the first transition projection matrix. Specifically, the vertex coordinates are multiplied by the first transition projection matrix to obtain the projected coordinate (x1, y1, z1); if the projected coordinate satisfies -1 ≤ x1 ≤ 1 and -1 ≤ y1 ≤ 1, the projected coordinate lies within the range of the near plane corresponding to the first transition projection matrix, and the vertex is determined to be a transition vertex; otherwise, the vertex is determined to be a non-transition vertex. After the transition vertices are determined, the region corresponding to the transition vertices is taken as the transition position region in the image model corresponding to the first transition image. It should be noted that z1 need not be tested here: the near plane is two-dimensional, so all z-axis coordinates on it are equal. z1 can subsequently be used as the depth of field, so as to achieve the perspective effect in which near objects appear large and distant objects appear small.
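For concreteness, the following is a minimal sketch of this CPU-side test (not the patent's own code), assuming a NumPy representation of the model vertices, a 4×4 transition projection matrix applied to column vectors, and a perspective divide by the homogeneous w component before the range check:

```python
import numpy as np

def find_transition_vertices(vertices, transition_proj):
    """Return the model vertices that project into the near-plane range.

    vertices:        (N, 3) array of image-model vertex positions,
                     e.g. 180 circles x 30 points = 5400 vertices.
    transition_proj: (4, 4) first transition projection matrix.
    """
    transition = []
    for v in vertices:
        # Multiply the vertex (in homogeneous coordinates) by the matrix.
        x, y, z, w = transition_proj @ np.append(v, 1.0)
        x1, y1 = x / w, y / w  # normalize (perspective divide)
        # Transition vertex iff -1 <= x1 <= 1 and -1 <= y1 <= 1;
        # z1 = z / w is not tested, it only serves as the depth later.
        if -1.0 <= x1 <= 1.0 and -1.0 <= y1 <= 1.0:
            transition.append(v)
    return np.asarray(transition)
```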
Specifically, the step of determining, from the vertices of the image model, the transition vertices projected into the near plane corresponding to the first transition projection matrix according to the first transition projection matrix and the image model may also be performed by a GPU, and specifically includes the following steps: the CPU obtains a first transition projection matrix and an image model; sending the first transition projection matrix and the image model to a GPU; and the GPU determines a transition vertex projected to the near plane corresponding to the first transition projection matrix from the vertices of the image model according to the first transition projection matrix and the image model. And after the transition vertex is determined, taking the region corresponding to the transition vertex as a transition position region in the image model corresponding to the first transition image.
It should be noted that, if the step of determining the transition vertices projected onto the near plane corresponding to the first transition projection matrix from the vertices of the image model according to the first transition projection matrix and the image model is implemented by the GPU, the GPU computes the projected coordinates of the image model's vertices in matrix form, which greatly increases the processing speed and reduces the power consumption of the mobile terminal. It can be understood that, if the CPU were used for the calculation, it would have to traverse each vertex in the image model, that is, 180 × 30 vertices, and compute each projected coordinate one vertex at a time according to the first transition projection matrix; adopting the GPU therefore increases the processing speed and reduces the power consumption of the mobile terminal. Moreover, the CPU is not efficient at floating-point calculation and introduces larger errors, whereas the GPU is specialized for floating-point operations, so both the efficiency and the accuracy of the processing are greatly improved.
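The per-vertex loop above can be replaced by one batched matrix product, which is essentially the "matrix mode" computation the GPU performs; a sketch of that vectorized form, again with illustrative NumPy conventions rather than the patent's actual shader code:

```python
import numpy as np

def find_transition_mask(vertices, transition_proj):
    """Vectorized variant: project all N vertices in one matrix product and
    return a boolean mask marking the transition vertices."""
    n = vertices.shape[0]
    homo = np.hstack([vertices, np.ones((n, 1))])   # (N, 4) homogeneous coords
    clip = homo @ transition_proj.T                 # all projections at once
    ndc = clip[:, :2] / clip[:, 3:4]                # perspective divide -> (x1, y1)
    return np.all(np.abs(ndc) <= 1.0, axis=1)       # inside the near-plane range
```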
After determining a transition position area in the image model corresponding to the first transition image according to the first transition projection matrix and the image model, processing the transition position area in a preset mode to obtain a transition position image, wherein the transition position image represents first transition position information of the first transition image in the first image.
Specifically, the step of processing the transition position region in a preset manner to obtain the transition position image includes: determining the texture coordinates corresponding to the transition vertices; and copying the texture coordinates into the GPU, so that the GPU processes (i.e., renders) the transition position area in the preset manner according to the texture coordinates to obtain the transition position image. That is, after the CPU has calculated the transition vertices, it determines the texture coordinates corresponding to them and copies those texture coordinates into the GPU, so that the GPU processes the transition position area corresponding to the transition vertices in the preset manner, yielding the transition position image, which represents the position of the first transition image in the first image.
The step of processing the transition position area in a preset mode according to the texture coordinates to obtain a transition position image comprises the following steps: acquiring a first preset texture and a first preset transparency, wherein the first preset texture comprises a preset color or a preset picture; and rendering a transition position area in the first window by using the GPU according to the first preset texture, the first preset transparency and the texture coordinate to obtain a transition position image. Specifically, the texture corresponding to the texture coordinate is set as a first preset texture, and the transparency of the first preset texture is set as a first preset transparency; and rendering the transition position area according to the set texture by utilizing the GPU. In this way, the transition position area is rendered into the first preset texture, and the displayed transparency is the first preset transparency, so that the purpose of highlighting the transition position image on the third display control is achieved.
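As a minimal sketch of this highlighting step (not the patent's code), the blend below treats the preset transparency as an alpha weight and uses a uniform preset color in place of a texture; the mask representation, function name, and default values are illustrative assumptions:

```python
import numpy as np

def render_transition_region(first_image, region_mask,
                             preset_rgb=(1.0, 0.0, 0.0), alpha=0.8):
    """first_image: (H, W, 3) float image in [0, 1];
    region_mask:  (H, W) bool mask of the transition position area."""
    out = first_image.copy()
    overlay = np.asarray(preset_rgb, dtype=float)
    # "Over" blend: preset texture weighted by its transparency over the image.
    out[region_mask] = alpha * overlay + (1.0 - alpha) * out[region_mask]
    return out
```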
Further, taking a region outside the transition position region as a non-transition position region, specifically, processing the transition position region in a preset manner according to the texture coordinates to obtain a transition position image, including:
acquiring a first preset texture, a first preset transparency and a second preset transparency, wherein the second preset transparency is smaller than the first preset transparency, and the first preset texture is a preset color or a preset picture; rendering a transition position area in the first window by using a GPU according to the first preset texture, the first preset transparency and the texture coordinate to obtain a transition position image; and rendering the non-transition position area into a second preset transparency by utilizing the GPU. The rendering of the transition position area by using the GPU according to the first preset texture, the first preset transparency and the texture coordinate specifically includes: and setting the texture corresponding to the texture coordinate as a first preset texture, setting the transparency of the first preset texture as a first preset transparency, rendering the transition position area by using the GPU according to the set texture, and rendering the transition position area as a first preset texture, wherein the displayed transparency is the first preset transparency.
It can be understood that, in order not to block the region of the first image corresponding to the non-transition position region, and thus to improve the display effect, the second preset transparency is set to be less than 0.8; for example, the second preset transparency may be set to 0. It should be noted that when the second preset transparency is 0, the rendered non-transition position area is fully transparent and cannot be seen by the human eye. In order to highlight the transition position image, the first preset transparency may be set within the interval (0, 1); and in order not to completely cover the area of the first image corresponding to the transition position area, thereby improving the user experience, the first preset transparency may be set to 0.8. The preset color may be set to red to highlight the transition position image in the third display control.
In some other cases, the step of taking a region outside the transition position region as a non-transition position region, specifically, processing the transition position region in a preset manner according to the texture coordinates to obtain a transition position image, includes:
acquiring a first preset texture, a first preset transparency, a second preset texture and a second preset transparency, wherein the second preset transparency is smaller than the first preset transparency, the first preset texture is a first preset color or a first preset picture, and the second preset texture is a second preset color or a second preset picture; rendering a transition position area in the first window by using a GPU according to a first preset texture, a first preset transparency and texture coordinates to obtain a transition position image so as to highlight the rendered transition position area; and rendering the non-transition position area in the first window by using the GPU according to the second preset texture and the second preset transparency.
The step of rendering the transition position area by using the GPU according to the first preset texture, the first preset transparency and the texture coordinates includes: setting the texture corresponding to the texture coordinates as the first preset texture and setting its transparency as the first preset transparency; and rendering the transition position area according to the set texture, so that the transition position area is rendered as the first preset texture displayed at the first preset transparency. The step of rendering the non-transition position area by using the GPU according to the second preset texture and the second preset transparency includes: setting the texture corresponding to the non-transition position area as the second preset texture and setting its transparency as the second preset transparency; and rendering the non-transition position area according to the set texture, so that the non-transition position area is rendered as the second preset texture displayed at the second preset transparency. The settings of the first preset transparency and the second preset transparency may refer to the description above; the first preset texture and the second preset texture may be the same or different. In this way, the transition position area is highlighted in the third display control, while the non-transition position area is rendered with the second preset texture at the second preset transparency.
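Extending the earlier sketch to this two-texture variant, both regions can be composited in one pass; the uniform-color "textures" and all names are again illustrative assumptions, with alpha2 < alpha1 so the transition region stands out:

```python
import numpy as np

def render_third_control(first_image, transition_mask,
                         tex1=(1.0, 0.0, 0.0), alpha1=0.8,
                         tex2=(0.0, 0.0, 1.0), alpha2=0.2):
    """Blend the transition region with the first preset texture and the
    non-transition region with the second preset texture."""
    out = first_image.copy()
    t1 = np.asarray(tex1, dtype=float)
    t2 = np.asarray(tex2, dtype=float)
    out[transition_mask] = alpha1 * t1 + (1 - alpha1) * out[transition_mask]
    out[~transition_mask] = alpha2 * t2 + (1 - alpha2) * out[~transition_mask]
    return out
```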
In this embodiment, the transition position area and the non-transition position area are rendered differently, which further highlights the transition position image and improves the user experience.
After the transition position image is obtained, the transition position image (and the rendered non-transition position area) is highlighted on the third display control, and the third display control is displayed at the preset position of the data display interface.
And 208, when the virtual camera moves to the second position information, generating a second image under a small visual angle according to the second projection matrix, the image model and the image data, and displaying the second image in the first display control.
And 209, determining the target position information of the second image in the first image, and highlighting the target position information on the third display control.
Wherein the target position information indicates a position of the second image in the first image.
Specifically, the step of determining the target position information of the second image in the first image comprises the following steps: determining a target position area in the image model corresponding to the second image according to the second projection matrix and the image model; and processing the target position area in a preset mode to obtain a target position image, wherein the target position image represents the target position information of the second image in the first image, namely the target position image represents the position of the second image in the first image.
Further, the step of determining a target location area in the image model corresponding to the second image according to the second projection matrix and the image model includes: determining a target vertex projected to a near plane corresponding to the second projection matrix from the vertexes of the image model according to the second projection matrix and the image model; and taking the area corresponding to the target vertex as a target position area in the image model corresponding to the second image.
Further, the step of processing the target position area in a preset mode to obtain a target position image includes: acquiring a first preset texture and a first preset transparency, wherein the first preset texture comprises a preset color or a preset picture; and rendering the target position area in the first window by using the GPU according to the first preset texture, the first preset transparency and the texture coordinate to obtain a target position image.
After the target location image is obtained, the target location image (and rendered non-target location area) is highlighted on the third display control. For the specific implementation in this step, please refer to the above description corresponding to the transition position image, which is not repeated herein.
For the steps in this embodiment that are the same as those described above, reference is made to the corresponding parts of the foregoing description.
The method further includes displaying the second display control and the third display control at a preset position of the data display interface, displaying the first image under the large viewing angle on the second display control, and displaying, on the third display control, the position of the first transition image or the second image in the first image, thereby helping the user quickly locate which part of the whole first image the currently displayed first transition image or second image corresponds to, and improving the user experience.
In one case, after the obtaining the second display control and the third display control, the image displaying method further includes: acquiring initial sizes and final sizes of the second display control and the third display control, wherein the initial sizes and the final sizes are smaller than the size of the first display control; determining a transition size of the second display control and the third display control to zoom from the initial size to the final size within a first preset time; the step of displaying the second display control at the preset position of the data display interface includes: displaying a second display control at a preset position of the data display interface according to the transition size; the step of displaying a third display control at a preset position of the data display interface includes: displaying a third display control at a preset position of the data display interface according to the transition size; and when the second image is generated, displaying the second display control and the third display control at the preset position of the data display interface according to the final size.
It will be appreciated that the initial size may be zero, or may be a smaller size; the final dimension is greater than the initial dimension. After the initial size and the final size are obtained, the step of determining the transition size in which the second display control and the third display control are scaled from the initial size to the final size within the first preset time may refer to the above method for determining the corresponding interpolation position point between the virtual camera moving from the first position point to the second position point within the first preset time, and refer to the above description specifically, which is not repeated herein.
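As a minimal sketch, assuming the simple linear interpolation implied by reusing the position-interpolation scheme (the function name and clamped progress variable are illustrative, not from the patent):

```python
def transition_size(initial_size, final_size, elapsed, preset_time):
    """Width/height of the second and third display controls `elapsed`
    seconds into a `preset_time`-second zoom from initial to final size."""
    t = min(max(elapsed / preset_time, 0.0), 1.0)   # progress clamped to [0, 1]
    (w0, h0), (w1, h1) = initial_size, final_size
    return (w0 + (w1 - w0) * t, h0 + (h1 - h0) * t)
```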
In this case, the process of the second display control and the third display control zooming from the initial size to the final size, i.e., a small-to-large change process, is simulated, and the final size is displayed at the end; presenting this small-to-large change through the second display control and the third display control improves the user experience.
Fig. 10a to 10d are schematic diagrams of data display interfaces provided in the embodiments of the present application. FIG. 10a is a schematic view of the data presentation interface displaying the first image; FIG. 10b is a schematic view of the corresponding data presentation interface during the switching process; FIG. 10c is another such view during the switching process; and fig. 10d is a schematic view of the corresponding data presentation interface when the switch to the second image has completed. After the first switching operation is detected, the data presentation interfaces are displayed in the chronological order of fig. 10a through 10d; after the second switching operation is detected, they are displayed in the reverse order, i.e., fig. 10d, 10c, 10b, then 10a.
In fig. 10a, a first image 51 is displayed on the first display control 41 of the data presentation interface, and the effect displayed by the first image 51 is consistent with the image data shot by the fisheye camera. The first switching operation based on the first image 51 is not detected in fig. 10a, and thus, the second display control and the third display control do not appear on the data presentation interface.
In fig. 10b, after the first switching operation based on the first image 51 is detected, the second display control 42 and the third display control 43 are displayed at the lower right corner of the data presentation interface, the second display control 42 is located above the first display control 41, and the third display control 43 is located above the second display control 42. A first transition image 52 is displayed on the first display control 41, a first image 51 is displayed in the second display control 42, and a transition position image 61 is displayed in the third display control 43, the transition position image 61 indicating the position of the first transition image 52 in the first image 51. Since fig. 10b is the corresponding data presentation interface shortly after the first switching operation is detected, the transition position image 61 is large, almost completely covering the first image 51 displayed on the second display control 42.
Fig. 10c is a data presentation interface subsequent to fig. 10b. In fig. 10c, the first transition image 52 has changed, resulting in a corresponding change in the transition position image 61 marking the first transition image 52 within the first image 51. In addition, it can be clearly seen in fig. 10c that the transition position image 61 has a certain transparency, and the first image 51 displayed in the corresponding region of the second display control 42 can be seen through the transition position image 61. It should be noted that the third display control 43 also contains a rendered non-transition position region; since the second transparency corresponding to the non-transition position region is set to 0 during rendering, the corresponding region of the first image 51 can be seen directly through it.
FIG. 10d is a schematic diagram of the corresponding data presentation interface when switching to the second image. In fig. 10d, a second image 53 is displayed on the first display control 41; in the lower right corner, the first image 51 is still displayed on the second display control 42, and the target position image 62 of the second image 53 in the first image 51 is displayed on the third display control 43; the target position image 62 likewise has a certain transparency. Similarly, the third display control 43 also contains a rendered non-transition position region, and since its second transparency is set to 0 during rendering, the corresponding region of the first image 51 can be seen directly through it.
As can be seen from fig. 10b to 10d, the sizes of the second display control 42 and the third display control 43 on the data presentation interface grow from small to large; in fig. 10d, when the second image 53 is displayed on the first display control 41, the second display control 42 and the third display control 43 have reached their final size.
Through the transition position image 61, the user can clearly see where, within the first image 51, the first transition image 52 currently displayed on the first display control 41 is located, while the corresponding area of the first image 51 remains visible, which speeds up locating the current first transition image. The same holds for the target position image 62. It should be noted that the embodiment of the present application is described by taking the second display control 42 and the third display control 43 of the same size as an example; in other embodiments, their sizes may differ.
Fig. 11 is a flowchart illustrating an image displaying method applied to an electronic device according to an embodiment of the present application, where the image displaying method includes the following steps.
301, when a second switching operation based on the second image is detected, acquiring third position information of the corresponding virtual camera in the first projection matrix, fourth position information of the corresponding virtual camera in the second projection matrix, and a second preset time.
When the second image is displayed in the first display control of the data presentation interface, a second switching operation based on the second image may be detected. The second switching operation is a switching operation set in advance. It may be a pinch (zoom-out) operation in which two fingers slide toward each other on the second image; a double-click operation; a multiple continuous touch operation; or the like. It may also be the triggering of a corresponding second switching control (switching from the small viewing angle to the large viewing angle), and so on. The second switching operation may be triggered through a corresponding touch operation of the user on the data presentation interface, through voice, and so on. When the second switching operation is triggered on the second image, the third position information of the corresponding virtual camera in the first projection matrix, the fourth position information of the corresponding virtual camera in the second projection matrix, and the second preset time are acquired.
Wherein the third position information is the same as the first position information; the fourth position information may or may not be the same as the second position information. If, after the switch from the large viewing angle to the small viewing angle, the second image displayed in the switched first display control has not been operated on, the fourth position information is the same as the second position information; if the second image has been operated on and, for example, the pitch angle in the fourth position information has changed, the fourth position information differs from the second position information. The second preset time is the time required for switching the second image under the small viewing angle back to the first image under the large viewing angle, and it may be set to be the same as or different from the first preset time.
And 302, determining a second transition projection matrix for the virtual camera to move from the fourth position information to the third position information within a second preset time.
The fourth position information includes a fourth position point and a fourth pitch angle, and the third position information includes a third position point and a third pitch angle. The fourth position point is located at the center of the image model, such as the spherical center of the sphere, and the third position point is located on the outer side of the image model.
Specifically, the determining of the second transition projection matrix is the same as the determining of the first transition projection matrix, and please refer to the corresponding description in the determining of the first transition projection matrix, which is not repeated herein.
303, generating a second transition image corresponding to the second transition projection matrix according to the second transition projection matrix, the image model and the image data, and displaying the second transition image in the first display control.
304, when the virtual camera moves to the third position information, generating a first image under a large viewing angle according to the first projection matrix, the image model and the image data, and displaying the first image in the first display control, so as to switch the second image under the small viewing angle displayed by the first display control to the first image under the large viewing angle.
Therefore, the second image displayed by the first display control under the small visual angle is switched to the first image displayed by the first display control under the large visual angle, the whole process of switching from the second image to the first image can be seen through the second transition image, smooth switching among different images displayed under different visual angles is achieved, and user experience is improved.
In one case, when generating the second transition image, the image presentation method further includes: determining second transition position information of the second transition image in the first image, and highlighting the second transition position information on the third display control. In one case, when generating the first image, the image presentation method further includes: hiding the second display control and the third display control displayed at the preset position of the data presentation interface. As can be appreciated, since the first image is consistent with the image captured by the fisheye camera, there is no need to display the second display control and the third display control to show the corresponding position information.
In one case, when a second switching operation based on the second image is detected, the image presentation method further includes: acquiring the final size and the initial size of the second display control and the third display control; determining a transition size of the second display control and the third display control scaled from the final size to the initial size within the second preset time; and displaying the second display control and the third display control at the preset position of the data display interface according to the transition size; and, when the first image is generated, displaying the second display control and the third display control at the preset position of the data display interface according to the initial size. In this case, the process of the second display control and the third display control zooming from the final size to the initial size, i.e., a large-to-small change process, is simulated, with the initial size displayed at the end; presenting this large-to-small change through the second display control and the third display control improves the user experience. In one case, when the first image is generated, if the initial size is not zero, the second display control and the third display control of the initial size are hidden at the preset position of the data display interface.
According to the method described in the above embodiments, the present embodiment will be further described from the perspective of an image display apparatus, which may be specifically implemented as an independent entity or integrated in an electronic device.
As shown in fig. 12, the image displaying apparatus includes modules for executing the image displaying method in the foregoing embodiment. The apparatus may include a first obtaining module 401, a first processing presentation module 402, a second obtaining module 403, a matrix determination module 404, a second processing presentation module 405, and a third processing presentation module 406.
The first obtaining module 401 is configured to obtain image data, a first projection matrix, a second projection matrix, and an image model, which are collected by the fisheye camera.
The first processing and displaying module 402 is configured to process the image data into a first image under a large viewing angle according to the first projection matrix and the image model, and display the first image in a first display control of the data display interface.
A second obtaining module 403, configured to obtain, when a first switching operation based on the first image is detected, first position information of a corresponding virtual camera in the first projection matrix, second position information of a corresponding virtual camera in the second projection matrix, and a first preset time.
The matrix determining module 404 is configured to determine a first transition projection matrix corresponding to the virtual camera moving from the first position information to the second position information within a first preset time.
A matrix determining module 404, configured to determine interpolation position information corresponding to movement of the virtual camera from the first position information to the second position information within a first preset time; and determining a corresponding first transition projection matrix according to the interpolation position information. Wherein each interpolated position information determines a corresponding first transition projection matrix.
The matrix determining module 404, when configured to determine interpolation position information corresponding to the virtual camera moving from the first position information to the second position information within the first preset time, is specifically configured to perform: determining an interpolation position point corresponding to the virtual camera moving from the first position point to the second position point within a first preset time; determining an interpolation pitch angle corresponding to the movement of the virtual camera from the first pitch angle to the second pitch angle within a first preset time; and taking the interpolation position points and the interpolation pitch angles which correspond one to one as interpolation position information.
The matrix determining module 404, when configured to perform the step of determining the corresponding first transition projection matrix according to each interpolation position information, is specifically configured to perform: determining a visual angle matrix according to the interpolation position point and the interpolation pitch angle in each interpolation position information; acquiring a model matrix and a perspective matrix; and determining a corresponding first transition projection matrix according to the model matrix, the perspective matrix and the visual angle matrix.
Determining a view angle matrix according to the interpolation position point and the interpolation pitch angle in each interpolation position information, wherein the step comprises the following steps: taking the interpolated pitch angle as the pitch angle of the virtual camera, and acquiring the yaw angle of the virtual camera; updating the orientation vector of the virtual camera according to the interpolation position point, the yaw angle and the pitch angle; the view angle matrix is updated according to the orientation vector.
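A minimal sketch of this view-angle-matrix update and the subsequent composition with the model and perspective matrices follows, assuming illustrative conventions the patent does not specify (angles in degrees, y-up right-handed axes, column vectors):

```python
import numpy as np

def view_matrix(position, yaw_deg, pitch_deg):
    """Look-at view matrix from an interpolated position point, the virtual
    camera's yaw angle, and the interpolated pitch angle."""
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    # Orientation (front) vector updated from yaw and pitch.
    front = np.array([np.cos(pitch) * np.cos(yaw),
                      np.sin(pitch),
                      np.cos(pitch) * np.sin(yaw)])
    front /= np.linalg.norm(front)
    eye = np.asarray(position, dtype=float)
    up = np.array([0.0, 1.0, 0.0])
    # Orthonormal camera basis: side, corrected up, negative front.
    s = np.cross(front, up); s /= np.linalg.norm(s)
    u = np.cross(s, front)
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -front
    view[:3, 3] = -view[:3, :3] @ eye   # translate world to camera space
    return view

def transition_projection(model, view, perspective):
    # First transition projection matrix = perspective * view-angle * model
    # (for column vectors), one such matrix per interpolated position.
    return perspective @ view @ model
```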
The second processing and displaying module 405 is configured to generate a first transition image corresponding to the first transition projection matrix according to the first transition projection matrix, the image model and the image data, and display the first transition image in the first display control.
And a third processing and displaying module 406, configured to generate a second image under a small viewing angle according to the second projection matrix, the image model, and the image data when the virtual camera moves to the second position information, and display the second image in the first display control.
In one case, as shown in fig. 13, the image presentation apparatus further includes: a display processing module 407, a first determining and displaying module 408, and a second determining and displaying module 409. The second obtaining module 403 is further configured to, when a first switching operation based on the first image is detected, obtain a second display control and a third display control, where the second display control is displayed on the first display control, the third display control is displayed on the second display control, and sizes of the second display control and the third display control are both smaller than a size of the first display control. The display processing module 407 is configured to display the second display control and the third display control at a preset position of the data display interface, and display the first image on the second display control. The first determining and presenting module 408 is configured to determine first transition position information of the first transition image in the first image when the first transition image is generated, and highlight the first transition position information on the third display control. And a second determining and presenting module 409, configured to determine, when the second image is generated, target position information of the second image in the first image, and highlight the target position information on the third display control.
In one case, the image display device further includes: a size determination module. The second obtaining module 403 is further configured to, when obtaining the second display control and the third display control, obtain initial sizes and final sizes of the second display control and the third display control, where both the initial sizes and the final sizes are smaller than the size of the first display control. And the size determining module is used for determining a transition size of the second display control and the third display control which are zoomed from the initial size to the final size in the first preset time. The display processing module 407 is further configured to display the second display control and the third display control at a preset position of the data display interface according to the transition size, and further configured to display the second display control and the third display control at a preset position of the data display interface according to the final size when the second image is generated.
In some cases, the second obtaining module 403 in the image presentation apparatus is further configured to obtain third position information of the corresponding virtual camera in the first projection matrix, fourth position information of the corresponding virtual camera in the second projection matrix, and a second preset time when a second switching operation based on the second image is detected. The matrix determining module 404 is further configured to determine a second transition projection matrix in which the virtual camera moves from the fourth position information to the third position information within a second preset time. The second processing and displaying module 405 is further configured to generate a second transition image corresponding to the second transition projection matrix according to the second transition projection matrix, the image model and the image data, and display the second transition image in the first display control. The third processing and displaying module 406 is further configured to generate a first image under a large viewing angle according to the first projection matrix, the image model and the image data when the virtual camera moves to the third position information, and display the first image in the first display control.
Further, the first determining and presenting module 408 is further configured to determine second transition position information of the second transition image in the first image, and highlight the second transition position information on the third display control.
Further, the size determination module is further configured to determine a transition size of the second display control and the third display control scaled from the final size to the initial size within the second preset time. The display processing module 407 is further configured to display the second display control and the third display control at the preset position of the data display interface according to the initial size when the first image is generated.
In one case, the display processing module 407 is further configured to hide the second display control and the third display control displayed at the preset position of the data presentation interface when the first image is generated.
In specific implementation, the above units/modules may be implemented as independent entities, or may be combined arbitrarily and implemented as one or several entities. The specific implementation processes of the above devices and units/modules, and the achieved beneficial effects may refer to the corresponding descriptions in the foregoing method embodiments, and for convenience and brevity of description, no further description is provided herein.
An electronic device according to an embodiment of the present application is further provided, as shown in fig. 14, which shows a schematic structural diagram of the electronic device according to an embodiment of the present application, specifically:
the electronic device may include components such as a processor 901 of one or more processing cores, memory 902 of one or more computer-readable storage media, Radio Frequency (RF) circuitry 903, a power supply 904, an input unit 905, and a display unit 906. Those skilled in the art will appreciate that the electronic device configurations shown in the figures do not constitute limitations of the electronic device, and may include more or fewer components than shown, or some components in combination, or a different arrangement of components. Wherein:
the processor 901 is a control center of the electronic device, and includes a central processing unit and a graphics processing unit, and the central processing unit is connected to the graphics processing unit. The cpu connects various parts of the entire electronic device through various interfaces and lines, and executes various functions of the electronic device and processes data by operating or executing software programs and/or modules stored in the memory 902 and calling data stored in the memory 902, thereby integrally monitoring the electronic device. Optionally, the central processor may include one or more processing cores; preferably, the central processor may integrate an application processor and a modem processor, wherein the application processor mainly handles operating systems, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the central processor. The graphic processor is mainly used for processing data transmitted by the central processing unit, such as rendering.
The memory 902 may be used to store software programs (computer programs) and modules, and the processor 901 executes various functional applications and data processing by operating the software programs and modules stored in the memory 902. The memory 902 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created according to use of the electronic device, and the like. Further, the memory 902 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory 902 may also include a memory controller to provide the processor 901 access to the memory 902.
The RF circuit 903 may be used for receiving and transmitting signals during information transmission and reception; in particular, it receives downlink information from a base station and hands it to the one or more processors 901 for processing, and transmits uplink data to the base station. In general, the RF circuitry 903 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuitry 903 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Message Service (SMS), and the like.
The electronic device further includes a power supply 904 (e.g., a battery) for supplying power to the various components, and preferably, the power supply 904 is logically connected to the processor 901 via a power management system, so that functions of managing charging, discharging, and power consumption are implemented via the power management system. The power supply 904 may also include any component of one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
The electronic device may further include an input unit 905, and the input unit 905 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. Specifically, in one particular embodiment, input unit 905 may include a touch-sensitive surface as well as other input devices. The touch-sensitive surface, also referred to as a touch display screen or a touch pad, may collect touch operations by a user (e.g., operations by a user on or near the touch-sensitive surface using a finger, a stylus, or any other suitable object or attachment) thereon or nearby, and drive the corresponding connection device according to a predetermined program. Alternatively, the touch sensitive surface may comprise two parts, a touch detection means and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 901, and can receive and execute commands sent by the processor 901. In addition, touch sensitive surfaces may be implemented using various types of resistive, capacitive, infrared, and surface acoustic waves. The input unit 905 may include other input devices in addition to a touch-sensitive surface. In particular, other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The electronic device may also include a display unit 906, which display unit 906 may be used to display information input by or provided to the user as well as various graphical user interfaces of the electronic device, which may be made up of graphics, text, icons, video, and any combination thereof. The Display unit 906 may include a Display panel, and may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch-sensitive surface may cover the display panel, and when a touch operation is detected on or near the touch-sensitive surface, the touch operation is transmitted to the processor 901 to determine the type of the touch event, and then the processor 901 provides a corresponding visual output on the display panel according to the type of the touch event. Although in the figures the touch sensitive surface and the display panel are shown as two separate components to implement input and output functions, in some embodiments the touch sensitive surface may be integrated with the display panel to implement input and output functions.
Although not shown, the electronic device may further include a camera (note that the camera here is different from the virtual camera described above, and the camera here refers to hardware), a bluetooth module, and the like, which are not described herein again. Specifically, in this embodiment, the processor 901 in the electronic device loads an executable file corresponding to a process of one or more application programs into the memory 902 according to the following instructions, and the processor 901 runs the application programs stored in the memory 902, so as to implement various functions as follows:
acquiring image data, a first projection matrix, a second projection matrix and an image model which are acquired by a fisheye camera; processing the image data into a first image under a large visual angle according to the first projection matrix and the image model, and displaying the first image in a first display control of a data display interface; when a first switching operation based on a first image is detected, acquiring first position information of a corresponding virtual camera in a first projection matrix, second position information of a corresponding virtual camera in a second projection matrix and first preset time; determining a first transition projection matrix corresponding to the virtual camera moving from the first position information to the second position information within a first preset time; generating a first transition image corresponding to the first transition projection matrix according to the first transition projection matrix, the image model and the image data, and displaying the first transition image in a first display control; when the virtual camera moves to the second position information, a second image under a small visual angle is generated according to the second projection matrix, the image model and the image data, and the second image is displayed in the first display control, so that the first image under the large visual angle displayed by the first display control is switched to the second image under the small visual angle.
The electronic device can implement the steps in any embodiment of the image displaying method provided in the embodiment of the present application, and therefore, the beneficial effects that can be achieved by any image displaying method provided in the embodiment of the present application can be achieved, which are detailed in the foregoing embodiments and will not be described herein again.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions (computer programs) which are stored in a computer-readable storage medium and loaded and executed by a processor, or by related hardware controlled by the instructions (computer programs). To this end, an embodiment of the present invention provides a storage medium, in which a plurality of instructions are stored, and the instructions can be loaded by a processor to execute the steps of any embodiment of the image displaying method provided in the embodiment of the present invention.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the storage medium can execute the steps in any image displaying method embodiment provided by the embodiment of the present invention, the beneficial effects that can be achieved by any image displaying method provided by the embodiment of the present invention can be achieved, which are detailed in the foregoing embodiments and will not be described herein again.
The image display method, the image display apparatus, the electronic device, and the storage medium provided in the embodiments of the present application are described in detail above, and a specific example is applied in the present application to explain the principle and the implementation of the present application, and the description of the above embodiments is only used to help understanding the method and the core idea of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. An image presentation method, comprising:
acquiring image data, a first projection matrix, a second projection matrix and an image model which are acquired by a fisheye camera;
processing the image data into a first image under a large visual angle according to the first projection matrix and the image model, and displaying the first image in a first display control of a data display interface;
when a first switching operation based on the first image is detected, acquiring first position information of a corresponding virtual camera in the first projection matrix, second position information of a corresponding virtual camera in the second projection matrix, and a first preset time; acquiring a second display control and a third display control;
displaying the second display control and the third display control at a preset position of the data display interface, and displaying the first image on the second display control;
determining a first transition projection matrix corresponding to the virtual camera moving from the first position information to the second position information within a first preset time;
generating a first transition image corresponding to the first transition projection matrix according to the first transition projection matrix, the image model and the image data, and displaying the first transition image in the first display control;
determining first transition location information of the first transition image in the first image, the first transition location information representing a location of the first transition image in the first image, and highlighting the first transition location information on the third display control;
when the virtual camera moves to the second position information, generating a second image under a small visual angle according to the second projection matrix, the image model and the image data, and displaying the second image in the first display control, so that the first image under the large visual angle displayed by the first display control is switched to the second image under the small visual angle;
determining target location information of the second image in the first image, and highlighting the target location information on the third display control, the target location information representing a location of the second image in the first image.
2. The image presentation method of claim 1, further comprising:
when a second switching operation based on the second image is detected, acquiring third position information of a corresponding virtual camera in the first projection matrix, fourth position information of a corresponding virtual camera in the second projection matrix, and a second preset time;
determining a second transitional projection matrix in which the virtual camera moves from the fourth position information to the third position information within a second preset time;
generating a second transition image corresponding to the second transition projection matrix according to the second transition projection matrix, the image model and the image data, and displaying the second transition image in the first display control;
when the virtual camera moves to the third position information, a first image under a large visual angle is generated according to the first projection matrix, the image model and the image data, and the first image is displayed in the first display control, so that the second image under the small visual angle displayed by the first display control is switched to the first image under the large visual angle.
3. The image presentation method of claim 2, further comprising:
when a second transition image is generated, determining second transition position information of the second transition image in the first image, and highlighting the second transition position information on a third display control, wherein the second transition position information represents the position of the second transition image in the first image;
and when the first image is generated, the second display control and the third display control are hidden and displayed at a preset position of the data display interface.
4. The image display method according to claim 1, wherein when the second display control and the third display control are acquired, the method further comprises:
acquiring initial sizes and final sizes of the second display control and the third display control, wherein the initial sizes and the final sizes are smaller than the size of the first display control;
determining a transition size of the second display control and the third display control to scale from the initial size to a final size within a first preset time;
the step of displaying the second display control and the third display control at a preset position of the data display interface includes: displaying the second display control and the third display control at a preset position of the data display interface according to the transition size;
and when the second image is generated, displaying the second display control and the third display control at a preset position of the data display interface according to the final size.
5. The image presentation method according to claim 1, wherein the step of determining first transition position information of the first transition image in the first image comprises:
determining a transition position area corresponding to the first transition image in the image model according to the first transition projection matrix and the image model;
and processing the transition position area in a preset mode to obtain a transition position image, wherein the transition position image represents first transition position information of the first transition image in the first image.
6. The image display method according to claim 5, wherein the step of determining the transition position area corresponding to the first transition image in the image model according to the first transition projection matrix and the image model comprises:
determining, from the vertices of the image model according to the first transition projection matrix, the transition vertices that project onto the near plane corresponding to the first transition projection matrix;
and taking the area corresponding to the transition vertices as the transition position area corresponding to the first transition image in the image model.
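A sketch of the vertex test this step implies, reading "projected onto the near plane" as "falls inside the clip volume after perspective division" (our reading, not claim wording):

```python
import numpy as np

def transition_vertices(vertices, transition_projection):
    """Return the image-model vertices whose projections fall inside the
    clip volume of the transition projection matrix."""
    kept = []
    for v in vertices:
        clip = transition_projection @ np.append(v, 1.0)  # homogeneous coords
        if clip[3] <= 0.0:
            continue                                      # behind the camera
        ndc = clip[:3] / clip[3]                          # perspective divide
        if np.all(np.abs(ndc) <= 1.0):                    # inside the frustum
            kept.append(v)
    return np.asarray(kept)

# Example with an identity "projection": vertices inside the unit cube are kept.
verts = np.random.uniform(-2.0, 2.0, size=(100, 3))
region = transition_vertices(verts, np.eye(4))  # spans the transition position area
```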
7. The image display method according to claim 1, wherein the step of determining a first transition projection matrix corresponding to the virtual camera moving from the first position information to the second position information within the first preset time comprises:
determining interpolation position information corresponding to the virtual camera moving from the first position information to the second position information within the first preset time;
and determining a corresponding first transition projection matrix according to each piece of interpolation position information.
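Claim 7 does not fix an interpolation scheme; a linear interpolation of the position information (the dictionary pose layout and example poses below are assumptions) might look like:

```python
import numpy as np

def interpolate_position(first_pos, second_pos, t):
    """One piece of interpolation position information at progress t in [0, 1]."""
    point = (1.0 - t) * np.asarray(first_pos["point"]) + t * np.asarray(second_pos["point"])
    pitch = (1.0 - t) * first_pos["pitch"] + t * second_pos["pitch"]
    return {"point": point, "pitch": pitch}

# Five interpolation steps across the first preset time.
p1 = {"point": [0.0, 0.0, 5.0], "pitch": -90.0}  # large-viewing-angle pose (assumed)
p2 = {"point": [0.0, 2.0, 1.0], "pitch": -30.0}  # small-viewing-angle pose (assumed)
steps = [interpolate_position(p1, p2, i / 4) for i in range(5)]
```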
8. The image display method according to claim 7, wherein the position information includes a position point and a pitch angle, and the step of determining a corresponding first transition projection matrix according to each piece of interpolation position information comprises:
determining a view angle matrix according to the interpolation position point and the interpolation pitch angle in each piece of interpolation position information;
acquiring a model matrix and a perspective matrix;
and determining the corresponding first transition projection matrix according to the model matrix, the perspective matrix and the view angle matrix.
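These matrices map naturally onto a conventional model-view-projection pipeline. The sketch below builds the view angle matrix from the position point and pitch angle alone (yaw and roll fixed at zero) and uses the standard OpenGL-style perspective matrix; both are simplifying assumptions rather than the patented method:

```python
import numpy as np

def view_angle_matrix(point, pitch_deg):
    """View angle matrix for a virtual camera at `point` with the given pitch."""
    p = np.radians(pitch_deg)
    rot_x = np.array([[1.0, 0.0,        0.0,        0.0],
                      [0.0, np.cos(p), -np.sin(p),  0.0],
                      [0.0, np.sin(p),  np.cos(p),  0.0],
                      [0.0, 0.0,        0.0,        1.0]])
    trans = np.eye(4)
    trans[:3, 3] = -np.asarray(point, dtype=float)  # move the world, not the camera
    return rot_x @ trans

def perspective_matrix(fov_y_deg, aspect, near, far):
    """Standard perspective matrix (OpenGL clip-space convention)."""
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    return np.array([[f / aspect, 0.0, 0.0,                         0.0],
                     [0.0,        f,   0.0,                         0.0],
                     [0.0,        0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
                     [0.0,        0.0, -1.0,                        0.0]])

model = np.eye(4)  # image model assumed at the origin, unscaled
view = view_angle_matrix([0.0, 1.0, 3.0], -30.0)
proj = perspective_matrix(60.0, 16.0 / 9.0, 0.1, 100.0)
first_transition_projection = proj @ view @ model
```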
9. An image display apparatus, comprising:
a first acquisition module, configured to acquire image data acquired by a fisheye camera, a first projection matrix, a second projection matrix and an image model;
a first processing and displaying module, configured to process the image data into a first image under a large viewing angle according to the first projection matrix and the image model, and to display the first image in a first display control of a data display interface;
a second acquisition module, configured to acquire, when a first switching operation based on the first image is detected, first position information of the virtual camera corresponding to the first projection matrix, second position information of the virtual camera corresponding to the second projection matrix, and a first preset time, and to acquire a second display control and a third display control;
a display processing module, configured to display the second display control and the third display control at a preset position of the data display interface, and to display the first image on the second display control;
a matrix determination module, configured to determine a first transition projection matrix corresponding to the virtual camera moving from the first position information to the second position information within the first preset time;
a second processing and displaying module, configured to generate a first transition image under a first transition viewing angle corresponding to the first transition projection matrix according to the first transition projection matrix, the image model and the image data, and to display the first transition image in the first display control;
a first determining and displaying module, configured to determine first transition position information of the first transition image in the first image when the first transition image is generated, and to highlight the first transition position information on the third display control, wherein the first transition position information represents the position of the first transition image in the first image;
a third processing and displaying module, configured to generate a second image under a small viewing angle according to the second projection matrix, the image model and the image data when the virtual camera moves to the second position information, and to display the second image in the first display control, so that the first image under the large viewing angle displayed in the first display control is switched to the second image under the small viewing angle;
and a second determining and displaying module, configured to determine target position information of the second image in the first image when the second image is generated, and to highlight the target position information on the third display control, wherein the target position information represents the position of the second image in the first image.
10. An electronic device, characterized in that the electronic device comprises: one or more processors; a memory; and one or more computer programs, wherein the processor is connected to the memory, and the one or more computer programs are stored in the memory and configured to be executed by the processor to perform the image display method of any one of claims 1 to 8.
CN202011136153.0A 2020-10-22 2020-10-22 Image display method and device and electronic equipment Active CN112017133B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011136153.0A 2020-10-22 2020-10-22 Image display method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011136153.0A 2020-10-22 2020-10-22 Image display method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN112017133A (en) 2020-12-01
CN112017133B CN112017133B (en) 2021-01-15

Family

ID=73527973

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011136153.0A Active CN112017133B (en) 2020-10-22 2020-10-22 Image display method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112017133B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113730905A (en) * 2021-09-03 2021-12-03 北京房江湖科技有限公司 Method and device for realizing free migration in virtual space
CN114900679A (en) * 2022-05-25 2022-08-12 安天科技集团股份有限公司 Three-dimensional model display method and device, electronic equipment and readable storage medium
CN114900679B (en) * 2022-05-25 2023-11-21 安天科技集团股份有限公司 Three-dimensional model display method and device, electronic equipment and readable storage medium
CN115158344A (en) * 2022-06-08 2022-10-11 上海集度汽车有限公司 Display method, display device, vehicle, and medium

Similar Documents

Publication Publication Date Title
US10055879B2 (en) 3D human face reconstruction method, apparatus and server
CN112017133B (en) Image display method and device and electronic equipment
WO2019184889A1 (en) Method and apparatus for adjusting augmented reality model, storage medium, and electronic device
CN107958480B (en) Image rendering method and device and storage medium
CN106325649B (en) 3D dynamic display method and mobile terminal
WO2019233229A1 (en) Image fusion method, apparatus, and storage medium
CN111813290B (en) Data processing method and device and electronic equipment
CN111833243B (en) Data display method, mobile terminal and storage medium
EP3561667B1 (en) Method for displaying 2d application in vr device, and terminal
CN110502293B (en) Screen capturing method and terminal equipment
CN111010512A (en) Display control method and electronic equipment
CN108701372B (en) Image processing method and device
CN111124227A (en) Image display method and electronic equipment
CN107516099B (en) Method and device for detecting marked picture and computer readable storage medium
CN110853128A (en) Virtual object display method and device, computer equipment and storage medium
CN109618055B (en) Position sharing method and mobile terminal
CN112308766B (en) Image data display method and device, electronic equipment and storage medium
CN112308767B (en) Data display method and device, storage medium and electronic equipment
CN112308768B (en) Data processing method, device, electronic equipment and storage medium
CN110070617B (en) Data synchronization method, device and hardware device
CN109104573B (en) Method for determining focusing point and terminal equipment
CN112181230A (en) Data display method and device and electronic equipment
CN115268817A (en) Screen-projected content display method, device, equipment and storage medium
CN109842722B (en) Image processing method and terminal equipment
CN112184543B (en) Data display method and device for fisheye camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant