CN116668661A - Image processing method, device, electronic equipment and storage medium - Google Patents

Image processing method, device, electronic equipment and storage medium

Info

Publication number
CN116668661A
Authority
CN
China
Prior art keywords
virtual camera
plane
determining
target
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310524311.7A
Other languages
Chinese (zh)
Inventor
Request not to publish name (请求不公布姓名)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Chengshi Wanglin Information Technology Co Ltd
Original Assignee
Beijing Chengshi Wanglin Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Chengshi Wanglin Information Technology Co Ltd filed Critical Beijing Chengshi Wanglin Information Technology Co Ltd
Priority to CN202310524311.7A priority Critical patent/CN116668661A/en
Publication of CN116668661A publication Critical patent/CN116668661A/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106: Processing image signals
    • H04N 13/122: Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The application provides an image processing method, an image processing apparatus, an electronic device and a storage medium, wherein the method comprises the following steps: when a virtual camera performs image acquisition on a panoramic sky box along an Nth shooting direction at a shooting point, detecting K target planes of the panoramic sky box within the field of view of the virtual camera; for each target plane, updating the target plane based on its corresponding high-definition map, and controlling the virtual camera to render the corresponding high-definition map at the display end; when the virtual camera rotates to the (N+1)th shooting direction, taking the (N+1)th shooting direction as the Nth shooting direction and returning to the step of detecting the K target planes of the panoramic sky box within the field of view of the virtual camera. The panoramic sky box is generated based on the panoramic map of the target electronic sand table, and the shooting point is located inside the panoramic sky box. The application enables the user to browse images at different levels of definition, performs image rendering in a targeted way, improves the image loading speed, and improves the browsing experience of the user.

Description

Image processing method, device, electronic equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image processing method, an image processing device, an electronic device, and a storage medium.
Background
In the house source (housing listing) transaction process, a user needs to learn about the house source and the cell (residential community) to which it belongs. Viewing houses online has the advantages of convenience and high efficiency, is favored by many users, and plays an increasingly large role in house-viewing scenarios.
At present, in online house-viewing scenarios, the panoramic materials displayed on the user side have only a single level of definition, so the user's visual experience of viewing houses online is poor. Moreover, when panoramic materials are loaded on the user side, the full data set is usually loaded, which leads to a long loading time, easily reduces the user's attention, and causes user churn.
Disclosure of Invention
In view of the foregoing, embodiments of the present application provide an image processing method, apparatus, electronic device, and storage medium that overcome or at least partially solve the foregoing problems.
In a first aspect, an embodiment of the present application provides an image processing method, including:
under the condition that a virtual camera performs image acquisition on a panoramic sky box along an Nth shooting direction at a shooting point, detecting K target planes in the panoramic sky box in a visual field range of the virtual camera, wherein K is an integer greater than or equal to 1;
updating the target plane based on the high-definition map corresponding to the target plane for each target plane, and controlling the virtual camera to render the high-definition map corresponding to the target plane at a display end so as to switch an original map corresponding to the target plane displayed by the display end into the high-definition map;
returning, when the virtual camera rotates to the (N+1)th shooting direction, to the step of detecting K target planes of the panoramic sky box within the field of view of the virtual camera, with the (N+1)th shooting direction taken as the Nth shooting direction;
wherein the panoramic sky box is generated based on matching a panoramic map corresponding to the target electronic sand table with a sky box model, and the shooting point is located inside the panoramic sky box.
In a second aspect, an embodiment of the present application provides an image processing apparatus including:
the detection module is used for detecting K target planes in the panoramic sky box in the visual field range of the virtual camera under the condition that the virtual camera performs image acquisition on the panoramic sky box in the nth shooting direction at the shooting point, wherein K is an integer greater than or equal to 1;
the first processing module is used for updating the target plane based on the high-definition map corresponding to the target plane for each target plane, and controlling the virtual camera to render the high-definition map corresponding to the target plane at the display end so as to switch the original map corresponding to the target plane displayed at the display end into the high-definition map;
the second processing module is used for, when the virtual camera rotates to the (N+1)th shooting direction, taking the (N+1)th shooting direction as the Nth shooting direction and returning it to the detection module for continued processing;
wherein the panoramic sky box is generated based on matching a panoramic map corresponding to the target electronic sand table with a sky box model, and the shooting point is located inside the panoramic sky box.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the image processing method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the image processing method as described in the first aspect above.
According to the technical scheme provided by the embodiments of the present application, when the virtual camera acquires images along a certain shooting direction, the target planes within the field of view of the virtual camera are determined, each target plane is updated based on its corresponding high-definition map, and the high-definition map of the target plane is rendered at the display end. After the virtual camera adjusts its shooting direction, the target planes within its field of view are re-determined, the re-determined target planes receive map updates, and their high-definition maps are rendered at the display end. In this way, the user can browse images at different levels of definition through the display end, which optimizes the user's visual experience; moreover, the targeted image rendering improves the image loading speed of the display end, attracting the user's attention and improving the browsing experience.
Drawings
Fig. 1 is a schematic diagram of an image processing method according to an embodiment of the present application;
fig. 2 shows a schematic diagram of the positional relationship between the shooting point and the four rays, together with the horizontal and vertical field angles of the virtual camera, according to an embodiment of the present application;
FIG. 3 is a flowchart showing an overall implementation of an image processing method according to an embodiment of the present application;
fig. 4 is a schematic diagram of an image processing apparatus according to an embodiment of the present application;
fig. 5 shows a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the present application, "a plurality" means two or more.
In various embodiments of the present application, it should be understood that the sequence numbers of the following processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
An embodiment of the present application provides an image processing method, as shown in fig. 1, including the following steps:
step 101, under the condition that a virtual camera performs image acquisition on a panoramic sky box along an Nth shooting direction at a shooting point, detecting K target planes in the panoramic sky box in a visual field range of the virtual camera, wherein K is an integer greater than or equal to 1, the panoramic sky box is generated by matching a panoramic map corresponding to a target electronic sand table with a sky box model, and the shooting point is located in the panoramic sky box.
In this embodiment, the virtual camera plays the role of the viewer's eye. The panoramic sky box is generated by matching a panoramic map corresponding to the target electronic sand table with a sky box model, and may be a regular hexahedral sky box or a spherical sky box. The target electronic sand table is the electronic sand table corresponding to the target physical space; the target physical space may include a main building and other buildings, and the target electronic sand table may include different building models. The target physical space may be a space including a plurality of cells, including a main cell and other cells, or may be the space corresponding to a single cell.
The shooting point is located inside the panoramic sky box, at a specific position such as its center. The shooting point may be located obliquely above a main building model (such as a main cell model), and in the initial state the virtual camera may shoot the main building model (such as the main cell) from the shooting point in a top-down view.
After the panoramic sky box is generated, the original map corresponding to the panoramic sky box can be rendered at the display end. The original map can be understood as a low-definition map; that is, the original map of the panoramic sky box rendered at the display end is the panoramic low-definition map corresponding to the panoramic sky box. The display end in this embodiment may be a PC or a mobile terminal.
When the virtual camera performs image acquisition on the panoramic sky box along the Nth shooting direction at the shooting point, the K target planes of the panoramic sky box within the field of view of the virtual camera are detected. The panoramic sky box may be a regular hexahedral sky box or a spherical sky box; a hexahedral sky box has 6 planes, and since a spherical sky box approximates a hexahedral one, it can also be regarded as corresponding to 6 planes. Detecting the K target planes within the field of view of the virtual camera amounts to screening, from the planes of the panoramic sky box, those planes that lie within the field of view of the virtual camera, and taking the screened planes as the target planes.
The detection of target planes of the panoramic sky box within the field of view of the virtual camera may be performed while the virtual camera is stationary, or upon the virtual camera rotating into a given state. That is, the detection timing may be when the virtual camera is in a stationary state or when it is in a rotating state.
Step 102, for each target plane, updating the target plane based on the high-definition map corresponding to the target plane, and controlling the virtual camera to render the high-definition map corresponding to the target plane at the display end so as to switch the original map corresponding to the target plane displayed at the display end into the high-definition map.
For each of the K target planes within the field of view of the virtual camera, the target plane may be updated based on its corresponding high-definition map; updating the target plane based on the high-definition map can be understood as overlaying a layer of high-definition map on that target plane of the panoramic sky box.
For planes of the panoramic sky box that are not within the field of view of the virtual camera, no map update is needed, and the original maps corresponding to those planes are retained.
For each target plane, after the target plane is updated based on its corresponding high-definition map, the virtual camera may be controlled to render that high-definition map at the display end, so that the original (low-definition) map of the target plane displayed at the display end is switched to the high-definition map. Switching the displayed original map to the high-definition map lets the user browse images at different levels of definition through the display end, optimizing the user's visual experience. Furthermore, rendering the high-definition map of a target plane only after its map has been updated improves the image loading speed of the display end compared with rendering the full set of maps, attracting the user's attention and improving the browsing experience.
Step 103, when the virtual camera rotates to the (N+1)th shooting direction, taking the (N+1)th shooting direction as the Nth shooting direction, and returning to the step of detecting K target planes of the panoramic sky box within the field of view of the virtual camera.
After the map update and display-end rendering in step 102 are completed, and once the virtual camera has rotated to the (N+1)th shooting direction, the virtual camera performs image acquisition on the panoramic sky box along the (N+1)th shooting direction at the shooting point. At this time, the (N+1)th shooting direction may be taken as the Nth shooting direction; then, for this updated Nth shooting direction, the K target planes of the panoramic sky box within the field of view of the virtual camera are detected and their maps updated, so that the planes within the field of view along the (N+1)th shooting direction (the updated Nth shooting direction) receive map updates and their high-definition maps are rendered at the display end.
Accordingly, after the map update and display-end rendering are completed for the (N+1)th shooting direction (the updated Nth shooting direction), when the virtual camera rotates to the (N+2)th shooting direction, the (N+2)th shooting direction may in turn be treated as the new current shooting direction, so that the target planes within the field of view of the virtual camera are detected for the (N+2)th shooting direction, and map update and display-end rendering are performed for them. And so on: each time the virtual camera changes its shooting direction, the target planes within its field of view must be re-detected for that direction, so that their maps are updated and rendered at the display end.
The virtual camera rotates according to a rotation instruction; when rotating, it can rotate around the horizontal axis (X axis) and the vertical axis (Y axis), and it remains at the shooting point while rotating. The rotation instruction is generated based on the user's operation on the display-end page; that is, the user's operation on the page controls the rotation of the virtual camera, the rotation changes the content within the camera's field of view, and the page content at the display end is updated accordingly. By detecting the target planes within the field of view of the virtual camera and updating their maps, the display end can present high-definition images.
When the user of the display end performs a zoom input on the page, the field angle of the virtual camera is changed accordingly, thereby scaling the page content at the display end.
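To make the interaction model concrete, the following TypeScript sketch models the camera state implied above; all names are hypothetical, and the zoom clamp range is an assumed choice rather than anything specified by this application.

```typescript
// Hypothetical camera state: the camera stays at the shooting point, so only
// its orientation and field angle change in response to user input.
interface CameraState {
  pitchDeg: number; // rotation about the horizontal (X) axis
  yawDeg: number;   // rotation about the vertical (Y) axis
  fovDeg: number;   // longitudinal field angle, changed by zoom input
}

// Apply one user interaction: drag deltas rotate the camera, and the zoom
// delta changes the field angle. The 20-120 degree clamp is an assumption.
function applyUserInput(
  cam: CameraState,
  dYawDeg: number,
  dPitchDeg: number,
  dZoomDeg: number,
): CameraState {
  return {
    pitchDeg: cam.pitchDeg + dPitchDeg,
    yawDeg: cam.yawDeg + dYawDeg,
    fovDeg: Math.min(120, Math.max(20, cam.fovDeg + dZoomDeg)),
  };
}
```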
According to the embodiments of the present application, when the virtual camera acquires images along a certain shooting direction, the target planes within its field of view are determined, each target plane is updated based on its corresponding high-definition map, and the high-definition map is rendered at the display end; after the virtual camera adjusts its shooting direction, the target planes within its field of view are re-determined, the re-determined target planes receive map updates, and their high-definition maps are rendered at the display end. This ensures that the user can browse images at different levels of definition through the display end, optimizing the visual experience; and the targeted image rendering improves the image loading speed of the display end, attracting the user's attention and improving the browsing experience.
The process of detecting target planes within the field of view of the virtual camera is described below. Detecting the K target planes of the panoramic sky box within the field of view of the virtual camera includes:
determining a lateral field angle of the virtual camera based on the aspect ratio of the virtual camera and the longitudinal field angle of the virtual camera;
determining four spatial coordinates based on the lateral field angle, the longitudinal field angle, and a first distance corresponding to the panoramic sky box;
determining four rays taking the shooting point as origin based on the four spatial coordinates;
and determining, for each plane of the panoramic sky box, whether the current plane is within the field of view of the virtual camera according to the four rays, so as to obtain the K target planes within the field of view of the virtual camera.
When detecting the K target planes of the panoramic sky box within the field of view of the virtual camera, the lateral field angle of the virtual camera may be determined based on the aspect ratio of the virtual camera and its longitudinal field angle (a known quantity); four spatial coordinates may then be determined based on the lateral field angle, the longitudinal field angle, and the first distance corresponding to the panoramic sky box, and four rays taking the shooting point as origin may be determined from the four spatial coordinates. The shooting point may be the coordinate origin of the three-dimensional space coordinate system, and may be the center point of the panoramic sky box.
After the four rays taking the shooting point as origin are determined, whether the current plane is within the field of view of the virtual camera is determined, for each plane of the panoramic sky box, according to the four rays, so as to screen out the planes of the panoramic sky box within the field of view of the virtual camera and obtain the K target planes.
In this embodiment, the lateral field angle can be determined from the known longitudinal field angle and the aspect ratio of the virtual camera; four spatial coordinates are determined from the lateral field angle, the longitudinal field angle, and the first distance corresponding to the panoramic sky box; four rays taking the shooting point as origin are determined from the four spatial coordinates; and the target planes within the field of view of the virtual camera are screened out based on the four rays. The field of view is thus determined by the four rays, and the planes within it are then identified.
The following describes a scheme for determining a lateral angle of view of the virtual camera based on an aspect ratio of the virtual camera and a longitudinal angle of view of the virtual camera, including:
performing radian conversion on one half of the longitudinal field angle to determine a first radian;
performing tangent function operation based on the first radian to determine a first parameter;
calculating the product of the first parameter and the aspect ratio to determine a second parameter;
performing arctangent operation based on the second parameter to determine a second radian;
and performing angle conversion on the second radian to determine a target angle, and determining the lateral field angle as 2 times the target angle.
For a virtual camera, the corresponding vertical field angle is an available quantity; the horizontal field angle introduced in this embodiment serves only as a calculation parameter, and in actual application the virtual camera itself uses only the vertical field angle.
When the horizontal field angle is determined from the vertical field angle, one half of the vertical field angle is first converted to radians to obtain the first radian: if the vertical field angle is 2α (in degrees), the first radian is απ/180. After the first radian is determined, a tangent operation is performed on it to obtain the first parameter, tan(απ/180). The second parameter is the product of the first parameter and the aspect ratio (w/h), i.e. tan(απ/180)·(w/h).
Once the second parameter is determined, an arctangent operation on it yields the second radian, arctan[tan(απ/180)·(w/h)]. After the second radian is determined, the target angle corresponding to the second radian is obtained by converting it to degrees, and the lateral field angle is 2 times the target angle.
The above process of calculating the lateral field angle 2β can be expressed by the following formula:

$$2\beta = 2\cdot\frac{180}{\pi}\cdot\arctan\left(\tan\left(\frac{\alpha\pi}{180}\right)\cdot\frac{w}{h}\right)$$

The derivation of this formula is as follows. Let d denote the distance from the shooting point to the face opposite the virtual camera (the first distance). From fig. 2,

$$\tan\left(\frac{\alpha\pi}{180}\right)=\frac{h/2}{d},\qquad \tan\left(\frac{\beta\pi}{180}\right)=\frac{w/2}{d},$$

so that

$$\tan\left(\frac{\beta\pi}{180}\right)=\tan\left(\frac{\alpha\pi}{180}\right)\cdot\frac{w}{h},$$

and hence

$$\beta = \frac{180}{\pi}\cdot\arctan\left(\tan\left(\frac{\alpha\pi}{180}\right)\cdot\frac{w}{h}\right).$$

Here w is the camera field width, h is the camera field height, and w/h is the aspect ratio of the virtual camera. Since α (one half of the longitudinal field angle) and w/h are known quantities, the lateral field angle can be determined; referring to fig. 2, α is one half of the longitudinal field angle and β is one half of the lateral field angle.
In the above embodiment, when the vertical field angle is known, the horizontal field angle can be computed from the vertical field angle and the aspect ratio, realizing the determination of the horizontal field angle from the relationship among the aspect ratio, horizontal field angle, and vertical field angle of the virtual camera.
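As a minimal sketch of the formula above (the function and variable names are illustrative, not from this application), the lateral field angle can be computed as follows:

```typescript
const degToRad = (deg: number): number => (deg * Math.PI) / 180;
const radToDeg = (rad: number): number => (rad * 180) / Math.PI;

// Lateral field angle 2*beta (degrees) from the longitudinal field angle
// 2*alpha (degrees) and the aspect ratio w/h, following the steps above:
// first radian -> first parameter -> second parameter -> second radian ->
// target angle -> lateral field angle.
function lateralFieldAngle(longitudinalFovDeg: number, aspect: number): number {
  const firstRadian = degToRad(longitudinalFovDeg / 2);
  const firstParameter = Math.tan(firstRadian);
  const secondParameter = firstParameter * aspect;
  const secondRadian = Math.atan(secondParameter);
  const targetAngle = radToDeg(secondRadian);
  return 2 * targetAngle;
}

// Example: a 60-degree longitudinal field angle at a 16:9 aspect ratio
// yields a lateral field angle of roughly 91.5 degrees.
console.log(lateralFieldAngle(60, 16 / 9));
```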
As an alternative embodiment, after the lateral field angle is determined, 4 spatial coordinates need to be calculated as calculation parameters. The process of determining the spatial coordinates based on the lateral field angle, the longitudinal field angle, and the first distance corresponding to the panoramic sky box is described below. In this embodiment, the panoramic sky box is a regular hexahedron or a sphere, and the first distance is one half of the edge length of the regular hexahedron or the radius of the sphere;
The determining four spatial coordinates based on the lateral view angle, the longitudinal view angle, and the first distance corresponding to the panoramic sky box includes:
after determining the first parameter based on the longitudinal field angle, determining the camera field height h of the virtual camera as twice the product of the first parameter and the first distance;
performing tangent function operation on the radian corresponding to one half of the lateral field angle to determine a third parameter, and determining the camera field width w of the virtual camera as twice the product of the third parameter and the first distance;
the four spatial coordinates are determined based on the camera field of view height h and the camera field of view width w.
In this embodiment, when the panoramic sky box is a regular hexahedral sky box, the first distance corresponding to the panoramic sky box is one half of the regular hexahedral edge length; when the panoramic sky box is a spherical sky box, the first distance corresponding to the panoramic sky box is the radius of the sphere.
Since the first parameter is $\tan\left(\frac{\alpha\pi}{180}\right)$ and, from fig. 2, $\tan\left(\frac{\alpha\pi}{180}\right)=\frac{h/2}{d}$ where d is the first distance, it follows that

$$h = 2d\cdot\tan\left(\frac{\alpha\pi}{180}\right).$$

Thus, the camera field height h of the virtual camera is twice the product of the first parameter and the first distance.

After the lateral field angle is determined, the radian corresponding to one half of the lateral field angle is obtained and a tangent operation is performed on it, giving the third parameter $\tan\left(\frac{\beta\pi}{180}\right)$. Since $\tan\left(\frac{\beta\pi}{180}\right)=\frac{w/2}{d}$, it follows that

$$w = 2d\cdot\tan\left(\frac{\beta\pi}{180}\right).$$

Thus, the camera field width w of the virtual camera is twice the product of the third parameter and the first distance.
After determining the camera view height h and the camera view width w, four spatial coordinates may be determined based on the camera view height h and the camera view width w.
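Continuing the sketch (and reusing degToRad from the earlier snippet; names remain illustrative), the camera field height and width follow directly from the two field angles and the first distance d:

```typescript
// Camera field height h and width w per the derivation above:
// h = 2 * d * tan(alpha * pi / 180), w = 2 * d * tan(beta * pi / 180),
// where d is half the cube edge length, or the sphere radius.
function cameraFieldSize(
  longitudinalFovDeg: number,
  lateralFovDeg: number,
  d: number,
): { w: number; h: number } {
  const h = 2 * d * Math.tan(degToRad(longitudinalFovDeg / 2)); // via first parameter
  const w = 2 * d * Math.tan(degToRad(lateralFovDeg / 2));      // via third parameter
  return { w, h };
}
```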
Optionally, determining the four spatial coordinates based on the camera view height h and the camera view width w includes:
determining a first spatial coordinate based on a half of the camera view height h, a half of the camera view width w and a default coordinate value on a third coordinate axis;
determining a second spatial coordinate based on one half of the camera view height h, a negative value of one half of the camera view width w, and a default coordinate value on a third coordinate axis;
determining a third spatial coordinate based on a negative value of one half of the camera field of view height h, one half of the camera field of view width w, and a default coordinate value on a third coordinate axis;
determining a fourth spatial coordinate based on a negative value of one half of the camera view height h, a negative value of one half of the camera view width w, and a default coordinate value on a third coordinate axis;
wherein the coordinate value on the first coordinate axis is determined based on the camera view width w, the coordinate value on the second coordinate axis is determined based on the camera view height h, and the shooting point is the coordinate origin of the three-dimensional space coordinate system.
In this embodiment, the coordinate value on the first coordinate axis (e.g., X-axis) may be determined based on the camera view width w, the coordinate value on the second coordinate axis (e.g., Y-axis) may be determined based on the camera view height h, and the coordinate value on the third coordinate axis (e.g., Z-axis) may be a default value, e.g., unit coordinate value 1.
Based on one half of the camera view height h (corresponding to the coordinate value on the second coordinate axis), one half of the camera view width w (corresponding to the coordinate value on the first coordinate axis), and the default coordinate value on the third coordinate axis, the first spatial coordinate may be determined; based on one half of the camera view height h (corresponding to the coordinate value on the second coordinate axis), a negative value of one half of the camera view width w (corresponding to the coordinate value on the first coordinate axis), and a default coordinate value on the third coordinate axis, a second spatial coordinate may be determined; based on a negative value of one half of the camera view height h (corresponding to the coordinate value on the second coordinate axis), one half of the camera view width w (corresponding to the coordinate value on the first coordinate axis), and a default coordinate value on the third coordinate axis, a third spatial coordinate may be determined; the fourth spatial coordinate may be determined based on a negative value of one half of the camera view height h (corresponding to the coordinate value on the second coordinate axis), a negative value of one half of the camera view width w (corresponding to the coordinate value on the first coordinate axis), and a default coordinate value on the third coordinate axis. By determining four spatial coordinates, four coordinate points can be determined in three-dimensional space.
Referring to fig. 2, 4 points on the end face opposite to the virtual camera are the determined 4 spatial coordinate points.
After the four spatial coordinates are determined, a vector may be determined for each spatial coordinate based on that coordinate and the shooting point (the coordinate origin of the three-dimensional space coordinate system), thereby determining the vectors corresponding to the four rays.
In the above embodiment, the camera view height may be determined based on the longitudinal field angle and the first distance, and the camera view width based on the lateral field angle and the first distance; four spatial coordinates are then determined from the camera view height and width, yielding four rays taking the shooting point as origin, so that the field of view, and hence the planes within it, can be determined from the four rays.
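The four spatial coordinates, and hence the four ray vectors, can be sketched as below. Since the shooting point is the coordinate origin, each corner coordinate doubles as the direction vector of its ray; the default third-axis value of 1 follows the unit-coordinate example in the text, and all names are illustrative.

```typescript
type Vec3 = [number, number, number];

// Four corner coordinates of the face the camera looks at, per the rules
// above; with the shooting point at the origin, these are also the direction
// vectors of the four rays.
function cornerRays(w: number, h: number, zDefault = 1): Vec3[] {
  return [
    [ w / 2,  h / 2, zDefault], // first spatial coordinate
    [-w / 2,  h / 2, zDefault], // second spatial coordinate
    [ w / 2, -h / 2, zDefault], // third spatial coordinate
    [-w / 2, -h / 2, zDefault], // fourth spatial coordinate
  ];
}
```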
The process of determining whether a plane is within the field of view of the virtual camera based on the four rays is described below. Determining whether the current plane is within the field of view of the virtual camera according to the four rays includes:
detecting the intersection condition of the four rays and the current plane according to vectors respectively corresponding to the four rays and a plane normal vector corresponding to the current plane;
determining that the current plane is within the field of view of the virtual camera when at least one of the four rays has an intersection point with the current plane;
the vectors corresponding to the four rays respectively remain unchanged, and the plane normal vector corresponding to the current plane changes based on the change of the shooting direction of the virtual camera.
For each of the four rays, the vector corresponding to the current ray can be determined, and the intersection of the four rays with the current plane is then detected according to the vectors corresponding to the four rays and the plane normal vector of the current plane. When at least one of the four rays intersects the current plane, the current plane is determined to be within the field of view of the virtual camera; that is, whether one, two, three, or all four rays intersect the current plane, the current plane can be determined to be within the field of view.
It should be noted that during rotation of the virtual camera, the vectors corresponding to the four rays remain unchanged, while the plane normal vector of each plane changes with the shooting direction of the virtual camera. This can be viewed as the virtual camera staying fixed while the panoramic sky box rotates; because the sky box rotates, the plane normal vectors of its planes change. For example, at time 1 the plane normal vector of plane 1 is parallel to the Z axis; the panoramic sky box can be regarded as having rotated (with the coordinate system unchanged), so that at time 2 the plane normal vector of plane 1 is parallel to the X axis.
Optionally, when detecting the intersection situation of the four rays and the current plane according to the vectors respectively corresponding to the four rays and the plane normal vector corresponding to the current plane, the method includes:
for each ray, determining whether an intersection point exists between the current ray and the current plane or not based on a dot multiplication result corresponding to dot multiplication between a vector corresponding to the current ray and the plane normal vector;
when the point multiplication result is 0, determining that the current ray is parallel to the current plane; when the point multiplication result is smaller than 0, determining that the current ray does not intersect with the current plane; and when the point multiplication result is greater than 0, determining that the current ray intersects the current plane.
For each ray in the four rays, when determining whether an intersection point exists between the current ray and the current plane, a vector corresponding to the current ray and a plane normal vector corresponding to the current plane can be subjected to point multiplication operation, a point multiplication result is obtained, and whether the intersection point exists between the current ray and the current plane is determined based on the point multiplication result.
When the dot product is 0, the current ray is perpendicular to the plane normal vector, so the current ray is parallel to the current plane; when the dot product is less than 0, the angle between the two vectors is greater than 90 degrees, and the current ray does not intersect the current plane; when the dot product is greater than 0, the angle between the two vectors is less than 90 degrees, and the current ray intersects the current plane.
It should be noted that once one ray is found to intersect the current plane, the intersection detection need not be performed for the remaining rays, and the current plane can be directly determined to be within the field of view of the virtual camera.
According to this embodiment, the intersection of a ray with the current plane can be determined from the dot product of the ray's vector and the plane normal vector; when at least one ray has an intersection point with the current plane, the current plane is determined to be within the field of view of the virtual camera, thereby screening the planes within the field of view by vector operations.
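A sketch of the screening test above, reusing the Vec3 type from the previous snippet and following the sign convention stated in the text (dot product of 0: parallel; negative: no intersection; positive: intersection), with early exit as soon as one ray hits the plane:

```typescript
const dot = (a: Vec3, b: Vec3): number =>
  a[0] * b[0] + a[1] * b[1] + a[2] * b[2];

// A plane is within the field of view as soon as at least one of the four
// corner rays intersects it; Array.prototype.some gives the early exit
// described above.
function planeInView(rays: Vec3[], planeNormal: Vec3): boolean {
  return rays.some((ray) => dot(ray, planeNormal) > 0);
}

// Screening: indices of the K target planes among the (typically 6) planes.
function detectTargetPlanes(rays: Vec3[], planeNormals: Vec3[]): number[] {
  return planeNormals
    .map((normal, index) => (planeInView(rays, normal) ? index : -1))
    .filter((index) => index >= 0);
}
```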
As an optional embodiment, updating the target plane based on the high-definition map corresponding to the target plane includes one of the following schemes:
mapping the target plane according to the panoramic high-definition mapping corresponding to the target plane;
and matching the plurality of high-definition tile maps corresponding to the target plane with the plurality of tiles corresponding to the target plane, so as to update the target plane.
When the map update is performed on the target plane, the panoramic high-definition map corresponding to the target plane can be obtained and the mapping processing performed directly on the target plane based on it; during this processing, the panoramic high-definition map can be pasted onto that target plane of the panoramic sky box so as to cover its original panoramic map. Updating the original map of the target plane with a single panoramic high-definition map simplifies the map-update flow and improves update efficiency.
Alternatively, when the map update is performed on the target plane, a plurality of high-definition tile maps corresponding to the target plane can be obtained; these tile maps correspond one-to-one with the tiles of the target plane, and matching the high-definition tile maps with the tiles of the target plane covers the original tile maps, realizing the map update of the target plane.
In the above embodiment, the map update of the target plane may be performed either with a panoramic high-definition map or with high-definition tile maps; these different update modes allow the map update to be matched to actual requirements.
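The two update modes can be sketched as follows; the PlaneMaps shape and the applyTexture callback are hypothetical stand-ins, since this application does not specify a rendering API.

```typescript
// Hypothetical description of the high-definition material for one plane:
// either one panoramic high-definition map, or per-tile maps matching the
// plane's tiles one-to-one.
interface PlaneMaps {
  panoramaHd?: string; // URL of a single panoramic high-definition map
  tileHds?: string[];  // URLs of high-definition tile maps, one per tile
}

// Cover the plane's original map(s) with the high-definition material.
function updateTargetPlane(
  maps: PlaneMaps,
  applyTexture: (url: string, tileIndex?: number) => void,
): void {
  if (maps.panoramaHd !== undefined) {
    applyTexture(maps.panoramaHd); // one map covers the whole plane
  } else if (maps.tileHds !== undefined) {
    maps.tileHds.forEach((url, i) => applyTexture(url, i)); // tile by tile
  }
}
```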
The implementation of the image processing method of the present application will be described by way of a specific example, and referring to fig. 3, the implementation includes:
step 301, determining a horizontal view angle of the virtual camera based on an aspect ratio of the virtual camera and a vertical view angle of the virtual camera under the condition that the virtual camera performs image acquisition on the panoramic sky box along the nth shooting direction at the shooting point.
Step 302, determining four spatial coordinates based on the lateral field angle, the longitudinal field angle, and 1/2 of the edge length of the regular hexahedral panoramic sky box.
Step 303, determining four rays taking the shooting point as origin based on the four spatial coordinates.
Step 304, for each plane of the panoramic sky box, detecting the intersection of the four rays with the current plane according to the vectors corresponding to the four rays and the plane normal vector corresponding to the current plane.
Step 305, in the case that at least one of the four rays has an intersection point with the current plane, determining the current plane to be a target plane within the field of view of the virtual camera.
Step 306, for each target plane, updating the target plane based on its corresponding high-definition map and controlling the virtual camera to render that high-definition map at the display end; the high-definition map may be a panoramic high-definition map or high-definition tile maps.
Step 307, when the virtual camera rotates to the (N+1)th shooting direction, detecting the K target planes of the panoramic sky box within the field of view of the virtual camera and updating their maps, so that the high-definition maps are rendered at the display end.
Each time the virtual camera changes its shooting direction, it needs to re-detect the target planes within its field of view so that the display-end user can browse the high-definition maps; after the shooting direction changes, the plane normal vectors are updated synchronously.
The above implementation flow ensures that the user browses images at different levels of definition through the display end and realizes targeted image rendering; by determining four rays and screening out the target planes within the field of view of the virtual camera according to them, the field of view can be determined from the four rays, and the planes within it can then be determined.
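Putting the pieces together, the flow of steps 301-307 can be sketched as a single update routine, reusing the helper functions from the earlier snippets (all of them illustrative assumptions rather than this application's actual implementation):

```typescript
// One iteration of the loop: run whenever the camera comes to rest in a new
// shooting direction (the (N+1)th direction becoming the Nth on rotation).
function onShootingDirectionChanged(
  longitudinalFovDeg: number,
  aspect: number,
  d: number,                    // first distance (half edge length or radius)
  planeNormals: Vec3[],         // normals updated for the current direction
  planeMaps: PlaneMaps[],       // high-definition material per plane
  applyTexture: (url: string, tileIndex?: number) => void,
): void {
  // Steps 301-303: field angles, corner coordinates, rays.
  const lateralFovDeg = lateralFieldAngle(longitudinalFovDeg, aspect);
  const { w, h } = cameraFieldSize(longitudinalFovDeg, lateralFovDeg, d);
  const rays = cornerRays(w, h);
  // Steps 304-305: screen the K target planes via the dot-product test.
  const targetPlanes = detectTargetPlanes(rays, planeNormals);
  // Step 306: update only the in-view planes; others keep their original map.
  for (const i of targetPlanes) {
    updateTargetPlane(planeMaps[i], applyTexture);
  }
}
```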
The above is the overall implementation of the image processing method provided by the embodiments of the present application: when the virtual camera acquires images along a certain shooting direction, the target planes within its field of view are determined, each target plane is updated based on its corresponding high-definition map, and the high-definition map is rendered at the display end; after the virtual camera adjusts its shooting direction, the target planes within its field of view are re-determined, map updates are performed on them, and their high-definition maps are rendered at the display end. The user can thus browse images at different levels of definition through the display end, optimizing the visual experience, while the targeted image rendering improves the image loading speed of the display end, attracting the user's attention and improving the browsing experience.
Further, by determining four rays taking the shooting point as origin and screening out the target planes within the field of view of the virtual camera based on them, the field of view, and hence the planes within it, can be determined; when screening target planes based on the four rays, the intersection of each ray with a plane is determined from the dot product of the ray's vector and the plane normal vector, and whether the plane is within the field of view is determined from that intersection, realizing the screening of in-view planes by vector operations.
When the map update is performed on a target plane, the mapping processing can be based either on a panoramic high-definition map or on high-definition tile maps; these different update modes allow the map update to be matched to actual requirements.
An embodiment of the present application provides an image processing apparatus, as shown in fig. 4, including:
the detection module 401 is configured to detect K target planes in the panoramic sky box within a field of view of the virtual camera when the virtual camera performs image acquisition on the panoramic sky box in an nth shooting direction at a shooting point, where K is an integer greater than or equal to 1;
A first processing module 402, configured to update, for each target plane, the target plane based on the high-definition map corresponding to the target plane, and control the virtual camera to render, at a display end, the high-definition map corresponding to the target plane, so as to switch an original map corresponding to the target plane displayed by the display end to the high-definition map;
a second processing module 403, configured to, when the virtual camera rotates to the (N+1)th shooting direction, take the (N+1)th shooting direction as the Nth shooting direction and return it to the detection module 401 for further processing;
wherein the panoramic sky box is generated based on matching a panoramic map corresponding to the target electronic sand table with a sky box model, and the shooting point is located inside the panoramic sky box.
Optionally, the detection module includes:
a first determination submodule for determining a lateral field angle of the virtual camera based on an aspect ratio of the virtual camera and a longitudinal field angle of the virtual camera;
the second determining submodule is used for determining four space coordinates based on the transverse view angle, the longitudinal view angle and the first distance corresponding to the panoramic sky box;
the third determining submodule is used for determining four rays taking the shooting point as origin based on the four spatial coordinates;
and the determining and acquiring submodule is used for determining, for each plane of the panoramic sky box, whether the current plane is within the field of view of the virtual camera according to the four rays, so as to acquire the K target planes within the field of view of the virtual camera.
Optionally, the first determining submodule includes:
the first determining unit is used for performing radian conversion on one half of the longitudinal field angle and determining a first radian;
the second determining unit is used for performing tangent function operation based on the first radian and determining a first parameter;
a third determining unit for calculating a product of the first parameter and the aspect ratio to determine a second parameter;
a fourth determining unit, configured to perform an arctangent operation based on the second parameter, and determine a second radian;
and a fifth determining unit, configured to perform angle conversion on the second radian to determine a target angle, and determine the lateral field angle based on 2 times of the target angle.
Optionally, the panoramic sky box is a regular hexahedron or a sphere, and the first distance is one half of the edge length of the regular hexahedron or the radius of the sphere;
the second determination submodule includes:
A sixth determining unit configured to determine a camera view height h of the virtual camera from twice a product of the first parameter and a first distance after determining the first parameter based on the longitudinal view angle;
a seventh determining unit, configured to perform tangent function operation on an radian corresponding to one half of the lateral field angle, determine a third parameter, and determine a camera field width w of the virtual camera according to twice a product of the third parameter and the first distance;
an eighth determination unit configured to determine the four spatial coordinates based on the camera view height h and the camera view width w.
Optionally, the eighth determining unit includes:
a first determining subunit, configured to determine a first spatial coordinate based on a half of the camera view height h, a half of the camera view width w, and a default coordinate value on a third coordinate axis;
a second determining subunit configured to determine a second spatial coordinate based on a half of the camera view height h, a negative half of the camera view width w, and a default coordinate value on a third coordinate axis;
a third determination subunit configured to determine a third spatial coordinate based on a negative value of one half of the camera view height h, one half of the camera view width w, and a default coordinate value on a third coordinate axis;
A fourth determination subunit configured to determine a fourth spatial coordinate based on a negative value of one half of the camera view height h, a negative value of one half of the camera view width w, and a default coordinate value on a third coordinate axis;
wherein the coordinate value on the first coordinate axis is determined based on the camera view width w, the coordinate value on the second coordinate axis is determined based on the camera view height h, and the shooting point is the coordinate origin of the three-dimensional space coordinate system.
Optionally, the determining the obtaining submodule includes:
the detection unit is used for detecting the intersection condition of the four rays and the current plane according to the vectors respectively corresponding to the four rays and the plane normal vector corresponding to the current plane;
a ninth determining unit, configured to determine that, in a case where an intersection point exists between at least one ray of the four rays and a current plane, the current plane is within a field of view of the virtual camera;
the vectors corresponding to the four rays respectively remain unchanged, and the plane normal vector corresponding to the current plane changes based on the change of the shooting direction of the virtual camera.
Optionally, the detection unit is further configured to:
for each ray, determining whether an intersection point exists between the current ray and the current plane or not based on a dot multiplication result corresponding to dot multiplication between a vector corresponding to the current ray and the plane normal vector;
When the point multiplication result is 0, determining that the current ray is parallel to the current plane; when the point multiplication result is smaller than 0, determining that the current ray does not intersect with the current plane; and when the point multiplication result is greater than 0, determining that the current ray intersects the current plane.
Optionally, the first processing module includes one of the following sub-modules:
the first processing sub-module is used for mapping the target plane according to the panoramic high-definition mapping corresponding to the target plane;
and the second processing submodule is used for matching the plurality of high-definition tile maps corresponding to the target plane with the plurality of tiles corresponding to the target plane, so as to update the target plane.
For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments for relevant points.
An embodiment of the present application also provides an electronic device, including a processor, a memory, and a computer program stored on the memory and executable on the processor; when executed by the processor, the computer program implements each process of the above image processing method embodiment and can achieve the same technical effects. To avoid repetition, details are not repeated here.
For example, fig. 5 shows a schematic diagram of the physical structure of an electronic device. As shown in fig. 5, the electronic device may include: a processor 510, a communication interface (Communications Interface) 520, a memory 530, and a communication bus 540, where the processor 510, the communication interface 520, and the memory 530 communicate with each other through the communication bus 540. The processor 510 may invoke logic instructions in the memory 530 to perform the following steps: when a virtual camera performs image acquisition on a panoramic sky box along an Nth shooting direction at a shooting point, detecting K target planes of the panoramic sky box within the field of view of the virtual camera, where K is an integer greater than or equal to 1; for each target plane, updating the target plane based on the high-definition map corresponding to the target plane, and controlling the virtual camera to render that high-definition map at a display end so as to switch the original map of the target plane displayed at the display end to the high-definition map; when the virtual camera rotates to the (N+1)th shooting direction, taking the (N+1)th shooting direction as the Nth shooting direction and returning to the step of detecting K target planes of the panoramic sky box within the field of view of the virtual camera; wherein the panoramic sky box is generated based on matching a panoramic map corresponding to the target electronic sand table with a sky box model, and the shooting point is located inside the panoramic sky box. The processor 510 may also perform other aspects of the embodiments of the present application, which are not further described here.
Further, the logic instructions in the memory 530 described above may be implemented in the form of software functional units and may be stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on this understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application.
An embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements each process of the above image processing method embodiment and can achieve the same technical effects. To avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that includes that element.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above embodiments, which are merely illustrative and not restrictive. Inspired by the present application, those of ordinary skill in the art may devise many further forms without departing from the spirit of the application and the scope of the claims, all of which fall within the protection of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; they are not repeated here.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; the division of the units is merely a logical function division, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purposes of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, an optical disk, etc.
The foregoing is merely a specific embodiment of the present application, and the present application is not limited thereto; any variation or substitution readily conceivable by a person skilled in the art within the technical scope disclosed herein shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (11)

1. An image processing method, comprising:
when a virtual camera performs image acquisition on a panoramic sky box along an N-th shooting direction at a shooting point, detecting K target planes of the panoramic sky box within a field of view of the virtual camera, wherein K is an integer greater than or equal to 1;
for each target plane, updating the target plane based on a high-definition map corresponding to the target plane, and controlling the virtual camera to render the high-definition map corresponding to the target plane at a display end, so as to switch an original map of the target plane displayed at the display end to the high-definition map;
when the virtual camera rotates to an (N+1)-th shooting direction, taking the (N+1)-th shooting direction as the N-th shooting direction and returning to the step of detecting K target planes of the panoramic sky box within the field of view of the virtual camera;
wherein the panoramic sky box is generated by matching a panoramic map corresponding to a target electronic sand table with a sky box model, and the shooting point is located inside the panoramic sky box.
2. The method of claim 1, wherein the detecting K target planes of the panoramic sky box within the field of view of the virtual camera comprises:
determining a lateral field angle of the virtual camera based on an aspect ratio of the virtual camera and a longitudinal field angle of the virtual camera;
determining four spatial coordinates based on the lateral field angle, the longitudinal field angle, and a first distance corresponding to the panoramic sky box;
determining, based on the four spatial coordinates, four rays with the shooting point as their origin;
and for each plane of the panoramic sky box, determining, according to the four rays, whether the current plane is within the field of view of the virtual camera, so as to obtain the K target planes within the field of view of the virtual camera.
3. The method of claim 2, wherein the determining the lateral field angle of the virtual camera based on the aspect ratio of the virtual camera and the longitudinal field angle of the virtual camera comprises:
converting one half of the longitudinal field angle into radians to determine a first radian;
performing a tangent operation on the first radian to determine a first parameter;
calculating a product of the first parameter and the aspect ratio to determine a second parameter;
performing an arctangent operation on the second parameter to determine a second radian;
and converting the second radian into degrees to determine a target angle, the lateral field angle being determined as twice the target angle.
4. The method of claim 3, wherein the panoramic sky box is a regular hexahedron or a sphere, and the first distance is one half of an edge length of the regular hexahedron or the radius of the sphere;
the determining four spatial coordinates based on the lateral field angle, the longitudinal field angle, and the first distance corresponding to the panoramic sky box comprises:
after determining the first parameter based on the longitudinal field angle, determining a camera field-of-view height h of the virtual camera as twice the product of the first parameter and the first distance;
performing a tangent operation on the radian corresponding to one half of the lateral field angle to determine a third parameter, and determining a camera field-of-view width w of the virtual camera as twice the product of the third parameter and the first distance;
and determining the four spatial coordinates based on the camera field-of-view height h and the camera field-of-view width w.
5. The method of claim 4, wherein the determining the four spatial coordinates based on the camera field-of-view height h and the camera field-of-view width w comprises:
determining a first spatial coordinate based on one half of the camera field-of-view height h, one half of the camera field-of-view width w, and a default coordinate value on a third coordinate axis;
determining a second spatial coordinate based on one half of the camera field-of-view height h, the negative of one half of the camera field-of-view width w, and the default coordinate value on the third coordinate axis;
determining a third spatial coordinate based on the negative of one half of the camera field-of-view height h, one half of the camera field-of-view width w, and the default coordinate value on the third coordinate axis;
determining a fourth spatial coordinate based on the negative of one half of the camera field-of-view height h, the negative of one half of the camera field-of-view width w, and the default coordinate value on the third coordinate axis;
wherein a coordinate value on a first coordinate axis is determined based on the camera field-of-view width w, a coordinate value on a second coordinate axis is determined based on the camera field-of-view height h, and the shooting point is the origin of the three-dimensional spatial coordinate system.
6. The method of claim 2, wherein determining whether the current plane is within the field of view of the virtual camera based on the four rays comprises:
detecting intersections of the four rays with the current plane according to vectors respectively corresponding to the four rays and a plane normal vector corresponding to the current plane;
and determining that the current plane is within the field of view of the virtual camera when at least one of the four rays has an intersection point with the current plane;
wherein the vectors respectively corresponding to the four rays remain unchanged, and the plane normal vector corresponding to the current plane changes as the shooting direction of the virtual camera changes.
7. The method according to claim 6, wherein detecting the intersection of the four rays with the current plane according to the vectors corresponding to the four rays and the plane normal vector corresponding to the current plane comprises:
for each ray, determining whether an intersection point exists between the current ray and the current plane based on a dot product between the vector corresponding to the current ray and the plane normal vector;
wherein when the dot product is 0, it is determined that the current ray is parallel to the current plane; when the dot product is less than 0, it is determined that the current ray does not intersect the current plane; and when the dot product is greater than 0, it is determined that the current ray intersects the current plane.
8. The method according to claim 1, wherein updating the target plane based on the high-definition map corresponding to the target plane comprises one of the following schemes:
applying a panoramic high-definition map corresponding to the target plane to the target plane; or
matching a plurality of high-definition fragment maps corresponding to the target plane with a plurality of fragments of the target plane to update the target plane.
9. An image processing apparatus, comprising:
a detection module, configured to detect K target planes of a panoramic sky box within a field of view of a virtual camera when the virtual camera performs image acquisition on the panoramic sky box along an N-th shooting direction at a shooting point, wherein K is an integer greater than or equal to 1;
a first processing module, configured to, for each target plane, update the target plane based on a high-definition map corresponding to the target plane, and control the virtual camera to render the high-definition map corresponding to the target plane at a display end, so as to switch an original map of the target plane displayed at the display end to the high-definition map;
a second processing module, configured to, when the virtual camera rotates to an (N+1)-th shooting direction, take the (N+1)-th shooting direction as the N-th shooting direction and return to the detection module for continued processing;
wherein the panoramic sky box is generated by matching a panoramic map corresponding to a target electronic sand table with a sky box model, and the shooting point is located inside the panoramic sky box.
10. An electronic device, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the image processing method according to any one of claims 1 to 8.
11. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the image processing method according to any one of claims 1 to 8.
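As a concrete reading of the geometry in claims 3 to 7, the sketch below computes the lateral field angle from the longitudinal field angle and the aspect ratio, derives the four corner coordinates of the field of view at the first distance, and applies the dot-product visibility test of claim 7. It is an illustrative interpretation in Python under stated assumptions, not code from the application: the default value on the third coordinate axis is assumed to equal the first distance, and plane normals are assumed to be oriented so that a positive dot product indicates intersection, matching the convention stated in claim 7.

import math

def lateral_field_angle(longitudinal_fov_deg, aspect_ratio):
    # Claim 3: half the longitudinal FOV -> radians -> tangent -> scale by
    # the aspect ratio -> arctangent -> degrees -> double the target angle.
    first_radian = math.radians(longitudinal_fov_deg / 2)
    second_param = math.tan(first_radian) * aspect_ratio
    target_angle = math.degrees(math.atan(second_param))
    return 2 * target_angle

def corner_coordinates(longitudinal_fov_deg, aspect_ratio, first_distance):
    # Claims 4-5: field-of-view height h and width w at the first distance,
    # then four corner coordinates; the shooting point is the origin.
    lateral_fov_deg = lateral_field_angle(longitudinal_fov_deg, aspect_ratio)
    h = 2 * math.tan(math.radians(longitudinal_fov_deg / 2)) * first_distance
    w = 2 * math.tan(math.radians(lateral_fov_deg / 2)) * first_distance
    z = first_distance  # assumed value for the default third-axis coordinate
    return [( w / 2,  h / 2, z), (-w / 2,  h / 2, z),
            ( w / 2, -h / 2, z), (-w / 2, -h / 2, z)]

def plane_in_view(corners, plane_normal):
    # Claims 6-7: each corner coordinate doubles as a ray direction from the
    # origin; a positive dot product with the plane normal means the ray
    # intersects the plane, zero means parallel, negative means no hit.
    for ray in corners:
        dot = sum(r * n for r, n in zip(ray, plane_normal))
        if dot > 0:
            return True
    return False

# Example: a 60-degree longitudinal FOV, 16:9 aspect ratio, and a cube sky
# box with edge length 20, so the first distance is 10 (claim 4).
rays = corner_coordinates(60, 16 / 9, 10)
print(plane_in_view(rays, (0, 0, 1)))   # plane facing +z: True

Note that under claim 4 the first distance is half the cube's edge length for a hexahedral sky box, or the radius for a spherical one, so the corner coordinates land on the sky box surface directly.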
CN202310524311.7A 2023-05-10 2023-05-10 Image processing method, device, electronic equipment and storage medium Pending CN116668661A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310524311.7A CN116668661A (en) 2023-05-10 2023-05-10 Image processing method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310524311.7A CN116668661A (en) 2023-05-10 2023-05-10 Image processing method, device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116668661A true CN116668661A (en) 2023-08-29

Family

ID=87727072

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310524311.7A Pending CN116668661A (en) 2023-05-10 2023-05-10 Image processing method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116668661A (en)

Similar Documents

Publication Publication Date Title
CN107564089B (en) Three-dimensional image processing method, device, storage medium and computer equipment
US20170186219A1 (en) Method for 360-degree panoramic display, display module and mobile terminal
US9704282B1 (en) Texture blending between view-dependent texture and base texture in a geographic information system
US9171402B1 (en) View-dependent textures for interactive geographic information system
US8970583B1 (en) Image space stylization of level of detail artifacts in a real-time rendering engine
EP3534336B1 (en) Panoramic image generating method and apparatus
US20170154468A1 (en) Method and electronic apparatus for constructing virtual reality scene model
US11880956B2 (en) Image processing method and apparatus, and computer storage medium
WO2018188479A1 (en) Augmented-reality-based navigation method and apparatus
CN108269305A (en) A kind of two dimension, three-dimensional data linkage methods of exhibiting and system
CN112288873B (en) Rendering method and device, computer readable storage medium and electronic equipment
US20130222363A1 (en) Stereoscopic imaging system and method thereof
KR20180107271A (en) Method and apparatus for generating omni media texture mapping metadata
KR20190125526A (en) Method and apparatus for displaying an image based on user motion information
US10904562B2 (en) System and method for constructing optical flow fields
EP2159756B1 (en) Point-cloud clip filter
US20180061119A1 (en) Quadrangulated layered depth images
EP2672454A2 (en) Terrain-based virtual camera tilting and applications thereof
CN114782648A (en) Image processing method, image processing device, electronic equipment and storage medium
CN110889384A (en) Scene switching method and device, electronic equipment and storage medium
CN116668661A (en) Image processing method, device, electronic equipment and storage medium
CN109949396A (en) A kind of rendering method, device, equipment and medium
CN110853143B (en) Scene realization method, device, computer equipment and storage medium
CN109814703B (en) Display method, device, equipment and medium
CN111476870A (en) Object rendering method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination