CN116206028A - Image processing method, device, equipment, medium and product - Google Patents

Image processing method, device, equipment, medium and product

Info

Publication number
CN116206028A
Authority
CN
China
Prior art keywords
particle
coordinates
screen
particles
coordinate
Prior art date
Legal status
Pending
Application number
CN202111667710.6A
Other languages
Chinese (zh)
Inventor
李佩轩
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Publication of CN116206028A publication Critical patent/CN116206028A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the present disclosure disclose an image processing method, apparatus, device, medium and product, relating to the field of image technology. The method includes: acquiring three-dimensional particle center coordinates of anisotropic particles in a particle coordinate system, the particles being located in a limited particle space in the particle coordinate system; determining particle point coordinates of the particles according to the three-dimensional particle center coordinates; projecting the particle point coordinates onto a screen to obtain screen coordinates of the particle point coordinates; and rendering the anisotropic particles according to the screen coordinates of the particle point coordinates. With the scheme provided by the embodiments of the present disclosure, anisotropic particles can be rendered, and the method can be applied to fields such as maps, intelligent travel and traffic.

Description

Image processing method, device, equipment, medium and product
The present disclosure is based on the Chinese patent application "Image processing method, apparatus, device, medium and product" with application number 202111449882.6 and a filing date of November 30, 2021, and claims priority to that Chinese patent application, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of image technology, and in particular, to an image processing method, an image processing apparatus, a computer device, a computer readable storage medium, and a computer program product.
Background
In the related art, particles in various weather scenes (such as rain particles and snow particles) are simulated with a particle system that can only render isotropic particles and cannot reproduce the visual effect of anisotropic particles, so the rendered weather particles look different from the real weather scene.
Disclosure of Invention
Embodiments of the present disclosure provide an image processing method, an image processing apparatus, a computer device, a computer-readable storage medium, and a computer program product, which can render particles having anisotropy.
The embodiment of the disclosure provides an image processing method, which comprises the following steps: acquiring three-dimensional particle center coordinates of anisotropic particles in a particle coordinate system, wherein the particles are positioned in a limited particle space in the particle coordinate system; determining particle point coordinates of the particles according to the three-dimensional particle center coordinates of the particles; projecting the particle point coordinates of the particles to a screen to obtain screen coordinates of the particle point coordinates; and rendering the particles with anisotropy according to the screen coordinates of the particle point coordinates.
An embodiment of the present disclosure provides an image processing apparatus including: an acquisition unit for acquiring a three-dimensional particle center coordinate of a particle having anisotropy in a particle coordinate system in which the particle is located in a limited particle space; a determining unit for determining particle point coordinates of the particles according to three-dimensional particle center coordinates of the particles; an obtaining unit, configured to project particle point coordinates of the particles onto a screen, and obtain screen coordinates of the particle point coordinates; and the rendering unit is used for rendering the particles with anisotropy according to the screen coordinates of the particle point coordinates.
The embodiment of the disclosure provides a computer device, which comprises a processor, a memory and an input-output interface; the processor is respectively connected with the memory and the input/output interface, wherein the input/output interface is used for receiving data and outputting data, the memory is used for storing a computer program, and the processor is used for calling the computer program so that the computer equipment executes the image processing method in the embodiment of the disclosure.
The present disclosure provides a computer-readable storage medium storing a computer program adapted to be loaded and executed by a processor to cause a computer device having the processor to perform the image processing method in the embodiments of the present disclosure.
An aspect of the disclosed embodiments provides a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the methods provided in the various alternatives in the embodiments of the present disclosure.
Implementing the embodiments of the present disclosure has the following beneficial effects. Three-dimensional particle center coordinates of anisotropic particles are acquired in a particle coordinate system, the particles being located in a limited particle space in that coordinate system; particle point coordinates of the particles are determined from the three-dimensional particle center coordinates; and the particle point coordinates are projected onto a screen to obtain their screen coordinates, so that the anisotropic particles can be rendered according to those screen coordinates. On the one hand, the anisotropy of the particles is simulated by converting the particle coordinates of the particles to be rendered in the particle space into screen coordinates; on the other hand, since the anisotropic particle shape does not need to be calculated from camera parameters and texture information, the calculation efficiency is improved while a good visual effect is maintained.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present disclosure, and other drawings may be obtained according to these drawings without inventive effort to a person of ordinary skill in the art.
Fig. 1 is a network interaction architecture diagram of an image processing method according to an embodiment of the present disclosure.
Fig. 2 is a flowchart of an image processing method provided in an embodiment of the present disclosure.
Fig. 3 is a schematic diagram of a particle coordinate system and particle space therein provided by an embodiment of the present disclosure.
Fig. 4 is a schematic diagram of a positional relationship between a camera and a particle space provided by an embodiment of the present disclosure.
Fig. 5 is a schematic diagram of a positional relationship between a camera and a particle space at zoom time according to an embodiment of the present disclosure.
Fig. 6 is a schematic diagram of adjusting particles located behind a camera provided by an embodiment of the present disclosure.
Fig. 7 is a schematic view of an elliptical model of a rain particle provided by an embodiment of the present disclosure.
Fig. 8 is a schematic diagram of a rain particle model as seen from above provided by embodiments of the present disclosure.
Fig. 9 is a schematic diagram of two foci of rain particles seen on a camera screen provided by embodiments of the present disclosure.
Fig. 10 is a schematic diagram of rendering a small square on a camera screen from two foci of rain particles, provided by embodiments of the present disclosure.
Fig. 11 is a schematic diagram of the effect of top-down rainfall seen by a camera provided by an embodiment of the present disclosure.
Fig. 12 is a schematic view of the influence of wind vector on the pose of raindrops provided by an embodiment of the present disclosure.
Fig. 13 is a schematic view of a rain scene effect displayed in a windy scene according to an embodiment of the present disclosure.
Fig. 14 is a schematic view of an effect of a rain scene displayed in a panning scene according to an embodiment of the present disclosure.
Fig. 15 is a schematic view of an effect of a rain scene displayed in a panning scene according to an embodiment of the present disclosure.
Fig. 16 is a schematic view of an effect of a rain scene displayed in a panning scene according to an embodiment of the present disclosure.
Fig. 17 is a schematic view of an effect of a rain scene displayed in a panning scene according to an embodiment of the present disclosure.
Fig. 18 is a schematic view of an effect of a rain scene displayed in a three-dimensional view scene according to an embodiment of the present disclosure.
Fig. 19 is a schematic view of an effect of a rain scene displayed in a three-dimensional view scene according to an embodiment of the present disclosure.
Fig. 20 is a schematic view of an effect of a rain scene displayed in a three-dimensional view scene according to an embodiment of the present disclosure.
Fig. 21 is a schematic view of an effect of a rain scene displayed in a three-dimensional view scene according to an embodiment of the present disclosure.
Fig. 22 is a schematic view of an effect of a rain scene displayed in a zoom scene according to an embodiment of the present disclosure.
Fig. 23 is a schematic view of an effect of a rain scene displayed in a zoom scene according to an embodiment of the present disclosure.
Fig. 24 is a schematic view of an effect of a rain scene displayed in a zoom scene according to an embodiment of the present disclosure.
Fig. 25 is a schematic view of an effect of a rain scene displayed in a zoom scene according to an embodiment of the present disclosure.
Fig. 26 is a schematic view of an effect of a rain scene displayed in a rotating scene according to an embodiment of the present disclosure.
Fig. 27 is a schematic view of an effect of a rain scene displayed in a rotating scene according to an embodiment of the present disclosure.
Fig. 28 is a schematic view of an effect of a rain scene displayed in a rotating scene according to an embodiment of the present disclosure.
Fig. 29 is a schematic view of an effect of a rain scene displayed in a rotating scene according to an embodiment of the present disclosure.
Fig. 30 is a schematic view of an effect of a rain scene displayed in a top view scene according to an embodiment of the present disclosure.
Fig. 31 is a schematic structural view of an image processing apparatus provided in an embodiment of the present disclosure.
Fig. 32 is a schematic structural diagram of a computer device according to an embodiment of the present disclosure.
Detailed Description
The following description of the technical solutions in the embodiments of the present disclosure will be made clearly and completely with reference to the accompanying drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without inventive effort, based on the embodiments in this disclosure are intended to be within the scope of this disclosure.
The method provided by the embodiments of the present disclosure can be applied to the field of maps. An intelligent transportation system (Intelligent Traffic System, ITS), also called an Intelligent Transportation System, applies advanced science and technology (information technology, computer technology, data communication technology, sensor technology, electronic control technology, automatic control theory, operations research, artificial intelligence, etc.) effectively and comprehensively to transportation, service control and vehicle manufacturing, strengthening the connection among vehicles, roads and users and thereby forming an integrated transportation system that guarantees safety, improves efficiency, improves the environment and saves energy.
Referring to fig. 1, fig. 1 is a network interaction architecture diagram of the image processing method provided in an embodiment of the present disclosure; the embodiments of the present disclosure may be implemented by a user device and/or a computer device. The user device may acquire data from the computer device 101 and display it; the computer device 101 may exchange data with the user device, and the computer device 101 may be the server where an application program is located, or may be part of the user device (i.e., serve as its backend), which is not limited here. The user device may be user device 102a, user device 102b, user device 102c, or the like, and the embodiments of the present disclosure may be implemented by any one of them.
Specifically, taking user device 102b as an example, the device acquires the three-dimensional particle center coordinates of anisotropic particles in a particle coordinate system, the particles being located in a limited particle space in the particle coordinate system; determines the particle point coordinates of the particles according to the three-dimensional particle center coordinates; projects the particle point coordinates onto a screen to obtain screen coordinates of the particle point coordinates; and renders the anisotropic particles according to the screen coordinates of the particle point coordinates.
The user device may be a mobile phone (e.g., user device 102c), a notebook computer (e.g., user device 102b), or a playback device in a vehicle (e.g., user device 102a), which is not limited here. User device 102a may be regarded as the playback device in the vehicle 103, and an application program, such as a map navigation application, a game application with a map, or a weather broadcast application, may be displayed on user device 102a. The user devices in fig. 1 are only exemplary; the user devices in the present disclosure are not limited to the devices illustrated in fig. 1.
It is understood that the user device mentioned in the embodiments of the present disclosure may be a computer device, which includes, but is not limited to, a terminal device or a server. In other words, the computer device may be a server or a terminal device, or may be a system formed by the server and the terminal device.
The above-mentioned terminal device may be an electronic device, including but not limited to a mobile phone, a tablet computer, a desktop computer, a notebook computer, a palm computer, a vehicle-mounted device, an augmented Reality/Virtual Reality (AR/VR) device, a head-mounted display, a smart television, a wearable device, a smart speaker, a digital camera, a camera, and other mobile internet devices (mobile internet device, MID) with network access capability, or a terminal device in a scene such as a train, a ship, or a flight.
The servers mentioned above may be independent physical servers, or may be server clusters or distributed systems formed by a plurality of physical servers, or may be cloud servers that provide cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, vehicle-road collaboration, content distribution networks (Content Delivery Network, CDN), and basic cloud computing services such as big data and artificial intelligence platforms.
Alternatively, the data related to the embodiments of the present disclosure may be stored in a computer device, or may be stored based on a cloud storage technology, which is not limited herein.
Fig. 2 is a flowchart of an image processing method provided in an embodiment of the present disclosure. The methods provided by the embodiments of the present disclosure may be performed by any electronic device, such as the computer device 101 and/or user device of the embodiment of fig. 1.
The method provided by the embodiment of the present disclosure may be applied to a map program, a game program, a virtual reality application program, an augmented reality application program, and the like, and the following embodiment is exemplified by the map program, but the present disclosure is not limited thereto.
As shown in fig. 2, the method provided by the embodiment of the present disclosure may include the following steps.
In step S210, three-dimensional particle center coordinates of a particle having anisotropy in a particle coordinate system are acquired, the particle being located in a limited particle space in the particle coordinate system.
In the embodiments of the present disclosure, anisotropy means that an object looks different when viewed from different angles. For example, a snow particle is a regular, symmetrical hexagon whose length is the same in all directions, so snow can be approximated by a sphere; a snow particle is therefore isotropic, i.e., it looks the same when observed from different angles. Rain particles are different: the shape of a raindrop is anisotropic in three dimensions, with different lengths in different directions, i.e., rain particles are anisotropic. The following embodiments take rain particles as an example, but the present disclosure is not limited thereto; in other embodiments the anisotropic particles may, for example, be hail.
In the related art, for a rain scene using 3D (3-dimensional) particles, if a particle system is used, camera parameters and textures need to be passed into the fragment shader for computation; if a particle system is not used, a large number of three-dimensional objects need to be rendered. Either way, the rendering cost is very high.
The weather particle system in the embodiments of the present disclosure is a graphics technique for efficiently simulating objects or shapes; for example, one weather particle system simulates rainy weather on the screen, another simulates snowy weather, and so on. In a particle system, an irregular object is defined as being composed of a large number of irregularly and randomly distributed particles, each with a certain life cycle, which constantly change position and constantly move.
The embodiments of the present disclosure refer to the limited particle space that confines the rain particles as the rain scene model box, and create a continuous rain scene effect by controlling the number of active rain particles.
In an exemplary embodiment, the particle space is a cube with side length 2n in which the first direction coordinate, the second direction coordinate, and the third direction coordinate in the particle coordinate system all lie in the range [-n, n], with n greater than or equal to 1. The camera is always placed on the surface of the cube's inscribed sphere, with its field of view directed toward the center of the inscribed sphere.
It should be noted that the embodiments of the present disclosure do not limit the particle space to be a cube, and may be any shape with a limited space.
The following is an example with n=1, but the present disclosure is not limited thereto.
As shown in fig. 3, the embodiment of the present disclosure sets this particle space as a cube of side length 2, with the x-axis (first direction coordinate), y-axis (second direction coordinate), and z-axis (third direction coordinate) all in the range [-1, 1], while the camera is always located on the inscribed sphere of this cube and always looks at the center of the inscribed sphere.
The efficient weather particle system provided by the embodiments of the present disclosure confines the weather effect to a small space for rendering; the particle space and the map space are independent of each other, so the rain particles are always distributed within the [-1, 1] cube of side length 2.
In everyday experience rain is generated in the clouds, i.e., a camera normally cannot see where the rain appears. The camera lies on the inscribed sphere and initially looks down from the position [0, 0, 1], while rain particles are spawned on the surface where z = 1, so at startup the camera cannot see the rain particles being spawned. The camera position can then be adjusted: the map engine supports adjusting the camera's pitch by at most about 80 degrees, an angle that does not even reach horizontal viewing. Even if the camera were adjusted the full 90 degrees, i.e., looking straight ahead horizontally, no rain particles would be visible at that moment, because the camera's FOV (Field of View, the solid angle of the scene the camera can see, similar to the size of the field of vision; a wide-angle camera, for example, has a large FOV and can see objects further to the left and right, and a FOV of 180 degrees would cover everything in front of the camera's lens plane) is limited, at most as shown in fig. 4.
When the rain scene is rendered, all rain particles are born on the upper surface of the box, and the camera cannot see them because the angle of the camera lens relative to the Z axis is never larger than ninety degrees, which ensures that the rain scene appears endless. As shown in fig. 4, when the camera's angle to the Z axis is ninety degrees, its maximum, the upper surface of the cube is still not visible; i.e., the camera still cannot see the rain particles being spawned, and the desired visual effect is achieved. At the same time, the camera's FOV produces a depth-of-field effect: as shown in fig. 4, of two objects of the same size, the nearer one appears larger and the farther one appears smaller in the final image.
All subsequent changes to the rain scene then take place inside this box.
Since a weather particle system is used to simulate the rain particles, the rain falls by moving each particle's coordinates a small distance in a certain direction every frame. A gravity system is used here: when a rain particle is spawned, a random initial velocity in three-dimensional space is generated, and the gravitational acceleration is [0, 0, -1]. Each frame, the gravitational acceleration is added to the previous frame's velocity, and the resulting velocity is then added to the coordinates, so the rain particles fall faster and faster, simulating the effect of gravity.
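The per-frame accumulation described above can be summarized in a minimal Python sketch. The class name, spawn ranges and time step below are assumptions for illustration, not part of the disclosure:

    import random

    GRAVITY = (0.0, 0.0, -1.0)  # gravitational acceleration [0, 0, -1] along the z axis

    class RainParticle:
        def __init__(self):
            # random initial velocity in three-dimensional space when the particle is spawned
            self.velocity = [random.uniform(-0.1, 0.1),
                             random.uniform(-0.1, 0.1),
                             random.uniform(-0.5, 0.0)]
            # rain particles are born on the upper surface of the box (z = 1)
            self.center = [random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0), 1.0]

        def step(self, dt=1.0 / 60.0):
            # accumulate gravity into the velocity, then the velocity into the coordinates,
            # so the particle falls faster from frame to frame
            for i in range(3):
                self.velocity[i] += GRAVITY[i] * dt
                self.center[i] += self.velocity[i] * dt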
The camera always remains on the surface of the inscribed sphere, looking toward the sphere center. When the map is transformed, for example by translation, scaling or rotation, the three-dimensional particle center coordinates of the rain particles in the cube and the position of the camera are transformed accordingly. If the transformed three-dimensional particle center coordinates of a rain particle are no longer within [-1, 1], they are wrapped back into [-1, 1] by taking the modulo.
For example, if the current geographic location is translated, the rain particles are moved proportionally in the same direction within the box, and if a rain particle leaves the box's coordinate range, the modulo adjustment brings it back into [-1, 1].
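A minimal sketch of the modulo adjustment, assuming the [-1, 1] box (the function name is illustrative):

    def wrap_to_box(value, n=1.0):
        # map a coordinate that has left [-n, n] back into the box, period 2n
        period = 2.0 * n
        return ((value + n) % period) - n

    # a particle that has fallen to z = -1.3 reappears near the top of the box at z = 0.7
    assert abs(wrap_to_box(-1.3) - 0.7) < 1e-9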
If the current map level changes, for example when zooming, this is equivalent to the camera moving rapidly through the map, which in turn is equivalent to the rain particles moving toward or away from the camera lens. The rain particles are therefore moved according to the ratio of the map zoom levels between two adjacent frames, creating the visual effect of leaning the head forward or pulling it back.
Zooming in on the map is like leaning the head forward, and zooming out is like pulling it back; by similar triangles, when the side length is doubled the camera moves exactly from a point on the spherical shell to the sphere center, as shown by the black filled boxes in (a) and (b) of fig. 5.
For example, in a top-down view, reducing the map level is like a helicopter taking off quickly from the ground: the surrounding rain particles are seen accelerating toward the ground, and rain particles that fall from the start produce the effect of the rain receding.
When the camera moves forward by 1/2 unit length (taking the cube of side length 2 as the reference) together with the spherical shell, the rain particle distribution region still lies within the cube; the rain particle coordinates are cycled so that particles located behind the camera are adjusted into the region of rain particles now visible in front of it.
Fig. 6 (a), (b) and (c) illustrate the zoom principle when the camera is in a level view: in the case of a 2x zoom, the coordinates of rain particles that leave the range are adjusted back into [-1, 1] by taking the modulo.
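As a rough illustration of this zoom handling (not taken from the disclosure; the interface and the exact mapping from zoom ratio to displacement are assumptions), the particles can be shifted along the camera's viewing direction by an amount derived from the zoom-level ratio of two adjacent frames and then wrapped back into the box:

    def apply_zoom(particles, prev_level, curr_level, view_dir):
        # ratio > 1 means zooming in, which looks like the particles moving toward the lens
        ratio = curr_level / prev_level
        shift = ratio - 1.0  # assumed mapping from zoom ratio to displacement
        for p in particles:
            for i in range(3):
                p.center[i] -= view_dir[i] * shift
                p.center[i] = wrap_to_box(p.center[i])  # reuse the modulo sketch above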
Rotation is handled equivalently on the camera in the same way, while the camera is always kept on the inscribed sphere of the box, so the rotation of the rain scene can be seen.
The rotation is taken from the view matrix of the current map view, a 4x4 matrix whose upper-left 3x3 submatrix is an orthogonal matrix; this submatrix can be taken out directly to rotate the camera on the unit spherical shell, since the inscribed sphere of the [-1, 1] cube is the unit sphere in three-dimensional space. Multiplying this 3x3 matrix by a rain particle's coordinates in the particle coordinate system transforms the particle into the camera coordinate space.
A matrix B is orthogonal if the product of the transpose of B and B is the identity matrix. When a rotation matrix is orthogonal, the rotation is length-preserving about the origin (i.e., a point in space keeps a constant distance from the origin while being rotated).
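A one-line check of this property (a generic numpy sketch, not part of the disclosure):

    import numpy as np

    def is_orthogonal(B, tol=1e-6):
        # B is orthogonal exactly when the product of its transpose and itself is the identity
        return np.allclose(B.T @ B, np.eye(B.shape[0]), atol=tol)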
From the specified FOV, the corresponding projection matrix (ProjectMatrix) can be derived; multiplying the rain particle coordinates in camera space by this projection matrix produces the depth (perspective) effect.
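The two transformations described in the last few paragraphs, rotating a particle into camera space with the 3x3 submatrix of the view matrix and then applying a perspective projection derived from the FOV, can be sketched as follows. A standard perspective matrix is assumed; the disclosure does not fix its exact form:

    import numpy as np

    def to_camera_space(particle_xyz, view_matrix_4x4):
        # take the upper-left 3x3 orthogonal submatrix of the map's view matrix and
        # rotate the particle from the particle coordinate system into camera space
        rot3x3 = np.asarray(view_matrix_4x4)[:3, :3]
        return rot3x3 @ np.asarray(particle_xyz)

    def project_to_ndc(camera_xyz, fov_y, aspect, near=0.1, far=10.0):
        # standard perspective projection matrix derived from the FOV; multiplying the
        # camera-space coordinates by it produces the depth (near large, far small) effect
        f = 1.0 / np.tan(fov_y / 2.0)
        proj = np.array([[f / aspect, 0.0, 0.0, 0.0],
                         [0.0, f, 0.0, 0.0],
                         [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
                         [0.0, 0.0, -1.0, 0.0]])
        clip = proj @ np.append(camera_xyz, 1.0)
        return clip[:3] / clip[3]  # perspective divide -> normalized device coordinates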
In step S220, particle point coordinates of the particles are determined from three-dimensional particle center coordinates of the particles.
In the embodiments of the present disclosure, the particle point coordinates are determined from the three-dimensional particle center coordinates of the particles in the particle coordinate system; after the particle point coordinates are projected onto the screen, the pixels at which the anisotropic particles are rendered are determined from the screen coordinates obtained by the projection. For example, in the elliptical model the particle point coordinates may include a first particle focus coordinate and a second particle focus coordinate; in the parabolic embodiment illustrated below, the particle point coordinates may also simply be the three-dimensional particle center coordinates.
In an exemplary embodiment, the three-dimensional particle center coordinates may be (x, y, z) first, second and third direction coordinates of the rain particle in the particle coordinate system, respectively. Wherein the range of values of x, y and z depends on the range of values of the particle space, e.g. within the range of [ -n, n ].
In an exemplary embodiment, the particle point coordinates may include first and second particle focus coordinates of the rain particle in the particle coordinate system.
In an exemplary embodiment, determining the particle point coordinates of the particle from its three-dimensional particle center coordinates includes: determining wind parameters; obtaining a focal length L, with L greater than 0; and, if a windless state is determined from the wind parameters, determining that the first particle focus coordinate of the rain particle is (x, y, z+L/2) and that the second particle focus coordinate is (x, y, z-L/2).
In the following examples, the simulation of rain particles is illustrated with an elliptical model, in which each rain particle is simulated as a three-dimensional ellipsoid in the particle coordinate system. In the vertex shader, the vertex coordinates of each rain particle that enter the vertex shader are the particle's three-dimensional particle center coordinates (x, y, z) in the particle coordinate system. The two foci of the rain particle are calculated from the wind parameters and the three-dimensional particle center coordinates (x, y, z); these are taken as the foci of an ellipse, namely the first particle focus and the second particle focus, and an extremely flat ellipse is drawn in the fragment shader to obtain the raindrop shape.
It will be appreciated that although the embodiments are illustrated with elliptical models, the disclosure is not limited thereto, and that raindrop simulation may use other equations instead of elliptical equations, so long as a three-dimensional shape is drawn according to a three-dimensional particle center coordinate (x, y, z).
For example, a parabola, that is, a conic section, can be used. The three-dimensional particle center coordinates (x, y, z) are taken as the vertex of an upper parabola and a lower parabola; the parabolas are then flipped about a chosen position, so that two parabolas whose openings just close against each other can simulate a raindrop.
As shown in fig. 7, in the case of no wind, given that the three-dimensional particle center coordinates of a rain particle are (x, y, z), the obtained first particle focus coordinates of the rain particle are (x, y, z+L/2) and the second particle focus coordinates are (x, y, z-L/2).
In an exemplary embodiment, the wind parameters may include wind direction parameters and wind dynamics parameters.
In an exemplary embodiment, determining the particle point coordinates of the particle according to the three-dimensional particle center coordinates of the particle may further include: if the wind is in a windy state according to the wind parameters, and positive wind facing the first direction of the particle coordinate system is determined to exist according to the wind direction parameters, and the windiness parameter is k, wherein k is larger than 0, the first particle focus coordinate of the rain particles is determined to be (x-k, y, z+L/2), and the second particle focus coordinate is determined to be (x+k, y, z-L/2).
As shown in fig. 7, in the presence of wind, given the three-dimensional particle center coordinates (x, y, z) of the rain particle, the wind direction parameter and the wind intensity parameter k, the obtained first particle focus coordinates of the rain particle are (x-k, y, z+L/2) and the second particle focus coordinates are (x+k, y, z-L/2).
In an exemplary embodiment, the wind parameters include a wind direction parameter θ and a wind intensity parameter β.
In an exemplary embodiment, determining the particle point coordinates of the particle from the three-dimensional particle center coordinates of the particle further comprises: and if the first particle focus coordinate of the rain particles is determined to be (x-beta cos theta, y-beta sin theta, z+L/2) and the second particle focus coordinate is determined to be (x+beta cos theta, y+beta sin theta, z-L/2) according to the wind parameters.
When considering the influence of the wind, the pose of a particle can also be expressed directly by a parametric expression, instead of representing the wind direction and magnitude by a horizontal vector as in fig. 7 or fig. 12.
For example, the wind intensity parameter and wind direction parameter may be represented in polar coordinates as (β, θ). When calculating the upper and lower foci, if the three-dimensional particle center coordinates are (x, y, z), the foci can be computed directly as the first particle focus coordinates (x-βcosθ, y-βsinθ, z+L/2) and the second particle focus coordinates (x+βcosθ, y+βsinθ, z-L/2).
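Both the windless case and the polar-coordinate wind case can be combined into one small helper; this is only a sketch of the formulas above, and setting β = 0 reproduces the windless formulas:

    import math

    def rain_particle_foci(center, L, beta=0.0, theta=0.0):
        # center: three-dimensional particle center coordinates (x, y, z)
        # L:      focal length of the ellipse, L > 0
        # beta:   wind intensity parameter (0 means windless)
        # theta:  wind direction parameter in radians
        x, y, z = center
        dx, dy = beta * math.cos(theta), beta * math.sin(theta)
        first_focus = (x - dx, y - dy, z + L / 2.0)
        second_focus = (x + dx, y + dy, z - L / 2.0)
        return first_focus, second_focus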
In step S230, the particle point coordinates of the particles are projected onto a screen, and screen coordinates of the particle point coordinates are obtained.
In an exemplary embodiment, projecting the particle point coordinates of the particles onto a screen to obtain the screen coordinates of the particle point coordinates may include: acquiring target operation information for a target map; if it is determined from the target operation information that the camera is rotated, determining a rotation matrix of the camera; converting the particle point coordinates into first camera coordinates in the rotated camera coordinate system according to the rotation matrix; and projecting the first camera coordinates to obtain the screen coordinates.
For example, assume the camera is always located at the coordinates [0, 0, 1] in the particle coordinate system, with its field of view always oriented toward the center of the inscribed sphere of the particle space, i.e., toward the coordinates [0, 0, -1]. Treat the camera as a cylinder: the line connecting the centers of the two circles on its upper and lower surfaces is its center line, and this center line is the Z axis of the camera coordinate system, coinciding with the Z axis of the particle coordinate system. Draw a cross on the circle on the lower surface of the camera; one of the lines is the X axis of the camera coordinate system and the other is the Y axis. Initially, the positive X direction of the camera coordinate system is assumed to be parallel to the positive X direction of the particle coordinate system. When the user rotates the map, the camera is rotated correspondingly about the Z axis, so the X axis of the camera coordinate system deviates from the X axis of the particle coordinate system, and the rotation matrix of the camera can be determined from this deviation.
In an exemplary embodiment, projecting the particle point coordinates of the particles to a screen to obtain screen coordinates of the particle point coordinates may further include: if the target operation information is determined to be translation operation, determining translated particle point coordinates according to a translation distance, a translation direction and the particle point coordinates; and projecting the translated particle point coordinates to the screen to obtain the screen coordinates.
In an exemplary embodiment, projecting the particle point coordinates of the particles onto a screen to obtain the screen coordinates of the particle point coordinates may further include: if the target operation information is determined to be a scaling operation, obtaining the orientation coordinates of the camera in the camera coordinate system; calculating the inverse of the rotation matrix; obtaining the orientation coordinates of the camera in the particle coordinate system from the orientation coordinates of the camera in the camera coordinate system and the inverse matrix; adjusting the particle point coordinates according to the scaling operation along the camera's orientation in the particle coordinate system to obtain adjusted particle point coordinates; applying the rotation transform to the adjusted particle point coordinates to convert them into second camera coordinates in the camera coordinate system; and projecting the second camera coordinates to obtain the screen coordinates.
Taking a map application as an example: in the map application the user can rotate, translate, or scale the map. According to the user's operation, the camera or the rain particle's three-dimensional particle center coordinates in the particle coordinate system are transformed accordingly; the rain particle's coordinates after the operation are converted into the camera coordinate system, and from the camera coordinate system they are projected onto the screen to determine the screen coordinates.
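A hedged sketch of how these map operations could drive the projection. The operation encoding, the camera viewing direction in camera space, and the displacement used for scaling are all assumptions made for illustration; to_camera_space and project_to_ndc are the sketches given earlier:

    import numpy as np

    def screen_coords_for_point(point, op, view_matrix_4x4, fov_y, aspect):
        # point: a particle point coordinate in the particle coordinate system
        # op:    assumed encoding of the target operation information, e.g.
        #        {"type": "translate", "offset": (dx, dy, dz)} or {"type": "scale", "factor": s}
        p = np.asarray(point, dtype=float)
        rot3x3 = np.asarray(view_matrix_4x4)[:3, :3]
        if op["type"] == "translate":
            # shift the particle by the translation distance and direction, then project
            p = p + np.asarray(op["offset"])
        elif op["type"] == "scale":
            # the camera's orientation is fixed in camera space; the inverse (here the
            # transpose, since the matrix is orthogonal) brings it back into particle space
            view_dir_particle = rot3x3.T @ np.array([0.0, 0.0, -1.0])
            p = p + view_dir_particle * (op["factor"] - 1.0)
        camera_xyz = to_camera_space(p, view_matrix_4x4)  # rotation and final conversion
        return project_to_ndc(camera_xyz, fov_y, aspect)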
In step S240, particles having anisotropy are rendered according to the screen coordinates of the particle point coordinates.
In an exemplary embodiment, rendering the particles with anisotropy according to the screen coordinates of the particle point coordinates may include: determining an enclosing shape of screen coordinates enclosing the particle point coordinates; obtaining a distance from a surrounding shape of a screen coordinate surrounding the particle point coordinate to the screen coordinate of the particle point coordinate; and rendering the particles with anisotropy according to the distance from the surrounding shape to the screen coordinates of the particle point coordinates.
In an exemplary embodiment, the screen coordinates of the particle point coordinates may include first screen focus coordinates and second screen focus coordinates of the rain particles on the screen.
In an exemplary embodiment, obtaining the distance from the surrounding shape surrounding the screen coordinates of the particle point coordinates to the screen coordinates of the particle point coordinates may include: determining the size of a surrounding shape of screen coordinates surrounding the particle point coordinates according to depth information between the particles and the screen, wherein the center coordinates of the surrounding shape on the screen coincide with the coordinates of the three-dimensional particle center coordinates projected to the screen; and calculating the sum of the distances from each pixel point in the surrounding shape to the first screen focus coordinate and the second screen focus coordinate as the distance from the surrounding shape to the screen coordinate of the particle point coordinate.
In the embodiments of the present disclosure, the center coordinates of the shape surrounding the particle (for example a square, although the present disclosure is not limited thereto) are the coordinates of the particle's three-dimensional center projected onto the screen. These projected coordinates do not necessarily coincide with the midpoint of the line connecting the first screen focus coordinates and the second screen focus coordinates, due to perspective, whereas in the particle coordinate system the three-dimensional particle center lies at the midpoint of the line connecting the first and second particle focus coordinates.
Fig. 7 above shows an ellipsoidal simulation of a rain particle in three dimensions in the particle space. Since a weather particle system is used for rendering, the point corresponding to each rain particle carries only its three-dimensional particle center coordinates A = (x, y, z) (also called the particle vertex coordinates). A square point is then rendered centered on the particle vertex at the position where the particle vertex maps onto the screen. In the embodiments of the present disclosure, for a rain particle that has only one set of three-dimensional particle center coordinates (x, y, z) to exhibit anisotropy, two-dimensional information needs to be constructed, and the anisotropy is exhibited using a two-dimensional image.
Consider the windless condition, and assume an ellipsoid is used to simulate the rain particle with a fixed focal length L, L greater than 0. The spatial coordinates of the upper and lower foci are obtained by adding and subtracting L/2 on the Z axis of the raindrop's particle vertex coordinates A: first particle focus coordinates (x, y, z+L/2) and second particle focus coordinates (x, y, z-L/2).
Fig. 8 is a schematic diagram of a rain particle model as seen from above provided by embodiments of the present disclosure.
In an exemplary embodiment, rendering the anisotropic particles according to the distance from the surrounding shape to the screen coordinates of the particle point coordinates includes: rendering each pixel in the surrounding shape whose sum of distances to the first screen focus coordinate and the second screen focus coordinate is larger than the length of the major axis as a first color; and rendering each pixel in the surrounding shape whose sum of distances to the first screen focus coordinate and the second screen focus coordinate is less than or equal to the length of the major axis as a second color, so as to render the anisotropic rain particle.
With the spatial coordinates of the rain particle's foci (in the general windy case the first particle focus coordinates (x-k, y, z+L/2) and the second particle focus coordinates (x+k, y, z-L/2)), the two points are projected onto the screen using the projection matrix, yielding a perspective view of the foci, i.e., the first screen focus coordinates and the second screen focus coordinates are determined. For example, as shown in fig. 9, the upper focus seen by the camera is taken as the first screen focus coordinate and the lower focus as the second screen focus coordinate.
A perspective raindrop map can be obtained by drawing a two-dimensional flat ellipse to approximate a raindrop to be seen with the two points shown in fig. 9 as focal points.
For example, as shown in fig. 10, the closer a rain particle is to the screen (this distance is referred to as the depth information), the larger the side length of the square used to surround the two points; the farther away it is, the smaller the side length. The center of the square coincides with the midpoint of the line connecting the two points.
The sum of the distances from each pixel in the square to the two foci is calculated. If the sum of a pixel's distances to the two foci is larger than the length of the major axis of the ellipse constructed from the two foci, the pixel is considered to lie outside the ellipse and is rendered as the first color (e.g., transparent); if the sum of a pixel's distances to the two foci is less than or equal to the length of the major axis of that ellipse, the pixel is considered to lie on or inside the ellipse and is rendered as the second color (e.g., white).
It is to be understood that the present disclosure is not limited to a specific value of the first color and the second color, as long as the first color and the second color are different, for example, the second color may be white, and the first color may be blue. Or the second color is blue, the first color is black, etc.
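The per-pixel test of fig. 10 amounts to comparing the sum of distances to the two screen foci with the major-axis length. The following fragment-shader-like sketch is written in Python for readability; the pixel container, the color values and the square sizing are illustrative assumptions:

    import math

    def render_raindrop(pixels, f1, f2, major_axis, half_side, first_color, second_color):
        # f1, f2:      first and second screen focus coordinates (2D pixel positions)
        # major_axis:  length of the major axis of the ellipse constructed from the foci
        # half_side:   half the side length of the surrounding square; chosen larger when
        #              the particle is closer to the screen (smaller depth), smaller when farther
        cx, cy = (f1[0] + f2[0]) / 2.0, (f1[1] + f2[1]) / 2.0  # square centered between the foci
        for px in range(int(cx - half_side), int(cx + half_side) + 1):
            for py in range(int(cy - half_side), int(cy + half_side) + 1):
                d = math.dist((px, py), f1) + math.dist((px, py), f2)
                # outside the ellipse -> first color (e.g. transparent),
                # on or inside the ellipse -> second color (e.g. white)
                pixels[(px, py)] = first_color if d > major_axis else second_color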
Fig. 11 is a schematic diagram of the effect of top-down rainfall seen by a camera provided by an embodiment of the present disclosure.
As shown in fig. 11, such a calculation is performed for each rain particle, and then at the current camera position, a large number of raindrops enter from outside the screen to converge toward the center of the screen, i.e., a perspective view looking down at the rain.
Similarly, for a head-on (level) view, the effect seen is like the diagram shown in fig. 13, because the spatial coordinates of the raindrop's foci are three-dimensional coordinates determined within the model box.
According to the image processing method provided by the embodiments of the present disclosure, the three-dimensional particle center coordinates of anisotropic particles in the particle coordinate system are acquired, the particles being located in a limited particle space in the particle coordinate system. The particle point coordinates of the particles are then determined from the three-dimensional particle center coordinates, the particle point coordinates are projected onto a screen to obtain their screen coordinates, and the distance from the surrounding shape enclosing those screen coordinates to the screen coordinates of the particle point coordinates is obtained, so that the anisotropic particles can be rendered according to that distance. On the one hand, by converting the particle coordinates of the particles to be rendered in the particle space into screen coordinates, the anisotropy of the particles (e.g., raindrops) is simulated; on the other hand, since the shape of the anisotropic particles (e.g., raindrops) does not need to be calculated from camera parameters and texture information, the calculation efficiency is improved while a good visual effect is achieved.
In the case of wind, the wind can be reduced to a horizontal vector of limited length. Assuming the length of the wind vector does not exceed a maximum of k (which can be set by the user), the wind direction acts on the raindrops in the manner shown in fig. 12.
The wind direction changes dynamically at run time, and the pose of the rain (its position and posture, that is, the position, rotation angle, orientation, etc. of the three-dimensional model in space) changes with it, likewise producing the effect shown in fig. 13.
In one aspect, the weather particle system converts coordinates in the particle space into screen coordinates and renders the pixels within a small square around each screen coordinate, so the rain scene is simulated efficiently with an efficient weather particle system based on a small-range rain scene model box. In another aspect, the vertex and fragment shaders are improved: the vertex shader converts the spatial coordinates of the rain particles into screen coordinates during rendering, and the fragment shader then computes the color of each screen pixel from the information output by the vertex shader. That is, the three-dimensional anisotropy of the raindrops is simulated with an elliptic equation in the vertex shader rather than by passing camera parameters and texture information into the fragment shader to compute the raindrop shape, which improves the calculation efficiency while keeping a good visual effect. In addition, the wind parameters can be changed at run time to dynamically change the pose of the raindrops, and the wind-direction-change function used to determine the wind parameters can be customized by the user, preparing an interface for accessing more weather information in the future.
The effect of exhibiting a rainscape using the method provided by the embodiment of the present disclosure is illustrated below by fig. 13 to 30. It should be noted that, the rain scene simulated by the embodiment of the present disclosure is a dynamic video, and fig. 13 to 30 below are picture frames taken from the dynamic rain scene video under the corresponding scene.
Fig. 13 is a schematic view of a rain scene effect displayed in a windy scene according to an embodiment of the present disclosure.
Fig. 14 to 17 are schematic views of the effect of a rain scene displayed in a panning scene according to an embodiment of the present disclosure.
Fig. 18 to 21 are schematic views of the effect of a rain scene displayed in a three-dimensional view scene according to an embodiment of the present disclosure.
Fig. 22 to 25 are schematic views of the effect of a rain scene displayed in a zoom scene according to an embodiment of the present disclosure.
Fig. 26 to 29 are schematic views of the effect of a rain scene displayed in a rotating scene according to an embodiment of the present disclosure.
Fig. 30 is a schematic view of an effect of a rain scene displayed in a top view scene according to an embodiment of the present disclosure.
According to the method provided by the embodiments of the present disclosure, on one hand, a particle space independent of the map is provided, and map transformations are mapped equivalently onto this small particle space to achieve the corresponding visual effects. Because the number of particles is controlled, the calculation efficiency is high and the method runs well on mobile terminals. In other words, the embodiments of the present disclosure provide an efficient implementation of the particle effect for rainy weather scenes: based on the computing power of the mobile terminal, the displayed rain scene is more realistic while the frame rate is guaranteed, and the visual effect changes correctly as the map view is moved, scaled, rotated, and so on. On the other hand, the embodiments of the present disclosure use an elliptic equation to simulate the three-dimensional shape of raindrops so that the efficient weather particle system of the rain scene model box can be reused. This improvement ensures the visual effect while improving efficiency, so the effect can be rendered in real time on mobile devices, balancing efficiency and visual quality.
In terms of application, because the rain scene model box is reused, the rendered rain scene can move and change along with the map; for example, during navigation the rain scene appears continuous and conveys a sense of motion. Rain scenes and snow scenes can also be rendered simultaneously, improving the level of code integration.
Further, referring to fig. 31, fig. 31 is a schematic view of an image processing apparatus according to an embodiment of the disclosure. The image processing apparatus may be a computer program (including program code, etc.) running in a computer device, for example the image processing apparatus may be an application software; the apparatus may be used to perform the corresponding steps in the methods provided by the embodiments of the present disclosure. As shown in fig. 31, the image processing apparatus 3100 may be used for a computer device and/or a user device in the corresponding embodiment, and specifically, the image processing apparatus 3100 may include: an acquisition unit 3110, a determination unit 3120, an acquisition unit 3130, and a rendering unit 3140.
The acquisition unit 3110 may be used to acquire three-dimensional particle center coordinates of particles having anisotropy in a particle coordinate system, the particles being located in a limited particle space in the particle coordinate system.
The determining unit 3120 may be used to determine particle point coordinates of the particle from three-dimensional particle center coordinates of the particle.
The obtaining unit 3130 may be used to project the particle point coordinates of the particles to a screen, obtaining screen coordinates of the particle point coordinates.
The rendering unit 3140 may be used to render particles having anisotropy according to screen coordinates of the particle point coordinates.
In an exemplary embodiment, the rendering unit 3140 may also be used to determine a surrounding shape of screen coordinates surrounding the particle point coordinates; obtaining a distance from a surrounding shape of a screen coordinate surrounding the particle point coordinate to the screen coordinate of the particle point coordinate; and rendering the particles with anisotropy according to the distance from the surrounding shape to the screen coordinates of the particle point coordinates.
In an exemplary embodiment, the particles may be rain particles, and the three-dimensional particle center coordinates may be (x, y, z) first, second, and third direction coordinates of the rain particles in the particle coordinate system, respectively.
The particle point coordinates may include first particle focus coordinates and second particle focus coordinates of the rain particles in the particle coordinate system.
Wherein the determining unit 3120 may further be used to: determine wind parameters; obtain a focal length L, with L greater than 0; and, if a windless state is determined from the wind parameters, determine that the first particle focus coordinate of the rain particle is (x, y, z+L/2) and that the second particle focus coordinate is (x, y, z-L/2).
In an exemplary embodiment, the wind parameters may include wind direction parameters and wind dynamics parameters;
wherein the determining unit 3120 may further be used to: if a windy state is determined from the wind parameters, a positive wind toward the first direction of the particle coordinate system is determined to exist from the wind direction parameter, and the wind intensity parameter is k, with k greater than 0, determine that the first particle focus coordinate of the rain particle is (x-k, y, z+L/2) and that the second particle focus coordinate is (x+k, y, z-L/2).
In an exemplary embodiment, the wind parameters may include a wind direction parameter θ and a wind intensity parameter β.
Wherein the determining unit 3120 may further be for: determining, according to the wind parameters, that the first particle focus coordinate of the rain particles is (x-βcosθ, y-βsinθ, z+L/2) and that the second particle focus coordinate is (x+βcosθ, y+βsinθ, z-L/2).
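The two windy cases can be sketched in the same illustrative way. With θ = 0 and β = k the general formula reduces to the positive-first-direction case (x-k, y, z+L/2) and (x+k, y, z-L/2), and with β = 0 it reduces to the windless case. The function and parameter names are assumptions:

```python
import math

def rain_particle_foci_with_wind(center, focal_length, theta, beta):
    """Offset the two particle foci horizontally according to a wind
    direction theta (radians, measured in the first/second-direction
    plane) and a wind strength beta >= 0, while keeping the vertical
    offset of +/- L/2 along the third direction."""
    x, y, z = center
    L = focal_length
    dx = beta * math.cos(theta)
    dy = beta * math.sin(theta)
    first_focus = (x - dx, y - dy, z + L / 2.0)
    second_focus = (x + dx, y + dy, z - L / 2.0)
    return first_focus, second_focus
```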
In an exemplary embodiment, the screen coordinates of the particle point coordinates may include first screen focus coordinates and second screen focus coordinates of the rain particles on the screen.
Wherein the obtaining unit 3130 may further be configured to: determining the size of a surrounding shape of screen coordinates surrounding the particle point coordinates according to depth information between the particles and the screen, wherein the center coordinates of the surrounding shape on the screen coincide with the coordinates of the three-dimensional particle center coordinates projected to the screen; and calculating the sum of the distances from each pixel point in the surrounding shape to the first screen focus coordinate and the second screen focus coordinate as the distance from the surrounding shape to the screen coordinate of the particle point coordinate.
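The embodiment only requires that the size of the surrounding shape be determined from the depth information between the particle and the screen. One possible sizing rule, in which nearer particles are given larger surrounding shapes, is sketched below; the inverse-depth scaling, the clamping values, and the parameter names are assumptions:

```python
def surrounding_shape_size(base_size_px, depth, near=1.0, max_size_px=64):
    """Return the side length in pixels of the surrounding shape for a
    particle at the given depth: base_size_px is the size at the near
    plane, and the result shrinks as depth increases. The size is clamped
    so that very close particles do not cover the whole screen."""
    depth = max(depth, near)
    size = base_size_px * near / depth
    return max(1, min(int(round(size)), max_size_px))
```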
In an exemplary embodiment, the rendering unit 3140 may also be used for: rendering, as a first color, pixel points in the surrounding shape for which the sum of the distances to the first screen focus coordinate and the second screen focus coordinate is greater than the length of the long axis; and rendering, as a second color, pixel points in the surrounding shape for which the sum of the distances to the first screen focus coordinate and the second screen focus coordinate is less than or equal to the length of the long axis, so as to render the rain particles with anisotropy.
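A minimal per-pixel sketch of this two-color test follows. The bounding-box representation of the surrounding shape, the color tuples, and the pixel-buffer layout are illustrative assumptions; only the comparison of the sum of distances against the length of the long axis comes from the embodiment:

```python
import math

def render_rain_particle(pixels, bbox, focus1, focus2, long_axis,
                         first_color=(0, 0, 0, 0),           # assumed: transparent background
                         second_color=(200, 200, 255, 180)):  # assumed: semi-transparent drop
    """pixels: a dict keyed by (px, py); bbox: (x0, y0, x1, y1) bounding box
    of the surrounding shape in screen space; focus1, focus2: the first and
    second screen focus coordinates. A pixel belongs to the rain drop when
    the sum of its distances to the two foci is at most the long-axis
    length, i.e. when it lies inside the ellipse defined by the foci."""
    x0, y0, x1, y1 = bbox
    for py in range(y0, y1 + 1):
        for px in range(x0, x1 + 1):
            d = math.dist((px, py), focus1) + math.dist((px, py), focus2)
            pixels[(px, py)] = second_color if d <= long_axis else first_color
    return pixels
```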
In an exemplary embodiment, the particle space may be a cube with a side length of 2n, in which the first direction coordinate, the second direction coordinate, and the third direction coordinate in the particle coordinate system each range over [-n, n], n being greater than or equal to 1.
A camera may be disposed on a surface of the inscribed sphere of the cube with a field of view directed toward a center of the inscribed sphere.
Wherein the obtaining unit 3130 may further be configured for: acquiring target operation information for a target map; if it is determined according to the target operation information that the camera is rotated, determining a rotation matrix of the camera; converting the particle point coordinates into first camera coordinates in a rotated camera coordinate system according to the rotation matrix; and projecting the first camera coordinates to the screen coordinates.
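One illustrative reading of the rotation branch is sketched below with NumPy. The pinhole projection model and the screen-centered principal point are assumptions added so the example is runnable; the embodiment itself only specifies the rotation into the camera coordinate system followed by projection to the screen:

```python
import numpy as np

def project_with_rotation(particle_point, rotation_matrix, focal, screen_w, screen_h):
    """Transform a particle point coordinate (3-vector in the particle
    coordinate system) into the rotated camera coordinate system using the
    camera's 3x3 rotation matrix, then project it onto the screen with a
    simple pinhole model. Returns (u, v) screen coordinates, or None if
    the point ends up behind the camera."""
    R = np.asarray(rotation_matrix, dtype=float)
    p_cam = R @ np.asarray(particle_point, dtype=float)  # first camera coordinates
    x_c, y_c, z_c = p_cam
    if z_c <= 0:
        return None
    u = focal * x_c / z_c + screen_w / 2.0
    v = focal * y_c / z_c + screen_h / 2.0
    return u, v
```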
In an exemplary embodiment, the obtaining unit 3130 may also be used for: if the target operation information is determined to be a translation operation, determining translated particle point coordinates according to a translation distance, a translation direction, and the particle point coordinates; and projecting the translated particle point coordinates to the screen to obtain the screen coordinates.
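The translation branch can be sketched as a simple vector shift of the particle point coordinates before projection; treating the translation direction as a unit vector and applying the shift in the particle coordinate system are assumptions for illustration:

```python
import numpy as np

def translate_particle_point(particle_point, translation_distance, translation_direction):
    """Shift a particle point by translation_distance along
    translation_direction (normalized here) to obtain the translated
    particle point coordinates, which are then projected to the screen."""
    d = np.asarray(translation_direction, dtype=float)
    d = d / np.linalg.norm(d)
    return np.asarray(particle_point, dtype=float) + translation_distance * d
```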
In an exemplary embodiment, the obtaining unit 3130 may also be used for: if the target operation information is determined to be a scaling operation, obtaining the orientation coordinates of the camera in the camera coordinate system; calculating an inverse matrix of the rotation matrix; obtaining the orientation coordinates of the camera in the particle coordinate system according to the orientation coordinates of the camera in the camera coordinate system and the inverse matrix; adjusting the particle point coordinates along the orientation coordinates of the camera in the particle coordinate system according to the scaling operation to obtain adjusted particle point coordinates; performing rotation transformation on the adjusted particle point coordinates to convert them into second camera coordinates in the camera coordinate system; and projecting the second camera coordinates to the screen coordinates.
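The scaling branch can be sketched as follows: the camera orientation is mapped back into the particle coordinate system through the inverse of the rotation matrix, the particle point is moved along that orientation by an amount derived from the scaling operation, and the adjusted point is rotated into the camera coordinate system for projection. The signed zoom amount and the normalization step are assumptions:

```python
import numpy as np

def adjust_point_for_zoom(particle_point, rotation_matrix, camera_orientation_cam, zoom_amount):
    """camera_orientation_cam: the camera's orientation expressed in the
    camera coordinate system; zoom_amount: a signed scalar derived from the
    scaling operation (positive to zoom in). Returns the second camera
    coordinates of the adjusted particle point."""
    R = np.asarray(rotation_matrix, dtype=float)
    R_inv = np.linalg.inv(R)                             # inverse of the rotation matrix
    orient_particle = R_inv @ np.asarray(camera_orientation_cam, dtype=float)
    orient_particle = orient_particle / np.linalg.norm(orient_particle)
    adjusted = np.asarray(particle_point, dtype=float) + zoom_amount * orient_particle
    return R @ adjusted                                  # rotate into the camera coordinate system
```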
The disclosed embodiments provide an image processing apparatus that may be run in a user device and/or a computer device.
Referring to fig. 32, fig. 32 is a schematic structural diagram of a computer device according to an embodiment of the present disclosure. As shown in fig. 32, a computer device in an embodiment of the present disclosure may include: one or more processors 3201, a memory 3202, and an input-output interface 3203. The processor 3201, the memory 3202, and the input/output interface 3203 are connected via a bus 3204. The memory 3202 is used for storing a computer program, the computer program including program instructions; the input/output interface 3203 is used for receiving data and outputting data, such as for performing data interaction between a host and a computer device, or between virtual machines in the host; and the processor 3201 is configured to execute the program instructions stored in the memory 3202.
The processor 3201 may perform the following operations, among others: acquiring three-dimensional particle center coordinates of anisotropic particles in a particle coordinate system, wherein the particles are positioned in a limited particle space in the particle coordinate system; determining particle point coordinates of the particles according to the three-dimensional particle center coordinates of the particles; projecting the particle point coordinates of the particles to a screen to obtain screen coordinates of the particle point coordinates; obtaining a distance from a surrounding shape of a screen coordinate surrounding the particle point coordinate to the screen coordinate of the particle point coordinate; and rendering the particles with anisotropy according to the distance from the surrounding shape to the screen coordinates of the particle point coordinates.
In some possible implementations, the processor 3201 may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 3202 may include read only memory and random access memory, and provides instructions and data to the processor 3201 and the input output interface 3203. A portion of memory 3202 may also include non-volatile random access memory. For example, the memory 3202 may also store information of a device type.
In a specific implementation, the computer device may execute, through the functional modules built into the computer device, the implementation manners provided by the steps in the foregoing embodiments of the present disclosure. For details, reference may be made to the implementation manners provided by those steps, which are not repeated herein.
Embodiments of the present disclosure provide a computer device comprising: the processor, the input/output interface, and the memory, and the computer program in the memory is acquired by the processor, so as to execute the steps of the method shown in the embodiment of the above figure.
The embodiments of the present disclosure further provide a computer readable storage medium, where the computer readable storage medium stores a computer program, where the computer program is adapted to be loaded by the processor and execute the image processing method provided by each step in the foregoing embodiment, and specifically refer to an implementation manner provided by each step in the foregoing embodiment, which is not described herein again. In addition, the description of the beneficial effects of the same method is omitted. For technical details not disclosed in the embodiments of the computer-readable storage medium according to the present disclosure, please refer to the description of the embodiments of the method according to the present disclosure. As an example, a computer program may be deployed to be executed on one computer device or on multiple computer devices at one site or distributed across multiple sites and interconnected by a communication network.
The computer readable storage medium may be the image processing apparatus provided in any of the foregoing embodiments or an internal storage unit of the computer device, for example, a hard disk or a memory of the computer device. The computer readable storage medium may also be an external storage device of the computer device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card (flash card) or the like, which are provided on the computer device. Further, the computer-readable storage medium may also include both internal storage units and external storage devices of the computer device. The computer-readable storage medium is used to store the computer program and other programs and data required by the computer device. The computer-readable storage medium may also be used to temporarily store data that has been output or is to be output.
The disclosed embodiments also provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium and executes the computer instructions to cause the computer device to perform the methods provided in the various alternatives in the embodiments of the figures above.
The terms "first", "second", and the like in the description, claims, and drawings of the embodiments of the disclosure are used to distinguish between different objects and not to describe a particular sequential order. Furthermore, the term "include" and any variations thereof are intended to cover a non-exclusive inclusion. For example, a process, method, apparatus, article, or device that comprises a series of steps or modules is not limited to the listed steps or modules, but may alternatively include other steps or modules not listed or inherent to such process, method, apparatus, article, or device.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps described in connection with the embodiments disclosed herein may be embodied in electronic hardware, in computer software, or in a combination of the two, and that the elements and steps of the examples have been generally described in terms of function in this description to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The methods and related devices provided by the embodiments of the present disclosure are described with reference to the method flowcharts and/or structure diagrams provided by the embodiments of the present disclosure, and each flowchart and/or block of the method flowcharts and/or structure diagrams may be implemented by computer program instructions, and combinations of flowcharts and/or block diagrams. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable image processing method apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable image processing method apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or structural diagram block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable image processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or structural diagram block or blocks. These computer program instructions may also be loaded onto a computer or other programmable image processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or structures.
The foregoing disclosure is merely illustrative of preferred embodiments of the present disclosure and is not intended to limit the scope of protection, which is defined by the appended claims.

Claims (14)

1. An image processing method, comprising:
acquiring three-dimensional particle center coordinates of anisotropic particles in a particle coordinate system, wherein the particles are positioned in a limited particle space in the particle coordinate system;
determining particle point coordinates of the particles according to the three-dimensional particle center coordinates of the particles;
projecting the particle point coordinates of the particles to a screen to obtain screen coordinates of the particle point coordinates;
and rendering the particles with anisotropy according to the screen coordinates of the particle point coordinates.
2. The method of claim 1, wherein rendering particles having anisotropy based on screen coordinates of the particle point coordinates comprises:
determining a surrounding shape of screen coordinates surrounding the particle point coordinates;
obtaining a distance from a surrounding shape of a screen coordinate surrounding the particle point coordinate to the screen coordinate of the particle point coordinate;
and rendering the particles with anisotropy according to the distance from the surrounding shape to the screen coordinates of the particle point coordinates.
3. The method of claim 2, wherein the particles are rain particles, and the three-dimensional particle center coordinates are (x, y, z), where x, y, and z are respectively the first direction coordinate, the second direction coordinate, and the third direction coordinate of the rain particles in the particle coordinate system;
the particle point coordinates comprise first particle focus coordinates and second particle focus coordinates of the rain particles under the particle coordinate system;
wherein determining the particle point coordinates of the particles according to the three-dimensional particle center coordinates of the particles comprises:
determining wind parameters;
obtaining focal length L, L being greater than 0;
and if a windless state is determined according to the wind parameters, determining that the first particle focus coordinate of the rain particle is (x, y, z+L/2) and that the second particle focus coordinate of the rain particle is (x, y, z-L/2).
4. A method according to claim 3, wherein the wind parameters include a wind direction parameter and a wind strength parameter;
wherein determining the particle point coordinates of the particles according to the three-dimensional particle center coordinates of the particles further comprises:
if a windy state is determined according to the wind parameters, a positive wind along the first direction of the particle coordinate system is determined to exist according to the wind direction parameter, and the wind strength parameter is k, k being greater than 0, determining that the first particle focus coordinate of the rain particles is (x-k, y, z+L/2) and that the second particle focus coordinate is (x+k, y, z-L/2).
5. A method according to claim 3, wherein the wind parameters include a wind direction parameter θ and a wind intensity parameter β;
wherein determining the particle point coordinates of the particles according to the three-dimensional particle center coordinates of the particles further comprises:
determining, according to the wind parameters, that the first particle focus coordinate of the rain particles is (x-βcosθ, y-βsinθ, z+L/2) and that the second particle focus coordinate is (x+βcosθ, y+βsinθ, z-L/2).
6. A method according to claim 3, wherein the screen coordinates of the particle point coordinates comprise first and second screen focus coordinates of the rain particles on the screen;
wherein obtaining a distance from a surrounding shape of a screen coordinate surrounding the particle point coordinate to the screen coordinate of the particle point coordinate includes:
determining the size of a surrounding shape of screen coordinates surrounding the particle point coordinates according to depth information between the particles and the screen, wherein the center coordinates of the surrounding shape on the screen coincide with the coordinates of the three-dimensional particle center coordinates projected to the screen;
and calculating the sum of the distances from each pixel point in the surrounding shape to the first screen focus coordinate and the second screen focus coordinate as the distance from the surrounding shape to the screen coordinate of the particle point coordinate.
7. The method of claim 6, wherein rendering particles having anisotropy based on a distance of the surrounding shape from screen coordinates of the particle point coordinates comprises:
rendering, as a first color, pixel points in the surrounding shape for which the sum of the distances to the first screen focus coordinate and the second screen focus coordinate is greater than the length of the long axis;
and rendering, as a second color, pixel points in the surrounding shape for which the sum of the distances to the first screen focus coordinate and the second screen focus coordinate is less than or equal to the length of the long axis, so as to render the rain particles with anisotropy.
8. The method of claim 1, wherein the particle space is a cube with a side length of 2n, in which the first direction coordinate, the second direction coordinate, and the third direction coordinate in the particle coordinate system each range over [-n, n], n being greater than or equal to 1;
a camera is arranged on a surface of the inscribed sphere of the cube, with the field of view facing toward the center of the inscribed sphere;
wherein projecting the particle point coordinates of the particles to a screen to obtain the screen coordinates of the particle point coordinates comprises:
acquiring target operation information aiming at a target map;
if it is determined according to the target operation information that the camera is rotated, determining a rotation matrix of the camera;
converting the particle point coordinates into first camera coordinates in a rotated camera coordinate system according to the rotation matrix;
projecting the first camera coordinates to the screen coordinates.
9. The method of claim 8, wherein projecting the particle point coordinates of the particles to a screen to obtain the screen coordinates of the particle point coordinates further comprises:
if the target operation information is determined to be translation operation, determining translated particle point coordinates according to a translation distance, a translation direction and the particle point coordinates;
and projecting the translated particle point coordinates to the screen to obtain the screen coordinates.
10. The method of claim 8, wherein projecting the particle point coordinates of the particles to a screen to obtain the screen coordinates of the particle point coordinates further comprises:
if the target operation information is determined to be a scaling operation, obtaining the orientation coordinates of the camera in the camera coordinate system;
calculating an inverse matrix of the rotation matrix;
obtaining the orientation coordinates of the camera in the particle coordinate system according to the orientation coordinates of the camera in the camera coordinate system and the inverse matrix;
adjusting the particle point coordinates along the orientation coordinates of the camera in the particle coordinate system according to the scaling operation to obtain adjusted particle point coordinates;
performing rotation transformation on the adjusted particle point coordinates, and converting the particle point coordinates into second camera coordinates in the camera coordinate system;
projecting the second camera coordinates to the screen coordinates.
11. An image processing apparatus, characterized in that the apparatus comprises:
an acquisition unit for acquiring a three-dimensional particle center coordinate of a particle having anisotropy in a particle coordinate system in which the particle is located in a limited particle space;
a determining unit for determining particle point coordinates of the particles according to three-dimensional particle center coordinates of the particles;
an obtaining unit, configured to project particle point coordinates of the particles onto a screen, and obtain screen coordinates of the particle point coordinates;
and the rendering unit is used for rendering the particles with anisotropy according to the screen coordinates of the particle point coordinates.
12. A computer device, comprising a processor, a memory, and an input-output interface;
the processor is connected to the memory and the input/output interface, respectively, wherein the input/output interface is used for receiving data and outputting data, the memory is used for storing a computer program, and the processor is used for calling the computer program to enable the computer device to execute the method of any one of claims 1-10.
13. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program adapted to be loaded and executed by a processor to cause a computer device having the processor to perform the method of any of claims 1-10.
14. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the method of any of claims 1-10.
CN202111667710.6A 2021-11-30 2021-12-30 Image processing method, device, equipment, medium and product Pending CN116206028A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2021114498826 2021-11-30
CN202111449882 2021-11-30

Publications (1)

Publication Number Publication Date
CN116206028A true CN116206028A (en) 2023-06-02

Family

ID=86506558

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111667710.6A Pending CN116206028A (en) 2021-11-30 2021-12-30 Image processing method, device, equipment, medium and product

Country Status (1)

Country Link
CN (1) CN116206028A (en)

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
REG: Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40087680; Country of ref document: HK)