CN117670942A - Image processing method, device, equipment, storage medium and product - Google Patents

Image processing method, device, equipment, storage medium and product Download PDF

Info

Publication number
CN117670942A
CN117670942A CN202211021011.9A CN202211021011A CN117670942A CN 117670942 A CN117670942 A CN 117670942A CN 202211021011 A CN202211021011 A CN 202211021011A CN 117670942 A CN117670942 A CN 117670942A
Authority
CN
China
Prior art keywords
fluid
target
information
target time
particles
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211021011.9A
Other languages
Chinese (zh)
Inventor
张莫涵
任小华
胡秉昌
刘浩
李琛
马子扬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202211021011.9A priority Critical patent/CN117670942A/en
Publication of CN117670942A publication Critical patent/CN117670942A/en
Pending legal-status Critical Current

Links

Abstract

The embodiments of the present application disclose an image processing method, apparatus, device, storage medium, and product. The method comprises: acquiring a background image and the flow attribute, at the current time, of the fluid presented in the background image, where the background image may be a currently displayed image (such as an image in a video); determining field information of the fluid at a target time according to the flow attribute of the fluid at the current time; determining presentation information of the fluid at the target time according to the field information; and rendering the background image based on the presentation information of the fluid at the target time to obtain a rendered image. In this way, the field information indicates the motion state of the fluid at the target time, so determining the presentation information of the fluid from the field information makes the presentation form of the fluid at the target time more lifelike, thereby improving the realism of the rendered image obtained based on the presentation information of the fluid.

Description

Image processing method, device, equipment, storage medium and product
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an image processing method, apparatus, device, computer readable storage medium, and product.
Background
With the progress of scientific research, various image rendering techniques have developed rapidly; for example, images may be rendered through animation, through illustration, and so on. Rendering an image through animation includes simulating a fluid (such as a water flow or a solution) and rendering the image to be rendered using the simulated fluid to obtain a rendered image. In practice, because the form of a fluid changes continuously during motion, how to improve the realism of the rendered image has become a hot topic in current research.
Disclosure of Invention
The embodiments of the present application provide an image processing method, an image processing apparatus, an image processing device, a computer-readable storage medium, and a computer program product, which can improve the realism of a rendered image.
In one aspect, an embodiment of the present application provides an image processing method, including:
acquiring a background image and a flow attribute of fluid presented in the background image at the current moment;
according to the flow attribute of the fluid at the current moment, determining the field information of the fluid at the target moment, wherein the field information is used for indicating the motion state of the fluid at the target moment;
determining presentation information of the fluid at the target time according to the field information, wherein the presentation information is used for indicating the presentation form of the fluid at the target time;
and rendering the background image based on the presentation information of the fluid at the target time to obtain a rendered image.
In one aspect, an embodiment of the present application provides an image processing apparatus, including:
the acquisition unit is used for acquiring the background image and the flow attribute of the fluid presented in the background image at the current moment;
the processing unit is used for determining field information of the fluid at the target time according to the flow attribute of the fluid at the current time, wherein the field information is used for indicating the movement state of the fluid at the target time;
the method comprises the steps of determining presentation information of fluid at a target time according to field information, wherein the presentation information is used for indicating the presentation form of the fluid at the target time;
and rendering the background image based on the presentation information of the fluid at the target time to obtain a rendered image.
In one embodiment, the fluid is composed of N particles, and the flow attribute of the fluid at the current time includes attribute information of the N particles at the current time, and the attribute information of each particle at the current time includes a position and a velocity vector of the particle at the current time, where N is a positive integer.
In one embodiment, the field information includes a density field and a velocity field; the processing unit is used for determining field information of the fluid at the target moment according to the flow attribute of the fluid at the current moment, and is specifically used for:
mapping the attribute information of the N particles at the current time into M grids, where M is a positive integer;
based on the momentum theorem, determining state information of M grids at a target time;
according to the state information of the M grids at the target moment, determining attribute information of N particles at the target moment;
and determining the density field and the velocity field of the fluid at the target time according to the attribute information of the N particles at the target time.
In one embodiment, the background image presents at least one obstacle element; the processing unit is further configured to:
determining boundary collision conditions of each grid from the current moment to the target moment according to at least one obstacle element;
and updating the state information of the M grids at the target time according to the boundary collision condition of each grid from the current time to the target time.
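The boundary-collision handling above can be sketched as follows. This is a minimal illustration under assumed names (`update_grid_with_boundaries`, `obstacle_mask`), not the patented implementation: grid cells that collide with an obstacle element between the current time and the target time have their velocity cleared before the grid state is used further.

```python
# Hypothetical sketch of updating grid state for obstacle boundary
# collisions: grid cells overlapping an obstacle lose their momentum
# (a simple no-slip condition); free cells are left unchanged.

def update_grid_with_boundaries(grid_velocity, obstacle_mask):
    """grid_velocity: list of (vx, vy); obstacle_mask: list of bool."""
    updated = []
    for v, blocked in zip(grid_velocity, obstacle_mask):
        # A colliding cell keeps no velocity; free cells are unchanged.
        updated.append((0.0, 0.0) if blocked else v)
    return updated

velocities = [(1.0, -2.0), (0.5, 0.5), (3.0, 0.0)]
mask = [False, True, False]
print(update_grid_with_boundaries(velocities, mask))
# → [(1.0, -2.0), (0.0, 0.0), (3.0, 0.0)]
```

More elaborate schemes (e.g., zeroing only the velocity component normal to the obstacle surface) fit the same structure.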
In one embodiment, the processing unit is configured to map attribute information of N particles at a current time into M grids, specifically configured to:
performing interpolation-weighting processing on the attribute information of the target particle at the current time based on the distances between the target particle and the grid nodes of the grid, to obtain a target grid carrying the attribute information of the target particle at the current time;
Wherein the target particle is any one of the N particles.
In one embodiment, the processing unit is configured to determine attribute information of N particles at the target time according to state information of M grids at the target time, and specifically is configured to:
and performing inverse interpolation-weighting processing on the state information of the target grid at the target time based on the distances between the target particle and the grid nodes of the target grid, to obtain the attribute information of the target particle at the target time.
In one embodiment, the attribute information of the N particles at the target time includes positions of the N particles at the target time and velocity vectors of the N particles at the target time; the processing unit is used for determining a density field and a velocity field of the fluid at the target moment according to the attribute information of the N particles at the target moment, and is specifically used for:
determining a density field of the fluid at the target time based on the positions of the N particles at the target time; and
determining a velocity field of the fluid at the target time based on the velocity vectors of the N particles at the target time.
In one embodiment, the processing unit is configured to determine, based on the field information, presentation information of the fluid at the target time, in particular:
Determining a presentation area of the fluid at the target time according to the density field;
determining a rendering effect of the presentation area of the fluid at the target time based on the velocity field;
and taking the presentation area of the fluid at the target time and the rendering effect of that presentation area as the presentation information of the fluid at the target time.
In an embodiment, the processing unit is configured to determine a presentation area of the fluid at the target time based on the density field, in particular:
acquiring a density threshold;
and determining the area of the fluid with the density larger than the density threshold value at the target moment as the presentation area of the fluid at the target moment.
In one embodiment, the field information includes a density field and a velocity field; the processing unit is used for determining field information of the fluid at the target moment according to the flow attribute of the fluid at the current moment, and is specifically used for:
acquiring a model to be compiled, wherein the model to be compiled is used for simulating the motion state of the fluid based on the attribute information of the fluid at the current moment;
compiling the model to be compiled to obtain a compiling product, wherein the compiling product comprises a kernel file and a configuration file corresponding to the kernel file, and the kernel file is used for processing the flow attribute of the fluid;
parsing the configuration file to obtain the execution logic of the kernel file and the data structure required for executing the kernel file;
and executing the kernel file according to the execution logic and the data structure to obtain the density field and the velocity field of the fluid at the target time.
Accordingly, the present application provides a computer device comprising:
a memory in which a computer program is stored; and
a processor configured to load the computer program to implement the image processing method described above.
Accordingly, the present application provides a computer readable storage medium storing a computer program adapted to be loaded by a processor and to perform the above-described image processing method.
Accordingly, the present application provides a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions so that the computer device performs the above-described image processing method.
In the embodiments of the present application, a background image and the flow attribute, at the current time, of the fluid presented in the background image are acquired; field information of the fluid at a target time is determined according to the flow attribute of the fluid at the current time; presentation information of the fluid at the target time is determined according to the field information; and the background image is rendered based on the presentation information of the fluid at the target time to obtain a rendered image. In this way, the field information indicates the motion state of the fluid at the target time, so determining the presentation information of the fluid from the field information makes the presentation form of the fluid at the target time more lifelike, thereby improving the realism of the rendered image obtained based on the presentation information of the fluid.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of image processing according to an embodiment of the present application;
fig. 2 is a flowchart of an image processing method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a compiling process according to an embodiment of the present application;
FIG. 4 is a flowchart of another image processing method according to an embodiment of the present disclosure;
FIG. 5a is a schematic diagram of mapping according to an embodiment of the present application;
FIG. 5b is a schematic diagram of transferring grid state back to particles according to an embodiment of the present application;
FIG. 5c is a schematic diagram of image rendering according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
Referring to fig. 1, fig. 1 is a schematic diagram of image processing according to an embodiment of the present application. As shown in fig. 1, the image processing method provided by the embodiment of the present application may be executed by a computer device 101. The computer device 101 may be a terminal device or a server. The terminal device may include, but is not limited to: smart phones (such as Android phones and iOS phones), tablet computers, portable personal computers, mobile internet devices (Mobile Internet Devices, MID), intelligent voice interaction devices, smart home appliances, vehicle-mounted terminals, aircraft, and other devices with image processing functions, which is not limited in the embodiments of the present application. The server may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN (Content Delivery Network), big data, and artificial intelligence platforms, which is likewise not limited in the embodiments of the present application.
The general principle of the image processing method provided by the application is as follows:
(1) The computer device 101 acquires a background image and the flow attribute, at the current time, of the fluid presented in the background image. The background image may be a background image of a social session, an image selected by the user, an image currently displayed by the computer device, an image in a video, and the like, which is not limited in this application. The fluid presented in the background image may be selected by the user or determined based on a related operation of the user; for example, when the user selects a "crying" expression, the fluid presented in the background image is determined to be a water flow.
In one embodiment, the fluid presented in the background image consists of N particles, N being a positive integer; the flow properties of the fluid at the current time include the property information of the N particles at the current time, and the property information of each particle at the current time includes the position and velocity vector of the particle at the current time.
It should be noted that the flow attribute of the fluid at the current time may be preset, may be specified by the user, or may be calculated based on the initial state of the fluid and the time difference between the current time and the initial time.
(2) The computer device 101 determines, based on the flow properties of the fluid at the current moment, field information of the fluid at the target moment, the field information being used to indicate a state of motion of the fluid at the target moment, the field information comprising a density field of the fluid at the target moment and a velocity field of the fluid at the target moment.
The computer device 101 may determine attribute information of the N particles at the target time based on attribute information of the N particles at the current time and a time difference between the current time and the target time, and further determine field information of the fluid at the target time.
In one embodiment, the computer device 101 may directly calculate the attribute information of the N particles at the target time based on the attribute information of the N particles at the current time and the time difference between the current time and the target time; for example, the position and velocity vector of each particle at the target time is calculated based on the position and velocity vector of each particle at the current time and the time difference between the current time and the target time. After obtaining the attribute information of the N particles at the target time, the computer device 101 obtains the field information of the fluid at the target time based on the attribute information of the N particles at the target time.
In another embodiment, the computer device 101 may map the attribute information of the N particles at the current time into M grids, where N and M are positive integers. For example, the computer device may map multiple associated particles into the same grid, where associated particles are particles that may affect each other during motion; the computer device may also map one particle into multiple grids. The attribute information of each particle at the current time is mapped into the grid according to a corresponding weight; for example, a particle with a larger speed has a higher weight; as another example, a particle closer to the center of the background image has a higher weight; as yet another example, a particle in a denser region has a higher weight. After the mapping is completed, the computer device may determine the state information of the M grids at the target time based on the momentum theorem and the time difference between the current time and the target time, and then transfer the state information of the M grids at the target time back to the N particles to obtain the attribute information of the N particles at the target time. After obtaining the attribute information of the N particles at the target time, the computer device 101 obtains the field information of the fluid at the target time based on the attribute information of the N particles at the target time.
(3) The computer device 101 determines, from the field information, presentation information of the fluid at the target time, the presentation information being indicative of a presentation morphology of the fluid at the target time. In one embodiment, the presentation information includes a presentation area of the fluid at the target time and a rendering effect corresponding to the presentation area. In practice, the computer device 101 may determine the presentation area of the fluid at the target time based on the density field; for example, a region where the density of the particles is greater than the density threshold is determined as a presentation region of the fluid at the target time. The computer device 101 may further divide one presentation area into a plurality of sub-areas based on a division rule (e.g., division based on a density of particles, or division based on a velocity vector of particles), and rendering effects of each sub-area may be the same or different; for example, the higher the density the darker the corresponding rendering color of the sub-region.
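As a rough sketch of step (3), with all names assumed for illustration (the patent gives no code): threshold the density field to select the presentation area, then map higher densities to darker shades within that area.

```python
# Hypothetical sketch: determine the presentation area by density
# threshold, then derive a per-cell shade (denser -> darker).

def presentation_area(density_field, density_threshold):
    """density_field: {(x, y): density}; returns cells above the threshold."""
    return {pos for pos, rho in density_field.items() if rho > density_threshold}

def shade(rho, rho_max):
    """Higher density -> darker gray level (0 = black, 255 = white)."""
    return int(255 * (1.0 - rho / rho_max))

field = {(0, 0): 0.2, (0, 1): 0.9, (1, 1): 1.5}
area = presentation_area(field, 0.5)
print(sorted(area))                              # cells to be rendered
print([shade(field[p], 1.5) for p in sorted(area)])  # their gray levels
```

Sub-regions with distinct rendering effects, as described above, would correspond to partitioning `area` by density or velocity before shading.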
(4) The computer device renders the background image based on the presentation information of the fluid at the target time to obtain a rendered image. In one embodiment, the computer device 101 renders the fluid in the background image according to the presentation information of the fluid at the target time, resulting in a rendered image. In one embodiment, the computer device 101 may perform offset sampling on the background image according to the velocity field of the fluid so that the rendered image has a refraction effect. For example, if the velocity at a point (x, y) of the fluid velocity field is (0, 0), the background image is sampled at the coordinate point (x, y) during rendering; if the velocity at (x, y) is (vx, vy), the background image is sampled at the coordinate point (x + vx, y + vy).
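The offset-sampling rule just described can be sketched in a few lines of Python. The function name `refract_sample` and the clamping of out-of-range samples to the image border are assumptions of this sketch:

```python
# Sketch of velocity-based offset sampling: each pixel samples the
# background at a position displaced by the local fluid velocity
# (vx, vy), producing a refraction-like effect.

def refract_sample(background, velocity_field, x, y):
    """Sample background at (x + vx, y + vy), clamped to image bounds."""
    vx, vy = velocity_field.get((x, y), (0, 0))
    h, w = len(background), len(background[0])
    sx = min(max(int(round(x + vx)), 0), w - 1)
    sy = min(max(int(round(y + vy)), 0), h - 1)
    return background[sy][sx]

image = [[10, 20, 30],
         [40, 50, 60],
         [70, 80, 90]]
still = refract_sample(image, {}, 1, 1)              # velocity (0, 0)
moved = refract_sample(image, {(1, 1): (1, -1)}, 1, 1)
print(still, moved)  # → 50 30
```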
In the embodiments of the present application, a background image and the flow attribute, at the current time, of the fluid presented in the background image are acquired; field information of the fluid at a target time is determined according to the flow attribute of the fluid at the current time; presentation information of the fluid at the target time is determined according to the field information; and the background image is rendered based on the presentation information of the fluid at the target time to obtain a rendered image. In this way, the field information indicates the motion state of the fluid at the target time, so determining the presentation information of the fluid from the field information makes the presentation form of the fluid at the target time more lifelike, thereby improving the realism of the rendered image obtained based on the presentation information of the fluid.
Based on the above image processing scheme, the embodiment of the present application proposes a more detailed image processing method, and the image processing method proposed by the embodiment of the present application will be described in detail below with reference to the accompanying drawings.
Referring to fig. 2, fig. 2 is a flowchart of an image processing method provided in an embodiment of the present application, where the image processing method may be performed by a computer device, and the computer device may be a terminal device or a server. As shown in fig. 2, the image processing method may include the following steps S201 to S204:
S201, acquiring a background image and the flow attribute of the fluid presented in the background image at the current moment.
The background image may be an image currently displayed by the computer device; for example, a background image in a social session page. The background image may also be an image selected by the user; for example, the user selects an image to be rendered from an image database.
The fluid presented in the background image may be selected by the user; for example, the user selects a sand flow as the fluid presented in the background image. The fluid presented in the background image may also be determined based on a related operation of the user; for example, when the user selects a "crying" expression, the fluid presented in the background image is determined to be a water flow; for another example, when the user selects a "wind" rendering effect, the fluid presented in the background image is determined to be an air flow.
In one embodiment, the fluid presented in the background image consists of N particles, N being a positive integer; the flow attribute of the fluid at the current moment comprises attribute information of the N particles at the current moment, and the attribute information of each particle at the current moment comprises a position and a speed vector of the particle at the current moment, wherein the speed vector is used for indicating the speed and the movement direction of the corresponding particle at the current moment.
It should be noted that the flow attribute of the fluid at the current time may be preset, may be specified by the user, or may be calculated by the computer device based on the initial state of the fluid and the time difference between the current time and the initial time.
S202, according to the flow attribute of the fluid at the current moment, determining the field information of the fluid at the target moment.
The field information is used to indicate a motion state of the fluid at the target time, and the field information includes a density field of the fluid at the target time and a velocity field of the fluid at the target time. The computer device may determine attribute information of the N particles at the target time based on attribute information of the N particles at the current time and a time difference between the current time and the target time, and further determine field information of the fluid at the target time.
In one embodiment, the computer device may directly calculate the attribute information of the N particles at the target time based on the attribute information of the N particles at the current time and the time difference between the current time and the target time; for example, the position and velocity vector of each particle at the target time are calculated based on the position and velocity vector of each particle at the current time and the time difference between the current time and the target time. After obtaining the attribute information of the N particles at the target time, the computer device obtains the field information of the fluid at the target time based on the attribute information of the N particles at the target time.
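The direct calculation described above reduces to a constant-velocity step per particle. A minimal sketch under assumed names (`advance_particles`; 2-D particles represented as ((x, y), (vx, vy)) pairs):

```python
# Advance each particle's attribute information from the current time
# to the target time: new position = position + velocity * dt, where
# dt is the time difference between the two times.

def advance_particles(particles, dt):
    """particles: list of ((x, y), (vx, vy)); returns states at target time."""
    result = []
    for (x, y), (vx, vy) in particles:
        # Constant-velocity step over the time difference dt.
        result.append(((x + vx * dt, y + vy * dt), (vx, vy)))
    return result

state = [((0.0, 0.0), (2.0, 1.0)), ((1.0, 1.0), (0.0, -1.0))]
print(advance_particles(state, 0.5))
# → [((1.0, 0.5), (2.0, 1.0)), ((1.0, 0.5), (0.0, -1.0))]
```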
In another embodiment, the computer device may map the attribute information of the N particles at the current time into M grids, where N and M are positive integers. For example, the computer device may map multiple associated particles into the same grid, where associated particles are particles that may affect each other during motion; the computer device may also map one particle into multiple grids. The attribute information of each particle at the current time is mapped into the grid according to a corresponding weight; for example, a particle with a larger speed has a higher weight; as another example, a particle closer to the center of the background image has a higher weight; as yet another example, a particle in a denser region has a higher weight. After the mapping is completed, the computer device may determine the state information of the M grids at the target time based on the momentum theorem and the time difference between the current time and the target time, and then transfer the state information of the M grids at the target time back to the N particles to obtain the attribute information of the N particles at the target time. After obtaining the attribute information of the N particles at the target time, the computer device obtains the field information of the fluid at the target time based on the attribute information of the N particles at the target time; for example, determining a density field of the fluid at the target time according to the positions of the N particles at the target time, and determining a velocity field of the fluid at the target time according to the velocity vectors of the N particles at the target time.
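The map-to-grid and transfer-back steps resemble a particle-in-cell style scheme. The following 1-D sketch, with all names and the linear weighting chosen for illustration (the patent does not fix a specific interpolation), scatters particle velocity onto the two nearest grid nodes by distance-based weights and then gathers it back:

```python
# Hypothetical 1-D particle/grid transfer: scatter (P2G) with linear
# interpolation weights, then gather (G2P) with the same weights.

def particle_to_grid(particles, num_nodes):
    """particles: list of (position, velocity); positions must lie in
    [0, num_nodes - 1). Returns per-node velocities."""
    momentum = [0.0] * num_nodes
    mass = [0.0] * num_nodes
    for x, v in particles:
        i = int(x)              # left grid node index
        w_right = x - i         # closer node gets the larger weight
        for node, w in ((i, 1.0 - w_right), (i + 1, w_right)):
            momentum[node] += w * v
            mass[node] += w
    return [m / mm if mm > 0 else 0.0 for m, mm in zip(momentum, mass)]

def grid_to_particle(grid_v, x):
    """Gather velocity back to a particle at position x (inverse weighting)."""
    i = int(x)
    w = x - i
    return (1.0 - w) * grid_v[i] + w * grid_v[i + 1]

grid = particle_to_grid([(0.5, 2.0)], 4)
print(grid)                       # node velocities after scattering
print(grid_to_particle(grid, 0.5))
```

Between the scatter and the gather, the grid velocities would be advanced by the momentum theorem and corrected for boundary collisions, as described above.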
In still another embodiment, a model to be compiled is acquired; the model to be compiled is used for simulating the motion state of the fluid based on the attribute information of the fluid at the current time, and may be a script written in any programming language (such as Python or Java). The computer device compiles the model to be compiled to obtain a compilation product, which includes a Kernel file and a configuration file corresponding to the Kernel file; the configuration file indicates the execution logic of the Kernel file and the data structures required to execute it. The computer device parses the configuration file to obtain the execution logic of the Kernel file and the data structures required to execute the Kernel file, and then executes the Kernel file according to the execution logic and the data structures to obtain the density field and the velocity field of the fluid at the target time.
For example, the model to be compiled may be a script written in Taichi, an open-source, high-performance language featuring real-time and parallel execution that uses Python as its front-end language. The computer device may compile the model to be compiled using AOT (Ahead of Time) compilation to obtain a compilation product, which includes: (1) a series of Kernel files described in a device-side shading language (e.g., OpenGL/Vulkan shading languages or Metal Shading Language); (2) a configuration file for the Kernel files, i.e., a file (e.g., a json file) describing the execution logic of the Kernel files and the memory data structures required to execute them. AOT compilation translates a higher-level programming language into a lower-level language the computer can execute, in the way that C/C++ is compiled into assembly language and Java is compiled into bytecode; AOT compilation of Taichi translates Taichi Kernels written in Python into Kernel files described in a GPU shading language (e.g., OpenGL/Vulkan shading languages or Metal Shading Language) and a json file describing the execution logic of these Kernel files. After the compilation product is obtained, the computer device may use an AOT-Launcher to parse the configuration file of the Kernel files, obtaining the memory data structures required to execute the Kernel files and the execution logic of the Kernel files (such as an execution flowchart) described in a high-level programming language (such as C++); the AOT-Launcher is a high-level programming language (e.g., C++) execution library that can parse and execute the compilation product.
After obtaining the memory data structures required to execute the Kernel (Kernel) files and the execution logic of the Kernel files described in a high-level programming language (such as C++), the computer device initializes the compilation product; for example, the computer device may initialize the compilation product through OpenGL or Metal. The computer device then calls the AOT-Launcher to execute the Kernel files according to the memory data structures required by the Kernel files and the execution logic of the Kernel files, obtaining the density field and the velocity field of the fluid at the target time.
That is, the computer device compiles the Taichi kernels described in Python into Kernel (Kernel) files described in a device language (C/C++, OpenGL/Vulkan shading language, Metal Shading Language, or CUDA), and then runs the Kernel files in parallel on the CPU or GPU. Fig. 3 is a schematic diagram of a compilation flow provided in an embodiment of the present application. As shown in fig. 3, the computer device compiles a model to be compiled (for example, a script written in Taichi with a Python front end) through a compilation mechanism (AOT), where the AOT includes an operation platform (for example, CUDA), an image processing unit (for example, Metal), and an application program interface (for example, Vulkan). After the AOT acquires the model to be compiled, the corresponding Shaders and data structures (Data Struct) are exported; the Launcher reproduces the operation logic (namely, the image processing method) of the model to be compiled by splicing the Shaders, and processes the fluid through the operation logic to obtain rendering resources (such as attribute information of the fluid at the target time); after the rendering resources provided by the AOT are obtained, the renderer determines the presentation information of the fluid based on the rendering resources and renders the background image based on the presentation information to obtain a rendered image.
S203, determining the presentation information of the fluid at the target time according to the field information.
The presentation information is used to indicate the presentation morphology of the fluid at the target time. The presentation information may include a presentation area of the fluid at the target time and a rendering effect corresponding to the presentation area.
In one embodiment, in one aspect a computer device may determine a presentation area of a fluid at a target time based on a density field; for example, a region where the density of the particles is greater than the density threshold is determined as a presentation region of the fluid at the target time. On the other hand, the computer device can determine the rendering effect corresponding to the rendering area based on the velocity field; for example, the computer device may divide the presentation area into a plurality of sub-areas based on the velocity field, each of which may correspond to a different rendering effect (e.g., color, flow rate per unit time, etc.).
And S204, rendering the background image based on the presentation information of the fluid at the target time to obtain a rendered image.
In one embodiment, the computer device renders the fluid in the background image according to the presentation information of the fluid at the target time, resulting in a rendered image. In one embodiment, the computer device performs offset sampling on the background image according to the velocity field of the fluid so that the rendered image has a refraction effect. For example, assuming that the coordinates of a target point in the fluid velocity field are (x, y): if the velocity of the target point is (0, 0), the value at coordinate (x, y) is sampled from the background image during rendering; if the velocity of the target point is (vx, vy), the value at coordinate (x+vx, y+vy) is sampled from the background image during rendering.
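The offset-sampling rule above can be sketched in pure Python as follows; the function name, nearest-neighbour rounding, and border clamping are assumptions for illustration:

```python
def offset_sample(background, velocity):
    """Sample the background at (x+vx, y+vy) for each pixel to fake refraction.

    background: 2-D list of pixel values, indexed [y][x]
    velocity:   2-D list of (vx, vy) tuples of the same shape
    """
    h, w = len(background), len(background[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vx, vy = velocity[y][x]
            # clamp the shifted coordinate so sampling stays inside the image
            sx = min(max(int(round(x + vx)), 0), w - 1)
            sy = min(max(int(round(y + vy)), 0), h - 1)
            out[y][x] = background[sy][sx]
    return out
```

With a zero velocity field the output equals the input, matching the (0, 0) case in the text; a non-zero velocity shifts the sampled pixel, producing the distortion that reads as refraction.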
In the embodiment of the application, a background image and the flow attribute, at the current time, of the fluid presented in the background image are acquired; the field information of the fluid at the target time is determined according to the flow attribute of the fluid at the current time; the presentation information of the fluid at the target time is determined according to the field information; and the background image is rendered based on the presentation information of the fluid at the target time to obtain a rendered image. Because the field information can indicate the motion state of the fluid at the target time, determining the presentation information from the field information makes the presentation form of the fluid at the target time more vivid, which in turn improves the realism of the rendered image rendered based on the presentation information of the fluid.
Referring to fig. 4, fig. 4 is a flowchart of another image processing method provided in an embodiment of the present application, where the image processing method may be performed by a computer device, and the computer device may be a terminal device or a server. As shown in fig. 4, the image processing method may include the following steps S401 to S407:
s401, acquiring a background image and the flow attribute of the fluid presented in the background image at the current moment.
The specific embodiment of step S401 may refer to the embodiment of step S201 in fig. 2, and will not be described herein.
S402, mapping the attribute information of N particles at the current moment into M grids.
In one embodiment, the attribute information of the N particles (e.g., the position, force, and velocity vector of each particle) is mapped into M grids, where N and M are both positive integers. Specifically, the computer device may perform difference weighting on the attribute information of the particles through a B-spline kernel function to obtain grids carrying the attribute information of the particles; for example, a quadratic B-spline kernel may be expressed as: N(x) = 3/4 − |x|² for 0 ≤ |x| < 1/2; N(x) = (1/2)(3/2 − |x|)² for 1/2 ≤ |x| < 3/2; and N(x) = 0 otherwise.
The above formula shows that different distance intervals correspond to different mapping functions, and that grid nodes closer to a particle carry more of that particle's attribute information; that is, the amount of attribute information a grid node carries decreases as its distance from the particle increases. In other words, the computer device performs difference weighting on the attribute information of a target particle at the current time based on the distance between the target particle and each grid node in the grid, obtaining a target grid carrying the attribute information of the target particle at the current time, where the target particle is any one of the N particles. In one embodiment, the grid may consist of 3×3 grid nodes.
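A 1-D sketch of such distance-based weighting over a 3-node stencil follows. The quadratic B-spline form used here is a common choice in particle–grid methods and is an assumption, since the patent's exact kernel formula is not reproduced in the text:

```python
def quadratic_bspline(x):
    # weight for a grid node at distance x from the particle;
    # different distance intervals use different mapping functions
    x = abs(x)
    if x < 0.5:
        return 0.75 - x * x
    if x < 1.5:
        return 0.5 * (1.5 - x) ** 2
    return 0.0

def stencil_weights(px):
    # weights of the 3 nearest 1-D grid nodes around particle position px
    base = int(px - 0.5)  # leftmost node of the 3-node stencil
    return [quadratic_bspline(px - (base + i)) for i in range(3)]
```

The weights sum to 1 (the particle's attribute information is fully distributed), and the node nearest the particle receives the largest weight, matching the inverse relation between carried information and distance described above. In 2-D, the weight for a node is the product of the 1-D weights along each axis, giving the 3×3 stencil mentioned in the text.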
Fig. 5a is a schematic diagram of mapping according to an embodiment of the present application. As shown in fig. 5a, the computer device may perform difference weighting on the attribute information of a particle through the B-spline kernel (based on the distances between the particle and each grid node in the grid), mapping the attribute information of the particle to the grid nodes and thereby obtaining a grid carrying the attribute information of the particle. It should be understood that the mapping manner shown in fig. 5a is only an example; in practical applications, the attribute information of a plurality of particles may be mapped to one grid in the manner described above, or the attribute information of one particle may be mapped to a plurality of grids.
According to the above embodiment, the computer device may map the attribute information of the N particles at the current time to obtain M grids carrying the attribute information of the particles.
S403, determining state information of M grids at the target time based on the momentum theorem.
The state information of each grid at the target time point can reflect the attribute information of the particles corresponding to the grid at the target time point.
In one embodiment, the computer device calculates the state information (such as velocity) of each grid at the target time by using the momentum theorem (F·t = m·Δv), the attribute information of the particles carried in each grid, and the time difference between the target time and the current time. In one embodiment, the attribute information carried in each grid includes the force F on the corresponding particles at the current time, the time difference t between the target time and the current time, and the mass m of the corresponding particles; based on these, the computer device can calculate the velocity of the particles corresponding to each grid at the target time. Further, after obtaining the velocity of each particle at the target time, the computer device may perform offset sampling based on the velocity of each particle to achieve a refraction effect. For example, assuming that the coordinates of a target point in the fluid velocity field are (x, y): if the velocity of the target point is (0, 0), the value at coordinate (x, y) is sampled from the background image during rendering; if the velocity is (vx, vy), the value at coordinate (x+vx, y+vy) is sampled.
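The per-grid velocity update implied by the impulse–momentum relation can be sketched as follows (the function name and 2-D component layout are assumptions):

```python
def grid_velocity_at_target(v0, force, mass, dt):
    """Impulse-momentum theorem: v_target = v0 + (F / m) * dt, per component.

    v0:    (vx, vy) velocity carried by the grid at the current time
    force: (fx, fy) force on the grid's corresponding particles
    mass:  mass of the corresponding particles
    dt:    time difference between the target time and the current time
    """
    return tuple(v + f / mass * dt for v, f in zip(v0, force))
```

For example, a grid starting at rest with a horizontal force of 2 units on a unit mass over half a time unit reaches horizontal velocity 1.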
In another embodiment, at least one obstacle element exists in the background image, where an obstacle element refers to an element in the background image that can influence the movement of the fluid. Obstacle elements may be determined by display priority; specifically, an element whose display priority in the background image is higher than that of the fluid may be regarded as an obstacle element. For example, if the background image contains a dialog box, the dialog box may be an obstacle element; for another example, if the background image contains a target character, the target character may be an obstacle element.
In this case, the computer device determines a boundary collision condition of each mesh during the current time to the target time based on at least one obstacle element, and updates state information of the M meshes at the target time according to the boundary collision condition of each mesh during the current time to the target time.
For example, the computer device may mark whether each grid will collide with an obstacle element during the period from the current time to the target time. If a grid does not collide with any obstacle element during this period, the computer device calculates the state information of the grid at the target time according to the previous embodiment. If a grid collides with an obstacle element during this period and the collision time is t, the computer device updates the state information of the grid based on principles of mechanics and kinematics; alternatively, the computer device may set the grid velocity of any grid that would collide with an obstacle element during the period from the current time to the target time to 0.
After obtaining updated state information of the grid, the computer device calculates state information of the grid at the target time based on the updated state information and a time difference between the target time and the collision time. Similarly, the computer device may determine the state information of the grid at the target time after multiple collisions according to the above method, which is not described herein.
Optionally, when the fluid encounters an obstacle element, the fluid may continue to move according to its original motion state, and the fluid in the area where it overlaps the obstacle element is hidden during rendering (i.e., the obstacle element is displayed preferentially).
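The alternative strategy above — zeroing the velocity of grids that collide with an obstacle — can be sketched as follows, with obstacle elements approximated as axis-aligned boxes (the box representation and names are assumptions for illustration):

```python
def apply_obstacle_boundaries(grid_velocities, obstacles):
    """Zero the velocity of any grid node that falls inside an obstacle box.

    grid_velocities: dict mapping (i, j) node index -> (vx, vy)
    obstacles:       list of (x0, y0, x1, y1) axis-aligned boxes, inclusive
    """
    for (i, j) in grid_velocities:
        if any(x0 <= i <= x1 and y0 <= j <= y1 for x0, y0, x1, y1 in obstacles):
            # sticky boundary: the fluid stops at the obstacle
            grid_velocities[(i, j)] = (0.0, 0.0)
    return grid_velocities
```

A dialog box or character in the background image would contribute one such box (or a set of boxes covering its silhouette); nodes outside all boxes keep the velocity computed from the momentum update.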
S404, determining attribute information of N particles at the target moment according to state information of M grids at the target moment.
In one embodiment, the computer device performs the inverse of the difference weighting on the state information of the target grid at the target time, based on the distance between the target particle and each grid node in the target grid, to obtain the attribute information of the target particle at the target time; for example, using the same distance-based weights as in the mapping step, the velocity of the target particle is the weighted sum of the velocities of its grid nodes. According to this method, the computer device can transfer the attribute information of the particles carried in the M grids back to the N particles.
Fig. 5b is a schematic diagram of the back-transfer provided in an embodiment of the present application. As shown in fig. 5b, the computer device may transfer the attribute information of the particles carried in the grid back to the particles based on the inverse of the B-spline kernel weighting in step S402. After obtaining the attribute information of each particle at the target time, the computer device may calculate the attribute information of each particle at the next time (e.g., the position of each particle at the next time) based on the attribute information of each particle at the target time.
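The grid-to-particle back-transfer amounts to gathering node values with per-node weights; a minimal sketch (names and 2-D layout assumed):

```python
def gather_to_particle(node_values, weights):
    """v_p = sum_i w_i * v_i over the particle's stencil nodes.

    node_values: list of (vx, vy) values at the grid nodes in the stencil
    weights:     list of the corresponding per-node weights (summing to 1)
    """
    vx = sum(w * v[0] for w, v in zip(weights, node_values))
    vy = sum(w * v[1] for w, v in zip(weights, node_values))
    return (vx, vy)
```

With weights summing to 1, the particle's recovered velocity is a convex combination of its surrounding node velocities, so a particle halfway between two nodes receives their average.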
S405, determining a density field and a velocity field of the fluid at the target moment according to the attribute information of the N particles at the target moment.
In one embodiment, the attribute information of the N particles at the target time includes positions of the N particles at the target time and velocity vectors of the N particles at the target time. The computer device may determine a density field of the fluid at the target time based on the locations of the N particles at the target time; and determining a velocity field of the fluid at the target time by the velocity vectors of the N particles at the target time.
S406, determining the presentation information of the fluid at the target time according to the density field and the speed field of the fluid at the target time.
In one embodiment, on one hand, the computer device determines the presentation area of the fluid at the target time according to the density field: specifically, the computer device obtains a density threshold and determines the area where the density of the fluid at the target time is greater than the density threshold as the presentation area of the fluid at the target time. On the other hand, the computer device may determine the rendering effect of the presentation area of the fluid at the target time based on the velocity field: specifically, the computer device may perform offset sampling on the image to be rendered based on the velocity field to obtain an offset sampling result and thereby determine the rendering effect of the presentation area of the fluid at the target time; the computer device may also divide the presentation area of the fluid at the target time into a plurality of speed-interval sub-areas based on the velocity field, with each speed-interval sub-area corresponding to a rendering mode.
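The density-threshold step can be sketched as follows (the grid layout and function name are assumptions):

```python
def presentation_area(density, threshold):
    """Return the grid cells where fluid density exceeds the threshold.

    density:   2-D list of per-cell density values from the density field
    threshold: density threshold obtained by the computer device
    """
    return [
        (row, col)
        for row, line in enumerate(density)
        for col, d in enumerate(line)
        if d > threshold
    ]
```

Cells below the threshold are treated as containing no visible fluid and are left to the background; the velocity field then decides how each selected cell is rendered.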
After determining the presentation area of the fluid at the target time and the rendering effect corresponding to that presentation area, the computer device may use the presentation area and its rendering effect as the presentation information of the fluid at the target time.
And S407, rendering the background image based on the presentation information of the fluid at the target time to obtain a rendered image.
In one embodiment, to achieve a refraction effect, the computer device determines the velocity value of a target point based on the velocity field during rendering and performs offset sampling in the background image. For example, assuming that the coordinates of the target point in the fluid velocity field are (x, y): if the velocity of the target point is (0, 0), the value at coordinate (x, y) is sampled from the background image during rendering; if the velocity is (vx, vy), the value at coordinate (x+vx, y+vy) is sampled.
Fig. 5c is a schematic diagram of image rendering according to an embodiment of the present application. As shown in fig. 5c, the computer device may render the background image based on the fluid velocity field and the fluid density field, and may offset-sample the background image based on the fluid velocity field during rendering to achieve the refraction effect. As can be seen from the rendered image, the fluid in the rendered image can avoid the obstacle elements in the background image, which increases the flexibility of rendering and expands the application scenarios of fluid rendering.
In the embodiment of the application, a background image and the flow attribute, at the current time, of the fluid presented in the background image are acquired; the field information of the fluid at the target time is determined according to the flow attribute of the fluid at the current time; the presentation information of the fluid at the target time is determined according to the field information; and the background image is rendered based on the presentation information of the fluid at the target time to obtain a rendered image. Because the field information can indicate the motion state of the fluid at the target time, determining the presentation information from the field information makes the presentation form of the fluid at the target time more vivid, which in turn improves the realism of the rendered image rendered based on the presentation information of the fluid. In addition, determining the boundary collision conditions of each grid from the obstacle elements in the background image, and updating the state information of the grids based on those conditions, makes the image rendering more flexible. Mapping the attribute information of each particle into the grids and reflecting the motion of the fluid through the field information overcomes the problems of grid distortion and negative volume in fluid flow, further improving the realism of the rendered image.
The foregoing details of the method of embodiments of the present application are set forth in order to provide a better understanding of the foregoing aspects of embodiments of the present application, and accordingly, the following provides a device of embodiments of the present application.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application, and the image processing apparatus shown in fig. 6 may be used to perform some or all of the functions of the method embodiments described in fig. 2 and fig. 4. Referring to fig. 6, the image processing apparatus includes:
an obtaining unit 601, configured to obtain a background image and a flow attribute of a fluid presented in the background image at a current time;
a processing unit 602, configured to determine, according to a flow attribute of the fluid at a current time, field information of the fluid at a target time, where the field information is used to indicate a motion state of the fluid at the target time;
the method comprises the steps of determining presentation information of fluid at a target time according to field information, wherein the presentation information is used for indicating the presentation form of the fluid at the target time;
and rendering the background image based on the presentation information of the fluid at the target time to obtain a rendered image.
In one embodiment, the fluid is composed of N particles, and the flow attribute of the fluid at the current time includes attribute information of the N particles at the current time, and the attribute information of each particle at the current time includes a position and a velocity vector of the particle at the current time, where N is a positive integer.
In one embodiment, the field information includes a density field and a velocity field; the processing unit 602 is configured to determine, according to a flow attribute of the fluid at a current time, field information of the fluid at a target time, and specifically configured to:
mapping the attribute information of N particles at the current moment into M grids, wherein M is a positive integer;
based on the momentum theorem, determining state information of M grids at a target time;
according to the state information of the M grids at the target moment, determining attribute information of N particles at the target moment;
and determining the density field and the speed field of the fluid at the target moment according to the attribute information of the N particles at the target moment.
In one embodiment, the background image presents at least one obstacle element; the processing unit 602 is further configured to:
determining boundary collision conditions of each grid from the current moment to the target moment according to at least one obstacle element;
and updating the state information of the M grids at the target time according to the boundary collision condition of each grid from the current time to the target time.
In one embodiment, the processing unit 602 is configured to map attribute information of N particles at the current time into M grids, specifically configured to:
Performing difference weighting processing on attribute information of the target particles at the current moment based on the distance between the target particles and each grid node in the grid to obtain a target grid carrying the attribute information of the target particles at the current moment;
wherein the target particle is any one of the N particles.
In one embodiment, the processing unit 602 is configured to determine, according to state information of the M grids at the target time, attribute information of the N particles at the target time, specifically configured to:
and carrying out difference weighted inversion processing on the state information of the target grid at the target moment based on the distance between the target particle and each grid node in the target grid, and obtaining the attribute information of the target particle at the target moment.
In one embodiment, the attribute information of the N particles at the target time includes positions of the N particles at the target time and velocity vectors of the N particles at the target time; the processing unit 602 is configured to determine, from attribute information of N particles at a target time, a density field and a velocity field of the fluid at the target time, and specifically configured to:
determining a density field of the fluid at the target time based on the positions of the N particles at the target time; the method comprises the steps of,
The velocity field of the fluid at the target time instant is determined by the velocity vectors of the N particles at the target time instant.
In one embodiment, the processing unit 602 is configured to determine, based on the field information, presentation information of the fluid at the target time, specifically configured to:
determining a presentation area of the fluid at the target time according to the density field;
determining a rendering effect of the presentation area of the fluid at the target time based on the velocity field;
and taking the rendering area of the fluid at the target time and the rendering effect of the rendering area of the fluid at the target time as the rendering information of the fluid at the target time.
In an embodiment, the processing unit 602 is configured to determine, based on the density field, a presentation area of the fluid at the target time, in particular:
acquiring a density threshold;
and determining the area of the fluid with the density larger than the density threshold value at the target moment as the presentation area of the fluid at the target moment.
In one embodiment, the field information includes a density field and a velocity field; the processing unit 602 is configured to determine, according to a flow attribute of the fluid at a current time, field information of the fluid at a target time, and specifically configured to:
acquiring a model to be compiled, wherein the model to be compiled is used for simulating the motion state of the fluid based on the attribute information of the fluid at the current moment;
Compiling the model to be compiled to obtain a compiling product, wherein the compiling product comprises a kernel file and a configuration file corresponding to the kernel file, and the kernel file is used for processing the flow attribute of the fluid;
analyzing the configuration file to obtain execution logic of the kernel file and a data structure required by executing the kernel file;
and executing the kernel file according to the execution logic and the data structure to obtain a density field and a speed field of the fluid at the target time.
According to one embodiment of the present application, part of the steps involved in the image processing methods shown in fig. 2 and 4 may be performed by respective units in the image processing apparatus shown in fig. 6. For example, step S201 shown in fig. 2 may be performed by the acquisition unit 601 shown in fig. 6, and steps S202 to S204 may be performed by the processing unit 602 shown in fig. 6; step S401 shown in fig. 4 may be performed by the acquisition unit 601 shown in fig. 6, and steps S402 to S407 may be performed by the processing unit 602 shown in fig. 6. The respective units in the image processing apparatus shown in fig. 6 may be individually or collectively combined into one or several additional units, or some unit(s) thereof may be further split into a plurality of units smaller in function, which can achieve the same operation without affecting the achievement of the technical effects of the embodiments of the present application. The above units are divided based on logic functions, and in practical applications, the functions of one unit may be implemented by a plurality of units, or the functions of a plurality of units may be implemented by one unit. In other embodiments of the present application, the image processing apparatus may also include other units, and in practical applications, these functions may also be realized with assistance of other units, and may be realized by cooperation of a plurality of units.
According to another embodiment of the present application, the image processing apparatus shown in fig. 6 may be constructed, and the image processing method of the present application implemented, by running a computer program (including program code) capable of executing the steps of the methods shown in fig. 2 and fig. 4 on a general-purpose computing device, such as a computer device including processing elements and storage elements such as a central processing unit (CPU), random access memory (RAM), and read-only memory (ROM). The computer program may be recorded on, for example, a computer-readable recording medium, and loaded into and run on the above computing device through the computer-readable recording medium.
Based on the same inventive concept, the principle and beneficial effects of the image processing device for solving the problems provided in the embodiments of the present application are similar to those of the image processing method for solving the problems in the embodiments of the method of the present application, and may refer to the principle and beneficial effects of implementation of the method, which are not described herein for brevity.
Referring to fig. 7, fig. 7 is a schematic structural diagram of a computer device provided in an embodiment of the present application, where the computer device may be a terminal device or a server. As shown in fig. 7, the computer device includes at least a processor 701, a communication interface 702, and a memory 703. Wherein the processor 701, the communication interface 702, and the memory 703 may be connected by a bus or other means. Among them, the processor 701 (or central processing unit (Central Processing Unit, CPU)) is a computing core and a control core of the computer device, which can parse various instructions in the computer device and process various data of the computer device, for example: the CPU can be used for analyzing a startup and shutdown instruction sent by the object to the computer equipment and controlling the computer equipment to perform startup and shutdown operation; and the following steps: the CPU may transmit various types of interaction data between internal structures of the computer device, and so on. Communication interface 702 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI, mobile communication interface, etc.), and may be controlled by processor 701 to receive and transmit data; the communication interface 702 may also be used for transmission and interaction of data within a computer device. Memory 703 (Memory) is a Memory device in a computer device for storing programs and data. It will be appreciated that the memory 703 herein may comprise either a built-in memory of the computer device or an extended memory supported by the computer device. The memory 703 provides storage space that stores the operating system of the computer device, which may include, but is not limited to: android systems, iOS systems, windows Phone systems, etc., which are not limiting in this application.
The embodiments of the present application also provide a computer-readable storage medium (Memory), which is a Memory device in a computer device, for storing programs and data. It is understood that the computer readable storage medium herein may include both built-in storage media in a computer device and extended storage media supported by the computer device. The computer readable storage medium provides storage space that stores a processing system of a computer device. In this memory space, a computer program suitable for being loaded and executed by the processor 701 is stored. Note that the computer readable storage medium can be either a high-speed RAM memory or a non-volatile memory (non-volatile memory), such as at least one magnetic disk memory; alternatively, it may be at least one computer-readable storage medium located remotely from the aforementioned processor.
In one embodiment, the processor 701 performs the following operations by running a computer program in the memory 703:
acquiring a background image and a flow attribute of fluid presented in the background image at the current moment through the communication interface 702;
According to the flow attribute of the fluid at the current moment, determining the field information of the fluid at the target moment, wherein the field information is used for indicating the motion state of the fluid at the target moment;
determining presentation information of the fluid at the target time according to the field information, wherein the presentation information is used for indicating the presentation form of the fluid at the target time;
and rendering the background image based on the presentation information of the fluid at the target time to obtain a rendered image.
As an alternative embodiment, the fluid is composed of N particles, the flow property of the fluid at the current moment includes property information of the N particles at the current moment, the property information of each particle at the current moment includes a position and a velocity vector of the particle at the current moment, and N is a positive integer.
As an alternative embodiment, the field information includes a density field and a velocity field; the specific embodiment of determining the field information of the fluid at the target time according to the flow attribute of the fluid at the current time by the processor 701 is:
mapping the attribute information of N particles at the current moment into M grids, wherein M is a positive integer;
based on the momentum theorem, determining state information of the M grids at the target time;
according to the state information of the M grids at the target moment, determining attribute information of N particles at the target moment;
And determining the density field and the speed field of the fluid at the target moment according to the attribute information of the N particles at the target moment.
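The second step above, updating the grid with the momentum theorem, might be sketched as follows. The gravity-only force model and the function name are assumptions for illustration.

```python
def update_grid_momentum(grid_velocities, grid_masses, gravity, dt):
    """Momentum theorem on each grid node: the impulse F*dt equals the
    change of momentum m*dv, so a node carrying mass gains velocity g*dt."""
    updated = []
    for v, m in zip(grid_velocities, grid_masses):
        updated.append(v + gravity * dt if m > 0.0 else v)
    return updated
```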
As an alternative embodiment, the background image presents at least one obstacle element; the processor 701 further performs the following operations by running a computer program in the memory 703:
determining boundary collision conditions of each grid from the current moment to the target moment according to at least one obstacle element;
and updating the state information of the M grids at the target time according to the boundary collision condition of each grid from the current time to the target time.
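A minimal way to realize this boundary handling is a no-slip condition in which grid cells overlapped by an obstacle element lose their velocity; representing the obstacles as a set of cell indices is an assumption for this sketch.

```python
def apply_boundary_collisions(grid_velocities, obstacle_cells):
    """Zero the velocity of every grid cell that collides with an obstacle
    between the current time and the target time (no-slip boundary)."""
    return [0.0 if i in obstacle_cells else v
            for i, v in enumerate(grid_velocities)]
```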
As an alternative embodiment, the specific embodiment of mapping the attribute information of the N particles at the current moment into the M grids by the processor 701 is:
performing interpolation weighting processing on attribute information of a target particle at the current moment based on the distance between the target particle and each grid node in a grid, to obtain a target grid carrying the attribute information of the target particle at the current moment;
wherein the target particle is any one of the N particles.
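In one dimension, this distance-weighted scatter of a particle attribute onto its bracketing grid nodes might look like the following (linear weights; all names are illustrative):

```python
def scatter_particle_to_grid(x, value, grid_values, grid_weights, dx=1.0):
    """Distribute one particle attribute to the two nearest grid nodes,
    weighted by the particle's distance to each node (1-D linear case)."""
    i = int(x // dx)              # index of the left node
    frac = (x - i * dx) / dx      # 0 at the left node, 1 at the right node
    for node, w in ((i, 1.0 - frac), (i + 1, frac)):
        grid_values[node] += w * value
        grid_weights[node] += w
```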
As an alternative embodiment, the specific embodiment of determining the attribute information of the N particles at the target time according to the state information of the M grids at the target time by the processor 701 is:
And performing inverse interpolation weighting processing on the state information of the target grid at the target moment based on the distance between the target particle and each grid node in the target grid, obtaining the attribute information of the target particle at the target moment.
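The mapping back from grid to particle uses the same distance-based weights as the scatter step; a 1-D sketch with illustrative names:

```python
def gather_grid_to_particle(x, grid_values, dx=1.0):
    """Interpolate grid state back to a particle at position x with the
    same linear weights used in the particle-to-grid scatter."""
    i = int(x // dx)
    frac = (x - i * dx) / dx
    return (1.0 - frac) * grid_values[i] + frac * grid_values[i + 1]
```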
As an alternative embodiment, the attribute information of the N particles at the target time includes positions of the N particles at the target time and velocity vectors of the N particles at the target time; the specific embodiment in which the processor 701 determines the density field and the velocity field of the fluid at the target time according to the attribute information of the N particles at the target time is:
determining a density field of the fluid at the target time based on the positions of the N particles at the target time; the method comprises the steps of,
the velocity field of the fluid at the target time instant is determined by the velocity vectors of the N particles at the target time instant.
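One simple realization, assuming unit-mass particles on a 1-D grid, counts particles per cell for the density field and averages their velocities for the velocity field. The binning scheme is an assumption, not the disclosed construction.

```python
def build_fields(positions, velocities, num_cells, dx=1.0):
    """Density field: particle mass accumulated per cell (unit mass assumed).
    Velocity field: mass-weighted average velocity per cell."""
    density = [0.0] * num_cells
    momentum = [0.0] * num_cells
    for x, v in zip(positions, velocities):
        cell = int(x // dx)
        density[cell] += 1.0
        momentum[cell] += v
    velocity = [p / d if d > 0.0 else 0.0 for p, d in zip(momentum, density)]
    return density, velocity
```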
As an alternative embodiment, the specific embodiment in which the processor 701 determines the presentation information of the fluid at the target time according to the field information is:
determining a presentation area of the fluid at the target time according to the density field;
determining a rendering effect of the presentation area of the fluid at the target time based on the velocity field;
and taking the presentation area of the fluid at the target time and the rendering effect of the presentation area of the fluid at the target time as the presentation information of the fluid at the target time.
As an alternative embodiment, the processor 701 determines, based on the density field, a specific embodiment of the presentation area of the fluid at the target time instant as:
acquiring a density threshold;
and determining the area in which the density of the fluid at the target moment is greater than the density threshold as the presentation area of the fluid at the target moment.
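Combining the density-thresholded region with a velocity-derived rendering effect might look like the sketch below. The intensity mapping (speed normalized by a maximum and clamped to [0, 1]) is an assumption, not the disclosed effect.

```python
def presentation_info(density, velocity, threshold, max_speed):
    """Presentation region: cells whose density exceeds the threshold.
    Rendering effect: an intensity in [0, 1] derived from the velocity field."""
    region = [i for i, d in enumerate(density) if d > threshold]
    effect = {i: min(abs(velocity[i]) / max_speed, 1.0) for i in region}
    return region, effect
```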
As an alternative embodiment, the field information includes a density field and a velocity field; the specific embodiment of determining the field information of the fluid at the target time according to the flow attribute of the fluid at the current time by the processor 701 is:
acquiring a model to be compiled, wherein the model to be compiled is used for simulating the motion state of the fluid based on the attribute information of the fluid at the current moment;
compiling the model to be compiled to obtain a compiling product, wherein the compiling product comprises a kernel file and a configuration file corresponding to the kernel file, and the kernel file is used for processing the flow attribute of the fluid;
analyzing the configuration file to obtain execution logic of the kernel file and a data structure required by executing the kernel file;
and executing the kernel file according to the execution logic and the data structure to obtain a density field and a speed field of the fluid at the target time.
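A toy version of this configuration-driven kernel execution is sketched below. The JSON schema (`execution_order`, per-kernel `reads`/`writes`) is invented for illustration; the actual format of the compilation product is not disclosed.

```python
import json

def run_compiled_product(kernels, config_text, state):
    """Parse the configuration file for the execution logic (kernel order)
    and the data structures each kernel reads and writes, then run the
    kernel functions in that order against a shared state."""
    config = json.loads(config_text)
    for name in config["execution_order"]:
        spec = config["data"][name]
        inputs = {key: state[key] for key in spec["reads"]}
        state[spec["writes"]] = kernels[name](**inputs)
    return state
```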
Based on the same inventive concept, the principle and beneficial effects of the computer device provided in the embodiments of the present application in solving problems are similar to those of the image processing method in the embodiments of the present application; reference may be made to the principle and beneficial effects of the implementation of the method, which are not described herein again for brevity.
The present application also provides a computer-readable storage medium having a computer program stored therein, the computer program being adapted to be loaded by a processor to perform the image processing method of the above method embodiments.
The present application also provides a computer program product comprising a computer program adapted to be loaded by a processor to perform the image processing method of the above method embodiments.
Embodiments of the present application also provide a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the image processing method described above.
The steps in the method of the embodiment of the application can be sequentially adjusted, combined and deleted according to actual needs.
The modules in the device of the embodiment of the application can be combined, divided and deleted according to actual needs.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be implemented by a program to instruct related hardware, the program may be stored in a computer readable storage medium, and the readable storage medium may include: flash disk, read-Only Memory (ROM), random-access Memory (Random Access Memory, RAM), magnetic or optical disk, and the like.
The foregoing disclosure is only a preferred embodiment of the present application and is not intended to limit the scope of the claims. One of ordinary skill in the art will understand that all or part of the processes implementing the above embodiments, as well as equivalent changes made according to the claims of the present application, still fall within the scope of the claims.

Claims (14)

1. An image processing method, the method comprising:
acquiring a background image and a flow attribute of fluid presented in the background image at the current moment;
according to the flow attribute of the fluid at the current moment, determining the field information of the fluid at the target moment, wherein the field information is used for indicating the motion state of the fluid at the target moment;
determining presentation information of the fluid at the target time according to the field information, wherein the presentation information is used for indicating the presentation form of the fluid at the target time;
and rendering the background image based on the presentation information of the fluid at the target time to obtain a rendered image.
2. The method of claim 1, wherein the fluid is composed of N particles, the flow attribute of the fluid at the current time includes attribute information of the N particles at the current time, the attribute information of each particle at the current time includes the position and velocity vector of the particle at the current time, and N is a positive integer.
3. The method of claim 2, wherein the field information comprises a density field and a velocity field; the determining the field information of the fluid at the target moment according to the flow attribute of the fluid at the current moment comprises the following steps:
mapping the attribute information of the N particles at the current moment into M grids, wherein M is a positive integer;
based on a momentum theorem, determining state information of the M grids at a target time;
determining attribute information of the N particles at the target moment according to the state information of the M grids at the target moment;
and determining the density field and the speed field of the fluid at the target moment according to the attribute information of the N particles at the target moment.
4. The method of claim 3, wherein the background image presents at least one obstacle element, the method further comprising:
determining boundary collision conditions of each grid from the current moment to the target moment according to the at least one obstacle element;
and updating the state information of the M grids at the target moment according to the boundary collision condition of each grid from the current moment to the target moment.
5. The method of claim 3, wherein mapping the attribute information of the N particles at the current time into M grids comprises:
performing interpolation weighting processing on attribute information of a target particle at the current moment based on the distance between the target particle and each grid node in a grid, to obtain a target grid carrying the attribute information of the target particle at the current moment;
wherein the target particle is any one of the N particles.
6. The method of claim 5, wherein determining the attribute information of the N particles at the target time based on the state information of the M grids at the target time comprises:
and performing inverse interpolation weighting processing on the state information of the target grid at the target moment based on the distance between the target particle and each grid node in the target grid, obtaining the attribute information of the target particle at the target moment.
7. The method of claim 3, wherein the attribute information of the N particles at the target time instant includes positions of the N particles at the target time instant and velocity vectors of the N particles at the target time instant; determining a density field and a velocity field of the fluid at the target time according to the attribute information of the N particles at the target time, wherein the method comprises the following steps:
Determining a density field of the fluid at the target time based on the positions of the N particles at the target time; the method comprises the steps of,
and determining the speed field of the fluid at the target time by the speed vectors of the N particles at the target time.
8. A method according to claim 3, wherein said determining presentation information of said fluid at said target time based on said field information comprises:
determining a presentation area of the fluid at the target time according to the density field;
determining a rendering effect of a presentation area of the fluid at the target time based on the velocity field;
and taking the presentation area of the fluid at the target time and the rendering effect of the presentation area of the fluid at the target time as the presentation information of the fluid at the target time.
9. The method of claim 8, wherein the determining a presentation area of the fluid at the target time based on the density field comprises:
acquiring a density threshold;
and determining a region of the fluid with the density at the target time being greater than the density threshold as a presentation region of the fluid at the target time.
10. The method of claim 1, wherein the field information comprises a density field and a velocity field; the determining the field information of the fluid at the target moment according to the flow attribute of the fluid at the current moment comprises the following steps:
acquiring a model to be compiled, wherein the model to be compiled is used for simulating the motion state of the fluid based on the attribute information of the fluid at the current moment;
compiling the model to be compiled to obtain a compiling product, wherein the compiling product comprises a kernel file and a configuration file corresponding to the kernel file, and the kernel file is used for processing the flow attribute of the fluid;
analyzing the configuration file to obtain execution logic of the kernel file and a data structure required by executing the kernel file;
and executing the kernel file according to the execution logic and the data structure to obtain a density field and a speed field of the fluid at the target time.
11. An image processing apparatus, characterized in that the image processing apparatus comprises:
the device comprises an acquisition unit, a display unit and a display unit, wherein the acquisition unit is used for acquiring a background image and the flow attribute of fluid presented in the background image at the current moment;
The processing unit is used for determining field information of the fluid at the target moment according to the flow attribute of the fluid at the current moment, wherein the field information is used for indicating the motion state of the fluid at the target moment; determining presentation information of the fluid at the target time according to the field information, wherein the presentation information is used for indicating the presentation form of the fluid at the target time; and rendering the background image based on the presentation information of the fluid at the target time to obtain a rendered image.
12. A computer device, comprising: a memory and a processor;
the memory having a computer program stored therein;
the processor being configured to load the computer program to implement the image processing method according to any one of claims 1-10.
13. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program adapted to be loaded by a processor and to perform the image processing method according to any of claims 1-10.
14. A computer program product, characterized in that the computer program product comprises a computer program adapted to be loaded by a processor and to perform the image processing method according to any of claims 1-10.
Application CN202211021011.9A, filed 2022-08-24 (priority date 2022-08-24): Image processing method, device, equipment, storage medium and product. Status: Pending. Publication: CN117670942A.

Publications (1)

Publication Number: CN117670942A; Publication Date: 2024-03-08


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination