CN114612596A - Generation method, device and equipment of special effect graph and storage medium - Google Patents

Generation method, device and equipment of special effect graph and storage medium

Info

Publication number
CN114612596A
Authority
CN
China
Prior art keywords
vertex
surface map
map
offset
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210238148.3A
Other languages
Chinese (zh)
Inventor
陈佳明
林伟锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202210238148.3A priority Critical patent/CN114612596A/en
Publication of CN114612596A publication Critical patent/CN114612596A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/203D [Three Dimensional] animation
    • G06T13/603D [Three Dimensional] animation of natural phenomena, e.g. rain, snow, water or plants
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the disclosure discloses a method, a device, equipment and a storage medium for generating a special effect graph. The method includes: scaling and/or translating an initial surface map of a 3D virtual fluid model to obtain at least one transformed surface map; determining the offset of each vertex along the longitudinal axis direction according to the initial surface map and the at least one transformed surface map; moving each vertex along the longitudinal axis direction according to its offset to obtain a static map of the offset 3D virtual fluid model; and splicing and coding the successive static maps to obtain a dynamic special effect graph corresponding to the 3D virtual model. Because the offset of each vertex along the longitudinal axis direction is determined from the initial surface map and at least one transformed surface map, and each vertex is moved along the longitudinal axis direction by that offset, the generated dynamic special effect graph exhibits the effect of flowing liquid, improving the realism of the liquid flow graph.

Description

Generation method, device and equipment of special effect graph and storage medium
Technical Field
The embodiment of the disclosure relates to the technical field of image processing, and in particular relates to a method, a device, equipment and a storage medium for generating a special effect graph.
Background
In the prior art, when fluid motion (such as flowing tears) is simulated, the virtual fluid is generally controlled to flow in a periodic pattern, so the simulated fluid motion is relatively monotonous and the generated fluid animation is insufficiently realistic.
Disclosure of Invention
The embodiment of the disclosure provides a method, a device, equipment and a storage medium for generating a special effect graph, which can generate a special effect graph of flowing liquid and improve the realism of the liquid flow graph.
In a first aspect, an embodiment of the present disclosure provides a method for generating a special effect graph, including:
scaling and/or translating the initial surface map of the 3D virtual fluid model to obtain at least one transformed surface map;
determining the offset of each vertex along the direction of the longitudinal axis according to the initial surface map and the at least one transformed surface map; wherein the longitudinal axis is perpendicular to a plane in which the surface map lies; the vertexes are pixel points forming the surface of the 3D virtual fluid model;
moving each vertex along the longitudinal axis direction according to the offset to obtain a static map of the 3D virtual fluid model after offset;
and splicing and coding the continuous static images to obtain a dynamic special effect image corresponding to the 3D virtual model.
In a second aspect, an embodiment of the present disclosure further provides an apparatus for generating a special effect map, including:
the transformation surface map obtaining module is used for carrying out scaling and/or translation transformation on the initial surface map of the 3D virtual fluid model to obtain at least one transformation surface map;
an offset determination module for determining an offset of each vertex along a longitudinal axis direction according to the initial surface map and the at least one transformed surface map; wherein the longitudinal axis is perpendicular to a plane in which the surface map lies; the vertexes are pixel points forming the surface of the 3D virtual fluid model;
the static map acquisition module is used for moving each vertex along the direction of the longitudinal axis according to the offset to acquire a static map of the 3D virtual fluid model after offset;
and the dynamic special effect image acquisition module is used for splicing and coding the continuous static images to acquire a dynamic special effect image corresponding to the 3D virtual model.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, where the electronic device includes:
one or more processing devices;
storage means for storing one or more programs;
when the one or more programs are executed by the one or more processing devices, the one or more processing devices are caused to implement the special effect image generation method according to the embodiment of the present disclosure.
In a fourth aspect, the disclosed embodiments also provide a computer readable medium, on which a computer program is stored, which when executed by a processing device, implements the special effect image generation method according to the disclosed embodiments.
The embodiment of the disclosure discloses a method, a device, equipment and a storage medium for generating a special effect graph. The method includes: scaling and/or translating an initial surface map of a 3D virtual fluid model to obtain at least one transformed surface map; determining the offset of each vertex along the longitudinal axis direction according to the initial surface map and the at least one transformed surface map, wherein the longitudinal axis is perpendicular to the plane of the surface map and the vertices are the pixel points forming the surface of the 3D virtual fluid model; moving each vertex along the longitudinal axis direction according to its offset to obtain a static map of the offset 3D virtual fluid model; and splicing and coding the successive static maps to obtain a dynamic special effect graph corresponding to the 3D virtual model. Because the offset of each vertex along the longitudinal axis direction is determined from the initial surface map and at least one transformed surface map, and each vertex is moved along the longitudinal axis direction by that offset, the generated dynamic special effect graph exhibits the effect of flowing liquid, improving the realism of the liquid flow graph.
Drawings
FIG. 1 is a flow chart of a method of generating a special effects graph in an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of a special effect diagram generation apparatus in an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of an electronic device in an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Fig. 1 is a flowchart of a method for generating a special effect graph according to an embodiment of the present disclosure. The embodiment is applicable to simulating liquid flow. The method may be executed by a device for generating a special effect graph, which may be composed of hardware and/or software and may generally be integrated into equipment having a special-effect-graph generation function, such as a server, a mobile terminal, or a server cluster. As shown in fig. 1, the method specifically includes the following steps:
s110, scaling and/or translation transformation is carried out on the initial surface map of the 3D virtual fluid model, and at least one transformation surface map is obtained.
A 3D virtual fluid model may be understood as a 3D model constructed to simulate a real fluid; the fluid may be any flowable object, such as a liquid, and the 3D virtual fluid model may be, for example, a water column or a water surface. In this embodiment, the 3D virtual fluid model may be a water column simulating "tears". The initial surface map may be understood as the 2D map, i.e. the UV map, generated by unwrapping the surface of the 3D virtual fluid model. For each pixel point in the initial surface map, the position information may be represented by a horizontal coordinate and a vertical coordinate, i.e. (x, y).
Scaling and/or translating the initial surface map of the 3D virtual fluid model may be understood as scaling and/or translating the horizontal coordinate of each pixel point forming the initial surface map, and/or scaling and/or translating the vertical coordinate of each pixel point. In this embodiment, translating a coordinate may be understood as adding a set value to, or subtracting it from, the coordinate value; scaling a coordinate may be understood as multiplying the coordinate value by a set scaling factor. For example, translating the horizontal coordinate may be expressed as x ± Δx, and scaling the horizontal coordinate may be expressed as m·x, where m is the scaling factor. The transformation of the initial surface map may be a translation only, a scaling only, a translation followed by a scaling, or a scaling followed by a translation. Likewise, the transformation may act on the horizontal coordinate only, on the vertical coordinate only, or on both. In this embodiment, the initial surface map may be transformed at least once to obtain at least one transformed surface map; the specific transformation may be set by the user and is not limited here.
Optionally, the scaling and/or translation transformation is performed on the initial surface map, and the manner of obtaining at least one transformed surface map may be: acquiring time information corresponding to the current moment; at least one scaling and/or translation of the initial surface map is performed at least once based on the time information to obtain at least one transformed surface map.
The time information may be understood as the time on the time axis corresponding to the dynamic special effect graph. Scaling and/or translating the initial surface map at least once based on the time information means that the translation amount or scaling amount applied to the initial surface map is determined from the time information corresponding to the current moment. In this embodiment, every pixel point in the initial surface map is transformed by the same translation amount or scaling amount, so the initial surface map is scaled proportionally or translated as a whole. For example, transforming the horizontal coordinate based on the time information may be expressed as x + time, and transforming the vertical coordinate may be expressed as (y + time)·m. Because the initial surface map is scaled and/or translated at least once based on the time information, an irregular flow pattern can be obtained, improving the realism of the dynamic special effect graph of the fluid.
Specifically, the process of scaling and/or translating the initial surface map at least once based on the time information may be: scaling and/or translating the horizontal coordinates of the vertices in the initial surface map at least once based on the time information; and/or scaling and/or translating the vertical coordinates of the vertices in the initial surface map at least once based on the time information.
A vertex can be understood as a pixel point forming the surface of the 3D virtual fluid model. Scaling and/or translating the horizontal coordinates of the vertices in the initial surface map at least once based on the time information means that, for each pixel point forming the surface of the 3D virtual fluid model, the translation amount or scaling amount of its horizontal coordinate is determined from the time information corresponding to the current moment; the same applies to the vertical coordinates. The transformation based on the time information may be a translation only, a scaling only, a translation followed by a scaling, or a scaling followed by a translation, and it may act on the horizontal coordinate only, on the vertical coordinate only, or on both. In this embodiment, the initial surface map may be transformed at least once based on the time information to obtain at least one transformed surface map; the specific transformation may be set by the user and is not limited here.
In this embodiment, the horizontal coordinates and/or the vertical coordinates are transformed based on the time information, so that the accuracy of transformation can be improved.
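The time-based transformation above can be sketched as a small pure-Python function. The function name, the per-coordinate flags and the default scaling factor are illustrative assumptions, not part of the original disclosure; the arithmetic follows the x + time and (y + time)·m examples in the description.

```python
def transform_uv(vertices, time, m=1.5, translate_u=True, scale_v=True):
    """Return one transformed surface map derived from the initial UV map.

    vertices: list of (u, v) pixel coordinates of the initial surface map.
    Translation adds the current time value to a coordinate; scaling
    multiplies by the factor m. Every vertex receives the same translation
    and scaling amount, so the whole map is translated or scaled uniformly.
    """
    out = []
    for u, v in vertices:
        if translate_u:
            u = u + time           # translate the horizontal coordinate
        if scale_v:
            v = (v + time) * m     # translate, then scale, the vertical one
        out.append((u, v))
    return out
```

Calling this once per desired variant (with different flags, factors or time values) yields the "at least one transformed surface map" used in the next step.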
And S120, determining the offset of each vertex along the direction of the longitudinal axis according to the initial surface map and the at least one transformed surface map.
Wherein the longitudinal axis is perpendicular to the plane of the surface map. The offset of each vertex along the longitudinal axis may be the same or different. In this embodiment, a corresponding relationship between the position information and the offset may be established, and for the initial surface map, the first sub-offset of each vertex may be determined according to the corresponding relationship; for the transformed surface map, a second sub-offset amount of each vertex after the offset can be determined from the correspondence. And finally, carrying out weighted summation on the first sub-offset and the second sub-offset of the corresponding vertex in the initial surface map and the transformed surface map to obtain the final offset of each vertex along the direction of the longitudinal axis.
Optionally, the manner of determining the offset of each vertex along the longitudinal axis direction according to the initial surface map and the at least one transformed surface map may be: sampling gray information from a set noise image according to coordinate information of each vertex in the initial surface map to obtain a first gray map; sampling gray information from a set noise image according to coordinate information of each vertex in the transformed surface map to obtain at least one second gray map; and carrying out weighted summation on the gray values of the corresponding pixel points in the first gray image and the at least one second gray image to obtain the offset of each vertex along the direction of the longitudinal axis.
The set noise map may be an arbitrary noise map. Sampling grey information from the set noise map according to the coordinate information of each vertex in the initial surface map means that, for each vertex of the initial surface map, the grey value of the pixel at the same coordinates (x, y) is read from the set noise map, yielding the first grey map. Sampling according to the coordinate information of each vertex in a transformed surface map works in the same way and yields the second grey map corresponding to that transformed surface map. In this embodiment, the grey value sampled from the set noise map is normalized, i.e. a number between 0 and 1. The weighted summation of the grey values of the corresponding pixel points in the first grey map and the at least one second grey map may be performed by determining a weight for the first grey map and for each second grey map, and summing the grey values of corresponding pixel points with these weights to obtain the offset of each vertex along the longitudinal axis direction. For example, the grey values of corresponding pixel points may simply be averaged, and the average used as the offset of each vertex along the longitudinal axis direction.
In this embodiment, sampling grey information from the set noise map according to the vertex coordinate information to determine the offset along the longitudinal axis direction increases the variety of the simulated liquid's flow patterns.
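A minimal sketch of this sampling-and-weighting step, in pure Python. The `sample_noise` hash is only a stand-in for the preset noise texture (an assumption; any noise map could be sampled instead), and the default equal weights reproduce the averaging example above.

```python
import math

def sample_noise(u, v):
    # Stand-in for the preset noise map: a deterministic, normalized
    # grey value in [0, 1) for any coordinate pair.
    x = math.sin(u * 12.9898 + v * 78.233) * 43758.5453
    return x - math.floor(x)

def vertex_offsets(initial_uv, transformed_uvs, weights=None):
    """Offset of each vertex along the longitudinal axis.

    Each surface map contributes the grey value sampled at that vertex's
    coordinates; the per-vertex offset is the weighted sum of those grey
    values. With no weights given, the maps are simply averaged.
    """
    maps = [initial_uv] + list(transformed_uvs)
    if weights is None:
        weights = [1.0 / len(maps)] * len(maps)
    return [sum(w * sample_noise(*uv_map[i]) for w, uv_map in zip(weights, maps))
            for i in range(len(initial_uv))]
```

Because the transformed maps drift with time while the noise map is fixed, resampling each frame produces the irregular, non-periodic offsets that drive the flow effect.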
And S130, moving each vertex along the longitudinal axis direction according to the offset to obtain a static map after the 3D virtual fluid model is offset.
Specifically, after the offset of each vertex along the longitudinal axis direction is determined, each vertex is moved along the longitudinal axis direction by its offset, so that the surface of the 3D virtual fluid model exhibits an "undulating" effect.
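The displacement itself is a single per-vertex move. In this sketch the longitudinal axis is taken as the z axis, perpendicular to the plane of the surface map; that choice of axes is an illustrative assumption.

```python
def displace_vertices(vertices, offsets):
    """Move each surface vertex along the longitudinal (z) axis by its
    offset, producing the vertex positions of one static map."""
    return [(x, y, z + o) for (x, y, z), o in zip(vertices, offsets)]
```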
Optionally, after moving each vertex along the longitudinal axis direction according to the offset, the method further includes the following steps: determining a main tangent and an auxiliary tangent of each vertex after movement; determining a normal line according to the main tangent line and the auxiliary tangent line; determining illumination information corresponding to the moved vertex based on the normal; and rendering each moved vertex based on the illumination information to obtain a static diagram of the 3D virtual fluid model after the displacement.
Wherein the principal tangent and the secondary tangent may be understood as tangents to the surface of the 3D virtual fluid model at the vertices.
Specifically, determining the main tangent and the secondary tangent of each moved vertex includes: for each moved vertex, obtaining the difference in offset between the vertex and its neighbouring vertex in the vertical direction as a first difference, and the difference in offset between the vertex and its neighbouring vertex in the horizontal direction as a second difference; acquiring the world coordinate information of the vertex and the view direction of the virtual camera; determining an intermediate direction from the world coordinate information and the view direction; determining the main tangent based on the intermediate direction, the view direction and the first difference; and determining the secondary tangent based on the intermediate direction, the view direction and the second difference.
Here the vertical direction is the x direction and the horizontal direction is the y direction; assume the spacing between two adjacent vertices in the vertical direction is Δx and in the horizontal direction is Δy. If the coordinates of the current vertex are (x, y), the coordinates of its vertical neighbour are (x + Δx, y) and those of its horizontal neighbour are (x, y + Δy).
Specifically, the intermediate direction may be determined by taking partial derivatives of the world coordinate information and computing the cross product of the resulting vector with the vector corresponding to the view direction, yielding the vector corresponding to the intermediate direction.
Specifically, the main tangent may be determined by multiplying the first difference by the vector corresponding to the view direction and adding the result to the vector corresponding to the intermediate direction, yielding the vector corresponding to the main tangent. The secondary tangent may be determined in the same way, using the second difference instead of the first. This scheme improves the accuracy of determining the main tangent and the secondary tangent.
Specifically, the normal may be determined by computing the cross product of the vector corresponding to the main tangent and the vector corresponding to the secondary tangent. In this embodiment, the illumination information may be determined from the normal by any existing scheme, which is not limited here. Rendering each moved vertex based on the illumination information makes the 3D virtual fluid model better match the environment of the picture, improving its realism.
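The tangent-and-normal computation above can be sketched as follows, with 3-tuples standing in for vectors (an illustrative representation): each tangent is the intermediate-direction vector plus the corresponding offset difference times the view-direction vector, and the normal is the cross product of the two tangents.

```python
def vec_add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def vec_scale(a, s):
    return tuple(x * s for x in a)

def vec_cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def vertex_normal(mid_dir, view_dir, first_diff, second_diff):
    """Normal at a moved vertex, following the described steps:
    main/secondary tangent from the intermediate direction, the view
    direction and the offset differences; normal as their cross product."""
    main_tangent = vec_add(mid_dir, vec_scale(view_dir, first_diff))
    secondary_tangent = vec_add(mid_dir, vec_scale(view_dir, second_diff))
    return vec_cross(main_tangent, secondary_tangent)
```

The resulting (unnormalized) normal can then be fed to any standard lighting model to shade the displaced surface.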
And S140, splicing and coding the continuous static images to obtain a dynamic special effect image corresponding to the 3D virtual model.
In this embodiment, the surface map of the 3D virtual fluid model at each moment is processed by the above steps, so that a plurality of successive static maps are obtained; splicing and coding these successive static maps yields the dynamic special effect graph corresponding to the 3D virtual model, thereby simulating the effect of flowing liquid.
According to the technical scheme of this embodiment, the initial surface map of the 3D virtual fluid model is scaled and/or translated to obtain at least one transformed surface map; the offset of each vertex along the longitudinal axis direction is determined according to the initial surface map and the at least one transformed surface map, wherein the longitudinal axis is perpendicular to the plane of the surface map and the vertices are the pixel points forming the surface of the 3D virtual fluid model; each vertex is moved along the longitudinal axis direction by its offset to obtain a static map of the offset 3D virtual fluid model; and the successive static maps are spliced and coded to obtain the dynamic special effect graph corresponding to the 3D virtual model. Because the offset of each vertex is determined from the initial surface map and at least one transformed surface map, and each vertex is moved along the longitudinal axis direction by that offset, the generated dynamic special effect graph exhibits the effect of flowing liquid, improving the realism of the liquid flow graph.
Fig. 2 is a schematic structural diagram of an apparatus for generating a special effect graph according to an embodiment of the disclosure, and as shown in fig. 2, the apparatus includes:
a transformed surface map obtaining module 210, configured to perform scaling and/or translation transformation on an initial surface map of the 3D virtual fluid model to obtain at least one transformed surface map;
an offset determining module 220, configured to determine an offset of each vertex along the longitudinal axis direction according to the initial surface map and the at least one transformed surface map; wherein the longitudinal axis is perpendicular to the plane of the surface map; the vertexes are pixel points forming the surface of the 3D virtual fluid model;
the static map obtaining module 230 is configured to move each vertex along the longitudinal axis direction according to the offset amount to obtain a static map after the 3D virtual fluid model is offset;
and the dynamic special effect image obtaining module 240 is configured to splice and encode the continuous static images to obtain a dynamic special effect image corresponding to the 3D virtual model.
Optionally, the transformation surface map obtaining module 210 is further configured to:
acquiring time information corresponding to the current moment;
at least one scaling and/or translation of the initial surface map is performed at least once based on the time information to obtain at least one transformed surface map.
Optionally, the transformation surface map obtaining module 210 is further configured to:
scaling and/or translating the horizontal coordinates of the vertices in the initial surface map at least once based on the time information; and/or,
the vertical coordinates of the vertices in the initial surface map are scaled and/or translated at least once based on the time information.
Optionally, the offset determining module 220 is further configured to:
sampling gray information from a set noise image according to coordinate information of each vertex in the initial surface map to obtain a first gray map;
sampling gray information from a set noise image according to coordinate information of each vertex in the transformed surface map to obtain at least one second gray map;
and carrying out weighted summation on the gray values of the corresponding pixel points in the first gray image and the at least one second gray image to obtain the offset of each vertex along the direction of the longitudinal axis.
Optionally, the method further includes: an illumination information determination module to:
determining a main tangent and an auxiliary tangent of each vertex after movement;
determining a normal line according to the main tangent line and the auxiliary tangent line;
determining illumination information corresponding to the moved vertex based on the normal;
and rendering each moved vertex based on the illumination information to obtain a static diagram of the 3D virtual fluid model after the displacement.
Optionally, the illumination information determining module is further configured to:
for each moved vertex, obtaining the difference in offset between the vertex and its vertically adjacent vertex, and determining that difference as a first difference; and obtaining the difference in offset between the vertex and its horizontally adjacent vertex, and determining that difference as a second difference;
acquiring world coordinate information of the vertex and a view direction of the virtual camera;
determining an intermediate direction according to the world coordinate information and the view direction;
determining a primary tangent based on the intermediate direction, the view direction, and the first difference;
determining a secondary tangent based on the intermediate direction, the view direction, and the second difference.
Optionally, the illumination information determining module is further configured to:
multiplying the first difference by the vector corresponding to the view direction, and adding the result to the vector corresponding to the intermediate direction, to obtain the vector corresponding to the primary tangent;
determining a secondary tangent based on the intermediate direction, the view direction, and the second difference, comprising:
multiplying the second difference by the vector corresponding to the view direction, and adding the result to the vector corresponding to the intermediate direction, to obtain the vector corresponding to the secondary tangent.
The apparatus can execute the methods provided by all embodiments of the present disclosure, and has the functional modules and beneficial effects corresponding to those methods. For technical details not described in detail in this embodiment, reference may be made to the methods provided in all the foregoing embodiments of the present disclosure.
Referring now to FIG. 3, a block diagram of an electronic device 300 suitable for implementing embodiments of the present disclosure is shown. The electronic device in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), and a vehicle terminal (e.g., a car navigation terminal), fixed terminals such as a digital TV and a desktop computer, and various forms of servers such as a stand-alone server or a server cluster. The electronic device shown in FIG. 3 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 3, electronic device 300 may include a processing device (e.g., central processing unit, graphics processor, etc.) 301 that may perform various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 302 or a program loaded from a storage device 308 into a random access memory (RAM) 303. In the RAM 303, various programs and data necessary for the operation of the electronic device 300 are also stored. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to the bus 304.
Generally, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 308 including, for example, magnetic tape, hard disk, etc.; and a communication device 309. The communication means 309 may allow the electronic device 300 to communicate wirelessly or by wire with other devices to exchange data. While fig. 3 illustrates an electronic device 300 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, the processes described above with reference to the flowcharts may be implemented as computer software programs according to embodiments of the present disclosure. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method for generating a special effect graph. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 309, or installed from the storage device 308, or installed from the ROM 302. The computer program, when executed by the processing device 301, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: scaling and/or translating the initial surface map of the 3D virtual fluid model to obtain at least one transformed surface map; determining the offset of each vertex along the direction of the longitudinal axis according to the initial surface map and the at least one transformed surface map; wherein the longitudinal axis is perpendicular to a plane in which the surface map lies; the vertexes are pixel points forming the surface of the 3D virtual fluid model; moving each vertex along the longitudinal axis direction according to the offset to obtain a static map of the 3D virtual fluid model after offset; and splicing and coding the continuous static images to obtain a dynamic special effect image corresponding to the 3D virtual model.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The name of a unit does not, in some cases, constitute a limitation on the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, a method for generating a special effect graph is disclosed in the present disclosure, including:
scaling and/or translating the initial surface map of the 3D virtual fluid model to obtain at least one transformed surface map;
determining the offset of each vertex along the direction of the longitudinal axis according to the initial surface map and the at least one transformed surface map; wherein the longitudinal axis is perpendicular to a plane in which the surface map lies; the vertexes are pixel points forming the surface of the 3D virtual fluid model;
moving each vertex along the longitudinal axis direction according to the offset to obtain a static map of the 3D virtual fluid model after offset;
and splicing and coding the continuous static images to obtain a dynamic special effect image corresponding to the 3D virtual model.
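As a rough end-to-end sketch of the four steps above (illustrative only — the function name, noise texture, scroll speeds, and grid layout are hypothetical, and the final stitching/encoding of frames is omitted), the per-frame displacement could look like:

```python
import numpy as np

def make_frames(base_height, noise, n_frames, dt=1.0 / 30):
    """Produce one displaced height field ("static map") per timestamp.

    For each frame: translate the surface-map UVs based on time, sample
    the noise texture to get each vertex's offset along the longitudinal
    (height) axis, and add it to the base heights.
    """
    h, w = base_height.shape
    # UV grid over the surface map, one (u, v) pair per vertex.
    v, u = np.meshgrid(np.linspace(0, 1, h, endpoint=False),
                       np.linspace(0, 1, w, endpoint=False), indexing="ij")
    frames = []
    for i in range(n_frames):
        t = i * dt
        # Hypothetical scroll speeds; translation makes the surface "flow".
        uu = np.mod(u + 0.05 * t, 1.0)
        vv = np.mod(v + 0.03 * t, 1.0)
        offsets = noise[(vv * h).astype(int), (uu * w).astype(int)]
        frames.append(base_height + offsets)  # move vertices along the height axis
    return frames
```

Encoding the returned frames into an animated special effect image would be handled by whatever video/GIF encoder the pipeline uses.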
Further, carrying out the scaling and/or translation transformation on the initial surface map to obtain at least one transformed surface map comprises:
acquiring time information corresponding to the current moment;
and carrying out at least one scaling and/or translation on the initial surface map based on the time information to obtain at least one transformed surface map.
Further, scaling and/or translating the initial surface map at least once based on the time information comprises:
scaling and/or translating the horizontal coordinates of the vertices in the initial surface map at least once based on the time information; and/or
scaling and/or translating the vertical coordinates of the vertices in the initial surface map at least once based on the time information.
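As an illustrative sketch of this step (not the patent's actual implementation — the scale and speed constants are hypothetical), time-driven scaling and translation of the vertex coordinates might look like:

```python
import numpy as np

def transform_uv(uv, t, scale=(1.2, 1.2), speed=(0.05, 0.03)):
    """Scale and translate surface-map vertex coordinates by time t.

    uv: (N, 2) array of horizontal/vertical coordinates. Scaling
    stretches the pattern that will be sampled; the translation term
    grows with time, so repeated calls yield a moving pattern.
    Different (scale, speed) pairs yield different transformed maps.
    """
    uv = np.asarray(uv, dtype=np.float64)
    return uv * np.asarray(scale) + np.asarray(speed) * t

# One initial map, two differently transformed maps:
uv0 = np.array([[0.0, 0.0], [0.5, 0.25]])
uv1 = transform_uv(uv0, t=2.0)
uv2 = transform_uv(uv0, t=2.0, scale=(0.7, 0.7), speed=(-0.02, 0.04))
```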
Further, determining an offset of each vertex along a longitudinal axis direction from the initial surface map and the at least one transformed surface map, comprising:
sampling gray information from a set noise image according to coordinate information of each vertex in the initial surface map to obtain a first gray map;
sampling gray information from the set noise image according to coordinate information of each vertex in the transformed surface map to obtain at least one second gray map;
and carrying out weighted summation on the gray values of the corresponding pixel points in the first gray image and the at least one second gray image to obtain the offset of each vertex along the direction of the longitudinal axis.
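A minimal sketch of the two sampling passes and the weighted sum described above (the nearest-neighbour lookup and the 0.6/0.4 weights are assumptions, not taken from the patent):

```python
import numpy as np

def sample_gray(noise, uv):
    """Sample a grayscale noise image at (N, 2) UV coordinates.

    Coordinates are wrapped into [0, 1) so translated maps tile the
    texture; nearest-neighbour lookup keeps the sketch short.
    """
    h, w = noise.shape
    u = (np.mod(uv[:, 0], 1.0) * w).astype(int)
    v = (np.mod(uv[:, 1], 1.0) * h).astype(int)
    return noise[v, u]

def vertex_offsets(noise, uv_initial, uv_transformed, weights=(0.6, 0.4)):
    """Weighted sum of the first and second gray maps gives each
    vertex's offset along the longitudinal axis."""
    first_gray = sample_gray(noise, uv_initial)
    second_gray = sample_gray(noise, uv_transformed)
    return weights[0] * first_gray + weights[1] * second_gray
```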
Further, after moving each vertex along the longitudinal axis direction according to the offset, the method further includes:
determining a primary tangent and a secondary tangent of each vertex after movement;
determining a normal according to the primary tangent and the secondary tangent;
determining illumination information corresponding to the moved vertex based on the normal;
and rendering each moved vertex based on the illumination information to obtain a static map of the 3D virtual fluid model after the offset.
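The normal-from-tangents step can be sketched as a cross product; a plain Lambert term stands in here for whatever lighting model the renderer actually applies (an assumption — the patent does not specify one):

```python
import numpy as np

def shade_vertex(primary_tangent, secondary_tangent, light_dir):
    """Normal = normalised cross(primary tangent, secondary tangent);
    illumination is then a clamped Lambert dot product with the
    light direction."""
    normal = np.cross(primary_tangent, secondary_tangent)
    normal = normal / np.linalg.norm(normal)
    light = np.asarray(light_dir, dtype=np.float64)
    light = light / np.linalg.norm(light)
    return max(float(np.dot(normal, light)), 0.0)  # 0 for back-facing
```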
Further, determining the primary tangent and the secondary tangent of each vertex after the movement comprises:
for each moved vertex, obtaining the difference in offset between the vertex and its vertically adjacent vertex, and determining that difference as a first difference; and obtaining the difference in offset between the vertex and its horizontally adjacent vertex, and determining that difference as a second difference;
acquiring world coordinate information of the vertex and a view direction of the virtual camera;
determining an intermediate direction according to the world coordinate information and the view direction;
determining a primary tangent based on the intermediate direction, the view direction, and the first difference;
determining a secondary tangent based on the intermediate direction, the view direction, and the second difference.
Further, determining the primary tangent based on the intermediate direction, the view direction, and the first difference comprises:
multiplying the first difference by the vector corresponding to the view direction, and adding the result to the vector corresponding to the intermediate direction, to obtain the vector corresponding to the primary tangent;
determining the secondary tangent based on the intermediate direction, the view direction, and the second difference comprises:
multiplying the second difference by the vector corresponding to the view direction, and adding the result to the vector corresponding to the intermediate direction, to obtain the vector corresponding to the secondary tangent.
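The two formulas above reduce to tangent = difference × view-vector + intermediate-vector. A sketch (the vector layout and example inputs are hypothetical):

```python
import numpy as np

def tangents_from_offsets(first_diff, second_diff, view_dir, intermediate_dir):
    """Primary/secondary tangents per the scheme above: multiply each
    offset difference by the view-direction vector, then add the
    intermediate-direction vector."""
    view = np.asarray(view_dir, dtype=np.float64)
    mid = np.asarray(intermediate_dir, dtype=np.float64)
    primary = first_diff * view + mid     # from the vertical-neighbour difference
    secondary = second_diff * view + mid  # from the horizontal-neighbour difference
    return primary, secondary
```

The cross product of these two vectors then yields the normal used for shading.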
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present disclosure and the technical principles employed. Those skilled in the art will appreciate that the present disclosure is not limited to the particular embodiments described herein, and that various obvious changes, adaptations, and substitutions are possible without departing from the scope of the present disclosure. Therefore, although the present disclosure has been described in greater detail with reference to the above embodiments, the present disclosure is not limited to them and may include other equivalent embodiments without departing from its spirit, the scope of which is determined by the scope of the appended claims.

Claims (10)

1. A method for generating a special effect graph is characterized by comprising the following steps:
scaling and/or translating the initial surface map of the 3D virtual fluid model to obtain at least one transformed surface map;
determining the offset of each vertex along the direction of the longitudinal axis according to the initial surface map and the at least one transformed surface map; wherein the longitudinal axis is perpendicular to a plane in which the surface map lies; the vertexes are pixel points forming the surface of the 3D virtual fluid model;
moving each vertex along the longitudinal axis direction according to the offset to obtain a static map of the 3D virtual fluid model after offset;
and splicing and coding the continuous static images to obtain a dynamic special effect image corresponding to the 3D virtual model.
2. The method according to claim 1, wherein performing the scaling and/or translation transformation on the initial surface map to obtain the at least one transformed surface map comprises:
acquiring time information corresponding to the current moment;
and carrying out at least one scaling and/or translation on the initial surface map based on the time information to obtain at least one transformed surface map.
3. The method of claim 2, wherein scaling and/or translating the initial surface map at least once based on the time information comprises:
scaling and/or translating the horizontal coordinates of the vertices in the initial surface map at least once based on the time information; and/or
scaling and/or translating the vertical coordinates of the vertices in the initial surface map at least once based on the time information.
4. The method of claim 1, wherein determining an offset of each vertex along a longitudinal axis from the initial surface map and the at least one transformed surface map comprises:
sampling gray information from a set noise image according to coordinate information of each vertex in the initial surface map to obtain a first gray map;
sampling gray information from the set noise image according to coordinate information of each vertex in the transformed surface map to obtain at least one second gray map;
and carrying out weighted summation on the gray values of the corresponding pixel points in the first gray image and the at least one second gray image to obtain the offset of each vertex along the direction of the longitudinal axis.
5. The method of claim 1, further comprising, after moving each vertex in the longitudinal axis direction according to the offset amount:
determining a primary tangent and a secondary tangent of each vertex after movement;
determining a normal according to the primary tangent and the secondary tangent;
determining illumination information corresponding to the moved vertex based on the normal;
and rendering each moved vertex based on the illumination information to obtain a static map of the 3D virtual fluid model after the offset.
6. The method of claim 5, wherein determining the primary tangent and the secondary tangent of each vertex after movement comprises:
for each moved vertex, obtaining the difference in offset between the vertex and its vertically adjacent vertex, and determining that difference as a first difference; and obtaining the difference in offset between the vertex and its horizontally adjacent vertex, and determining that difference as a second difference;
acquiring world coordinate information of the vertex and a view direction of the virtual camera;
determining an intermediate direction according to the world coordinate information and the view direction;
determining a primary tangent based on the intermediate direction, the view direction, and the first difference;
determining a secondary tangent based on the intermediate direction, the view direction, and the second difference.
7. The method of claim 6, wherein determining the primary tangent based on the intermediate direction, the view direction, and the first difference comprises:
multiplying the first difference by the vector corresponding to the view direction, and adding the result to the vector corresponding to the intermediate direction, to obtain the vector corresponding to the primary tangent;
and wherein determining the secondary tangent based on the intermediate direction, the view direction, and the second difference comprises:
multiplying the second difference by the vector corresponding to the view direction, and adding the result to the vector corresponding to the intermediate direction, to obtain the vector corresponding to the secondary tangent.
8. An apparatus for generating a special effect map, comprising:
the transformation surface map obtaining module is used for carrying out scaling and/or translation transformation on the initial surface map of the 3D virtual fluid model to obtain at least one transformation surface map;
an offset determination module for determining an offset of each vertex along a longitudinal axis direction according to the initial surface map and the at least one transformed surface map; wherein the longitudinal axis is perpendicular to a plane in which the surface map lies; the vertexes are pixel points forming the surface of the 3D virtual fluid model;
the static map acquisition module is used for moving each vertex along the direction of the longitudinal axis according to the offset to acquire a static map of the 3D virtual fluid model after offset;
and the dynamic special effect image acquisition module is used for splicing and coding the continuous static images to acquire a dynamic special effect image corresponding to the 3D virtual model.
9. An electronic device, characterized in that the electronic device comprises:
one or more processing devices;
storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processing devices, cause the one or more processing devices to implement the method for generating a special effect graph of any one of claims 1-7.
10. A computer-readable medium having stored thereon a computer program which, when executed by a processing device, implements the method for generating a special effect graph of any one of claims 1 to 7.
CN202210238148.3A 2022-03-11 2022-03-11 Generation method, device and equipment of special effect graph and storage medium Pending CN114612596A (en)


Publications (1)

Publication Number Publication Date
CN114612596A true CN114612596A (en) 2022-06-10

Family

ID=81862792



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination