CN115147532A - Image processing method, device and equipment, storage medium and program product - Google Patents

Image processing method, device and equipment, storage medium and program product

Info

Publication number
CN115147532A
CN115147532A (application CN202210704119.1A)
Authority
CN
China
Prior art keywords
image
target
pixel point
key element
reference pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210704119.1A
Other languages
Chinese (zh)
Inventor
吴高
杨志鹏
彭悦
欧阳尔立
李志锋
刘威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202210704119.1A priority Critical patent/CN115147532A/en
Publication of CN115147532A publication Critical patent/CN115147532A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/02 Non-photorealistic rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides an image processing method, apparatus, device, storage medium and program product, which can be applied to scenes such as cloud technology, artificial intelligence, intelligent traffic and driving assistance. The method comprises the following steps: performing element identification on a target image to obtain at least one key element of the target image; determining the water wave center point of the water ripple in the target image based on the position of each key element in the target image; determining target pixel points in the target image at a preset distance from the water wave center point; performing water ripple special effect rendering on the target pixel points according to water ripple parameters to obtain a plurality of rendered images; and splicing the rendered images according to the time node corresponding to each rendered image to generate the water ripple special effect video of the target image. The method enhances the realism of the water ripple special effect and improves the attractiveness and interest of the image.

Description

Image processing method, device and equipment, storage medium and program product
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image processing method, apparatus, device, storage medium, and program product.
Background
With the rapid development of image processing technology, the ways of processing images have gradually diversified. Adding a special effect to an image is one important processing mode, and adding different special effects gives an image different expressive effects. The water ripple special effect presents a ripple pattern in the image. A real water ripple is produced when an object collides with and contacts the water surface, causing the wave to spread 360 degrees outward from its center. How to generate a water ripple special effect that reproduces this real behavior is therefore a technical problem worth studying.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, image processing equipment, a storage medium and a program product, which can enhance the reality of a water ripple special effect and improve the attractiveness and interestingness of an image.
In one aspect, an embodiment of the present application provides an image processing method, including:
performing element identification on the target image to obtain at least one key element of the target image, wherein the key element is used for indicating the image content of the target image;
determining the water wave central point of the water wave in the target image based on the position of each key element in the target image;
determining target pixel points in the target image at a preset distance from the water wave center point;
performing water ripple special effect rendering on the target pixel points according to the water ripple parameters to obtain a plurality of rendering images; the rendering method comprises the steps that different rendering images correspond to different time nodes, each rendering image comprises a water ripple special effect, water ripple parameters comprise an amplitude parameter, a frequency parameter and a speed parameter, the amplitude parameter is used for controlling the vibration intensity of the water ripple special effect in each rendering image, the amplitude parameter attenuates along the time nodes, the frequency parameter is used for controlling the intensity degree of the water ripple special effect in each rendering image, and the speed parameter is used for controlling the change speed of the water ripple special effect in each rendering image;
and splicing the rendering images according to the time node corresponding to each rendering image to generate the ripple special effect video of the target image.
In another aspect, an embodiment of the present application provides an image processing apparatus, including:
an acquisition unit configured to acquire a target image;
the processing unit is used for performing element identification on the target image to obtain at least one key element of the target image, the key element being used for indicating the image content of the target image; determining the water wave center point of the water ripple in the target image based on the position of each key element in the target image; determining target pixel points in the target image at a preset distance from the water wave center point; performing water ripple special effect rendering on the target pixel points according to water ripple parameters to obtain a plurality of rendered images, wherein different rendered images correspond to different time nodes and each rendered image contains the water ripple special effect, the water ripple parameters include an amplitude parameter, a frequency parameter and a speed parameter, the amplitude parameter is used to control the vibration intensity of the water ripple special effect in each rendered image and decays over the time nodes, the frequency parameter is used to control the density of the water ripple special effect in each rendered image, and the speed parameter is used to control the change speed of the water ripple special effect in each rendered image; and splicing the rendered images according to the time node corresponding to each rendered image to generate the water ripple special effect video of the target image.
Accordingly, an embodiment of the present application provides a computer device, which includes a memory, a processor, and a network interface, where the processor is connected to the memory and the network interface, the network interface is used to provide a network communication function, the memory is used to store a program code, and the processor is used to call the program code to execute the method in the embodiment of the present application.
Accordingly, embodiments of the present application provide a computer-readable storage medium, in which a computer program is stored, and when the computer program is executed by a processor, the method in the embodiments of the present application is implemented.
Accordingly, embodiments of the present application provide a computer program product comprising a computer program or computer instructions, which when executed by a processor, implement the method in embodiments of the present application.
Accordingly, embodiments of the present application provide a computer program, which includes computer instructions stored in a computer-readable storage medium, and a processor of a computer device reads the computer instructions from the computer-readable storage medium, and executes the computer instructions to cause the computer device to perform the method in the embodiments of the present application.
In the embodiments of the application, the key elements in the target image are first identified, the water wave center point of the water ripple is determined based on the positions of the key elements in the target image, water ripple special effect rendering is performed on the target pixel points around the water wave center point according to the water ripple parameters to obtain a plurality of rendered images, and finally the rendered images are spliced to generate the water ripple special effect video of the target image. In this way, on the one hand, the key elements in the target image are identified and the water wave center point is determined according to their positions, so that the image content of the target image and the water ripple special effect are effectively fused; on the other hand, water ripple special effect rendering is performed on the target pixel points according to the water ripple parameters, which improves the realism of the water ripple special effect and enhances the attractiveness and interest of the target image.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application or in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic architecture diagram of an image processing system according to an embodiment of the present application;
fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an element identification box in a target image according to an embodiment of the present disclosure;
fig. 4 is a schematic flowchart of a process of performing ripple special effect rendering on a target pixel point according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a rectangular coordinate system provided by an embodiment of the present application;
FIG. 6 is a schematic flowchart of another image processing method provided in the embodiments of the present application;
fig. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Before further detailed description of the embodiments of the present application, terms and expressions referred to in the embodiments of the present application will be described, and the terms and expressions referred to in the embodiments of the present application will be used for the following explanation.
(1) Artificial Intelligence (AI) is a theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use that knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and to produce new intelligent machines that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the capabilities of perception, reasoning and decision-making.
The artificial intelligence technology is a comprehensive subject, and relates to the field of extensive technology, namely the technology of a hardware level and the technology of a software level. The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and the like.
(2) Computer Vision (CV) is the science of how to make machines "see". More specifically, it refers to using cameras and computers, in place of human eyes, to identify and measure targets and perform other machine-vision tasks, and to further process the resulting images so that they become more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies theories and techniques that attempt to build artificial intelligence systems capable of capturing information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, Optical Character Recognition (OCR), video processing, video semantic understanding, video content/behavior recognition, three-dimensional (3D) object reconstruction, 3D technology, virtual reality, augmented reality, simultaneous localization and mapping, and other technologies, and also include common biometric technologies such as face recognition and fingerprint recognition.
(3) Machine Learning (ML) is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory and other disciplines. It specializes in studying how computers simulate or implement human learning behavior in order to acquire new knowledge or skills and to reorganize existing knowledge structures so as to continuously improve their own performance. Machine learning is the core of artificial intelligence and the fundamental way to give computers intelligence, and it is applied in all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning and learning from demonstrations.
The scheme provided by the embodiment of the application relates to a computer vision technology in an artificial intelligence technology, and is specifically explained by the following embodiment.
Referring to fig. 1, fig. 1 is a schematic diagram of an architecture of an image processing system according to an embodiment of the present disclosure. The image processing system comprises a terminal device 101, a server 102 and a database 103, wherein the terminal device 101 and the server 102 can establish communication connection in a wired or wireless mode, and the database 103 can provide data services for the server 102.
As shown in fig. 1, the terminal apparatus 101 transmits a target image to the server 102, and the server 102 can perform element recognition on the target image. In one embodiment, the server 102 may identify text information in the target image, the database 103 may store a keyword library, and if a target text matching at least one keyword in the keyword library exists in the text information identified by the server 102, the target text in the text information may be determined as a key element. After obtaining the key elements of the target image, the server 102 determines a water wave center point according to the positions of the key elements in the target image, then determines pixel points with a preset distance from the water wave center point in the target image as target pixel points, performs water wave special effect rendering on the target pixel points according to water wave parameters to obtain a plurality of rendering images, finally splices the rendering images into a water wave special effect video, and can send the water wave special effect video to the terminal device 101 through a network. The ripple special effect video generated by the method truly simulates the diffusion effect of ripple, and realizes the ripple special effect beautification of the target image.
In the embodiment of the present application, the terminal device may be, but is not limited to, a personal computer, a notebook computer, a smart phone, a tablet computer, a smart watch, a smart voice interaction device, a smart home appliance, a vehicle-mounted terminal, an aircraft, a smart wearable device, and the like. The server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a Network service, cloud communication, middleware service, a domain name service, a security service, a Content Delivery Network (CDN), a big data and artificial intelligence platform, and the like. The database may be a local database or a cloud database that the server can access, and the like, which is not limited in the present application. In addition, the embodiment of the application can be applied to various scenes, including but not limited to cloud technology, artificial intelligence, intelligent traffic, driving assistance and the like.
Based on the above description, an embodiment of the present application provides an image processing method. Referring to fig. 2, fig. 2 is a schematic flowchart of an image processing method provided in an embodiment of the present application. The image processing method is executed by a computer device, which may be a terminal device or a server. For ease of understanding, the embodiment is described taking the case where the method is executed by a server as an example. As shown in fig. 2, the method may include, but is not limited to, the following steps.
S201: performing element identification on the target image to obtain at least one key element of the target image.
In the embodiment of the present application, the target image refers to an image to which a water ripple special effect is to be added. Before the water ripple special effect is added, the server performs element identification on the target image to obtain one or more key elements of the target image. The key elements are used to indicate the image content of the target image and may be, for example, text in the target image.
In one embodiment, the target image may be an advertising image, which typically includes advertising copy, button elements, advertised merchandise and the like. When element identification is performed on the advertising image, identification can proceed from the following three aspects: (1) Recognizing text information in the advertising image using OCR technology, and, if the text information contains target text matching a keyword in the keyword library, determining the target text as a key element. OCR technology analyzes, recognizes and processes an image containing text in order to extract the text information. (2) Detecting whether the advertising image contains button elements; if so, the button elements can be determined as key elements. (3) Detecting whether the advertising image contains merchandise elements; if so, the merchandise elements can be determined as key elements. Detection of button elements and merchandise elements can be implemented with a machine learning model, for example DeepLabv3+ (a semantic segmentation model).
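The keyword-matching step described in aspect (1) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the keyword library contents and the substring-matching rule are assumptions, and a real system would run an OCR engine first to produce the text lines.

```python
# Hypothetical keyword library of key promotional phrases (illustrative entries only).
KEYWORD_LIBRARY = {"discount", "buy one, get one free", "limited time"}

def extract_key_text(ocr_lines):
    """Return the OCR text lines that match at least one keyword in the
    keyword library; these become key elements of the target image."""
    return [line for line in ocr_lines
            if any(kw in line.lower() for kw in KEYWORD_LIBRARY)]
```

A line such as "50% Discount today" would be kept as a key element, while unrelated copy would be ignored.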
In one embodiment, the target image may also be a landscape image, which typically includes a plant or an animal. When the element recognition is performed on the landscape image, the plant element and the animal element in the landscape image can be recognized separately. Specifically, if plant elements of the same kind as the plants in the preset material library exist in the landscape image, the plant elements can be determined as key elements; if animal elements of the same animal type as those in the preset material library exist in the landscape image, the animal elements can be determined as key elements.
S202: determining the water wave center point of the water ripple in the target image based on the positions of the key elements in the target image.
In the embodiment of the application, after the server identifies the key elements of the target image, the water wave center point of the water ripple special effect to be added can be determined based on the positions of the key elements in the target image. Specifically, the server may determine a water wave center region based on the position of each key element in the target image, and then determine the water wave center point from the water wave center region. The water wave center point may be the center point of the water wave center region or an arbitrary point within it, and may be selected according to actual needs; the embodiment of the present application does not limit this.
In one embodiment, the server may determine the water wave center region as follows. First, attribute information of each key element in at least one dimension is obtained; the attribute information of any dimension may include element size information, text information or element category information, and evaluating the key elements through attribute information of different dimensions allows the water wave center region to be determined flexibly. Then, a feature value of each key element in each dimension is obtained from the attribute information of that key element in that dimension. Finally, a reference value of each key element is calculated from its feature values in the at least one dimension, and the position in the target image of the key element with the highest reference value is determined as the water wave center region.
In one embodiment, the attribute information of a key element in a target dimension may include element size information, which refers to the size information of the element identification box used to identify the key element; the element identification box is a rectangular box used to highlight an identified key element during element identification. As shown in fig. 3, element identification is performed on the target image 301 to obtain the text information and button elements in the target image 301, and the rectangular boxes highlighting the recognized text information and button elements in fig. 3 are element identification boxes 302. When the attribute information of the key elements in the target dimension includes element size information, the feature values of the key elements in the target dimension may be determined as follows: determine the height of the element identification box corresponding to each key element from the size information of its element identification box, set the feature value in the target dimension of the key element identified by the tallest element identification box to the first feature value, and set the feature values in the target dimension of all remaining key elements to the second feature value.
The first feature value and the second feature value distinguish whether the element identification box corresponding to a key element is the tallest. The first feature value may be set to 1 and the second feature value to 0: if the element identification box corresponding to a key element is the tallest, the feature value of that key element in the target dimension is set to 1; otherwise it is set to 0.
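The height-based feature assignment above can be sketched as follows. This is an illustrative sketch assuming boxes are given as (x, y, width, height) tuples and using 1/0 for the first/second feature values, as in the text.

```python
def size_dimension_features(boxes):
    """boxes: element identification boxes, one per key element, as
    (x, y, width, height).  The key element whose box is tallest receives
    the first feature value (1); all remaining key elements receive the
    second feature value (0)."""
    tallest = max(range(len(boxes)), key=lambda i: boxes[i][3])
    return [1 if i == tallest else 0 for i in range(len(boxes))]
```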
In one embodiment, the attribute information of a key element in the target dimension may further include text information. In this case, the feature value of the key element in the target dimension may be determined by detecting whether its text information contains a preset keyword: if the preset keyword is present, the feature value of the key element in the target dimension is set to the third feature value; otherwise it is set to the fourth feature value. Taking an advertising image as an example, the preset keywords may be keywords carrying key promotional information, such as "discount" or "buy one, get one free", and may be stored in a keyword dictionary. If the text information in the advertising image hits the keyword dictionary, the feature value of that text information in the target dimension is set to the third feature value; otherwise it is set to the fourth feature value.
In one embodiment, the attribute information of a key element in the target dimension may further include element category information, which indicates the category of the key element. In this case, the feature value of the key element in the target dimension may be obtained from a correspondence between element categories and feature values, which may be a preset rule. For an advertising image, the key elements may be text information, button elements, merchandise elements and so on, and the preset rule may specify, for example, that the feature value of a button element is greater than that of text information, and the feature value of text information is greater than that of a merchandise element. The position of a button element in the advertising image is then more likely than text or merchandise to be determined as the water wave center region; in other words, the water ripple special effect is more likely to be added at the position of the button element, which helps improve the click-through rate and conversion rate of the advertisement. Of course, the correspondence between element categories and feature values may be set according to the actual situation, and the embodiment of the present application does not limit it.
Further, if a feature value in only one dimension is obtained for a key element, that feature value can be directly taken as the reference value of the key element. If feature values in multiple dimensions are obtained, the reference value may be calculated by a weighted summation of the feature values, where different dimensions correspond to different weights that can be adjusted as needed, or by averaging the feature values across the dimensions.
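The reference-value computation and the selection of the highest-scoring key element can be sketched as follows. The weight values and element names are illustrative assumptions; the text only requires that each dimension can carry its own adjustable weight.

```python
def reference_value(feature_values, weights=None):
    """Combine one key element's per-dimension feature values into a
    single reference value: a weighted sum when weights are supplied,
    otherwise a plain average of the dimensions."""
    if weights is None:
        return sum(feature_values) / len(feature_values)
    return sum(f * w for f, w in zip(feature_values, weights))

def pick_wave_center_element(scored_elements):
    """scored_elements: list of (element_name, reference_value) pairs.
    The position of the key element with the highest reference value is
    taken as the water wave center region."""
    return max(scored_elements, key=lambda e: e[1])[0]
```

For example, with hypothetical weights (0.7, 0.3) a button scoring (1, 0) across two dimensions would beat text scoring (0, 1).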
S203: and determining, in the target image, target pixel points at a preset distance from the water wave center point.
In the embodiment of the application, after the server determines the water wave center point of the water ripple in the target image, the server further determines target pixel points at a preset distance from the water wave center point. The target pixel points refer to the pixel points in the target image to which the water ripple special effect is added, and may be, for example, the pixel points on a circle in the target image whose center is the water wave center point and whose radius is the preset distance.
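A possible way to enumerate such target pixel points is sketched below, under the assumption that the circle is sampled at discrete angles and rounded to integer pixel coordinates (the sampling density is an illustrative choice):

```python
import math

def circle_pixels(center, radius, num_samples=360):
    """Approximate the target pixel points: integer pixel coordinates on
    the circle centered at the water wave center point, with the preset
    distance as radius."""
    cx, cy = center
    pixels = set()
    for i in range(num_samples):
        theta = 2.0 * math.pi * i / num_samples
        x = round(cx + radius * math.cos(theta))
        y = round(cy + radius * math.sin(theta))
        pixels.add((x, y))
    return pixels
```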
S204: and performing water ripple special effect rendering on the target pixel points according to the water ripple parameters to obtain a plurality of rendering images.
In the embodiment of the application, the server can perform water ripple special effect rendering on the target pixel points according to the water ripple parameters to obtain a plurality of rendered images, where different rendered images correspond to different time nodes and each rendered image includes the water ripple special effect. A real water ripple is produced by vibration of the water surface; because water is damped, the strength of a real ripple gradually weakens over time. To improve the expressive effect of the water ripple special effect, the water ripple parameters include an amplitude parameter, a frequency parameter and a speed parameter: the amplitude parameter controls the vibration strength of the special effect and is attenuated over time to simulate the gradual weakening of a real ripple; the frequency parameter controls the density of the ripples; and the speed parameter controls how fast the special effect changes. The specific process of rendering the water ripple special effect on the target pixel points according to the water ripple parameters is shown in fig. 4 and will not be described in detail here.
S205: and splicing the rendering images according to the time node corresponding to each rendering image to generate the ripple special effect video of the target image.
In the embodiment of the application, different rendered images correspond to different time nodes, and the server can splice the rendered images in chronological order according to the time node corresponding to each rendered image; that is, the rendered image corresponding to each time node is used as one frame of the water ripple special effect video, thereby generating the water ripple special effect video of the target image. In other implementations, the rendered images may also be stitched into an animated image, such as a GIF (Graphics Interchange Format) animation. In this way, the cost of producing the water ripple special effect is saved, and the target image is automatically converted into a water ripple special effect video or animated image.
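The splicing step amounts to ordering the rendered images by their time nodes; a minimal sketch (the function name is an assumption, and the actual video or GIF encoding step is omitted):

```python
def stitch_frames(rendered):
    """Order rendered images by time node so each becomes one frame of
    the water ripple special effect video (or animated image).
    `rendered` maps time node -> frame."""
    return [frame for _, frame in sorted(rendered.items())]
```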
According to the method and the device, the key elements in the target image are first identified, the water wave center point of the water ripple is determined based on the positions of the key elements in the target image, water ripple special effect rendering is performed on the target pixel points around the water wave center point according to the water ripple parameters to obtain a plurality of rendered images, and finally the rendered images are spliced to generate the water ripple special effect video of the target image. On one hand, identifying the key elements in the target image and determining the water wave center point according to their positions effectively fuses the image content of the target image with the water ripple special effect; on the other hand, rendering the water ripple special effect on the target pixel points according to the water ripple parameters improves the realism of the special effect and enhances the attractiveness and interest of the target image.
In this embodiment of the application, based on step S204 in the embodiment shown in fig. 2, when the server acquires a rendered image corresponding to any time node, the server may implement the steps S401 to S405 in fig. 4, and the following description is given with reference to the steps.
S401: and acquiring the amplitude parameter corresponding to the target time node.
In this embodiment, the target time node refers to any time node. Since the amplitude parameter is attenuated over time, the server obtains the amplitude parameter corresponding to the target time node before performing the water ripple special effect rendering on the target pixel point.
The law by which the amplitude parameter decays over time may be decay by a multiple: the amplitude parameter is divided by a multiple over time, and the multiple can be set to an arbitrary value; for example, a multiple of 2 means that the amplitude parameter after attenuation is one half of that before attenuation. The law may also be decay by a constant: the amplitude parameter decreases by a constant over time, and the constant may be set to any value; for example, a constant of 3 means that the amplitude parameter after attenuation is 3 less than that before attenuation. It can be understood that the embodiment of the present application does not limit the specific manner in which the amplitude parameter is attenuated over time, and the attenuation rule may be flexibly set and selected according to the actual situation.
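Both attenuation rules can be sketched in one helper (the names, the per-step framing, and clamping the constant rule at zero are illustrative assumptions):

```python
def decay_amplitude(amplitude, elapsed_steps, mode="multiple", factor=2.0):
    """Attenuate the amplitude parameter over time, per the two rules
    above: 'multiple' divides the amplitude by `factor` each step
    (factor=2 gives one half per step); 'constant' subtracts `factor`
    each step, clamped at zero."""
    if mode == "multiple":
        return amplitude / (factor ** elapsed_steps)
    return max(amplitude - factor * elapsed_steps, 0.0)
```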
S402: and determining the diffusion distance of the target pixel point relative to the water wave central point according to the amplitude parameter, the frequency parameter and the speed parameter corresponding to the target time node.
In the embodiment of the application, when the server performs the water ripple special effect rendering on the target pixel point, the distance from the target pixel point to the water wave central point can be changed to simulate the effect of water surface vibration, and the diffusion distance of the target pixel point relative to the water wave central point can be determined according to the amplitude parameter, the frequency parameter and the speed parameter corresponding to the target time node.
In one embodiment, the diffusion distance of the target pixel point relative to the center point of the water wave can be determined according to the following formula (1):
d = A·sin(B·t)  (1)

where d represents the diffusion distance, A represents the amplitude parameter, sin represents the sine function, B represents the frequency parameter, and t represents the speed parameter.
In formula (1), a sine function is used to simulate the periodic variation of the water ripple, and in other embodiments, a cosine function may also be used to simulate the periodic variation of the water ripple. By comprehensively considering the amplitude parameter, the frequency parameter and the speed parameter corresponding to the target time node, a more real water ripple special effect can be simulated.
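Formula (1) might be sketched as follows, keeping the patent's notation in which t denotes the speed parameter; the optional cosine variant mentioned above is included:

```python
import math

def diffusion_distance(amplitude, frequency, t, use_cosine=False):
    """Formula (1): d = A * sin(B * t); a cosine may be substituted for
    the sine to model the same periodic variation."""
    wave = math.cos if use_cosine else math.sin
    return amplitude * wave(frequency * t)
```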
S403: and determining, in the target image, reference pixel points whose distance from the target pixel points is the diffusion distance.
In the embodiment of the application, after determining the diffusion distance of the target pixel point relative to the water wave center point, the server can further determine, in the target image, the reference pixel point whose distance from the target pixel point is the diffusion distance.
Specifically, assuming that the distance between the target pixel point and the water wave center point is r, and the diffusion distance of the target pixel point relative to the water wave center point is d, the reference pixel point is the pixel point whose distance from the water wave center point is (r + d), where the diffusion distance d can be calculated by the above formula (1). Determining the reference pixel point in this way restores the visual effect of the water ripple diffusing outward.
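Locating the reference pixel on the ray from the water wave center through the target pixel, at distance r + d, might look like this (a sketch; it assumes the target pixel is not the center itself):

```python
import math

def reference_pixel(center, target, d):
    """Given a target pixel at distance r from the water wave center,
    return the reference pixel on the same ray at distance r + d,
    where d is the diffusion distance from formula (1)."""
    cx, cy = center
    tx, ty = target
    r = math.hypot(tx - cx, ty - cy)
    scale = (r + d) / r  # assumes target != center, so r > 0
    return (cx + (tx - cx) * scale, cy + (ty - cy) * scale)
```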
S404: and adding water ripple textures to the reference pixel points to obtain an initial rendering image.
In the embodiment of the application, the server can obtain the initial rendered image by adding the water ripple texture to the reference pixel points. The amplitude value of the water ripple texture is determined according to the amplitude parameter corresponding to the target time node; since the amplitude parameter is attenuated over time, the amplitude of the water ripple gradually decreases, which simulates the visual effect of the water ripple gradually weakening with time. The water ripple texture may be simulated using a Voronoi texture.
S405: and in the initial rendering image, carrying out position offset on the reference pixel point to obtain a rendering image corresponding to the target time node.
In the embodiment of the application, after the server obtains the initial rendering image, the position offset can be carried out on the reference pixel point in the initial rendering image so as to simulate the visual effect of water ripple refraction.
Specifically, when the water ripple refraction effect is simulated, the reference pixel point is subjected to position offset according to an offset amount, which can be divided into a horizontal offset and a vertical offset: the horizontal offset represents the offset distance of the reference pixel point in the horizontal direction, and the vertical offset represents its offset distance in the vertical direction. In an embodiment, the server may perform the position offset by first obtaining the coordinate information of the reference pixel point in the initial rendered image and the distance between the reference pixel point and the water wave center point, then determining the horizontal offset and the vertical offset of the reference pixel point in the initial rendered image according to this coordinate information and distance, and finally offsetting the coordinate information of the reference pixel point according to the horizontal offset and the vertical offset.
In an embodiment, the specific manner of determining the horizontal offset of the reference pixel point in the initial rendered image by the server may be that coordinate information of the water wave center point in the initial rendered image is obtained first, then the horizontal distance between the reference pixel point and the water wave center point is determined according to the coordinate information of the reference pixel point and the coordinate information of the water wave center point, and the horizontal distance between the reference pixel point and the water wave center point is determined as the horizontal offset. Correspondingly, the specific way for the server to determine the vertical offset of the reference pixel point in the initial rendered image may be that, according to the coordinate information of the reference pixel point and the coordinate information of the water wave central point, the vertical distance between the reference pixel point and the water wave central point is determined, and the vertical distance between the reference pixel point and the water wave central point is determined as the vertical offset.
In one embodiment, the server may determine the included angle between the horizontal direction and the line connecting the reference pixel point and the water wave center point according to the coordinate information of the reference pixel point and the coordinate information of the water wave center point, and then calculate the product of the distance between the reference pixel point and the water wave center point and the cosine of the angle, obtaining the horizontal distance between them. Correspondingly, the product of the distance between the reference pixel point and the water wave center point and the sine of the angle gives the vertical distance between them.
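The decomposition via the connecting line's angle can be sketched as follows (using `atan2` for the angle, an assumption consistent with formula (4)'s two-argument arctan):

```python
import math

def axis_distances(ref, center):
    """Decompose the distance between the reference pixel and the water
    wave center into horizontal and vertical components via the angle
    of the connecting line."""
    dx = ref[0] - center[0]
    dy = ref[1] - center[1]
    dist = math.hypot(dx, dy)
    theta = math.atan2(dy, dx)  # angle of the line vs. the horizontal
    return dist * math.cos(theta), dist * math.sin(theta)
```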
In an embodiment, as shown in fig. 5, the server may establish a rectangular coordinate system with the water wave center point as an origin, obtain coordinate information of the reference pixel point in the rectangular coordinate system and a distance between the reference pixel point and the water wave center point, and then determine an abscissa and an ordinate after the position of the reference pixel point is shifted according to the following formulas (2) and (3), respectively:
x' = x_c + r_c·cos(θ)  (2)

y' = y_c - r_c·sin(θ)  (3)

where x' represents the abscissa of the reference pixel point after the position offset, x_c represents the abscissa of the reference pixel point before the position offset, y' represents the ordinate of the reference pixel point after the position offset, y_c represents the ordinate of the reference pixel point before the position offset, and r_c represents the distance between the reference pixel point and the water wave center point before the position offset. θ can be determined according to the following formula (4):
θ=arctan(Δy,Δx) (4)
where Δx represents the difference between the abscissa of the reference pixel point before the offset and the abscissa of the water wave center point, and Δy represents the corresponding difference between their ordinates.
In another embodiment, the server may also directly obtain the initial coordinate information of the reference pixel point in the initial rendered image and the initial coordinate information of the water wave center point in the initial rendered image, and determine from them the abscissa and ordinate of the reference pixel point after the offset. In this implementation, assuming that the initial coordinate information of the reference pixel point in the initial rendered image is (x_a, y_a) and the initial coordinate information of the water wave center point in the initial rendered image is (x_o, y_o), the distance between the reference pixel point and the water wave center point is determined according to the following formula (5):

r_c = √((x_a - x_o)² + (y_a - y_o)²)  (5)

and then the abscissa and ordinate after the position offset of the reference pixel point are determined according to formula (2) and formula (3), respectively.
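Formulas (2)-(5) taken together might be sketched as follows (a hedged illustration; the sign convention of formula (3) is reproduced as written, and `atan2` stands in for the two-argument arctan of formula (4)):

```python
import math

def offset_reference_pixel(ref, center):
    """Apply formulas (2)-(5): compute the reference pixel's distance
    to the water wave center (5), the angle theta of the connecting
    line (4), and the shifted coordinates x' (2) and y' (3)."""
    xa, ya = ref
    xo, yo = center
    rc = math.hypot(xa - xo, ya - yo)     # formula (5)
    theta = math.atan2(ya - yo, xa - xo)  # formula (4)
    x_new = xa + rc * math.cos(theta)     # formula (2)
    y_new = ya - rc * math.sin(theta)     # formula (3)
    return x_new, y_new
```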
The embodiment of the application comprehensively considers the amplitude parameter, the frequency parameter and the speed parameter corresponding to each time node to simulate the effect of the water ripple diffusing outward from its center. Meanwhile, the attenuation of the amplitude parameter over time further restores the visual effect of the ripple's intensity weakening with time. Finally, performing position offset on the reference pixel points also embodies the refraction effect of the water ripple. By adopting the embodiment of the application, a realistic water ripple special effect can be simulated in multiple dimensions, improving the visual experience of the object.
Further, please refer to fig. 6, where fig. 6 is a schematic flowchart of another image processing method according to an embodiment of the present application. The image processing method is executed by computer equipment, and the computer equipment can be terminal equipment or a server. For convenience of understanding, the embodiment of the present application is described as an example in which the method is executed by a server.
As shown in fig. 6, the image processing method provided in the embodiment of the present application can be summarized into the following four parts: (1) the server acquires a target image to which the water ripple special effect is to be added, where the target image may be sent to the server by the terminal device or may be an image directly input to the server by an object; (2) after obtaining the target image, the server performs element identification on the target image to obtain at least one key element, determines the water wave center point of the water ripple special effect according to the positions of the key elements in the target image, and then determines target pixel points in the target image whose distance from the water wave center point is a preset distance, the target pixel points being the pixel points to which the water ripple special effect is added; (3) the server performs water ripple special effect rendering on the target pixel points according to the amplitude parameter, the frequency parameter and the speed parameter to obtain a plurality of rendered images, each rendered image corresponding to a time node; (4) the server splices the rendered images in chronological order to generate the water ripple special effect video corresponding to the target image.
By implementing the embodiment of the application, on one hand, the key elements in the target image are identified, and the water wave central point is determined according to the positions of the key elements in the target image, so that the image content of the target image and the water wave special effect are effectively fused; on the other hand, the water ripple special effect rendering is carried out on the target pixel points according to the amplitude parameters, the frequency parameters and the speed parameters, the water ripple expression effect is truly restored, and the attractiveness and interestingness of the target image are enhanced.
Further, referring to fig. 7, for a schematic structural diagram of an image processing apparatus provided in an embodiment of the present application, the image processing apparatus 70 may include:
an acquisition unit 701 for acquiring a target image;
a processing unit 702, configured to perform element identification on a target image, to obtain at least one key element of the target image, where the key element is used to indicate image content of the target image; the water wave center point of the water ripple in the target image is determined based on the position of each key element in the target image; determining a target pixel point with a preset distance from the target image to the water wave central point; performing water ripple special effect rendering on the target pixel points according to the water ripple parameters to obtain a plurality of rendering images; the rendering method comprises the steps that different rendering images correspond to different time nodes, each rendering image comprises a water ripple special effect, the water ripple parameters comprise an amplitude parameter, a frequency parameter and a speed parameter, the amplitude parameter is used for controlling the vibration strength of the water ripple special effect in each rendering image, the amplitude parameter is attenuated along the time nodes, the frequency parameter is used for controlling the density degree of the water ripple special effect in each rendering image, and the speed parameter is used for controlling the change speed of the water ripple special effect in each rendering image; and splicing the rendering images according to the time node corresponding to each rendering image to generate the ripple special effect video of the target image.
In an embodiment, the processing unit 702 is specifically configured to, when determining the water wave center point of the water ripple in the target image based on the position of each key element in the target image: determining a water wave central area based on the positions of the key elements in the target image; and determining a water wave central point in the water wave central area.
In an embodiment, the processing unit 702 is specifically configured to, when determining the water wave center region based on the position of each key element in the target image: acquiring attribute information of each key element in at least one dimension, wherein the attribute information of any dimension comprises element size information, text information or element category information; obtaining a characteristic value of each key element in each dimension according to the attribute information of each key element in each dimension; calculating a reference value of each key element according to the characteristic value of each key element in at least one dimension; and determining the position of the key element with the highest reference value in the target image as a water wave central area.
In one embodiment, the attribute information of the target dimension among the at least one dimension includes element size information, the element size information referring to size information of an element identification box for identifying each key element; the processing unit 702 is specifically configured to, when obtaining the feature value of each key element in each dimension according to the attribute information of each key element in each dimension: determining the height of an element identification frame for identifying each key element according to the element size information; setting the characteristic value of the key element identified by the element identification box with the highest height in the target dimension as a first characteristic value; and setting the characteristic value of other key elements in the at least one key element in the target dimension as a second characteristic value.
In one embodiment, the attribute information of the target dimension in the at least one dimension includes text information; the processing unit 702 is specifically configured to, when obtaining the feature value of each key element in each dimension according to the attribute information of each key element in each dimension: if the preset keywords exist in the text information of the target dimension of each key element, setting the characteristic value of each key element in the target dimension as a third characteristic value; and if the preset key words do not exist in the text information of the target dimension of each key element, setting the feature value of each key element in the target dimension as a fourth feature value.
In one embodiment, the attribute information of the target dimension in the at least one dimension includes element category information; when the processing unit 702 obtains the feature value of each key element in each dimension according to the attribute information of each key element in each dimension, the processing unit is specifically configured to: acquiring a characteristic value corresponding to the element category of each key element according to the corresponding relation between the element category and the characteristic value; and determining the characteristic value corresponding to the element category of each key element as the characteristic value of each key element in the target dimension.
In one embodiment, each rendered image corresponds to a time node; the processing unit 702 performs ripple special effect rendering on the target pixel point according to the ripple parameter, and when obtaining a plurality of rendering images, is specifically configured to: when a rendering image corresponding to a target time node is obtained, obtaining an amplitude parameter corresponding to the target time node, wherein the target time node is any time node; determining the diffusion distance of the target pixel point relative to the water wave central point according to the amplitude parameter, the frequency parameter and the speed parameter corresponding to the target time node; determining reference pixel points with the distance from the target pixel points in the target image as diffusion distances; the distance between the reference pixel point and the water wave central point is greater than the distance between the target pixel point and the water wave central point; adding water ripple texture to the reference pixel point to obtain an initial rendering image; the amplitude value of the water ripple texture is determined according to the amplitude parameter corresponding to the target time node; and in the initial rendering image, carrying out position offset on the reference pixel point to obtain a rendering image corresponding to the target time node.
In an embodiment, when the processing unit 702 performs position offset on the reference pixel point in the initial rendering image to obtain a rendering image corresponding to the target time node, the processing unit is specifically configured to: acquiring coordinate information of a reference pixel point in an initial rendering image and the distance between the reference pixel point and a water wave central point; determining horizontal offset and vertical offset of the reference pixel point in the initial rendered image according to the coordinate information of the reference pixel point and the distance between the reference pixel point and the water wave central point, wherein the horizontal offset is used for indicating the offset distance of the reference pixel point in the horizontal direction, and the vertical offset is used for indicating the offset distance of the reference pixel point in the vertical direction; and offsetting the coordinate information of the reference pixel point according to the horizontal offset and the vertical offset to obtain a rendering image corresponding to the target time node.
In an embodiment, when determining the horizontal offset and the vertical offset of the reference pixel point in the initial rendered image according to the coordinate information of the reference pixel point and the distance between the reference pixel point and the water wave center point, the processing unit 702 is specifically configured to: acquiring coordinate information of a water wave central point in an initial rendering image; determining the horizontal distance between the reference pixel point and the water wave central point according to the coordinate information of the reference pixel point and the coordinate information of the water wave central point, and determining the horizontal distance between the reference pixel point and the water wave central point as the horizontal offset of the reference pixel point in the initial rendered image; and determining the vertical distance between the reference pixel point and the water wave central point according to the coordinate information of the reference pixel point and the coordinate information of the water wave central point, and determining the vertical distance between the reference pixel point and the water wave central point as the vertical offset of the reference pixel point in the initial rendered image.
In an embodiment, the processing unit 702 is further configured to determine, according to the coordinate information of the reference pixel point and the coordinate information of the water wave center point, an included angle of a connection line between the reference pixel point and the water wave center point with respect to the horizontal direction; multiplying the distance between the reference pixel point and the water wave central point by the cosine value of the included angle to obtain the horizontal distance between the reference pixel point and the water wave central point; and multiplying the distance between the reference pixel point and the water wave central point by the sine value of the included angle to obtain the vertical distance between the reference pixel point and the water wave central point.
In one embodiment, the target image comprises an advertisement image; the processing unit 702 is specifically configured to, when performing element identification on the target image to obtain at least one key element of the target image: identifying text information in the advertisement image, and determining a target text matched with at least one keyword in a keyword library in the text information as a key element; detecting whether the advertisement image contains a button element, and if so, determining the button element as a key element; and detecting whether the advertising image contains the commodity element, and if so, determining the commodity element as a key element.
It should be noted that, for details that are not mentioned in the embodiment corresponding to fig. 7 and the specific implementation manner of each step, reference may be made to the embodiments shown in fig. 2 to fig. 6 and the foregoing description, and details are not repeated here.
Further, please refer to fig. 8, wherein fig. 8 is a schematic structural diagram of a computer device 80 according to an embodiment of the present disclosure. The computer device may include: the network interface 801, the memory 802 and the processor 803 are connected through one or more communication buses for enabling connection communication between these components. Network interface 801 may include standard wired interfaces, wireless interfaces (e.g., WIFI interfaces). The memory 802 may include volatile memory (volatile memory), such as random-access memory (RAM); the memory 802 may also include a non-volatile memory (non-volatile memory), such as a flash memory (flash memory), a solid-state drive (SSD), etc.; the memory 802 may also comprise a combination of the above-described types of memory. The processor 803 may be a Central Processing Unit (CPU). The processor 803 may further include a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a Programmable Logic Device (PLD), or the like. The PLD may be a field-programmable gate array (FPGA), a General Array Logic (GAL), or the like.
Optionally, the memory 802 is further used for storing program instructions, which the processor 803 may also call to implement the relevant methods and steps in the present application.
In one embodiment, the processor 803 invokes the program instructions stored by the memory 802 to implement: performing element identification on the target image to obtain at least one key element of the target image, wherein the key element is used for indicating the image content of the target image; the water wave center point of the water wave in the target image is determined based on the position of each key element in the target image; determining a target pixel point which is in a preset distance from the target image to the water wave central point; performing water ripple special effect rendering on the target pixel points according to the water ripple parameters to obtain a plurality of rendering images; the rendering method comprises the steps that different rendering images correspond to different time nodes, each rendering image comprises a water ripple special effect, the water ripple parameters comprise an amplitude parameter, a frequency parameter and a speed parameter, the amplitude parameter is used for controlling the vibration strength of the water ripple special effect in each rendering image, the amplitude parameter is attenuated along the time nodes, the frequency parameter is used for controlling the density degree of the water ripple special effect in each rendering image, and the speed parameter is used for controlling the change speed of the water ripple special effect in each rendering image; and splicing the rendering images according to the time node corresponding to each rendering image to generate the ripple special effect video of the target image.
In one embodiment, the processor 803 may also invoke the program instructions to implement: determining a water wave central area based on the positions of the key elements in the target image; and determining a water wave central point in the water wave central area.
In one embodiment, the processor 803 may also invoke the program instructions to implement: acquiring attribute information of each key element in at least one dimension, wherein the attribute information of any dimension comprises element size information, text information or element category information; obtaining a characteristic value of each key element in each dimension according to the attribute information of each key element in each dimension; calculating a reference value of each key element according to the characteristic value of each key element in at least one dimension; and determining the position of the key element with the highest reference value in the target image as a water wave center area.
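The per-dimension feature values and their summed reference value can be sketched as follows. The specific weights, keyword set, and dict field names are hypothetical; the disclosure only requires that each dimension yield a feature value and that the highest-scoring key element anchor the water wave center region:

```python
def pick_center_region(elements):
    """Pick the key element whose summed feature values (its reference value)
    are highest; its position anchors the water wave center region. The
    weights, keyword set, and dict fields below are illustrative only."""
    CATEGORY_SCORES = {"button": 3, "commodity": 2, "text": 1}  # assumed mapping
    KEYWORDS = {"sale", "buy"}  # assumed keyword library

    tallest = max(e["height"] for e in elements)  # element size dimension

    def reference_value(elem):
        size_fv = 1 if elem["height"] == tallest else 0    # size dimension
        words = set(elem.get("text", "").lower().split())
        text_fv = 1 if KEYWORDS & words else 0             # text dimension
        cat_fv = CATEGORY_SCORES.get(elem["category"], 0)  # category dimension
        return size_fv + text_fv + cat_fv  # reference value = sum of feature values

    return max(elements, key=reference_value)
```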
In one embodiment, the attribute information of the target dimension among the at least one dimension includes element size information, the element size information referring to size information of an element identification box for identifying each key element; the processor 803 may also call the program instructions to implement: determining the height of an element identification frame for identifying each key element according to the element size information; setting the characteristic value of the key element identified by the element identification box with the highest height in the target dimension as a first characteristic value; and setting the characteristic value of other key elements in the at least one key element in the target dimension as a second characteristic value.
In one embodiment, the attribute information of the target dimension in the at least one dimension includes text information; the processor 803 may also call the program instructions to implement: if a preset keyword exists in the text information of the target dimension of each key element, setting the feature value of each key element in the target dimension as a third feature value; and if the preset keyword does not exist in the text information of the target dimension of each key element, setting the feature value of each key element in the target dimension as a fourth feature value.
In one embodiment, the attribute information of the target dimension in the at least one dimension includes element category information; the processor 803 may also call the program instructions to implement: acquiring a characteristic value corresponding to the element category of each key element according to the corresponding relation between the element category and the characteristic value; and determining the characteristic value corresponding to the element category of each key element as the characteristic value of each key element in the target dimension.
In one embodiment, each rendered image corresponds to a time node; the processor 803 may also call the program instructions to implement: when a rendered image corresponding to a target time node is obtained, obtaining an amplitude parameter corresponding to the target time node, wherein the target time node is any time node; determining the diffusion distance of the target pixel point relative to the water wave central point according to the amplitude parameter, the frequency parameter, and the speed parameter corresponding to the target time node; determining, in the target image, a reference pixel point whose distance from the target pixel point is the diffusion distance, wherein the distance between the reference pixel point and the water wave central point is greater than the distance between the target pixel point and the water wave central point; adding a water ripple texture to the reference pixel point to obtain an initial rendered image, wherein different time nodes correspond to water ripple textures with different amplitude values, and the amplitude value of the water ripple texture is determined according to the amplitude parameter corresponding to the target time node; and performing position offset on the reference pixel point in the initial rendered image to obtain the rendered image corresponding to the target time node.
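A sketch of this per-time-node step, under the assumption that the diffusion distance is a sinusoid of the target pixel's distance to the center (the exact formula is not given in this embodiment) and that the reference pixel is placed radially outward so that it is farther from the wave center than the target pixel:

```python
import math

def reference_pixels_at_time_node(target_pixels, center, amplitude0,
                                  frequency, speed, t, decay=0.9):
    """For time node t, compute each target pixel's diffusion distance and the
    reference pixel it maps to, placed radially outward so that the reference
    pixel is farther from the wave center than the target pixel."""
    cx, cy = center
    amplitude = amplitude0 * (decay ** t)  # amplitude parameter for this node
    mapping = {}
    for (x, y) in target_pixels:
        d = math.hypot(x - cx, y - cy)
        if d == 0:
            continue  # the center point itself has no outward direction
        # diffusion distance from amplitude, frequency and speed (assumed form)
        diffusion = abs(amplitude * math.sin(frequency * d - speed * t))
        ux, uy = (x - cx) / d, (y - cy) / d  # unit vector away from the center
        mapping[(x, y)] = (x + ux * diffusion, y + uy * diffusion)
    return mapping
```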
In one embodiment, the processor 803 may also invoke the program instructions to implement: acquiring coordinate information of a reference pixel point in an initial rendering image and the distance between the reference pixel point and a water wave central point; determining horizontal offset and vertical offset of the reference pixel point in the initial rendered image according to the coordinate information of the reference pixel point and the distance between the reference pixel point and the water wave central point, wherein the horizontal offset is used for indicating the offset distance of the reference pixel point in the horizontal direction, and the vertical offset is used for indicating the offset distance of the reference pixel point in the vertical direction; and offsetting the coordinate information of the reference pixel point according to the horizontal offset and the vertical offset to obtain a rendering image corresponding to the target time node.
In one embodiment, the processor 803 may also invoke the program instructions to implement: acquiring coordinate information of a water wave central point in an initial rendering image; determining the horizontal distance between the reference pixel point and the water wave central point according to the coordinate information of the reference pixel point and the coordinate information of the water wave central point, and determining the horizontal distance between the reference pixel point and the water wave central point as the horizontal offset of the reference pixel point in the initial rendered image; and determining the vertical distance between the reference pixel point and the water wave central point according to the coordinate information of the reference pixel point and the coordinate information of the water wave central point, and determining the vertical distance between the reference pixel point and the water wave central point as the vertical offset of the reference pixel point in the initial rendered image.
In one embodiment, the processor 803 may also invoke the program instructions to implement: determining an included angle of a connecting line of the reference pixel point and the water wave central point relative to the horizontal direction according to the coordinate information of the reference pixel point and the coordinate information of the water wave central point; multiplying the distance between the reference pixel point and the water wave central point by the cosine value of the included angle to obtain the horizontal distance between the reference pixel point and the water wave central point; and multiplying the distance between the reference pixel point and the water wave central point by the sine value of the included angle to obtain the vertical distance between the reference pixel point and the water wave central point.
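The trigonometric decomposition in this embodiment maps directly to code; the only assumption is that the angle of the connecting line is measured with `atan2` relative to the horizontal axis:

```python
import math

def radial_offsets(reference_pixel, center):
    """Decompose the reference-pixel-to-center distance into a horizontal and
    a vertical offset via the angle of their connecting line, as described:
    distance * cos(angle) and distance * sin(angle)."""
    rx, ry = reference_pixel
    cx, cy = center
    distance = math.hypot(rx - cx, ry - cy)
    angle = math.atan2(ry - cy, rx - cx)  # angle vs. the horizontal direction
    horizontal_offset = distance * math.cos(angle)  # offset distance, horizontal
    vertical_offset = distance * math.sin(angle)    # offset distance, vertical
    return horizontal_offset, vertical_offset
```

For a reference pixel at (7, 8) and a center at (4, 4), the distance is 5 and the decomposition recovers the horizontal and vertical distances 3 and 4.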
In one embodiment, the target image comprises an advertisement image; the processor 803 may also call the program instructions to implement: identifying text information in the advertisement image, and determining, in the text information, a target text matched with at least one keyword in a keyword library as a key element; detecting whether the advertisement image contains a button element, and if so, determining the button element as a key element; and detecting whether the advertisement image contains a commodity element, and if so, determining the commodity element as a key element.
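A sketch of this key-element extraction for advertisement images; the detection dict format and substring-based keyword matching are assumptions, with the text, button, and commodity detectors themselves left to upstream models:

```python
def extract_key_elements(detections, keyword_library):
    """Keep detected text that matches a keyword in the library, plus any
    button or commodity elements; the detection dict format and substring
    matching are illustrative assumptions."""
    key_elements = []
    for det in detections:
        if det["type"] == "text":
            if any(kw in det["content"] for kw in keyword_library):
                key_elements.append(det)  # target text matching a keyword
        elif det["type"] in ("button", "commodity"):
            key_elements.append(det)  # button and commodity elements
    return key_elements
```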
It should be understood that the principle by which the computer device 80 described in the embodiment of the present application solves problems, and the advantageous effects thereof, are similar to those of the embodiments shown in fig. 2 to fig. 6 of the present application, and are not repeated herein for brevity.
In addition, an embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the method provided by the foregoing embodiments is implemented.
An embodiment of the present application also provides a computer program product, which includes a computer program or computer instructions; when the computer program or the computer instructions are executed by a processor, the method provided by the foregoing embodiments is implemented.
An embodiment of the present application provides a computer program including computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the method provided by the foregoing embodiments.
The steps in the methods of the embodiments of the present application can be reordered, combined, or deleted according to actual needs.
The units in the apparatuses of the embodiments of the present application can be combined, divided, or deleted according to actual needs.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above may be implemented by a computer program, which may be stored in a computer readable storage medium and executed by a computer to implement the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above disclosure describes only preferred embodiments of the present application and is not to be construed as limiting its scope; the present application is not limited thereto, and equivalent variations and modifications are also within the scope of the present application.

Claims (15)

1. An image processing method, comprising:
performing element identification on a target image to obtain at least one key element of the target image, wherein the key element is used for indicating the image content of the target image;
determining the water wave central point of the water wave in the target image based on the position of each key element in the target image;
determining, in the target image, a target pixel point within a preset distance from the water wave central point;
performing water ripple special effect rendering on the target pixel point according to the water ripple parameters to obtain a plurality of rendering images; the method comprises the steps that different rendering images correspond to different time nodes, each rendering image comprises a water ripple special effect, the water ripple parameters comprise an amplitude parameter, a frequency parameter and a speed parameter, the amplitude parameter is used for controlling the vibration intensity of the water ripple special effect in each rendering image, the amplitude parameter attenuates along with the time nodes, the frequency parameter is used for controlling the density degree of the water ripple special effect in each rendering image, and the speed parameter is used for controlling the change speed of the water ripple special effect in each rendering image;
and splicing the rendering images according to the time node corresponding to each rendering image to generate a ripple special effect video of the target image.
2. The method of claim 1, wherein the determining the water wave center point of the water ripple in the target image based on the position of each key element in the target image comprises:
determining a water wave central area based on the positions of the key elements in the target image;
determining the water wave center point in the water wave center region.
3. The method of claim 2, wherein determining a water wave center region based on the location of each key element in the target image comprises:
acquiring attribute information of each key element in at least one dimension, wherein the attribute information of any dimension comprises element size information, text information or element category information;
obtaining a characteristic value of each key element in each dimension according to the attribute information of each key element in each dimension;
calculating a reference value of each key element according to the characteristic value of each key element in the at least one dimension;
and determining the position of the key element with the highest reference value in the target image as a water wave central area.
4. The method of claim 3, wherein the attribute information of the target dimension among the at least one dimension includes element size information, the element size information referring to size information of an element identification box for identifying each key element; the obtaining the feature value of each key element in each dimension according to the attribute information of each key element in each dimension includes:
determining the height of an element identification frame for identifying each key element according to the element size information;
setting the characteristic value of the key element identified by the element identification box with the highest height in the target dimension as a first characteristic value;
setting the feature values of other key elements in the at least one key element in the target dimension as second feature values.
5. The method of claim 3, wherein the attribute information of the target dimension in the at least one dimension comprises text information; the obtaining the feature value of each key element in each dimension according to the attribute information of each key element in each dimension includes:
if the preset keyword exists in the text information of the target dimension of each key element, setting the feature value of each key element in the target dimension as a third feature value;
and if the preset key words do not exist in the text information of the target dimension of each key element, setting the feature value of each key element in the target dimension as a fourth feature value.
6. The method of claim 3, wherein attribute information of a target dimension in the at least one dimension comprises element category information; the obtaining the feature value of each key element in each dimension according to the attribute information of each key element in each dimension includes:
acquiring a characteristic value corresponding to the element category of each key element according to the corresponding relation between the element category and the characteristic value;
and determining the characteristic value corresponding to the element category of each key element as the characteristic value of each key element in the target dimension.
7. The method of claim 1, wherein each rendered image corresponds to a time node; performing water ripple special effect rendering on the target pixel point according to the water ripple parameter to obtain a plurality of rendering images, including:
when a rendering image corresponding to a target time node is obtained, obtaining an amplitude parameter corresponding to the target time node, wherein the target time node is any time node;
determining the diffusion distance of the target pixel point relative to the water wave central point according to the amplitude parameter, the frequency parameter and the speed parameter corresponding to the target time node;
determining a reference pixel point with the distance from the target pixel point in the target image as the diffusion distance; the distance between the reference pixel point and the water wave central point is greater than the distance between the target pixel point and the water wave central point;
adding water ripple textures to the reference pixel points to obtain an initial rendering image; the different time nodes correspond to the water ripple textures with different amplitude values, and the amplitude values of the water ripple textures are determined according to the amplitude parameters corresponding to the target time nodes;
and in the initial rendering image, carrying out position offset on the reference pixel point to obtain a rendering image corresponding to the target time node.
8. The method of claim 7, wherein said offsetting the reference pixel point in the initial rendered image to obtain the rendered image corresponding to the target time node comprises:
acquiring coordinate information of the reference pixel point in the initial rendering image and the distance between the reference pixel point and the water wave central point;
determining a horizontal offset and a vertical offset of the reference pixel point in the initial rendered image according to the coordinate information of the reference pixel point and the distance between the reference pixel point and the water wave central point, wherein the horizontal offset is used for indicating the offset distance of the reference pixel point in the horizontal direction, and the vertical offset is used for indicating the offset distance of the reference pixel point in the vertical direction;
and offsetting the coordinate information of the reference pixel point according to the horizontal offset and the vertical offset to obtain a rendering image corresponding to the target time node.
9. The method of claim 8, wherein the determining the horizontal offset and the vertical offset of the reference pixel point in the initial rendered image according to the coordinate information of the reference pixel point and the distance between the reference pixel point and the water wave center point comprises:
acquiring coordinate information of the water wave central point in the initial rendering image;
determining the horizontal distance between the reference pixel point and the water wave central point according to the coordinate information of the reference pixel point and the coordinate information of the water wave central point, and determining the horizontal distance between the reference pixel point and the water wave central point as the horizontal offset of the reference pixel point in the initial rendered image;
and determining the vertical distance between the reference pixel point and the water wave central point according to the coordinate information of the reference pixel point and the coordinate information of the water wave central point, and determining the vertical distance between the reference pixel point and the water wave central point as the vertical offset of the reference pixel point in the initial rendered image.
10. The method of claim 9, wherein the horizontal distance between the reference pixel point and the water wave center point and the vertical distance between the reference pixel point and the water wave center point are determined by:
determining an included angle of a connecting line of the reference pixel point and the water wave central point relative to the horizontal direction according to the coordinate information of the reference pixel point and the coordinate information of the water wave central point;
multiplying the distance between the reference pixel point and the water wave central point by the cosine value of the included angle to obtain the horizontal distance between the reference pixel point and the water wave central point;
and multiplying the distance between the reference pixel point and the water wave central point by the sine value of the included angle to obtain the vertical distance between the reference pixel point and the water wave central point.
11. The method of claim 1, wherein the target image comprises an advertisement image;
the element recognition of the target image to obtain at least one key element of the target image includes:
identifying text information in the advertisement image, and determining a target text matched with at least one keyword in a keyword library in the text information as the key element;
detecting whether a button element is contained in the advertisement image, and if the button element is contained, determining the button element as the key element;
and detecting whether the advertisement image contains a commodity element, and if so, determining the commodity element as the key element.
12. An image processing apparatus, characterized in that the apparatus comprises:
an acquisition unit configured to acquire a target image;
the processing unit is used for performing element identification on a target image to obtain at least one key element of the target image, wherein the key element is used for indicating the image content of the target image; determining the water wave central point of the water ripple in the target image based on the position of each key element in the target image; determining, in the target image, a target pixel point within a preset distance from the water wave central point; performing water ripple special effect rendering on the target pixel point according to the water ripple parameters to obtain a plurality of rendering images; wherein different rendering images correspond to different time nodes, each rendering image includes a water ripple special effect, the water ripple parameters include an amplitude parameter, a frequency parameter and a speed parameter, the amplitude parameter is used for controlling the vibration intensity of the water ripple special effect in each rendering image, the amplitude parameter attenuates along with the time nodes, the frequency parameter is used for controlling the density degree of the water ripple special effect in each rendering image, and the speed parameter is used for controlling the change speed of the water ripple special effect in each rendering image; and splicing the rendering images according to the time node corresponding to each rendering image to generate the water ripple special effect video of the target image.
13. A computer device comprising a memory, a processor, and a network interface, the processor coupled to the memory and the network interface, wherein the network interface is configured to provide network communication functionality, the memory is configured to store program code, and the processor is configured to invoke the program code to perform the method of any of claims 1 to 11.
14. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method of any one of claims 1 to 11.
15. A computer program product, characterized in that it comprises a computer program or computer instructions which, when executed by a processor, implement the method of any one of claims 1 to 11.
CN202210704119.1A 2022-06-21 2022-06-21 Image processing method, device and equipment, storage medium and program product Pending CN115147532A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210704119.1A CN115147532A (en) 2022-06-21 2022-06-21 Image processing method, device and equipment, storage medium and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210704119.1A CN115147532A (en) 2022-06-21 2022-06-21 Image processing method, device and equipment, storage medium and program product

Publications (1)

Publication Number Publication Date
CN115147532A true CN115147532A (en) 2022-10-04

Family

ID=83407692

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210704119.1A Pending CN115147532A (en) 2022-06-21 2022-06-21 Image processing method, device and equipment, storage medium and program product

Country Status (1)

Country Link
CN (1) CN115147532A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116617658A (en) * 2023-07-20 2023-08-22 腾讯科技(深圳)有限公司 Image rendering method and related device
CN116617658B (en) * 2023-07-20 2023-10-20 腾讯科技(深圳)有限公司 Image rendering method and related device
CN117576247A (en) * 2024-01-17 2024-02-20 江西拓世智能科技股份有限公司 Picture generation method and system based on artificial intelligence
CN117576247B (en) * 2024-01-17 2024-03-29 江西拓世智能科技股份有限公司 Picture generation method and system based on artificial intelligence

Similar Documents

Publication Publication Date Title
CN115147532A (en) Image processing method, device and equipment, storage medium and program product
CN114067321B (en) Text detection model training method, device, equipment and storage medium
CN108491848A (en) Image significance detection method based on depth information and device
CN114331829A (en) Countermeasure sample generation method, device, equipment and readable storage medium
CN112489099B (en) Point cloud registration method and device, storage medium and electronic equipment
Velho et al. Mathematical optimization in computer graphics and vision
CN113159232A (en) Three-dimensional target classification and segmentation method
CN113011387B (en) Network training and human face living body detection method, device, equipment and storage medium
CN113537180B (en) Tree obstacle identification method and device, computer equipment and storage medium
CN117058723B (en) Palmprint recognition method, palmprint recognition device and storage medium
Zhou et al. Context-aware 3D object detection from a single image in autonomous driving
CN113284237A (en) Three-dimensional reconstruction method, system, electronic equipment and storage medium
CN113537267A (en) Method and device for generating countermeasure sample, storage medium and electronic equipment
CN112700464B (en) Map information processing method and device, electronic equipment and storage medium
CN113591969B (en) Face similarity evaluation method, device, equipment and storage medium
CN111461091B (en) Universal fingerprint generation method and device, storage medium and electronic device
CN112667864B (en) Graph alignment method and device, electronic equipment and storage medium
KR102521565B1 (en) Apparatus and method for providing and regenerating augmented reality service using 3 dimensional graph neural network detection
CN115115699A (en) Attitude estimation method and device, related equipment and computer product
CN113568983A (en) Scene graph generation method and device, computer readable medium and electronic equipment
CN113516735A (en) Image processing method, image processing device, computer readable medium and electronic equipment
CN115688083B (en) Method, device and equipment for identifying image-text verification code and storage medium
Guo et al. Multi-scale point set saliency detection based on site entropy rate
Han et al. A Two‐Branch Pedestrian Detection Method for Small and Blurred Target
CN114332267A (en) Generation method and device of ink-wash painting image, computer equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination