CN110211211B - Image processing method, device, electronic equipment and storage medium - Google Patents

Image processing method, device, electronic equipment and storage medium

Info

Publication number
CN110211211B
CN110211211B (Application No. CN201910340750.6A)
Authority
CN
China
Prior art keywords
edge
rendering
image
key points
particles
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910340750.6A
Other languages
Chinese (zh)
Other versions
CN110211211A (en)
Inventor
闫鑫
侯沛宏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN201910340750.6A priority Critical patent/CN110211211B/en
Publication of CN110211211A publication Critical patent/CN110211211A/en
Application granted granted Critical
Publication of CN110211211B publication Critical patent/CN110211211B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20192Edge enhancement; Edge preservation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Abstract

The present disclosure provides an image processing method, an apparatus, an electronic device, and a storage medium, where the method includes: acquiring object key points included in an image to be processed; acquiring an object grid graph according to the object key points; performing edge processing on the object grid graph to obtain an object edge graph; and, when rendering particles are located at edge positions included in the object edge graph, controlling the rendering particles to be in a hovering state, where the rendering particles are used for performing image rendering on the image to be processed. According to the image processing method provided by the embodiments of the present disclosure, the object edge graph is obtained by combining object key point extraction with edge processing, so that the rendering particles can hover at the object key points. As a result, the rendering particles exhibit a variety of rendering forms, which improves the rendering effect and enhances the visual effect.

Description

Image processing method, device, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of image processing, and in particular relates to an image processing method, an image processing device, electronic equipment and a storage medium.
Background
With the development of terminal technology, images can now be rendered in different ways; for example, rendering particles such as fireworks, falling leaves, and snowflakes can be added to an image so that the rendered image better meets the visual demands of users.
However, the rendering particles in currently rendered images all move according to preset motion trajectories, so the flexibility of the rendering particles is poor and the user experience is reduced.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides an image processing method, apparatus, electronic device, and storage medium.
According to a first aspect of embodiments of the present disclosure, there is provided an image processing method, the method including:
acquiring object key points included in an image to be processed;
acquiring an object grid graph according to the object key points;
performing edge processing on the object grid graph to obtain an object edge graph;
when rendering particles are located at edge positions included in the object edge graph, controlling the rendering particles to be in a hovering state, wherein the rendering particles are used for performing image rendering on the image to be processed.
Optionally, the acquiring the object key point included in the image to be processed includes:
and acquiring object key points included in the image to be processed through a key point extraction model.
Optionally, after the edge processing is performed on the object grid graph to obtain an object edge graph, the method further includes:
Acquiring pixel values of all pixel points included in the object edge graph;
and judging whether the rendering particles are positioned at the edge positions included in the object edge graph according to the pixel values.
Optionally, the determining, according to the pixel value, whether the rendering particle is located at an edge position included in the object edge map includes:
acquiring the current position of the rendering particles on the object edge graph;
acquiring a target pixel value corresponding to the current position according to the pixel value;
when the target pixel value is within a preset pixel value range, determining that the rendering particles are positioned at the edge position included in the object edge graph;
and when the target pixel value exceeds the preset pixel value range, determining that the rendering particles are not positioned at the edge positions included in the object edge graph.
Optionally, after performing edge processing on the object grid graph to obtain an object edge graph, the method further includes:
and when the rendering particles are not positioned at the edge positions included in the object edge graph, controlling the rendering particles to move according to a preset movement track.
Optionally, before the obtaining the object grid graph according to the object key points, the method further includes:
Judging whether the number of the object key points is smaller than or equal to a preset threshold value;
acquiring object expansion points according to the object key points under the condition that the number of the object key points is smaller than or equal to the preset threshold value;
the obtaining the object grid graph according to the object key points comprises the following steps:
and acquiring the object grid graph according to the object key points and the object extension points.
According to a second aspect of embodiments of the present disclosure, there is provided an image processing apparatus including:
the key point acquisition module is used for acquiring object key points included in the image to be processed;
the grid map acquisition module is used for acquiring an object grid map according to the object key points;
the edge map acquisition module is used for carrying out edge processing on the object grid map to obtain an object edge map;
and the hovering control module is used for controlling rendering particles to be in a hovering state when the rendering particles are positioned at edge positions included in the object edge graph, and the rendering particles are used for performing image rendering on the image to be processed.
Optionally, the keypoint obtaining module is configured to obtain, through a keypoint extraction model, an object keypoint included in the image to be processed.
Optionally, the apparatus further includes:
a pixel value obtaining module, configured to obtain a pixel value of each pixel point included in the object edge graph;
and the position judging module is used for judging whether the rendering particles are positioned at the edge positions included in the object edge graph according to the pixel values.
Optionally, the position determining module includes:
a position acquisition sub-module for acquiring a current position of the rendering particles on the object edge map;
a target pixel value obtaining sub-module, configured to obtain a target pixel value corresponding to the current position according to the pixel value;
an edge position determining sub-module, configured to determine that the rendering particles are located at an edge position included in the object edge map when the target pixel value is within a preset pixel value range;
and the non-edge position determining sub-module is used for determining that the rendering particles are not positioned at the edge positions included in the object edge graph under the condition that the target pixel value exceeds the preset pixel value range.
Optionally, the apparatus further includes:
and the motion control module is used for controlling the rendering particles to move according to a preset motion track when the rendering particles are not positioned at the edge positions included in the object edge graph.
Optionally, the apparatus further includes:
the key point judging module is used for judging whether the number of the key points of the object is smaller than or equal to a preset threshold value;
the expansion point acquisition module is used for acquiring object expansion points according to the object key points under the condition that the number of the object key points is smaller than or equal to the preset threshold value;
the grid graph acquisition module is used for acquiring the object grid graph according to the object key points and the object extension points.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to perform the image processing method described above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium, instructions in which, when executed by a processor of an electronic device, cause the electronic device to perform the above-described image processing method.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising one or more instructions which, when executed by a processor of an electronic device, enable the electronic device to perform the above-described image processing method.
The technical scheme provided by the embodiment of the disclosure can comprise the following beneficial effects:
the image processing method shown in the present exemplary embodiment acquires object key points included in an image to be processed; acquiring an object grid graph according to the object key points; performing edge processing on the object grid graph to obtain an object edge graph; when rendering particles are located at edge positions included in the object edge graph, controlling the rendering particles to be in a hovering state, wherein the rendering particles are used for performing image rendering on the image to be processed. Therefore, according to the image processing method provided by the embodiment of the disclosure, the object edge map is obtained by means of combining the extraction of the object key points and the edge processing, so that the rendering particles can hover at the object key points, and therefore, various rendering forms exist in the rendering particles, the rendering effect is further improved, and the visual effect is enhanced.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flowchart illustrating a method of image processing according to an exemplary embodiment;
FIG. 2 is another flow chart of an image processing method according to an exemplary embodiment;
FIG. 3 is a schematic diagram of an image to be processed, shown according to an exemplary embodiment;
FIG. 4 is a schematic diagram of an image to be processed labeled with object keypoints, according to an example embodiment;
FIG. 5 is a schematic diagram of an object grid graph, shown in accordance with an exemplary embodiment;
FIG. 6 is a schematic diagram of an object edge graph, shown in accordance with an exemplary embodiment;
FIG. 7 is a schematic diagram of a rendering effect image shown in accordance with an exemplary embodiment;
fig. 8 is a block diagram of a first image processing apparatus according to an exemplary embodiment;
fig. 9 is a block diagram of a second image processing apparatus according to an exemplary embodiment;
fig. 10 is a block diagram of a third image processing apparatus according to an exemplary embodiment;
fig. 11 is a block diagram of a fourth image processing apparatus according to an exemplary embodiment;
fig. 12 is a block diagram of a fifth image processing apparatus according to an exemplary embodiment;
fig. 13 is a block diagram illustrating a configuration of an electronic device according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
Fig. 1 is a flowchart of an image processing method according to an exemplary embodiment, which may include the steps of:
in step 101, object keypoints included in an image to be processed are acquired.
In an embodiment of the present disclosure, the image to be processed may be an image that includes a target object; for example, if the target object is a face, the image to be processed is an image containing a face, and if the target object is a human body, the image to be processed is an image containing a human body, and so on.
The object key point is a key point preset for the target object. For example, if the target object is a face, the object key points may include: eyes, nose, mouth, eyebrows, facial contours, etc.; as another example, if the target object is a human body, the object key points may include: head, neck, shoulders, elbows, hands, arms, knees, feet, etc. The above examples are illustrative only, and the disclosure is not limited thereto.
In addition, image rendering may be performed according to the image processing method described in the present disclosure on each frame of a video clip. However, because a video clip contains a large number of frames and the processing pressure is therefore high, the image to be processed may instead be acquired from the video clip according to a preset collection rule. The preset collection rule may be to collect an image to be processed once every m frames; for example, if the preset collection rule is to collect once every 5 frames, an image to be processed is acquired from the video clip every 5 frames. Collecting the image to be processed at a preset period in this way reduces the image processing pressure.
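As a rough illustration of such a collection rule, the following Python sketch samples one image to be processed every m frames of a video clip; the helper process_frame and the interval m = 5 are assumptions for illustration, not part of the disclosure.

```python
# Minimal sketch: sample an image to be processed every m frames of a video clip.
import cv2  # OpenCV, assumed available

def sample_and_process(video_path: str, m: int = 5):
    cap = cv2.VideoCapture(video_path)
    frame_index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Only every m-th frame becomes an "image to be processed".
        if frame_index % m == 0:
            process_frame(frame)  # hypothetical: runs steps 101-104 on this frame
        frame_index += 1
    cap.release()
```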
In step 102, an object mesh map is acquired according to the object keypoints.
In the embodiment of the present disclosure, a plurality of to-be-connected point sets may first be obtained according to the object key points, where each to-be-connected point set includes a specified number of object key points, and the same object key point may appear in different to-be-connected point sets. Further, in order to make the subsequent processing results accurate, all object key points may be included in the object grid graph in the present disclosure, so each object key point is located in at least one to-be-connected point set. Then, the specified number of object key points in each to-be-connected point set are connected to obtain a polygon, where the number of sides of the polygon may be the specified number.
Optionally, the designated number may be 3, where the set of points to be connected includes three object key points, and thus, the object mesh map in the present disclosure is a mesh map formed by combining multiple triangles.
In addition, generally, the finer the object grid graph is, the better the edge extraction effect of the subsequent object edge graph, so when the to-be-connected point sets are obtained, each set may be formed from a specified number of adjacent object key points.
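For illustration only, the following sketch builds a triangular object grid graph from the object key points; the disclosure does not prescribe a particular triangulation, and Delaunay triangulation from SciPy is used here purely as one possible way of connecting adjacent key points into triangles.

```python
# Minimal sketch: build a triangle mesh (specified number = 3) from object key points.
import numpy as np
from scipy.spatial import Delaunay

def build_object_mesh(keypoints):
    """keypoints: (N, 2) array of (x, y) object key points.
    Returns the triangles as an (M, 3) array of indices into keypoints."""
    pts = np.asarray(keypoints, dtype=np.float32)
    tri = Delaunay(pts)     # connects neighboring points into non-overlapping triangles
    return tri.simplices    # each row is one to-be-connected point set of 3 key points
```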
In step 103, edge processing is performed on the object grid graph to obtain an object edge graph.
In the embodiment of the present disclosure, the object grid graph may be edge-processed by a preset edge processing algorithm to obtain an object edge graph, where the edge processing algorithm may include at least one of the following: the Sobel edge detection algorithm, the Laplacian edge detection algorithm, the Canny edge detection algorithm, the Roberts edge detection algorithm, the Prewitt edge detection algorithm, and the like.
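A minimal sketch of this edge processing step, assuming the object grid graph has already been rasterized into a grayscale image and using the Sobel operator (one of the algorithms listed above) via OpenCV; the threshold value is an assumption.

```python
# Minimal sketch: Sobel edge processing of a rasterized object grid graph.
import cv2
import numpy as np

def edge_map_from_mesh_image(mesh_gray: np.ndarray) -> np.ndarray:
    """mesh_gray: 8-bit grayscale rendering of the object grid graph.
    Returns a binary object edge graph (255 on edges, 0 elsewhere)."""
    gx = cv2.Sobel(mesh_gray, cv2.CV_32F, 1, 0, ksize=3)   # horizontal gradient
    gy = cv2.Sobel(mesh_gray, cv2.CV_32F, 0, 1, ksize=3)   # vertical gradient
    magnitude = cv2.magnitude(gx, gy)
    _, edge = cv2.threshold(magnitude.astype(np.uint8), 50, 255, cv2.THRESH_BINARY)
    return edge
```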
In step 104, when a rendering particle is located at an edge position included in the object edge graph, the rendering particle is controlled to be in a hovering state, and the rendering particle is used for performing image rendering on the image to be processed.
In the embodiment of the disclosure, the edge positions included in the object edge graph are positions formed by the object key points; therefore, in order to achieve the effect of rendering particles hovering at the object key points, the rendering particles are controlled to be in a hovering state when they are located at an edge position included in the object edge graph.
In addition, rendering particles are generally given a preset motion trajectory and move based on it; in the course of this movement, the rendering particles can reach an edge position included in the object edge graph. For example, if the rendering particles are a plurality of petals and their motion trajectory moves from one side of the image to be processed to the other, the petals move from their corresponding initial positions from one side to the other; to make the rendering effect more vivid, a corresponding motion form, such as a rotation angle and a motion speed, may also be set for each petal.
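The following small sketch shows one possible representation of such a rendering particle, carrying a preset motion speed, rotation angle, and hovering flag; the field names and the straight-line trajectory are illustrative assumptions.

```python
# Minimal sketch of a petal-like rendering particle with a preset motion form.
from dataclasses import dataclass

@dataclass
class RenderParticle:
    x: float            # current position in the image to be processed
    y: float
    vx: float           # motion speed along the preset trajectory (pixels per frame)
    vy: float
    angle: float        # current rotation angle in degrees
    spin: float         # rotation added each frame
    hovering: bool = False

    def step(self):
        """Advance along the preset motion trajectory unless hovering."""
        if self.hovering:
            return
        self.x += self.vx
        self.y += self.vy
        self.angle = (self.angle + self.spin) % 360.0
```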
By adopting the above method, the object key points included in the image to be processed are acquired; an object grid graph is acquired according to the object key points; edge processing is performed on the object grid graph to obtain an object edge graph; and, when the rendering particles are located at edge positions included in the object edge graph, the rendering particles are controlled to be in a hovering state, where the rendering particles are used for performing image rendering on the image to be processed. According to the image processing method provided by the embodiments of the present disclosure, the object edge graph is obtained by combining object key point extraction with edge processing, so that the rendering particles can hover at the object key points. As a result, the rendering particles exhibit a variety of rendering forms, which improves the rendering effect and enhances the visual effect.
FIG. 2 is another flowchart of an image processing method according to an exemplary embodiment; the method may include the following steps:
in step 201, object keypoints included in the image to be processed are acquired through a keypoint extraction model.
In the embodiment of the present disclosure, the keypoint extraction model may be pre-constructed as follows: first, obtain an object image corresponding to the target object and label its object key points; then take the object image as the input of a preset convolutional neural network to obtain predicted object-image key points; next, construct a loss function from the predicted object-image key points and the labeled object key points; finally, update the preset convolutional neural network through regression training until the loss function meets an iteration termination condition. For example, the iteration termination condition may be that the value of the loss function reaches a minimum. Alternatively, an existing keypoint extraction model, such as a DAN (Deep Alignment Network) model, may be used to obtain the object key points included in the image to be processed; the keypoint extraction model is not described in detail here, and reference may be made to the prior art.
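A compressed sketch of the training procedure described above, written in PyTorch style; the MSE loss, the Adam optimizer, and the fixed epoch count are illustrative assumptions, since the disclosure only requires a preset convolutional neural network updated by regression training until a termination condition is met.

```python
# Minimal sketch: regression training of a key point extraction model.
import torch
import torch.nn as nn

def train_keypoint_model(model: nn.Module, loader, epochs: int = 10):
    """loader yields (image, labeled_keypoints) pairs; key points are flattened (x, y) coords."""
    criterion = nn.MSELoss()                       # loss between predicted and labeled key points
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    for _ in range(epochs):                        # stand-in for the iteration termination condition
        for image, labeled_kpts in loader:
            predicted_kpts = model(image)          # object-image key points from the CNN
            loss = criterion(predicted_kpts, labeled_kpts)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```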
In an embodiment of the present disclosure, the image to be processed may be an image including a target object, for example, the target object is a face, the image to be processed is an image including a face, and for another example, the target object is a human body, the image to be processed is an image including a human body, and so on.
The object key point is a key point preset for the target object. For example, if the target object is a face, the object key points may include: eyes, nose, mouth, eyebrows, facial contours, etc.; as another example, if the target object is a human body, the object key points may include: head, neck, shoulders, elbows, hands, arms, knees, feet, etc. The above examples are illustrative only, and the disclosure is not limited thereto.
As shown in fig. 3, an image to be processed is given in which the target object is a human face; the object key points corresponding to the image to be processed shown in fig. 3 are obtained through the keypoint extraction model and are denoted by "." in fig. 4.
In step 202, it is determined whether the number of object keypoints is less than or equal to a preset threshold.
In this step, considering that the object grid graph cannot meet the requirement when there are too few object key points, it is necessary to determine whether the number of the object key points is less than or equal to the preset threshold.
In the case where the number of object key points is less than or equal to the preset threshold, step 203, step 204, and steps 206 to 208 are performed;
in the case where the number of object keypoints is greater than the preset threshold, steps 205 to 208 are performed.
In step 203, an object extension point is obtained according to the object key point.
In an embodiment of the present disclosure, in one possible implementation manner, a plurality of keypoint combinations may be obtained, where each keypoint combination may include two object key points. A target line segment is obtained according to the keypoint combination, with the two object key points included in the keypoint combination taken as the two endpoints of the target line segment (i.e., a first endpoint and a second endpoint); then a target point on the target line segment is taken as the object extension point, where the distance from the target point to the first endpoint is a first distance, the distance from the target point to the second endpoint is a second distance, and the ratio between the first distance and the second distance conforms to a preset ratio. For example, if the preset ratio is 1:1, the target point is the midpoint of the target line segment, and so on. In addition, in the process of acquiring the plurality of keypoint combinations, two object key points may be acquired at random to obtain a keypoint combination. It should be noted that, because the object key points are key points of the target object included in the image to be processed, in the case where the object type of the target object is determined, the distribution of the object key points in each area may be obtained according to the object type. For example, if the target object is a human face, the face is first divided into a plurality of areas based on the object type, such as the area between the mouth (including the mouth) and the chin (including the chin), the area between the nose (including the nose) and the mouth, the area between the nose and the eyes (including the eyes), and the area between the eyes and the top of the head (including the top of the head); it is then determined whether the number of object key points in each area is greater than or equal to a preset number and whether the object key points in the area conform to a preset distribution. If the number of object key points in an area is less than the preset number and/or the object key points in the area do not conform to the preset distribution, two object key points are acquired from that area to form a keypoint combination. The above examples are illustrative only, and the disclosure is not limited thereto.
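A minimal sketch of the first implementation, taking a target point on the segment between the two key points of a keypoint combination according to a preset ratio; a 1:1 ratio yields the midpoint.

```python
# Minimal sketch: object extension point on the segment between two object key points.
def extension_point(p1, p2, ratio=(1, 1)):
    """p1, p2: (x, y) key points forming the keypoint combination.
    ratio: (first_distance, second_distance); (1, 1) returns the midpoint."""
    a, b = ratio
    t = a / (a + b)                      # fraction of the way from p1 (first endpoint) to p2
    x = p1[0] + t * (p2[0] - p1[0])
    y = p1[1] + t * (p2[1] - p1[1])
    return (x, y)
```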
In another possible implementation manner, a reference point of the target object may be predetermined; the object center point of the target object is generally considered to be relatively stable, and if the target object is a human face, the reference point may include the nose tip of the face. A ray may then be cast from the reference point within a specified angle range, where the specified angle range is the angle range corresponding to a specified area, and the specified area is the position where object key points need to be constructed. An object expansion point is then acquired on the ray: a point on the ray at a specified distance from the reference point is determined as the object expansion point, or the object expansion point is determined according to an ergonomic proportional relationship. For example, if the target object is a human face and few key points of the eyebrows are detected, a plurality of rays within a specified angle range are cast through the nose tip; taking a horizontal axis and a vertical axis crossing at the nose tip, the specified angle range may include (45°, 80°) and (100°, 135°). The above manner of obtaining the object extension point is merely illustrative, and the disclosure is not limited thereto.
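A minimal sketch of the second implementation, casting a ray from the reference point (for example, the nose tip) at an angle within the specified angle range and taking the point at a specified distance as the object expansion point; the angle and distance values are illustrative.

```python
# Minimal sketch: object expansion point on a ray cast from a reference point.
import math

def extension_point_on_ray(reference, angle_deg, distance):
    """reference: (x, y) reference point such as the nose tip.
    angle_deg: ray angle within the specified angle range, e.g. 60 in (45, 80).
    distance: specified distance from the reference point along the ray."""
    theta = math.radians(angle_deg)
    x = reference[0] + distance * math.cos(theta)
    y = reference[1] - distance * math.sin(theta)   # image y-axis points downward
    return (x, y)
```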
In step 204, the object mesh map is obtained according to the object key points and the object extension points.
The method of obtaining the object grid map in this step is similar to the process in step 102, and will not be described again.
In step 205, an object mesh map is acquired according to the object keypoints.
The method of obtaining the object grid map in this step is similar to the process in step 102, and will not be described again. As shown in fig. 5, an object mesh map obtained by connecting object keypoints in fig. 4 is shown.
In step 206, edge processing is performed on the object mesh map to obtain an object edge map.
In this step, the object grid graph is first subjected to object segmentation to obtain a target object image with the background separated, that is, the background in the image to be processed is set to a specified color (such as black), and then the target object image is subjected to edge processing to obtain an object edge graph. Illustratively, continuing with the example in fig. 5, the object edge graph shown in fig. 6 can be obtained by the image processing procedure in this step.
The above procedure may generally be performed in OpenGL (Open Graphics Library) using a shader and a memory. The image processing code in the present disclosure (such as the edge processing code, the code for acquiring the object grid graph, and the image rendering code) is written in the shader, and the image processing procedure is realized by calling this code in the shader. In the subsequent steps, the pixel values of the pixel points included in the object edge graph, together with the correspondence between those pixel values and the pixel positions, may be stored in the memory; the present disclosure may store them in the memory through the glReadPixels function in OpenGL.
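As a rough illustration of reading the object edge graph back into memory, the following sketch uses the glReadPixels binding from PyOpenGL; the framebuffer setup and the rendering of the edge graph are assumed to have been done elsewhere.

```python
# Minimal sketch: read the rendered object edge graph back from OpenGL into memory.
import numpy as np
from OpenGL.GL import glReadPixels, GL_RGBA, GL_UNSIGNED_BYTE

def read_edge_map(width: int, height: int) -> np.ndarray:
    """Returns an (H, W, 4) array so pixel values can be looked up by pixel position."""
    raw = glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE)
    pixels = np.frombuffer(raw, dtype=np.uint8).reshape(height, width, 4)
    return np.flipud(pixels)   # OpenGL rows start at the bottom of the framebuffer
```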
In the embodiment of the present disclosure, the object grid graph may be edge-processed by a preset edge processing algorithm to obtain an object edge graph, where the edge processing algorithm may include at least one of the following: the Sobel edge detection algorithm, the Laplacian edge detection algorithm, the Canny edge detection algorithm, the Roberts edge detection algorithm, the Prewitt edge detection algorithm, and the like.
In step 207, pixel values of each pixel included in the object edge map are obtained.
In one possible implementation, considering that the object edge graph may not be a black-and-white image, the pixel points on the object edge in the object edge graph are within a preset pixel value range, while the pixel points at positions other than the object edge are not within the preset pixel value range.
Further, in general, the object edge in the object edge graph is a first color, and the pixel points at positions other than the object edge are a second color; for example, the first color is white and the second color is black, in which case the pixel values of the pixel points included in the object edge graph are either RGB (255, 255, 255) or RGB (0, 0, 0).
In step 208, it is determined, according to the pixel values, whether the rendering particles are located at edge positions included in the object edge map.
The rendering particles are used for performing image rendering on the image to be processed.
In this step, it may be determined whether the rendering particles are located at edge positions included in the object edge map by:
S11, acquiring the current position of the rendering particles on the object edge graph;
the object edge map is an edge image formed by object key points in the image to be processed, so that each pixel point in the image to be processed corresponds to the object edge map, and the position of the rendering particles in the image to be processed is the current position of the rendering particles on the object edge map. And the rendering particles in the present disclosure may move in the image to be processed, the current position needs to be acquired in real time. By way of example, the rendering particles may include fireworks, fallen leaves, sports, snowflakes, and the like.
S12, acquiring a target pixel value corresponding to the current position according to the pixel value;
the step may obtain the target pixel value through the above-mentioned memory, and because the memory stores the pixel values of each pixel included in the object edge graph and the correspondence between the pixel values of each pixel and the pixel positions, the step may obtain the target pixel value corresponding to the current position based on the stored data.
S13, determining that the rendering particles are positioned at the edge position included in the object edge graph when the target pixel value is in the preset pixel value range;
since the pixel points of the object edge in the object edge graph are within the preset pixel value range, the rendering particles are determined to be positioned at the edge positions included in the object edge graph under the condition that the target pixel value is within the preset pixel value range.
In addition, when the object edge in the object edge graph is a first color and the pixel points at positions other than the object edge are a second color, for example the first color is white and the second color is black, this step means that if any one of the RGB values of the target pixel value is 255, the rendering particles are located at an edge position included in the object edge graph.
And S14, when the target pixel value exceeds the preset pixel value range, determining that the rendering particles are not positioned at the edge position included in the object edge graph.
Since the pixel points of the object edge in the object edge graph are within the preset pixel value range, if the target pixel value is not within the preset pixel value range, it is determined that the rendering particles are not located at the edge positions included in the object edge graph.
In addition, when the object edge in the object edge graph is a first color and the pixel points at positions other than the object edge are a second color, for example the first color is white and the second color is black, this step means that if any one of the RGB values of the target pixel value is 0, the rendering particles are not located at an edge position included in the object edge graph. A compact sketch of steps S11 to S14 is given below.
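Putting steps S11 to S14 together, a minimal sketch of the edge-position check against the stored pixel values; the white-edge-on-black-background case described above is assumed, and the helper is hypothetical rather than part of the disclosure.

```python
# Minimal sketch of steps S11-S14: is the rendering particle at an edge position?
import numpy as np

def is_on_edge(edge_map: np.ndarray, particle) -> bool:
    """edge_map: (H, W) or (H, W, C) pixel values of the object edge graph.
    particle: object with current position (particle.x, particle.y)."""
    h, w = edge_map.shape[:2]
    col = int(round(particle.x))           # S11: current position on the object edge graph
    row = int(round(particle.y))
    if not (0 <= row < h and 0 <= col < w):
        return False                       # outside the image: not an edge position
    target = edge_map[row, col]            # S12: target pixel value at the current position
    # S13/S14: white edge on black background, so a channel at 255 means "edge".
    return bool(np.max(target) >= 255)
```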
Executing step 209 when the rendering particles are located at edge positions included in the object edge map;
step 210 is performed when the rendering particles are not located at edge positions comprised by the object edge map.
In step 209, the rendered particles are controlled to be in a hover state.
The hovering state is that the rendering particles are stationary at the current position and do not continue to move according to a preset movement track. As shown in fig. 7, when the rendering particles (i.e., petals) reach the edge position of the object edge graph shown in fig. 6 (i.e., the position where the line in fig. 6 is located), the rendering particles do not continue to move according to the preset motion trajectory and hover at the edge position of the edge image, so that there are more rendering particles at the object key points in fig. 7.
In step 210, the rendering particles are controlled to move according to a preset motion trajectory.
The rendering particles are typically given a preset motion trajectory and move based on it; in the course of this movement, the rendering particles can reach an edge position included in the object edge graph. For example, if the rendering particles are a plurality of petals and their motion trajectory moves from one side of the image to be processed to the other, the petals move from their corresponding initial positions from one side to the other; to make the rendering effect more vivid, a corresponding motion form, such as a rotation angle and a motion speed, may also be set for each petal.
As shown in fig. 7, when the rendering particles (i.e., petals) reach a non-edge position of the object edge graph shown in fig. 6 (i.e., a position other than the lines in fig. 6), the rendering particles continue to move according to the preset motion trajectory, and thus there are fewer rendering particles at positions other than the object key points in fig. 7.
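Tying steps 209 and 210 together, a short per-frame sketch: particles located at an edge position are switched to the hovering state, while the others continue along their preset motion trajectory; is_on_edge and RenderParticle are the hypothetical helpers sketched earlier.

```python
# Minimal sketch of the per-frame control loop over all rendering particles.
def update_particles(particles, edge_map):
    for p in particles:
        if is_on_edge(edge_map, p):
            p.hovering = True    # step 209: stay stationary at the current position
        if not p.hovering:
            p.step()             # step 210: continue along the preset motion trajectory
```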
By adopting the above method, the object key points included in the image to be processed are acquired; an object grid graph is acquired according to the object key points; edge processing is performed on the object grid graph to obtain an object edge graph; and, when the rendering particles are located at edge positions included in the object edge graph, the rendering particles are controlled to be in a hovering state, where the rendering particles are used for performing image rendering on the image to be processed. According to the image processing method provided by the embodiments of the present disclosure, the object edge graph is obtained by combining object key point extraction with edge processing, so that the rendering particles can hover at the object key points. As a result, the rendering particles exhibit a variety of rendering forms, which improves the rendering effect and enhances the visual effect.
Fig. 8 is a block diagram of an image processing apparatus 80 according to an exemplary embodiment. Referring to fig. 8, the apparatus includes:
a key point obtaining module 81, configured to obtain an object key point included in an image to be processed;
a mesh map obtaining module 82, configured to obtain an object mesh map according to the object key points;
an edge map obtaining module 83, configured to perform edge processing on the object mesh map to obtain an object edge map;
a hover control module 84, configured to control, when a rendering particle is located at an edge position included in the object edge map, the rendering particle to be in a hover state, where the rendering particle is used for performing image rendering on an image to be processed.
Optionally, the keypoint obtaining module 81 is configured to obtain, by using a keypoint extraction model, an object keypoint included in the image to be processed.
Fig. 9 is a block diagram of an image processing apparatus 80 according to an exemplary embodiment. Referring to fig. 9, the apparatus further includes:
a pixel value obtaining module 85, configured to obtain pixel values of each pixel point included in the object edge graph;
and a position judging module 86, configured to judge whether the rendering particles are located at an edge position included in the object edge map according to the pixel values.
Fig. 10 is a block diagram of an image processing apparatus 80 according to an exemplary embodiment. Referring to fig. 10, the position determining module 86 includes:
a position acquisition sub-module 861 for acquiring a current position of the rendering particles on the object edge map;
a target pixel value obtaining sub-module 862, configured to obtain a target pixel value corresponding to the current position according to the pixel value;
an edge position determining sub-module 863, configured to determine that the rendering particles are located at an edge position included in the object edge map when the target pixel value is within a preset pixel value range;
and a non-edge position determining sub-module 864, configured to determine that the rendering particles are not located at an edge position included in the object edge map when the target pixel value exceeds the preset pixel value range.
Fig. 11 is a block diagram of an image processing apparatus 80 according to an exemplary embodiment. Referring to fig. 11, the apparatus 80 further includes:
the motion control module 87 is configured to control the rendering particles to move according to a preset motion trajectory when the rendering particles are not located at the edge positions included in the object edge map.
Fig. 12 is a block diagram of an image processing apparatus 80 according to an exemplary embodiment. Referring to fig. 12, the apparatus 80 further includes:
A key point judging module 88, configured to judge whether the number of key points of the object is less than or equal to a preset threshold;
an extension point obtaining module 89, configured to obtain an object extension point according to the object key point when the number of the object key points is less than or equal to the preset threshold;
the mesh map obtaining module 82 is configured to obtain the object mesh map according to the object key point and the object extension point.
By adopting the above apparatus, the object key points included in the image to be processed are acquired; an object grid graph is acquired according to the object key points; edge processing is performed on the object grid graph to obtain an object edge graph; and, when the rendering particles are located at edge positions included in the object edge graph, the rendering particles are controlled to be in a hovering state, where the rendering particles are used for performing image rendering on the image to be processed. According to the image processing apparatus provided by the embodiments of the present disclosure, the object edge graph is obtained by combining object key point extraction with edge processing, so that the rendering particles can hover at the object key points. As a result, the rendering particles exhibit a variety of rendering forms, which improves the rendering effect and enhances the visual effect.
The specific manner in which the various modules perform their operations in the apparatus of the above embodiments has been described in detail in connection with the embodiments of the method, and will not be described in detail here.
Fig. 13 is a block diagram of an electronic device 1300, according to an example embodiment. The electronic device may be a mobile terminal or a server. For example, the electronic device 1300 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 13, an electronic device 1300 may include one or more of the following components: a processing component 1302, a memory 1304, a power component 1306, a multimedia component 1308, an audio component 1310, an input/output (I/O) interface 1312, a sensor component 1314, and a communication component 1316.
The processing component 1302 generally controls overall operation of the electronic device 1300, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 1302 may include one or more processors 1320 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 1302 can include one or more modules that facilitate interactions between the processing component 1302 and other components. For example, the processing component 1302 may include a multimedia module to facilitate interaction between the multimedia component 1308 and the processing component 1302.
The memory 1304 is configured to store various types of data to support operations at the electronic device 1300. Examples of such data include instructions for any application or method operating on the electronic device 1300, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1304 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply assembly 1306 provides power to the various components of the electronic device 1300. The power components 1306 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 1300.
The multimedia component 1308 includes a screen between the electronic device 1300 and the user that provides an output interface. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1308 includes a front-facing camera and/or a rear-facing camera. When the electronic device 1300 is in an operational mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 1310 is configured to output and/or input audio signals. For example, the audio component 1310 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 1300 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 1304 or transmitted via the communication component 1316. In some embodiments, the audio component 1310 also includes a speaker for outputting audio signals.
The I/O interface 1312 provides an interface between the processing component 1302 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 1314 includes one or more sensors for providing status assessment of various aspects of the electronic device 1300. For example, the sensor assembly 1314 may detect an on/off state of the electronic device 1300 and the relative positioning of components, such as the display and keypad of the electronic device 1300; the sensor assembly 1314 may also detect a change in position of the electronic device 1300 or a component of the electronic device 1300, the presence or absence of user contact with the electronic device 1300, an orientation or acceleration/deceleration of the electronic device 1300, and a change in temperature of the electronic device 1300. The sensor assembly 1314 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact. The sensor assembly 1314 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1314 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1316 is configured to facilitate communication between the electronic device 1300 and other devices, either wired or wireless. The electronic device 1300 may access a wireless network based on a communication standard, such as WiFi, an operator network (e.g., 2G, 3G, 4G, or 5G), or a combination thereof. In one exemplary embodiment, the communication component 1316 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 1316 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 1300 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for performing the image processing methods illustrated in fig. 1 and fig. 2 described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as the memory 1304, including instructions executable by the processor 1320 of the electronic device 1300 to perform the image processing methods shown in fig. 1 and fig. 2 described above. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, etc.
In an exemplary embodiment, a computer program product is also provided, which, when executed by the processor 1320 of the electronic device 1300, causes the electronic device 1300 to perform the image processing method shown in fig. 1, 2 described above.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following the general principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (9)

1. An image processing method, the method comprising:
acquiring object key points included in an image to be processed;
Judging whether the number of the object key points is smaller than or equal to a preset threshold value;
acquiring object expansion points according to the object key points under the condition that the number of the object key points is smaller than or equal to the preset threshold value;
acquiring an object grid graph according to the object key points and the object expansion points;
performing edge processing on the object grid graph according to a preset edge processing algorithm to obtain an object edge graph;
when the rendering particles are positioned at the edge positions included in the object edge graph, controlling the rendering particles to be in a hovering state, wherein the rendering particles are used for performing image rendering on the image to be processed, and the hovering state is that the rendering particles are static at the current position and do not continue to move according to a preset movement track.
2. The method according to claim 1, wherein the acquiring the object keypoints included in the image to be processed includes:
and acquiring object key points included in the image to be processed through a key point extraction model.
3. The method according to claim 1, further comprising, after the edge processing the object mesh map to obtain an object edge map:
Acquiring pixel values of all pixel points included in the object edge graph;
and judging whether the rendering particles are positioned at the edge positions included in the object edge graph according to the pixel values.
4. A method according to claim 3, wherein said determining whether said rendering particles are located at edge positions comprised by said object edge map based on said pixel values comprises:
acquiring the current position of the rendering particles on the object edge graph;
acquiring a target pixel value corresponding to the current position according to the pixel value;
when the target pixel value is within a preset pixel value range, determining that the rendering particles are positioned at the edge position included in the object edge graph;
and when the target pixel value exceeds the preset pixel value range, determining that the rendering particles are not positioned at the edge positions included in the object edge graph.
5. The method of claim 1, further comprising, after performing edge processing on the object mesh map to obtain an object edge map:
and when the rendering particles are not positioned at the edge positions included in the object edge graph, controlling the rendering particles to move according to a preset movement track.
6. An image processing apparatus, characterized in that the apparatus comprises:
the key point acquisition module is used for acquiring object key points included in the image to be processed;
the grid diagram acquisition module is used for acquiring an object grid diagram according to the object key points and the object expansion points;
the edge map acquisition module is used for carrying out edge processing on the object grid map according to a preset edge processing algorithm to obtain an object edge map;
a hover control module, configured to control, when a rendering particle is located at an edge position included in the object edge graph, the rendering particle to be in a hover state, where the rendering particle is used to perform image rendering on the image to be processed, and the hover state is that the rendering particle is stationary at the current position and does not continue to move according to a preset motion track;
the key point judging module is used for judging whether the number of the key points of the object is smaller than or equal to a preset threshold value;
and the expansion point acquisition module is used for acquiring object expansion points according to the object key points under the condition that the number of the object key points is smaller than or equal to the preset threshold value.
7. The apparatus according to claim 6, wherein the keypoint obtaining module is configured to obtain the object keypoints included in the image to be processed by using a keypoint extraction model.
8. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to perform the image processing method of any one of claims 1 to 5.
9. A non-transitory computer-readable storage medium, characterized in that instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the image processing method of any one of claims 1 to 5.
CN201910340750.6A 2019-04-25 2019-04-25 Image processing method, device, electronic equipment and storage medium Active CN110211211B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910340750.6A CN110211211B (en) 2019-04-25 2019-04-25 Image processing method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910340750.6A CN110211211B (en) 2019-04-25 2019-04-25 Image processing method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110211211A CN110211211A (en) 2019-09-06
CN110211211B true CN110211211B (en) 2024-01-26

Family

ID=67786458

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910340750.6A Active CN110211211B (en) 2019-04-25 2019-04-25 Image processing method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110211211B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111091610B (en) * 2019-11-22 2023-04-11 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
US11403788B2 (en) 2019-11-22 2022-08-02 Beijing Sensetime Technology Development Co., Ltd. Image processing method and apparatus, electronic device, and storage medium
CN112581620A (en) * 2020-11-30 2021-03-30 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113743177A (en) * 2021-02-09 2021-12-03 北京沃东天骏信息技术有限公司 Key point detection method, system, intelligent terminal and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6084596A (en) * 1994-09-12 2000-07-04 Canon Information Systems Research Australia Pty Ltd. Rendering self-overlapping objects using a scanline process
JP2014194635A (en) * 2013-03-28 2014-10-09 Canon Inc Image forming apparatus, image forming method, and program
CN106022337A (en) * 2016-05-22 2016-10-12 复旦大学 Planar object detection method based on continuous edge characteristic
CN108428214A (en) * 2017-02-13 2018-08-21 阿里巴巴集团控股有限公司 A kind of image processing method and device
CN108986016A (en) * 2018-06-28 2018-12-11 北京微播视界科技有限公司 Image beautification method, device and electronic equipment
CN109063560A (en) * 2018-06-28 2018-12-21 北京微播视界科技有限公司 Image processing method, device, computer readable storage medium and terminal

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6084596A (en) * 1994-09-12 2000-07-04 Canon Information Systems Research Australia Pty Ltd. Rendering self-overlapping objects using a scanline process
JP2014194635A (en) * 2013-03-28 2014-10-09 Canon Inc Image forming apparatus, image forming method, and program
CN106022337A (en) * 2016-05-22 2016-10-12 复旦大学 Planar object detection method based on continuous edge characteristic
CN108428214A (en) * 2017-02-13 2018-08-21 阿里巴巴集团控股有限公司 A kind of image processing method and device
CN108986016A (en) * 2018-06-28 2018-12-11 北京微播视界科技有限公司 Image beautification method, device and electronic equipment
CN109063560A (en) * 2018-06-28 2018-12-21 北京微播视界科技有限公司 Image processing method, device, computer readable storage medium and terminal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Rendering of leaf edges (树叶边缘渲染); 涅槃的凤凰; https://blog.csdn.net/u014630768/article/details/32716117; 2014-06-30; full text *

Also Published As

Publication number Publication date
CN110211211A (en) 2019-09-06

Similar Documents

Publication Publication Date Title
CN110211211B (en) Image processing method, device, electronic equipment and storage medium
CN110675310B (en) Video processing method and device, electronic equipment and storage medium
EP3125158B1 (en) Method and device for displaying images
CN109087238B (en) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN111553864B (en) Image restoration method and device, electronic equipment and storage medium
US11030733B2 (en) Method, electronic device and storage medium for processing image
CN110909654A (en) Training image generation method and device, electronic equipment and storage medium
CN109472738B (en) Image illumination correction method and device, electronic equipment and storage medium
CN110490164B (en) Method, device, equipment and medium for generating virtual expression
CN113194254A (en) Image shooting method and device, electronic equipment and storage medium
CN112509005B (en) Image processing method, image processing device, electronic equipment and storage medium
CN111091610B (en) Image processing method and device, electronic equipment and storage medium
CN111626183A (en) Target object display method and device, electronic equipment and storage medium
CN113409342A (en) Training method and device for image style migration model and electronic equipment
US20210118148A1 (en) Method and electronic device for changing faces of facial image
CN111144266B (en) Facial expression recognition method and device
CN110807769B (en) Image display control method and device
CN110619325A (en) Text recognition method and device
CN109167921B (en) Shooting method, shooting device, shooting terminal and storage medium
CN111988522B (en) Shooting control method and device, electronic equipment and storage medium
CN110502993B (en) Image processing method, image processing device, electronic equipment and storage medium
CN112347911A (en) Method and device for adding special effects of fingernails, electronic equipment and storage medium
CN111373409B (en) Method and terminal for obtaining color value change
EP3905660A1 (en) Method and device for shooting image, and storage medium
CN114463212A (en) Image processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant