CN110070554A - Image processing method, device, hardware device
- Publication number: CN110070554A
- Application number: CN201811222643.5A
- Authority: CN (China)
- Prior art keywords: contour feature, inner contour, point, feature point, target object
- Prior art date: 2018-10-19
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/12—Edge-based segmentation (G—PHYSICS; G06—COMPUTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL; G06T7/00—Image analysis; G06T7/10—Segmentation; edge detection)
- G06T2207/10016—Video; image sequence (G06T2207/00—Indexing scheme for image analysis or image enhancement; G06T2207/10—Image acquisition modality)
- G06T2207/30196—Human being; person (G06T2207/00—Indexing scheme for image analysis or image enhancement; G06T2207/30—Subject of image; context of image processing)
Abstract
The disclosure provides an image processing method, apparatus, and hardware device. The method includes: segmenting a target image to obtain the contour of a target object; generating inner contour feature points of the target object according to the contour of the target object; generating outer contour feature points according to the inner contour feature points; and filling a preset material into the region between the inner contour feature points and the outer contour feature points. With the image processing method of the embodiments of the disclosure, the target object to be processed can be segmented out of the image and material can be added in the region around the target object to form a special effect; when the effect is modified, only the material needs to be modified, without re-editing the image, which improves the efficiency and flexibility of special-effect production.
Description
Technical Field
The present disclosure relates to the field of image processing, and in particular, to an image processing method, an image processing apparatus, and a hardware device.
Background
With the development of computer technology, the range of applications of intelligent terminals has expanded greatly: they can be used to listen to music, play games, chat online, take pictures, and so on. As for their shooting capability, the cameras of intelligent terminals now exceed ten million pixels, offering high definition and shooting results comparable to those of professional cameras.
At present, when an intelligent terminal is used for photographing, traditional shooting effects can be achieved with the camera software built in at the factory, and additional effects can be achieved by downloading an application program (APP for short) from the network, for example APPs providing dark-light detection, beauty camera, or super-pixel functions. The beautifying functions of an intelligent terminal usually include effects such as skin-tone adjustment, skin smoothing, eye enlargement, and face thinning, and can apply the same degree of beautification to all faces recognized in an image. Current APPs can also realize simple special effects.
However, current special-effect functions can only preset an effect and then synthesize it into a video or image; if the effect needs to be modified, it must be produced again and re-synthesized into the video or image, which makes the generation of special effects very inflexible.
Disclosure of Invention
According to one aspect of the present disclosure, the following technical solutions are provided:
an image processing method, comprising: segmenting a target image to obtain the contour of a target object; generating inner contour feature points of the target object according to the contour of the target object; generating outer contour feature points according to the inner contour feature points; and filling a preset material into the region between the inner contour feature points and the outer contour feature points.
Further, the generating inner contour feature points of the target object according to the contour of the target object comprises: and generating the inner contour characteristic points along the contour line of the target object.
Further, the distances between adjacent inner contour feature points are the same.
Further, the generating of the outer contour feature point according to the inner contour feature point includes: and generating outer contour characteristic points in the direction of the inner contour characteristic points far away from the target object according to the inner contour characteristic points.
Further, the generating, according to the inner contour feature point, an outer contour feature point in a direction of the inner contour feature point away from a target object includes: making a perpendicular line to a line segment formed by connecting a first inner contour feature point and a second inner contour feature point through the first inner contour feature point, wherein the first inner contour feature point and the second inner contour feature point are adjacent inner contour feature points; and taking a first point in the direction of the perpendicular line far away from the target object, wherein the length of the line between the first point and the first inner contour feature point is a preset length, and taking the first point as an outer contour feature point corresponding to the first inner contour feature point.
Further, the generating, according to the inner contour feature point, an outer contour feature point in a direction of the inner contour feature point away from a target object further includes: generating two different outer contour feature points through a third inner contour feature point, wherein the two different outer contour feature points are respectively a first auxiliary outer contour feature point and a second auxiliary outer contour feature point, and the third inner contour feature point is an inner contour feature point at a contour inflection point of the target object; calculating the intersection point of a straight line where the first auxiliary outer contour feature point and a second outer contour feature point are located and a straight line where the second auxiliary outer contour feature point and a fourth outer contour feature point are located, wherein the second outer contour feature point is the outer contour feature point corresponding to a second inner contour feature point, the fourth outer contour feature point is the outer contour feature point corresponding to a fourth inner contour feature point, and the second inner contour feature point and the fourth inner contour feature point are the two inner contour feature points adjacent to the third inner contour feature point; and taking the intersection point as a third outer contour feature point corresponding to the third inner contour feature point.
Further, the distance between the inner contour feature point and the outer contour feature point corresponding to the inner contour feature point is the width of the material.
Further, the filling of a preset material into the region between the inner contour feature point and the outer contour feature point includes: filling the material into an area formed by two adjacent inner contour characteristic points and two adjacent outer contour characteristic points corresponding to the two adjacent inner contour characteristic points; repeating the filling operation until all the areas between the inner contour characteristic points and the outer contour characteristic points are filled with the material.
Further, the segmenting the target image to obtain the contour of the target object includes: acquiring a video, and segmenting a video frame image in the video; separating a target object from other objects in the video frame image; and obtaining the outline of the target object.
Further, before segmenting the target image to obtain the contour of the target object, the method further includes: and setting a corresponding relation between the preset material and the target image.
According to another aspect of the present disclosure, the following technical solutions are also provided:
an image processing apparatus comprising:
the contour acquisition module is used for segmenting the target image to obtain the contour of the target object;
an inner contour feature point generation module, configured to generate inner contour feature points of the target object according to the contour of the target object;
the outer contour characteristic point generating module is used for generating outer contour characteristic points according to the inner contour characteristic points;
and the filling module is used for filling preset materials into the area between the inner contour characteristic point and the outer contour characteristic point.
Further, the inner contour feature point generating module is configured to generate the inner contour feature point along the contour line of the target object.
Further, the distances between adjacent inner contour feature points are the same.
Further, the outer contour feature point generating module comprises an outer contour feature point generating submodule for generating an outer contour feature point according to the inner contour feature point in a direction of the inner contour feature point away from the target object.
Further, the outer contour feature point generation submodule is used for making a perpendicular line to a line segment formed by connecting a first inner contour feature point and a second inner contour feature point through the first inner contour feature point, wherein the first inner contour feature point and the second inner contour feature point are adjacent inner contour feature points; and taking a first point in the direction of the perpendicular line far away from the target object, wherein the length of the line between the first point and the first inner contour characteristic point is a preset length, and taking the first point as the outer contour characteristic point corresponding to the first inner contour characteristic point.
Further, the outer contour feature point generation submodule is further configured to: generating two different outer contour feature points through a third inner contour feature point, wherein the two different outer contour feature points are respectively a first auxiliary outer contour feature point and a second auxiliary outer contour feature point, and the third inner contour point is an inner contour feature point at the outline inflection point; calculating the intersection point of a straight line where a first auxiliary outer contour feature point and a second outer contour feature point are located and a straight line where a second auxiliary outer contour feature point and a fourth outer contour feature point are located, wherein the second outer contour feature point is an outer contour feature point corresponding to a second inner contour feature point, the fourth outer contour feature point is an outer contour feature point corresponding to a fourth inner contour feature point, and the second inner contour feature point and the fourth inner contour feature point are two inner contour feature points adjacent to the third inner contour feature point; and taking the intersection point as a third outer contour feature point corresponding to the third inner contour feature point.
Further, the distance between the inner contour feature point and the outer contour feature point corresponding to the inner contour feature point is the width of the material.
Further, the filling module is configured to fill the material into an area formed by two adjacent inner contour feature points and two adjacent outer contour feature points corresponding to the two adjacent inner contour feature points; repeating the filling operation until all the areas between the inner contour feature points and the outer contour feature points are filled with the material.
Further, the contour acquisition module is configured to acquire a video and segment a video frame image in the video; separating a target object from other objects in the video frame image; and obtaining the outline of the target object.
Further, the image processing apparatus further includes a correspondence setting module configured to set a correspondence between a preset material and the target image.
According to still another aspect of the present disclosure, there is also provided the following technical solution:
an electronic device, comprising: a memory for storing non-transitory computer readable instructions; and a processor for executing the computer readable instructions such that, when executing them, the processor implements the steps of any of the image processing methods described above.
According to still another aspect of the present disclosure, there is also provided the following technical solution:
a computer readable storage medium storing non-transitory computer readable instructions which, when executed by a computer, cause the computer to perform the steps of any of the methods described above.
The disclosure discloses an image processing method, an image processing device and a hardware device. The image processing method comprises the following steps: segmenting the target image to obtain the contour of the target object; generating inner contour feature points of the target object according to the contour of the target object; generating outer contour characteristic points according to the inner contour characteristic points; and filling preset materials into the area between the inner contour characteristic point and the outer contour characteristic point. According to the image processing method, the target object to be processed can be divided from the image, the material is added in the relevant area of the target object to form the special effect, when the special effect is modified, only the material needs to be modified, the image does not need to be re-edited, and the efficiency and the flexibility of special effect production are improved.
The foregoing is a summary of the present disclosure. In order to make the above and other objects, features, and advantages of the present disclosure more comprehensible, preferred embodiments are described in detail below with reference to the accompanying drawings.
Drawings
FIG. 1 is a schematic flow diagram of an image processing method according to one embodiment of the present disclosure;
FIG. 2 is a schematic diagram of an outer contour feature point generation method according to the present disclosure;
FIG. 3 is a schematic diagram of a material filling method according to the present disclosure;
FIG. 4 is a diagram illustrating an effect after an image is processed according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of an image processing apparatus according to an embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of an electronic device provided according to an embodiment of the present disclosure.
Detailed Description
The embodiments of the present disclosure are described below through specific examples, and other advantages and effects of the present disclosure will be readily apparent to those skilled in the art from this specification. The described embodiments are obviously only some, and not all, of the embodiments of the disclosure. The disclosure may also be embodied or carried out in various other specific embodiments, and various changes and modifications may be made in the details of this description without departing from the spirit of the disclosure. It is to be noted that the features in the following embodiments and examples may be combined with each other in the absence of conflict. All other embodiments obtained by a person skilled in the art from the disclosed embodiments without creative effort fall within the protection scope of the present disclosure.
It is noted that various aspects of the embodiments are described below within the scope of the appended claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the disclosure, one skilled in the art should appreciate that one aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. Additionally, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should be further noted that the drawings provided in the following embodiments are only schematic illustrations of the basic concept of the present disclosure; they show only the components related to the present disclosure and are not drawn according to the number, shape, and size of the components in an actual implementation.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
The embodiment of the disclosure provides an image processing method. The image processing method provided by the embodiment can be executed by a computing device, the computing device can be implemented as software, or implemented as a combination of software and hardware, and the computing device can be integrated in a server, a terminal device and the like. As shown in fig. 1, the image processing method mainly includes the following steps S101 to S104. Wherein:
step S101: segmenting the target image to obtain the contour of the target object;
In this step, the target image may be any image. In one embodiment, the target image is a picture containing a target object, and the target object may be any object; in one embodiment, the target object is a human body. Segmenting the target image means segmenting the objects in the image and separating the target object from the other objects, thereby obtaining the contour of the target object. In one embodiment, when the target image comes from a video, the video is first acquired and its video frame images are segmented: the target object is separated from the other objects in each video frame image, and the contour of the target object is obtained.
Image segmentation is generally divided into interactive and automatic image segmentation. Conventional image processing usually uses interactive segmentation, which requires human intervention. The present disclosure uses automatic image segmentation, which is described below taking human body image segmentation as an example.
Generally, automatic human body image segmentation methods fall into the following categories:
(1) Model-based segmentation: a human face is first detected according to prior knowledge of faces, a torso model then searches for the torso below the face, the position of the lower body is estimated from the segmented torso, and finally the estimated torso and limb regions provide seed points for image segmentation to complete the segmentation of the human body image.
(2) Hierarchical-tree-based segmentation: adjacent body parts are modeled first, then the whole body posture; different postures of the human body are modeled as the summation of nodes along different paths of a hierarchical detection tree, where different layers correspond to models of adjacent body parts and different paths correspond to different postures; detection proceeds downward from the root node of the tree, segmenting the different postures along different paths.
(3) Reference-signal-based segmentation: a face is first detected according to prior knowledge of faces, a torso model searches for the torso below the face, a reference signal is derived from the detected torso, and independent component analysis with this reference signal highlights the torso in the image to complete torso segmentation; other body parts are segmented similarly until the whole human body image is segmented.
(4) Expectation-maximization-based segmentation: the human posture in the image is first estimated with a pictorial structure model to obtain a probability map of the posture, on the basis of which an image segmentation method yields the final human segmentation image.
Of course, other human body image segmentation methods may also be used and are not detailed here; any image segmentation method can be introduced into the present disclosure to segment the target object from the target image.
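None of these methods is mandated by the disclosure. Purely as an illustration (not part of the original patent text), the following minimal Python sketch assumes that a binary segmentation mask has already been produced by one of the methods above and extracts the target contour with OpenCV; the function name and the choice of OpenCV are our own assumptions.

```python
import cv2
import numpy as np

def extract_contour(mask: np.ndarray) -> np.ndarray:
    """Return the largest external contour of a binary mask as an (N, 2) float array.

    `mask` is assumed to be a uint8 image in which the target object is
    non-zero, produced by any of the segmentation methods above.
    Uses the OpenCV 4 findContours signature.
    """
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        raise ValueError("no object found in mask")
    largest = max(contours, key=cv2.contourArea)  # keep only the main object
    return largest.reshape(-1, 2).astype(np.float32)
```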
Step S102: generating inner contour characteristic points of the target object according to the contour of the target object;
In this step, inner contour feature points of the target object are generated on the basis of the contour obtained in step S101. The inner contour feature points may lie directly on the contour line of the target object or may be kept at a predetermined distance from it, for example at a distance of 0.1 cm from the contour line. In one embodiment, the distances between adjacent inner contour feature points are the same, that is to say, the inner contour feature points are evenly distributed along the contour of the target object.
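As an illustrative sketch only (the disclosure does not prescribe an implementation), the even spacing described above can be realized by resampling the contour polyline at equal arc-length steps. The function below assumes a contour array such as the one from the previous sketch, and uses the material length L described later in step S104 as the spacing.

```python
import numpy as np

def sample_inner_points(contour: np.ndarray, spacing: float) -> np.ndarray:
    """Resample a closed contour at approximately equal arc-length spacing.

    `contour` is an (N, 2) array of contour points; `spacing` would
    typically be the material length L used in step S104.
    """
    closed = np.vstack([contour, contour[:1]])          # close the polyline
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg)])       # cumulative arc length
    total = arc[-1]
    n = max(int(total // spacing), 3)                   # at least a triangle
    targets = np.linspace(0.0, total, n, endpoint=False)
    x = np.interp(targets, arc, closed[:, 0])           # interpolate x and y
    y = np.interp(targets, arc, closed[:, 1])           # against arc length
    return np.stack([x, y], axis=1)
```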
Step S103: generating outer contour characteristic points according to the inner contour characteristic points;
In this step, outer contour feature points are generated from the inner contour feature points produced in step S102. In one embodiment, the outer contour feature points are generated in the direction of the inner contour feature points away from the target object, and the generation may be an interpolation process. Taking a human body as the target object, the inner contour feature points lie on the human body contour line, and for each inner contour feature point a corresponding outer contour feature point is generated on the outer side of the body. In one embodiment, a perpendicular line is drawn through a first inner contour feature point to the line segment connecting the first inner contour feature point and a second inner contour feature point, the two being adjacent inner contour feature points; a first point is taken on the perpendicular in the direction away from the target object such that the segment between the first point and the first inner contour feature point has a preset length, and the first point is taken as the outer contour feature point corresponding to the first inner contour feature point. These two steps are repeated until every such inner contour point has its corresponding outer contour feature point. For an inner contour feature point located at a contour inflection point, referred to as a third inner contour feature point, two different outer contour feature points are generated through it, namely a first auxiliary outer contour feature point and a second auxiliary outer contour feature point. The intersection is then calculated between the straight line through the first auxiliary outer contour feature point and the second outer contour feature point and the straight line through the second auxiliary outer contour feature point and the fourth outer contour feature point, where the second outer contour feature point corresponds to the second inner contour feature point, the fourth outer contour feature point corresponds to a fourth inner contour feature point, and the second and fourth inner contour feature points are the two inner contour feature points adjacent to the third. The intersection is taken as the third outer contour feature point corresponding to the third inner contour feature point.
Referring to fig. 2 as a specific example of the above embodiment, the inner contour feature points include points 1, 2, 3, 4, 5, and 6, where points 1, 2, 3, 5, and 6 play the role of the first and second inner contour points of the above embodiment, and point 4, a contour inflection point of the target object, corresponds to the third inner contour point. Taking point 2 as an example, a perpendicular to line segment 12 is drawn through point 2, and a point b is taken on it in the direction away from the target object such that segment 2b has length H, a preset length. Each inner contour feature point has two adjacent inner contour feature points; besides point 1, point 2 is also adjacent to point 3, so a perpendicular to segment 23 is likewise drawn through point 2. In this embodiment, points 1, 2, and 3 lie on the same straight line, so the perpendicular to segment 12 drawn through point 2 coincides with the perpendicular to segment 23, the two candidate points b coincide, and point b is determined as the outer contour feature point corresponding to point 2. Repeating this operation for each point yields outer contour feature points a, b, c, f, and g corresponding to inner contour feature points 1, 2, 3, 5, and 6. This specific example also includes a special case, point 4 shown in fig. 2, an inflection point located at a corner of the contour of the target object. Point 4 and its adjacent points 3 and 5 are not on a straight line, so the perpendiculars to segment 34 and segment 45 drawn through point 4 do not coincide, and two points d and e appear. Segments cd and fe are then extended, their extensions intersect at point h, and point h is determined as the outer contour feature point corresponding to inner contour feature point 4.
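The construction of fig. 2 amounts to offsetting the inner contour polyline outward by the preset length H with miter joins: where the neighbors are collinear the two perpendiculars coincide, and at an inflection the intersection of the extended offset segments is exactly the miter point. The following Python sketch (an illustration, not the patent's implementation) computes the outer points this way; the centroid test used to pick the side pointing away from the target object is our own heuristic and assumes a roughly convex contour.

```python
import numpy as np

def outer_points(inner: np.ndarray, h: float) -> np.ndarray:
    """Offset each inner contour feature point outward by the preset length h.

    A miter-join sketch of the FIG. 2 construction: for collinear neighbors
    this reduces to a single perpendicular offset; at inflections the miter
    point equals the intersection of the two extended offset lines.
    """
    center = inner.mean(axis=0)                  # centroid of the contour
    n = len(inner)
    out = np.empty_like(inner)
    for i in range(n):
        prev_dir = inner[i] - inner[i - 1]
        next_dir = inner[(i + 1) % n] - inner[i]
        # unit normals of the two segments meeting at point i
        n1 = np.array([-prev_dir[1], prev_dir[0]]) / np.linalg.norm(prev_dir)
        n2 = np.array([-next_dir[1], next_dir[0]]) / np.linalg.norm(next_dir)
        if np.dot(n1, inner[i] - center) < 0:    # flip each normal so that it
            n1 = -n1                             # points away from the object
        if np.dot(n2, inner[i] - center) < 0:
            n2 = -n2
        miter = n1 + n2
        miter /= np.linalg.norm(miter)
        # keep both offset lines at distance h: scale = h / cos(half-angle)
        out[i] = inner[i] + miter * (h / max(np.dot(miter, n1), 1e-6))
    return out
```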
It should be noted that the above-mentioned method for generating the outer contour feature points is only an example and does not constitute a limitation to the present disclosure, and practically any method for generating the outer contour points from the inner contour points may be used in the present disclosure.
Step S104: and filling preset materials into the area between the inner contour characteristic point and the outer contour characteristic point.
In the present disclosure, the preset material may be a fixed-size color image; filling it into the region between the inner and outer contour feature points produces a stroked-edge effect around the target object. In one embodiment, the filling process is as follows: the material is filled into the area formed by two adjacent inner contour feature points and the two corresponding outer contour feature points, and the filling operation is repeated until every area between the inner and outer contour feature points is filled with the material. Fig. 3 shows an example of filling material: the material has length L and width H, that is, the distance between adjacent inner contour feature points is L and the distance between an inner contour feature point and its corresponding outer contour feature point is H, so the area 1ab2 is exactly the size of one material tile and is just filled by the preset material, as shown by the shaded portion of area 1ab2 in fig. 3. In one embodiment, the length and width attributes of the preset material are obtained in advance; when the inner contour feature points are generated in step S102, the contour of the target object is sampled with the material length as the sampling distance, and in step S103 the outer contour feature points are obtained using the material width. In another embodiment, the distance between inner contour feature points is not L but n times or 1/n of L, in which case the material is stretched or cut before being filled in.
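As an illustrative sketch (the disclosure does not fix an implementation), each area such as 1ab2 can be filled by perspective-warping one material tile into the quadrilateral formed by two adjacent inner points and their corresponding outer points. The OpenCV-based function below assumes the point arrays produced by the earlier sketches; its name and interface are our own.

```python
import cv2
import numpy as np

def fill_strip(canvas: np.ndarray, material: np.ndarray,
               inner: np.ndarray, outer: np.ndarray) -> np.ndarray:
    """Warp one material tile into each quad (inner i, inner i+1, outer i+1, outer i).

    Assumes `material` is an H x L color image and that the point spacing
    follows the L / H convention above, so each quad such as 1ab2 in FIG. 3
    receives one perspective-warped copy of the material.
    """
    h, w = material.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])   # material corners
    tile_mask = np.full((h, w), 255, np.uint8)
    canvas_size = (canvas.shape[1], canvas.shape[0])     # (width, height)
    for i in range(len(inner)):
        j = (i + 1) % len(inner)
        dst = np.float32([inner[i], inner[j], outer[j], outer[i]])
        m = cv2.getPerspectiveTransform(src, dst)
        warped = cv2.warpPerspective(material, m, canvas_size)
        mask = cv2.warpPerspective(tile_mask, m, canvas_size)
        canvas[mask > 0] = warped[mask > 0]              # paste the tile
    return canvas
```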
In one embodiment, before segmenting the target image to obtain the contour of the target object, the method further includes setting a correspondence between preset materials and target images. In this embodiment, multiple materials may be prepared in advance to correspond to multiple target images, which may be pictures or the video frames of a video. When the target images are video frames, corresponding materials are set for the individual frames, so that a different stroking effect is generated for each frame and the stroking effect changes as the video plays.
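A minimal sketch of such a correspondence follows, with hypothetical material file names (the disclosure does not specify a storage format); cycling materials by frame index is one simple way to make the stroke change as the video plays.

```python
# Hypothetical material files; the disclosure does not fix a storage format.
materials = ["stroke_red.png", "stroke_blue.png", "stroke_gold.png"]

def material_for_frame(frame_index: int) -> str:
    """Cycle materials by frame index so the stroke changes during playback."""
    return materials[frame_index % len(materials)]
```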
In one embodiment, before segmenting the target image to obtain the contour of the target object, the method further includes selecting a target object. The target object may be any object that can be segmented from the target image; in this embodiment it may be a human body, various animals such as cats and dogs, plants, buildings, and the like. Selecting a different target object invokes a different object segmentation algorithm, so that the user can flexibly adjust which object is to be segmented.
In one embodiment, before segmenting the target image to obtain the contour of the target object, the method further includes selecting the serial number of the target object to be segmented. Multiple target objects may exist in the target image, for example several human bodies in a video frame. In that case, the serial number of the target object to be processed can be preset: if the serial number is set to 1, the image processing of the present disclosure is performed on the first segmented human body; if it is set to 0, it is performed on all segmented human bodies, as shown in the sketch below.
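A sketch of this serial-number convention (illustrative only; the list-of-masks interface is our assumption):

```python
def select_targets(masks: list, serial: int) -> list:
    """Apply the serial-number convention: 0 means all objects, k means the k-th.

    `masks` is assumed to be per-object segmentation masks in detection order.
    """
    return masks if serial == 0 else [masks[serial - 1]]
```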
In one embodiment, display attributes of the contour may be set. For example, a certain section of the contour may be set not to be displayed in one or more frames, or sections of the contour may be displayed at random, producing a flickering effect of the stroke.
Fig. 4 shows an example of the effect obtained after an image is processed by the image processing method of the present disclosure: the target object is a human body, and the human body is stroked to highlight its position in the image.
In the above, although the steps in the above method embodiments are described in the above sequence, it should be clear to those skilled in the art that the steps in the embodiments of the present disclosure are not necessarily performed in the above sequence, and may also be performed in other sequences such as reverse, parallel, and cross, and further, on the basis of the above steps, other steps may also be added by those skilled in the art, and these obvious modifications or equivalent alternative modes should also be included in the protection scope of the present disclosure, and are not described herein again.
For convenience of description, only the parts relevant to the embodiments of the present disclosure are shown; for specific technical details that are not disclosed here, please refer to the method embodiments of the present disclosure.
The embodiment of the disclosure provides an image processing apparatus. The apparatus may perform the steps described in the above-described image processing method embodiments. As shown in fig. 5, the apparatus 500 mainly includes: a contour acquisition module 501, an inner contour feature point generation module 502, an outer contour feature point generation module 503, and a filling module 504. Wherein,
the contour acquisition module 501 is configured to segment a target image to obtain a contour of a target object;
an inner contour feature point generating module 502, configured to generate inner contour feature points of the target object according to the contour of the target object;
an outer contour feature point generating module 503, configured to generate an outer contour feature point according to the inner contour feature point;
a filling module 504, configured to fill a preset material into an area between the inner contour feature point and the outer contour feature point.
Further, the inner contour feature point generating module 502 is configured to generate the inner contour feature points along a contour line of the target object, where distances between adjacent inner contour feature points are the same.
Further, the distance between the adjacent inner contour feature points is the length of the material.
Further, the outer contour feature point generating module 503 includes an outer contour feature point generating submodule for generating an outer contour feature point according to the inner contour feature point in a direction away from the target object.
Furthermore, the outer contour feature point generation submodule makes a perpendicular line to a line segment formed by connecting a first inner contour feature point and a second inner contour feature point through the first inner contour feature point, wherein the first inner contour feature point and the second inner contour feature point are adjacent inner contour feature points; and taking a first point in the direction of the perpendicular line far away from the target object, wherein the length of the line between the first point and the first inner contour feature point is a preset length, and taking the first point as the outer contour feature point corresponding to the first inner contour feature point.
Further, the outer contour feature point generation submodule is further configured to: generating two different outer contour feature points through a third inner contour feature point, wherein the two different outer contour feature points are respectively a first auxiliary outer contour feature point and a second auxiliary outer contour feature point, and the third inner contour point is an inner contour feature point at the outline inflection point; calculating the intersection point of a straight line where a first auxiliary outer contour feature point and a second outer contour feature point are located and a straight line where a second auxiliary outer contour feature point and a fourth outer contour feature point are located, wherein the second outer contour feature point is an outer contour feature point corresponding to a second inner contour feature point, the fourth outer contour feature point is an outer contour feature point corresponding to a fourth inner contour feature point, and the second inner contour feature point and the fourth inner contour feature point are two inner contour feature points adjacent to the third inner contour feature point; and taking the intersection point as a third outer contour feature point corresponding to the third inner contour feature point.
Further, the distance between the inner contour feature point and the outer contour feature point corresponding to the inner contour feature point is the width of the material.
Further, the filling module 504 is configured to fill the material into an area formed by two adjacent inner contour feature points and two adjacent outer contour feature points corresponding to the two adjacent inner contour feature points; repeating the filling operation until all the areas between the inner contour characteristic points and the outer contour characteristic points are filled with the material.
Further, the contour obtaining module 501 is configured to obtain a video, and segment a video frame image in the video; separating a target object from other objects in the video frame image; and obtaining the outline of the target object.
Further, the image processing apparatus 500 further includes a corresponding relationship setting module, configured to set a corresponding relationship between a preset material and a target image.
The apparatus shown in fig. 5 can perform the method of the embodiment shown in fig. 1, and reference may be made to the related description of the embodiment shown in fig. 1 for a part of this embodiment that is not described in detail. The implementation process and technical effect of the technical solution refer to the description in the embodiment shown in fig. 1, and are not described herein again.
Referring now to FIG. 6, a block diagram of an electronic device 600 suitable for use in implementing embodiments of the present disclosure is shown. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., car navigation terminals), and the like, and fixed terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, electronic device 600 may include a processing means (e.g., central processing unit, graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage device 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic apparatus 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, image sensor, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication device 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608 or from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring at least two internet protocol addresses; sending a node evaluation request comprising the at least two internet protocol addresses to node evaluation equipment, wherein the node evaluation equipment selects the internet protocol addresses from the at least two internet protocol addresses and returns the internet protocol addresses; receiving an internet protocol address returned by the node evaluation equipment; wherein the obtained internet protocol address indicates an edge node in the content distribution network.
Alternatively, the computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receiving a node evaluation request comprising at least two internet protocol addresses; selecting an internet protocol address from the at least two internet protocol addresses; returning the selected internet protocol address; wherein the received internet protocol address indicates an edge node in the content distribution network.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Smalltalk, or C++, as well as conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a unit does not in some cases constitute a limitation of the unit itself, for example, the first retrieving unit may also be described as a "unit for retrieving at least two internet protocol addresses".
The foregoing description is only of preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be understood by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combination of the above features, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, solutions in which the above features are replaced with (but not limited to) features with similar functions disclosed in this disclosure.
Claims (13)
1. An image processing method, comprising:
segmenting the target image to obtain the contour of the target object;
generating inner contour feature points of the target object according to the contour of the target object;
generating outer contour characteristic points according to the inner contour characteristic points;
and filling preset materials into the area between the inner contour characteristic point and the outer contour characteristic point.
2. The image processing method of claim 1, wherein generating inner contour feature points of a target object from a contour of the target object comprises:
and generating the inner contour feature points along the contour line of the target object.
3. The image processing method according to claim 2, characterized in that: the distances between adjacent inner contour feature points are the same.
4. The image processing method of claim 1, wherein said generating outer contour feature points from said inner contour feature points comprises:
and generating outer contour feature points in the direction of the inner contour feature points far away from the target object according to the inner contour feature points.
5. The image processing method as claimed in claim 4, wherein said generating outer contour feature points in a direction of the inner contour feature points away from the target object based on the inner contour feature points comprises:
making a perpendicular line to a line segment formed by connecting a first inner contour feature point and a second inner contour feature point through the first inner contour feature point, wherein the first inner contour feature point and the second inner contour feature point are adjacent inner contour feature points;
and taking a first point in the direction of the perpendicular line far away from the target object, wherein the length of the line between the first point and the first inner contour characteristic point is a preset length, and taking the first point as the outer contour characteristic point corresponding to the first inner contour characteristic point.
6. The image processing method as claimed in claim 5, wherein said generating outer contour feature points in a direction of said inner contour feature points away from the target object based on said inner contour feature points, further comprises:
generating two different outer contour feature points through a third inner contour feature point, wherein the two different outer contour feature points are respectively a first auxiliary outer contour feature point and a second auxiliary outer contour feature point, and the third inner contour point is an inner contour feature point at a contour turning point of the target object;
calculating the intersection point of a straight line where a first auxiliary outer contour feature point and a second outer contour feature point are located and a straight line where a second auxiliary outer contour feature point and a fourth outer contour feature point are located, wherein the second outer contour feature point is an outer contour feature point corresponding to a second inner contour feature point, the fourth outer contour feature point is an outer contour feature point corresponding to a fourth inner contour feature point, and the second inner contour feature point and the fourth inner contour feature point are two inner contour feature points adjacent to the third inner contour feature point;
and taking the intersection point as a third outer contour feature point corresponding to the third inner contour feature point.
7. The image processing method according to claim 1, characterized in that: and the distance between the inner contour characteristic point and the outer contour characteristic point corresponding to the inner contour characteristic point is the width of the material.
8. The image processing method as claimed in claim 1, wherein said filling a preset material into a region between said inner contour feature point and said outer contour feature point comprises:
filling the material into an area formed by two adjacent inner contour characteristic points and two adjacent outer contour characteristic points corresponding to the two adjacent inner contour characteristic points;
repeating the filling operation until all the areas between the inner contour characteristic points and the outer contour characteristic points are filled with the material.
9. The image processing method of claim 1, wherein segmenting the target image to obtain the contour of the target object comprises:
acquiring a video, and segmenting a video frame image in the video;
separating the target object from other objects in the video frame image;
and obtaining the contour of the target object.
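Claim 9 leaves the segmentation method open. One hypothetical realization, sketched with OpenCV and a simple Otsu threshold in place of a proper foreground or portrait segmentation model, reads each frame of the video and keeps the largest external contour as the target object's outline.

```python
import cv2

def contour_per_frame(video_path):
    """Segment each video frame and return the largest external contour."""
    cap = cv2.VideoCapture(video_path)
    contours = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break  # end of video
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # stand-in segmentation: Otsu threshold separates object from background
        _, mask = cv2.threshold(gray, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        found, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                    cv2.CHAIN_APPROX_SIMPLE)
        if found:
            contours.append(max(found, key=cv2.contourArea))
    cap.release()
    return contours
```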
10. The image processing method of claim 1, further comprising, before segmenting the target image to obtain the contour of the target object:
setting a correspondence between the preset material and the target image.
11. An image processing apparatus characterized by comprising:
a contour acquisition module, configured to segment a target image to obtain a contour of a target object;
an inner contour feature point generation module, configured to generate inner contour feature points of the target object according to the contour of the target object;
an outer contour feature point generation module, configured to generate outer contour feature points according to the inner contour feature points;
and a filling module, configured to fill a preset material into the region between the inner contour feature points and the outer contour feature points.
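The four modules of claim 11 mirror the method steps. A minimal sketch, with each module reduced to a plain callable (the class and parameter names here are hypothetical, not the patent's):

```python
class ImageProcessingPipeline:
    """Chain the four modules of claim 11: contour acquisition, inner point
    generation, outer point generation, and material filling."""

    def __init__(self, acquire_contour, gen_inner, gen_outer, fill):
        self.acquire_contour = acquire_contour  # image -> contour
        self.gen_inner = gen_inner              # contour -> inner feature points
        self.gen_outer = gen_outer              # inner points -> outer feature points
        self.fill = fill                        # (image, inner, outer) -> image

    def run(self, target_image):
        contour = self.acquire_contour(target_image)
        inner = self.gen_inner(contour)
        outer = self.gen_outer(inner)
        return self.fill(target_image, inner, outer)
```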
12. An electronic device, comprising:
a memory for storing non-transitory computer-readable instructions; and
a processor for executing the computer-readable instructions such that, when executing them, the processor implements the image processing method according to any one of claims 1-10.
13. A computer-readable storage medium storing non-transitory computer-readable instructions which, when executed by a computer, cause the computer to perform the image processing method of any one of claims 1-10.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811222643.5A CN110070554A (en) | 2018-10-19 | 2018-10-19 | Image processing method, device, hardware device |
PCT/CN2019/073082 WO2020077912A1 (en) | 2018-10-19 | 2019-01-25 | Image processing method, device, and hardware device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811222643.5A CN110070554A (en) | 2018-10-19 | 2018-10-19 | Image processing method, device, hardware device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110070554A (en) | 2019-07-30 |
Family
ID=67365892
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811222643.5A Pending CN110070554A (en) | 2018-10-19 | 2018-10-19 | Image processing method, device, hardware device |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110070554A (en) |
WO (1) | WO2020077912A1 (en) |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9330495B2 (en) * | 2012-08-09 | 2016-05-03 | Qualcomm Incorporated | Extending DX11 GPU for programmable vector graphics |
CN105513006B (en) * | 2014-10-16 | 2018-10-19 | 北京汉仪科印信息技术有限公司 | TrueType font outline thickness adjustment method and device |
- 2018-10-19: CN CN201811222643.5A patent/CN110070554A/en (status: active, Pending)
- 2019-01-25: WO PCT/CN2019/073082 patent/WO2020077912A1/en (status: active, Application Filing)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5751852A (en) * | 1996-04-29 | 1998-05-12 | Xerox Corporation | Image structure map data structure for spatially indexing an image |
US6300955B1 (en) * | 1997-09-03 | 2001-10-09 | Mgi Software Corporation | Method and system for mask generation |
CN101950427A (en) * | 2010-09-08 | 2011-01-19 | 东莞电子科技大学电子信息工程研究院 | Vector line segment contouring method applicable to mobile terminal |
CN108399654A (en) * | 2018-02-06 | 2018-08-14 | 北京市商汤科技开发有限公司 | Generation of stroke special effect program file package, and stroke special effect generation method and device |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112308769A (en) * | 2020-10-30 | 2021-02-02 | 脸萌有限公司 | Image synthesis method, apparatus and storage medium |
CN112308769B (en) * | 2020-10-30 | 2022-06-10 | 北京字跳网络技术有限公司 | Image synthesis method, apparatus and storage medium |
US12079911B2 (en) | 2020-10-30 | 2024-09-03 | Beijing Zitiao Network Technology Co., Ltd. | Method for image synthesis, device for image synthesis and storage medium |
CN112581620A (en) * | 2020-11-30 | 2021-03-30 | 北京达佳互联信息技术有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN112581620B (en) * | 2020-11-30 | 2024-07-02 | 北京达佳互联信息技术有限公司 | Image processing method, device, electronic equipment and storage medium |
CN114125320A (en) * | 2021-08-31 | 2022-03-01 | 北京达佳互联信息技术有限公司 | Method and device for generating image special effect |
WO2023029379A1 (en) * | 2021-08-31 | 2023-03-09 | 北京达佳互联信息技术有限公司 | Image special effect generation method and apparatus |
Also Published As
Publication number | Publication date |
---|---|
WO2020077912A1 (en) | 2020-04-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110058685B (en) | Virtual object display method and device, electronic equipment and computer-readable storage medium | |
CN110070896B (en) | Image processing method, device and hardware device | |
CN110070555A (en) | Image processing method, device, hardware device | |
CN110378947B (en) | 3D model reconstruction method and device and electronic equipment | |
CN110035236A (en) | Image processing method, device and electronic equipment | |
CN110211017B (en) | Image processing method and device and electronic equipment | |
CN110288519A (en) | Image beautification method, device and electronic equipment | |
CN110070554A (en) | Image processing method, device, hardware device | |
CN111862349A (en) | Virtual brush implementation method and device and computer readable storage medium | |
CN112734910A (en) | Real-time human face three-dimensional image reconstruction method and device based on RGB single image and electronic equipment | |
CN111803952A (en) | Topographic map editing method and device, electronic equipment and computer readable medium | |
CN114422698B (en) | Video generation method, device, equipment and storage medium | |
CN112258622B (en) | Image processing method and device, readable medium and electronic equipment | |
CN110378948B (en) | 3D model reconstruction method and device and electronic equipment | |
CN110264430B (en) | Video beautifying method and device and electronic equipment | |
CN110069641B (en) | Image processing method and device and electronic equipment | |
CN110264431A (en) | Video beautification method, device and electronic equipment | |
CN116360661A (en) | Special effect processing method and device, electronic equipment and storage medium | |
CN110209861A (en) | Image processing method, device, electronic equipment and computer readable storage medium | |
CN115953504A (en) | Special effect processing method and device, electronic equipment and storage medium | |
CN110390717B (en) | 3D model reconstruction method and device and electronic equipment | |
CN110363860B (en) | 3D model reconstruction method and device and electronic equipment | |
CN111696041B (en) | Image processing method and device and electronic equipment | |
CN111223105B (en) | Image processing method and device | |
CN111292276B (en) | Image processing method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| CB02 | Change of applicant information | Country or region after: China; Address after: 2nd Floor, Building 4, No. 18 North Third Ring West Road, Haidian District, Beijing, 2022; Applicant after: Tiktok Technology Co.,Ltd.; Address before: 100080 408, 4th floor, 51 Zhichun Road, Haidian District, Beijing; Applicant before: BEIJING MICROLIVE VISION TECHNOLOGY Co.,Ltd.; Country or region before: China |
Country or region after: China Address after: 2nd Floor, Building 4, No. 18 North Third Ring West Road, Haidian District, Beijing, 2022 Applicant after: Tiktok Technology Co.,Ltd. Address before: 100080 408, 4th floor, 51 Zhichun Road, Haidian District, Beijing Applicant before: BEIJING MICROLIVE VISION TECHNOLOGY Co.,Ltd. Country or region before: China |