CN110211211A - Image processing method, device, electronic equipment and storage medium - Google Patents
- Publication number: CN110211211A
- Application number: CN201910340750.6A
- Authority: CN (China)
- Prior art keywords: key point, image, target, target edges, rendered particle
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T15/005: General purpose rendering architectures (3D [three-dimensional] image rendering)
- G06T7/13: Edge detection (image analysis; segmentation)
- G06T7/194: Segmentation and edge detection involving foreground-background segmentation
- G06T2207/20192: Edge enhancement; edge preservation (image enhancement details)
- G06T2207/30196: Human being; person (subject of image)
- G06T2207/30201: Face (subject of image)
Abstract
The present disclosure provides an image processing method, an apparatus, an electronic device and a storage medium. The method includes: obtaining the object key points contained in an image to be processed; obtaining an object mesh map from the object key points; performing edge processing on the object mesh map to obtain a target edge map; and controlling a rendered particle to be in a hovering state when the rendered particle is located at an edge position contained in the target edge map, the rendered particle being used to render the image to be processed. Thus, in the image processing method provided by the embodiments of the present disclosure, the target edge map is obtained by combining object key point extraction with edge processing, and rendered particles hover at the object key points, so that the rendered particles exhibit a variety of rendering forms, which improves the rendering effect and enhances the visual effect.
Description
Technical field
The present disclosure relates to the technical field of image processing, and in particular to an image processing method, an apparatus, an electronic device and a storage medium.
Background
With the development of terminal technology, different renderings can currently be applied to images, for example by adding rendered particles such as fireworks, fallen leaves or snowflakes to an image, so that the rendered image better meets the visual demands of users.
However, at present the rendered particles in a rendered image move along predetermined motion trajectories, so the flexibility of the rendered particles is poor, which reduces the user experience.
Summary of the invention
To overcome the problems in correlation technique, the disclosure provide a kind of image processing method, device, electronic equipment and
Storage medium.
According to a first aspect of the embodiments of the present disclosure, an image processing method is provided. The method includes:
obtaining the object key points contained in an image to be processed;
obtaining an object mesh map from the object key points;
performing edge processing on the object mesh map to obtain a target edge map; and
controlling a rendered particle to be in a hovering state when the rendered particle is located at an edge position contained in the target edge map, the rendered particle being used to render the image to be processed.
Optionally, obtaining the object key points contained in the image to be processed includes:
obtaining, through a key point extraction model, the object key points contained in the image to be processed.
Optionally, after performing edge processing on the object mesh map to obtain the target edge map, the method further includes:
obtaining the pixel value of each pixel contained in the target edge map; and
judging, from the pixel values, whether the rendered particle is located at an edge position contained in the target edge map.
Optionally, judging from the pixel values whether the rendered particle is located at an edge position contained in the target edge map includes:
obtaining a current position of the rendered particle on the target edge map;
obtaining, from the pixel values, a target pixel value corresponding to the current position;
determining, when the target pixel value is within a preset pixel value range, that the rendered particle is located at an edge position contained in the target edge map; and
determining, when the target pixel value is outside the preset pixel value range, that the rendered particle is not located at an edge position contained in the target edge map.
Optionally, after performing edge processing on the object mesh map to obtain the target edge map, the method further includes:
controlling the rendered particle to move along a predetermined motion trajectory when the rendered particle is not located at an edge position contained in the target edge map.
Optionally, before obtaining the object mesh map from the object key points, the method further includes:
judging whether the number of object key points is less than or equal to a preset threshold; and
obtaining object extension points from the object key points when the number of object key points is less than or equal to the preset threshold.
In this case, obtaining the object mesh map from the object key points includes:
obtaining the object mesh map from the object key points and the object extension points.
According to a second aspect of the embodiments of the present disclosure, an image processing apparatus is provided. The apparatus includes:
a key point obtaining module, configured to obtain the object key points contained in an image to be processed;
a mesh map obtaining module, configured to obtain an object mesh map from the object key points;
an edge map obtaining module, configured to perform edge processing on the object mesh map to obtain a target edge map; and
a hover control module, configured to control a rendered particle to be in a hovering state when the rendered particle is located at an edge position contained in the target edge map, the rendered particle being used to render the image to be processed.
Optionally, the key point obtaining module is configured to obtain, through a key point extraction model, the object key points contained in the image to be processed.
Optionally, the apparatus further includes:
a pixel value obtaining module, configured to obtain the pixel value of each pixel contained in the target edge map; and
a position judging module, configured to judge, from the pixel values, whether the rendered particle is located at an edge position contained in the target edge map.
Optionally, the position judging module includes:
a position obtaining submodule, configured to obtain a current position of the rendered particle on the target edge map;
a target pixel value obtaining submodule, configured to obtain, from the pixel values, a target pixel value corresponding to the current position;
an edge position determining submodule, configured to determine, when the target pixel value is within a preset pixel value range, that the rendered particle is located at an edge position contained in the target edge map; and
a non-edge position determining submodule, configured to determine, when the target pixel value is outside the preset pixel value range, that the rendered particle is not located at an edge position contained in the target edge map.
Optionally, the apparatus further includes:
a motion control module, configured to control the rendered particle to move along a predetermined motion trajectory when the rendered particle is not located at an edge position contained in the target edge map.
Optionally, the apparatus further includes:
a key point judging module, configured to judge whether the number of object key points is less than or equal to a preset threshold; and
an extension point obtaining module, configured to obtain object extension points from the object key points when the number of object key points is less than or equal to the preset threshold.
In this case, the mesh map obtaining module is configured to obtain the object mesh map from the object key points and the object extension points.
According to a third aspect of the embodiments of the present disclosure, an electronic device is provided, including:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to execute the image processing method described above.
According to a fourth aspect of the embodiments of the present disclosure, a non-transitory computer-readable storage medium is provided; when the instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to execute the image processing method described above.
According to a fifth aspect of the embodiments of the present disclosure, a computer program product is provided, including one or more instructions; when the one or more instructions are executed by a processor of an electronic device, the electronic device is enabled to execute the image processing method described above.
The technical solutions provided by the embodiments of the present disclosure may include the following beneficial effects. The image processing method shown in the present exemplary embodiments obtains the object key points contained in an image to be processed, obtains an object mesh map from the object key points, performs edge processing on the object mesh map to obtain a target edge map, and controls a rendered particle to be in a hovering state when the rendered particle is located at an edge position contained in the target edge map, the rendered particle being used to render the image to be processed. Thus, in the image processing method provided by the embodiments of the present disclosure, the target edge map is obtained by combining object key point extraction with edge processing, and rendered particles hover at the object key points, so that the rendered particles exhibit a variety of rendering forms, which improves the rendering effect and enhances the visual effect.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief description of the drawings
The accompanying drawings herein are incorporated into and form part of this specification; they show embodiments consistent with the present disclosure and, together with the specification, serve to explain the principles of the present disclosure.
Fig. 1 is a flowchart of an image processing method according to an exemplary embodiment;
Fig. 2 is another flowchart of an image processing method according to an exemplary embodiment;
Fig. 3 is a schematic diagram of an image to be processed according to an exemplary embodiment;
Fig. 4 is a schematic diagram of an image to be processed labeled with object key points according to an exemplary embodiment;
Fig. 5 is a schematic diagram of an object mesh map according to an exemplary embodiment;
Fig. 6 is a schematic diagram of a target edge map according to an exemplary embodiment;
Fig. 7 is a schematic diagram of a rendering effect image according to an exemplary embodiment;
Fig. 8 is a block diagram of a first image processing apparatus according to an exemplary embodiment;
Fig. 9 is a block diagram of a second image processing apparatus according to an exemplary embodiment;
Fig. 10 is a block diagram of a third image processing apparatus according to an exemplary embodiment;
Fig. 11 is a block diagram of a fourth image processing apparatus according to an exemplary embodiment;
Fig. 12 is a block diagram of a fifth image processing apparatus according to an exemplary embodiment;
Fig. 13 is a structural block diagram of an electronic device according to an exemplary embodiment.
Detailed description
Exemplary embodiments will be described in detail here, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as recited in the appended claims.
Fig. 1 is a flowchart of an image processing method according to an exemplary embodiment, which may include the following steps.
In step 101, the object key points contained in an image to be processed are obtained.
In the embodiments of the present disclosure, the image to be processed may be an image containing a target object. For example, if the target object is a face, the image to be processed is an image containing a face; as another example, if the target object is a human body, the image to be processed is an image containing a human body, and so on.
The object key points are key points preset for the target object. For example, if the target object is a face, the object key points may include the eyes, nose, mouth, eyebrows and face contour; as another example, if the target object is a human body, the object key points may include the head, neck, shoulders, elbows, hands, arms, knees and feet. The above examples are merely illustrative, and the present disclosure is not limited thereto.
In addition, the present disclosure may render every frame image in a video clip according to the image processing method described herein. However, considering that a video clip contains many frames, which would cause high processing pressure, the present disclosure may instead obtain the image to be processed from the video clip according to a preset collection rule, the preset collection rule being, for example, to collect once every m frames. Illustratively, if the preset collection rule is to collect one image every 5 frames, the image to be processed may be obtained from the video clip every 5 frames; by collecting images to be processed at a preset period, the image processing pressure is reduced.
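The preset collection rule described above can be sketched as follows; this is a minimal illustration assuming the video clip is available as a list of frames (the names `sample_frames` and `interval` are illustrative, not from the patent).

```python
def sample_frames(frames, interval=5):
    """Collect one frame to process every `interval` frames,
    instead of rendering every frame of the video clip."""
    return frames[::interval]

# A 20-frame clip sampled every 5 frames yields frames 0, 5, 10 and 15.
clip = list(range(20))
to_process = sample_frames(clip, interval=5)
```

Only the sampled frames then go through the key point, mesh and edge steps below, which is what reduces the processing pressure.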
In step 102, an object mesh map is obtained from the object key points.
In the embodiments of the present disclosure, a plurality of to-be-connected point sets may first be obtained from the object key points, each to-be-connected point set containing a specified number of object key points; different to-be-connected point sets may contain the same object key point. Further, so that the subsequent processing result is accurate, the object mesh map in the present disclosure may contain all of the object key points; therefore each object key point belongs to at least one to-be-connected point set. The specified number of object key points in each to-be-connected point set are then connected to obtain a polygon, and the number of sides of the polygon may equal the specified number.
Optionally, the specified number may be 3, in which case each to-be-connected point set described above contains three object key points, so that the object mesh map in the present disclosure is a mesh map composed of a plurality of triangles.
In addition, the more refined the object mesh map is, the better the edge extraction effect of the subsequent target edge map; therefore, when obtaining the to-be-connected point sets, a specified number of adjacent object key points may be taken to form each to-be-connected point set.
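The grouping of key points into three-point to-be-connected sets can be sketched with a deliberately simple fan triangulation over an ordered convex contour; the patent does not prescribe a triangulation scheme (a production implementation might well use Delaunay triangulation instead), so this helper is an illustrative assumption.

```python
def fan_triangulate(points):
    """Group an ordered list of key points into to-be-connected point
    sets of three: the first point is a shared anchor, and each
    adjacent pair of the remaining points completes one triangle.
    Every key point appears in at least one set, and adjacent sets
    share key points, as described above."""
    return [(0, i, i + 1) for i in range(1, len(points) - 1)]

# Five ordered key points on a convex contour yield three triangles.
contour = [(0, 0), (2, 0), (3, 1), (2, 2), (0, 2)]
mesh = fan_triangulate(contour)  # [(0, 1, 2), (0, 2, 3), (0, 3, 4)]
```

Drawing the edges of each returned triangle over the image gives a mesh map of the kind shown in Fig. 5.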
In step 103, edge processing is performed on the object mesh map to obtain a target edge map.
In the embodiments of the present disclosure, edge processing may be performed on the object mesh map by a preset edge processing algorithm to obtain the target edge map. The edge processing algorithm may include at least one of the following: the Sobel edge detection algorithm, the Laplace edge detection algorithm, the Canny edge detection algorithm, the Roberts edge detection algorithm, the Prewitt edge detection algorithm, and so on.
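As one example of the edge processing algorithms listed above, the Sobel operator can be sketched directly in NumPy; this minimal implementation leaves image borders at zero and is for illustration only.

```python
import numpy as np

def sobel_edges(img):
    """Apply the 3x3 Sobel operator and return the gradient magnitude.
    `img` is a 2D float array; border pixels are left at zero for brevity."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    mag = np.zeros_like(img, dtype=float)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            gx = np.sum(kx * patch)  # horizontal gradient
            gy = np.sum(ky * patch)  # vertical gradient
            mag[y, x] = np.hypot(gx, gy)
    return mag

# A vertical step between a black half and a white half produces a strong
# response along the boundary and no response in the flat regions.
img = np.zeros((5, 6))
img[:, 3:] = 255.0
edges = sobel_edges(img)
```

Thresholding `edges` then yields a binary edge map of the kind used in the following steps.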
In step 104, when a rendered particle is located at an edge position contained in the target edge map, the rendered particle is controlled to be in a hovering state, the rendered particle being used to render the image to be processed.
In the embodiments of the present disclosure, the edge positions contained in the target edge map may be the positions formed by the object key points; therefore, in order to achieve the effect of rendered particles hovering at the object key points, the rendered particle is controlled to be in a hovering state when it is located at an edge position contained in the target edge map.
In addition, a rendered particle is usually provided with a predetermined motion trajectory along which it moves, so that while moving along this trajectory the particle can reach an edge position contained in the target edge map. Illustratively, if the rendered particles are a plurality of petals whose motion trajectory runs from one side of the image to be processed to the other, the petals move from their respective initial positions on one side towards the other side; and, to make the rendering effect more lively, a corresponding motion form, such as a rotation angle and a movement velocity, may also be set for each petal.
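The hover control of step 104 can be sketched as follows, assuming the target edge map is a 2D array whose edge pixels equal 255; the function and variable names are illustrative, not from the patent.

```python
import numpy as np

def step_particle(pos, velocity, edge_map, edge_value=255):
    """Advance one particle along its predetermined trajectory, but
    switch it to a hovering state when it sits on an edge pixel."""
    y, x = int(round(pos[0])), int(round(pos[1]))
    on_edge = (0 <= y < edge_map.shape[0] and 0 <= x < edge_map.shape[1]
               and edge_map[y, x] == edge_value)
    if on_edge:
        return pos, True  # hover: stop following the trajectory
    return (pos[0] + velocity[0], pos[1] + velocity[1]), False

# Edge map with a single white edge column at x == 2; a particle drifting
# right moves until it reaches the edge, then hovers there.
edge_map = np.zeros((4, 5))
edge_map[:, 2] = 255

pos, hovering = step_particle((1.0, 0.0), (0.0, 1.0), edge_map)  # moves right
pos, hovering = step_particle(pos, (0.0, 1.0), edge_map)         # x == 1, still moving
pos, hovering = step_particle(pos, (0.0, 1.0), edge_map)         # x == 2: on edge, hovers
```

Running such a step per frame for every particle reproduces the described effect of petals drifting across the image and settling along the object's contour.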
With the above method, the object key points contained in an image to be processed are obtained; an object mesh map is obtained from the object key points; edge processing is performed on the object mesh map to obtain a target edge map; and, when a rendered particle is located at an edge position contained in the target edge map, the rendered particle is controlled to be in a hovering state, the rendered particle being used to render the image to be processed. Thus, in the image processing method provided by the embodiments of the present disclosure, the target edge map is obtained by combining object key point extraction with edge processing, and rendered particles hover at the object key points, so that the rendered particles exhibit a variety of rendering forms, which improves the rendering effect and enhances the visual effect.
Fig. 2 is another flowchart of an image processing method according to an exemplary embodiment, which may specifically include the following steps.
In step 201, the object key points contained in the image to be processed are obtained through a key point extraction model.
In the embodiments of the present disclosure, the key point extraction model may be constructed in advance as follows: first, object images corresponding to target objects are collected, and the object images are annotated with object key points; the object images are then used as input to a preset convolutional neural network to obtain predicted image key points; a loss function is constructed from the predicted image key points and the annotated object key points; finally, the preset convolutional neural network is updated by regression training until the loss function meets an iteration termination condition. Illustratively, the iteration termination condition may be that the value of the loss function reaches a minimum. The present disclosure may also use an existing key point extraction model, such as a DAN (Deep Alignment Network) model, to obtain the object key points contained in the image to be processed; the key point extraction model is not elaborated here, and reference may be made to the prior art.
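The loss function constructed from the predicted and annotated key points can be illustrated with a mean squared error, a common choice for landmark regression; the patent does not specify the exact loss form, so MSE and the helper name `keypoint_loss` are assumptions for illustration.

```python
import numpy as np

def keypoint_loss(predicted, annotated):
    """Mean squared error between predicted key point coordinates and
    the annotated ground truth; regression training would minimise
    this value until the iteration termination condition is met."""
    predicted = np.asarray(predicted, dtype=float)
    annotated = np.asarray(annotated, dtype=float)
    return float(np.mean((predicted - annotated) ** 2))

# Perfect predictions give zero loss; a uniform 1-pixel offset gives 1.0.
truth = [[10.0, 20.0], [30.0, 40.0]]
off_by_one = [[11.0, 21.0], [31.0, 41.0]]
zero_loss = keypoint_loss(truth, truth)
unit_loss = keypoint_loss(off_by_one, truth)
```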
In the embodiments of the present disclosure, the image to be processed may be an image containing a target object. For example, if the target object is a face, the image to be processed is an image containing a face; as another example, if the target object is a human body, the image to be processed is an image containing a human body, and so on.
The object key points are key points preset for the target object. For example, if the target object is a face, the object key points may include the eyes, nose, mouth, eyebrows and face contour; as another example, if the target object is a human body, the object key points may include the head, neck, shoulders, elbows, hands, arms, knees and feet. The above examples are merely illustrative, and the present disclosure is not limited thereto.
As shown in Fig. 3, an image to be processed is shown in which the target object is a face. In this way, through the key point extraction model, the object key points corresponding to the image to be processed shown in Fig. 3 are obtained, the object key points being indicated by the markers in Fig. 4.
In step 202, it is judged whether the number of object key points is less than or equal to a preset threshold.
This step considers that, when there are too few object key points, the requirements of the object mesh map cannot be satisfied; it is therefore necessary to judge whether the number of object key points is less than or equal to the preset threshold.
When the number of object key points is less than or equal to the preset threshold, steps 203, 204 and 206 to 208 are executed; when the number of object key points is greater than the preset threshold, steps 205 to 208 are executed.
In step 203, object extension points are obtained from the object key points.
In the embodiments of the present disclosure, in one possible implementation, a plurality of key point combinations may be obtained, each key point combination containing two object key points. A target segment is obtained from each key point combination, the two object key points of the combination serving as the two endpoints of the target segment (a first endpoint and a second endpoint). A target point on the target segment is then obtained as the object extension point, where the distance from the target point to the first endpoint is a first distance, the distance from the target point to the second endpoint is a second distance, and the ratio of the first distance to the second distance satisfies a preset ratio. Illustratively, if the preset ratio is 1:1, the target point is the midpoint of the target segment. In the process of obtaining the plurality of key point combinations, two object key points may be selected at random to form a key point combination.
It should be noted that, since the object key points are the key points of the target object contained in the image to be processed, when the object type of the target object is determined, the distribution of the object key points can be obtained according to the object type. For example, if the target object is a face, the face is first divided into a plurality of regions based on the object type, such as the region between the mouth (inclusive) and the chin (inclusive), the region between the nose (inclusive) and the mouth, the region between the nose and the eyes, and the region between the eyes (inclusive) and the crown of the head (inclusive). The key point distribution in each region is then determined, i.e. whether the number of object key points in the region is greater than or equal to a preset number, and whether the object key points in the region conform to a preset distribution. If it is determined that the number of object key points in a region is less than the preset number, and/or the object key points in the region do not conform to the preset distribution, two object key points may be obtained from that region to form a key point combination. The above examples are merely illustrative, and the present disclosure is not limited thereto.
In another possible implementation, a reference point of the target object may be determined in advance; the center point of the target object is generally considered relatively stable, and if the target object is a face the reference point may include the nose. Rays may then be drawn through the reference point within a specified angle range, the specified angle range being the angle range corresponding to a specified region, and the specified region being a position where object key points need to be constructed; object extension points are obtained on these rays, for example by taking the point on a ray at a specified distance from the reference point as an object extension point, or by determining the object extension point according to human anatomical proportions. For example, if the target object is a face and few key points are detected for the eyebrows, multiple rays through the nose may be drawn within the specified angle range, such as a horizontal axis and a vertical axis through the nose; the specified angle range may include (45°, 80°) and (100°, 135°). The above manners of obtaining object extension points are merely illustrative, and the present disclosure is not limited thereto.
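The ratio-based construction of an object extension point on a target segment can be sketched as follows; the helper name and coordinate convention are illustrative assumptions.

```python
def extension_point(p1, p2, ratio=(1, 1)):
    """Return the point on the segment p1-p2 whose distances to the two
    endpoints satisfy the preset ratio a:b (1:1 gives the midpoint)."""
    a, b = ratio
    t = a / (a + b)  # fraction of the way from p1 towards p2
    return (p1[0] + t * (p2[0] - p1[0]), p1[1] + t * (p2[1] - p1[1]))

# Midpoint of two key points under the 1:1 ratio from the example above,
# and a point dividing a segment at a 1:2 ratio of the two distances.
mid = extension_point((0.0, 0.0), (4.0, 2.0))            # (2.0, 1.0)
third = extension_point((0.0, 0.0), (3.0, 0.0), (1, 2))  # near (1.0, 0.0)
```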
In step 204, the object mesh map is obtained from the object key points and the object extension points.
The manner of obtaining the object mesh map in this step is similar to the process in step 102 and is not repeated here.
In step 205, the object mesh map is obtained from the object key points.
The manner of obtaining the object mesh map in this step is similar to the process in step 102 and is not repeated here. Fig. 5 shows the object mesh map obtained by connecting the object key points in Fig. 4.
In step 206, edge processing is performed on the object mesh map to obtain the target edge map.
In this step, object segmentation is first performed on the object mesh map to obtain a target object image with the background separated, i.e. the background in the image to be processed is set to a specified color (such as black); edge processing is then performed on the target object image to obtain the target edge map. Illustratively, continuing the example of Fig. 5, the image processing in this step yields the target edge map shown in Fig. 6.
A shader and a memory in OpenGL (Open Graphics Library) can usually be used to perform the above processing: the image processing code of the present disclosure (such as the edge processing code, the code for obtaining the object grid map, the image rendering code, etc.) is written into the shader, so that the image processing is implemented by invoking the code written in the shader; and, for use in subsequent steps, the memory can store the pixel value of each pixel included in the target edge map together with the correspondence between each pixel value and its pixel position. In the present disclosure, the glReadPixels function in OpenGL may be used to store, to the memory, the pixel value of each pixel included in the target edge map and the correspondence between pixel values and pixel positions.
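The stored correspondence between pixel values and pixel positions can be modeled on the CPU side as a simple mapping built from one readback of the edge map. The sketch below only mimics that data layout with assumed names; it does not call the actual OpenGL glReadPixels API:

```python
def read_pixels(edge_map):
    """Model of the readback described above: store each pixel position
    together with its pixel value, so later steps can look up the value
    at a particle's current position in constant time."""
    store = {}
    for y, row in enumerate(edge_map):
        for x, value in enumerate(row):
            store[(x, y)] = value   # position -> pixel value
    return store

# A 2x2 edge map with two white edge pixels:
store = read_pixels([[0, 255], [255, 0]])
```

In the real pipeline this mapping would be filled once per frame from the rendered edge texture.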
In the embodiments of the present disclosure, edge processing may be performed on the object grid map by a preset edge processing algorithm to obtain the target edge map, where the edge processing algorithm may include at least one of the following: the Sobel edge detection algorithm, the Laplacian edge detection algorithm, the Canny edge detection algorithm, the Roberts edge detection algorithm, the Prewitt edge detection algorithm, and so on.
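As a minimal illustration of one listed option, the Sobel operator can be applied to a grayscale image and the gradient magnitude thresholded into a white-edge, black-background map. This pure-Python sketch uses assumed names and makes no claim to match the shader implementation described above:

```python
def sobel_edges(img, threshold=4):
    """Return a binary edge map (255 = edge, 0 = background) of `img`,
    a 2-D list of grayscale values, using the 3x3 Sobel operator."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = 255 if (gx * gx + gy * gy) ** 0.5 >= threshold else 0
    return out

# A 5x5 image whose left half is dark and right half is bright:
img = [[0, 0, 0, 9, 9]] * 5
edges = sobel_edges(img)
```

The vertical boundary between the dark and bright halves comes out as a column of 255 values, matching the white-edge / black-background convention used in steps 207 and 208.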
In step 207, the pixel value of each pixel included in the target edge map is obtained.
In one possible implementation, considering that the target edge map may not be a black-and-white image, the pixels of the target edges in the target edge map fall within a preset pixel value range, while the pixels at positions other than the target edges do not fall within the preset pixel value range.
Further, the target edges in the target edge map are usually a first color, and the pixels at positions other than the object edges in the target edge map are a second color. If the first color is white and the second color is black, the pixel values of the pixels included in the target edge map are: RGB of 255 or RGB of 0.
In step 208, it is judged, according to the pixel values, whether the rendered particle is located at an edge position included in the target edge map.
Here, the rendered particle is used to perform image rendering on the image to be processed.
In this step, whether the rendered particle is located at an edge position included in the target edge map can be determined through the following steps:
S11, obtaining the current position of the rendered particle on the target edge map;
Since the target edge map is an edge image constituted from the object key points in the image to be processed, each pixel in the target edge map corresponds to a pixel in the image to be processed; therefore, the position of the rendered particle in the image to be processed is its current position on the target edge map. Moreover, in the present disclosure the rendered particle can move within the image to be processed, so the current position needs to be obtained in real time. Illustratively, the rendered particles may include fireworks, fallen leaves, petals, snowflakes, and the like.
S12, obtaining, according to the pixel values, the target pixel value corresponding to the current position;
In this step, the target pixel value can be obtained from the memory described above: since the memory stores the pixel value of each pixel included in the target edge map and the correspondence between each pixel value and its pixel position, the target pixel value corresponding to the current position can be obtained from the stored data.
S13, when the target pixel value is within the preset pixel value range, determining that the rendered particle is located at an edge position included in the target edge map;
Since the pixels of the target edges in the target edge map are within the preset pixel value range, when the target pixel value is within the preset pixel value range, it is determined that the rendered particle is located at an edge position included in the target edge map.
In addition, in the case where the target edges in the target edge map are a first color and the pixels at positions other than the target edges are a second color, if the first color is white and the second color is black, this step becomes: if any one of the RGB components of the target pixel value is 255, the rendered particle is located at an edge position included in the target edge map.
S14, when the target pixel value is outside the preset pixel value range, determining that the rendered particle is not located at an edge position included in the target edge map.
Since the pixels of the target edges in the target edge map are within the preset pixel value range, when the target pixel value is not within the preset pixel value range, it is determined that the rendered particle is not located at an edge position included in the target edge map.
In addition, in the case where the target edges in the target edge map are a first color and the pixels at positions other than the target edges are a second color, if the first color is white and the second color is black, this step becomes: if any one of the RGB components of the target pixel value is 0, the rendered particle is not located at an edge position included in the target edge map.
When the rendered particle is located at an edge position included in the target edge map, step 209 is executed;
when the rendered particle is not located at an edge position included in the target edge map, step 210 is executed.
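Steps S11 to S14 amount to a table lookup: the stored pixel value at the particle's current position decides between step 209 and step 210. A minimal sketch under the white-edge, black-background assumption (all names are illustrative, not from this disclosure):

```python
def is_on_edge(edge_map, pos):
    """Return True if the particle at integer position `pos` = (x, y)
    sits on a white (value 255) edge pixel of `edge_map`, a 2-D list of
    grayscale values matching the image to be processed pixel-for-pixel."""
    x, y = pos
    if not (0 <= y < len(edge_map) and 0 <= x < len(edge_map[0])):
        return False                # off-image: treat as non-edge
    return edge_map[y][x] == 255    # preset pixel value range: {255}

edge_map = [
    [0,   0,   0],
    [255, 255, 255],   # one horizontal edge line
    [0,   0,   0],
]
```

For a three-channel map, the check would instead test whether any RGB component falls in the preset range, as described above.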
In step 209, the rendered particle is controlled to be in a hovering state.
Here, the hovering state means that the rendered particle remains stationary at its current position and no longer moves along the predetermined motion trajectory. As shown in Fig. 7, when a rendered particle (i.e., a petal) reaches an edge position of the target edge map shown in Fig. 6 (i.e., a position on the lines in Fig. 6), the rendered particle stops moving along the predetermined motion trajectory and hovers at that edge position of the edge image; therefore, more rendered particles accumulate at the object key points in Fig. 7.
In step 210, the rendered particle is controlled to move along the predetermined motion trajectory.
A rendered particle is usually provided with a predetermined motion trajectory along which it moves; in this way, the rendered particle can reach an edge position included in the target edge map in the course of moving along the predetermined trajectory. Illustratively, if the rendered particles are multiple petals and the motion trajectory runs from one side of the image to be processed to the other, the petals move from their respective initial positions on one side to the other side; furthermore, to make the rendering effect more vivid, each petal may also be given a corresponding motion form, such as a rotation angle and a movement speed.
As shown in Fig. 7, when a rendered particle (i.e., a petal) is at a non-edge position of the target edge map shown in Fig. 6 (i.e., a position other than the lines in Fig. 6), the rendered particle continues to move along the predetermined motion trajectory; therefore, fewer rendered particles remain at positions other than the object key points in Fig. 7.
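Taken together, steps 208 to 210 mean that on each frame a particle either advances along its predetermined trajectory or freezes once it lands on an edge pixel. A hedged CPU-side sketch with illustrative names (the disclosure itself performs this logic via OpenGL shaders):

```python
def step_particle(pos, velocity, edge_map):
    """Advance one frame: hover (return pos unchanged) if `pos` is on an
    edge pixel, otherwise move along the straight-line trajectory."""
    x, y = pos
    on_edge = (0 <= y < len(edge_map) and 0 <= x < len(edge_map[0])
               and edge_map[y][x] == 255)
    if on_edge:
        return pos                              # step 209: hovering state
    return (x + velocity[0], y + velocity[1])   # step 210: keep moving

edge_map = [[0, 0, 0], [0, 255, 0], [0, 0, 0]]
p = (0, 1)                   # start just left of the edge pixel
trail = [p]
for _ in range(4):           # a petal drifting rightwards, one pixel per frame
    p = step_particle(p, (1, 0), edge_map)
    trail.append(p)
```

The particle moves once, reaches the edge pixel at (1, 1), and hovers there for all remaining frames, which is why particles accumulate on the edges in Fig. 7.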
With the above method, the object key points included in the image to be processed are obtained; an object grid map is obtained according to the object key points; edge processing is performed on the object grid map to obtain a target edge map; and when a rendered particle is located at an edge position included in the target edge map, the rendered particle is controlled to be in a hovering state, the rendered particle being used to perform image rendering on the image to be processed. It can be seen that the image processing method provided by the embodiments of the present disclosure obtains the target edge map by combining object key point extraction with edge processing, and makes the rendered particles hover at the object key points, so that the rendered particles exhibit a variety of rendering forms, thereby improving the rendering effect and enhancing the visual effect.
Fig. 8 is a block diagram of an image processing apparatus 80 according to an exemplary embodiment. Referring to Fig. 8, the apparatus includes:
a key point obtaining module 81, configured to obtain object key points included in an image to be processed;
a grid map obtaining module 82, configured to obtain an object grid map according to the object key points;
an edge map obtaining module 83, configured to perform edge processing on the object grid map to obtain a target edge map; and
a hovering control module 84, configured to, when a rendered particle is located at an edge position included in the target edge map, control the rendered particle to be in a hovering state, the rendered particle being used to perform image rendering on the image to be processed.
Optionally, the key point obtaining module 81 is configured to obtain the object key points included in the image to be processed through a key point extraction model.
Fig. 9 is a block diagram of an image processing apparatus 80 according to an exemplary embodiment. Referring to Fig. 9, the apparatus further includes:
a pixel value obtaining module 85, configured to obtain the pixel value of each pixel included in the target edge map; and
a position judging module 86, configured to judge, according to the pixel values, whether the rendered particle is located at an edge position included in the target edge map.
Fig. 10 is a block diagram of an image processing apparatus 80 according to an exemplary embodiment. Referring to Fig. 10, the position judging module 86 includes:
a position obtaining submodule 861, configured to obtain the current position of the rendered particle on the target edge map;
a target pixel value obtaining submodule 862, configured to obtain, according to the pixel values, the target pixel value corresponding to the current position;
an edge position determining submodule 863, configured to, when the target pixel value is within the preset pixel value range, determine that the rendered particle is located at an edge position included in the target edge map; and
a non-edge position determining submodule 864, configured to, when the target pixel value is outside the preset pixel value range, determine that the rendered particle is not located at an edge position included in the target edge map.
Fig. 11 is a block diagram of an image processing apparatus 80 according to an exemplary embodiment. Referring to Fig. 11, the apparatus 80 further includes:
a motion control module 87, configured to, when the rendered particle is not located at an edge position included in the target edge map, control the rendered particle to move along the predetermined motion trajectory.
Fig. 12 is a block diagram of an image processing apparatus 80 according to an exemplary embodiment. Referring to Fig. 12, the apparatus 80 further includes:
a key point judging module 88, configured to judge whether the number of object key points is less than or equal to a preset threshold; and
an extension point obtaining module 89, configured to, when the number of object key points is less than or equal to the preset threshold, obtain object extension points according to the object key points;
the grid map obtaining module 82 being configured to obtain the object grid map according to the object key points and the object extension points.
With the above apparatus, the object key points included in the image to be processed are obtained; an object grid map is obtained according to the object key points; edge processing is performed on the object grid map to obtain a target edge map; and when a rendered particle is located at an edge position included in the target edge map, the rendered particle is controlled to be in a hovering state, the rendered particle being used to perform image rendering on the image to be processed. It can be seen that the image processing apparatus provided by the embodiments of the present disclosure obtains the target edge map by combining object key point extraction with edge processing, and makes the rendered particles hover at the object key points, so that the rendered particles exhibit a variety of rendering forms, thereby improving the rendering effect and enhancing the visual effect.
With regard to the apparatus in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the related method and is not elaborated here.
Fig. 13 is a block diagram of an electronic device 1300 according to an exemplary embodiment. The electronic device may be a mobile terminal or a server. For example, the electronic device 1300 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, or the like.
Referring to Fig. 13, the electronic device 1300 may include one or more of the following components: a processing component 1302, a memory 1304, a power component 1306, a multimedia component 1308, an audio component 1310, an input/output (I/O) interface 1312, a sensor component 1314, and a communication component 1316.
The processing component 1302 typically controls the overall operations of the electronic device 1300, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 1302 may include one or more processors 1320 to execute instructions to perform all or part of the steps of the methods described above. In addition, the processing component 1302 may include one or more modules to facilitate interaction between the processing component 1302 and other components. For example, the processing component 1302 may include a multimedia module to facilitate interaction between the multimedia component 1308 and the processing component 1302.
The memory 1304 is configured to store various types of data to support operation of the electronic device 1300. Examples of such data include instructions for any application or method operated on the electronic device 1300, contact data, phonebook data, messages, pictures, videos, and so on. The memory 1304 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
The power component 1306 provides power to the various components of the electronic device 1300. The power component 1306 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 1300.
The multimedia component 1308 includes a screen providing an output interface between the electronic device 1300 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors can sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 1308 includes a front camera and/or a rear camera. When the electronic device 1300 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each of the front camera and the rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 1310 is configured to output and/or input audio signals. For example, the audio component 1310 includes a microphone (MIC) configured to receive external audio signals when the electronic device 1300 is in an operation mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signals may be further stored in the memory 1304 or transmitted via the communication component 1316. In some embodiments, the audio component 1310 further includes a speaker for outputting audio signals.
The I/O interface 1312 provides an interface between the processing component 1302 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to: a home button, volume buttons, a start button, and a lock button.
The sensor component 1314 includes one or more sensors for providing status assessments of various aspects of the electronic device 1300. For example, the sensor component 1314 can detect the open/closed state of the electronic device 1300 and the relative positioning of components, e.g., the display and the keypad of the electronic device 1300; the sensor component 1314 can also detect a change in position of the electronic device 1300 or a component thereof, the presence or absence of user contact with the electronic device 1300, the orientation or acceleration/deceleration of the electronic device 1300, and a change in temperature of the electronic device 1300. The sensor component 1314 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 1314 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 1314 may also include an accelerometer, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1316 is configured to facilitate wired or wireless communication between the electronic device 1300 and other devices. The electronic device 1300 can access a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In one exemplary embodiment, the communication component 1316 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 1316 also includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 1300 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the image processing methods shown in Figs. 1 and 2 above.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 1304 including instructions, where the instructions are executable by the processor 1320 of the electronic device 1300 to complete the image processing methods shown in Figs. 1 and 2 above. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, there is also provided a computer program product; when the instructions in the computer program product are executed by the processor 1320 of the electronic device 1300, the electronic device 1300 performs the image processing methods shown in Figs. 1 and 2 above.
Other embodiments of the disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the disclosure disclosed herein. The present disclosure is intended to cover any variations, uses, or adaptations of the disclosure that follow its general principles and include common knowledge or conventional techniques in the art not disclosed herein. The specification and examples are to be considered as illustrative only, with the true scope and spirit of the disclosure being indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.
Claims (10)
1. An image processing method, characterized in that the method comprises:
obtaining object key points included in an image to be processed;
obtaining an object grid map according to the object key points;
performing edge processing on the object grid map to obtain a target edge map; and
when a rendered particle is located at an edge position included in the target edge map, controlling the rendered particle to be in a hovering state, the rendered particle being used to perform image rendering on the image to be processed.
2. The method according to claim 1, characterized in that the obtaining object key points included in an image to be processed comprises:
obtaining the object key points included in the image to be processed through a key point extraction model.
3. The method according to claim 1, characterized in that after the performing edge processing on the object grid map to obtain a target edge map, the method further comprises:
obtaining the pixel value of each pixel included in the target edge map; and
judging, according to the pixel values, whether the rendered particle is located at an edge position included in the target edge map.
4. The method according to claim 3, characterized in that the judging, according to the pixel values, whether the rendered particle is located at an edge position included in the target edge map comprises:
obtaining a current position of the rendered particle on the target edge map;
obtaining, according to the pixel values, a target pixel value corresponding to the current position;
when the target pixel value is within a preset pixel value range, determining that the rendered particle is located at an edge position included in the target edge map; and
when the target pixel value is outside the preset pixel value range, determining that the rendered particle is not located at an edge position included in the target edge map.
5. The method according to claim 1, characterized in that after the performing edge processing on the object grid map to obtain a target edge map, the method further comprises:
when the rendered particle is not located at an edge position included in the target edge map, controlling the rendered particle to move along a predetermined motion trajectory.
6. The method according to claim 1, characterized in that before the obtaining an object grid map according to the object key points, the method further comprises:
judging whether the number of the object key points is less than or equal to a preset threshold; and
when the number of the object key points is less than or equal to the preset threshold, obtaining object extension points according to the object key points;
the obtaining an object grid map according to the object key points comprising:
obtaining the object grid map according to the object key points and the object extension points.
7. An image processing apparatus, characterized in that the apparatus comprises:
a key point obtaining module, configured to obtain object key points included in an image to be processed;
a grid map obtaining module, configured to obtain an object grid map according to the object key points;
an edge map obtaining module, configured to perform edge processing on the object grid map to obtain a target edge map; and
a hovering control module, configured to, when a rendered particle is located at an edge position included in the target edge map, control the rendered particle to be in a hovering state, the rendered particle being used to perform image rendering on the image to be processed.
8. The apparatus according to claim 7, characterized in that the key point obtaining module is configured to obtain the object key points included in the image to be processed through a key point extraction model.
9. An electronic device, characterized by comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to perform the image processing method according to any one of claims 1 to 6.
10. A non-transitory computer-readable storage medium, characterized in that, when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the image processing method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910340750.6A CN110211211B (en) | 2019-04-25 | 2019-04-25 | Image processing method, device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110211211A true CN110211211A (en) | 2019-09-06 |
CN110211211B CN110211211B (en) | 2024-01-26 |
Family
ID=67786458
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910340750.6A Active CN110211211B (en) | 2019-04-25 | 2019-04-25 | Image processing method, device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110211211B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111091610A (en) * | 2019-11-22 | 2020-05-01 | 北京市商汤科技开发有限公司 | Image processing method and device, electronic equipment and storage medium |
CN112581620A (en) * | 2020-11-30 | 2021-03-30 | 北京达佳互联信息技术有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN113780040A (en) * | 2020-06-19 | 2021-12-10 | 北京沃东天骏信息技术有限公司 | Lip key point positioning method and device, storage medium and electronic equipment |
US11403788B2 (en) | 2019-11-22 | 2022-08-02 | Beijing Sensetime Technology Development Co., Ltd. | Image processing method and apparatus, electronic device, and storage medium |
WO2022170896A1 (en) * | 2021-02-09 | 2022-08-18 | 北京沃东天骏信息技术有限公司 | Key point detection method and system, intelligent terminal, and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6084596A (en) * | 1994-09-12 | 2000-07-04 | Canon Information Systems Research Australia Pty Ltd. | Rendering self-overlapping objects using a scanline process |
JP2014194635A (en) * | 2013-03-28 | 2014-10-09 | Canon Inc | Image forming apparatus, image forming method, and program |
CN106022337A (en) * | 2016-05-22 | 2016-10-12 | 复旦大学 | Planar object detection method based on continuous edge characteristic |
CN108428214A (en) * | 2017-02-13 | 2018-08-21 | 阿里巴巴集团控股有限公司 | A kind of image processing method and device |
CN108986016A (en) * | 2018-06-28 | 2018-12-11 | 北京微播视界科技有限公司 | Image beautification method, device and electronic equipment |
CN109063560A (en) * | 2018-06-28 | 2018-12-21 | 北京微播视界科技有限公司 | Image processing method, device, computer readable storage medium and terminal |
Non-Patent Citations (2)
Title |
---|
涅槃的凤凰: "树叶边缘渲染", 《HTTPS://BLOG.CSDN.NET/U014630768/ARTICLE/DETAILS/32716117》, 30 June 2014 (2014-06-30) * |
Also Published As
Publication number | Publication date |
---|---|
CN110211211B (en) | 2024-01-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110211211A (en) | Image processing method, device, electronic equipment and storage medium | |
CN110929651B (en) | Image processing method, image processing device, electronic equipment and storage medium | |
US11114130B2 (en) | Method and device for processing video | |
CN106339680B (en) | Face key independent positioning method and device | |
CN112070015B (en) | Face recognition method, system, device and medium fusing occlusion scene | |
CN109670397A (en) | Detection method, device, electronic equipment and the storage medium of skeleton key point | |
US11030733B2 (en) | Method, electronic device and storage medium for processing image | |
CN108712603B (en) | Image processing method and mobile terminal | |
CN106713696B (en) | Image processing method and device | |
CN107368810A (en) | Method for detecting human face and device | |
CN105512605A (en) | Face image processing method and device | |
CN107392933B (en) | Image segmentation method and mobile terminal | |
CN105357425B (en) | Image capturing method and device | |
CN110909654A (en) | Training image generation method and device, electronic equipment and storage medium | |
WO2022227393A1 (en) | Image photographing method and apparatus, electronic device, and computer readable storage medium | |
CN107392166A (en) | Skin color detection method, device and computer-readable recording medium | |
CN113409342A (en) | Training method and device for image style migration model and electronic equipment | |
CN107341777A (en) | image processing method and device | |
CN107967459A (en) | convolution processing method, device and storage medium | |
CN113870121A (en) | Image processing method and device, electronic equipment and storage medium | |
CN110807769B (en) | Image display control method and device | |
CN112509005A (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN109639981B (en) | Image shooting method and mobile terminal | |
CN109784327A (en) | Bounding box determines method, apparatus, electronic equipment and storage medium | |
CN109981989A (en) | Render method, apparatus, electronic equipment and the computer readable storage medium of image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||