CN111091519B - Image processing method and device - Google Patents
- Publication number
- CN111091519B (application CN201911329892.9A)
- Authority
- CN
- China
- Prior art keywords
- nail
- target
- model
- region
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
The invention provides an image processing method and device. The method comprises the following steps: acquiring a target image; identifying nail regions in the target image and determining a nail model corresponding to each nail region; for a target nail model among the nail models, generating a sphere model according to the width of the fingernail root in the target nail model; determining an intersection region of the sphere model and the target nail model; and performing preset processing on a target area of the target image matched with the intersection region. When processing a hand image, the invention makes the half moon mark of the nail in the processed image more obvious and distinct.
Description
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method and apparatus.
Background
With the growing popularity of photography, more and more people record their lives by shooting scenes such as travel, conferences, daily life and parties. At the same time, mobile phone photographing software has continually advanced, beautifying faces, figures and the like to meet ever-increasing user demands. Besides the face and figure, many people also like to express themselves through hand gestures. The crescent is a milky arc appearing at the root of the nail; however, owing to individual differences, not every person's nail root has this milky arc (referred to below as the half moon mark), or the half moon mark at the nail root is not obvious enough.
Therefore, when current image processing methods process a hand image, it is difficult to make the half moon mark of the nail more obvious and distinct.
Disclosure of Invention
The embodiment of the invention provides an image processing method and an image processing device, to solve the problem that related-art image processing methods have difficulty making the half moon mark of a fingernail more obvious and distinct when processing a hand image.
In order to solve the technical problems, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides an image processing method, applied to an electronic device, where the method includes:
acquiring a target image;
identifying nail regions in the target image and determining a nail model corresponding to each nail region;
for a target nail model in the nail models, generating a sphere model according to the width of the root of the fingernail in the target nail model;
determining an intersection region of the sphere model and the target nail model;
and carrying out preset processing on a target area matched with the intersection area in the target image.
In a second aspect, an embodiment of the present invention further provides an image processing apparatus, including:
The acquisition module is used for acquiring a target image;
a first determining module for identifying nail regions in the target image and determining a nail model corresponding to each nail region;
the generation module is used for generating a sphere model according to the width of the root of the fingernail in the target nail model for the target nail model in the nail model;
a second determination module for determining an intersection region of the sphere model and the target nail model;
and the processing module is used for carrying out preset processing on the target area matched with the intersection area in the target image.
In a third aspect, an embodiment of the present invention further provides an electronic device, including: a memory, a processor and a computer program stored on the memory and executable on the processor, which when executed by the processor, implements the steps of the image processing method.
In a fourth aspect, embodiments of the present invention also provide a computer readable storage medium having a computer program stored thereon, which when executed by a processor, implements the steps of the image processing method.
In the embodiment of the invention, nail regions in the target image are identified and a nail model corresponding to each nail region is determined. For a target nail model among the nail models, a sphere model is generated according to the width of the fingernail root in the target nail model, and the intersection region of the sphere model and the target nail model is then determined, so that the nail shape of the intersection region is a crescent. Finally, preset processing is performed on the target region of the target image that matches the intersection region; that is, the intersection region is mapped to a target region in the target image, so that the target region is also crescent-shaped and located at the fingernail root of the target nail region corresponding to the target nail model. After the preset processing, the target region is distinguished from the rest of the nail region, its near-crescent shape becoming more prominent, thereby achieving the effect that, when a hand image is processed, the half moon mark of the nail in the processed image is more obvious.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments of the present invention will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of an image processing method of one embodiment of the present invention;
FIG. 2 is a two-dimensional image schematic of a nail region according to one embodiment of the invention;
FIG. 3 is a schematic plan view of a target nail model of one embodiment of the invention;
FIG. 4 is a schematic plan view of a target nail model intersecting the sphere model according to one embodiment of the invention;
fig. 5 is a block diagram of an image processing apparatus according to another embodiment of the present invention;
fig. 6 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, a flowchart of an image processing method according to an embodiment of the present invention is shown and applied to an electronic device, and the method may specifically include the following steps:
Step 101, acquiring a target image;
The target image may be an image received from the outside, a locally generated image (for example, an image captured by a camera of the electronic device), or an image collected by a camera of the electronic device in real time (for example, a preview image collected by the camera).
Step 102, identifying nail areas in the target image, and determining a nail model corresponding to each nail area;
wherein the target image comprises nail regions, and because the nail regions are different from other positions of the hand, the nail regions in the target image can be identified, and a nail model corresponding to each nail region can be determined, wherein the nail model is a three-dimensional model.
Alternatively, in one embodiment, in performing the step of identifying the nail region in the target image in step 102, this may be achieved by S201:
s201, identifying the nail region in the target image according to the nail characteristics in the target image.
Since the nail feature is a unique feature distinct from other locations of the hand, the two-dimensional coordinates of the nail feature points of each nail region (typically several points per region) can be located in the target image.
Optionally, because the target image is two-dimensional, a nail region may show the side of a nail. To improve the accuracy of identifying nail features in that case, the nail features of each nail region may be identified from the target image according to the average thickness of a nail (e.g., 0.5 mm) together with features specific to the nail; with this thickness information, nail feature points in the target image can be located more accurately.
Wherein, because the nail feature points of the same nail region are relatively concentrated in position, each nail region in the target image is determined based on the identified nail feature points.
Alternatively, in one embodiment, in performing the step of determining the nail model corresponding to each nail region of step 102, it may be implemented through S202 to S204:
s202, acquiring a hand model corresponding to a hand area in the target image;
the hand model is a three-dimensional model of a hand region in the two-dimensional target image.
In addition, a hand model corresponding to the hand region in the target image may be acquired from an external device, or the target image may be processed to obtain the hand model.
Specifically, when the target image is processed to obtain the hand model, a two-dimensional image (i.e., RGB information) and depth information corresponding to a hand region in the target image may be obtained; then, a hand model of the hand region is constructed from the two-dimensional image and the depth information.
For example, when a user uses a mobile phone photographing mode to preview or photograph, an image is automatically captured by using a camera to obtain a two-dimensional image of a hand and depth information of the hand image, so as to construct a three-dimensional model of the hand.
The depth information is also called depth image information (RGB-D): an image or image channel containing the distance from the viewpoint to the surfaces of scene objects, in which each pixel value is the actual distance from the sensor to the object. The data corresponding to the target image is thus extended from two-dimensional to three-dimensional; combined with the depth image information, the hand region in the image can be effectively identified in real time and a three-dimensional model of the hand built, finally achieving the purpose of adding crescent white to the fingernail roots of the hand in the image.
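As an illustrative, non-limiting sketch of how the hand model described above might be built from the two-dimensional image and depth information, the following Python snippet back-projects a depth map into a three-dimensional point cloud using the standard pinhole camera model. The function name and the camera intrinsics (fx, fy, cx, cy) are hypothetical and not specified in the patent.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Convert a depth image (in metres) into an (H*W, 3) array of 3-D
    points via the standard pinhole camera model:
    X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    # u runs along columns, v along rows (default 'xy' meshgrid indexing)
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

The resulting point cloud, restricted to the pixels of the hand region, would form the raw geometry from which a hand model can be constructed.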
Note that the present invention is not limited to the execution order between S201 and S202.
S203, acquiring corresponding three-dimensional positioning information of the nail features in the hand model;
the two-dimensional coordinates of the nail feature points corresponding to each nail region determined in S201 may be used to position the accurate depth coordinates corresponding to the two-dimensional coordinates in the hand model, so as to obtain three-dimensional positioning information of the nail features of each nail region.
S204, generating a nail model corresponding to each nail region in the target image according to the three-dimensional positioning information corresponding to the nail characteristics of each nail region.
Corresponding three-dimensional positioning information is obtained for the nail characteristic points of each nail region in the target image, so that a nail model of the nail region can be constructed according to the three-dimensional positioning information of a plurality of nail characteristic points of one nail region, and the nail model corresponding to each nail region is obtained. The number of nail regions in the target image is the same as the number of nail models, for example, a nail image including 10 fingers in the target image, then 10 nail models corresponding to the nail regions of the 10 fingers may be generated here. And the nail model is a three-dimensional model of a certain nail.
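A minimal sketch of S203 and S204 — looking up the depth coordinate for each two-dimensional nail feature point and grouping the resulting three-dimensional points into a per-nail model — might look as follows; the function name and the dictionary layout are assumptions for illustration only.

```python
import numpy as np

def nail_models(nail_keypoints, depth, fx, fy, cx, cy):
    """nail_keypoints: dict mapping a nail id to a list of (u, v) pixel
    coordinates of its nail feature points.
    Returns a dict mapping each nail id to an (N, 3) array of 3-D nail
    feature points, i.e. a simple per-nail model."""
    models = {}
    for nail_id, pts in nail_keypoints.items():
        pts3d = []
        for u, v in pts:
            z = depth[v, u]                      # depth lookup for (u, v)
            pts3d.append([(u - cx) * z / fx,     # back-project to 3-D
                          (v - cy) * z / fy,
                          z])
        models[nail_id] = np.array(pts3d)
    return models
```

One model is produced per nail region, matching the statement that the number of nail models equals the number of nail regions.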
As a sign of health, the crescent is one of the features many users show in social contact, and deserves attention and development. Hand recognition based on depth image technology (RGB-D) can model and track the hand, but lacks localization, tracking and application for the nail. In the embodiment of the invention, a hand model is acquired for the hand region in the target image, and the nail regions in the target image are identified from the nail features, so that the three-dimensional positioning information of each nail feature can be located with the aid of the hand model. The three-dimensional positioning information of the nail features of each nail region is then used to generate the nail model of that region. Because the three-dimensional positioning information of the nail model is established from both the hand model and the two-dimensional target image, its accuracy is higher, and the established nail model matches the actual three-dimensional shape of the nail region. The target area processed by means of the nail model is therefore the position on each nail in the target image where a half moon mark would normally appear, improving the positional accuracy of the added half moon mark.
In addition, in the embodiment of the invention, since the nail model is calculated based on the hand model, the finally determined target area is not an area outside the nail, so that the target area (for example, crescent white) after the preset treatment in the embodiment of the invention is ensured not to appear in the area outside the nail, and the matching accuracy is improved.
Of course, in other embodiments, the nail model corresponding to each nail region may also be received from the outside when the step of determining the nail model corresponding to each nail region of step 102 is performed.
Step 103, for a target nail model among the nail models, generating a sphere model according to the width of the fingernail root in the target nail model;
In step 102, a nail model corresponding to each nail region is determined. In different scenarios, the half moon mark may be added to one nail or to several nails in the target image, so the target nail model may be one or more of the nail models obtained in step 102. That is, the nail region corresponding to the target nail model is the nail to which the half moon mark is to be added.
Those skilled in the art will appreciate that a fingernail has a growth direction, and that the area corresponding to the fingernail root in a target nail model is the portion of the model facing away from the nail growth direction.
The width of the nail root may be the width, in the nail width direction, of the area corresponding to the nail root in the nail model; the nail width direction is perpendicular to the nail growth direction and lies in the same plane.
The width of the nail root may be taken as any nail width within the lower half of the target nail model.
Optionally, when step 103 is executed, for a target nail model in the nail model, a preset area corresponding to a nail root in the target nail model may be identified; and then, generating a sphere model according to the width of the preset area.
Wherein the predetermined area is identified from the target nail model, and thus the predetermined area is also three-dimensional. The preset area may be understood as a three-dimensional model of the root area in the nail area visible in the target image.
In identifying the preset area, the height taken along the nail growth direction of the target nail model may be arbitrary, but the resulting area should be no more than half the total length of the nail (i.e., the total height of the target nail model).
Optionally, in one embodiment, when identifying the preset area corresponding to the fingernail root in the target nail model, S301 to S305 may be implemented:
S301, acquiring target nail characteristics of a target nail region corresponding to the target nail model in the target image;
to facilitate understanding, in one example, FIG. 2 shows a two-dimensional image of a target nail region (e.g., left index finger) in a target image.
The target nail region in fig. 2 includes a first region 21 that does not adhere to the flesh, and a second region 22 that adheres to the flesh, and the nail growth direction of the target nail region is shown by the arrow.
The target nail features obtained in this step are the feature points 23 and the feature points 24 of the target nail region in fig. 2.
The feature points 23 are feature points at the nail edge located uppermost in the nail growth direction, and can be understood as the nail top feature points; the feature points 24 are feature points at the nail edge located lowest, facing away from the nail growth direction, and can be understood as the nail root feature points. Thus, S301 acquires the two-dimensional coordinates of the nail top feature points and of the nail root feature points of the target nail region in the two-dimensional target image.
S302, obtaining corresponding three-dimensional positioning information of the target nail characteristics in the target nail model;
Wherein, because the target nail model is a three-dimensional model of the two-dimensional image of the target nail region, the three-dimensional coordinate information corresponding to the target nail feature in the target nail model can be positioned according to the two-dimensional coordinates of the target nail feature.
The step corresponds to positioning the three-dimensional coordinates of each nail top characteristic point and the three-dimensional coordinates of each nail root characteristic point in the target nail model matched with the target nail region by utilizing the two-dimensional coordinates of the nail top characteristic point and the two-dimensional coordinates of the nail root characteristic point of the target nail region in the two-dimensional image of the target image.
S303, determining the total length y of the nail corresponding to the target nail model according to the three-dimensional positioning information of the target nail characteristics, wherein the direction of the total length of the nail is the nail growing direction;
in one example, a schematic plan view of the target nail model is shown as shown in FIG. 3.
Thus, the three-dimensional coordinates of the target nail features (including the nail root feature points and the nail top feature points) in the target nail model, obtained in S302 above, may be used to determine the position 32 of the bottom-most nail root and the position 31 of the nail top in the target nail model, where the bottom-up direction is the nail growth direction.
The position 31 may be one of the nail top feature points, or may be a new feature point position determined based on each nail top feature point; the method of determining the location 32 is similar and will not be described in detail here.
Therefore, the distance y between the position 31 and the position 32 of the target nail model can be determined as the total length of the nail corresponding to the target nail model.
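The determination of positions 31 and 32 and of the total length y in S303 reduces to a centroid and a Euclidean distance. The sketch below assumes each position is the centroid of its feature points, which is one of the options the text allows; the function names are illustrative.

```python
import numpy as np

def endpoint(feature_points):
    """One plausible choice for position 31 or 32: the centroid of the
    nail top (or nail root) feature points."""
    return np.mean(np.asarray(feature_points, dtype=float), axis=0)

def nail_total_length(root_pos, top_pos):
    """Total nail length y: Euclidean distance between the root
    position 32 and the top position 31 in the target nail model."""
    diff = np.asarray(top_pos, dtype=float) - np.asarray(root_pos, dtype=float)
    return float(np.linalg.norm(diff))
```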
S304, dividing the target nail model into n equal parts in the nail growth direction to generate n model areas each with a nail length of y/n, where n > 0;
wherein each model region is a three-dimensional model.
In one example, as shown in FIG. 3, the target nail model is equally divided by n according to the nail growth direction, such that the nail length of each model area in the nail growth direction is y/n.
S305, identifying the lowest target model area in the nail growth direction of the n model areas as a preset area corresponding to the root of the fingernail in the target nail model.
In one example, as shown in fig. 3, the shadow zone 33 (i.e., the target model zone) located at the lowest of the n model zones in the nail growth direction may be identified as a preset zone corresponding to the nail root in the target nail model shown in fig. 3.
Since fig. 3 is a plan view of the three-dimensional model, the hatched area 33 is also a plan view of the three-dimensional target model area.
Further, the value of n may be any number greater than or equal to 10; it may be input by the user and/or set by system configuration.
As for the user input of the value of n, the user may input the value of n in the photographing preview interface, and may associate the value of n with a certain finger.
In addition, the values of n corresponding to different target nail models can be different, so that the heights of crescent white added by different fingers in the nail growth direction can be different.
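The n-fold division of S304 and the selection of the lowest slice in S305 can be sketched as follows, assuming the nail model is represented as a set of three-dimensional points and the growth direction as a unit vector (both representations are illustrative, not mandated by the patent).

```python
import numpy as np

def preset_region(points, growth_axis, n):
    """Split the nail-model points into n equal slices along the nail
    growth direction and return the lowest slice, i.e. the preset area
    corresponding to the fingernail root.
    points: (N, 3) array; growth_axis: unit vector of the growth direction."""
    points = np.asarray(points, dtype=float)
    proj = points @ np.asarray(growth_axis, dtype=float)  # height along growth axis
    lo, hi = proj.min(), proj.max()
    slice_h = (hi - lo) / n                               # each slice has height y/n
    return points[proj <= lo + slice_h]
```

A larger n yields a thinner root slice, matching the remark that n controls the height of the added crescent in the growth direction.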
In the embodiment of the invention, to identify the (two-dimensional) target area in the target image where the half moon mark is added, a manner of determining the (three-dimensional) preset area corresponding to the fingernail root in the target nail model is provided. Specifically, the total length of the nail corresponding to the target nail model is determined from the three-dimensional positioning information of the target nail features; the target nail model is then divided into n equal parts in the nail growth direction according to the total length; finally, the model area lowest in the nail growth direction among the n equal model areas is identified as the preset area corresponding to the fingernail root. The target area in the two-dimensional target image is thereby mapped to the three-dimensional area corresponding to the fingernail root of the target nail model, so that the target area accords with the position of an actual crescent in real life, and the image generated after processing the target area carries the half moon mark at its natural position, highlighting the half moon mark.
In addition, step 103 generates a sphere model according to the width of the fingernail root in the target nail model, while the refined embodiment of step 103 generates the sphere model according to the width of the preset area. In both embodiments the width is the width of the nail root in the nail width direction, that is, the width of the preset area in the nail width direction.
Wherein the nail width direction is a direction perpendicular to the nail growth direction in the same plane.
For example, FIG. 4 shows a schematic plan view of the target nail model intersecting the sphere model.
Fig. 4 shows two arrow directions, the nail growth direction and the nail width direction, respectively.
In which fig. 3 and 4 are schematic plan views of the same target nail model, it can be seen from comparing fig. 3 and 4 that the width of the preset area 33 in the nail width direction is x.
The sphere model is a three-dimensional sphere. When the sphere model is generated, its radius can be determined from the width x. When the radius is greater than x, the target area will no longer take the shape of a half moon mark, although it will still lie close to the usual position of a fingernail's half moon mark.
Alternatively, in order to make the shape of the target area approximate a half moon mark, i.e., a crescent, when step 103 is performed a radius r may be determined according to the width x of the nail root in the nail width direction in the target nail model, where a < r ≤ x and a is a constant; the sphere model is then generated according to the radius r.
Similarly, in another embodiment, in order to make the shape of the target area approximate a half moon mark, i.e., a crescent, when the step of generating a sphere model according to the width of the preset area is performed, a radius r may be determined according to the width x of the preset area in the nail width direction, where a < r ≤ x and a is a constant; the sphere model is then generated according to the radius r.
In the above two embodiments, when r=x, the boundary of the generated target area in the nail width direction overlaps with the width boundary of the nail, and the effect is as shown in fig. 4, the target area 34 is a gray area labeled "crescent" and the crescent width formed by this processing is wider.
When a < r < x, the boundary of the generated target area in the width direction of the nail is not overlapped with the width boundary of the nail, and the effect is that the width of the formed crescent is narrower.
In the embodiment of the invention, in order to enable the generated target area to be approximately crescent, namely, enable the shape of the target area to be more approximate to the shape of the half moon mark of the finger, the radius of the generated sphere model is smaller than or equal to the width of the preset area in the width direction of the nail, so that the target area processed by the target image is more approximate to the actual shape of the crescent white of the finger, and the half moon mark of the nail is more obvious and clear.
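A hedged sketch of the radius selection a < r ≤ x described above; the `scale` parameter is merely an illustrative way to trade off crescent width and is not part of the patent.

```python
def sphere_radius(x, scale=1.0, a=0.0):
    """Choose the sphere-model radius r with a < r <= x.
    scale in (0, 1] controls crescent width: scale == 1 gives r == x,
    the widest crescent (its boundary flush with the nail's width
    boundary, as in fig. 4); smaller values give a narrower crescent."""
    r = scale * x
    if not (a < r <= x):
        raise ValueError("radius must satisfy a < r <= x")
    return r
```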
Step 104, determining an intersection region of the sphere model and the target nail model;
In particular, the intersection region between the two models, itself also a three-dimensional model, can be identified when the spherical surface of the sphere model overlaps the nail surface of the target nail model.
In addition, when step 104 is performed, a preset area corresponding to the nail root in the target nail model may be identified, and then an intersection area of the sphere model and the preset area may be determined.
For a specific implementation manner of the step of identifying the preset area corresponding to the nail root in the target nail model, reference may be made to S301 to S305 in the foregoing embodiment, which are not repeated herein.
And, when executing step 104, an intersection area between the sphere model and the preset area may be obtained when the sphere surface of the sphere model overlaps with the nail surface corresponding to the preset area (or the nail surface of the target nail model);
wherein the sphere model is a three-dimensional model, and the predetermined area is also a three-dimensional model of a portion of the nail taken from the three-dimensional target nail model.
To ensure that the generated intersection area is arc-shaped and that the crescent presents a balanced curve, when taking the intersection area (also three-dimensional) between the sphere model and the preset area, the sphere surface of the sphere model must overlap the nail surface of the preset area (the three-dimensional model of part of the nail); that is, the three-dimensional space angle of the sphere surface coincides with the maximum plane angle of the preset area (the nail area corresponding to the nail root).
Because the nail has a curvature, the surface of the sphere model needs to overlap with the curvature of the preset area, i.e. the angle is consistent, so that the resulting intersection area maps to the shape of the target area in the two-dimensional target image, i.e. crescent-shaped.
Thus, the shape of the target area described below is always maintained in a uniform circular arc shape regardless of how the user adjusts the value of n.
In one example, as shown in fig. 3 and 4, when the ball surface of the ball model 35 overlaps with the nail surface corresponding to the preset area 33, the intersection area between the ball model 35 and the preset area 33 is a crescent-marked area 34.
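The intersection step above can be sketched in Python under a strong simplification: treat the nail surface of the preset area as a 3-D point cloud and keep the points that fall inside the sphere model. All names here are hypothetical; the patent itself operates on full three-dimensional models, not point clouds.

```python
import math

def crescent_region(nail_points, sphere_center, radius):
    """Return the subset of nail-surface points lying inside the sphere
    model.  With the sphere surface aligned to the nail's curvature, this
    subset forms the crescent-shaped intersection region.
    (Illustrative simplification: the nail surface is a point cloud.)"""
    inside = []
    for point in nail_points:
        # a point belongs to the intersection if it is within the radius
        if math.dist(point, sphere_center) <= radius:
            inside.append(point)
    return inside
```

For instance, with a unit sphere centred at the origin, only points within distance 1 of the centre survive.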
Step 105: perform preset processing on a target area matched with the intersection area in the target image.
The intersection area is part of the three-dimensional target nail model. To map this three-dimensional intersection area into the two-dimensional target image, two-dimensional positioning information corresponding to the intersection area in the target nail model can be acquired, and preset processing can then be performed on the target area in the target image that matches this two-dimensional positioning information.
Because the target nail model is a three-dimensional model of a target nail region in the target image, matching the two-dimensional positioning information of the intersection area against coordinates in the target image yields a target area with the same coordinates; this target area is where the half-moon mark is added at the root of a given nail in the two-dimensional target image.
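The mapping from the three-dimensional intersection area to the two-dimensional target area can be illustrated with a minimal sketch, assuming a hypothetical orthographic projection stands in for the patent's two-dimensional positioning information:

```python
def project_to_image(points_3d, scale=1.0, offset=(0, 0)):
    """Map 3-D intersection-area points to 2-D pixel coordinates.
    A hypothetical orthographic projection (drop z, scale and shift x/y)
    stands in for the real camera model; the result is the set of target
    pixels to process in the two-dimensional image."""
    ox, oy = offset
    return {(round(x * scale + ox), round(y * scale + oy))
            for (x, y, _z) in points_3d}
```

Nearby 3-D points collapse onto the same pixel, so the returned set is exactly the target area to be masked.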
The preset processing can be adding a layer of white mask so that the crescent-like area becomes white and is distinguished from the other areas of the nail. In this way, when the user's fingernail has no half-moon mark, or the mark is not obvious, the method of the embodiment of the invention makes the half-moon mark clear and obvious.
More generally, the preset processing may be adding a mask of any colour (preferably white).
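A minimal sketch of this mask step, assuming the image is a dict of RGB pixels (a hypothetical stand-in for a real image buffer) and blending the mask colour with a fixed opacity:

```python
def apply_mask(image, region, mask_color=(255, 255, 255), alpha=0.6):
    """Blend a mask colour (white by default, per the embodiment) over the
    pixels of `image` (a dict {(x, y): (r, g, b)}) that lie in `region`.
    Pixels outside the region are left untouched."""
    out = dict(image)
    for p in region:
        if p in out:
            # per-channel alpha blend of mask colour over the original pixel
            out[p] = tuple(round(alpha * m + (1 - alpha) * c)
                           for m, c in zip(mask_color, out[p]))
    return out
```

With alpha = 0.5, a black pixel under a white mask becomes mid-grey, while untouched pixels keep their original values.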
In the embodiment of the invention, nail regions in the target image are identified and a nail model corresponding to each nail region is determined. For a target nail model among the nail models, a sphere model is generated according to the width of the nail root in the target nail model, and the intersection area of the sphere model and the target nail model is then determined, so that the nail shape of the intersection area can be crescent-shaped. Finally, preset processing is performed on the target area in the target image that matches the intersection area; that is, the intersection area is mapped to the target area in the target image, so that the target area is also crescent-shaped and located at the nail root of the target nail region corresponding to the target nail model. After the preset processing, the target area can be distinguished from other nail regions, its near-crescent shape becomes more obvious, and the half-moon mark of the nail in the processed hand image is therefore more obvious.
Moreover, the method provided by the embodiment of the invention can meet the user's demand for adding a crescent white (half-moon mark, or "little sun") to the fingernails. The user can obtain different personal and social experiences, satisfying the need to show a healthy and refined living state to others. At the same time, the hand recognition algorithm is extended, with a focus on locating the nail and applying effects to it.
In addition, on the basis of accurately matching the target areas corresponding to the crescent marks, the embodiment of the invention can also provide more crescent white schemes to arouse greater user interest.
For example, after the accurately matched crescent (namely the target area) is calculated, its size and colour can be changed to a certain degree, so that various crescent schemes can be realized. The user can control the size of the crescent by changing the value of n: the larger n is, the smaller the crescent, and vice versa. Different n values can also be applied to the target areas of different fingers; for example, the image processing interface may provide both a global n-value adjustment control and an independent n-value adjustment control for the crescent white of each finger. For colour processing, a colour template can be applied directly to the basic crescent white to tint it.
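The size control via n and a simple colour-template pass can be sketched as follows. Only the y/n relationship comes from the embodiment; the channel-averaging tint is an assumption for illustration:

```python
def crescent_length(total_length_y, n):
    """Length of the nail-root preset area along the growth direction:
    y/n, so a larger n yields a smaller crescent, and vice versa."""
    if n <= 0:
        raise ValueError("n must be greater than 0")
    return total_length_y / n

def apply_color_template(base_color, template):
    """Tint the basic crescent white with a colour template by averaging
    the RGB channels -- a hypothetical stand-in for the tone mapping."""
    return tuple((b + t) // 2 for b, t in zip(base_color, template))
```

Doubling n halves the crescent's extent, while the template pass shifts pure white toward the template's hue.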
In addition, when the target image is a preview image in a shooting preview interface of the electronic device, the target area in the target image can be identified in real time and the preset processing performed on it; when the gesture of each finger in the target image changes, the spatial position relationship between the processed target area (the crescent white) and the nails in the image is synchronized in real time. The hand is continuously recognized and the nail positions located, and the corresponding matched crescent white is precisely attached to each nail, so that the user sees accurately positioned processed target areas when previewing the target image from any angle.
By means of this technical scheme, when a user previews or photographs in the mobile phone's photographing mode, the camera automatically captures images, obtaining a two-dimensional image of the hand and the depth information of the hand image, from which a three-dimensional model of the hand is constructed. The feature-point coordinates of the nail region in the current frame are then detected, the nail position is located for model identification and segmentation, and accurate nail data is uploaded. From the uploaded nail model information, a crescent white of corresponding matching degree is calculated to achieve accurate matching; the depth information of the hand and the nails is synchronized in real time, and modelling and matching continue so as to provide the preview effect.
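One preview-frame iteration of this pipeline can be sketched with placeholder callables for the detection, modelling, and rendering stages (all parameter names are hypothetical, not part of the claimed embodiment):

```python
def process_preview_frame(frame, detect_nails, build_model, make_crescent, draw):
    """One iteration of the preview pipeline described above: detect nail
    regions in the frame, rebuild a per-nail model (using depth data in the
    real system), compute each matching crescent, and render it onto the
    output frame.  Each callable is a placeholder for a pipeline stage."""
    output = frame
    for region in detect_nails(frame):
        model = build_model(frame, region)      # 3-D nail model for this region
        crescent = make_crescent(model)         # matched crescent (target area)
        output = draw(output, crescent)         # attach crescent to the nail
    return output
```

Running this once per preview frame keeps the crescent attached to the nail as the hand pose changes.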
Referring to fig. 5, a block diagram of an image processing apparatus of an embodiment of the present invention is shown. The image processing device of the embodiment of the invention can realize the details of the image processing method in the embodiment and achieve the same effect. The image processing apparatus shown in fig. 5 includes:
an acquisition module 501, configured to acquire a target image;
a first determining module 502, configured to identify nail regions in the target image, and determine a nail model corresponding to each nail region;
a generating module 503, configured to generate, for a target nail model in the nail models, a sphere model according to a width of a nail root in the target nail model;
a second determination module 504 for determining an intersection area of the sphere model and the target nail model;
and the processing module 505 is configured to perform preset processing on a target area matched with the intersection area in the target image.
Optionally, the second determining module 504 includes:
a first obtaining submodule, configured to obtain a target nail feature of a target nail region corresponding to the target nail model in the target image;
the second acquisition sub-module is used for acquiring corresponding three-dimensional positioning information of the target nail characteristics in the target nail model;
The first determining submodule is used for determining the total length y of the nail corresponding to the target nail model according to the three-dimensional positioning information of the target nail characteristics, wherein the direction of the total length of the nail is the growing direction of the nail;
a dividing submodule, configured to divide the target nail model into n equal parts in the nail growth direction to generate n model areas with a nail length of y/n, wherein n is greater than 0;
the first recognition submodule is used for recognizing a target model area which is positioned at the lowest part of the n model areas in the nail growth direction as a preset area corresponding to the root of the fingernail in the target nail model;
and the second determining submodule is used for determining an intersection area of the sphere model and the preset area.
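The work of the dividing and first recognition submodules — splitting the nail length y into n equal segments along the growth direction and taking the lowest segment as the preset area — can be sketched as follows (intervals measured from the nail root upward; a simplification of the 3-D model areas):

```python
def split_nail(total_length_y, n):
    """Divide the nail length y into n equal segments along the growth
    direction; each segment has length y/n."""
    if n <= 0:
        raise ValueError("n must be greater than 0")
    seg = total_length_y / n
    return [(i * seg, (i + 1) * seg) for i in range(n)]

def preset_region(total_length_y, n):
    """The lowest segment in the growth direction is the nail-root
    preset area."""
    return split_nail(total_length_y, n)[0]
```

For a 9-unit nail split into thirds, the preset area is the first 3-unit interval at the root.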
Optionally, the generating module 503 includes:
a third determining submodule, configured to determine a radius r according to the width d of the fingernail root in the width direction of the fingernail in the target nail model, wherein a < r ≤ d, and a is a constant;
and the first generation submodule is used for generating a sphere model according to the radius r.
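The third determining submodule's radius choice can be sketched while keeping the claimed constraint a < r ≤ d; the interpolation fraction used to pick a point inside that interval is an assumption:

```python
def sphere_radius(d, a=0.0, fraction=0.75):
    """Choose a sphere-model radius r from the nail-root width d such
    that a < r <= d.  The constraint comes from the claims; the linear
    interpolation (`fraction`) is an illustrative assumption."""
    r = a + fraction * (d - a)
    assert a < r <= d, "radius must satisfy the claimed bounds"
    return r
```

For a nail-root width d = 8.0 with a = 2.0, this picks r = 6.5, safely inside the claimed bounds.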
Optionally, the first determining module 502 includes:
a second recognition sub-module for recognizing a nail region in the target image according to the nail characteristics in the target image;
The third acquisition sub-module is used for acquiring a hand model corresponding to the hand area in the target image;
a fourth obtaining sub-module, configured to obtain three-dimensional positioning information corresponding to the nail feature in the hand model;
and the second generation submodule is used for generating a nail model corresponding to each nail region in the target image according to the three-dimensional positioning information corresponding to the nail characteristics of each nail region.
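The second generation submodule's behaviour can be sketched by looking up, for each nail region, the 3-D positioning information of its nail features in the hand model. A flat dict of feature coordinates stands in for the real hand model, and the per-region point list stands in for the generated nail model:

```python
def build_nail_models(nail_regions, hand_model):
    """For each nail region, collect the 3-D positioning information of
    its features from the hand model as that region's nail model (here a
    list of 3-D points; the real device builds a full 3-D model)."""
    models = {}
    for region_id, feature_ids in nail_regions.items():
        models[region_id] = [hand_model[f] for f in feature_ids]
    return models
```

Each nail region thus ends up with its own model built purely from the features' three-dimensional positioning information.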
The image processing device provided by the embodiment of the invention can realize each process realized by the electronic equipment in the embodiment of the method, and in order to avoid repetition, the description is omitted here.
With the image processing device, nail regions in a target image are identified and a nail model corresponding to each nail region is determined. For a target nail model among the nail models, a sphere model is generated according to the width of the nail root in the target nail model, and the intersection area of the sphere model and the target nail model is then determined, so that the nail shape of the intersection area can be crescent-shaped. Finally, preset processing is performed on the target area in the target image that matches the intersection area; that is, the intersection area is mapped to the target area in the target image, so that the target area is also crescent-shaped and located at the root of the target nail corresponding to the target nail model. After the preset processing, the target area can be distinguished from other nail regions, its near-crescent shape becomes clearer, and the half-moon mark of the nail in the processed hand image is therefore clearer.
Fig. 6 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present invention.
The electronic device 400 includes, but is not limited to: radio frequency unit 401, network module 402, audio output unit 403, input unit 404, sensor 405, display unit 406, user input unit 407, interface unit 408, memory 409, processor 410, and power source 411. It will be appreciated by those skilled in the art that the electronic device structure shown in fig. 6 is not limiting of the electronic device and that the electronic device may include more or fewer components than shown, or may combine certain components, or a different arrangement of components. In the embodiment of the invention, the electronic equipment comprises, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer and the like.
An input unit 404 for acquiring a target image;
a processor 410 for identifying nail regions in the target image and determining a nail model corresponding to each nail region; for a target nail model in the nail models, generating a sphere model according to the width of the root of the fingernail in the target nail model; determining an intersection region of the sphere model and the target nail model; and carrying out preset processing on a target area matched with the intersection area in the target image.
In the embodiment of the invention, nail regions in the target image are identified and a nail model corresponding to each nail region is determined. For a target nail model among the nail models, a sphere model is generated according to the width of the nail root in the target nail model, and the intersection area of the sphere model and the target nail model is then determined, so that the nail shape of the intersection area can be crescent-shaped. Finally, preset processing is performed on the target area in the target image that matches the intersection area; that is, the intersection area is mapped to the target area in the target image, so that the target area is also crescent-shaped and located at the nail root of the target nail region corresponding to the target nail model. After the preset processing, the target area can be distinguished from other nail regions, its near-crescent shape becomes more obvious, and the half-moon mark of the nail in the processed hand image is therefore more obvious.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 401 may be used to receive and transmit signals during information transmission and reception or during a call; specifically, downlink data from a base station is received and then processed by the processor 410, and uplink data is transmitted to the base station. Typically, the radio frequency unit 401 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 401 may also communicate with networks and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user through the network module 402, such as helping the user to send and receive e-mail, browse web pages, and access streaming media, etc.
The audio output unit 403 may convert audio data received by the radio frequency unit 401 or the network module 402 or stored in the memory 409 into an audio signal and output as sound. Also, the audio output unit 403 may also provide audio output (e.g., a call signal reception sound, a message reception sound, etc.) related to a specific function performed by the electronic device 400. The audio output unit 403 includes a speaker, a buzzer, a receiver, and the like.
The input unit 404 is used to receive an audio or video signal. The input unit 404 may include a graphics processor (Graphics Processing Unit, GPU) 4041 and a microphone 4042, the graphics processor 4041 processing image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 406. The image frames processed by the graphics processor 4041 may be stored in memory 409 (or other storage medium) or transmitted via the radio frequency unit 401 or the network module 402. The microphone 4042 may receive sound and may be capable of processing such sound into audio data. The processed audio data may be converted into a format output that can be transmitted to the mobile communication base station via the radio frequency unit 401 in the case of a telephone call mode.
The electronic device 400 also includes at least one sensor 405, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor, which can adjust the brightness of the display panel 4061 according to the brightness of ambient light, and a proximity sensor, which can turn off the display panel 4061 and/or the backlight when the electronic device 400 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used for recognizing the posture of the electronic device (such as horizontal/vertical screen switching, related games, and magnetometer posture calibration) and for vibration-recognition functions (such as a pedometer and tap detection); the sensor 405 may further include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not described herein.
The display unit 406 is used to display information input by a user or information provided to the user. The display unit 406 may include a display panel 4061, and the display panel 4061 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 407 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 407 includes a touch panel 4071 and other input devices 4072. The touch panel 4071, also referred to as a touch screen, may collect touch operations thereon or thereabout by a user (e.g., operations of the user on the touch panel 4071 or thereabout using any suitable object or accessory such as a finger, stylus, etc.). The touch panel 4071 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch azimuth of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, and sends the touch point coordinates to the processor 410, and receives and executes commands sent from the processor 410. In addition, the touch panel 4071 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. The user input unit 407 may include other input devices 4072 in addition to the touch panel 4071. In particular, other input devices 4072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 4071 may be overlaid on the display panel 4061, and when the touch panel 4071 detects a touch operation thereon or thereabout, the touch operation is transferred to the processor 410 to determine the type of touch event, and then the processor 410 provides a corresponding visual output on the display panel 4061 according to the type of touch event. Although in fig. 6, the touch panel 4071 and the display panel 4061 are two independent components for implementing the input and output functions of the electronic device, in some embodiments, the touch panel 4071 may be integrated with the display panel 4061 to implement the input and output functions of the electronic device, which is not limited herein.
The interface unit 408 is an interface to which an external device is connected to the electronic apparatus 400. For example, the external devices may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 408 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 400 or may be used to transmit data between the electronic apparatus 400 and an external device.
The processor 410 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 409 and invoking data stored in the memory 409, thereby performing overall monitoring of the electronic device. Processor 410 may include one or more processing units; preferably, the processor 410 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 410.
The electronic device 400 may also include a power supply 411 (e.g., a battery) for powering the various components, and preferably the power supply 411 may be logically connected to the processor 410 via a power management system that performs functions such as managing charging, discharging, and power consumption.
In addition, the electronic device 400 includes some functional modules, which are not shown, and are not described herein.
Preferably, the embodiment of the present invention further provides an electronic device, including a processor 410, a memory 409, and a computer program stored in the memory 409 and capable of running on the processor 410, where the computer program when executed by the processor 410 implements each process of the above embodiment of the image processing method, and the same technical effects can be achieved, and for avoiding repetition, a description is omitted herein.
The embodiment of the invention also provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the processes of the above image processing method embodiment and can achieve the same technical effects, which are not repeated here to avoid repetition. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present invention and the scope of the claims, which are to be protected by the present invention.
Claims (8)
1. An image processing method applied to an electronic device, the method comprising:
acquiring a target image;
identifying nail regions in the target image, and determining a nail model corresponding to each nail region, wherein the nail model is constructed according to three-dimensional positioning information of a plurality of nail features of each nail region;
for a target nail model in the nail models, generating a sphere model according to the width of the root of the fingernail in the target nail model;
determining an intersection area of the sphere model and the target nail model, wherein the intersection area is determined according to the sphere model and a preset area, and the preset area is obtained by identifying the root of a nail in the target nail model;
Performing preset processing on a target area matched with the intersection area in the target image, wherein the preset processing comprises adding a layer of mask;
generating a sphere model according to the width of the nail root in the target nail model, wherein the sphere model comprises the following steps:
determining a radius r according to the width d of the fingernail root in the width direction of the fingernail in the target nail model, wherein a < r ≤ d, and a is a constant;
and generating a sphere model according to the radius r.
2. The method of claim 1, wherein the determining an intersection region of the sphere model and the target nail model comprises:
acquiring target nail characteristics of a target nail region corresponding to the target nail model in the target image;
acquiring corresponding three-dimensional positioning information of the target nail characteristics in the target nail model;
determining the total length y of the nail corresponding to the target nail model according to the three-dimensional positioning information of the target nail characteristics, wherein the direction of the total length of the nail is the nail growing direction;
dividing the target nail model into n equal parts in the nail growth direction to generate n model areas with a nail length of y/n, wherein n is greater than 0;
Identifying a target model area which is positioned at the lowest part of the n model areas in the nail growth direction as a preset area corresponding to the root of the fingernail in the target nail model;
and determining an intersection area of the sphere model and the preset area.
3. The method of claim 1, wherein the identifying nail regions in the target image and determining a nail model corresponding to each nail region comprises:
identifying a nail region in the target image according to the nail characteristics in the target image;
acquiring a hand model corresponding to a hand region in the target image;
acquiring corresponding three-dimensional positioning information of the nail features in the hand model;
and generating a nail model corresponding to each nail region in the target image according to the three-dimensional positioning information corresponding to the nail characteristics of each nail region.
4. An image processing apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring a target image;
a first determining module, configured to identify nail regions in the target image, and determine a nail model corresponding to each nail region, where the nail model is constructed according to three-dimensional positioning information of a plurality of nail features of each nail region;
The generation module is used for generating a sphere model according to the width of the root of the fingernail in the target nail model for the target nail model in the nail model;
a second determining module, configured to determine an intersection area of the sphere model and the target nail model, where the intersection area is determined according to the sphere model and a preset area, and the preset area is obtained by identifying a nail root in the target nail model;
the processing module is used for carrying out preset processing on a target area matched with the intersection area in the target image, wherein the preset processing comprises adding a layer of mask;
the generation module comprises:
a third determining submodule, configured to determine a radius r according to the width d of the fingernail root in the width direction of the fingernail in the target nail model, wherein a < r ≤ d, and a is a constant;
and the first generation submodule is used for generating a sphere model according to the radius r.
5. The apparatus of claim 4, wherein the second determining module comprises:
a first obtaining submodule, configured to obtain a target nail feature of a target nail region corresponding to the target nail model in the target image;
The second acquisition sub-module is used for acquiring corresponding three-dimensional positioning information of the target nail characteristics in the target nail model;
the first determining submodule is used for determining the total length y of the nail corresponding to the target nail model according to the three-dimensional positioning information of the target nail characteristics, wherein the direction of the total length of the nail is the growing direction of the nail;
a dividing submodule, configured to divide the target nail model into n equal parts in the nail growth direction to generate n model areas with a nail length of y/n, wherein n is greater than 0;
the first recognition submodule is used for recognizing a target model area which is positioned at the lowest part of the n model areas in the nail growth direction as a preset area corresponding to the root of the fingernail in the target nail model;
and the second determining submodule is used for determining an intersection area of the sphere model and the preset area.
6. The apparatus of claim 4, wherein the first determining module comprises:
a second recognition sub-module for recognizing a nail region in the target image according to the nail characteristics in the target image;
the third acquisition sub-module is used for acquiring a hand model corresponding to the hand area in the target image;
A fourth obtaining sub-module, configured to obtain three-dimensional positioning information corresponding to the nail feature in the hand model;
and the second generation submodule is used for generating a nail model corresponding to each nail region in the target image according to the three-dimensional positioning information corresponding to the nail characteristics of each nail region.
7. An electronic device, comprising: a memory, a processor and a computer program stored on the memory and executable on the processor, which when executed by the processor, implements the steps of the image processing method according to any one of claims 1 to 3.
8. A computer-readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, implements the steps in the image processing method according to any one of claims 1 to 3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911329892.9A CN111091519B (en) | 2019-12-20 | 2019-12-20 | Image processing method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911329892.9A CN111091519B (en) | 2019-12-20 | 2019-12-20 | Image processing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111091519A CN111091519A (en) | 2020-05-01 |
CN111091519B true CN111091519B (en) | 2023-04-28 |
Family
ID=70396634
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911329892.9A Active CN111091519B (en) | 2019-12-20 | 2019-12-20 | Image processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111091519B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112347911A (en) * | 2020-11-05 | 2021-02-09 | 北京达佳互联信息技术有限公司 | Method and device for adding special effects of fingernails, electronic equipment and storage medium |
CN112750203B (en) * | 2021-01-21 | 2023-10-31 | 脸萌有限公司 | Model reconstruction method, device, equipment and storage medium |
CN113660424A (en) * | 2021-08-19 | 2021-11-16 | 展讯通信(上海)有限公司 | Image shooting method and related equipment |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103473563A (en) * | 2013-09-23 | 2013-12-25 | 程涛 | Fingernail image processing method and system, and fingernail feature analysis method and system |
JP2014215735A (en) * | 2013-04-24 | 2014-11-17 | 国立大学法人筑波大学 | Nail image synthesizing device, nail image synthesizing method, and nail image synthesizing program |
CN104414105A (en) * | 2013-09-05 | 2015-03-18 | 卡西欧计算机株式会社 | Nail print apparatus and printing method thereof |
CN106127181A (en) * | 2016-07-02 | 2016-11-16 | 乐活无限(北京)科技有限公司 | Virtual manicure try-on method and system
CN106651879A (en) * | 2016-12-23 | 2017-05-10 | 深圳市拟合科技有限公司 | Method and system for extracting nail image |
CN109272519A (en) * | 2018-09-03 | 2019-01-25 | 先临三维科技股份有限公司 | Nail contour determination method, apparatus, storage medium and processor
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9687059B2 (en) * | 2013-08-23 | 2017-06-27 | Preemadonna Inc. | Nail decorating apparatus |
JP6428415B2 (en) * | 2015-03-20 | 2018-11-28 | カシオ計算機株式会社 | Drawing apparatus and nail shape detection method |
GB2544971B (en) * | 2015-11-27 | 2017-12-27 | Holition Ltd | Locating and tracking fingernails in images |
- 2019-12-20: CN application CN201911329892.9A, patent CN111091519B, status Active
Also Published As
Publication number | Publication date |
---|---|
CN111091519A (en) | 2020-05-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102144489B1 (en) | Method and device for determining a rotation angle of a human face, and a computer storage medium | |
CN108184050B (en) | Photographing method and mobile terminal | |
CN110495819B (en) | Robot control method, robot, terminal, server and control system | |
CN107835367A | Image processing method, device and mobile terminal | |
CN111223143B (en) | Key point detection method and device and computer readable storage medium | |
CN107835364A | Photographing assistance method and mobile terminal | |
CN111091519B (en) | Image processing method and device | |
CN107833177A | Image processing method and mobile terminal | |
CN107817939A | Image processing method and mobile terminal | |
CN109685915B (en) | Image processing method and device and mobile terminal | |
CN107948499A | Image capturing method and mobile terminal | |
CN107248137B | Method for implementing image processing and mobile terminal | |
CN111047511A (en) | Image processing method and electronic equipment | |
CN109461117A | Image processing method and mobile terminal | |
CN108683850B (en) | Shooting prompting method and mobile terminal | |
CN109272473B (en) | Image processing method and mobile terminal | |
CN109544445B (en) | Image processing method and device and mobile terminal | |
CN111031253B (en) | Shooting method and electronic equipment | |
CN109671034B (en) | Image processing method and terminal equipment | |
CN113365085B (en) | Live video generation method and device | |
CN107678672A | Display processing method and mobile terminal | |
JP2023518548A (en) | Detection result output method, electronic device and medium | |
CN110717964B (en) | Scene modeling method, terminal and readable storage medium | |
CN110908517B (en) | Image editing method, image editing device, electronic equipment and medium | |
CN109345636B (en) | Method and device for obtaining virtual face image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 20230705 Address after: 518133 tower a 2301-09, 2401-09, 2501-09, 2601-09, phase III, North District, Yifang center, 99 Xinhu Road, N12 District, Haiwang community, Xin'an street, Bao'an District, Shenzhen City, Guangdong Province Patentee after: VIVO MOBILE COMMUNICATIONS (SHENZHEN) Co.,Ltd. Address before: 523860 No. 283 BBK Avenue, Changan Town, Changan, Guangdong. Patentee before: VIVO MOBILE COMMUNICATION Co.,Ltd. |
|
TR01 | Transfer of patent right |