CN110349079B - Image correction method and system based on user habits and big data - Google Patents

Info

Publication number
CN110349079B
CN110349079B (application CN201910482556.1A)
Authority
CN
China
Prior art keywords
image
corrected
data
parameters
correction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910482556.1A
Other languages
Chinese (zh)
Other versions
CN110349079A (en)
Inventor
姚俊浩 (Yao Junhao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Palm Interactive Network Technology Co ltd
Original Assignee
Nanjing Palm Interactive Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Palm Interactive Network Technology Co ltd filed Critical Nanjing Palm Interactive Network Technology Co ltd
Priority to CN201910482556.1A
Publication of CN110349079A
Application granted
Publication of CN110349079B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T3/04
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/50Lighting effects
    • G06T15/60Shadow generation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to an image correction method and system based on user habits and big data, comprising the following steps: acquiring a submitted image to be corrected and extracting the facial features in the image; performing region segmentation on the extracted facial features, the segmentation dividing the facial features into at least four feature regions; performing face matching on the image according to the feature regions to identify the user whose stored face image best matches the current four feature regions; extracting the pre-stored image correction parameters of that user, the image correction parameters comprising feature region adjustment data, which include user-uploaded parameters; performing three-dimensional modeling on each feature region to generate modeled images, and correcting the feature regions of the modeled images according to the feature region adjustment data; and integrating the corrected feature regions, adjusting the pose of the modeled image to match the pose of the image to be corrected, and capturing a two-dimensional image to generate the corrected image.

Description

Image correction method and system based on user habits and big data
Technical Field
The invention relates to the field of image correction, in particular to an image correction method and an image correction system based on user habits and big data.
Background
As demand for personalization grows, people are no longer satisfied with correcting pictures through simple filters such as color-conversion filters; they increasingly pursue fresher, higher-quality advanced filters and picture corrections with a more personal style.
Existing image correction algorithms can apply a corresponding style to an image to be corrected by referring to the style of a comparison image. However, such correction cannot follow the user's own correction habits, and a reference image cannot reproduce the effect the user would achieve by correcting the image manually. Moreover, the prior art cannot mine the user's correction history for content that can serve as a reference and correct the image to be corrected accordingly.
Disclosure of Invention
The purpose of the invention is as follows:
the invention provides an image correction method and system based on user habits and big data, aiming at the technical problems related in the background technology.
The technical scheme is as follows:
an image correction method based on user habits and big data comprises the following steps:
acquiring a submitted image to be corrected, and extracting facial features in the image;
performing region segmentation on the extracted facial features, wherein the region segmentation divides the facial features into at least four feature regions;
performing face matching on the image according to the feature regions to identify the user whose stored face image best matches the current four feature regions;
extracting pre-stored image correction parameters of the user, wherein the image correction parameters comprise: feature region adjustment data; the feature region adjustment data include user-uploaded parameters;
performing three-dimensional modeling on each feature region to generate modeled images; performing data correction on the feature regions of the modeled images according to the feature region adjustment data;
and integrating the feature regions after data correction, adjusting the pose of the modeled image according to the pose of the image to be corrected, and capturing a two-dimensional image to generate a corrected image.
As a preferred mode of the present invention, the method comprises the steps of:
the image correction parameters further include: a style parameter comprising a definition of the image style;
acquiring the corrected images produced by the user within a preset time, and acquiring their toning data;
classifying the toning data by toning type, obtaining the usage rate of each toning type, and storing the most frequently used toning type as the style parameter;
and correcting the corrected image according to the style parameter.
As a preferred mode of the present invention, the method comprises the steps of:
the image correction parameters include: a light parameter comprising a definition of the image lighting;
acquiring the corrected images produced by the user within a preset time, and acquiring their facial light data;
and acquiring, from the facial light data, the lighting the user uses most frequently, and storing it as the light parameter.
The method comprises the following steps:
acquiring the light parameter, and retrieving the corrected modeled image;
simulating the light parameter on the modeled image;
and adjusting the pose of the modeled image according to the pose of the image to be corrected, and capturing a two-dimensional image to generate a corrected image.
The method comprises the following steps:
the image correction parameters further include: skin tone data comprising an actual skin tone of a user;
acquiring the corrected images produced by the user within a preset time, and acquiring the regional skin color of each corrected image in a preset correction region;
and obtaining, from the skin tone data, the skin color of the corresponding preset correction region, and performing skin color correction on that region of the corrected image.
An image correction system based on user habits and big data comprises the following modules:
an image extraction module, used for extracting facial features from the submitted image to be corrected;
a feature region extraction module, used for extracting feature regions from the extracted facial features;
a parameter storage module, used for pre-storing image correction parameters, the image correction parameters comprising feature region adjustment data;
a modeling module, used for performing three-dimensional modeling on the feature regions to generate modeled images;
a feature region adjustment module, used for correcting the modeled image of each feature region according to the feature region adjustment data;
an integration module, used for integrating the feature regions after data correction;
and a capture module, used for adjusting the pose of the modeled image according to the pose of the image to be corrected and capturing a two-dimensional image to generate a corrected image.
The system further comprises:
the parameter storage module further pre-stores a style parameter among the image correction parameters;
and a style adjustment module, used for correcting the style parameter of the corrected image according to the toning data.
The system further comprises:
the parameter storage module further pre-stores a light parameter among the image correction parameters;
and a light adjustment module, used for correcting the lighting according to the light parameter.
The system further comprises:
the parameter storage module further pre-stores skin color parameters among the image correction parameters;
a skin color acquisition module, used for acquiring the user's corrected images within a preset time and acquiring the regional skin color in a preset correction region as the skin color parameter;
and a skin color adjustment module, used for performing skin color correction on the preset correction region of the corrected image according to the skin color parameter.
The invention realizes the following beneficial effects:
1. The face in the image to be corrected is extracted and segmented into feature regions, and the segmented feature regions are modeled and corrected according to the pre-stored region adjustment parameters, which reduces the abrupt, unnatural appearance caused by correcting two-dimensional data directly.
2. The style, light and skin color parameters of the corrected image are adjusted according to the user's habits and big data, so that the corrected image conforms to the user's correction habits and usual style.
3. Adjusting the lighting on the three-dimensional model improves the rendering of facial shadows and the stereoscopic appearance of the portrait, and adjusting the skin color reduces the influence of ambient color on the skin of the person in the image and avoids distortion.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart of an image correction method based on user habits and big data according to the present invention;
FIG. 2 is a style correction flowchart of an image correction method based on user habits and big data according to the present invention;
fig. 3 is a light data correction flow chart of an image correction method based on user habits and big data according to the present invention;
fig. 4 is a skin color correction flow chart of an image correction method based on user habits and big data according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments.
Example one
Reference is made to fig. 1-4.
An image correction method based on user habits and big data comprises the following steps:
s1, acquiring a submitted image to be corrected, and extracting facial features in the image.
The user submits the image to be corrected to the system; the image may be a picture or a video. The system identifies the face in the image with a face recognition component and acquires the face image.
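The patent does not name a face-detection implementation. As a minimal sketch, assuming a conventional OpenCV pipeline (the function and file names below are illustrative, not the patent's own), the face could be located and cropped for the later steps as follows:

```python
# Hedged sketch: locate the largest face with OpenCV's stock Haar cascade and
# crop it as the face image to be corrected (assumed pipeline, not the patent's method).
import cv2

def extract_face(image_path: str):
    image = cv2.imread(image_path)
    if image is None:
        return None  # file could not be read
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # no face found in the submitted image
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest detection
    return image[y:y + h, x:x + w]
```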
S2: Performing region segmentation on the extracted facial features, wherein the region segmentation divides the facial features into at least four feature regions.
The face image is acquired and divided into regions. The feature regions may be preset segmentation regions: the face image may be segmented into four feature regions from top to bottom, or into four feature regions corresponding to the four quadrants.
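As an illustration of the quadrant variant (a minimal sketch; the top-to-bottom variant would slice along the vertical axis only):

```python
import numpy as np

def split_into_quadrants(face: np.ndarray) -> dict:
    """Split a cropped face image into four feature regions by quadrant."""
    h, w = face.shape[:2]
    cy, cx = h // 2, w // 2
    return {
        "top_left": face[:cy, :cx],
        "top_right": face[:cy, cx:],
        "bottom_left": face[cy:, :cx],
        "bottom_right": face[cy:, cx:],
    }
```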
S3: Performing face matching on the image according to the feature regions to identify the user whose stored face image best matches the current four feature regions.
Features are extracted separately from each segmented feature region to obtain recognizable features for face matching. The user is identified from the feature regions; matching each region separately improves recognition accuracy.
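A toy sketch of per-region matching; a production system would use learned face embeddings, and the `user_db` layout below is an assumption made only for illustration:

```python
import cv2
import numpy as np

def region_descriptor(region: np.ndarray, size=(32, 32)) -> np.ndarray:
    """Placeholder descriptor: resized grayscale patch, flattened and L2-normalised."""
    gray = cv2.cvtColor(region, cv2.COLOR_BGR2GRAY)
    patch = cv2.resize(gray, size).astype(np.float32).ravel()
    return patch / (np.linalg.norm(patch) + 1e-8)

def best_matching_user(regions: dict, user_db: dict) -> str:
    """user_db maps user_id -> {region_name: stored descriptor}.

    Each of the four regions is matched separately and the per-region similarities
    are averaged, mirroring the partitioned matching described above."""
    scores = {}
    for user_id, stored in user_db.items():
        sims = [float(region_descriptor(regions[name]) @ stored[name])
                for name in regions if name in stored]
        scores[user_id] = float(np.mean(sims)) if sims else -1.0
    return max(scores, key=scores.get)
```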
S4: Extracting pre-stored image correction parameters of the user, wherein the image correction parameters comprise: feature region adjustment data.
The feature region adjustment data include user-uploaded parameters.
The user uploads the pre-stored image correction parameters to the system; these may include the parameters the user applies when modifying facial features, for example parameters for adjusting the facial contour, the proportions of the facial features, the body, and the like.
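The patent does not fix a storage format for these parameters; a hypothetical layout (all field names are assumptions) might look like:

```python
from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple

@dataclass
class ImageCorrectionParameters:
    """Hypothetical container for one user's pre-stored correction parameters."""
    # Per-region geometric adjustments, e.g. {"bottom_left": {"contour_scale": 0.95}}.
    feature_region_adjustments: Dict[str, Dict[str, float]] = field(default_factory=dict)
    style: Optional[str] = None                                    # most frequently used toning type
    light_direction: Optional[Tuple[float, float, float]] = None   # preferred light-source position
    skin_tone_bgr: Optional[Tuple[int, int, int]] = None           # the user's actual skin colour
```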
S5: Performing three-dimensional modeling on each feature region to generate modeled images, and performing data correction on the feature regions of the modeled images according to the feature region adjustment data.
Each feature region is modeled in three dimensions so that the facial features can be adjusted conveniently. The adjustment data for the corresponding feature region are obtained from the pre-stored image correction parameters and used to correct and adjust that region.
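Full 3D face reconstruction is outside the scope of this sketch; assuming each region has already been fitted to a small mesh, the data-correction step could amount to scaling the region's vertices by the stored adjustment factor:

```python
import numpy as np

def adjust_region_mesh(vertices: np.ndarray, scale: float) -> np.ndarray:
    """Scale one feature region's (N, 3) vertex array about its centroid.

    `scale` comes from the user's feature region adjustment data,
    e.g. 0.95 to slightly narrow the lower-face contour (illustrative value)."""
    centroid = vertices.mean(axis=0)
    return centroid + (vertices - centroid) * scale
```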
S6: Integrating the feature regions after data correction, adjusting the pose of the modeled image according to the pose of the image to be corrected, and capturing a two-dimensional image to generate a corrected image.
The corrected modeled images are integrated, merging the models of the feature regions into a model of the complete facial features. The integrated model is then adjusted back to the original pose of the image, and a two-dimensional image is captured.
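A minimal sketch of the pose-matching and two-dimensional capture, assuming an orthographic projection and a head pose summarised by a single yaw angle (a real renderer would also rasterise the texture):

```python
import numpy as np

def yaw_rotation(yaw_deg: float) -> np.ndarray:
    """Rotation about the vertical axis, used to restore the original head pose."""
    a = np.radians(yaw_deg)
    return np.array([[np.cos(a), 0.0, np.sin(a)],
                     [0.0,       1.0, 0.0      ],
                     [-np.sin(a), 0.0, np.cos(a)]])

def capture_2d(vertices: np.ndarray, yaw_deg: float) -> np.ndarray:
    """Rotate the integrated (N, 3) model to the pose of the original image and drop depth."""
    rotated = vertices @ yaw_rotation(yaw_deg).T
    return rotated[:, :2]  # orthographic projection onto the image plane
```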
Example two
The present embodiment is substantially the same as the first embodiment, except that as a preferred mode of the present embodiment, the following steps are included:
the image correction parameters further include: a style parameter comprising a definition of a style of the imagery.
The corrected images produced by the user within a preset time are acquired, and their toning data are acquired.
The toning data are classified by toning type, the usage rate of each toning type is obtained, and the most frequently used toning type is stored as the style parameter.
The corrected image is then corrected according to the style parameter.
The toning types may include filters and, more specifically, common image correction adjustments such as exposure, sharpness, brightness, filter choice and noise.
The preset time may be set between one week and one month; the corrected images the user produced within that period are obtained in order to determine the style parameters the user applies most often.
The corrected images may include automatically corrected and manually corrected images. The system collects the corrected images generated by the user within the preset time and extracts their style parameters: specifically, it acquires the most frequently used toning type and the corresponding adjustment data as the style parameter, and then applies a style correction to the corrected image accordingly.
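A minimal sketch of selecting the style parameter, assuming the user's correction history is available as (timestamp, toning_type) pairs; the same most-frequent selection applies to the light and skin data described below:

```python
from collections import Counter
from datetime import datetime, timedelta

def most_used_toning_type(edit_log, days: int = 30) -> str:
    """Return the toning type used most often within the preset window (here one month).

    `edit_log` is an iterable of (timestamp, toning_type) pairs; the winner is
    stored as the user's style parameter."""
    cutoff = datetime.now() - timedelta(days=days)
    counts = Counter(toning for ts, toning in edit_log if ts >= cutoff)
    return counts.most_common(1)[0][0] if counts else "none"
```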
As a preferable mode of the present embodiment, the method includes the steps of:
the image correction parameters include: light parameters including definitions of image light.
The corrected images produced by the user within a preset time are acquired, and their facial light data are acquired.
The facial lighting the user uses most frequently is identified from the facial light data and stored as the light parameter. The definition of the image lighting includes the position of the light source in the image. The preset time may be set between one week and one month; the corrected images the user produced within that period are obtained in order to determine the facial lighting the user uses most often.
The light-source position with the highest usage rate is acquired and set as the light parameter, and the corrected image is corrected accordingly.
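Estimated light directions rarely repeat exactly, so one plausible way to find the "most used" light-source position (an assumption, not spelled out in the patent) is to snap each estimate to a small set of canonical positions before counting:

```python
from collections import Counter
import numpy as np

# Canonical light-source positions as unit direction vectors (illustrative labels).
CANONICAL_LIGHTS = {
    "front": (0.0, 0.0, 1.0), "left": (-1.0, 0.0, 0.0), "right": (1.0, 0.0, 0.0),
    "top": (0.0, 1.0, 0.0), "front_left": (-0.707, 0.0, 0.707), "front_right": (0.707, 0.0, 0.707),
}

def preferred_light(directions) -> str:
    """Snap each estimated light direction to the nearest canonical position and return the mode."""
    def nearest(d):
        d = np.asarray(d, dtype=float)
        d = d / (np.linalg.norm(d) + 1e-8)
        return max(CANONICAL_LIGHTS, key=lambda k: float(np.dot(d, CANONICAL_LIGHTS[k])))
    counts = Counter(nearest(d) for d in directions)
    return counts.most_common(1)[0][0] if counts else "front"
```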
As a preferable mode of the present embodiment, the method includes the steps of:
and acquiring light parameters, and calling the modified modeling image.
A simulation of the light parameters is provided based on the modeled image.
And adjusting the posture of the modeling image according to the posture of the image to be corrected, and intercepting the two-dimensional image to generate a corrected image.
And performing three-dimensional light source simulation according to modeling, and acquiring the influence under the light source condition, so as to reduce the influence of light data correction on the user image.
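A minimal sketch of the light simulation, assuming per-vertex unit normals for the modeled face and simple Lambertian (diffuse-only) shading; shadows and specular terms are omitted:

```python
import numpy as np

def lambert_shade(normals: np.ndarray, light_dir, albedo: float = 1.0) -> np.ndarray:
    """Per-vertex diffuse intensity for the modeled face under the stored light parameter.

    `normals` is an (N, 3) array of unit vertex normals and `light_dir` the preferred
    light-source direction recovered from the user's correction history."""
    light = np.asarray(light_dir, dtype=float)
    light = light / (np.linalg.norm(light) + 1e-8)
    return albedo * np.clip(normals @ light, 0.0, None)
```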
As a preferable mode of the present embodiment, the method includes the steps of:
the image correction parameters further include: skin tone data comprising an actual skin tone of the user.
The corrected images produced by the user within a preset time are acquired, and the regional skin color of each corrected image is acquired in a preset correction region.
The skin color of the corresponding preset correction region is obtained from the skin tone data, and skin color correction is performed on that region of the corrected image.
The preset time may be set between one week and one month; the corrected images the user produced within that period are obtained to derive the user's skin color data. Specifically, the skin color of the user's facial region is obtained according to the regional characteristics of the face, and the skin color of the corrected image is adjusted toward that regional skin color, reducing the influence of ambient color on the skin tone.
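A minimal sketch of the regional skin-color correction, assuming a boolean mask for the preset correction region and the user's stored skin color in BGR (the blend strength is an illustrative parameter):

```python
import numpy as np

def correct_skin_tone(image: np.ndarray, mask: np.ndarray,
                      target_bgr, strength: float = 0.6) -> np.ndarray:
    """Shift the masked skin region toward the user's stored skin colour.

    Blending only part of the difference reduces the ambient colour cast
    without flattening the natural shading of the face."""
    out = image.astype(np.float32)
    current = out[mask].mean(axis=0)                      # average colour in the region
    out[mask] += strength * (np.asarray(target_bgr, dtype=np.float32) - current)
    return np.clip(out, 0, 255).astype(np.uint8)
```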
EXAMPLE III
The present embodiment is a system embodiment corresponding to the above embodiment, and specifically includes the following contents:
a system of an image correction method based on user habits and big data comprises the following steps:
and the image extraction module is used for extracting facial features according to the submitted image to be corrected.
And the characteristic region extraction module is used for extracting the characteristic region according to the extracted facial characteristics.
The parameter storage module is used for prestoring image correction parameters, and the image correction parameters comprise: and adjusting parameters of the characteristic region.
And the modeling module is used for carrying out three-dimensional modeling on the characteristic region to generate a modeling image.
And the characteristic region adjusting module is used for correcting according to the modeling image of the characteristic region and the characteristic region adjusting data.
And the integration module is used for integrating the characteristic region after the data correction.
And the intercepting module is used for adjusting the posture of the modeling image according to the posture of the image to be corrected and intercepting the two-dimensional image to generate a corrected image.
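A skeletal sketch of how these modules could be wired together; every interface name below is an assumption made for illustration, not the patent's specification:

```python
class ImageCorrectionSystem:
    """Illustrative wiring of the modules described above."""

    def __init__(self, image_extractor, region_extractor, parameter_store,
                 modeler, region_adjuster, integrator, capturer):
        self.image_extractor = image_extractor    # face extraction
        self.region_extractor = region_extractor  # feature region segmentation + matching
        self.parameter_store = parameter_store    # pre-stored correction parameters
        self.modeler = modeler                    # per-region 3D modeling
        self.region_adjuster = region_adjuster    # apply feature region adjustment data
        self.integrator = integrator              # merge regions into one model
        self.capturer = capturer                  # pose matching + 2D capture

    def correct(self, image):
        face = self.image_extractor.extract(image)
        regions = self.region_extractor.extract(face)
        params = self.parameter_store.lookup(regions)  # parameters of the matched user
        models = {name: self.modeler.build(region) for name, region in regions.items()}
        adjusted = {name: self.region_adjuster.apply(model,
                        params.feature_region_adjustments.get(name, {}))
                    for name, model in models.items()}
        whole = self.integrator.merge(adjusted)
        return self.capturer.render(whole, reference=image)  # pose-matched 2D capture
```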
As a preferred mode of the present embodiment, the system further comprises:
the parameter storage module further pre-stores a style parameter among the image correction parameters;
and a style adjustment module, used for correcting the style parameter of the corrected image according to the toning data.
As a preferred mode of the present embodiment, the system further comprises:
the parameter storage module further pre-stores a light parameter among the image correction parameters;
and a light adjustment module, used for correcting the lighting according to the light parameter.
As a preferred mode of the present embodiment, the system further comprises:
the parameter storage module further pre-stores skin color parameters among the image correction parameters;
a skin color acquisition module, used for acquiring the user's corrected images within a preset time and acquiring the regional skin color in a preset correction region as the skin color parameter;
and a skin color adjustment module, used for performing skin color correction on the preset correction region of the corrected image according to the skin color parameter.
The above embodiments only illustrate the technical idea and features of the present invention; their purpose is to enable those skilled in the art to understand and implement the invention, not to limit its scope of protection. All equivalent changes and modifications made according to the spirit of the present invention shall fall within the protection scope of the present invention.

Claims (6)

1. An image correction method based on user habits and big data is characterized by comprising the following steps:
acquiring a submitted image to be corrected, and extracting facial features in the image;
performing region segmentation on the extracted facial features, wherein the region segmentation divides the facial features into at least four feature regions according to four quadrants;
performing face matching on the image according to the feature regions to identify the user whose stored face image best matches the current four feature regions;
extracting pre-stored image correction parameters of the user, wherein the image correction parameters comprise: feature region adjustment data;
the feature region adjustment data comprise user-uploaded parameters;
performing three-dimensional modeling on each feature region to generate modeled images; performing data correction on the feature regions of the modeled images according to the feature region adjustment data;
integrating the feature regions after data correction, adjusting the pose of the modeled image according to the pose of the image to be corrected, and capturing a two-dimensional image to generate a corrected image;
the image correction parameters further comprise: a light parameter comprising a definition of the image lighting;
acquiring the corrected images produced by the user within a preset time, and acquiring their facial light data;
acquiring, from the facial light data, the lighting the user uses most frequently, and storing it as the light parameter;
acquiring the light parameter, and retrieving the corrected modeled image;
simulating the light parameter on the modeled image;
and adjusting the pose of the modeled image according to the pose of the image to be corrected, and capturing a two-dimensional image to generate the corrected image.
2. The method as claimed in claim 1, comprising the steps of:
the image correction parameters further comprise: a style parameter comprising a definition of the image style;
acquiring the corrected images produced by the user within a preset time, and acquiring their toning data;
classifying the toning data by toning type, obtaining the usage rate of each toning type, and storing the most frequently used toning type as the style parameter;
and correcting the corrected image according to the style parameter.
3. The method as claimed in claim 2, comprising the steps of:
the image correction parameters further comprise: skin tone data comprising an actual skin tone of the user;
acquiring the corrected images produced by the user within a preset time, and acquiring the regional skin color of each corrected image in a preset correction region; and obtaining, from the skin tone data, the skin color of the corresponding preset correction region, and performing skin color correction on that region of the corrected image.
4. An image correction system based on user habits and big data, comprising:
an image extraction module, used for extracting facial features from the submitted image to be corrected;
a feature region extraction module, used for extracting feature regions from the extracted facial features, performing region segmentation on the extracted facial features to divide the facial features into at least four feature regions according to four quadrants, performing face matching on the image according to the feature regions, and identifying the user whose stored face image best matches the current four feature regions;
a parameter storage module, used for pre-storing image correction parameters, the image correction parameters comprising: feature region adjustment data, wherein the feature region adjustment data include user-uploaded parameters, and the image correction parameters further comprising: light parameters;
a modeling module, used for performing three-dimensional modeling on the feature regions to generate modeled images;
a feature region adjustment module, used for correcting the modeled image of each feature region according to the feature region adjustment data;
an integration module, used for integrating the feature regions after data correction;
and a capture module, used for acquiring the corrected images produced by the user within a preset time, acquiring their facial light data, retrieving the corrected modeled image according to the light parameters pre-stored in the parameter storage module, and simulating the light parameters on the modeled image; and adjusting the pose of the modeled image according to the pose of the image to be corrected, and capturing a two-dimensional image to generate a corrected image.
5. The system of claim 4, wherein the parameter storage module further pre-stores a style parameter among the image correction parameters;
and the system further comprises a style adjustment module, used for correcting the style parameter of the corrected image according to the toning data.
6. The system of claim 4, wherein the parameter storage module further pre-stores skin color parameters among the image correction parameters;
the system further comprises a skin color acquisition module, used for acquiring the user's corrected images within a preset time and acquiring the regional skin color in a preset correction region as the skin color parameter;
and a skin color adjustment module, used for performing skin color correction on the preset correction region of the corrected image according to the skin color parameter.
CN201910482556.1A 2019-06-04 2019-06-04 Image correction method and system based on user habits and big data Active CN110349079B (en)

Priority Applications (1)

Application Number: CN201910482556.1A (granted as CN110349079B)
Priority Date: 2019-06-04 · Filing Date: 2019-06-04
Title: Image correction method and system based on user habits and big data

Applications Claiming Priority (1)

Application Number: CN201910482556.1A (granted as CN110349079B)
Priority Date: 2019-06-04 · Filing Date: 2019-06-04
Title: Image correction method and system based on user habits and big data

Publications (2)

Publication Number Publication Date
CN110349079A CN110349079A (en) 2019-10-18
CN110349079B (en) 2023-04-18

Family

ID=68181514

Family Applications (1)

Application Number: CN201910482556.1A (status: Active, granted as CN110349079B)
Priority Date: 2019-06-04 · Filing Date: 2019-06-04
Title: Image correction method and system based on user habits and big data

Country Status (1)

Country Link
CN (1) CN110349079B (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106503606A (en) * 2015-09-08 2017-03-15 宏达国际电子股份有限公司 Face image adjustment system and face image method of adjustment
JP6872742B2 (en) * 2016-06-30 2021-05-19 学校法人明治大学 Face image processing system, face image processing method and face image processing program
CN108257097A (en) * 2017-12-29 2018-07-06 努比亚技术有限公司 U.S. face effect method of adjustment, terminal and computer readable storage medium
CN108573480B (en) * 2018-04-20 2020-02-11 太平洋未来科技(深圳)有限公司 Ambient light compensation method and device based on image processing and electronic equipment
CN108665408A (en) * 2018-05-21 2018-10-16 北京微播视界科技有限公司 Method for regulating skin color, device and electronic equipment
CN109064388A (en) * 2018-07-27 2018-12-21 北京微播视界科技有限公司 Facial image effect generation method, device and electronic equipment

Also Published As

Publication number Publication date
CN110349079A (en) 2019-10-18

Similar Documents

Publication Publication Date Title
CN111126125B (en) Method, device, equipment and readable storage medium for extracting target text in certificate
CN106778928B (en) Image processing method and device
US11323676B2 (en) Image white balance processing system and method
CN108257084B (en) Lightweight face automatic makeup method based on mobile terminal
US7751640B2 (en) Image processing method, image processing apparatus, and computer-readable recording medium storing image processing program
Wang et al. Example-based image color and tone style enhancement
CN107680128A (en) Image processing method, device, electronic equipment and computer-readable recording medium
US7840087B2 (en) Image processing apparatus and method therefor
CN107180415B (en) Skin beautifying processing method and device in image
CN110163810B (en) Image processing method, device and terminal
JP2002245471A (en) Photograph finishing service for double print accompanied by second print corrected according to subject contents
Kwok et al. Simultaneous image color correction and enhancement using particle swarm optimization
JP4421761B2 (en) Image processing method and apparatus, and recording medium
US7251054B2 (en) Method, apparatus and recording medium for image processing
CN107730444A (en) Image processing method, device, readable storage medium storing program for executing and computer equipment
US20080232692A1 (en) Image processing apparatus and image processing method
JP3959909B2 (en) White balance adjustment method and adjustment device
JP2004062651A (en) Image processor, image processing method, its recording medium and its program
CN108022207A (en) Image processing method, device, storage medium and electronic equipment
CN110930341A (en) Low-illumination image enhancement method based on image fusion
EP2384583A1 (en) Image processing
US7212674B1 (en) Method, apparatus and recording medium for face extraction
WO2008102296A2 (en) Method for enhancing the depth sensation of an image
US9092889B2 (en) Image processing apparatus, image processing method, and program storage medium
CN113052783A (en) Face image fusion method based on face key points

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230327

Address after: Floor 31-32, Building 1, No. 268 Lushan Road, Jianye District, Nanjing City, Jiangsu Province, 210000 (elevator floor, actual structural floors 26 and 27)

Applicant after: Nanjing Palm Interactive Network Technology Co.,Ltd.

Address before: 215400 room 644, building 1, 18 Taiping South Road, Chengxiang Town, Taicang City, Suzhou City, Jiangsu Province

Applicant before: SUZHOU HAOGE CULTURAL COMMUNICATIONS CO.,LTD.

GR01 Patent grant