CN115293996B - Image toning method, device and storage medium - Google Patents

Image toning method, device and storage medium

Info

Publication number
CN115293996B
CN115293996B (application CN202211223095.4A)
Authority
CN
China
Prior art keywords
image
region
interest
segmentation
original image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211223095.4A
Other languages
Chinese (zh)
Other versions
CN115293996A (en)
Inventor
宗盖盖
马传旭
董付春
林晓丹
郭娟
陆浩杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Qunhe Information Technology Co Ltd
Original Assignee
Hangzhou Qunhe Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Qunhe Information Technology Co Ltd filed Critical Hangzhou Qunhe Information Technology Co Ltd
Priority to CN202211223095.4A priority Critical patent/CN115293996B/en
Publication of CN115293996A publication Critical patent/CN115293996A/en
Application granted granted Critical
Publication of CN115293996B publication Critical patent/CN115293996B/en
Priority to PCT/CN2023/102937 priority patent/WO2024074060A1/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • G06T5/94Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/28Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image toning method, an image toning device and a storage medium, relating to the technical field of image processing. The method comprises the following steps: receiving a trigger signal acting on an original image; after the trigger signal is received, segmenting the original image through an interactive segmentation algorithm to obtain a binary segmentation map; segmenting the original image through a superpixel segmentation algorithm to obtain a superpixel segmentation map; determining a region of interest according to the binary segmentation map and the superpixel segmentation map; toning the region of interest to obtain a region-of-interest toning map; and generating a toned target image according to the region-of-interest toning map and the original image. The method and the device solve the problem of low toning efficiency during local toning in the prior art, achieving the effect that the region of interest can be determined automatically according to the two segmentation algorithms and image fusion can be performed automatically, avoiding manual operation by the user and improving toning efficiency.

Description

Image toning method, device and storage medium
Technical Field
The invention relates to an image toning method, an image toning device and a storage medium, and belongs to the technical field of image processing.
Background
In many scenarios, users often need to tone an image, particularly local areas within the image.
An existing local toning method includes: matting out the region of interest in the image with matting software, toning the region of interest, and then aligning and pasting the toned region of interest back onto the original image.
In the above scheme, the user needs to perform matting manually before toning, and the region of interest still needs to be manually aligned with the original image after toning, so the toning efficiency is low.
Disclosure of Invention
The invention aims to provide an image toning method, an image toning device and a storage medium, which are used for solving the problems in the prior art.
In order to achieve the above purpose, the invention provides the following technical solution:
According to a first aspect, embodiments of the present invention provide an image toning method, the method comprising:
receiving a trigger signal acting on an original image;
after the trigger signal is received, segmenting the original image through an interactive segmentation algorithm to obtain a binary segmentation map;
segmenting the original image through a superpixel segmentation algorithm to obtain a superpixel segmentation map;
determining a region of interest according to the binary segmentation map and the superpixel segmentation map;
toning the region of interest to obtain a region-of-interest toning map;
and generating a toned target image according to the region-of-interest toning map and the original image.
Optionally, the determining a region of interest according to the binary segmentation map and the superpixel segmentation map comprises:
calculating the coincidence degree between the foreground region in the binary segmentation map and each superpixel block in the superpixel segmentation map;
and determining a region composed of the superpixel blocks whose coincidence degree reaches a preset threshold as the region of interest.
Optionally, the generating a toned target image according to the region-of-interest toning map and the original image includes:
generating a trimap image of the region of interest;
generating a transparent channel map according to the trimap image and the original image;
and performing image fusion on the region-of-interest toning map and the original image according to the transparent channel map to obtain the toned target image.
Optionally, the generating a trimap image of the region of interest includes:
and carrying out image processing on the region of interest by using an expansion corrosion method and/or a skeleton extraction method to obtain the trimap image.
Optionally, the generating a transparent channel map according to the trimap image and the original image includes:
and calculating to obtain the transparent channel map by an alphaMatting matting algorithm according to the trimap image and the original image, wherein the transparent channel map comprises the transparency of each pixel point in the original image.
Optionally, the performing image fusion on the region-of-interest toning map and the original image according to the transparent channel map to obtain the toned target image includes:
calculating the fused pixel value of each pixel point according to the pixel value of each pixel point in the region-of-interest toning map, the pixel value of each pixel point in the original image, and the transparency of each pixel point in the transparent channel map.
Optionally, the calculating the fused pixel value of each pixel point according to the pixel value of each pixel point in the region-of-interest toning map, the pixel value of each pixel point in the original image, and the transparency of each pixel point in the transparent channel map includes:
for each pixel point, fused pixel value at the pixel point = pixel value of the region-of-interest toning map × (1 − alpha) + pixel value of the original image × alpha;
where alpha is the transparency at the pixel point.
Optionally, the toning the region of interest to obtain a region of interest toning map includes:
receiving an adjusting instruction for adjusting preset parameters of the region of interest, wherein the preset parameters comprise at least one of hue, saturation, brightness, contrast and definition;
and adjusting the region of interest according to the adjusting instruction to obtain the region of interest color chart.
In a second aspect, there is provided an image toning apparatus, the apparatus comprising a memory having at least one program instruction stored therein and a processor that implements the method of the first aspect by loading and executing the at least one program instruction.
In a third aspect, there is provided a computer storage medium having stored therein at least one program instruction which is loaded and executed by a processor to implement the method of the first aspect.
By receiving a trigger signal acting on the original image; segmenting the original image through an interactive segmentation algorithm after the trigger signal is received to obtain a binary segmentation map; segmenting the original image through a superpixel segmentation algorithm to obtain a superpixel segmentation map; determining a region of interest according to the binary segmentation map and the superpixel segmentation map; toning the region of interest to obtain a region-of-interest toning map; and generating a toned target image according to the region-of-interest toning map and the original image, the problem of low toning efficiency during local toning in the prior art is solved. The region of interest can be determined automatically according to the two segmentation algorithms, and image fusion can be performed automatically, avoiding manual operation by the user and improving toning efficiency.
In addition, the trimap image of the region of interest is generated by the dilation-erosion method and the skeleton extraction method, and the transparent channel map is then generated from the trimap image and the original image, so that the boundary is not misjudged as background, the accuracy of the generated transparent channel map is improved, and the precision of the fused target image is further improved.
By performing image fusion according to the transparent channel map, the method and the device solve the problem of an unnatural transition between the region of interest and the untoned regions, and improve the image quality of the generated target image.
By determining the region of interest through the two segmentation algorithms and the coincidence degree, the method and the device improve the accuracy of the identified region of interest.
The foregoing description is only an overview of the technical solutions of the present invention, and in order to make the technical solutions of the present invention more clearly understood and to implement them in accordance with the contents of the description, the following detailed description is given with reference to the preferred embodiments of the present invention and the accompanying drawings.
Drawings
FIG. 1 is a flowchart of a method for toning an image according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an original image according to one embodiment of the present invention;
FIG. 3 is a super-pixel segmentation graph obtained after super-pixel segmentation is performed on the original image shown in FIG. 2 according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of one possible method for identifying the original image shown in FIG. 2 to determine a region of interest according to an embodiment of the present invention;
fig. 5 is a trimap image obtained by processing the region of interest shown in fig. 4 with the dilation-erosion method according to an embodiment of the present invention;
fig. 6 is a trimap image obtained by processing the region of interest shown in fig. 4 with the skeleton extraction method according to an embodiment of the present invention;
fig. 7 is a trimap image obtained by processing the region of interest shown in fig. 4 with both the dilation-erosion method and the skeleton extraction method according to an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Referring to fig. 1, which shows a flowchart of an image toning method according to an embodiment of the present application, the method includes:
Step 101, receiving a trigger signal acting on an original image;
the terminal can show the original image, and when the user needs to carry out local color matching on the original image, the user can click the original image. Accordingly, the terminal may receive the trigger signal. The trigger signal may be a click signal for clicking the original image through a mouse, or may also be a touch signal for touching the original image through a touch screen, which is not limited herein.
Alternatively, the trigger signal may be a signal that acts on the area to be toned in the original image.
In a possible embodiment, please refer to fig. 2, which shows a possible schematic diagram of an original image displayed by the terminal.
Step 102, after the trigger signal is received, segmenting the original image through an interactive segmentation algorithm to obtain a binary segmentation map;
After the trigger signal is received, the original image can be segmented through an interactive segmentation algorithm to separate the foreground from the background in the original image, yielding the binary segmentation map.
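As an illustration only, the following sketch uses OpenCV's GrabCut as the interactive segmentation algorithm (the patent does not name a specific one) and assumes a hypothetical click point (cx, cy) taken from the trigger signal; the helper name and seed radius are likewise assumptions. It returns a binary foreground mask playing the role of the binary segmentation map.

```python
import cv2
import numpy as np

def binary_segmentation(original_bgr, cx, cy, radius=20):
    """Return a 0/1 foreground mask seeded by a user click at (cx, cy)."""
    h, w = original_bgr.shape[:2]
    mask = np.full((h, w), cv2.GC_PR_BGD, dtype=np.uint8)          # probable background everywhere
    cv2.circle(mask, (cx, cy), radius, int(cv2.GC_FGD), -1)        # sure foreground around the click
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(original_bgr, mask, None, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_MASK)
    fg = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
    return fg.astype(np.uint8)                                     # 1 = foreground, 0 = background
```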
Step 103, segmenting the original image through a superpixel segmentation algorithm to obtain a superpixel segmentation map;
After the trigger signal is received, the original image can be segmented through a superpixel segmentation algorithm to obtain the corresponding superpixel segmentation map. For example, please refer to fig. 3, which shows the superpixel segmentation map obtained after superpixel segmentation is performed on the original image shown in fig. 2. The superpixel segmentation algorithm may be the Simple Linear Iterative Clustering (SLIC) algorithm.
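A minimal sketch of this step, assuming the scikit-image SLIC implementation; the function name, n_segments and compactness values are illustrative choices, not taken from the patent.

```python
from skimage.segmentation import slic

def superpixel_segmentation(original_rgb, n_segments=400, compactness=10.0):
    """Return an integer label map; pixels sharing a label form one superpixel block."""
    return slic(original_rgb, n_segments=n_segments, compactness=compactness, start_label=0)
```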
It should be noted that the present application is only illustrated with step 102 performed before step 103; in actual implementation, step 102 and step 103 may be performed at the same time, or step 103 may be performed before step 102, which is not limited in the present application.
Step 104, determining a region of interest according to the binary segmentation map and the superpixel segmentation map;
optionally, this step includes:
firstly, calculating the coincidence degree between the foreground region in the binary segmentation map and each superpixel block in the superpixel segmentation map;
in one possible embodiment, the foreground region is segmented based on an interactive segmentation algorithm
Figure DEST_PATH_IMAGE001
A super pixel block in the super pixel segmentation map @>
Figure 687710DEST_PATH_IMAGE002
Then the degree of coincidence IOS is designated>
Figure 762107DEST_PATH_IMAGE001
And/or>
Figure 807424DEST_PATH_IMAGE002
The intersection of the areas is divided by->
Figure 468212DEST_PATH_IMAGE002
I.e.:
Figure DEST_PATH_IMAGE003
secondly, determining a region composed of the super-pixel segmentation blocks with the coincidence degree reaching a preset threshold value as the region of interest.
The preset threshold may be a value set in advance by the user, or a system default value, which is not limited in the present application. The specific value of the preset threshold can be determined according to the actual application scenario. Typically, the preset threshold is a value greater than 0.5. For example, in one possible embodiment, the preset threshold defaults to 0.75.
In actual implementation, if the calculated coincidence degree is greater than the preset threshold, the corresponding superpixel block is highly likely to be foreground and may be determined as foreground; otherwise, the superpixel block is identified as background. The region composed of all the superpixel blocks finally determined as foreground is the region of interest.
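A minimal sketch of the region-of-interest selection, following the IOS formula above; the helper name is illustrative, and the 0.75 default threshold mirrors the example value mentioned in the text.

```python
import numpy as np

def region_of_interest(fg_mask, sp_labels, threshold=0.75):
    """fg_mask: binary foreground map; sp_labels: superpixel label map of the same size."""
    fg = fg_mask.astype(bool)
    roi = np.zeros_like(fg, dtype=bool)
    for label in np.unique(sp_labels):
        block = (sp_labels == label)
        ios = np.logical_and(fg, block).sum() / block.sum()  # IOS = area(F ∩ S_i) / area(S_i)
        if ios >= threshold:
            roi |= block                                     # high-overlap superpixels join the ROI
    return roi
```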
Still taking fig. 2 as the original image, please refer to fig. 4, which shows a possible schematic diagram of the determined region of interest.
Step 105, toning the region of interest to obtain a region-of-interest toning map;
optionally, this step may include:
firstly, receiving an adjustment instruction for adjusting preset parameters of the region of interest, wherein the preset parameters comprise at least one of hue, saturation, brightness, contrast and sharpness;
secondly, adjusting the region of interest according to the adjustment instruction to obtain the region-of-interest toning map.
It should be noted that when the sharpness of the region of interest is adjusted, the region of interest may be processed with a double-blur algorithm, and in actual implementation the sharpness may be adjusted by the user through an adjustment control. The adjustment control may be a slider, a slide bar, or the like, which is not limited here.
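A minimal sketch of the toning step, assuming OpenCV and simple HSV and contrast adjustments; the helper name, parameter names and ranges are illustrative, and the sharpness (double-blur) adjustment is omitted.

```python
import cv2
import numpy as np

def tone_region(original_bgr, roi_mask, hue_shift=0, saturation=1.0, brightness=1.0, contrast=1.0):
    """Apply hue/saturation/brightness/contrast adjustments inside roi_mask only."""
    hsv = cv2.cvtColor(original_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 0] = (hsv[..., 0] + hue_shift) % 180                        # hue (OpenCV range 0-179)
    hsv[..., 1] = np.clip(hsv[..., 1] * saturation, 0, 255)              # saturation
    hsv[..., 2] = np.clip(hsv[..., 2] * brightness, 0, 255)              # brightness
    toned = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR).astype(np.float32)
    toned = np.clip((toned - 127.5) * contrast + 127.5, 0, 255).astype(np.uint8)  # contrast
    out = original_bgr.copy()
    out[roi_mask] = toned[roi_mask]                                      # pixels outside the ROI keep original values
    return out
```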
Step 106, generating a toned target image according to the region-of-interest toning map and the original image.
Optionally, this step includes:
firstly, generating a trimap image of the region of interest;
and carrying out image processing on the region of interest by using an expansion corrosion method and/or a skeleton extraction method to obtain the trimap image.
Please refer to fig. 5, which shows a trimap image obtained by processing the region of interest shown in fig. 4 with the dilation-erosion method.
Please refer to fig. 6, which shows a trimap image obtained by processing the region of interest shown in fig. 4 with the skeleton extraction method.
The above only illustrates image processing with the dilation-erosion method or the skeleton extraction method used alone; please refer to fig. 7, which shows a trimap image obtained by applying both the dilation-erosion method and the skeleton extraction method to the region of interest shown in fig. 4. Processing the region of interest with both the dilation-erosion method and the skeleton extraction method solves the problem that foreground at the boundary between the region of interest and adjacent regions is misjudged as background, and improves the precision of the generated trimap image.
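A minimal sketch of trimap generation, assuming OpenCV for dilation and erosion and scikit-image for skeleton extraction; the helper name, kernel size and the 0/128/255 coding are assumptions, not values given in the patent.

```python
import cv2
import numpy as np
from skimage.morphology import skeletonize

def make_trimap(roi_mask, kernel_size=15):
    """Return a trimap: 0 = background, 128 = unknown band, 255 = foreground."""
    mask = roi_mask.astype(np.uint8)
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    sure_fg = cv2.erode(mask, kernel).astype(bool)      # eroded core: definite foreground
    maybe = cv2.dilate(mask, kernel).astype(bool)       # dilated band: uncertain boundary region
    skeleton = skeletonize(mask.astype(bool))           # skeleton keeps thin foreground structures
    trimap = np.zeros(mask.shape, np.uint8)
    trimap[maybe] = 128
    trimap[sure_fg | skeleton] = 255
    return trimap
```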
Secondly, generating a transparent channel map according to the trimap image and the original image;
optionally, the original image may be modified from the trimap image and the original image, the transparent channel image is obtained by the computation of alphaMatting matting algorithm, the transparency channel map comprises the transparency of each pixel point in the original image.
Thirdly, performing image fusion on the region-of-interest toning map and the original image according to the transparent channel map to obtain the toned target image.
Optionally, the fused pixel value of each pixel point is calculated according to the pixel value of each pixel point in the region-of-interest toning map, the pixel value of each pixel point in the original image, and the transparency of each pixel point in the transparent channel map.
Namely: for each pixel point, fused pixel value at the pixel point = pixel value of the region-of-interest toning map × (1 − alpha) + pixel value of the original image × alpha;
where alpha is the transparency at the pixel point.
After the above-mentioned processing is performed on each pixel, a fused target image can be obtained.
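A minimal sketch of the fusion step, applying the per-pixel formula stated above (fused = toned × (1 − alpha) + original × alpha); the helper name is illustrative.

```python
import numpy as np

def fuse(toned_bgr, original_bgr, alpha):
    """Blend the toning map and the original image per pixel using the alpha channel."""
    a = alpha[..., None]                                  # broadcast alpha across the color channels
    fused = toned_bgr.astype(np.float32) * (1.0 - a) + original_bgr.astype(np.float32) * a
    return np.clip(fused, 0, 255).astype(np.uint8)
```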
In summary, by receiving a trigger signal acting on the original image; segmenting the original image through an interactive segmentation algorithm after the trigger signal is received to obtain a binary segmentation map; segmenting the original image through a superpixel segmentation algorithm to obtain a superpixel segmentation map; determining a region of interest according to the binary segmentation map and the superpixel segmentation map; toning the region of interest to obtain a region-of-interest toning map; and generating a toned target image according to the region-of-interest toning map and the original image, the problem of low toning efficiency during local toning in the prior art is solved. The region of interest can be determined automatically according to the two segmentation algorithms, and image fusion can be performed automatically, avoiding manual operation by the user and improving toning efficiency.
In addition, the trimap image of the region of interest is generated by the dilation-erosion method and the skeleton extraction method, and the transparent channel map is then generated from the trimap image and the original image, so that the boundary is not misjudged as background, the accuracy of the generated transparent channel map is improved, and the precision of the fused target image is further improved.
By performing image fusion according to the transparent channel map, the method and the device solve the problem of an unnatural transition between the region of interest and the untoned regions, and improve the image quality of the generated target image.
By determining the region of interest through the two segmentation algorithms and the coincidence degree, the method and the device improve the accuracy of the identified region of interest.
The present application also provides an image toning apparatus comprising a memory having at least one program instruction stored therein and a processor that implements the method described above by loading and executing the at least one program instruction.
The present application also provides a computer storage medium having stored therein at least one program instruction, which is loaded and executed by a processor to implement the method as described above.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments express only several implementations of the present invention, and their description is specific and detailed, but this should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the inventive concept, all of which fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (9)

1. An image toning method, the method comprising:
receiving a trigger signal acting on an original image;
after the trigger signal is received, segmenting the original image through an interactive segmentation algorithm to obtain a binary segmentation map;
segmenting the original image through a superpixel segmentation algorithm to obtain a superpixel segmentation map;
determining a region of interest according to the binary segmentation map and the superpixel segmentation map;
toning the region of interest to obtain a region-of-interest toning map;
generating a toned target image according to the region-of-interest toning map and the original image, wherein the generating of the toned target image comprises the following steps:
generating a trimap image of the region of interest;
generating a transparent channel map according to the trimap image and the original image;
and performing image fusion on the region-of-interest toning map and the original image according to the transparent channel map to obtain the toned target image.
2. The method of claim 1, wherein determining a region of interest from the binary segmentation map and the superpixel segmentation map comprises:
calculating the coincidence degree between the foreground region in the binary segmentation map and each superpixel block in the superpixel segmentation map;
and determining a region composed of the superpixel blocks whose coincidence degree reaches a preset threshold as the region of interest.
3. The method of claim 1, wherein the generating the trimap image of the region of interest comprises:
and carrying out image processing on the region of interest by using an expansion corrosion method and/or a skeleton extraction method to obtain the trimap image.
4. The method of claim 1, wherein the generating a transparent channel map according to the trimap image and the original image comprises:
calculating the transparent channel map through the alphaMatting algorithm according to the trimap image and the original image, wherein the transparent channel map comprises the transparency of each pixel point in the original image.
5. The method according to claim 1, wherein the performing image fusion on the region-of-interest toning map and the original image according to the transparent channel map to obtain the toned target image comprises:
calculating the fused pixel value of each pixel point according to the pixel value of each pixel point in the region-of-interest toning map, the pixel value of each pixel point in the original image, and the transparency of each pixel point in the transparent channel map.
6. The method according to claim 5, wherein the calculating the fused pixel value of each pixel point according to the pixel value of each pixel point in the region-of-interest toning map, the pixel value of each pixel point in the original image, and the transparency of each pixel point in the transparent channel map comprises:
for each pixel point, fused pixel value at the pixel point = pixel value of the region-of-interest toning map × (1 − alpha) + pixel value of the original image × alpha;
where alpha is the transparency at the pixel point.
7. The method according to any one of claims 1 to 6, wherein the toning the region of interest to obtain a region-of-interest toning map comprises:
receiving an adjustment instruction for adjusting preset parameters of the region of interest, wherein the preset parameters comprise at least one of hue, saturation, brightness, contrast and sharpness;
and adjusting the region of interest according to the adjustment instruction to obtain the region-of-interest toning map.
8. An image toning apparatus, comprising a memory in which at least one program instruction is stored and a processor that implements the method according to any one of claims 1 to 7 by loading and executing the at least one program instruction.
9. A computer storage medium having stored therein at least one program instruction which is loaded and executed by a processor to implement the method of any one of claims 1 to 7.
CN202211223095.4A 2022-10-08 2022-10-08 Image toning method, device and storage medium Active CN115293996B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211223095.4A CN115293996B (en) 2022-10-08 2022-10-08 Image toning method, device and storage medium
PCT/CN2023/102937 WO2024074060A1 (en) 2022-10-08 2023-06-27 Image toning method and apparatus and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211223095.4A CN115293996B (en) 2022-10-08 2022-10-08 Image toning method, device and storage medium

Publications (2)

Publication Number Publication Date
CN115293996A CN115293996A (en) 2022-11-04
CN115293996B true CN115293996B (en) 2023-03-24

Family

ID=83833979

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211223095.4A Active CN115293996B (en) 2022-10-08 2022-10-08 Image toning method, device and storage medium

Country Status (2)

Country Link
CN (1) CN115293996B (en)
WO (1) WO2024074060A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115293996B (en) * 2022-10-08 2023-03-24 杭州群核信息技术有限公司 Image toning method, device and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104732506A (en) * 2015-03-27 2015-06-24 浙江大学 Character picture color style converting method based on face semantic analysis

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2791906A4 (en) * 2011-11-25 2015-07-08 Circle Cardiovascular Imaging Inc Method for interactive threshold segmentation of medical images
US10229340B2 (en) * 2016-02-24 2019-03-12 Kodak Alaris Inc. System and method for coarse-to-fine video object segmentation and re-composition
CN108765428A (en) * 2017-10-25 2018-11-06 江苏大学 A kind of target object extracting method based on click interaction
CN110060197A (en) * 2019-03-26 2019-07-26 浙江达峰科技有限公司 A kind of 3D rendering interactive module and processing method
CN110969629B (en) * 2019-10-30 2020-08-25 上海艾麒信息科技有限公司 Interactive matting system, method and device based on super-pixel segmentation
CN111815733A (en) * 2020-08-07 2020-10-23 深兰科技(上海)有限公司 Video coloring method and system
CN114187215A (en) * 2021-11-17 2022-03-15 青海师范大学 Local recoloring algorithm based on image segmentation
CN115293996B (en) * 2022-10-08 2023-03-24 杭州群核信息技术有限公司 Image toning method, device and storage medium

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104732506A (en) * 2015-03-27 2015-06-24 浙江大学 Character picture color style converting method based on face semantic analysis

Also Published As

Publication number Publication date
WO2024074060A1 (en) 2024-04-11
CN115293996A (en) 2022-11-04

Similar Documents

Publication Publication Date Title
US20210258479A1 (en) Image processing method and apparatus
CN110188760B (en) Image processing model training method, image processing method and electronic equipment
CN107516319B (en) High-precision simple interactive matting method, storage device and terminal
US7574069B2 (en) Retargeting images for small displays
CN109829850B (en) Image processing method, device, equipment and computer readable medium
US20030053692A1 (en) Method of and apparatus for segmenting a pixellated image
US20020037103A1 (en) Method of and apparatus for segmenting a pixellated image
CN115631117B (en) Image enhancement method, device, detection system and storage medium for defect detection
CN105825494A (en) Image processing method and mobile terminal
CN111563908B (en) Image processing method and related device
CN115293996B (en) Image toning method, device and storage medium
CN109525786B (en) Video processing method and device, terminal equipment and storage medium
CN110399842B (en) Video processing method and device, electronic equipment and computer readable storage medium
JP5703255B2 (en) Image processing apparatus, image processing method, and program
CN105120154A (en) Image processing method and terminal
CN111311528A (en) Image fusion optimization method, device, equipment and medium
CN114677394A (en) Matting method, matting device, image pickup apparatus, conference system, electronic apparatus, and medium
CN110266926B (en) Image processing method, image processing device, mobile terminal and storage medium
CN112073718B (en) Television screen splash detection method and device, computer equipment and storage medium
CN111669492A (en) Method for processing shot digital image by terminal and terminal
CN111476735B (en) Face image processing method and device, computer equipment and readable storage medium
CN113780330A (en) Image correction method and device, computer storage medium and electronic equipment
CN108810407B (en) Image processing method, mobile terminal and computer readable storage medium
CN109816613B (en) Image completion method and device
CN109242750B (en) Picture signature method, picture matching method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant