CN112598694B - Video image processing method, electronic device and storage medium - Google Patents

Video image processing method, electronic device and storage medium

Info

Publication number
CN112598694B
CN112598694B (application CN202011634880.XA)
Authority
CN
China
Prior art keywords
color gamut
color
preset
video
range
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011634880.XA
Other languages
Chinese (zh)
Other versions
CN112598694A (en)
Inventor
韦超惠
谢昕虬
金健忠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jitter Technology (Shenzhen) Co., Ltd.
Original Assignee
Jitter Technology Shenzhen Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jitter Technology Shenzhen Co ltd filed Critical Jitter Technology Shenzhen Co ltd
Priority to CN202011634880.XA priority Critical patent/CN112598694B/en
Publication of CN112598694A publication Critical patent/CN112598694A/en
Application granted granted Critical
Publication of CN112598694B publication Critical patent/CN112598694B/en
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20192Edge enhancement; Edge preservation

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application provides a video image processing method, which comprises the following steps: acquiring a frame image of a first video, wherein the frame image comprises a background area of a preset color; removing color impurities in the background area according to the color gamut of the background area; when a foreground area in the frame image is matted out, performing gradient processing on the edge of the foreground area according to the color gamut of the background area; and matting out the processed foreground area and placing it in a frame image of a second video. The application also provides an electronic device and a storage medium. The method can solve the problem of edge discoloration when a video image is matted, and ensures that the extracted picture can be placed on any background while adapting to its environment, so that the picture looks more realistic after the background is replaced.

Description

Video image processing method, electronic device and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a video image processing method, an electronic device, and a storage medium.
Background
The matting technique refers to a technique for separating a user-selected foreground region of an image from the image background. The region required by the user can be extracted through matting, making it convenient for the user to perform subsequent operations on the key information of the image. The matting techniques currently applied in computers mainly receive a matting boundary defined by the user and separate the foreground from the background using that boundary as the standard, thereby realizing the matting operation. However, matting that depends entirely on a user-defined boundary produces poor results and is prone to edge discoloration or jagged edges.
Disclosure of Invention
In view of the above, it is desirable to provide a video image processing method, an electronic device and a storage medium that can remove edge jaggies from a matted image and make the edge of the picture smoother after the background is removed.
A first aspect of the present application provides a video image processing method, the method comprising:
acquiring a frame image of a first video, wherein the frame image comprises a background area of a preset color;
removing color impurities in the background area according to the color gamut of the background area;
when a foreground area in the frame image is matted out, performing gradient processing on the edge of the foreground area according to the color gamut of the background area; and
matting out the processed foreground area and placing the foreground area in a frame image of a second video.
Optionally, the method further comprises:
and dividing the foreground area and the background area in the frame image of the first video according to an edge detection algorithm and color distinguishing scales.
Optionally, the removing the color impurities in the background area according to the color gamut of the background area comprises:
determining a second preset color gamut range (b-r, b+r) according to a saturation threshold b of the color gamut of the background area and a first preset color gamut range r;
determining the pixel points of the background area whose colors fall within the second preset color gamut range; and
adjusting the colors of the determined pixel points to the preset color.
Optionally, the performing gradient processing on the edge of the foreground area according to the color gamut of the background area comprises:
determining a third color gamut range (b-r-x, b-r) and a fourth color gamut range (b+r, b+r+x) according to the saturation threshold b, the first preset color gamut range r and a first preset range x;
applying a transparency gradient of [0,1] to the pixel points, among the edge pixel points of the foreground area, whose pixel values fall within the third color gamut range (b-r-x, b-r); and
applying a transparency gradient of [1,0] to the pixel points, among the edge pixel points of the foreground area, whose pixel values fall within the fourth color gamut range (b+r, b+r+x).
Optionally, the method further comprises:
selecting the pixel points in the foreground area that fall within the color gamut range of the background area;
calculating the average color value of the selected pixel points within a second preset range;
calculating a transparency according to the color average value and the pixel values of the edge points; and
determining the pixel colors after matting according to the transparency and the color average value.
Optionally, the calculating the average color value of the selected pixel points within a second preset range comprises:
calculating the average color value within a second preset range of m x m pixel points around the selected pixel points.
Optionally, the transparency alpha = (b - resultColor)/(b - frontColor), where frontColor is the color average value and resultColor is the actual pixel value of the edge point.
Optionally, the placing the foreground region in a frame image of a second video comprises:
and placing the foreground region at a preset position of a frame image of the second video.
A second aspect of the present application provides an electronic device, comprising:
a processor; and
a memory in which a plurality of program modules are stored, the program modules being loaded by the processor and executing the video image processing method described above.
A third aspect of the present application provides a computer-readable storage medium having stored thereon at least one computer instruction, which is loaded by a processor and executes the video image processing method described above.
The video image processing method, electronic device and storage medium can solve the problem of edge discoloration when a video image is matted, and at the same time the matted picture can be placed on any background while adapting to its environment, so that the picture looks more realistic after the background is replaced.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only embodiments of the present application; for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
Fig. 1 is a schematic diagram of an application environment architecture of a video image processing method according to a preferred embodiment of the present application.
Fig. 2 is a flowchart of a video image processing method according to an embodiment of the present application.
Fig. 3 is a flowchart of a video image processing method according to another embodiment of the present application.
Fig. 4 is a schematic structural diagram of an electronic device according to a preferred embodiment of the present application.
Description of the main elements
Electronic device 1
Processor 10
Memory 20
Computer program 30
The following detailed description will further illustrate the present application in conjunction with the above-described figures.
Detailed Description
In order that the above objects, features and advantages of the present application can be more clearly understood, a detailed description of the present application will be given below with reference to the accompanying drawings and specific embodiments. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth to provide a thorough understanding of the present application, and the described embodiments are merely a subset of the embodiments of the present application and are not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
Fig. 1 is a schematic view of an application environment architecture of a video image processing method according to a preferred embodiment of the present application.
The video image processing method is applied to an electronic device 1, and the electronic device 1 establishes a communication connection with at least one terminal device 2 through a network. The network may be a wired network or a wireless network, such as radio, Wi-Fi, cellular, satellite or broadcast. The cellular network may be a 4G network or a 5G network.
The electronic device 1 may be an electronic device installed with a video image processing program, such as a personal computer, a server, and the like, wherein the server may be a single server, a server cluster, a cloud server, or the like.
The terminal device 2 may be a smart phone or a personal computer.
Fig. 2 is a flowchart of a video image processing method according to an embodiment of the present application. The order of the steps in the flow chart may be changed and some steps may be omitted according to different needs.
S201, acquiring a frame image of the first video.
In one embodiment, the first video is a video shot against a background of a preset color. The frame image comprises a background area with a preset color. The preset color may be green, and the background may be a green screen background.
S202, dividing a foreground area and a background area in the frame image of the first video.
In one embodiment, the foreground region and the background region in the frame image of the first video are divided according to an edge detection algorithm and a color distinguishing scale.
In one embodiment, the similarity between the foreground region and the background region in the frame image is determined, and the matting algorithm is selected according to the similarity and the image category. The matting algorithm may be a threshold method, a flood-fill method, a watershed method, an image segmentation method, a deep learning method, or the like.
In other embodiments, the pixels in the discrimination-adjusted frame image that meet a preset threshold may be determined as foreground pixels, and the pixels that do not meet the preset threshold as background pixels, so that the image to be processed is converted into a binary image and the foreground region is separated from the background region.
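The threshold-based separation above can be sketched as follows. This is a minimal illustration, not the application's implementation: the HSV layout, the green hue window and the threshold value are all assumptions introduced for the example.

```python
import numpy as np

def split_foreground_background(frame_hsv: np.ndarray, sat_threshold: int = 100) -> np.ndarray:
    """Return a binary mask: True = foreground pixel, False = background pixel.

    frame_hsv: H x W x 3 array in HSV order. A pixel whose hue lies in an
    assumed green-screen window and whose saturation meets the preset
    threshold is treated as background; everything else is foreground.
    """
    hue = frame_hsv[..., 0].astype(int)
    sat = frame_hsv[..., 1].astype(int)
    # Assumed green hue window (OpenCV-style hue in [0, 180)).
    is_green = (hue >= 35) & (hue <= 85)
    is_background = is_green & (sat >= sat_threshold)
    return ~is_background  # binary image: foreground mask

# Tiny synthetic frame: one saturated green pixel (background), one red pixel (foreground).
frame = np.array([[[60, 200, 200], [0, 200, 200]]], dtype=np.uint8)
mask = split_foreground_background(frame)
```

The mask plays the role of the binary image described above: True marks foreground pixels, False marks background pixels.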
And S203, removing the color impurities in the background area according to the color gamut of the background area.
In one embodiment, the color gamut of the background region includes a saturation threshold b and a first preset color gamut range r. The first preset color gamut range is a saturation range or a pixel value range.
A second preset color gamut range (b-r, b+r) is determined according to the saturation threshold b of the background area and the first preset color gamut range r, where b-r = b-r_min and b+r = b+r_max; r_min is the minimum value of the first preset color gamut range r, and r_max is the maximum value of the first preset color gamut range r. The pixel points of the background region whose colors fall within the second preset color gamut range are then determined, and the colors of the determined pixel points are adjusted to the preset color.
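Step S203 can be sketched as below. The function name, the use of a per-pixel saturation channel and the concrete numbers are illustrative assumptions; the application only specifies that pixels inside the second preset color gamut range are snapped to the preset color.

```python
import numpy as np

def remove_impurities(saturation: np.ndarray, pixels: np.ndarray,
                      b: int, r_min: int, r_max: int,
                      preset_color: tuple) -> np.ndarray:
    """Set every background pixel whose saturation lies inside the second
    preset color gamut range (b - r_min, b + r_max) to the preset color."""
    out = pixels.copy()
    in_range = (saturation > b - r_min) & (saturation < b + r_max)
    out[in_range] = preset_color
    return out

sat = np.array([100, 140, 200])          # per-pixel saturation values
px = np.zeros((3, 3), dtype=np.uint8)    # dummy BGR pixels, one row per pixel
# With b=150, r_min=r_max=20 the range is (130, 170): only the middle pixel is snapped.
cleaned = remove_impurities(sat, px, b=150, r_min=20, r_max=20, preset_color=(0, 255, 0))
```

Only the pixel whose saturation falls inside (130, 170) is replaced by the preset color; the others are left untouched, which is the "impurity removal" effect described above.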
And S204, when the foreground region in the frame image is matted out, performing gradient processing on the edge of the foreground region according to the color gamut of the background region.
In one embodiment, a third color gamut range (b-r-x, b-r) and a fourth color gamut range (b+r, b+r+x) are determined according to the saturation threshold b, the first preset color gamut range r and a first preset range x. The first preset range x is a saturation range or a pixel value range preset by the user. Here b-r-x = b-r_min-x_min, b-r = b-r_min, b+r = b+r_max and b+r+x = b+r_max+x_max, where r_min and r_max are the minimum and maximum values of the first preset color gamut range r, and x_min and x_max are the minimum and maximum values of the first preset range x.
In one embodiment, a transparency gradient of [0,1] is applied to the pixels, among the edge pixels of the foreground region, whose pixel values fall within the third color gamut range (b-r-x, b-r); that is, the pixels in the third color gamut range (b-r-x, b-r) change from transparent to opaque.
In an embodiment, a transparency gradient of [1,0] is applied to the pixels, among the edge pixels of the foreground region, whose pixel values fall within the fourth color gamut range (b+r, b+r+x); that is, the pixels in the fourth color gamut range (b+r, b+r+x) change from opaque to transparent.
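One plausible reading of the [0,1] and [1,0] transparency gradients in S204 is a linear opacity ramp on each side of the keyed band (b-r, b+r): fully transparent inside the band, fully opaque beyond the ramps. The ramp direction and the scalar treatment below are assumptions for illustration.

```python
def edge_alpha(v: float, b: float, r: float, x: float) -> float:
    """Linear opacity for a single edge-pixel value v around the keyed band.

    Inside (b-r, b+r): fully transparent (0.0).
    Third range (b-r-x, b-r): opacity ramps linearly toward 0 at the band.
    Fourth range (b+r, b+r+x): opacity ramps linearly up to 1 away from the band.
    Elsewhere: fully opaque (1.0).
    """
    if b - r <= v <= b + r:
        return 0.0
    if b - r - x < v < b - r:            # third color gamut range
        return (b - r - v) / x
    if b + r < v < b + r + x:            # fourth color gamut range
        return (v - (b + r)) / x
    return 1.0
```

With b=150, r=20, x=10 a pixel at v=125 or v=175 sits halfway along a ramp and gets opacity 0.5, which gives the smooth edge fade described above.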
S205, the processed foreground region is extracted, and the foreground region is placed in a frame image of a second video.
In one embodiment, the terminal device 2 sends preset position information to the electronic device 1 in advance. After the foreground region is matted out, the electronic device 1 places it at the preset position of the frame image of the second video based on the preset position information.
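Step S205 amounts to standard alpha compositing of the matted foreground at the preset position. The RGBA layout and the top-left position convention below are assumptions not fixed by the application.

```python
import numpy as np

def place_foreground(background: np.ndarray, foreground_rgba: np.ndarray,
                     top: int, left: int) -> np.ndarray:
    """Blend an H x W x 4 RGBA foreground onto an RGB background frame
    at the preset position (top, left)."""
    out = background.astype(float).copy()
    h, w = foreground_rgba.shape[:2]
    fg_rgb = foreground_rgba[..., :3].astype(float)
    alpha = foreground_rgba[..., 3:4].astype(float) / 255.0
    region = out[top:top + h, left:left + w]
    # Source-over compositing: foreground weighted by alpha, background by (1 - alpha).
    out[top:top + h, left:left + w] = fg_rgb * alpha + region * (1.0 - alpha)
    return out.astype(np.uint8)

bg = np.full((2, 2, 3), 100, dtype=np.uint8)                # uniform background frame
fg = np.array([[[200, 200, 200, 255]]], dtype=np.uint8)     # one fully opaque foreground pixel
composited = place_foreground(bg, fg, top=0, left=0)
```

The opaque foreground pixel fully replaces the background at the preset position, while untouched background pixels keep their original values.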
Fig. 3 is a flowchart of a video image processing method according to another embodiment of the present application. The order of the steps in the flow chart may be changed and some steps may be omitted according to different needs.
S301, acquiring a frame image of the first video.
In one embodiment, the first video is a video shot against a background of a preset color. The frame image comprises a background area with a preset color. The preset color may be green, and the background may be a green screen background.
S302, dividing a foreground area and a background area in a frame image of the first video.
In one embodiment, the foreground region and the background region in the frame image of the first video are divided according to an edge detection algorithm and a color distinguishing scale.
In one embodiment, the similarity between the foreground region and the background region in the frame image is determined, and the matting algorithm is selected according to the similarity and the image category. The matting algorithm may be a threshold method, a flood-fill method, a watershed method, an image segmentation method, a deep learning method, or the like.
In other embodiments, the pixels in the discrimination-adjusted frame image that meet a preset threshold may be determined as foreground pixels, and the pixels that do not meet the preset threshold as background pixels, so that the image to be processed is converted into a binary image and the foreground region is separated from the background region.
And S303, removing the color impurities in the background area according to the color gamut of the background area.
In an embodiment, the color gamut of the background region includes a saturation threshold b and a first preset color gamut range r. The first preset color gamut range is a saturation range or a pixel value range.
A second preset color gamut range (b-r, b+r) is determined according to the saturation threshold b of the background area and the first preset color gamut range r, where b-r = b-r_min and b+r = b+r_max; r_min is the minimum value of the first preset color gamut range r, and r_max is the maximum value of the first preset color gamut range r. The pixel points of the background region whose colors fall within the second preset color gamut range are then determined, and the colors of the determined pixel points are adjusted to the preset color.
S304, when the foreground area in the frame image is matted out, performing gradient processing on the edge of the foreground area according to the color gamut of the background area.
In one embodiment, a third color gamut range (b-r-x, b-r) and a fourth color gamut range (b+r, b+r+x) are determined according to the saturation threshold b, the first preset color gamut range r and a first preset range x. The first preset range x is a saturation range or a pixel value range preset by the user. Here b-r-x = b-r_min-x_min, b-r = b-r_min, b+r = b+r_max and b+r+x = b+r_max+x_max, where r_min and r_max are the minimum and maximum values of the first preset color gamut range r, and x_min and x_max are the minimum and maximum values of the first preset range x.
In one embodiment, a transparency gradient of [0,1] is applied to the pixels, among the edge pixels of the foreground region, whose pixel values fall within the third color gamut range (b-r-x, b-r); that is, the pixels in the third color gamut range (b-r-x, b-r) change from transparent to opaque.
In an embodiment, a transparency gradient of [1,0] is applied to the pixels, among the edge pixels of the foreground region, whose pixel values fall within the fourth color gamut range (b+r, b+r+x); that is, the pixels in the fourth color gamut range (b+r, b+r+x) change from opaque to transparent.
S305, after the foreground region is matted out, selecting, from the edge pixel points of the foreground region, the pixel points within the color gamut of the background region.
In an embodiment, the pixel points in the color gamut of the background region are pixel points whose pixel values are similar to the preset color but do not fall within the second preset color gamut range (b-r, b+r).
S306, calculating the average value of the colors of the selected pixel points in a second preset range.
In one embodiment, the second preset range is m x m, where m is a number of pixel points whose value is preset by the user. That is, the average color of the m x m pixel points around the selected pixel point is calculated.
And S307, calculating the transparency according to the color average value and the pixel value of the edge pixel point.
In an embodiment, the relation among the color average value, the pixel value of the edge pixel point and the saturation threshold b is backColor x (1 - alpha) + frontColor x alpha = resultColor; taking backColor = b, the transparency is alpha = (b - resultColor)/(b - frontColor), where frontColor is the color average value and resultColor is the pixel value of the edge point.
And S308, adjusting the pixel value of the edge pixel point after the cutout according to the transparency and the color average value.
In one embodiment, the pixel value of the edge pixel after the matting is adjusted to (frontColor, alpha).
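Steps S306 to S308 can be sketched with scalar (single-channel) values for clarity. The substitution backColor = b follows the relation given above; the helper names and the sample numbers are illustrative assumptions.

```python
def neighborhood_average(values):
    """Average color over the samples of an m x m neighborhood
    (passed here as a flattened list)."""
    return sum(values) / len(values)

def edge_pixel_after_matting(b: float, front_color: float, result_color: float):
    """Return the adjusted (color, alpha) pair for a matted edge pixel.

    b: saturation threshold of the background color gamut;
    front_color: average color of nearby foreground pixels (frontColor);
    result_color: the actual, spill-contaminated edge value (resultColor).
    Derived from b * (1 - alpha) + front_color * alpha = result_color.
    """
    alpha = (b - result_color) / (b - front_color)
    return front_color, alpha

front = neighborhood_average([40, 60, 50])   # frontColor from the m x m neighborhood
color, alpha = edge_pixel_after_matting(b=150, front_color=front, result_color=100)
```

Plugging the returned alpha back into the relation reproduces the observed edge value, which is how the background spill component is factored out of the edge pixel.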
After the pixel values of the matted edge pixel points are adjusted, the color component contributed by the background is removed from the edge, solving the edge discoloration problem. Meanwhile, placing the matted result onto a curtain background of the preset color restores the source picture, and when it is placed onto other backgrounds the edge also adapts to the changed environment, removing edge jaggies and thereby achieving an edge-enhancement effect.
S309, placing the matted foreground region in a frame image of a second video.
In one embodiment, the terminal device 2 sends preset position information to the electronic device 1 in advance. After the foreground region is matted out, the electronic device 1 places it at the preset position of the frame image of the second video based on the preset position information.
The video image processing method provided by the application can solve the problem of edge discoloration when a video image is matted, and at the same time the extracted picture can be placed on any background while adapting to its environment, so that the picture looks more realistic after the background is replaced.
Fig. 4 is a schematic structural diagram of a preferred embodiment of an electronic device according to the present application.
The electronic device 1 includes, but is not limited to, a processor 10, a memory 20, and a computer program 30, such as a video image processing program, stored in the memory 20 and executable on the processor 10. The processor 10, when executing the computer program 30, implements steps in a video image processing method, such as steps S201 to S205 shown in fig. 2 and steps S301 to S309 shown in fig. 3.
It will be appreciated by a person skilled in the art that the schematic diagram is only an example of the electronic device 1 and does not constitute a limitation of it; the device may comprise more or fewer components than those shown, combine some components, or use different components. For example, the electronic device 1 may further comprise an input/output device, a network access device, a bus, etc.
The processor 10 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general-purpose processor may be a microprocessor, or the processor 10 may be any conventional processor; the processor 10 is the control center of the electronic device 1, with various interfaces and lines connecting the parts of the whole electronic device 1.
The memory 20 may be used to store the computer program 30 and/or the modules/units; the processor 10 implements the various functions of the electronic device 1 by running or executing the computer program and/or the modules/units stored in the memory 20 and by calling data stored in the memory 20. The memory 20 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and the application programs required by at least one function (such as a sound playing function or an image playing function), and the data storage area may store data created according to the use of the electronic device 1 (such as audio data or a phonebook). In addition, the memory 20 may include volatile and non-volatile memory, such as a hard disk, internal memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash memory card, at least one magnetic disk storage device, a flash memory device, or another storage device.
The integrated modules/units of the electronic device 1 may be stored in a computer-readable storage medium if they are implemented in the form of software functional units and sold or used as separate products. Based on such understanding, all or part of the flow of the methods of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the method embodiments described above can be realized. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), or a Random Access Memory (RAM).
The video image processing method, electronic device and storage medium can solve the problem of edge discoloration when a video image is matted, and at the same time the extracted picture can be placed on any background while adapting to its environment, so that the picture looks more realistic after the background is replaced.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. Several units or means recited in the apparatus claims may also be embodied by one and the same item or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.
Although the present application has been described in detail with reference to preferred embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the spirit and scope of the present application.

Claims (9)

1. A method for video image processing, the method comprising:
acquiring a frame image of a first video, wherein the frame image comprises a background area with a preset color;
removing color impurities in the background area according to the color gamut of the background area;
when a foreground area in the frame image is matted out, performing gradient processing on the edge of the foreground area according to the color gamut of the background area;
selecting, from the edge pixel points of the foreground area, the pixel points within the color gamut range of the background area; calculating the average color value of the selected pixel points within a second preset range; calculating a transparency according to the color average value and the pixel values of the edge pixel points; determining the pixel values of the edge pixel points of the foreground area after matting according to the transparency and the color average value; and
matting out the processed foreground area and placing the foreground area in a frame image of a second video.
2. The video image processing method of claim 1, wherein the method further comprises:
and dividing the foreground area and the background area in the frame image of the first video according to an edge detection algorithm and color distinguishing scales.
3. The video image processing method of claim 1, wherein the removing the color impurities in the background area according to the color gamut of the background area comprises:
determining a second preset color gamut range (b-r, b+r) according to a saturation threshold b of the color gamut of the background area and a first preset color gamut range r;
determining the pixel points of the background area whose colors fall within the second preset color gamut range; and
adjusting the colors of the determined pixel points to the preset color.
4. The video image processing method of claim 3, wherein the performing gradient processing on the edge of the foreground area according to the color gamut of the background area comprises:
determining a third color gamut range (b-r-x, b-r) and a fourth color gamut range (b+r, b+r+x) according to the saturation threshold b, the first preset color gamut range r and a first preset range x;
applying a transparency gradient of [0,1] to the pixel points, among the edge pixel points of the foreground area, whose pixel values fall within the third color gamut range (b-r-x, b-r); and
applying a transparency gradient of [1,0] to the pixel points, among the edge pixel points of the foreground area, whose pixel values fall within the fourth color gamut range (b+r, b+r+x).
5. The method of claim 1, wherein the calculating the average color value of the selected pixel points within a second preset range comprises:
calculating the average color value within a second preset range of m x m pixel points around the selected pixel points, where m is the number of pixel points.
6. The video image processing method of claim 5, wherein the transparency alpha = (b - resultColor) / (b - frontColor), where frontColor is the average color value and resultColor is the actual pixel value of the edge pixel point.
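The claimed transparency formula, as a direct sketch; clamping the result to [0, 1] and the guard against division by zero are added safeguards, not part of the claim:

```python
def transparency(b, front_color, result_color):
    """Transparency per claim 6: alpha = (b - resultColor) / (b - frontColor),
    where frontColor is the local average color and resultColor the actual
    pixel value at the edge."""
    if b == front_color:
        return 1.0
    alpha = (b - result_color) / (b - front_color)
    return min(max(alpha, 0.0), 1.0)
```

Under this formula, a pixel whose value equals b (pure background color) gets alpha 0, and one whose value equals frontColor gets alpha 1.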
7. The video image processing method of claim 1, wherein the placing of the foreground region in a frame image of a second video comprises:
placing the foreground region at a preset position in the frame image of the second video.
8. An electronic device, comprising:
a processor; and
a memory storing a plurality of program modules, wherein the program modules are loaded by the processor to perform the video image processing method according to any one of claims 1 to 7.
9. A computer-readable storage medium storing at least one computer instruction, wherein the instruction is loaded by a processor to perform the video image processing method according to any one of claims 1 to 7.
CN202011634880.XA 2020-12-31 2020-12-31 Video image processing method, electronic device and storage medium Active CN112598694B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011634880.XA CN112598694B (en) 2020-12-31 2020-12-31 Video image processing method, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN112598694A CN112598694A (en) 2021-04-02
CN112598694B true CN112598694B (en) 2022-04-08

Family

ID=75206770

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011634880.XA Active CN112598694B (en) 2020-12-31 2020-12-31 Video image processing method, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN112598694B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109919836A (en) * 2019-03-20 2019-06-21 Guangzhou Huaduo Network Technology Co., Ltd. Video keying processing method, video keying processing client and readable storage medium
CN110111342A (en) * 2019-04-30 2019-08-09 Guizhou Minzu University Preferred selection method and device for image matting algorithms
CN112149592A (en) * 2020-09-28 2020-12-29 Shanghai Wanmian Intelligent Technology Co., Ltd. Image processing method and device and computer equipment

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9628722B2 (en) * 2010-03-30 2017-04-18 Personify, Inc. Systems and methods for embedding a foreground video into a background feed based on a control input
US8792013B2 (en) * 2012-04-23 2014-07-29 Qualcomm Technologies, Inc. Method for determining the extent of a foreground object in an image
CN107516319B (en) * 2017-09-05 2020-03-10 North University of China High-precision simple interactive matting method, storage device and terminal
CN107958449A (en) * 2017-12-13 2018-04-24 Beijing Qihoo Technology Co., Ltd. Image synthesis method and device
CN110503725B (en) * 2019-08-27 2023-07-14 Baidu Online Network Technology (Beijing) Co., Ltd. Image processing method, device, electronic equipment and computer readable storage medium
CN110838131B (en) * 2019-11-04 2022-05-17 NetEase (Hangzhou) Network Co., Ltd. Method and device for realizing automatic matting, electronic equipment and medium
CN112132852B (en) * 2020-08-28 2022-01-07 Gaoding (Xiamen) Technology Co., Ltd. Automatic image matting method and device based on multi-background color statistics


Similar Documents

Publication Publication Date Title
CN111950723B (en) Neural network model training method, image processing method, device and terminal equipment
CN110443140B (en) Text positioning method, device, computer equipment and storage medium
CN108805838B (en) Image processing method, mobile terminal and computer readable storage medium
CN110399842B (en) Video processing method and device, electronic equipment and computer readable storage medium
CN111489322B (en) Method and device for adding sky filter to static picture
CN110930296A (en) Image processing method, device, equipment and storage medium
CN107704797B (en) Real-time detection method, system and equipment based on pedestrians and vehicles in security video
CN111325667A (en) Image processing method and related product
CN111192190A (en) Method and device for eliminating image watermark and electronic equipment
CN111161299A (en) Image segmentation method, computer program, storage medium, and electronic device
CN111126372B (en) Logo region marking method and device in video and electronic equipment
CN113391779B (en) Parameter adjusting method, device and equipment for paper-like screen
CN110838088A (en) Multi-frame noise reduction method and device based on deep learning and terminal equipment
CN113313662B (en) Image processing method, device, equipment and storage medium
CN112149745B (en) Method, device, equipment and storage medium for determining difficult example sample
CN108932703B (en) Picture processing method, picture processing device and terminal equipment
CN112598694B (en) Video image processing method, electronic device and storage medium
CN110942488B (en) Image processing device, image processing system, image processing method, and recording medium
CN114677319A (en) Stem cell distribution determination method and device, electronic equipment and storage medium
CN108810407B (en) Image processing method, mobile terminal and computer readable storage medium
CN108776959B (en) Image processing method and device and terminal equipment
CN113132786A (en) User interface display method and device and readable storage medium
CN111127310B (en) Image processing method and device, electronic equipment and storage medium
CN116246298A (en) Space occupation people counting method, terminal equipment and storage medium
CN114820938A (en) Modeling method and related device for meta-universe scene materials

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210414

Address after: 518000 Room 201, building A, 1 front Bay Road, Shenzhen Qianhai cooperation zone, Shenzhen, Guangdong

Applicant after: Jitter technology (Shenzhen) Co.,Ltd.

Address before: 518000 13C, jiajiahao building, 10168 Shennan Avenue, Liancheng community, Nantou street, Nanshan District, Shenzhen City, Guangdong Province

Applicant before: Shenzhen Instant Construction Technology Co.,Ltd.

GR01 Patent grant