CN110378867A - By prospect background pixel to and grayscale information obtain transparency mask method - Google Patents
By prospect background pixel to and grayscale information obtain transparency mask method Download PDFInfo
- Publication number
- CN110378867A CN110378867A CN201910444633.4A CN201910444633A CN110378867A CN 110378867 A CN110378867 A CN 110378867A CN 201910444633 A CN201910444633 A CN 201910444633A CN 110378867 A CN110378867 A CN 110378867A
- Authority
- CN
- China
- Prior art keywords
- pixel
- image
- transparency
- transparency mask
- mask
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20036—Morphological image processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30241—Trajectory
Abstract
A method of by prospect background pixel to and grayscale information obtain the first image transparency mask, transparency estimated value is recalculated to obtain the first transparency mask of the first image by measuring the confidence level of prospect background pixel pair first, then new picture is generated by superposition grayscale information and obtain the second transparency mask of the first image, and further correct the first transparency mask of the first image.The disclosure is capable of the confidence level and grayscale information of Prospects of Comprehensive Utilization background pixel pair, provides a kind of scheme of new acquisition transparency mask.
Description
Technical field
The disclosure belongs to field of image processing, in particular to it is a kind of by prospect background pixel to and grayscale information schemed
The method of the transparency mask of picture.
Background technique
In image domains, scratches diagram technology and realized based on the estimation of transparency mask.Can by selection color gamut come
Different transparency masks is generated for image.
However, in the prior art, the preparation method of transparency mask is although enough, but on how to Utilization prospects back
Scene element to and grayscale information obtain transparency mask, there has been no the implementation method of related novel.
Summary of the invention
Present disclose provides it is a kind of by prospect background pixel to and grayscale information obtain the first image transparency mask
Method, include the following steps:
S100 divides all foreground pixel set F, all background pixel set B and all unknown pictures in the first image
Plain set Z;
S200 gives certain prospect background pixels to (Fi, Bj), each unknown pixel Z is measured according to the following formulakIt is saturating
Lightness
Wherein, IkFor unknown pixel ZkRGB color value, the foreground pixel FiFor apart from unknown pixel ZkNearest m
Foreground pixel, the background pixel BjAlso for apart from unknown pixel ZkM nearest background pixel, the prospect background pixel pair
(Fi, Bj) amount to m2Group;
S300, for the m2Each group of prospect background pixel in group is to (Fi, Bj) and its it is correspondingAccording to as follows
Formula measures prospect background pixel to (Fi, Bj) confidence level nij:
Wherein, σ value 0.1, and choose the highest MAX (n of confidence levelij) corresponding to that group prospect background pixel to for
(FiMAX, BjMAX);
S400 calculates each unknown pixel Z according to the following formulakTransparency estimated value
S500, according to each unknown pixel ZkTransparency estimated valuePrimarily determine the of the first image
One transparency mask;
S600, to the first image superposition grayscale information to generate the second image, and it is all to divide it to second image
Foreground pixel set, all background pixel set and all unknown pixel set;
S700 executes step S200 to S500 for second image, to determine that the first transparency of the second image hides
Cover, and using the first transparency mask of second image as the second transparency mask of the first image;
S800, using the second transparency mask of the first image, the first transparency for correcting the first image is hidden
Cover.
By the method, the disclosure is capable of the confidence level and grayscale information of Prospects of Comprehensive Utilization background pixel pair, is provided
A kind of scheme of new acquisition transparency mask.
Detailed description of the invention
Fig. 1 is the schematic diagram of one embodiment the method in the disclosure.
Specific embodiment
In order to make those skilled in the art understand that technical solution disclosed by the disclosure, below in conjunction with embodiment and related
The technical solution of each embodiment is described in attached drawing, and described embodiment is a part of this disclosure embodiment, without
It is whole embodiments.Term " first " used by the disclosure, " second " etc. rather than are used for for distinguishing different objects
Particular order is described.In addition, " comprising " and " having " and their any deformation, it is intended that covering and non-exclusive packet
Contain.Such as contain the process of a series of steps or units or method or system or product or equipment are not limited to arrange
Out the step of or unit, but optionally further include the steps that not listing or unit, or further includes optionally for these mistakes
Other intrinsic step or units of journey, method, system, product or equipment.
Referenced herein " embodiment " is it is meant that a particular feature, structure, or characteristic described can wrap in conjunction with the embodiments
It is contained at least one embodiment of the disclosure.Each position in the description occur the phrase might not each mean it is identical
Embodiment, nor the independent or alternative embodiment with other embodiments mutual exclusion.It will be appreciated by those skilled in the art that
, embodiment described herein can combine with other embodiments.
Referring to Fig. 1, Fig. 1 be in the disclosure one embodiment provide one kind by prospect background pixel to and grayscale information
Obtain the flow diagram of the method for the transparency mask of the first image.As shown, described method includes following steps:
S100 divides all foreground pixel set F, all background pixel set B and all unknown pictures in the first image
Plain set Z;
It is understood that there are many means for dividing foreground pixel, background pixel and unknown pixel to image, can be artificial
Mark, can also by way of machine learning or data-driven, can also be according to corresponding prospect threshold value, background threshold come
Mark off all foreground and background pixels and its corresponding set;If after foreground and background pixel divides, unknown pixel, its is right
It should gather also with regard to being divided out naturally;
S200 gives certain prospect background pixels to (Fi, Bj), each unknown pixel Z is measured according to the following formulakIt is saturating
Lightness
Wherein, IkFor unknown pixel ZkRGB color value, the foreground pixel FiFor apart from unknown pixel ZkNearest m
Foreground pixel, the background pixel BjAlso for apart from unknown pixel ZkM nearest background pixel, the prospect background pixel pair
(Fi, Bj) amount to m2Group;
To those skilled in the art, theoretically, the selection of m can make corresponding prospect background pixel to being
Part sample, can also be with exhaustive whole image;For step S200, it is intended to the color and prospect background by unknown pixel
The color relationship of pixel pair estimates the transparency of unknown pixel;In addition, the selection of m can also further combined with neighborhood territory pixel with
Between unknown pixel color, texture, gray scale, brightness, in terms of feature;
S300, for the m2Each group of prospect background pixel in group is to (Fi, Bj) and its it is correspondingAccording to such as
Lower formula measurement prospect background pixel is to (Fi, Bj) confidence level nij:
Wherein, σ value 0.1, and choose the highest MAX (n of confidence levelij) corresponding to that group prospect background pixel to for
(FiMAX, BjMAX);
It is understood that the value of σ is empirical value or statistical value or simulation value, step S300 is further screened using confidence level
Prospect background pixel pair, and for subsequent step by the prospect background pixel further screened to estimating that unknown pixel is transparent
Degree;
S400 calculates each unknown pixel Z according to the following formulakTransparency estimated value
S500, according to each unknown pixel ZkTransparency estimated valuePrimarily determine the of the first image
One transparency mask;
This is to say, after the transparency estimated value of each unknown pixel obtains, the present embodiment with regard to primarily determining naturally
First transparency mask of the first image;Why say be naturally, be because transparency mask can be considered as by
By selected those respective pixels composition of certain value (or value range);
S600, to the first image superposition grayscale information to generate the second image, and it is all to divide it to second image
Foreground pixel set, all background pixel set and all unknown pixel set;
For the step, the present embodiment is contemplated that gray scale is believed in view of each pixel is in addition to the effect of RGB color
Cease the influence to pixel;Therefore, after being superimposed grayscale information, transparency mask is modified using following steps.
S700 executes step S200 to S500 for second image, to determine that the first transparency of the second image hides
Cover, and using the first transparency mask of second image as the second transparency mask of the first image;
S800, using the second transparency mask of the first image, the first transparency for correcting the first image is hidden
Cover.
So far, the confidence level and grayscale information of disclosure Prospects of Comprehensive Utilization background pixel pair provides a kind of new acquisition
The scheme of transparency mask.It is understood that the acquisition of transparency mask, is the process infinitely approached, at present it's hard to say certain
Kind method transparency mask obtained is unique correct.
In another embodiment, in step S600, in the following way to the first image superposition grayscale information to generate
Second image:
S601 carries out mean filter to the first image and obtains third image;
S602, the first image and third image generate the second image by following formula:
Wherein, IM2Indicate the gray value of k-th of pixel on the second image after being superimposed, xrIndicate k-th of picture on the first image
Plain xkNeighborhood territory pixel, NkIt indicates with xkCentered on neighborhood in number of pixels,It indicates to the first image
The pixel value of k-th of pixel on the resulting third image of mean filter is carried out, β takes 0.5.
The mode of specific superposition grayscale information is given by empirical value and related formula for above-described embodiment.
In another embodiment, step S800 further include:
S801 is found respectively according to the first transparency mask of the second transparency mask of the first image and the first image
The edge at the edge of its second transparency mask, the first transparency mask;
S802 obtains the position of all pixels at the edge of the second transparency mask and the edge of the first transparency mask
All pixels position, and determine position and the first transparency mask of all pixels at the edge of the second transparency mask
The region that the position of all pixels at edge is overlapped, and then determine the identical pixel Z in positionsp;
S803 searches pixel Z respectivelyspThe transparency estimated value of the first transparency mask corresponding to the first image and right
It should be in the transparency estimated value of the second transparency mask of the first image, and using the average value of the two as pixel ZspIt is revised
Transparency estimated value;
S804, with pixel ZspRevised transparency estimated value corrects the first transparency mask of the first image.
For above-described embodiment, it is intended to find, compares the identical pixel in position in two kinds of transparency masks, and utilize
Transparency estimated value of the identical pixel in the position in respective transparency mask is averaged to correct the of the first image
One transparency mask.
In another embodiment, the step S802 further comprises:
S8021, according to the position of all pixels at the edge of the second transparency mask of judgement and the first transparency mask
Edge all pixels position be overlapped region, further determine that the different pixel Z in positiondp, comprising: it is transparent to be located at second
Spend the pixel Z at the edge of maskdp2With the pixel Z at the edge for being located at the first transparency maskdp1;
Unlike previous embodiment, edge that two transparency masks of the present embodiment additional attention are determined
The different pixel in middle position, and find out these pixels of position different from each other;
S8022 utilizes the different pixel Z in the positiondpPixel Z identical with positionsp, obtain the second transparency mask
Edge and the first transparency mask edge determined by: the closed enclosed region of institute and described between edge and edge
The position of all closing pixels of enclosed region;
For the step, the edge as corresponding to each mask can be considered as a connection or closure to a certain degree
Curve, then no matter closed curve corresponding to two masks is what kind of overlapping or nonoverlapping relationship: two are hidden
Those of on the corresponding edge of cover for the pixel of position not corresponding (i.e. position is different or position is not overlapped), jointly really
All closing pixels of the closed enclosed region of institute and the enclosed region between the edge and edge of two masks are determined
Position;
S8023 executes following sub-step:
(1) pixel Z is searcheddp1Position corresponding to pixel estimate in the transparency of the first transparency mask of the first image
Evaluation, and the transparence value of the corresponding pixel in the second image is searched, and using the average value of the two as pixel Zdp1Amendment
Transparency estimated value afterwards;
(2) pixel Z is searcheddp2Position corresponding to pixel estimate in the transparency of the second transparency mask of the first image
Evaluation, and the transparence value of the corresponding pixel first transparency mask in the first image is searched, and with the average value of the two
As pixel Zdp2Revised transparency estimated value;
For the step, it is transparent under two different systems to be intended to find each pixel in aforementioned enclosed region
Estimated value or transparence value are spent, and using the average value of the two as the revised transparency estimated value of respective pixel, then under
For correcting the first transparency mask of the first image in one step S8024.That is, the present embodiment is similar to previous reality
The amendment thinking for applying example is such, and what only the present embodiment solved is the corresponding edge of two masks closed region jointly.
S8024, in conjunction with pixel Zdp1Revised transparency estimated value and pixel Zdp2Revised transparency estimated value, is repaired
First transparency mask of positive the first image.
Step in embodiment of the disclosure method can be sequentially adjusted, merged and deleted according to actual needs.
It should be noted that for the various method embodiments described above, for simple description, therefore, it is stated as a series of
Combination of actions, but those skilled in the art should understand that, the present invention is not limited by the sequence of acts described because
According to the present invention, some steps may be performed in other sequences or simultaneously.Secondly, those skilled in the art should also know
It knows, the embodiments described in the specification are all preferred embodiments, and related movement, module, unit are not necessarily originally
Necessary to invention.
In the above-described embodiments, it all emphasizes particularly on different fields to the description of each embodiment, there is no the portion being described in detail in some embodiment
Point, reference can be made to the related descriptions of other embodiments.
In several embodiments provided by the disclosure, it should be understood that disclosed method is, it can be achieved that be corresponding function
Energy unit, processor or even system may be distributed over multiple wherein each section of the system both can be located in one place
In network unit.It can select some or all of unit therein according to the actual needs to realize the mesh of this embodiment scheme
's.In addition, each functional unit can integrate in one processing unit, it is also possible to each unit individualism, it can also two
A or more than two units are integrated in one unit.Above-mentioned integrated unit both can take the form of hardware realization, can also
To realize in the form of software functional units.If the integrated unit is realized in the form of SFU software functional unit and conduct
Independent product when selling or using, can store in a computer readable storage medium.Based on this understanding, originally
The disclosed technical solution substantially all or part of the part that contributes to existing technology or the technical solution in other words
It can be embodied in the form of software products, which is stored in a storage medium, including several fingers
It enables and using so that a computer equipment (can be smart phone, personal digital assistant, wearable device, laptop, plate
Computer) execute the disclosure each embodiment the method all or part of the steps.And storage medium above-mentioned include: USB flash disk,
Read-only memory (ROM, Read-Only Memory), is moved random access memory (RAM, Random Access Memory)
The various media that can store program code such as dynamic hard disk, magnetic or disk.
The above, above embodiments are only to illustrate the technical solution of the disclosure, rather than its limitations;Although referring to before
Embodiment is stated the disclosure is described in detail, it should be understood by those skilled in the art that: it still can be to aforementioned each reality
Technical solution documented by example is applied to modify or equivalent replacement of some of the technical features;And these modification or
Person's replacement, the range for the presently disclosed embodiments technical solution that it does not separate the essence of the corresponding technical solution.
Claims (4)
1. it is a kind of by prospect background pixel to and grayscale information obtain the first image transparency mask method, including it is as follows
Step:
S100 divides all foreground pixel set F, all background pixel set B and all unknown pixel collection in the first image
Close Z;
S200 gives certain prospect background pixels to (Fi, Bj), each unknown pixel Z is measured according to the following formulakTransparency
Wherein, IkFor unknown pixel ZkRGB color value, the foreground pixel FiFor apart from unknown pixel ZkM nearest prospect
Pixel, the background pixel BjAlso for apart from unknown pixel ZkM nearest background pixel, the prospect background pixel is to (Fi,
Bj) amount to m2Group;
S300, for the m2Each group of prospect background pixel in group is to (Fi, Bj) and its it is correspondingAccording to the following formula
Prospect background pixel is measured to (Fi, Bj) confidence level nij:
Wherein, σ value 0.1, and choose the highest MAX (n of confidence levelij) corresponding to that group prospect background pixel to for
(FiMAX, BjMAX);
S400 calculates each unknown pixel Z according to the following formulakTransparency estimated value
S500, according to each unknown pixel ZkTransparency estimated valuePrimarily determine the first transparent of the first image
Spend mask;
S600 to the first image superposition grayscale information to generate the second image, and divides its all prospect to second image
Pixel set, all background pixel set and all unknown pixel set;
S700 executes step S200 to S500 for second image, to determine the first transparency mask of the second image,
And using the first transparency mask of second image as the second transparency mask of the first image;
S800 corrects the first transparency mask of the first image using the second transparency mask of the first image.
2. according to the method described in claim 1, wherein, it is preferred that folded to the first image in the following way in step S600
Add grayscale information to generate the second image:
S601 carries out mean filter to the first image and obtains third image;
S602, the first image and third image generate the second image by following formula:
Wherein, IM2Indicate the gray value of k-th of pixel on the second image after being superimposed, xrIndicate k-th of pixel x on the first imagek
Neighborhood territory pixel, NkIt indicates with xkCentered on neighborhood in number of pixels,It indicates to carry out mean value to the first image
The pixel value of k-th of pixel on resulting third image is filtered, β takes 0.5.
3. according to the method described in claim 1, wherein, step S800 further include:
S801, according to the first transparency mask of the second transparency mask of the first image and the first image, find respectively its
The edge at the edge of two transparency masks, the first transparency mask;
S802 obtains the institute of the position of all pixels at the edge of the second transparency mask and the edge of the first transparency mask
There is the position of pixel, and determines the position of all pixels at the edge of the second transparency mask and the edge of the first transparency mask
All pixels the region that is overlapped of position, and then determine the identical pixel Z in positionsp;
S803 searches pixel Z respectivelyspThe transparency estimated value of the first transparency mask corresponding to the first image, and correspond to
The transparency estimated value of second transparency mask of the first image, and using the average value of the two as pixel ZspIt is revised transparent
Spend estimated value;
S804, with pixel ZspRevised transparency estimated value corrects the first transparency mask of the first image.
4. according to the method described in claim 3, wherein, the step S802 further comprises:
S8021, according to the side of the position of all pixels at the edge of the second transparency mask of judgement and the first transparency mask
The region that the position of all pixels of edge is overlapped, further determines that the different pixel Z in positiondp, comprising: it is located at the second transparency and hides
The pixel Z at the edge of coverdp2With the pixel Z at the edge for being located at the first transparency maskdp1;
S8022 utilizes the different pixel Z in the positiondpPixel Z identical with positionsp, obtain the edge of the second transparency mask
Determined by edge with the first transparency mask: the closed enclosed region of institute and the closed area between edge and edge
The position of all closing pixels in domain;
S8023 executes following sub-step:
(1) pixel Z is searcheddp1Position corresponding to pixel in the transparency estimated value of the first transparency mask of the first image,
And the transparence value of the corresponding pixel in the second image is searched, and using the average value of the two as pixel Zdp1It is revised
Transparency estimated value;
(2) pixel Z is searcheddp2Position corresponding to pixel in the transparency estimated value of the second transparency mask of the first image,
And the transparence value of the corresponding pixel first transparency mask in the first image is searched, and using the average value of the two as picture
Plain Zdp2Revised transparency estimated value;
S8024, in conjunction with pixel Zdp1Revised transparency estimated value and pixel Zdp2Revised transparency estimated value corrects institute
State the first transparency mask of the first image.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2018107514 | 2018-09-26 | ||
CNPCT/CN2018/107514 | 2018-09-26 |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110378867A true CN110378867A (en) | 2019-10-25 |
Family
ID=68140393
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910444633.4A Withdrawn CN110378867A (en) | 2018-09-26 | 2019-05-24 | By prospect background pixel to and grayscale information obtain transparency mask method |
CN201910444632.XA Withdrawn CN110335288A (en) | 2018-09-26 | 2019-05-24 | A kind of video foreground target extraction method and device |
CN201910505008.6A Withdrawn CN110363788A (en) | 2018-09-26 | 2019-06-11 | A kind of video object track extraction method and device |
CN201910628287.5A Withdrawn CN110516534A (en) | 2018-09-26 | 2019-07-11 | A kind of method for processing video frequency and device based on semantic analysis |
CN201910737589.6A Withdrawn CN110659562A (en) | 2018-09-26 | 2019-08-09 | Deep learning (DNN) classroom learning behavior analysis method and device |
Family Applications After (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910444632.XA Withdrawn CN110335288A (en) | 2018-09-26 | 2019-05-24 | A kind of video foreground target extraction method and device |
CN201910505008.6A Withdrawn CN110363788A (en) | 2018-09-26 | 2019-06-11 | A kind of video object track extraction method and device |
CN201910628287.5A Withdrawn CN110516534A (en) | 2018-09-26 | 2019-07-11 | A kind of method for processing video frequency and device based on semantic analysis |
CN201910737589.6A Withdrawn CN110659562A (en) | 2018-09-26 | 2019-08-09 | Deep learning (DNN) classroom learning behavior analysis method and device |
Country Status (2)
Country | Link |
---|---|
CN (5) | CN110378867A (en) |
WO (5) | WO2020062898A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112989962B (en) * | 2021-02-24 | 2024-01-05 | 上海商汤智能科技有限公司 | Track generation method, track generation device, electronic equipment and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2001057496A2 (en) * | 2000-02-03 | 2001-08-09 | Applied Materials, Inc. | Straight line defect detection |
CN101621615A (en) * | 2009-07-24 | 2010-01-06 | 南京邮电大学 | Self-adaptive background modeling and moving target detecting method |
CN102999892A (en) * | 2012-12-03 | 2013-03-27 | 东华大学 | Intelligent fusion method for depth images based on area shades and red green blue (RGB) images |
CN106204567A (en) * | 2016-07-05 | 2016-12-07 | 华南理工大学 | A kind of natural background video matting method |
US20170116481A1 (en) * | 2015-10-23 | 2017-04-27 | Beihang University | Method for video matting via sparse and low-rank representation |
CN107516319A (en) * | 2017-09-05 | 2017-12-26 | 中北大学 | A kind of high accuracy simple interactive stingy drawing method, storage device and terminal |
CN108391118A (en) * | 2018-03-21 | 2018-08-10 | 惠州学院 | A kind of display system for realizing 3D rendering based on projection pattern |
Family Cites Families (41)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6870945B2 (en) * | 2001-06-04 | 2005-03-22 | University Of Washington | Video object tracking by estimating and subtracting background |
US7466842B2 (en) * | 2005-05-20 | 2008-12-16 | Mitsubishi Electric Research Laboratories, Inc. | Modeling low frame rate videos with bayesian estimation |
US8508546B2 (en) * | 2006-09-19 | 2013-08-13 | Adobe Systems Incorporated | Image mask generation |
US8520972B2 (en) * | 2008-09-12 | 2013-08-27 | Adobe Systems Incorporated | Image decomposition |
CN101686338B (en) * | 2008-09-26 | 2013-12-25 | 索尼株式会社 | System and method for partitioning foreground and background in video |
CN101588459B (en) * | 2009-06-26 | 2011-01-05 | 北京交通大学 | Video keying processing method |
US8625888B2 (en) * | 2010-07-21 | 2014-01-07 | Microsoft Corporation | Variable kernel size image matting |
US8386964B2 (en) * | 2010-07-21 | 2013-02-26 | Microsoft Corporation | Interactive image matting |
CN102456212A (en) * | 2010-10-19 | 2012-05-16 | 北大方正集团有限公司 | Separation method and system for visible watermark in numerical image |
CN102163216B (en) * | 2010-11-24 | 2013-02-13 | 广州市动景计算机科技有限公司 | Picture display method and device thereof |
CN102236901B (en) * | 2011-06-30 | 2013-06-05 | 南京大学 | Method for tracking target based on graph theory cluster and color invariant space |
US8744123B2 (en) * | 2011-08-29 | 2014-06-03 | International Business Machines Corporation | Modeling of temporarily static objects in surveillance video data |
US8731315B2 (en) * | 2011-09-12 | 2014-05-20 | Canon Kabushiki Kaisha | Image compression and decompression for image matting |
US9305357B2 (en) * | 2011-11-07 | 2016-04-05 | General Electric Company | Automatic surveillance video matting using a shape prior |
CN102651135B (en) * | 2012-04-10 | 2015-06-17 | 电子科技大学 | Optimized direction sampling-based natural image matting method |
US8792718B2 (en) * | 2012-06-29 | 2014-07-29 | Adobe Systems Incorporated | Temporal matte filter for video matting |
CN103366364B (en) * | 2013-06-07 | 2016-06-29 | 太仓中科信息技术研究院 | A kind of stingy drawing method based on color distortion |
AU2013206597A1 (en) * | 2013-06-28 | 2015-01-22 | Canon Kabushiki Kaisha | Depth constrained superpixel-based depth map refinement |
US9898856B2 (en) * | 2013-09-27 | 2018-02-20 | Fotonation Cayman Limited | Systems and methods for depth-assisted perspective distortion correction |
US20150091891A1 (en) * | 2013-09-30 | 2015-04-02 | Dumedia, Inc. | System and method for non-holographic teleportation |
CN104112144A (en) * | 2013-12-17 | 2014-10-22 | 深圳市华尊科技有限公司 | Person and vehicle identification method and device |
US10089740B2 (en) * | 2014-03-07 | 2018-10-02 | Fotonation Limited | System and methods for depth regularization and semiautomatic interactive matting using RGB-D images |
CN104952089B (en) * | 2014-03-26 | 2019-02-15 | 腾讯科技(深圳)有限公司 | A kind of image processing method and system |
CN103903230A (en) * | 2014-03-28 | 2014-07-02 | 哈尔滨工程大学 | Video image sea fog removal and clearing method |
CN105590307A (en) * | 2014-10-22 | 2016-05-18 | 华为技术有限公司 | Transparency-based matting method and apparatus |
CN104573688B (en) * | 2015-01-19 | 2017-08-25 | 电子科技大学 | Mobile platform tobacco laser code intelligent identification Method and device based on deep learning |
CN104680482A (en) * | 2015-03-09 | 2015-06-03 | 华为技术有限公司 | Method and device for image processing |
CN104935832B (en) * | 2015-03-31 | 2019-07-12 | 浙江工商大学 | For the video keying method with depth information |
CN105100646B (en) * | 2015-08-31 | 2018-09-11 | 北京奇艺世纪科技有限公司 | Method for processing video frequency and device |
CN105809679B (en) * | 2016-03-04 | 2019-06-18 | 李云栋 | Mountain railway side slope rockfall detection method based on visual analysis |
US10275892B2 (en) * | 2016-06-09 | 2019-04-30 | Google Llc | Multi-view scene segmentation and propagation |
CN117864918A (en) * | 2016-07-29 | 2024-04-12 | 奥的斯电梯公司 | Monitoring system for passenger conveyor, passenger conveyor and monitoring method thereof |
CN107872644B (en) * | 2016-09-23 | 2020-10-09 | 亿阳信通股份有限公司 | Video monitoring method and device |
CN106778810A (en) * | 2016-11-23 | 2017-05-31 | 北京联合大学 | Original image layer fusion method and system based on RGB feature Yu depth characteristic |
US10198621B2 (en) * | 2016-11-28 | 2019-02-05 | Sony Corporation | Image-Processing device and method for foreground mask correction for object segmentation |
CN106952276A (en) * | 2017-03-20 | 2017-07-14 | 成都通甲优博科技有限责任公司 | A kind of image matting method and device |
CN107194867A (en) * | 2017-05-14 | 2017-09-22 | 北京工业大学 | A kind of stingy picture synthetic method based on CUDA |
CN107273905B (en) * | 2017-06-14 | 2020-05-08 | 电子科技大学 | Target active contour tracking method combined with motion information |
CN107230182B (en) * | 2017-08-03 | 2021-11-09 | 腾讯科技(深圳)有限公司 | Image processing method and device and storage medium |
CN108399361A (en) * | 2018-01-23 | 2018-08-14 | 南京邮电大学 | A kind of pedestrian detection method based on convolutional neural networks CNN and semantic segmentation |
CN108320298B (en) * | 2018-04-28 | 2022-01-28 | 亮风台(北京)信息科技有限公司 | Visual target tracking method and equipment |
-
2019
- 2019-05-24 WO PCT/CN2019/088278 patent/WO2020062898A1/en active Application Filing
- 2019-05-24 CN CN201910444633.4A patent/CN110378867A/en not_active Withdrawn
- 2019-05-24 CN CN201910444632.XA patent/CN110335288A/en not_active Withdrawn
- 2019-05-24 WO PCT/CN2019/088279 patent/WO2020062899A1/en active Application Filing
- 2019-06-11 CN CN201910505008.6A patent/CN110363788A/en not_active Withdrawn
- 2019-07-11 CN CN201910628287.5A patent/CN110516534A/en not_active Withdrawn
- 2019-08-09 CN CN201910737589.6A patent/CN110659562A/en not_active Withdrawn
- 2019-08-19 WO PCT/CN2019/101273 patent/WO2020063189A1/en active Application Filing
- 2019-09-10 WO PCT/CN2019/105028 patent/WO2020063321A1/en active Application Filing
- 2019-09-19 WO PCT/CN2019/106616 patent/WO2020063436A1/en active Application Filing
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2001057496A2 (en) * | 2000-02-03 | 2001-08-09 | Applied Materials, Inc. | Straight line defect detection |
CN101621615A (en) * | 2009-07-24 | 2010-01-06 | 南京邮电大学 | Self-adaptive background modeling and moving target detecting method |
CN102999892A (en) * | 2012-12-03 | 2013-03-27 | 东华大学 | Intelligent fusion method for depth images based on area shades and red green blue (RGB) images |
US20170116481A1 (en) * | 2015-10-23 | 2017-04-27 | Beihang University | Method for video matting via sparse and low-rank representation |
CN106204567A (en) * | 2016-07-05 | 2016-12-07 | 华南理工大学 | A kind of natural background video matting method |
CN107516319A (en) * | 2017-09-05 | 2017-12-26 | 中北大学 | A kind of high accuracy simple interactive stingy drawing method, storage device and terminal |
CN108391118A (en) * | 2018-03-21 | 2018-08-10 | 惠州学院 | A kind of display system for realizing 3D rendering based on projection pattern |
Non-Patent Citations (4)
Title |
---|
NING XU 等: "Deep Image Matting", 《COMPUTER VISION FOUNDATION》 * |
SHUTAO LI 等: "Image matting for fusion of multi-focus images in dynamic scenes", 《INFORMATION FUSION》 * |
ZHAOQUAN CAI 等: "Improving sampling-based image matting with cooperative coevolution differential evolution algorithm", 《SOFT COMPUTING》 * |
谢斌 等: "基于雾气遮罩理论的图像去雾算法", 《计算机工程与科学》 * |
Also Published As
Publication number | Publication date |
---|---|
CN110335288A (en) | 2019-10-15 |
CN110363788A (en) | 2019-10-22 |
WO2020063189A1 (en) | 2020-04-02 |
WO2020063436A1 (en) | 2020-04-02 |
WO2020062898A1 (en) | 2020-04-02 |
WO2020062899A1 (en) | 2020-04-02 |
WO2020063321A1 (en) | 2020-04-02 |
CN110516534A (en) | 2019-11-29 |
CN110659562A (en) | 2020-01-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11610082B2 (en) | Method and apparatus for training neural network model used for image processing, and storage medium | |
Liu et al. | Single image dehazing with depth-aware non-local total variation regularization | |
Correal et al. | Automatic expert system for 3D terrain reconstruction based on stereo vision and histogram matching | |
US9185270B2 (en) | Ghost artifact detection and removal in HDR image creation using graph based selection of local reference | |
CN111127476A (en) | Image processing method, device, equipment and storage medium | |
CN111783779A (en) | Image processing method, apparatus and computer-readable storage medium | |
Song et al. | Stylizing face images via multiple exemplars | |
CN104599288A (en) | Skin color template based feature tracking method and device | |
CN113039576A (en) | Image enhancement system and method | |
CN113724379A (en) | Three-dimensional reconstruction method, device, equipment and storage medium | |
CN116030498A (en) | Virtual garment running and showing oriented three-dimensional human body posture estimation method | |
CN113378812A (en) | Digital dial plate identification method based on Mask R-CNN and CRNN | |
CN115393231A (en) | Defect image generation method and device, electronic equipment and storage medium | |
CN114021704B (en) | AI neural network model training method and related device | |
CN115456921A (en) | Synthetic image harmony model training method, harmony method and device | |
Priego et al. | 4DCAF: A temporal approach for denoising hyperspectral image sequences | |
CN110378867A (en) | By prospect background pixel to and grayscale information obtain transparency mask method | |
Yan et al. | A natural-based fusion strategy for underwater image enhancement | |
Hao et al. | Texture enhanced underwater image restoration via Laplacian regularization | |
CN115526891A (en) | Training method and related device for generation model of defect data set | |
CN110827373A (en) | Advertisement picture generation method and device and storage medium | |
Faghih et al. | Neural gray edge: Improving gray edge algorithm using neural network | |
Van Vo et al. | High dynamic range video synthesis using superpixel-based illuminance-invariant motion estimation | |
Veeravasarapu et al. | Fast and fully automated video colorization | |
Seetharaman et al. | A statistical framework based on a family of full range autoregressive models for edge extraction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication |
Application publication date: 20191025 |
|
WW01 | Invention patent application withdrawn after publication |