CN108109117A - A method of real-time color transformation based on a moving object - Google Patents

A method of real-time color transformation based on a moving object Download PDF

Info

Publication number
CN108109117A
CN108109117A (application CN201711339167.0A; also published as CN 108109117 A)
Authority
CN
China
Prior art keywords
value
region
color
image
gray
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711339167.0A
Other languages
Chinese (zh)
Inventor
陆晓 (Lu Xiao)
叶树阳 (Ye Shuyang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Liuzhou Wisdom Vision Technology Co Ltd
Original Assignee
Liuzhou Wisdom Vision Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Liuzhou Wisdom Vision Technology Co Ltd
Priority to CN201711339167.0A
Publication of CN108109117A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G06T5/20 Image enhancement or restoration using local operators
    • G06T5/30 Erosion or dilatation, e.g. thinning
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G06T7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G06T7/90 Determination of colour characteristics
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G06T2207/20032 Median filtering

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Of Color Television Signals (AREA)
  • Studio Circuits (AREA)
  • Image Analysis (AREA)

Abstract

A method of real-time color transformation based on a moving object. A stationary camera detects moving targets in an image sequence, obtains the moving region of the target in each frame, and, according to the amplitude of the target's positional change in the image over a period of time, changes the color of the moving region in each frame according to a rule. The method comprises the steps of: (1) image pre-processing; (2) detecting moving objects within the captured range; (3) computing the displacement of the object between frames; (4) changing the color of the moving-object region in real time. The invention employs the three-frame difference method, median filtering, Otsu's method (maximum between-class variance) and region growing to change the color of a moving object quickly, effectively and in real time.

Description

A method of real-time color transformation based on a moving object
Technical field
The present invention relates to the technical field of video image processing, and more particularly to a method of real-time color transformation based on a moving object.
Background technology
In the current prior art, changing the color of an object within the captured range of a video image can only be done in post-processing after the video has been acquired; the color of a captured moving object cannot be transformed in real time during shooting. Although such post-processing can achieve the same effect, its latency is too great and it requires substantial manpower and material resources, while internet-market demand for real-time transformation of moving-object color keeps growing; this technology is urgently needed to meet users' needs.
Content of the invention
The technical problem to be solved by the present invention is to provide a fast and convenient method of real-time color transformation based on a moving object that requires no post-processing and can be used on a variety of devices, so as to overcome the above deficiencies of the prior art.
The technical scheme adopted by the present invention is: a method of real-time color transformation based on a moving object, the method comprising the following steps:
(1) Image pre-processing:
S1: The camera acquires real-time image information and obtains the current frame of the video sequence;
S2: The color image of the current frame is converted into a grayscale image;
S3: The grayscale image is filtered using median filtering;
(2) Detecting moving objects within the captured range:
S4: Using the three-frame difference method, the grayscale image processed in step S3 is compared with the grayscale images of adjacent frames to obtain difference images;
S5: Using Otsu's method (maximum between-class variance), an adaptive threshold T of the difference image is computed;
S6: The pixels of the difference image are compared with the threshold T to obtain a binary map of the moving region in the image sequence;
S7: Using region growing, the coordinate set of the connected region of the moving object in the binary map is extracted, thereby isolating the moving region;
S8: The moving region is filled and its boundary smoothed, forming the contour and shape of the moving object;
(3) Computing the displacement of the object between frames:
S9: Based on the point set of the moving region obtained in step S7, the rectangular boundary of the moving region in the current frame and the distances from this rectangle to the image borders (the top, bottom, left and right distance values) are computed, giving the position information of the moving region in the current frame;
S10: The position information of the moving region in the current frame is compared with that of the previous frame, and the smaller of the top, bottom, left and right distance values is retained;
S11: The left and right distance values are subtracted to obtain the horizontal motion amplitude d1 of the moving object, and the top and bottom distance values are subtracted to obtain its vertical motion amplitude d2; the amplitude of the object's motion, i.e. the gap parameter l, is computed as: l = sqrt(d1*d1 + d2*d2);
(4) Real-time color change of the moving-object region:
S12: The current frame is traversed to obtain the pixel color values;
S13: A rectangular frame is formed from the motion amplitudes d1 and d2 obtained in step S11; let its diagonal length be s; the ratio t of the diagonal length s to the gap parameter l is the percentage of the total motion amplitude accounted for by the current moving region;
S14: The pixel values of the moving region are traversed to obtain the color value F(R,G,B) of each coordinate point;
S15: From the distance p between the set target color value Z(R,G,B) and the color value F(R,G,B) of each coordinate point, the new color of each pixel is computed as: I(R,G,B) = F(R,G,B) + p*t;
S16: The color is changed and the display image is output.
Steps S4 to S6 specifically include the following steps:
S21: Using the three-frame difference method, take the grayscale images of the (n+1)-th, n-th and (n-1)-th frames of the video sequence; the gray values at corresponding pixel coordinates of the three frames are fn+1(x, y), fn(x, y) and fn-1(x, y) respectively;
S22: Subtract the gray values at corresponding coordinates of the (n+1)-th and n-th frames to obtain difference image d1;
S23: Using Otsu's method, let the foreground pixels of difference image d1 account for proportion w0 of the image with average gray level u0, and the background pixels account for proportion w1 with average gray level u1; the overall average gray level u of the image is computed as: u = w0*u0 + w1*u1;
S24: The between-class variance of the foreground and background is computed as:
g = w0*(u0-u)*(u0-u) + w1*(u1-u)*(u1-u) = w0*w1*(u0-u1)*(u0-u1);
S25: When the variance g is maximal, the separation between foreground and background is greatest, and the gray level at that point is the optimal threshold T of difference image d1;
S26: Subtract the optimal threshold T from each pixel of difference image d1; if the absolute value of the result is less than T, the pixel is set to black, and if it is greater than T, the pixel is set to white, finally yielding the binary difference map D1;
S27: Similarly, subtract the gray values at corresponding coordinates of the n-th and (n-1)-th frames to obtain difference image d2, and obtain the binary difference map D2 using Otsu's method;
S28: Perform an AND operation on binary maps D1 and D2 to obtain binary map D, the binary map of the moving region.
Steps S7 and S8 specifically include the following steps:
S31: Traverse the image and extract a pixel that is not background; set this pixel to the background color;
S32: Taking this pixel as the seed point, search for all connected points within the neighborhood of the point, and set every connected point found to the background color;
S33: Taking each connected point found as a new seed point, search for all connected points within its neighborhood, and again set every point found to the background color;
S34: Repeat this cycle until no new seed points appear.
The median filtering method specifically includes the following steps:
S41: Extract the gray value of a pixel at coordinate point A in the image;
S42: Obtain the neighborhood window of coordinate point A; the window shape may be linear, square, circular, etc.;
S43: Collect the gray values within the neighborhood window and sort them;
S44: Take the median of the sorted gray values to replace the gray value of point A.
Owing to the adoption of the above technical scheme, the method of real-time color transformation based on a moving object of the present invention has the following advantageous effects:
1. The method detects moving targets in the image sequence with a stationary camera, obtains the moving region of the target in each frame, and changes the color of the moving region in each frame according to a rule based on the amplitude of the target's positional change in the image over a period of time; it is fast, convenient and practical;
2. In view of real-time and computational constraints, the invention extracts the moving object in the video stream using the three-frame difference method; since this algorithm is relatively insensitive to scene changes such as lighting, it adapts to a variety of dynamic environments with good stability.
The technical features of the method of real-time color transformation based on a moving object of the present invention are further described below with reference to the accompanying drawings and embodiments.
Description of the drawings
Fig. 1: Flow chart of the method of real-time color transformation based on a moving object of the present invention.
Specific embodiment
Embodiment
A method of real-time color transformation based on a moving object comprises the following steps:
(1) Image pre-processing:
S1: The camera acquires real-time image information and obtains the current frame of the video sequence;
S2: The color image of the current frame is converted into a grayscale image;
S3: To reduce the influence of noise on the extraction of the difference image, the grayscale image is filtered using median filtering, which removes impulse noise while preserving signal edges;
(2) Detecting moving objects within the captured range: the video sequence acquired by the camera is continuous. If there is no moving target in the scene, the change in the corresponding pixels of successive frames is very slight; if there is a moving target, the pixels change significantly from frame to frame;
S4: Using the three-frame difference method, the grayscale image processed in step S3 is compared with the grayscale images of adjacent frames to obtain difference images; the difference image reveals the contour of the moving target;
S5: The difference image contains both the pixel values of the moving object (the foreground) and interfering background pixels, so the image must be segmented to extract the foreground region. Since the shooting scene is not fixed and many factors such as lighting changes interfere, the threshold T cannot be a fixed value and must vary with the situation; therefore Otsu's method (maximum between-class variance) is used to compute an adaptive threshold T of the difference image;
S6: The pixels of the difference image are compared with the threshold T to obtain a binary map of the moving region in the image sequence;
S7: Using region growing, the coordinate set of the connected region of the moving object in the binary map is extracted, thereby isolating the moving region;
S8: The moving region is filled and its boundary smoothed, forming the contour and shape of the moving object; that is, the image undergoes morphological processing (dilation and erosion), connected-component extraction and hole filling;
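The morphological processing named in step S8 can be sketched in a few lines of numpy. The closing below (dilation followed by erosion) is a minimal illustration of how small holes in the moving region are filled, not the patent's implementation; in practice a library routine such as OpenCV's morphologyEx would normally be used.

```python
import numpy as np

def dilate(mask, k=3):
    """3x3 binary dilation: a pixel becomes 1 if any neighbour is 1."""
    pad = k // 2
    p = np.pad(mask, pad)
    out = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            out |= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def erode(mask, k=3):
    """3x3 binary erosion: a pixel stays 1 only if all neighbours are 1."""
    pad = k // 2
    p = np.pad(mask, pad, constant_values=1)
    out = np.ones_like(mask)
    for dy in range(k):
        for dx in range(k):
            out &= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

# Closing (dilation then erosion) fills small holes in the moving region.
mask = np.zeros((7, 7), dtype=np.uint8)
mask[2:5, 2:5] = 1
mask[3, 3] = 0                     # a one-pixel hole inside the region
closed = erode(dilate(mask))
print(int(closed[3, 3]))           # the hole is filled
```
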
(3) Computing the displacement of the object between frames:
S9: Based on the point set of the moving region obtained in step S7, the rectangular boundary of the moving region in the current frame and the distances from this rectangle to the image borders (the top, bottom, left and right distance values) are computed, giving the position information of the moving region in the current frame;
S10: The position information of the moving region in the current frame is compared with that of the previous frame, and the smaller of the top, bottom, left and right distance values is retained;
S11: The left and right distance values are subtracted to obtain the horizontal motion amplitude d1 of the moving object, and the top and bottom distance values are subtracted to obtain its vertical motion amplitude d2; the amplitude of the object's motion, i.e. the gap parameter l, is computed as: l = sqrt(d1*d1 + d2*d2);
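The wording of steps S10 and S11 is terse, so the sketch below takes one plausible reading: d1 and d2 are the frame-to-frame changes of the bounding rectangle's border distances. The function names and the (x0, y0, x1, y1) box format are illustrative assumptions, not the patent's notation.

```python
import math

def border_distances(box, img_w, img_h):
    """S9: top/bottom/left/right distances from the region's bounding
    rectangle (x0, y0, x1, y1) to the image borders."""
    x0, y0, x1, y1 = box
    return y0, img_h - y1, x0, img_w - x1

def gap_parameter(prev_box, cur_box, img_w, img_h):
    """S10-S11 (one plausible reading): the horizontal amplitude d1 and the
    vertical amplitude d2 are the changes of the border distances between
    consecutive frames, and l = sqrt(d1*d1 + d2*d2)."""
    pt, pb, pl, pr = border_distances(prev_box, img_w, img_h)
    ct, cb, cl, cr = border_distances(cur_box, img_w, img_h)
    d1 = abs(cl - pl)              # horizontal motion amplitude
    d2 = abs(ct - pt)              # vertical motion amplitude
    return d1, d2, math.sqrt(d1 * d1 + d2 * d2)

# Object moved 3 px right and 4 px down between frames -> l = 5
d1, d2, l = gap_parameter((10, 10, 20, 20), (13, 14, 23, 24), 100, 100)
print(d1, d2, l)                   # 3 4 5.0
```
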
(4) Real-time color change of the moving-object region: the color value of the moving object is changed according to its motion and displayed in real time. From the extent of the moving region, the bounding rectangle of the region is computed, and the amplitude of change of this moving rectangle in the image is recorded for every frame of the video sequence, together with the maximum amplitude, so that the pixel color values in the moving region can be changed according to the variation of the amplitude;
S12: The current frame is traversed to obtain the pixel color values;
S13: A rectangular frame is formed from the motion amplitudes d1 and d2 obtained in step S11; let its diagonal length be s; the ratio t of the diagonal length s to the gap parameter l is the percentage of the total motion amplitude accounted for by the current moving region;
S14: The pixel values of the moving region are traversed to obtain the color value F(R,G,B) of each coordinate point;
S15: Since the color-transformation rule for the moving-object region varies with the motion amplitude of the object, the rectangular frame in step S13 is constantly changing rather than fixed in size, so the percentage t is not fixed either. From the distance p between the set target color value Z(R,G,B) and the color value F(R,G,B) of each coordinate point, the new color of each pixel is computed as: I(R,G,B) = F(R,G,B) + p*t;
S16: The color is changed and the display image is output.
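Steps S14 and S15 can be sketched as follows. The patent calls p a distance; the sketch takes p as the signed per-channel difference Z - F, so that I = F + p*t moves each masked pixel toward the target color Z. The function name and the boolean-mask representation of the moving region are assumptions for illustration.

```python
import numpy as np

def recolor_region(frame, mask, target, t):
    """S14-S16 sketch: shift each masked pixel F toward the target colour Z
    by p*t, where p = Z - F and t is the amplitude ratio from step S13."""
    out = frame.astype(np.float32)
    f = out[mask]                  # F(R,G,B) of the moving region
    p = target - f                 # signed per-channel distance to Z(R,G,B)
    out[mask] = f + p * t          # I(R,G,B) = F(R,G,B) + p*t
    return np.clip(out, 0, 255).astype(np.uint8)

frame = np.full((4, 4, 3), 100, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True              # the moving region
target = np.array([200.0, 0.0, 100.0])
out = recolor_region(frame, mask, target, 0.5)
print(out[1, 1])                   # half-way to the target: [150  50 100]
print(out[0, 0])                   # background untouched: [100 100 100]
```
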
Steps S4 to S6 specifically include the following steps:
S21: Using the three-frame difference method, take the grayscale images of the (n+1)-th, n-th and (n-1)-th frames of the video sequence; the gray values at corresponding pixel coordinates of the three frames are fn+1(x, y), fn(x, y) and fn-1(x, y) respectively;
S22: Subtract the gray values at corresponding coordinates of the (n+1)-th and n-th frames, which weakens the similar parts of the images and highlights the changed parts, obtaining difference image d1;
S23: Using Otsu's method, let the foreground pixels of difference image d1 account for proportion w0 of the image with average gray level u0, and the background pixels account for proportion w1 with average gray level u1; the overall average gray level u of the image is computed as: u = w0*u0 + w1*u1;
S24: The between-class variance of the foreground and background is computed as:
g = w0*(u0-u)*(u0-u) + w1*(u1-u)*(u1-u) = w0*w1*(u0-u1)*(u0-u1);
S25: When the variance g is maximal, the separation between foreground and background is greatest, and the gray level at that point is the optimal threshold T of difference image d1;
S26: Subtract the optimal threshold T from each pixel of difference image d1; if the absolute value of the result is less than T, the pixel is set to black, and if it is greater than T, the pixel is set to white, finally yielding the binary difference map D1;
S27: Similarly, subtract the gray values at corresponding coordinates of the n-th and (n-1)-th frames to obtain difference image d2, and obtain the binary difference map D2 using Otsu's method;
S28: Perform an AND operation on binary maps D1 and D2 to obtain binary map D, the binary map of the moving region.
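Steps S21 to S28 can be condensed into the following numpy sketch. The exhaustive-search Otsu threshold implements the between-class variance formula g = w0*w1*(u0-u1)^2 from S23-S25, and the two thresholded difference images are ANDed as in S28. In the demo the object moves by its own width per frame, so the AND isolates its current position; function names are illustrative.

```python
import numpy as np

def otsu_threshold(img):
    """S23-S25: test every gray level and keep the one that maximises
    the between-class variance g = w0*w1*(u0-u1)^2."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    best_t, best_g = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        u0 = (np.arange(t) * prob[:t]).sum() / w0
        u1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        g = w0 * w1 * (u0 - u1) ** 2
        if g > best_g:
            best_t, best_g = t, g
    return best_t

def moving_region(f_prev, f_cur, f_next):
    """S21-S28: two difference images (d1 between frames n+1 and n, d2
    between n and n-1), each thresholded with Otsu's method, then ANDed
    to give the binary map D of the moving region."""
    d1 = np.abs(f_next.astype(np.int16) - f_cur.astype(np.int16)).astype(np.uint8)
    d2 = np.abs(f_cur.astype(np.int16) - f_prev.astype(np.int16)).astype(np.uint8)
    D1 = d1 >= otsu_threshold(d1)
    D2 = d2 >= otsu_threshold(d2)
    return D1 & D2

base = np.full((8, 8), 10, dtype=np.uint8)
frames = []
for col in (1, 3, 5):              # bright 2-px object moving 2 px per frame
    f = base.copy()
    f[3:5, col:col + 2] = 200
    frames.append(f)
D = moving_region(*frames)
print(np.argwhere(D).tolist())     # the object's current position
```
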
Steps S7 and S8 specifically include the following steps:
S31: Traverse the image and extract a pixel that is not background; set this pixel to the background color;
S32: Taking this pixel as the seed point, search for all connected points within the neighborhood of the point, and set every connected point found to the background color;
S33: Taking each connected point found as a new seed point, search for all connected points within its neighborhood, and again set every point found to the background color;
S34: Repeat this cycle until no new seed points appear.
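The region-growing loop of S31-S34 is essentially a breadth-first flood fill. A minimal sketch follows; 4-connectivity is assumed here, a detail the patent does not specify.

```python
from collections import deque

def grow_region(binary, seed):
    """S31-S34: starting from a foreground seed pixel, repeatedly collect
    connected foreground neighbours, marking each visited pixel as
    background, until no new seed points appear; returns the coordinate
    set of one connected region."""
    h, w = len(binary), len(binary[0])
    region, queue = [], deque([seed])
    binary[seed[0]][seed[1]] = 0          # set to background colour (S31)
    while queue:                          # S32-S34: loop until no new seeds
        y, x = queue.popleft()
        region.append((y, x))
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and binary[ny][nx]:
                binary[ny][nx] = 0
                queue.append((ny, nx))
    return region

binary = [[0, 1, 1, 0],
          [0, 1, 0, 0],
          [0, 0, 0, 1]]                   # two separate components
region = grow_region(binary, (0, 1))
print(sorted(region))                     # [(0, 1), (0, 2), (1, 1)]
print(binary[2][3])                       # 1: the other component is untouched
```
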
The median filtering method specifically includes the following steps:
S41: Extract the gray value of a pixel at coordinate point A in the image;
S42: Obtain the neighborhood window of coordinate point A; the window shape may be linear, square, circular, etc.;
S43: Collect the gray values within the neighborhood window and sort them;
S44: Take the median of the sorted gray values to replace the gray value of point A.
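Steps S41 to S44 amount to the classic median filter. A minimal pure-Python sketch over a square window follows; border handling (pixels left unchanged here) is an assumption the patent does not specify.

```python
def median_filter(img, k=3):
    """S41-S44: replace each pixel with the median of the gray values in
    its k x k neighbourhood (a square window; the patent also allows
    linear or circular windows). Border pixels keep their value."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [row[:] for row in img]
    for y in range(r, h - r):
        for x in range(r, w - r):
            window = sorted(img[y + dy][x + dx]
                            for dy in range(-r, r + 1)
                            for dx in range(-r, r + 1))
            out[y][x] = window[len(window) // 2]   # median replaces A (S44)
    return out

# A single impulse-noise pixel in a flat area is removed.
img = [[10, 10, 10, 10],
       [10, 255, 10, 10],
       [10, 10, 10, 10],
       [10, 10, 10, 10]]
out = median_filter(img)
print(out[1][1])                   # 10: the impulse is filtered out
```
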
The above embodiment is only a preferred embodiment of the present invention; the structure of the invention is not limited to the forms enumerated in the above embodiment, and any modification, equivalent substitution and the like made within the spirit and principles of the invention shall fall within the scope of protection of the present invention.

Claims (4)

  1. A method of real-time color transformation based on a moving object, characterized in that the method comprises the following steps:
    (1) Image pre-processing:
    S1: The camera acquires real-time image information and obtains the current frame of the video sequence;
    S2: The color image of the current frame is converted into a grayscale image;
    S3: The grayscale image is filtered using median filtering;
    (2) Detecting moving objects within the captured range:
    S4: Using the three-frame difference method, the grayscale image processed in step S3 is compared with the grayscale images of adjacent frames to obtain difference images;
    S5: Using Otsu's method (maximum between-class variance), an adaptive threshold T of the difference image is computed;
    S6: The pixels of the difference image are compared with the threshold T to obtain a binary map of the moving region in the image sequence;
    S7: Using region growing, the coordinate set of the connected region of the moving object in the binary map is extracted, thereby isolating the moving region;
    S8: The moving region is filled and its boundary smoothed, forming the contour and shape of the moving object;
    (3) Computing the displacement of the object between frames:
    S9: Based on the point set of the moving region obtained in step S7, the rectangular boundary of the moving region in the current frame and the distances from this rectangle to the image borders (the top, bottom, left and right distance values) are computed, giving the position information of the moving region in the current frame;
    S10: The position information of the moving region in the current frame is compared with that of the previous frame, and the smaller of the top, bottom, left and right distance values is retained;
    S11: The left and right distance values are subtracted to obtain the horizontal motion amplitude d1 of the moving object, and the top and bottom distance values are subtracted to obtain its vertical motion amplitude d2; the amplitude of the object's motion, i.e. the gap parameter l, is computed as: l = sqrt(d1*d1 + d2*d2);
    (4) Real-time color change of the moving-object region:
    S12: The current frame is traversed to obtain the pixel color values;
    S13: A rectangular frame is formed from the motion amplitudes d1 and d2 obtained in step S11; let its diagonal length be s; the ratio t of the diagonal length s to the gap parameter l is the percentage of the total motion amplitude accounted for by the current moving region;
    S14: The pixel values of the moving region are traversed to obtain the color value F(R,G,B) of each coordinate point;
    S15: From the distance p between the set target color value Z(R,G,B) and the color value F(R,G,B) of each coordinate point, the new color of each pixel is computed as: I(R,G,B) = F(R,G,B) + p*t;
    S16: The color is changed and the display image is output.
  2. The method of real-time color transformation based on a moving object according to claim 1, characterized in that steps S4 to S6 specifically include the following steps:
    S21: Using the three-frame difference method, take the grayscale images of the (n+1)-th, n-th and (n-1)-th frames of the video sequence; the gray values at corresponding pixel coordinates of the three frames are fn+1(x, y), fn(x, y) and fn-1(x, y) respectively;
    S22: Subtract the gray values at corresponding coordinates of the (n+1)-th and n-th frames to obtain difference image d1;
    S23: Using Otsu's method, let the foreground pixels of difference image d1 account for proportion w0 of the image with average gray level u0, and the background pixels account for proportion w1 with average gray level u1; the overall average gray level u of the image is computed as: u = w0*u0 + w1*u1;
    S24: The between-class variance of the foreground and background is computed as:
    g = w0*(u0-u)*(u0-u) + w1*(u1-u)*(u1-u) = w0*w1*(u0-u1)*(u0-u1);
    S25: When the variance g is maximal, the separation between foreground and background is greatest, and the gray level at that point is the optimal threshold T of difference image d1;
    S26: Subtract the optimal threshold T from each pixel of difference image d1; if the absolute value of the result is less than T, the pixel is set to black, and if it is greater than T, the pixel is set to white, finally yielding the binary difference map D1;
    S27: Similarly, subtract the gray values at corresponding coordinates of the n-th and (n-1)-th frames to obtain difference image d2, and obtain the binary difference map D2 using Otsu's method;
    S28: Perform an AND operation on binary maps D1 and D2 to obtain binary map D, the binary map of the moving region.
  3. The method of real-time color transformation based on a moving object according to claim 2, characterized in that steps S7 and S8 specifically include the following steps:
    S31: Traverse the image and extract a pixel that is not background; set this pixel to the background color;
    S32: Taking this pixel as the seed point, search for all connected points within the neighborhood of the point, and set every connected point found to the background color;
    S33: Taking each connected point found as a new seed point, search for all connected points within its neighborhood, and again set every point found to the background color;
    S34: Repeat this cycle until no new seed points appear.
  4. The method of real-time color transformation based on a moving object according to claim 3, characterized in that the median filtering method specifically includes the following steps:
    S41: Extract the gray value of a pixel at coordinate point A in the image;
    S42: Obtain the neighborhood window of coordinate point A; the window shape may be linear, square, circular, etc.;
    S43: Collect the gray values within the neighborhood window and sort them;
    S44: Take the median of the sorted gray values to replace the gray value of point A.
CN201711339167.0A 2017-12-14 2017-12-14 A method of real-time color transformation based on a moving object Pending CN108109117A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711339167.0A CN108109117A (en) 2017-12-14 2017-12-14 A method of real-time color transformation based on a moving object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711339167.0A CN108109117A (en) 2017-12-14 2017-12-14 A method of real-time color transformation based on a moving object

Publications (1)

Publication Number Publication Date
CN108109117A true CN108109117A (en) 2018-06-01

Family

ID=62216809

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711339167.0A Pending CN108109117A (en) 2017-12-14 2017-12-14 A method of real-time color transformation based on a moving object

Country Status (1)

Country Link
CN (1) CN108109117A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101533517A (en) * 2009-04-15 2009-09-16 北京联合大学 Automatic extraction method for seal images in Chinese painting and calligraphy based on structural features
CN101739551A (en) * 2009-02-11 2010-06-16 北京智安邦科技有限公司 Method and system for identifying moving objects
CN102222214A (en) * 2011-05-09 2011-10-19 苏州易斯康信息科技有限公司 Fast object recognition algorithm
CN102385754A (en) * 2010-08-30 2012-03-21 三星电子株式会社 Method and equipment for tracking an object
CN105069816A (en) * 2015-07-29 2015-11-18 济南中维世纪科技有限公司 Method and system for counting people entering and leaving


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110599493A (en) * 2018-06-12 2019-12-20 财团法人工业技术研究院 Numerical array data image processing device and method and color code table generating method
CN109034058A (en) * 2018-07-25 2018-12-18 哈工大机器人(合肥)国际创新研究院 A method and system for region division and self-correction in an image
CN109034058B (en) * 2018-07-25 2022-01-04 哈工大机器人(合肥)国际创新研究院 Method and system for dividing and self-correcting regions in an image
CN109431681A (en) * 2018-09-25 2019-03-08 吉林大学 An intelligent eye mask for detecting sleep quality and its detection method
CN109431681B (en) * 2018-09-25 2023-12-19 吉林大学 A smart eye mask for detecting sleep quality and its detection method
CN111505651A (en) * 2020-04-22 2020-08-07 西北工业大学 Feature extraction method for potential moving targets in active sonar echo maps
CN111505651B (en) * 2020-04-22 2022-11-11 西北工业大学 A feature extraction method for potential moving targets in active sonar echo maps
CN112351191A (en) * 2020-09-14 2021-02-09 中标慧安信息技术股份有限公司 Motion detection processing method and system
CN112351191B (en) * 2020-09-14 2021-11-23 中标慧安信息技术股份有限公司 Motion detection processing method and system
CN114998391A (en) * 2022-05-26 2022-09-02 国网河南省电力公司电力科学研究院 A method for rapidly filtering redundant information in images based on a detachable YOLOv5 model

Similar Documents

Publication Publication Date Title
CN108109117A (en) A method of real-time color transformation based on a moving object
Sommer et al. A survey on moving object detection for wide area motion imagery
CN108154520B (en) A moving-target detection method based on optical flow and frame matching
CN103606132B (en) Based on the multiframe Digital Image Noise method of spatial domain and time domain combined filtering
CN104700430A (en) Method for detecting movement of airborne displays
CN108596169B (en) Block signal conversion and target detection method and device based on video stream image
CN109359593B (en) Rain and snow environment picture fuzzy monitoring and early warning method based on image local grid
CN107742307A (en) Feature extraction and parameter analysis method of transmission line galloping based on improved frame difference method
CN104616290A (en) Target detection algorithm in combination of statistical matrix model and adaptive threshold
CN105046719B (en) A kind of video frequency monitoring method and system
CN103413276A (en) Depth enhancing method based on texture distribution characteristics
CN105447890A (en) Motion vehicle detection method resisting light effect
CN113155032A (en) Building structure displacement measurement method based on dynamic vision sensor DVS
CN103578121A (en) Motion detection method based on shared Gaussian model in disturbed motion environment
Chen et al. Night-time pedestrian detection by visual-infrared video fusion
CN103996199A (en) Movement detection method based on depth information
CN110705492A (en) Stage mobile robot obstacle target detection method
Hisanaga et al. Tone mapping and blending method to improve SAR image visibility
Li et al. A shadow detection method based on improved Gaussian Mixture Model
Chaiyawatana et al. Robust object detection on video surveillance
CN110599431B (en) Time domain filtering method applied to infrared camera
CN103335636B (en) Detection method of small targets on ground
CN118154635A (en) Monitoring image processing method and system based on machine vision
Wang et al. Identification and extraction of circular markers in 3D reconstruction
KR102800427B1 (en) AI-based Low-Light Environment CCTV Video Enhancement System

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20180601