CN111414149B - Background model updating method and related device

Background model updating method and related device

Info

Publication number
CN111414149B
CN111414149B
Authority
CN
China
Prior art keywords
pixel
color
color information
current image
image
Prior art date
Legal status
Active
Application number
CN201910007726.0A
Other languages
Chinese (zh)
Other versions
CN111414149A (en)
Inventor
詹尚伦
李宗轩
杨朝勋
陈世泽
Current Assignee
Realtek Semiconductor Corp
Original Assignee
Realtek Semiconductor Corp
Priority date
Filing date
Publication date
Application filed by Realtek Semiconductor Corp filed Critical Realtek Semiconductor Corp
Priority to CN201910007726.0A priority Critical patent/CN111414149B/en
Publication of CN111414149A publication Critical patent/CN111414149A/en
Application granted granted Critical
Publication of CN111414149B publication Critical patent/CN111414149B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F7/00Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F7/58Random or pseudo-random number generators
    • G06F7/588Random number generators, i.e. based on natural stochastic processes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

A method for updating a background model and a related device are provided, the method is used for an image processing device, and the method comprises the following steps: receiving at least one background image; counting the occurrence times of the color information of each pixel according to the color information of each pixel in the at least one background image so as to establish a background model containing the color information and statistical information of the occurrence times; receiving a current image; determining whether to update the occurrence frequency of the color information of each pixel in the background model corresponding to the current image according to the color information of each pixel in the current image and the comparison result of a random number and a threshold value.

Description

Background model updating method and related device
Technical Field
The present invention relates to a method for updating a background model, and more particularly, to a method for updating a background model by a random number generator to reduce the memory space required by the background model.
Background
Most current consumer electronics products with image monitoring, such as surveillance systems, mobile phones, and network cameras, provide a motion detection function. Briefly, motion detection proceeds as follows: an image stream provided by the video interface of the device is taken as input; a background model is established from the image characteristics after training over a period of time (a continuous time sequence); the background model is compared with each pixel of the current image to detect moving objects; and a binary image is usually output, in which white represents the moving objects and black represents the background (non-moving objects).
However, conventional background model building methods, for example those based on a Gaussian Mixture Model (GMM) or a codebook, can detect accurate motion images and filter out periodically moving objects, thereby avoiding erroneous judgments and ghost phenomena, but their cost is too high in terms of hardware architecture. For example, the Gaussian mixture model method requires a large amount of floating-point computation for modeling and its output is not easy to control, whereas the codebook method needs little floating-point computation but must record a large amount of background data. It should be noted that although recording the background over a longer time allows objects with longer movement periods to be captured in the background model, it requires more temporary storage space; establishing an effective, high-accuracy background model therefore demands a large amount of register space for the statistics, and the heavy floating-point operations further increase computational complexity.
Disclosure of Invention
Therefore, it is a primary objective of the present invention to provide a method for updating a background model to solve the above problems.
The invention discloses a background model updating method, which is used for an image processing device and comprises the following steps: receiving at least one background image; counting the occurrence times of the color information of each pixel according to the color information of each pixel in the at least one background image so as to establish a background model containing the color information and statistical information of the occurrence times; receiving a current image; determining whether to update the occurrence frequency of the color information of each pixel in the background model corresponding to the current image according to the color information of each pixel in the current image and the comparison result of a random number and a threshold value.
The invention also discloses an image processing device, comprising: a processing unit for executing a program code; a storage unit, coupled to the processing unit, for storing the program code, wherein the program code instructs the processing unit to perform the following steps: receiving at least one background image; counting the occurrence times of the color information of each pixel according to the color information of each pixel in the at least one background image so as to establish a background model containing the color information and statistical information of the occurrence times; receiving a current image; and determining whether to update the occurrence frequency of the color information of each pixel in the background model corresponding to the current image according to the color information of each pixel in the current image and the comparison result of a random number and a threshold value.
The invention also discloses an image processing device, comprising: a processing unit for performing the steps of: receiving at least one background image; counting the occurrence times of the color information of each pixel according to the color information of each pixel in the at least one background image so as to establish a background model containing the color information and statistical information of the occurrence times; receiving a current image; determining whether to update the occurrence frequency of the color information of each pixel in the background model corresponding to the current image according to the color information of each pixel in the current image and a comparison result of a random number and a threshold value; and a storage unit, coupled to the processing unit, for storing the background image, the current image, the color information and the statistical information.
Drawings
Fig. 1 is a schematic diagram of an image processing apparatus according to an embodiment of the invention.
FIG. 2 is a flowchart illustrating an image processing method according to an embodiment of the invention.
Fig. 3 is a flowchart of updating a background model according to an embodiment of the present invention.
Fig. 4a to 5c are schematic diagrams illustrating an image detection result according to an embodiment of the invention.
Description of the symbols
10 image processing device
100 processing unit
102 storage unit
104 communication interface unit
30 background model update procedure
300 to 306 steps
Detailed Description
The invention aims to reduce the computational complexity and the storage space required by the background model, and to realize the establishment and update of the background model on a hardware architecture at low cost.
Please refer to fig. 1, which is a diagram illustrating an image processing apparatus 10 according to an embodiment of the present invention. The image processing apparatus 10 may be a camera of a personal computer or a notebook computer, or a monitoring system, a mobile phone, a network camera, etc., and includes a processing unit 100, a storage unit 102 and a communication interface unit 104. The processing unit 100 may be a microprocessor or an application-specific integrated circuit (ASIC). The storage unit 102 may be any data storage device for storing a program code, image and color information, and reading and executing the program code through the processing unit 100. For example, the storage unit 102 may be a Subscriber Identity Module (SIM), a read-only memory (ROM), a flash memory (flash memory), a random-access memory (RAM), a compact disc read-only memory (CD-ROM/DVD-ROM), a magnetic tape (magnetic tape), a hard disk (hard disk), an optical data storage device (optical data storage device), and the like, but is not limited thereto. The communication interface unit 104 may be a wireless or wired transceiver, which exchanges image data with other image devices according to the processing result of the processing unit 100.
Please refer to fig. 2, which is a flowchart illustrating an image processing method according to an embodiment of the present invention. First, the image processing apparatus 10 creates a background model from images received over a continuous time sequence (e.g., sub-frames); an image used to create the background model during this period is referred to herein as a background image, and the model records pixel information (such as color information) of each pixel in the background image. The image processing apparatus 10 then receives the pixel information (e.g., color information) of a current image, matches it to the closest color interval recorded in the background model (the background model building procedure), estimates the moving objects in the image according to that color interval (the motion detection procedure), and updates the background model according to a random number. The background model building procedure, the motion detection procedure, and the background model update procedure are explained in detail below. Because the background model records the color information of each pixel in the image, the temporary memory must store statistics of how often each color occurs at each pixel over a period of time. In the conventional method, the statistical information of the color occurrence counts of a single pixel, $size_{pixel}$, requires too much space (every color occurrence is recorded), as shown in formula 1.1 below, where c is the number of channels in the color space, r is the value range of a single channel, and v is the space required to store an occurrence count:

$$size_{pixel} = c \times r \times v \qquad (1.1)$$

In the conventional method, if the background model is to be built in hardware, the range r must be fixed (0 to 255 is a common range), and the larger v is, the longer the background timing (e.g., number of sub-frames) that can be recorded and the more complete the pixel information available for determining the background. In other words, the larger the value range r of a single channel and the space v required to store the occurrence counts, the larger the storage space, which is unfavorable for hardware implementation.
Therefore, the present invention proposes storing the statistics of the color occurrence counts as a histogram, i.e. vectorizing (quantizing) the color space, so that the storage space required by each pixel is reduced to the value given by formula 1.2 below, where q is the quantization step size; the larger q is, the fewer representative colors there are, and the smaller the storage space becomes. For example, if the range of a single channel is 0 to 255 and q is 16, each color vector interval covers 16 values and the range 0 to 255 is represented by 16 (256/16) representative colors: the 1st representative color covers 0 to 15, the 2nd representative color covers 16 to 31, and so on. Briefly, in the histogram form of the background model, each color vector corresponds to a bin of the histogram, and the occurrence count corresponds to the value stored in that bin, as shown in formula 1.3 below, where $I_{i,j}^{c}$ is the color information of the pixel at image position (i, j) in channel c, $q_c$ is the quantization step size of channel c, and $H_{i,j}^{c}$ holds the occurrence counts:

$$size_{pixel} = c \times \frac{r}{q} \times v \qquad (1.2)$$

$$\mathrm{bin} = \left\lfloor \frac{I_{i,j}^{c}}{q_c} \right\rfloor, \qquad \text{occurrence count} = H_{i,j}^{c}\big[\mathrm{bin}\big] \qquad (1.3)$$
Because the Y channel carries the light/dark information that is most obvious to the human eye, the description herein applies the histogram-based background model building method to the motion detection procedure using only the Y-channel color statistics of the YUV color space as an example. However, the present invention is not limited to a particular color space (YUV, RGB, HSV, etc.), number of color channels, or vectorization manner.
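As a concrete illustration of formulas 1.2 and 1.3, the following C sketch builds a per-pixel Y-channel histogram under assumed parameters (quantization step q = 32 and 8-bit saturating counters); the type and function names are illustrative and not taken from the patent.

#include <stdint.h>

#define Q          32                 /* quantization step size (assumed)      */
#define NUM_BINS   (256 / Q)          /* number of representative colors       */

/* Per-pixel histogram: each bin counts how often its representative color
 * has been observed, stored in a small fixed-width counter (v bits). */
typedef struct {
    uint8_t count[NUM_BINS];
} pixel_hist_t;

/* Accumulate one background image into the model (formula 1.3):
 * bin = floor(I(i,j) / q), then increment that bin's occurrence count. */
static void accumulate_background(pixel_hist_t *model,
                                  const uint8_t *y_plane,
                                  int width, int height)
{
    for (int p = 0; p < width * height; ++p) {
        int bin = y_plane[p] / Q;
        if (model[p].count[bin] < UINT8_MAX)   /* saturate instead of wrapping */
            model[p].count[bin]++;
    }
}

Under these assumptions each pixel needs NUM_BINS x 8 = 64 bits, which matches formula 1.2 with c = 1, r/q = 8 and v = 8 bits.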
As can be seen from the above, introducing color space vectorization (quantization) effectively compresses the color range and saves storage cost. In the motion detection procedure, whenever a new image arrives, the two color vector intervals idx1 and idx2 closest to the color information of each pixel are found from the previously established background model, as shown in formula 1.4 below; taking out the two closest intervals improves the model's tolerance to color vectorization error. For example, with q = 32, when the pixel value I(i, j) is 31, the pixel belongs to the 0th representative color (range 0 to 31) but is closer to the 1st representative color (range 32 to 63), so idx1 = 0 and idx2 = 1. When I(i, j) is 65, the pixel belongs to the 2nd representative color (range 64 to 95) but is relatively close to the 1st representative color (range 32 to 63), so idx1 = 2 and idx2 = 1. Taking out the two closest color vector intervals idx1 and idx2 therefore effectively improves the tolerance to color vectorization error.

$$idx1 = \left\lfloor \frac{I(i,j)}{q} \right\rfloor, \qquad idx2 = idx1 + 1 \ \text{or}\ idx1 - 1 \qquad (1.4)$$
Then the occurrence counts H[idx1] and H[idx2] of the two closest color vector intervals are read from the background model and used for the motion detection judgment, which is made according to formula (2.1) below, where T is the background threshold. If the color vector of the pixel appears frequently enough (the background model count is larger than the background threshold), the pixel is judged as background; otherwise, the pixel is judged as a moving pixel.

$$\text{pixel}(i,j) = \begin{cases} \text{background}, & \text{if } H[idx1] > T \ \text{or}\ H[idx2] > T \\ \text{moving}, & \text{otherwise} \end{cases} \qquad (2.1)$$
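Continuing the sketch above, the following is a hedged C example of the motion detection judgment of formulas 1.4 and 2.1; the tie-breaking rule for idx2 and the boundary clamping are assumptions made for illustration.

/* Classify one pixel as background (1) or moving (0), given its Y value and
 * its per-pixel histogram h; T is the background threshold. */
static int is_background(const pixel_hist_t *h, uint8_t y, unsigned T)
{
    int idx1 = y / Q;                          /* formula 1.4: own interval   */
    /* idx2: the neighbouring interval whose values are nearer to y */
    int idx2 = ((y % Q) >= Q / 2) ? idx1 + 1 : idx1 - 1;
    if (idx2 < 0)          idx2 = 0;           /* clamp at histogram edges    */
    if (idx2 >= NUM_BINS)  idx2 = NUM_BINS - 1;

    /* formula 2.1: background if either closest bin has been seen often */
    return (h->count[idx1] > T) || (h->count[idx2] > T);
}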
According to the background model building procedure of the embodiment of the invention, the detection results for the current image are shown in fig. 4a to 5c: the left-most images are the originals, the middle images are the detection results with q = 32, and the right images are the detection results with q = 16. The size of the quantization range therefore affects the detection sensitivity: the smaller the range (i.e., the more representative colors), the higher the detection sensitivity, as the detection results of fig. 4c and 5c show more detail than those of fig. 4b and 5b. Conversely, when the quantization range is larger (i.e., there are fewer representative colors), the detection sensitivity is reduced, but more temporary storage space is saved. Since different hardware configurations have different constraints, the quantization should be designed according to the available storage space and the sensitivity required of the background model.
Further, it is an object of the present invention to reduce the storage space while ensuring the validity of the background model information. Please refer to fig. 3, which is a flowchart illustrating a background model updating procedure 30 according to an embodiment of the present invention. The background model update program 30 can be compiled into program code, stored in the storage unit 102, and includes the following steps:
step 300: at least one background image is received.
Step 302: and establishing a background model according to the color information of each pixel in the at least one background image, wherein the background model comprises statistical information of the occurrence times of the color information of each pixel.
Step 304: a current image is received.
Step 306: determining whether to update the occurrence frequency of the color information of each pixel in the background model corresponding to the current image according to the color information of each pixel in the current image and the comparison result of a random number and a threshold value.
According to the background model update program 30, after establishing the background model (such as the color space vectorized histogram described above), the image processing apparatus 10 learns new background information and forgets old background information by increasing the statistics of the color vector intervals that appear in the current image pixel and decreasing those that do not. A general background model update is given by formula (3.1):

$$H[b] \leftarrow \begin{cases} H[b] + v_l, & \text{if } b \in \{idx1, idx2\} \\ H[b] - v_f, & \text{otherwise} \end{cases} \qquad (3.1)$$

where $v_l$ and $v_f$ are positive floating-point numbers used to adjust the rates of learning and forgetting the background, respectively. In short, if the color vector interval of the current image pixel belongs to the two closest color vector intervals idx1 and idx2, the occurrence count of that interval is increased; otherwise, when an interval is not one of the two closest color vector intervals idx1 and idx2, its occurrence count is decreased, thereby realizing the information update of the background model.
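For contrast with the random-number scheme introduced next, here is a minimal sketch of the conventional update of formula 3.1, assuming one floating-point counter per bin, which is exactly the storage cost the invention seeks to avoid; v_l and v_f values are left to the caller.

/* Conventional per-pixel update (formula 3.1): add learning rate v_l to the
 * two matched bins and subtract forgetting rate v_f from all other bins. */
static void update_conventional(float hist[NUM_BINS],
                                int idx1, int idx2,
                                float v_l, float v_f)
{
    for (int b = 0; b < NUM_BINS; ++b) {
        if (b == idx1 || b == idx2)
            hist[b] += v_l;                     /* learn  */
        else if (hist[b] > v_f)
            hist[b] -= v_f;                     /* forget */
        else
            hist[b] = 0.0f;                     /* floor at zero */
    }
}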
Herein, another method of learning and forgetting the background through random numbers is proposed, which effectively reduces the required storage space while keeping the background model highly accurate, reducing the number of bits needed to store each count of the background model from m bits to k bits, where m and k are positive integers and 0 < k < m. The following formulas (3.2) and (3.3) are computed in hardware when the motion detection result of the pixel is background:

$$H[idx] \leftarrow H[idx] + 1, \quad \text{if } idx \in \{idx1, idx2\} \ \text{and}\ rand(seed) < learn_{thd} \qquad (3.2)$$

or

$$H[idx] \leftarrow H[idx] - 1, \quad \text{if } idx \notin \{idx1, idx2\} \ \text{and}\ rand(seed) < forget_{thd} \qquad (3.3)$$

where rand, seed, $learn_{thd}$ and $forget_{thd}$ are a random number generator, a random number seed, a learning threshold and a forgetting threshold, respectively, with seed > 0, $learn_{thd} \ge 0$ and $forget_{thd} \ge 0$. The threshold of step 306 includes the learning threshold and the forgetting threshold. Further, rand(seed) produces uniformly distributed random integers with $0 \le rand(seed) < 2^r$; the random number, the learning threshold and the forgetting threshold are integers greater than or equal to zero, and r is the number of bits used to store the random number (the larger r is, the wider the range the random number can cover). According to formulas (3.2) and (3.3), if the color vector interval of the current image pixel belongs to the two closest color vector intervals idx1 and idx2 and the learning threshold $learn_{thd}$ is greater than the random number rand(seed), the occurrence count of that color vector interval is increased by 1; if a color vector interval does not belong to the two closest color vector intervals idx1 and idx2 and the forgetting threshold $forget_{thd}$ is greater than the random number rand(seed), the occurrence count of that color vector interval is reduced by 1, thereby realizing the information update of the background model. It is worth noting that the comparison of the random number with the learning threshold $learn_{thd}$ and the forgetting threshold $forget_{thd}$ determines whether the occurrence count of the color vector interval of the current image pixel is updated at all, i.e., the count is not updated on every image. However, as noted above, a highly accurate background model requires a large number N of images, i.e., long-term image data; based on the characteristics of random numbers, the longer the image data is recorded, the smaller the probability of error in the updates becomes, i.e., the closer the statistics approach the real background model.
On the other hand, the larger the learning threshold $learn_{thd}$ and the forgetting threshold $forget_{thd}$ are, the more easily the conditions rand(seed) < $learn_{thd}$ or rand(seed) < $forget_{thd}$ are satisfied, so the probability of increasing or decreasing the occurrence count of a color vector interval rises and more storage space is required. Conversely, the smaller $learn_{thd}$ and $forget_{thd}$ are, the lower the probability of increasing or decreasing the counts and the less memory is required; over a long period, updating the occurrence counts of the color vector intervals through the random number generator still approaches the statistics of the real images.
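The following sketch shows the random-number-based update of formulas 3.2 and 3.3 applied to the integer counters of the earlier sketch; rand_lfsr(), LEARN_THD and FORGET_THD are assumed names and example values, and in the patent's flow the update is applied when the pixel's motion detection result is background.

static uint16_t rand_lfsr(void);     /* defined in the LFSR sketch below */

#define LEARN_THD   8u               /* assumed example thresholds */
#define FORGET_THD  2u

/* Randomized per-pixel update: counters change only when the uniform random
 * number falls below the learning/forgetting threshold. */
static void update_random(pixel_hist_t *h, int idx1, int idx2)
{
    for (int b = 0; b < NUM_BINS; ++b) {
        int matched = (b == idx1 || b == idx2);
        if (matched && rand_lfsr() < LEARN_THD) {
            if (h->count[b] < UINT8_MAX) h->count[b]++;   /* learn:  +1 (3.2) */
        } else if (!matched && rand_lfsr() < FORGET_THD) {
            if (h->count[b] > 0)         h->count[b]--;   /* forget: -1 (3.3) */
        }
    }
}

Because a counter only changes when the random number falls below its threshold, on average it behaves like a scaled-down version of the full statistic, which is roughly why a k-bit counter can stand in for the m-bit count.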
In one embodiment, a linear feedback shift register (LFSR) is used as the random number generator; it is relatively low-cost because it requires only bit-wise and shift operations in hardware, and compared with a linear congruential generator it avoids large-number multiplications while still providing a long cycle period. However, the protection scope of the present invention should not be limited to random number generators of a particular operation method.
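A minimal 16-bit Fibonacci LFSR of the kind the embodiment refers to; the tap positions (polynomial x^16 + x^14 + x^13 + x^11 + 1) and the seed are common example choices, not values taken from the patent.

static uint16_t lfsr_state = 0xACE1u;            /* seed, must be non-zero */

/* One LFSR step: XOR the tap bits, shift right, feed the result back into
 * the top bit.  The state cycles through 2^16 - 1 non-zero values. */
static uint16_t rand_lfsr(void)
{
    uint16_t bit = ((lfsr_state >> 0) ^ (lfsr_state >> 2) ^
                    (lfsr_state >> 3) ^ (lfsr_state >> 5)) & 1u;
    lfsr_state = (uint16_t)((lfsr_state >> 1) | (bit << 15));
    return lfsr_state;
}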
All of the steps described above, including the suggested steps, can be implemented by hardware, firmware (i.e., a combination of hardware devices and computer instructions, where the data in the hardware devices is read-only software data), or an electronic system. The hardware may include analog, digital, and hybrid circuits (i.e., microcircuits, microchips, or silicon chips). The electronic system may include a system on chip (SoC), a system in package (SiP), and a computer on module (COM). For example, the background model update program 30 may be implemented by a hardware circuit (e.g., an image processing module), i.e., the processing unit 100 itself is a circuit designed according to the above steps and formula operations, and stores the execution results (e.g., image, color information/color statistics) in the storage unit 102.
In summary, the present invention provides a method for updating a background model, which is used to reduce the storage space required by image data and maintain high accuracy. In detail, the background model establishment of the present disclosure is based on a color space vectorization histogram, and the background model update determines whether to update the frequency information corresponding to the color vector interval according to a random number manner, so that the update frequency can be reduced, and the storage space can be reduced.
The above-mentioned embodiments are only preferred embodiments of the present invention, and all equivalent changes and modifications made by the claims of the present invention should be covered by the scope of the present invention.

Claims (10)

1. A method for updating a background model for an image processing apparatus, the method comprising:
receiving at least one background image;
counting the occurrence times of the color information of each pixel according to the color information of each pixel in the at least one background image so as to establish a background model containing the color information and statistical information of the occurrence times;
receiving a current image;
determining whether to update the occurrence frequency of the color information of each pixel in the background model corresponding to the current image according to the color information of each pixel in the current image and a comparison result of a random number and a threshold value.
2. The method of claim 1, wherein the background model comprises the number of occurrences of the color information of each pixel in a plurality of color vector intervals.
3. The method of claim 2, further comprising:
performing an action detection procedure on the current image, wherein the action detection procedure is used to determine whether each pixel in the current image is a background pixel or a moving pixel to detect whether an object moves, and the action detection procedure comprises the following steps:
acquiring two color vector intervals closest to the color information according to the color information of a pixel in the current image; and
when one of the two color vector intervals of the pixel corresponding to the occurrence times of the two color vector intervals in the background model is larger than a background threshold value, the pixel is judged as the background pixel, otherwise, the pixel is judged as the moving pixel.
4. The method of claim 3, wherein the threshold comprises a learning threshold and a forgetting threshold, and the step of determining whether to update the number of occurrences of the color information corresponding to each pixel in the current image in the background model according to the color information of each pixel in the current image and a comparison result of the random number and the threshold comprises:
judging whether a color vector interval corresponding to the color information of the pixel of the current image is one of the two color vector intervals;
when the color vector interval corresponding to the color information of the pixel in the current image is one of the two color vector intervals and the random number is smaller than the learning threshold, increasing the occurrence frequency of the color vector interval corresponding to the pixel in the current image in the background model; and
when the color vector interval corresponding to the color information of the pixel in the current image is not one of the two color vector intervals and the random number is smaller than the forgetting threshold, reducing the occurrence frequency of the color vector interval corresponding to the pixel in the current image in the background model.
5. An image processing apparatus includes:
a processing unit for performing the steps of:
receiving at least one background image;
counting the occurrence times of the color information of each pixel according to the color information of each pixel in the at least one background image so as to establish a background model containing the color information and statistical information of the occurrence times;
receiving a current image; and
determining whether to update the occurrence frequency of the color information of each pixel in the background model corresponding to the current image according to the color information of each pixel in the current image and a comparison result of a random number and a threshold value; and
and the storage unit is coupled with the processing unit and used for storing the background image, the current image, the color information and the statistical information.
6. The image processing device as claimed in claim 5, wherein the background model comprises the number of occurrences of the color information of each pixel in a plurality of color vector intervals.
7. The image processing device as claimed in claim 6, wherein the processing unit further performs the steps of:
performing an action detection procedure on the current image, wherein the action detection procedure is used to determine whether each pixel in the current image is a background pixel or a moving pixel to detect whether an object moves, and the action detection procedure comprises the following steps:
acquiring two color vector intervals closest to the color information according to the color information of a pixel in the current image; and
when one of the two color vector intervals of the pixel corresponding to the occurrence times of the two color vector intervals in the background model is larger than a background threshold value, the pixel is judged as the background pixel, otherwise, the pixel is judged as the moving pixel.
8. The image processing device as claimed in claim 7, wherein the threshold comprises a learning threshold and a forgetting threshold, the processing unit further performs the steps of:
judging whether a color vector interval corresponding to the color information of the pixel of the current image is one of the two color vector intervals;
when the color vector interval corresponding to the color information of the pixel in the current image is one of the two color vector intervals and the random number is smaller than the learning threshold, increasing the occurrence frequency of the color vector interval corresponding to the pixel in the current image in the background model; and
when the color vector interval corresponding to the color information of the pixel in the current image is not one of the two color vector intervals and the random number is smaller than the forgetting threshold, reducing the occurrence frequency of the color vector interval corresponding to the pixel in the current image in the background model.
9. The image processing device as claimed in claim 5, wherein the random number is generated by a random number generator, and the random number generator comprises a linear feedback shift register.
10. An image processing apparatus includes:
a processing unit for executing a program code;
a storage unit, coupled to the processing unit, for storing the program code, wherein the program code instructs the processing unit to perform the following steps:
receiving at least one background image;
counting the occurrence times of the color information of each pixel according to the color information of each pixel in the at least one background image so as to establish a background model containing the color information and statistical information of the occurrence times;
receiving a current image; and
determining whether to update the occurrence frequency of the color information of each pixel in the background model corresponding to the current image according to the color information of each pixel in the current image and a comparison result of a random number and a threshold value.
CN201910007726.0A 2019-01-04 2019-01-04 Background model updating method and related device Active CN111414149B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910007726.0A CN111414149B (en) 2019-01-04 2019-01-04 Background model updating method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910007726.0A CN111414149B (en) 2019-01-04 2019-01-04 Background model updating method and related device

Publications (2)

Publication Number Publication Date
CN111414149A CN111414149A (en) 2020-07-14
CN111414149B (en) 2022-03-29

Family

ID=71490633

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910007726.0A Active CN111414149B (en) 2019-01-04 2019-01-04 Background model updating method and related device

Country Status (1)

Country Link
CN (1) CN111414149B (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060170769A1 (en) * 2005-01-31 2006-08-03 Jianpeng Zhou Human and object recognition in digital video
TWI348659B (en) * 2007-10-29 2011-09-11 Ind Tech Res Inst Method and system for object detection and tracking
WO2013178725A1 (en) * 2012-05-31 2013-12-05 Thomson Licensing Segmentation of a foreground object in a 3d scene

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012016588A1 (en) * 2010-08-03 2012-02-09 Verigy (Singapore) Pte. Ltd. Bit sequence generator
CN103489196A (en) * 2013-10-16 2014-01-01 北京航空航天大学 Moving object detection method based on codebook background modeling
CN104573625A (en) * 2013-10-23 2015-04-29 想象技术有限公司 Facial detection
CN105139372A (en) * 2015-02-06 2015-12-09 哈尔滨工业大学深圳研究生院 Codebook improvement algorithm for foreground detection
WO2017004803A1 (en) * 2015-07-08 2017-01-12 Xiaoou Tang An apparatus and a method for semantic image labeling
CN106780544A (en) * 2015-11-18 2017-05-31 深圳中兴力维技术有限公司 Method and apparatus for extracting display foreground

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ViBe: A Universal Background Subtraction Algorithm for Video Sequences; Olivier Barnich et al.; IEEE Transactions on Image Processing; 2010-12-23; Vol. 20, No. 6; pp. 1709-1724 *
Complex background modeling and foreground detection algorithm based on random clustering; Bi Guoling et al.; Acta Physica Sinica; 2015-08-31; Vol. 64, No. 15; pp. 33-44 *
Real-time detection and tracking of multiple moving targets in complex scenes; Wang Yanli; China Masters' Theses Full-text Database (Electronic Journal); 2012-05-15; pp. I138-1320 *

Also Published As

Publication number Publication date
CN111414149A (en) 2020-07-14

Similar Documents

Publication Publication Date Title
US9344690B2 (en) Image demosaicing
US20100098331A1 (en) System and method for segmenting foreground and background in a video
US20210192349A1 (en) Method and apparatus for quantizing neural network model in device
CN110008961B (en) Text real-time identification method, text real-time identification device, computer equipment and storage medium
US10713470B2 (en) Method of determining image background, device for determining image background, and a non-transitory medium for same
US20110211233A1 (en) Image processing device, image processing method and computer program
CN101170708A (en) Display device and method for improving image flashing
US7298899B2 (en) Image segmentation method, image segmentation apparatus, image processing method, and image processing apparatus
US20240029272A1 (en) Matting network training method and matting method
US8693791B2 (en) Object detection apparatus and object detection method
CN111353956B (en) Image restoration method and device, computer equipment and storage medium
CN114096994A (en) Image alignment method and device, electronic equipment and storage medium
CN112801918A (en) Training method of image enhancement model, image enhancement method and electronic equipment
CN104202448A (en) System and method for solving shooting brightness unevenness of mobile terminal camera
US10755386B2 (en) Median filtering of images using directed search
CN113192081B (en) Image recognition method, image recognition device, electronic device and computer-readable storage medium
TWI689893B (en) Method of background model update and related device
CN111414149B (en) Background model updating method and related device
CN103685854A (en) Image processing apparatus, image processing method, and program
CN113628259A (en) Image registration processing method and device
CN112866797A (en) Video processing method and device, electronic equipment and storage medium
US20230222639A1 (en) Data processing method, system, and apparatus
CN112101135A (en) Moving target detection method and device and terminal equipment
CN114049539B (en) Collaborative target identification method, system and device based on decorrelation binary network
CN111402164B (en) Training method and device for correction network model, text recognition method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant