CN108875759B - Image processing method and device and server - Google Patents
- Publication number
- CN108875759B (publication); CN201710326076.7A (application)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T5/70 — Image enhancement or restoration: denoising; smoothing
- G06F18/2411 — Pattern recognition; classification techniques based on the proximity to a decision surface, e.g. support vector machines
- G06T7/11 — Image analysis; region-based segmentation
- G06T7/194 — Image analysis; segmentation involving foreground-background segmentation
- G06T7/90 — Image analysis; determination of colour characteristics
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
Embodiments of the invention disclose an image processing method, an image processing apparatus, and a server, which are used to reduce the interference that the background image inside a Mask causes to the foreground target. During processing, the bounding box containing the foreground target's Mask is divided into a plurality of small blocks, the color system of each small block is determined, a scaling coefficient corresponding to each color system is determined, and the luminance value of every pixel in the small blocks belonging to that color system is multiplied by the scaling coefficient. Because the Mask is divided into small blocks that are then classified by color system, each class of blocks can be processed in its own way while blocks of the same class are processed uniformly. Blocks in which the color values of the foreground target and the background image differ greatly can thus be distinguished and handled differently, reducing the interference caused by the background image to the foreground target within the Mask.
Description
Technical Field
The present application relates to the field of image processing, and in particular, to an image processing method, an image processing apparatus, and a server.
Background
In video-analysis applications such as image search, accurately obtaining the color values of the whole or a part of a target, in order to extract features or apply further processing, is a critical technique.
However, the Mask used to delimit the target cannot frame the target's outline completely and accurately, so the Mask inevitably includes part of the background image in addition to the foreground target. In some cases, these background pixels cause significant noise interference when determining the color of the foreground target.
Current technology cannot effectively remove the background image from the Mask, and every Mask contains more or less background in addition to the foreground target. Therefore, when an accurate Mask cannot be obtained, the background image in the Mask inevitably introduces noise that interferes with the foreground target during image enhancement.
Disclosure of Invention
The embodiment of the invention provides an image processing method, an image processing device and a server, which can reduce the interference of background images in a Mask to foreground targets.
A first aspect of the present application provides an image processing method, including the steps of:
dividing a bounding box containing a foreground target Mask into a plurality of small blocks, wherein each small block comprises at least one pixel; determining the color system of each small block in the small blocks occupied by the Mask; and for each color system, determining a scaling coefficient corresponding to the color system, and multiplying the brightness value of each pixel in the small block belonging to the color system by the scaling coefficient so as to process the small blocks of each color system respectively.
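The steps above can be sketched as follows. This is an illustrative NumPy implementation, not the patent's own code: the colour-system test, the `expected_v` table, and all names are assumptions.

```python
import numpy as np

def enhance_mask(v, mask, n, m, expected_v):
    """v: HxW array of pixel luminance (V); mask: HxW bool, True = Mask pixel.
    Split the bounding box into m rows x n columns of small blocks, classify
    each Mask-occupied block into a colour system, then scale the luminance
    of its Mask pixels by a per-system coefficient."""
    out = v.copy()
    rows = np.array_split(np.arange(v.shape[0]), m)
    cols = np.array_split(np.arange(v.shape[1]), n)

    def classify(mean_v):  # stand-in test using the V ranges quoted in the text
        if mean_v <= 0.18:
            return "dark"
        if mean_v >= 0.7:
            return "white_bright"
        return "coloured"

    # Steps 1-2: tile the box and classify every block the Mask occupies.
    labelled = []
    for r in rows:
        for c in cols:
            blk = np.ix_(r, c)
            if mask[blk].any():
                labelled.append((blk, classify(v[blk][mask[blk]].mean())))

    # Step 3: one coefficient per colour system, applied to that system's blocks.
    for system in {s for _, s in labelled}:
        pixels = np.concatenate([v[blk][mask[blk]]
                                 for blk, s in labelled if s == system])
        k = pixels.mean() / expected_v[system]  # ratio described in the text
        for blk, s in labelled:
            if s == system:
                out[blk] = np.where(mask[blk], v[blk] * k, out[blk])
    return out
```

Blocks of the same colour system all receive the same coefficient, while blocks outside the Mask are left untouched, matching the per-class uniform processing the claim describes.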
Only when the color values of the foreground target and the background image differ greatly does the background image strongly interfere with the foreground target. Based on this principle, the image processing method in the embodiment of the invention divides the bounding box containing the foreground target's Mask into blocks, which also divides the Mask itself into blocks, and then classifies the resulting blocks by color system. This distinguishes the blocks in which the color values of foreground target and background image differ greatly: each class of blocks can be processed in its own way, while blocks of the same class are processed uniformly in the same way. The foreground target and the background image are therefore processed differently exactly where their color values differ greatly, which reduces the interference of the background image in the Mask on the foreground target.
Optionally, the scaling factor corresponding to each color system may specifically be a preset scaling value, where the preset scaling value is an empirical value obtained through experimental data or data model training.
Optionally, the scaling coefficient corresponding to each color system may instead be computed from the average luminance of that color system's small blocks, as follows: first, determine the expected luminance value corresponding to the color system and calculate the average luminance value (the first average luminance value) of the pixels contained in the small blocks belonging to the color system; then calculate the ratio of the first average luminance value to the expected luminance value to obtain the scaling coefficient corresponding to the color system.
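A minimal sketch of this calculation, with hypothetical names:

```python
def scaling_coefficient(mask_pixel_luminances, expected_luminance):
    """Scaling coefficient for one colour system, per the text: the first
    average luminance (mean V of the system's Mask pixels) divided by the
    colour system's expected luminance value."""
    first_average = sum(mask_pixel_luminances) / len(mask_pixel_luminances)
    return first_average / expected_luminance
```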
Optionally, the color system of each small block occupied by the Mask may be determined as follows: for each such small block, determine its color system from at least one of the block's background luminance value, the luminance value of the block's target portion, the saturation value of the block's target portion, and the hue value of the block's target portion. Here the background luminance value of a small block is the average luminance of pixels in the bounding box outside the Mask, and the target portion of a small block is the part of the block belonging to the Mask.
Specifically, the method for determining the color system of the small block according to the background brightness value of the small block and the brightness value of the target portion of the small block is as follows: firstly, acquiring a corresponding relation between a background brightness value and an illumination coefficient; determining an illumination coefficient (a first target illumination coefficient) corresponding to the background brightness value of the small block according to the corresponding relation, and multiplying the brightness value of the target part of the small block by the first target illumination coefficient to obtain a first adjustment brightness value; and determining the color system of the small block according to the first adjusted brightness value.
Determining the color system of the small block from the first adjusted luminance value specifically means judging which color system's luminance range (dark, white-bright, or colored) the first adjusted luminance value falls into.
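A sketch of this lookup-based first method. The patent only states that a correspondence between background luminance and illumination coefficient exists; the table entries below are invented for illustration:

```python
# Hypothetical correspondence: (upper bound of background V, illumination coefficient).
ILLUMINATION_TABLE = [(0.2, 1.5), (0.5, 1.0), (1.01, 0.8)]

def first_adjusted_luminance(background_v, target_v):
    """Look up the first target illumination coefficient for the block's
    background luminance and apply it to the target portion's luminance."""
    for upper_bound, coefficient in ILLUMINATION_TABLE:
        if background_v < upper_bound:
            return target_v * coefficient
    return target_v  # fall through: leave the luminance unchanged
```

The resulting first adjusted luminance value would then be tested against each colour system's luminance range.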
Alternatively, the color system of the small block can be determined from the block's background luminance value and the luminance value of its target portion as follows: first, obtain a preset background luminance value; calculate the ratio of the preset background luminance value to the block's background luminance value to obtain a second target illumination coefficient; multiply the luminance value of the block's target portion by the second target illumination coefficient to obtain a second adjusted luminance value; and determine the color system of the block from the second adjusted luminance value, in the same way as it is determined from the first adjusted luminance value.
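This second variant needs no lookup table and reduces to a single ratio; a minimal sketch with hypothetical names:

```python
def second_adjusted_luminance(background_preset, background_v, target_v):
    """Second target illumination coefficient = preset background luminance /
    block's background luminance, applied to the target portion's luminance."""
    return target_v * (background_preset / background_v)
```

For example, a block whose background is darker than the preset (0.25 vs 0.5) has its target luminance scaled up by 2x before the colour-system ranges are tested.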
Specifically, the method for determining the color system of the small block according to the saturation value of the target portion of the small block and the hue value of the target portion of the small block comprises the following steps: judging whether the tone value of the target portion of the patch is within a tone threshold range of a certain color system (first color system); judging whether the saturation value of the target part of the small block is within the saturation threshold range of the first color system; if the hue value of the target portion of the tile is within the hue threshold range of the first color system and the saturation value of the target portion of the tile is within the saturation threshold range of the first color system, determining that the color system of the tile belongs to the first color system.
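A sketch of the two threshold tests; the ranges are left as parameters since the patent gives no concrete hue or saturation thresholds:

```python
def in_first_colour_system(hue, saturation, hue_range, sat_range):
    """True iff both the hue and the saturation of the block's target portion
    fall inside the first colour system's (hypothetical) threshold ranges."""
    return (hue_range[0] <= hue <= hue_range[1]
            and sat_range[0] <= saturation <= sat_range[1])
```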
It should be noted that the method may be combined with other methods to determine the color system of the patch.
Specifically, determining the color system of the small block according to the background luminance value of the small block, the luminance value of the target portion of the small block, the saturation value of the target portion of the small block, and the hue value of the target portion of the small block includes: and inputting the background brightness value of the small block, the brightness value of the target part of the small block, the saturation value of the target part of the small block and the hue value of the target part of the small block into a parameter model to obtain the color system of the small block.
Optionally, the method for determining the parametric model may be: acquiring a plurality of groups of model training parameters, wherein each group of model training parameters comprises a color system determining parameter and a color system value corresponding to the color system determining parameter, and the color system determining parameter comprises a brightness value of a background image, a brightness value of a selected foreground target, a saturation value of the selected foreground target and a hue value of the selected foreground target; and inputting the multiple groups of model training parameters into an SVM (support vector machine) to obtain a parameter model.
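A sketch of this training step, using scikit-learn's `SVC` as a stand-in for the SVM; the training rows, labels, and feature ordering are all invented for illustration:

```python
from sklearn import svm

# Each row: (background V, target V, target saturation, target hue).
# Labels are the colour-system values; all numbers are made up.
X = [[0.5, 0.05, 0.10, 0.0],   # dark
     [0.5, 0.10, 0.20, 0.1],   # dark
     [0.5, 0.80, 0.10, 0.0],   # white-bright
     [0.5, 0.85, 0.05, 0.1],   # white-bright
     [0.5, 0.50, 0.90, 0.3],   # coloured
     [0.5, 0.45, 0.80, 0.6]]   # coloured
y = ["dark", "dark", "white_bright", "white_bright", "coloured", "coloured"]

# Train the parameter model; at inference time the four colour-system
# determining parameters of a block are fed to model.predict().
model = svm.SVC(kernel="linear").fit(X, y)
```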
In a second aspect, an embodiment of the present invention further provides an image processing apparatus, which specifically implements the functions corresponding to the image processing method provided in the first aspect. The functions can be realized by hardware, and the functions can also be realized by executing corresponding software programs by hardware. The hardware and software include one or more unit modules corresponding to the above functions, which may be software and/or hardware.
In one possible design, the image processing apparatus includes:
the small block dividing unit is used for dividing the boundary frame into a plurality of small blocks, each small block comprises at least one pixel, and the boundary frame comprises a Mask of the foreground object;
a color system judging unit, configured to determine a color system of each of the small blocks occupied by the Mask;
the scaling coefficient determining unit is used for determining the scaling coefficient corresponding to each color system;
a luminance adjusting unit for multiplying a luminance value of each pixel in a patch belonging to the color system by the scaling coefficient.
In a third aspect, an embodiment of the present invention provides a server, which specifically implements functions corresponding to the image processing method provided in the first aspect. The server in the embodiment of the present invention may include:
a processor, a memory, and a network interface; the memory is used for storing program codes, and the processor calls the program codes in the memory to execute all or part of the steps in the image processing method of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium having stored therein instructions, which, when run on a computer, cause the computer to perform the method of the first aspect described above.
A further aspect of the present application provides a computer program product containing instructions which, when run on a computer, cause the computer to perform the method of the first aspect described above.
According to the technical scheme above, the background image interferes strongly with the foreground target only when their color values differ greatly, and small blocks can, to a certain extent, distinguish the foreground target from the background image within the Mask. Therefore, in the image processing of the embodiment of the invention, the bounding box is first divided into a plurality of small blocks, and the blocks occupied by the Mask are classified by color system so as to separate the blocks in which the color values of foreground target and background image differ greatly. A scaling coefficient is then determined for each color system, and the luminance value of every pixel in the blocks of that color system is multiplied by the coefficient: blocks of the same class are processed uniformly in the same way, while each class is processed in its own way. Pixels where the foreground target and the background image differ greatly are thus processed differently, which reduces the interference of the background image in the Mask on the foreground target.
Drawings
FIG. 1a is a schematic diagram of a bounding box of a target pedestrian according to an embodiment of the present invention;
FIG. 1b is a Mask diagram of the target pedestrian of FIG. 1a according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an image processing system according to an embodiment of the invention;
FIG. 3 is a flow chart of an image processing method in an embodiment of the invention;
FIG. 4 is a block diagram of a Mask for the target pedestrian of FIG. 1b in an embodiment of the present invention;
FIG. 5 is a set of frame images used in determining color systems according to an embodiment of the present invention;
FIG. 6 is a functional block diagram of an image processing apparatus according to an embodiment of the present invention;
fig. 7 is a schematic diagram of a hardware structure of a server in the embodiment of the present invention.
Detailed Description
The embodiments of the present invention will be described in further detail below with reference to the accompanying drawings.
The Mask inevitably contains background noise, i.e. the part of the background image inside the Mask but outside the foreground target, and current technology cannot effectively remove it. In some cases, for example when the foreground target is black or white and the luminance of the background image differs greatly from that of the foreground target, this background noise severely interferes with determining the color of the foreground target.
Fig. 1a shows the Bounding Box of a target pedestrian (i.e. a foreground target) in a frame of an image; a bounding box is a rectangular box containing the foreground target, generally the smallest rectangle containing the foreground target's Mask. Fig. 1b shows the Mask of the target pedestrian in Fig. 1a. As can be seen from Fig. 1b, the Mask includes the background between the pedestrian's legs and the shadow under the feet. These background regions should not be part of the Mask; they become background noise that interferes with determining the color of the target pedestrian.
For example, suppose the pixel value of the pixels belonging to the target pedestrian's feet is (0, 0, 1), and the ratio of background-ground pixels (near the feet) contained in the Mask to pixels actually belonging to the feet is 1:3. The average pixel value over these Mask pixels might then be (48, 47, 50), which deviates far from the value (0, 0, 1) the feet should have, producing a large error.
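The arithmetic of this example can be checked with a small sketch. The background pixel value (192, 188, 197) used below is back-solved from the stated average and is an assumption, since the text does not give it:

```python
def mixed_average(foreground_rgb, background_rgb, background_fraction):
    """Average pixel value of the Mask region when background_fraction of
    its pixels actually belong to the background (1:3 ratio -> 0.25)."""
    return tuple(round(b * background_fraction + f * (1 - background_fraction))
                 for f, b in zip(foreground_rgb, background_rgb))
```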
Therefore, the present application aims to reduce the interference of the background noise to the image enhancement processing as much as possible when the Mask is not accurate enough and the Mask contains the background noise.
When the color values of the foreground target and the background image in the Mask differ little, the influence of the background noise on the foreground target is very limited and probably does not need to be addressed; only when the color values differ greatly does the background noise interfere strongly. Therefore, only the pixels where the color values of foreground target and background image differ greatly need to be distinguished and denoised in order to avoid the background image interfering with the foreground target.
Based on this principle, the image processing method in the embodiment of the invention divides the pixels in the Mask into a plurality of small blocks and classifies these blocks by color system, thereby distinguishing the blocks in which the color values of foreground target and background image differ greatly. Each class of blocks can be processed in its own way, and blocks of the same class are processed uniformly in the same way, so that the foreground target and the background image are processed differently where their color values differ greatly, reducing the interference of the background image in the Mask on the foreground target.
The image processing method in the embodiment of the present invention may be executed by a network device, and the network device may be specifically a computer device in various forms, including a server, a server cluster, or a terminal device with a communication function.
For convenience of description, the method in the embodiment of the present invention is described by taking a network device as a server or a server cluster as an example.
As shown in fig. 2, the image processing system for implementing the image processing method in the embodiment of the present invention mainly includes two network elements, namely a server and a client device.
The servers in fig. 2 may be single servers or may be a distributed server cluster. The client device may be a computer device or user device of various forms, such as: and terminal devices such as mobile phones, tablet computers, and Personal Computers (PCs).
The client device communicates with the server via a network, and the client device transmits an image processing request to the server. In the embodiment of the present invention, the image processing request is taken as an example to process the image in the video, and in practical applications, the image processing request is not necessarily to process the image in the video, and may be to process a single image or multiple images stored in the memory.
When the server performs video image processing, an image processing request of the client device is executed by the video content management program, and the image processing request can be image enhancement processing, illumination processing or the like. And then, the server returns the image processing result of the video content management module to the client equipment.
As shown in fig. 2, the video content management program in the server includes a management module and a video analysis module.
The management module is responsible for executing functions of scheduling requests, resource management, data retrieval and the like.
The video analysis module is used for executing the function of video analysis and comprises an image preprocessing module. The image preprocessing module comprises a background noise removal module and a color system judgment module, and the background noise removal module and the color system judgment module are specifically used for executing the method steps in the method embodiment of the application.
The image preprocessing module preprocesses the image read by the video analysis module from the storage module and returns a preprocessing result to a core analysis module in the video analysis module, the preprocessing result is further processed by the core analysis module of the video analysis module to obtain a processing result, and finally the server returns the processing result to the client device.
It should be noted that the image processing system shown in fig. 2 is only an example, and in practical applications, the network element composition and the deployment manner of the image processing system may be determined according to a specific application scenario. In addition, the division of the functional modules in the video content management program in fig. 2 is also only an example, and is only for explaining the process of video image processing.
The image processing method in the embodiment of the present invention is described in detail below with reference to fig. 3.
301. Dividing the bounding box containing the foreground target's Mask into a plurality of small blocks, wherein each small block comprises at least one pixel;
Due to illumination, folds, and occlusion of the foreground target, pixel values at different positions of the same color region of the foreground target can differ greatly; judging and processing the color system per pixel would therefore visibly distort the image. It is thus more appropriate to determine the color system with a small block composed of several pixels as the basic unit.
Therefore, the bounding box for framing the foreground object is divided into n × m small blocks, wherein the bounding box may be divided into n columns in the horizontal direction and m rows in the vertical direction to be divided into n × m small blocks. Each tile contains at least one pixel.
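A minimal sketch of this division using NumPy's `array_split`, which also tolerates boxes whose sides do not divide evenly into n columns and m rows:

```python
import numpy as np

def split_bounding_box(box, n, m):
    """Split a bounding-box array into m rows x n columns of small blocks
    (block sizes may differ by one pixel when the box does not divide evenly)."""
    return [np.array_split(band, n, axis=1)
            for band in np.array_split(box, m, axis=0)]
```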
Thus, the Mask in the bounding box is divided into a plurality of small blocks. Some small blocks in the Mask all belong to the foreground target; some small blocks all belong to background images; some of the patches belong partly to the foreground object and partly to the background image.
Taking the bounding box containing the target pedestrian shown in fig. 1a as an example, fig. 4 crops the lower body of the target pedestrian in fig. 1a and divides the lower-body bounding box into a plurality of small blocks. 401 denotes the bounding box and 402 the Mask of the lower body. Each block labeled H belongs entirely to the foreground; each block labeled G belongs entirely to the background; each block labeled S1 is half or more background; each block labeled S2 is more than half foreground; the remaining unlabeled blocks in the bounding box outside the Mask belong entirely to the background. Typically, the number of blocks labeled S1 and S2 in the Mask is much smaller than the number labeled H.
After the bounding box is divided into small blocks, the Mask is thereby divided into small blocks as well. Note that in the actual procedure no labeling is required; the blocks in fig. 4 are labeled as foreground or background only to illustrate that, to a certain extent, the small blocks can distinguish the foreground part (the foreground target) from the background part (the background image) of the Mask.
It should be noted that, the purpose of the present application is to reduce the interference of the background noise in the Mask to the foreground object as much as possible. Therefore, the essence of dividing the bounding box into small blocks is to divide the Mask into small blocks. Since the bounding box is a regular rectangle and the Mask is an irregular shape, it is more appropriate to divide the minimum rectangular box containing the Mask into small blocks than to divide the Mask into small blocks.
However, when the small blocks in the bounding box are processed (i.e. in steps 302 and 303), only the blocks involved in the Mask actually need to be processed; the blocks in the bounding box not involved in the Mask, namely the unlabeled background blocks outside the Mask in fig. 4, can be skipped in steps 302 and 303.
The small blocks referred to in the Mask are small blocks occupied by the Mask, that is, small blocks belonging to the Mask entirely and small blocks belonging to the Mask partially in the bounding box.
302. Determining a color system of each small block in the small blocks occupied by the Mask;
after dividing the boundary frame into a plurality of small blocks, judging the color system of each small block occupied by Mask in the boundary frame. The color system includes one of a dark color system, a colored color system and a white-light color system.
The dark color system refers to black, blackish gray, dark blue, deep black, and similar colors. One of its main characteristics is that, with the Hue value (Hue, H) unchanged or nearly unchanged and without excessive illumination, the brightness (V) component is small and its range of values is also small, for example V from 0.02 to 0.18.
The color system refers to red, orange, yellow, green, cyan, blue, purple and other colors and similar colors, and one of the main characteristics of the color system is that the V component of an image can have a large value range under the condition that the H value is kept unchanged or is changed slightly, and the value of V can be smaller or larger, for example, from 0.2 to 0.8.
The white-light color system refers to white or similar colors, and one of the main characteristics of the white-light color system is that the value range of the V component of the image is small (relative to the color system) under the condition that the H value is kept unchanged or changes little, but the V value is large, for example, the V value is from 0.7 to 0.9.
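The three V ranges above overlap (colored up to 0.8, white-bright from 0.7), so any concrete classifier must pick cut points; the sketch below assumes 0.18 and 0.7, which is a simplifying assumption rather than the patent's rule:

```python
def colour_system_from_v(v):
    """Classify a block by the mean V of its target portion, using the
    example ranges quoted in the text with assumed cut points."""
    if v <= 0.18:
        return "dark"
    if v >= 0.7:
        return "white_bright"
    return "coloured"
```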
Optionally, the method for determining the color system to which each tile belongs may be: the color system of the small block is determined according to at least one of a background luminance value of the small block, a luminance value of a target portion of the small block, a saturation value of the target portion of the small block, and a hue value of the target portion of the small block.
The target portion of a small block is the part of the small block belonging to the Mask of the foreground target, that is, its pixels that belong to the Mask. If a small block lies entirely within the Mask, its target portion is all of its pixels; if it lies partly inside and partly outside the Mask, its target portion is the pixels inside the Mask. The brightness value of the target portion of the small block is the average brightness of the small block's pixels belonging to the Mask; the saturation value of the target portion is the average saturation of those pixels; and the hue value of the target portion is the average hue of those pixels.
The background luminance value of the small block is an average luminance value of a plurality of pixels except for the Mask of the foreground object in the boundary frame, and specifically may be an average luminance value of all pixels except for the Mask in the boundary frame, or an average luminance value of a part of pixels except for the Mask in the boundary frame, where the part of pixels may be a part of pixels located in a neighboring area of the current small block.
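These two statistics can be sketched as follows, assuming NumPy arrays: `v_channel` holds the V component of the bounding-box image and `mask` is a boolean array marking the foreground Mask (both names are hypothetical). For simplicity the background mean is taken over all non-Mask pixels of the bounding box, one of the two options the text allows:

```python
import numpy as np

def patch_stats(v_channel, mask, top, left, height, width):
    """Mean V of the target portion (Mask pixels) of one small block, plus the
    background mean V over all non-Mask pixels of the bounding box."""
    block_v = v_channel[top:top + height, left:left + width]
    block_m = mask[top:top + height, left:left + width]
    target_v = float(block_v[block_m].mean()) if block_m.any() else None
    background_v = float(v_channel[~mask].mean()) if (~mask).any() else None
    return target_v, background_v
```

The same helper, applied with H or S channels in place of V, yields the hue and saturation values of the target portion.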
Specific manners of determining the color system of the small block from at least one of these values include: determining the color system from the background brightness value of the small block and the brightness value of the target portion of the small block; determining the color system directly from the brightness value of the target portion of the small block; and determining the color system from the saturation value and the hue value of the target portion of the small block.
Optionally, the color system to which each tile belongs may be determined by training: multiple sets of parameters (each set including a color system value and the H, S and/or V values that determine it) are input into a machine learning model to obtain a parameter model, and then the H, S and/or V values of a given tile are input into the parameter model to obtain that tile's color system value.
303. And for each color system, determining a scaling coefficient corresponding to the color system, and multiplying the brightness value of each pixel in the small block belonging to the color system by the scaling coefficient.
After the color system to which each small block occupied by the Mask belongs is determined, all the small blocks occupied by the Mask are classified according to the color system, and all the small blocks belonging to the same color system are determined.
Each group of small blocks belonging to the same color system is then processed uniformly. The purpose of uniform processing is to suppress noise, avoid large distortion and obvious blocking artifacts in the image, and reduce appearance differences of the foreground target, for example those caused by different illumination, by adjusting the brightness values of the pixels in the bounding box toward their values under a normal illumination environment.
The unified processing method comprises the following steps: for each color family, the luminance values of each pixel in the patch belonging to that color family are scaled by the same factor, i.e. multiplied by a scaling factor.
Alternatively, the scaling factor may be a preconfigured preset value, which may be an empirical value obtained from experimental data or from training a data model. For example, the scaling factor for the dark color system may lie in the range [0.5, 5], that for the colored color system in [0.625, 2], and that for the white-bright color system in [0.98, 1.25].
Alternatively, the scaling factor may be calculated from the luminance values of the patches of a color system within the bounding box. The calculation for one color system proceeds as follows: determine the expected brightness value V0 corresponding to the color system; calculate the average brightness value (called the first average brightness value) of the pixels contained in the patches belonging to the color system; then take the ratio of the expected brightness value V0 to the first average brightness value to obtain the scaling factor for that color system in the bounding box. An example follows.
For example, suppose the first average brightness value calculated for a certain class of small blocks is 0.04. According to the brightness-value ranges of the color systems, these small blocks belong to the dark color system and are in a dark environment. The expected brightness value V0 of the dark color system is 0.16, so the scaling factor is 0.16 / 0.04 = 4, and the luminance value of each pixel in this class of small blocks is multiplied by 4.
Multiplying the small blocks of each color system by that color system's scaling factor has the practical effect of enhancing small blocks captured in a dark environment, bringing their brightness values closer to those of a dark-color-system image under a normal illumination environment.
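The scaling step can be sketched as below. The worked example implies the factor is the expected value divided by the measured first average (0.16 / 0.04 = 4); only the dark expected value 0.16 comes from the text, the colored and white-bright expected values here are assumptions:

```python
import numpy as np

# Expected brightness values per colour system; only 0.16 is from the text,
# the other two entries are illustrative assumptions.
EXPECTED_V = {"dark": 0.16, "colored": 0.5, "white_bright": 0.8}

def scale_patches(v_values, color_system):
    """Scale all pixel V values of one colour system so their mean moves to
    the expected brightness value for that system; returns (scaled, factor)."""
    v = np.asarray(v_values, dtype=float)
    factor = EXPECTED_V[color_system] / v.mean()   # e.g. 0.16 / 0.04 = 4
    return np.clip(v * factor, 0.0, 1.0), factor
```

The clip keeps scaled V values inside the valid [0, 1] range when the factor overshoots for bright pixels.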
In this way, the pixels of the dark, white-bright and colored color systems are processed independently. Because small blocks distinguish, to some extent, the foreground part from the background part within the Mask, the embodiment of the invention partitions the image in the bounding box into blocks. After blocking, classifying the blocks by color system separates pixels whose color values differ greatly between the foreground target and the background image. Blocks of the same class are then processed uniformly in the same way, while each class may adopt a different processing mode; this realizes different processing for pixels whose values differ greatly between the foreground target and the background image, and reduces the interference of the background image within the Mask on the foreground target.
How to determine the color system of each tile is described in detail below.
First, determining the color system by a fuzzy discrimination method:
in some embodiments, the color system of the tile may be determined based on a background luminance value of the tile and a luminance value of the target portion of the tile. The specific judgment method comprises the following steps:
acquiring a preset corresponding relation between a background brightness value and an illumination coefficient, wherein the corresponding relation can be obtained through a large amount of experimental data; then, according to the corresponding relation between the background brightness value and the illumination coefficient, determining a target illumination coefficient (called as a first target illumination coefficient) corresponding to the background brightness value of the small block; multiplying the brightness value of the target part of the small block by a first target illumination coefficient to obtain a first adjustment brightness value so as to adjust the brightness value of the pixels belonging to the Mask in the small block to the brightness value under normal illumination; and determining the color system of the small block according to the first adjusted brightness value.
The color system of the small block is determined according to the first adjustment brightness value, namely, the color system of the small block is determined by judging whether the first adjustment brightness value is in a V value range corresponding to a dark color system, a V value range corresponding to a colored color system or a V value range corresponding to a white and bright color system respectively.
When the first adjusted brightness value is within the V range corresponding to the dark color system (which can be set to 0.02-0.18), the small block belongs to the dark color system; when it is within the V range corresponding to the colored color system (which can be set to 0.2-0.8), the small block belongs to the colored color system; when it is within the V range corresponding to the white-bright color system (which can be set to 0.7-0.9), the small block belongs to the white-bright color system.
Further, when the first adjusted luminance value falls within the critical interval between two color systems (e.g., 0.18-0.2) and the color system of the current tile therefore cannot be determined, the color system of the current tile can be determined by combining the color systems of one or more neighboring tiles and their adjusted luminance values.
In some embodiments, if all pixels included in a certain small block are located in the Mask of the foreground object, the color system of the small block may be determined according to the brightness value of the target portion of the small block. The specific judgment method comprises the following steps: calculating the average brightness value (called as the second average brightness value) of the pixels in the small block, and determining the color system of the small block according to the second average brightness value.
The method for determining the color system of the small block from the second average brightness value is the same as the method using the first adjusted brightness value: when the second average brightness value V is within the V range corresponding to the dark color system (which can be set to 0.02-0.18), the small block belongs to the dark color system; when it is within the V range corresponding to the colored color system (which can be set to 0.2-0.8), the small block belongs to the colored color system; when it is within the V range corresponding to the white-bright color system (which can be set to 0.7-0.9), the small block belongs to the white-bright color system.
The following examples are given with specific data:
based on the observation of data of a plurality of samples (for example, 5000 samples) to be processed, the corresponding relationship between the background brightness value and the illumination coefficient is obtained, and as shown in table 1, the background brightness is divided into five ranges of dark, darker, normal, brighter and bright according to the magnitude of the V value, and each range has a corresponding background brightness V value and illumination coefficient.
TABLE 1
| Background V value | Background brightness judgment | Illumination coefficient (example value) |
| --- | --- | --- |
| V <= 0.3 | Dark | 2.0 |
| 0.3 < V <= 0.44 | Darker | 1.4 |
| 0.44 < V <= 0.54 | Normal | 1.1 |
| 0.54 < V <= 0.65 | Brighter | 0.9 |
| V > 0.65 | Bright | 0.8 |
The illumination coefficient adjusts the V value of the foreground target under different illumination so that it is closer to the V value under normal illumination. For example, if the background V value of the current tile is 0.4, the background brightness is judged to be darker according to the background V ranges in Table 1, and the corresponding illumination coefficient is 1.4. If the brightness of the target portion of the small block is 0.16, the adjusted brightness is 1.4 × 0.16 = 0.224; since 0.224 lies in the colored-system brightness range of 0.2 to 0.8, the small block is judged, based on the adjusted V value of 0.224, to belong to the colored color system.
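The Table 1 lookup and the adjustment above can be sketched as follows (the coefficients are the example values from Table 1):

```python
# Upper bounds of the background-V bands and their example illumination
# coefficients, taken from Table 1.
BANDS = [(0.30, 2.0), (0.44, 1.4), (0.54, 1.1), (0.65, 0.9)]
BRIGHT_COEFFICIENT = 0.8  # background V > 0.65

def illumination_coefficient(background_v):
    """Illumination coefficient for a patch's background V value."""
    for upper_bound, coefficient in BANDS:
        if background_v <= upper_bound:
            return coefficient
    return BRIGHT_COEFFICIENT

def first_adjusted_v(target_v, background_v):
    """First adjusted brightness value of a patch's target portion."""
    return illumination_coefficient(background_v) * target_v
```

Reproducing the worked example: a background V of 0.4 maps to coefficient 1.4, and a target-portion brightness of 0.16 adjusts to 0.224.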
The determination of color systems by the fuzzy discrimination method in the embodiments of the present invention is described below with a specific set of image data.
A specific set of frame images is shown in fig. 5; each frame image in the set is a bounding box containing a target pedestrian (the foreground target is a pedestrian). To describe more intuitively how the color system is determined, the embodiment of the present invention takes as an example dividing the bounding box into n × m = 1 × 2 = 2 small blocks, namely an upper half and a lower half.
It should be noted that, in practical applications, the bounding box should be divided into more small blocks, so that the foreground part and the background part in the Mask are better distinguished by the small blocks.
Fig. 5 shows a group of frame images (images 1 to 10) whose picture names are 0001.png, 0002.png, 0003.png, ..., 0010.png. For each frame image, the upper-body V value (V_upper), the lower-body V value (V_lower) and the background V value (V_background) are calculated as follows:
In the example shown in fig. 5, the upper-body V value (V_upper) is the luminance value of the target portion of the small block corresponding to the upper body, i.e., the average luminance value of the pixels in the Mask of the upper body; the lower-body V value (V_lower) is the luminance value of the target portion of the small block corresponding to the lower body, i.e., the average luminance value of the pixels in the Mask of the lower body. In this example it is assumed that the upper and lower bodies are under relatively even illumination; therefore V_background is the background luminance value of both the upper-body and the lower-body small blocks, and may be taken as the average luminance of the pixels outside the Masks of the upper and lower bodies.
According to the correspondence between background V value and illumination coefficient in Table 1, the illumination coefficient corresponding to V_background is determined; multiplying it by the corresponding V_upper gives the adjusted upper-body brightness value, and multiplying it by the corresponding V_lower gives the adjusted lower-body brightness value. The adjusted V_upper and V_lower of each frame image are shown in Tables 2 and 3:
TABLE 2
TABLE 3
Comparing the adjusted brightness values with the thresholds: the adjusted lower-body values are all smaller than 0.18, so the lower halves of all 10 frames are judged to belong to the dark color system, and the upper bodies of images 1, 4, 8, 9 and 10 belong to the colored color system. The adjusted upper-body brightness of images 2, 3, 5, 6 and 7, however, lies in the critical interval between the dark and colored color systems, 0.18-0.2, so the color system of the upper body of those images must be further determined by another method.
Further, when the color system of the upper body cannot be determined from its adjusted luminance value, the judgment may be made in combination with the corresponding lower body, which is a neighboring patch. Since the lighting environments of the upper and lower bodies are essentially similar, the lower body is judged to be dark and its adjusted brightness value remains small, while the adjusted upper-body value is greater than 0.18; it can therefore be concluded that in images 2, 3, 5, 6 and 7 the upper body is not dark and belongs to the colored color system.
Furthermore, if the color system of the upper body still cannot be determined even in combination with the lower body, then to prevent misjudgment the upper body can be classified into both classes, i.e., judged to be both dark and colored, and then processed according to each of the two color systems separately, avoiding the processing error that a misjudgment would cause.
Second, determining the color system by a specific combination of H and S
In some specific embodiments, the color system of a tile may be determined directly from a specific combination of the H and S values of its target portion, that is, from the saturation value and the hue value of the target portion of the tile. The method is as follows: judge whether the hue value of the target portion of the patch is within the hue threshold range of a certain color system (the first color system); judge whether the saturation value of the target portion of the patch is within the saturation threshold range of that color system; if both conditions hold, determine that the patch belongs to the first color system. The first color system is one of the dark color system, the colored color system and the white-bright color system. The saturation threshold range and hue threshold range of a color system are the specific saturation and hue ranges corresponding to that color system, obtained from experimental observation.
For example, experimental observation shows that within the colored color system, red has a specific combined range of H and S: when H is in the interval [0.9, 1] and S is also in [0.9, 1], the color of the small block can be determined to be red, and hence to belong to the colored color system, regardless of the value of V.
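A sketch of this red H-S check, using the intervals from the example above:

```python
def is_red(h, s):
    """H and S both in [0.9, 1] identify red regardless of V, so the patch
    belongs to the colored colour system."""
    return 0.9 <= h <= 1.0 and 0.9 <= s <= 1.0
```

Analogous predicates would be defined per color with that color's experimentally observed H and S threshold ranges.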
The method of determining the color system by the specific combination of H and S may be used alone or in combination with other color system determination methods. For example, the following HSV data were calculated for the upper-body patch of a frame image of the kind shown in fig. 5:

| Picture name | H_upper | S_upper | V_upper | V_background |
| 0011.png | 0.982 | 0.936 | 0.095 | 0.292 |
The color system of picture 0011.png cannot be determined by the fuzzy discrimination method of the embodiment: according to V_background and the correspondence between background brightness value and illumination coefficient in Table 1, the illumination coefficient is 2.0, and multiplying the brightness value V_upper of the target portion of the upper body by this coefficient gives an adjusted V value of 0.19, which falls exactly in the critical interval (0.18-0.2) between the dark and colored color systems, so the color system cannot be readily determined. Combining the specific combination of H and S, however, both the H value and the S value of this frame image lie in the interval [0.9, 1], so the color of the small block can be determined to be red, belonging to the colored color system.
Third, determining the color system by a comprehensive evaluation method
In some specific embodiments, the color system of the small block can be determined by a comprehensive evaluation method, whose specific process is as follows: obtain a preset background brightness value, which may be the average background brightness calculated from the background brightness values of a large number of pictures; calculate the ratio of the preset background brightness value to the background brightness value of the current small block to obtain a second target illumination coefficient; for each small block, multiply the brightness value of the target portion of the small block by the second target illumination coefficient to obtain a second adjusted brightness value, thereby adjusting the brightness of the small block to its value under normal illumination; then determine the color system of the small block from the second adjusted brightness value. The background brightness value of the current small block is the average brightness of a plurality of pixels outside the Mask in the bounding box; these may be all pixels outside the Mask in the bounding box, or a subset of them, for example the pixels located in the neighborhood of the current small block.
The method for determining the color system of the small block from the second adjusted brightness value is the same as the method using the first adjusted brightness value: when the second adjusted brightness value V is within the V range corresponding to the dark color system (which can be set to 0.02-0.18), the small block belongs to the dark color system; when it is within the V range corresponding to the colored color system (which can be set to 0.2-0.8), the small block belongs to the colored color system; when it is within the V range corresponding to the white-bright color system (which can be set to 0.7-0.9), the small block belongs to the white-bright color system.
The determination of color systems by the comprehensive evaluation method in the examples of the present invention is described below with reference to a set of specific image data.
First, millions of pictures in an existing database are obtained and the mean of their background brightness values is calculated; supposing the result is 0.491, that value is set as the preset background brightness value.
The ratio of 0.491 to the background luminance of each patch is then calculated; the result is called the illumination coefficient of that patch. Multiplying the illumination coefficient by the brightness value of the target portion of the patch gives the adjusted brightness value of the patch's foreground target, which is then compared with the set thresholds of each color system to determine the color system to which the patch belongs.
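As a sketch, with 0.491 as the preset background brightness value from the example (the coefficient direction follows the definition preset / patch background, which brightens patches with darker-than-average backgrounds):

```python
PRESET_BACKGROUND_V = 0.491  # mean background brightness of the example corpus

def second_adjusted_v(target_v, background_v):
    """Second target illumination coefficient = preset / patch background V,
    applied to the brightness value of the patch's target portion."""
    coefficient = PRESET_BACKGROUND_V / background_v
    return coefficient * target_v
```

The returned value is the second adjusted brightness value, which is then classified against the same V ranges as before.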
Fourth, determining the color system by a machine learning method
In some specific embodiments, the parametric model may be established by a machine learning method, and the color system of the tile is obtained by inputting the relevant parameters of the tile into the parametric model. The specific implementation method comprises the following steps:
a parametric model is determined, and for each patch, the background luminance value of the patch and the parameters of the target portion of the patch (H, S and V values of the target portion of the patch) are input into the parametric model to obtain the color system of the patch.
The specific method for determining the parameter model comprises the following steps: obtaining a plurality of groups of model training parameters, wherein each group of model training parameters comprises a color system determining parameter and a color system value corresponding to the color system determining parameter, and the color system determining parameter comprises a brightness value of a background image, a brightness value of a selected foreground target, a saturation value of the selected foreground target and a hue value of the selected foreground target; and inputting the multiple groups of model training parameters into a machine learning model to obtain the parameter model.
Specifically, the machine learning model used for training may be an SVM, for example the SVM implementation provided with Matlab. First the data are labeled, for example with label values of -1, 0 and 1 for the dark, colored and white-bright color systems respectively. The H, S and V values of the selected foreground target, the V value of the background image and the label value of the labeling result are then input as training parameters. The training result is a parameter model; applying this model to small blocks that have not been manually labeled outputs a label value, and from the label value (-1 for dark, 0 for colored, 1 for white-bright) the color system of the patch is determined.
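The train-then-predict workflow can be illustrated with a tiny synthetic sketch. A nearest-centroid classifier stands in for the Matlab SVM purely to keep the example self-contained (both the feature rows and the query point are made up); the labels follow the text: -1 dark, 0 colored, 1 white-bright:

```python
import numpy as np

# Each row: [H, S, V of target portion, V of background]; synthetic examples.
X = np.array([
    [0.60, 0.20, 0.05, 0.30], [0.10, 0.10, 0.10, 0.50],  # dark
    [0.00, 0.90, 0.50, 0.50], [0.30, 0.80, 0.40, 0.40],  # colored
    [0.00, 0.05, 0.85, 0.60], [0.50, 0.02, 0.80, 0.50],  # white-bright
])
y = np.array([-1, -1, 0, 0, 1, 1])

# "Training": one centroid per label value.
centroids = {label: X[y == label].mean(axis=0) for label in (-1, 0, 1)}

def predict(features):
    """Label of the nearest centroid (-1 dark, 0 colored, 1 white-bright)."""
    f = np.asarray(features, dtype=float)
    return min(centroids, key=lambda lab: np.linalg.norm(f - centroids[lab]))
```

An actual implementation would replace the centroid step with SVM training on a large labeled corpus, but the interface, features in and a label value out, is the same.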
It should be noted that the color system determination methods in the embodiments of the present invention may be used alone, and when a color system of a certain patch cannot be determined by using one of the methods, the determination may be performed in combination with other methods.
The foregoing is an introduction to an embodiment of a method in an embodiment of the present invention, and an image processing apparatus in an embodiment of the present invention is described below from a functional block perspective.
As shown in fig. 6, the image processing apparatus includes the following functional units:
a small block dividing unit 601, configured to divide a bounding box of a Mask that includes a foreground object into a plurality of small blocks, where each small block includes at least one pixel;
a color system determining unit 602, configured to determine a color system of each of the small blocks occupied by the Mask;
a scaling factor determining unit 603, configured to determine, for each color system, a scaling factor corresponding to the color system;
a brightness adjustment unit 604 for multiplying the brightness value of each pixel in the patch belonging to the color family by the scaling factor.
Optionally, in some specific embodiments, the scaling factor determining unit 603 is specifically configured to determine an expected brightness value corresponding to a color system; calculate a first average luminance value of the pixels included in the patches belonging to the color system; and calculate the ratio of the expected brightness value to the first average luminance value to obtain the scaling coefficient corresponding to the color system in the bounding box.
Optionally, in some specific embodiments, the color system determining unit 602 is specifically configured to, for each of a plurality of small blocks occupied by the Mask, determine a color system of the small block according to at least one of a background brightness value of the small block, a brightness value of a target portion of the small block, a saturation value of the target portion of the small block, and a hue value of the target portion of the small block; the background luminance value of the small block includes an average luminance value of a plurality of pixels except for the Mask in the boundary frame, and the target portion of the small block includes a portion belonging to the Mask in the small block.
Optionally, in some specific embodiments, the color system determining unit 602 is specifically configured to obtain a corresponding relationship between a background brightness value and an illumination coefficient; determining a first target illumination coefficient corresponding to the background brightness value of the current small block according to the corresponding relation between the background brightness value and the illumination coefficient; multiplying the brightness value of the target part of the small block by a first target illumination coefficient to obtain a first adjustment brightness value; and determining the color system of the small block according to the first adjusted brightness value.
Optionally, in some specific embodiments, the color system determining unit 602 is specifically configured to obtain a preset background brightness preset value; calculating the ratio of the background brightness preset value to the background brightness value of the current small block to obtain a second target illumination coefficient; multiplying the brightness value of the target part of the small block by a second target illumination coefficient to obtain a second adjustment brightness value; and determining the color system of the small block according to the second adjusted brightness value.
Optionally, in some specific embodiments, the color system determining unit 602 is specifically configured to determine whether a tone value of the target portion of the tile is within a tone threshold range of the first color system; judging whether the saturation value of the target part of the small block is within the saturation threshold range of the first color system; if the hue value of the target portion of the patch is within the hue threshold range of the first color system and the saturation value of the target portion of the patch is within the saturation threshold range of the first color system, it is determined that the color system of the patch belongs to the first color system.
Optionally, in some specific embodiments, the color system determining unit 602 is specifically configured to input a background luminance value of the small block, a luminance value of the target portion of the small block, a saturation value of the target portion of the small block, and a hue value of the target portion of the small block into the parametric model to obtain the color system of the small block.
Optionally, the apparatus further comprises: a model training unit (not shown in the figure) configured to obtain a plurality of sets of model training parameters, where each set of model training parameters includes a color system determination parameter and a color system value corresponding to the color system determination parameter, and the color system determination parameter includes a brightness value of a background image, a brightness value of a selected foreground target, a saturation value of the selected foreground target, and a hue value of the selected foreground target; and inputting the multiple groups of model training parameters into a Support Vector Machine (SVM) to obtain a parameter model.
For information interaction among the units in the image processing apparatus, reference may be made to the above method embodiment (embodiments shown in fig. 1 to fig. 5), which is not described in detail herein.
In practical applications, the image processing apparatus shown in fig. 6 may be implemented by an application-specific integrated circuit (ASIC) or a Programmable Logic Device (PLD). The PLD may be a Complex Programmable Logic Device (CPLD), a Field Programmable Gate Array (FPGA), a General Array Logic (GAL), or any combination thereof.
The image processing apparatus shown in fig. 6 may in practice be a server. Fig. 7 is a hardware structure diagram of a server according to an embodiment of the present invention; its configuration and performance may vary considerably, and it may include one or more processors (CPUs) 722, a memory 730, and one or more wired or wireless network interfaces 750. The memory 730 stores one or more application programs 742, data 744, and one or more operating systems 741; an application program 742 stored in the memory 730 may include one or more modules, each of which may include a sequence of instruction operations.
In the embodiment of the present invention, the application programs 742 may include the video content management program shown in fig. 2, and the processor 722 runs the video content management program stored in the memory 730 to implement all or part of the steps in the above-described method embodiments (the embodiments shown in fig. 1 to 5).
The server may also include one or more power supplies 726 and one or more input/output interfaces 758.
It should be noted that the server shown in fig. 7 may be a distributed server cluster, and the application programs 742, the data 744, and the like in the memory 730 may be stored in a distributed manner across the servers of the cluster.
The server shown in fig. 7 corresponds to the server in the image processing system architecture shown in fig. 2: the client device interacts with the server, and the server processes requests from the client device through the video content management module and returns corresponding results to the client device.
It should be noted that all or part of the above embodiments may be implemented by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the invention are produced, in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center over a wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) connection. The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device, such as a server or data center, that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The terms "first," "second," "third," "fourth," and the like in the description and claims of this application and in the above-described drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Moreover, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.
Claims (15)
1. An image processing method, characterized in that the method comprises:
dividing a boundary box into a plurality of small blocks, wherein each small block comprises at least one pixel, and the boundary box comprises a Mask of a foreground object;
determining a color system of each small block in the small blocks occupied by the Mask;
for each color system, determining a scaling coefficient corresponding to the color system, and multiplying the brightness value of each pixel in a small block belonging to the color system by the scaling coefficient;
the determining the color system of each of the small blocks occupied by the Mask includes:
for each small block in the small blocks occupied by the Mask, determining a color system of the small block according to at least one of a background brightness value of the small block, a brightness value of a target portion of the small block, a saturation value of the target portion of the small block and a hue value of the target portion of the small block;
the background brightness value of the small block comprises an average brightness value of a plurality of pixels other than the Mask in the boundary box, and the target portion of the small block comprises a portion of the small block belonging to the Mask.
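The steps of claim 1 can be sketched as follows. This is a minimal illustration, not part of the claimed subject matter: the tile size, the `classify` callback, and the coefficient table are hypothetical stand-ins for the color-system determination and scaling-coefficient steps detailed in the dependent claims.

```python
import numpy as np

def tile_color_systems(v_channel, mask, tile, classify):
    """Divide the bounding box into tile x tile small blocks and classify
    each block that the Mask occupies (classify is a hypothetical callback
    standing in for the determination of the dependent claims)."""
    h, w = mask.shape
    systems = {}
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            block_mask = mask[y:y + tile, x:x + tile]
            if block_mask.any():  # small block occupied by the Mask
                systems[(y, x)] = classify(v_channel[y:y + tile, x:x + tile],
                                           block_mask)
    return systems

def scale_by_system(v_channel, tile, systems, coeffs):
    """Multiply the brightness value of each pixel in a small block by the
    scaling coefficient of that block's color system; blocks outside the
    Mask are left untouched."""
    out = v_channel.astype(np.float32).copy()
    for (y, x), cs in systems.items():
        out[y:y + tile, x:x + tile] *= coeffs[cs]
    return out
```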
2. The method of claim 1, wherein the determining the scaling factor corresponding to the color system comprises:
determining a brightness expected value corresponding to the color system;
calculating a first average luminance value of pixels included in small blocks belonging to the color system;
and calculating the ratio of the first average brightness value to the expected brightness value to obtain a scaling coefficient corresponding to the color system.
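Claim 2's computation reduces to a few lines. The expected brightness value per color system is assumed to be given; note that, as worded, the claim computes the coefficient as first average / expected.

```python
import numpy as np

def scaling_coefficient(expected_v, pixel_luminances):
    """Claim-2 steps: take the first average luminance of all pixels in
    small blocks of the color system, then its ratio to the expected
    brightness value (expected_v is an assumed per-system constant)."""
    first_avg = float(np.mean(pixel_luminances))
    return first_avg / expected_v
```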
3. The method of claim 1, wherein determining the color system of the tile based on a background luminance value of the tile and a luminance value of the target portion of the tile comprises:
acquiring a corresponding relation between a background brightness value and an illumination coefficient;
determining a first target illumination coefficient corresponding to the background brightness value of the small block according to the corresponding relation between the background brightness value and the illumination coefficient;
multiplying the luminance value of the target portion of the tile by the first target illumination coefficient to obtain a first adjusted luminance value;
and determining the color system of the small block according to the first adjusted brightness value.
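A sketch of claim 3, assuming the correspondence between background brightness values and illumination coefficients is a simple banded lookup table; the breakpoints and coefficients below are illustrative, since the claim only requires that such a correspondence exist.

```python
import bisect

# Hypothetical correspondence between background brightness bands and
# illumination coefficients.
BG_BREAKPOINTS = [64, 128, 192]        # band upper bounds
ILLUM_COEFFS = [1.4, 1.2, 1.0, 0.85]   # one first target coefficient per band

def first_adjusted_luminance(bg_luma, target_luma):
    """Look up the first target illumination coefficient for the block's
    background brightness, then scale the target portion's brightness."""
    coeff = ILLUM_COEFFS[bisect.bisect_right(BG_BREAKPOINTS, bg_luma)]
    return target_luma * coeff
```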
4. The method of claim 1, wherein determining the color system of the tile based on a background luminance value of the tile and a luminance value of the target portion of the tile comprises:
acquiring a preset background brightness value;
calculating the ratio of the background brightness preset value to the background brightness value of the small block to obtain a second target illumination coefficient;
multiplying the luminance value of the target portion of the tile by the second target illumination coefficient to obtain a second adjusted luminance value;
and determining the color system of the small block according to the second adjusted brightness value.
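Claim 4 replaces the lookup table with a single ratio; a sketch, with 128 as an assumed preset background brightness value.

```python
def second_adjusted_luminance(bg_luma, target_luma, bg_preset=128.0):
    """Claim-4 steps: the second target illumination coefficient is the
    ratio of a preset background brightness value (128 is an assumed
    preset) to the block's actual background brightness; the target
    portion's brightness is then scaled by it."""
    coeff = bg_preset / bg_luma
    return target_luma * coeff
```

Dark backgrounds (bg below the preset) yield a coefficient above 1 and brighten the target portion; bright backgrounds dim it.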
5. The method of any one of claims 1 to 4, wherein determining the color system of the tile from the saturation value of the target portion of the tile and the hue value of the target portion of the tile comprises:
determining whether the hue value of the target portion of the small block is within a hue threshold range of a first color system;
determining whether the saturation value of the target portion of the small block is within a saturation threshold range of the first color system;
determining that the color system of the tile belongs to the first color system if the hue value of the target portion of the tile is within the hue threshold range of the first color system and the saturation value of the target portion of the tile is within the saturation threshold range of the first color system.
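A sketch of claim 5's two-threshold test; the hue and saturation ranges below are illustrative values for a hypothetical first color system, expressed on OpenCV-style HSV scales (hue in [0, 180), saturation in [0, 255]).

```python
# Illustrative thresholds for a hypothetical "first color system"
# (a red-like range).
FIRST_SYSTEM = {"hue": (0, 10), "sat": (80, 255)}

def in_first_system(hue, sat, system=FIRST_SYSTEM):
    """The block belongs to the first color system only when BOTH the hue
    and the saturation of its target portion fall inside the system's
    threshold ranges."""
    h_lo, h_hi = system["hue"]
    s_lo, s_hi = system["sat"]
    return h_lo <= hue <= h_hi and s_lo <= sat <= s_hi
```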
6. The method of any one of claims 1 to 4, wherein said determining a color system of a tile from a background luminance value of the tile, a luminance value of a target portion of the tile, a saturation value of the target portion of the tile, and a hue value of the target portion of the tile comprises:
and inputting the background brightness value of the small block, the brightness value of the target part of the small block, the saturation value of the target part of the small block and the hue value of the target part of the small block into a parameter model to obtain the color system of the small block.
7. The method of claim 6, further comprising:
obtaining a plurality of groups of model training parameters, wherein each group of model training parameters comprises a color system determining parameter and a color system value corresponding to the color system determining parameter, and the color system determining parameter comprises a brightness value of a background image, a brightness value of a selected foreground target, a saturation value of the selected foreground target and a hue value of the selected foreground target;
and inputting the multiple groups of model training parameters into a Support Vector Machine (SVM) to obtain the parameter model.
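Claims 6 and 7 can be sketched with scikit-learn's `SVC` (an assumed implementation choice; the claims require only a support vector machine). The training rows and color-system labels below are fabricated for illustration.

```python
import numpy as np
from sklearn.svm import SVC

# Each row holds one set of color-system determination parameters from
# claim 7: [background brightness, target brightness, target saturation,
# target hue]; the labels are two illustrative color-system values.
X = np.array([[ 40.,  60., 200.,   5.],
              [ 45.,  55., 210.,   8.],
              [200., 220.,  40., 110.],
              [210., 230.,  30., 115.]])
y = np.array([0, 0, 1, 1])

# Claim 7: fit the parameter model on the training parameters.
parameter_model = SVC(kernel="rbf", gamma="scale").fit(X, y)

# Claim 6: input a block's four values into the parameter model to
# obtain its color system.
color_system = parameter_model.predict([[42., 58., 205., 6.]])[0]
```

In practice the parameter model would be trained once offline on labeled foreground/background samples and then queried per small block.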
8. An image processing apparatus, characterized in that the apparatus comprises:
the small block dividing unit is used for dividing a boundary box into a plurality of small blocks, each small block comprises at least one pixel, and the boundary box comprises a Mask of a foreground object;
a color system determining unit, configured to determine a color system of each of the small blocks occupied by the Mask;
the scaling coefficient determining unit is used for determining the scaling coefficient corresponding to each color system;
a luminance adjusting unit for multiplying a luminance value of each pixel in a patch belonging to the color system by the scaling coefficient;
the color system determining unit is specifically configured to determine, for each of a plurality of small blocks occupied by the Mask, a color system of the small block according to at least one of a background brightness value of the small block, a brightness value of a target portion of the small block, a saturation value of the target portion of the small block, and a hue value of the target portion of the small block; the background brightness value of the small block comprises an average brightness value of a plurality of pixels other than the Mask in the boundary box, and the target portion of the small block comprises a portion of the small block belonging to the Mask.
9. The apparatus of claim 8,
the scaling coefficient determining unit is specifically configured to determine a brightness expected value corresponding to the color system; calculate a first average brightness value of pixels included in small blocks belonging to the color system; and calculate the ratio of the first average brightness value to the expected brightness value to obtain the scaling coefficient corresponding to the color system in the boundary box.
10. The apparatus of claim 8,
the color system determining unit is specifically configured to acquire the corresponding relation between the background brightness value and the illumination coefficient; determine a first target illumination coefficient corresponding to the background brightness value of the small block according to the corresponding relation between the background brightness value and the illumination coefficient; multiply the brightness value of the target portion of the small block by the first target illumination coefficient to obtain a first adjusted brightness value; and determine the color system of the small block according to the first adjusted brightness value.
11. The apparatus of claim 8,
the color system determining unit is specifically configured to acquire a preset background brightness value; calculate the ratio of the preset background brightness value to the background brightness value of the small block to obtain a second target illumination coefficient; multiply the brightness value of the target portion of the small block by the second target illumination coefficient to obtain a second adjusted brightness value; and determine the color system of the small block according to the second adjusted brightness value.
12. The apparatus according to any one of claims 8 to 11,
the color system determining unit is specifically configured to determine whether the hue value of the target portion of the small block is within a hue threshold range of a first color system; determine whether the saturation value of the target portion of the small block is within a saturation threshold range of the first color system; and determine that the color system of the small block belongs to the first color system if the hue value of the target portion of the small block is within the hue threshold range of the first color system and the saturation value of the target portion of the small block is within the saturation threshold range of the first color system.
13. The apparatus according to any one of claims 8 to 11,
the color system determining unit is specifically configured to input a background luminance value of the small block, a luminance value of a target portion of the small block, a saturation value of the target portion of the small block, and a hue value of the target portion of the small block into a parameter model to obtain a color system of the small block.
14. The apparatus of claim 13, further comprising:
the model training unit is used for acquiring a plurality of groups of model training parameters, each group of model training parameters comprises color system determining parameters and color system values corresponding to the color system determining parameters, and the color system determining parameters comprise a brightness value of a background image, a brightness value of a selected foreground target, a saturation value of the selected foreground target and a hue value of the selected foreground target; and inputting the multiple groups of model training parameters into a Support Vector Machine (SVM) to obtain the parameter model.
15. A server, comprising:
a processor, a memory, and a network interface;
the memory is configured to store program code, and the processor calls the program code in the memory to:
dividing a boundary box into a plurality of small blocks, wherein each small block comprises at least one pixel, and the boundary box comprises a Mask of a foreground object; determining a color system of each small block in the small blocks occupied by the Mask; for each color system, determining a scaling coefficient corresponding to the color system, and multiplying the brightness value of each pixel in a small block belonging to the color system by the scaling coefficient;
the determining the color system of each of the small blocks occupied by the Mask includes:
for each small block in the small blocks occupied by the Mask, determining a color system of the small block according to at least one of a background brightness value of the small block, a brightness value of a target portion of the small block, a saturation value of the target portion of the small block and a hue value of the target portion of the small block;
the background brightness value of the small block comprises an average brightness value of a plurality of pixels other than the Mask in the boundary box, and the target portion of the small block comprises a portion of the small block belonging to the Mask.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710326076.7A CN108875759B (en) | 2017-05-10 | 2017-05-10 | Image processing method and device and server |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108875759A CN108875759A (en) | 2018-11-23 |
CN108875759B true CN108875759B (en) | 2022-05-24 |
Family
ID=64287631
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710326076.7A Active CN108875759B (en) | 2017-05-10 | 2017-05-10 | Image processing method and device and server |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108875759B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110189354B (en) * | 2019-04-18 | 2021-12-28 | 北京迈格威科技有限公司 | Image processing method, image processor, image processing apparatus, and medium |
CN110659581B (en) * | 2019-08-29 | 2024-02-20 | 腾讯科技(深圳)有限公司 | Image processing method, device, equipment and storage medium |
CN111275648B (en) * | 2020-01-21 | 2024-02-09 | 腾讯科技(深圳)有限公司 | Face image processing method, device, equipment and computer readable storage medium |
CN111861869B (en) * | 2020-07-15 | 2024-02-02 | 广州光锥元信息科技有限公司 | Image processing method and device for beautifying human images and preventing background distortion |
CN114419560B (en) * | 2022-04-01 | 2022-07-12 | 苏州万店掌网络科技有限公司 | Method, device and medium for detecting target object |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101790023A (en) * | 2009-01-22 | 2010-07-28 | 夏普株式会社 | Method for compressing image, image compressing device and image processing system |
CN102292991A (en) * | 2009-05-15 | 2011-12-21 | 夏普株式会社 | Image processing device and image processing method |
CN103116763A (en) * | 2013-01-30 | 2013-05-22 | 宁波大学 | Vivo-face detection method based on HSV (hue, saturation, value) color space statistical characteristics |
CN103414841A (en) * | 2013-07-29 | 2013-11-27 | 西北工业大学 | Image encryption and decryption method for utilizing gray system theory to confirm ICA output |
CN103945089A (en) * | 2014-04-18 | 2014-07-23 | 上海复控华龙微系统技术有限公司 | Dynamic target detection method based on brightness flicker correction and IP camera |
CN104364825A (en) * | 2012-04-09 | 2015-02-18 | 华为技术有限公司 | Visual conditioning for augmented-reality-assisted video conferencing |
CN104809245A (en) * | 2015-05-13 | 2015-07-29 | 信阳师范学院 | Image retrieval method |
CN105761267A (en) * | 2016-03-08 | 2016-07-13 | 重庆邮电大学 | Image processing method and device |
CN106251286A (en) * | 2015-12-30 | 2016-12-21 | 深圳超多维科技有限公司 | Image processing method, device and equipment |
CN106504241A (en) * | 2016-10-25 | 2017-03-15 | 西安交通大学 | A kind of apparatus and method of checking colors automatically |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040091173A1 (en) * | 2002-07-17 | 2004-05-13 | Hiroshi Akimoto | Method, apparatus and system for the spatial interpolation of color images and video sequences in real time |
US8335344B2 (en) * | 2010-03-31 | 2012-12-18 | Xerox Corporation | Method and system for document annotation |
US9349054B1 (en) * | 2014-10-29 | 2016-05-24 | Behavioral Recognition Systems, Inc. | Foreground detector for video analytics system |
Non-Patent Citations (1)
Title |
---|
A threshold selection algorithm for coloring seismic attribute images; Tang Shiwei et al.; Computer and Digital Engineering; 2013-02-20; pp. 287-290 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||