CN109308704B - Background eliminating method, device, computer equipment and storage medium - Google Patents

Background eliminating method, device, computer equipment and storage medium

Info

Publication number
CN109308704B
CN109308704B (application CN201810872355.8A)
Authority
CN
China
Prior art keywords
pixel
value
video
processing
target person
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810872355.8A
Other languages
Chinese (zh)
Other versions
CN109308704A (en)
Inventor
车宏伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201810872355.8A priority Critical patent/CN109308704B/en
Priority to PCT/CN2018/106379 priority patent/WO2020024394A1/en
Publication of CN109308704A publication Critical patent/CN109308704A/en
Application granted granted Critical
Publication of CN109308704B publication Critical patent/CN109308704B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G06T7/20 Analysis of motion
    • G06T7/215 Motion-based segmentation
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person

Abstract

The invention discloses a background elimination method, apparatus, computer device, and storage medium. A first-order derivative is taken of the pixel value corresponding to each pixel point in a pre-obtained initial picture to obtain a first processing value for each pixel point, and a second-order derivative is then taken of each first processing value to obtain a second processing value for each pixel point. If the first processing value of the pixel value corresponding to a pixel point meets a preset first condition and its second processing value meets a preset second condition, the pixel point is determined to be an edge point between the target person and the target person background in the initial picture, and the target person background is removed from the initial picture according to the edge line to obtain a target picture. This solves the problem of a large amount of calculation.

Description

Background eliminating method, device, computer equipment and storage medium
Technical Field
The present invention relates to the field of financial science and technology, and in particular, to a method and apparatus for removing background, a computer device, and a storage medium.
Background
At present, the crime rate in society keeps rising, and the task of apprehending criminals is becoming heavier and heavier.
Pedestrian re-identification technology has emerged to help catch criminals. When a traditional pedestrian re-identification algorithm (for example, a neural network algorithm or another algorithm) is used to detect a target person in video surveillance, the captured target person is usually extracted from the picture, thereby separating the target person from the target person background. However, the traditional method requires a large amount of calculation when processing a picture in which the color difference between the target person and the background is large, the background light of the target person is strong, or similar conditions hold.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a background elimination method, apparatus, computer device, and storage medium that avoid the problem of a large amount of calculation when processing a picture in which the color difference between the target person and the target person background is large, the background light of the target person is strong, or similar conditions hold.
A background elimination method, comprising:
respectively carrying out first-order derivation on pixel values corresponding to all pixel points in the initial picture obtained in advance to obtain first processing values of the pixel values corresponding to all the pixel points, wherein the pixel values corresponding to the pixel points and the first processing values of the pixel values corresponding to the pixel points have a one-to-one correspondence;
Respectively carrying out second order derivation on the first processing values of the pixel values corresponding to the pixel points to obtain second processing values of the pixel values corresponding to the pixel points, wherein the first processing values of the pixel values corresponding to the pixel points and the second processing values of the pixel values corresponding to the pixel points have a one-to-one correspondence;
if the first processing value of the pixel value corresponding to a pixel point meets a preset first condition and the second processing value of the pixel value corresponding to that pixel point meets a preset second condition, determining the pixel point as an edge point between a target person and a target person background in the initial picture;
and removing the target person background in the initial picture according to an edge line to obtain a target picture, wherein the edge line is formed by connecting edge points between the target person and the target person background in each initial picture.
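The claimed steps can be sketched end to end in a few lines. The following is a minimal illustration only: the thresholds, the use of `np.gradient` as the derivative operator, and the row-wise "keep pixels between the outermost edge points" masking rule are all assumptions not specified by the claims.

```python
import numpy as np

def remove_background(img, t1, t2, fill=0):
    """Sketch of the claimed steps: first/second derivative -> edge points -> background removal.

    t1, t2 (the preset first/second conditions) and fill are illustrative values."""
    img = img.astype(np.float64)
    first = np.gradient(img, axis=1)      # "first processing value" per pixel point
    second = np.gradient(first, axis=1)   # "second processing value" per pixel point
    edges = (np.abs(first) >= t1) & (np.abs(second) >= t2)
    out = np.full_like(img, float(fill))
    for r in range(img.shape[0]):
        cols = np.flatnonzero(edges[r])
        if cols.size >= 2:                # keep pixels between the outermost edge points
            out[r, cols[0]:cols[-1] + 1] = img[r, cols[0]:cols[-1] + 1]
    return out
```

On a row such as `[0, 0, 0, 100, 100, 100, 0, 0, 0]`, both derivatives spike at the color transitions, so the bright band is kept and everything outside it is replaced by the fill value.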
A background elimination apparatus, comprising:
the first derivation module is used for respectively performing first-order derivation on pixel values corresponding to all pixel points in the initial picture obtained in advance to obtain first processing values of the pixel values corresponding to all the pixel points, wherein the pixel values corresponding to the pixel points and the first processing values of the pixel values corresponding to the pixel points have a one-to-one correspondence;
The second derivation module is used for respectively performing second order derivation on the first processing values of the pixel values corresponding to the pixel points to obtain second processing values of the pixel values corresponding to the pixel points, wherein the first processing values of the pixel values corresponding to the pixel points and the second processing values of the pixel values corresponding to the pixel points have a one-to-one correspondence;
a determining module, configured to determine a pixel point as an edge point between a target person and a target person background in the initial picture if the first processing value of the pixel value corresponding to the pixel point meets a preset first condition and the second processing value of the pixel value corresponding to the pixel point meets a preset second condition;
and the rejecting module is used for rejecting the target person background in the initial picture according to an edge line to obtain a target picture, wherein the edge line is formed by connecting edge points between the target person in each initial picture and the target person background.
A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the background elimination method described above when executing the computer program.
A computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the background elimination method described above.
According to the background elimination method, apparatus, computer device, and storage medium, a first-order derivative is taken of the pixel value corresponding to each pixel point in the pre-obtained initial picture to obtain a first processing value for each pixel point, and a second-order derivative is taken of each first processing value to obtain a second processing value for each pixel point. If the first processing value of a pixel point meets the preset first condition and its second processing value meets the preset second condition, the pixel point is determined to be an edge point between the target person and the target person background in the initial picture, and the target person background is removed from the initial picture according to the edge line to obtain the target picture. The edge points between the target person and the target person background can be determined by derivation alone, which requires little calculation; the target person background is then removed from the picture along the edge line formed by connecting the edge points, separating the target person from the background. This avoids the large amount of calculation otherwise required when processing a picture in which the color difference between the target person and the background is large, the background light is strong, or similar conditions hold.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments of the present invention will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of an application environment of a background elimination method according to an embodiment of the present invention;
FIG. 2 is a flowchart of a background elimination method according to an embodiment of the present invention;
FIG. 3 is a flowchart of obtaining an initial picture in a background elimination method according to an embodiment of the present invention;
FIG. 4 is a flowchart of obtaining a trace video in a background elimination method according to an embodiment of the present invention;
FIG. 5 is a flowchart of computing pixels in a background elimination method according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a background elimination apparatus according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a computer device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The background elimination method provided by the present application can be applied in an application environment as shown in FIG. 1, in which a computer device communicates with a server through a network. First, the server takes a first-order derivative of the pixel value corresponding to each pixel point in a pre-obtained initial picture to obtain a first processing value for each pixel point. The server then takes a second-order derivative of each first processing value to obtain a second processing value for each pixel point. If the first processing value of a pixel point meets a preset first condition and its second processing value meets a preset second condition, the server determines that the pixel point is an edge point between the target person and the target person background in the initial picture. Finally, the server removes the target person background from the initial picture according to the edge line to obtain a target picture. The computer device may be, but is not limited to, a personal computer, notebook computer, smartphone, tablet computer, or portable wearable device. The server may be implemented as a stand-alone server or as a server cluster formed by a plurality of servers.
In an embodiment, as shown in FIG. 2, a background elimination method is provided. The method is applied in the financial industry and is described here as executed on the server side of FIG. 1. It includes the following steps:
s10: respectively carrying out first-order derivation on pixel values corresponding to all pixel points in the initial picture obtained in advance to obtain a first processing value of the pixel value corresponding to each pixel point;
In this embodiment, the pixel points are pixels, and the pixel value corresponding to a pixel point is given by the modulus of the pixel's position vector together with its color value.
Specifically, to obtain the first processing value of the pixel value corresponding to each pixel point in the initial picture, the pixel value corresponding to each pixel point is first read from the picture, and a first-order derivative is then taken of each of these pixel values. For example, to obtain the first processing values for the picture "basketball.jpg", the pixel value corresponding to each pixel point in "basketball.jpg" is obtained, and a first-order derivative is taken of each. Suppose the pixel value corresponding to the first pixel point is x^n; taking the first-order derivative of x^n yields the first processing value n·x^(n-1) for that pixel point, where n is a positive integer greater than or equal to 1 and x is a positive integer greater than or equal to 1.
It should be noted that, there is a one-to-one correspondence between the pixel value corresponding to the pixel point and the first processing value of the pixel value corresponding to the pixel point.
S20: respectively carrying out second order derivation on the first processing values of the pixel values corresponding to the pixel points to obtain second processing values of the pixel values corresponding to the pixel points;
In this embodiment, to obtain the second processing value of the pixel value corresponding to each pixel point in the initial picture, a second-order derivative is taken of the first processing value obtained for each pixel point by the first-order derivation. For example, to obtain the second processing values for the picture "bicycle.jpg", a second-order derivative is taken of the first processing value of the pixel value corresponding to each pixel point in "bicycle.jpg". Suppose the first processing value of the pixel value corresponding to a pixel point is n·x^(n-1); differentiating n·x^(n-1) again yields the second processing value n(n-1)·x^(n-2) for that pixel point, where n is a positive integer greater than or equal to 2 and x is a positive integer greater than or equal to 1.
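The worked power-rule example in steps S10 and S20 (x^n gives n·x^(n-1), which in turn gives n(n-1)·x^(n-2)) can be checked with a tiny helper. This only illustrates the successive differentiation in the example, not the discrete operator actually applied to image pixels:

```python
def derive_power(coeff, exp):
    """Differentiate coeff * x**exp once, returning the new (coeff, exp) pair."""
    return coeff * exp, exp - 1

# x**3 -> first processing value 3*x**2 -> second processing value 6*x
c1, e1 = derive_power(1, 3)   # (3, 2)
c2, e2 = derive_power(c1, e1) # (6, 1)
```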
It should be noted that, there is a one-to-one correspondence between the first processing value of the pixel value corresponding to the pixel point and the second processing value of the pixel value corresponding to the pixel point.
S30: if the first processing value of the pixel value corresponding to the pixel point meets a preset first condition and the second processing value of the pixel value corresponding to the pixel point meets a preset second condition, determining the pixel point as an edge point between a target person and a target person background in the initial picture;
In this embodiment, when the first processing value of the pixel value corresponding to a pixel point meets the preset first condition and its second processing value meets the preset second condition, the pixel point is determined to be an edge point between the target person and the target person background in the picture. An edge point is also referred to as a boundary point where the target person and the target person background meet.
It should be noted that the preset first condition may be greater than or equal to a preset first threshold, the preset second condition may be greater than or equal to a preset second threshold, and specific contents of the preset first condition and the preset second condition may be set according to practical applications, which is not limited herein.
S40: removing the target character background in the picture according to the edge line to obtain a target picture;
In this embodiment, the edge points between the target person and the target person background obtained for the picture are first connected to form an edge line, which separates the target person from the target person background. The pixel value corresponding to each pixel point in the target person background is then replaced with a preset third threshold, so that the region whose color corresponds to the preset third threshold is clearly identified as the target person background. This prevents the background from interfering with detection of the target person in the picture and thereby achieves the background-removal effect.
It should be noted that the specific content of the preset third threshold may be set according to practical applications, which is not limited herein.
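Replacing every background pixel value with the preset third threshold, as described above, reduces to a single masking operation. In this sketch the mask construction and the default threshold value 128 are illustrative assumptions, not values fixed by the patent:

```python
import numpy as np

def fill_background(img, background_mask, third_threshold=128):
    """Set every pixel marked as target-person background to the preset third threshold."""
    out = img.copy()
    out[background_mask] = third_threshold
    return out
```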
In the embodiment corresponding to FIG. 2, a first-order derivative is taken of the pixel value corresponding to each pixel point in the pre-obtained initial picture to obtain a first processing value for each pixel point, and a second-order derivative is taken of each first processing value to obtain a second processing value for each pixel point. If the first processing value of a pixel point meets the preset first condition and its second processing value meets the preset second condition, the pixel point is determined to be an edge point between the target person and the target person background in the initial picture, and the target person background is removed from the initial picture according to the edge line to obtain the target picture. The edge points can be determined by derivation alone, which requires little calculation; removing the background along the edge line formed by connecting the edge points separates the target person from the background, and thus avoids the large amount of calculation otherwise required when the color difference between the target person and the background is large, the background light is strong, or similar conditions hold.
Further, in an embodiment applied in the financial industry, whether the first processing value of the pixel value corresponding to a pixel point meets the preset first condition, and whether the second processing value meets the preset second condition, are determined as follows:
specifically, the first absolute value is larger than a preset first threshold value, and the second absolute value is larger than a preset second threshold value, wherein the first absolute value is the absolute value of the difference between the first-order derivative result of the obtained pixel point and the first-order derivative result of a preset pixel point which is transversely adjacent to the pixel point, and the second absolute value is the absolute value of the difference between the second-order derivative result of the obtained pixel point and the second-order derivative result of a preset pixel point which is transversely adjacent to the pixel point.
It should be noted that the lateral direction may be to the left, to the right, or both. The preset number of pixel points may be, for example, 3 or 5: 3 pixel points on the left, 3 on the right, or 3 on each side (6 in total). The specific value may be set according to the practical application and is not limited here.
In this embodiment, a pixel point is determined to be an edge point between the target person and the target person background by comparing, against the preset first threshold, the absolute value of the difference between the first-order derivative result of the pixel point and that of a laterally adjacent pixel point the preset number of positions away, and by comparing, against the preset second threshold, the corresponding absolute difference of the second-order derivative results. The magnitude of the first-order derivative represents the rate of change of the pixel point's color: the larger the first-order derivative, the greater the color change, and vice versa. The magnitude of the second-order derivative represents how quickly that color change itself changes: the larger the second-order derivative, the faster the color changes, and vice versa. Large differences in both quantities between two pixel points make their different colors easy to distinguish, which makes the edge points easier to identify.
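The comparison above, applied along one image row, can be sketched as a small predicate. The function name, the signed `offset` convention, and the thresholds are hypothetical; only the two absolute-difference tests come from the text:

```python
def is_edge_point(first_der, second_der, idx, offset, t1, t2):
    """True when both |f'(idx) - f'(idx+offset)| > t1 (preset first threshold)
    and |f''(idx) - f''(idx+offset)| > t2 (preset second threshold)."""
    j = idx + offset  # laterally adjacent pixel, a preset number of positions away
    return (abs(first_der[idx] - first_der[j]) > t1
            and abs(second_der[idx] - second_der[j]) > t2)
```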
Further, in an embodiment applied in the financial industry, as shown in FIG. 3, which is a flowchart of obtaining the initial picture in an application scenario of the background elimination method of the embodiment corresponding to FIG. 2, the initial picture is obtained as follows:
s101: calculating the distance between each video frame according to the time difference value between each video frame in the track video and the similarity of the image color characteristics between each video frame;
s102: and dividing the trace video by adopting a hierarchical clustering method according to the distance between the video frames to obtain an initial picture.
For the above step S101, it may be understood that, first, the trace video captured in advance is determined as the video to be segmented, and then, the distance between the video frames is calculated by using the video frame distance calculation formula according to the time difference between the video frames in the video to be segmented and the similarity of the image color features between the video frames.
For the above step S102, suppose the video to be segmented comprises M video frames. The distance between the 1st and 2nd video frames is obtained from their time difference and the similarity of their image color features; this continues in turn until the distance between the 1st and k-th video frames is obtained from the time difference and color-feature similarity between the 1st and k-th video frames. Likewise, the distance between the 2nd and 3rd video frames is obtained from their time difference and color-feature similarity, continuing until the distance between the 2nd and k-th video frames is obtained. By analogy, the distance between the (q-1)-th and q-th video frames is obtained from their time difference and color-feature similarity. In this way the distance between every pair of video frames can be obtained; the video to be segmented is then divided by hierarchical clustering according to these distances, the desired pictures are obtained, and the initial picture is selected from among them.
In the embodiment corresponding to FIG. 3, the desired picture can be obtained through steps S101 and S102. Because two video frames belonging to the same video event are close in time and similar in the color features of the scene and the target person, video frames belonging to the same video can be grouped into the pictures of that video according to the time difference and the similarity. This avoids assigning frames of one video to the pictures of different videos and thus ensures the integrity of the video content division.
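Steps S101 and S102 can be sketched as follows. For brevity this uses a greedy adjacent-frame cut over a precomputed distance matrix instead of full hierarchical clustering, so it is a simplified stand-in, and the cut threshold is an assumption:

```python
def segment_frames(dist, threshold):
    """Split frame indices 0..q-1 into segments, starting a new segment whenever
    consecutive frames are farther apart than threshold (stand-in for the
    hierarchical-clustering division of the trace video)."""
    segments, current = [], [0]
    for i in range(1, len(dist)):
        if dist[i - 1][i] <= threshold:
            current.append(i)        # same video event: keep frames together
        else:
            segments.append(current) # distance too large: cut here
            current = [i]
    segments.append(current)
    return segments
```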
Further, in an embodiment applied in the financial industry, the distance between video frames is calculated from the time difference between the video frames in the obtained trace video and the similarity of the image color features between the video frames as follows:
specifically, firstly, calculating the chi-square distance between video frames according to the video frames, and then, inputting the time difference value between the video frames and the chi-square distance between the video frames in the track video obtained by shooting in advance into the following calculation formula to obtain the distance between the video frames:
wherein the trace video comprises q video frames, q > 1; k_i is the i-th video frame and k_j is the j-th video frame in the trace video, with 1 ≤ i ≤ q and 1 ≤ j ≤ q; s(k_i, k_j) is the distance between the i-th and j-th video frames; χ²(k_i, k_j) is the chi-square distance between the color histograms of the i-th and j-th video frames; w_1 is the preset chi-square distance weight; r is a preset positive number; |i-j| denotes the time difference between the i-th and j-th video frames, in seconds, minutes, or hours; and max(0, r - |i-j|) denotes the maximum of 0 and r - |i-j|, where r - |i-j| is greater than 0. For example, if r is 30 and the time difference |i-j| between the 20th video frame (i = 20) and the 15th video frame (j = 15) is 20 seconds, then r - |i-j| is 10 and max(0, r - |i-j|) is 10; or if r is 7 and the time difference |i-j| between the 8th video frame (i = 8) and the 5th video frame (j = 5) is 1.6 minutes, then r - |i-j| is 5.4 and max(0, r - |i-j|) is 5.4.
The larger the chi-square distance, the lower the similarity.
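The chi-square distance between color histograms and the temporal term max(0, r - |i-j|) described above can each be computed directly. Since the calculation formula combining them (via w_1) is not shown in the extracted text, only the two ingredients are sketched here; the function names and the epsilon guard are assumptions:

```python
import numpy as np

def chi_square_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two color histograms; larger means less similar."""
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def temporal_term(r, time_diff):
    """The max(0, r - |i-j|) term from the text."""
    return max(0.0, r - abs(time_diff))
```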
In this embodiment, the above calculation formula is used to calculate s(k_i, k_j) so that a short video, even one containing some irrelevant video frames, is not further segmented into still shorter videos, thereby guaranteeing the integrity of the video.
Further, in an embodiment applied in the financial industry, as shown in FIG. 4, which is a flowchart of obtaining the trace video in an application scenario of the background elimination method of the embodiment corresponding to FIG. 3, the trace video is obtained as follows:
s1011: continuously shooting the trace video of the target person for multiple times, and judging whether the total time length of the trace video obtained by continuous multiple shooting reaches a preset time length in the process of shooting the trace video;
s1012: if the total time length of the trace video obtained by continuous and repeated shooting in the process of shooting the trace video reaches the preset time length, stopping shooting of the trace video, and splicing the trace video of the target person shot continuously and repeatedly into one trace video to obtain the spliced trace video;
s1013: and determining the spliced whereabouts video as the obtained whereabouts video.
As for the above step S1011, it is understood that the target person may be a pedestrian or other object, and the specific content of the target person may be set according to the actual situation, which is not limited herein. The video of the target person may be stored in a database, or may be stored in an optical disk or a hard disk, and the specific content of the storage location of the video of the target person may be set according to the actual situation, which is not limited herein.
Specifically, a target person is first determined from a plurality of objects, that is, the target person is manually selected from the plurality of objects. Next, the whereabouts of the target person is continuously photographed multiple times, and during each shooting it is judged whether the total time length of the trace video obtained by the continuous multiple shootings reaches a preset time length. By judging whether the total time length of the shot trace video reaches the preset time length, a trace video of the desired length can be obtained flexibly. The preset time length may be 30 minutes or 1 hour; its specific value may be set according to the actual situation, and is not limited herein.
It should be noted that, the video of the target person may be obtained by the image capturing apparatus, and then stored in the storage location, the image capturing apparatus may be a video camera or a digital camera, and the specific content of the image capturing apparatus may be set according to the actual situation, which is not limited herein.
For the above step S1012, it may be understood that if the total time length of the trace video obtained by continuous multiple shootings reaches the preset time length, shooting of the trace video is stopped. The identifiers corresponding to the continuously shot trace videos of the target object may then be displayed, and these trace videos spliced into one video according to the order in which the user sorts the identifiers; alternatively, the times corresponding to the continuously shot trace videos may be recorded, and the videos spliced into one video according to the order in which the user sorts those times. If the total time length of the trace video obtained by continuous multiple shootings does not reach the preset time length, the total time length is subtracted from the preset time length to obtain a remaining time length. A first ratio of the remaining time length to the preset time length and a second ratio of the total time length to the preset time length are then calculated, so that the progress of the shot time length toward the preset time length is clear, and the process returns to step S1011.
For the above step S1013, it may be understood that the track video obtained after the splicing is determined as the obtained track video, and is stored in the video database. The video database may be a database such as sql or Oracle, and the specific content of the video database may be set according to practical applications, which is not limited herein.
In the embodiment corresponding to fig. 4, through the steps S1011 to S1013, multiple selective shooting may be implemented to obtain a track video, and meanwhile, autonomous screening may be performed on the content of the shot track video, so as to improve flexibility in obtaining the video.
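A minimal sketch of the duration check and splicing loop of steps S1011 to S1013, including the first ratio (remaining/preset) and second ratio (shot/preset) described above. All names are placeholders, and the list of segment durations stands in for whatever the capture device reports.

```python
def shoot_until_preset(durations, preset_seconds):
    """Accumulate continuously shot trace-video segments until the preset
    total length is reached, tracking (first_ratio, second_ratio) after
    each segment; segments are spliced in shooting order."""
    spliced, total, progress = [], 0.0, []
    for d in durations:
        spliced.append(d)            # splice segments in shooting order
        total += d
        remaining = max(preset_seconds - total, 0.0)
        progress.append((remaining / preset_seconds,
                         min(total, preset_seconds) / preset_seconds))
        if total >= preset_seconds:  # preset length reached: stop shooting
            break
    return spliced, progress
```

For example, with a 30-second preset and segments of 10, 10, and 15 seconds, shooting stops after the third segment, at which point the first ratio is 0 and the second ratio is 1.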
Further, in an embodiment, the background rejection method is applied in the financial industry. As shown in fig. 5, which is a flowchart of the background rejection method in an application scenario before step S10 in the embodiments corresponding to fig. 2 to fig. 4, the background rejection method further includes:
S50: acquiring a first target pixel value corresponding to each pixel point of an initial picture, a second target pixel value corresponding to each pixel point of a target person background and a pixel point sum, wherein the pixel points of the picture comprise the pixel points of the target person and the pixel points of the target person background, and the pixel point sum is the total number of the pixel points of the picture or the total number of the pixel points of the target person background;
S60: and calculating the pixel value corresponding to each pixel point of the picture by adopting a preset calculation method according to the first target pixel value, the second target pixel value and the sum of the pixel points to obtain the calculated pixel value corresponding to the pixel point.
For the above step S50, it is understood that the selected picture includes the target person and the target person background. The selected picture is formed by combining all pixels. And obtaining a first target pixel value corresponding to each pixel point in the whole selected picture, a second target pixel value corresponding to each pixel point of the background and a pixel point sum, so as to perform a mean difference method processing on the pixel values corresponding to each pixel point. The sum of the pixel points is the total number of the pixel points of the picture or the total number of the pixel points of the background of the target person.
The pixel value of the pixel point where the target person overlaps with the target person background is 0 in the target person background.
For the above step S60, it may be understood that the pixel value corresponding to each pixel point of the picture is calculated by a preset calculation method according to the first target pixel value, the second target pixel value, and the pixel point sum, so as to obtain the calculated pixel value corresponding to each pixel point. That is, the second target pixel value corresponding to each pixel point of the background is subtracted from the first target pixel value corresponding to the same pixel point of the picture to obtain an offset value for each pixel point; the sum of all offset values is divided by the total number of pixel points to obtain an average value; and the average value is subtracted from each offset value to obtain the calculated pixel value of each pixel point.
In the embodiment corresponding to fig. 5, through the above steps S50 and S60, a mean-adjusted pixel value may be obtained for each pixel point; since the average value represents the average offset of all the pixel points, the accuracy of evaluating the pixels of the pixel points is improved.
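The mean-difference processing of steps S50 and S60 can be sketched with NumPy as follows. The array names are illustrative, and the picture and background are assumed to be same-shape grayscale arrays; background pixels overlapping the target person are 0, as the text specifies.

```python
import numpy as np

def mean_difference(picture, background):
    """Subtract the background pixel value from each picture pixel to get
    an offset, average the offsets over the pixel-point sum, then subtract
    that average from each offset to get the calculated pixel values."""
    offsets = picture.astype(float) - background.astype(float)
    average = offsets.sum() / offsets.size   # divide by the pixel-point sum
    return offsets - average                 # calculated pixel values
```

By construction, the calculated pixel values average to zero, which is what centers them on the mean offset.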
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
In an embodiment, a background removing device is provided, where the background removing device corresponds to the background removing method in the above embodiment one by one. As shown in fig. 6, the background removing device includes a first deriving module 701, a second deriving module 702, a determining module 703 and a removing module 704. The functional modules are described in detail as follows:
the first deriving module 701 is configured to perform first-order derivation on pixel values corresponding to each pixel point in the initial image obtained in advance, so as to obtain a first processing value of the pixel value corresponding to each pixel point, where the pixel value of the pixel point has a one-to-one correspondence with the first processing value of the pixel value corresponding to the pixel point;
The second derivative module 702 is configured to perform second order derivative on the first processing values corresponding to the pixel points, so as to obtain second processing values of the pixel values corresponding to the pixel points, where the first processing values of the pixel values corresponding to the pixel points and the second processing values of the pixel values corresponding to the pixel points have a one-to-one correspondence;
the first determining module 703 is configured to determine that the pixel point is an edge point between the target person and the target person background in the picture if the first processing value of the pixel value corresponding to the pixel point satisfies a preset first condition and the second processing value of the pixel value corresponding to the pixel point satisfies a preset second condition;
and the rejecting module 704 is configured to reject the target person background in the picture according to an edge line, so as to obtain a target picture, where the edge line is formed by connecting edge points between the target person and the target person background in each picture.
Further, the determining module 703 includes: a first judging module, configured to judge whether a first absolute value is greater than a first preset threshold and whether a second absolute value is greater than a second preset threshold, where the first absolute value is the absolute value of the difference between the first processing value of the pixel value corresponding to the pixel point and the first processing value of the pixel value corresponding to a preset pixel point laterally adjacent to the pixel point, and the second absolute value is the absolute value of the difference between the second processing value of the pixel value corresponding to the pixel point and the second processing value of the pixel value corresponding to the preset pixel point laterally adjacent to the pixel point.
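A rough 1-D sketch of the per-pixel test performed by the determining module, assuming discrete differences (np.gradient) stand in for the first- and second-order derivation of the pixel values, the laterally adjacent pixel is the right-hand neighbour, and the threshold values are placeholders:

```python
import numpy as np

def edge_points_in_row(row, t1, t2):
    """Flag a pixel as an edge point when both the first- and
    second-derivative differences to its laterally adjacent pixel
    exceed the first and second preset thresholds respectively."""
    first = np.gradient(row.astype(float))   # first processing values
    second = np.gradient(first)              # second processing values
    edges = []
    for x in range(len(row) - 1):            # compare with right-hand neighbour
        if abs(first[x] - first[x + 1]) > t1 and abs(second[x] - second[x + 1]) > t2:
            edges.append(x)
    return edges
```

Connecting the flagged points across rows would then yield the edge line used by the rejecting module.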
Further, the initial picture is obtained by the following modules:
the first calculation module is used for calculating the distance between the video frames according to the time difference value among the video frames in the track video obtained in advance and the similarity of the image color characteristics among the video frames;
and the segmentation module is used for segmenting the trace video by adopting a hierarchical clustering method according to the distance between the video frames to obtain an initial picture.
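A simplified sketch of the segmentation module's hierarchical clustering. The single-linkage merging of temporally adjacent clusters and the distance threshold are assumptions, since the text does not detail the clustering procedure; `distance` can be any pairwise frame-distance function such as s(k_i, k_j) above.

```python
def hierarchical_segments(frames, distance, threshold):
    """Agglomeratively merge temporally adjacent clusters of video frames
    while the smallest inter-cluster frame distance stays below threshold."""
    clusters = [[f] for f in frames]
    merged = True
    while merged and len(clusters) > 1:
        merged = False
        for a in range(len(clusters) - 1):
            # single-linkage distance between neighbouring clusters
            d = min(distance(x, y) for x in clusters[a] for y in clusters[a + 1])
            if d < threshold:
                clusters[a] += clusters.pop(a + 1)
                merged = True
                break
    return clusters
```

Each resulting cluster corresponds to one segment of the trace video, from which an initial picture can be taken.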
Further, the first calculating module includes: a second calculating module, configured to input the time difference between the video frames in the pre-obtained trace video and the chi-square distance between the video frames into the following calculation formula to obtain the distance between the video frames:
wherein the trace video comprises q video frames, q being greater than 1; k_i is the i-th video frame in the trace video, k_j is the j-th video frame in the trace video, i is greater than or equal to 1 and less than or equal to q, and j is greater than or equal to 1 and less than or equal to q; s(k_i, k_j) is the distance between the i-th video frame and the j-th video frame; χ²(k_i, k_j) is the chi-square distance of the color histogram between the i-th video frame and the j-th video frame; w_1 is the preset chi-square distance; r is a preset positive integer; |i-j| represents the time difference between the i-th video frame and the j-th video frame; max(0, r-|i-j|) represents the maximum value between 0 and r-|i-j|; and r-|i-j| is a natural number greater than 0.
Further, the whereabouts video is acquired by:
the second judging module is used for continuously shooting the video of the target person for multiple times, and judging whether the total time length of the video of the target person obtained by continuous multiple shooting reaches the preset time length or not in the process of shooting the video of the target person;
the splicing module is used for stopping shooting of the whereabouts video if the total time length of the whereabouts video obtained by continuous repeated shooting in the process of shooting the whereabouts video reaches the preset time length, and splicing the whereabouts video of the target person shot continuously repeated times into one whereabouts video to obtain the spliced whereabouts video;
and the second determining module is used for determining the spliced whereabouts video as the obtained whereabouts video.
Further, before the first deriving module 701, the background removing apparatus further includes:
the image processing device comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring a first target pixel value corresponding to each pixel point of an initial image, a second target pixel value corresponding to each pixel point of a target person background and a pixel point sum, wherein the pixel points of the image comprise the pixel points of the target person and the pixel points of the target person background, and the pixel point sum is the total number of the pixel points of the image or the total number of the pixel points of the target person background;
And the third calculation module is used for calculating the pixel value corresponding to each pixel point of the picture by adopting a preset calculation method according to the first target pixel value, the second target pixel value and the sum of the pixel points to obtain the calculated pixel value corresponding to the pixel point.
For specific limitations of the background removing device, reference may be made to the above limitation of the background removing method, and no further description is given here. The modules in the background removing device can be all or partially realized by software, hardware and a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 7. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is used for storing the data related to the background rejection method. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of context culling.
In one embodiment, a computer device is provided, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the computer program to implement the steps of the background culling method of the above embodiment, such as steps S10 to S40 shown in fig. 2. Alternatively, the processor may implement the functions of the modules/units of the background removing device in the above embodiment when executing the computer program, for example, the functions of the first deriving module 701 to the removing module 704 shown in fig. 6. In order to avoid repetition, a description thereof is omitted.
In one embodiment, a computer readable storage medium is provided, on which a computer program is stored, where the computer program, when executed by a processor, implements the background removing method in the method embodiments, or implements the functions of each module/unit of the background removing device in the device embodiments. In order to avoid repetition, a description thereof is omitted. Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-volatile computer readable storage medium, which when executed may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the various embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention.

Claims (8)

1. A background rejection method, characterized in that the background rejection method comprises:
respectively carrying out first-order derivation on pixel values corresponding to all pixel points in the initial picture obtained in advance to obtain first processing values of the pixel values corresponding to all the pixel points, wherein the pixel values corresponding to the pixel points and the first processing values of the pixel values corresponding to the pixel points have a one-to-one correspondence;
Respectively carrying out second order derivation on the first processing values of the pixel values corresponding to the pixel points to obtain second processing values of the pixel values corresponding to the pixel points, wherein the first processing values of the pixel values corresponding to the pixel points and the second processing values of the pixel values corresponding to the pixel points have a one-to-one correspondence;
if the first processing value of the pixel value corresponding to the pixel point meets a preset first condition and the second processing value of the pixel value corresponding to the pixel point meets a preset second condition, determining the pixel point as an edge point between a target person and a target person background in the picture;
removing the target person backgrounds in the initial pictures according to edge lines to obtain target pictures, wherein the edge lines are formed by connecting edge points between target persons in the initial pictures and the target person backgrounds;
the first processing value of the pixel value corresponding to the pixel point meets a preset first condition, and the second processing value of the pixel value corresponding to the pixel point meets a preset second condition: the first absolute value is larger than a first preset threshold value, and the second absolute value is larger than a second preset threshold value, wherein the first absolute value is the absolute value of the difference value between the first processing value of the pixel value corresponding to the pixel point and the first processing value of the pixel value corresponding to the preset pixel point which is transversely adjacent to the pixel point, and the second absolute value is the absolute value of the difference value between the second processing value of the pixel value corresponding to the pixel point and the second processing value of the pixel value corresponding to the preset pixel point which is transversely adjacent to the pixel point.
2. The background rejection method according to claim 1, wherein the initial picture is obtained by:
calculating the distance between each video frame according to the time difference value between each video frame in the track video and the similarity of the image color characteristics between each video frame;
and dividing the trace video by adopting a hierarchical clustering method according to the distance between the video frames to obtain the initial picture.
3. The method of removing background according to claim 2, wherein the calculating the distance between each video frame based on the time difference between each video frame in the obtained trace video and the similarity of the image color features between each video frame is: inputting the time difference value between each video frame and the chi-square distance between each video frame in the obtained track video into the following calculation formula to obtain the distance between each video frame:
wherein the track video comprises q video frames, the q being greater than 1; k_i is the i-th video frame in the track video, k_j is the j-th video frame in the track video, the i being greater than or equal to 1 and less than or equal to the q, and the j being greater than or equal to 1 and less than or equal to the q; s(k_i, k_j) is the distance between the i-th video frame and the j-th video frame; χ²(k_i, k_j) is the chi-square distance of the color histogram between the i-th video frame and the j-th video frame; w_1 is a preset chi-square distance; r is a preset positive integer; |i-j| represents the time difference between the i-th video frame and the j-th video frame; max(0, r-|i-j|) represents the maximum value between 0 and r-|i-j|; and r-|i-j| is a natural number greater than 0.
4. The background rejection method according to claim 2, wherein the whereabouts video is acquired by:
continuously shooting the trace video of the target person for multiple times, and judging whether the total time length of the trace video obtained by continuous multiple shooting reaches a preset time length in the process of shooting the trace video;
if the total time length of the trace video obtained by continuous repeated shooting in the process of shooting the trace video reaches the preset time length, stopping shooting of the trace video, and splicing the trace video of the target person shot continuously for multiple times into one trace video to obtain a spliced trace video;
and determining the spliced whereabouts video as the obtained whereabouts video.
5. The background rejection method according to any one of claims 1 to 4, wherein, before the first-order derivation is performed on pixel values corresponding to respective pixel points in the initial picture obtained in advance, the background rejection method further comprises:
acquiring a first target pixel value corresponding to each pixel of the initial picture, a second target pixel value corresponding to each pixel of the target person background and a pixel sum, wherein the pixel of the initial picture comprises the pixel of the target person and the pixel of the target person background, and the pixel sum is the total number of the pixels of the picture or the total number of the pixels of the target person background;
and calculating the pixel value corresponding to each pixel point of the picture by adopting a preset algorithm according to the first target pixel value, the second target pixel value and the pixel point sum, so as to obtain the calculated pixel value corresponding to the pixel point.
6. A background rejection apparatus, comprising:
the first derivation module is used for respectively performing first-order derivation on pixel values corresponding to all pixel points in the initial picture obtained in advance to obtain first processing values of the pixel values corresponding to all the pixel points, wherein the pixel values corresponding to the pixel points and the first processing values of the pixel values corresponding to the pixel points have a one-to-one correspondence;
The second derivation module is used for respectively performing second order derivation on the first processing values of the pixel values corresponding to the pixel points to obtain second processing values of the pixel values corresponding to the pixel points, wherein the first processing values of the pixel values corresponding to the pixel points and the second processing values of the pixel values corresponding to the pixel points have a one-to-one correspondence;
a determining module, configured to determine the pixel point as an edge point between a target person and a target person background in the picture if a first processing value of a pixel value corresponding to the pixel point meets a preset first condition and a second processing value of a pixel value corresponding to the pixel point meets a preset second condition;
the rejecting module is used for rejecting the target person background in the initial picture according to an edge line to obtain a target picture, wherein the edge line is formed by connecting edge points between the target person in each initial picture and the target person background;
the determining module includes: the first judging module is used for judging whether the first absolute value is larger than a first preset threshold value or not and whether the second absolute value is larger than a second preset threshold value or not, wherein the first absolute value is the absolute value of the difference value between the first processing value of the pixel value corresponding to the pixel point and the first processing value of the pixel value corresponding to the preset pixel point which is adjacent to the pixel point in the transverse direction, and the second absolute value is the absolute value of the difference value between the second processing value of the pixel value corresponding to the pixel point and the second processing value of the pixel value corresponding to the preset pixel point which is adjacent to the pixel point in the transverse direction.
7. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the background culling method according to any one of claims 1 to 5 when the computer program is executed.
8. A computer readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the steps of the background culling method according to any one of claims 1 to 5.
CN201810872355.8A 2018-08-02 2018-08-02 Background eliminating method, device, computer equipment and storage medium Active CN109308704B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810872355.8A CN109308704B (en) 2018-08-02 2018-08-02 Background eliminating method, device, computer equipment and storage medium
PCT/CN2018/106379 WO2020024394A1 (en) 2018-08-02 2018-09-19 Background elimination method and device, computer device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810872355.8A CN109308704B (en) 2018-08-02 2018-08-02 Background eliminating method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109308704A CN109308704A (en) 2019-02-05
CN109308704B true CN109308704B (en) 2024-01-16

Family

ID=65226067

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810872355.8A Active CN109308704B (en) 2018-08-02 2018-08-02 Background eliminating method, device, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN109308704B (en)
WO (1) WO2020024394A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110502205B (en) * 2019-08-29 2023-08-01 百度在线网络技术(北京)有限公司 Picture display edge processing method and device, electronic equipment and readable storage medium
CN110765935A (en) * 2019-10-22 2020-02-07 上海眼控科技股份有限公司 Image processing method, image processing device, computer equipment and readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR19980028903A (en) * 1996-10-24 1998-07-15 배순훈 Target tracking device and method of video telephone
CN105719274A (en) * 2014-12-10 2016-06-29 全视技术有限公司 Edge Detection System And Methods
WO2016131300A1 (en) * 2015-07-22 2016-08-25 中兴通讯股份有限公司 Adaptive cross-camera cross-target tracking method and system
CN106056532A (en) * 2016-05-20 2016-10-26 深圳市奥拓电子股份有限公司 Method and device of removing background images
CN106534951A (en) * 2016-11-30 2017-03-22 北京小米移动软件有限公司 Method and apparatus for video segmentation
WO2017116808A1 (en) * 2015-12-30 2017-07-06 Ebay, Inc. Background removal
CN108038869A (en) * 2017-11-20 2018-05-15 江苏省特种设备安全监督检验研究院 Passenger falls down to the ground behavioral value method in a kind of lift car

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005100176A (en) * 2003-09-25 2005-04-14 Sony Corp Image processor and its method


Also Published As

Publication number Publication date
WO2020024394A1 (en) 2020-02-06
CN109308704A (en) 2019-02-05

Similar Documents

Publication Publication Date Title
CN110569721B (en) Recognition model training method, image recognition method, device, equipment and medium
KR102122348B1 (en) Method and device for face liveness detection
CN109034078B (en) Training method of age identification model, age identification method and related equipment
EP3579180A1 (en) Image processing method and apparatus, electronic device and non-transitory computer-readable recording medium for selective image enhancement
EP3648448A1 (en) Target feature extraction method and device, and application system
CN108337505B (en) Information acquisition method and device
WO2020233397A1 (en) Method and apparatus for detecting target in video, and computing device and storage medium
CN110059666B (en) Attention detection method and device
CN103795920A (en) Photo processing method and device
TWI721786B (en) Face verification method, device, server and readable storage medium
CN113496208B (en) Video scene classification method and device, storage medium and terminal
CN104751164A (en) Method and system for capturing movement trajectory of object
CN109308704B (en) Background eliminating method, device, computer equipment and storage medium
CN110769262B (en) Video image compression method, system, equipment and storage medium
CN110766077A (en) Method, device and equipment for screening sketch in evidence chain image
CN113286084A (en) Terminal image acquisition method and device, storage medium and terminal
CN110223219B (en) 3D image generation method and device
CN110163183B (en) Target detection algorithm evaluation method and device, computer equipment and storage medium
CN112330618A (en) Image offset detection method, device and storage medium
CN110399823B (en) Subject tracking method and apparatus, electronic device, and computer-readable storage medium
CN108334811B (en) Face image processing method and device
CN113553990B (en) Method and device for tracking and identifying multiple faces, computer equipment and storage medium
CN115239551A (en) Video enhancement method and device
CN112329729B (en) Small target ship detection method and device and electronic equipment
CN110838134B (en) Target object statistical method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant