CN110674787A - Video decompression method and system based on Hog feature and lgb classifier - Google Patents
- Publication number
- CN110674787A (application CN201910952402.4A)
- Authority
- CN
- China
- Prior art keywords
- frame
- image
- video
- background
- picture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/42—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
- H04N19/423—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation characterised by memory arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Signal Processing (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Health & Medical Sciences (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Biology (AREA)
- General Health & Medical Sciences (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The invention discloses a video decompression method and system based on the Hog feature and an lgb classifier, relating to the technical field of video decompression. The scheme is: convert a monitoring video of an unmanned convenience store into multi-frame images; select the first frame image of the original monitoring video, detect pedestrians with the Hog features and the lgb classifier, remove the pedestrian regions, and form a background image without moving objects; detect pedestrians with the Hog features and the lgb classifier, find the positions of the pedestrians, crop them out as pedestrian pictures, and save and name the pictures; compare each frame image of the original video against the background image to obtain difference values, store the residual data in a linked list, and compress it; take out the background picture, and decode and restore the background of each frame according to the residual data in the linked list; take out the pedestrian pictures in the linked list and overlay them on each frame's background picture, completing the decoding and restoration of all frame images; and splice all the overlaid frame images into a video according to the bit rate and code rate of the original video, completing video restoration.
Description
Technical Field
The invention relates to the technical field of video decompression, in particular to a video decompression method and a video decompression system based on Hog characteristics and an lgb classifier.
Background
Video monitoring is an important component of a safety precaution system, and a traditional monitoring system comprises a front-end camera, a transmission cable and a video monitoring platform.
An unmanned convenience store is one in which all or part of the in-store operation processes are handled automatically by technical means, with manual intervention reduced or eliminated. The monitoring video in an unmanned convenience store runs continuously for 24 hours a day; the resulting footage occupies a huge amount of storage space, and a large amount of funds is consumed for storage. Video compression can reduce the space occupied by the video and save storage resources. The background scene of the monitoring video of an unmanned convenience store is substantially constant, with the main changes resulting from the movement of customers entering the store.
Traditional video compression algorithms, including MPEG-4, H.264 and H.265, mostly follow a predictive coding structure; as hard-coded schemes, they cannot adapt to growing requirements and diversified video use cases.
Machine-learning-based methods have brought transformative development to the field of video compression. Among them, pedestrian detection uses computer vision technology to judge whether a pedestrian exists in an image or video sequence and to give an accurate localization. The technology can be combined with pedestrian tracking, pedestrian re-identification and similar techniques, and is applied in artificial intelligence systems, vehicle driving assistance systems, intelligent robots, intelligent video monitoring, human behavior analysis, intelligent transportation and other fields.
The Histogram of Oriented Gradients (HOG) feature is a descriptor used in computer vision and image processing for object detection; it is constructed by computing and accumulating histograms of gradient orientations over local regions of an image.
Disclosure of Invention
The invention aims to overcome the defects of traditional compression technology, and provides a video decompression method and system based on the Hog feature and the lgb classifier that effectively solve the problem of compressing the large volume of highly repetitive video streams collected by monitoring cameras in unmanned convenience stores.
Firstly, the invention provides a video decompression method based on the Hog feature and the lgb classifier, and the technical scheme adopted for solving the technical problems is as follows:
a video decompression method based on Hog characteristics and lgb classifiers is implemented by the following steps:
S10, converting the monitoring video of the unmanned convenience store into multi-frame images through OpenCV;
S20, selecting the first frame image of the original monitoring video, detecting pedestrians by using the Hog features and an lgb classifier, removing the pedestrian regions, and forming a background image without moving objects;
S30, detecting pedestrians by using the Hog features and the lgb classifier, finding the positions of the pedestrians, cropping them out as pedestrian pictures to be stored, and encoding each pedestrian picture with its frame number label and position information in the video image;
S40, comparing each frame image of the original video against the background image to obtain difference values, storing the residual data in a linked list, and compressing;
S50, taking out the background picture, and decoding and restoring the background of each frame according to the residual data in the linked list;
S60, taking out the pedestrian pictures in the linked list, overlaying them on each frame's background picture according to the position information encoded in the image file names, and completing the decoding and restoration of all frame images;
and S70, splicing all the overlaid frame images into a video according to the bit rate and code rate of the original video by using OpenCV, completing video restoration.
In step S30, the pedestrian detection is performed by using the Hog features and the lgb classifier, and the specific implementation process is as follows:
firstly, collecting training samples and dividing the training samples into positive samples and negative samples, extracting the hog features of the positive samples and the negative samples, and putting the features into an lgb classifier for training to form a required model;
then, generating a detector from the trained model, and running the detector over the negative samples to obtain hard samples;
and finally, extracting the hog features of the hard samples and training them together with the features from the first step to obtain the final detector.
In step S30, the saved pedestrian picture is named n_x_y, where n is the frame number, x records the horizontal and vertical coordinates of the upper-left corner of the detection bounding box, and y records those of the lower-right corner, so that the position information of the saved pedestrian picture is recorded in its name.
In step S60, the pedestrian pictures in the linked list are taken out, the frame number n and the bounding-box coordinate information x and y are read from each picture's file name, and the pedestrian picture is overlaid at that position on the background picture of the corresponding frame.
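The decode-side overlay can be sketched as follows. The patent fixes the name as n_x_y but not the exact separators, so a hypothetical format like `15_40-60_120-200` (frame 15, top-left corner (40, 60), bottom-right corner (120, 200), extension stripped) is assumed here:

```python
# Hypothetical sketch: decode an n_x_y crop name and paste the crop
# back onto the restored background frame. The separator format is an
# assumption; the patent only specifies the n_x_y naming idea.
import numpy as np


def parse_crop_name(name):
    """Return (frame_number, (x1, y1), (x2, y2)) from an n_x_y name."""
    n, tl, br = name.split("_")
    x1, y1 = map(int, tl.split("-"))
    x2, y2 = map(int, br.split("-"))
    return int(n), (x1, y1), (x2, y2)


def overlay_crop(background, crop, top_left):
    """Paste a pedestrian crop onto a copy of the background frame."""
    x, y = top_left
    h, w = crop.shape[:2]
    out = background.copy()
    out[y:y + h, x:x + w] = crop
    return out
```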
In step S40, each frame image of the original video is compared against the background image to obtain difference values, and the residual data is stored in a linked list, which specifically includes:
comparing the overall chroma of each frame image of the original video against the background image, and determining the brightness difference, saturation difference, grayscale difference and histogram difference between each frame image and the background image from the obtained color difference;
and storing the determined contents into a linked list in a node form according to a processing sequence, wherein the structure of the linked list sequentially comprises a background picture, an overall difference parameter of each frame of image of the video, a cut picture and a starting and stopping frame number of the video from front to back.
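A minimal numpy sketch of the per-frame residual parameters is given below; the patent does not specify the exact formulas, so simple definitions over RGB arrays (channel-mean grayscale, max-minus-min saturation, a 16-bin grayscale histogram) are assumed:

```python
# Hedged sketch of the per-frame residual node contents: brightness,
# saturation, grayscale and histogram differences against the background.
# The concrete formulas are assumptions, not taken from the patent.
import numpy as np


def frame_residual(frame, background):
    f = frame.astype(np.float64)
    b = background.astype(np.float64)
    gray_f = f.mean(axis=2)                  # per-pixel grayscale
    gray_b = b.mean(axis=2)
    sat_f = f.max(axis=2) - f.min(axis=2)    # crude per-pixel saturation
    sat_b = b.max(axis=2) - b.min(axis=2)
    hist_f, _ = np.histogram(gray_f, bins=16, range=(0, 255))
    hist_b, _ = np.histogram(gray_b, bins=16, range=(0, 255))
    return {
        "brightness": gray_f.mean() - gray_b.mean(),
        "saturation": sat_f.mean() - sat_b.mean(),
        "gray": np.abs(gray_f - gray_b).mean(),
        "histogram": int(np.abs(hist_f - hist_b).sum()),
    }
```

One such dictionary per frame would be appended, in processing order, as a node of the linked list described above, after the background picture and before the cropped pictures and the start/stop frame numbers.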
In step S50, the background picture is taken out, and the background of each frame of picture is decoded and restored according to the residual data in the linked list, and the specific implementation steps include:
first, the background picture is taken from the first node of the linked list,
and then, taking out the overall difference parameter information of each frame of image behind the background image, rendering the background image according to the brightness difference value, the saturation difference value, the gray level difference and the histogram difference of each frame of image and the background image, and restoring the background image into an image close to the original video.
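The rendering step can be sketched as follows; as a simplification, only the stored brightness offset is applied to the background picture (the patent also uses the saturation, grayscale and histogram differences):

```python
# Simplified sketch of the per-frame background rendering: apply a
# frame's stored brightness offset to the shared background picture.
# Using only brightness is an assumption to keep the example minimal.
import numpy as np


def render_frame_background(background, brightness_diff):
    """Approximate one frame's background from the shared background."""
    out = background.astype(np.float64) + brightness_diff
    return np.clip(out, 0, 255).astype(np.uint8)
```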
Secondly, the invention provides a video decompression system based on the Hog feature and the lgb classifier, and the technical scheme adopted for solving the technical problems is as follows:
a Hog feature and lgb classifier based video decompression system, the system based on Hog feature and lgb classifier, comprising:
the video splitting module is used for converting a monitoring video of the unmanned convenience store into a multi-frame image through an OpenCV technology;
the background processing module is used for selecting a first frame image of an original monitoring video, detecting pedestrians by using the Hog characteristics and the lgb classifier, removing the positions of the pedestrians and forming a background image without moving objects;
the pedestrian processing module is used for detecting pedestrians by using the Hog features and the lgb classifier, finding the positions of the pedestrians, cropping them out as pedestrian pictures to be stored, and encoding each pedestrian picture with its frame number label and position information in the video image;
the comparison storage module is used for comparing each frame image contained in the original video with the background image to obtain a difference value, storing residual data in a linked list and compressing the residual data;
the decoding and restoring module is used for taking out the background picture and decoding and restoring the background of each frame of picture according to the residual data in the linked list;
the pedestrian image covering module is used for taking out the pedestrian images in the linked list, covering each frame of background picture according to the position information of the image file code, and finishing the decoding and restoring of all the frame images;
and the splicing and restoring module is used for splicing all the covered frame images into a video according to the bit rate and the code rate of the original video by utilizing the OpenCV technology to finish video restoration.
Specifically, when the comparison storage module compares each frame image of the original video against the background image to obtain difference values and stores the residual data in the linked list,
firstly, comparing the integral chroma of each frame of image and background image contained in an original video, and determining the brightness difference value, saturation difference value, gray difference and histogram difference of each frame of image and the background image according to the obtained color difference;
and then, storing the determined contents into a linked list in a node form according to a processing sequence, wherein the structure of the linked list sequentially comprises a background picture, an overall difference parameter of each frame of image of the video, a cut picture and a starting and stopping frame number of the video from front to back.
Specifically, when the decoding restoration module decodes and restores the background of each frame of image according to the residual data in the linked list,
first, the background picture is taken from the first node of the linked list,
and then, taking out the overall difference parameter information of each frame of image behind the background image, rendering the background image according to the brightness difference value, the saturation difference value, the gray level difference and the histogram difference of each frame of image and the background image, and restoring the background image into an image close to the original video.
Specifically, the pedestrian processing module names the stored pedestrian picture n_x_y, where n is the frame number, x records the horizontal and vertical coordinates of the upper-left corner of the detection bounding box, and y records those of the lower-right corner, so that the position information of the stored pedestrian picture is recorded in its name;
the pedestrian image covering module takes out the pedestrian pictures in the linked list, reads the frame number n and the bounding-box coordinate information x and y from each picture's file name, and overlays the pedestrian picture at that position on the background picture of the corresponding frame.
Compared with the prior art, the video decompression method and system based on the Hog characteristics and the lgb classifier have the beneficial effects that:
1) The invention first uses the Hog feature and the lgb classifier to optimize pedestrian detection, stores pedestrians and static objects in the monitoring video separately in a linked list, and crops and encodes the detected pedestrians in the video, so that only a small number of pictures and a few very small videos need to be stored, greatly saving storage space and improving compression performance; the cropped pedestrian pictures are then overlaid and stitched back, restoring the original video to the greatest possible extent;
2) The invention is suitable for intelligent shopping venues operating without store staff; shelves and goods placement in such venues are relatively fixed, and pedestrians and moving objects are easy to detect, which facilitates the later separation of pedestrians and background pictures in the monitoring video, so the problem of compressing the large volume of highly repetitive video streams collected by monitoring cameras in unmanned convenience stores can be effectively solved.
Drawings
FIG. 1 is a flow chart of a method according to a first embodiment of the present invention;
fig. 2 is a connection block diagram of the second embodiment of the present invention.
Reference numerals in the drawings:
1. video splitting module; 2. background processing module; 3. pedestrian processing module; 4. comparison storage module; 5. decoding restoration module; 6. pedestrian image covering module; 7. splicing restoration module.
Detailed Description
To make the technical solutions, the technical problems to be solved, and the technical effects of the present invention clearer, the technical solutions of the present invention are described below in detail with reference to specific embodiments; obviously, the described embodiments are only some, not all, of the embodiments of the present invention.
The first embodiment is as follows:
with reference to fig. 1, the present embodiment proposes a video decompression method based on the Hog feature and lgb classifier, and the implementation process of the method includes:
S10, converting the monitoring video of the unmanned convenience store into multi-frame images through OpenCV;
S20, selecting the first frame image of the original monitoring video, detecting pedestrians by using the Hog features and an lgb classifier, removing the pedestrian regions, and forming a background image without moving objects;
S30, detecting pedestrians by using the Hog features and the lgb classifier, finding the positions of the pedestrians, cropping them out as pedestrian pictures to be stored, and encoding each pedestrian picture with its frame number label and position information in the video image;
S40, comparing each frame image of the original video against the background image to obtain difference values, storing the residual data in a linked list, and compressing;
S50, taking out the background picture, and decoding and restoring the background of each frame according to the residual data in the linked list;
S60, taking out the pedestrian pictures in the linked list, overlaying them on each frame's background picture according to the position information encoded in the image file names, and completing the decoding and restoration of all frame images;
and S70, splicing all the overlaid frame images into a video according to the bit rate and code rate of the original video by using OpenCV, completing video restoration.
In step S30 of this embodiment, the method for detecting pedestrians using the Hog features and lgb classifier specifically performs the following steps:
firstly, collecting training samples and dividing the training samples into positive samples and negative samples, extracting the hog features of the positive samples and the negative samples, and putting the features into an lgb classifier for training to form a required model;
then, generating a detector from the trained model, and running the detector over the negative samples to obtain hard samples;
and finally, extracting the hog features of the hard samples and training them together with the features from the first step to obtain the final detector.
In step S30 of the present embodiment, the saved pedestrian picture is named n_x_y, where n is the frame number, x records the horizontal and vertical coordinates of the upper-left corner of the detection bounding box, and y records those of the lower-right corner, so that the position information of the saved pedestrian picture is recorded in its name.
In step S60 of this embodiment, the pedestrian pictures in the linked list are taken out, the frame number n and the bounding-box coordinate information x and y are read from each picture's file name, and the pedestrian picture is overlaid at that position on the background picture of the corresponding frame.
In step S40 of this embodiment, each frame image of the original video is compared against the background image to obtain difference values, and the residual data is stored in the linked list, which specifically includes the following operations:
comparing the integral chroma of each frame image and the background image contained in the original video, and determining the brightness difference value, the saturation difference value, the gray difference and the histogram difference of each frame image and the background image according to the obtained color difference;
and storing the determined contents into a linked list in a node form according to a processing sequence, wherein the structure of the linked list sequentially comprises a background picture, an overall difference parameter of each frame of image of the video, a cut picture and a starting and stopping frame number of the video from front to back.
In step S50 of this embodiment, a background picture is taken out, and the background of each frame of image is decoded and restored according to residual data in the linked list, and the specific implementation steps include:
first, the background picture is taken from the first node of the linked list,
and then, taking out the overall difference parameter information of each frame of image behind the background image, rendering the background image according to the brightness difference value, the saturation difference value, the gray level difference and the histogram difference of each frame of image and the background image, and restoring the background image into an image close to the original video.
The embodiment stores pedestrians and static objects in the surveillance video in a linked list mode, so that the storage space can be greatly saved, the compression performance is improved, and the captured pedestrian pictures are covered on the background pictures and are spliced and restored, so that the effect of restoring the original video can be achieved to the greatest extent.
Example two:
with reference to fig. 2, the present embodiment provides a video decompression system based on the Hog feature and lgb classifier, comprising:
the video splitting module 1 is used for converting a monitoring video of an unmanned convenience store into a multi-frame image through an OpenCV technology;
the background processing module 2 is used for selecting a first frame image of an original monitoring video, detecting pedestrians by using the Hog characteristics and the lgb classifier, removing the positions of the pedestrians and forming a background image without moving objects;
the pedestrian processing module 3 is used for detecting pedestrians by using the Hog features and the lgb classifier, finding the positions of the pedestrians, cropping them out as pedestrian pictures to be stored, and encoding each pedestrian picture with its frame number label and position information in the video image;
the comparison storage module 4 is used for comparing each frame image contained in the original video with the background image to make a difference value, storing residual data in a linked list and compressing the residual data;
the decoding restoration module 5 is used for taking out the background picture and decoding and restoring the background of each frame of picture according to the residual data in the linked list;
the pedestrian image covering module 6 is used for taking out the pedestrian images in the linked list, covering each frame of background picture according to the position information of the image file code, and completing the decoding and restoring of all the frame images;
the splicing and restoring module 7 is configured to splice all the covered frame images into a video according to the bit rate and the code rate of the original video by using an OpenCV technique, so as to complete video restoration.
In this embodiment, when the comparison storage module 4 compares each frame image of the original video against the background image to obtain difference values and stores the residual data in the linked list,
firstly, comparing the integral chroma of each frame of image and background image contained in an original video, and determining the brightness difference value, saturation difference value, gray difference and histogram difference of each frame of image and the background image according to the obtained color difference;
and then, storing the determined contents into a linked list in a node form according to a processing sequence, wherein the structure of the linked list sequentially comprises a background picture, an overall difference parameter of each frame of image of the video, a cut picture and a starting and stopping frame number of the video from front to back.
In the embodiment, when the decoding restoration module 5 decodes and restores the background of each frame of image according to the residual data in the linked list,
first, the background picture is taken from the first node of the linked list,
and then, taking out the overall difference parameter information of each frame of image behind the background image, rendering the background image according to the brightness difference value, the saturation difference value, the gray level difference and the histogram difference of each frame of image and the background image, and restoring the background image into an image close to the original video.
In this embodiment, the pedestrian picture stored by the pedestrian processing module 3 is named n_x_y, where n is the frame number, x records the horizontal and vertical coordinates of the upper-left corner of the detection bounding box, and y records those of the lower-right corner, so that the position information of the stored pedestrian picture is recorded in its name;
the pedestrian image covering module 6 takes out the pedestrian pictures in the linked list, reads the frame number n and the bounding-box coordinate information x and y from each picture's file name, and overlays the pedestrian picture at that position on the background picture of the corresponding frame.
This embodiment is suitable for intelligent shopping venues operating without store staff; based on the Hog feature and the lgb classifier, the pedestrian pictures and background pictures in the monitoring video are stored separately, achieving efficient compression and high-fidelity restoration of the video.
In summary, the video decompression method and system based on the Hog feature and the lgb classifier can effectively solve the problem of compressing the large volume of highly repetitive video streams collected by monitoring cameras in unmanned convenience stores.
The principles and embodiments of the present invention have been described in detail using specific examples, which are provided only to aid understanding of the core technical content of the present invention. Any improvements and modifications made by those skilled in the art on the basis of the above embodiments, without departing from the principle of the present invention, shall fall within the protection scope of the present invention.
Claims (10)
1. A video decompression method based on Hog characteristics and lgb classifiers is characterized in that the method is realized by the following steps:
S10, converting the monitoring video of the unmanned convenience store into multi-frame images through OpenCV;
S20, selecting the first frame image of the original monitoring video, detecting pedestrians by using the Hog features and an lgb classifier, removing the pedestrian regions, and forming a background image without moving objects;
S30, detecting pedestrians by using the Hog features and the lgb classifier, finding the positions of the pedestrians, cropping them out as pedestrian pictures to be stored, and encoding each pedestrian picture with its frame number label and position information in the video image;
S40, comparing each frame image of the original video against the background image to obtain difference values, storing the residual data in a linked list, and compressing;
S50, taking out the background picture, and decoding and restoring the background of each frame according to the residual data in the linked list;
S60, taking out the pedestrian pictures in the linked list, overlaying them on each frame's background picture according to the position information encoded in the image file names, and completing the decoding and restoration of all frame images;
and S70, splicing all the overlaid frame images into a video according to the bit rate and code rate of the original video by using OpenCV, completing video restoration.
2. The method for decompressing video based on the Hog features and lgb classifier according to claim 1, wherein in step S30, the Hog features and lgb classifier are used for pedestrian detection, and the specific implementation process is as follows:
firstly, collecting training samples and dividing the training samples into positive samples and negative samples, extracting the hog features of the positive samples and the negative samples, and putting the features into an lgb classifier for training to form a required model;
then, generating a detector from the trained model, and running the detector over the negative samples to obtain hard samples;
and finally, extracting the hog features of the hard samples and training them together with the features from the first step to obtain the final detector.
3. The Hog feature and lgb classifier based video decompression method according to claim 1 or 2, wherein in step S30, the saved pedestrian picture is named n_x_y, where n is the frame number, x records the horizontal and vertical coordinates of the upper-left corner of the detection bounding box, and y records those of the lower-right corner, so that the position information of the saved pedestrian picture is recorded in its name.
4. The Hog feature and lgb classifier based video decompression method according to claim 3, wherein in step S60, the pedestrian images are taken out of the linked list, the frame number n and the bounding-box coordinates x and y are parsed from each picture's file name, and the pedestrian picture is overlaid on the background picture of the corresponding frame.
5. The method of claim 1, wherein in step S40, comparing each frame of the original video with the background image and storing the residual data in a linked list specifically comprises:
comparing the overall chroma of each frame with that of the background image, and determining, from the resulting color difference, the brightness difference, saturation difference, grayscale difference and histogram difference between each frame and the background image;
and storing the determined contents as nodes in a linked list in processing order, the structure of the linked list being, from front to back: the background picture, the overall difference parameters of each video frame, the cut-out pictures, and the start and end frame numbers of the video.
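A minimal sketch of claim 5's storage step, using NumPy arrays for frames. The claim does not fix the exact color-space formulas behind the four difference parameters, so the choices below (channel-mean grayscale, max-minus-min saturation, 16-bin histograms) are assumptions for illustration.

```python
import numpy as np

class Node:
    """Singly linked list node; the list holds the background first,
    then one parameter node per frame, as in claim 5."""
    def __init__(self, payload):
        self.payload = payload
        self.next = None

def frame_params(frame, background):
    """Overall difference parameters of one frame vs. the background."""
    f, b = frame.astype(np.float64), background.astype(np.float64)
    gray_f, gray_b = f.mean(axis=2), b.mean(axis=2)
    sat = lambda img: (img.max(axis=2) - img.min(axis=2)).mean()
    hist_f, _ = np.histogram(gray_f, bins=16, range=(0, 255))
    hist_b, _ = np.histogram(gray_b, bins=16, range=(0, 255))
    return {
        "brightness": gray_f.mean() - gray_b.mean(),
        "saturation": sat(f) - sat(b),
        "gray": np.abs(gray_f - gray_b).mean(),
        "hist": int(np.abs(hist_f - hist_b).sum()),
    }

def build_list(background, frames):
    """Background node first, then one node per frame, in processing order."""
    head = Node(("background", background))
    tail = head
    for i, frame in enumerate(frames):
        tail.next = Node(("params", i, frame_params(frame, background)))
        tail = tail.next
    return head
```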
6. The method of claim 5, wherein in step S50, taking out the background picture and decoding and restoring the background of each frame from the residual data in the linked list specifically comprises:
first, taking the background picture from the first node of the linked list;
and then, taking out the overall difference parameters of each frame stored after the background picture, rendering the background picture according to each frame's brightness difference, saturation difference, grayscale difference and histogram difference, and restoring it into an image close to the corresponding frame of the original video.
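Claim 6's rendering step could look like the sketch below. Re-applying only the stored brightness difference is a simplification: the claim leaves open how the saturation, grayscale and histogram differences enter the rendering, so extending the function to those terms is left as an assumption.

```python
import numpy as np

def restore_frame(background, params):
    """Approximate one original frame by re-applying its stored overall
    difference parameters to the background picture (brightness only here)."""
    out = background.astype(np.float64) + params["brightness"]
    return np.clip(out, 0, 255).astype(np.uint8)
```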
7. A video decompression system based on a Hog feature and lgb classifier, comprising:
the video splitting module is used for converting a monitoring video of the unmanned convenience store into a multi-frame image through an OpenCV technology;
the background processing module is used for selecting a first frame image of an original monitoring video, detecting pedestrians by using the Hog characteristics and the lgb classifier, removing the positions of the pedestrians and forming a background image without moving objects;
the pedestrian processing module is used for detecting pedestrians with the Hog features and the lgb classifier, finding the positions of the pedestrians, cutting them out and saving them as pedestrian pictures, and encoding each picture's frame number label and position in the video image into its file name;
the comparison storage module is used for comparing each frame of the original video with the background image to compute the difference, storing the residual data in a linked list, and compressing it;
the decoding and restoring module is used for taking out the background picture and decoding and restoring the background of each frame of picture according to the residual data in the linked list;
the pedestrian image covering module is used for taking out the pedestrian images in the linked list, overlaying them on each frame's background picture according to the position information encoded in the image file names, and completing the decoding and restoration of all frame images;
and the splicing and restoring module is used for splicing all the overlaid frame images back into a video with OpenCV at the frame rate and bit rate of the original video, completing the video restoration.
8. The Hog feature and lgb classifier based video decompression system according to claim 7, wherein the comparison storage module compares each frame of the original video with the background image and stores the residual data in a linked list as follows:
firstly, comparing the overall chroma of each frame with that of the background image, and determining, from the resulting color difference, the brightness difference, saturation difference, grayscale difference and histogram difference between each frame and the background image;
and then, storing the determined contents as nodes in a linked list in processing order, the structure of the linked list being, from front to back: the background picture, the overall difference parameters of each video frame, the cut-out pictures, and the start and end frame numbers of the video.
9. The Hog feature and lgb classifier based video decompression system according to claim 8, wherein the decoding and restoring module decodes and restores the background of each frame from the residual data in the linked list as follows:
first, taking the background picture from the first node of the linked list;
and then, taking out the overall difference parameters of each frame stored after the background picture, rendering the background picture according to each frame's brightness difference, saturation difference, grayscale difference and histogram difference, and restoring it into an image close to the corresponding frame of the original video.
10. The Hog feature and lgb classifier based video decompression system according to claim 7, wherein the pedestrian processing module names each saved pedestrian picture n _ x _ y, where n is the frame number, x is the (horizontal, vertical) coordinate pair of the upper-left corner of the detection bounding box, and y is the coordinate pair of its lower-right corner, so that the file name records the position information of the saved pedestrian picture;
and the pedestrian image covering module takes out the pedestrian images in the linked list, parses the frame number n and the bounding-box coordinates x and y from each picture's file name, and overlays the pedestrian picture on the background picture of the corresponding frame.
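The covering step shared by claims 4 and 10 amounts to pasting each decoded crop back at its recorded corner. A sketch with NumPy array slicing follows; the (column, row) ordering of the stored corner is an assumption, since the claims do not fix it.

```python
import numpy as np

def overlay(background, crop, top_left):
    """Paste a pedestrian crop onto a copy of the background frame at the
    upper-left corner recovered from its n_x_y file name."""
    x0, y0 = top_left            # assumed (column, row) ordering
    h, w = crop.shape[:2]
    frame = background.copy()
    frame[y0:y0 + h, x0:x0 + w] = crop
    return frame
```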
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910952402.4A CN110674787A (en) | 2019-10-09 | 2019-10-09 | Video decompression method and system based on Hog feature and lgb classifier |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110674787A true CN110674787A (en) | 2020-01-10 |
Family
ID=69080965
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910952402.4A Pending CN110674787A (en) | 2019-10-09 | 2019-10-09 | Video decompression method and system based on Hog feature and lgb classifier |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110674787A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111953940A (en) * | 2020-08-06 | 2020-11-17 | 中标慧安信息技术股份有限公司 | Uploading processing method and system for monitoring video |
CN112348967A (en) * | 2020-10-29 | 2021-02-09 | 国网浙江省电力有限公司 | Seamless fusion method for three-dimensional model and real-time video of power equipment |
CN115866334A (en) * | 2023-02-27 | 2023-03-28 | 成都华域天府数字科技有限公司 | Data processing method for cutting and associating content in video flow |
CN116431857A (en) * | 2023-06-14 | 2023-07-14 | 山东海博科技信息系统股份有限公司 | Video processing method and system for unmanned scene |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101887524A (en) * | 2010-07-06 | 2010-11-17 | 湖南创合制造有限公司 | Pedestrian detection method based on video monitoring |
CN102810161A (en) * | 2012-06-07 | 2012-12-05 | 江苏物联网研究发展中心 | Method for detecting pedestrians in crowding scene |
CN102842045A (en) * | 2012-08-03 | 2012-12-26 | 华侨大学 | Pedestrian detection method based on combined features |
CN105023001A (en) * | 2015-07-17 | 2015-11-04 | 武汉大学 | Selective region-based multi-pedestrian detection method and system |
CN109512442A (en) * | 2018-12-21 | 2019-03-26 | 杭州电子科技大学 | EEG fatigue state classification method based on LightGBM |
CN109815974A (en) * | 2018-12-10 | 2019-05-28 | 清影医疗科技(深圳)有限公司 | Cell pathology slide classification method, system, device and storage medium |
CN109889981A (en) * | 2019-03-08 | 2019-06-14 | 中南大学 | Positioning method and system based on binary classification |
CN109948455A (en) * | 2019-02-22 | 2019-06-28 | 中科创达软件股份有限公司 | Abandoned object detection method and device |
CN109951710A (en) * | 2019-03-26 | 2019-06-28 | 中国民航大学 | Airport apron surveillance video compression method and system based on deep learning |
CN110046601A (en) * | 2019-04-24 | 2019-07-23 | 南京邮电大学 | For the pedestrian detection method of crossroad scene |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110674787A (en) | Video decompression method and system based on Hog feature and lgb classifier | |
KR101942808B1 (en) | Apparatus for CCTV Video Analytics Based on Object-Image Recognition DCNN | |
EP3291558A1 (en) | Video coding and decoding methods and apparatus | |
CN113158738B (en) | Port environment target detection method, system, terminal and readable storage medium based on attention mechanism | |
CN111901604B (en) | Video compression method, video reconstruction method, corresponding devices, camera and video processing equipment | |
CN112488071B (en) | Method, device, electronic equipment and storage medium for extracting pedestrian features | |
WO2012139228A1 (en) | Video-based detection of multiple object types under varying poses | |
CN111147768A (en) | Intelligent monitoring video review method for improving review efficiency | |
CN110751630A (en) | Power transmission line foreign matter detection method and device based on deep learning and medium | |
CN112287875A (en) | Abnormal license plate recognition method, device, equipment and readable storage medium | |
CN116129291A (en) | Unmanned aerial vehicle animal husbandry-oriented image target recognition method and device | |
CN110717070A (en) | Video compression method and system for indoor monitoring scene | |
US11455785B2 (en) | System and method for use in object detection from video stream | |
CN101321241A (en) | Interactive video moving object elimination method | |
CN116052090A (en) | Image quality evaluation method, model training method, device, equipment and medium | |
CN106339684A (en) | Pedestrian detection method, device and vehicle | |
CN113486856A (en) | Driver irregular behavior detection method based on semantic segmentation and convolutional neural network | |
CN112418127A (en) | Video sequence coding and decoding method for video pedestrian re-identification | |
Dahirou et al. | Motion Detection and Object Detection: Yolo (You Only Look Once) | |
CN115914676A (en) | Real-time monitoring comparison method and system for ultra-high-definition video signals | |
Moura et al. | A spatiotemporal motion-vector filter for object tracking on compressed video | |
CN114972840A (en) | Momentum video target detection method based on time domain relation | |
Low et al. | Frame Based Object Detection--An Application for Traffic Monitoring | |
CN113239931A (en) | Logistics station license plate recognition method | |
CN112200840A (en) | Moving object detection system in visible light and infrared image combination |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20200110 |