CN112822496B - Video analysis method and device - Google Patents


Info

Publication number
CN112822496B
Authority
CN
China
Prior art keywords
image
images
compressing
video
pixel points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110114966.8A
Other languages
Chinese (zh)
Other versions
CN112822496A (en)
Inventor
田杰
李伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202110114966.8A
Publication of CN112822496A
Application granted
Publication of CN112822496B
Status: Active

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/42 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N 19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/169 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/182 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a pixel

Abstract

Embodiments of the invention provide a video analysis method and a video analysis device, which address the problem in the prior art that a single video analysis device can analyze only one channel of monitoring data, so that video analysis devices are wasted and their utilization rate is low. The video analysis method comprises the following steps: acquiring a plurality of first images that respectively correspond to a plurality of video channels at the same moment; compressing and splicing the plurality of first images based on a preset rule to obtain a second image, wherein the second image contains the same number of pixel points as the first image that contains the largest number of pixel points among the plurality of first images, and each sub-region forming the second image corresponds to the compressed first image of one video channel; and analyzing the second image, and determining the detection result of the corresponding video channel based on the analysis result of each sub-region.

Description

Video analysis method and device
Technical Field
The invention relates to the technical field of video monitoring, in particular to a video analysis method and a video analysis device.
Background
Video monitoring is an important technical means of modern security management and is widely applied in many scenarios. For example, a monitoring camera is installed at an intersection with a pedestrian crossing; the camera collects image data of passing pedestrians and vehicles and sends it to a video analysis device, which automatically performs detection and analysis and takes a corresponding action, such as issuing an alarm message, according to the result. In the prior art, each channel of monitoring cameras usually corresponds to one video analysis device. As the number of monitored scenes grows, the number of monitoring cameras (that is, the number of channels of monitoring data) grows correspondingly, so more video analysis devices must be deployed, which wastes video analysis devices and leaves their utilization rate low.
Therefore, the one-to-one correspondence between monitoring data and video analysis devices in the prior art wastes video analysis devices and reduces their utilization rate.
Disclosure of Invention
Embodiments of the invention provide a video analysis method and a video analysis device, which address the problem in the prior art that a single video analysis device can analyze only one channel of monitoring data, so that video analysis devices are wasted and their utilization rate is low.
In a first aspect, an embodiment of the present invention provides a video analysis method, where the method includes:
acquiring a plurality of first images respectively corresponding to a plurality of video channels at the same time;
compressing and splicing the plurality of first images based on a preset rule to obtain a second image, wherein the second image contains the same number of pixel points as the first image that contains the largest number of pixel points among the plurality of first images, and each sub-region forming the second image corresponds to the compressed first image of one video channel;
and analyzing the second image, and determining the detection result of the corresponding video channel based on the analysis result of each subregion.
In the embodiment of the present invention, the analysis capability available per unit time may be considered to be related to the number of pixel points contained in the image to be analyzed; that is, if the per-unit-time analysis capability is to remain unchanged, the number of pixel points contained in the image to be analyzed cannot exceed a preset threshold, for example the maximum number of pixel points contained in a single image to be analyzed. Therefore, in the embodiment of the present invention, the plurality of first images acquired from the plurality of video channels at the same moment may be compressed and spliced into a second image. Since the second image contains the same number of pixel points as the first image containing the largest number of pixel points among the plurality of first images, analyzing the second image does not exceed the analysis capability available per unit time, and the detection results of the individual video channels can be determined at the same time. In other words, while keeping the analysis capability available per unit time unchanged, the method compresses and splices the plurality of first images acquired by multiple video channels at the same moment into a second image and analyzes this newly formed second image, which is equivalent to simultaneously analyzing the images acquired by the multiple video channels at that moment, thereby improving the reusability of a single video analysis device.
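For illustration only, the following is a minimal sketch of this compress-and-splice step, assuming frames arrive as numpy arrays keyed by hypothetical channel identifiers and using OpenCV for resizing; the two-column tiling and the helper names are assumptions, not a layout prescribed by the method.

import cv2
import numpy as np

def build_second_image(first_images):
    # first_images: dict mapping a (hypothetical) channel id to an H x W x 3 uint8 frame.
    # The canvas takes the shape of the input frame with the most pixel points,
    # so the second image's pixel budget matches that largest first image.
    ref = max(first_images.values(), key=lambda im: im.shape[0] * im.shape[1])
    budget_h, budget_w = ref.shape[:2]
    canvas = np.zeros((budget_h, budget_w, 3), dtype=np.uint8)

    cols = 2                                  # simple 2-column grid; illustrative, not prescribed
    rows = -(-len(first_images) // cols)      # ceiling division
    cell_h, cell_w = budget_h // rows, budget_w // cols

    regions = {}                              # channel id -> (y, x, h, w) sub-region in the canvas
    for idx, (channel, img) in enumerate(sorted(first_images.items())):
        tile = cv2.resize(img, (cell_w, cell_h), interpolation=cv2.INTER_AREA)
        y, x = (idx // cols) * cell_h, (idx % cols) * cell_w
        canvas[y:y + cell_h, x:x + cell_w] = tile
        regions[channel] = (y, x, cell_h, cell_w)
    return canvas, regions

Analyzing the canvas once and intersecting each detection's coordinates with the recorded regions then attributes every result back to a single video channel.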
Optionally, if the number of the pixel points included in the plurality of first images is the same, compressing and splicing the plurality of first images based on a preset rule, and obtaining a second image includes:
compressing the number of pixel points contained in the plurality of first images in different proportions based on the view ranges of the image acquisition devices respectively corresponding to the plurality of first images, and splicing the plurality of compressed first images into the second image, wherein the view ranges include a far view and a near view; or,
compressing the number of pixel points contained in the plurality of first images in different proportions based on the importance levels of the installation positions of the image acquisition devices respectively corresponding to the plurality of first images, and splicing the plurality of compressed first images into the second image.
In the embodiment of the present invention, when the plurality of first images from the plurality of video channels contain the same number of pixel points, the types of the corresponding image capturing devices can be considered to be the same. When image capturing devices of the same type are installed with different view ranges or at positions of different importance levels, the first images from the different video channels are compressed in different proportions according to those view ranges or importance levels, on the premise that the number of pixel points contained in the second image is unchanged. In this way, the plurality of first images can be analyzed by the same video analysis device while the accuracy of the video analysis is preserved.
Optionally, compressing, in different proportions, the number of pixel points included in the plurality of first images based on the view ranges of the image capturing devices corresponding to the plurality of first images respectively includes:
determining the compression ratio of each first image based on a pre-stored correspondence between video channels and view ranges and a pre-stored correspondence between view ranges and pixel compression ratios, wherein a far view corresponds to a far-view image, a near view corresponds to a near-view image, and the compression ratio of the far-view image is smaller than that of the near-view image;
compressing the first image based on the determined compression ratio.
In the embodiment of the invention, some of the plurality of first images may be far-view images and some may be near-view images. In a far-view image the target to be detected is usually small, whereas in a near-view image it is usually large. When the far-view and near-view images are compressed, the compression ratio of the far-view image can be made smaller than that of the near-view image, so that the compressed far-view image retains more pixel points than the compressed near-view image. This ensures a sufficiently accurate analysis result for the far-view image without unduly affecting the analysis result of the near-view image.
Optionally, compressing, in different proportions, the number of pixel points included in the plurality of first images based on the importance levels of the installation positions of the image capturing devices corresponding to the plurality of first images includes:
determining the compression ratio of each first image based on a pre-stored correspondence between video channels and importance levels and a pre-stored correspondence between importance levels and pixel compression ratios, wherein the importance level and the pixel compression ratio are negatively correlated;
compressing the first image based on the determined compression ratio.
In the embodiment of the present invention, in the plurality of first images, a part of the plurality of first images may come from an image capturing device with a relatively important installation position, and a part of the plurality of first images may come from an image capturing device with a relatively less important installation position, so that the first images captured by the image capturing device with the relatively important installation position need to be subjected to emphasis analysis. Therefore, when a plurality of first images are compressed, the compression ratio can be determined according to the importance levels of the first images. For example, the higher the importance level, the smaller the compression ratio; on the contrary, the lower the importance level is, the larger the compression ratio is, so that the first image with the higher importance level can reserve more pixel points compared with the first image with the lower importance level, thereby preferentially ensuring that the first image with the higher importance level has a more accurate analysis result.
Optionally, if the number of the pixel points included in the plurality of first images is different, compressing and splicing the plurality of first images based on a preset rule, and obtaining a second image includes:
and splicing a plurality of first images containing different numbers of pixel points, and compressing the number of the pixel points contained in the spliced plurality of first images in equal proportion to obtain the second image.
In the embodiment of the invention, when the numbers of pixel points contained in the first images from the video channels differ, the types of the corresponding image acquisition devices differ, and image acquisition devices of different types are used in different monitoring scenes. Therefore, the plurality of first images containing different numbers of pixel points can be spliced directly and then compressed in equal proportion to obtain the second image. This ensures both that the plurality of first images can be analyzed by the same video analysis device and that the accuracy of the video analysis is maintained.
Optionally, after determining the detection result of the corresponding video channel based on the analysis result of each sub-region, the method further includes:
determining whether the detection result of each video channel is consistent with a preset detection result for triggering an alarm message;
and if the current detection result is consistent with the preset detection result, outputting an alarm message corresponding to the event represented by the current detection result.
In the embodiment of the present invention, it may be considered that a detection result for triggering an alarm message is preset, and if the detection result of each current video channel is consistent with the preset detection result, it may be indicated that a certain event indicated by the preset detection result occurs, and at this time, an alarm message corresponding to the event may be output, so that relevant personnel may take corresponding measures according to the alarm message.
In a second aspect, an embodiment of the present invention provides a video analysis apparatus, where the apparatus includes:
an acquisition unit, used for acquiring a plurality of first images respectively corresponding to a plurality of video channels at the same moment;
the processing unit is used for compressing and splicing the plurality of first images based on a preset rule to obtain a second image, wherein the number of pixel points contained in the second image is the same as that of the first images containing the largest number of pixel points in the plurality of first images, and each subarea forming the second image corresponds to the first image after each video channel is compressed;
and the analysis unit is used for analyzing the second image and determining the detection result of the corresponding video channel based on the analysis result of each sub-region.
Optionally, if the number of the pixel points included in the plurality of first images is the same, the processing unit is specifically configured to:
compressing the number of pixel points contained in the plurality of first images in different proportions based on the view ranges of the image acquisition devices respectively corresponding to the plurality of first images, and splicing the plurality of compressed first images into the second image, wherein the view ranges include a far view and a near view; or,
compressing the number of pixel points contained in the plurality of first images in different proportions based on the importance levels of the installation positions of the image acquisition devices respectively corresponding to the plurality of first images, and splicing the plurality of compressed first images into the second image.
Optionally, the processing unit is specifically configured to:
determining the compression ratio of each first image based on a pre-stored correspondence between video channels and view ranges and a pre-stored correspondence between view ranges and pixel compression ratios, wherein a far view corresponds to a far-view image, a near view corresponds to a near-view image, and the compression ratio of the far-view image is smaller than that of the near-view image;
compressing the first image based on the determined compression ratio.
Optionally, the processing unit is specifically configured to:
determining the compression ratio of each first image based on a pre-stored correspondence between video channels and importance levels and a pre-stored correspondence between importance levels and pixel compression ratios, wherein the importance level and the pixel compression ratio are negatively correlated;
compressing the first image based on the determined compression ratio.
Optionally, if the number of the pixel points included in the plurality of first images is different, the processing unit is specifically configured to:
and splicing a plurality of first images containing different numbers of pixel points, and compressing the number of the pixel points contained in the spliced plurality of first images in equal proportion to obtain the second image.
Optionally, the apparatus further comprises:
the determining unit is used for determining whether the detection result of each video channel is consistent with the preset detection result for triggering the alarm message;
and the output unit is used for outputting the alarm message corresponding to the event represented by the current detection result.
In a third aspect, an embodiment of the present invention provides a video analysis apparatus, where the apparatus includes a processor and a memory, and the processor, when executing a computer program stored in the memory, implements the steps of the method according to the embodiment of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the method as described in the embodiment of the first aspect.
Drawings
Fig. 1 is a schematic flow chart of a video analysis method according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a second image according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating a second image according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating a second image according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a video analysis apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a video analysis apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments.
In the prior art, each video channel corresponds to one video analysis device, and as the number of monitored scenes increases, more video analysis devices need to be arranged, which causes waste of the video analysis devices, and the utilization rate of a single video analysis device is low.
In view of this, the embodiment of the present invention provides a video analysis method, which, under the condition that the analysis capability provided by a video analysis device is not changed in a unit time, obtains a second image by compressing and splicing a plurality of first images acquired by multiple video channels at the same time, and analyzes the newly formed second image, that is, it can be considered that images acquired by the plurality of video channels at the same time are simultaneously analyzed, so that the reusability of a single video analysis device is improved.
The technical solution provided by the embodiment of the present invention is described below with reference to the accompanying drawings. Referring to fig. 1, the present invention provides a video analysis method, which includes the following steps:
step 101: and acquiring a plurality of first images respectively corresponding to a plurality of video channels at the same time.
In the embodiment of the present invention, one image capturing device may be considered to correspond to one video channel, and each video channel is used to transmit the images, for example the first images, captured by that device. The video analysis device may be considered to be connected to the video channels so as to obtain the first images from them, analyze them, and perform a corresponding action, such as issuing an alarm message, according to the analysis result. In order to improve the reusability of a single video analysis device, the images captured by a plurality of image capturing devices should be analyzed by that single device. To do so, the plurality of first images captured by the plurality of image capturing devices at the same moment must first be acquired.
As a possible implementation, a plurality of first images corresponding to a plurality of video channels at the same time may be acquired.
Step 102: and compressing and splicing the plurality of first images based on a preset rule to obtain a second image, wherein the number of pixel points contained in the second image is the same as that of the first images containing the largest number of pixel points in the plurality of first images, and each subarea forming the second image corresponds to the first image after each video channel is compressed.
In the embodiment of the present invention, it is considered that the analysis capability of a single video analysis device per unit time is limited; for example, a single video analysis device can analyze only one first image per unit time. If, after acquiring the plurality of first images corresponding to the plurality of video channels, a single video analysis device were to analyze all of them simultaneously, its analysis capability would inevitably be exceeded. Therefore, in the embodiment of the present invention, before the plurality of first images are analyzed, they may be preprocessed so that the image information contained in the preprocessed first images together amounts only to that of one first image in its original state. A single video analysis device can then analyze the plurality of first images at the same time while its own analysis capability remains unchanged, which improves the reusability of a single video analysis device.
As a possible implementation, the plurality of first images may be compressed and stitched based on a preset rule to form the second image, in which each sub-region corresponds to one compressed first image. That is, from the position at which an object to be analyzed appears in the second image, it can be determined which first image, and thus which video channel and which image capturing device, that object comes from.
It should be understood that first images acquired by different types of image acquisition devices may contain different numbers of pixel points; the second image may therefore contain the same number of pixel points as the first image containing the largest number of pixel points among the plurality of first images, so that the overall analysis performance of a single video analysis device per unit time is fully utilized.
For example, the first image acquired by image acquisition device A contains 4096 × 2160 pixel points, that is, 4096 pixel points in the width direction and 2160 in the height direction; the first image acquired by image acquisition device B contains 1920 × 1080 pixel points; and the first image acquired by image acquisition device C contains 1280 × 720 pixel points. A single video analysis device that fully utilizes its analysis performance per unit time can be considered able to analyze a first image containing 4096 × 2160 pixel points. Therefore, if these three first images are compressed and spliced, the resulting second image can contain at most 4096 × 2160 pixel points, for example 4096 pixel points in the width direction and 2160 in the height direction. Of course, provided the total number of pixel points in the second image is unchanged, the numbers of pixel points in the width and height directions may be varied according to the actual situation, and are not particularly limited here.
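As a hedged numeric illustration of the example above (the particular per-tile scales and positions are assumptions; many other layouts would satisfy the same 4096 × 2160 budget):

budget_w, budget_h = 4096, 2160

tiles = {
    "A": (4096 // 2, 2160 // 2),   # the 4096 x 2160 frame halved per side -> 2048 x 1080
    "B": (1920, 1080),             # kept at its native resolution
    "C": (1280, 720),              # kept at its native resolution
}
positions = {"A": (0, 0), "B": (2048, 0), "C": (0, 1080)}   # (x, y) of each tile's top-left corner

for name, (w, h) in tiles.items():
    x, y = positions[name]
    assert x + w <= budget_w and y + h <= budget_h, f"tile {name} would overflow the canvas"

total_tile_pixels = sum(w * h for w, h in tiles.values())
print(total_tile_pixels, "<=", budget_w * budget_h)   # 5207040 <= 8847360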
In the following, a detailed description is given of how to compress and stitch the plurality of first images under the condition that the number of the pixels included in each of the plurality of first images is the same or different.
In the first case: the number of pixel points contained in each of the plurality of first images is the same.
In the embodiment of the present invention, if the number of the pixel points included in the plurality of first images is the same, the types of the image capturing devices corresponding to the plurality of first images may be considered to be the same. When the same type of image acquisition device is applied to different scenes, different compression ratios should be adopted for different scenes on the premise of ensuring that the number of pixels contained in the formed second image is not changed, so that the accuracy of video analysis can be ensured while a plurality of first images can be analyzed based on the same video analysis device.
As a possible implementation manner, the view ranges of the image capturing devices corresponding to the plurality of first images may differ. For example, if image capturing device A corresponds to a near view, an object (for example, a person or a vehicle) appearing in its first image is generally large, that is, it occupies many pixel points; if image capturing device B corresponds to a far view, an object appearing in its first image is generally small, that is, it occupies few pixel points. Therefore, the plurality of first images can be compressed in different proportions with the view range as the reference, and the compressed first images can then be spliced into the second image.
Specifically, referring to fig. 2, when an image capturing device corresponds to a near view, the first image it captures may be regarded as a near-view image; when it corresponds to a far view, the first image may be regarded as a far-view image. When compressing the plurality of first images, the compression ratio of each first image may be determined based on the pre-stored correspondence between video channels and view ranges and the pre-stored correspondence between view ranges and pixel compression ratios; for example, the compression ratio of the far-view image is smaller than that of the near-view image. The plurality of first images are then compressed based on the determined compression ratios. Compared with the compressed near-view image, the compressed far-view image retains more pixel points, so that the analysis result of the far-view image remains accurate without unduly affecting the analysis result of the near-view image.
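A minimal sketch of this lookup, assuming illustrative channel identifiers, view-range labels and ratio values, and reading the pixel compression ratio as a per-side scale-down divisor (so a smaller ratio keeps more pixel points):

import cv2

# Illustrative correspondence tables; the channel ids, view-range labels and
# ratio values are assumptions.
CHANNEL_TO_VIEW = {"ch1": "far", "ch2": "near", "ch3": "near"}
VIEW_TO_RATIO = {"far": 1.5, "near": 3.0}   # far view: smaller ratio, more pixel points retained

def compress_by_view(channel, first_image):
    # Look up the ratio for this channel's view range and shrink the frame accordingly.
    ratio = VIEW_TO_RATIO[CHANNEL_TO_VIEW[channel]]
    h, w = first_image.shape[:2]
    return cv2.resize(first_image, (int(w / ratio), int(h / ratio)),
                      interpolation=cv2.INTER_AREA)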
As another possible embodiment, the importance levels of the installation positions of the image capturing apparatuses corresponding to the plurality of first images may differ. For example, image capturing apparatus A is installed on street 1, which has heavy traffic, and image capturing apparatus B is installed on street 2, where vehicles are sparse; the monitoring importance of street 1 is obviously higher than that of street 2. Therefore, the plurality of first images can be compressed in different proportions with the importance level of the installation position as the reference, and the compressed first images can then be spliced into the second image.
Specifically, referring to fig. 3, when compressing the plurality of first images, the compression ratio of each first image may be determined based on the pre-stored correspondence between video channels and importance levels and the pre-stored correspondence between importance levels and pixel compression ratios. For example, the higher the importance level (e.g., the first level), the smaller the compression ratio; conversely, the lower the importance level (e.g., the second level), the larger the compression ratio. Compared with a first image of lower importance level, a first image of higher importance level retains more pixel points, so a more accurate analysis result is preferentially ensured for the more important first image.
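A corresponding sketch for the importance-level case, again with assumed channel ids, level numbers and ratio values; the only property carried over from the text is the negative correlation between importance level and compression ratio.

# Level 1 is taken here to be the most important (the "first level"): the more
# important the channel, the smaller its ratio and the more pixel points it keeps.
CHANNEL_TO_LEVEL = {"ch1": 1, "ch2": 2, "ch3": 3}
LEVEL_TO_RATIO = {1: 1.5, 2: 2.5, 3: 4.0}

def ratio_for(channel):
    return LEVEL_TO_RATIO[CHANNEL_TO_LEVEL[channel]]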
In the second case: the number of pixel points contained in each of the plurality of first images is different.
In the embodiment of the present invention, if the numbers of pixel points contained in the plurality of first images differ, the types of the corresponding image capturing devices can be considered to differ, image capturing devices of different types being used in different monitoring scenes. Therefore, the plurality of first images can be spliced directly and then compressed to form the second image, which ensures both that the plurality of first images can be analyzed by the same video analysis device and that the accuracy of the video analysis is maintained.
As a possible implementation manner, referring to fig. 4, the plurality of first images containing different numbers of pixel points are spliced, and the pixel points contained in the spliced first images are then compressed in equal proportion to obtain the second image.
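A minimal sketch of this stitch-then-compress case, assuming numpy frames of differing resolutions; the side-by-side layout, the zero padding and the choice of the largest input as the pixel budget are assumptions consistent with the description above.

import math
import cv2
import numpy as np

def stitch_then_compress(first_images):
    # first_images: dict of hypothetical channel id -> H x W x 3 frame; resolutions may differ.
    frames = list(first_images.values())
    max_h = max(f.shape[0] for f in frames)

    # Pad every frame to a common height so a simple horizontal concatenation is possible.
    padded = [np.pad(f, ((0, max_h - f.shape[0]), (0, 0), (0, 0))) for f in frames]
    mosaic = np.concatenate(padded, axis=1)

    # Scale the mosaic down uniformly so its pixel count does not exceed the
    # pixel count of the largest input frame (the pixel budget of the second image).
    budget = max(f.shape[0] * f.shape[1] for f in frames)
    scale = math.sqrt(budget / (mosaic.shape[0] * mosaic.shape[1]))
    new_size = (int(mosaic.shape[1] * scale), int(mosaic.shape[0] * scale))
    return cv2.resize(mosaic, new_size, interpolation=cv2.INTER_AREA)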
Step 103: and analyzing the second image, and determining the detection result of the corresponding video channel based on the analysis result of each subregion.
In the embodiment of the invention, after the plurality of first images are compressed and spliced to form the second image, the second image can be analyzed, so that the analysis result corresponding to each video channel is determined.
As a possible implementation, the analysis may be performed based on the second image, and the detection result of the corresponding video channel may be determined based on the analysis result of each sub-region constituting the second image.
In some embodiments, after determining the detection result of the corresponding video channel based on the analysis result of each sub-region, it may be determined whether a corresponding action needs to be performed according to the detection result of each video channel. For example, the action may be to send an alarm message to the associated staff.
As a possible implementation manner, it may be considered that a detection result for triggering an alarm message is preset, and if the detection result of each current video channel is consistent with the preset detection result, it may indicate that a certain event indicated by the preset detection result occurs, and at this time, an alarm message corresponding to the event may be output, so that relevant personnel may take corresponding measures according to the alarm message.
It should be understood that whether to send out alarm information may be determined according to the detection result of a single video channel or according to the detection results of at least two video channels, depending on the specific event type; this is not particularly limited here.
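A minimal sketch of routing detections back to channels and raising alarms, assuming detections are reported as (x, y, label) coordinates in the second image; the label names, the detection tuple format and the send_alarm callback are hypothetical.

# Preset detection results that should trigger an alarm (illustrative labels).
PRESET_ALARM_RESULTS = {"pedestrian_in_roadway", "vehicle_running_red_light"}

def dispatch_alarms(detections, regions, send_alarm):
    # detections: list of (x, y, label) in second-image coordinates.
    # regions: channel id -> (y, x, h, w) sub-region recorded when the second image was built.
    for det_x, det_y, label in detections:
        for channel, (ry, rx, rh, rw) in regions.items():
            if rx <= det_x < rx + rw and ry <= det_y < ry + rh:
                if label in PRESET_ALARM_RESULTS:
                    send_alarm(channel, label)   # alarm message for the event on that channel
                break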
Referring to fig. 5, based on the same inventive concept, an embodiment of the present invention provides a video analysis apparatus, including: an acquisition unit 201, a processing unit 202 and an analysis unit 203.
An obtaining unit 201, configured to obtain a plurality of first images corresponding to a plurality of video channels at the same time respectively;
the processing unit 202 is configured to compress and splice a plurality of first images based on a preset rule to obtain a second image, where the number of pixels included in the second image is the same as that of the first images including the largest number of pixels in the plurality of first images, and each sub-region constituting the second image corresponds to the compressed first image of each video channel;
an analyzing unit 203, configured to analyze the second image and determine a detection result of the corresponding video channel based on an analysis result of each sub-region.
Optionally, if the number of the pixel points included in the plurality of first images is the same, the processing unit 202 is specifically configured to:
compressing the number of pixel points contained in the plurality of first images in different proportions based on the view ranges of the image acquisition devices respectively corresponding to the plurality of first images, and splicing the plurality of compressed first images into a second image, wherein the view ranges include a far view and a near view; or,
compressing the number of pixel points contained in the plurality of first images in different proportions based on the importance levels of the installation positions of the image acquisition devices respectively corresponding to the plurality of first images, and splicing the plurality of compressed first images into the second image.
Optionally, the processing unit 202 is further specifically configured to:
determining the compression ratio of each first image based on a pre-stored correspondence between video channels and view ranges and a pre-stored correspondence between view ranges and pixel compression ratios, wherein a far view corresponds to a far-view image, a near view corresponds to a near-view image, and the compression ratio of the far-view image is smaller than that of the near-view image;
the first image is compressed based on the determined compression ratio.
Optionally, the processing unit 202 is specifically configured to:
determining the compression ratio of each first image based on a pre-stored correspondence between video channels and importance levels and a pre-stored correspondence between importance levels and pixel compression ratios, wherein the importance level and the pixel compression ratio are negatively correlated;
the first image is compressed based on the determined compression ratio.
Optionally, if the number of the pixel points included in the plurality of first images is different, the processing unit 202 is specifically configured to:
and splicing a plurality of first images containing different numbers of pixel points, and compressing the number of the pixel points contained in the spliced plurality of first images in equal proportion to obtain a second image.
Optionally, the video analysis apparatus further includes:
the determining unit is used for determining whether the detection result of each video channel is consistent with the preset detection result for triggering the alarm message;
and the output unit is used for outputting the alarm message corresponding to the event represented by the current detection result.
Referring to fig. 6, based on the same inventive concept, an embodiment of the present invention provides a video analysis apparatus, which includes at least one processor 301, where the processor 301 is configured to execute a computer program stored in a memory, and implement the steps of the video analysis method shown in fig. 1 provided by the embodiment of the present invention.
Alternatively, the processor 301 may specifically be a central processing unit or an application-specific integrated circuit (ASIC), and may be one or more integrated circuits configured to control program execution.
Optionally, the apparatus may further comprise a memory 302 connected to the at least one processor 301; the memory 302 may comprise ROM, RAM and disk memory. The memory 302 is used for storing the data required by the processor 301 during operation, that is, instructions executable by the at least one processor 301, and the at least one processor 301 performs the method shown in fig. 1 by executing the instructions stored in the memory 302. There may be one or more memories 302. Although the memory 302 is shown in fig. 6, it should be understood that the memory 302 is not a mandatory functional module, and it is therefore drawn in fig. 6 with a dashed line.
The physical devices corresponding to the obtaining unit 201, the processing unit 202, and the analyzing unit 203 may be the processor 301. The apparatus may be used to perform the method provided by the embodiment shown in fig. 1. Therefore, regarding the functions that can be realized by the functional modules in the apparatus, reference may be made to the corresponding description in the embodiment shown in fig. 1, which is not repeated herein.
Embodiments of the present invention also provide a computer storage medium, where the computer storage medium stores computer instructions, and when the computer instructions are executed on a computer, the computer is caused to execute the method as described in fig. 1.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (9)

1. A method of video analysis, the method comprising:
acquiring a plurality of first images respectively corresponding to a plurality of video channels at the same moment;
compressing and splicing the plurality of first images based on a preset rule to obtain a second image, wherein the number of pixel points contained in the second image is the same as that of the first images containing the largest number of pixel points in the plurality of first images, and each subarea forming the second image corresponds to the first image after each video channel is compressed;
analyzing the second image, and determining a detection result of a corresponding video channel based on an analysis result of each subregion;
determining whether the detection result of each video channel is consistent with a preset detection result for triggering an alarm message;
and if the current detection result is consistent with the preset detection result, outputting an alarm message corresponding to the event represented by the current detection result.
2. The method of claim 1, wherein if the number of pixels included in the first images is the same, compressing and stitching the first images based on a preset rule to obtain a second image comprises:
compressing the number of pixel points contained in the plurality of first images in different proportions based on the view ranges of the image acquisition devices respectively corresponding to the plurality of first images, and splicing the plurality of compressed first images into the second image, wherein the view ranges include a far view and a near view; or,
compressing the number of pixel points contained in the plurality of first images in different proportions based on the importance levels of the installation positions of the image acquisition devices respectively corresponding to the plurality of first images, and splicing the plurality of compressed first images into the second image.
3. The method of claim 2, wherein compressing the number of pixels included in the plurality of first images in different proportions based on the field of view of the image capture device to which each of the plurality of first images corresponds comprises:
determining the compression ratio of each first image based on a pre-stored correspondence between video channels and view ranges and a pre-stored correspondence between view ranges and pixel compression ratios, wherein a far view corresponds to a far-view image, a near view corresponds to a near-view image, and the compression ratio of the far-view image is smaller than that of the near-view image;
compressing the first image based on the determined compression ratio.
4. The method of claim 2, wherein compressing the number of pixels included in the plurality of first images in different proportions based on the importance level of the installation location of the image capture device corresponding to each of the plurality of first images comprises:
determining the compression ratio of each first image based on the corresponding relationship between the pre-stored video channel and the importance level and the corresponding relationship between the importance level and the pixel compression ratio, wherein the importance level is negatively related to the pixel compression ratio;
compressing the first image based on the determined compression ratio.
5. The method of claim 1, wherein if the number of pixels included in the first images is different, compressing and stitching the first images based on a predetermined rule to obtain a second image comprises:
and splicing a plurality of first images containing different numbers of pixel points, and compressing the number of the pixel points contained in the spliced plurality of first images in equal proportion to obtain the second image.
6. A video analysis apparatus, characterized in that the apparatus comprises:
an acquisition unit, used for acquiring a plurality of first images respectively corresponding to a plurality of video channels at the same time;
the processing unit is used for compressing and splicing the plurality of first images based on a preset rule to obtain a second image, wherein the number of pixel points contained in the second image is the same as that of the first images containing the largest number of pixel points in the plurality of first images, and each subarea forming the second image corresponds to the first image after each video channel is compressed;
the analysis unit is used for analyzing the second image and determining the detection result of the corresponding video channel based on the analysis result of each subregion; determining whether the detection result of each video channel is consistent with a preset detection result for triggering an alarm message; and if the current detection result is consistent with the preset detection result, outputting an alarm message corresponding to the event represented by the current detection result.
7. The apparatus of claim 6, wherein if the number of pixels included in the plurality of first images is the same, the processing unit is specifically configured to:
compressing the number of pixel points contained in the plurality of first images in different proportions based on the view ranges of the image acquisition devices respectively corresponding to the plurality of first images, and splicing the plurality of compressed first images into the second image, wherein the view ranges include a far view and a near view; or,
compressing the number of pixel points contained in the plurality of first images in different proportions based on the importance levels of the installation positions of the image acquisition devices respectively corresponding to the plurality of first images, and splicing the plurality of compressed first images into the second image.
8. A video analysis apparatus, characterized in that the apparatus comprises at least one processor and a memory connected to the at least one processor, the at least one processor being adapted to carry out the steps of the method according to any one of claims 1-5 when executing a computer program stored in the memory.
9. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1-5.
CN202110114966.8A 2021-01-28 2021-01-28 Video analysis method and device Active CN112822496B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110114966.8A CN112822496B (en) 2021-01-28 2021-01-28 Video analysis method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110114966.8A CN112822496B (en) 2021-01-28 2021-01-28 Video analysis method and device

Publications (2)

Publication Number Publication Date
CN112822496A CN112822496A (en) 2021-05-18
CN112822496B true CN112822496B (en) 2023-03-17

Family

ID=75860066

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110114966.8A Active CN112822496B (en) 2021-01-28 2021-01-28 Video analysis method and device

Country Status (1)

Country Link
CN (1) CN112822496B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103731631A (en) * 2012-10-16 2014-04-16 华为软件技术有限公司 Method, device and system for transmitting video image
WO2018130016A1 (en) * 2017-01-10 2018-07-19 哈尔滨工业大学深圳研究生院 Parking detection method and device based on monitoring video
CN111583251A (en) * 2020-05-15 2020-08-25 国网浙江省电力有限公司信息通信分公司 Video image analysis method and device and electronic equipment

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105100677A (en) * 2014-05-21 2015-11-25 华为技术有限公司 Method for presenting video conference, devices for presenting video conference and system for presenting video conference
CN107770484A (en) * 2016-08-19 2018-03-06 杭州海康威视数字技术股份有限公司 A kind of video monitoring information generation method, device and video camera
CN111263208B (en) * 2018-11-30 2022-11-04 杭州海康威视数字技术股份有限公司 Picture synthesis method and device, electronic equipment and storage medium
CN111510731B (en) * 2019-01-31 2022-03-25 杭州海康威视数字技术股份有限公司 System and method for splicing traffic images
CN111836102B (en) * 2019-04-23 2023-03-24 杭州海康威视数字技术股份有限公司 Video frame analysis method and device
CN112017441B (en) * 2019-05-28 2022-03-08 浙江宇视科技有限公司 Vehicle traffic behavior detection method, device, equipment and storage medium
CN112073648B (en) * 2020-08-12 2023-02-17 深圳市捷视飞通科技股份有限公司 Video multi-picture synthesis method and device, computer equipment and storage medium
CN112188136A (en) * 2020-09-24 2021-01-05 高新兴科技集团股份有限公司 Method, system, storage medium and equipment for splicing and recording videos in real time in all-in-one mode

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103731631A (en) * 2012-10-16 2014-04-16 华为软件技术有限公司 Method, device and system for transmitting video image
WO2018130016A1 (en) * 2017-01-10 2018-07-19 哈尔滨工业大学深圳研究生院 Parking detection method and device based on monitoring video
CN111583251A (en) * 2020-05-15 2020-08-25 国网浙江省电力有限公司信息通信分公司 Video image analysis method and device and electronic equipment

Also Published As

Publication number Publication date
CN112822496A (en) 2021-05-18

Similar Documents

Publication Publication Date Title
US10943357B2 (en) Video based indoor leak detection
EP2549738B1 (en) Method and camera for determining an image adjustment parameter
CN111862228B (en) Occlusion detection method, system, computer device and readable storage medium
CN110738150B (en) Camera linkage snapshot method and device and computer storage medium
CN110059666B (en) Attention detection method and device
KR101974105B1 (en) Photographing system and method for increasing recognition rate of vehicle number
CN107240092A (en) A kind of image blur detection method and device
US9456127B2 (en) Blur detection method of images, monitoring device, and monitoring system
KR101697060B1 (en) Method of sening event and apparatus performing the same
CN110225236B (en) Method and device for configuring parameters for video monitoring system and video monitoring system
JP2020087312A (en) Behavior recognition device, behavior recognition method, and program
CA2860014C (en) Video based indoor leak detection
CN112822496B (en) Video analysis method and device
CN110913209B (en) Camera shielding detection method and device, electronic equipment and monitoring system
CN113014876A (en) Video monitoring method and device, electronic equipment and readable storage medium
KR20180089249A (en) Method for diagnosing fault of camera
CN112906651B (en) Target detection method and device
CN113452903B (en) Snapshot equipment, snap method and main control chip
CN115147752A (en) Video analysis method and device and computer equipment
CN112738387B (en) Target snapshot method, device and storage medium
KR101327256B1 (en) System and method of detecting vehicle using detecting shadow region of the vehicle by ptz camera
CN108363985B (en) Target object perception system testing method and device and computer readable storage medium
US10223592B2 (en) Method and associated apparatus for performing cooperative counting with aid of multiple cameras
CN113658229B (en) Method and device for determining abnormal object, storage medium and electronic device
KR102409857B1 (en) Apparatus and method for monitoring environment of vehicle

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant