CN107580228B - Monitoring video processing method, device and equipment - Google Patents



Publication number
CN107580228B
Authority
CN
China
Prior art keywords
video
decoding
picture
display
image
Prior art date
Legal status
Active
Application number
CN201710835206.XA
Other languages
Chinese (zh)
Other versions
CN107580228A (en)
Inventor
赵立峰 (Zhao Lifeng)
Current Assignee
Weihai Yuancheng Information Technology Co.,Ltd.
Original Assignee
Weihai Yuancheng Information Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Weihai Yuancheng Information Technology Co., Ltd.
Priority to CN201710835206.XA
Publication of CN107580228A
Application granted
Publication of CN107580228B

Abstract

The invention discloses a video processing method, apparatus and device. An acquired video stream is decoded twice, by basic decoding and by matrix decoding, to obtain a bottom-layer video and a top-layer video, and a video picture is played in a first display picture from the two layers. If some part of the video needs to be locally enlarged while the picture is playing, the top-layer video is enlarged and the result of the local enlargement is shown in a second display picture, while the surveillance video continues to play in the first display picture. A local region is thus enlarged while the surveillance video plays, without disturbing how the original display picture presents the video.

Description

Monitoring video processing method, device and equipment
Technical Field
The invention relates to the field of video surveillance, and in particular to a video processing method, apparatus and device.
Background
Video surveillance is a video application system in which capture devices such as cameras film real scenes in real time, transmit the live video back to a back-end monitoring center over various transmission links, and display the real scene of the remote monitoring point. With time, video surveillance has become ever more widely used.
However, a conventional video surveillance system can only play the video; if part of the picture must be displayed enlarged, the surveillance video has to be played back. In other words, in the prior art an object in the video image cannot be enlarged while the live surveillance picture is being displayed.
Disclosure of Invention
In view of this, embodiments of the present invention provide a video processing method, apparatus and device that generate a bottom-layer video and a top-layer video when decoding a video stream and display the video image from the two layers, thereby solving the prior-art problem that an object in the video image cannot be enlarged while the surveillance picture is displayed.
A method of video processing, the method comprising:
acquiring a video stream;
performing basic decoding on the video stream to obtain a bottom layer video;
decoding the video stream by adopting a preset matrix decoding method to obtain a top-layer video;
displaying a video picture in a first display picture based on the bottom layer video and the top layer video according to a preset display mode;
when a local enlargement instruction is detected, performing local enlargement processing on the top-layer video according to the instruction, and displaying the enlarged picture in a second display picture while the video picture continues to be displayed in the first display picture.
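The five method steps above can be sketched as an orchestration skeleton. This is an illustrative sketch only: all function names (`basic_decode`, `matrix_decode`, `display_first`, `enlarge`, `display_second`) are hypothetical placeholders for the steps named in the claims, not the claimed implementation, and here they are trivial stand-ins so the skeleton runs.

```python
# Skeleton of the claimed method; the bottom layer always drives the first
# display picture, and a zoom request pulls an enlarged region from the
# top layer into a second display picture.
def process_stream(stream, zoom_request=None):
    bottom = basic_decode(stream)            # step 2: bottom-layer video
    top = matrix_decode(stream)              # step 3: top-layer video
    first = display_first(bottom, top)       # step 4: first display picture
    second = None
    if zoom_request is not None:             # step 5: local enlargement
        second = display_second(enlarge(top, zoom_request))
    return first, second

# Trivial stand-ins (hypothetical, for illustration only):
basic_decode   = lambda s: ("bottom", s)
matrix_decode  = lambda s: ("top", s)
display_first  = lambda b, t: ("screen1", b, t)
enlarge        = lambda t, r: ("zoomed", t, r)
display_second = lambda z: ("screen2", z)

first, second = process_stream("stream", zoom_request=(10, 10, 64, 64))
assert first == ("screen1", ("bottom", "stream"), ("top", "stream"))
assert second == ("screen2", ("zoomed", ("top", "stream"), (10, 10, 64, 64)))
```

Note that without a zoom request the second display picture simply stays empty while the first keeps playing.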
Optionally, the decoding the video stream by using a preset matrix decoding method includes:
Dividing a scene image in a camera view field by adopting a preset matrix virtual division algorithm to obtain a plurality of virtual division frames of the scene image;
capturing a dynamic object appearing in the scene image; the dynamic objects are positioned in each virtual segmentation frame;
and decoding the images in the virtual segmentation frames corresponding to the captured dynamic objects.
Optionally, the decoding the video stream by using a preset matrix decoding method includes:
dividing a scene image in a camera view field by adopting a preset matrix virtual division algorithm to obtain a plurality of virtual division frames of the scene image;
capturing a dynamic object appearing in the scene image; the dynamic objects are positioned in each virtual segmentation frame;
matching the dynamic object with a preset instruction set;
and decoding the images in the virtual segmentation frames corresponding to the captured dynamic objects according to the matching result.
Optionally, the displaying a video image in a first display picture based on the bottom layer video and the top layer video according to a preset display mode includes:
overlaying the top layer video onto the bottom layer video;
the video image of the top-level video is displayed in the first display screen.
Optionally, the displaying a video picture in a first display picture based on the bottom-layer video and the top-layer video according to a preset display mode includes:
a video image of the base video is displayed in the first display screen.
Optionally, the enlarging the top-level video according to a detected local enlargement instruction, and displaying the video picture in the first display picture while displaying the enlarged picture in the second display picture, includes:
when a local enlargement instruction is detected, performing local enlargement processing in the top-level video according to the instruction;
and displaying the video picture in the first display picture according to the bottom-layer video, and displaying the enlarged picture in the second display picture.
Optionally, the method further includes:
encoding the obtained video stream;
and storing the coded data to obtain a video file.
An embodiment of the present invention further provides a video processing apparatus, where the apparatus includes:
an acquisition unit configured to acquire a video stream;
a first decoding unit, configured to perform basic decoding on the video stream to obtain a bottom layer video;
the second decoding unit is used for decoding the video stream by adopting a preset matrix decoding method to obtain a top-layer video;
the display unit is used for displaying a video picture in a first display picture based on the bottom layer video and the top layer video according to a preset display mode;
and an enlarging unit, configured to perform local enlargement processing on the top-layer video when a local enlargement instruction is detected, and to display the video picture in the first display picture while the enlarged picture is displayed in the second display picture.
An embodiment of the present invention further provides a video processing device, where the device includes:
an image integrated system chip, a hardware video codec unit, and a graphics processing unit (GPU);
the image integrated system chip is configured to control, through different control instructions, the hardware video codec unit and the GPU to decode the acquired video stream, and to control them to perform video image display and local enlargement processing according to the resulting bottom-layer and top-layer videos;
the hardware video codec unit is configured to receive a first video control instruction sent by the image integrated system chip and to perform basic decoding on the acquired video stream according to that instruction to obtain the bottom-layer video;
and the GPU is configured to decode the acquired video stream according to a second video control instruction sent by the image integrated system chip to obtain the top-layer video, to process the top-layer and bottom-layer videos so that a video picture is displayed in the first display picture, and to enlarge the top-layer video so that the enlarged picture is displayed in the second display picture.
Optionally, the apparatus further comprises:
and the integrated processing control panel is used for integrating all modules in the monitoring video processing equipment.
In the embodiment of the invention, the acquired video stream is subjected to basic decoding and matrix decoding respectively to obtain a bottom-layer video and a top-layer video, and a video picture is played in a first display picture from the two layers. A local region is thus enlarged while the surveillance video plays, without affecting how the original display picture presents the video.
In addition, since during basic decoding and matrix decoding each dynamic object is decoded within the divided virtual image frames, when the top-layer video is enlarged the image in one or several virtual division frames can be enlarged directly, which improves the efficiency of the enlargement processing.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in their description are briefly introduced below. Obviously, the drawings described below show only embodiments of the present invention, and a person skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flow chart illustrating a video processing method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a video processing method according to an embodiment of the present invention;
fig. 3 is another schematic flow chart of a video processing method according to an embodiment of the present invention;
fig. 4 is a schematic flow chart illustrating a video processing method according to an embodiment of the present invention;
fig. 5 is another schematic flow chart of a video processing method according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present invention;
fig. 7 shows a schematic structural diagram of a video apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the present invention.
Referring to fig. 1, a schematic flow chart of a surveillance video processing method according to an embodiment of the present invention is shown, where the method includes:
s101: acquiring a video stream;
when video monitoring is performed through related equipment, video streams of a monitored area can be acquired through the related equipment.
S102: performing basic decoding processing on the obtained video stream to obtain a bottom layer video;
the bottom layer video obtained in this embodiment is in a video format that can be displayed on a display device, and specifically, S102 may include:
separating the main code stream and the sub code stream of the video stream to obtain a first processed video;
and decoding the first processed video into a bottom layer video with a preset format.
It should be noted that the basic decoding may be implemented either by a video codec unit conforming to an international standard or by a GPU (Graphics Processing Unit). The standards-based video codec unit may be realized in software or in hardware; realized in hardware, it takes the form of a hardware video codec chip.
For example, if a standards-based video codec unit is used, the acquired video stream is an encoded stream, produced by predictive coding, transformation, quantization, motion estimation/motion compensation, entropy coding and so on. Decoding such a stream runs the encoding process in reverse: entropy decoding, motion estimation/motion compensation, inverse quantization, inverse transformation, predictive decoding and so on.
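The encode/decode symmetry just described can be illustrated with a toy two-stage codec: each decoding stage inverts the corresponding encoding stage, applied in reverse order. This is only an illustration of the principle — a real standards-based codec (e.g. H.264) uses block transforms, motion compensation and arithmetic entropy coding, none of which appear here.

```python
# Toy codec: quantization then delta coding stand in for the real
# transform/entropy stages; decoding undoes them in reverse order.
def encode(samples, q=4):
    """Quantize (lossy), then delta-encode the quantized values."""
    quantized = [round(s / q) for s in samples]
    deltas = [quantized[0]] + [b - a for a, b in zip(quantized, quantized[1:])]
    return deltas

def decode(deltas, q=4):
    """Reverse of encoding: undo delta coding, then dequantize."""
    quantized, acc = [], 0
    for d in deltas:
        acc += d
        quantized.append(acc)
    return [v * q for v in quantized]      # inverse quantization

pixels = [8, 12, 13, 40, 41, 43]
# Round-trip recovers the signal up to quantization error:
assert decode(encode(pixels)) == [8, 12, 12, 40, 40, 44]
```

As with a real codec, the round trip is lossy only at the quantization stage; the delta ("entropy") stage is perfectly invertible.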
S103: decoding the video stream by adopting a preset matrix decoding method to obtain a top-layer video;
in this embodiment, the top-level video obtained by decoding may be displayed on a display device, or may be locally enlarged synchronously while it is displayed. Specifically, S103 has the following two implementations:
the first implementation mode comprises the following steps:
s201: dividing a scene image in a camera view field by adopting a preset matrix virtual division algorithm to obtain a plurality of virtual division frames of the scene image;
s202: capturing a dynamic object appearing in the scene image; the dynamic objects are positioned in each virtual segmentation frame;
s203: and decoding the images in the virtual segmentation frames corresponding to the captured dynamic objects.
In this embodiment, the scene image in the camera's field of view may be divided into an N x N matrix by the preset matrix virtual segmentation algorithm. Within the camera's field of view the scene of a given monitored area is static, so any captured object that is not part of the scene is a captured dynamic object. Because the parts that need enlargement are generally such non-scene objects, each part of a captured dynamic object lies within certain virtual division frames, and the image in each of those frames is decoded separately to provide the material for local enlargement.
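A minimal sketch of this matrix virtual segmentation idea follows: the frame is divided into an N x N grid, and dynamic objects are approximated by frame differencing against the static background, flagging the grid cells whose pixels changed. The function names and the change threshold are illustrative assumptions, not part of the patent.

```python
# Divide the scene into an N x N grid of "virtual segmentation frames"
# and flag the cells containing a dynamic (changed) object.
def grid_cells(width, height, n):
    """Return the N*N virtual segmentation frames as (x, y, w, h) boxes."""
    cw, ch = width // n, height // n
    return [(col * cw, row * ch, cw, ch)
            for row in range(n) for col in range(n)]

def moving_cells(background, frame, n, threshold=10):
    """Grid cells whose pixels differ from the static background."""
    h, w = len(frame), len(frame[0])
    hits = []
    for (x, y, cw, ch) in grid_cells(w, h, n):
        diff = sum(abs(frame[j][i] - background[j][i])
                   for j in range(y, y + ch) for i in range(x, x + cw))
        if diff > threshold:
            hits.append((x, y, cw, ch))
    return hits

# 4x4 grayscale frame, 2x2 grid; a bright object appears bottom-right.
bg = [[0] * 4 for _ in range(4)]
frame = [row[:] for row in bg]
frame[3][3] = 255
assert moving_cells(bg, frame, n=2) == [(2, 2, 2, 2)]
```

Only the flagged cells would then be decoded, which is the source of the efficiency gain the description claims.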
The second embodiment:
s301: dividing a scene image in a camera view field by adopting a preset matrix virtual division algorithm to obtain a plurality of virtual division frames of the scene image;
s302: capturing a dynamic object appearing in the scene image; the dynamic objects are positioned in each virtual segmentation frame;
s303: matching the dynamic object with a preset instruction set;
s304: and decoding the images in the virtual segmentation frames corresponding to the captured dynamic objects according to the matching result.
In this embodiment, the preset instruction set may cover people and objects, specifically: human body, human face, vehicle, aircraft, unmanned aerial vehicle, animal and the like; or stationary objects such as road surfaces, buildings, mountains and trees.
In this embodiment, with the second implementation, once a dynamic object is captured its category can be determined quickly against the preset instruction set, and it is decoded according to that category. A video decoded this way occupies fewer system resources during local enlargement, so the processing efficiency of local enlargement is improved.
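The matching step of the second implementation can be sketched as a lookup against the preset instruction set. The category labels and decoding policies below are illustrative stand-ins: a real system would run a detector/classifier on the segmented image rather than receive a ready-made label.

```python
# Match a captured dynamic object against the preset instruction set of
# categories and decide how (or whether) its cells are decoded.
PRESET_INSTRUCTION_SET = {
    "person":  "decode_full_detail",   # hypothetical decoding policies
    "vehicle": "decode_full_detail",
    "animal":  "decode_coarse",
}

def match_and_decode(detected_label, cells):
    """Return (policy, cells to decode) for a captured dynamic object."""
    policy = PRESET_INSTRUCTION_SET.get(detected_label)
    if policy is None:
        return ("skip", [])            # unmatched objects are not decoded
    return (policy, cells)

assert match_and_decode("vehicle", [(0, 0, 2, 2)]) == ("decode_full_detail", [(0, 0, 2, 2)])
assert match_and_decode("tree", [(0, 0, 2, 2)]) == ("skip", [])
```

Skipping unmatched objects is what saves the system resources the description attributes to this implementation.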
S104: displaying a video picture in a first display picture based on the bottom layer video and the top layer video according to a preset mode;
s105: when a local amplification instruction is detected, local amplification processing is carried out on the top layer video according to the local amplification instruction, and the video picture is displayed in the first display picture while the picture after the amplification processing is displayed in the second display picture.
The first display screen and the second display screen may be the same screen or different screens. If the screen is the same, the first display frame may be displayed in full screen, and the second display frame may be displayed at a specific position in the screen, for example: a picture-in-picture display method; in-situ image amplification can also be performed at the position point where the local amplification is required, and it can be understood that the second display image for local amplification generates an image shielding phenomenon of a certain part on the first display image at this time.
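The picture-in-picture placement just described can be sketched as a small layout computation. The corner position, scale and margin below are illustrative assumptions; the patent does not fix where the second display picture sits.

```python
# Place the second display picture as a bottom-right picture-in-picture
# rectangle inside the full-screen first display picture.
def pip_rect(screen_w, screen_h, scale=0.25, margin=16):
    """Rectangle (x, y, w, h) for the second display picture."""
    w, h = int(screen_w * scale), int(screen_h * scale)
    return (screen_w - w - margin, screen_h - h - margin, w, h)

assert pip_rect(1920, 1080) == (1424, 794, 480, 270)
```

For the in-place variant, the rectangle would instead be centered on the point being enlarged, which is exactly why it occludes part of the first display picture.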
In this embodiment, the monitoring picture is displayed based on the top-layer and bottom-layer videos, which is equivalent to fusing the two into a multi-layer video and displaying that. The fusion may be done in several ways, including:
the first method is as follows:
s401: overlaying the top layer video onto the bottom layer video;
s402: the video image of the top-level video is displayed in the first display screen.
With respect to the first method, when the instruction of local enlargement is detected, S105 includes:
s403: when a local amplification instruction is detected, performing local amplification processing in the top-level video according to the local amplification instruction;
s404: and displaying the video picture in the first display picture according to the bottom layer video, and displaying the amplified picture in the second display picture.
In this embodiment, in the first mode the top-layer video is superimposed on the bottom-layer video so that the top layer covers the bottom layer. When the monitor picture is played on the display device, both layers play simultaneously, but only the top-layer picture is visible because it covers the bottom layer. When a local region must be enlarged, the enlarged region is taken from the top-layer video and played, while the bottom-layer video takes over the screen where the top layer was playing; since the two layers play in perfect synchrony, the human eye cannot perceive the change of video source.
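The overlay mode described above can be sketched with frames as plain 2D lists: the top layer fully covers the bottom layer, and a zoom request crops a region from the top layer and enlarges it. This is a conceptual sketch only; a real pipeline would composite and scale on the GPU, and nearest-neighbour scaling here is an assumption.

```python
# Overlay mode one: top layer covers bottom layer; zoom crops from the
# top layer while the bottom layer keeps feeding the first display picture.
def composite(bottom, top):
    """Top layer fully covers the bottom layer (both the same size)."""
    return [row[:] for row in top]

def crop_and_zoom(frame, x, y, w, h, factor=2):
    """Nearest-neighbour enlargement of a region of the top-layer frame."""
    region = [row[x:x + w] for row in frame[y:y + h]]
    return [[region[j // factor][i // factor]
             for i in range(w * factor)] for j in range(h * factor)]

bottom = [[0, 0], [0, 0]]
top = [[1, 2], [3, 4]]
assert composite(bottom, top) == [[1, 2], [3, 4]]          # only top visible
assert crop_and_zoom(top, 0, 0, 1, 1) == [[1, 1], [1, 1]]
```

Because the composited output equals the top layer, swapping the visible source to the bottom layer during a zoom changes nothing the viewer can see, matching the "cannot perceive the change of video source" claim.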
The second method comprises the following steps:
s501: a video image of the base video is displayed in the first display screen.
Wherein, in the second mode,
s502: when a local amplification instruction is detected, performing local amplification processing in the top-level video according to the local amplification instruction;
s503: and displaying the video picture in the first display picture according to the bottom layer video, and displaying the amplified picture in the second display picture.
In this embodiment, for the second mode, the bottom-layer video is played directly on the display device; when a specific part needs to be enlarged, that part is obtained from the top-layer video and enlarged in the second display picture.
In this embodiment, in order that the user may view the monitoring video in the future, the method further includes:
encoding the obtained video stream;
and storing the coded data to obtain a video file. In this embodiment, encoding the video stream is equivalent to recording the video, and storing the encoded video data yields the video file.
In this embodiment, if the user needs to play back the surveillance video, the surveillance video may be played back by playing the video file, and when playing back the surveillance video, the object in the display screen may also be locally enlarged.
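The "encode then store" recording step can be sketched with a trivial run-length coder standing in for real video coding (the patent does not specify a codec or container; the byte layout here is purely illustrative). Encoded frames are appended to a file object that then serves as the stored video file for playback.

```python
import io
import struct

def rle_encode(samples):
    """Run-length encode one frame's samples as (count, value) pairs."""
    out, run, prev = [], 1, samples[0]
    for s in samples[1:]:
        if s == prev:
            run += 1
        else:
            out.append((run, prev))
            run, prev = 1, s
    out.append((run, prev))
    return out

def store(frames, fileobj):
    """Append each encoded frame as (count, value) unsigned-byte pairs."""
    for frame in frames:
        for run, val in rle_encode(frame):
            fileobj.write(struct.pack("BB", run, val))

buf = io.BytesIO()                 # stands in for the stored video file
store([[7, 7, 7, 9]], buf)
assert buf.getvalue() == b"\x03\x07\x01\x09"
```

Playback then amounts to reading the file back and decoding it with the same basic/matrix decoding path as the live stream, which is why local enlargement also works during playback.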
In this embodiment, the acquired video stream undergoes basic decoding and matrix decoding respectively to obtain a bottom-layer video and a top-layer video, and a video picture is played in the first display picture from the two layers. During playback, if some part of the video needs local enlargement, the top-layer video is enlarged and the result is displayed in the second display picture, while the surveillance video picture continues to play in the first display picture. A local region is thus enlarged while the surveillance picture plays, without affecting the original video picture.
In addition, since during basic decoding and matrix decoding each dynamic object is decoded within the divided virtual image frames, when the top-layer video is enlarged the image in one or several virtual division frames can be enlarged directly, which improves the efficiency of the enlargement processing.
Referring to fig. 6, a video processing apparatus according to an embodiment of the present invention is shown, and in this embodiment, the apparatus includes:
an acquisition unit 601 configured to acquire a video stream;
a first decoding unit 602, configured to perform basic decoding on the video stream to obtain a bottom layer video;
a second decoding unit 603, configured to decode the video stream by using a preset matrix decoding method to obtain a top-level video;
a display unit 604, configured to display a video picture in a first display picture based on the bottom-layer video and the top-layer video according to a preset display manner;
the enlarging unit 605 is configured to perform local enlargement processing on the top-level video when a local enlargement instruction is detected, and to display the video picture in the first display picture while the enlarged picture is displayed in the second display picture.
Optionally, the second decoding unit is specifically configured to:
dividing a scene image in a camera view field by adopting a preset matrix virtual division algorithm to obtain a plurality of virtual division frames of the scene image;
capturing a dynamic object appearing in the scene image; the dynamic objects are positioned in each virtual segmentation frame;
and decoding the images in the virtual segmentation frames corresponding to the captured dynamic objects.
Or
Dividing a scene image in a camera view field by adopting a preset matrix virtual division algorithm to obtain a plurality of virtual division frames of the scene image;
capturing a dynamic object appearing in the scene image; the dynamic objects are positioned in each virtual segmentation frame;
matching the dynamic object with a preset instruction set;
and decoding the images in the virtual segmentation frames corresponding to the captured dynamic objects according to the matching result.
Optionally, the display unit is specifically configured to: overlaying the top layer video onto the bottom layer video;
the video image of the top-level video is displayed in the first display screen.
Or
A video image of the base video is displayed in the first display screen.
Optionally, the enlarging unit is specifically configured to:
perform local enlargement processing in the top-level video when a local enlargement instruction is detected;
and display the video picture in the first display picture according to the bottom-layer video, and display the enlarged picture in the second display picture.
With the apparatus of this embodiment, a local region is enlarged while the surveillance video plays, without affecting the display of the original surveillance video.
Referring to fig. 7, a schematic structural diagram of a video processing device according to an embodiment of the present invention is shown. The device includes:
an image integrated system chip 1, a hardware video codec unit 2, and a graphics processing unit (GPU) 3;
the image integrated system chip 1 is configured to control, through different control instructions, the hardware video codec unit 2 and the GPU 3 to process the acquired video stream, and to control them to perform video image display and local enlargement processing according to the obtained bottom-layer and top-layer videos;
the hardware video codec unit 2 is configured to receive a first video control instruction sent by the image integrated system chip and to perform basic decoding on the acquired video stream according to that instruction to obtain the bottom-layer video;
and the GPU 3 is configured to decode the acquired video stream according to a second video control instruction sent by the image integrated system chip to obtain the top-layer video, to process the top-layer and bottom-layer videos so that a video picture is displayed in the first display picture, and to enlarge the top-layer video so that the enlarged picture is displayed in the second display picture.
Optionally, the image integration system chip 1 includes:
the operating system module 11 is used for controlling the operation of the video coding and decoding hard die unit 2 and the image processing unit GPU3 and providing software management of a human-computer interaction interface;
the UI interface module 12 is used for providing a human-computer interaction operation window according to a human-computer interaction interface program;
a communication control module 13 for managing communication protocols and storage protocols;
in this embodiment, communication between the modules, for example data communication between the hardware video codec unit 2 and the GPU 3, between the hardware video codec unit 2 and the data source, and between the GPU 3 and the data source, requires communication protocols; storing the processing results additionally requires a storage protocol.
A function centralized control module 14, configured to perform centralized management on each control instruction;
the matrix processing module 15 is configured to provide a matrix virtual segmentation algorithm for the image processing unit GPU3, and control the image processing unit GPU3 to decode, according to a preset control instruction, pictures in each scene segmentation frame where an object in the monitored scene is located;
the dynamic capture module 16 is configured to capture a dynamic object appearing in the monitoring field of view, and if the dynamic object is captured, trigger a control instruction for decoding a picture of the dynamic object in each scene segmentation frame;
the image synthesis module 17 is configured to receive an image synthesis control instruction sent by the function centralized control module and, according to it, to control the GPU 3 to combine the bottom-layer video generated in the hardware video codec unit 2 and the top-layer video generated in the GPU 3 into a multi-layer video;
and the local enlargement module 18 is configured to perform local enlargement processing on the top-layer video according to the local enlargement instruction.
The operating system module 11 is further configured to perform resource allocation and input/output management. Optionally, the image integration system chip 1 further includes:
and the subgraph extraction module 19 is configured to determine a dynamic object to be decoded according to a preset instruction set, and control the image processing unit GPU3 to decode pictures of the dynamic image to be decoded in each scene virtual partition frame.
Optionally, the hardware video codec unit 2 includes:
an MCU (Micro Controller Unit) module 21, configured to receive a first operating system program from the operating system module 11 and, according to it, to control the DSP (Digital Signal Processing) module 23 and the VHW (Video Hardware) module 22 to perform hardware encoding and decoding of the acquired video stream;
a video stream acquisition module 24, configured to separate the main code stream and the sub code stream of the acquired video stream;
the DSP module 23, configured to perform multimedia encoding or decoding of the acquired video stream and to manage the video stream;
the VHW module 22, configured to perform basic decoding and video recording of the acquired video stream;
and a video packaging output module 25, configured to package the encoded video data into a container format and output it.
In this embodiment, it should be noted that when the hardware video codec unit decodes the acquired video stream, the resulting bottom-layer video is recorded, and the recording is then encoded to obtain a video file.
Optionally, the GPU 3 may use the general-purpose computing capability of a standard such as OpenCL (Open Computing Language). It is configured to receive a second operating system program sent by the operating system module 11, to control the other modules in the GPU 3 to decode the acquired video stream according to that program, and to display the decoded top-layer and bottom-layer videos in the first display picture. It includes:
a TS (Tessellator) module 32, configured to perform geometric modeling on the acquired video stream and generate a three-dimensional model representing the appearance of the scene and objects in the monitored field of view;
a Rasterizer module 33, configured to convert the three-dimensional model into a two-dimensional image model;
a TMU (Texture Mapping Unit) module 34, configured to add surface detail to images in the video stream according to a preset material library;
a US (Unified Shader) module 35, configured to shade each pixel of the images in the video stream;
and a ROP (Raster Operation) module, configured to fuse the outputs of the TS, Rasterizer, TMU, and US modules and output the result to obtain the top-layer video.
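As a rough illustration of how the five modules above could be chained in software, the sketch below passes a frame through stage functions named after the modules, with the ROP stage fusing the stage results. The dict-based data model and all function names are hypothetical, not part of the patent.

```python
# Illustrative chain of the five GPU stages listed above. Each stage
# annotates the frame; the ROP stage fuses the annotations into a
# top-layer result. Purely a sketch of the described pipeline order.

def ts(frame):          # Tessellator: build a 3-D scene model
    return {**frame, "model3d": True}

def rasterize(frame):   # Rasterizer: project the 3-D model to 2-D
    return {**frame, "image2d": True}

def tmu(frame):         # Texture Mapping Unit: add surface detail
    return {**frame, "textured": True}

def us(frame):          # Unified Shader: shade each pixel
    return {**frame, "shaded": True}

def rop(frame):         # Raster Operation: fuse the stage outputs
    frame["top_layer"] = all(
        frame.get(k) for k in ("model3d", "image2d", "textured", "shaded"))
    return frame

def gpu_pipeline(frame):
    """Run a frame through TS -> Rasterizer -> TMU -> US -> ROP in order."""
    for stage in (ts, rasterize, tmu, us, rop):
        frame = stage(frame)
    return frame

result = gpu_pipeline({"id": 0})
```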
Optionally, the apparatus further includes:
a power supply circuit 4, configured to convert a received AC voltage into a DC voltage and power the device.
Optionally, the apparatus further includes:
a memory bank module 5, configured to provide buffer space for the video codec hardware unit 2 and the image processing unit GPU 3 according to a preset allocation rule.
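One possible reading of the "preset allocation rule" is a fixed split of the memory bank between the codec hardware unit and the GPU; the 50/50 ratio and the unit names below are assumptions for illustration only, as the patent does not specify the rule.

```python
# Hypothetical allocation rule: split a memory bank between the codec
# hardware unit and the GPU by a fixed ratio. Ratio and names are assumed.

def allocate(total_mb, rule=None):
    """Return a buffer budget (in MB) per unit under the given rule."""
    rule = rule or {"codec": 0.5, "gpu": 0.5}
    return {unit: int(total_mb * share) for unit, share in rule.items()}

buffers = allocate(1024)
```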
Optionally, the apparatus further includes:
a storage unit 6, configured to store the video files recorded by the video codec hardware unit 2;
and an input/output channel 7, configured to establish data transmission channels between external devices and the video codec hardware unit 2 and the image processing unit GPU 3.
Optionally, the input/output channel includes:
a network input circuit 71, configured to establish a data transmission channel between a data source and the GPU 3 so that the GPU 3 can obtain the video stream from the data source;
a VGA (Video Graphics Array) output circuit 72, configured to connect to a display and provide an output channel for the decoded multi-layer video;
a full high-definition output circuit 73, configured to establish a communication connection between the video codec hardware unit 2, the image processing unit GPU 3, and a large-screen television, and to display video pictures on that television according to the top-layer and bottom-layer videos;
and a storage transmission circuit 74, configured to connect to a storage device and transfer the video files recorded in the video codec hardware unit 2 to that device.
In this embodiment, the network input circuit 71 may use an RJ45 network driver chip or socket to establish the communication connection between the video codec hardware unit 2, the image processing unit GPU 3, and the data source. The VGA output circuit 72 may connect to the display through a VGA port and provides an output channel for the multi-layer video decoded by the video codec hardware unit 2 and the GPU 3. The full high-definition output circuit 73 may connect to the large-screen television through an HDMI port. The storage transmission circuit 74 may be a USB or IDE port into which storage devices can be plugged.
In this embodiment, the apparatus further includes:
an integrated processing control board 8, configured to integrate and monitor the modules of the video processing device.
In this embodiment, the integrated processing control board 8 may be a PCB carrying the image integrated system chip 1, the video codec hardware unit 2, the image processing unit GPU 3, the power supply circuit 4, the memory bank module 5, the storage unit 6, and the input/output channel 7.
With the device of this embodiment, a local region is enlarged while the video picture continues to play, without affecting the display of the original video picture.
It should be noted that the embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and the same or similar parts may be referred to across embodiments.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (7)

1. A method of video processing, the method comprising:
acquiring a video stream;
performing basic decoding on the video stream to obtain a bottom layer video;
decoding the video stream using a preset matrix decoding method to obtain a top-layer video, which specifically comprises the following steps:
dividing a scene image within the camera field of view in the decoded video stream using a preset matrix virtual division algorithm to obtain a plurality of virtual segmentation frames of the scene image;
capturing a dynamic object appearing in the scene image, the dynamic object being located within the virtual segmentation frames;
decoding the images in the virtual segmentation frames corresponding to the captured dynamic object to obtain the top-layer video,
or matching the dynamic object against a preset instruction set and decoding the images in the virtual segmentation frames corresponding to the captured dynamic object according to the matching result to obtain the top-layer video;
displaying a video picture in a first display picture based on the bottom-layer video and the top-layer video according to a preset display mode;
and when a local amplification instruction is detected, performing local amplification processing on the top-layer video according to the instruction, and displaying the amplified picture in a second display picture while the video picture continues to be displayed in the first display picture.
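Under stated assumptions, the claimed steps can be sketched as follows: frames are modeled as 2-D lists of ints, the "matrix virtual division" is a fixed grid, a cell counts as containing a dynamic object if any of its pixels changed since the previous frame, and "decoding"/"local amplification" are stand-ins (region selection and nearest-neighbour 2x zoom). All function names and data structures are illustrative, not the patent's actual implementation.

```python
# Toy sketch of the claimed method: grid division, dynamic-object capture,
# and local amplification of the captured region for the second picture.

def divide(frame, rows, cols):
    """Split a frame into rows*cols virtual segmentation frames (y, x, h, w)."""
    h, w = len(frame), len(frame[0])
    rh, cw = h // rows, w // cols
    return [[(r * rh, c * cw, rh, cw) for c in range(cols)]
            for r in range(rows)]

def dynamic_cells(prev, cur, cells):
    """Capture the cells whose pixels changed between two frames."""
    moving = []
    for row in cells:
        for (y, x, h, w) in row:
            if any(prev[y + i][x + j] != cur[y + i][x + j]
                   for i in range(h) for j in range(w)):
                moving.append((y, x, h, w))
    return moving

def zoom2x(frame, y, x, h, w):
    """Nearest-neighbour 2x enlargement of one region (the second picture)."""
    region = [row[x:x + w] for row in frame[y:y + h]]
    out = []
    for row in region:
        doubled = [p for p in row for _ in (0, 1)]  # duplicate columns
        out.append(doubled)
        out.append(list(doubled))                   # duplicate rows
    return out

prev = [[0] * 4 for _ in range(4)]
cur = [row[:] for row in prev]
cur[0][1] = 9                       # a moving object in the top-left cell
cells = divide(cur, 2, 2)
moving = dynamic_cells(prev, cur, cells)
enlarged = zoom2x(cur, *moving[0])  # the zoomed picture for the 2nd display
```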
2. The method according to claim 1, wherein displaying the video picture in the first display picture based on the bottom-layer video and the top-layer video according to the preset display mode comprises:
overlaying the top-layer video onto the bottom-layer video;
and displaying the video image of the top-layer video in the first display picture.
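One way to read this overlay step, as a toy sketch: the top layer marks undecoded cells as transparent (None here), and compositing lets non-transparent top-layer pixels win over the bottom layer. The data model is assumed for illustration.

```python
# Hypothetical compositing of the claimed overlay: non-None (decoded)
# top-layer pixels replace the corresponding bottom-layer pixels.

def overlay(top, bottom):
    """Composite a sparse top layer over a dense bottom layer."""
    return [[t if t is not None else b
             for t, b in zip(trow, brow)]
            for trow, brow in zip(top, bottom)]

bottom = [[1, 1], [1, 1]]
top = [[None, 7], [None, None]]   # only one cell was matrix-decoded
composited = overlay(top, bottom)
```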
3. The method according to claim 1, wherein displaying the video picture in the first display picture based on the bottom-layer video and the top-layer video according to the preset display mode comprises:
displaying the video image of the bottom-layer video in the first display picture.
4. The method of claim 1, further comprising:
encoding the decoded video stream;
and storing the coded data to obtain a video file.
5. A video processing apparatus, characterized in that the apparatus comprises:
an acquisition unit configured to acquire a video stream;
a first decoding unit, configured to perform basic decoding on the video stream to obtain a bottom-layer video;
a second decoding unit, configured to decode the video stream using a preset matrix decoding method to obtain a top-layer video, specifically: dividing a scene image within the camera field of view in the decoded video stream using a preset matrix virtual division algorithm to obtain a plurality of virtual segmentation frames of the scene image; capturing a dynamic object appearing in the scene image, the dynamic object being located within the virtual segmentation frames; and decoding the images in the virtual segmentation frames corresponding to the captured dynamic object to obtain the top-layer video, or matching the dynamic object against a preset instruction set and decoding the images in the virtual segmentation frames corresponding to the captured dynamic object according to the matching result to obtain the top-layer video;
a display unit, configured to display a video picture in a first display picture based on the bottom-layer video and the top-layer video according to a preset display mode;
and an amplifying unit, configured to, when a local amplification instruction is detected, perform local amplification processing on the top-layer video according to the instruction, and display the amplified picture in a second display picture while the video picture is displayed in the first display picture.
6. A video processing apparatus, characterized in that the apparatus comprises:
an image integrated system chip, a video codec hardware unit, and an image processing unit GPU;
wherein the image integrated system chip is configured to control, through different control instructions, the video codec hardware unit and the GPU respectively to decode the obtained video stream, and to control them to perform video picture display and local amplification processing according to the obtained bottom-layer and top-layer videos;
the video codec hardware unit is configured to receive a first video control instruction sent by the image integrated system chip and perform basic decoding on the obtained video stream according to that instruction to obtain the bottom-layer video;
and the GPU is configured to decode the video stream using a preset matrix decoding method according to a second video control instruction sent by the image integrated system chip to obtain the top-layer video, specifically: dividing a scene image within the camera field of view in the decoded video stream using a preset matrix virtual division algorithm to obtain a plurality of virtual segmentation frames of the scene image; capturing a dynamic object appearing in the scene image, the dynamic object being located within the virtual segmentation frames; decoding the images in the virtual segmentation frames corresponding to the captured dynamic object to obtain the top-layer video, or matching the dynamic object against a preset instruction set and decoding the images in the virtual segmentation frames corresponding to the captured dynamic object according to the matching result to obtain the top-layer video; processing the top-layer and bottom-layer videos to display a video picture in a first display picture; and amplifying the top-layer video to display the amplified picture in a second display picture.
7. The apparatus of claim 6, further comprising:
an integrated processing control board, configured to integrate the modules of the monitoring video processing device.
CN201710835206.XA 2017-09-15 2017-09-15 Monitoring video processing method, device and equipment Active CN107580228B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710835206.XA CN107580228B (en) 2017-09-15 2017-09-15 Monitoring video processing method, device and equipment


Publications (2)

Publication Number Publication Date
CN107580228A CN107580228A (en) 2018-01-12
CN107580228B true CN107580228B (en) 2020-12-22

Family

ID=61033596


Country Status (1)

Country Link
CN (1) CN107580228B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109104588B (en) * 2018-07-24 2020-01-14 房梦琦 Video monitoring method, equipment, terminal and computer storage medium
CN112770099B (en) * 2019-11-04 2022-06-14 腾讯科技(深圳)有限公司 Multimedia playing method, device, terminal and storage medium
CN112115804A (en) * 2020-08-26 2020-12-22 北京博睿维讯科技有限公司 Key area monitoring video control method and system, intelligent terminal and storage medium

Citations (7)

Publication number Priority date Publication date Assignee Title
CN101237582A (en) * 2008-03-07 2008-08-06 北京中星微电子有限公司 Local dynamic image amplification device and method and image processing system
CN101383969A (en) * 2008-10-27 2009-03-11 杭州华三通信技术有限公司 Method, decoder and main control module for enlarging local region of image
CN102208171A (en) * 2010-03-31 2011-10-05 安凯(广州)微电子技术有限公司 Local detail playing method on portable high-definition video player
CN103647934A (en) * 2013-11-15 2014-03-19 天津天地伟业数码科技有限公司 Method for real-time preview and video-playback electronic magnification on video recorder
CN103873806A (en) * 2008-11-12 2014-06-18 北京中星微电子有限公司 Monitoring image relevant information transmission method, system and device
CN104081760A (en) * 2012-12-25 2014-10-01 华为技术有限公司 Video play method, terminal and system
CN104184986A (en) * 2013-05-28 2014-12-03 华为技术有限公司 Video monitoring method, device and system

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
JP2005012764A (en) * 2003-05-22 2005-01-13 Casio Comput Co Ltd Data communication apparatus, image transmitting method, and image transmitting program
US20120169883A1 (en) * 2010-12-31 2012-07-05 Avermedia Information, Inc. Multi-stream video system, video monitoring device and multi-stream video transmission method
CN202085262U (en) * 2011-01-27 2011-12-21 成都佳发安泰科技有限公司 Warning intelligent analysis television wall server
CN202275593U (en) * 2011-10-17 2012-06-13 天津市亚安科技股份有限公司 Circuit for achieving double-circuit video picture in picture function in video monitoring
CN203279065U (en) * 2013-06-04 2013-11-06 四川艾普视达数码科技有限公司 Multi-screen display NVR system
JP5883052B2 (en) * 2014-03-18 2016-03-09 キヤノン株式会社 Image processing apparatus and control method thereof
CN104902238A (en) * 2015-05-29 2015-09-09 江苏天鹰知淮数码科技有限公司 Intelligent video analysis server
CN106231225A (en) * 2016-09-14 2016-12-14 深圳市泛海三江科技发展有限公司 A kind of network hard disk video recorder data processing method and system



Similar Documents

Publication Publication Date Title
CN106331850B (en) Browser live broadcast client, browser live broadcast system and browser live broadcast method
US8558907B2 (en) Multiple sensor input data synthesis
JP6333858B2 (en) System, apparatus, and method for sharing a screen having multiple visual components
KR100889367B1 (en) System and Method for Realizing Vertual Studio via Network
CN107580228B (en) Monitoring video processing method, device and equipment
EP3434021B1 (en) Method, apparatus and stream of formatting an immersive video for legacy and immersive rendering devices
WO2017185761A1 (en) Method and device for playing back 2d video
CN102714747A (en) Stereoscopic video graphics overlay
CN110012336B (en) Picture configuration method, terminal and device of live interface
JP2010286811A (en) Assembling display equipment, and methdo and system for control of screen thereof
CN112511896A (en) Video rendering method and device
CN105100870A (en) Screenshot method and terminal equipment
CN106412617B (en) Remote debugging control method and device
CN116112706A (en) Method for sharing multiple application programs simultaneously in online video conference
CN113452870B (en) Video processing method and device
CN113556500B (en) Video overlapping method, device and system
CN109727315B (en) One-to-many cluster rendering method, device, equipment and storage medium
CN114640882A (en) Video processing method and device, electronic equipment and computer readable storage medium
CN112887653A (en) Information processing method and information processing device
KR20220066328A (en) Method and apparatus for encoding, transmitting and decoding volumetric video
US11792380B2 (en) Video transmission method, video processing device, and video generating system for virtual reality
TWI817273B (en) Real-time multiview video conversion method and system
WO2022252797A1 (en) Video presentation method, electronic device, computer storage medium and program product
JP2007323481A (en) Video data transmission system and method, transmission processing apparatus and method, and reception processing apparatus and method
JP2016092642A (en) Image edition method, image edition system, and image edition program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200721

Address after: 223, room two, 205, Zhuhai Road, Wendeng Economic Development Zone, Shandong, Weihai 264400, China

Applicant after: Weihai Yuancheng Information Technology Co.,Ltd.

Address before: 266440 room 205, Zhuhai Road, Wendeng Economic Development Zone, Shandong, Weihai, 211

Applicant before: Zhao Lifeng

GR01 Patent grant