CN110858879B - Video stream processing method, device and computer readable storage medium


Info

Publication number
CN110858879B
Authority
CN
China
Prior art keywords
image data
frame
invalid
data frame
video stream
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810966728.8A
Other languages
Chinese (zh)
Other versions
CN110858879A (en)
Inventor
熊宇龙
李林
樊晓清
林志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Uniview Technologies Co Ltd
Original Assignee
Zhejiang Uniview Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Uniview Technologies Co Ltd
Priority to CN201810966728.8A
Publication of CN110858879A
Application granted
Publication of CN110858879B
Legal status: Active

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 - Details of television systems
    • H04N5/222 - Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 - Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 - Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 - Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs

Abstract

The embodiment of the invention provides a video stream processing method, a video stream processing device and a computer-readable storage medium. The video stream processing method comprises the following steps: when invalid image data is detected in the video stream, acquiring a corresponding source image data frame and a corresponding target image data frame; generating a transition deformation data frame corresponding to each invalid image data frame according to the source image data frame and the target image data frame; and sequentially replacing each corresponding invalid image data frame in the video stream with the transition deformation data frame to obtain the processed video stream. Invalid image data is thereby repaired, which directly improves the video stream quality while maintaining the transmission speed and ensures smooth transmission of a high-quality video stream.

Description

Video stream processing method, device and computer readable storage medium
Technical Field
The present invention relates to the field of image technologies, and in particular, to a method and an apparatus for processing a video stream, and a computer-readable storage medium.
Background
Video stream processing technology grew out of digital image processing and image rendering, and is widely applied in fields such as security and surveillance, video entertainment, and network video services. Video stream processing mainly processes image frames by extracting specific information from the key frames of a video stream, and then displays, or renders and stores, the processed stream in real time.
At present, video stream processing is mainly implemented in two ways:
Direct processing, which processes the key frames directly, for example by manual adjustment or refocusing. It cannot process frames dynamically in real time.
Indirect processing, which processes frames after extracting key information, for example by post-processing or frame interpolation. It extracts key information from a key frame and modifies the key frame after processing. The processing depends too heavily on the key frames, and the processed video stream suffers from delay and stuttering.
Therefore, existing methods can neither process the acquired video stream data effectively in real time nor guarantee real-time, high-quality transmission of the video stream.
Disclosure of Invention
It is an object of the present invention to provide a video stream processing method, apparatus and computer-readable storage medium that address the above problems.
In order to achieve the above purpose, the embodiment of the present invention adopts the following technical solutions:
In a first aspect, an embodiment of the present invention provides a video stream processing method, comprising: when invalid image data is detected in a video stream, determining a source image data frame and a target image data frame from, respectively, the image data frames whose acquisition time is before the invalid image data and the image data frames whose acquisition time is after the invalid image data, wherein the invalid image data comprises one or more invalid image data frames; generating a transition deformation data frame corresponding to each invalid image data frame according to the source image data frame and the target image data frame; and sequentially replacing each corresponding invalid image data frame in the video stream with the transition deformation data frame to obtain the processed video stream.
In a second aspect, an embodiment of the present invention provides a video stream processing apparatus, comprising: a detection module, configured to, when it is detected that invalid image data occurs in a video stream, determine a source image data frame and a target image data frame from, respectively, the image data frames whose acquisition time is before the invalid image data and the image data frames whose acquisition time is after the invalid image data, wherein the invalid image data comprises one or more invalid image data frames; a processing module, configured to generate a transition deformation data frame corresponding to each invalid image data frame according to the source image data frame and the target image data frame; and a replacing module, configured to sequentially replace each corresponding invalid image data frame in the video stream with the transition deformation data frame, so as to obtain the processed video stream.
In a third aspect, the present invention provides a computer-readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the aforementioned video stream processing method.
Compared with the prior art, the embodiment of the invention provides a video stream processing method in which, when invalid image data is detected in a video stream, a source image data frame and a target image data frame are determined from, respectively, the image data frames whose acquisition time is before the invalid image data and those whose acquisition time is after it, and a transition deformation data frame is generated for each invalid image data frame based on the source and target image data frames. Each corresponding invalid image data frame in the video stream is then replaced in sequence with its transition deformation data frame, thereby processing the video stream. By replacing invalid frames with transition images, without depending on key frames, the video quality is improved efficiently in real time and real-time, high-quality transmission of the video stream is guaranteed.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
Fig. 1 is a schematic diagram of an application environment according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of an image capturing device according to an embodiment of the present invention.
Fig. 3 is a flowchart illustrating steps of a video stream processing method according to an embodiment of the present invention.
Fig. 4 is a flowchart illustrating sub-steps of step S102 in fig. 3.
Fig. 5 is another part of a flowchart illustrating steps of a video stream processing method according to an embodiment of the present invention.
Fig. 6 is a schematic diagram of a video stream processing apparatus according to an embodiment of the present invention.
Reference numerals: 100 - image acquisition device; 111 - memory; 112 - processor; 113 - communication unit; 200 - server; 300 - video stream processing apparatus; 301 - detection module; 302 - processing module; 303 - replacement module.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
Video stream processing technology originates in digital image processing and image rendering, and is widely applied in fields such as security and surveillance, video entertainment, and network video services. It mainly processes image frames by extracting specific information from the key frames of a video stream, and then displays, or renders and stores, the processed stream in real time.
In the related art, video stream processing mainly falls into two categories:
(1) Direct processing: the key frame is processed directly. This approach generally adopts real-time image fusion and insertion, in which the image information to be processed is fused directly onto a key frame. However, the position of the fused image is inaccurate, the fused image must be prepared in advance, and real-time dynamic processing cannot be achieved.
(2) Indirect processing: key information is extracted and then processed. This approach extracts key information from a key frame and modifies the key frame after processing. However, the processing depends too heavily on the key frames, and the processed video stream suffers from delay and stuttering.
In addition, most existing methods improve video stream quality only by improving the transmission quality, or post-process the video stream after it is stored; they do not process invalid frames during video stream transmission. Although this avoids damage to image quality caused by the transmission process, it cannot solve the actual image quality problems of the video stream itself.
In view of the foregoing, embodiments of the present invention provide a video stream processing method, apparatus and computer-readable storage medium.
The application environment of the preferred embodiment of the present invention shown in fig. 1 includes an image capturing apparatus 100 and a server 200. The image acquisition device 100 is in communication connection with the server 200. Further, the video stream processing method and apparatus provided in the embodiment of the present invention may be applied to the image capturing device 100, and may also be applied to the server 200. The principles are the same, and for convenience of explanation, the embodiment of the present invention is described by taking the application to the image capturing apparatus 100 as an example.
Further, please refer to fig. 2, which is a block diagram of the image capturing device 100. The image capturing device 100 comprises a video stream processing apparatus 300, a memory 111, a processor 112 and a communication unit 113.
The memory 111, the processor 112 and the communication unit 113 are electrically connected to each other, directly or indirectly, to realize data transmission or interaction. For example, these components may be electrically connected to each other via one or more communication buses or signal lines. The software implementing the video stream processing method includes at least one software functional module, which may be stored in the memory 111 in the form of software or firmware (Firmware) or solidified in the operating system (OS) of the image capturing device 100. The processor 112 is used to execute the executable modules stored in the memory 111.
The memory 111 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The memory is used for storing programs or data, for example the functional modules corresponding to the video stream processing apparatus 300.
The communication unit 113 is used for establishing a communication connection with other communication terminals (for example, the server 200) through the network, and for transceiving data through the network.
It should be understood that the configuration shown in fig. 2 is merely schematic; the image capturing device 100 may include more or fewer components than shown in fig. 2, or have a different configuration from that shown in fig. 2. The components shown in fig. 2 may be implemented in hardware, software, or a combination thereof.
First embodiment
Referring to fig. 3, fig. 3 is a flowchart of a video stream processing method according to a preferred embodiment of the invention. The video stream processing method comprises the following steps:
step S101, when detecting that the video stream has invalid image data, respectively obtaining a source image data frame and a target image data frame corresponding to the invalid image data from the video stream.
In the embodiment of the present invention, when the image capturing device 100 detects the captured video stream frame by frame and finds invalid image data, it obtains a source image data frame and a target image data frame corresponding to the invalid image data. The invalid image data may be image data whose image quality has deteriorated, and may include one or more consecutive invalid image data frames. When the invalid image data comprises a plurality of invalid image data frames, those frames are adjacent and consecutive in acquisition time.
Further, the source image data frame corresponding to the invalid image data may be an image data frame whose acquisition time is before the invalid image data, and the target image data frame corresponding to the invalid image data may be an image data frame whose acquisition time is after the invalid image data.
As a possible implementation manner, the source image data frame and the target image data frame corresponding to the invalid image data may be obtained from the video stream as follows:
Among the image data frames whose acquisition time is before the invalid image data, the frame whose acquisition time has the shortest time interval to that of the first frame of the invalid image data is taken as the source image data frame. If the invalid image data is not a run of multiple consecutive invalid frames (i.e., it is a single frame), the first frame of the invalid image data is that invalid frame itself; if the invalid image data consists of multiple consecutive invalid frames, the first frame refers to the frame with the earliest acquisition time among them. In other words, the image data frame immediately preceding the invalid image data in acquisition time is taken as the source image data frame.
Among the image data frames whose acquisition time is after the invalid image data, the frame whose acquisition time has the shortest time interval to that of the last frame of the invalid image data is taken as the target image data frame. If the invalid image data is a single frame, the last frame of the invalid image data is likewise that invalid frame; if the invalid image data consists of multiple consecutive invalid frames, the last frame refers to the frame with the latest acquisition time among them. In other words, the image data frame immediately following the invalid image data in acquisition time is taken as the target image data frame.
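By way of illustration only, this nearest-frame selection may be sketched in Python as follows. The function name and the (timestamp, image, is_valid) frame representation are assumptions made for this sketch, not part of the patent.

```python
# Minimal sketch (assumed representation): frames is a list of
# (timestamp, image, is_valid) tuples ordered by acquisition time,
# and at least one invalid frame is assumed to be present.
def pick_source_and_target(frames):
    # Locate the first run of consecutive invalid frames.
    start = next(i for i, f in enumerate(frames) if not f[2])
    end = start
    while end + 1 < len(frames) and not frames[end + 1][2]:
        end += 1
    # Source: the valid frame immediately before the run; target: the one immediately after.
    source = frames[start - 1] if start > 0 else None
    target = frames[end + 1] if end + 1 < len(frames) else None
    return source, target, (start, end)
```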
As another possible implementation manner, the source image data frame and the target image data frame corresponding to the invalid image data may also be obtained from the video stream as follows:
Among the image data frames whose acquisition time is before the invalid image data, an image data frame whose image quality score meets a specified condition is selected as the source image data frame. For example, a deep learning method based on deep belief networks (DBNs) may be used to score the image quality of each image data frame acquired before the invalid image data, and the image data frame whose image quality score meets the specified condition is determined as the source image data frame. The specified condition may be that the frame has the highest score.
Among the image data frames whose acquisition time is after the invalid image data, an image data frame whose image quality score meets a specified condition is selected as the target image data frame. For example, the DBN-based deep learning method scores the image quality of each image data frame acquired after the invalid image data, and the image data frame whose image quality score meets the specified condition is determined as the target image data frame. The specified condition may be that the frame has the highest score.
As another possible implementation manner, the source image data frame and the target image data frame corresponding to the invalid image data may also be obtained from the video stream as follows:
Among the image data frames whose acquisition time is before the invalid image data and whose image quality score meets a specified condition, the frame whose acquisition time has the shortest time interval to that of the first frame of the invalid image data is selected as the source image data frame. For example, the DBN-based deep learning method scores the image quality of each image data frame acquired before the invalid image data and identifies the frames whose scores meet the specified condition; among those frames, the one whose acquisition time is closest to that of the first frame of the invalid image data is taken as the source image data frame. The specified condition may be that the image quality score exceeds a preset threshold.
Among the image data frames whose acquisition time is after the invalid image data and whose image quality score meets a specified condition, the frame whose acquisition time has the shortest time interval to that of the last frame of the invalid image data is selected as the target image data frame. For example, the DBN-based deep learning method scores the image quality of each image data frame acquired after the invalid image data and identifies the frames whose scores meet the specified condition; among those frames, the one whose acquisition time is closest to that of the last frame of the invalid image data is taken as the target image data frame. The specified condition may be that the image quality score exceeds a preset threshold.
As another possible implementation manner, the source image data frame and the target image data frame corresponding to the invalid image data may also be obtained from the video stream as follows:
For the image data frames whose acquisition time is before the invalid image data, the image quality scores are obtained frame by frame in order of increasing time interval to the acquisition time of the first frame of the invalid image data, until a frame whose image quality score meets the specified condition is found; that frame is determined as the source image data frame. The specified condition may be that the image quality score exceeds a preset threshold.
For the image data frames whose acquisition time is after the invalid image data, the image quality scores are likewise obtained frame by frame in order of increasing time interval to the acquisition time of the first frame of the invalid image data, until a frame whose image quality score meets the specified condition is found; that frame is determined as the target image data frame. The specified condition may be that the image quality score exceeds a preset threshold.
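As a hedged illustration of this outward scan, the Python sketch below walks away from the invalid run one frame at a time until a frame's quality score exceeds a threshold. The quality_score function is a simple sharpness proxy (variance of the Laplacian) standing in for the DBN-based scorer mentioned above; it is an assumption of this sketch, not the patent's scoring method.

```python
import cv2

# Stand-in quality scorer (assumption): variance of the Laplacian as a sharpness proxy.
def quality_score(img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def scan_for_frame(frames, start_idx, step, threshold):
    """Walk outward from the invalid run (step=-1 for frames before it,
    step=+1 for frames after it) and return the first frame whose score
    exceeds the threshold, or None if none qualifies."""
    i = start_idx
    while 0 <= i < len(frames):
        if quality_score(frames[i]) > threshold:
            return frames[i]
        i += step
    return None
```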
The image quality evaluation method is not limited in the present invention; the above is only one embodiment.
And step S102, generating a transition deformation data frame corresponding to each invalid image data frame according to the source image data frame and the target image data frame.
In the embodiment of the present invention, a gradual change parameter may first be determined based on the number of frames of the invalid image data.
Optionally, the gradual change parameter may be determined according to the formula t = 1/N, where t is the gradual change parameter and N is the number of image frames in the invalid image data.
Then, a transition deformation data frame corresponding to each invalid image data frame is generated according to the gradual change parameter, the source image data frame and the target image data frame.
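A minimal sketch of this step, assuming a simple cross-dissolve as the transition (the morphing described below is richer), is given here; the function name is illustrative only.

```python
import cv2

# Sketch (assumption): N transition frames generated with the gradual change
# parameter t = 1/N as the blending step. The source and target frames must
# have the same size and data type.
def make_transition_frames(source, target, n_invalid):
    t = 1.0 / n_invalid                      # gradual change parameter
    frames = []
    for k in range(1, n_invalid + 1):
        alpha = k * t                        # the k-th frame is k*t of the way to the target
        frames.append(cv2.addWeighted(source, 1.0 - alpha, target, alpha, 0.0))
    return frames
```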
Alternatively, as shown in fig. 4, the step S102 may include the following sub-steps:
In sub-step S1021, first target information and second target information are extracted from the source image data frame and the target image data frame, respectively. The first target information refers to at least one dynamic feature image appearing in the source image data frame, and the second target information refers to at least one dynamic feature image appearing in the target image data frame.
In sub-step S1022, a plurality of gradual change images are generated according to the first target information and the second target information in combination with the gradual change parameter. Optionally, transition deformation may be performed between the first target information and the second target information, followed by mixed interpolation of pixel colors, to obtain the plurality of gradual change images.
As an embodiment, a high-quality image morphing algorithm based on shape interpolation and moving least squares (MLS) deformation may be used to obtain the morphed images.
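The following sketch illustrates the shape-interpolation idea in a strongly simplified form: corresponding points are interpolated linearly and each image is warped toward the interpolated shape with a single similarity transform, after which the colors can be cross-dissolved as above. A moving least squares deformation is not implemented here; this is an assumption-laden stand-in, not the patented algorithm.

```python
import cv2
import numpy as np

# Simplified stand-in (assumption): warp an image toward the shape obtained by
# linearly interpolating corresponding point sets, using one global similarity
# transform instead of moving least squares. Point arrays are (N, 2) with N >= 2.
def shape_interp_warp(img, own_pts, other_pts, alpha):
    own_pts = np.asarray(own_pts, dtype=np.float32)
    other_pts = np.asarray(other_pts, dtype=np.float32)
    mid_pts = (1.0 - alpha) * own_pts + alpha * other_pts   # interpolated shape
    m, _ = cv2.estimateAffinePartial2D(own_pts, mid_pts)
    h, w = img.shape[:2]
    return cv2.warpAffine(img, m, (w, h))

# Both frames can be warped toward the same interpolated shape and then blended:
# warped_src = shape_interp_warp(src_img, src_pts, dst_pts, alpha)
# warped_dst = shape_interp_warp(dst_img, dst_pts, src_pts, 1.0 - alpha)
# frame_k   = cv2.addWeighted(warped_src, 1.0 - alpha, warped_dst, alpha, 0.0)
```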
In sub-step S1023, feature curve groups are obtained from the first target information and the second target information.
In this embodiment of the present invention, each feature curve group includes a first feature curve of a dynamic feature image of the first target information and a second feature curve corresponding to the first feature curve in the second target information.
For example, if the dynamic feature image of the first target information includes a red car and a black car driving on a road, and the dynamic feature image of the second target information includes the same black car at a different position, the contour line of the black car in the first target information may be used as the first feature curve, and the contour line of the black car in the second target information as the corresponding second feature curve.
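To make the notion of a feature curve concrete, the sketch below extracts the largest external contour from a binary foreground mask (for instance, a mask of the black car) and treats it as the feature curve. How the mask is produced is left open; the function and its reliance on the OpenCV 4.x API are assumptions of this sketch.

```python
import cv2
import numpy as np

# Sketch (assumption): the largest external contour of a binary mask used as a
# feature curve. mask is an 8-bit single-channel image; OpenCV 4.x API assumed.
def largest_contour_curve(mask):
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    biggest = max(contours, key=cv2.contourArea)
    return biggest.reshape(-1, 2).astype(np.float32)   # ordered (N, 2) curve points
```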
In sub-step S1024, a displacement path of each feature curve group is obtained according to the feature curve groups.
In the embodiment of the present invention, feature point pairs are obtained according to the feature curve groups. Each feature point pair includes two salient feature points; specifically, a first salient feature point on the first feature curve and a second salient feature point on the second feature curve.
It should be noted that a salient feature point may be an end point of the feature curve or a curvature extreme point. For example, a curvature extreme point of the first feature curve may be used as a first salient feature point, and the corresponding curvature extreme point of the second feature curve as the corresponding second salient feature point. The first salient feature point and the second salient feature point form one feature point pair.
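The following sketch shows one way, assumed purely for illustration, to pick salient feature points as local extrema of discrete curvature along a feature curve; curve end points could simply be appended to the result.

```python
import numpy as np

# Sketch (assumption): salient feature points as local curvature maxima of an
# ordered (N, 2) curve; k controls the size of the local neighborhood.
def curvature_extrema(curve, k=5):
    d1 = np.gradient(curve, axis=0)                     # first derivative (dx, dy)
    d2 = np.gradient(d1, axis=0)                        # second derivative
    num = np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0])
    den = (d1[:, 0] ** 2 + d1[:, 1] ** 2) ** 1.5 + 1e-8
    curvature = num / den
    # Keep points whose curvature is the maximum within a +/-k neighborhood.
    idx = [i for i in range(k, len(curve) - k)
           if curvature[i] == curvature[i - k:i + k + 1].max()]
    return curve[idx]
```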
As an implementation manner, an AlexNet deep learning model may be trained into a model that can automatically identify the feature point pairs in the source image data frame and the target image data frame, and the feature point pairs corresponding to the feature curve groups between the source image data frame and the target image data frame are then obtained directly using the trained model.
Further, a first central feature point of the first feature curve is determined according to the first salient feature points corresponding to the first feature curve. The first central feature point may be the center of the visually salient points of the first feature curve.
A second central feature point of the second feature curve is determined according to the second salient feature points corresponding to the second feature curve. The second central feature point may be the center of the visually salient points of the second feature curve.
The displacement path is then obtained according to the first central feature point and the second central feature point corresponding to each feature curve group, i.e., generated from the image coordinates of the first central feature point and the image coordinates of the second central feature point. The displacement path may be the path along which the image coordinates of the first central feature point move to the image coordinates of the second central feature point, for example the line connecting the two sets of image coordinates.
In sub-step S1025, position points are determined on the displacement path, the number of position points being the same as the number of frames of the invalid image data. Optionally, the position points may be determined at equal intervals along the displacement path.
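Putting sub-steps S1024 and S1025 together, a minimal sketch, assuming the central feature point is taken as the mean of the salient points, is:

```python
import numpy as np

# Sketch (assumption): central feature points as the mean of the salient points,
# a straight-line displacement path between them, and one equally spaced position
# point per invalid frame (path end points excluded).
def displacement_positions(salient_src, salient_dst, n_invalid):
    c_src = np.asarray(salient_src).mean(axis=0)   # first central feature point
    c_dst = np.asarray(salient_dst).mean(axis=0)   # second central feature point
    steps = np.linspace(0.0, 1.0, n_invalid + 2)[1:-1]
    return [c_src + s * (c_dst - c_src) for s in steps]
```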
In sub-step S1026, the position points with the same sequence number along the displacement direction on each displacement path are mapped onto a substitute image frame created, for each invalid frame, from the non-target image frame determined in the video stream.
In the embodiment of the present invention, the displacement direction may be the direction from the first central feature point to the second central feature point. The non-target image frame may be an image frame, acquired from the video stream, that contains no dynamic feature image. As a possible implementation, one non-target image frame obtained from the video stream may be stored in advance; during processing of the video stream, if a new non-target image frame is detected and its image quality score exceeds that of the stored non-target image frame, the new frame replaces the stored one. The image capturing device 100 creates the same number of substitute image frames as there are invalid image data frames, based on the most recently stored non-target image frame, and maps the position points with the same sequence number along the displacement direction on each displacement path onto the same substitute image frame. It should be noted that there may be multiple feature curve groups and therefore multiple displacement paths; the position points with the same sequence number along the displacement direction on each displacement path are mapped onto each substitute image frame.
In sub-step S1027, combining the position points mapped onto each substitute image frame, the gradual change image is fused with the substitute image frame to generate the transition deformation data frame corresponding to each invalid image data frame.
In the embodiment of the present invention, the gradual change image is fused with each substitute image frame based on the position points mapped onto that substitute image frame, to generate a transition deformation data frame. Using the mapped position points on the substitute image frame to determine the fusion position of the gradual change image addresses the problem in the related art that the position of the fused image is inaccurate.
Furthermore, the generated transition deformation data frames may be sorted according to the sequence numbers, on the displacement path, of the position points mapped onto the corresponding substitute image frames. For example, the transition deformation data frame ranked first corresponds to the first invalid image data frame, and the one ranked last corresponds to the last invalid image data frame.
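As a rough illustration of the fusion in sub-step S1027, the sketch below pastes a gradual change image patch onto a copy of the substitute image frame, centered on the mapped position point; alpha blending, boundary clipping and multiple displacement paths are omitted for brevity, and the names are assumptions of this sketch.

```python
# Sketch (assumption): fuse one gradual change patch into a substitute frame at a
# mapped position point; boundary handling is intentionally omitted.
def fuse_at_position(substitute_frame, gradual_patch, position):
    out = substitute_frame.copy()
    ph, pw = gradual_patch.shape[:2]
    x = int(position[0]) - pw // 2
    y = int(position[1]) - ph // 2
    out[y:y + ph, x:x + pw] = gradual_patch
    return out

# One transition deformation frame per invalid frame, ordered by sequence number:
# transition_frames = [fuse_at_position(sub, patch, pos)
#                      for sub, patch, pos in zip(substitutes, patches, positions)]
```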
And step S103, sequentially replacing each corresponding frame of the invalid image data frame in the video stream with the transition deformation data frame to obtain the processed video stream.
In the embodiment of the invention, a gradual change image corresponding to each invalid image data frame is generated from the source image data frame and the target image data frame corresponding to the invalid image data, the fusion position of each gradual change image in a substitute image frame is determined to generate the transition deformation data frame, and the invalid image data frame is replaced. This improves the quality of the video stream in real time and ensures smooth transmission of a high-quality video stream. That is, transition frames are generated by gradually morphing between the source image data frame and the target image data frame, and the invalid frames are repaired by the replacement operation, which directly improves the video stream quality while maintaining the transmission speed.
Furthermore, in order to facilitate processing of the video stream, the image data frames of the video stream may be captured and then placed in an image processing queue according to the capture time, so that the video stream may be processed according to steps S101 to S103. Specifically, as shown in fig. 5, the video stream processing method may further include the following steps:
step S201, sequentially caching the multiple frames of image data frames of the acquired video stream according to the sequence of the acquisition time.
In the embodiment of the invention, the image data frame is cached according to the first-in first-out principle. The total number of buffered frames of image data frames may be adjusted according to the environment, illumination, climate, and scene of the installation environment of the image capturing apparatus 100. The adjustment may be manual adjustment or automatic adjustment of the image capturing apparatus 100. The automatic adjustment may be based on the number of consecutive frames of invalid image data present in the frames of captured image data. Specifically, when the number of consecutive invalid image data frames appearing in the buffered image data frames exceeds a specified proportion of the total number of buffered frames of the current image data frame, the total number of buffered frames of image data frames is increased to satisfy that the total number of buffered frames of image data exceeds a specified multiple of the number of consecutive invalid image data frames appearing in the buffered image data frames.
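A minimal sketch of such an adaptive first-in, first-out buffer, with illustrative (assumed) default values for the proportion and multiple, follows:

```python
from collections import deque

# Sketch (assumption): FIFO frame buffer whose capacity grows when the longest run
# of consecutive invalid frames exceeds a set proportion of the capacity.
class FrameBuffer:
    def __init__(self, capacity=50, proportion=0.2, multiple=3):
        self.capacity, self.proportion, self.multiple = capacity, proportion, multiple
        self.frames = deque()

    def push(self, frame, is_valid):
        self.frames.append((frame, is_valid))
        run = self._longest_invalid_run()
        if run > self.proportion * self.capacity:
            # Keep the capacity above the specified multiple of the invalid run length.
            self.capacity = max(self.capacity, self.multiple * run)
        while len(self.frames) > self.capacity:
            self.frames.popleft()            # first in, first out

    def _longest_invalid_run(self):
        longest = current = 0
        for _, valid in self.frames:
            current = 0 if valid else current + 1
            longest = max(longest, current)
        return longest
```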
Step S202, sequentially detecting the buffered image data frames to determine whether invalid image data occurs.
As an implementation, each buffered image data frame may be detected using an R-FCN deep learning model. Here, only the detection layer of the R-FCN model is used to determine whether an image data frame is an invalid image, without classifying the frame, which improves detection efficiency. After invalid image data is detected, the flow advances to step S101, and the invalid frames are repaired by performing the replacement operation on the invalid data frames, thereby directly improving the video stream quality while maintaining the transmission speed.
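The R-FCN detector itself is beyond the scope of a short example; as a crude, assumed stand-in, the sketch below flags a frame as invalid when it is nearly black or washed out, or when it is essentially identical to the previous frame (a frozen frame). The thresholds are illustrative only and are not taken from the patent.

```python
import cv2

# Crude stand-in for invalid-frame detection (assumption, not the R-FCN model).
def is_invalid_frame(img, prev, dark=10, bright=245, freeze_eps=0.5):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    mean = float(gray.mean())
    if mean < dark or mean > bright:          # almost black or almost saturated
        return True
    if prev is not None:
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        if cv2.absdiff(gray, prev_gray).mean() < freeze_eps:   # frozen/duplicated frame
            return True
    return False
```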
Second embodiment
An embodiment of the present invention further provides a video stream processing apparatus 300 corresponding to the foregoing method; details of the video stream processing apparatus 300 may be implemented with reference to the foregoing method. As shown in fig. 6, the video stream processing apparatus 300 may include:
The detection module 301 is configured to, when it is detected that invalid image data occurs in the video stream, determine a source image data frame and a target image data frame from, respectively, the image data frames whose acquisition time is before the invalid image data and the image data frames whose acquisition time is after the invalid image data, wherein the invalid image data comprises one or more invalid image data frames.
The processing module 302 is configured to generate a transition deformation data frame corresponding to each invalid image data frame according to the source image data frame and the target image data frame.
Preferably, a gradual change parameter is determined according to the number of frames of the invalid image data, and the transition deformation data frame corresponding to each invalid image data frame is generated according to the gradual change parameter, the source image data frame and the target image data frame.
The replacing module 303 is configured to sequentially replace each corresponding invalid image data frame in the video stream with the transition deformation data frame, so as to obtain the processed video stream.
Embodiments of the present invention also disclose a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by the processor 112, implements the video stream processing method disclosed in the foregoing embodiments of the present invention.
In summary, the present invention provides a video stream processing method, apparatus and computer-readable storage medium. The video stream processing method comprises: when it is detected that invalid image data appears in the video stream, determining a source image data frame and a target image data frame from, respectively, the image data frames whose acquisition time is before the invalid image data and those whose acquisition time is after the invalid image data; generating a transition deformation data frame corresponding to each frame of the invalid image data according to the source image data frame and the target image data frame; and sequentially replacing each corresponding frame of the invalid image data in the video stream with the transition deformation data frame to obtain the processed video stream. This improves the quality of the video stream in real time and ensures smooth transmission of a high-quality video stream. Transition frames are generated by gradual image morphing between the source image data frame and the target image data frame, and the invalid image data is replaced and thereby repaired, which directly improves the video stream quality while maintaining the transmission speed.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative and, for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It should be noted that, in this document, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (7)

1. A method for processing a video stream, the method comprising:
when invalid image data of a video stream is detected, determining a source image data frame and a target image data frame from an image data frame with the collection time before the invalid image data and an image data frame with the collection time after the invalid image data respectively, wherein the invalid image data comprises one or more invalid image data frames;
determining a gradual change parameter according to the frame number of the invalid image data;
extracting first target information and second target information from the source image data frame and the target image data frame, respectively;
generating a plurality of gradual change images according to the first target information and the second target information;
acquiring characteristic curve groups from the first target information and the second target information, wherein each characteristic curve group comprises a first characteristic curve of a dynamic characteristic image of the first target information and a second characteristic curve corresponding to the first characteristic curve in the second target information;
acquiring a displacement path of each characteristic curve group according to the characteristic curve groups;
determining the position points on the displacement path, wherein the number of the position points is the same as the number of frames of the invalid image data;
mapping the position points with the same sequence number in the displacement direction on each displacement path to a substitute image frame created for each frame from the non-target image frame determined in the video stream;
combining the position points mapped on the substitute image frame of each frame, fusing the gradual change image and the substitute image frame to generate a transition deformation data frame corresponding to the invalid image data frame of each frame;
and sequentially replacing each corresponding frame of the invalid image data frame in the video stream with the transition deformation data frame to obtain the processed video stream.
2. The method of claim 1, wherein said step of acquiring the displacement path of each characteristic curve group according to the characteristic curve groups comprises:
acquiring a characteristic point pair according to the characteristic curve group, wherein the characteristic point pair comprises a first significant characteristic point on the first characteristic curve and a second significant characteristic point on the second characteristic curve;
determining a first central feature point of the first feature curve according to the first significant feature point;
determining a second central feature point of the second feature curve according to the second significant feature point;
and obtaining the displacement path according to the first central characteristic point and the second central characteristic point corresponding to each group of characteristic curve groups.
3. The method of claim 1, wherein said step of determining a source image data frame and a target image data frame from a frame of image data having an acquisition time before said invalid image data and a frame of image data having an acquisition time after said invalid image data, respectively, comprises:
taking, as the source image data frame, an image data frame whose acquisition time is before the invalid image data and whose acquisition time has the shortest time interval to the acquisition time corresponding to the first frame of the invalid image data, or whose image quality score meets a specified condition;
and taking, as the target image data frame, an image data frame whose acquisition time is after the invalid image data and whose acquisition time has the shortest time interval to the acquisition time corresponding to the last frame of the invalid image data, or whose image quality score meets a specified condition.
4. The method of claim 1, wherein said step of determining a source image data frame and a target image data frame from a frame of image data having an acquisition time before said invalid image data and a frame of image data having an acquisition time after said invalid image data, respectively, comprises:
selecting, as the source image data frame, the image data frame whose acquisition time has the shortest time interval to the acquisition time corresponding to the first frame of the invalid image data, from among the image data frames whose acquisition time is before the invalid image data and whose image quality score meets a specified condition;
selecting, as the target image data frame, the image data frame whose acquisition time has the shortest time interval to the acquisition time corresponding to the first frame of the invalid image data, from among the image data frames whose acquisition time is after the invalid image data and whose image quality score meets a specified condition; or,
for the image data frames whose acquisition time is before the invalid image data, sequentially acquiring the image quality score of each image data frame in order of increasing time interval to the acquisition time corresponding to the first frame of the invalid image data, until an image data frame whose image quality score meets a specified condition is acquired, and determining that image data frame as the source image data frame;
and for the image data frames whose acquisition time is after the invalid image data, sequentially acquiring the image quality score of each image data frame in order of increasing time interval to the acquisition time corresponding to the first frame of the invalid image data, until an image data frame whose image quality score meets a specified condition is acquired, and determining that image data frame as the target image data frame.
5. The method of claim 1, wherein the method further comprises:
buffering the acquired image data frames of the video stream in sequence, in order of acquisition time;
sequentially detecting the cached image data frames to judge whether invalid image data occurs or not;
wherein the total number of the buffered image data frames exceeds a specified multiple of the number of frames of the invalid image data that continuously appears in the buffered image data frames.
6. A video stream processing apparatus, characterized in that the apparatus comprises:
the detection module is used for respectively determining a source image data frame and a target image data frame from an image data frame with the collection time before the invalid image data and an image data frame with the collection time after the invalid image data when the video stream is detected to have the invalid image data, wherein the invalid image data comprises one or more invalid image data frames;
a processing module to:
determining a gradual change parameter according to the frame number of the invalid image data;
determining a gradual change parameter according to the frame number of the invalid image data; extracting first target information and second target information from the source image data frame and the target image data frame, respectively;
generating a plurality of gradual change images according to the first target information and the second target information;
acquiring characteristic curve groups from the first target information and the second target information, wherein each characteristic curve group comprises a first characteristic curve of a dynamic characteristic image of the first target information and a second characteristic curve corresponding to the first characteristic curve in the second target information;
acquiring a displacement path of each characteristic curve group according to the characteristic curve groups;
determining the position points on the displacement path, wherein the number of the position points is the same as the number of frames of the invalid image data;
mapping the position points with the same sequence number in the displacement direction on each displacement path to a substitute image frame created for each frame from the non-target image frame determined in the video stream;
combining the position points mapped on the substitute image frame of each frame, fusing the gradual change image and the substitute image frame to generate a transition deformation data frame corresponding to the invalid image data frame of each frame;
and the replacing module is used for sequentially replacing each corresponding frame of the invalid image data frame in the video stream with the transition deformation data frame so as to obtain the processed video stream.
7. A computer-readable storage medium having stored thereon computer instructions, which, when executed by a processor, carry out the steps of the method of any one of claims 1 to 5.
CN201810966728.8A 2018-08-23 2018-08-23 Video stream processing method, device and computer readable storage medium Active CN110858879B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810966728.8A CN110858879B (en) 2018-08-23 2018-08-23 Video stream processing method, device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810966728.8A CN110858879B (en) 2018-08-23 2018-08-23 Video stream processing method, device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN110858879A CN110858879A (en) 2020-03-03
CN110858879B (en) 2022-06-14

Family

ID=69636077

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810966728.8A Active CN110858879B (en) 2018-08-23 2018-08-23 Video stream processing method, device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110858879B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114007102A (en) * 2021-10-28 2022-02-01 Shenzhen SenseTime Technology Co., Ltd. Video processing method, video processing device, electronic equipment and storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3618298B2 (en) * 2000-01-28 2005-02-09 株式会社スクウェア・エニックス Motion display method, game device, and recording medium
JP3889233B2 (en) * 2001-03-08 2007-03-07 株式会社モノリス Image encoding method and apparatus, and image decoding method and apparatus
JP4730183B2 (en) * 2006-04-17 2011-07-20 株式会社日立製作所 Video display device
US8184200B1 (en) * 2008-04-22 2012-05-22 Marvell International Ltd. Picture rate conversion system for high definition video
US8781165B2 (en) * 2010-12-14 2014-07-15 The United States Of America As Represented By The Secretary Of The Navy Method and apparatus for displacement determination by motion compensation
US9794642B2 (en) * 2013-01-07 2017-10-17 Gracenote, Inc. Inserting advertisements into video content
CN104616338B (en) * 2015-01-26 2018-02-27 江苏如意通动漫产业有限公司 The consistent speed change interpolating method of space-time based on 2 D animation
CN107424122A (en) * 2017-07-06 2017-12-01 中国科学技术大学 The image interpolation method that deformation aids under a kind of big displacement

Also Published As

Publication number Publication date
CN110858879A (en) 2020-03-03


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant