CN115514985A - Video processing method and device, electronic equipment and storage medium - Google Patents
Info
- Publication number
- CN115514985A (application CN202211141644.3A)
- Authority
- CN
- China
- Prior art keywords
- video
- target
- played
- storage mode
- playing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
All classes fall under H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION; H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]:
- H04N21/231—Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
- H04N21/44004—Processing of video elementary streams involving video buffer management, e.g. video decoder buffer or video display buffer
- H04N21/4402—Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
- H04N21/47217—End-user interface for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
Abstract
The invention discloses a video processing method and apparatus, an electronic device, and a storage medium. The method includes: acquiring a video to be processed and determining a target storage mode corresponding to it, where the target storage mode is either a complete-data-stream storage mode or a key-frame storage mode; storing at least one to-be-stored video frame of the to-be-processed video according to the target storage mode, and obtaining a to-be-played video from the stored frames; and, when a video playing instruction is detected, determining a target playing speed for the to-be-played video and playing it at that speed. This addresses the problems that a video contains a large number of frames with invalid information in which the frames with valid information are submerged, and that the video occupies a large amount of storage space. By storing only the frames that contain valid information, valid information can be obtained from the video quickly, the playing speed can be adjusted, and the storage space required for the video is reduced.
Description
Technical Field
The present invention relates to the field of video processing technologies, and in particular, to a video processing method and apparatus, an electronic device, and a storage medium.
Background
In practical applications, the use of video surveillance to safeguard personal and property security is increasingly common.
Currently, when video is captured by a camera device, the usual practice is to record and store footage around the clock. However, this not only occupies a large amount of storage space, but also produces videos containing a large number of frames with invalid information, in which the frames with valid information are submerged, so that retrieving valid information from the stored video is time-consuming and laborious.
To address these problems, an improved video processing method is needed.
Disclosure of Invention
The invention provides a video processing method and apparatus, an electronic device, and a storage medium, which address the problems that a video contains a large number of frames with invalid information in which the frames with valid information are submerged, and that the video occupies a large amount of storage space.
In a first aspect, an embodiment of the present invention provides a video processing method, including:
acquiring a video to be processed, and determining a target storage mode corresponding to the video to be processed; the target storage mode comprises a complete data stream storage mode or a key frame storage mode;
storing at least one video frame to be stored in the video to be processed based on the target storage mode, and obtaining a video to be played based on each video frame to be stored;
when a video playing instruction is detected, determining a target playing speed corresponding to the video to be played, and playing the video to be played based on the target playing speed.
In a second aspect, an embodiment of the present invention further provides a video processing apparatus, including:
the target storage mode determining module is used for acquiring a video to be processed and determining a target storage mode corresponding to the video to be processed; the target storage mode comprises a complete data stream storage mode or a key frame storage mode;
the to-be-played video determining module is used for storing at least one to-be-stored video frame in the to-be-processed video based on the target storage mode and obtaining the to-be-played video based on each to-be-stored video frame;
and the video playing module is used for determining a target playing speed corresponding to the video to be played when a video playing instruction is detected, and playing the video to be played based on the target playing speed.
In a third aspect, an embodiment of the present invention further provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor, the computer program being executable by the at least one processor to enable the at least one processor to perform the video processing method according to any of the embodiments of the invention.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, where computer instructions are stored, and the computer instructions are configured to enable a processor to implement the video processing method according to any embodiment of the present invention when executed.
According to the technical solution of the embodiments, a video to be processed is acquired and a target storage mode corresponding to it is determined: a target detection algorithm or an infrared-sensor detection algorithm determines whether a target detection object exists in each video frame to be identified, that is, whether the image has changed, and the target storage mode of each frame is set accordingly. At least one to-be-stored video frame is then stored according to the target storage mode: if a frame to be identified shows an image change, the corresponding frames are stored in the complete-data-stream storage mode; otherwise they are stored in the key-frame storage mode. A to-be-played video is generated from the stored frames. When a video playing instruction is detected, a target playing duration for the to-be-played video is determined from the instruction, the target playing speed is computed as the ratio of the number of frames in the to-be-played video to the target playing duration, and the video is played at that speed. This addresses the problems that a video contains a large number of frames with invalid information in which the frames with valid information are submerged, and that the video occupies a large amount of storage space.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present invention, nor do they necessarily limit the scope of the invention. Other features of the present invention will become apparent from the following description.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings used in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a video processing method according to an embodiment of the present invention;
fig. 2 is a flowchart of a video processing method according to a second embodiment of the present invention;
fig. 3 is a schematic diagram of video transmission according to a third embodiment of the present invention;
fig. 4 is a flowchart of a video processing method according to a third embodiment of the present invention;
fig. 5 is a flowchart of a video processing method according to a third embodiment of the present invention;
fig. 6 is a schematic structural diagram of a video processing apparatus according to a fourth embodiment of the present invention;
fig. 7 is a schematic structural diagram of an electronic device implementing a video processing method according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein.
Before the technical solution is elaborated, its application scenario is introduced for clearer understanding. In practical applications, to manage a target area, a camera device may be installed there to film users, articles, and the like in real time, so that whether an abnormal situation exists in the target area can be determined from the captured video. The target area is determined by actual conditions; it may, for example, be any area of a warehouse, a residential community, a street, or an office where a camera device is permitted to be installed, and is specifically bounded by the shooting range of the camera device.
Example one
Fig. 1 is a flowchart of a video processing method according to an embodiment of the present invention. This embodiment is applicable to situations where valid information must be obtained from a video quickly, the video storage space must be reduced, and the playing speed must be adjusted dynamically. The method may be executed by a video processing apparatus, which may be implemented in hardware and/or software and configured in a computing device capable of executing the method.
As shown in fig. 1, the method includes:
and S110, acquiring the video to be processed, and determining a target storage mode corresponding to the video to be processed.
The video to be processed is a video of the target area captured by the camera device; it may be acquired in real time or retrieved from a video repository where it was stored in advance. The target storage mode is either a complete-data-stream storage mode or a key-frame storage mode. A video to be processed consists of one or more video frames: the complete-data-stream storage mode stores all frames of the video, while the key-frame storage mode stores only its key frames. A key frame is a frame that differs noticeably from its neighbors; for example, if no user appears in the previous frame but a user appears in the current frame, the current frame is considered a frame with an obvious change.
Specifically, the video to be processed is either retrieved from a preset video repository or obtained by filming the target area in real time with a camera device installed there. The target storage mode of the video is then determined according to whether obvious image changes occur in its frames.
It should be noted that the target storage mode may be determined for the video to be processed as a whole: if any frame shows an obvious image change, the video is stored in the complete-data-stream storage mode, and if no frame in the entire video shows an obvious image change, it is stored in the key-frame storage mode. Alternatively, the target storage mode may be determined in real time while the video is being shot; in other words, it may be switched on the fly according to the image changes in the video.
Optionally, acquiring the video to be processed and determining its target storage mode includes: when the target video recording function of the camera device is triggered, filming the target area in real time to obtain the video to be processed; for at least one video frame to be identified in the video, determining whether a target detection object exists in the current frame; if it does, selecting the complete-data-stream storage mode as the target storage mode; if not, selecting the key-frame storage mode.
In this technical solution, the target video recording function may be a video-summary function: video frames captured in the target area are inspected in real time, the target storage mode of each frame is determined according to whether the image has changed, and the video to be processed is generated from the stored frames. A video frame to be identified is any frame of the captured video; the current video frame is whichever frame to be identified is under consideration. The target detection object is an object of interest in the current frame, such as a user or article that suddenly appears or disappears between two consecutive frames, or one whose position or motion changes between adjacent frames. The target area is determined by the shooting range of the camera device.
specifically, the functions of the camera device may be multiple, such as a real-time recording function, a video recording function, or a recording function according to a preset time point, and in the technical solution, the video recording function is mainly explained in detail. And when the trigger of the video-epitome function of the camera equipment is detected, shooting the target area in real time based on the camera equipment installed in the target area to obtain the video to be processed. It can be understood that the video to be processed includes at least one video frame to be identified, that is, a video frame that needs to be subjected to image detection. Determining whether a target detection object exists in the current video frame or not aiming at each video frame to be identified, if so, indicating that more obvious image change exists in the current video frame, and determining a complete data stream storage mode as a target storage mode; otherwise, if the key frame does not exist, the image change of the current video frame is not obvious, and the key frame storage mode is determined to be the target storage mode.
And S120, storing at least one to-be-stored video frame in the to-be-processed video based on the target storage mode, and obtaining the to-be-played video based on each to-be-stored video frame.
It should be noted that this technical solution mainly stores the frames with obvious image changes; that is, a frame to be stored can be understood as a frame with an obvious image change. A segment of video comprises multiple frames, and a video can likewise be generated from multiple frames; the video to be played is the video generated from the frames to be stored.
Specifically, after the target storage mode of each video frame to be identified in the video to be processed has been determined, each video frame to be stored is stored according to its target storage mode, and the video to be played is generated from the stored frames.
For example, the video frames to be stored may be stored on a cloud server or in a conventional storage space, such as local storage or a memory card. Each frame is stored to the cloud server or the corresponding storage space according to its target storage mode, and the video to be played is generated from the stored frames.
For example, if an obvious image change exists in frames 11 to 20 of the video to be processed, frames 11 to 20 are stored to the cloud server or a preset storage space in the complete-data-stream storage mode; if no obvious image change exists in frames 21 to 30, those frames are stored in the key-frame storage mode, for example by storing only frame 21.
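As an illustration only (the patent does not prescribe an implementation; the function and mode names are assumptions), selecting which frames of each labelled segment to store and concatenating them into the video to be played could be sketched as:

```python
# Hypothetical sketch: given segments of frames labelled with a storage
# mode, keep every frame of "complete_data_stream" segments but only the
# first (key) frame of "key_frame" segments, then concatenate the kept
# frames into the video to be played.
def build_video_to_play(segments):
    """segments: list of (mode, [frame, ...]) tuples in capture order."""
    stored = []
    for mode, frames in segments:
        if mode == "complete_data_stream":
            stored.extend(frames)      # obvious change: store all frames
        else:  # "key_frame"
            stored.append(frames[0])   # no change: store only the key frame
    return stored

segments = [
    ("key_frame", ["f1", "f2", "f3"]),             # static scene
    ("complete_data_stream", ["f4", "f5", "f6"]),  # a user walks through
    ("key_frame", ["f7", "f8"]),                   # static again
]
# Only frames carrying valid information survive: f1, f4, f5, f6, f7.
```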
In this technical solution, a frame containing a target detection object is a frame with an obvious image change, that is, a frame to be stored. Accordingly, storing at least one video frame to be stored according to the target storage mode and obtaining the video to be played includes: determining each frame containing a target detection object as a frame to be stored and storing it to the cloud server according to the target storage mode; and generating the video to be played from the at least one stored frame on the cloud server.
If the frames to be stored are kept on a cloud server, the generated video to be played can be retained for longer: the cloud server has a large storage space, so previously stored videos need not be overwritten to save room. This technical solution therefore preferentially stores the video to be played on the cloud server.
The advantage of this arrangement is as follows. If every frame captured by the camera device were stored, the resulting video might contain a large number of frames with invalid information, that is, long runs of consecutive frames whose image content is identical or nearly so. Playing such a video forces the user to watch those frames, which is very time-consuming, and the frames containing valid information may be buried among them, so the user may miss the valid information. To help the user find the valid information quickly and save viewing time, this technical solution stores frames using the complete-data-stream and key-frame storage modes and generates the video to be played from the stored frames, so that only frames containing valid information are played and the user obtains the valid information much faster.
It should be noted that in this technical solution, valid information refers to frames to be identified that contain a target detection object, while invalid information refers to runs of consecutive frames in which no target detection object appears.
S130, when a video playing instruction is detected, determining a target playing speed corresponding to the video to be played, and playing the video to be played based on the target playing speed.
The video playing instruction can be understood as an instruction for playing a video to be played. The target playing speed may be understood as a speed when the video to be played is played, for example, the speed may be 24 frames/second, or 60 frames/second.
In practical applications, through editing controls in a target display interface, a user may specify the video to be viewed as the video to be played, together with playing parameters related to its speed, such as a playing-duration parameter or a number of frames per unit time. A video playing instruction is generated from this input, so that when the instruction is detected, the video to be played and its target playing speed are determined from the information carried in the instruction, and the video is played at that speed.
Optionally, determining the target playing speed corresponding to the video to be played and playing the video at that speed includes: acquiring the video to be played from the cloud server according to the video playing instruction and determining its target playing duration; determining the number of frames to be played in the video, and computing the target playing speed as the ratio of the number of frames to be played to the target playing duration; and playing the video at the target playing speed.
The target playing duration is the duration within which the user wishes the video to finish playing; for example, if the user wants the video played within 1 minute, the target playing duration is 1 minute. The number of frames to be played is the number of video frames the video to be played contains.
In practical applications, taking the cloud server as the storage space, multiple video clips may be stored there; when a video playing instruction is detected, the video to be played is selected from them according to the instruction. The instruction may also carry the target playing duration, and once the video to be played is determined, its number of frames to be played can be read from the video's associated information. The target playing speed is then obtained as the ratio of the number of frames to be played to the target playing duration, and the video is played at that speed.
For example, through the editing controls in the target display interface, the user may enter a video identifier for the video to be played, such as a video name, video ID, or video number, together with a target playing duration, such as 10 seconds. A video playing instruction is generated from these parameters. When the instruction is received, the corresponding video is acquired from the cloud server and its number of frames to be played, say 240, is read from its associated information. The target playing speed is then 240 frames / 10 seconds = 24 frames per second, and the video is played at 24 frames per second.
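The speed computation is a single ratio, consistent with the worked example of 240 frames finishing in 10 seconds at 24 frames per second. A minimal illustrative sketch (the function name is an assumption):

```python
def target_play_speed(num_frames, target_duration_s):
    """Target playing speed in frames per second:
    number of frames to be played / target playing duration."""
    if target_duration_s <= 0:
        raise ValueError("target playing duration must be positive")
    return num_frames / target_duration_s

# 240 frames requested to finish in 10 seconds -> 24 frames per second
speed = target_play_speed(240, 10)
```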
It should be noted that, when the target playing speed is greater than the upper limit playing speed, prompt information is generated and the video to be played is played at the upper limit playing speed.
The upper limit playing speed may be understood as the maximum speed at which the video to be played may be played; for example, it may be set to 60 frames/second, and the specific value may be customized according to actual requirements. The prompt information may be understood as an early-warning message for prompting the user that the target playing speed is too high.
In practical applications, because the video image information that a user can receive in a unit time length is limited, if the playing speed of a video is too fast, the user may not clearly identify detailed picture information in each video frame in the video, and therefore, when the video to be played is played, the target playing speed cannot exceed the upper limit playing speed.
For example, if the target playing speed exceeds the upper limit playing speed, prompt information may be generated, for example: "The current target playing speed is greater than the upper limit playing speed; the video will be played at the upper limit playing speed", so as to prompt the user to play the video to be played at a proper speed. It should be noted that the prompt information serves only as a reminder. In practical applications, the user may choose whether to display it; if it is set not to be displayed, the video player does not display the prompt information and directly plays the video at the upper limit playing speed.
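The capping behavior above can be sketched as a small clamping helper (names, the 60 fps default, and the prompt wording are illustrative assumptions consistent with the text, not a definitive implementation):

```python
UPPER_LIMIT_FPS = 60.0  # example upper limit from the text; user-configurable

def effective_play_speed(target_fps, upper_limit_fps=UPPER_LIMIT_FPS,
                         show_prompt=True):
    """Clamp the requested play speed to the upper limit.

    Returns (speed_to_use, prompt_or_None). The prompt is suppressed when
    the user has configured prompts not to be displayed."""
    if target_fps <= upper_limit_fps:
        return target_fps, None
    prompt = None
    if show_prompt:
        prompt = ("The current target playing speed is greater than the upper "
                  "limit playing speed; the video will be played at "
                  f"{upper_limit_fps:g} frames/second.")
    return upper_limit_fps, prompt

speed, prompt = effective_play_speed(90.0)   # clamped to 60.0, prompt generated
speed2, _ = effective_play_speed(24.0)       # within the limit, no prompt
```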
The advantage of this arrangement is that the speed of the video to be played can be adjusted according to the playing time length the user desires, so that the video can be played within that duration. With this playing mode, the user can finish watching the video within the expected playing time length without missing key information in the video because the playing speed is too high.
It should be noted that the advantage of performing video processing based on the present technical solution is as follows. By analyzing each video frame captured by the image capturing device, it is determined whether a relatively obvious image change exists in the frame, that is, whether a target detection object exists, so as to determine the storage mode corresponding to each video frame; each video frame is then stored to the cloud server or the preset storage space based on the corresponding storage mode. In this manner, the power consumption of the front-end image capturing device can be greatly reduced, so that the device can operate continuously for a long time, making it feasible to power it with a small photovoltaic supply (typically less than 30 W) while maintaining continuous operation. Meanwhile, since the technical solution stores video frames selectively, the transmission rate of the video to be processed can be greatly improved and the data traffic used in transmission reduced.
According to the technical scheme of this embodiment, a video to be processed is obtained and the target storage mode corresponding to it is determined: based on a target detection algorithm or infrared sensor detection, it is determined whether a target detection object exists in each video frame to be identified in the video to be processed, that is, whether an image change exists, and the target storage mode corresponding to each video frame to be identified is determined accordingly. At least one video frame to be stored in the video to be processed is stored based on the target storage mode, and the video to be played is obtained from the stored frames: if an image change exists in a video frame to be identified, the corresponding video frame to be stored is stored based on the complete data stream storage mode; otherwise, it is stored based on the key frame storage mode; the video to be played is then generated from the stored video frames. When a video playing instruction is detected, the target playing speed corresponding to the video to be played is determined and the video is played at that speed: the target playing time length is determined according to the video playing instruction, and the target playing speed is determined from the ratio of the number to be played (the number of video frames in the video to be played) to the target playing time length. This solves the problems that a video contains a large number of video frames with invalid information, that video frames containing valid information are submerged, and that the video occupies a large storage space.
Example two
Fig. 2 is a flowchart of a video processing method according to a second embodiment of the present invention. In this embodiment, the steps of obtaining the video to be processed and determining the target storage mode corresponding to it are optionally refined.
As shown in fig. 2, the method includes:
and S210, when the target video recording function of the camera equipment is triggered, shooting a target area in real time to obtain a video to be processed.
And S220, determining a target storage mode corresponding to the video to be processed.
In practical application, the video to be processed includes at least one video frame to be identified. For each video frame to be identified in the video to be processed, determining whether a target detection object exists in the current video frame includes: performing target detection on the current video frame based on a target detection algorithm to obtain at least one detection object to be determined in the current video frame; and, for each detection object to be determined, determining that a target detection object exists in the current video frame if the current video frame contains a newly added detection object compared with the previous video frame.
The target detection algorithm may be understood as an algorithm for detecting the detection objects to be determined in a video frame to be identified. Specifically, a detection object to be determined may be a user, an animal, an object, or the like, and the algorithm may perform contour detection, position detection, image recognition, and so on for each such object. A newly added detection object is one that appears in the current video frame but not in the previous video frame to be identified; for example, if the current video frame contains user A and the previous video frame does not, user A is a newly added detection object. A newly added detection object may also result from movement: if detection object A is at position A in the 1st video frame to be identified and at position B in the 2nd, a newly added detection object can be considered to exist at position B. The target detection object can be understood as the newly added detection object in the current video frame.
Specifically, image recognition processing is performed on each video frame to be identified in the video to be processed based on a preset target detection algorithm, so as to determine at least one detection object to be determined in each frame and to determine whether a newly added detection object exists in the current video frame. If a newly added detection object exists, a target detection object exists in the current video frame; if not, no target detection object exists in the current video frame.
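The frame-to-frame comparison described above can be sketched as a set difference over detector outputs (the function name and the "object@position" string encoding are illustrative assumptions; a real detector would supply contours and positions):

```python
def has_target_object(current_objects, previous_objects):
    """A frame contains a target detection object when the detector reports
    an object (or an object at a new position) that is absent from the
    previous frame. Inputs are sets of detector outputs encoded as
    "object@position" strings for this sketch."""
    return bool(current_objects - previous_objects)

# User A newly appears in the scene:
has_target_object({"user_a@A"}, set())            # True
# User A moved from position A to position B:
has_target_object({"user_a@B"}, {"user_a@A"})     # True
# Nothing changed between frames:
has_target_object({"user_a@A"}, {"user_a@A"})     # False
```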
In practical applications, an infrared sensor may also be installed in the image capturing apparatus to detect infrared information in the target area. The infrared sensor mainly detects changes in heat; that is, for example, a user, an animal, or a heat-emitting object in the target area can be detected.
Specifically, based on an infrared sensor installed on the camera device, infrared detection is performed in a target area, and whether newly added infrared information exists in the target area is determined; if yes, determining that a target detection object exists in the current video frame; if not, determining that the target detection object does not exist in the current video frame.
The newly added infrared information may be infrared information newly appearing in the current video frame, or newly appearing at a certain position in it: if user A moves from position A to position B, the infrared information corresponding to user A is newly added infrared information with respect to position B. If the current video frame contains newly added infrared information, it contains a target detection object; otherwise, no target detection object exists in the current video frame.
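The infrared check can be sketched by comparing per-position heat readings between the current and previous observation (the dictionary representation, threshold, and function name are assumptions for illustration; the disclosure only states that heat change information is detected):

```python
def new_infrared_info(current_heat, previous_heat, threshold=1.0):
    """Report whether the infrared sensor sees newly added heat anywhere in
    the target area. Readings map position -> temperature above ambient;
    a rise beyond `threshold` at any position counts as new information."""
    for pos, value in current_heat.items():
        if value - previous_heat.get(pos, 0.0) > threshold:
            return True
    return False

# User A moves from position A to position B: position B gains heat.
prev = {"A": 5.0, "B": 0.0}
curr = {"A": 0.0, "B": 5.0}
new_infrared_info(curr, prev)   # True: new heat at B, target object present
new_infrared_info(prev, prev)   # False: unchanged scene
```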
Further, after determining whether a target detection object exists in the current video frame, if the current video frame contains a target detection object, the complete data stream storage mode is determined as the target storage mode; otherwise, the key frame storage mode is determined as the target storage mode.
And S230, storing at least one to-be-stored video frame in the to-be-processed video based on the target storage mode, and obtaining the to-be-played video based on each to-be-stored video frame.
It should be noted that, in the video to be played generated based on this technical solution, if the images in a plurality of consecutive video frames to be identified show no obvious change, only one of those frames may be stored. The generated video to be played may therefore appear less smooth, but since the stored video frames include all the key frames, no key information in the video to be processed is lost.
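The selective-storage idea above, keeping every changed frame but only one representative of each run of unchanged frames, can be sketched as follows (an illustrative reading of the text, not the patent's exact procedure):

```python
def frames_to_store(change_flags):
    """Given per-frame booleans (True = image change / target object present),
    return the indices of the frames kept for the video to be played: every
    changed frame is stored as a complete data stream, while a run of
    unchanged frames contributes only its first frame as a key frame."""
    kept = []
    in_static_run = False
    for i, changed in enumerate(change_flags):
        if changed:
            kept.append(i)
            in_static_run = False
        elif not in_static_run:
            kept.append(i)  # one key frame represents the static run
            in_static_run = True
    return kept

# Frames 0-2 static, 3-4 changing, 5-6 static again:
frames_to_store([False, False, False, True, True, False, False])
# -> [0, 3, 4, 5]
```

Dropping the repeats of each static run is what saves storage while keeping at least one key frame per stretch of the scene.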
The advantage of this arrangement is that the video frames contained in the generated video to be played all contain effective information; even if the user plays the video to be played at a higher playing speed, the effective information in the video will not be missed.
A further advantage is that storing only the video frames containing effective information saves a large amount of storage space, so that less capacity of the cloud server or other storage is occupied, allowing the cloud server or the preset storage space to store more video information, file information, and the like.
S240, when a video playing instruction is detected, determining a target playing speed corresponding to the video to be played, and playing the video to be played based on the target playing speed.
According to the technical scheme of this embodiment, a video to be processed is obtained and the target storage mode corresponding to it is determined: based on a target detection algorithm or infrared sensor detection, it is determined whether a target detection object exists in each video frame to be identified in the video to be processed, that is, whether an image change exists, and the target storage mode corresponding to each video frame to be identified is determined accordingly. At least one video frame to be stored in the video to be processed is stored based on the target storage mode, and the video to be played is obtained from the stored frames: if an image change exists in a video frame to be identified, the corresponding video frame to be stored is stored based on the complete data stream storage mode; otherwise, it is stored based on the key frame storage mode; the video to be played is then generated from the stored video frames. When a video playing instruction is detected, the target playing speed corresponding to the video to be played is determined and the video is played at that speed: the target playing time length is determined according to the video playing instruction, and the target playing speed is determined from the ratio of the number to be played (the number of video frames in the video to be played) to the target playing time length. This solves the problems that a video contains a large number of video frames with invalid information, that video frames containing valid information are submerged, and that the video occupies a large storage space.
EXAMPLE III
In a specific example, as shown in fig. 3, after a target area is shot by a front-end camera (i.e., an image capturing device), image recognition may be performed on the captured video frames and each video frame stored using its corresponding target storage mode. The video to be played is obtained by compression-coding the stored video frames and may then be transmitted over a wide area network to a cloud storage management system (i.e., a cloud server) via network transmission such as WiFi, a 4G network, a 5G network, or Ethernet. The wide area network may include the internet, public networks, private networks, and the like. When a user needs to view the video to be played, the corresponding information can be input on the target display interface of the client software to generate a video playing instruction, thereby achieving real-time monitoring, video playback, quick playback, video downloading, parameter setting, and the like for the target area based on the video to be played.
In practical applications, as shown in fig. 4, after a target area is captured by a camera (i.e., an image capturing apparatus), image detection is performed on a video frame to be recognized based on a target detection algorithm, infrared sensor detection, or the like, so as to determine whether there is an image change in the video frame to be recognized, for example, whether there is a target detection object in the video frame to be recognized, so as to determine a corresponding target storage manner according to whether there is an image change.
Specifically, if the image in the video frame to be identified shows no change, the frame is not fully encoded and compressed, and the key frame storage mode is used as the target storage mode to store the corresponding video frame to be stored. If the image in the video frame to be identified changes, the complete data stream storage mode is determined as the target storage mode, so that all video frames containing the image change are stored. The video to be played is generated from the stored video frames and compression-coded according to the preset time 16s/30s (namely, the target playing duration) input by the user and the number to be played corresponding to the video to be played; the video to be played is stored in the cloud storage management system so that its target playing speed can be re-determined, and the video is then played at the target playing speed through the client software.
For example, as shown in fig. 5, for automatic recording of the target area by the image capturing device, it is first determined whether the device is connected to the cloud storage server; if the connection is successful, automatic recording may proceed. When triggering of the video-epitome function is detected, an H.265 I-frame stream (namely, key frames) is acquired from a buffer queue; if the video file (namely, the video to be played) does not exceed 5 minutes, recording continues, and once it exceeds 5 minutes, recording ends and the file is uploaded to the cloud server. It should be noted that, throughout this process, if the automatic video recording function is not turned on, a corresponding alarm prompt may be issued; if no alarm prompt is required, the device waits for the server to schedule and change the recording type. If the image capturing device is not connected to the cloud storage server, an alarm is triggered; if no alarm is triggered, the device waits for one to be triggered. If the image capturing device is successfully connected to the cloud server but the recording switch is off, an alarm prompt may also be issued.
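The control flow of Fig. 5 can be sketched as a single decision function (state names, return strings, and the function signature are illustrative assumptions; only the 5-minute file cap and the alarm conditions come from the text):

```python
MAX_SEGMENT_S = 5 * 60  # the text's 5-minute cap per video file

def recording_action(connected, recording_switch_on, elapsed_s):
    """One step of the Fig. 5 flow: no cloud connection or a closed
    recording switch yields an alarm prompt; otherwise record until the
    current file exceeds 5 minutes, then finalize and upload it."""
    if not connected:
        return "alarm: not connected to cloud storage server"
    if not recording_switch_on:
        return "alarm: recording switch is off"
    if elapsed_s > MAX_SEGMENT_S:
        return "finalize segment and upload to cloud server"
    return "continue recording"

recording_action(True, True, 120)   # "continue recording"
recording_action(True, True, 301)   # "finalize segment and upload to cloud server"
```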
According to the technical scheme of this embodiment, a video to be processed is obtained and the target storage mode corresponding to it is determined: based on a target detection algorithm or infrared sensor detection, it is determined whether a target detection object exists in each video frame to be identified in the video to be processed, that is, whether an image change exists, and the target storage mode corresponding to each video frame to be identified is determined accordingly. At least one video frame to be stored in the video to be processed is stored based on the target storage mode, and the video to be played is obtained from the stored frames: if an image change exists in a video frame to be identified, the corresponding video frame to be stored is stored based on the complete data stream storage mode; otherwise, it is stored based on the key frame storage mode; the video to be played is then generated from the stored video frames. When a video playing instruction is detected, the target playing speed corresponding to the video to be played is determined and the video is played at that speed: the target playing time length is determined according to the video playing instruction, and the target playing speed is determined from the ratio of the number to be played (the number of video frames in the video to be played) to the target playing time length. This solves the problems that a video contains a large number of video frames with invalid information, that video frames containing valid information are submerged, and that the video occupies a large storage space.
Example four
Fig. 6 is a schematic structural diagram of a video processing apparatus according to a fourth embodiment of the present invention. As shown in fig. 6, the apparatus includes: a target storage mode determining module 310, a to-be-played video determining module 320 and a video playing module 330.
The target storage mode determining module 310 is configured to obtain a video to be processed, and determine a target storage mode corresponding to the video to be processed; the target storage mode comprises a complete data stream storage mode or a key frame storage mode;
a to-be-played video determining module 320, configured to store at least one to-be-stored video frame in a to-be-processed video based on a target storage manner, and obtain a to-be-played video based on each to-be-stored video frame;
the video playing module 330 is configured to determine a target playing speed corresponding to the video to be played when the video playing instruction is detected, and play the video to be played based on the target playing speed.
According to the technical scheme of this embodiment, a video to be processed is obtained and the target storage mode corresponding to it is determined: based on a target detection algorithm or infrared sensor detection, it is determined whether a target detection object exists in each video frame to be identified in the video to be processed, that is, whether an image change exists, and the target storage mode corresponding to each video frame to be identified is determined accordingly. At least one video frame to be stored in the video to be processed is stored based on the target storage mode, and the video to be played is obtained from the stored frames: if an image change exists in a video frame to be identified, the corresponding video frame to be stored is stored based on the complete data stream storage mode; otherwise, it is stored based on the key frame storage mode; the video to be played is then generated from the stored video frames. When a video playing instruction is detected, the target playing speed corresponding to the video to be played is determined and the video is played at that speed: the target playing time length is determined according to the video playing instruction, and the target playing speed is determined from the ratio of the number to be played (the number of video frames in the video to be played) to the target playing time length. This solves the problems that a video contains a large number of video frames with invalid information, that video frames containing valid information are submerged, and that the video occupies a large storage space.
Optionally, the target storage mode determining module includes: a to-be-processed video determining submodule, used for shooting a target area in real time when triggering of the target video recording function of the image capturing device is detected, to obtain the video to be processed; the target area is determined by the shooting range of the image capturing device;
the target detection object determining sub-module is used for determining whether a target detection object exists in a current video frame aiming at least one video frame to be identified in a video to be processed;
the first storage mode determining submodule is used for determining that the storage mode of the complete data stream is the target storage mode if the storage mode is the target storage mode;
and the second storage mode determining submodule is used for determining that the key frame storage mode is the target storage mode if the key frame storage mode is not the target storage mode.
Optionally, the target detection object determining sub-module includes: the to-be-determined object determining unit is used for carrying out target detection on the current video frame based on a target detection algorithm to obtain at least one to-be-determined detection object in the current video frame;
and the target detection object determining unit is used for determining that a target detection object exists in the current video frame if the current video frame has a new detection object compared with the previous video frame aiming at each detection object to be determined.
Optionally, the target detection object determining sub-module further includes: the infrared detection unit is used for carrying out infrared detection on the target area based on an infrared sensor arranged on the camera equipment and determining whether newly added infrared information exists in the target area;
the first detection object determining unit is used for determining that a target detection object exists in the current video frame if newly added infrared information exists;
and the second detection object determining unit is used for determining that no target detection object exists in the current video frame if no newly added infrared information exists.
Optionally, the to-be-played video determining module includes: the video frame storage submodule is used for determining a video frame containing a target detection object as a video frame to be stored and storing the video frame to be stored to the cloud server based on a target storage mode;
and the to-be-played video generation submodule is used for generating a to-be-played video based on at least one to-be-stored video frame in the cloud server.
Optionally, the video playing module includes: the playing time length determining submodule is used for acquiring a video to be played from the cloud server according to the video playing instruction and determining a target playing time length corresponding to the video to be played;
the playing speed determining submodule is used for determining the number to be played of the video frames in the video to be played, and determining the target playing speed based on the ratio of the number to be played to the target playing time length;
and the playing video playing submodule is used for playing the video to be played based on the target playing speed.
Optionally, the video processing apparatus is further configured to generate a prompt message when the target play speed is greater than the upper limit play speed, and play the video to be played based on the upper limit play speed.
The video processing device provided by the embodiment of the invention can execute the video processing method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
EXAMPLE five
Fig. 7 shows a schematic structural diagram of the electronic device 10 of the embodiment of the present invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 7, the electronic device 10 includes at least one processor 11, and a memory communicatively connected to the at least one processor 11, such as a Read Only Memory (ROM) 12, a Random Access Memory (RAM) 13, and the like, wherein the memory stores a computer program executable by the at least one processor, and the processor 11 can perform various suitable actions and processes according to the computer program stored in the Read Only Memory (ROM) 12 or the computer program loaded from a storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data necessary for the operation of the electronic apparatus 10 may also be stored. The processor 11, the ROM 12, and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to the bus 14.
A number of components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, or the like; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, or the like. The processor 11 performs the various methods and processes described above, such as a video processing method.
In some embodiments, the video processing method may be implemented as a computer program that is tangibly embodied on a computer-readable storage medium, such as storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into the RAM 13 and executed by the processor 11, one or more steps of the video processing method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the video processing method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for implementing the video processing method of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be performed. A computer program can execute entirely on a machine, partly on a machine, as a stand-alone software package partly on a machine and partly on a remote machine or entirely on a remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), a middleware component (e.g., an application server), a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include Local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the Internet.
The computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server (also called a cloud computing server or cloud host), a host product in a cloud computing service system that overcomes the drawbacks of difficult management and weak service scalability found in traditional physical hosts and VPS (Virtual Private Server) services.
It should be understood that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present invention may be executed in parallel, sequentially, or in a different order; no limitation is imposed herein as long as the desired results of the technical solution of the present invention can be achieved.
The above-described embodiments should not be construed as limiting the scope of the invention. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (10)
1. A video processing method, comprising:
acquiring a video to be processed, and determining a target storage mode corresponding to the video to be processed; the target storage mode comprises a complete data stream storage mode or a key frame storage mode;
storing at least one to-be-stored video frame in the to-be-processed video based on the target storage mode, and obtaining a to-be-played video based on each to-be-stored video frame;
when a video playing instruction is detected, determining a target playing speed corresponding to the video to be played, and playing the video to be played based on the target playing speed.
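As a non-limiting illustration of the frame-storage step in claim 1, the following sketch selects a storage mode per frame and collects the frames to store. All names (`StorageMode`, `select_frames_to_store`, the `is_key_frame` flag, and the caller-supplied `has_target_object` detector) are hypothetical, not part of the claimed method:

```python
from enum import Enum

class StorageMode(Enum):
    FULL_STREAM = "full_data_stream"  # store every frame of the data stream
    KEY_FRAME = "key_frame"           # store key frames only

def select_frames_to_store(frames, has_target_object):
    """Choose a storage mode per frame and collect the frames to store.

    `frames` are dicts carrying an `is_key_frame` flag; `has_target_object`
    is a caller-supplied detector. Both are illustrative placeholders.
    """
    stored = []
    for frame in frames:
        # Full-stream storage when a target object is present, else key frames only.
        mode = StorageMode.FULL_STREAM if has_target_object(frame) else StorageMode.KEY_FRAME
        if mode is StorageMode.FULL_STREAM or frame["is_key_frame"]:
            stored.append(frame)
    return stored
```

In this sketch, the stored frames would then be assembled into the video to be played.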
2. The method according to claim 1, wherein acquiring the video to be processed and determining the target storage mode corresponding to the video to be processed comprises:
when a target video recording function of the camera device is triggered, shooting a target area in real time to obtain the video to be processed; wherein the target area is determined according to a shooting range of the camera device;
for at least one to-be-identified video frame in the to-be-processed video, determining whether a target detection object exists in the current video frame;
if so, determining the complete data stream storage mode as the target storage mode;
if not, determining that the key frame storage mode is the target storage mode.
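The mode-selection rule in claim 2 can be sketched as a single decision function; the string constants and function name here are illustrative only:

```python
def choose_storage_mode(target_detected: bool) -> str:
    """Full data stream when a target detection object appears; key frames only otherwise."""
    return "full_data_stream" if target_detected else "key_frame"
```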
3. The method according to claim 2, wherein determining whether the target detection object exists in the current video frame for at least one to-be-identified video frame in the to-be-processed video comprises:
performing target detection on the current video frame based on a target detection algorithm to obtain at least one detection object to be determined in the current video frame;
and for each detection object to be determined, if the current video frame has a newly added detection object compared with the previous video frame, determining that the target detection object exists in the current video frame.
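The "newly added detection object" test in claim 3 reduces to a set difference between the objects detected in the current frame and those in the previous frame. A minimal sketch, with hypothetical names:

```python
def has_new_detection(current_objects: set, previous_objects: set) -> bool:
    """True when the current frame contains a detection object absent from the previous frame."""
    return bool(current_objects - previous_objects)
```

Here the sets would hold identifiers produced by the target detection algorithm (e.g., tracked-object IDs or class labels, depending on the detector used).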
4. The method of claim 2, further comprising:
performing infrared detection on a target area based on an infrared sensor installed on the camera equipment, and determining whether newly added infrared information exists in the target area;
if yes, determining that the target detection object exists in the current video frame;
if not, determining that the target detection object does not exist in the current video frame.
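One way to realize the infrared check in claim 4 is to compare the current sensor reading against a baseline and flag a change beyond a threshold. The function, parameter names, and threshold value below are assumptions for illustration, not details given by the claim:

```python
def detect_via_infrared(baseline_reading: float,
                        current_reading: float,
                        threshold: float = 1.5) -> bool:
    """Flag newly added infrared information when the reading rises beyond a threshold."""
    return (current_reading - baseline_reading) > threshold
```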
5. The method according to claim 1, wherein storing at least one to-be-stored video frame in the to-be-processed video based on the target storage mode, and obtaining the to-be-played video based on each to-be-stored video frame, comprises:
determining a video frame containing the target detection object as a video frame to be stored, and storing the video frame to be stored to a cloud server based on the target storage mode;
and generating a video to be played based on at least one video frame to be stored in the cloud server.
6. The method according to claim 5, wherein determining the target playing speed corresponding to the video to be played and playing the video to be played based on the target playing speed comprises:
according to the video playing instruction, acquiring the video to be played from the cloud server, and determining a target playing time length corresponding to the video to be played;
determining the number of video frames to be played in the video to be played, and determining the target playing speed based on the ratio of the target playing time length to the number of the video frames to be played;
and playing the video to be played based on the target playing speed.
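The speed computation in claim 6 is the ratio of the target playing time length to the number of frames to be played; as written, this yields a per-frame display interval (seconds per frame). A sketch with hypothetical names:

```python
def target_playing_speed(target_duration_s: float, num_frames: int) -> float:
    """Ratio of target playing duration to frame count, per the claim.

    As stated, the result is the display interval per frame, in seconds.
    """
    if num_frames <= 0:
        raise ValueError("video to be played contains no frames")
    return target_duration_s / num_frames
```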
7. The method of claim 1, further comprising:
and when the target playing speed is greater than the upper limit playing speed, generating prompt information, and playing the video to be played based on the upper limit playing speed.
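The upper-limit behavior in claim 7 amounts to clamping the speed and emitting a prompt. The following sketch (names and prompt text are illustrative) returns the effective speed together with an optional prompt message:

```python
def clamp_playing_speed(requested: float, upper_limit: float):
    """Return (effective_speed, prompt); clamp to the upper limit and prompt when exceeded."""
    if requested > upper_limit:
        return upper_limit, "Requested speed exceeds the upper limit; playing at the upper limit."
    return requested, None
```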
8. A video processing apparatus, comprising:
the target storage mode determining module is used for acquiring a video to be processed and determining a target storage mode corresponding to the video to be processed; the target storage mode comprises a complete data stream storage mode or a key frame storage mode;
the to-be-played video determining module is used for storing at least one to-be-stored video frame in the to-be-processed video based on the target storage mode and obtaining the to-be-played video based on each to-be-stored video frame;
and the video playing module is used for determining a target playing speed corresponding to the video to be played when a video playing instruction is detected, and playing the video to be played based on the target playing speed.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the video processing method of any of claims 1-7.
10. A computer-readable storage medium storing computer instructions for causing a processor to perform the video processing method of any one of claims 1-7 when executed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211141644.3A CN115514985A (en) | 2022-09-20 | 2022-09-20 | Video processing method and device, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115514985A true CN115514985A (en) | 2022-12-23 |
Family
ID=84504407
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211141644.3A Pending CN115514985A (en) | 2022-09-20 | 2022-09-20 | Video processing method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115514985A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118509646A (en) * | 2024-07-16 | 2024-08-16 | 北京宏远智控技术有限公司 | Video optimization method, device, equipment and storage medium |
Citations (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0606675A2 (en) * | 1992-12-16 | 1994-07-20 | International Business Machines Corporation | Method for loss-less compression of full motion video |
CN101060624A (en) * | 2007-05-08 | 2007-10-24 | 杭州华三通信技术有限公司 | Video data processing method and storage equipment |
US20100092151A1 (en) * | 2007-02-01 | 2010-04-15 | Sony Corporation | Image reproducing apparatus, image reproducing method, image capturing apparatus, and control method therefor |
CN101799928A (en) * | 2010-03-30 | 2010-08-11 | 深圳市融创天下科技发展有限公司 | High-efficient low-complexity motion detection method applicable to image processing |
CN102063924A (en) * | 2009-11-18 | 2011-05-18 | 新奥特(北京)视频技术有限公司 | Method and device for playing animation |
CN104363403A (en) * | 2014-11-14 | 2015-02-18 | 浙江宇视科技有限公司 | Method and device for storing video data in video monitoring system |
CN104394345A (en) * | 2014-12-10 | 2015-03-04 | 马人欢 | Video storage and playback method for security and protection monitoring |
CN104702914A (en) * | 2015-01-14 | 2015-06-10 | 汉柏科技有限公司 | Monitored video data processing method and system |
CN104918064A (en) * | 2015-05-27 | 2015-09-16 | 努比亚技术有限公司 | Rapid video play method and device of mobile terminal |
CN104954718A (en) * | 2015-06-03 | 2015-09-30 | 惠州Tcl移动通信有限公司 | Mobile intelligent terminal and image recording method thereof |
CN105007532A (en) * | 2015-06-30 | 2015-10-28 | 北京东方艾迪普科技发展有限公司 | Animation playing method and device |
CN105100692A (en) * | 2014-05-14 | 2015-11-25 | 杭州海康威视系统技术有限公司 | Video playing method and apparatus thereof |
CN105163073A (en) * | 2015-08-21 | 2015-12-16 | 无锡伊佩克科技有限公司 | Image analysis based expressway monitoring method |
CN105554516A (en) * | 2015-12-31 | 2016-05-04 | 杭州华为数字技术有限公司 | Method and device for playing monitoring video |
CN106534949A (en) * | 2016-11-25 | 2017-03-22 | 济南中维世纪科技有限公司 | Method for prolonging video storage time of video monitoring system |
CN106603952A (en) * | 2017-03-06 | 2017-04-26 | 深圳市博信诺达经贸咨询有限公司 | Monitoring video big data management method and system |
CN106878668A (en) * | 2015-12-10 | 2017-06-20 | 微软技术许可有限责任公司 | Mobile detection to object |
CN106937090A (en) * | 2017-04-01 | 2017-07-07 | 广东浪潮大数据研究有限公司 | The method and device of a kind of video storage |
CN107835424A (en) * | 2017-12-18 | 2018-03-23 | 合肥亚慕信息科技有限公司 | A kind of media sync transmission player method based on data perception |
WO2018086527A1 (en) * | 2016-11-08 | 2018-05-17 | 中兴通讯股份有限公司 | Video processing method and device |
CN108540743A (en) * | 2018-03-23 | 2018-09-14 | 佛山市台风网络科技有限公司 | A kind of image data store method and system based on video monitoring |
CN110324654A (en) * | 2019-08-02 | 2019-10-11 | 广州虎牙科技有限公司 | Main broadcaster end live video frame processing method, device, equipment, system and medium |
CN111836102A (en) * | 2019-04-23 | 2020-10-27 | 杭州海康威视数字技术股份有限公司 | Video frame analysis method and device |
CN111866457A (en) * | 2020-07-14 | 2020-10-30 | 广州市宏视电子技术有限公司 | Monitoring image processing method, electronic device, storage medium and system |
CN113596473A (en) * | 2021-07-28 | 2021-11-02 | 浙江大华技术股份有限公司 | Video compression method and device |
CN114339444A (en) * | 2021-12-10 | 2022-04-12 | 北京达佳互联信息技术有限公司 | Method, device and equipment for adjusting playing time of video frame and storage medium |
CN114915851A (en) * | 2022-05-31 | 2022-08-16 | 展讯通信(天津)有限公司 | Video recording and playing method and device |
2022-09-20: Application CN202211141644.3A filed in China (status: Pending)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9681125B2 (en) | Method and system for video coding with noise filtering | |
CN108062507B (en) | Video processing method and device | |
US11240542B2 (en) | System and method for multiple video playback | |
CN110166796B (en) | Video frame processing method and device, computer readable medium and electronic equipment | |
CN113691733A (en) | Video jitter detection method and device, electronic equipment and storage medium | |
CN104751164A (en) | Method and system for capturing movement trajectory of object | |
CN113163260A (en) | Video frame output control method and device and electronic equipment | |
CN115514985A (en) | Video processing method and device, electronic equipment and storage medium | |
JPWO2019021628A1 (en) | Information processing apparatus, control method, and program | |
CN110956648A (en) | Video image processing method, device, equipment and storage medium | |
CN116245865A (en) | Image quality detection method and device, electronic equipment and storage medium | |
WO2017121020A1 (en) | Moving image generating method and device | |
CN112291480B (en) | Tracking focusing method, tracking focusing device, electronic device and readable storage medium | |
CN117176990A (en) | Video stream processing method and device, electronic equipment and storage medium | |
CN108495038B (en) | Image processing method, image processing device, storage medium and electronic equipment | |
CN113038261A (en) | Video generation method, device, equipment, system and storage medium | |
CN116668843A (en) | Shooting state switching method and device, electronic equipment and storage medium | |
CN111988563B (en) | Multi-scene video monitoring method and device | |
CN114143429A (en) | Image shooting method, image shooting device, electronic equipment and computer readable storage medium | |
CN113033483B (en) | Method, device, electronic equipment and storage medium for detecting target object | |
CN113891038B (en) | Information prompting method, device, intelligent equipment and computer readable storage medium | |
CN118781444A (en) | Labeling image selection method and device, electronic equipment and storage medium | |
WO2021249067A1 (en) | Method and system for capturing a real-time video in a plurality of video modes | |
CN118764709A (en) | Working mode switching method, device, equipment and medium of image collector | |
CN116582707A (en) | Video synchronous display method, device, equipment and medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20221223 |