CN106559631A - Video processing method and device
- Publication number
- CN106559631A CN106559631A CN201510640894.5A CN201510640894A CN106559631A CN 106559631 A CN106559631 A CN 106559631A CN 201510640894 A CN201510640894 A CN 201510640894A CN 106559631 A CN106559631 A CN 106559631A
- Authority
- CN
- China
- Prior art keywords
- video
- image data
- module
- frame
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The embodiments of the present disclosure provide a video processing method and device. The method includes: judging whether the difference between preset parameter values of an acquired current frame and an acquired previous frame meets a preset condition; and if the preset condition is met, recording first image data according to the current frame, so that a control device generates a video according to the first image data. In this embodiment, the video stream captured by a video capture device (such as a smart camera) is not recorded in its entirety; instead, a frame image or a short video segment is recorded only when consecutive frames differ significantly, so the control device (such as a mobile phone) can generate a target video from the recorded frames or segments as a condensed summary of a given period, which is convenient for the user to watch and share. Taking one day as an example, the resulting summary plays back the events that occurred in the monitored scene in the shortest possible time, without the user having to review the footage of the entire day, which greatly improves playback efficiency and saves the user's time.
Description
Technical Field
The present disclosure relates to the field of communications technologies, and in particular, to a video processing method and apparatus.
Background
With the development of technology, people are using video capture devices such as cameras in more and more scenarios. For example, in the security field, a monitoring camera can be used to perform video surveillance of spaces such as stairwells and elevators. In the smart home field, a smart camera may be used to capture what happens in the user's home, such as the activities of infants, pets, or visitors, or to record the user's own daily activities for sharing with friends.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a video processing method and apparatus to improve efficiency in video recording and playback.
According to a first aspect of embodiments of the present disclosure, there is provided a video processing method, the method including:
judging whether the difference value of the preset parameter values of the obtained current frame and the obtained previous frame meets a preset condition or not;
and if the preset condition is met, recording first image data according to the current frame so that the control equipment generates a video according to the first image data.
Optionally, the preset parameter value is each pixel value in the gray binary image; the judging whether the difference value of the preset parameter values of the obtained current frame and the obtained previous frame meets the preset condition includes:
acquiring a difference value of each corresponding pixel value of the gray-scale binary image of the current frame and the gray-scale binary image of the previous frame;
obtaining the average value of the absolute values of all the difference values;
judging whether the average value is larger than a preset threshold value or not;
and if the average value is larger than the preset threshold value, determining that the preset condition is met.
Optionally, the first image data includes a frame of image; the recording of the first image data according to the current frame includes:
recording the current frame as the first image data.
Optionally, after recording the current frame as the first image data, the method further includes:
and sending the recorded current frame to the control equipment.
Optionally, the recording the first image data according to the current frame includes:
and recording a video segment with a preset duration from the current frame as the first image data.
Optionally, after recording the video segment with the preset duration from the current frame as the first image data, the method further includes:
sending the recorded video clip to the control device; or,
and extracting a preset number of frame images from the video clip, and sending the preset number of frame images to the control equipment.
According to a second aspect of the embodiments of the present disclosure, there is provided a video processing method, the method including:
receiving second image data sent by video acquisition equipment, wherein the second image data comprises a frame image, a video clip or a group of frame images with a preset number;
and generating a target video according to the received at least two second image data.
Optionally, when the second image data includes one frame of image, the generating a target video according to at least two received second image data includes:
and generating the target video according to the received at least two frames of images.
Optionally, when the second image data includes a video segment, the generating a target video according to at least two received second image data includes:
generating the target video according to the received at least two video clips; or,
and extracting a group of frame images with preset number from each video clip, and generating the target video according to each group of frame images with preset number.
Optionally, when the second image data includes a set of frame images with a preset number, the generating a target video according to at least two received second image data includes:
and generating the target video according to at least two groups of the preset number of frame images.
Optionally, after generating the target video according to the received at least two pieces of second image data, the method further includes:
and adding the acquired target audio into the target video.
Optionally, after generating the target video according to the received at least two pieces of second image data, the method further includes:
and sending the target video to a designated device or a designated contact.
According to a third aspect of the embodiments of the present disclosure, there is provided a video processing apparatus, the apparatus comprising:
the image change judging module is used for judging whether the difference value of the preset parameter values of the obtained current frame and the obtained previous frame meets a preset condition or not;
and the recording module is used for recording first image data according to the current frame when the image change judging module judges that the preset condition is met, so that the control equipment generates a video according to the first image data.
Optionally,
the preset parameter values judged by the image change judging module are all pixel values in a gray binary image;
the image change judging module includes: an image parameter processing submodule and an image parameter judging submodule;
the image parameter processing submodule is used for acquiring an absolute value average value of a difference value of each corresponding pixel value of the gray-scale binary image of the current frame and the gray-scale binary image of the previous frame;
the image parameter judgment sub-module is used for judging whether the average value processed by the image parameter processing sub-module is larger than a preset threshold value, and if the average value is larger than the preset threshold value, the image parameter judgment sub-module determines that the preset condition is met.
Optionally,
the first image data recorded by the recording module comprises a frame of image;
the recording module comprises: a first recording sub-module;
the first recording sub-module is configured to record the current frame as the first image data.
Optionally, the apparatus further comprises: a first sending module;
the first sending module is configured to send the current frame recorded by the first recording sub-module to the control device.
Optionally,
the first image data recorded by the recording module comprises a video clip;
the recording module comprises: a second recording sub-module;
the second recording sub-module is configured to record a video segment with a preset duration from the current frame as the first image data.
Optionally, the apparatus further comprises: a second sending module;
the second sending module is configured to send the video segment recorded by the second recording sub-module to the control device, or extract a preset number of frame images from the video segment recorded by the second recording sub-module and send the preset number of frame images to the control device.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a video processing apparatus, the apparatus comprising:
the data receiving module is used for receiving second image data sent by the video acquisition equipment, wherein the second image data comprises a frame image, a video clip or a group of frame images with preset number;
and the video generation module is used for generating a target video according to the at least two second image data received by the data receiving module.
Optionally, the video generating module includes: a first video generation submodule;
the first video generation submodule is configured to generate the target video according to at least two received frames of images when the second image data includes one frame of image.
Optionally, the video generating module includes: a second video generation submodule;
the second video generation sub-module is configured to generate the target video according to at least two received video segments when the second image data includes one video segment, or extract a group of preset number of frame images from each of the video segments and generate the target video according to each group of preset number of frame images.
Optionally, the video generating module includes: a third video generation submodule;
the third video generation submodule is configured to generate the target video according to at least two groups of frame images with a preset number when the second image data includes a group of frame images with a preset number.
Optionally, the apparatus further comprises: an audio adding module;
the audio adding module is used for adding the acquired target audio into the target video generated by the video generating module.
Optionally, the apparatus further comprises: a video transmitting module;
and the video sending module is used for sending the target video to a designated device or a designated contact.
According to a fifth aspect of the embodiments of the present disclosure, there is provided a video processing apparatus including:
a first processor;
a first memory for storing first processor-executable instructions;
wherein the first processor is configured to:
judging whether the difference value of the preset parameter values of the obtained current frame and the obtained previous frame meets a preset condition or not;
and if the preset condition is met, recording first image data according to the current frame so that the control equipment generates a video according to the first image data.
According to a sixth aspect of the embodiments of the present disclosure, there is provided a video processing apparatus comprising:
a second processor;
a second memory for storing second processor-executable instructions;
wherein the second processor is configured to:
receiving second image data sent by video acquisition equipment, wherein the second image data comprises a frame image, a video clip or a group of frame images with a preset number;
and generating a target video according to the received at least two second image data.
According to a seventh aspect of embodiments of the present disclosure, there is provided a video processing system, the system comprising: video acquisition equipment and control equipment;
the video acquisition equipment comprises any one video processing device comprising an image change judging module and a recording module;
the control equipment comprises any one video processing device comprising the data receiving module and the video generating module.
According to an eighth aspect of embodiments of the present disclosure, there is provided a video processing system, the system comprising: video acquisition equipment and control equipment;
the video acquisition equipment comprises the video processing device comprising the first processor and the first memory;
the control equipment comprises the video processing device comprising the second processor and the second memory.
The technical solutions provided by the embodiments of the present disclosure may have the following beneficial effects:
in the embodiments of the present disclosure, the video stream captured by the video capture device (for example, a smart camera) is not recorded in its entirety; instead, a frame image or a video segment is recorded only when the current frame differs significantly from the previous one, so the control device (for example, a mobile phone) can generate a target video from the recorded frames or segments as a condensed summary of a given period, which is convenient for the user to watch and share. Taking one day as an example, the resulting summary plays back the events that occurred in the monitored scene in the shortest possible time, without the user having to review the footage of the entire day, so playback efficiency is greatly improved and the user's time is saved. In addition, because the entire video stream does not need to be recorded and only a frame or a short segment is recorded when necessary, the volume of the recorded image data is greatly reduced, the number of read/write operations on the storage medium is reduced, recording efficiency is improved, and the service life of the storage medium is extended.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flow diagram illustrating a video processing method according to an exemplary embodiment;
FIG. 2 is a schematic diagram of an application scenario shown in accordance with an exemplary embodiment;
FIG. 3 is a schematic diagram illustrating an application scenario in accordance with an illustrative embodiment;
FIG. 4 is a flow diagram illustrating a video processing method in accordance with an exemplary embodiment;
FIG. 5 is a pictorial diagram illustrating a video processing procedure in accordance with one illustrative embodiment;
FIG. 6 is a flow diagram illustrating a video processing method in accordance with an exemplary embodiment;
FIG. 7 is a flow diagram illustrating a video processing method in accordance with an exemplary embodiment;
FIG. 8 is a flow diagram illustrating a video processing method in accordance with an exemplary embodiment;
FIG. 9 is a signaling diagram illustrating a video processing method in accordance with an exemplary embodiment;
FIG. 10 is a schematic diagram illustrating an application scenario in accordance with an illustrative embodiment;
FIG. 11 is a block diagram illustrating a video processing device according to an example embodiment;
FIG. 12 is a block diagram illustrating a video processing device according to an example embodiment;
FIG. 13 is a block diagram illustrating a video processing device according to an example embodiment;
FIG. 14 is a block diagram illustrating a video processing device according to an example embodiment;
FIG. 15 is a block diagram illustrating a video processing device according to an example embodiment;
FIG. 16 is a block diagram illustrating a video processing device according to an example embodiment;
FIG. 17 is a block diagram illustrating a video processing device according to an example embodiment;
FIG. 18 is a block diagram illustrating an apparatus for video processing according to an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
The terminal device in this document may be a mobile phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, or the like.
Fig. 1 is a flow diagram illustrating a video processing method according to an example embodiment. The method can be applied to a video capture device such as a smart camera.
In step S101, it is determined whether a difference between the obtained preset parameter values of the current frame and the obtained previous frame satisfies a preset condition.
As an example, the video capture device may capture a video stream of the monitored scene in real time and perform the above determination on each frame of the captured video stream. When the difference between the preset parameter values of the current frame and the previous frame meets the preset condition, or, put simply, when the two consecutive frames differ greatly, it can be considered that a significant change has occurred in the captured scene, i.e., that some event has happened (for example, a person has come in), which triggers step S102.
The preset parameter value and the preset condition are not limited in this embodiment, as long as they can reflect whether there is a large change between the two consecutive frames.
In step S102, if the preset condition is satisfied, recording first image data according to the current frame, so that the control device generates a video according to the first image data.
The embodiment of recording the first image data according to the current frame is not limited, for example, a video segment starting from the current frame may be recorded, or only the current frame may be recorded, and those skilled in the art may design the first image data according to the requirements.
If the preset condition is not met, no recording is performed. In this embodiment, recording also means storing.
As an example, the control device may be a terminal device such as a mobile phone or a tablet computer. The user can install an APP (application) for controlling the video capture device on the mobile phone; after the mobile phone is connected to the video capture device, the user can view the image data captured by the video capture device and send various instructions to it through the APP interface.
Figs. 2 and 3 show examples of application scenarios of the present disclosure. In fig. 2, the smart camera 201 captures video of the shot scene; since the scene contains only static objects, consecutive frames captured by the smart camera 201 are substantially the same, so no first image data is recorded. In fig. 3, a person 301 enters the scene at a certain moment, and the smart camera 201 detects a large change between the two consecutive frames, so a video clip of, for example, 10 seconds can be recorded as the first image data. Thus, by the end of the day, a number of video clips are left in the storage medium of the video capture device.
Referring to fig. 4, in this embodiment or some other embodiments of the present disclosure, the preset parameter values are pixel values in a gray scale binary image; the step S101 may include:
in step S401, a difference value of each corresponding pixel value of the gray-scale binary image of the current frame and the gray-scale binary image of the previous frame is obtained.
In the gray-scale binary image, each pixel's 3-byte RGB value is reduced to a single bit (0 or 1). The value is computed as value = (r + g + b) / 3; if the value is greater than 128, the pixel in the gray-scale binary image is set to 1, otherwise it is set to 0.
In step S402, the average of the absolute values of all the differences is acquired.
In step S403, it is determined whether the average value is greater than a preset threshold.
In step S404, if the average value is greater than the preset threshold, it is determined that the preset condition is satisfied.
For example, the preset threshold may be 0.1. If the calculated average value is greater than 0.1, the picture content of the two consecutive frames has changed significantly, indicating that some event may have occurred, and therefore recording should start.
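As an illustration only, the check described in steps S401 to S404 could be sketched roughly as follows. This is a minimal sketch assuming NumPy and OpenCV are available; the function names and the example threshold are illustrative and not taken from the disclosure.

```python
import cv2
import numpy as np

def to_binary(frame):
    """Gray-scale binary image: average the three color channels of each
    pixel and threshold the result at 128 (1 if greater, else 0)."""
    gray = frame.mean(axis=2)                # value = (r + g + b) / 3
    return (gray > 128).astype(np.float32)

def frames_changed(prev_frame, cur_frame, threshold=0.1):
    """Steps S401-S404: mean of the absolute per-pixel differences between
    the two binary images, compared against a preset threshold."""
    diff = np.abs(to_binary(cur_frame) - to_binary(prev_frame))   # S401-S402
    return float(diff.mean()) > threshold                         # S403-S404

# Usage sketch for steps S101/S102: examine each captured frame and
# trigger recording only when the preset condition is met.
cap = cv2.VideoCapture(0)
ok, prev = cap.read()
while ok:
    ok, cur = cap.read()
    if not ok:
        break
    if frames_changed(prev, cur):
        pass  # step S102: record the current frame or a short clip here
    prev = cur
cap.release()
```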
In this embodiment or some other embodiments of the present disclosure, the first image data may include a frame of image, and the step S102 may include:
recording the current frame as the first image data.
Further, after step S102, the method may further include:
and sending the recorded current frame to the control equipment.
Thus, after receiving the first image data, i.e., the individual frame images, the control device can connect these frames in chronological order to obtain a target video, which can be regarded as a short condensed video.
In this embodiment or some other embodiments of the present disclosure, the first image data may include a video clip, and the step S102 may include:
and recording a video segment with a preset duration from the current frame as the first image data.
For example, the preset time period may be 10 seconds.
Further, after step S102, the method may further include:
sending the recorded video clip to the control device; or,
and extracting a preset number of frame images from the video clip, and sending the preset number of frame images to the control equipment.
For example, if recorded video segments are transmitted to the control device, the control device can connect the video segments in chronological order to obtain a target video.
For another example, instead of sending the recorded video segments, a preset number of frame images may be extracted from each video segment, and then the extracted frame images are sent to the control device, so that the control device may connect the frame images in chronological order to obtain a target video.
The preset number is not limited in this embodiment and may be, for example, one frame or several frames. The positions of the extracted frames are likewise not limited; for example, the first frame, the last frame, or the temporally most central frame may be extracted.
As an example, as shown in fig. 5, A1 is the video stream captured by the video capture device, where each square represents one frame (i.e., one frame image); A2 is the set of video clips recorded (i.e., cut and stored) from the video stream; A3 is the frame extracted from each video clip; and A4 is the condensed video obtained by combining the extracted frames.
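The A1→A2 and A2→A3 steps of fig. 5 could be sketched on the capture side roughly as follows. This is only an illustrative sketch under the assumption that OpenCV is used; the codec, the 10-second default duration, and the choice of the middle frame are examples rather than requirements of the disclosure.

```python
import cv2

def record_clip(cap, first_frame, path, seconds=10):
    """A1 -> A2: store a clip of preset duration, starting from the frame
    that satisfied the preset condition."""
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    h, w = first_frame.shape[:2]
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    writer.write(first_frame)
    for _ in range(int(seconds * fps) - 1):
        ok, frame = cap.read()
        if not ok:
            break
        writer.write(frame)
    writer.release()

def middle_frame(path):
    """A2 -> A3: extract the temporally most central frame of a recorded clip
    (the first or last frame could be chosen instead)."""
    clip = cv2.VideoCapture(path)
    total = int(clip.get(cv2.CAP_PROP_FRAME_COUNT))
    clip.set(cv2.CAP_PROP_POS_FRAMES, max(total // 2, 0))
    ok, frame = clip.read()
    clip.release()
    return frame if ok else None
```

The extracted frames (A3) would then be sent to the control device, which combines them into the condensed video (A4) as described below.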
In this embodiment, the video stream captured by the video capture device (e.g., a smart camera) is not recorded in its entirety; instead, a frame image or a video segment is recorded only when the current frame differs significantly from the previous one, so the control device (e.g., a mobile phone) can generate a target video from the recorded frames or segments as a condensed summary of a given period, which is convenient for the user to watch and share. Taking one day as an example, the resulting summary plays back the events that occurred in the monitored scene in the shortest possible time, without the user having to review the footage of the entire day, so playback efficiency is greatly improved and the user's time is saved. In addition, because the entire video stream does not need to be recorded and only a frame or a short segment is recorded when necessary, the volume of the recorded image data is greatly reduced, the number of read/write operations on the storage medium is reduced, recording efficiency is improved, and the service life of the storage medium is extended.
Fig. 6 is a flow diagram illustrating a video processing method according to an example embodiment. The method can be applied to a control device, which may be a terminal device such as a mobile phone or a tablet computer.
In step S601, second image data sent by the video capture device is received, where the second image data includes a frame image, a video clip, or a set of a preset number of frame images.
In step S602, a target video is generated according to the received at least two pieces of the second image data.
In this embodiment or some other embodiments of the present disclosure, when the second image data includes one frame of image, step S602 may include:
and generating the target video according to the received at least two frames of images.
For example, the received frame images may be connected in chronological order, and then the target video may be generated.
In this embodiment or some other embodiments of the present disclosure, when the second image data includes a video clip, step S602 may include:
generating the target video according to the received at least two video clips; or,
and extracting a group of frame images with preset number from each video clip, and generating the target video according to each group of frame images with preset number.
For example, the received video segments may be connected in chronological order, so that a target video may be generated;
alternatively, a set of frame images with a preset number may be extracted from each received video segment, for example, one frame image is extracted from each video segment, and then the frame images are connected in time sequence, so that the target video may be generated.
In this embodiment or some other embodiments of the present disclosure, when the second image data includes a set of frame images with a preset number, step S602 may include:
and generating the target video according to at least two groups of the preset number of frame images.
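As a minimal sketch of the control-device side of step S602 (assuming OpenCV is available; the file names, frame rate, and output codec are illustrative only), connecting the received frame images — or the frames extracted from the received video clips — in chronological order could look roughly like this:

```python
import cv2

def build_target_video(frame_paths, out_path, fps=2.0):
    """Connect the received frames in chronological order into a target
    video; each received frame is shown for 1/fps seconds."""
    frames = [cv2.imread(p) for p in sorted(frame_paths)]  # sorted() assumes time-stamped file names
    frames = [f for f in frames if f is not None]
    if not frames:
        return
    h, w = frames[0].shape[:2]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for f in frames:
        writer.write(cv2.resize(f, (w, h)))  # guard against resolution changes
    writer.release()

# Hypothetical usage: frames received from the capture device over one day
# build_target_video(["frame_091502.jpg", "frame_134011.jpg"], "summary.mp4")
```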
Referring to fig. 7, in this embodiment or some other embodiments of the present disclosure, after step S602, the method may further include:
in step S701, the acquired target audio is added to the target video.
As an example, the control device may automatically add preset audio to the target video as its soundtrack; alternatively, the control device may add audio designated by the user as the target audio. This makes the video more vivid when the user shares it with others.
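As an illustration of adding the target audio in step S701, one common approach (an assumption here, not something the disclosure prescribes) is to mux an audio file into the generated target video with the ffmpeg command-line tool; the file names below are hypothetical.

```python
import subprocess

def add_soundtrack(video_path, audio_path, out_path):
    """Mux the target audio into the target video; the video stream is
    copied unchanged and the output stops at the shorter input."""
    subprocess.run([
        "ffmpeg", "-y",
        "-i", video_path,    # generated target video
        "-i", audio_path,    # preset or user-designated audio
        "-c:v", "copy",      # keep the video stream as-is
        "-c:a", "aac",       # encode the audio track
        "-shortest",
        out_path,
    ], check=True)
```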
Referring to fig. 8, in this embodiment or some other embodiments of the present disclosure, after step S602, the method may further include:
in step S801, the target video is sent to a designated device or a designated contact.
For example, the user may designate a contact from the contacts of the mobile phone or of the APP and send the target video to that contact. Alternatively, the user may send the target video directly to a designated device, for example another mobile phone connected to the user's phone via Bluetooth, or the server where the user's account resides, so as to share it.
In this embodiment, the video stream captured by the video capture device (e.g., a smart camera) is not recorded in its entirety; instead, a frame image or a video segment is recorded only when the current frame differs significantly from the previous one, so the control device (e.g., a mobile phone) can generate a target video from the recorded frames or segments as a condensed summary of a given period, which is convenient for the user to watch and share. Taking one day as an example, the resulting summary plays back the events that occurred in the monitored scene in the shortest possible time, without the user having to review the footage of the entire day, so playback efficiency is greatly improved and the user's time is saved. In addition, because the entire video stream does not need to be recorded and only a frame or a short segment is recorded when necessary, the volume of the recorded image data is greatly reduced, the number of read/write operations on the storage medium is reduced, recording efficiency is improved, and the service life of the storage medium is extended.
The solutions of the present disclosure are further described below in conjunction with a specific scenario.
Fig. 9 is a signaling diagram illustrating a video processing method according to an example embodiment. As shown in fig. 10, 1001 is a smart camera (i.e., a video capture device), 1002 is a wireless router, 1003 is a server in the cloud, and 1004 is the user's mobile phone (i.e., a control device). The smart camera 1001 connects wirelessly (e.g., over WiFi) to the wireless router 1002 and through it to the server 1003; the mobile phone 1004 is also connected to the server 1003. In this way the smart camera 1001 and the mobile phone 1004 are connected indirectly and can communicate with each other. Of course, in some other embodiments of the present disclosure, the mobile phone 1004 may be connected to the smart camera 1001 only through the wireless router 1002, or directly connected to the smart camera 1001; this is not limited in the present disclosure.
In step S901, the control apparatus transmits an instruction to perform video capturing to the video capturing apparatus.
For example, the intelligent camera is installed at a door of a user, and before the user leaves home and goes to work in the morning, a video acquisition instruction can be sent to the intelligent camera through a mobile phone, so that the intelligent camera can monitor the condition at the door of the user all day.
In step S902, the video capture device captures a video stream, but does not record.
For example, during the daytime no one passes by or enters through the user's door, so consecutive frames of the captured video stream change very little and no recording is needed.
In step S903, the video capture device detects a large change between consecutive frames and records a video segment.
For example, at a certain time, a courier comes to the door of the user's home, and at this time, a large change occurs in the video stream in the smart camera, so that the smart camera records a video clip of 10 s.
In step S904, the video capture device again detects a large change between consecutive frames and records another video segment.
For example, at another time a cleaner comes to clean the corridor; the video stream in the smart camera again changes greatly, and the smart camera records another 10 s video clip.
In step S905, the control apparatus transmits an image data extraction instruction to the video capture apparatus.
For example, in the evening the user returns home and wants to see what happened at the door during the day, so an image data extraction instruction can be sent to the smart camera through the mobile phone.
In step S906, the video capture device extracts one frame from each recorded video segment and sends the extracted frame to the control device.
For example, the most temporally central frame in each video segment may be extracted, or the head frame of each video segment may be extracted, etc.
In step S907, the control apparatus generates a piece of video clip as a target video from the received frame.
For example, combining the received frames in chronological order produces a target video that can be regarded as a condensed video of the whole day.
In step S908, the control apparatus plays the target video according to the operation instruction of the user.
Thus, the user can browse various events happening at the door of the house in the shortest time.
In this embodiment, the video stream captured by the video capture device (e.g., a smart camera) is not recorded in its entirety; instead, a frame image or a video segment is recorded only when the current frame differs significantly from the previous one, so the control device (e.g., a mobile phone) can generate a target video from the recorded frames or segments as a condensed summary of a given period, which is convenient for the user to watch and share. Taking one day as an example, the resulting summary plays back the events that occurred in the monitored scene in the shortest possible time, without the user having to review the footage of the entire day, so playback efficiency is greatly improved and the user's time is saved. In addition, because the entire video stream does not need to be recorded and only a frame or a short segment is recorded when necessary, the volume of the recorded image data is greatly reduced, the number of read/write operations on the storage medium is reduced, recording efficiency is improved, and the service life of the storage medium is extended.
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods. For details not disclosed in the embodiments of the apparatus of the present disclosure, refer to the embodiments of the method of the present disclosure.
Fig. 11 is a block diagram illustrating a video processing apparatus according to an example embodiment. The apparatus can be applied to a video capture device such as a smart camera.
Referring to fig. 11, the apparatus may include:
an image change determining module 1101, configured to determine whether a difference between preset parameter values of an acquired current frame and an acquired previous frame meets a preset condition;
a recording module 1102, configured to record first image data according to the current frame when the image change determining module 1101 determines that the preset condition is met, so that the control device generates a video according to the first image data.
Referring to fig. 12, in this embodiment or some other embodiments of the present disclosure, the preset parameter value determined by the image change determining module 1101 is each pixel value in a gray scale binary image; the image change determination module 1101 includes: an image parameter processing sub-module 1201 and an image parameter judgment sub-module 1202;
the image parameter processing submodule 1201 is configured to obtain an absolute value average value of a difference value between each corresponding pixel value of the gray-scale binary image of the current frame and the corresponding pixel value of the gray-scale binary image of the previous frame;
the image parameter determining sub-module 1202 is configured to determine whether the average value processed by the image parameter processing sub-module 1201 is greater than a preset threshold, and determine that the preset condition is met if the average value is greater than the preset threshold.
In this embodiment or some other embodiments of the present disclosure, the first image data recorded by the recording module 1102 includes a frame of image, and the recording module 1102 includes: a first recording sub-module;
the first recording sub-module is configured to record the current frame as the first image data.
Referring to fig. 13, in this embodiment or some other embodiments of the present disclosure, the apparatus may further include: a first transmission module 1301;
the first sending module 1301 is configured to send the current frame recorded by the first recording submodule to the control device.
In this embodiment or some other embodiments of the present disclosure, the first image data recorded by the recording module 1102 includes a video clip, and the recording module 1102 includes a second recording sub-module:
the second recording sub-module is configured to record a video segment with a preset duration from the current frame as the first image data.
Referring to fig. 14, in this embodiment or some other embodiments of the present disclosure, the apparatus may further include a second sending module 1401:
the second sending module 1401 is configured to send the video segment recorded by the second recording sub-module to the control device, or extract a preset number of frame images from the video segment recorded by the second recording sub-module and send the preset number of frame images to the control device.
In this embodiment, the video stream captured by the video capture device (e.g., a smart camera) is not recorded in its entirety; instead, a frame image or a video segment is recorded only when the current frame differs significantly from the previous one, so the control device (e.g., a mobile phone) can generate a target video from the recorded frames or segments as a condensed summary of a given period, which is convenient for the user to watch and share. Taking one day as an example, the resulting summary plays back the events that occurred in the monitored scene in the shortest possible time, without the user having to review the footage of the entire day, so playback efficiency is greatly improved and the user's time is saved. In addition, because the entire video stream does not need to be recorded and only a frame or a short segment is recorded when necessary, the volume of the recorded image data is greatly reduced, the number of read/write operations on the storage medium is reduced, recording efficiency is improved, and the service life of the storage medium is extended.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 15 is a block diagram illustrating a video processing apparatus according to an example embodiment. The apparatus can be applied to a control device such as a mobile phone or a tablet computer.
Referring to fig. 15, the apparatus may include:
the data receiving module 1501 is configured to receive second image data sent by a video capture device, where the second image data includes a frame image, a video clip, or a group of frame images with a preset number;
a video generating module 1502 is configured to generate a target video according to at least two second image data received by the data receiving module 1501.
In this embodiment or some other embodiments of the present disclosure, the video generating module 1502 includes: a first video generation submodule;
the first video generation submodule is configured to generate the target video according to at least two received frames of images when the second image data includes one frame of image.
In this embodiment or some other embodiments of the present disclosure, the video generating module 1502 includes: a second video generation submodule;
the second video generation sub-module is configured to generate the target video according to at least two received video segments when the second image data includes one video segment, or extract a group of preset number of frame images from each of the video segments and generate the target video according to each group of preset number of frame images.
In this embodiment or some other embodiments of the present disclosure, the video generating module 1502 includes: a third video generation submodule;
The third video generation submodule is configured to generate the target video according to at least two groups of frame images with a preset number when the second image data includes a group of frame images with a preset number.
Referring to fig. 16, in this embodiment or some other embodiments of the present disclosure, the apparatus may further include: an audio adding module 1601;
the audio adding module 1601 is configured to add the acquired target audio to the target video generated by the video generating module.
Referring to fig. 17, in this embodiment or some other embodiments of the present disclosure, the apparatus may further include: a video transmission module 1701;
the video sending module 1701 is configured to send the target video to a designated device or a designated contact.
In this embodiment, the video stream captured by the video capture device (e.g., a smart camera) is not recorded in its entirety; instead, a frame image or a video segment is recorded only when the current frame differs significantly from the previous one, so the control device (e.g., a mobile phone) can generate a target video from the recorded frames or segments as a condensed summary of a given period, which is convenient for the user to watch and share. Taking one day as an example, the resulting summary plays back the events that occurred in the monitored scene in the shortest possible time, without the user having to review the footage of the entire day, so playback efficiency is greatly improved and the user's time is saved. In addition, because the entire video stream does not need to be recorded and only a frame or a short segment is recorded when necessary, the volume of the recorded image data is greatly reduced, the number of read/write operations on the storage medium is reduced, recording efficiency is improved, and the service life of the storage medium is extended.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
The present disclosure also discloses a video processing apparatus, comprising:
a first processor;
a first memory for storing first processor-executable instructions;
wherein the first processor is configured to:
judging whether the difference value of the preset parameter values of the obtained current frame and the obtained previous frame meets a preset condition or not;
and if the preset condition is met, recording first image data according to the current frame so that the control equipment generates a video according to the first image data.
The present disclosure also discloses a non-transitory computer readable storage medium having instructions that, when executed by a processor of a video capture device, enable the video capture device to perform a video processing method, the method comprising:
judging whether the difference value of the preset parameter values of the obtained current frame and the obtained previous frame meets a preset condition or not;
and if the preset condition is met, recording first image data according to the current frame so that the control equipment generates a video according to the first image data.
The present disclosure also discloses a video processing apparatus, including:
a second processor;
a second memory for storing second processor-executable instructions;
wherein the second processor is configured to:
receiving second image data sent by video acquisition equipment, wherein the second image data comprises a frame image, a video clip or a group of frame images with a preset number;
and generating a target video according to the received at least two second image data.
The present disclosure also discloses a non-transitory computer readable storage medium having instructions that, when executed by a processor of a control device, enable the control device to perform a video processing method, the method comprising:
receiving second image data sent by video acquisition equipment, wherein the second image data comprises a frame image, a video clip or a group of frame images with a preset number;
and generating a target video according to the received at least two second image data.
The present disclosure also discloses a video processing system, the system comprising: video acquisition equipment and control equipment;
the video acquisition equipment comprises any one video processing device comprising an image change judging module and a recording module;
the control equipment comprises any one video processing device comprising the data receiving module and the video generating module.
The present disclosure also discloses a video processing system, the system comprising: video acquisition equipment and control equipment;
the video acquisition equipment comprises the video processing device comprising the first processor and the first memory;
the control equipment comprises the video processing device comprising the second processor and the second memory.
Fig. 18 is a block diagram illustrating an apparatus for video processing according to an example embodiment. The apparatus is illustrated as apparatus 1800. The apparatus 1800 may be, for example, a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 18, the apparatus 1800 may include one or more of the following components: processing component 1802, memory 1804, power component 1806, multimedia component 1808, audio component 1810, input/output (I/O) interface 1812, sensor component 1814, and communications component 1816.
The processing component 1802 generally controls the overall operation of the device 1800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 1802 may include one or more processors 1820 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 1802 may include one or more modules that facilitate interaction between the processing component 1802 and other components. For example, the processing component 1802 can include a multimedia module to facilitate interaction between the multimedia component 1808 and the processing component 1802.
The memory 1804 is configured to store various types of data to support operation at the device 1800. Examples of such data include instructions for any application or method operating on the device 1800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1804 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 1806 provides power to the various components of the device 1800. The power components 1806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 1800.
The multimedia component 1808 includes a screen providing an output interface between the apparatus 1800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1808 includes a front facing camera and/or a rear facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 1800 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
Audio component 1810 is configured to output and/or input audio signals. For example, the audio component 1810 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 1800 is in operating modes, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 1804 or transmitted via the communication component 1816. In some embodiments, audio component 1810 also includes a speaker for outputting audio signals.
I/O interface 1812 provides an interface between processing component 1802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 1814 includes one or more sensors for providing various aspects of state assessment for the apparatus 1800. For example, the sensor assembly 1814 can detect the open/closed state of the device 1800 and the relative positioning of components, such as the display and keypad of the apparatus 1800; the sensor assembly 1814 can also detect a change in position of the apparatus 1800 or of a component of the apparatus 1800, the presence or absence of user contact with the apparatus 1800, the orientation or acceleration/deceleration of the apparatus 1800, and a change in temperature of the apparatus 1800. The sensor assembly 1814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 1814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1816 is configured to facilitate communications between the apparatus 1800 and other devices in a wired or wireless manner. The device 1800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1816 receives a broadcast signal or broadcast associated information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 1816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 1800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods at the terminal side.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.
Claims (28)
1. A method of video processing, the method comprising:
judging whether the difference value of the preset parameter values of the obtained current frame and the obtained previous frame meets a preset condition or not;
and if the preset condition is met, recording first image data according to the current frame so that the control equipment generates a video according to the first image data.
2. The method of claim 1, wherein the preset parameter values are pixel values in a grayscale binary image, and the judging whether the difference between the preset parameter values of the obtained current frame and the obtained previous frame meets the preset condition comprises:
acquiring a difference between each pair of corresponding pixel values of the grayscale binary image of the current frame and the grayscale binary image of the previous frame;
obtaining an average of the absolute values of all the differences;
judging whether the average is greater than a preset threshold; and
if the average is greater than the preset threshold, determining that the preset condition is met.
3. The method of claim 1, wherein the first image data comprises one frame of image, and the recording first image data according to the current frame comprises:
recording the current frame as the first image data.
4. The method of claim 3, further comprising, after recording the current frame as the first image data:
sending the recorded current frame to the control device.
5. The method of claim 1, wherein the first image data comprises a video segment, and the recording first image data according to the current frame comprises:
recording a video segment of a preset duration, starting from the current frame, as the first image data.
6. The method of claim 5, further comprising, after recording the video segment of the preset duration starting from the current frame as the first image data:
sending the recorded video segment to the control device; or
extracting a preset number of frame images from the video segment and sending the preset number of frame images to the control device.
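Purely as an illustrative sketch of the recording recited in claims 5 and 6, and not as a required implementation, the example below assumes an already opened cv2.VideoCapture; the duration, the preset number, and the helper names are assumptions.

```python
import cv2
import numpy as np

def record_segment(capture, preset_duration_s=5.0):
    """Buffer a video segment of a preset duration, starting from the current frame."""
    fps = capture.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if the camera reports 0
    frames = []
    for _ in range(int(preset_duration_s * fps)):
        ok, frame = capture.read()
        if not ok:
            break
        frames.append(frame)
    return frames

def extract_preset_number(frames, preset_number=10):
    """Pick a preset number of roughly evenly spaced frame images from the segment."""
    if len(frames) <= preset_number:
        return list(frames)
    indices = np.linspace(0, len(frames) - 1, num=preset_number, dtype=int)
    return [frames[i] for i in indices]
```

Sending only the extracted frame images instead of the full segment trades temporal detail for a smaller upload to the control device.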
7. A method of video processing, the method comprising:
receiving second image data sent by a video capture device, wherein the second image data comprises one frame of image, a video segment, or a group of a preset number of frame images; and
generating a target video according to at least two pieces of received second image data.
8. The method of claim 7, wherein when the second image data comprises one frame of image, the generating a target video according to at least two pieces of received second image data comprises:
generating the target video according to at least two received frame images.
9. The method of claim 7, wherein when the second image data comprises a video segment, the generating a target video according to at least two pieces of received second image data comprises:
generating the target video according to at least two received video segments; or
extracting a group of a preset number of frame images from each video segment, and generating the target video according to each group of the preset number of frame images.
10. The method of claim 7, wherein when the second image data comprises a group of a preset number of frame images, the generating a target video according to at least two pieces of received second image data comprises:
generating the target video according to at least two groups of the preset number of frame images.
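As a non-limiting sketch of the generation step recited in claims 7 to 10, assuming OpenCV and assuming that every received piece of second image data has already been reduced to a list of equally sized frames (a single frame image being a one-element list); the output file name, codec, and frame rate are illustrative.

```python
import cv2

def generate_target_video(received_items, out_path="target_video.mp4", fps=25.0):
    """Concatenate received frame images, frame groups, or decoded video segments
    (each given here as a list of frames) into a single target video."""
    frames = [frame for item in received_items for frame in item]
    if not frames:
        return None
    height, width = frames[0].shape[:2]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, (width, height))
    for frame in frames:
        writer.write(frame)
    writer.release()
    return out_path
```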
11. The method of claim 7, further comprising, after generating the target video according to at least two pieces of received second image data:
adding acquired target audio to the target video.
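The audio step of claim 11 is commonly realized with an external muxer; the following sketch simply shells out to the ffmpeg command-line tool, which is assumed to be installed, and the file names are placeholders.

```python
import subprocess

def add_target_audio(video_path, audio_path, out_path="target_with_audio.mp4"):
    """Mux the acquired target audio into the generated target video."""
    subprocess.run(
        ["ffmpeg", "-y",
         "-i", video_path,   # generated target video
         "-i", audio_path,   # acquired target audio
         "-c:v", "copy",     # keep the video stream unchanged
         "-c:a", "aac",      # encode the audio stream
         "-shortest",        # stop at the end of the shorter input
         out_path],
        check=True)
    return out_path
```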
12. The method of claim 7, further comprising, after generating the target video according to at least two pieces of received second image data:
sending the target video to a designated device or a designated contact.
13. A video processing apparatus, characterized in that the apparatus comprises:
an image change judging module, configured to judge whether a difference between preset parameter values of an obtained current frame and an obtained previous frame meets a preset condition; and
a recording module, configured to record first image data according to the current frame when the image change judging module judges that the preset condition is met, so that a control device generates a video according to the first image data.
14. The apparatus of claim 13,
wherein the preset parameter values judged by the image change judging module are pixel values in a grayscale binary image;
the image change judging module comprises an image parameter processing submodule and an image parameter judging submodule;
the image parameter processing submodule is configured to acquire an average of the absolute values of the differences between corresponding pixel values of the grayscale binary image of the current frame and the grayscale binary image of the previous frame; and
the image parameter judging submodule is configured to judge whether the average obtained by the image parameter processing submodule is greater than a preset threshold and, if the average is greater than the preset threshold, to determine that the preset condition is met.
15. The apparatus of claim 13,
wherein the first image data recorded by the recording module comprises one frame of image;
the recording module comprises a first recording submodule; and
the first recording submodule is configured to record the current frame as the first image data.
16. The apparatus of claim 15, further comprising: a first sending module;
wherein the first sending module is configured to send the current frame recorded by the first recording submodule to the control device.
17. The apparatus of claim 13,
wherein the first image data recorded by the recording module comprises a video segment;
the recording module comprises a second recording submodule; and
the second recording submodule is configured to record a video segment of a preset duration, starting from the current frame, as the first image data.
18. The apparatus of claim 17, further comprising: a second sending module;
wherein the second sending module is configured to send the video segment recorded by the second recording submodule to the control device, or to extract a preset number of frame images from the video segment recorded by the second recording submodule and send the preset number of frame images to the control device.
19. A video processing apparatus, characterized in that the apparatus comprises:
a data receiving module, configured to receive second image data sent by a video capture device, wherein the second image data comprises one frame of image, a video segment, or a group of a preset number of frame images; and
a video generation module, configured to generate a target video according to at least two pieces of second image data received by the data receiving module.
20. The apparatus of claim 19, wherein the video generation module comprises: a first video generation submodule;
wherein the first video generation submodule is configured to generate the target video according to at least two received frame images when the second image data comprises one frame of image.
21. The apparatus of claim 19, wherein the video generation module comprises: a second video generation submodule;
wherein the second video generation submodule is configured to, when the second image data comprises a video segment, generate the target video according to at least two received video segments, or extract a group of a preset number of frame images from each video segment and generate the target video according to each group of the preset number of frame images.
22. The apparatus of claim 19, wherein the video generation module comprises: a third video generation submodule;
wherein the third video generation submodule is configured to generate the target video according to at least two groups of the preset number of frame images when the second image data comprises a group of a preset number of frame images.
23. The apparatus of claim 19, further comprising: an audio adding module;
wherein the audio adding module is configured to add the acquired target audio to the target video generated by the video generation module.
24. The apparatus of claim 19, further comprising: a video sending module;
wherein the video sending module is configured to send the target video to a designated device or a designated contact.
25. A video processing apparatus, comprising:
a first processor;
a first memory for storing first processor-executable instructions;
wherein the first processor is configured to:
judging whether a difference between preset parameter values of an obtained current frame and an obtained previous frame meets a preset condition; and
if the preset condition is met, recording first image data according to the current frame, so that a control device generates a video according to the first image data.
26. A video processing apparatus, comprising:
a second processor;
a second memory for storing second processor-executable instructions;
wherein the second processor is configured to:
receiving second image data sent by a video capture device, wherein the second image data comprises one frame of image, a video segment, or a group of a preset number of frame images; and
generating a target video according to at least two pieces of received second image data.
27. A video processing system, comprising: a video capture device and a control device;
the video capture device comprising the video processing apparatus of any one of claims 13 to 18; and
the control device comprising the video processing apparatus of any one of claims 19 to 24.
28. A video processing system, comprising: a video capture device and a control device;
the video capture device comprising the video processing apparatus of claim 25; and
the control device comprising the video processing apparatus of claim 26.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510640894.5A CN106559631A (en) | 2015-09-30 | 2015-09-30 | Method for processing video frequency and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106559631A true CN106559631A (en) | 2017-04-05 |
Family
ID=58417876
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510640894.5A Pending CN106559631A (en) | 2015-09-30 | 2015-09-30 | Method for processing video frequency and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106559631A (en) |
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101461239A (en) * | 2006-06-13 | 2009-06-17 | Adt安全服务公司 | Video verification system and method for central station alarm monitoring |
WO2008041981A1 (en) * | 2006-10-03 | 2008-04-10 | Daniel Joshua Goldstein | Art adaptor for video monitors |
CN103891270A (en) * | 2011-09-01 | 2014-06-25 | 汤姆逊许可公司 | Method for capturing video related content |
CN102543136A (en) * | 2012-02-17 | 2012-07-04 | 广州盈可视电子科技有限公司 | Method and device for clipping video |
CN103327306A (en) * | 2013-06-14 | 2013-09-25 | 广东威创视讯科技股份有限公司 | Method and device for storing video surveillance image |
CN103391442A (en) * | 2013-07-24 | 2013-11-13 | 佳都新太科技股份有限公司 | Rapid video image transmission compression algorithm based on regional division and difference comparison |
CN104038717A (en) * | 2014-06-26 | 2014-09-10 | 北京小鱼儿科技有限公司 | Intelligent recording system |
CN104284158A (en) * | 2014-10-23 | 2015-01-14 | 南京信必达智能技术有限公司 | Event-oriented intelligent camera monitoring method |
CN104601918A (en) * | 2014-12-29 | 2015-05-06 | 小米科技有限责任公司 | Video recording method and device |
CN104811797A (en) * | 2015-04-15 | 2015-07-29 | 广东欧珀移动通信有限公司 | Video processing method and mobile terminal |
CN104902202A (en) * | 2015-05-15 | 2015-09-09 | 百度在线网络技术(北京)有限公司 | Method and device for video storage |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108337551A (en) * | 2018-01-22 | 2018-07-27 | 深圳壹账通智能科技有限公司 | A kind of screen recording method, storage medium and terminal device |
CN108337551B (en) * | 2018-01-22 | 2020-03-31 | 深圳壹账通智能科技有限公司 | Screen recording method, storage medium and terminal equipment |
CN110166780A (en) * | 2018-06-06 | 2019-08-23 | 腾讯科技(深圳)有限公司 | Bit rate control method, trans-coding treatment method, device and the machinery equipment of video |
CN110166780B (en) * | 2018-06-06 | 2023-06-30 | 腾讯科技(深圳)有限公司 | Video code rate control method, transcoding processing method, device and machine equipment |
CN108965806B (en) * | 2018-07-12 | 2021-01-08 | 江门市金佣网有限公司 | Data transmission method and device based on remote exhibition and marketing system |
CN108965806A (en) * | 2018-07-12 | 2018-12-07 | 江门市金佣网有限公司 | A kind of data transmission method and device based on long-range sales exhibition system |
CN109248378A (en) * | 2018-09-09 | 2019-01-22 | 深圳硅基仿生科技有限公司 | Video process apparatus, method and the retina stimulator of retina stimulator |
CN109348288A (en) * | 2018-11-09 | 2019-02-15 | 五八同城信息技术有限公司 | A kind of processing method of video, device, storage medium and terminal |
CN109660832A (en) * | 2018-11-10 | 2019-04-19 | 江苏网进科技股份有限公司 | A kind of desktop virtual system and method |
CN110008804A (en) * | 2018-12-12 | 2019-07-12 | 浙江新再灵科技股份有限公司 | Elevator monitoring key frame based on deep learning obtains and detection method |
CN110008804B (en) * | 2018-12-12 | 2021-07-06 | 浙江新再灵科技股份有限公司 | Elevator monitoring key frame obtaining and detecting method based on deep learning |
CN111866366A (en) * | 2019-04-30 | 2020-10-30 | 百度时代网络技术(北京)有限公司 | Method and apparatus for transmitting information |
CN112689158A (en) * | 2019-10-18 | 2021-04-20 | 北京沃东天骏信息技术有限公司 | Method, apparatus, device and computer readable medium for processing video |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106559631A (en) | Method for processing video frequency and device | |
CN112153400B (en) | Live broadcast interaction method and device, electronic equipment and storage medium | |
US9674395B2 (en) | Methods and apparatuses for generating photograph | |
CN105100829B (en) | Video content intercept method and device | |
KR101680714B1 (en) | Method for providing real-time video and device thereof as well as server, terminal device, program, and recording medium | |
CN104065878B (en) | Filming control method, device and terminal | |
CN106911961B (en) | Multimedia data playing method and device | |
US20170304735A1 (en) | Method and Apparatus for Performing Live Broadcast on Game | |
CN106559712B (en) | Video playing processing method and device and terminal equipment | |
CN105611413A (en) | Method and device for adding video clip class markers | |
CN109922252B (en) | Short video generation method and device and electronic equipment | |
CN105120301B (en) | Method for processing video frequency and device, smart machine | |
CN106527682B (en) | Method and device for switching environment pictures | |
US20170054906A1 (en) | Method and device for generating a panorama | |
KR20160043523A (en) | Method, and device for video browsing | |
CN105847627B (en) | A kind of method and apparatus of display image data | |
CN106131615A (en) | Video broadcasting method and device | |
CN105678266A (en) | Method and device for combining photo albums of human faces | |
CN114025105A (en) | Video processing method and device, electronic equipment and storage medium | |
CN113259226A (en) | Information synchronization method and device, electronic equipment and storage medium | |
CN105959563B (en) | Image storage method and image storage device | |
CN107122697B (en) | Automatic photo obtaining method and device and electronic equipment | |
CN107105311B (en) | Live broadcasting method and device | |
CN106896917B (en) | Method and device for assisting user in experiencing virtual reality and electronic equipment | |
CN111355879B (en) | Image acquisition method and device containing special effect pattern and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20170405 |