CN111212321A - Video processing method, device, equipment and computer storage medium - Google Patents


Info

Publication number
CN111212321A
CN111212321A (application CN202010028264.3A)
Authority
CN
China
Prior art keywords
video data
video
timestamp information
sub
acquisition operation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010028264.3A
Other languages
Chinese (zh)
Inventor
陈鑫
浦汉来
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Moxiang Network Technology Co ltd
Original Assignee
Shanghai Moxiang Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Moxiang Network Technology Co ltd filed Critical Shanghai Moxiang Network Technology Co ltd
Priority to CN202010028264.3A priority Critical patent/CN111212321A/en
Publication of CN111212321A publication Critical patent/CN111212321A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/835Generation of protective data, e.g. certificates
    • H04N21/8352Generation of protective data, e.g. certificates involving content or source identification data, e.g. Unique Material Identifier [UMID]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8547Content authoring involving timestamps for synchronizing content

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention provides a video processing method, a video processing apparatus, a video processing device, and a computer storage medium. The method comprises: executing a video acquisition operation according to a shooting instruction; during the video acquisition operation, identifying in real time at least one piece of sub-video data that meets a preset condition in the video data acquired by the operation, and synchronously and automatically marking the start timestamp information and end timestamp information of the sub-video data; and after the video acquisition operation ends, extracting, according to the stored automatically marked start timestamp information and end timestamp information, the at least one piece of sub-video data corresponding to that timestamp information from the total video data obtained by the video acquisition operation. The invention allows the marked sub-video data to be selected on its own, which improves the flexibility of video processing operations, reduces the amount of video storage, and improves the user experience.

Description

Video processing method, device, equipment and computer storage medium
Technical Field
Embodiments of the present invention relate to data output processing technologies, and in particular, to a video processing method, apparatus, device, and computer storage medium.
Background
With the development of portable photographing devices, more and more users prefer to use them for video shooting so as to record memorable moments in daily life at any time.
However, when a current camera outputs recorded video data, the user can only export an entire video (i.e., a complete video file). This makes video processing time-consuming, and because video files are large, a video file recorded by the camera can generally only be exported through a conventional computer (e.g., a notebook or desktop computer) acting as a transmission intermediary.
Moreover, a complete video recording usually contains video segments that are not worth keeping, and these segments occupy a large amount of storage space and transmission time.
Therefore, how to provide a more flexible video processing technique is the technical problem to be solved by the present invention.
Disclosure of Invention
Embodiments of the present invention provide a video processing method, apparatus, device, and computer storage medium to solve or partially solve the above problems.
According to a first aspect of the embodiments of the present invention, there is provided a video processing method, including: executing a video acquisition operation according to a shooting instruction; during the video acquisition operation, identifying in real time at least one piece of sub-video data that meets a preset condition in the video data acquired by the operation, and synchronously and automatically marking start timestamp information and end timestamp information of the sub-video data; acquiring and storing the automatically marked start timestamp information and end timestamp information; and after the video acquisition operation ends, extracting, according to the stored start timestamp information and end timestamp information, at least one piece of sub-video data corresponding to the start timestamp information and end timestamp information from the total video data obtained by the video acquisition operation.
The video processing method according to claim 1, wherein the preset condition at least comprises:
video information of the same object being captured by the video acquisition operation within a continuous time period that meets a preset duration.
The video processing method according to claim 2, wherein the sub-video data comprises: segment video data of the total video data and/or the entire video data of the total video data.
The video processing method according to claim 1, further comprising, after extracting at least one piece of sub-video data corresponding to the start timestamp information and the end timestamp information from the total video data obtained by the video acquisition operation:
outputting, to an application program according to a user instruction or a trigger condition, the at least one piece of sub-video data corresponding to the start timestamp information and the end timestamp information extracted from the total video data obtained by the video acquisition operation.
The video processing method according to claim 4, wherein the extracting, according to a user instruction or a trigger condition, of at least one piece of sub-video data corresponding to the start timestamp information and the end timestamp information from the total video data obtained by the video acquisition operation and the outputting thereof to an application program is specifically:
extracting, according to a user instruction or a trigger condition, at least one piece of sub-video data corresponding to the start timestamp information and the end timestamp information from the total video data obtained by the video acquisition operation, and outputting the sub-video data to the application program in parallel.
The video processing method according to claim 4, wherein the user instruction comprises a hardware triggered output instruction and/or a software triggered output instruction.
The video processing method according to claim 6, wherein the hardware-triggered output instruction is a video output instruction triggered by a key; and/or the software-triggered output instruction is a software-triggered output instruction that meets a preset condition.
The video processing method of claim 4, wherein the method further comprises:
and synthesizing at least one piece of sub-video data corresponding to the start time stamp information and the end time stamp information extracted from the total video data obtained by the video capture operation to generate a video file, and outputting the video file to the application program.
The video processing method of claim 1, wherein the method further comprises:
displaying the stored automatically marked start timestamp information and end timestamp information in a play progress bar of the total video data to provide fast browsing and/or fast positioning for the at least one sub-video data in the total video data.
The video processing method of claim 1, wherein the method further comprises:
operating the automatic mark according to a mark modification instruction, wherein the operation comprises: at least one of addition, deletion, and modification.
The video processing method of claim 1, wherein said extracting at least one sub-video data corresponding to the start timestamp information and the end timestamp information from the total video data obtained from the video capture operation further comprises:
screening out, according to a preset time condition, the start timestamp information and end timestamp information that meet the preset time condition from the stored automatically marked start timestamp information and end timestamp information; and
extracting at least one piece of sub-video data corresponding to the screened-out start timestamp information and end timestamp information from the total video data obtained by the video acquisition operation.
According to a second aspect of embodiments of the present invention, there is provided a video processing apparatus, the apparatus comprising: a mark setting module, configured to execute a video acquisition operation according to a shooting instruction, identify in real time, during the video acquisition operation, at least one piece of sub-video data that meets a preset condition in the video data acquired by the operation, and synchronously and automatically mark start timestamp information and end timestamp information of the sub-video data; a mark storage module, configured to acquire and store the automatically marked start timestamp information and end timestamp information; and a video extraction module, configured to, after the video acquisition operation ends, extract at least one piece of sub-video data corresponding to the start timestamp information and end timestamp information from the total video data obtained by the video acquisition operation, according to the stored start timestamp information and end timestamp information.
The video processing apparatus according to claim 12, wherein the preset condition at least comprises:
and the video information of the same object is captured by the video acquisition operation within a continuous time period which meets the preset time length.
The video processing apparatus according to claim 12, wherein the sub video data comprises: a segment video data of the total video data and/or an entire segment video data of the total video data.
The video processing apparatus of claim 12, wherein the apparatus further comprises:
and the video output module is used for outputting at least one piece of sub-video data which is extracted from the total video data obtained by the video acquisition operation and corresponds to the starting timestamp information and the ending timestamp information to an application program according to a user instruction or a trigger condition.
The video processing apparatus of claim 15, wherein the video output module is specifically configured to:
and extracting at least one piece of sub-video data corresponding to the start timestamp information and the end timestamp information from the total video data obtained by the video acquisition operation according to a user instruction or a trigger condition, and outputting the sub-video data to an application program in parallel.
The video processing apparatus of claim 15, wherein the apparatus further comprises:
and the video merging module is used for merging at least one piece of sub-video data corresponding to the starting timestamp information and the ending timestamp information extracted from the total video data obtained by the video acquisition operation by the video extraction module to generate a video file, and providing the video output module to output the video file to the application program.
The video processing apparatus of claim 12, wherein the apparatus further comprises:
and the browsing positioning module is used for displaying the stored automatically marked start timestamp information and end timestamp information in a playing progress bar of the total video data so as to provide quick browsing and/or quick positioning for the at least one sub-video data in the total video data.
The video processing apparatus of claim 12, wherein the apparatus further comprises:
a tag modification module, configured to perform an operation on the automatic tag according to a tag modification instruction, where the operation includes: at least one of addition, deletion, and modification.
The video processing apparatus of claim 12, wherein the apparatus further comprises:
a video screening module, configured to screen out, according to a preset time condition, the start timestamp information and end timestamp information that meet the preset time condition from the stored automatically marked start timestamp information and end timestamp information; wherein,
the extracting, by the video extraction module, of at least one piece of sub-video data corresponding to the start timestamp information and the end timestamp information from the total video data obtained by the video acquisition operation specifically comprises:
extracting at least one piece of sub-video data corresponding to the screened-out start timestamp information and end timestamp information from the total video data obtained by the video acquisition operation.
According to a third aspect of embodiments of the present invention, there is provided an electronic apparatus, including: the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus; the memory is used for storing at least one executable instruction, and the executable instruction causes the processor to execute the corresponding operation of the method according to the first aspect.
The electronic device according to claim 21, wherein the preset condition at least comprises:
and the video information of the same object is captured by the video acquisition operation within a continuous time period which meets the preset time length.
The electronic device of claim 22, wherein the sub-video data comprises: a segment video data of the total video data and/or an entire segment video data of the total video data.
The electronic device of claim 21, wherein the processor is further configured to output at least one piece of sub-video data corresponding to the start timestamp information and the end timestamp information extracted from the total video data obtained from the video capture operation to an application according to a user instruction or a trigger condition.
The electronic device according to claim 24, wherein the processor is configured to output, to an application program according to a user instruction or a trigger condition, at least one piece of sub-video data corresponding to the start timestamp information and the end timestamp information extracted from the total video data obtained by the video capture operation, specifically:
and extracting at least one piece of sub-video data corresponding to the start timestamp information and the end timestamp information from the total video data obtained by the video acquisition operation according to a user instruction or a trigger condition, and outputting the sub-video data to the application program in parallel.
The electronic device of claim 24, wherein the user instruction comprises a hardware triggered output instruction and/or a software triggered output instruction.
The electronic device of claim 26, wherein the hardware-triggered output command is a key-triggered output video command; and/or the software trigger output instruction is a software trigger output instruction meeting a preset condition.
The electronic device of claim 24, wherein the processor is further configured to synthesize at least one sub-video data corresponding to the start timestamp information and the end timestamp information extracted from the total video data obtained from the video capture operation to generate a video file, and output the video file to the application program.
The electronic device of claim 21, wherein the processor is further configured to display the stored start timestamp information and end timestamp information of the automatic mark in a progress bar of the total video data to provide fast browsing and/or fast positioning for the at least one sub-video data in the total video data.
The electronic device of claim 21, wherein the processor is further configured to operate on the automatic tag according to tag modification instructions, the operations comprising: at least one of addition, deletion, and modification.
The electronic device of claim 21, wherein the processor is further configured to screen out, according to a preset time condition, the start timestamp information and end timestamp information that meet the preset time condition from the stored automatically marked start timestamp information and end timestamp information; and to extract at least one piece of sub-video data corresponding to the screened-out start timestamp information and end timestamp information from the total video data obtained by the video acquisition operation.
According to a fourth aspect of embodiments of the present invention, there is provided a computer storage medium having stored thereon a computer program which, when executed by a processor, implements the method according to the first aspect.
According to the embodiments of the present invention, at least one piece of sub-video data that meets a preset condition in the video data acquired by a video acquisition operation can be identified in real time during the video acquisition operation, and the start timestamp information and end timestamp information of the sub-video data are synchronously and automatically marked. After the video acquisition operation ends, the at least one piece of sub-video data corresponding to the stored automatically marked start timestamp information and end timestamp information is extracted from the total video data obtained by the video acquisition operation, so that the marked sub-video data can be selected on its own in subsequent video processing operations. This improves the flexibility of video processing, reduces the amount of video storage, and improves the user experience.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person skilled in the art may derive other drawings from them.
FIG. 1 is a flowchart illustrating a video processing method according to an embodiment of the present invention;
FIGS. 2-5 are schematic diagrams illustrating steps of different embodiments of a video processing method according to the present invention;
FIG. 6 is a block diagram of an embodiment of a video processing apparatus according to the present invention;
FIGS. 7-11 are block diagrams of various embodiments of a video processing apparatus according to the present invention; and
fig. 12 is a schematic structural diagram of an electronic device according to still another embodiment of the invention.
Detailed Description
To enable those skilled in the art to better understand the technical solutions in the embodiments of the present invention, these solutions are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention shall fall within the protection scope of the embodiments of the present invention.
The following further describes specific implementation of the embodiments of the present invention with reference to the drawings.
Please refer to fig. 1, which is a flowchart illustrating a video processing method according to an embodiment of the present invention. The video processing method of the present invention is applied to various camera devices, preferably portable camera devices (such as pocket cameras and action cameras) and other hardware devices with image capture and image processing functions.
As shown in the figure, the video processing method of the present invention mainly includes the following steps:
and step S1, executing video acquisition operation according to the shooting instruction, identifying at least one sub-video data meeting preset conditions in the video data acquired by the video acquisition operation in real time in the process of carrying out the video acquisition operation, and synchronously automatically marking the start timestamp information and the end timestamp information of the sub-video data.
Specifically, in the process of performing a video capture operation, at least one piece of sub-video data meeting a preset condition in video data acquired by the video capture operation can be identified through system presetting, manual selection by a user, or triggering when the preset condition is met, and the start time stamp information and the end time stamp information of the sub-video data are automatically marked by synchronizing, so as to identify the start position and the end position of the sub-video data.
In a specific implementation of the embodiment of the present invention, the preset condition includes, but is not limited to: video information of the same object being captured by the video acquisition operation within a continuous time period that meets a preset duration.
Here, the same object is, for example, the same person, the same animal, or the same plant.
Optionally, the preset time period may be preset by the system or manually input by the user.
For example, if a person is photographed continuously from the 1st second to the 90th second and disappears from the picture after the 90th second, the 1st second is automatically marked as start timestamp information and the 90th second as end timestamp information, so that the time period during which the person appears continuously in the picture is marked.
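As an illustration of how such marking might work, the sketch below tracks how long each detected object stays in view and records a start/end timestamp pair once its continuous presence reaches the preset duration. This is only a minimal sketch of the idea, not the patented implementation; `detect_object_ids` is a hypothetical stand-in for whatever recognition the device actually uses.

```python
# Minimal sketch of the synchronous auto-marking in step S1 (illustrative only).
from dataclasses import dataclass

@dataclass
class Mark:
    object_id: int
    start_ts: float  # seconds from the start of the recording
    end_ts: float

def mark_sub_videos(frames, fps, detect_object_ids, min_duration=3.0):
    """Yield a Mark for each object that stays in view for >= min_duration seconds."""
    active = {}    # object_id -> timestamp at which it first appeared
    marks = []
    last_ts = 0.0
    for idx, frame in enumerate(frames):
        ts = idx / fps
        last_ts = ts
        visible = set(detect_object_ids(frame))
        # newly appearing objects: remember their start timestamp
        for oid in visible:
            active.setdefault(oid, ts)
        # objects that disappeared: close their interval if it is long enough
        for oid in list(active):
            if oid not in visible:
                start = active.pop(oid)
                if ts - start >= min_duration:
                    marks.append(Mark(oid, start, ts))
    # close intervals still open when the capture ends
    for oid, start in active.items():
        if last_ts - start >= min_duration:
            marks.append(Mark(oid, start, last_ts))
    return marks
```

In the 1st-to-90th-second example above, this would produce a single mark with `start_ts = 1.0` and `end_ts = 90.0` for that person.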
The preset condition may be set by the system or manually by the user, so as to satisfy different systems' or users' definitions of sub-video data. The preset condition may also be generated from the user's historical behavior data, i.e., preference information is derived from the user's historical behavior and the preset condition is set automatically according to that preference information.
For example, if the user's historical behavior data shows that video shot in portrait mode is output frequently, sub-video data shot in portrait mode is taken as the preset condition, and the start timestamp information and end timestamp information of sub-video data shot in portrait mode are automatically marked.
In a specific implementation of the embodiment of the present invention, the sub-video data can be extracted and played through the marked start timestamp information and the marked end timestamp information.
In a specific implementation of the embodiment of the present invention, the sub-video data includes segment video data of the total video data and/or the entire video data of the total video data; both can be automatically marked, so that the segment video data and/or the entire video data can be operated on according to the marks.
Step S2, the start time stamp information and the end time stamp information of the auto-mark are acquired and stored.
Optionally, the automatically marked start timestamp information and end timestamp information may be stored in the camera device. For example, they may be stored independently of the video, written directly into the sub-video data of the total video data, or written directly into a specific image frame (for example, I-frame data) within the sub-video data of the total video data.
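As one illustration of the "stored independently" option, the marks could be kept in a sidecar file next to the recording. The JSON layout below is an assumption made for this sketch; the patent does not define any storage format.

```python
# Sketch of independent mark storage as a sidecar JSON file (layout assumed).
import json
from pathlib import Path

def save_marks(video_path: str, marks) -> Path:
    sidecar = Path(video_path).with_suffix(".marks.json")
    payload = [{"start_ts": m.start_ts, "end_ts": m.end_ts} for m in marks]
    sidecar.write_text(json.dumps(payload, indent=2))
    return sidecar

def load_marks(video_path: str):
    sidecar = Path(video_path).with_suffix(".marks.json")
    return json.loads(sidecar.read_text())
```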
Step S3: after the video acquisition operation ends, extracting at least one piece of sub-video data corresponding to the start timestamp information and end timestamp information from the total video data obtained by the video acquisition operation, according to the stored start timestamp information and end timestamp information.
Specifically, the at least one piece of sub-video data can be presented, for example, as a list according to the automatically marked start timestamp information and end timestamp information, so that the user can select the desired sub-video data from it, and the selected sub-video data can then be extracted from the total video data according to the user's selection.
The embodiment of the present invention may extract the at least one piece of sub-video data by index lookup, obtaining the sub-video data corresponding to the automatically marked start timestamp information and end timestamp information. The embodiment may also select automatically marked sub-video data according to an input user instruction, and may adjust the arrangement order of the sub-video data according to an input user instruction.
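One way to realize the timestamp-based extraction described above is to cut the marked interval out of the recorded file with an external tool. The sketch below uses the ffmpeg command-line tool with stream copy to avoid re-encoding; this is an assumption of the illustration, not something the patent specifies.

```python
# Sketch of cutting one marked sub-video out of the total video by its
# start/end timestamps (assumes the ffmpeg CLI is available on the device/host).
import subprocess

def extract_sub_video(total_video: str, start_ts: float, end_ts: float, out_path: str):
    subprocess.run(
        ["ffmpeg", "-y",
         "-ss", f"{start_ts:.3f}",          # seek to the marked start timestamp
         "-i", total_video,
         "-t", f"{end_ts - start_ts:.3f}",  # keep only the marked duration
         "-c", "copy",                      # copy streams without re-encoding
         out_path],
        check=True,
    )
```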
Referring to fig. 2, in an embodiment of the present invention, the step S3 includes the following processing steps:
and step S31, according to the preset time condition, screening out the time stamp information and the end time stamp information meeting the preset time condition from the stored automatically marked start time stamp information and end time stamp information.
Step S32, extracting at least one sub video data corresponding to the screened start timestamp information and end timestamp information from the total video data obtained by the video capturing operation.
Specifically, the preset time condition is, for example, a specific time period. According to this condition, the start timestamp information and end timestamp information falling within the specific time period can be screened out from the stored automatically marked start timestamp information and end timestamp information, and only the sub-video data collected in that specific time period is extracted from the total video data obtained by the video acquisition operation.
For example, the preset time condition may be set by a system or manually by a user, which is not limited by the present invention.
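A small sketch of the screening in step S31, under the assumption that the preset time condition is a [window_start, window_end] interval on the recording's own timeline; other forms of the condition would filter analogously.

```python
# Sketch of step S31: keep only the auto-marks inside a preset time window.
def filter_marks(marks, window_start: float, window_end: float):
    """Keep only marks whose interval lies entirely inside the window."""
    return [m for m in marks
            if m.start_ts >= window_start and m.end_ts <= window_end]

# Step S32 then runs the extraction only on the surviving marks, e.g.:
#   for i, m in enumerate(filter_marks(marks, 60.0, 300.0)):
#       extract_sub_video("total.mp4", m.start_ts, m.end_ts, f"sub_{i}.mp4")
```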
According to the embodiment of the invention, at least one piece of sub-video data meeting the preset conditions in the video data acquired by the video acquisition operation can be identified in real time in the process of the video acquisition operation, and the start timestamp information and the end timestamp information of the identified sub-video data are automatically marked synchronously, so that at least one piece of sub-video data added with the mark can be selected in the subsequent video processing operation, thereby improving the flexibility of the video processing operation, reducing the video storage capacity and improving the use experience of a user.
In another embodiment of the present invention, referring to fig. 3, the method further comprises the steps of:
and step S4, outputting at least one piece of sub-video data corresponding to the start time stamp information and the end time stamp information extracted from the total video data obtained by the video acquisition operation to an application program according to a user instruction or a trigger condition.
Specifically, the user may input a user instruction to choose to download at least one piece of sub-video data corresponding to the start timestamp information and the end timestamp information to the corresponding application program, or the system may download it to the corresponding application program automatically. The application program may be a system default or may be selected manually by a user instruction.
The trigger condition is set by the system or manually by the user, and may also be modified manually by the user after being set by the system.
The user instruction includes a hardware trigger output instruction and/or a software trigger output instruction, which is not limited in the embodiment of the present invention.
Illustratively, the hardware-triggered output instruction is a video output instruction triggered by pressing a key, for example a physical control key or a control on a touch screen.
Illustratively, the software trigger output instruction is a software trigger output instruction meeting a preset condition. The software triggered output instruction may be a single instruction or a plurality of instructions, and the user inputs the single instruction or the plurality of instructions to enable the camera device to execute the output operation of the at least one piece of sub-video data.
The application program may be located in the hardware applied to the video processing method, or may be located in other hardware having a communication connection relationship with the hardware applied to the video processing method.
For example, the data may be written to a memory card (e.g., an SD card or a TF card) inserted in the camera device to which the video processing method is applied, or output in a wired or wireless manner to an electronic device or storage device other than that camera device, for example uploaded to an electronic device such as a smartphone or tablet computer, or to a storage device such as cloud storage, a USB flash drive, or a portable hard disk.
According to the embodiment of the present invention, the at least one piece of sub-video data corresponding to the start timestamp information and the end timestamp information extracted from the total video data obtained by the video acquisition operation is output to an application program upon a user instruction or a trigger condition, so that the output of the sub-video data can be triggered manually or automatically, and the amount of data transmitted is reduced compared with transmitting the entire total video data.
In another embodiment of the present invention, the step S4 specifically includes:
and extracting at least one piece of sub-video data corresponding to the start timestamp information and the end timestamp information from the total video data obtained by the video acquisition operation according to a user instruction or a trigger condition and outputting the sub-video data to an application program in parallel.
Specifically, the extracted at least one piece of sub-video data corresponding to the start timestamp information and the end timestamp information may be output to the application program in parallel, thereby achieving the transmission efficiency of parallel downloading. The application program then performs subsequent processing on the received sub-video data.
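To make the parallel output concrete, the sketch below transfers several extracted sub-videos concurrently. `send_to_app` is a hypothetical callback standing in for whatever transfer channel the device actually uses (the patent leaves the transport unspecified).

```python
# Sketch of outputting several sub-videos to the application in parallel.
from concurrent.futures import ThreadPoolExecutor

def output_in_parallel(sub_video_paths, send_to_app, max_workers=4):
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(send_to_app, path) for path in sub_video_paths]
        # propagate any transfer error to the caller
        for f in futures:
            f.result()
```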
In an embodiment of the present invention, the step S4 further includes:
synthesizing the at least one piece of sub-video data corresponding to the start timestamp information and the end timestamp information, extracted from the total video data obtained by the video acquisition operation, into a video file, and outputting the generated video file to the application program so that the application program can directly perform subsequent processing on the synthesized file.
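The synthesis step can be illustrated by concatenating the extracted clips into one file. The sketch below uses ffmpeg's concat demuxer with stream copy, which is merely one plausible realization under the assumption that all clips share the same codec parameters.

```python
# Sketch of merging the extracted sub-videos into a single output file.
import os
import subprocess
import tempfile

def concat_sub_videos(sub_video_paths, out_path: str):
    # write the file list expected by ffmpeg's concat demuxer
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        for p in sub_video_paths:
            f.write(f"file '{os.path.abspath(p)}'\n")
        list_file = f.name
    try:
        subprocess.run(
            ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
             "-i", list_file, "-c", "copy", out_path],
            check=True,
        )
    finally:
        os.unlink(list_file)
```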
In another embodiment of the present invention, referring to fig. 4, the method further comprises:
and step S5, displaying the stored automatically marked start timestamp information and end timestamp information in a play progress bar of the total video data to provide fast browsing and/or fast positioning for at least one sub-video data in the total video data.
Illustratively, when the total video data is played, the automatically marked start timestamp information and end timestamp information can be displayed synchronously in the play progress bar of the video, and the sub-video data in the total video data can be quickly browsed and quickly located based on the start timestamp information and end timestamp information shown in the progress bar.
For example, the user may drag the play progress bar according to the positions of the start timestamp information and the end timestamp information, so as to quickly browse or quickly locate each sub video data in the total video data.
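A short sketch of how the stored marks could be mapped onto a play progress bar for quick browsing and quick positioning; representing marker positions as fractions of the total duration is an assumption of this illustration, not part of the disclosure.

```python
# Sketch of step S5: turn stored marks into progress-bar positions and seek targets.
def progress_bar_markers(marks, total_duration: float):
    """Return (start, end) pairs as fractions of the bar width for rendering."""
    return [(m.start_ts / total_duration, m.end_ts / total_duration)
            for m in marks]

def seek_to_mark(marks, index: int) -> float:
    """Return the timestamp the player should seek to for quick positioning."""
    return marks[index].start_ts
```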
In yet another embodiment, referring to fig. 5, the method of the present invention further comprises:
step S6, the automatic mark is operated according to the mark modification instruction, the operation includes: at least one of addition, deletion, and modification.
Illustratively, in addition to the synchronous automatic marking of the start timestamp information and end timestamp information of the identified sub-video data, the present invention may allow the user, while playing the obtained total video data, to add marks to other video data in the total video data so as to generate new sub-video data. In addition, the user may delete an existing automatic mark during playback of the total video data, thereby cancelling the marking of that sub-video data.
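The add / delete / modify operations on automatic marks can be illustrated with a minimal in-memory store (reusing the Mark record from the earlier sketch). How the device actually persists edited marks is not specified, so this only sketches the bookkeeping.

```python
# Sketch of step S6: a minimal mark store supporting add, delete, and modify.
class MarkStore:
    def __init__(self, marks=None):
        self.marks = list(marks or [])

    def add(self, start_ts: float, end_ts: float, object_id: int = -1):
        """Add a user-defined mark, creating a new sub-video interval."""
        self.marks.append(Mark(object_id, start_ts, end_ts))

    def delete(self, index: int):
        """Remove an existing automatic mark, cancelling that sub-video."""
        del self.marks[index]

    def modify(self, index: int, start_ts: float = None, end_ts: float = None):
        """Adjust the boundaries of an existing mark."""
        m = self.marks[index]
        if start_ts is not None:
            m.start_ts = start_ts
        if end_ts is not None:
            m.end_ts = end_ts
```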
Please refer to fig. 6, which is a block diagram illustrating an embodiment of a video processing apparatus according to the present invention. The video processing apparatus of the present invention is applied to various camera devices, preferably portable camera devices (such as pocket cameras and action cameras) and other hardware devices with image capture and image processing functions.
As shown in the figure, the video processing apparatus of the present invention mainly includes the following modules:
the mark setting module 601 is configured to execute a video capture operation according to a shooting instruction, identify at least one piece of sub-video data meeting a preset condition in video data obtained by the video capture operation in real time during the video capture operation, and automatically mark start timestamp information and end timestamp information of the sub-video data synchronously.
Specifically, the mark setting module 601 is configured to, during the video acquisition operation, identify at least one piece of sub-video data that meets a preset condition in the acquired video data through a system preset, a manual selection by the user, or a trigger fired when the preset condition is met, and to synchronously and automatically mark the start timestamp information and end timestamp information of the sub-video data, so as to identify its start position and end position.
In a specific implementation of the embodiment of the present invention, the preset conditions include, but are not limited to: and the video information of the same object is captured by the video acquisition operation within a continuous time period which meets the preset time length.
Alternatively, the same object is, for example, the same human, the same animal, the same plant.
Optionally, the preset time period may be preset by the system or manually input by the user.
For example, when a first person is photographed for a continuous period of time from 1 st second to 90 th second and disappears from the screen after the 90 th second, the 1 st second is automatically marked as start time stamp information and the 90 th second is automatically marked as end time stamp information, whereby the period of time that the first person appears continuously in the screen is marked.
The preset condition may be set by a system or manually by a user to satisfy definitions of sub video data by different systems or users. The preset condition can also be generated according to the historical behavior data of the user, and comprises the steps of obtaining the preference information of the user according to the historical behavior of the user and automatically setting the preset condition according to the preference information of the user.
For example, if the frequency of output in the portrait shooting mode selected from the user historical behavior data is high, the sub-video data in the portrait shooting mode is set as the preset condition, and the start timestamp information and the end timestamp information of the sub-video data in the portrait shooting mode are automatically marked.
In a specific implementation of the embodiment of the present invention, the sub video data includes: the fragment video data of the total video data and/or the whole video data of the total video data can be automatically marked, so that the fragment video data and/or the whole video data can be operated according to the mark.
A tag storage module 602, configured to obtain and store the start timestamp information and the end timestamp information of the automatic tag.
Specifically, the automatically marked start timestamp information and end timestamp information may be stored in the camera device. For example, they may be stored independently of the video, written directly into the sub-video data of the total video data, or written directly into a specific image frame (for example, I-frame data) within the sub-video data of the total video data.
The video extracting module 603 is configured to, after the video capturing operation is finished, extract at least one piece of sub-video data corresponding to the start timestamp information and the end timestamp information from the total video data obtained by the video capturing operation according to the stored start timestamp information and end timestamp information.
Specifically, the at least one sub-video data can be presented in a list manner, for example, according to the automatically marked start timestamp information and end timestamp information, so that the user can select the desired at least one sub-video data from the at least one sub-video data, and the selected at least one sub-video data can be extracted from the total video data according to the selection result of the user.
The embodiment of the invention can extract at least one sub-video data by adopting an index searching mode, and obtain at least one sub-video data corresponding to the automatically marked starting timestamp information and ending timestamp information. The embodiment of the present invention may also select at least one sub-video data with an automatic mark according to an input user instruction, and the embodiment of the present invention may also adjust an arrangement order of the at least one sub-video data according to the input user instruction.
According to the embodiment of the invention, in the process of performing the video acquisition operation, at least one piece of sub-video data meeting the preset conditions in the video data acquired by the video acquisition operation is identified in real time, and the start timestamp information and the end timestamp information of the identified sub-video data are automatically marked synchronously, so that when the video processing operation is performed subsequently, at least one piece of sub-video data added with the mark can be selected, thereby not only improving the flexibility of the video processing operation and reducing the video storage capacity, but also improving the use experience of a user.
In another embodiment of the present invention, referring to fig. 7, the apparatus further comprises:
the video screening module 608 is configured to screen out timestamp information and end timestamp information meeting a preset time condition from the stored automatically marked start timestamp information and end timestamp information according to the preset time condition.
Correspondingly, the video extracting module 603 is further configured to extract at least one sub-video data corresponding to the screened start timestamp information and end timestamp information from the total video data obtained by the video capturing operation.
Specifically, the preset time condition is, for example, a specific time period, time stamp information and end time stamp information meeting the specific time period may be screened from stored automatically marked start time stamp information and end time stamp information according to the preset time condition, and sub video data collected only in the specific time period may be extracted from total video data obtained by a video collection operation.
For example, the preset time condition may be set by a system or manually by a user, which is not limited by the present invention.
In another embodiment of the present invention, referring to fig. 8, the apparatus further comprises:
and a video output module 604, configured to output, to the application program, at least one piece of sub-video data extracted from the total video data obtained by the video capture operation and corresponding to the start timestamp information and the end timestamp information according to a user instruction or a trigger condition.
Specifically, the user may input a user instruction to select to download at least one piece of sub-video data corresponding to the start timestamp information and the end timestamp information to the corresponding application program, or the system may automatically download at least one piece of sub-video data corresponding to the start timestamp information and the end timestamp information to the corresponding application program. The application may be a system setting default or may have a user command for manual selection.
The trigger condition is system setting or manual setting of a user, and can also be manually modified by the user after the system setting.
The user instruction includes a hardware trigger output instruction and/or a software trigger output instruction, which is not limited in the embodiment of the present invention.
Illustratively, the hardware-triggered output instruction is a video output instruction triggered by pressing a key, for example a physical control key or a control on a touch screen.
Illustratively, the software trigger output instruction is a software trigger output instruction meeting a preset condition. The software triggered output instruction may be a single instruction or a plurality of instructions, and the user inputs the single instruction or the plurality of instructions to enable the camera device to execute the output operation of the at least one piece of sub-video data.
The application program may be located in the hardware of the video processing apparatus, or may be located in other hardware having a communication connection relationship with the hardware of the video processing apparatus.
For example, the data may be written to a memory card (e.g., an SD card or a TF card) inserted in the camera device to which the video processing apparatus is applied, or output in a wired or wireless manner to an electronic device or storage device other than that camera device, such as an electronic device like a smartphone or tablet computer, or a storage device like cloud storage, a USB flash drive, or a portable hard disk.
According to the embodiment of the invention, at least one piece of sub-video data which is extracted from the total video data obtained by video acquisition operation and corresponds to the starting timestamp information and the ending timestamp information is output to an application program through the user instruction or the trigger condition, so that the output operation of the at least one piece of sub-video data can be triggered manually or automatically, and the data transmission quantity of all the total video data is reduced.
In another embodiment of the present invention, the video output module 604 is specifically configured to:
and extracting at least one piece of sub-video data corresponding to the start timestamp information and the end timestamp information from the total video data obtained by the video acquisition operation according to a user instruction or a trigger condition and outputting the sub-video data to an application program in parallel.
Specifically, the extracted at least one sub-video data corresponding to the start time stamp information and the end time stamp information may be output to the application program in a parallel manner, thereby achieving data transmission efficiency of parallel downloading. And the application program carries out subsequent processing on the received at least one piece of sub-video data.
In another embodiment of the present invention, referring to fig. 9, the apparatus further comprises:
the video merging module 605 is configured to merge the video extraction module 603 to extract at least one piece of sub-video data corresponding to the start timestamp information and the end timestamp information from the total video data obtained by the video capturing operation, generate a video file, provide the video output module 604 to output the generated video file to the application program, and perform subsequent processing on the synthesized video file directly by the application program.
In another embodiment of the present invention, referring to fig. 10, the apparatus further comprises:
a browsing positioning module 606, configured to display the stored automatically marked start timestamp information and end timestamp information in a play progress bar of the total video data, so as to provide fast browsing and/or fast positioning for the at least one sub-video data in the total video data.
Illustratively, when the total video data is played, the automatically marked start timestamp information and end timestamp information can be displayed synchronously in the play progress bar of the video, and the sub-video data in the total video data can be quickly browsed and quickly located based on the start timestamp information and end timestamp information shown in the progress bar.
For example, the user may drag the play progress bar according to the positions of the start timestamp information and the end timestamp information, so as to quickly browse or quickly locate each sub video data in the total video data.
In another embodiment of the present invention, referring to fig. 11, the apparatus further comprises:
a flag modification module 607, configured to perform an operation on the automatic flag according to the flag modification instruction, where the operation includes: at least one of addition, deletion, and modification.
Illustratively, the present invention may provide that, in addition to automatically tagging start time stamp information and end time stamp information of the identified sub video data synchronously, a user additionally inputs a tag for other video data in the total video data to generate new sub video data when playing the obtained total video data. In addition, the user can also delete the existing automatic mark during the playing process of the total video data, so as to cancel the mark action of the sub video data in the total video data.
As shown in fig. 12, an embodiment of the present invention further provides an applied electronic device. The electronic device may include: a processor (processor)1202, a communication Interface 1204, a memory 1206, and a communication bus 1208.
Wherein:
the processor 1202, communication interface 1204, and memory 1206 communicate with one another via a communication bus 1208.
A communication interface 1204 for communicating with other electronic devices, such as a terminal device or a server.
The processor 1202 is configured to execute the program 1210, and may specifically perform the relevant steps in the foregoing method embodiments.
In particular, program 1210 may include program code comprising computer operating instructions.
The processor 1202 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement an embodiment of the present invention. The electronic device comprises one or more processors, which may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.
The memory 1206 is used for storing the program 1210. The memory 1206 may comprise high-speed RAM, and may also include non-volatile memory, such as at least one disk memory.
The program 1210 may specifically be configured to cause the processor 1202 to perform the following operations:
executing video acquisition operation according to a shooting instruction, identifying at least one piece of sub-video data meeting preset conditions in the video data acquired by the video acquisition operation in real time in the process of carrying out the video acquisition operation, and synchronously and automatically marking the start timestamp information and the end timestamp information of the sub-video data; acquiring and storing the automatically marked start timestamp information and end timestamp information; and after the video acquisition operation is finished, extracting at least one piece of sub-video data corresponding to the start timestamp information and the end timestamp information from the total video data obtained by the video acquisition operation according to the stored start timestamp information and end timestamp information.
In an optional embodiment, the preset condition at least includes: and the video information of the same object is captured by the video acquisition operation within a continuous time period which meets the preset time length.
In an alternative embodiment, the sub video data includes: a segment video data of the total video data and/or an entire segment video data of the total video data.
In an alternative embodiment, the program 1210 is further configured to output, to an application program, at least one piece of sub-video data corresponding to the start timestamp information and the end timestamp information, extracted from the total video data obtained by the video capture operation, according to a user instruction or a trigger condition.
In an optional embodiment, the program 1210 is further configured to extract at least one sub-video data corresponding to the start timestamp information and the end timestamp information from the total video data obtained by the video capture operation and output the sub-video data to the application program in parallel according to a user instruction or a trigger condition.
In an alternative embodiment, the program 1210 is further configured to synthesize at least one sub-video data corresponding to the start timestamp information and the end timestamp information extracted from the total video data obtained from the video capture operation to generate a video file, and output the video file to the application program.
In an alternative embodiment, the program 1210 is further configured to display the stored start time stamp information and end time stamp information of the automatic mark in a progress bar of the total video data to provide fast browsing and/or fast positioning for the at least one sub video data in the total video data.
In an alternative embodiment, the program 1210 is further configured to perform operations on the automatic tag according to the tag modification instruction, the operations including: at least one of addition, deletion, and modification.
According to the embodiment of the invention, at least one piece of sub-video data meeting the preset conditions in the video data acquired by the video acquisition operation can be identified in real time in the process of the video acquisition operation, and the start timestamp information and the end timestamp information of the sub-video data are automatically marked synchronously, so that at least one piece of sub-video data added with the mark can be selected in the subsequent video processing operation, the flexibility of the video processing operation is improved, the video storage capacity is reduced, and the use experience of a user can be improved.
Furthermore, an embodiment of the present invention further provides a computer storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the computer program can implement the steps in the video processing method.
It should be noted that, depending on implementation requirements, each component/step described in the embodiments of the present invention may be divided into more components/steps, and two or more components/steps, or partial operations thereof, may also be combined into a new component/step to achieve the purpose of the embodiments of the present invention.
The above-described method according to the embodiments of the present invention may be implemented in hardware or firmware, or as software or computer code that can be stored in a recording medium such as a CD-ROM, a RAM, a floppy disk, a hard disk, or a magneto-optical disk, or as computer code that is originally stored in a remote recording medium or a non-transitory machine-readable medium, downloaded over a network, and stored in a local recording medium, so that the method described herein can be processed by such software stored on a recording medium using a general-purpose computer, a dedicated processor, or programmable or dedicated hardware such as an ASIC or an FPGA. It will be appreciated that the computer, processor, microprocessor controller, or programmable hardware includes memory components (e.g., RAM, ROM, flash memory, etc.) that can store or receive software or computer code which, when accessed and executed by the computer, processor, or hardware, implements the video processing method described herein. Further, when a general-purpose computer accesses code for implementing the method illustrated herein, execution of the code transforms the general-purpose computer into a special-purpose computer for performing the method illustrated herein.
Those of ordinary skill in the art will appreciate that the various illustrative elements and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present embodiments.
The above embodiments are intended only to illustrate the embodiments of the present invention, not to limit them. Those skilled in the art can make various changes and modifications without departing from the spirit and scope of the embodiments of the present invention, so all equivalent technical solutions also fall within the scope of the embodiments of the present invention, which shall be defined by the claims.

Claims (10)

1. A method of video processing, the method comprising:
executing a video acquisition operation according to a shooting instruction, identifying in real time, during the video acquisition operation, at least one piece of sub-video data that meets preset conditions in the video data acquired by the video acquisition operation, and synchronously and automatically marking start timestamp information and end timestamp information of the sub-video data;
acquiring and storing the automatically marked start timestamp information and end timestamp information; and
after the video acquisition operation ends, extracting, according to the stored start timestamp information and end timestamp information, at least one piece of sub-video data corresponding to the start timestamp information and the end timestamp information from the total video data obtained by the video acquisition operation.
2. The video processing method according to claim 1, wherein the preset conditions at least comprise:
video information of the same object being captured by the video acquisition operation within a continuous time period that meets a preset duration.
3. The video processing method according to claim 2, wherein the sub-video data comprises: a partial segment of the total video data and/or the entire total video data.
4. The video processing method according to claim 1, further comprising, after extracting at least one piece of sub-video data corresponding to the start timestamp information and the end timestamp information from the total video data obtained by the video acquisition operation:
outputting, to an application program according to a user instruction or a trigger condition, the at least one piece of sub-video data corresponding to the start timestamp information and the end timestamp information extracted from the total video data obtained by the video acquisition operation.
5. The video processing method of claim 1, wherein the method further comprises:
displaying the stored automatically marked start timestamp information and end timestamp information on a playback progress bar of the total video data, so as to provide fast browsing and/or fast positioning of the at least one piece of sub-video data within the total video data.
6. The video processing method of claim 1, wherein the method further comprises:
operating on the automatic marks according to a mark modification instruction, wherein the operation comprises at least one of addition, deletion, and modification.
7. The video processing method of claim 1, wherein the extracting of at least one piece of sub-video data corresponding to the start timestamp information and the end timestamp information from the total video data obtained by the video acquisition operation further comprises:
screening out, according to a preset time condition, the start timestamp information and end timestamp information that meet the preset time condition from the stored automatically marked start timestamp information and end timestamp information; and
extracting at least one piece of sub-video data corresponding to the screened-out start timestamp information and end timestamp information from the total video data obtained by the video acquisition operation.
8. A video processing apparatus, characterized in that the apparatus comprises:
a mark setting module, configured to execute a video acquisition operation according to a shooting instruction, identify in real time, during the video acquisition operation, at least one piece of sub-video data that meets preset conditions in the video data acquired by the video acquisition operation, and synchronously and automatically mark the start timestamp information and end timestamp information of the sub-video data;
a mark storage module, configured to acquire and store the automatically marked start timestamp information and end timestamp information; and
a video extraction module, configured to extract, after the video acquisition operation ends and according to the stored start timestamp information and end timestamp information, at least one piece of sub-video data corresponding to the start timestamp information and the end timestamp information from the total video data obtained by the video acquisition operation.
9. An electronic device, comprising: a processor, a memory, a communication interface, and a communication bus, wherein the processor, the memory, and the communication interface communicate with one another through the communication bus;
the memory is configured to store at least one executable instruction that causes the processor to: execute a video acquisition operation according to a shooting instruction; identify in real time, during the video acquisition operation, at least one piece of sub-video data that meets preset conditions in the video data acquired by the video acquisition operation, and synchronously and automatically mark the start timestamp information and end timestamp information of the sub-video data; acquire and store the automatically marked start timestamp information and end timestamp information; and, after the video acquisition operation ends, extract, according to the stored start timestamp information and end timestamp information, at least one piece of sub-video data corresponding to the start timestamp information and the end timestamp information from the total video data obtained by the video acquisition operation.
10. A computer storage medium having stored thereon a computer program which, when executed by a processor, implements the video processing method of any one of claims 1 to 7.
CN202010028264.3A 2020-01-10 2020-01-10 Video processing method, device, equipment and computer storage medium Pending CN111212321A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010028264.3A CN111212321A (en) 2020-01-10 2020-01-10 Video processing method, device, equipment and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010028264.3A CN111212321A (en) 2020-01-10 2020-01-10 Video processing method, device, equipment and computer storage medium

Publications (1)

Publication Number Publication Date
CN111212321A true CN111212321A (en) 2020-05-29

Family

ID=70788844

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010028264.3A Pending CN111212321A (en) 2020-01-10 2020-01-10 Video processing method, device, equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN111212321A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112419639A (en) * 2020-10-13 2021-02-26 中国人民解放军国防大学联合勤务学院 Video information acquisition method and device
CN112419638A (en) * 2020-10-13 2021-02-26 中国人民解放军国防大学联合勤务学院 Method and device for acquiring alarm video
CN112581651A (en) * 2020-11-18 2021-03-30 宝能(广州)汽车研究院有限公司 Data recording method for vehicle, computer readable storage medium and vehicle

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130188923A1 (en) * 2012-01-24 2013-07-25 Srsly, Inc. System and method for compiling and playing a multi-channel video
CN104636162A (en) * 2013-11-11 2015-05-20 宏达国际电子股份有限公司 Method for performing multimedia management utilizing tags, and associated apparatus and associated computer program product
CN104796781A (en) * 2015-03-31 2015-07-22 小米科技有限责任公司 Video clip extraction method and device
CN106911900A (en) * 2017-04-06 2017-06-30 腾讯科技(深圳)有限公司 Video dubbing method and device
CN109862388A (en) * 2019-04-02 2019-06-07 网宿科技股份有限公司 Generation method, device, server and the storage medium of the live video collection of choice specimens
CN110136449A (en) * 2019-06-17 2019-08-16 珠海华园信息技术有限公司 Traffic video frequency vehicle based on deep learning disobeys the method for stopping automatic identification candid photograph
US10390067B1 (en) * 2015-04-29 2019-08-20 Google Llc Predicting video start times for maximizing user engagement
CN110267116A (en) * 2019-05-22 2019-09-20 北京奇艺世纪科技有限公司 Video generation method, device, electronic equipment and computer-readable medium
CN110602560A (en) * 2018-06-12 2019-12-20 优酷网络技术(北京)有限公司 Video processing method and device

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130188923A1 (en) * 2012-01-24 2013-07-25 Srsly, Inc. System and method for compiling and playing a multi-channel video
CN104636162A (en) * 2013-11-11 2015-05-20 宏达国际电子股份有限公司 Method for performing multimedia management utilizing tags, and associated apparatus and associated computer program product
CN104796781A (en) * 2015-03-31 2015-07-22 小米科技有限责任公司 Video clip extraction method and device
US10390067B1 (en) * 2015-04-29 2019-08-20 Google Llc Predicting video start times for maximizing user engagement
CN106911900A (en) * 2017-04-06 2017-06-30 腾讯科技(深圳)有限公司 Video dubbing method and device
CN110602560A (en) * 2018-06-12 2019-12-20 优酷网络技术(北京)有限公司 Video processing method and device
CN109862388A (en) * 2019-04-02 2019-06-07 网宿科技股份有限公司 Generation method, device, server and the storage medium of the live video collection of choice specimens
CN110267116A (en) * 2019-05-22 2019-09-20 北京奇艺世纪科技有限公司 Video generation method, device, electronic equipment and computer-readable medium
CN110136449A (en) * 2019-06-17 2019-08-16 珠海华园信息技术有限公司 Traffic video frequency vehicle based on deep learning disobeys the method for stopping automatic identification candid photograph

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112419639A (en) * 2020-10-13 2021-02-26 中国人民解放军国防大学联合勤务学院 Video information acquisition method and device
CN112419638A (en) * 2020-10-13 2021-02-26 中国人民解放军国防大学联合勤务学院 Method and device for acquiring alarm video
CN112419638B (en) * 2020-10-13 2023-03-14 中国人民解放军国防大学联合勤务学院 Method and device for acquiring alarm video
CN112581651A (en) * 2020-11-18 2021-03-30 宝能(广州)汽车研究院有限公司 Data recording method for vehicle, computer readable storage medium and vehicle

Similar Documents

Publication Publication Date Title
CN108900902B (en) Method, device, terminal equipment and storage medium for determining video background music
CN111212321A (en) Video processing method, device, equipment and computer storage medium
JP3982605B2 (en) Captured image management apparatus, captured image management method, and captured image management program
CN105320695B (en) Picture processing method and device
CN111209438A (en) Video processing method, device, equipment and computer storage medium
CN105338259B (en) Method and device for synthesizing video
KR101349699B1 (en) Apparatus and method for extracting and synthesizing image
CN111371988B (en) Content operation method, device, terminal and storage medium
CN111383224A (en) Image processing method, image processing device, storage medium and electronic equipment
KR20140045804A (en) Phtographing apparatus and method for blending images
CN111209435A (en) Method and device for generating video data, electronic equipment and computer storage medium
CN105338237A (en) Image processing method and device
JP6230386B2 (en) Image processing apparatus, image processing method, and image processing program
CN105808231B (en) System and method for recording and playing script
CN108805799A (en) Panoramic picture synthesizer, method and computer readable storage medium
CN108184056B (en) Snapshot method and terminal equipment
CN108540817B (en) Video data processing method, device, server and computer readable storage medium
CN113010738B (en) Video processing method, device, electronic equipment and readable storage medium
WO2019015411A1 (en) Screen recording method and apparatus, and electronic device
CN111259198A (en) Management method and device for shot materials and electronic equipment
CN109862295B (en) GIF generation method, device, computer equipment and storage medium
US20190180789A1 (en) Image processing apparatus, control method of image processing apparatus, and non-transitory computer readable medium
KR20170114453A (en) Video processing apparatus using qr code
US8648925B2 (en) Control apparatus, control method, and control system for reproducing captured image data
CN111246090A (en) Tracking shooting method and device, electronic equipment and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200529

RJ01 Rejection of invention patent application after publication