CN110312164A - Video processing method and apparatus, computer storage medium and terminal device - Google Patents

Video processing method and apparatus, computer storage medium and terminal device Download PDF

Info

Publication number
CN110312164A
CN110312164A (application number CN201910671851.1A)
Authority
CN
China
Prior art keywords
processed
video
video frame
blurred
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910671851.1A
Other languages
Chinese (zh)
Inventor
张海平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Oppo Chongqing Intelligent Technology Co Ltd
Original Assignee
Oppo Chongqing Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo Chongqing Intelligent Technology Co Ltd filed Critical Oppo Chongqing Intelligent Technology Co Ltd
Priority to CN201910671851.1A priority Critical patent/CN110312164A/en
Publication of CN110312164A publication Critical patent/CN110312164A/en
Pending legal-status Critical Current

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44222Analytics of user selections, e.g. selection of programs or purchase activity

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of the present application disclose a video processing method and apparatus, a computer storage medium, and a terminal device. The method comprises: acquiring a video sequence to be processed, where the video sequence to be processed includes a reference frame and at least one video frame to be processed; matching the reference frame with the at least one video frame to be processed, and determining an object to be blurred corresponding to each video frame to be processed in the at least one video frame to be processed; and performing blurring processing on the determined object to be blurred in each video frame to be processed.

Description

Video processing method and apparatus, computer storage medium and terminal device
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a video processing method and apparatus, a computer storage medium, and a terminal device.
Background
With the continuous development of video processing technology, the pursuit of artistic beauty has intensified, the demand for blurring videos has grown higher and higher, and users expect the resulting videos to have greater aesthetic and artistic appeal.
In current video blurring schemes, contour recognition is usually adopted: it is first determined whether a main object in the video intersects any of a number of objects to be blurred, and if they intersect, the objects to be blurred are identified and blurred. However, because the object to be blurred is determined solely by whether the two intersect, the accuracy of the determination is low.
Disclosure of Invention
The application aims to provide a video processing method and device, a computer storage medium and a terminal device, which can improve the accuracy of determining an object to be blurred and can also improve the blurring effect of a video.
The technical scheme of the application is realized as follows:
in a first aspect, an embodiment of the present application provides a video processing method, where the method includes:
acquiring a video sequence to be processed; wherein the video sequence to be processed comprises a reference frame and at least one video frame to be processed;
matching the reference frame with the at least one video frame to be processed, and determining an object to be blurred corresponding to each video frame to be processed in the at least one video frame to be processed;
and performing blurring processing on the determined object to be blurred in each video frame to be processed.
In a second aspect, an embodiment of the present application provides a video processing apparatus, which includes an obtaining unit, a matching unit, and a blurring unit, wherein,
the acquisition unit is configured to acquire a video sequence to be processed; wherein the video sequence to be processed comprises a reference frame and at least one video frame to be processed;
the matching unit is configured to match the reference frame with the at least one to-be-processed video frame, and determine a to-be-blurred object corresponding to each to-be-processed video frame in the at least one to-be-processed video frame;
the blurring unit is configured to perform blurring processing on the determined object to be blurred in each video frame to be processed.
In a third aspect, an embodiment of the present application provides a video processing apparatus, which includes a memory and a processor; wherein,
the memory for storing a computer program operable on the processor;
the processor, when executing the computer program, is adapted to perform the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a computer storage medium storing a video processing program, which when executed by at least one processor implements the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a terminal device, which includes at least the video processing apparatus according to the second aspect or the third aspect.
The method is applied to a video processing apparatus, and the video processing apparatus is located in a terminal device. First, a video sequence to be processed is acquired, where the video sequence to be processed includes a reference frame and at least one video frame to be processed; then, the reference frame is matched with the at least one video frame to be processed, and the object to be blurred corresponding to each video frame to be processed is determined; finally, blurring processing is performed on the determined object to be blurred in each video frame to be processed. In this way, the manner in which the present application determines the object to be blurred improves, to a certain extent, the accuracy of that determination, and also improves the visual blurring effect, thereby improving the user experience.
Drawings
Fig. 1 is a schematic flowchart of a video processing method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of another video processing method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of another video processing method according to an embodiment of the present application;
fig. 4 is a schematic flowchart of another video processing method according to an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating a comparison between a reference frame and a video frame to be processed according to an embodiment of the present application;
fig. 6 is a schematic flowchart of another video processing method according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present disclosure;
fig. 8 is a schematic hardware structure diagram of a video processing apparatus according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant application and are not limiting of the application. It should be noted that, for convenience of description, only the portions related to the present application are shown in the drawings.
In an embodiment of the present application, referring to fig. 1, a flowchart of a video processing method provided in an embodiment of the present application is shown. As shown in fig. 1, the method may include:
s101: acquiring a video sequence to be processed; wherein the video sequence to be processed comprises a reference frame and at least one video frame to be processed;
it should be noted that the method is applied to a video processing apparatus, and the video processing apparatus is located in a terminal device. The terminal device may be a mobile terminal such as a smart phone, a tablet computer, a notebook computer, a palm computer, a Personal Digital Assistant (PDA), a navigation device, a wearable device, a Digital camera, a video camera, or a fixed terminal such as a Digital TV, a desktop computer, or the like, and the embodiment of the present application is not particularly limited.
It should be further noted that the video sequence to be processed may be acquired by using a video acquisition component in the terminal device, or may be acquired by an external video acquisition component, and then a communication link is established through a communication component between the video acquisition component and the terminal device, so that the terminal device obtains the video sequence to be processed acquired by the video acquisition component, and the following describes in detail an acquisition manner of the video sequence to be processed.
Optionally, in some embodiments, for S101, the acquiring a video sequence to be processed may include:
acquiring a video sequence to be processed by using a video acquisition assembly;
and receiving the video sequence to be processed sent by the video acquisition component through a communication link between the video acquisition component and the terminal equipment.
Here, since a video capture component capable of capturing video data is relatively expensive, the terminal device may not have a video capture function of its own; instead, the video sequence to be processed is captured by a video capture component independent of the terminal device, and a communication link is established through the communication components between the video capture component and the terminal device, so that the terminal device obtains the video sequence to be processed captured by the video capture component. It is noted that the video capture component may be implemented by at least one of the following: a depth camera, a binocular camera, a three-dimensional (3D) structured-light camera module, or a Time-of-Flight (TOF) camera module.
Optionally, in some embodiments, the terminal device itself has the function of capturing video data; that is, the terminal device is provided with a video capture component capable of capturing at least video data, for example at least one of the following: a depth camera, a binocular camera, a 3D structured-light camera module, or a TOF camera module, so as to capture the video sequence to be processed.
In addition, when a user browses videos on a network server or a website, the video sequence to be processed can be acquired in a video downloading mode. The embodiment of the present application is not particularly limited to the manner of obtaining the video sequence to be processed.
In this way, after the video sequence to be processed is obtained, it includes a plurality of video frames, which comprise the reference frame and at least one video frame to be processed. The reference frame is used for determining the object to be blurred corresponding to each video frame to be processed; specifically, the reference frame may be a video frame arbitrarily selected by a user, the first frame of the video sequence directly selected by the terminal device, or a video frame selected according to history information.
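The splitting of an acquired sequence into a reference frame and the frames to be processed can be sketched as follows. This is an illustrative sketch only; the function and parameter names are assumptions, not part of the disclosure, and by default it implements the option in which the first frame is taken as the reference.

```python
def split_sequence(frames, reference_index=0):
    """Split a video sequence into (reference_frame, frames_to_process).

    By default the first frame serves as the reference, matching the case
    where the terminal device directly selects the first frame.
    """
    if not frames:
        raise ValueError("video sequence is empty")
    reference = frames[reference_index]
    # Every remaining frame becomes a video frame to be processed.
    to_process = [f for i, f in enumerate(frames) if i != reference_index]
    return reference, to_process
```

A user-selected or history-selected reference frame would simply pass a different `reference_index`.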
S102: matching the reference frame with the at least one video frame to be processed, and determining an object to be blurred corresponding to each video frame to be processed in the at least one video frame to be processed;
it should be noted that, after the reference frame is determined, the reference frame may be matched with each to-be-processed video frame in at least one to-be-processed video frame, so as to determine the to-be-blurred object corresponding to each to-be-processed video frame. The reference frame may be one video frame or a plurality of video frames.
In some embodiments, when the reference frame is a video frame, refer to fig. 2, which shows a flowchart of another video processing method provided in an embodiment of the present application. As shown in fig. 2, for S102, the matching the reference frame with the at least one to-be-processed video frame, and determining an object to be blurred corresponding to each to-be-processed video frame in the at least one to-be-processed video frame, the method may include:
s201: acquiring first image information contained in each video frame to be processed and acquiring second image information contained in the reference frame;
it should be noted that, for at least one to-be-processed video frame, first image information included in each to-be-processed video frame may be acquired, where the first image information may be a plurality of objects such as a portrait, a human face, or an object; for the reference frame, second image information included in the video frame may be acquired, where the second image information may also be a plurality of objects such as a portrait, a human face, or an object.
Thus, because the video sequence to be processed includes at least one video frame to be processed, at least one set of first image information can be obtained; and because the reference frame comprises only one video frame, only one set of second image information is obtained.
S202: and matching the first image information with the second image information aiming at each video frame to be processed, determining third image information which has difference with the second image information in the first image information, and taking the obtained third image information as an object to be blurred corresponding to each video frame to be processed.
It should be noted that, for at least one to-be-processed video frame, each to-be-processed video frame may be matched with the reference frame, that is, first image information included in each to-be-processed video frame is matched with second image information included in the reference frame, a difference object between the first image information included in each to-be-processed video frame and the second image information included in the reference frame is determined, and third image information of each to-be-processed video frame is obtained, so that the third image information is used as a to-be-blurred object corresponding to each to-be-processed video frame.
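Steps S201–S202 can be sketched as a set difference, assuming (as an illustration outside the patent text) that each frame's image information has already been reduced to a set of recognized object labels by a separate detector:

```python
def objects_to_blur(frame_objects, reference_objects):
    """Third image information: objects present in the frame to be
    processed but absent from the reference frame; these become the
    objects to be blurred for that frame."""
    return set(frame_objects) - set(reference_objects)
```

For example, if the reference frame contains only a person and a frame to be processed contains a person and a dog, the dog is the object to be blurred.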
In some embodiments, when the reference frame is a plurality of video frames, refer to fig. 3, which shows a flowchart of another video processing method provided by an embodiment of the present application. As shown in fig. 3, for S102, the matching the reference frame with the at least one to-be-processed video frame, and determining an object to be blurred corresponding to each to-be-processed video frame in the at least one to-be-processed video frame, the method may include:
s301: acquiring first image information contained in each video frame to be processed and second image information contained in each video frame in the reference frame to obtain multiple groups of second image information;
it should be noted that, for at least one to-be-processed video frame, first image information included in each to-be-processed video frame may be acquired, where the first image information may be a plurality of objects such as a portrait, a human face, or an object; for the reference frame, second image information included in each video frame in the reference frame may be acquired, where the second image information may also be a plurality of objects such as a portrait, a human face, or an object.
Thus, because the video sequence to be processed includes at least one video frame to be processed, at least one set of first image information can be obtained; and because the reference frame comprises a plurality of video frames, a plurality of sets of second image information can be obtained.
S302: and matching the first image information with the multiple groups of second image information aiming at each video frame to be processed, determining third image information which has difference with each group of the multiple groups of second image information in the first image information, and taking the obtained third image information as an object to be blurred corresponding to each video frame to be processed.
It should be noted that, for at least one to-be-processed video frame, each to-be-processed video frame may be matched with each video frame in the reference frame, that is, first image information included in each to-be-processed video frame is matched with multiple sets of second image information, a difference object between the first image information included in each to-be-processed video frame and each set of the multiple sets of second image information is determined, and third image information of each to-be-processed video frame is obtained, so that the third image information is used as a to-be-blurred object corresponding to each to-be-processed video frame.
That is to say, in the embodiment of the present application, a frame may be selected as a reference frame, then each to-be-processed video frame in the to-be-processed video sequence is matched with the reference frame, and an object having a difference from the reference frame in each to-be-processed video frame is determined as a to-be-blurred object corresponding to each to-be-processed video frame; and a plurality of video frames can be selected as reference frames, then each to-be-processed video frame in the to-be-processed video sequence is matched with the plurality of video frames, and all objects which are different from each frame in the plurality of video frames in each to-be-processed video frame are determined as to-be-blurred objects corresponding to each to-be-processed video frame.
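With several reference frames (S301–S302), an object is marked for blurring only if it differs from every set of second image information, i.e. it appears in none of the reference frames. A minimal sketch under the same object-label assumption as above (names illustrative):

```python
def objects_to_blur_multi(frame_objects, reference_sets):
    """Objects in the frame that are absent from every reference frame's
    set of second image information."""
    seen_in_references = set()
    for refs in reference_sets:
        seen_in_references |= set(refs)
    return set(frame_objects) - seen_in_references
```

With an empty list of reference sets, every object in the frame would differ from "each" reference, so all are returned.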
Further, in some embodiments, for S202 or S302, before the taking the obtained third image information as the object to be blurred corresponding to each video frame to be processed, the method may further include:
taking the obtained third image information as an initial object to be blurred corresponding to each video frame to be processed, and displaying the initial object to be blurred;
and selecting a specific object from the displayed initial objects to be blurred, and taking the specific object as the object to be blurred corresponding to each video frame to be processed.
It should be noted that, in order to determine the objects to be blurred more accurately, the obtained third image information may also be used as the initial objects to be blurred corresponding to each video frame to be processed, and then the initial objects to be blurred are displayed; therefore, a user can select a specific object from the displayed initial objects to be blurred, and the selected specific object is used as the object to be blurred corresponding to each video frame to be processed, so that the user experience is improved. Here, the selection of the specific object may be determined according to information such as interests or hobbies of the user, but the embodiment of the present application is not particularly limited.
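The optional confirmation step above can be sketched as a filter over the initial candidates; the selection input is a hypothetical stand-in for whatever the display-and-select interface returns:

```python
def confirm_objects(initial_objects, user_selection):
    """Keep only the initial objects to be blurred that the user picked
    from the displayed candidates (order of candidates preserved)."""
    return [obj for obj in initial_objects if obj in user_selection]
```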
S103: and performing blurring processing on the determined object to be blurred in each video frame to be processed.
It should be noted that after the object to be blurred is determined, the region to be blurred corresponding to the object may be further determined, so that the region can be blurred to obtain at least one processed video frame.
In some embodiments, refer to fig. 4, which shows a schematic flow chart of another video processing method provided in the embodiments of the present application. As shown in fig. 4, for S103, performing a blurring process on the determined object to be blurred in each video frame to be processed, the method may include:
S401: determining, based on the determined object to be blurred, a region to be blurred corresponding to the object to be blurred;
The region to be blurred is a region containing the object to be blurred. Here, after the object to be blurred is determined, the corresponding region to be blurred may be identified according to the object to be blurred.
S402: performing blurring processing on the region to be blurred to obtain at least one processed video frame.
It should be further noted that after the region to be blurred is determined in each video frame to be processed, blurring processing may be performed on the region using a blurring parameter, so that at least one processed video frame is obtained. The blurring parameter is a control parameter of the blurring algorithm; taking the Gaussian blur algorithm as an example, the blurring parameter may be the size of the Gaussian kernel. Blurring of the region to be blurred is thereby realized and at least one processed video frame is obtained, which not only improves the blurring effect of the video and meets the visual needs of users, but also improves the user experience.
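The region-restricted blurring of S402 can be sketched as below. This is only an illustration: a plain mean (box) filter stands in for the Gaussian kernel named in the text, the kernel size plays the role of the blurring parameter, and the frame is assumed to be a 2-D grayscale array.

```python
import numpy as np

def blur_region(frame, top, left, height, width, kernel=3):
    """Return a copy of `frame` with the given rectangle smoothed by a
    `kernel` x `kernel` mean filter; pixels outside the region to be
    blurred are left untouched."""
    out = frame.astype(float)
    pad = kernel // 2
    # Edge-pad the region so the filter is defined at its borders.
    region = np.pad(out[top:top + height, left:left + width], pad, mode="edge")
    blurred = np.zeros((height, width))
    for dy in range(kernel):
        for dx in range(kernel):
            blurred += region[dy:dy + height, dx:dx + width]
    out[top:top + height, left:left + width] = blurred / (kernel * kernel)
    return out
```

A larger `kernel` gives a stronger blur, which is the sense in which the blurring parameter controls the effect.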
For example, refer to fig. 5, which shows a schematic diagram of comparing a reference frame with a video frame to be processed provided by an embodiment of the present application. In fig. 5, (a) an image of a reference frame is provided, (b) an image of a video frame to be processed is provided, and (c) an image of another video frame to be processed is provided. Here, the video frame to be processed shown in (b) is matched with the reference frame shown in (a), and there is no difference between the two, that is, there is no object to be blurred in the video frame to be processed shown in (b), and it is not necessary to perform blurring processing on the video frame to be processed shown in (b); matching the video frame to be processed shown in the step (c) with the reference frame shown in the step (a), wherein the video frame to be processed shown in the step (c) is different from the reference frame shown in the step (a), namely, the video frame to be processed shown in the step (c) can be completely used as an object to be blurred, further determining a region to be blurred corresponding to the object to be blurred, and then blurring the region to be blurred in the video frame to be processed shown in the step (c).
This embodiment provides a video processing method applied to a video processing apparatus, and the video processing apparatus is located in a terminal device. The method comprises: acquiring a video sequence to be processed, where the video sequence to be processed includes a reference frame and at least one video frame to be processed; matching the reference frame with the at least one video frame to be processed, and determining the object to be blurred corresponding to each video frame to be processed; and performing blurring processing on the determined object to be blurred in each video frame to be processed. In this way, the manner in which the present application determines the object to be blurred improves, to a certain extent, the accuracy of that determination, and also improves the visual blurring effect, thereby improving the user experience.
In another embodiment of the present application, after the video sequence to be processed is obtained, a suitable reference frame may be further selected from the video sequence to be processed. Therefore, in some embodiments, refer to fig. 6, which shows a schematic flow chart of another video processing method provided by the embodiments of the present application. As shown in fig. 6, after S101, the method may further include:
s601: a reference frame is determined.
It should be noted that the reference frame is one video frame or several video frames in the video sequence to be processed. Specifically, the reference frame may be a video frame arbitrarily selected by a user, the first frame of the video sequence directly selected by the terminal device, or a video frame selected according to history information, which is not specifically limited in the embodiments of the present application. Each case is described in detail below.
Optionally, in some embodiments, for S601, the determining the reference frame may include:
acquiring a first video frame in the video sequence to be processed;
and determining the first video frame as the reference frame.
Optionally, in some embodiments, for S601, the determining the reference frame may include:
acquiring history information; wherein the history information includes reference frames selected in the past and the number of times each was selected;
and determining the reference frame from the video sequence to be processed according to the history information.
Optionally, in some embodiments, for S601, the determining the reference frame may include:
receiving a selection instruction; wherein the selection instruction is generated based on the selection operation of the user on the video sequence to be processed;
determining a video frame to be selected from the video sequence to be processed according to the selection instruction;
and determining the video frame to be selected as the reference frame.
That is to say, the reference frame may be not only the first frame of the video sequence directly selected by the terminal device, but also a video frame determined by the user according to the user's own needs, which the user may select arbitrarily from the video sequence. In addition, the reference frame may also be determined according to history information, where the history information may be obtained from user log information; the user log information includes reference frames selected in the past and the number of times each was selected, and this history information may serve as a basis for determining and selecting the reference frame.
For example, taking the history information as an example: suppose the most frequently selected reference frame in video sequences previously shot by the user is the video frame at the middle position of the sequence; then the video frame at the middle position is determined from the current video sequence to be processed according to the history information and taken as the reference frame. Alternatively, suppose the most frequently selected reference frame in previously shot video sequences contains content the user prefers (such as a specific figure or a specific face); then a video frame containing that preferred content is determined from the current video sequence to be processed according to the history information and taken as the reference frame.
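History-based selection of a reference frame can be sketched as picking the most frequently chosen frame position from past sessions. The log format (a list of previously chosen frame indices) is an assumption made for illustration, not part of the disclosure:

```python
from collections import Counter

def pick_reference_index(past_choices, num_frames):
    """Choose a reference-frame index from history information.

    `past_choices` lists the frame indices chosen as reference in the
    past; the most common one wins. With no history, fall back to the
    first frame, matching the terminal device's default choice.
    """
    if not past_choices:
        return 0
    index, _count = Counter(past_choices).most_common(1)[0]
    # Clamp in case the current sequence is shorter than past ones.
    return min(index, num_frames - 1)
```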
In addition, when the reference frame is determined, if the video frames contained in the video sequence have user information such as user images, user names, user contact ways and the like, the reference frame can be selected in consideration of multiple dimensions by combining the user information, so that the practicability of the reference frame selection can be improved.
In the embodiment of the application, the reference frame is used for determining the object to be blurred corresponding to each video frame to be processed, so that after the reference frame is determined, the reference frame can be used for matching with each video frame to be processed in a video sequence, different objects in each video frame to be processed after being compared with the reference frame are determined as the object to be blurred, and then the corresponding area to be blurred is further determined so as to perform blurring on the area to be blurred; therefore, the accuracy of determining the object to be blurred can be improved, and the blurring effect of the video can be improved.
In another embodiment of the present application, after determining an object to be blurred, in consideration of interest information of a user, a specific video frame may be selected from at least one video frame to be processed, and blurring processing may be performed on the object to be blurred in the specific video frame; alternatively, a specific object may be selected from the determined objects to be blurred, and the blurring process may be performed on the video frame to be processed including the specific object.
Optionally, in some embodiments, for S103, performing a blurring process on the determined object to be blurred in each video frame to be processed may include:
selecting a target video frame from the at least one video frame to be processed based on interest information of a user;
and performing blurring processing on the object to be blurred in the target video frame.
It should be noted that the target video frame may include one video frame to be processed or a plurality of video frames to be processed. In addition, the target video frame may be selected based on interest information of the user, based on the importance of the video frame, or based on a historical selection record, which is not specifically limited in the embodiments of the present application.
Take the interest information of the user as an example, where the selected target video frames comprise a plurality of video frames to be processed. In this way, target video frames are selected from the at least one video frame to be processed, and blurring processing is then performed on the objects to be blurred in those target video frames; this provides the user with a plurality of operable video frames and an interface for performing blurring processing on them.
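Interest-driven frame selection might be sketched as below. The tag-based representation of "interest information" is a hypothetical stand-in invented for illustration; the patent does not specify how interest information is encoded.

```python
def select_target_frames(frames, frame_tags, interest_tags):
    """Return the indices of frames whose scene tags overlap the user's
    interest tags.  `frames` is a list of frame identifiers, `frame_tags`
    maps frame index -> set of scene tags, and `interest_tags` is the set
    of tags relevant to the user.  All names here are illustrative."""
    return [i for i in range(len(frames))
            if frame_tags.get(i, set()) & interest_tags]
```

The selected indices would then be the "target video frames" on which blurring processing is performed.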
Optionally, in some embodiments, for S103, performing a blurring process on the determined object to be blurred in each video frame to be processed may include:
selecting a target object to be blurred from the determined objects to be blurred based on the interest information of the user;
according to the target object to be blurred, selecting a specific video frame containing the target object to be blurred from the at least one video frame to be processed;
and performing blurring processing on the target object to be blurred in the specific video frame.
It should be noted that the target object to be blurred may include one object to be blurred or a plurality of objects to be blurred. Correspondingly, when only one video frame to be processed contains the target object to be blurred, the specific video frame comprises one video frame; when a plurality of video frames to be processed all contain the target object to be blurred, the specific video frame comprises a plurality of video frames. In addition, the target object to be blurred may be selected based on the interest information of the user or based on a history selection record, which is not specifically limited in the embodiments of the present application.
Taking the interest information of the user as an example, the target object to be blurred can be determined according to the interest information of the user. Assuming the user does not like desert scenes, that is, the target object to be blurred is a desert scene, specific video frames containing the desert scene may be selected from the at least one video frame to be processed; if a plurality of video frames to be processed contain the desert scene, the desert scene is blurred in all of those frames, which can further improve the user experience.
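The per-object variant described in the desert-scene example can be sketched as follows. Frames are kept abstract and the blur itself is supplied by the caller, so every name here is illustrative rather than taken from the patent.

```python
def frames_containing(objects_per_frame, target):
    """Return the indices of frames whose detected-object set includes `target`."""
    return [i for i, objs in enumerate(objects_per_frame) if target in objs]

def blur_target_everywhere(frames, objects_per_frame, target, blur_fn):
    """Apply `blur_fn` to every frame that contains the target object to be
    blurred; frames without the target pass through unchanged."""
    hits = set(frames_containing(objects_per_frame, target))
    return [blur_fn(f) if i in hits else f for i, f in enumerate(frames)]
```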
The above describes the specific implementation of this embodiment in detail. It can be seen that, with the object to be blurred determined according to the technical solution of this embodiment, the accuracy of determining the object to be blurred can be improved to a certain extent; meanwhile, the blurring effect of the video can be improved, thereby improving the user experience.
In yet another embodiment of the present application, based on the same inventive concept as the previous embodiment, referring to fig. 7, a schematic diagram of a composition structure of a video processing apparatus 70 provided in an embodiment of the present application is shown. As shown in fig. 7, the video processing apparatus 70 may include an obtaining unit 701, a matching unit 702, and a blurring unit 703, wherein,
the obtaining unit 701 is configured to obtain a video sequence to be processed; wherein the video sequence to be processed comprises a reference frame and at least one video frame to be processed;
the matching unit 702 is configured to match the reference frame with the at least one to-be-processed video frame, and determine an object to be blurred corresponding to each to-be-processed video frame in the at least one to-be-processed video frame;
the blurring unit 703 is configured to perform blurring processing on the determined object to be blurred in each video frame to be processed.
In the above solution, referring to fig. 7, the video processing apparatus 70 may further include a determining unit 704 configured to determine, based on the determined object to be blurred, a region to be blurred corresponding to the object to be blurred;
the blurring unit 703 is specifically configured to perform blurring processing on the region to be blurred, so as to obtain at least one processed video frame.
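One plausible reading of "determine the region to be blurred, then blur it" is a bounding box grown around the differing pixels. In the sketch below the box margin and the mean-shift "blur" are stand-ins chosen for brevity, not details given in the patent.

```python
import numpy as np

def bounding_region(mask: np.ndarray, margin: int = 2):
    """Smallest axis-aligned box around the True pixels, grown by `margin`
    and clipped to the image; None when the mask is empty."""
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None
    h, w = mask.shape
    return (max(0, ys.min() - margin), min(h, ys.max() + 1 + margin),
            max(0, xs.min() - margin), min(w, xs.max() + 1 + margin))

def blur_region(frame: np.ndarray, region, strength: float = 0.5) -> np.ndarray:
    """Pull the region toward its mean intensity -- a cheap stand-in for a
    real blur kernel, enough to show the region-wise processing."""
    if region is None:
        return frame
    y0, y1, x0, x1 = region
    out = frame.astype(np.float32).copy()
    patch = out[y0:y1, x0:x1]
    out[y0:y1, x0:x1] = (1 - strength) * patch + strength * patch.mean()
    return out.astype(np.uint8)
```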
In the above scheme, the determining unit 704 is further configured to determine the reference frame.
In the above scheme, the obtaining unit 701 is further configured to obtain a first video frame in the video sequence to be processed;
the determining unit 704 is further configured to determine the first video frame as the reference frame.
In the above scheme, the obtaining unit 701 is further configured to obtain history information; wherein the history information comprises historically selected reference frames and the number of times each was selected;
the determining unit 704 is further configured to determine the reference frame from the video sequence to be processed according to the history information.
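A minimal sketch of the history-based selection the determining unit 704 performs, assuming the history is simply a list of previously chosen frame identifiers; the fallback to the first frame is my assumption, echoing the first-frame option described earlier.

```python
from collections import Counter

def pick_reference_from_history(video_frames, history):
    """Reuse the most frequently chosen historical reference frame if it
    appears in this sequence; otherwise fall back to the first frame."""
    for frame_id, _count in Counter(history).most_common():
        if frame_id in video_frames:
            return frame_id
    return video_frames[0]
```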
In the above scheme, referring to fig. 7, the video processing apparatus 70 may further include a receiving unit 705 configured to receive a selection instruction; wherein the selection instruction is generated based on the selection operation of the user on the video sequence to be processed;
the determining unit 704 is further configured to determine a video frame to be selected from the video sequence to be processed according to the selection instruction; and determining the video frame to be selected as the reference frame.
In the above solution, when the reference frame is a video frame, the obtaining unit 701 is further configured to obtain first image information included in each to-be-processed video frame, and obtain second image information included in the reference frame;
the matching unit 702 is specifically configured to match the first image information with the second image information for each to-be-processed video frame, determine third image information that is different from the second image information in the first image information, and use the obtained third image information as an to-be-blurred object corresponding to each to-be-processed video frame.
In the above scheme, when the reference frame is a plurality of video frames, the obtaining unit 701 is further configured to obtain first image information included in each to-be-processed video frame, and obtain second image information included in each video frame in the reference frame, so as to obtain a plurality of sets of second image information;
the matching unit 702 is specifically configured to match the first image information with the multiple sets of second image information for each to-be-processed video frame, determine third image information that is different from each of the multiple sets of second image information in the first image information, and use the obtained third image information as an to-be-blurred object corresponding to each to-be-processed video frame.
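The multiple-reference matching described above (a pixel counts as an object to be blurred only if it differs from every reference frame) can be sketched in a few lines; the threshold and per-pixel formulation are illustrative assumptions.

```python
import numpy as np

def mask_vs_many_references(references, frame, threshold: int = 25) -> np.ndarray:
    """A pixel is a candidate object to be blurred only if it differs from
    *every* one of the reference frames by more than `threshold`."""
    frame16 = frame.astype(np.int16)
    masks = [np.abs(frame16 - r.astype(np.int16)) > threshold for r in references]
    return np.logical_and.reduce(masks)
```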
In the above scheme, referring to fig. 7, the video processing apparatus 70 may further include a presentation unit 706 and a selection unit 707, wherein,
the presentation unit 706 is configured to use the obtained third image information as the initial object to be blurred corresponding to each video frame to be processed, and to present the initial object to be blurred;
the selecting unit 707 is configured to select a specific object from the displayed initial objects to be blurred, and use the specific object as the object to be blurred corresponding to each video frame to be processed.
In the above solution, the selecting unit 707 is further configured to select a target video frame from the at least one video frame to be processed based on interest information of a user;
the blurring unit 703 is specifically configured to perform blurring processing on an object to be blurred in the target video frame.
In the above solution, the selecting unit 707 is further configured to select a target object to be blurred from the determined objects to be blurred based on the interest information of the user, and to select, according to the target object to be blurred, a specific video frame containing the target object to be blurred from the at least one video frame to be processed;
the blurring unit 703 is specifically configured to perform blurring processing on the target object to be blurred in the specific video frame.
It is understood that in this embodiment, a "unit" may be a part of a circuit, a part of a processor, a part of a program or software, etc., and may also be a module, or may also be non-modular. Moreover, each component in the embodiment may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware or a form of a software functional module.
Based on such understanding, the technical solution of this embodiment, in essence, or the part thereof contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the method of this embodiment. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Accordingly, the present embodiments provide a computer storage medium storing a video processing program that, when executed by at least one processor, implements the method of any of the preceding embodiments.
Based on the above composition of the video processing apparatus 70 and the computer storage medium, fig. 8 shows a specific hardware structure diagram of the video processing apparatus 70 provided in an embodiment of the present application. As shown in fig. 8, the video processing apparatus 70 may include: a communication interface 801, a memory 802, and a processor 803, coupled together by a bus system 804. It is understood that the bus system 804 is used to enable connection and communication among these components; in addition to a data bus, the bus system 804 includes a power bus, a control bus, and a status signal bus, but for clarity of illustration the various buses are all labeled as the bus system 804 in fig. 8. The communication interface 801 is used for receiving and sending signals in the course of exchanging information with other external network elements;
a memory 802 for storing a computer program capable of running on the processor 803;
a processor 803 for executing, when running the computer program, the following:
acquiring a video sequence to be processed; wherein the video sequence to be processed comprises a reference frame and at least one video frame to be processed;
matching the reference frame with the at least one video frame to be processed, and determining an object to be blurred corresponding to each video frame to be processed in the at least one video frame to be processed;
and performing blurring processing on the determined object to be blurred in each video frame to be processed.
It will be appreciated that the memory 802 in this embodiment may be volatile memory, non-volatile memory, or both. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which serves as an external cache. By way of example, but not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM). The memory 802 of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
The processor 803 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor 803 or by instructions in the form of software. The processor 803 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as a RAM, a flash memory, a ROM, a PROM or EPROM, or a register. The storage medium is located in the memory 802, and the processor 803 reads the information in the memory 802 and completes the steps of the above method in combination with its hardware.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the processing units may be implemented within one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units configured to perform the functions described herein, or a combination thereof. For a software implementation, the techniques described herein may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. The software code may be stored in a memory and executed by a processor; the memory may be implemented within the processor or external to the processor.
Optionally, as another embodiment, the processor 803 is further configured to perform the method of any one of the previous embodiments when running the computer program.
Referring to fig. 9, a schematic diagram of a composition structure of a terminal device 90 provided in an embodiment of the present application is shown. As shown in fig. 9, the terminal device 90 may include at least the video processing apparatus 70 according to any one of the foregoing embodiments.
It should be noted that, in the present application, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method embodiments.
Features disclosed in several of the product embodiments provided in the present application may be combined in any combination to yield new product embodiments without conflict.
The features disclosed in the several method or apparatus embodiments provided in the present application may be combined arbitrarily, without conflict, to arrive at new method embodiments or apparatus embodiments.
The above description covers only specific embodiments of the present application, but the protection scope of the present application is not limited thereto; any changes or substitutions that a person skilled in the art could readily conceive of within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (15)

1. A method of video processing, the method comprising:
acquiring a video sequence to be processed; wherein the video sequence to be processed comprises a reference frame and at least one video frame to be processed;
matching the reference frame with the at least one video frame to be processed, and determining an object to be blurred corresponding to each video frame to be processed in the at least one video frame to be processed;
and performing blurring processing on the determined object to be blurred in each video frame to be processed.
2. The method according to claim 1, wherein the performing blurring processing on the determined object to be blurred in each video frame to be processed comprises:
determining a region to be blurred corresponding to the object to be blurred based on the determined object to be blurred;
and performing blurring processing on the region to be blurred to obtain at least one processed video frame.
3. The method of claim 1, wherein after the obtaining the video sequence to be processed, the method further comprises:
the reference frame is determined.
4. The method of claim 3, wherein the determining the reference frame comprises:
acquiring a first video frame in the video sequence to be processed;
and determining the first video frame as the reference frame.
5. The method of claim 3, wherein the determining the reference frame comprises:
acquiring history information; wherein the history information comprises historically selected reference frames and the number of times each was selected;
and determining the reference frame from the video sequence to be processed according to the history information.
6. The method of claim 3, wherein the determining the reference frame comprises:
receiving a selection instruction; wherein the selection instruction is generated based on the selection operation of the user on the video sequence to be processed;
determining a video frame to be selected from the video sequence to be processed according to the selection instruction;
and determining the video frame to be selected as the reference frame.
7. The method according to claim 1, wherein when the reference frame is a video frame, said matching the reference frame with the at least one to-be-processed video frame to determine the to-be-blurred object corresponding to each to-be-processed video frame in the at least one to-be-processed video frame comprises:
acquiring first image information contained in each video frame to be processed and acquiring second image information contained in the reference frame;
and matching the first image information with the second image information aiming at each video frame to be processed, determining third image information which has difference with the second image information in the first image information, and taking the obtained third image information as an object to be blurred corresponding to each video frame to be processed.
8. The method according to claim 1, wherein when the reference frame is a plurality of video frames, said matching the reference frame with the at least one to-be-processed video frame and determining the to-be-blurred object corresponding to each of the at least one to-be-processed video frame comprises:
acquiring first image information contained in each video frame to be processed and second image information contained in each video frame in the reference frame to obtain multiple groups of second image information;
and matching the first image information with the multiple groups of second image information aiming at each video frame to be processed, determining third image information which has difference with each group of the multiple groups of second image information in the first image information, and taking the obtained third image information as an object to be blurred corresponding to each video frame to be processed.
9. The method according to claim 7 or 8, wherein before said taking the obtained third image information as the object to be blurred corresponding to each of the video frames to be processed, the method further comprises:
taking the obtained third image information as an initial object to be blurred corresponding to each video frame to be processed, and displaying the initial object to be blurred;
and selecting a specific object from the displayed initial objects to be blurred, and taking the specific object as the object to be blurred corresponding to each video frame to be processed.
10. The method according to any one of claims 1 to 9, wherein the performing blurring processing on the determined object to be blurred in each video frame to be processed comprises:
selecting a target video frame from the at least one video frame to be processed based on interest information of a user;
and performing blurring processing on the object to be blurred in the target video frame.
11. The method according to any one of claims 1 to 9, wherein the performing blurring processing on the determined object to be blurred in each video frame to be processed comprises:
selecting a target object to be blurred from the determined objects to be blurred based on the interest information of the user;
according to the target object to be blurred, selecting a specific video frame containing the target object to be blurred from the at least one video frame to be processed;
and performing blurring processing on the target object to be blurred in the specific video frame.
12. A video processing apparatus comprising an acquisition unit, a matching unit and a blurring unit, wherein,
the acquisition unit is configured to acquire a video sequence to be processed; wherein the video sequence to be processed comprises a reference frame and at least one video frame to be processed;
the matching unit is configured to match the reference frame with the at least one to-be-processed video frame, and determine a to-be-blurred object corresponding to each to-be-processed video frame in the at least one to-be-processed video frame;
the blurring unit is configured to perform blurring processing on the determined object to be blurred in each video frame to be processed.
13. A video processing apparatus, characterized in that the video processing apparatus comprises a memory and a processor; wherein,
the memory for storing a computer program operable on the processor;
the processor, when running the computer program, is configured to perform the method of any of claims 1 to 11.
14. A computer storage medium, characterized in that the computer storage medium stores a video processing program which, when executed by at least one processor, implements the method of any one of claims 1 to 11.
15. A terminal device characterized in that it comprises at least a video processing apparatus according to claim 12 or 13.
CN201910671851.1A 2019-07-24 2019-07-24 Method for processing video frequency, device and computer storage medium and terminal device Pending CN110312164A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910671851.1A CN110312164A (en) 2019-07-24 2019-07-24 Method for processing video frequency, device and computer storage medium and terminal device


Publications (1)

Publication Number Publication Date
CN110312164A true CN110312164A (en) 2019-10-08

Family

ID=68080502

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910671851.1A Pending CN110312164A (en) 2019-07-24 2019-07-24 Method for processing video frequency, device and computer storage medium and terminal device

Country Status (1)

Country Link
CN (1) CN110312164A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101587646A (en) * 2008-05-21 2009-11-25 上海新联纬讯科技发展有限公司 Method and system of traffic flow detection based on video identification technology
US20140355680A1 (en) * 2007-01-11 2014-12-04 Korea Electronics Technology Institute Method for image prediction of multi-view video codec and computer readable recording medium therefor
CN105631803A (en) * 2015-12-17 2016-06-01 小米科技有限责任公司 Method and device for filter processing
CN106550243A (en) * 2016-12-09 2017-03-29 武汉斗鱼网络科技有限公司 Live video processing method, device and electronic equipment
CN108875780A (en) * 2018-05-07 2018-11-23 广东省电信规划设计院有限公司 The acquisition methods and device of difference object between image based on view data
CN108960206A (en) * 2018-08-07 2018-12-07 北京字节跳动网络技术有限公司 Video frame treating method and apparatus

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111464864A (en) * 2020-04-02 2020-07-28 Oppo广东移动通信有限公司 Reverse order video acquisition method and device, electronic equipment and storage medium
CN113542855A (en) * 2021-07-21 2021-10-22 Oppo广东移动通信有限公司 Video processing method and device, electronic equipment and readable storage medium
CN113542855B (en) * 2021-07-21 2023-08-22 Oppo广东移动通信有限公司 Video processing method, device, electronic equipment and readable storage medium

Similar Documents

Publication Publication Date Title
JP6154075B2 (en) Object detection and segmentation method, apparatus, and computer program product
JP6411505B2 (en) Method and apparatus for generating an omnifocal image
CN108932253B (en) Multimedia search result display method and device
KR102018887B1 (en) Image preview using detection of body parts
CN107223270B (en) Display data processing method and device
EP3110131B1 (en) Method for processing image and electronic apparatus therefor
CN106454086B (en) Image processing method and mobile terminal
CN106612396B (en) Photographing device, terminal and method
US20150010236A1 (en) Automatic image refocusing method
CN110111241B (en) Method and apparatus for generating dynamic image
CN106250421A (en) A kind of method shooting process and terminal
US11113998B2 (en) Generating three-dimensional user experience based on two-dimensional media content
CN110312164A (en) Method for processing video frequency, device and computer storage medium and terminal device
CN115278084B (en) Image processing method, device, electronic equipment and storage medium
CN109242977B (en) Webpage rendering method, device and storage medium
CN113095163B (en) Video processing method, device, electronic equipment and storage medium
US10291845B2 (en) Method, apparatus, and computer program product for personalized depth of field omnidirectional video
JP6085067B2 (en) User data update method, apparatus, program, and recording medium
CN110431838B (en) Method and system for providing dynamic content of face recognition camera
CN117201883A (en) Method, apparatus, device and storage medium for image editing
GB2513865A (en) A method for interacting with an augmented reality scene
CN106101539A (en) A kind of self-shooting bar angle regulation method and self-shooting bar
CN113014820A (en) Processing method and device and electronic equipment
CN114710624B (en) Shooting method and shooting device
CN108919957A (en) A kind of image transfer method, device, terminal device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191008