CN116071683A - Video processing method and device and terminal - Google Patents

Video processing method and device and terminal

Info

Publication number
CN116071683A
CN116071683A (application CN202310100044.0A)
Authority
CN
China
Prior art keywords
frame
current frame
target
image processing
sensitive area
Prior art date
Legal status
Pending
Application number
CN202310100044.0A
Other languages
Chinese (zh)
Inventor
陈建儒
Current Assignee
Ningbo Lutes Robotics Co., Ltd.
Original Assignee
Ningbo Lutes Robotics Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Ningbo Lutes Robotics Co., Ltd.
Priority to CN202310100044.0A
Publication of CN116071683A


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/40: Scenes; Scene-specific elements in video content
    • G06V 20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V 20/48: Matching video sequences
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a video processing method comprising the following steps: parsing an original video to extract each frame of the original video; selecting a current frame from the frames of the original video, and judging whether the current frame is an initial frame of the original video or an interval frame distributed in the original video at a fixed time interval; if the current frame is an initial frame or an interval frame, finding a sensitive area contained in the current frame using a target detection model configured with a first image processing parameter set, and erasing the sensitive area contained in the current frame using a target tracking device configured with a second image processing parameter set, wherein the image processing quality standard specified by the first image processing parameter set is higher than that specified by the second image processing parameter set; and if the current frame is neither an initial frame nor an interval frame, erasing the sensitive area contained in the current frame using the target tracking device configured with the second image processing parameter set. This solves the prior-art problem that sensitive information in the video is not completely removed, and improves erasing efficiency.

Description

Video processing method and device and terminal
Technical Field
The present disclosure relates to the field of video image processing technologies, and in particular, to a video processing method, a device, and a terminal.
Background
With the rapid development of society, people pay increasing attention to personal information security. For video and picture data recorded by vehicle-mounted equipment, China has issued corresponding principles and regulations. Specifically, there are two principles: the in-vehicle processing principle, under which data shall not be provided outside the vehicle unless strictly necessary; and the anonymization principle, under which data provided outside the vehicle shall be anonymized and desensitized as far as possible. In other words, video or picture data acquired by vehicle-mounted equipment and transmitted outside the vehicle or to the Internet of Vehicles may not be used unless it has undergone desensitization processing.
Against this background, the first prior art, CN202110701474.9, proposes a method, a device and an electronic device for processing a sensitive area in a picture, in which a deep-learning image recognition model finds the pixels containing sensitive information and erases that information by blurring. However, the scheme of the first prior art desensitizes every frame of the video at the pixel level; the amount of computation is excessive and consumes considerable hardware resources, so it is not suitable for real-time video desensitization on vehicle-mounted equipment.
The second prior art, CN202210659749.1, proposes a method and a device for removing video sensitive information and a computer-readable storage medium. For mainstream H.264 video, only the key frames are decoded, desensitized and re-encoded, while P-frames and B-frames are left untouched in order to improve desensitization efficiency. However, when sensitive information exists within a P-frame or a B-frame, the second prior art fails to desensitize the video completely.
Disclosure of Invention
In view of this, the application provides a video processing method, device and terminal, which aim to solve the prior-art problem that sensitive information existing in a P-frame or a B-frame cannot be erased, while improving the sensitive-information erasing efficiency of the video, so as to suit vehicle-mounted equipment with limited computational resources.
In order to achieve the above object, the present application provides a video processing method, including the steps of:
parsing an original video to extract each frame of the original video;
selecting a current frame from each frame of the original video, and judging whether the current frame is an initial frame of the original video or an interval frame distributed in the original video at fixed time intervals;
if the current frame is an initial frame or an interval frame, a sensitive area contained in the current frame is found out by using a target detection model configured with a first image processing parameter set, and the sensitive area contained in the current frame is erased by using a target tracking device configured with a second image processing parameter set, wherein the image processing quality standard specified by the first image processing parameter set is higher than the image processing quality standard specified by the second image processing parameter set;
and if the current frame is not the initial frame or the interval frame, erasing a sensitive area contained in the current frame by using a target tracking device configured with a second image processing parameter set.
Optionally, finding the sensitive area contained in the current frame using the target detection model configured with the first image processing parameter set includes:
training the target detection model according to the image corresponding to the current frame, so as to configure the target detection model to identify a tracking object of a preset type and the position of the tracking object in the image corresponding to the current frame;
and selecting the tracking object and the position thereof in the image corresponding to the current frame by using a first target frame for marking the tracking object.
Optionally, finding the sensitive area contained in the current frame using the target detection model configured with the first image processing parameter set further includes:
controlling the target detection model to determine a sensitive area contained in the tracking object in the image corresponding to the current frame according to the first target frame;
and selecting the sensitive area by using a second target frame for marking the sensitive area, and erasing the first target frame in the image corresponding to the current frame.
Optionally, erasing the sensitive area contained in the current frame using the target tracking device configured with the second image processing parameter set includes:
controlling the target tracking equipment to acquire the number and the positions of second target frames in the image corresponding to the current frame, and acquiring the motion rule and the appearance characteristics of the tracking object according to the number and the positions of the second target frames;
and learning how to identify a tracking object of a preset type and the position of the tracking object in the image corresponding to the current frame by using the target tracking device according to the motion rule and the appearance characteristic, and controlling the target tracking device to erase a sensitive area selected by a second target frame in the image corresponding to the current frame after learning is completed.
Optionally, erasing the sensitive area contained in the current frame using the target tracking device configured with the second image processing parameter set further includes:
and inputting a subsequent frame of the current frame into the target tracking equipment, determining the position of a tracking object in an image corresponding to the subsequent frame according to the current frame by using the target tracking equipment, and selecting a sensitive area contained in the tracking object by using the second target frame according to the position of the tracking object.
Optionally, inputting the frame following the current frame into the target tracking device includes:
if the subsequent frame is an interval frame, judging whether at least one of a newly added second target frame and a lost second target frame exists in the image corresponding to the interval frame;
if yes, the target detection model is controlled to execute at least one operation of correcting the newly added second target frame and deleting the lost second target frame.
Optionally, inputting the frame following the current frame into the target tracking device further includes:
and if the subsequent frames are interval frames and the tracking objects of the preset types are not identified in the images corresponding to the interval frames, the target tracking equipment is used for identifying the tracking objects of the subsequent frames of the interval frames.
Optionally, the first image processing parameter set includes a first processing precision, a first operation speed and a first operation time, and the second image processing parameter set includes a second processing precision, a second operation speed and a second operation time, and the first processing precision, the first operation speed and the first operation time are respectively greater than the second processing precision, the second operation speed and the second operation time.
The application also provides a video processing apparatus, the apparatus comprising:
the video processing module is used for analyzing the original video to extract each frame of the original video;
the frame number selecting module is electrically connected with the video processing module and is used for selecting a current frame from each frame of the original video;
the frame number judging module is electrically connected with the frame number selecting module and is used for judging whether the current frame is an initial frame of the original video or an interval frame distributed in the original video at a fixed time interval;
the execution module is electrically connected with the frame number judging module and is configured to: when the current frame is an initial frame or an interval frame, find the sensitive area contained in the current frame using the target detection model configured with the first image processing parameter set and erase the sensitive area contained in the current frame using the target tracking device configured with the second image processing parameter set; or, when the current frame is not an initial frame or an interval frame, erase the sensitive area contained in the current frame using the target tracking device configured with the second image processing parameter set.
The application also provides a terminal comprising a memory and a processor, wherein the memory stores a computer program, and the video processing method is realized when the processor executes the computer program stored in the memory.
According to the invention, a hybrid video processing method is adopted: the original video is parsed to extract each of its frames; when the current frame is not an initial frame or an interval frame, the sensitive area contained in the current frame is found and erased using the target tracking device configured with the second image processing parameter set; when the current frame is an initial frame or an interval frame, the sensitive area contained in the frame is first found using the target detection model configured with the first image processing parameter set, and is then erased using the target tracking device. This solves the prior-art problem that sensitive information existing in a P-frame or a B-frame cannot be erased, improves the sensitive-information erasing efficiency of the video, and suits situations where the computing resources of vehicle-mounted equipment are limited.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered limiting of the scope; a person skilled in the art may obtain other related drawings from these drawings without inventive effort.
FIG. 1 is a flow chart of a video processing method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of an apparatus of a video processing method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a terminal of a video processing method according to an embodiment of the present application.
Detailed Description
Specific embodiments of the present invention will be described in detail below with reference to the accompanying drawings. It will be apparent that the described embodiments are only some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art without making any inventive effort, are intended to be within the scope of the present invention.
In the description of the present invention, the terms "first," "second," "third," and the like are merely used for distinguishing between similar elements and not necessarily for indicating or implying a relative importance or order.
Furthermore, the terms "comprise", "include", or any other variation thereof, are intended to cover a non-exclusive inclusion, so that a process, method, article or device that comprises a list of elements includes not only those elements but may also include other elements not expressly listed.
Fig. 1 is a schematic flow chart of a video processing method according to an embodiment of the present invention, referring to fig. 1, the video processing method includes the following steps:
s1, analyzing the original video to extract each frame of the original video. Specifically, the original video is a video that has not been subjected to sensitive content processing, and the device for obtaining the original video may be a vehicle recorder, an intelligent monitoring camera, a mobile phone, a camera, a tablet computer, etc., which is not limited only herein. The original video is parsed to obtain all frames of the original video, so that each frame in the original video is processed with sensitive information respectively.
S2, selecting a current frame from each frame of the original video.
S3, judging whether the current frame is an initial frame of the original video or an interval frame distributed in the original video at a fixed time interval. Specifically, from all frames of the original video obtained in step S1, each frame is used in turn as the current frame for the corresponding processing, and it is then determined whether the current frame is one of two special frames: the initial frame or an interval frame. The initial frame is the first of all frames of the original video, and an interval frame is a frame occurring among the frames of the original video at a fixed time interval. The fixed time interval may be one millisecond, one second, ten seconds, ten minutes, one hour, and so on, without limitation.
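A minimal sketch of the judgment in step S3, assuming frame timestamps are available from the decoder; the one-second default interval is an illustrative assumption, since the method only requires the interval to be fixed:

```python
# Illustrative sketch of step S3; the 1000 ms default is an assumption.
def make_interval_checker(interval_ms: float = 1000.0):
    """Return a predicate that flags the initial frame and frames spaced
    at least interval_ms apart (the interval frames of step S3)."""
    state = {"last_ms": None}

    def is_initial_or_interval(index: int, timestamp_ms: float) -> bool:
        if index == 0 or state["last_ms"] is None:
            state["last_ms"] = timestamp_ms
            return True  # initial frame
        if timestamp_ms - state["last_ms"] >= interval_ms:
            state["last_ms"] = timestamp_ms
            return True  # interval frame
        return False

    return is_initial_or_interval
```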
S4, if the current frame is an initial frame or an interval frame, the sensitive area contained in the current frame is found using the target detection model configured with the first image processing parameter set, and the sensitive area contained in the current frame is erased using the target tracking device configured with the second image processing parameter set, wherein the image processing quality standard specified by the first image processing parameter set is higher than that specified by the second image processing parameter set. If the current frame is not an initial frame or an interval frame, the sensitive area contained in the current frame is erased using the target tracking device configured with the second image processing parameter set. Specifically, when the current frame is determined to be an initial frame or an interval frame, the sensitive area contained in the current frame is first found using the target detection model configured with the first image processing parameter set, and then erased using the target tracking device configured with the second image processing parameter set. Otherwise, the sensitive area contained in the current frame is erased directly by the target tracking device configured with the second image processing parameter set. Illustratively, the first and second image processing parameter sets contain the same kinds of parameters, for example required energy consumption, recognition rate, calculation speed, calculation accuracy, image sharpness optimization, and the like.
In one embodiment, through steps S1 to S4, each frame of the original video is parsed, it is determined whether the current frame is an initial frame or an interval frame, and the sensitive area contained in the current frame is erased in one of two different ways accordingly. In the normal case, the target tracking device configured with the second image processing parameter set is used directly to erase the sensitive area contained in the current frame; although some calculation accuracy is sacrificed, the energy consumption and computational resources required are greatly reduced, and low-power operation can be sustained over long periods. In the special case (when the current frame is an initial frame or an interval frame), the target detection model, which needs more computational energy and resources but offers higher calculation accuracy, is first invoked to find the sensitive area contained in the current frame, which is then erased by the target tracking device. Steps S1-S4 are repeated to process the sensitive information of successive current frames of the original video, which effectively solves the prior-art problem that sensitive information in a P-frame or a B-frame cannot be removed. Meanwhile, through the cooperation of the two processing paths, the method and the device improve the sensitive-information erasing efficiency of the video while remaining suitable for a vehicle-mounted computer system with limited hardware resources, achieving a balance between accuracy and computer-resource occupation.
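The two cooperating paths can be sketched as follows, reusing the helpers above; `detector` and `tracker` are hypothetical objects whose method names merely mirror this description, not an API defined by the patent:

```python
# Minimal sketch of the S1-S4 dispatch; `detector` and `tracker` are assumed
# objects (see the sketches below for one possible tracker implementation).
def desensitize_video(video_path: str, detector, tracker,
                      interval_ms: float = 1000.0):
    is_special = make_interval_checker(interval_ms)
    for index, timestamp_ms, frame in extract_frames(video_path):
        if is_special(index, timestamp_ms):
            # high-precision path: the detection model (first parameter set)
            # finds the sensitive areas and re-seeds the tracking device
            boxes = detector.find_sensitive_regions(frame)
            tracker.reinitialize(frame, boxes)
        # low-power path (second parameter set) runs on every frame
        yield tracker.erase_sensitive_regions(frame)
```

Each yielded frame could then be re-encoded into the desensitized output video.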
Optionally, finding the sensitive area contained in the current frame using the target detection model configured with the first image processing parameter set in step S4 specifically includes: training the target detection model according to the image corresponding to the current frame, so as to configure it to identify a tracking object of a preset type and the position of that object in the image corresponding to the current frame; and framing the tracking object and its position in the image corresponding to the current frame with a first target frame used to mark the tracking object. Specifically, the target detection model is first trained in real time on the content of the image corresponding to the current frame, so that it can identify, accurately and rapidly, the preset type of tracking object and its position in the image. By way of example, the preset type of tracking object may be an animal, a person, an automobile, a house, a sign, a large screen on the outer wall of a shopping mall, and so on, which is not limited herein. The model then frames the identified tracking object and its position with the first target frame. For example, if a cow stands in the center of the road in the image corresponding to the current frame, the target detection model trained in real time can quickly and accurately identify the cow and frame the cow and its location with a first target frame.
Optionally, finding the sensitive area contained in the current frame using the target detection model configured with the first image processing parameter set in step S4 further includes: controlling the target detection model to determine, according to the first target frame, the sensitive area contained in the tracking object in the image corresponding to the current frame; and framing the sensitive area with a second target frame used to mark the sensitive area, while erasing the first target frame from the image corresponding to the current frame. Specifically, the trained target detection model described above is first controlled to determine whether the region framed by the first target frame in the image corresponding to the current frame contains a sensitive area of the tracking object. If so, the target detection model frames the sensitive area of the tracking object with a second target frame, and thereafter only the second target frame is retained, in order to reduce the complexity of the picture and improve the user experience. The sensitive area may include a human face, private body parts, a motor-vehicle license plate, and the like, which are not limited herein.
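A sketch of this two-stage framing, under the assumption of two pretrained detectors (`object_model` for the preset tracking object, `region_model` for sensitive areas such as faces or license plates); both names and their `detect()` API are illustrative, not prescribed by the application:

```python
# Sketch of the two-stage box logic: a first target frame per tracking
# object, then a second target frame per sensitive area inside it.
def find_sensitive_regions(frame, object_model, region_model):
    second_target_frames = []
    for (x, y, w, h) in object_model.detect(frame):   # first target frames
        crop = frame[y:y + h, x:x + w]
        for (rx, ry, rw, rh) in region_model.detect(crop):  # face, plate, ...
            # keep only the second target frame, mapped back to full-image
            # coordinates; the first target frame is erased after this step
            second_target_frames.append((x + rx, y + ry, rw, rh))
    return second_target_frames
```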
Optionally, erasing the sensitive area contained in the current frame using the target tracking device configured with the second image processing parameter set in step S4 specifically includes: controlling the target tracking device to acquire the number and positions of the second target frames in the image corresponding to the current frame, and deriving the motion pattern and appearance characteristics of the tracking object from that number and those positions; and having the target tracking device learn, from the motion pattern and appearance characteristics, how to identify the preset type of tracking object and its position in the image corresponding to the current frame, and, after learning is completed, controlling the target tracking device to erase the sensitive area framed by the second target frame in the image corresponding to the current frame. Specifically, the target tracking device is first controlled to acquire the number and positions of the second target frames with which the target detection model framed the sensitive areas in the image corresponding to the current frame; the target tracking device then derives the motion pattern and appearance characteristics of the tracking object from that number and those positions and learns from them, continuously improving its ability to identify, accurately and rapidly, the preset type of tracking object and its position in the image; finally, the target tracking device is controlled to erase the sensitive area framed by the second target frame in the image corresponding to the current frame. In this embodiment, the core of this step is that the results produced in real time by the target detection model, with its high calculation accuracy and high energy consumption, are provided to the lower-accuracy, low-power target tracking device for training and learning, so that the calculation accuracy of the target tracking device is continuously improved and optimized. The tracking device can thus exploit its low-power advantage outside the special case (when the current frame is an initial frame or an interval frame) while recognition accuracy and efficiency are still ensured, making the method suitable for vehicle-mounted equipment with limited computational resources.
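A sketch of the tracking-device side, matching the `tracker` object assumed earlier. It uses OpenCV's legacy multi-tracker (available in opencv-contrib-python) with CSRT trackers and Gaussian blur for erasure; all three choices are assumptions, since the application only requires a tracking device cheaper than the detection model:

```python
# Sketch of a low-cost tracking device; CSRT and blur-erasure are assumptions.
import cv2

class SensitiveRegionTracker:
    def __init__(self):
        self.trackers = cv2.legacy.MultiTracker_create()

    def reinitialize(self, frame, boxes):
        """Re-seed the tracker with the detector's second target frames."""
        self.trackers = cv2.legacy.MultiTracker_create()
        for box in boxes:  # (x, y, w, h) tuples
            self.trackers.add(cv2.legacy.TrackerCSRT_create(), frame, box)

    def erase_sensitive_regions(self, frame):
        """Track the second target frames forward and blur them out."""
        ok, boxes = self.trackers.update(frame)
        for box in (boxes if ok else []):
            x, y, w, h = (max(0, int(v)) for v in box)
            roi = frame[y:y + h, x:x + w]
            if roi.size:
                frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
        return frame
```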
Optionally, erasing the sensitive area contained in the current frame using the target tracking device configured with the second image processing parameter set in step S4 further includes: inputting the frame following the current frame into the target tracking device, using the target tracking device to determine, from the current frame, the position of the tracking object in the image corresponding to that subsequent frame, and framing the sensitive area contained in the tracking object with the second target frame according to the position of the tracking object. Specifically, after the training and learning described above, the target tracking device is able to desensitize each frame of the original video in the normal case (when the current frame is not an initial frame or an interval frame). At this point, the frame following the current frame is input into the target tracking device, which determines whether the tracking object exists in the corresponding image; if so, the target tracking device can directly frame the sensitive area contained in the tracking object with the second target frame.
Optionally, inputting the frame following the current frame into the target tracking device specifically includes: if the subsequent frame is an interval frame, judging whether the image corresponding to the interval frame contains at least one of a newly added second target frame and a lost second target frame; and if so, controlling the target detection model to perform at least one of correcting the newly added second target frame and deleting the lost second target frame. Specifically, while steps S1 to S4 are being repeated, when the frame following the current frame is identified as an interval frame, the target detection model is used to acquire the image corresponding to that interval frame and judge whether there is a lost second target frame that no longer frames its sensitive area, or a newly added second target frame that did not exist at the previous interval frame; if so, the target detection model is controlled to delete the lost second target frame or correct the newly added one. In this embodiment, the core of this step is to make timely corrections and adjustments while steps S1 to S4 are repeated. When the subsequent frame is an interval frame, the target detection model performs the operations described above and, by virtue of its high calculation accuracy and high recognition rate, corrects or adjusts any deviation of the second target frames selected by the target tracking device in the frame preceding the interval frame, so that the target tracking device can accurately frame the sensitive area contained in the tracking object in the frame following the interval frame.
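The comparison of newly added and lost second target frames can be sketched as an intersection-over-union match between the detector's fresh boxes and the tracker's carried boxes; the 0.5 threshold is an illustrative assumption:

```python
# Sketch of the interval-frame correction. Unmatched detector boxes are the
# "newly added" second target frames; unmatched tracker boxes are "lost" ones.
def iou(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def reconcile_second_target_frames(detector_boxes, tracker_boxes,
                                   threshold: float = 0.5):
    added = [d for d in detector_boxes
             if all(iou(d, t) < threshold for t in tracker_boxes)]
    lost = [t for t in tracker_boxes
            if all(iou(d, t) < threshold for d in detector_boxes)]
    # the corrected set is the detector output: newly added frames are
    # adopted and lost (drifted) frames are deleted when the tracker re-seeds
    return detector_boxes, added, lost
```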
Optionally, inputting the frame following the current frame into the target tracking device further includes: if the subsequent frame is an interval frame and no tracking object of the preset type is identified in the image corresponding to the interval frame, using the target tracking device to identify the tracking object in the frames following the interval frame. Specifically, while steps S1 to S4 are being repeated, when the frame following the current frame is identified as an interval frame and no tracking object of the preset type is identified in the corresponding image, the target tracking device is started directly to identify the tracking object in the frames following the interval frame; in this way, energy consumption can be further reduced.
Optionally, the first image processing parameter set includes a first processing precision, a first operation speed and a first operation time, and the second image processing parameter set includes a second processing precision, a second operation speed and a second operation time, where the first processing precision, the first operation speed and the first operation time are respectively greater than the second processing precision, the second operation speed and the second operation time. In this embodiment, the first and second image processing parameter sets each comprise a calculation accuracy, a calculation speed and an operation time, and each parameter of the first set exceeds the corresponding parameter of the second set; that is, the calculation accuracy, calculation speed and operation time of the target detection model are superior to those of the target tracking device. Through the cooperation between the first and second image processing parameter sets, the method improves the sensitive-information erasing efficiency of the video while remaining suitable for a vehicle-mounted computer system with limited hardware resources, achieving a balance between accuracy and computer-resource occupation.
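One way to encode the relationship between the two parameter sets (the concrete numbers below are placeholders; the application only requires each first-set value to exceed its second-set counterpart):

```python
# Illustrative encoding of the two parameter sets; all field values are
# assumptions chosen only to satisfy the "first > second" relationship.
from dataclasses import dataclass

@dataclass(frozen=True)
class ImageProcessingParameterSet:
    processing_precision: float  # e.g. target detection accuracy
    operation_speed: float       # e.g. operations-per-second budget
    operation_time: float        # e.g. compute time allowed per call, ms

FIRST_SET = ImageProcessingParameterSet(0.95, 2.0e9, 50.0)   # detection model
SECOND_SET = ImageProcessingParameterSet(0.80, 5.0e8, 10.0)  # tracking device

assert FIRST_SET.processing_precision > SECOND_SET.processing_precision
assert FIRST_SET.operation_speed > SECOND_SET.operation_speed
assert FIRST_SET.operation_time > SECOND_SET.operation_time
```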
Fig. 2 is a schematic structural diagram of an apparatus of a video processing method according to an embodiment of the present invention. Those skilled in the art will appreciate that Fig. 2 is merely an example of one apparatus and is not meant to be limiting.
Referring to fig. 2, the apparatus includes:
the video processing module 10 is configured to parse the original video to extract each frame of the original video.
The frame number selecting module 20 is electrically connected to the video processing module 10, and is configured to select a current frame from each frame of the original video.
The frame number judging module 30 is electrically connected to the frame number selecting module 20, and is configured to determine whether the current frame is an initial frame of the original video or an interval frame distributed in the original video at a fixed time interval.
The execution module 40 is electrically connected to the frame number judging module 30, and is configured to: when the current frame is an initial frame or an interval frame, find the sensitive area contained in the current frame using the target detection model configured with the first image processing parameter set and erase it using the target tracking device configured with the second image processing parameter set; or, when the current frame is not an initial frame or an interval frame, erase the sensitive area contained in the current frame using the target tracking device configured with the second image processing parameter set.
Through the cooperation between the apparatus and the modules arranged in it, the video processing method described above is realized, with exactly the same advantages as the video processing method.
Fig. 3 is a schematic structural diagram of a terminal of a video processing method according to an embodiment of the present invention. The terminal can be a computing device such as a computer, a notebook computer, a palm computer, a cloud server and the like. The terminal may include, but is not limited to, a processor, memory. It will be appreciated by those skilled in the art that fig. 3 is merely an example of a terminal and is not intended to be limiting, and that more or fewer components than shown may be included, or certain components may be combined, or different components may be included, for example, a terminal may also include input and output devices, network access devices, buses, etc.
Referring to fig. 3, the present application further provides a terminal, where the terminal includes a memory and a processor, the memory stores a computer program, and when the processor executes the computer program stored in the memory, the video processing method as described above is implemented, which has the same advantages as the video processing method.
The foregoing is merely specific embodiments of the present application, but the protection scope of the present application is not limited thereto; any changes or substitutions readily conceivable by those skilled in the art within the technical scope of the present disclosure shall be covered by the protection scope of the present application. Accordingly, the protection scope shall be subject to the appended claims.

Claims (10)

1. A video processing method, comprising the steps of:
parsing an original video to extract each frame of the original video;
selecting a current frame from each frame of the original video, and judging whether the current frame is an initial frame of the original video or an interval frame distributed in the original video at fixed time intervals;
if the current frame is an initial frame or an interval frame, a sensitive area contained in the current frame is found out by using a target detection model configured with a first image processing parameter set, and the sensitive area contained in the current frame is erased by using a target tracking device configured with a second image processing parameter set, wherein the image processing quality standard specified by the first image processing parameter set is higher than the image processing quality standard specified by the second image processing parameter set;
and if the current frame is not the initial frame or the interval frame, erasing a sensitive area contained in the current frame by using a target tracking device configured with a second image processing parameter set.
2. The method of claim 1, wherein finding the sensitive area contained in the current frame using the target detection model configured with the first image processing parameter set comprises:
training the target detection model according to the image corresponding to the current frame, so as to configure the target detection model to identify a tracking object of a preset type and the position of the tracking object in the image corresponding to the current frame;
and selecting the tracking object and the position thereof in the image corresponding to the current frame by using a first target frame for marking the tracking object.
3. The method of claim 2, wherein finding the sensitive area contained in the current frame using the target detection model configured with the first image processing parameter set further comprises:
controlling the target detection model to determine a sensitive area contained in the tracking object in the image corresponding to the current frame according to the first target frame;
and selecting the sensitive area by using a second target frame for marking the sensitive area, and erasing the first target frame in the image corresponding to the current frame.
4. The method of claim 3, wherein erasing the sensitive area contained in the current frame using the target tracking device configured with the second image processing parameter set comprises:
controlling the target tracking equipment to acquire the number and the positions of second target frames in the image corresponding to the current frame, and acquiring the motion rule and the appearance characteristics of the tracking object according to the number and the positions of the second target frames;
and learning how to identify a tracking object of a preset type and the position of the tracking object in the image corresponding to the current frame by using the target tracking device according to the motion rule and the appearance characteristic, and controlling the target tracking device to erase a sensitive area selected by a second target frame in the image corresponding to the current frame after learning is completed.
5. The method of claim 4, wherein erasing the sensitive area contained in the current frame using the target tracking device configured with the second image processing parameter set further comprises:
and inputting a subsequent frame of the current frame into the target tracking equipment, determining the position of a tracking object in an image corresponding to the subsequent frame according to the current frame by using the target tracking equipment, and selecting a sensitive area contained in the tracking object by using the second target frame according to the position of the tracking object.
6. The method of claim 5, wherein said inputting a subsequent frame to said current frame into said target tracking device comprises:
if the subsequent frame is an interval frame, judging whether at least one of a newly added second target frame and a lost second target frame exists in the image corresponding to the interval frame;
if yes, the target detection model is controlled to execute at least one operation of correcting the newly added second target frame and deleting the lost second target frame.
7. The method of claim 5, wherein said inputting a subsequent frame to said current frame into said target tracking device further comprises:
and if the subsequent frames are interval frames and the tracking objects of the preset types are not identified in the images corresponding to the interval frames, the target tracking equipment is used for identifying the tracking objects of the subsequent frames of the interval frames.
8. The method of claim 1, wherein the first image processing parameter set includes a first processing precision, a first operation speed, and a first operation time, and the second image processing parameter set includes a second processing precision, a second operation speed, and a second operation time, the first processing precision, the first operation speed, and the first operation time being greater than the second processing precision, the second operation speed, and the second operation time, respectively.
9. A video processing apparatus, the apparatus comprising:
the video processing module is used for analyzing the original video to extract each frame of the original video;
the frame number selecting module is electrically connected with the video processing module and is used for selecting a current frame from each frame of the original video;
the frame number judging module is electrically connected with the frame number selecting module and is used for judging whether the current frame is an initial frame of the original video or an interval frame distributed in the original video at a fixed time interval;
the execution module is electrically connected with the frame number judging module and is configured to: when the current frame is an initial frame or an interval frame, find the sensitive area contained in the current frame using the target detection model configured with the first image processing parameter set and erase the sensitive area contained in the current frame using the target tracking device configured with the second image processing parameter set; or, when the current frame is not an initial frame or an interval frame, erase the sensitive area contained in the current frame using the target tracking device configured with the second image processing parameter set.
10. A terminal comprising a memory and a processor, the memory having stored thereon a computer program which, when executed by the processor, implements the video processing method of any of claims 1 to 8.
CN202310100044.0A, filed 2023-02-01 (priority date 2023-02-01), published as CN116071683A (pending): Video processing method and device and terminal

Priority Applications (1)

Application Number: CN202310100044.0A · Priority Date: 2023-02-01 · Filing Date: 2023-02-01 · Title: Video processing method and device and terminal


Publications (1)

Publication Number: CN116071683A · Publication Date: 2023-05-05

Family

ID: 86169612


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination