CN110348369B - Video scene classification method and device, mobile terminal and storage medium - Google Patents


Info

Publication number
CN110348369B
Authority
CN
China
Prior art keywords
video
image
frame
objects
group
Prior art date
Legal status
Active
Application number
CN201910612133.7A
Other languages
Chinese (zh)
Other versions
CN110348369A (en)
Inventor
郭冠军
Current Assignee
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN201910612133.7A
Publication of CN110348369A
Application granted
Publication of CN110348369B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The embodiments of the disclosure disclose a video scene classification method and device, a mobile terminal and a storage medium. The method comprises the following steps: obtaining a first group of video objects and a second group of video objects, wherein the second group of video objects comprises the video objects in the first group; determining the movement speed of the first group of video objects on the video image plane according to their image areas in the first frame video image and the second frame video image; determining an expected image area of the first group of video objects in each un-segmented video image frame of the current video according to the movement speed, and performing image segmentation on each un-segmented frame according to the expected image area to obtain the corresponding video objects; and determining the classification result of the target video according to the video objects corresponding to each video image frame of the current video. The embodiments of the disclosure can improve the segmentation accuracy of continuously moving video objects and accurately classify video scenes.

Description

Video scene classification method and device, mobile terminal and storage medium
Technical Field
The embodiment of the disclosure relates to the technical field of video processing, and in particular relates to a video scene classification method and device, a mobile terminal and a storage medium.
Background
With the popularization of mobile terminals, users can shoot videos in various scenes through their mobile terminals. Generally, scene classification is performed on the videos shot by a user to obtain the scene category corresponding to each video; the user's videos can then be stored in an album by scene category, which makes them convenient to share.
In the prior art, the video objects in each video image frame are generally segmented first, and the scene category of the video is then determined from the segmented video objects.
The defect of the prior art is that the segmentation accuracy for continuously moving video objects in a video is difficult to guarantee, which can lead to misclassification of the video scene. For example, as a continuously moving video object recedes from the lens, its image area in the video image becomes smaller and smaller; eventually the object fails to be segmented in subsequent video images, and the scene of the video is misclassified.
Disclosure of Invention
The present disclosure provides a video scene classification method, apparatus, mobile terminal and storage medium to achieve accurate scene classification of videos.
In a first aspect, an embodiment of the present disclosure provides a video scene classification method, including:
respectively inputting a first frame video image and a second frame video image of a current video into a preset image segmentation model to obtain a first group of video objects corresponding to the first frame video image and a second group of video objects corresponding to the second frame video image; the second group of video objects comprises video objects in the first group of video objects;
determining the movement speed of the first group of video objects on the video image plane according to the image areas of the first group of video objects in the first frame of video image and the second frame of video image;
determining an expected image area of the first group of video objects in each frame of the un-segmented video image of the current video according to the motion speed, and performing image segmentation on each frame of the un-segmented video image according to the expected image area to obtain a corresponding video object;
and determining a classification result of the target video according to the video object corresponding to each frame of video image of the current video.
In a second aspect, embodiments of the present disclosure also provide a video scene classification apparatus,
the first image segmentation module is used for respectively inputting a first frame video image and a second frame video image of a current video into a preset image segmentation model to obtain a first group of video objects corresponding to the first frame video image and a second group of video objects corresponding to the second frame video image; the second group of video objects comprises video objects in the first group of video objects;
the motion speed determining module is used for determining the motion speed of the first group of video objects on the video image plane according to the image areas of the first group of video objects in the first frame of video image and the second frame of video image;
the second image segmentation module is used for determining an expected image area of the first group of video objects in each frame of the un-segmented video image of the current video according to the motion speed, and performing image segmentation on each frame of the un-segmented video image according to the expected image area to obtain a corresponding video object;
and the video classification module is used for determining a classification result of the target video according to the video object corresponding to each frame of video image of the current video.
In a third aspect, an embodiment of the present disclosure further provides a mobile terminal, including:
one or more processing devices;
storage means for storing one or more programs;
when the one or more programs are executed by the one or more processing devices, the one or more processing devices are caused to implement the video scene classification method according to the embodiment of the present disclosure.
In a fourth aspect, the disclosed embodiments also provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the video scene classification method according to the disclosed embodiments.
The disclosed embodiments determine the motion speed of the first group of video objects on the video image plane from their image areas in the first frame video image and the second frame video image, determine the expected image area of the first group of video objects in each un-segmented video image frame of the current video from that motion speed, segment each un-segmented frame according to its expected image area to obtain the corresponding video objects, and determine the classification result of the target video from the video objects corresponding to each video image frame of the current video. This solves the problem that the segmentation accuracy of continuously moving video objects in a video is difficult to guarantee, which can cause the video scene to be misclassified: by determining the motion speed of a video object on the video image plane and segmenting the object in each un-segmented video image frame according to that speed, the segmentation accuracy of continuously moving video objects is improved and the video scene can be classified accurately.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 is a flowchart of a video scene classification method according to an embodiment of the present disclosure;
fig. 2 is a flowchart of a video scene classification method according to an embodiment of the present disclosure;
fig. 3 is a flowchart of a video scene classification method according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a video scene classification apparatus according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a mobile terminal according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an" and "the" in this disclosure are illustrative rather than limiting; those skilled in the art will understand them to mean "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Fig. 1 is a flowchart of a video scene classification method according to an embodiment of the present disclosure. The embodiment is applicable to the case of performing scene classification on videos, and the method may be performed by a video scene classification device, which may be implemented in a software and/or hardware manner, and may be configured in a mobile terminal. As shown in fig. 1, the method may include the steps of:
step 101, respectively inputting a first frame video image and a second frame video image of a current video into a preset image segmentation model to obtain a first group of video objects corresponding to the first frame video image and a second group of video objects corresponding to the second frame video image; and the second group of video objects comprises the video objects in the first group of video objects.
The current video may be a video shot by a user through a camera of the mobile terminal. The current video is composed of a plurality of frames of video images.
Image segmentation is the technique and process of dividing an image into a number of specific regions with distinctive properties. Existing image segmentation methods mainly fall into the following categories: threshold-based, region-based, edge-based, and specific-theory-based segmentation methods, among others. From a mathematical point of view, image segmentation is the process of dividing a digital image into mutually disjoint regions. It is also a labeling process, and is typically achieved by determining the category to which each pixel in the image belongs.
A first frame video image and a second frame video image of the current video are respectively input to the preset image segmentation model. The model analyzes the first frame video image to obtain, for each pixel, the probability that the pixel belongs to each of the preset video object categories; the category with the maximum probability is selected as the category to which the pixel belongs, yielding the first group of video objects corresponding to the first frame video image. Each video object comprises all pixels of the first frame video image that belong to that video object category. Optionally, the preset video object categories may include persons and objects, for example a person, a table or a car. The second frame video image is analyzed in the same way: the model outputs per-pixel category probabilities, the category with the maximum probability is selected for each pixel, and the second group of video objects corresponding to the second frame video image is obtained, each video object comprising all pixels of the second frame video image that belong to that category.
The second set of video objects includes video objects in the first set of video objects. The time interval between the first frame video image and the second frame video image of the current video is short, and the first frame video image and the second frame video image contain the same video object. For example, a first set of video objects corresponding to a first frame of video image includes: all pixel points belonging to people in the first frame of video image and all pixel points belonging to a table in the first frame of video image. The second set of video objects corresponding to the second frame of video image includes: all pixel points belonging to people in the second frame of video image and all pixel points belonging to a table in the second frame of video image.
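As a concrete illustration of this per-pixel decision, the following is a minimal sketch in Python, assuming the preset image segmentation model returns an H x W x C map of per-pixel category probabilities (the patent does not prescribe any particular model, framework, or output format):

```python
import numpy as np

def group_video_objects(prob_map: np.ndarray, class_names: list) -> dict:
    """Assign each pixel the category with maximum probability and group pixels.

    prob_map: (H, W, C) array of per-pixel category probabilities output by the
    segmentation model; class_names: the C preset category names (person, table, ...).
    Returns one boolean pixel mask per category actually present in the frame.
    """
    labels = prob_map.argmax(axis=-1)          # (H, W): category index per pixel
    objects = {}
    for idx, name in enumerate(class_names):
        mask = labels == idx                   # all pixels belonging to this category
        if mask.any():
            objects[name] = mask
    return objects

# Usage (the `model` call is hypothetical):
# first_group  = group_video_objects(model(frame1), CLASSES)  # first frame video image
# second_group = group_video_objects(model(frame2), CLASSES)  # second frame video image
```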
Step 102, determining the movement speed of the first group of video objects on the video image plane according to the image areas of the first group of video objects in the first frame of video image and the second frame of video image.
The image area of each video object in the first group of video objects in the first frame of video image is an area formed by all pixel points belonging to the corresponding video object category in the first frame of video image. The image area of each video object in the first group of video objects in the second frame of video image is an area formed by all pixel points belonging to the corresponding video object category in the second frame of video image.
Specifically, the image area of each video object in the first group is located in the first frame video image and in the second frame video image, the displacement of each video object's image area is determined, and the movement speed of each video object on the video image plane is calculated from that displacement and the time interval between the first frame video image and the second frame video image. The movement speed includes both a direction of motion and a magnitude.
For example, the first set of video objects includes: all the pixel points belonging to people and all the pixel points belonging to a table. The image areas of the first group of video objects in the first frame video image and the second frame video image comprise: the region formed by all pixel points belonging to a person in the first frame of video image, the region formed by all pixel points belonging to a table in the first frame of video image, the region formed by all pixel points belonging to a person in the second frame of video image, and the region formed by all pixel points belonging to a table in the second frame of video image. And then, calculating the movement speed of all pixel points belonging to the person on a video image plane according to the displacement of the image areas corresponding to all pixel points belonging to the person and the time interval between the first frame video image and the second frame video image. And then, calculating the movement speed of all the pixel points belonging to the table on the video image plane according to the displacement of the image areas corresponding to all the pixel points belonging to the table and the time interval between the first frame video image and the second frame video image.
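A sketch of this computation, under the simplifying assumption that the displacement of an image area can be measured by the displacement of its mask centroid; the patent itself only requires locating the image areas and determining their displacement:

```python
import numpy as np

def object_velocity(mask1: np.ndarray, mask2: np.ndarray, dt: float) -> np.ndarray:
    """Movement speed of one video object on the image plane, in pixels per second.

    mask1, mask2: boolean masks of the object's image area in the first and second
    frame video images; dt: time interval between the two frames in seconds.
    The returned 2-vector carries both the direction of motion and the magnitude.
    """
    c1 = np.argwhere(mask1).mean(axis=0)   # centroid (row, col) of the area in frame 1
    c2 = np.argwhere(mask2).mean(axis=0)   # centroid (row, col) of the area in frame 2
    return (c2 - c1) / dt                  # displacement / time interval = velocity
```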
And 103, determining expected image areas of the first group of video objects in each frame of the un-segmented video image of the current video according to the motion speed, and performing image segmentation on each frame of the un-segmented video image according to the expected image areas to obtain corresponding video objects.
Wherein the expected image area is an image area containing the video object in the non-segmented video image determined according to the motion trend of the video object. According to the moving speed of each video object in the first group of video objects on the video image plane and the time interval between the frames of video images, the image area of each video object at the corresponding time node is determined, namely the expected image area of each video object in each frame of the current video in the non-segmented video image.
And for each frame of the non-segmented video image, cutting out an expected image which is matched with each video object in the first group of video objects in the non-segmented video image according to the expected image area of the first group of video objects in each frame of the non-segmented video image of the current video. Each prospective image contains a matching one of the video objects. And then respectively inputting the expected images matched with the video objects into a preset image segmentation model to obtain the video objects corresponding to the un-segmented video images.
And step 104, determining a classification result of the target video according to the video object corresponding to each frame of video image of the current video.
In a specific example, a correspondence rule between video objects and scene categories is preset. The rule may determine the scene category of a video image according to the categories of the video objects contained in the video image, the image areas of those video objects, and the positional relationships between them.
For example, the video object corresponding to each frame of video image of the current video includes all pixel points belonging to a person and all pixel points belonging to an automobile, thereby determining that the scene category of the video image is driving. The video objects corresponding to the video images of the frames of the current video comprise all pixel points belonging to a table, and the image area of all the pixel points belonging to the table is larger than a preset area threshold value, so that the scene type of the video images is determined to be the table display. The video objects corresponding to each frame of video image of the current video include all pixel points belonging to people, all pixel points belonging to a table, and all pixel points belonging to food. The relative distance between all pixel points belonging to people, all pixel points belonging to a table and all pixel points belonging to food is smaller than a preset distance threshold, and therefore the scene type of the video image is determined to be dining.
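Such correspondence rules could be encoded as a simple rule function. The sketch below is purely illustrative: the category names follow the examples above, while `AREA_THRESHOLD` and `DIST_THRESHOLD` are hypothetical values not given in the patent.

```python
import numpy as np

AREA_THRESHOLD = 50_000      # illustrative pixel-count threshold for "table show"
DIST_THRESHOLD = 200.0       # illustrative pixel-distance threshold for "dining"

def centroid(mask: np.ndarray) -> np.ndarray:
    return np.argwhere(mask).mean(axis=0)

def scene_of(objects: dict) -> str:
    """Apply the preset correspondence rules to one frame's video objects."""
    if {"person", "table", "food"} <= objects.keys():
        names = ("person", "table", "food")
        # all three objects close together: relative distances under the threshold
        if max(np.linalg.norm(centroid(objects[a]) - centroid(objects[b]))
               for a in names for b in names) < DIST_THRESHOLD:
            return "dining"
    if {"person", "car"} <= objects.keys():
        return "driving"
    if "table" in objects and objects["table"].sum() > AREA_THRESHOLD:
        return "table show"
    return "other"
```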
And determining the scene type corresponding to each frame of video image according to the preset corresponding rule of the video object and the scene type and the video object corresponding to each frame of video image. And then counting the video image frame number corresponding to each scene type, and taking the scene type with the maximum video image frame number as a classification result of the target video.
For example, the target video contains 100 frames of video images. And determining the scene type corresponding to each frame of video image according to the preset corresponding rule of the video object and the scene type and the video object corresponding to each frame of video image. Then, counting the number of video image frames corresponding to each scene type: the video image frame number corresponding to "table show" is 16, "the video image frame number corresponding to" meal "is 57," and the video image frame number corresponding to "food show" is 27. And taking the scene category 'dining' with the largest video image frame number as a classification result of the target video.
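Combining the hypothetical `scene_of` rule function sketched earlier with the frame count statistics, the majority vote might look like this:

```python
from collections import Counter

def classify_video(per_frame_objects, scene_of) -> str:
    """Pick the scene category covering the largest number of video image frames.

    per_frame_objects: the video objects segmented from each frame of the video;
    scene_of: rule function mapping one frame's objects to a scene category.
    """
    votes = Counter(scene_of(objs) for objs in per_frame_objects)
    # e.g. Counter({'dining': 57, 'food show': 27, 'table show': 16})
    scene, _ = votes.most_common(1)[0]
    return scene
```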
In the technical solution of this embodiment, the motion speed of the first group of video objects on the video image plane is determined from their image areas in the first frame video image and the second frame video image; the expected image area of the first group of video objects in each un-segmented video image frame of the current video is determined from that motion speed; each un-segmented frame is segmented according to its expected image area to obtain the corresponding video objects; and the classification result of the target video is determined from the video objects corresponding to each video image frame of the current video. This solves the problem that the segmentation accuracy of continuously moving video objects in a video is difficult to guarantee, which can cause the video scene to be misclassified: by determining the motion speed of a video object on the video image plane and segmenting the object in each un-segmented video image frame according to that speed, the segmentation accuracy of continuously moving video objects is improved and the video scene can be classified accurately.
Fig. 2 is a flowchart of a video scene classification method according to an embodiment of the present disclosure. This embodiment may be combined with each alternative in one or more of the above embodiments, and in this embodiment, determining the moving speed of the first group of video objects on the video image plane according to the image areas of the first group of video objects in the first frame of video image and the second frame of video image may include: and determining the movement speed of the first group of video objects on the video image plane according to the image areas of the first group of video objects in the first frame of video image and the second frame of video image by adopting an optical flow method.
And determining an expected image area of the first group of video objects in each frame of the un-segmented video image of the current video according to the motion speed, and performing image segmentation on each frame of the un-segmented video image according to the expected image area to obtain a corresponding video object, which may include: sequentially acquiring a frame of non-segmented video image as a current processing video image; determining an expected image area of the first group of video objects in the current processing video image according to the motion speed, the frame rate of the current video and the image area of the first group of video objects in the previous frame of video image; cutting out an expected image matched with each video object in the current processed video image according to an expected image area by adopting an image frame matched with each video object in the first group of video objects; inputting the expected images matched with the video objects into a preset image segmentation model to obtain video objects corresponding to the currently processed video images; and returning to execute the operation of sequentially acquiring a frame of non-segmented video image as the currently processed video image until the processing of all the non-segmented video images of the current video is finished.
And determining a classification result of the target video according to the video object corresponding to each frame of video image of the current video, which may include: determining the scene type corresponding to each frame of video image according to the preset corresponding rule of the video object and the scene type and the video object corresponding to each frame of video image; counting the number of video image frames corresponding to each scene type; and taking the scene category with the largest video image frame number as a classification result of the target video.
As shown in fig. 2, the method may include the steps of:
step 201, respectively inputting a first frame video image and a second frame video image of a current video into a preset image segmentation model to obtain a first group of video objects corresponding to the first frame video image and a second group of video objects corresponding to the second frame video image; and the second group of video objects comprises the video objects in the first group of video objects.
Step 202, determining the moving speed of the first group of video objects on the video image plane according to the image areas of the first group of video objects in the first frame of video image and the second frame of video image by adopting an optical flow method.
The core of the optical flow method is to solve for the optical flow, i.e. the velocity, of the moving object. Optical flow computation techniques fall into four types according to their theoretical basis and mathematical method: gradient-based, matching-based, energy-based and phase-based methods. Matching-based optical flow computation is further divided into feature-based and region-based approaches.
Feature-based methods continuously locate and track the main features of the target; region-based methods first locate similar regions and then compute the optical flow from the displacement of those regions. The disclosed embodiment adopts the region-based method: the image regions of each video object are located in the first frame video image and the second frame video image, the displacement of each video object's image region is determined, and the optical flow, i.e. the motion speed, of each video object on the video image plane is then calculated from that displacement and the time interval between the two frames.
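The patent names no library; purely as an illustration, a dense optical flow field could be obtained with OpenCV's Farneback method and averaged over an object's pixel mask to estimate that object's velocity:

```python
import cv2
import numpy as np

def object_flow_velocity(frame1, frame2, mask, dt):
    """Average the dense optical flow over one object's pixels (pixels per second)."""
    g1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY)
    # flow[y, x] = (dx, dy): displacement of each pixel between the two frames
    flow = cv2.calcOpticalFlowFarneback(g1, g2, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    return flow[mask].mean(axis=0) / dt    # mean displacement over the object / dt
```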
And step 203, acquiring a frame of non-segmented video image in sequence as the current processing video image.
Wherein the video images in the target video are arranged in time sequence.
Step 204, determining an expected image area of the first group of video objects in the currently processed video image according to the motion speed, the frame rate of the current video and the image area of the first group of video objects in the previous frame of video image.
Optionally, determining an expected image area of the first group of video objects in the currently processed video image according to the motion speed, the frame rate of the current video, and the image area of the first group of video objects in the previous frame of video image may include: determining the interval time between the previous frame of video image and the current processing video image according to the frame rate of the current video; determining the displacement of the first group of video objects at the interval time according to the interval time and the motion speed; and determining the expected image area of the first group of video objects in the current processing video image according to the image area of the first group of video objects in the previous frame of video image and the displacement.
The frame rate of a video measures the number of frames displayed per unit time, in display frames per second (fps). The time interval between successive video image frames of the current video is determined from its frame rate. For example, if the frame rate of the target video is 25 fps, i.e. 25 video images are displayed per second, the time interval between video image frames of the current video is 1/25 = 0.04 seconds.
And calculating the displacement of each video object in the first group of video objects in the time interval according to the movement speed of each video object in the video image plane and the time interval between the video images of the frames. The displacement of each video object in the time interval is equal to the product of the speed of movement of each video object in the video image plane and the time interval.
Then, according to the image area of the first group of video objects in the previous frame of video image and the calculated displacement of each video object in the time interval, the image area reached by each video object after the time interval, that is, the expected image area of the first group of video objects in the currently processed video image, is determined.
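Putting the quantities of this step together, a sketch under the additional assumption (not made explicit in the patent) that each image area is tracked as an axis-aligned bounding box:

```python
def expected_region(prev_box, velocity, fps):
    """Predict the object's expected image area in the currently processed frame.

    prev_box: (top, left, height, width) of the object's image area in the
    previous frame video image; velocity: (d_row, d_col) movement speed on the
    video image plane in pixels per second; fps: frame rate of the current video.
    """
    dt = 1.0 / fps                          # e.g. 1 / 25 fps = 0.04 s between frames
    d_row = velocity[0] * dt                # displacement = speed * interval time
    d_col = velocity[1] * dt
    top, left, h, w = prev_box
    return (top + d_row, left + d_col, h, w)
```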
Step 205, cutting out the expected image matched with each video object in the current processed video image according to the expected image area by using the image frame matched with each video object in the first group of video objects.
Wherein the size of the image frame is determined according to the size of the image area of the matched video object. The image frame is larger than the image area of the video object. For each video object, the image frame is matched with the expected image area, so that the expected image area is contained in the image frame, and then the expected image matched with each video object is cut out from the current processed video image along the image frame.
Optionally, the image frame is matched to the desired image area such that the desired image area is located at the center of the image frame, and then the desired image matching each video object is cropped in the currently processed video image along the image frame.
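A sketch of this cropping step under the same bounding-box assumption; the margin by which the image frame exceeds the expected image area is an illustrative parameter:

```python
import numpy as np

def crop_expected_image(image: np.ndarray, box, margin: int = 32) -> np.ndarray:
    """Cut out the expected image along an image frame centered on the expected area."""
    top, left, h, w = (int(round(v)) for v in box)
    # image frame = expected image area enlarged by `margin` pixels on each side,
    # clamped to the bounds of the currently processed video image
    r0 = max(top - margin, 0)
    c0 = max(left - margin, 0)
    r1 = min(top + h + margin, image.shape[0])
    c1 = min(left + w + margin, image.shape[1])
    return image[r0:r1, c0:c1]
```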
And step 206, inputting the expected images matched with the video objects into a preset image segmentation model to obtain the video objects corresponding to the currently processed video images.
The expected images matched with each video object are respectively input into the preset image segmentation model, and the image segmentation model segments the corresponding video object from each expected image.
And step 207, returning to execute the operation of sequentially acquiring a frame of the non-segmented video image as the currently processed video image until the processing of all the non-segmented video images of the current video is finished.
And step 208, determining the scene type corresponding to each frame of video image according to the preset corresponding rule of the video object and the scene type and the video object corresponding to each frame of video image.
And step 209, counting the number of video image frames corresponding to each scene type.
And step 210, taking the scene type with the largest frame number of the video images as a classification result of the target video.
In the technical solution of this embodiment, an optical flow method is used to determine the motion speed of the first group of video objects on the video image plane from their image areas in the first frame video image and the second frame video image. One un-segmented video image frame at a time is then acquired as the currently processed video image, the expected image area of the first group of video objects in that image is determined from the motion speed, and each un-segmented frame is segmented according to its expected image area to obtain the corresponding video objects, until all un-segmented video images of the current video have been processed. The scene category of each video image frame is then determined from the preset correspondence rule between video objects and scene categories and the video objects of that frame, and the scene category covering the largest number of video image frames is taken as the classification result of the target video. In this way the optical flow method yields the motion speed of each video object on the video image plane; from that speed, the expected image area of each object in the currently processed video image can be determined and the matching expected image cropped out, so that each video object in the currently processed video image can be segmented accurately from an expected image that contains it.
Fig. 3 is a flowchart of a video scene classification method according to an embodiment of the present disclosure. The present embodiment may be combined with each alternative in one or more of the above embodiments, and in the present embodiment, before the first frame video image and the second frame video image of the current video are respectively input to the preset image segmentation model, the method may further include: acquiring a training sample set corresponding to each scene type, wherein the training sample set comprises a set number of images corresponding to the scene type; and training the neural network model by using the training sample set to obtain a preset image segmentation model.
As shown in fig. 3, the method may include the steps of:
step 301, a training sample set corresponding to each scene type is obtained, and the training sample set includes a set number of images corresponding to the scene type.
A set number of images corresponding to each scene category is acquired in advance, and the images are stored into the training sample set corresponding to that scene category. The set number can be chosen according to business requirements. For example, 2000 images corresponding to each scene category are collected, and the collected 2000 images are saved into the training sample set corresponding to that category.
Optionally, the set number of images corresponding to a scene category includes images obtained by processing original images according to a preset image processing rule.
An original image is a pre-acquired image. The preset image processing rule may be to move a video object in the original image to a preset position in a preset manner, covering the original pixels at that position, and to fill the vacated original position of the video object using an image inpainting technique, obtaining the processed image. For example, the preset manner may be translating the object upward, downward, leftward or rightward by a set distance, or moving it a set distance in an arbitrary direction.
And then storing the original image and an image obtained after the original image is processed according to a preset image processing rule into a training sample set corresponding to the scene.
Therefore, the number of samples of each training sample set can be increased, and sample enhancement of each training sample set is realized.
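One possible realization of such an image processing rule, sketched with OpenCV's inpainting; the shift amounts and the inpaint radius are illustrative assumptions:

```python
import cv2
import numpy as np

def shift_object_sample(image: np.ndarray, mask: np.ndarray, dy: int, dx: int) -> np.ndarray:
    """Move a video object by (dy, dx) pixels and inpaint the vacated position.

    image: 8-bit original training image; mask: boolean mask of the object's pixels.
    Produces one additional, augmented training sample.
    """
    out = image.copy()
    ys, xs = np.nonzero(mask)
    ys2, xs2 = ys + dy, xs + dx
    keep = (ys2 >= 0) & (ys2 < image.shape[0]) & (xs2 >= 0) & (xs2 < image.shape[1])
    out[ys2[keep], xs2[keep]] = image[ys[keep], xs[keep]]  # object at the new position
    hole = mask.astype(np.uint8) * 255       # original position to be filled in
    hole[ys2[keep], xs2[keep]] = 0           # don't repaint pixels now covered by the object
    return cv2.inpaint(out, hole, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
```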
Step 302, training the neural network model by using the training sample set to obtain a preset image segmentation model.
The neural network model is trained with the training sample set corresponding to each scene category to obtain the preset image segmentation model. The preset image segmentation model is used to receive a video image and output the segmentation result of the video image, i.e. to output each video object in the video image. Each video object comprises all pixels of the video image that belong to that video object category.
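The patent fixes neither the network architecture nor the training procedure. As an illustration only, a per-pixel cross-entropy training loop in PyTorch (all names and hyperparameters here are assumptions) might look like this:

```python
import torch
import torch.nn as nn

def train_segmentation(model, loader, num_epochs=10, lr=1e-3):
    """Minimal training loop: per-pixel classification over video object categories."""
    criterion = nn.CrossEntropyLoss()          # per-pixel category labels
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(num_epochs):
        for images, pixel_labels in loader:    # pixel_labels: (N, H, W) class indices
            logits = model(images)             # (N, C, H, W) per-pixel class scores
            loss = criterion(logits, pixel_labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```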
Step 303, inputting a first frame video image and a second frame video image of the current video to a preset image segmentation model respectively to obtain a first group of video objects corresponding to the first frame video image and a second group of video objects corresponding to the second frame video image; and the second group of video objects comprises the video objects in the first group of video objects.
Step 304, determining the moving speed of the first group of video objects on the video image plane according to the image areas of the first group of video objects in the first frame of video image and the second frame of video image.
And 305, determining an expected image area of the first group of video objects in each frame of the un-segmented video image of the current video according to the motion speed, and performing image segmentation on each frame of the un-segmented video image according to the expected image area to obtain a corresponding video object.
And step 306, determining a classification result of the target video according to the video object corresponding to each frame of video image of the current video.
According to the technical scheme of this embodiment, a training sample set corresponding to each scene category is acquired, each set containing a set number of images corresponding to its scene category, and the neural network model is trained with these training sample sets to obtain the preset image segmentation model. An image segmentation model that receives a video image and outputs its segmentation result can thus be obtained by training, and the video can be scene-classified according to the segmentation results output by the preset image segmentation model.
Fig. 4 is a schematic structural diagram of a video scene classification apparatus according to an embodiment of the present disclosure. The embodiment can be applied to the situation of carrying out scene classification on the video. The apparatus can be implemented in software and/or hardware, and the apparatus can be configured in a mobile terminal. As shown in fig. 4, the apparatus may include: a first image segmentation module 401, a motion velocity determination module 402, a second image segmentation module 403, and a video classification module 404.
The first image segmentation module 401 is configured to input a first frame video image and a second frame video image of a current video to a preset image segmentation model respectively, so as to obtain a first group of video objects corresponding to the first frame video image and a second group of video objects corresponding to the second frame video image; the second group of video objects comprises video objects in the first group of video objects; a motion speed determining module 402, configured to determine a motion speed of the first group of video objects on the video image plane according to image areas of the first group of video objects in the first frame of video image and the second frame of video image; a second image segmentation module 403, configured to determine, according to the motion speed, an expected image area of the first group of video objects in each frame of an un-segmented video image of the current video, and perform image segmentation on each frame of the un-segmented video image according to the expected image area to obtain a corresponding video object; the video classification module 404 is configured to determine a classification result of the target video according to a video object corresponding to each frame of video image of the current video.
In the technical solution of this embodiment, the motion speed of the first group of video objects on the video image plane is determined from their image areas in the first frame video image and the second frame video image; the expected image area of the first group of video objects in each un-segmented video image frame of the current video is determined from that motion speed; each un-segmented frame is segmented according to its expected image area to obtain the corresponding video objects; and the classification result of the target video is determined from the video objects corresponding to each video image frame of the current video. This solves the problem that the segmentation accuracy of continuously moving video objects in a video is difficult to guarantee, which can cause the video scene to be misclassified: by determining the motion speed of a video object on the video image plane and segmenting the object in each un-segmented video image frame according to that speed, the segmentation accuracy of continuously moving video objects is improved and the video scene can be classified accurately.
Optionally, on the basis of the foregoing technical solution, the movement speed determining module 402 may include: and the speed determining unit is used for determining the movement speed of the first group of video objects on the video image plane according to the image areas of the first group of video objects in the first frame of video image and the second frame of video image by adopting an optical flow method.
Optionally, on the basis of the foregoing technical solution, the second image segmentation module 403 may include: an image acquisition unit, used for sequentially acquiring one un-segmented video image frame as the currently processed video image; an area determining unit, used for determining the expected image area of the first group of video objects in the currently processed video image according to the motion speed, the frame rate of the current video and the image area of the first group of video objects in the previous frame video image; an image cropping unit, used for cropping out, from the currently processed video image and according to the expected image areas, the expected images matched with each video object, using image frames matched with each video object in the first group; an object acquisition unit, used for inputting the expected images matched with each video object into the preset image segmentation model to obtain the video objects corresponding to the currently processed video image; and an image processing unit, used for returning to the operation of sequentially acquiring one un-segmented video image frame as the currently processed video image until all un-segmented video images of the current video have been processed.
Optionally, on the basis of the foregoing technical solution, the area determining unit may include: the time determining subunit is used for determining the interval time between the previous frame of video image and the currently processed video image according to the frame rate of the current video; a displacement determining subunit, configured to determine, according to the interval time and the motion speed, a displacement of the first group of video objects at the interval time; and the area determining subunit is used for determining an expected image area of the first group of video objects in the current processed video image according to the image area of the first group of video objects in the previous frame of video image and the displacement.
Optionally, on the basis of the foregoing technical solution, the video classification module 404 may include: the scene type determining unit is used for determining the scene type corresponding to each frame of video image according to the preset corresponding rule of the video object and the scene type and the video object corresponding to each frame of video image; the frame number counting unit is used for counting the frame number of the video image corresponding to each scene type; and the classification result determining unit is used for taking the scene category with the largest number of video image frames as the classification result of the target video.
Optionally, on the basis of the above technical solution, the apparatus may further include: a set acquisition module, used for acquiring the training sample set corresponding to each scene category, the training sample set including a set number of images corresponding to that scene category; and a model training module, used for training the neural network model with the training sample sets to obtain the preset image segmentation model.
Optionally, on the basis of the above technical solution, the set number of images corresponding to a scene category includes images obtained by processing original images according to the preset image processing rule.
The video scene classification device provided by the embodiment of the disclosure can execute the video scene classification method provided by the embodiment of the disclosure, and has corresponding functional modules and beneficial effects of the execution method.
Referring now to fig. 5, a block diagram of a mobile terminal 500 suitable for use in implementing embodiments of the present disclosure is shown. The mobile terminal in the embodiments of the present disclosure may include, but is not limited to, devices such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like. The mobile terminal shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 5, mobile terminal 500 may include a processing device (e.g., central processing unit, graphics processor, etc.) 501 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage device 506 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data necessary for the operation of the mobile terminal 500 are also stored. The processing device 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage devices 506 including, for example, magnetic tape, hard disk, etc.; and a communication device 509. The communication means 509 may allow the mobile terminal 500 to perform wireless or wired communication with other devices to exchange data. While fig. 5 illustrates a mobile terminal 500 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 506, or installed from the ROM 502. The computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure when executed by the processing device 501.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the mobile terminal; or may exist separately and not be incorporated into the mobile terminal.
The computer readable medium carries one or more programs which, when executed by the mobile terminal, cause the mobile terminal to: respectively inputting a first frame video image and a second frame video image of a current video into a preset image segmentation model to obtain a first group of video objects corresponding to the first frame video image and a second group of video objects corresponding to the second frame video image; the second group of video objects comprises video objects in the first group of video objects; determining the movement speed of the first group of video objects on the video image plane according to the image areas of the first group of video objects in the first frame of video image and the second frame of video image; determining an expected image area of the first group of video objects in each frame of the un-segmented video image of the current video according to the motion speed, and performing image segmentation on each frame of the un-segmented video image according to the expected image area to obtain a corresponding video object; and determining a classification result of the target video according to the video object corresponding to each frame of video image of the current video.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages or a combination thereof, including but not limited to object oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of methods, apparatus, mobile terminals, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules, units, and sub-units described in the embodiments of the present disclosure may be implemented by software or by hardware. For example, the video classification module may also be described as a "module for determining a classification result of a target video according to the video objects corresponding to each frame of video image of a current video", the image acquisition unit may also be described as a "unit for sequentially acquiring one frame of un-segmented video image as the currently processed video image", and the time determination subunit may also be described as a "subunit for determining the interval time between the previous frame video image and the currently processed video image according to the frame rate of the current video".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, example one provides a video scene classification method, including:
respectively inputting a first frame video image and a second frame video image of a current video into a preset image segmentation model to obtain a first group of video objects corresponding to the first frame video image and a second group of video objects corresponding to the second frame video image; wherein the second group of video objects comprises the video objects in the first group of video objects;
determining the movement speed of the first group of video objects on a video image plane according to the image areas of the first group of video objects in the first frame of video image and the second frame of video image;
determining an expected image area of the first group of video objects in each frame of un-segmented video image of the current video according to the motion speed, and performing image segmentation on each frame of un-segmented video image according to the expected image area to obtain a corresponding video object;
and determining the classification result of the target video according to the video object corresponding to each frame of video image of the current video.
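By way of non-limiting illustration only, the following is a minimal Python sketch of the overall flow of example one, assuming OpenCV for video decoding. The helper names (segment_image, estimate_speeds, predict_and_segment, classify_objects) are hypothetical placeholders for the preset image segmentation model and the steps elaborated in examples two to five, not functions defined by this disclosure.

```python
import cv2  # assumed available for video decoding

def classify_video_scene(video_path, segment_image, estimate_speeds,
                         predict_and_segment, classify_objects):
    cap = cv2.VideoCapture(video_path)
    ok1, frame1 = cap.read()
    ok2, frame2 = cap.read()
    if not (ok1 and ok2):
        raise ValueError("the current video needs at least two frames")

    # Segment the first two frames with the preset segmentation model.
    first_group = segment_image(frame1)
    second_group = segment_image(frame2)   # contains the first group

    # Estimate each object's motion speed on the video image plane.
    speeds = estimate_speeds(frame1, frame2, first_group, second_group)

    # Predict expected image areas in every un-segmented frame and
    # segment only those areas (examples three and four).
    per_frame_objects = [first_group, second_group]
    per_frame_objects += predict_and_segment(cap, first_group, speeds)
    cap.release()

    # Aggregate per-frame objects into one scene category (example five).
    return classify_objects(per_frame_objects)
```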
According to one or more embodiments of the present disclosure, example two provides a video scene classification method, and on the basis of the video scene classification method of example one, the determining, according to image areas of the first group of video objects in the first frame of video image and the second frame of video image, a motion speed of the first group of video objects on a video image plane includes:
and determining the movement speed of the first group of video objects on a video image plane according to the image areas of the first group of video objects in the first frame of video image and the second frame of video image by adopting an optical flow method.
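Example two only states that an optical flow method is used; the disclosure does not fix a particular variant. The sketch below is one plausible reading, assuming sparse Lucas-Kanade optical flow (OpenCV) applied to the center of each object's image area; the function name and the (x, y, w, h) box convention are illustrative assumptions.

```python
import cv2
import numpy as np

def estimate_speeds(frame1, frame2, boxes, frame_interval_s):
    """Estimate per-object image-plane speed in pixels per second.

    boxes: (x, y, w, h) image areas of the first group of video
    objects in the first frame video image (an assumed format).
    """
    gray1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
    gray2 = cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY)
    # Track one point per object: the center of its image area.
    pts = np.array([[[x + w / 2.0, y + h / 2.0]] for x, y, w, h in boxes],
                   dtype=np.float32)
    new_pts, status, _err = cv2.calcOpticalFlowPyrLK(gray1, gray2, pts, None)
    speeds = []
    for p0, p1, ok in zip(pts, new_pts, status):
        if not ok[0]:
            speeds.append((0.0, 0.0))   # track lost: treat as stationary
            continue
        dx, dy = p1[0] - p0[0]          # pixel displacement between frames
        speeds.append((dx / frame_interval_s, dy / frame_interval_s))
    return speeds
```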
According to one or more embodiments of the present disclosure, example three provides a video scene classification method, and on the basis of the video scene classification method of example one, the determining, according to the motion speed, an expected image area of the first group of video objects in each frame of an un-segmented video image of the current video, and performing image segmentation on each frame of the un-segmented video image according to the expected image area to obtain a corresponding video object includes:
sequentially acquiring one frame of un-segmented video image as the currently processed video image;
determining an expected image area of the first group of video objects in the currently processed video image according to the motion speed, the frame rate of the current video, and an image area of the first group of video objects in a previous frame video image;
cutting out, from the currently processed video image, an expected image matched with each video object according to the expected image area, using an image frame matched with each video object in the first group of video objects;
inputting the expected image matched with each video object into the preset image segmentation model to obtain the video objects corresponding to the currently processed video image;
and returning to execute the operation of sequentially acquiring one frame of un-segmented video image as the currently processed video image until all the un-segmented video images of the current video are processed.
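Read together, the steps of example three amount to the following loop, sketched here under the same assumed (x, y, w, h) box convention as above; `model` stands in for the preset image segmentation model and `fps` for the frame rate used in example four.

```python
def crop(frame, box):
    # Clamp the box to the frame and cut out the expected image.
    x, y, w, h = (int(round(v)) for v in box)
    return frame[max(y, 0):y + h, max(x, 0):x + w]

def segment_remaining_frames(cap, model, prev_boxes, speeds, fps):
    per_frame_objects = []
    dt = 1.0 / fps                      # interval between adjacent frames
    while True:
        ok, frame = cap.read()          # next un-segmented video image
        if not ok:
            break                       # all un-segmented images processed
        # Expected image area: previous area shifted by speed * interval.
        expected = [(x + vx * dt, y + vy * dt, w, h)
                    for (x, y, w, h), (vx, vy) in zip(prev_boxes, speeds)]
        # Re-run the segmentation model on the cropped expected images
        # instead of the whole frame.
        objects = [model(crop(frame, box)) for box in expected]
        per_frame_objects.append(objects)
        prev_boxes = expected           # roll forward to the next frame
    return per_frame_objects
```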
In accordance with one or more embodiments of the present disclosure, example four provides a video scene classification method, on the basis of the video scene classification method of example three, determining an expected image area of the first group of video objects in the currently processed video image according to the motion speed, the frame rate of the current video, and an image area of the first group of video objects in a previous frame of video image, including:
determining the interval time between the previous frame video image and the currently processed video image according to the frame rate of the current video;
determining the displacement of the first group of video objects within the interval time according to the interval time and the motion speed;
and determining the expected image area of the first group of video objects in the currently processed video image according to the image area of the first group of video objects in the previous frame video image and the displacement.
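As arithmetic, example four reduces to an interval time of 1/frame-rate, a displacement of speed multiplied by that interval, and a shift of the previous image area by the displacement. A minimal sketch, assuming pixel units and a constant frame rate:

```python
def expected_image_area(prev_box, speed, fps):
    x, y, w, h = prev_box    # image area in the previous frame video image
    vx, vy = speed           # image-plane motion speed (px/s)
    dt = 1.0 / fps           # interval time between adjacent frames
    return (x + vx * dt, y + vy * dt, w, h)

# e.g. at 25 fps, an object moving 50 px/s rightward shifts
# 50 * (1 / 25) = 2 px per frame: (100, 40, 32, 32) -> (102, 40, 32, 32)
```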
According to one or more embodiments of the present disclosure, example five provides a video scene classification method, and on the basis of the video scene classification method of example one, the determining a classification result of the target video according to video objects corresponding to each frame of video image of the current video includes:
determining the scene category corresponding to each frame of video image according to a preset correspondence rule between video objects and scene categories and the video objects corresponding to each frame of video image;
counting the number of video image frames corresponding to each scene category;
and taking the scene category with the largest number of video image frames as the classification result of the target video.
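In effect, example five is a per-frame lookup followed by a majority vote over frames. The sketch below assumes each frame's video objects arrive as a set of labels and that the correspondence rules map label sets to scene categories; the two rules shown are made-up illustrations, not rules taken from this disclosure.

```python
from collections import Counter

SCENE_RULES = {                             # hypothetical correspondence rules
    frozenset({"person", "ball"}): "sports",
    frozenset({"car", "road"}): "traffic",
}

def classify_scene(per_frame_objects):
    votes = Counter()
    for labels in per_frame_objects:
        scene = SCENE_RULES.get(frozenset(labels), "unknown")
        votes[scene] += 1                   # one vote per video image frame
    # The scene category covering the largest number of frames wins.
    return votes.most_common(1)[0][0]
```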
According to one or more embodiments of the present disclosure, example six provides a video scene classification method, on the basis of the video scene classification method of example one, before respectively inputting a first frame video image and a second frame video image of a current video to a preset image segmentation model, the method further includes:
acquiring a training sample set corresponding to each scene category, wherein the training sample set comprises a set number of images corresponding to the scene category;
and training a neural network model by using the training sample set to obtain a preset image segmentation model.
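Example six leaves the network architecture and training procedure open. Below is a minimal training-loop sketch, assuming PyTorch, per-pixel class masks, and an externally supplied dataset and model; none of these choices are mandated by the disclosure.

```python
import torch
from torch.utils.data import DataLoader

def train_segmentation_model(model, dataset, epochs=10, lr=1e-3):
    loader = DataLoader(dataset, batch_size=16, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()   # per-pixel class labels
    for _ in range(epochs):
        for images, masks in loader:        # scene-labelled sample set
            opt.zero_grad()
            loss = loss_fn(model(images), masks)
            loss.backward()
            opt.step()
    return model  # serves as the "preset image segmentation model"
```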
According to one or more embodiments of the present disclosure, example seven provides a video scene classification method, and on the basis of the video scene classification method of example six, the set number of images corresponding to the scene category comprises original images and images obtained by processing the original images according to a preset image processing rule.
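One way to read example seven is ordinary data augmentation: each training image contributes itself plus variants produced by the preset processing rules. The specific rules below (horizontal flip, rotation, brightening, via OpenCV) are assumptions for illustration only.

```python
import cv2

def build_sample_images(original):
    processed = [
        cv2.flip(original, 1),                          # horizontal flip
        cv2.rotate(original, cv2.ROTATE_90_CLOCKWISE),  # 90-degree rotation
        cv2.convertScaleAbs(original, alpha=1.0, beta=30),  # brighten
    ]
    return [original] + processed   # original plus processed images
```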
Example eight provides, in accordance with one or more embodiments of the present disclosure, a video scene classification apparatus, comprising:
the first image segmentation module is used for respectively inputting a first frame video image and a second frame video image of a current video into a preset image segmentation model to obtain a first group of video objects corresponding to the first frame video image and a second group of video objects corresponding to the second frame video image; wherein the second group of video objects comprises the video objects in the first group of video objects;
a motion speed determination module, configured to determine a motion speed of the first group of video objects on a video image plane according to image areas of the first group of video objects in the first frame of video image and the second frame of video image;
the second image segmentation module is used for determining an expected image area of the first group of video objects in each frame of un-segmented video image of the current video according to the motion speed, and performing image segmentation on each frame of un-segmented video image according to the expected image area to obtain a corresponding video object;
and the video classification module is used for determining the classification result of the target video according to the video object corresponding to each frame of video image of the current video.
Example nine provides, in accordance with one or more embodiments of the present disclosure, a mobile terminal, comprising:
one or more processing devices;
storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processing devices, cause the one or more processing devices to implement the video scene classification method of any one of examples one to seven.
Example ten provides, according to one or more embodiments of the present disclosure, a computer-readable storage medium having stored thereon a computer program that, when executed by a processor, implements the video scene classification method of any one of examples one to seven.
The foregoing description is merely illustrative of the preferred embodiments of the present disclosure and of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combinations of the features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions in which the above features are replaced with (but not limited to) features having similar functions disclosed in the present disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (9)

1. A method for classifying a video scene, comprising:
respectively inputting a first frame video image and a second frame video image of a current video into a preset image segmentation model to obtain a first group of video objects corresponding to the first frame video image and a second group of video objects corresponding to the second frame video image; wherein the second group of video objects comprises the video objects in the first group of video objects;
determining the movement speed of the first group of video objects on a video image plane according to the image areas of the first group of video objects in the first frame of video image and the second frame of video image;
determining an expected image area of the first group of video objects in each frame of un-segmented video image of the current video according to the motion speed, and performing image segmentation on each frame of un-segmented video image according to the expected image area to obtain a corresponding video object, which comprises: sequentially acquiring one frame of un-segmented video image as a currently processed video image; determining an expected image area of the first group of video objects in the currently processed video image according to the motion speed, the frame rate of the current video, and an image area of the first group of video objects in a previous frame video image; cutting out, from the currently processed video image, an expected image matched with each video object according to the expected image area, using an image frame matched with each video object in the first group of video objects; inputting the expected image matched with each video object into the preset image segmentation model to obtain the video objects corresponding to the currently processed video image; and returning to execute the operation of sequentially acquiring one frame of un-segmented video image as the currently processed video image until all the un-segmented video images of the current video are processed;
and determining the classification result of the current video according to the video object corresponding to each frame of video image of the current video.
2. The method of claim 1, wherein the determining the motion speed of the first group of video objects on the video image plane according to the image areas of the first group of video objects in the first frame video image and the second frame video image comprises:
and determining the movement speed of the first group of video objects on a video image plane according to the image areas of the first group of video objects in the first frame of video image and the second frame of video image by adopting an optical flow method.
3. The method of claim 1, wherein the determining an expected image area of the first group of video objects in the currently processed video image according to the motion speed, the frame rate of the current video, and the image area of the first group of video objects in the previous frame video image comprises:
determining the interval time between the previous frame video image and the currently processed video image according to the frame rate of the current video;
determining the displacement of the first group of video objects within the interval time according to the interval time and the motion speed;
and determining the expected image area of the first group of video objects in the currently processed video image according to the image area of the first group of video objects in the previous frame video image and the displacement.
4. The method according to claim 1, wherein the determining the classification result of the current video according to the video objects corresponding to each frame of video image of the current video comprises:
determining the scene category corresponding to each frame of video image according to a preset correspondence rule between video objects and scene categories and the video objects corresponding to each frame of video image;
counting the number of video image frames corresponding to each scene category;
and taking the scene category with the largest number of video image frames as the classification result of the current video.
5. The method according to claim 1, wherein, before the first frame video image and the second frame video image of the current video are respectively input to the preset image segmentation model, the method further comprises:
acquiring a training sample set corresponding to each scene category, wherein the training sample set comprises a set number of images corresponding to the scene category;
and training a neural network model by using the training sample set to obtain a preset image segmentation model.
6. The method of claim 5, wherein the set number of images corresponding to the scene category comprises original images and images obtained by processing the original images according to a preset image processing rule.
7. A video scene classification apparatus, comprising:
the first image segmentation module is used for respectively inputting a first frame video image and a second frame video image of a current video into a preset image segmentation model to obtain a first group of video objects corresponding to the first frame video image and a second group of video objects corresponding to the second frame video image; wherein the second group of video objects comprises the video objects in the first group of video objects;
a motion speed determination module, configured to determine a motion speed of the first group of video objects on a video image plane according to image areas of the first group of video objects in the first frame of video image and the second frame of video image;
the second image segmentation module is used for determining an expected image area of the first group of video objects in each frame of un-segmented video image of the current video according to the motion speed, and performing image segmentation on each frame of un-segmented video image according to the expected image area to obtain a corresponding video object, and is specifically used for: sequentially acquiring one frame of un-segmented video image as a currently processed video image; determining an expected image area of the first group of video objects in the currently processed video image according to the motion speed, the frame rate of the current video, and an image area of the first group of video objects in a previous frame video image; cutting out, from the currently processed video image, an expected image matched with each video object according to the expected image area, using an image frame matched with each video object in the first group of video objects; inputting the expected image matched with each video object into the preset image segmentation model to obtain the video objects corresponding to the currently processed video image; and returning to execute the operation of sequentially acquiring one frame of un-segmented video image as the currently processed video image until all the un-segmented video images of the current video are processed;
and the video classification module is used for determining the classification result of the current video according to the video object corresponding to each frame of video image of the current video.
8. A mobile terminal, characterized in that the mobile terminal comprises:
one or more processing devices;
storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processing devices, cause the one or more processing devices to implement the video scene classification method of any one of claims 1 to 6.
9. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method for video scene classification according to any one of claims 1 to 6.
CN201910612133.7A 2019-07-08 2019-07-08 Video scene classification method and device, mobile terminal and storage medium Active CN110348369B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910612133.7A CN110348369B (en) 2019-07-08 2019-07-08 Video scene classification method and device, mobile terminal and storage medium

Publications (2)

Publication Number Publication Date
CN110348369A CN110348369A (en) 2019-10-18
CN110348369B true CN110348369B (en) 2021-07-06

Family

ID=68178493

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910612133.7A Active CN110348369B (en) 2019-07-08 2019-07-08 Video scene classification method and device, mobile terminal and storage medium

Country Status (1)

Country Link
CN (1) CN110348369B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112804578A (en) * 2021-01-28 2021-05-14 广州虎牙科技有限公司 Atmosphere special effect generation method and device, electronic equipment and storage medium
CN113014831B (en) * 2021-03-05 2024-03-12 上海明略人工智能(集团)有限公司 Method, device and equipment for scene acquisition of sports video

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106897742A (en) * 2017-02-21 2017-06-27 北京市商汤科技开发有限公司 Method, device and electronic equipment for detecting object in video
CN107392917A (en) * 2017-06-09 2017-11-24 深圳大学 A kind of saliency detection method and system based on space-time restriction
CN108805898A (en) * 2018-05-31 2018-11-13 北京字节跳动网络技术有限公司 Method of video image processing and device
CN108875619A (en) * 2018-06-08 2018-11-23 Oppo广东移动通信有限公司 Method for processing video frequency and device, electronic equipment, computer readable storage medium
CN109145840A (en) * 2018-08-29 2019-01-04 北京字节跳动网络技术有限公司 video scene classification method, device, equipment and storage medium
CN109215037A (en) * 2018-09-18 2019-01-15 Oppo广东移动通信有限公司 Destination image partition method, device and terminal device
WO2019040214A1 (en) * 2017-08-22 2019-02-28 Northrop Grumman Systems Corporation System and method for distributive training and weight distribution in a neural network
CN109492608A (en) * 2018-11-27 2019-03-19 腾讯科技(深圳)有限公司 Image partition method, device, computer equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6691126B1 (en) * 2000-06-14 2004-02-10 International Business Machines Corporation Method and apparatus for locating multi-region objects in an image or video database
CN101231755B (en) * 2007-01-25 2013-03-06 上海遥薇(集团)有限公司 Moving target tracking and quantity statistics method
CN108154086B (en) * 2017-12-06 2022-06-03 北京奇艺世纪科技有限公司 Image extraction method and device and electronic equipment
CN109272509B (en) * 2018-09-06 2021-10-29 郑州云海信息技术有限公司 Target detection method, device and equipment for continuous images and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Unsupervised Video Object Segmentation Methods; Yao Jihuan; China Masters' Theses Full-text Database, Information Science and Technology; 2016-03-15 (No. 03); I138-5794 *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant