CN113569703A - Method and system for judging true segmentation point, storage medium and electronic equipment

Method and system for judging true segmentation point, storage medium and electronic equipment

Info

Publication number
CN113569703A
Authority
CN
China
Prior art keywords
video
segmentation point
real
candidate
candidate segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110835226.3A
Other languages
Chinese (zh)
Other versions
CN113569703B (en)
Inventor
胡郡郡
唐大闰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Minglue Artificial Intelligence Group Co Ltd
Original Assignee
Shanghai Minglue Artificial Intelligence Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Minglue Artificial Intelligence Group Co Ltd
Priority to CN202110835226.3A
Publication of CN113569703A
Application granted
Publication of CN113569703B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a method and a system for judging real segmentation points, a storage medium, and an electronic device. The method for judging real segmentation points comprises the following steps: a video feature acquisition step: dividing a video into a plurality of equal parts according to time, and extracting features from the parts with a deep learning pre-training model to obtain video features; a model processing step: inputting the video features into a real segmentation point judgment model to obtain the classification probability of each candidate segmentation point; and a judging step: judging the candidate segmentation points according to the classification probability to determine the real scene segmentation points. The invention uses a global consistency loss, which raises the similarity within the same scene and lowers the similarity between different scenes, obtains a very good representation, and lets the model converge gradually without the loss rising.

Description

Method and system for judging true segmentation point, storage medium and electronic equipment
Technical Field
The invention belongs to the field of real segmentation point judgment, and particularly relates to a real segmentation point judgment method and system, a storage medium and electronic equipment.
Background
The existing approaches are event detection based methods (e.g., the Dense Boundary Generator). However, these methods allow the time regions of individual events to overlap, whereas scene segmentation requires that the segments have no temporal overlap.
Disclosure of Invention
The embodiments of the present application provide a method and a system for judging real segmentation points, a storage medium, and an electronic device, so as to at least solve the problem that events have overlapping time regions in existing methods.
The invention provides a method for judging a real segmentation point, which comprises the following steps:
a video characteristic dimension obtaining step: dividing a video into a plurality of video equal parts according to time, and extracting features of the video equal parts by using a deep learning pre-training model to obtain video features;
model processing step: inputting the video characteristics into a real segmentation point judgment model for processing to obtain the classification probability of each candidate segmentation point;
and a judging step of judging the candidate segmentation points according to the classification probability to determine the real scene segmentation points.
The above method for judging the real segmentation point, wherein the video feature acquisition step comprises the following steps:
video equal part obtaining: dividing the video into a plurality of video equal parts according to time;
and a step of obtaining video characteristics, which is to extract characteristics of each video equal part by using a deep learning pre-training model to obtain first characteristics corresponding to each video equal part.
The above method for judging the real segmentation point, wherein the model processing step includes:
obtaining sample video equal parts: dividing the sample video into a plurality of sample video equal parts according to time;
a sample video feature obtaining step: extracting features from each sample video equal part by using a deep learning pre-training model to obtain a plurality of sample video features of the sample video;
constructing candidate segmentation point features: for each candidate segmentation point, taking the sample video features of the video equal part where the candidate segmentation point is located, the sample video features between it and the previous candidate segmentation point, and the sample video features between it and the next candidate segmentation point; then sequentially building an Encoder network and a Predictor network, designing a loss function, and building the real segmentation point judgment model;
a classification probability obtaining step: and obtaining the classification probability of each candidate segmentation point through a real segmentation point judgment model according to the video characteristics.
The above method for judging the real segmentation point, wherein the judging step comprises: judging the classification probability of each candidate segmentation point against a set threshold to determine whether the candidate segmentation point is a real scene segmentation point.
The invention also provides a real segmentation point judgment system, which comprises:
the video feature dimension acquisition module divides a video into a plurality of video equal parts according to time, and extracts features of the video equal parts by using a deep learning pre-training model to obtain video features;
the model processing module inputs the video characteristics into a real segmentation point judgment model to be processed to obtain the classification probability of each candidate segmentation point;
and the judging module judges the candidate segmentation points according to the classification probability to determine the real scene segmentation points.
The above real segmentation point judgment system, wherein the video feature acquisition module includes:
the video equal-part obtaining unit divides the video into a plurality of video equal parts according to time;
and the video feature obtaining unit extracts features of each video equal part by using a deep learning pre-training model to obtain first features corresponding to each video equal part.
The above real segmentation point judgment system, wherein the model processing module includes:
a sample video equal part obtaining unit, which divides the sample video into a plurality of sample video equal parts according to time;
the sample video feature obtaining unit extracts features from each sample video equal part by using a deep learning pre-training model to obtain a plurality of sample video features of the sample video;
the candidate segmentation point feature constructing unit, for each candidate segmentation point, takes the sample video features of the video equal part where the candidate segmentation point is located, the sample video features between it and the previous candidate segmentation point, and the sample video features between it and the next candidate segmentation point, then sequentially constructs an Encoder network and a Predictor network, designs a loss function, and constructs the real segmentation point judgment model;
and the classification probability obtaining unit obtains the classification probability of each candidate segmentation point through a real segmentation point judgment model according to the video characteristics.
The above real segmentation point judgment system, wherein the judging module: judges the classification probability of each candidate segmentation point against a set threshold to determine whether the candidate segmentation point is a real scene segmentation point.
An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method for determining true segmentation points as described in any of the above when executing the computer program.
A storage medium having stored thereon a computer program, wherein the program when executed by a processor implements a true segmentation point determination method as described in any one of the above.
The invention has the beneficial effects that:
the invention belongs to the field of computer vision in the deep learning technology. The invention uses global consistency loss, reduces the similarity of the same scene, improves the similarity of different scenes, can obtain very good expression, and the model can gradually converge without loss rise; the invention also uses a transformer, which can realize automatic attention and learn the relation in the video sequence.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application.
In the drawings:
FIG. 1 is a flow chart of a true segmentation point determination method of the present invention;
FIG. 2 is a flow chart of substep S1 of the present invention;
FIG. 3 is a flow chart of substep S2 of the present invention;
FIG. 4 is a video scene segmentation diagram of the present invention;
FIG. 5 is a diagram of a model of the present invention;
FIG. 6 is a schematic structural diagram of a real segmentation point determination system according to the present invention;
fig. 7 is a frame diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application.
It is obvious that the drawings in the following description are only examples or embodiments of the present application, and that it is also possible for a person skilled in the art to apply the present application to other similar contexts on the basis of these drawings without inventive effort. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms referred to herein shall have the ordinary meaning as understood by those of ordinary skill in the art to which this application belongs. Reference to "a," "an," "the," and similar words throughout this application are not to be construed as limiting in number, and may refer to the singular or the plural. The present application is directed to the use of the terms "including," "comprising," "having," and any variations thereof, which are intended to cover non-exclusive inclusions; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. Reference to "connected," "coupled," and the like in this application is not intended to be limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" as referred to herein means two or more. "and/or" describes an association relationship of associated objects, meaning that three relationships may exist, for example, "A and/or B" may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. Reference herein to the terms "first," "second," "third," and the like, are merely to distinguish similar objects and do not denote a particular ordering for the objects.
The present invention is described in detail with reference to the embodiments shown in the drawings, but it should be understood that these embodiments are not intended to limit the present invention, and those skilled in the art should understand that functional, methodological, or structural equivalents or substitutions made by these embodiments are within the scope of the present invention.
Before describing in detail the various embodiments of the present invention, the core inventive concepts of the present invention are summarized and described in detail by the following several embodiments.
The first embodiment is as follows:
referring to fig. 1, fig. 1 is a flowchart of a method for determining a true segmentation point. As shown in fig. 1, the method for determining a true segmentation point according to the present invention includes:
a video feature dimension acquisition step S1: dividing a video into a plurality of video equal parts according to time, and extracting features of the video equal parts by using a deep learning pre-training model to obtain video features;
model processing step S2: inputting the video characteristics into a real segmentation point judgment model for processing to obtain the classification probability of each candidate segmentation point;
and a judging step S3: judging the candidate segmentation points according to the classification probability to determine the real scene segmentation points.
Referring to fig. 2, fig. 2 is a flowchart of the video feature dimension obtaining step S1. As shown in fig. 2, the video feature dimension obtaining step S1 includes:
video equal part obtaining step S11: dividing the video into a plurality of video equal parts according to time;
and a step S12 of obtaining video characteristics, which is to extract characteristics of each video equal part by using a deep learning pre-training model to obtain first characteristics corresponding to each video equal part.
Referring to fig. 3, fig. 3 is a flowchart of the model processing step S2. As shown in fig. 3, the model processing step S2 includes:
sample video aliquot obtaining step S21: dividing the sample video into a plurality of sample video equal parts according to time;
a sample video feature obtaining step S22: extracting features from each sample video equal part by using a deep learning pre-training model to obtain a plurality of sample video features of the sample video;
a candidate segmentation point feature constructing step S23: for each candidate segmentation point, taking the sample video features of the video equal part where the candidate segmentation point is located, the sample video features between it and the previous candidate segmentation point, and the sample video features between it and the next candidate segmentation point; then sequentially building an Encoder network and a Predictor network, designing a loss function, and building the real segmentation point judgment model;
classification probability obtaining step S24: and obtaining the classification probability of each candidate segmentation point through a real segmentation point judgment model according to the video characteristics.
Wherein the judging step comprises: judging the classification probability of each candidate segmentation point against a set threshold to determine whether the candidate segmentation point is a real scene segmentation point.
Specifically, as shown in fig. 4 and 5, the training phase includes:
step 1, dividing a video into L equal parts according to time, wherein each equal part of the video is called a clip.
And 2, extracting features from each clip using a deep learning pre-training model; each clip yields a 1 x D feature vector (D is the feature dimension), so the L clips together yield L x D features.
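A sketch of this step, assuming a torchvision ResNet-50 backbone applied to one representative frame per clip; the patent names neither the backbone nor the frame-sampling strategy, so both are illustrative choices (here D = 2048).

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# Pretrained backbone with the classification head removed, so it emits features.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ConvertImageDtype(torch.float),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def clip_features(frames: torch.Tensor) -> torch.Tensor:
    """frames: (L, 3, H, W) uint8 tensor, one representative frame per clip.
    Returns the L x D feature matrix (one 1 x D row per clip, D = 2048)."""
    with torch.no_grad():
        return backbone(preprocess(frames))
```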
And 3, constructing an Encoder network, which aims to represent the features of each point with higher-level semantics and reduces D to 128 dimensions.
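The patent fixes only the 128-dimensional output of the Encoder; the two-layer MLP sketched below is one plausible choice, not the disclosed architecture.

```python
import torch.nn as nn

class Encoder(nn.Module):
    """Maps each 1 x D clip feature to a 128-dim higher-level representation."""
    def __init__(self, d_in: int = 2048, d_out: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, 512),
            nn.ReLU(),
            nn.Linear(512, d_out),
        )

    def forward(self, x):  # x: (L, d_in) -> (L, d_out)
        return self.net(x)
```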
And 4, constructing the features of each candidate segmentation point: each candidate segmentation point takes the features of the clip where it is located, the features between it and the previous candidate segmentation point, and the features between it and the next candidate segmentation point. Referring to FIG. 5, the features of segmentation point P5 are [F3, F4, F5, F6, F7, F8].
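The window construction can be sketched as follows; the exact inclusive/exclusive boundary convention is an assumption, chosen so that the example of FIG. 5 (P5 -> [F3 .. F8]) is reproduced.

```python
def candidate_window(feats, cand, k):
    """feats: (L, 128) encoded clip features; cand: sorted clip indices of the
    candidate segmentation points; k: which candidate to build features for.
    Takes the clips from just after the previous candidate up to and including
    the next candidate, so a candidate at clip 5 with neighbours at clips 2
    and 8 yields the window [F3, F4, F5, F6, F7, F8]."""
    lo = cand[k - 1] + 1 if k > 0 else 0
    hi = cand[k + 1] if k + 1 < len(cand) else feats.shape[0] - 1
    return feats[lo:hi + 1]  # variable-length (w, 128) window
```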
And 5, constructing a Transformer network: the features from step 4 are fed in with an added classification CLS token, and the output CLS token is used to directly judge whether the candidate point is a real segmentation point.
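A sketch of such a network in PyTorch; the number of layers and heads are assumptions, since the patent fixes only the prepended CLS token whose output is classified.

```python
import torch
import torch.nn as nn

class SplitPointClassifier(nn.Module):
    """Transformer over one candidate point's window of clip features, with a
    learned CLS token prepended; the CLS output is mapped to a probability."""
    def __init__(self, d: int = 128, heads: int = 4, layers: int = 2):
        super().__init__()
        self.cls = nn.Parameter(torch.zeros(1, 1, d))
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)
        self.head = nn.Linear(d, 1)

    def forward(self, window):  # window: (B, w, d)
        cls = self.cls.expand(window.size(0), -1, -1)
        out = self.encoder(torch.cat([cls, window], dim=1))
        return torch.sigmoid(self.head(out[:, 0]))  # (B, 1) probability
```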
Step 6, designing the loss function, which comprises a classification term and a consistency regularization term.

The classification loss is the binary cross-entropy

    L_cls = -[ g_mask * log(p) + (1 - g_mask) * log(1 - p) ]

where g_mask is the label of a candidate point: a point whose distance to the ground truth is less than or equal to 1 is taken as a positive example, and otherwise as a negative example.

The consistency regularization loss is of the form

    L_con = (1/n) * sum cos<F_i, F_j>_- - (1/m) * sum cos<F_i, F_j>_+

where i and j are any two clips; cos<F_i, F_j>_+ denotes the cosine similarity of clips i and j when they belong to the same scene, and cos<F_i, F_j>_- denotes it when they do not; m is the number of clip pairs belonging to the same scene, and n is the number of clip pairs not belonging to the same scene. Minimizing this term raises the similarity within a scene and lowers it across scenes.
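The two terms can be sketched as follows, building on the encoder features above; `scene_ids`, the per-clip scene labels used to form the pairs, is a hypothetical input, and the consistency term follows the reconstructed form given above.

```python
import torch
import torch.nn.functional as F

def classification_loss(p, g_mask):
    """Binary cross-entropy between predicted probabilities p and the 0/1
    g_mask labels (1 when the point is within distance 1 of the ground truth)."""
    return F.binary_cross_entropy(p, g_mask.float())

def consistency_loss(feats, scene_ids):
    """Consistency regularizer in the reconstructed form above: mean cosine
    similarity over different-scene clip pairs minus that over same-scene pairs."""
    f = F.normalize(feats, dim=1)
    sim = f @ f.t()                                  # (L, L) cosine similarities
    same = scene_ids[:, None] == scene_ids[None, :]  # same-scene pair mask
    off_diag = ~torch.eye(len(f), dtype=torch.bool)  # exclude self-pairs
    return sim[~same].mean() - sim[same & off_diag].mean()
```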
And 7, training the model by back-propagation.
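A minimal training-loop sketch tying the pieces above together; the Adam optimizer, the learning rate, the equal weighting of the two losses, and the hypothetical `train_loader` yielding (feats, cand, g_mask, scene_ids) per video are all assumptions not fixed by the patent.

```python
import torch

encoder = Encoder()
classifier = SplitPointClassifier()
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(classifier.parameters()), lr=1e-4)

for feats, cand, g_mask, scene_ids in train_loader:       # one video per step
    enc = encoder(feats)                                   # (L, 128)
    probs = torch.cat([classifier(candidate_window(enc, cand, k).unsqueeze(0))
                       for k in range(len(cand))]).squeeze(1)  # (K,)
    loss = classification_loss(probs, g_mask) + consistency_loss(enc, scene_ids)
    opt.zero_grad()
    loss.backward()  # back-propagate through classifier and encoder
    opt.step()
```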
The inference phase comprises:
and 1, obtaining the characteristics L x D of each video according to the same training stage.
And 2, forward propagating through the Encoder network and the Transformer network to obtain the classification probability of each candidate segmentation point, and applying a threshold to this probability to judge whether the candidate segmentation point is a real segmentation point.
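Putting the two inference steps together, under the same assumptions as the training sketches above; the 0.5 default threshold is illustrative, since the patent leaves the value open.

```python
import torch

def infer_real_points(frames, cand, threshold: float = 0.5):
    """Returns the clip indices of candidates whose classification probability
    clears the threshold, i.e. the predicted real segmentation points."""
    with torch.no_grad():
        enc = encoder(clip_features(frames))   # same L x D pipeline as training
        return [cand[k] for k in range(len(cand))
                if classifier(candidate_window(enc, cand, k)
                              .unsqueeze(0)).item() >= threshold]
```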
The overall model scheme is shown in FIG. 5.
Example two:
referring to fig. 6, fig. 6 is a schematic structural diagram of a real partitioning point determining system according to the present invention. As shown in fig. 6, a real partitioning point determining system of the present invention includes:
the video feature dimension acquisition module divides a video into a plurality of video equal parts according to time, and extracts features of the video equal parts by using a deep learning pre-training model to obtain video features;
the model processing module inputs the video characteristics into a real segmentation point judgment model to be processed to obtain the classification probability of each candidate segmentation point;
and the judging module judges the candidate segmentation points according to the classification probability to determine the real scene segmentation points.
Wherein the video feature acquisition module comprises:
the video equal-part obtaining unit divides the video into a plurality of video equal parts according to time;
and the video feature obtaining unit extracts features of each video equal part by using a deep learning pre-training model to obtain first features corresponding to each video equal part.
Wherein the model processing module comprises:
a sample video equal part obtaining unit, which divides the sample video into a plurality of sample video equal parts according to time;
the sample video feature obtaining unit extracts features from each sample video equal part by using a deep learning pre-training model to obtain a plurality of sample video features of the sample video;
the candidate segmentation point feature constructing unit, for each candidate segmentation point, takes the sample video features of the video equal part where the candidate segmentation point is located, the sample video features between it and the previous candidate segmentation point, and the sample video features between it and the next candidate segmentation point, then sequentially constructs an Encoder network and a Predictor network, designs a loss function, and constructs the real segmentation point judgment model;
and the classification probability obtaining unit obtains the classification probability of each candidate segmentation point through a real segmentation point judgment model according to the video characteristics.
Wherein the judging module: judges the classification probability of each candidate segmentation point against a set threshold to determine whether the candidate segmentation point is a real scene segmentation point.
Example three:
referring to fig. 7, this embodiment discloses an embodiment of an electronic device. The electronic device may include a processor 81 and a memory 82 storing computer program instructions.
Specifically, the processor 81 may include a Central Processing Unit (CPU) or an Application-Specific Integrated Circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present application.
Memory 82 may include mass storage for data or instructions. By way of example, and not limitation, memory 82 may include a Hard Disk Drive (HDD), a floppy disk drive, a Solid State Drive (SSD), flash memory, an optical disk, a magneto-optical disk, tape, a Universal Serial Bus (USB) drive, or a combination of two or more of these. Memory 82 may include removable or non-removable (or fixed) media, where appropriate. The memory 82 may be internal or external to the data processing apparatus, where appropriate. In a particular embodiment, the memory 82 is Non-Volatile memory. In particular embodiments, memory 82 includes Read-Only Memory (ROM) and Random Access Memory (RAM). The ROM may be mask-programmed ROM, Programmable ROM (PROM), Erasable PROM (EPROM), Electrically Erasable PROM (EEPROM), Electrically Alterable ROM (EAROM), or FLASH memory, or a combination of two or more of these, where appropriate. The RAM may be a Static Random-Access Memory (SRAM) or a Dynamic Random-Access Memory (DRAM), where the DRAM may be a Fast Page Mode DRAM (FPMDRAM), an Extended Data Output DRAM (EDODRAM), a Synchronous DRAM (SDRAM), and the like.
The memory 82 may be used to store or cache various data files for processing and/or communication use, as well as possible computer program instructions executed by the processor 81.
The processor 81 reads and executes the computer program instructions stored in the memory 82 to implement any one of the real division point determination methods in the above embodiments.
In some of these embodiments, the electronic device may also include a communication interface 83 and a bus 80. As shown in fig. 7, the processor 81, the memory 82, and the communication interface 83 are connected via the bus 80 to complete communication therebetween.
The communication interface 83 is used to implement communication between modules, devices, units and/or equipment in the embodiments of the present application. The communication interface 83 may also carry out data communication with other components, such as external devices, image/data acquisition equipment, databases, external storage, and image/data processing workstations.
The bus 80 includes hardware, software, or both that couple the components of the electronic device to one another. Bus 80 includes, but is not limited to, at least one of the following: a Data Bus, an Address Bus, a Control Bus, an Expansion Bus, and a Local Bus. By way of example, and not limitation, bus 80 may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front-Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a VESA Local Bus (VLB), or another suitable bus, or a combination of two or more of these. Bus 80 may include one or more buses, where appropriate. Although specific buses are described and shown in the embodiments of the application, any suitable buses or interconnects are contemplated by the application.
The electronic device performs the judgment of real segmentation points, thereby implementing the methods described in conjunction with FIGS. 1 to 3.
In addition, in combination with the method for determining the true segmentation point in the foregoing embodiments, the embodiments of the present application may provide a computer-readable storage medium to implement. The computer readable storage medium having stored thereon computer program instructions; the computer program instructions, when executed by a processor, implement any of the true segmentation point determination methods in the above embodiments.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
In conclusion, the invention uses a global consistency loss, which raises the similarity within the same scene and lowers the similarity between different scenes, so that a very good representation is obtained and the model converges gradually without the loss rising; the invention also uses a Transformer, which applies self-attention to learn the relationships within the video sequence.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (10)

1. A method for judging a true segmentation point is characterized by comprising the following steps:
a video characteristic dimension obtaining step: dividing a video into a plurality of video equal parts according to time, and extracting features of the video equal parts by using a deep learning pre-training model to obtain video features;
model processing step: inputting the video characteristics into a real segmentation point judgment model for processing to obtain the classification probability of each candidate segmentation point;
and a judging step of judging the candidate segmentation points according to the classification probability to determine the real scene segmentation points.
2. The method for judging a real segmentation point according to claim 1, wherein the video feature acquisition step includes:
video equal part obtaining: dividing the video into a plurality of video equal parts according to time;
and a step of obtaining video characteristics, which is to extract characteristics of each video equal part by using a deep learning pre-training model to obtain first characteristics corresponding to each video equal part.
3. The method of determining true segmentation points according to claim 1, wherein the model processing step includes:
obtaining sample video equal parts: dividing the sample video into a plurality of sample video equal parts according to time;
a sample video feature obtaining step: extracting features from each sample video equal part by using a deep learning pre-training model to obtain a plurality of sample video features of the sample video;
constructing candidate segmentation point features: for each candidate segmentation point, taking the sample video features of the video equal part where the candidate segmentation point is located, the sample video features between it and the previous candidate segmentation point, and the sample video features between it and the next candidate segmentation point; then sequentially building an Encoder network and a Predictor network, designing a loss function, and building the real segmentation point judgment model;
a classification probability obtaining step: and obtaining the classification probability of each candidate segmentation point through a real segmentation point judgment model according to the video characteristics.
4. The method for judging a real segmentation point according to claim 1, wherein the judging step comprises: judging the classification probability of each candidate segmentation point against a set threshold to determine whether the candidate segmentation point is a real scene segmentation point.
5. A true segmentation point determination system, comprising:
the video feature dimension acquisition module divides a video into a plurality of video equal parts according to time, and extracts features of the video equal parts by using a deep learning pre-training model to obtain video features;
the model processing module inputs the video characteristics into a real segmentation point judgment model to be processed to obtain the classification probability of each candidate segmentation point;
and the judging module judges the candidate segmentation points according to the classification probability to determine the real scene segmentation points.
6. The real segmentation point judgment system according to claim 5, wherein the video feature acquisition module comprises:
the video equal-part obtaining unit divides the video into a plurality of video equal parts according to time;
and the video feature obtaining unit extracts features of each video equal part by using a deep learning pre-training model to obtain first features corresponding to each video equal part.
7. The real segmentation point judgment system of claim 5, wherein the model processing module comprises:
a sample video equal part obtaining unit, which divides the sample video into a plurality of sample video equal parts according to time;
the sample video feature obtaining unit extracts features from each sample video equal part by using a deep learning pre-training model to obtain a plurality of sample video features of the sample video;
the candidate segmentation point feature constructing unit, for each candidate segmentation point, takes the sample video features of the video equal part where the candidate segmentation point is located, the sample video features between it and the previous candidate segmentation point, and the sample video features between it and the next candidate segmentation point, then sequentially constructs an Encoder network and a Predictor network, designs a loss function, and constructs the real segmentation point judgment model;
and the classification probability obtaining unit obtains the classification probability of each candidate segmentation point through a real segmentation point judgment model according to the video characteristics.
8. The real segmentation point judgment system according to claim 5, wherein the judging module: judges the classification probability of each candidate segmentation point against a set threshold to determine whether the candidate segmentation point is a real scene segmentation point.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the true segmentation point judgment method according to any one of claims 1 to 4 when executing the computer program.
10. A storage medium on which a computer program is stored, the program being characterized in that it implements the true segmentation point judgment method according to any one of claims 1 to 4 when executed by a processor.
CN202110835226.3A 2021-07-23 2021-07-23 Real division point judging method, system, storage medium and electronic equipment Active CN113569703B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110835226.3A CN113569703B (en) 2021-07-23 2021-07-23 Real division point judging method, system, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110835226.3A CN113569703B (en) 2021-07-23 2021-07-23 Real division point judging method, system, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN113569703A (en) 2021-10-29
CN113569703B CN113569703B (en) 2024-04-16

Family

ID=78166575

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110835226.3A Active CN113569703B (en) 2021-07-23 2021-07-23 Real division point judging method, system, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113569703B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160211002A1 (en) * 2015-01-16 2016-07-21 Fujitsu Limited Video data file generation method and video data file generation apparatus
CN108537134A (en) * 2018-03-16 2018-09-14 北京交通大学 A kind of video semanteme scene cut and mask method
CN112906649A (en) * 2018-05-10 2021-06-04 北京影谱科技股份有限公司 Video segmentation method, device, computer device and medium
CN110213670A (en) * 2019-05-31 2019-09-06 北京奇艺世纪科技有限公司 Method for processing video frequency, device, electronic equipment and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114697763A (en) * 2022-04-07 2022-07-01 脸萌有限公司 Video processing method, device, electronic equipment and medium
US11699463B1 (en) 2022-04-07 2023-07-11 Lemon Inc. Video processing method, electronic device, and non-transitory computer-readable storage medium
CN114697763B (en) * 2022-04-07 2023-11-21 脸萌有限公司 Video processing method, device, electronic equipment and medium

Also Published As

Publication number Publication date
CN113569703B (en) 2024-04-16

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant