CN113271495A - Video information intelligent editing processing method based on image feature extraction and analysis and cloud service system - Google Patents


Info

Publication number
CN113271495A
Authority
CN
China
Prior art keywords
video
competition
event
image
target event
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202110530684.6A
Other languages
Chinese (zh)
Inventor
黄海燕
Current Assignee
Wuhan Fenghuayu Business Co ltd
Original Assignee
Wuhan Fenghuayu Business Co ltd
Priority date
Filing date
Publication date
Application filed by Wuhan Fenghuayu Business Co ltd filed Critical Wuhan Fenghuayu Business Co ltd
Priority to CN202110530684.6A
Publication of CN113271495A
Legal status: Withdrawn

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/44008: Processing of video elementary streams, involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N 21/44016: Processing of video elementary streams, involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H04N 21/4415: Acquiring end-user identification using biometric characteristics of the user, e.g. by voice recognition or fingerprint scanning
    • H04N 21/4755: End-user interface for inputting end-user data, for defining user preferences, e.g. favourite actors or genre
    • H04N 21/8456: Structuring of content by decomposing the content in the time domain, e.g. in time segments
    • H04N 21/8547: Content authoring involving timestamps for synchronizing content

Abstract

The invention discloses a video-information intelligent clipping method based on image feature extraction and analysis, together with a cloud service system. A sports-event video is segmented into a number of event images, and the athlete appearing in each image is identified so that images belonging to the same athlete can be gathered. The event video is then clipped to obtain each athlete's competition video segments, and those segments are merged into a comprehensive competition video for that athlete. The sports-event video is thereby clipped intelligently: on the one hand, fans who only want to watch a favourite athlete's competition segments are served directly; on the other hand, all fans gain a wider choice of segments to watch, improving the selectivity of video viewing and hence both viewing efficiency and the viewing experience.

Description

Video information intelligent editing processing method based on image feature extraction and analysis and cloud service system
Technical Field
The invention belongs to the technical field of video editing, and particularly relates to a video information intelligent editing processing method based on image feature extraction and analysis and a cloud service system.
Background
With the rapid development of the internet, traditional print media such as magazines and newspapers can no longer satisfy the public's demand for watching sports events. Sports-event video arose to meet this demand: by watching the video, viewers can directly and vividly follow the course of a competition, which greatly enhances the viewing experience for sports fans.
However, sports-event videos are typically long, and fast-paced modern life leaves many sports fans without the time or energy to watch an entire event. Moreover, some fans follow only a single favourite athlete and prefer to watch just that athlete's competition segments. Clipping sports-event videos is therefore highly desirable.
Disclosure of Invention
To this end, the invention provides an intelligent video-information editing method based on image feature extraction and analysis, and a cloud service system.
The purpose of the invention can be realized by the following technical scheme:
the first aspect of the embodiment of the invention provides a video information intelligent clipping processing method based on image feature extraction and analysis, which comprises the following steps:
S1, event-image segmentation of the sports-event video: acquire the duration of the sports-event video, segment the video into a number of event images at a set segmentation frame interval, and record the timestamp of each event image within the event-video duration;
S2, preprocess the segmented event images, retain the target event images, and number the retained target event images 1, 2, ..., n in the order of their positions within the event-video duration;
S3, determine the competitor to which each target event image belongs: count the number of people in each target event image, focus on each person's facial region to extract the face image, compare each face image with the face images of the competitors registered for the event video, and determine the competitor name corresponding to each target event image;
S4, construct each competitor's target-event-image set: compare the competitor names of the target event images, gather the images corresponding to the same competitor, and obtain the target-event-image set of each competitor;
S5, clip the event video according to each competitor's target-event-image set and the timestamps of those images within the event-video duration, obtaining the competition video segments of each competitor;
S6, merge each competitor's competition video segments into that competitor's comprehensive competition video, forming a competition-video information base;
S7, extract and push the competition video of a target competitor: acquire in real time the competitor name a user enters on the competition-video playing input platform, record it as the target competitor name, match it against the competitor names in the competition-video information base, extract the comprehensive competition video of the successfully matched target competitor, and push it to the corresponding user.
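As an illustrative sketch of step S1 (not part of the patent text; all identifiers are hypothetical), segmenting the video at a fixed frame interval while recording per-image timestamps could look like this:

```python
from dataclasses import dataclass

@dataclass
class EventImage:
    index: int        # sequence number of the sampled event image
    timestamp: float  # position (seconds) within the event-video duration

def segment_video(duration_s, fps, frames_per_sample):
    """Step S1: sample one event image every `frames_per_sample` frames and
    record its timestamp, so clip positions can be recovered later (S5)."""
    step = frames_per_sample / fps       # seconds between sampled images
    images, t, i = [], 0.0, 1
    while t < duration_s:
        images.append(EventImage(index=i, timestamp=round(t, 3)))
        t += step
        i += 1
    return images

# a 2 s video at 25 fps, sampled every 10 frames -> images at 0.4 s intervals
frames = segment_video(duration_s=2.0, fps=25.0, frames_per_sample=10)
```

The recorded timestamps are exactly what steps S5 and F1-F3 later consume as clip boundaries.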
According to an alternative implementation of the first aspect of the present invention, the event images segmented in S2 are preprocessed as follows:
H1, attempt to extract a human body contour from each event image;
H2, if no body contour can be extracted from an event image, no person is present and the image is rejected; if a body contour can be extracted, a person is present and the image is retained.
According to an alternative embodiment of the first aspect of the present invention, a target event image is an event image in which a person is present.
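A minimal sketch of the H1-H2 filter, with the body-contour detector stubbed out (a real system would use an actual person detector; everything here is illustrative):

```python
def extract_body_contours(image):
    """Stand-in for a real person detector (step H1); here each image dict
    is assumed to carry precomputed body contours."""
    return image.get("contours", [])

def preprocess(event_images):
    """Step H2: keep only event images in which a body contour was found."""
    return [img for img in event_images if extract_body_contours(img)]

frames = [
    {"id": 1, "contours": ["person"]},            # person present, retained
    {"id": 2, "contours": []},                    # empty scene, rejected
    {"id": 3, "contours": ["person", "person"]},  # people present, retained
]
kept = preprocess(frames)
```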
According to an alternative embodiment of the first aspect of the present invention, in S3 the competitor name corresponding to each target event image is determined according to the number of people in the image. If exactly one person appears in a target event image, the determination proceeds as follows:
A1, compare the face image of that person with the face images of the competitors registered for the event video; if the comparison succeeds, the target event image belongs to the matched competitor, whose name is taken as the competitor name of that image;
A2, if the comparison fails, the target event image belongs to no competitor and is rejected.
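Steps A1-A2 reduce to a nearest-neighbour comparison against the registered competitors' faces. The sketch below uses embedding distances to stand in for a real face-recognition backend; the names and threshold are assumptions, not part of the patent:

```python
def match_face(face_embedding, roster, threshold=0.6):
    """Step A1: compare one extracted face against the registered
    competitors' faces and return the best-matching competitor name, or
    None (step A2: the image is rejected) if nothing matches closely."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    name, ref = min(roster.items(), key=lambda kv: dist(face_embedding, kv[1]))
    return name if dist(face_embedding, ref) <= threshold else None

roster = {"Athlete A": [0.0, 0.0], "Athlete B": [1.0, 1.0]}
single_person = match_face([0.1, 0.0], roster)   # close to "Athlete A"
bystander = match_face([5.0, 5.0], roster)       # matches no competitor
```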
According to an alternative embodiment of the first aspect of the present invention, if more than one person appears in a target event image, the determination proceeds as follows:
B1, compare the face image of each person in the target event image with the face images of the competitors registered for the event video, and count the number of successful matches;
B2, if the number of matched persons is zero, the image belongs to no competitor and is rejected;
B3, if exactly one person is matched, only one competitor appears in the image, and that competitor's name is taken as the competitor name of the image;
B4, if more than one person is matched but fewer than the total number of people in the image, both competitors and non-competitors are present; blur the bodies of the non-competitors so that only the competitors remain, select the subject competitor from the remaining competitors, and take the subject competitor's name as the competitor name of the image;
B5, if the number of matched persons equals the number of people in the image, every person present is a competitor; select the subject competitor from them as in B4, and take the subject competitor's name as the competitor name of the image.
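The B1-B5 case analysis can be summarised as a small decision function (illustrative only; the subject-competitor selection required by B4/B5 is a separate step):

```python
def classify_image(person_matches):
    """Steps B1-B5: `person_matches` holds one entry per person detected in
    the target event image -- the matched competitor name, or None for a
    person who matched no registered competitor."""
    matched = [n for n in person_matches if n is not None]
    if not matched:
        return ("reject", None)          # B2: no competitor present at all
    if len(matched) == 1:
        return ("label", matched[0])     # B3: exactly one competitor
    # B4/B5: several competitors (with or without bystanders); the subject
    # competitor must then be chosen by the subject-coefficient analysis
    return ("select_subject", matched)

r1 = classify_image([None, None])                  # rejected
r2 = classify_image(["Athlete A", None])           # labelled directly
r3 = classify_image(["Athlete A", "Athlete B"])    # needs subject analysis
```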
According to an alternative embodiment of the first aspect of the present invention, the subject competitor is selected from the retained competitors in B4 as follows:
C1, number the retained competitors 1, 2, ..., j, ..., m; locate the body region of each competitor in the target event image and the position of the image's central region, and record the distance between each competitor's body region and the central region as that competitor's position distance, forming the position-distance set L = (l1, l2, ..., lj, ..., lm), where lj denotes the position distance of the j-th competitor;
C2, extract the whole-body contour of each competitor from the target event image to obtain the area it occupies in the image, forming the body-contour area set s = (s1, s2, ..., sj, ..., sm), where sj denotes the area occupied by the body contour of the j-th competitor, and record the total area of the target event image as S;
C3, calculate the subject coefficient of each competitor in the target event image from the competitor's position distance, the area occupied by the competitor's body contour, and the total area of the image;
C4, select the competitor with the largest subject coefficient as the subject competitor.
According to an alternative embodiment of the first aspect of the present invention, the subject coefficient of each competitor in the target event image is given by a formula that the original publication reproduces only as an image (Figure BDA0003067631250000051), in which ηj denotes the subject coefficient of the j-th competitor, computed from the position distance lj, the body-contour area sj, and the total image area S.
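Since the exact formula survives only as an image in the source, the sketch below uses a hypothetical stand-in consistent with C1-C4: the coefficient grows with the share of the frame the competitor occupies (sj / S) and shrinks with the position distance lj from the image centre. All names and the formula itself are assumptions, not the patent's actual expression:

```python
def subject_coefficient(l_j, s_j, S):
    """Hypothetical subject coefficient: larger for a competitor who fills
    more of the frame (s_j / S) and stands nearer the image centre (small
    l_j). The patent's real formula is available only as an image."""
    return (s_j / S) / (1.0 + l_j)

def pick_subject(competitors, total_area):
    """Step C4: the competitor with the largest subject coefficient."""
    return max(competitors,
               key=lambda c: subject_coefficient(c["l"], c["s"], total_area))

players = [
    {"name": "j=1", "l": 10.0, "s": 5000.0},   # peripheral, smaller
    {"name": "j=2", "l": 2.0,  "s": 8000.0},   # central and large
]
subject = pick_subject(players, total_area=100000.0)
```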
According to an alternative embodiment of the first aspect of the present invention, in S5 the event video is clipped according to each competitor's target-event-image set and the timestamps of those images within the event-video duration, as follows:
F1, compare the timestamps of the competitor's target event images within the event-video duration and check for images that are consecutive on the timeline; if such images exist, count the continuous timelines and gather the target event images belonging to each, every continuous timeline corresponding to one competition video segment;
F2, sort the target event images of each continuous timeline by their timestamps within the event-video duration; take the timestamp of the first image in the ordering as the clip start position, and the timestamp of the last image as the clip end position, of the competition video segment corresponding to that continuous timeline within the event video;
F3, clip the event video according to the clip start and end positions of the competition video segment corresponding to each continuous timeline.
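A sketch of the F1-F3 timeline grouping, assuming images were sampled at a fixed step so that consecutive timestamps differ by exactly that step (an assumption this illustrative code makes explicit):

```python
def clip_ranges(timestamps, step):
    """Steps F1-F3: group one competitor's target-image timestamps into runs
    that are consecutive on the timeline (successive images exactly one
    sampling step apart); the first and last timestamp of each run give the
    clip start and end positions of one competition video segment."""
    runs, start = [], None
    ts = sorted(timestamps)
    for prev, cur in zip(ts, ts[1:] + [None]):
        if start is None:
            start = prev
        if cur is None or abs(cur - prev - step) > 1e-9:
            runs.append((start, prev))   # run ended: emit clip boundaries
            start = None
    return runs

# images sampled every 0.4 s; the competitor appears in two separate passages
ranges = clip_ranges([0.0, 0.4, 0.8, 2.0, 2.4], step=0.4)
```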
According to an alternative embodiment of the first aspect of the present invention, merging the competition video segments of each competitor in S6 further includes merging adjacent segments into a complete segment, as follows:
D1, number the competition video segments of each competitor in the order of their positions within the full event-video duration;
D2, following that numbering, obtain the interval duration between each pair of adjacent competition video segments and compare it with a set minimum interval duration; if the interval between two adjacent segments is shorter than the minimum, the gap is negligible: take the video end timestamp of the earlier segment and the video start timestamp of the later segment as the clip start and end positions of the interval video within the full event video, clip the interval video accordingly, and insert it between the two adjacent segments so that they merge into one complete competition video segment.
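The D1-D2 gap test can be sketched as an interval merge; the clipping of the interval video itself is elided and only the boundary bookkeeping is shown (all names are illustrative):

```python
def merge_segments(segments, min_gap):
    """Steps D1-D2: `segments` are (start, end) pairs already ordered by
    position in the full event video; any gap shorter than `min_gap` is
    treated as negligible -- the interval video between the two segments is
    re-inserted, so the neighbours merge into one complete segment."""
    merged = [list(segments[0])]
    for start, end in segments[1:]:
        if start - merged[-1][1] < min_gap:
            merged[-1][1] = end          # absorb the short gap
        else:
            merged.append([start, end])
    return [tuple(seg) for seg in merged]

# a 1 s gap (below the 5 s minimum) is bridged; the 40 s gap stays a cut
combined = merge_segments([(0.0, 10.0), (11.0, 20.0), (60.0, 70.0)],
                          min_gap=5.0)
```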
A second aspect of the embodiment of the present invention provides a cloud service system comprising a processor, a machine-readable storage medium and a network interface connected through a bus system. The network interface is communicatively connected with at least one intelligent video-clipping processing device; the machine-readable storage medium stores programs, instructions or code; and the processor executes the programs, instructions or code in the machine-readable storage medium to perform the video-information intelligent clipping method based on image feature extraction and analysis according to the present invention.
Based on any of the above aspects, the invention has the following beneficial effects:
1. The invention segments the sports-event video into a number of event images, identifies the competitor in each image through facial-feature extraction and analysis, gathers the images belonging to the same competitor, and clips the event video by the timestamps of those images to obtain each competitor's competition video segments, which are then merged into that competitor's comprehensive competition video. Sports-event videos are thus clipped intelligently: fans who only want a favourite athlete's competition segments are served directly, and all fans gain a wider choice of segments, improving the selectivity of video viewing, viewing efficiency and the viewing experience.
2. Before facial-feature extraction and analysis, every event image is preprocessed: images without people are removed and only the target event images containing people are retained. This gives the facial-feature analysis a focused target and avoids wasted work on empty images, which would otherwise reduce processing efficiency.
3. In determining the competitor name of each target event image, the method considers every possible configuration of people in the image and gives a targeted decision rule for each, making the result closer to reality and more reliable.
Drawings
The invention is further illustrated by the accompanying drawing; the embodiment shown in the drawing does not limit the invention in any way, and a person skilled in the art can obtain other drawings from it without inventive effort.
FIG. 1 is a flow chart of the method steps of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will now be described clearly and completely with reference to the drawings. The described embodiments are only some, not all, of the embodiments of the invention; all other embodiments obtained by a person skilled in the art without creative effort fall within the protection scope of the invention.
Referring to fig. 1, a first aspect of the embodiment of the present invention provides a video information intelligent clipping processing method based on image feature extraction and analysis, including the following steps:
S1, event-image segmentation of the sports-event video: acquire the duration of the sports-event video, segment the video into a number of event images at a set segmentation frame interval, and record the timestamp of each event image within the event-video duration;
S2, event-image preprocessing: preprocess the segmented event images as follows:
H1, attempt to extract a human body contour from each event image;
H2, if no body contour can be extracted from an event image, no person is present and the image is rejected; if a body contour can be extracted, a person is present and the image is retained;
retain the target event images, a target event image being an event image in which a person is present, and number the retained target event images 1, 2, ..., n in the order of their positions within the event-video duration.
In this embodiment, preprocessing removes the images without people and retains only the target event images containing people, giving the subsequent facial-feature extraction and analysis a focused target and avoiding wasted work on empty images, which would otherwise reduce processing efficiency;
S3, determine the competitor to which each target event image belongs: count the number of people in each target event image, focus on each person's facial region to extract the face image, compare each face image with the face images of the competitors registered for the event video, and determine the competitor name corresponding to each target event image. If exactly one person appears in a target event image, the determination proceeds as follows:
A1, compare the face image of that person with the face images of the competitors registered for the event video; if the comparison succeeds, the target event image belongs to the matched competitor, whose name is taken as the competitor name of that image;
A2, if the comparison fails, the target event image belongs to no competitor and is rejected;
If several people appear in a target event image, the determination proceeds as follows:
B1, compare the face image of each person in the target event image with the face images of the competitors registered for the event video, and count the number of successful matches;
B2, if the number of matched persons is zero, the image belongs to no competitor and is rejected;
B3, if exactly one person is matched, only one competitor appears in the image, and that competitor's name is taken as the competitor name of the image;
b4, if the number of the successfully compared persons is more than one and less than the number of the persons in the target event image, it indicates that both the participating players and the non-participating players exist in the target event image, then the target event image is subjected to fuzzification processing of the non-participating players, the bodies of the non-participating players are blurred, only the bodies of the participating players are reserved, and the main participating players are separated from the reserved participating players, and the specific analysis process is as follows:
c1, numbering each reserved competitor, respectively marking the competitor as 1,2, a, j, a, m, positioning the body area of each competitor in the target competition image, and simultaneously acquiring the position of the central area of the target competition image, thereby counting the distance between the position of the body area of each competitor in the target competition image and the position of the central area of the target competition image, wherein the distance is marked as a position distance, and further forming a competitor position distance set l (l1, l2, a, lj, a, lm) which is expressed as the position distance corresponding to the jth competitor;
c2, extracting the whole-body contour of each participating athlete from the target event image to obtain the area occupied by each athlete's body contour, forming a body contour area set s(s1, s2, …, sj, …, sm), where sj denotes the area occupied by the body contour of the jth participating athlete in the target event image, and acquiring the total area of the target event image, recorded as S;
c3, calculating the subject coefficient ηj of each participating athlete in the target event image from that athlete's position distance lj, body contour area sj, and the total image area S (the formula itself appears only as an image in the original publication and is not reproduced here), where ηj denotes the subject coefficient of the jth participating athlete in the target event image;
c4, screening out the participating athlete with the largest subject coefficient as the main participating athlete, and acquiring that athlete's name as the name of the participating athlete corresponding to the target event image;
b5, if the number of successfully compared persons equals the number of persons in the target event image, it indicates that all persons in the target event image are participating athletes; the main participating athlete is then analyzed from the athletes in the target event image according to the method of step b4, and that athlete's name is acquired as the name of the participating athlete corresponding to the target event image;
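A minimal Python sketch of the branching in steps b1–b5, with face comparison abstracted to a roster lookup. The function names, the roster layout, and the exact form of the subject coefficient are assumptions for illustration (the patent gives the coefficient formula only as an image):

```python
def subject_coefficients(distances, areas, total_area):
    # Assumed form of the subject coefficient eta_j: it grows with the body
    # contour area s_j and shrinks with the position distance l_j, matching
    # the qualitative description in steps c1-c4 (the exact formula in the
    # patent is available only as an image).
    max_d = max(distances)
    return [(a / total_area) * (1.0 - d / (max_d + 1e-9))
            for d, a in zip(distances, areas)]

def name_for_image(people, roster, total_area=100.0):
    """people: list of (face_id, position_distance, contour_area) tuples,
    one per person in the target event image; roster: face_id -> athlete name.
    Returns the athlete name per steps b1-b5, or None when the image is
    rejected (step b2)."""
    matched = [(fid, d, a) for fid, d, a in people if fid in roster]  # b1
    if not matched:                      # b2: no competitor matched
        return None
    if len(matched) == 1:                # b3: exactly one competitor
        return roster[matched[0][0]]
    # b4/b5: several competitors -> pick the largest subject coefficient
    etas = subject_coefficients([d for _, d, _ in matched],
                                [a for _, _, a in matched], total_area)
    best = max(range(len(etas)), key=etas.__getitem__)
    return roster[matched[best][0]]
```

The same selection logic covers both b4 (some persons unmatched, already blurred out) and b5 (all persons matched), since only matched athletes enter the coefficient comparison.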
in judging the names of the participating athletes corresponding to the target event images, the method considers every possible configuration of persons in an image and provides a targeted judgment basis for each, so that the judgment result is closer to reality and more reliable;
s4, constructing a target event image set corresponding to the competitor, namely comparing the names of the competitors corresponding to the target event images, so as to collect the target event images corresponding to the same competitor and obtain the target event image set corresponding to each competitor;
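The grouping in step S4 can be sketched as a dictionary keyed by athlete name; the `(name, timestamp)` pair layout is an assumption for illustration:

```python
from collections import defaultdict

def group_by_player(labelled_images):
    """labelled_images: iterable of (athlete_name, timestamp) pairs, one per
    target event image. Returns athlete_name -> sorted list of timestamps,
    i.e. the target event image set of each participating athlete."""
    groups = defaultdict(list)
    for name, ts in labelled_images:
        groups[name].append(ts)
    return {name: sorted(stamps) for name, stamps in groups.items()}
```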
s5, editing the event video, namely editing the event video according to the target event image set corresponding to each competitor and the timestamp of each target event image in the event video time length, wherein the specific editing method comprises the following steps:
f1, comparing the timestamps of the target event images corresponding to each participating athlete within the event video duration and analyzing whether target event images with continuous timelines exist; if so, counting the number of continuous timelines and collecting the target event images belonging to each, each continuous timeline corresponding to one competition video segment;
f2, sorting the target event images of each continuous timeline by their timestamps within the event video duration, extracting the timestamp of the first-ranked target event image as the clipping start position of the competition video segment corresponding to that timeline in the event video, and extracting the timestamp of the last-ranked target event image as the clipping end position of that segment;
f3, clipping the event video according to the clipping start and end positions of each continuous timeline's competition video segment, thereby obtaining the competition video segments corresponding to each participating athlete;
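Steps f1–f3 amount to splitting each athlete's sorted timestamps into runs whose gap equals the sampling step. A hedged sketch, where `frame_step` stands in for the set video division frame interval:

```python
def clip_ranges(timestamps, frame_step):
    """Group an athlete's target-event-image timestamps into continuous
    timelines (f1) and return one (clip_start, clip_end) pair per timeline
    (f2), i.e. one competition video segment to cut from the event video (f3)."""
    if not timestamps:
        return []
    ts = sorted(timestamps)
    ranges, start, prev = [], ts[0], ts[0]
    for t in ts[1:]:
        if t - prev > frame_step:        # timeline breaks: close the segment
            ranges.append((start, prev))
            start = t
        prev = t
    ranges.append((start, prev))
    return ranges
```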
s6, merging the competition video segments corresponding to each participating athlete into a comprehensive competition video segment for that athlete, and forming a competition video segment information base from the comprehensive segments; the merging process also includes the complete merging of adjacent competition video segments, the specific merging method being as follows:
d1, numbering a plurality of competition video segments corresponding to each competition player according to the sequence of the positions of the competition video segments in the duration of the whole competition video;
d2, according to the numbering order, acquiring the interval duration between each pair of adjacent competition video segments and comparing it with a set minimum interval duration; if the interval duration of two adjacent segments is less than the minimum, the interval between them is too small to matter; in that case the video end timestamp of the earlier segment and the video start timestamp of the later segment are taken as the clipping start and end positions of the interval video within the whole event video duration, the interval video is clipped out accordingly, and it is inserted between the two adjacent segments so that they are combined into one complete competition video segment;
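Steps d1–d2 are equivalent to a classic interval-merging pass; re-inserting the short interval video is modelled here simply by extending the earlier segment's end (the function name and `(start, end)` representation are assumptions):

```python
def merge_segments(segments, min_gap):
    """segments: (start, end) pairs of one athlete's competition video
    segments. Adjacent segments whose interval is shorter than min_gap are
    merged into one complete segment, as in step d2."""
    if not segments:
        return []
    ordered = sorted(segments)           # d1: order by position in the video
    merged = [ordered[0]]
    for start, end in ordered[1:]:
        prev_start, prev_end = merged[-1]
        if start - prev_end < min_gap:   # d2: interval negligible -> merge
            merged[-1] = (prev_start, end)
        else:
            merged.append((start, end))
    return merged
```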
in this embodiment, when the competition video segments of each participating athlete are merged, adjacent segments separated by a sufficiently short interval are combined completely, which preserves the integrity of the video segments without affecting the athlete's main status in them, and improves playback fluency for the user;
in this embodiment, after the comprehensive competition video segment of each participating athlete is obtained, it is further deep-clipped according to the athlete's different competition actions to obtain an action video segment for each competition action; deep clipping further refines the competition video segments and gives later users a more diverse range of segments to choose from;
s7, extracting and pushing a competition video of the target participating athlete: the name of the athlete the user wants to see and the desired competition action, entered in real time on the competition video playing input platform, are acquired and recorded as the target athlete name and the target competition action; the target athlete name is matched against the athlete names in the competition video segment information base, the action video segments of the successfully matched target athlete are extracted, the target competition action is matched against those action video segments, and the action video segment of the target competition action is pushed to the corresponding user.
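The two-stage match in step S7 (athlete name first, then competition action) can be sketched with a nested dictionary standing in for the competition video segment information base; all names here are illustrative:

```python
def push_clip(library, player_name, action):
    """library: athlete name -> {competition action: video segment handle}.
    Returns the requested action video segment, or None when either the
    name match or the action match fails."""
    player_clips = library.get(player_name)   # match the target athlete name
    if player_clips is None:
        return None
    return player_clips.get(action)           # match the target action
```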
By acquiring in real time the athlete name entered by the user on the competition video playing input platform, this embodiment intelligently pushes the competition video segments the user wants to see, making viewing more convenient and demonstrating a high degree of intelligence and strong practicability.
A second aspect of the embodiment of the present invention provides a cloud service system, which includes a processor, a machine-readable storage medium, and a network interface, where the machine-readable storage medium, the network interface, and the processor are connected through a bus system, the network interface is configured to be in communication connection with at least one video information intelligent clip processing device, the machine-readable storage medium is configured to store a program, an instruction, or a code, and the processor is configured to execute the program, the instruction, or the code in the machine-readable storage medium to perform the video information intelligent clip processing method based on image feature extraction and analysis according to the present invention.
The invention divides a sports event video into a plurality of event images, extracts and analyzes the facial features of the persons in each image to judge which participating athlete each image belongs to, collects the images of the same athlete, and clips the event video according to each image's timestamp within the event video duration to obtain the competition video segments of each athlete, which are then merged into that athlete's comprehensive competition video segment. This realizes intelligent clipping of sports event videos: on the one hand it meets the needs of sports fans who only want to watch the segments of their favorite athletes, and on the other hand it offers a choice of viewing segments, improving the selectivity, efficiency, and experience of video watching.
The foregoing is merely exemplary and illustrative of the present invention and various modifications, additions and substitutions may be made by those skilled in the art to the specific embodiments described without departing from the scope of the invention as defined in the following claims.

Claims (10)

1. A video information intelligent clipping processing method based on image feature extraction and analysis, characterized by comprising the following steps:
s1, dividing a sports event video into event images, namely acquiring the duration of the sports event video, dividing the video into a plurality of event images according to a set video division frame number, and acquiring the timestamp of each event image within the event video duration;
s2, preprocessing the segmented event images and retaining the target event images, which are numbered 1, 2, …, n in the order of their position points within the event video duration;
S3, judging the participating athlete each target event image belongs to, namely counting the number of persons in each target event image, focusing on the face area of each person, extracting each face image, comparing it with the face images of the participating athletes corresponding to the event video, and judging the name of the participating athlete corresponding to each target event image;
s4, constructing a target event image set corresponding to the competitor, namely comparing the names of the competitors corresponding to the target event images, so as to collect the target event images corresponding to the same competitor and obtain the target event image set corresponding to each competitor;
s5, editing the event video, namely editing the event video according to the target event image set corresponding to each competitor and the timestamp of each target event image in the event video time length to obtain a plurality of event video sections corresponding to each competitor;
s6, merging the competition video segments corresponding to each participating athlete, namely combining the plurality of competition video segments of each athlete into that athlete's comprehensive competition video segment, and forming a competition video segment information base;
s7, extracting and pushing a competition video of the target participating athlete: the name of the athlete the user wants to see, entered in real time on the competition video playing input platform, is acquired and recorded as the target athlete name; the target athlete name is matched against the athlete names in the competition video segment information base, the comprehensive competition video segment of the successfully matched target athlete is extracted, and it is pushed to the corresponding user.
2. The intelligent video information clipping processing method based on image feature extraction and analysis as claimed in claim 1, wherein: in S2, preprocessing the plurality of event images, where the preprocessing includes the following steps:
h1, extracting the body contour of each event image;
h2, if a person's body contour cannot be extracted from an event image, no person exists in the image and it is rejected; if a body contour can be extracted, a person exists in the image and it is retained.
3. The intelligent video information clipping processing method based on image feature extraction and analysis as claimed in claim 1, wherein: the target event image refers to an event image in which a person is present.
4. The intelligent video information clipping processing method based on image feature extraction and analysis as claimed in claim 1, wherein: in S3, the names of the competitors corresponding to the target event images are determined according to the number of the persons in the target event images, wherein if only one person exists in a certain target event image, the specific determination method performs the following steps:
a1, comparing the face image of the person in the target event image with the face images of the participating athletes corresponding to the event video; if the comparison succeeds, the target event image belongs to the successfully compared athlete, whose name is acquired as the name of the participating athlete corresponding to the target event image;
a2, if the comparison fails, the target event image does not belong to any participating athlete and is rejected.
5. The intelligent video information clipping processing method based on image feature extraction and analysis as claimed in claim 1, wherein: in S3, the names of the competitors corresponding to the target event images are determined according to the number of the persons in the target event images, wherein if a plurality of persons exist in a certain target event image, the specific determination method performs the following steps:
b1, comparing the face image of each person in the target event image with the face images of the participating athletes corresponding to the event video, and counting the number of persons whose comparison succeeds;
b2, if the number of successfully compared persons is zero, the target event image does not belong to any competitor and is rejected;
b3, if the number of successfully compared persons is not zero, comparing it with the number of persons in the target event image; if exactly one person is successfully compared, it indicates that only one participating athlete exists in the target event image, and that athlete's name is acquired as the name of the participating athlete corresponding to the target event image;
b4, if the number of successfully compared persons is more than one but less than the number of persons in the target event image, it indicates that both participating athletes and non-participants exist in the target event image; the non-participants are then blurred out so that only the bodies of the participating athletes are retained, the main participating athlete is separated from the retained participating athletes, and the name of the main participating athlete is acquired as the name of the participating athlete corresponding to the target event image;
b5, if the number of successfully compared persons equals the number of persons in the target event image, it indicates that all persons in the target event image are participating athletes; the main participating athlete is then analyzed from the athletes in the target event image according to the method of step b4, and that athlete's name is acquired as the name of the participating athlete corresponding to the target event image.
6. The intelligent video information clipping processing method based on image feature extraction and analysis as claimed in claim 5, wherein: in the step B4, the main players are separated from the retained players, and the specific analysis process is as follows:
c1, numbering the retained participating athletes as 1, 2, …, j, …, m, locating the body area of each participating athlete in the target event image, acquiring the position of the central area of the target event image, and computing the distance between the position of each athlete's body area and the central area, recorded as the position distance, thereby forming a position distance set L(l1, l2, …, lj, …, lm), where lj denotes the position distance corresponding to the jth participating athlete;
c2, extracting the whole-body contour of each participating athlete from the target event image to obtain the area occupied by each athlete's body contour, forming a body contour area set s(s1, s2, …, sj, …, sm), where sj denotes the area occupied by the body contour of the jth participating athlete in the target event image, and acquiring the total area of the target event image, recorded as S;
c3, calculating the subject coefficient of each participating athlete in the target event image according to that athlete's position distance, the area occupied by the athlete's body contour in the target event image, and the total area of the target event image;
and C4, selecting the participating athlete with the largest subject coefficient as the main participating athlete.
7. The intelligent video information clipping processing method based on image feature extraction and analysis as claimed in claim 6, wherein: the subject coefficient ηj of the jth participating athlete in the target event image is calculated by a formula combining the position distance lj, the body contour area sj, and the total image area S (the formula appears only as an image in the original publication and is not reproduced here).
8. The intelligent video information clipping processing method based on image feature extraction and analysis as claimed in claim 1, wherein: in the step S5, the event video is edited according to the target event image sets corresponding to the competitors and the timestamps of the target event images in the event video duration, and the specific editing method includes the following steps:
f1, comparing the timestamps of the target event images corresponding to each participating athlete within the event video duration and analyzing whether target event images with continuous timelines exist; if so, counting the number of continuous timelines and collecting the target event images belonging to each, each continuous timeline corresponding to one competition video segment;
f2, sorting the target event images of each continuous timeline by their timestamps within the event video duration, extracting the timestamp of the first-ranked target event image as the clipping start position of the competition video segment corresponding to that timeline in the event video, and extracting the timestamp of the last-ranked target event image as the clipping end position of that segment;
and F3, clipping the event video according to the clipping start and end positions of each continuous timeline's corresponding competition video segment.
9. The intelligent video information clipping processing method based on image feature extraction and analysis as claimed in claim 1, wherein: in the step S6, the merging of the plurality of competition video segments corresponding to each participating athlete further includes completely merging adjacent competition video segments, the specific merging method being as follows:
d1, numbering a plurality of competition video segments corresponding to each competition player according to the sequence of the positions of the competition video segments in the duration of the whole competition video;
d2, according to the numbering order, acquiring the interval duration between each pair of adjacent competition video segments and comparing it with a set minimum interval duration; if the interval duration of two adjacent segments is less than the minimum, the interval between them is too small to matter; in that case the video end timestamp of the earlier segment and the video start timestamp of the later segment are taken as the clipping start and end positions of the interval video within the whole event video duration, the interval video is clipped out accordingly, and it is inserted between the two adjacent segments so that they are combined into one complete competition video segment.
10. A cloud service system, characterized by: comprising a processor, a machine-readable storage medium, and a network interface, the machine-readable storage medium, the network interface, and the processor are connected via a bus system, the network interface is used for being communicatively connected with at least one video information intelligent clip processing device, the machine-readable storage medium is used for storing programs, instructions or codes, and the processor is used for executing the programs, instructions or codes in the machine-readable storage medium to execute the video information intelligent clip processing method based on image feature extraction and analysis according to any one of claims 1 to 9.
CN202110530684.6A 2021-05-15 2021-05-15 Video information intelligent editing processing method based on image feature extraction and analysis and cloud service system Withdrawn CN113271495A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110530684.6A CN113271495A (en) 2021-05-15 2021-05-15 Video information intelligent editing processing method based on image feature extraction and analysis and cloud service system


Publications (1)

Publication Number Publication Date
CN113271495A true CN113271495A (en) 2021-08-17

Family

ID=77230922



Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101848337A (en) * 2009-03-25 2010-09-29 索尼公司 Image processing equipment, image processing method and program
US20120027379A1 (en) * 2010-01-29 2012-02-02 Raymond Thompson Video processing methods and systems
CN105144741A (en) * 2013-03-05 2015-12-09 英国电讯有限公司 Video data provision
CN105631418A (en) * 2015-12-24 2016-06-01 浙江宇视科技有限公司 People counting method and device
US20200066090A1 (en) * 2018-08-21 2020-02-27 Highlight Games Limited Dynamic virtual scratch card gaming system
CN111435494A (en) * 2019-01-13 2020-07-21 杨杰 Community monitoring big data reporting method
CN112257628A (en) * 2020-10-29 2021-01-22 厦门理工学院 Method, device and equipment for identifying identities of outdoor competition athletes



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210817