CN114283356A - Acquisition and analysis system and method for moving image - Google Patents

Acquisition and analysis system and method for moving image

Publication number
CN114283356A
Authority
CN
China
Prior art keywords
image
scene
target
unit
information
Prior art date
Legal status
Granted
Application number
CN202111490973.4A
Other languages
Chinese (zh)
Other versions
CN114283356B (en)
Inventor
曹栋亮
孙伟
Current Assignee
Shanghai Weidi Technology Group Co ltd
Original Assignee
Shanghai Weidi Technology Group Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Weidi Technology Group Co ltd
Priority to CN202111490973.4A
Publication of CN114283356A
Application granted
Publication of CN114283356B
Legal status: Active
Anticipated expiration


Abstract

The invention discloses a method for acquiring and analyzing moving images, which comprises the following steps. Step S100: extracting a video stream to be processed and performing framing processing to obtain a plurality of frame images; respectively capturing the different scene information in the plurality of frame images and integrating the scene information. Step S200: performing segment division of the video stream to be processed based on the scene integration information. Step S300: carrying out target scene discrimination on each image frame and selecting expected images from the plurality of video sequences. Step S400: identifying the motion condition of the target scene for part of the expected images and adjusting the selection accordingly. Step S500: pushing the finally selected expected images to the user, who can choose among them according to his own preference. In order to better implement the method, a system for acquiring and analyzing moving images is also provided. With the method and the device, when the picture scene in the video is text or a chart, the recognition of the picture scene can correspondingly be carried out as recognition of the text and chart information.

Description

Acquisition and analysis system and method for moving image
Technical Field
The invention relates to the technical field of video stream content acquisition and processing, in particular to a system and a method for acquiring and analyzing moving images.
Background
For a section of video stream, if a user wants to acquire some of the picture scenes appearing in it, manual screenshots are usually taken; however, the instant of a manual screenshot may well fall exactly where the picture changes, so the expected image cannot be captured accurately, and if a dynamic scene appears in the video stream and a photo of the dynamic scene with an excellent viewing effect is to be acquired, the difficulty increases further. Meanwhile, a video stream contains a large number of images; if images are acquired only through manual screenshots, the video has to be rewound repeatedly to the moment of the desired scene, which reduces the efficiency of image acquisition and affects the viewing effect of the images finally obtained.
Disclosure of Invention
The present invention is directed to a system and a method for acquiring and analyzing moving images, so as to solve the problems in the background art.
In order to solve the technical problems, the invention provides the following technical scheme: a method for collecting and analyzing moving images comprises the following steps:
step S100: extracting a video stream to be processed, and performing framing processing on the video stream to be processed to obtain a plurality of frame images; respectively capturing the different scene information in the plurality of frame images to obtain the respective scene integration information of the plurality of frame images in the video stream to be processed; when the picture scene in the video is text or a chart, the recognition of the picture scene can correspondingly be performed as recognition of the text and chart information;
step S200: segmenting the video stream to be processed into a plurality of video sequences based on the obtained scene integration information of a plurality of frames of images;
step S300: carrying out target scene discrimination on each image frame in a plurality of video sequences to obtain a discrimination result of a target scene; respectively selecting expected images from the plurality of video sequences based on the discrimination result;
step S400: identifying the motion condition of the target scenery for the part of the expected images selected in the step S300, and adjusting the selected expected images based on the result obtained by the motion condition identification;
step S500: and pushing the finally selected expected image to the user, wherein the user can select the image from the expected images based on self-intention.
Further, the step of capturing and integrating the scene information in the plurality of image frames in step S100 includes:
step S101: respectively obtaining the background picture color types of the plurality of frame images, and calculating the distribution area of each background picture color type; setting a color distribution area threshold, and marking for rejection the color type information whose distribution area is smaller than the color distribution area threshold; extracting the outlines of the scenes in the plurality of frame images to obtain n scene outlines; setting a contour line interval threshold, comparing the interval breakpoint distances among the n scene outlines against the contour line interval threshold, and combining the scene outlines whose interval breakpoint distance is less than or equal to the contour line interval threshold into an integral scene outline;
step S102: finding the scene outline corresponding to the color type information preliminarily marked for rejection in step S101; if there is another scene outline whose interval breakpoint distance to this scene outline is less than or equal to the contour line interval threshold, removing the rejection mark; if the color type information marked for rejection has no corresponding scene outline, eliminating the color type information;
step S103: integrating the information from the step S101 to the step S102 to respectively obtain respective scene integration information M of a plurality of frames of images, wherein the form of the scene integration information M is as follows:
M = {a: {r1, r2, …, ri}, b: {e1, e2, …, ei}, c: {z1, z2, …, zi}, d: {u1, u2, …, ui}}
wherein a, b, c and d respectively represent the different scene outlines appearing in turn from left to right in the image frame; ri, ei, zi and ui respectively represent the color category sets in the different scene outlines; {r1, r2, …, ri} represents the set of color categories on scene outline a, {e1, e2, …, ei} the set of color categories on scene outline b, {z1, z2, …, zi} the set of color categories on scene outline c, and {u1, u2, …, ui} the set of color categories on scene outline d;
Capturing and integrating the scene information in each image frame serves to obtain the different scene information of all image frames in the video stream to be processed and to integrate identical scene information; that is, several scene pictures may appear in a section of video stream, and the images belonging to the same scene picture are obtained by capturing and integrating the scene information of the different pictures.
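To make the capture-and-integration step above concrete, the following Python sketch is offered purely as an illustration and not as the claimed implementation: the color quantization, the Canny-based outline extraction and the breakpoint-distance merging rule are simplifications, and the names capture_scene_info, merge_close_contours, AREA_THRESHOLD and GAP_THRESHOLD are assumptions introduced here.

```python
import cv2
import numpy as np

AREA_THRESHOLD = 500   # assumed color distribution area threshold (pixels)
GAP_THRESHOLD = 10.0   # assumed contour line interval (breakpoint distance) threshold

def merge_close_contours(contours, gap_threshold):
    """Naive stand-in for the breakpoint-distance merging of step S101."""
    merged, used = [], [False] * len(contours)
    for i, ci in enumerate(contours):
        if used[i]:
            continue
        group = [ci]
        for j in range(i + 1, len(contours)):
            if used[j]:
                continue
            # crude "interval breakpoint" distance: closest pair of contour points (slow, for illustration only)
            d = min(np.linalg.norm(p - q) for p in ci[:, 0] for q in contours[j][:, 0])
            if d <= gap_threshold:
                group.append(contours[j])
                used[j] = True
        merged.append(np.vstack(group))
    return merged

def capture_scene_info(frame_bgr):
    """Sketch of steps S101-S103: scene integration information M as
    {outline index: set of quantized color types inside that outline}."""
    # Coarse quantization so each bucket acts as one "color type"; keep only
    # color types whose distribution area reaches the threshold.
    quantized = (frame_bgr // 64).reshape(-1, 3)
    colors, areas = np.unique(quantized, axis=0, return_counts=True)
    kept_colors = {tuple(c) for c, a in zip(colors, areas) if a >= AREA_THRESHOLD}

    # Extract scene outlines from an edge map and merge close outlines (OpenCV 4.x API).
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    outlines = merge_close_contours(contours, GAP_THRESHOLD)

    # Scene integration information M: color categories present inside each outline.
    scene_info = {}
    for idx, outline in enumerate(outlines):
        mask = np.zeros(gray.shape, dtype=np.uint8)
        cv2.drawContours(mask, [outline], -1, 255, thickness=-1)
        inside = frame_bgr[mask == 255] // 64
        scene_info[idx] = {tuple(c) for c in inside} & kept_colors
    return scene_info
```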
Further, the step S200 of performing segment division on the video stream to be processed includes:
step S201: acquiring the frame rate information transmitted by the equipment, setting the equipment to transmit fps frame images every second, and collecting the fps frame images transmitted by the equipment every second;
step S202: collecting the image frames appearing within the timestamps 0~t0 of the video stream to be processed into an image frame set Q1, t0 being the first interval of the video stream to be processed; the scene integration information of all image frames in Q1 is {M11, M12, M13, …}, wherein M11, M12, M13 … respectively represent the scene integration information of the first, second, third … image frames of Q1; collecting the image frames appearing within t0~t0-1 into an image frame set Q2, t0-1 representing the next interval of time after t0; the scene integration information of all image frames in Q2 is {M21, M22, M23, …}, wherein M21, M22, M23 … respectively represent the scene integration information of the first, second, third … image frames of Q2;
step S203: comparing the scene integration information {M11, M12, M13, …} of all frames in image frame set Q1 with the scene integration information {M21, M22, M23, …} of all frames in image frame set Q2 to obtain a comparison result P; calculating and setting a comparison threshold; if the comparison result P is smaller than the comparison threshold, taking t0 as a segment dividing node;
step S204: if the comparison result P is greater than or equal to the comparison threshold, merging the image frame sets within 0~t0-1, that is, correspondingly merging the scene integration information {M11, M12, M13, …} of image frame set Q1 with the scene integration information {M21, M22, M23, …} of image frame set Q2; then collecting the image frames appearing within t0-1~t0-2 into an image frame set Q3, t0-2 representing the next interval of time after t0-1; the scene integration information of all image frames in Q3 is {M31, M32, M33, …}, wherein M31, M32, M33 respectively represent the scene integration information of the first, second and third image frames of Q3; comparing the scene integration information of image frame set Q3 with that of the frame set obtained by merging image frame set Q1 and image frame set Q2 to obtain a comparison result P; if the comparison result P is smaller than the comparison threshold, taking t0-1 as a segment dividing node; if the comparison result P is greater than or equal to the comparison threshold, merging the image frame sets within 0~t0-2; repeating the above steps until the whole video stream to be processed has been traversed and divided, obtaining a plurality of video sequences;
The video stream is divided by the obtained dividing nodes into video segments containing different scenes or scene pictures; another purpose of dividing the video stream is that, when expected images are acquired in the subsequent steps, expected images containing different scenes or scene pictures can be obtained, which is equivalent to providing different acquisition sources for the acquisition of the expected images.
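As a rough illustration of the interval-by-interval division of steps S201 to S204, the sketch below groups frames by interval, compares adjacent scene-integration summaries and either merges the interval into the running frame set or emits a dividing node. The comparison measure scene_similarity (a simple set-overlap ratio) and the threshold value are assumptions made for this sketch, since the description leaves the exact comparison formula and threshold calculation open.

```python
def scene_similarity(info_a, info_b):
    """Assumed stand-in for the comparison result P: overlap ratio of the
    color-category sets appearing in two collections of scene information."""
    set_a = {frozenset(colors) for info in info_a for colors in info.values()}
    set_b = {frozenset(colors) for info in info_b for colors in info.values()}
    if not set_a or not set_b:
        return 0.0
    return len(set_a & set_b) / len(set_a | set_b)

def divide_into_sequences(frames_by_interval, scene_info_fn, threshold=0.4):
    """Sketch of steps S201-S204: returns a list of video sequences (lists of frames).
    frames_by_interval: list of frame lists, one per interval t0, t0-1, ...
    scene_info_fn: e.g. capture_scene_info from the earlier sketch."""
    sequences = []
    current_frames, current_info = [], []
    for frames in frames_by_interval:
        info = [scene_info_fn(f) for f in frames]
        if not current_frames:
            current_frames, current_info = list(frames), info
            continue
        p = scene_similarity(current_info, info)
        if p < threshold:
            # comparison result below threshold: close the current segment here
            sequences.append(current_frames)
            current_frames, current_info = list(frames), info
        else:
            # otherwise merge the interval into the running image frame set
            current_frames += frames
            current_info += info
    if current_frames:
        sequences.append(current_frames)
    return sequences
```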
Further, step S300 includes:
step S301: connecting the image diagonals of each image frame in each video sequence to obtain two diagonal lines, taking the intersection point of the two image diagonals as the target reference point, and selecting a radius value R according to the height and width of the image frame to form a focus target area S = πR²;
step S302: calculating the different scene outline areas covered within the focus target area of each image frame in each video sequence, and taking the scene outline covering the largest area within the focus target area as the target scene of that image frame; if the total number of target scenes of all image frames in a video sequence is 1, selecting the image frame whose target scene covers the largest area within the focus target area as the expected image of that video sequence; if the definition of the image frame whose target scene covers the largest area within the focus target area is smaller than the definition threshold, selecting the image frame whose target scene covers the second largest area within the focus target area as the expected image of that video sequence, and so on, finally obtaining 1 expected image for the video sequence;
step S303: if the total number of target scenes appearing over the image frames in a video sequence is greater than or equal to 2, extracting, in time order, the image frames covering each target scene respectively; for a target scene A, if all the image frames whose target scene is A are temporally continuous image frames, and the number of these continuous image frames is greater than the set frame number threshold, selecting among them the image frame in which A covers the largest area within the focus target area as an expected image of that video sequence, and so on, finally obtaining 1 or more expected images for the video sequence;
The reason the most central part of a frame is set as the focus target area is that, in the present application, the part located at the very center of an image frame is by default the part the user is most interested in, or most wants to record, while recording video data with the equipment, and this part is taken as the target scene; the focus target area is set precisely for identifying the target scene. If only one target scene appears in a video segment, it is judged that the user kept recording the same target scene, or that the field angle moved only within a very small range while the user recorded a certain part of the scene with the equipment, and the target scene did not change; in this case the probability of information overlap between the image frames is high. If two or more target scenes appear in a video segment, it is judged that the user recorded different target scenes, or that a target scene moved within a period of time; the target scene then tends to move from the edge of the image frame toward the focus target area, the target scene is dynamic, the probability of information overlap between the image frames is low, and more images need to be collected so that the user can screen for the dynamic-effect image of the target scene he wants to capture.
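The focus-target-area test of steps S301 to S303 can be sketched as follows; the diagonal intersection of a frame is simply its center, and the radius choice, the Laplacian-variance estimate of definition (sharpness) and the helper names are assumptions made for this illustration rather than the patented implementation.

```python
import cv2
import numpy as np

def focus_target_mask(height, width, radius_scale=0.25):
    """Circular focus target area S = pi*R^2 around the diagonal intersection."""
    cy, cx = height / 2.0, width / 2.0           # intersection of the two image diagonals
    radius = radius_scale * min(height, width)   # R chosen from the frame height and width
    yy, xx = np.mgrid[0:height, 0:width]
    return (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2

def sharpness(gray):
    """Assumed 'definition' measure: variance of the Laplacian."""
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def pick_expected_image(frames, contour_masks_per_frame, sharpness_threshold=50.0):
    """Sketch of step S302: in each frame, the outline covering the largest area of
    the focus target region is the target scene; the frame with the largest such
    coverage (and acceptable definition) is returned as the expected image.
    contour_masks_per_frame: per frame, a list of boolean masks, one per scene outline."""
    h, w = frames[0].shape[:2]
    focus = focus_target_mask(h, w)
    scored = []
    for frame, contour_masks in zip(frames, contour_masks_per_frame):
        coverages = [np.count_nonzero(m & focus) for m in contour_masks]
        if coverages:
            scored.append((max(coverages), frame))
    # largest coverage first; skip blurred frames whose definition is below the threshold
    for coverage, frame in sorted(scored, key=lambda t: t[0], reverse=True):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if sharpness(gray) >= sharpness_threshold:
            return frame
    return None
```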
Further, step S400 is to perform further motion situation recognition on the target scenes with the total number of occurrences greater than or equal to 2 in step S303:
step S401: obtaining the continuous image frames in which the target scene A of step S303 appears; recording one image frame among these continuous image frames as fn and its adjacent image frame as fn-1; discretizing the whole outline of the target scene A in image frame fn and in image frame fn-1 respectively, to obtain the discrete point set of image frame fn and the discrete point set of image frame fn-1; recording the gray values of the corresponding pixel points of the two frames over the two discrete point sets as fn(xk, yk) and fn-1(xk, yk) respectively;
step S402: calculating the difference image D(xk, yk) from the gray values of the corresponding pixel points of the two image frames according to the formula:
D(xk, yk) = |fn(xk, yk) - fn-1(xk, yk)|
wherein fn(xk, yk) represents the gray value of the pixel point corresponding to the k-th discrete point (xk, yk) of the target scene A in image frame fn; fn-1(xk, yk) represents the gray value of the pixel point corresponding to the k-th discrete point (xk, yk) of the target scene A in image frame fn-1;
step S403: setting a gray threshold J and obtaining an image D'(xk, yk) according to the formula
D'(xk, yk) = 1 if D(xk, yk) ≥ J, and D'(xk, yk) = 0 otherwise;
if the image D'(xk, yk) shows a person or an animal, it is judged that the target scene presents a motion state because of its own motion; if the image D'(xk, yk) shows a still object, it is judged that the target scene presents a motion state because of a passive, artificial change of the field angle of the equipment;
The above steps amount to identifying the motion condition of each of the different target scenes found in step S303, because the appearance of two or more target scenes may be caused by a change of the field angle of the equipment, or the scenes themselves may be in motion; inter-frame difference calculation is introduced to obtain an image of the complete moving target, and whether it is a person, an animal or a still object is identified, giving the judgment result of the motion condition of the target scene, namely whether the motion is due to the scene's own motion or to a passive artificial change.
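Steps S401 to S403 amount to a classical inter-frame difference. The sketch below works on whole grayscale frames for simplicity rather than on the discretized outline points, the threshold value is an assumption, and the person-or-animal versus still-object decision is left as a stub (looks_like_living_subject), since the description does not fix a particular classifier.

```python
import numpy as np

def frame_difference(frame_n, frame_n_minus_1, gray_threshold=25):
    """Steps S401-S403 in miniature: absolute gray-value difference of corresponding
    pixels of two grayscale frames, binarized against the gray threshold J."""
    fn = frame_n.astype(np.int16)
    fn_1 = frame_n_minus_1.astype(np.int16)
    d = np.abs(fn - fn_1)              # D(xk, yk) = |fn(xk, yk) - fn-1(xk, yk)|
    d_prime = d >= gray_threshold      # D'(xk, yk): 1 where the change exceeds J
    return d_prime

def classify_motion(d_prime, looks_like_living_subject):
    """If the differenced region looks like a person or an animal, the target scene
    is judged to move by itself; otherwise the apparent motion is attributed to a
    passive, artificial change of the equipment's field angle."""
    if looks_like_living_subject(d_prime):
        return "self-motion"
    return "field-angle change"
```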
Further, step S400 includes: when the motion condition of a certain target scene is recognized as self-motion, all continuous image frames covering the target scene in the target area are taken as expected images;
If only one expected image were collected for a dynamic target scene, the highlight moment at the instant of the dynamic change could be missed; these image frames are therefore all taken as expected images for the user to choose from.
In order to better implement the method, a system for acquiring and analyzing moving images is also provided, comprising: a moving image acquisition processing module, an image scene information capturing module, a segment dividing module, a target scene distinguishing module, an expected image selecting module, a target scene motion condition identifying module and an expected image selecting and adjusting module;
the mobile image acquisition processing module is used for inputting a video stream to be processed and performing framing processing on the video stream to be processed to obtain a plurality of frame images;
the image scene information capturing module is used for receiving the data in the moving image acquisition and processing module and capturing different scene information in a plurality of frames of images;
the segmentation module is used for receiving data in the moving image acquisition and processing module and the image scene information capturing module, and segmenting the video stream to be processed based on the obtained scene integration information of a plurality of frames of images to obtain a plurality of video sequences;
the object scene distinguishing module is used for receiving the data in the segment dividing module and distinguishing the object scene of each image frame in the plurality of video sequences;
the expected image selection module is used for receiving the data in the target scenery distinguishing module and respectively selecting expected images from a plurality of video sequences based on the target scenery distinguishing result;
the target scenery moving condition identification module is used for receiving the data in the expected image selection module and identifying the target scenery moving condition of the selected part of expected images;
and the expected image selection and adjustment module is used for receiving the data in the target scene motion condition identification module and performing selection and adjustment on the selected expected image based on the identification result of the target scene motion condition.
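For orientation, the module decomposition above could be wired together as in the following skeleton; the class and method names are illustrative assumptions, and each module simply consumes the output of the previous one, mirroring the data flow described in this section.

```python
class MovingImageAcquisitionProcessor:
    def frames(self, video_stream):
        """Framing processing: split the video stream to be processed into frame images."""
        raise NotImplementedError

class ImageSceneInfoCapturer:
    def capture(self, frames):
        """Return scene integration information for every frame."""
        raise NotImplementedError

class SegmentDivider:
    def divide(self, frames, scene_info):
        """Divide the stream into video sequences at the dividing nodes."""
        raise NotImplementedError

class TargetSceneDiscriminator:
    def discriminate(self, sequences):
        """Identify the target scene of every frame in every sequence."""
        raise NotImplementedError

class ExpectedImageSelector:
    def select(self, sequences, target_scenes):
        raise NotImplementedError

class MotionConditionRecognizer:
    def recognize(self, expected_images):
        raise NotImplementedError

class ExpectedImageAdjuster:
    def adjust(self, expected_images, motion_results):
        raise NotImplementedError

def run_pipeline(video_stream, modules):
    """End-to-end flow of the acquisition and analysis system (steps S100-S500)."""
    frames = modules["acquire"].frames(video_stream)
    scene_info = modules["capture"].capture(frames)
    sequences = modules["divide"].divide(frames, scene_info)
    target_scenes = modules["discriminate"].discriminate(sequences)
    expected = modules["select"].select(sequences, target_scenes)
    motion = modules["motion"].recognize(expected)
    return modules["adjust"].adjust(expected, motion)
```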
Furthermore, the image scene information capturing module comprises a distribution area calculating unit, an outline extracting unit, an interval breakpoint capturing unit, an outline integrating unit, an information eliminating unit and an information integrating unit;
the distribution area calculation unit is used for acquiring the background picture color types of a plurality of frames of images obtained in the moving image acquisition and processing module and calculating the distribution area of each background picture color type;
the contour extraction unit is used for extracting contours of scenes in a plurality of frames of images obtained from the moving image acquisition processing module;
the interval breakpoint capturing unit is used for capturing the interval breakpoints among the scene outlines obtained in the outline extracting unit and calculating the interval distances;
The contour integration unit is used for receiving the data in the interval breakpoint capturing unit and carrying out contour integration on the contour of the related scenery based on the data;
the information removing unit is used for receiving the data in the distribution area calculating unit, the interval breakpoint capturing unit and the outline integrating unit and reserving or removing the related information;
and the information integration unit is used for receiving the information data processed by the information elimination unit and integrating the information of each frame of image in the plurality of frames of images.
Further, the segment dividing module comprises: the system comprises an image frame set analysis unit, an information comparison unit, an image frame set merging unit and a video stream dividing node selection unit;
the image frame set analysis unit is used for acquiring the frame rate information transmitted by the equipment and analyzing and collecting the scene information of the fps frame images transmitted by the equipment every second;
the information comparison unit is used for receiving the data in the image frame set analysis unit and comparing the scene information of the image frame sets of adjacent seconds;
the image frame set merging unit is used for receiving the data in the information comparison unit and generating a frame set or video division node for the image frame set of the adjacent second based on the comparison data;
and the video stream dividing unit is used for receiving the data in the image frame set merging unit and performing video division on the video stream to be processed based on each dividing node generated in the image frame set merging unit.
Further, the target scene distinguishing module comprises a focus target area setting unit and a desired image capturing unit; the target scenery motion condition identification module comprises a discrete processing unit, a difference image calculation unit, a target scenery identification unit and a motion condition identification unit;
the focus target area setting unit is used for carrying out image diagonal connection on each image frame in each video sequence to obtain a target reference point, and then selecting a target radius value R according to the height and width of the image frame to obtain a final focus target area;
a target scene recognition unit that captures a target scene for each image frame in each video sequence based on the focus target region set in the focus target region setting unit;
the expected image capturing unit is used for capturing an expected image based on the distribution area characteristics of the captured target scenery in the focus target area;
the discrete processing unit is used for extracting and discretizing the outline of a scene image in a part of expected images in the expected image capturing unit;
the difference image calculation unit is used for receiving the data in the discrete processing unit and calculating the difference image between the continuous image frames;
and the motion condition identification unit is used for receiving the data in the difference image calculation unit, identifying whether the target scene is a person, an animal or a still object, and judging the motion condition of the target scene based on the identification result.
Compared with the prior art, the invention has the following beneficial effects: the invention realizes the acquisition and analysis of expected images in the different video segments of a video stream that contain different picture scenes; it realizes case-by-case consideration of the image scene, identification of the target scene, and judgment of the motion condition of the identified target scene; it can provide the user with reference expected images for multiple scenes and multiple aspects; the application analyzes every frame image in the video stream sequence and obtains the expected images based on the analysis results, which effectively avoids the problem that an expected image captured by the user through manual screenshots is blurred and unusable; and with the application, when the picture scene in the video is text or a chart, the recognition of the picture scene can correspondingly be carried out as recognition of the text and chart information in the video.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic flow chart of a method for collecting and analyzing moving images according to the present invention;
FIG. 2 is a schematic diagram of an acquisition and analysis system for moving images according to the present invention;
fig. 3 is a schematic diagram of an embodiment of the acquisition and analysis method for moving images according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1-3, the present invention provides a technical solution: a method for collecting and analyzing moving images comprises the following steps:
step S100: extracting a video stream to be processed, and performing framing processing on the video stream to be processed to obtain a plurality of frame images; respectively capturing the different scene information in the plurality of frame images to obtain the respective scene integration information of the plurality of frame images in the video stream to be processed; when the picture scene in the video is text or a chart, the recognition of the picture scene can correspondingly be performed as recognition of the text and chart information in the video;
wherein the capturing and integrating the scene information in the plurality of image frames in step S100 comprises:
step S101: respectively obtaining the background picture color types of the plurality of frame images, and calculating the distribution area of each background picture color type; setting a color distribution area threshold, and marking for rejection the color type information whose distribution area is smaller than the color distribution area threshold; extracting the outlines of the scenes in the plurality of frame images to obtain n scene outlines; setting a contour line interval threshold, comparing the interval breakpoint distances among the n scene outlines against the contour line interval threshold, and combining the scene outlines whose interval breakpoint distance is less than or equal to the contour line interval threshold into an integral scene outline;
step S102: finding the scene outline corresponding to the color type information preliminarily marked for rejection in step S101; if there is another scene outline whose interval breakpoint distance to this scene outline is less than or equal to the contour line interval threshold, removing the rejection mark; if the color type information marked for rejection has no corresponding scene outline, eliminating the color type information;
step S103: integrating the information from the step S101 to the step S102 to respectively obtain respective scene integration information M of a plurality of frames of images, wherein the form of the scene integration information M is as follows:
M = {a: {r1, r2, …, ri}, b: {e1, e2, …, ei}, c: {z1, z2, …, zi}, d: {u1, u2, …, ui}}
wherein a, b, c and d respectively represent the different scene outlines appearing in turn from left to right in the image frame; ri, ei, zi and ui respectively represent the color category sets in the different scene outlines; {r1, r2, …, ri} represents the set of color categories on scene outline a, {e1, e2, …, ei} the set of color categories on scene outline b, {z1, z2, …, zi} the set of color categories on scene outline c, and {u1, u2, …, ui} the set of color categories on scene outline d;
Step S200: segmenting the video stream to be processed into a plurality of video sequences based on the obtained scene integration information of a plurality of frames of images;
the step S200 of segmenting the video stream to be processed includes:
step S201: acquiring the frame rate information transmitted by the equipment, setting the equipment to transmit fps frame images every second, and collecting the fps frame images transmitted by the equipment every second;
step S202: as shown in fig. 3, suppose the image frames within the 1st second of the timestamp of the video stream to be processed are collected into an image frame set Q1; the scene integration information of all image frames in Q1 is {M11, M12, M13, …}, wherein M11, M12, M13 … respectively represent the scene integration information of the first, second, third … image frames of Q1; the image frames of the 2nd second are collected into an image frame set Q2; the scene integration information of all image frames in Q2 is {M21, M22, M23, …}, wherein M21, M22, M23 … respectively represent the scene integration information of the first, second, third … image frames of Q2;
step S203: comparing the scene integration information {M11, M12, M13, …} of all frames in image frame set Q1 with the scene integration information {M21, M22, M23, …} of all frames in image frame set Q2 to obtain a comparison result P; calculating and setting a comparison threshold; if the comparison result P is smaller than the comparison threshold, taking the 1st second as a segment dividing node;
step S204: if the comparison result P is greater than or equal to the comparison threshold, merging the image frame sets within 0~2 seconds, that is, correspondingly merging the scene integration information of the 1st-second image frame set Q1 with that of the 2nd-second image frame set Q2; then collecting the image frames appearing in the 3rd second into an image frame set Q3; the scene integration information of all image frames in Q3 is {M31, M32, M33, …}, wherein M31, M32, M33 respectively represent the scene integration information of the first, second and third image frames of Q3; comparing the scene integration information of image frame set Q3 with that of the frame set obtained by merging image frame set Q1 and image frame set Q2 to obtain a comparison result P; if the comparison result P is smaller than the comparison threshold, taking the 2nd second as a segment dividing node; if the comparison result P is greater than or equal to the comparison threshold, merging the image frame sets within 0~3 seconds;
following steps S202 to S204, the image frames appearing in the 4th second are collected into an image frame set Q4, whose scene integration information is {M41, M42, M43, …}; the scene integration information of image frame set Q4 is compared with that of the merged frame set of image frame set Q1, image frame set Q2 and image frame set Q3 to obtain a comparison result P; if the comparison result P is smaller than the comparison threshold, the 3rd second is taken as a segment dividing node; if the comparison result P is greater than or equal to the comparison threshold, the image frame sets within 0~4 seconds are merged; and so on, until the whole video stream to be processed has been traversed and divided, obtaining a plurality of segment dividing nodes and a plurality of video sequences;
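The per-second behaviour traced above (and illustrated in fig. 3) can be reproduced with a tiny driver; the scene summaries and the threshold below are made-up values used only to show when the rule merges a second into the running frame set and when it emits a dividing node.

```python
def walk_per_second(scene_info_per_second, compare, threshold):
    """Trace of the embodiment's loop: merge each new second into the running
    frame set while the comparison result P stays at or above the threshold,
    otherwise record the previous second as a segment dividing node."""
    merged = [scene_info_per_second[0]]
    dividing_nodes = []
    for second, info in enumerate(scene_info_per_second[1:], start=2):
        p = compare(merged, info)
        if p < threshold:
            dividing_nodes.append(second - 1)   # e.g. "take the 3rd second as a node"
            merged = [info]
        else:
            merged.append(info)
    return dividing_nodes

def overlap(merged, info):
    """Toy comparison result P: best set-overlap ratio against the merged seconds."""
    return max(len(m & info) / len(m | info) for m in merged)

# Made-up scene summaries for four seconds: seconds 1-3 share a scene, second 4 differs.
toy = [{"sky", "tree"}, {"sky", "tree"}, {"sky", "tree", "bird"}, {"road", "car"}]
print(walk_per_second(toy, overlap, threshold=0.3))   # -> [3]
```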
step S300: carrying out target scene discrimination on each image frame in a plurality of video sequences to obtain a discrimination result of a target scene; respectively selecting expected images from the plurality of video sequences based on the discrimination result;
wherein, step S300 includes:
step S301: connecting the image diagonals of each image frame in each video sequence to obtain two diagonal lines, taking the intersection point of the two image diagonals as the target reference point, and selecting a radius value R according to the height and width of the image frame to form a focus target area S = πR²;
step S302: calculating the different scene outline areas covered within the focus target area of each image frame in each video sequence, and taking the scene outline covering the largest area within the focus target area as the target scene of that image frame; if the total number of target scenes of all image frames in a video sequence is 1, selecting the image frame whose target scene covers the largest area within the focus target area as the expected image of that video sequence; if the image frame whose target scene covers the largest area within the focus target area is a blurred image, selecting the image frame whose target scene covers the second largest area within the focus target area as the expected image of that video sequence, and so on, finally obtaining 1 expected image for the video sequence;
step S303: if the total number of target scenes appearing over the image frames in a video sequence is greater than or equal to 2, extracting, in time order, the image frames covering each target scene respectively; for a target scene A, if all the image frames whose target scene is A are temporally continuous image frames, and the number of these continuous image frames is greater than the set frame number threshold, selecting among them the image frame in which A covers the largest area within the focus target area as an expected image of that video sequence, and so on, finally obtaining 1 or more expected images for the video sequence; in this step there is no precedence order in the selection analysis of the different target scenes, i.e. the expected image in which target scene A covers the focus target area may be selected first, or the expected image in which target scene B covers the focus target area may be selected first;
step S400: performing motion condition recognition of the target scene on the part of the expected images selected in step S300, that is, performing further motion condition recognition on the target scenes whose total number of occurrences in step S303 is greater than or equal to 2, and adjusting the selection based on the result of the motion condition recognition:
wherein, step S400 includes:
step S401: obtaining the continuous image frames in which the target scene A of step S303 appears; recording one image frame among these continuous image frames as fn and its adjacent image frame as fn-1; discretizing the whole outline of the target scene A in image frame fn and in image frame fn-1 respectively, to obtain the discrete point set of image frame fn and the discrete point set of image frame fn-1; recording the gray values of the corresponding pixel points of the two frames over the two discrete point sets as fn(xk, yk) and fn-1(xk, yk) respectively;
step S402: calculating the difference image D(xk, yk) from the gray values of the corresponding pixel points of the two image frames according to the formula:
D(xk, yk) = |fn(xk, yk) - fn-1(xk, yk)|
wherein fn(xk, yk) represents the gray value of the pixel point corresponding to the k-th discrete point (xk, yk) of the target scene A in image frame fn; fn-1(xk, yk) represents the gray value of the pixel point corresponding to the k-th discrete point (xk, yk) of the target scene A in image frame fn-1;
step S403: setting a gray threshold J and obtaining an image D'(xk, yk) according to the formula
D'(xk, yk) = 1 if D(xk, yk) ≥ J, and D'(xk, yk) = 0 otherwise;
if the image D'(xk, yk) shows a person or an animal, it is judged that the target scene presents a motion state because of its own motion; if the image D'(xk, yk) shows a still object, it is judged that the target scene presents a motion state because of a passive, artificial change of the field angle of the equipment;
when the motion condition identification result of a certain target scene is self-motion, all continuous image frames covering the target scene in the target area are taken as expected images;
step S500: and pushing the finally selected expected image to the user, wherein the user can select the image from the expected images based on self-intention.
In order to better implement the method, a system for acquiring and analyzing moving images is also provided, comprising: a moving image acquisition processing module, an image scene information capturing module, a segment dividing module, a target scene distinguishing module, an expected image selecting module, a target scene motion condition identifying module and an expected image selecting and adjusting module;
the mobile image acquisition processing module is used for inputting a video stream to be processed and performing framing processing on the video stream to be processed to obtain a plurality of frame images;
the image scene information capturing module is used for receiving the data in the moving image acquisition and processing module and capturing different scene information in a plurality of frames of images;
the image scene information capturing module comprises a distribution area calculating unit, an outline extracting unit, an interval breakpoint capturing unit, an outline integrating unit, an information eliminating unit and an information integrating unit;
the distribution area calculation unit is used for acquiring the background picture color types of a plurality of frames of images obtained in the moving image acquisition and processing module and calculating the distribution area of each background picture color type; the contour extraction unit is used for extracting contours of scenes in a plurality of frames of images obtained from the moving image acquisition processing module; the interval breakpoint capturing unit is used for capturing the interval breakpoints among the scene outlines obtained in the outline extracting unit and calculating the interval distance; the contour integration unit is used for receiving the data in the interval breakpoint capturing unit and carrying out contour integration on the contour of the related scenery based on the data; the information removing unit is used for receiving the data in the distribution area calculating unit, the interval breakpoint capturing unit and the outline integrating unit and reserving or removing the related information; the information integration unit is used for receiving the information data processed by the information elimination unit and integrating the information of each frame of image in the plurality of frames of images;
the segmentation module is used for receiving data in the moving image acquisition and processing module and the image scene information capturing module, and segmenting the video stream to be processed based on the obtained scene integration information of a plurality of frames of images to obtain a plurality of video sequences;
wherein, the fragment division module includes: the system comprises an image frame set analysis unit, an information comparison unit, an image frame set merging unit and a video stream dividing node selection unit;
the image frame set analysis unit is used for acquiring the frame rate information transmitted by the equipment and analyzing and collecting the scene information of the fps frame images transmitted by the equipment every second; the information comparison unit is used for receiving the data in the image frame set analysis unit and comparing the scene information of the image frame sets of adjacent seconds; the image frame set merging unit is used for receiving the data in the information comparison unit and generating a frame set or video division node for the image frame set of the adjacent second based on the comparison data; a video stream dividing unit for receiving data in the image frame set merging unit and performing video division on the video stream to be processed based on each dividing node generated in the image frame set merging unit
The object scene distinguishing module is used for receiving the data in the segment dividing module and distinguishing the object scene of each image frame in the plurality of video sequences;
the expected image selection module is used for receiving the data in the target scenery distinguishing module and respectively selecting expected images from a plurality of video sequences based on the target scenery distinguishing result;
the target scenery moving condition identification module is used for receiving the data in the expected image selection module and identifying the target scenery moving condition of the selected part of expected images;
and the expected image selection and adjustment module is used for receiving the data in the target scene motion condition identification module and performing selection and adjustment on the selected expected image based on the identification result of the target scene motion condition.
The target scene distinguishing module comprises a focus target area setting unit and a desired image capturing unit; the target scenery motion condition identification module comprises a discrete processing unit, a difference image calculation unit, a target scenery identification unit and a motion condition identification unit;
the focus target area setting unit is used for carrying out image diagonal connection on each image frame in each video sequence to obtain a target reference point, and then selecting a target radius value R according to the height and width of the image frame to obtain a final focus target area;
a target scene recognition unit that captures a target scene for each image frame in each video sequence based on the focus target region set in the focus target region setting unit;
the expected image capturing unit is used for capturing an expected image based on the distribution area characteristics of the captured target scenery in the focus target area;
the discrete processing unit is used for extracting and discretizing the outline of a scene image in a part of expected images in the expected image capturing unit;
the difference image calculation unit is used for receiving the data in the discrete processing unit and calculating the difference image between the continuous image frames;
and the motion condition identification unit is used for receiving the data in the difference image calculation unit, identifying whether the target scenery is a person or an animal or a still object, and judging the motion condition of the target scenery based on the identification result.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method for moving image acquisition and analysis, the method comprising:
step S100: extracting a video stream to be processed, and performing framing processing on the video stream to be processed to obtain a plurality of frame images; respectively capturing different scene information in the plurality of frames of images to obtain respective scene integration information of the plurality of frames of images in the video stream to be processed;
step S200: segmenting the video stream to be processed into a plurality of video sequences based on the obtained scene integration information of the plurality of frames of images;
step S300: carrying out target scene discrimination on each image frame in the plurality of video sequences to obtain a discrimination result of a target scene; respectively selecting expected images from the plurality of video sequences based on the discrimination result;
step S400: identifying the motion condition of the target scenery for the part of the expected images selected in the step S300, and adjusting the selected expected images based on the result obtained by the motion condition identification;
step S500: and pushing the finally selected expected images to the user, wherein the user can select the images in the expected images based on self-intention.
2. The method for moving image acquisition and analysis according to claim 1, wherein the step of capturing and integrating scene information in the plurality of image frames in step S100 comprises:
step S101: respectively obtaining the background picture color types of the plurality of frames of images, and calculating the distribution area of each background picture color type; setting a color distribution area threshold value, and taking color type information with the distribution area smaller than the color distribution area threshold value as a rejection mark; extracting the outlines of the scenes in the plurality of frames of images to obtain n scene outlines; setting a contour line interval threshold, comparing interval breakpoint distances among the n scene outlines based on the contour line interval threshold, and combining the scene outlines of which the interval breakpoint distances are less than or equal to the contour line interval threshold to form an integral scene outline;
step S102: finding the scene outline corresponding to the preliminary marking rejection information in the step S101; if the scene outlines have other scene outlines of which the interval breakpoint distance is less than or equal to the contour line interval threshold value, eliminating the rejection marks; if the color type information marked for removal does not have a corresponding scene outline, removing the color type information;
step S103: integrating the information from the step S101 to the step S102 to obtain respective scene integration information M of the plurality of frames of images, wherein the form of the scene integration information M is as follows:
M = {a: {r1, r2, …, ri}, b: {e1, e2, …, ei}, c: {z1, z2, …, zi}, d: {u1, u2, …, ui}}
wherein a, b, c and d respectively represent the different scene outlines appearing in turn from left to right in the image frame; ri, ei, zi and ui respectively represent the color categories of the different scene outlines; {r1, r2, …, ri} represents the set of color categories on scene outline a, {e1, e2, …, ei} the set of color categories on scene outline b, {z1, z2, …, zi} the set of color categories on scene outline c, and {u1, u2, …, ui} the set of color categories on scene outline d.
3. A moving image acquisition analysis method according to claim 1, wherein the step S200 of segmenting the video stream to be processed comprises:
step S201: acquiring the frame rate information transmitted by equipment, setting the equipment to transmit fps frame images every second, and collecting the fps frame images transmitted every second by the equipment;
step S202: collecting the image frames appearing within the timestamps 0~t0 of the video stream to be processed into an image frame set Q1, t0 being the first interval of the video stream to be processed; the scene integration information of all image frames in Q1 is {M11, M12, M13, …}, wherein M11, M12, M13 … respectively represent the scene integration information of the first, second, third … image frames of Q1; collecting the image frames appearing within t0~t0-1 into an image frame set Q2, t0-1 representing the next interval of time after t0; the scene integration information of all image frames in Q2 is {M21, M22, M23, …}, wherein M21, M22, M23 … respectively represent the scene integration information of the first, second, third … image frames of Q2;
step S203: comparing the scene integration information {M11, M12, M13, …} of all frames in the image frame set Q1 with the scene integration information {M21, M22, M23, …} of all frames in the image frame set Q2 to obtain a comparison result P; calculating and setting a comparison threshold; if the comparison result P is smaller than the comparison threshold, taking t0 as a segment dividing node;
step S204: if the comparison result P is greater than or equal to the comparison threshold, merging the image frame sets within 0~t0-1, that is, correspondingly merging the scene integration information of the image frame set Q1 with that of the image frame set Q2; then collecting the image frames appearing within t0-1~t0-2 into an image frame set Q3, t0-2 representing the next interval of time after t0-1; the scene integration information of all image frames in Q3 is {M31, M32, M33, …}, wherein M31, M32, M33 respectively represent the scene integration information of the first, second and third image frames of Q3; comparing the scene integration information of the image frame set Q3 with that of the frame set obtained by merging the image frame set Q1 and the image frame set Q2 to obtain a comparison result P; if the comparison result P is smaller than the comparison threshold, taking t0-1 as a segment dividing node; if the comparison result P is greater than or equal to the comparison threshold, merging the image frame sets within 0~t0-2; and repeating the above steps until the whole video stream to be processed has been traversed and divided, obtaining a plurality of video sequences.
4. The method for collecting and analyzing moving images according to claim 2, wherein the step S300 comprises:
step S301: performing image diagonal connection on each image frame in each video sequence to obtain two diagonal lines, taking the intersection point of the two image diagonals as the target reference point, and selecting a radius value R according to the height and width of the image frame to form a focus target region S = πR²;
Step S302: calculating different scene outline areas covered in each image frame focus target area in each video sequence, and taking the scene outline with the largest covered area in the image frame focus target area as a target scene of the image frame; if the total number of the target scenery of each image frame in a video sequence is 1, selecting an image frame which covers the largest area of the target scenery in a focus target region in the video sequence as an expected image in the video sequence; if the image frame with the largest target scenery area in the target area is blurred, selecting an image frame with the second largest target scenery area in the focus target area in the video sequence as an expected image in the video sequence, and repeating the steps to obtain the expected images with the number of the video sequence being 1;
step S303: if the total number of target scenes appearing in each image frame in a video sequence is more than or equal to 2, respectively extracting each image frame covering the target scenes according to a time sequence; and aiming at one target scenery A, if all the image frames of which the target scenery is A are continuous image frames in time, and the frame number of the continuous image frames is greater than a set frame number threshold value, selecting an image frame which covers the area A with the largest area in a focus target area from the continuous image frames as a desired image in the video sequence, and so on, and finally obtaining the desired image of which the number is greater than or equal to 1 in the video sequence.
5. The method for moving image acquisition and analysis as claimed in claim 4, wherein said step S400 is further characterized by performing motion condition recognition on each object scene with the total number of occurrences greater than or equal to 2 in said step S303:
step S401: correspondingly acquiring the continuous image frames in which the target scene A of step S303 appears, and recording one image frame among the continuous image frames as fn and its adjacent image frame as fn-1; discretizing the whole outline of the target scene A in image frame fn and in image frame fn-1 respectively, to obtain the discrete point set of image frame fn and the discrete point set of image frame fn-1; recording the gray values of the corresponding pixel points of the two frames over the two discrete point sets as fn(xk, yk) and fn-1(xk, yk) respectively;
step S402: calculating the difference image D(xk, yk) from the gray values of the corresponding pixel points of the two image frames according to the formula:
D(xk, yk) = |fn(xk, yk) - fn-1(xk, yk)|
wherein fn(xk, yk) represents the gray value of the pixel point corresponding to the k-th discrete point (xk, yk) of the target scene A in image frame fn; fn-1(xk, yk) represents the gray value of the pixel point corresponding to the k-th discrete point (xk, yk) of the target scene A in image frame fn-1;
step S403: setting a gray threshold J and obtaining an image D'(xk, yk) according to the formula
D'(xk, yk) = 1 if D(xk, yk) ≥ J, and D'(xk, yk) = 0 otherwise;
if the image D'(xk, yk) shows a person or an animal, judging that the motion state of the target scene is caused by its own motion; if the image D'(xk, yk) shows a still object, judging that the target scene presents the motion state due to a passive artificial change of the field angle of the equipment.
6. The method for moving image acquisition and analysis according to claim 5, wherein said step S400 further comprises: when the motion condition of a certain target scene is recognized as self-motion, taking all the continuous image frames in which that target scene is covered by the focus target region as expected images.
7. An acquisition and analysis system for moving images, the system comprising: a moving image acquisition and processing module, an image scene information capturing module, a segment dividing module, a target scenery distinguishing module, an expected image selection module, a target scenery motion condition identification module, and an expected image selection and adjustment module;
the moving image acquisition and processing module is used for inputting a video stream to be processed and performing framing processing on the video stream to be processed to obtain a plurality of frame images;
the image scene information capturing module is used for receiving the data in the moving image acquisition and processing module and capturing different scene information in the plurality of frames of images;
the segment dividing module is used for receiving the data in the moving image acquisition and processing module and the image scene information capturing module, and dividing the video stream to be processed into segments based on the obtained scene integration information of the plurality of frame images, so as to obtain a plurality of video sequences;
the target scenery distinguishing module is used for receiving the data in the segment dividing module and distinguishing the target scenery in each image frame of the plurality of video sequences;
the expected image selection module is used for receiving the data in the target scenery distinguishing module and respectively selecting expected images from the plurality of video sequences based on the target scenery distinguishing result;
the target scenery motion condition identification module is used for receiving the data in the expected image selection module and identifying the target scenery motion condition of the selected part of the expected images;
and the expected image selection and adjustment module is used for receiving the data in the target scenery motion condition identification module and adjusting the selection of the expected images based on the identification result of the target scenery motion condition.
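A hypothetical skeleton showing how the seven modules of claim 7 could be wired together in code; all class and method names here are invented for illustration and are not defined by the patent.

```python
class MovingImageAnalysisPipeline:
    """Illustrative wiring of the seven modules of claim 7 (names invented)."""

    def __init__(self, acquirer, scene_capturer, segmenter,
                 scene_discriminator, selector, motion_recognizer, adjuster):
        self.acquirer = acquirer                        # moving image acquisition/processing
        self.scene_capturer = scene_capturer            # image scene information capturing
        self.segmenter = segmenter                      # segment dividing
        self.scene_discriminator = scene_discriminator  # target scenery distinguishing
        self.selector = selector                        # expected image selection
        self.motion_recognizer = motion_recognizer      # target scenery motion condition
        self.adjuster = adjuster                        # expected image selection adjustment

    def run(self, video_stream):
        frames = self.acquirer.split_into_frames(video_stream)
        scene_info = self.scene_capturer.capture(frames)
        sequences = self.segmenter.divide(video_stream, scene_info)
        targets = self.scene_discriminator.discriminate(sequences)
        expected = self.selector.select(sequences, targets)
        motion = self.motion_recognizer.recognize(expected)
        return self.adjuster.adjust(expected, motion)
```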
8. The moving image acquisition and analysis system according to claim 7, wherein the image scene information capturing module includes a distribution area calculation unit, a contour extraction unit, an interval breakpoint capturing unit, a contour integration unit, an information rejecting unit, and an information integration unit;
the distribution area calculation unit is used for acquiring the background picture color types of a plurality of frames of images obtained in the moving image acquisition and processing module and calculating the distribution area of each background picture color type;
the contour extraction unit is used for extracting contours of scenes in a plurality of frames of images obtained from the moving image acquisition and processing module;
the interval breakpoint capturing unit is used for capturing and calculating interval breakpoints among the scene outlines obtained in the outline extraction unit;
the contour integration unit is used for receiving the data in the interval breakpoint capturing unit and carrying out contour integration on the contour of the related scenery based on the data;
the information rejecting unit is used for receiving the data in the distribution area calculation unit, the interval breakpoint capturing unit and the contour integration unit, and retaining or rejecting the related information;
the information integration unit is used for receiving the information data processed by the information rejecting unit and integrating the information of each frame of image in the plurality of frame images.
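For claim 8, the distribution area calculation and contour extraction units might be approximated as below; the coarse colour quantization, Canny edge detection, and the minimum-area cut-off used to reject small contours are stand-in assumptions, not the units' claimed internals.

```python
import cv2
import numpy as np

def background_color_areas(frame, bins=8):
    """Distribution area of each coarse background colour class, as a rough
    stand-in for the distribution area calculation unit (assumed quantization)."""
    step = 256 // bins
    quantized = (frame // step) * step
    colors, counts = np.unique(quantized.reshape(-1, 3), axis=0, return_counts=True)
    return dict(zip(map(tuple, colors.tolist()), counts.tolist()))

def scene_contours(frame, min_area=500):
    """Contour extraction unit sketch: Canny edges to external contours; small
    contours are dropped, loosely mirroring the information rejecting unit."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if cv2.contourArea(c) >= min_area]
```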
9. The system for moving image acquisition and analysis of claim 7, wherein the segment dividing module comprises: an image frame set analysis unit, an information comparison unit, an image frame set merging unit, and a video stream dividing node selection unit;
the image frame set analysis unit is used for acquiring the frame rate information transmitted by the equipment and analyzing and collecting the scene information of the fps frame images transmitted by the equipment every second;
the information comparison unit is used for receiving the data in the image frame set analysis unit and comparing the scene information of the image frame set of adjacent seconds;
the image frame set merging unit is used for receiving the data in the information comparison unit and, based on the comparison data, merging the image frame sets of adjacent seconds and/or generating video division nodes;
and the video stream dividing node selection unit is used for receiving the data in the image frame set merging unit and dividing the video stream to be processed based on each dividing node generated in the image frame set merging unit.
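A minimal sketch of the second-by-second segmentation in claim 9, assuming a colour-histogram comparison as the "scene information" and a fixed correlation cut-off for deciding that adjacent seconds no longer match; both are illustrative choices, since the claim itself leaves the comparison criterion open.

```python
import cv2

def divide_video_by_seconds(path, similarity_cutoff=0.8):
    """Claim 9 sketch: compare the scene information (here a colour histogram)
    of adjacent one-second frame sets and emit a division node where they differ."""
    cap = cv2.VideoCapture(path)
    fps = int(round(cap.get(cv2.CAP_PROP_FPS))) or 25   # fall back if fps unknown

    def second_histogram():
        hist = None
        for _ in range(fps):
            ok, frame = cap.read()
            if not ok:
                return None
            h = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
            hist = h if hist is None else hist + h
        return cv2.normalize(hist, hist).flatten()

    nodes, previous, second = [], None, 0
    while True:
        current = second_histogram()
        if current is None:
            break
        if previous is not None:
            similarity = cv2.compareHist(previous, current, cv2.HISTCMP_CORREL)
            if similarity < similarity_cutoff:
                nodes.append(second * fps)   # division node at the start of this second
        previous, second = current, second + 1
    cap.release()
    return nodes
```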
10. A moving image acquisition and analysis system according to claim 7, wherein the target scenery distinguishing module comprises a focus target area setting unit and an expected image capturing unit; the target scenery motion condition identification module comprises a discrete processing unit, a difference image calculation unit, a target scenery recognition unit and a motion condition identification unit;
the focus target area setting unit is used for carrying out image diagonal connection on each image frame in each video sequence to obtain a target reference point, and then selecting a target radius value R according to the height and width of the image frame to obtain a final focus target area;
the target scenery recognition unit captures a target scenery for each image frame in each video sequence based on the focus target area set in the focus target area setting unit;
the expected image capturing unit captures an expected image based on the distribution area characteristics of the captured target scenery in the focus target area;
the discrete processing unit is used for extracting and discretizing the outline of a scene image in a part of expected images in the expected image capturing unit;
the difference image calculation unit is used for receiving the data in the discrete processing unit and calculating the difference image between the continuous image frames;
and the motion condition identification unit is used for receiving the data in the differential image calculation unit, identifying whether the target scenery is a person or an animal or a still object, and judging the motion condition of the target scenery based on the identification result.
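Finally, a small illustrative helper combining claim 6 with the motion condition identification unit of claim 10: the motion score based on the mean of the binarized difference values, the `is_person_or_animal` flag, and the fallback of keeping only the first frame for still objects are all assumptions made for the sake of the example.

```python
def adjust_expected_images(continuous_frames, difference_images, is_person_or_animal,
                           motion_score=0.5):
    """Illustrative selection adjustment (claims 6 and 10): when the binarized
    differences indicate motion and the target is a person or animal, every
    continuous frame is kept as an expected image; for a still object the
    apparent motion is attributed to the device's field-angle change and only
    the originally selected frame is kept (this fallback is an assumption)."""
    moving = any(d.mean() > motion_score for d in difference_images)
    if moving and is_person_or_animal:
        return list(continuous_frames)
    return continuous_frames[:1]
```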
CN202111490973.4A 2021-12-08 2021-12-08 Acquisition and analysis system and method for moving image Active CN114283356B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111490973.4A CN114283356B (en) 2021-12-08 2021-12-08 Acquisition and analysis system and method for moving image

Publications (2)

Publication Number Publication Date
CN114283356A true CN114283356A (en) 2022-04-05
CN114283356B CN114283356B (en) 2022-11-29

Family

ID=80871265

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111490973.4A Active CN114283356B (en) 2021-12-08 2021-12-08 Acquisition and analysis system and method for moving image

Country Status (1)

Country Link
CN (1) CN114283356B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150169960A1 (en) * 2012-04-18 2015-06-18 Vixs Systems, Inc. Video processing system with color-based recognition and methods for use therewith
CN105718871A (en) * 2016-01-18 2016-06-29 成都索贝数码科技股份有限公司 Video host identification method based on statistics
CN107169985A (en) * 2017-05-23 2017-09-15 南京邮电大学 A kind of moving target detecting method based on symmetrical inter-frame difference and context update
WO2018086527A1 (en) * 2016-11-08 2018-05-17 中兴通讯股份有限公司 Video processing method and device
CN112203095A (en) * 2020-12-04 2021-01-08 腾讯科技(深圳)有限公司 Video motion estimation method, device, equipment and computer readable storage medium
CN112581489A (en) * 2019-09-29 2021-03-30 RealMe重庆移动通信有限公司 Video compression method, device and storage medium
US20210335391A1 (en) * 2019-06-24 2021-10-28 Tencent Technology (Shenzhen) Company Limited Resource display method, device, apparatus, and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
A. Senthil Murugan et al.: "A study on various methods used for video summarization and moving object detection for video surveillance applications", Multimedia Tools and Applications *
J. Calic et al.: "Efficient key-frame extraction and video analysis", Proceedings. International Conference on Information Technology: Coding and Computing *
Sun Zhonghua: "Video segmentation method based on color and object contour features", China Master's Theses Full-text Database, Information Science and Technology Series *
Xiong Wei: "Research on video content summarization based on scene segmentation", China Dissertations Full-text Database *

Also Published As

Publication number Publication date
CN114283356B (en) 2022-11-29

Similar Documents

Publication Publication Date Title
CN110830756B (en) Monitoring method and device
CN107133969B (en) A kind of mobile platform moving target detecting method based on background back projection
JP3461626B2 (en) Specific image region extraction method and specific image region extraction device
JP4616702B2 (en) Image processing
EP1580757A2 (en) Extracting key-frames from a video
US7606441B2 (en) Image processing device and a method for the same
US20020186881A1 (en) Image background replacement method
US20110164823A1 (en) Video object extraction apparatus and method
JPH08191411A (en) Scene discrimination method and representative image recording and display device
CN107358141B (en) Data identification method and device
JP2005513656A (en) Method for identifying moving objects in a video using volume growth and change detection masks
CN106060470A (en) Video monitoring method and system
CN115965889A (en) Video quality assessment data processing method, device and equipment
CN205883437U (en) Video monitoring device
CN114283356B (en) Acquisition and analysis system and method for moving image
CN116095363A (en) Mobile terminal short video highlight moment editing method based on key behavior recognition
CN115512263A (en) Dynamic visual monitoring method and device for falling object
KR100853267B1 (en) Multiple People Tracking Method Using Stereo Vision and System Thereof
CN113111847A (en) Automatic monitoring method, device and system for process circulation
CN113253890A (en) Video image matting method, system and medium
CN112818743A (en) Image recognition method and device, electronic equipment and computer storage medium
CN112446820A (en) Method for removing irrelevant portrait of scenic spot photo
CN113938671A (en) Image content analysis method and device, electronic equipment and storage medium
Cheng Temporal registration of video sequences
CN113099128B (en) Video processing method and video processing system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant