CN112837350A - Target moving object identification method and device, electronic equipment and storage medium - Google Patents

Target moving object identification method and device, electronic equipment and storage medium Download PDF

Info

Publication number
CN112837350A
Authority
CN
China
Prior art keywords
moving object
target
moving
identified
video frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110219773.9A
Other languages
Chinese (zh)
Inventor
陈科
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Cubesili Information Technology Co Ltd
Original Assignee
Guangzhou Cubesili Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Cubesili Information Technology Co Ltd filed Critical Guangzhou Cubesili Information Technology Co Ltd
Priority to CN202110219773.9A priority Critical patent/CN112837350A/en
Publication of CN112837350A publication Critical patent/CN112837350A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/248 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving reference images or patches
    • G06T 7/254 Analysis of motion involving subtraction of images
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/20212 Image combination
    • G06T 2207/20224 Image subtraction
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30221 Sports video; Sports image
    • G06T 2207/30224 Ball; Puck

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a target moving object identification method and apparatus, an electronic device and a storage medium, relating to the technical field of image recognition. The method comprises the following steps: acquiring a target video frame; performing motion detection on the target video frame to obtain moving objects in the target video frame; screening moving objects to be identified from the moving objects based on intra-frame features of the moving objects; and identifying the target moving object from the moving objects to be identified based on an identification rule corresponding to the number of moving objects to be identified. The method and apparatus can improve the accuracy and efficiency of target moving object identification.

Description

Target moving object identification method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image recognition technologies, and in particular, to a method and an apparatus for recognizing a target moving object, an electronic device, and a storage medium.
Background
In some ball games it is desirable to acquire a global image of the playing field so that every part of the field is captured. However, many ball games are played on large fields, so the game ball occupies only a small proportion of the acquired global image, and online viewers may have difficulty spotting it, which degrades their viewing experience.
In this case, the game ball usually needs to be identified in the video frames to help online viewers follow the game. However, conventional image-feature identification methods for locating the game ball in a video frame suffer from low identification accuracy.
Disclosure of Invention
In view of this, embodiments of the present application provide a method and an apparatus for identifying a target moving object, an electronic device, and a storage medium to solve the above problem.
In a first aspect, an embodiment of the present application provides a target moving object identification method, where the method includes: acquiring a target video frame; performing motion detection on the target video frame to obtain moving objects in the target video frame; screening moving objects to be identified from the moving objects based on intra-frame features of the moving objects; and identifying the target moving object from the moving objects to be identified based on an identification rule corresponding to the number of moving objects to be identified.
In a second aspect, an embodiment of the present application provides an apparatus for identifying a target moving object, where the apparatus includes: a target video frame acquisition module, a motion detection module, a screening module and an identification module. The target video frame acquisition module is used for acquiring a target video frame; the motion detection module is used for performing motion detection on the target video frame to obtain moving objects in the target video frame; the screening module is used for screening moving objects to be identified from the moving objects based on intra-frame features of the moving objects; and the identification module is used for identifying the target moving object from the moving objects to be identified based on an identification rule corresponding to the number of moving objects to be identified.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor and a memory; one or more programs are stored in the memory and configured to be executed by the processor to implement the methods described above.
In a fourth aspect, the present application provides a computer-readable storage medium having program code stored therein, where the program code, when executed by a processor, performs the method described above.
In a fifth aspect, embodiments of the present application provide a computer program product or a computer program comprising computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the above-described method.
According to the target moving object identification method and apparatus, electronic device and storage medium, after the target video frame is obtained, motion detection is performed on it to obtain the moving objects in the frame; moving objects to be identified are then screened out from these moving objects based on their intra-frame features; finally, the target moving object is identified from the moving objects to be identified based on an identification rule corresponding to their number. Because only the moving objects to be identified that were screened out of all moving objects are identified, which amounts to a preliminary screening of the moving objects, the interference of other moving objects can be eliminated to a certain extent, thereby improving the accuracy of target moving object identification.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are only some embodiments of the present application; other drawings can be obtained by those skilled in the art from these drawings without creative effort.
Fig. 1 is a flowchart illustrating a target moving object identification method according to an embodiment of the present application;
fig. 2 is a flowchart of an implementation manner of S110 in a target moving object identification method proposed by the embodiment shown in fig. 1;
fig. 3 is a flowchart illustrating an implementation manner of S120 in a target moving object identification method according to the embodiment shown in fig. 1;
FIG. 4 is a diagram of a frame of a target video frame in an embodiment of the present application;
fig. 5 is a flowchart illustrating another target moving object identification method according to an embodiment of the present application;
fig. 6 is a schematic flowchart of S230 in a target moving object identification method according to the embodiment shown in fig. 5;
fig. 7 is a schematic flow chart of S230 in another target moving object identification method proposed by the embodiment shown in fig. 5;
fig. 8 is a schematic flow chart of S230 in another target moving object identification method proposed by the embodiment shown in fig. 5;
fig. 9 is a flowchart illustrating another target moving object identification method according to an embodiment of the present application;
fig. 10 is a block diagram of a target moving object recognition apparatus according to an embodiment of the present application;
fig. 11 is a block diagram illustrating an electronic device for executing a target moving object recognition method according to an embodiment of the present application;
fig. 12 illustrates a storage unit for storing or carrying program codes for implementing a target moving object identification method according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In large-scale ball games such as football or basketball, an ultra-wide-angle camera or a panoramic camera can be used to capture a global image of the playing field, i.e. an image covering the entire field area, so that online spectators are presented with the whole field and can follow information such as the tactical layout of the game.
However, because the field used in a large-scale ball game is generally large and the game ball is small by comparison, the ball occupies only a small proportion of the captured global image, and online spectators may struggle to locate it in the video, which degrades their viewing experience.
In such cases, the game ball usually needs to be identified in the video frames to help online viewers follow the game. For example, the game ball may first be identified and then marked in the video, so that the online audience can quickly find it and the viewing experience improves. However, the related-art methods for identifying a game ball in a video image suffer from low identification accuracy.
For example, some methods identify the game ball in a video frame by feature-value matching: feature values are extracted directly from the entire video frame and then matched against feature-value rules. Because identification is performed on the feature values of the whole frame, interference from similar features is heavy and identification accuracy is low.
In other approaches, identification may be performed through neural-network deep learning. However, such recognition methods impose requirements on the size of the object to be recognized and the resolution of the input video frame; for example, the object to be recognized typically cannot be smaller than 20 × 20 pixels and the input frame cannot exceed 2K resolution. For display purposes, the video frame resolution is far higher than what deep learning accepts, so the frame must be scaled down to meet the input-size requirement. The scaling makes an already small football even smaller, and the recognition accuracy of the deep-learning method for the football drops accordingly.
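The scale of this effect can be illustrated with a small calculation. The frame size, network input limit and ball size below are assumed example values, not figures from the patent.

```python
# Illustrative arithmetic (assumed values): down-scaling a 4K global image to a
# network-sized input shrinks the football below a typical 20x20 minimum object size.
frame_w, frame_h = 3840, 2160      # captured global image resolution (assumed)
input_w, input_h = 1920, 1080      # resolution accepted by the network (assumed, "not greater than 2K")
ball_w, ball_h = 24, 24            # football size in the original frame, in pixels (assumed)

scale = min(input_w / frame_w, input_h / frame_h)
scaled_ball = (ball_w * scale, ball_h * scale)
print(scaled_ball)                 # (12.0, 12.0) -> smaller than the 20x20 minimum
```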
In view of this, the inventor proposes the target moving object identification method and apparatus, electronic device and storage medium provided by the present application. In the method, after a target video frame is obtained, motion detection is performed on it to obtain the moving objects in the frame; moving objects to be identified are then screened out from these moving objects based on their intra-frame features; finally, the target moving object is identified from the moving objects to be identified based on an identification rule corresponding to their number.
In this way, only the moving objects to be identified that were screened out of all moving objects are passed to identification, which amounts to a preliminary screening of the moving objects. Interference from other moving objects is thereby eliminated to a certain extent, and the accuracy of target moving object identification improves.
Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a flowchart illustrating a method for identifying a target moving object according to an embodiment of the present application, where the method includes the following steps:
and S110, acquiring a target video frame.
A video image is acquired in the form of video image frames (i.e. video frames), so a video image consists of a plurality of video frames. A target video frame is a video frame of the video image on which motion detection will subsequently be performed. The video image here is a global video image of the playing field of a large-scale ball game, acquired with an ultra-wide-angle camera or a panoramic camera.
Optionally, the acquired global video image may be a real-time video image, or a video image acquired in advance.
It can be understood that the more video frames presented to the viewer per unit time, i.e. the higher the video frame rate, the smoother the playback appears. Therefore, in some embodiments, to improve how the target moving object is displayed in the video image, all video frames of the video image may be taken as target video frames, so that the target moving object is determined in every frame. The video finally output to the online audience then contains more frames in which the target moving object has been identified, and the target moving object that the audience watches moves more smoothly.
A target moving object is a moving object of interest to the online audience; in a ball game, for example, the audience is interested in the game ball, i.e. a ball object such as a football, basketball or volleyball.
However, considering device performance limitations, or to conserve device resources, in other embodiments only part of the video frames in the video image may be acquired, and those partial video frames are taken as the target video frames. In this case, as shown in fig. 2, acquiring the target video frame may specifically include the following steps:
and S111, acquiring a target video image and a video frame extraction frame rate.
In this embodiment, the video frame extraction frame rate may be understood as a frame rate of a video image composed of target video frames extracted from the target video image.
And S112, extracting the target video frame from the target video image based on the video frame extraction frame rate.
To improve how the target moving object is displayed in the video image, in one embodiment the target video frames may be extracted uniformly from the target video image based on the video frame extraction frame rate, i.e. one target video frame is extracted every fixed number of frames. The number of interval frames may be determined from the frame rate of the target video image and the video frame extraction frame rate.
For example, if the frame rate of the target video image is 60 frames/second and the video frame extraction frame rate is 30 frames/second, the number of interval frames is 1, i.e. one video frame is extracted as a target video frame after every one skipped frame.
In another embodiment, target video frames may be extracted randomly from the target video image based on the video frame extraction frame rate. Still taking an extraction frame rate of 30 frames/second as an example, 30 video frames can then be randomly extracted from the target video image every second as target video frames.
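A minimal sketch of the uniform-extraction option, assuming OpenCV is available; the video path and frame rates are example assumptions, not values from the patent.

```python
# Illustrative sketch: uniformly extracting target video frames from a source video
# at a desired extraction frame rate.
import cv2

def extract_target_frames(video_path, extraction_fps):
    cap = cv2.VideoCapture(video_path)
    source_fps = cap.get(cv2.CAP_PROP_FPS) or extraction_fps   # e.g. 60 frames/second
    step = max(int(round(source_fps / extraction_fps)), 1)     # 60/30 -> keep every 2nd frame (1 interval frame)

    target_frames = []
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:            # keep one frame, then skip (step - 1) interval frames
            target_frames.append(frame)
        index += 1
    cap.release()
    return target_frames
```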
And S120, carrying out motion detection on the target video frame to obtain a motion object in the target video frame.
The moving objects refer to all objects that can move in the video image, for example, in a ball game, the moving objects may include a game ball, a player, or other objects that may move.
In a ball game the field background and similar content do not move, while the game ball and the players do. Therefore, to reduce the difficulty and the data volume of subsequent identification, motion detection can be performed on the target video frame to obtain the moving objects in it while excluding the field background and other static objects, which greatly improves the accuracy and efficiency of target moving object identification.
Motion detection on the target video frame can thus be understood as detecting the moving objects in it, and there are many ways to perform it. Optionally, motion detection may be performed on the target video frame with a Gaussian mixture model; alternatively, it may be performed by inter-frame differencing.
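Both options can be sketched briefly. The snippet below is a minimal illustration assuming OpenCV; the history, variance and difference thresholds are assumed example values.

```python
# Illustrative sketch of the two motion-detection options mentioned above:
# a Gaussian mixture background model (MOG2) and simple inter-frame differencing.
import cv2

bg_subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

def detect_motion_mog2(frame):
    # Foreground mask: non-zero pixels mark moving regions; shadow pixels (127) are thresholded away.
    fg_mask = bg_subtractor.apply(frame)
    _, fg_mask = cv2.threshold(fg_mask, 127, 255, cv2.THRESH_BINARY)
    return fg_mask

def detect_motion_frame_diff(prev_frame, curr_frame, diff_thresh=25):
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, motion_mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    return motion_mask
```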
In addition, when a global image of the playing field is captured with a device such as an ultra-wide-angle camera or a panoramic camera, movable objects outside the field, such as spectators, cleaning staff or players warming up with balls, are inevitably captured as well. If these objects are also fed into subsequent target moving object recognition, recognition accuracy and efficiency drop.
Therefore, to avoid moving objects outside the playing field degrading recognition accuracy and efficiency, in one implementation, as shown in fig. 3, performing motion detection on the target video frame to obtain the moving objects in it may specifically include the following steps:
and S121, acquiring a picture of an effective motion area in the target video frame.
The effective motion region can be understood as a region within a possible motion range of the target moving object. For example, for a ball game, the game ball may generally move within the field or within a certain range of the field boundary, and thus the effective movement area may refer to an area within the field or within a certain range of the field boundary.
When an ultra-wide-angle camera or a panoramic camera acquires the global images of the playing field, its acquisition position usually does not change, so the scene area covered by the acquired video does not change either. The picture of the effective area can therefore be obtained by calibrating the effective motion area in the target video frame in advance, for example by marking the field boundary, or a certain range beyond it, in the video image beforehand.
Optionally, the field, or the field plus a margin around its boundary, may be circled with a polygon, and the picture inside the polygon is taken as the picture of the effective motion area.
For example, fig. 4 shows a schematic picture of one target video frame containing both the area inside the field and the area outside it; the field area can then be circled with a polygon to obtain the picture of the effective motion area in the target video frame.
And S122, carrying out motion detection on the picture of the effective motion area to obtain a motion object in the picture of the effective motion area.
By performing motion detection only on the picture of the effective motion area, the influence of spectators, cleaning staff or players warming up with balls on target moving object recognition is eliminated, further improving recognition accuracy and efficiency.
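One way to realise the pre-calibrated effective area is a polygon mask applied to the motion result. The sketch below is illustrative and assumes OpenCV; the polygon vertices are made-up calibration values, not data from the patent.

```python
# Illustrative sketch: restricting motion detection to a pre-calibrated effective
# motion area by masking with a polygon that circles the playing field.
import cv2
import numpy as np

# Pre-calibrated field boundary in pixel coordinates (assumed); fixed because the camera does not move.
FIELD_POLYGON = np.array([[100, 900], [1820, 900], [1700, 200], [220, 200]], dtype=np.int32)

def effective_area_mask(frame_shape):
    mask = np.zeros(frame_shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [FIELD_POLYGON], 255)
    return mask

def motion_in_effective_area(frame, motion_mask):
    # Keep only the motion pixels that fall inside the calibrated field polygon.
    area_mask = effective_area_mask(frame.shape)
    return cv2.bitwise_and(motion_mask, area_mask)
```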
And S130, screening the moving objects to be identified based on the intra-frame characteristics of the moving objects.
A moving object to be identified is a moving object that will subsequently be identified with a preset identification rule. It should be understood that even the moving objects obtained from the picture of the effective motion area still include all kinds of moving objects, for example players, the game ball, flags waved by referees or other movable objects, and their number may still be large; if all of them were identified directly, identification efficiency would remain low. Therefore, to further improve recognition efficiency, in one embodiment the moving objects may be further filtered based on their intra-frame features to obtain the moving objects to be identified, reducing the number of objects actually passed to recognition.
Intra-frame features are features obtained from a single video frame.
And S140, identifying and obtaining the target moving object from the moving objects to be identified based on the identification rule corresponding to the number of the moving objects to be identified.
After the moving objects are filtered in step S130, the number of moving objects to be identified is reduced and may be one or more than one. Different identification rules can then be selected directly according to this number, and only the small set of moving objects to be identified needs to be processed to obtain the target moving object.
Considering that normally one game ball is present on the field by default, if the number of moving objects to be identified is one, that single moving object can be directly determined as the target moving object. In this case, identifying the target moving object from the moving objects to be identified based on the identification rule corresponding to their number includes: when the number of moving objects to be identified obtained by screening from the moving objects is one, determining that moving object to be identified as the target moving object.
Under normal conditions, however, besides the game ball there may also be other moving objects to be identified on the field, such as players, and since only one game ball is normally present by default, when there is more than one moving object to be identified they cannot all be directly determined as the target moving object; each must be identified separately. In this case, identifying the target moving object from the moving objects to be identified based on the identification rule corresponding to their number includes: when more than one moving object to be identified is obtained by screening from the moving objects, inputting the image corresponding to each moving object to be identified into an object classifier, so that the target moving object is identified from the moving objects to be identified by the object classifier, where the object classifier is trained on sample moving objects with classification labels determined from sample video frames.
In this embodiment, because each moving object to be identified is a local patch of the target video frame, no resolution scaling is performed and the patch resolution can satisfy the neural network model's input requirements. The object classifier may therefore be a trained neural network model, and using a neural network model to identify the moving objects to be identified improves identification accuracy.
The object classifier may be a supervised model, i.e. trained with sample moving objects carrying classification labels. The sample moving objects may be obtained through steps S110 to S130 above: a sample video frame is obtained, motion detection is performed on it to obtain the moving objects in the sample video frame, and the sample moving objects are screened out from those moving objects; the sample moving objects with classification labels can then be obtained through manual labelling.
Optionally, the neural network model may be built with open-source frameworks such as TensorFlow (a symbolic mathematical system based on dataflow programming, widely used to implement machine learning algorithms) or Caffe (Convolutional Architecture for Fast Feature Embedding).
In some embodiments, the object classifier may be configured to receive the moving object to be detected and then output whether the moving object to be detected is the target moving object. For example, if a moving object of "human head" is input to the object classifier and the target moving object is soccer, the output result of the object classifier is negative, that is, the moving object is not the target moving object. Thus, the target moving object can be identified and obtained from a plurality of moving objects to be identified.
In other embodiments, the object classifier may be configured to receive a moving object to be detected and output its object type, from which the target moving object is determined. For example, if a "head" moving object is input, the classifier outputs "head"; if a "football" moving object is input, it outputs "football". The type of each moving object to be identified can thus be recognized, and the target moving object identified from among the multiple candidates.
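A hedged sketch of this classifier idea follows, assuming a small TensorFlow/Keras model; the architecture, input size and class names are illustration assumptions, not details from the patent.

```python
# Illustrative sketch: a small CNN classifier applied to cropped moving-object patches.
import cv2
import numpy as np
import tensorflow as tf

CLASS_NAMES = ["football", "player", "other"]   # assumed label set
INPUT_SIZE = (32, 32)                           # only the crop is resized to the model input, not the whole frame

def build_classifier(num_classes=len(CLASS_NAMES)):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(*INPUT_SIZE, 3)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

def classify_candidate(model, frame, box):
    # box = (x, y, w, h) of a moving object to be identified in the target video frame.
    x, y, w, h = box
    crop = frame[y:y + h, x:x + w]
    crop = cv2.resize(crop, INPUT_SIZE).astype(np.float32) / 255.0
    probs = model.predict(crop[np.newaxis, ...], verbose=0)[0]
    return CLASS_NAMES[int(np.argmax(probs))]
```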
Since the object classifier can output either the target moving object or other object classes, other moving objects can be obtained as well, for example the player class. Thus, when online viewers want to follow a player's position, the player's bounding box can also be marked and displayed in the video image.
According to the target moving object identification method, after a target video frame is obtained, motion detection is performed on it to obtain the moving objects in the frame; moving objects to be identified are then screened out from these moving objects based on their intra-frame features; finally, the target moving object is identified from the moving objects to be identified based on an identification rule corresponding to their number. Because only the moving objects to be identified that were screened out of all moving objects are identified, which amounts to a preliminary screening of the moving objects, the interference of other moving objects can be eliminated to a certain extent, thereby improving the accuracy of target moving object identification.
Referring to fig. 5, fig. 5 is a flowchart illustrating a method for identifying a target moving object according to another embodiment of the present application, where the method includes the following steps:
and S210, acquiring a target video frame.
S220, carrying out motion detection on the target video frame to obtain a motion object in the target video frame.
In some embodiments, to enclose each moving object completely and make it easy to determine its position and actual size, the moving objects obtained by motion detection may be framed with rectangular boxes. Continuing with fig. 4, the moving objects are shown after rectangular box selection.
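A minimal sketch of this framing step, assuming the binary motion mask from the detection stage; the minimum-area value is an assumed noise filter, not a figure from the patent.

```python
# Illustrative sketch: frame each connected moving region with a rectangle so that
# its position and size can be read off for the later screening steps.
import cv2

def moving_object_boxes(motion_mask, min_area=9):
    contours, _ = cv2.findContours(motion_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        if w * h >= min_area:            # discard single-pixel noise
            boxes.append((x, y, w, h))
    return boxes
```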
And S230, screening the moving objects to be identified based on the intra-frame characteristics of the moving objects.
The intra-frame features of a moving object may be of various types.
In some embodiments, the intra-frame feature may be the actual size of the moving object at its current position in the target video frame. In this case, as shown in fig. 6, screening the moving objects to be identified from the moving objects based on their intra-frame features may specifically include the following steps:
S231A, obtaining the predicted sizes of the target moving object at different positions in the target video frame, and the actual size of the moving object at its current position in the target video frame.
In the global image of the playing field captured by an ultra-wide-angle or panoramic camera, objects follow a near-large, far-small rule: the farther an object is from the capture device in the real scene, the smaller it appears in the captured video image. Based on this, once the position of the capture device such as an ultra-wide-angle or panoramic camera is fixed, the sizes the target moving object would have at different positions in the target video frame can be calibrated in advance; these pre-calibrated sizes are the predicted sizes of the target moving object at different positions in the target video frame.
A moving object occupies a position in the target video frame, i.e. its current position, and likewise has a real size there. The actual size of the moving object at its current position can therefore be read directly from the target video frame.
After selecting the moving object using the rectangular frame, the actual size of the moving object may be obtained by multiplying the resolution length and width of the rectangular frame.
S232A, when the actual size of the moving object at the current position matches the predicted size of the current position, determining the moving object as the moving object to be identified.
In some embodiments, the predicted size may be a range, and when the actual size of the moving object at the current position matches the predicted size of the current position, the actual size is within the range of the predicted size.
Therefore, when the actual size of the moving object at its current position matches the predicted size for that position, its size is deemed satisfactory; moving objects whose size does not match the target moving object, such as a flag waved by a referee, are preliminarily filtered out, and the conforming moving objects are determined as the moving objects to be identified.
Therefore, the number of the moving objects which are identified by the identification rule in the following process can be reduced, the interference of the interfering moving objects is eliminated, and the identification accuracy and the identification efficiency of the target moving object are improved.
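The size check can be sketched as a lookup in a pre-calibrated table. Everything below, the banding scheme and the area ranges, is an illustrative assumption rather than calibration data from the patent.

```python
# Illustrative sketch of the size check: keep a moving object only if its rectangle
# area falls inside the predicted range for its current position in the frame.

# Predicted pixel-area ranges for the game ball, keyed by vertical band of the frame
# (objects lower in the frame are closer to the camera and therefore appear larger).
PREDICTED_SIZE_BY_BAND = {
    "far":  (9, 60),       # top of the frame
    "mid":  (40, 160),
    "near": (120, 400),    # bottom of the frame
}

def band_for_position(y, frame_height):
    if y < frame_height / 3:
        return "far"
    if y < 2 * frame_height / 3:
        return "mid"
    return "near"

def size_matches(box, frame_height):
    x, y, w, h = box
    low, high = PREDICTED_SIZE_BY_BAND[band_for_position(y, frame_height)]
    return low <= w * h <= high
```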
In other embodiments, the intra-frame feature may be an actual aspect ratio of the moving object in the target video frame. In this case, as shown in fig. 7, the step of obtaining the moving object to be identified by filtering from the moving object based on the intra-frame feature of the moving object may specifically include the following steps:
S231B, obtaining the predicted length-width ratio of the target moving object in the target video frame and the actual length-width ratio of the moving object in the target video frame.
It can be understood that the predicted length-width ratio of the target moving object in the target video frame is the ratio it should display there. For example, a ball object such as a football or basketball should appear nearly square, i.e. with a length-width ratio close to one to one, while a player should appear as an elongated strip.
The actual aspect ratio of the moving object in the target video frame can be directly obtained from the target video frame.
After selecting the moving object using the rectangular frame, the actual length-width ratio of the moving object can be obtained from the ratio of the resolution length and the resolution width of the rectangular frame.
S232B, when the actual length-width ratio of the moving object matches the predicted length-width ratio of the target moving object, determining that the moving object is the moving object to be identified.
In some embodiments, the predicted aspect ratio may be a range, where matching the actual aspect ratio of the moving object to the predicted aspect ratio of the target moving object means that the actual aspect ratio is within the range of the predicted aspect ratio.
Therefore, when the actual length-width ratio of a moving object matches the predicted length-width ratio of the target moving object, its ratio is deemed to meet the requirement; moving objects whose ratio does not match the target moving object can be screened out preliminarily, the conforming ones are determined as the moving objects to be identified, the number of objects later passed through the identification rule is further reduced, and the identification accuracy and efficiency of the target moving object improve.
For example, when a player keeps the lower limbs still and only an arm moves, the detected "arm" has a strip-like length-width ratio, whereas a football as the target moving object has a predicted length-width ratio close to square. Because the two do not match, moving objects such as the "arm" can be screened out.
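A minimal sketch of this ratio check for a roughly square target such as a ball; the tolerance band is an assumed value.

```python
# Illustrative sketch of the length-width-ratio check.
def aspect_ratio_matches(box, predicted_ratio=1.0, tolerance=0.3):
    x, y, w, h = box
    actual_ratio = w / float(h)
    # Keep the moving object only if its ratio lies inside the predicted range,
    # e.g. 0.7 .. 1.3 for a near-square ball; a strip-like "arm" is screened out.
    return abs(actual_ratio - predicted_ratio) <= tolerance
```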
In other embodiments, the intra-frame feature may be a color distribution parameter of the moving object, considering that some moving objects are symmetric, such as a game ball or a player's whole body, while others are not, such as the single "arm" detected when a player moves only one arm and keeps the lower limbs still. In this case, as shown in fig. 8, screening the moving objects to be identified from the moving objects based on their intra-frame features may specifically include the following steps:
S231C, color distribution parameters of the moving object are acquired.
The color distribution parameter of the moving object refers to the color distribution condition of the moving object, and optionally, may be the RGB color distribution condition of the moving object in the target video frame.
After the moving object is selected using the rectangular frame, the color distribution parameter of the moving object may be a color distribution parameter of the rectangular frame selected portion.
S232C, based on the color distribution parameters, performing symmetry detection on the moving object to obtain a symmetry detection result of the moving object.
It can be understood that, for a moving object with symmetry, the color distribution is regular, for example, the symmetric color distribution appears along the symmetry axis, so that the symmetry detection of the moving object can be performed based on the color distribution parameters to obtain the symmetry detection result of the moving object.
In some embodiments, after a moving object is framed with a rectangle, the rectangle containing it may be divided into, for example, four, nine or sixteen grid cells. Taking the nine-square grid as an example, RGB color comparison can be performed between the four corner cells (upper-left, upper-right, lower-left and lower-right) to compute similarities, or only between the upper-left and upper-right cells, or only between the lower-left and lower-right cells.
It should be noted that the embodiments of the present application do not limit the similarity calculation method. For example, the whole formed by the three left cells of the nine-square grid may be compared in RGB color with the whole formed by the three right cells to compute a single similarity, or the upper-left, upper-right, lower-left and lower-right cells may be compared in turn to compute several similarities.
The calculated similarity (there may be one or several) is then compared with a similarity threshold: if it is greater than the threshold, the symmetry detection result is that the moving object has symmetry; if it is less than or equal to the threshold, the result is that it has no symmetry.
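The grid-based comparison can be sketched as follows; the 3x3 split, the similarity measure and the threshold are assumptions for illustration, since the patent leaves the similarity calculation open.

```python
# Illustrative sketch of the symmetry check: split the object rectangle into a 3x3
# grid and compare the mean RGB color of mirrored corner cells.
import numpy as np

def is_symmetric(crop, similarity_threshold=0.8):
    h, w = crop.shape[:2]
    gh, gw = h // 3, w // 3
    if gh == 0 or gw == 0:
        return False

    def cell_mean(row, col):
        cell = crop[row * gh:(row + 1) * gh, col * gw:(col + 1) * gw]
        return cell.reshape(-1, 3).mean(axis=0)

    # Upper-left vs upper-right, lower-left vs lower-right of the nine-square grid.
    pairs = [((0, 0), (0, 2)), ((2, 0), (2, 2))]
    similarities = []
    for (r1, c1), (r2, c2) in pairs:
        a, b = cell_mean(r1, c1), cell_mean(r2, c2)
        diff = np.abs(a - b).mean() / 255.0      # 0 = identical colors, 1 = maximally different
        similarities.append(1.0 - diff)
    return min(similarities) > similarity_threshold
```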
S233C, when the symmetry detection result matches the predicted symmetry of the target moving object, determining the moving object as the moving object to be recognized.
It will be appreciated that the predicted symmetry of the target moving object may be symmetric or not; for example, a game ball selected as the target moving object is predicted to be symmetric. Some moving objects, however, have no symmetry: for a player who moves only an arm while the lower limbs stay still, the moving object detected as the single "arm" is not symmetric, and if such an asymmetric object is selected as the target moving object, its predicted symmetry is "not symmetric".
Therefore, when the predicted symmetry of the target moving object is to have symmetry, the moving object having symmetry as a result of the symmetry detection can be determined as the object to be recognized, and when the predicted symmetry of the target moving object is not to have symmetry, the moving object having no symmetry as a result of the symmetry detection can be determined as the object to be recognized.
In this embodiment, symmetry detection is performed on each moving object based on its color distribution parameter, and a moving object whose symmetry detection result matches the predicted symmetry of the target moving object is determined as a moving object to be identified. Moving objects whose color distribution does not match the target moving object can thus be screened out preliminarily, which further reduces the number of objects subsequently identified by the identification rule and improves the accuracy and efficiency of target moving object identification.
It should be noted that, when screening the moving objects to be identified based on intra-frame features, each intra-frame feature may be used on its own, for example only the actual size of the moving object at its current position in the target video frame, only its actual length-width ratio, or only its color distribution parameter. Any two or more of these intra-frame features may also be used in combination, for example the actual size together with the color distribution parameter, or all three at once. When several intra-frame features are combined, no priority needs to be set among them.
In this embodiment, the method for obtaining the moving object to be identified by screening from the moving object based on the intra-frame characteristics of the moving object is not specifically limited.
And S241, when the number of the to-be-identified moving objects obtained by screening from the moving objects is one, determining the to-be-identified moving objects as target moving objects.
And S242, when the number of the to-be-identified moving objects obtained by screening from the moving objects is more than one, inputting the image corresponding to each to-be-identified moving object into an object classifier so as to identify and obtain a target moving object from the to-be-identified moving objects through the object classifier, wherein the object classifier is obtained by training sample moving objects with classification labels determined from sample video frames.
In the target moving object identification method of this embodiment, several specific intra-frame features are provided for screening the moving objects to be identified, which widens the applicable range and reduces the difficulty of screening the moving objects.
Referring to fig. 9, fig. 9 is a flowchart illustrating a method for identifying a target moving object according to another embodiment of the present application, where the number of moving objects to be identified in a target video frame is greater than one, and the method may include the following steps:
and S310, acquiring a target video frame.
S320, carrying out motion detection on the target video frame to obtain a motion object in the target video frame.
S330, based on the intra-frame characteristics of the moving objects, the moving objects to be identified are obtained through screening from the moving objects.
In this embodiment, the number of the moving objects to be identified, which are obtained by screening from the moving objects, is more than one based on the intra-frame characteristics of the moving objects.
And S340, acquiring the actual distance between each moving object to be identified and a credible object in an adjacent target video frame, wherein the credible object is the corresponding moving object to be identified when the number of the moving objects to be identified, which are obtained by screening from the moving objects, is one.
It can be understood that, after the moving objects in a target video frame are filtered, there may still be several moving objects to be identified, yet in some cases the number of target moving objects is limited; for example, under normal conditions only one game ball is present on the field by default. If all moving objects to be identified were processed by the identification rule in random order, the target moving object might only be recognized when the last candidate is reached, lowering identification efficiency. Therefore, to further improve identification efficiency, the actual distance between each moving object to be identified and the trusted object of an adjacent target video frame, measured as if both lay in the same frame, may be obtained first.
As described above, for some target video frames the preset screening rules leave exactly one moving object to be identified; since one target moving object is assumed to be present in the frame by default, that moving object can be determined as the target moving object and is regarded as a trusted object.
Because the time interval between two adjacent target video frames is short, the target moving object cannot travel far within that interval in the real scene, so its positions in the two adjacent frames differ little. Hence, after the trusted object in the adjacent target video frame is obtained, the actual distance between each moving object to be identified in the current target video frame and that trusted object, evaluated in the same frame coordinates, can be obtained first.
In the actual scene the capture angle of the video acquisition device does not change, so the background layout in the captured frames does not change either. In this case, different target video frames can be mapped onto one another, i.e. the same coordinate system with the same origin can be established across frames, which yields the coordinates of the trusted object and of each moving object to be identified.
And S350, sequentially inputting the images corresponding to the moving objects to be identified into an object classifier according to the sequence from small to large of the actual distances between the moving objects to be identified and the credible objects in the adjacent video frames on the same video frame, so as to identify and obtain target moving objects from the moving objects to be identified through the object classifier, wherein the object classifier is obtained by training sample moving objects with classification labels determined from sample video frames.
Because the target moving object's positions in two adjacent target video frames differ little, once the actual distances between the moving objects to be identified and the trusted object of the adjacent frame have been obtained, they can be sorted from smallest to largest. The smaller the distance, the more likely the candidate is the target moving object, so the images corresponding to the moving objects to be identified can be input into the object classifier in ascending order of distance until the target moving object is recognized from among them.
With the method of this embodiment, identification starts from the most probable candidate, which improves the identification efficiency of the target moving object.
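A hedged sketch of this ordering step follows; the helper names are assumptions, and classify_candidate refers to the classifier sketched earlier.

```python
# Illustrative sketch: sort candidates by distance to the trusted object from an
# adjacent frame (same coordinate system, fixed camera) and classify in that order.
import math

def box_center(box):
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def identify_by_proximity(candidate_boxes, trusted_box, frame, model, target_class="football"):
    tx, ty = box_center(trusted_box)
    # Smaller distance to the trusted object => more likely the target, so classify first.
    ordered = sorted(
        candidate_boxes,
        key=lambda b: math.hypot(box_center(b)[0] - tx, box_center(b)[1] - ty),
    )
    for box in ordered:
        if classify_candidate(model, frame, box) == target_class:  # classifier sketched earlier
            return box
    return None
```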
It should be noted that the present application provides some specific examples of the foregoing implementation manners, and on the premise of not conflicting with each other, the examples of the embodiments may be arbitrarily combined to form a new target moving object identification method. It should be understood that a new target moving object identification method formed by any combination of examples is within the scope of the present application.
Referring to fig. 10, fig. 10 is a block diagram illustrating an apparatus 400 for identifying a target moving object according to an embodiment of the present application, where the apparatus 400 may include: a target video frame acquisition module 410, a motion detection module 420, a filtering module 430, and an identification module 440.
A target video frame obtaining module 410, configured to obtain a target video frame;
the motion detection module 420 is configured to perform motion detection on the target video frame to obtain a motion object in the target video frame;
the screening module 430 is configured to screen the moving object to be identified based on the intra-frame characteristics of the moving object;
and the identifying module 440 is configured to identify a target moving object from the moving objects to be identified based on an identification rule corresponding to the number of the moving objects to be identified.
As an embodiment, the target video frame obtaining module 410 is further configured to obtain a target video image and a video frame extraction frame rate; and extracting the target video frame from the target video image based on the video frame extraction frame rate.
As an embodiment, the motion detection module 420 is further configured to obtain a picture of an effective motion area in the target video frame; and carrying out motion detection on the picture of the effective motion area to obtain a motion object in the picture of the effective motion area.
As an embodiment, the screening module 430 is further configured to obtain predicted sizes of different positions of the target moving object in the target video frame, and an actual size of a current position of the moving object in the target video frame; and when the actual size of the current position of the moving object is matched with the predicted size of the current position, determining the moving object as the moving object to be identified.
As an embodiment, the screening module 430 is further configured to obtain a predicted length-width ratio of the target moving object in the target video frame and an actual length-width ratio of the moving object in the target video frame; and when the actual length-width ratio of the moving object is matched with the predicted length-width ratio of the target moving object, determining the moving object as the moving object to be identified.
As an embodiment, the screening module 430 is further configured to obtain a color distribution parameter of the moving object; based on the color distribution parameters, carrying out symmetry detection on the moving object to obtain a symmetry detection result of the moving object; and when the symmetry detection result is matched with the prediction symmetry of the target moving object, determining the moving object as the moving object to be identified.
As an embodiment, the identifying module 440 is further configured to determine the moving object to be identified as the target moving object when the number of the moving objects to be identified, which are obtained by filtering from the moving objects, is one; or when the number of the moving objects to be identified obtained by screening from the moving objects is more than one, inputting the image corresponding to each moving object to be identified into an object classifier so as to identify and obtain the target moving object from the moving objects to be identified through the object classifier, wherein the object classifier is obtained by training sample moving objects with classification labels determined from sample video frames.
As an implementation manner, the number of the to-be-identified moving objects is greater than one, in this case, the identifying module 440 is further configured to obtain an actual distance between each to-be-identified moving object and a trusted object in an adjacent video frame, where the trusted object is a corresponding to-be-identified moving object when the number of the to-be-identified moving objects obtained by screening from the moving objects is one; according to the sequence that the actual distance between each moving object to be recognized and a credible object in an adjacent video frame on the same video frame is from small to large, images corresponding to the moving objects to be recognized are sequentially input into an object classifier, so that a target moving object is recognized and obtained from the moving objects to be recognized through the object classifier, wherein the object classifier is obtained through training sample moving objects with classification labels determined from sample video frames.
The present application provides a target moving object identification device. Because only the moving objects to be identified that are screened out from the detected moving objects are identified, which amounts to a preliminary screening of the moving objects, the interference of other moving objects can be eliminated to a certain extent, thereby improving the accuracy of target moving object identification. At the same time, because only the screened moving objects to be identified are identified, the number of identifications performed with the preset identification rule is reduced, and the identification efficiency of the target moving object is improved.
It should be noted that the device embodiment and the method embodiment in the present application correspond to each other, and specific principles in the device embodiment may refer to the contents in the method embodiment, which is not described herein again.
An electronic device provided by the present application will be described below with reference to fig. 11.
Referring to fig. 11, based on the target moving object identification method, an embodiment of the present application further provides an electronic device 200 including a processor 102 capable of executing the target moving object identification method. The electronic device 200 may be a smart phone, a tablet computer, a portable computer, or the like. The electronic device 200 also includes a memory 104, a network module 106, and a screen 108. The memory 104 stores programs that can execute the content of the foregoing embodiments, and the processor 102 can execute the programs stored in the memory 104.
The processor 102 may include one or more processing cores. The processor 102 connects various parts of the entire electronic device 200 using various interfaces and lines, and performs the various functions of the electronic device 200 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 104 and by invoking the data stored in the memory 104. Alternatively, the processor 102 may be implemented in hardware in at least one of the forms of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 102 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and so on; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It can be understood that the modem may not be integrated into the processor 102 and may instead be implemented by a separate communication chip.
The memory 104 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 104 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 104 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the foregoing method embodiments, and the like. The data storage area may store data created by the electronic device 200 during use, such as a phonebook, audio and video data, and chat log data.
The network module 106 is configured to receive and transmit electromagnetic waves and to convert between electromagnetic waves and electrical signals, so as to communicate with a communication network or with other devices, for example, an audio playing device. The network module 106 may include various existing circuit elements for performing these functions, such as an antenna, a radio frequency transceiver, a digital signal processor, an encryption/decryption chip, a Subscriber Identity Module (SIM) card, and memory. The network module 106 may communicate with various networks such as the Internet, an intranet, or a wireless network, or may communicate with other devices through a wireless network. The wireless network may be a cellular telephone network, a wireless local area network, or a metropolitan area network. For example, the network module 106 may exchange information with a base station.
The screen 108 may display interface content and may also be used to respond to touch gestures.
It should be noted that, in order to implement more functions, the electronic device 200 may further include additional components, for example, a structured light sensor for acquiring face information, or a camera for acquiring an iris image.
Referring to fig. 12, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer readable medium 1100 has stored therein a program code that can be called by a processor to execute the method described in the above method embodiments.
The computer-readable storage medium 1100 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read only memory), an EPROM, a hard disk, or a ROM. Alternatively, the computer-readable storage medium 1100 includes a non-volatile computer-readable storage medium. The computer readable storage medium 1100 has storage space for program code 1110 for performing any of the method steps of the method described above. The program code can be read from or written to one or more computer program products. The program code 1110 may be compressed, for example, in a suitable form.
Based on the above-mentioned target moving object identification method, according to an aspect of an embodiment of the present application, there is provided a computer program product or a computer program, the computer program product or the computer program comprising computer instructions, the computer instructions being stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method provided in the various alternative implementations described above.
To sum up, according to the target moving object identification method, device, electronic device, storage medium, computer program product, or computer program provided in the embodiments of the present application, only the moving objects to be identified that are screened out from the detected moving objects are identified, which amounts to a preliminary screening of the moving objects. This eliminates the interference of other moving objects to a certain extent, thereby improving the accuracy of target moving object identification. At the same time, because only the screened moving objects to be identified are identified, the number of identifications performed with the preset identification rule is reduced, and the identification efficiency of the target moving object is improved.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not necessarily depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (11)

1. A target moving object identification method is characterized by comprising the following steps:
acquiring a target video frame;
performing motion detection on the target video frame to obtain a motion object in the target video frame;
screening the moving objects to be identified based on the intra-frame characteristics of the moving objects;
and identifying and obtaining a target moving object from the moving objects to be identified based on the identification rule corresponding to the number of the moving objects to be identified.
2. The method according to claim 1, wherein the screening of the moving objects to be identified based on the intra-frame features of the moving objects comprises:
acquiring the predicted sizes of different positions of a target moving object in the target video frame and the actual size of the current position of the moving object in the target video frame;
and when the actual size of the current position of the moving object is matched with the predicted size of the current position, determining the moving object as the moving object to be identified.
3. The method according to claim 1, wherein the screening of the moving objects to be identified based on the intra-frame features of the moving objects comprises:
acquiring the predicted length-width ratio of a target moving object in the target video frame and the actual length-width ratio of the moving object in the target video frame;
and when the actual length-width ratio of the moving object is matched with the predicted length-width ratio of the target moving object, determining the moving object as the moving object to be identified.
4. The method according to claim 1, wherein the screening of the moving objects to be identified based on the intra-frame features of the moving objects comprises:
acquiring color distribution parameters of the moving object;
based on the color distribution parameters, carrying out symmetry detection on the moving object to obtain a symmetry detection result of the moving object;
and when the symmetry detection result is matched with the predicted symmetry of the target moving object, determining the moving object as the moving object to be identified.
5. The method according to any one of claims 1 to 4, wherein the identifying a target moving object from the moving objects to be identified based on an identification rule corresponding to the number of the moving objects to be identified comprises:
when the number of the moving objects to be identified obtained by screening from the moving objects is one, determining the moving objects to be identified as target moving objects; or
When the number of the to-be-identified moving objects obtained by screening from the moving objects is more than one, inputting the image corresponding to each to-be-identified moving object into an object classifier so as to identify and obtain a target moving object from the to-be-identified moving objects through the object classifier, wherein the object classifier is obtained by training sample moving objects with classification labels determined from sample video frames.
6. The method according to any one of claims 1 to 4, wherein the number of the moving objects to be identified is greater than one, and the identifying a target moving object from the moving objects to be identified based on the identification rule corresponding to the number of the moving objects to be identified comprises:
acquiring the actual distance between each moving object to be identified and a credible object in an adjacent video frame, wherein the actual distance corresponds to the same video frame, and the credible object is the corresponding moving object to be identified when the number of the moving objects to be identified, which is obtained by screening from the moving objects, is one;
and sequentially inputting the images corresponding to the moving objects to be recognized into an object classifier according to the sequence from small to large of the actual distances, corresponding to the same video frame, of the moving objects to be recognized and the credible objects in the adjacent video frames, so as to recognize and obtain target moving objects from the moving objects to be recognized through the object classifier, wherein the object classifier is obtained by training sample moving objects with classification labels determined from sample video frames.
7. The method of claim 1, wherein the obtaining the target video frame comprises:
acquiring a target video image and a video frame extraction frame rate;
and extracting a target video frame from the target video image based on the video frame extraction frame rate.
8. The method according to claim 1, wherein the performing motion detection on the target video frame to obtain a moving object in the target video frame comprises:
acquiring a picture of an effective motion area in the target video frame;
and carrying out motion detection on the picture of the effective motion area to obtain a motion object in the picture of the effective motion area.
9. An apparatus for identifying a target moving object, the apparatus comprising:
the target video frame acquisition module is used for acquiring a target video frame;
the motion detection module is used for carrying out motion detection on the target video frame to obtain a motion object in the target video frame;
the screening module is used for screening the moving objects to be identified based on the intra-frame characteristics of the moving objects;
and the identification module is used for identifying and obtaining a target moving object from the moving objects to be identified based on the identification rule corresponding to the number of the moving objects to be identified.
10. An electronic device comprising a processor and a memory; one or more programs are stored in the memory and configured to be executed by the processor to implement the method of any of claims 1-8.
11. A computer-readable storage medium, having program code stored therein, wherein the program code when executed by a processor performs the method of any of claims 1-8.
CN202110219773.9A 2021-02-26 2021-02-26 Target moving object identification method and device, electronic equipment and storage medium Pending CN112837350A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110219773.9A CN112837350A (en) 2021-02-26 2021-02-26 Target moving object identification method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110219773.9A CN112837350A (en) 2021-02-26 2021-02-26 Target moving object identification method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112837350A true CN112837350A (en) 2021-05-25

Family

ID=75933995

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110219773.9A Pending CN112837350A (en) 2021-02-26 2021-02-26 Target moving object identification method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112837350A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140002616A1 (en) * 2011-03-31 2014-01-02 Sony Computer Entertainment Inc. Information processing system, information processing device, imaging device, and information processing method
CN105741324A (en) * 2016-03-11 2016-07-06 江苏物联网研究发展中心 Moving object detection identification and tracking method on moving platform
CN110781711A (en) * 2019-01-21 2020-02-11 北京嘀嘀无限科技发展有限公司 Target object identification method and device, electronic equipment and storage medium
CN111627049A (en) * 2020-05-29 2020-09-04 北京中科晶上科技股份有限公司 High-altitude parabola determination method and device, storage medium and processor

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113343031A (en) * 2021-06-10 2021-09-03 浙江大华技术股份有限公司 Data adding method and device, storage medium and electronic device
CN113343031B (en) * 2021-06-10 2023-03-24 浙江大华技术股份有限公司 Data adding method and device, storage medium and electronic device
CN115475373A (en) * 2022-09-14 2022-12-16 浙江大华技术股份有限公司 Motion data display method and device, storage medium and electronic device
CN115475373B (en) * 2022-09-14 2024-02-02 浙江大华技术股份有限公司 Display method and device of motion data, storage medium and electronic device

Similar Documents

Publication Publication Date Title
CN109145840B (en) Video scene classification method, device, equipment and storage medium
CN110189378A (en) A kind of method for processing video frequency, device and electronic equipment
CN109299658B (en) Face detection method, face image rendering device and storage medium
CN105451029B (en) A kind of processing method and processing device of video image
CN110288534B (en) Image processing method, device, electronic equipment and storage medium
CN111626163B (en) Human face living body detection method and device and computer equipment
CN112837350A (en) Target moving object identification method and device, electronic equipment and storage medium
CN110266955B (en) Image processing method, image processing apparatus, electronic device, and storage medium
CN110516572B (en) Method for identifying sports event video clip, electronic equipment and storage medium
CN111008935A (en) Face image enhancement method, device, system and storage medium
CN110321896A (en) Blackhead recognition methods, device and computer readable storage medium
CN112036209A (en) Portrait photo processing method and terminal
CN111821693A (en) Perspective plug-in detection method, device, equipment and storage medium for game
CN111144156B (en) Image data processing method and related device
CN106548114B (en) Image processing method, device and computer-readable medium
CN110490064B (en) Sports video data processing method and device, computer equipment and computer storage medium
CN112221133A (en) Game picture customizing method, cloud server, terminal and storage medium
CN115082993B (en) Face biopsy method and device based on mouth opening action
CN108898134B (en) Number identification method and device, terminal equipment and storage medium
CN113591829B (en) Character recognition method, device, equipment and storage medium
CN106162092B (en) A kind of method and system sport video acquisition and played
CN113936231A (en) Target identification method and device and electronic equipment
CN114722230A (en) Auxiliary judgment system using angle big data matching
CN113887354A (en) Image recognition method and device, electronic equipment and storage medium
CN114511877A (en) Behavior recognition method and device, storage medium and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination