CN111862144A - Method and device for determining the score of an object's movement trajectory - Google Patents

Info

Publication number
CN111862144A
Authority
CN
China
Prior art keywords
images
image
track
score
superposed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010628034.0A
Other languages
Chinese (zh)
Inventor
林坚伟
吴琦
肖潇
龚纯斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vismarty Xiamen Technology Co ltd
Original Assignee
Vismarty Xiamen Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vismarty Xiamen Technology Co ltd filed Critical Vismarty Xiamen Technology Co ltd
Priority to CN202010628034.0A priority Critical patent/CN111862144A/en
Publication of CN111862144A publication Critical patent/CN111862144A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/223 Analysis of motion using block-matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning

Abstract

The embodiments of the present application provide a method and a device for determining the score of an object's movement trajectory. The method collects the object's movement trajectory, divides its coordinate points into a plurality of parts, maps each part onto an image to obtain a plurality of mapped images, and processes the mapped images into a processed image. The processed image then undergoes image transformation processing to produce a multi-dimensional feature vector, which is input into a trajectory scoring model to determine the score of the object's movement trajectory. The method is simple, efficient, and intuitive, with strong practicability and high precision, and can be widely applied to scenarios and fields with strict requirements on movement trajectories, thereby addressing the low practicability and low precision of the prior art.

Description

Method and device for determining the score of an object's movement trajectory
Technical Field
The embodiments of the present application relate to the technical field of computer vision, and in particular to a method and device for determining the score of an object's movement trajectory.
Background
Within a given spatial range, judging whether an object's movement trajectory matches a preset trajectory is useful in many settings. For example, checking whether an intelligent robot's walking trajectory matches its preset path helps in analyzing the robot's performance and quality. Likewise, in industrial settings, judging whether the trajectory generated by an operator's sequence of actions meets the specification, for instance whether an inspection action on a display screen is performed correctly, provides valuable guidance for improving operator proficiency. Therefore, for production lines and other scenarios with strict route requirements, quickly determining whether a generated trajectory meets the requirement is an urgent problem to be solved.
Existing methods for judging whether a generated trajectory meets the requirements are mainly traditional algorithms based on distance and similarity. For example, the K-nearest-neighbor or K-means algorithm is used to process the trajectory and decide whether it meets the requirement, but such methods suffer from low practicability and low precision.
In summary, a method for determining the score of an object's movement trajectory is needed to address the low practicability and low precision of the prior art.
Disclosure of Invention
The embodiments of the present application provide a method and a device for determining the score of an object's movement trajectory, which address the low practicability and low precision of the prior art.
In a first aspect, an embodiment of the present application provides a method for determining the score of an object's movement trajectory, including:
collecting a movement trajectory of an object;
dividing the coordinate points of the movement trajectory into a plurality of parts, mapping each part onto an image to obtain a plurality of mapped images, and processing the plurality of mapped images to obtain a processed image;
performing image transformation processing on the processed image to obtain a multi-dimensional feature vector of the processed image;
inputting the multi-dimensional feature vector of the processed image into a trajectory scoring model for processing, and determining the score of the object's movement trajectory; the trajectory scoring model is determined by training a convolutional neural network on sample movement trajectories with labeled scores.
In this technical scheme, the coordinate points of the movement trajectory are divided into parts, each part is mapped onto an image to obtain a plurality of mapped images, the mapped images are processed into a processed image, and image transformation processing yields the processed image's multi-dimensional feature vector. Trajectory coordinate points can thus be flexibly and efficiently converted into image information, and the extracted feature vector supports the subsequent use of the trajectory scoring model to determine the trajectory's score. Feeding the feature vector into the trajectory scoring model determines the score of the object's movement trajectory. The method is simple, efficient, and intuitive, with strong practicability and high precision, and can be widely applied to scenarios and fields with strict requirements on movement trajectories, thereby addressing the low practicability and low precision of the prior art.
In a possible implementation, dividing the coordinate points of the object's movement trajectory into a plurality of parts, mapping each part onto an image to obtain a plurality of mapped images, and processing the mapped images to obtain a processed image includes:
dividing the coordinate points of the movement trajectory into a plurality of parts in time order, and mapping each part onto a single-channel image to obtain a plurality of mapped single-channel images;
superimposing the mapped single-channel images along the channel dimension to obtain a superimposed image, and compressing the superimposed image to obtain the processed image.
In a possible implementation, compressing the superimposed image to obtain the processed image includes:
compressing the superimposed image by a factor of m along its abscissa axis and by a factor of n along its ordinate axis to obtain the processed image, where m and n are positive integers.
In this technical scheme, the coordinate points of the movement trajectory are divided into parts in time order, each part is mapped onto a single-channel image, the mapped single-channel images are superimposed along the channel dimension to obtain a superimposed image, and the superimposed image is compressed along the horizontal and vertical axes. Trajectory coordinate points can thus be flexibly and efficiently converted into image information, supporting the subsequent use of the trajectory scoring model to determine the trajectory's score.
In a possible implementation, training a convolutional neural network on sample movement trajectories with labeled scores to determine the trajectory scoring model includes:
acquiring a sample movement trajectory with a labeled score;
dividing the coordinate points of the labeled sample trajectory into a plurality of parts in time order, and mapping each part onto a single-channel image to obtain a plurality of mapped single-channel images;
superimposing the mapped single-channel images along the channel dimension to obtain a superimposed image, then compressing the superimposed image by a factor of m along its abscissa axis and by a factor of n along its ordinate axis to obtain a compressed image, where m and n are positive integers;
performing image transformation processing on the compressed image to obtain a multi-dimensional feature vector of the compressed image;
inputting the multi-dimensional feature vector of the compressed image into the convolutional neural network for training to obtain a predicted score for the sample trajectory, and updating the convolutional neural network by backpropagating a loss between the predicted score and the labeled score until the network converges, yielding the trajectory scoring model.
In this technical scheme, dividing the labeled sample trajectory's coordinate points into parts in time order, mapping each part onto a single-channel image, superimposing the mapped images along the channel dimension, and compressing the superimposed image along both axes improves the stability of the trajectory scoring model and reduces its algorithmic complexity. Transforming the compressed image into a multi-dimensional feature vector further improves the model's stability and generalization. Training the convolutional neural network on these feature vectors then yields a trajectory scoring model that can determine the score of a movement trajectory accurately and quickly.
In one possible implementation, the sample movement trajectories include standard movement trajectories and non-standard movement trajectories;
before acquiring the sample movement trajectories with labeled scores, the method further includes:
acquiring a standard movement trajectory and a non-standard movement trajectory;
labeling the score of the standard movement trajectory as a preset threshold;
comparing the coordinate points of the non-standard trajectory with those of the standard trajectory to obtain a comparison result, and labeling the score of the non-standard trajectory according to that result; the score of the non-standard trajectory is greater than or equal to zero and less than or equal to the preset threshold.
In this technical scheme, labeling the standard trajectory's score as a preset threshold, comparing the non-standard trajectory's coordinate points with the standard trajectory's to obtain a comparison result, and labeling the non-standard trajectory's score from that result makes it simple and efficient to determine non-standard trajectory scores, providing training-sample support for training the convolutional neural network into the trajectory scoring model.
In a second aspect, an embodiment of the present application further provides a device for determining the score of an object's movement trajectory, including:
an acquisition unit configured to collect the movement trajectory of an object;
a processing unit configured to divide the coordinate points of the movement trajectory into a plurality of parts, map each part onto an image to obtain a plurality of mapped images, and process the mapped images to obtain a processed image; perform image transformation processing on the processed image to obtain its multi-dimensional feature vector; and input the multi-dimensional feature vector into a trajectory scoring model for processing to determine the score of the object's movement trajectory; the trajectory scoring model is determined by training a convolutional neural network on sample movement trajectories with labeled scores.
In a possible implementation, the processing unit is specifically configured to:
divide the coordinate points of the movement trajectory into a plurality of parts in time order, and map each part onto a single-channel image to obtain a plurality of mapped single-channel images;
superimpose the mapped single-channel images along the channel dimension to obtain a superimposed image, and compress the superimposed image to obtain the processed image.
In a possible implementation, the processing unit is specifically configured to:
compress the superimposed image by a factor of m along its abscissa axis and by a factor of n along its ordinate axis to obtain the processed image, where m and n are positive integers.
In a possible implementation, the processing unit is specifically configured to:
acquire a sample movement trajectory with a labeled score;
divide the coordinate points of the labeled sample trajectory into a plurality of parts in time order, and map each part onto a single-channel image to obtain a plurality of mapped single-channel images;
superimpose the mapped single-channel images along the channel dimension to obtain a superimposed image, then compress the superimposed image by a factor of m along its abscissa axis and by a factor of n along its ordinate axis to obtain a compressed image, where m and n are positive integers;
perform image transformation processing on the compressed image to obtain a multi-dimensional feature vector of the compressed image;
input the multi-dimensional feature vector of the compressed image into the convolutional neural network for training to obtain a predicted score for the sample trajectory, and update the convolutional neural network by backpropagating a loss between the predicted score and the labeled score until the network converges, yielding the trajectory scoring model.
In one possible implementation, the sample movement trajectories include standard movement trajectories and non-standard movement trajectories;
the processing unit is further configured to:
before the labeled sample trajectory is acquired, acquire a standard movement trajectory and a non-standard movement trajectory;
label the score of the standard movement trajectory as a preset threshold;
compare the coordinate points of the non-standard trajectory with those of the standard trajectory to obtain a comparison result, and label the score of the non-standard trajectory according to that result; the score of the non-standard trajectory is greater than or equal to zero and less than or equal to the preset threshold.
In a third aspect, an embodiment of the present application provides a computing device, including:
a memory for storing a computer program;
a processor for calling the computer program stored in the memory and executing, according to the obtained program, the steps of the method for determining the score of an object's movement trajectory.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer-executable program that causes a computer to perform the steps of the method for determining the score of an object's movement trajectory.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described here cover only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a method for determining the score of an object's movement trajectory according to an embodiment of the present disclosure;
Fig. 2 is a schematic diagram of converting movement-trajectory coordinate points into image information according to an embodiment of the present disclosure;
Fig. 3 is a schematic flowchart of training a convolutional neural network to obtain a trajectory scoring model according to an embodiment of the present disclosure;
Fig. 4 is a schematic structural diagram of a device for determining the score of an object's movement trajectory according to an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by those skilled in the art from the embodiments herein without creative effort fall within the protection scope of the present application.
The illustrative embodiments and descriptions herein are provided to explain the present application, not to limit it. Additionally, elements and components with the same or similar numbering in the drawings and embodiments represent the same or similar parts.
It should be understood that the terms "first", "second", and the like used herein do not denote any order or importance, nor do they limit the present application; they are used only to distinguish one element, component, or operation from another described in similar technical terms.
Furthermore, as used in this application, the terms "comprising", "including", "having", "containing", and the like are open-ended, meaning including but not limited to. Additionally, as used herein, "and/or" includes any and all combinations of the stated items.
Fig. 1 schematically illustrates the flow of a method for determining the score of an object's movement trajectory according to an embodiment of the present application; the flow may be performed by a device for determining the score of an object's movement trajectory.
As shown in Fig. 1, the process specifically includes:
Step 101: collect a movement trajectory of an object.
Step 102: divide the coordinate points of the movement trajectory into a plurality of parts, map each part onto an image to obtain a plurality of mapped images, and process the mapped images to obtain a processed image.
Step 103: perform image transformation processing on the processed image to obtain a multi-dimensional feature vector of the processed image.
Step 104: input the multi-dimensional feature vector of the processed image into the trajectory scoring model for processing, and determine the score of the object's movement trajectory.
In step 101, a movement trajectory of the object is collected. The trajectory coordinate points may be generated by an intelligent robot walking, or by a series of actions of an operator on an industrial line; this is not specifically limited. A trajectory is a series of time-ordered coordinate points, and trajectories include standard and non-standard ones.
In step 102, the coordinate points of the object's movement trajectory are divided into a plurality of parts in time order, each part is mapped onto a single-channel image to obtain a plurality of mapped single-channel images, the mapped single-channel images are superimposed along the channel dimension to obtain a superimposed image, and the superimposed image is compressed by a factor of m along its abscissa axis and by a factor of n along its ordinate axis to obtain the processed image, where m and n are positive integers.
Specifically, the coordinate points of the movement trajectory are divided into K parts in time order, where K is a parameter adjusted according to the complexity of the trajectory. Note that when K is 1, the time order of the trajectory is not considered. The coordinate points of each part are mapped onto a single-channel image, the image edges are cropped to remove the very few sparse coordinate points, and the K images are then stacked along the channel dimension. Next, the number of coordinate points is counted in every m pixels along the x axis and in every n pixels along the y axis, so the original image is reduced by a factor of m along the x axis and a factor of n along the y axis, yielding the reduced image. For example, as shown in Fig. 2, suppose the movement trajectory consists of coordinate points [[x1, y1], [x2, y2], ...]. The trajectory is divided into one part, the part's coordinate points are mapped onto a single-channel image, the image edges are cropped to remove the very few sparse coordinate points, and the number of coordinate points is counted in every 4 pixels along the x axis and in every 4 pixels along the y axis, so the original image is reduced by a factor of 4 along each axis, yielding the reduced image.
In step 103, the processed image undergoes image transformation processing to obtain its multi-dimensional feature vector. Specifically, the reduced image is preprocessed and resized to 224 × 224, so that one movement trajectory is transformed into a feature vector of 224 × 224 × K.
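The patent states only the 224 × 224 target size, not the interpolation method, so as an illustration a minimal nearest-neighbour resize (an assumption; a library routine with different interpolation may well be used in practice) could look like:

```python
import numpy as np

def resize_nearest(img, size=224):
    """Nearest-neighbour resize of an (H, W, K) array to (size, size, K).
    The interpolation choice is an assumption: the patent only says the
    reduced image is resized to 224 x 224."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size   # source row for each target row
    cols = np.arange(size) * w // size   # source column for each target column
    return img[rows[:, None], cols]

# A reduced 16 x 16 x K image becomes the 224 x 224 x K input tensor.
feature = resize_nearest(np.zeros((16, 16, 3)))
```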
In step 104, the multi-dimensional feature vector of the processed image is input into the trajectory scoring model for processing, and the score of the object's movement trajectory is determined: the higher the score, the more standard the trajectory; otherwise, the trajectory does not meet the requirement. The trajectory scoring model is determined by training a convolutional neural network on sample movement trajectories with labeled scores.
The trajectory scoring model is trained as follows. First, a sample movement trajectory with a labeled score is acquired; its coordinate points are divided into a plurality of parts in time order, and each part is mapped onto a single-channel image to obtain a plurality of mapped single-channel images. The mapped single-channel images are then superimposed along the channel dimension to obtain a superimposed image, which is compressed by a factor of m along its abscissa axis and a factor of n along its ordinate axis to obtain a compressed image, where m and n are positive integers. The compressed image undergoes image transformation processing to obtain its multi-dimensional feature vector, which is input into the convolutional neural network for training to obtain a predicted score for the sample trajectory; the network is updated by backpropagating a loss between the predicted score and the labeled score until it converges, yielding the trajectory scoring model. In addition, before the labeled sample trajectory is acquired, a standard movement trajectory and a non-standard movement trajectory are acquired, the standard trajectory's score is labeled as a preset threshold, the non-standard trajectory's coordinate points are compared with the standard trajectory's to obtain a comparison result, and the non-standard trajectory's score is labeled according to that result.
The score of the non-standard movement trajectory is greater than or equal to zero and less than or equal to the preset threshold; the preset threshold may be set empirically.
Specifically, a standard movement trajectory and non-standard movement trajectories are acquired. The score of the standard trajectory is empirically labeled as 1.0; each non-standard trajectory is manually compared against the standard trajectory, and its score is labeled according to the degree of difference between them. Note that the coordinate points of the standard and non-standard trajectories can be drawn on an image and presented visually, making the degree of difference easier and more intuitive to observe. The score ranges from 0.0 to 1.0: 0.0 means the trajectory bears no relation to the standard movement trajectory, and 1.0 means the trajectory is the standard trajectory. After the scores of the standard and non-standard trajectories are labeled, the trajectory coordinate points are divided into K parts in time order, where K is a parameter adjusted according to the trajectory's complexity (when K is 1, the time order is not considered). The coordinate points of each part are mapped onto a single-channel image, the image edges are cropped to remove the very few sparse coordinate points, and the K images are stacked along the channel dimension.
Then, to make the trajectory scoring model more stable and to reduce its algorithmic complexity, the number of coordinate points is counted in every m pixels along the x axis and in every n pixels along the y axis, so the original image is reduced by a factor of m along the x axis and a factor of n along the y axis, yielding the reduced image. Finally, the reduced image is preprocessed; to further improve the model's stability and generalization, it is resized to 224 × 224, so that one movement trajectory is transformed into a feature vector of 224 × 224 × K. A resnet50 convolutional neural network is then adopted and fine-tuned to take 224 × 224 × K feature vectors as input, with a single output node, so that each movement trajectory yields a predicted score after passing through the network. The error loss between the predicted score and the labeled score is computed with a mean squared error (MSE) loss function, and the network's model parameters are updated by backpropagation according to this loss, training the network into the trajectory scoring model. The formula of the mean squared error loss function is:
C = (1/2n) Σ_x ‖y(x) - a^L(x)‖²
where C denotes the cost, x denotes an input (a 224 × 224 × K feature vector), y(x) denotes the labeled score of the trajectory, a^L(x) denotes the score predicted by the network (the output of its final layer L), and n denotes the total number of movement trajectories.
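Written out in code, the cost reduces to a one-liner over the batch of predicted and labeled scores (a sketch; note the factor 1/2 follows the formula above):

```python
import numpy as np

def mse_loss(predicted, labeled):
    """Mean squared error cost C = (1/2n) * sum_x ||y(x) - a(x)||^2,
    following the patent's formula for n trajectories."""
    predicted = np.asarray(predicted, dtype=float)
    labeled = np.asarray(labeled, dtype=float)
    n = labeled.size
    return float(np.sum((labeled - predicted) ** 2) / (2 * n))
```

During training, the gradient of this cost with respect to the network's output drives the backpropagation update of the model parameters.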
Addressing the shortcomings of existing trajectory-similarity and trajectory-compliance scoring algorithms, this method converts trajectory coordinate points into image information, extracts effective image input features, and trains the score directly through the mean squared error loss between the score predicted by the convolutional neural network and the actually labeled score.
To better explain how the embodiment trains the convolutional neural network into the trajectory scoring model, the training process is described below through a specific implementation scenario.
As shown in fig. 3, the process includes the following steps:
step 301, collecting a sample moving track.
The sample movement trajectory may include a standard trajectory coordinate point and a non-standard trajectory coordinate point, and a sample movement trajectory is a series of coordinate points arranged in time series.
And step 302, marking the score of the sample moving track.
Marking the score of the standard movement track as 1.0 according to experience, comparing the non-standard movement track with the standard movement track manually, and marking the score of the non-standard movement track according to the difference degree of the non-standard movement track and the standard movement track.
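The patent performs this comparison by hand; as a hypothetical automated proxy (not part of the patent), one could map the mean pointwise distance between a candidate and the standard trajectory onto the [0.0, 1.0] score range:

```python
import numpy as np

def label_score(candidate, standard, max_dist=1.0):
    """Hypothetical distance-based labelling rule: mean Euclidean distance
    between corresponding coordinate points, mapped so that distance 0
    gives score 1.0 and distance >= max_dist gives score 0.0.
    Both trajectories are assumed to have the same number of points."""
    candidate = np.asarray(candidate, dtype=float)
    standard = np.asarray(standard, dtype=float)
    d = np.linalg.norm(candidate - standard, axis=1).mean()
    return float(np.clip(1.0 - d / max_dist, 0.0, 1.0))
```

Such a rule keeps the labeled score inside the required [0, threshold] range by construction; `max_dist` is an illustrative tolerance parameter.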
Step 303, converting the sample moving track into image information.
The sample movement track is mapped to single-channel images, the mapped single-channel images are superposed, and the superposed image is compressed along the abscissa and ordinate axes, thereby converting the track into image information.
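A minimal sketch of the mapping in step 303, assuming trajectory coordinates already lie on an integer pixel grid and that visits are accumulated as per-pixel counts (the patent does not fix the exact pixel encoding); the 6-point trajectory and 4 × 4 grid are illustrative:

```python
import numpy as np

def trajectory_to_image(points, k, height, width):
    """Split a time-ordered list of (x, y) points into k parts, map each
    part onto its own single-channel image, then stack along the channel axis."""
    channels = []
    part_len = -(-len(points) // k)  # ceiling division: points per part
    for i in range(k):
        img = np.zeros((height, width), dtype=np.float32)
        for x, y in points[i * part_len:(i + 1) * part_len]:
            img[y, x] += 1.0  # count how often each pixel is visited
        channels.append(img)
    return np.stack(channels, axis=-1)  # shape (height, width, k)

# Hypothetical 6-point trajectory split into K = 2 channels on a 4 x 4 grid
feat = trajectory_to_image([(0, 0), (1, 1), (2, 2), (3, 3), (3, 2), (3, 1)],
                           k=2, height=4, width=4)
```

The first three points land in channel 0 and the last three in channel 1, preserving the time ordering across the channel dimension.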
And step 304, carrying out image change processing on the image to obtain a multi-dimensional feature vector.
The reduced image is preprocessed and its size is adjusted to 224 × 224, so that one movement track is converted into a 224 × 224 × K feature vector.
Step 305, the multidimensional feature vector is input to a convolutional neural network.
And step 306, outputting the prediction score of the sample moving track.
The multi-dimensional feature vector is input into the convolutional neural network, which outputs a prediction score for the sample movement track.
Step 307, calculating error losses of the prediction score and the annotation score.
The loss of error between this prediction score and the annotation score is calculated using a loss function.
And step 308, updating the model parameters of the convolutional neural network according to the error loss.
And carrying out back propagation according to the mean square error loss function to update the model parameters of the convolutional neural network so as to train the convolutional neural network and obtain a track scoring model.
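The cycle of steps 305 to 308 can be illustrated with a toy gradient-descent loop; a single linear scoring unit stands in for the fine-tuned ResNet-50, and all features, labels, and hyperparameters below are hypothetical:

```python
import numpy as np

# Toy stand-in for steps 305-308: a linear scoring unit replaces the
# fine-tuned ResNet-50; features, labels, and hyperparameters are synthetic.
rng = np.random.default_rng(0)
features = rng.normal(size=(32, 8))      # per-trajectory input features
labels = features @ rng.normal(size=8)   # annotated scores (noise-free toy data)
w = np.zeros(8)                          # model parameters to be trained

def loss(w):
    """Quadratic cost: 1/(2n) * sum of squared prediction errors."""
    return np.mean((features @ w - labels) ** 2) / 2

initial_loss = loss(w)
lr = 0.05
for _ in range(5000):
    pred = features @ w                                 # step 306: predict scores
    grad = features.T @ (pred - labels) / len(labels)   # gradient of the cost
    w -= lr * grad                                      # step 308: update parameters
final_loss = loss(w)
```

The loop repeatedly computes prediction scores, measures the MSE against the labeled scores, and updates the parameters along the negative gradient, which is the same cycle the patent describes for the convolutional network.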
The embodiment shows that the coordinate points of the sample movement track marked with the scores are divided into a plurality of parts according to the time sequence, each part of the plurality of parts is mapped to the single-channel image, the plurality of mapped single-channel images are overlapped according to the channel dimension of the image, and the overlapped image is compressed on the horizontal axis and the vertical axis, so that the stability of the track scoring model can be improved, and the complexity of the track scoring model algorithm can be reduced. And then, image transformation processing is carried out on the compressed image to obtain a multi-dimensional feature vector of the compressed image, so that the stability and generalization capability of the track scoring model can be further improved. And then, inputting the multi-dimensional characteristic vector of the compressed image into a convolutional neural network for training to obtain a track scoring model, so that the score of the moving track can be accurately and quickly determined.
Based on the same technical concept, fig. 4 exemplarily shows an apparatus for determining a score of an object movement trajectory provided by an embodiment of the present application, and the apparatus may perform a flow of a method for determining a score of an object movement trajectory.
As shown in fig. 4, the apparatus includes:
an acquiring unit 401, configured to acquire a moving trajectory of an object;
a processing unit 402, configured to divide the coordinate points of the movement trajectory of the object into multiple parts, map each of the multiple parts onto an image to obtain multiple mapped images, and process the multiple mapped images to obtain a processed image; perform image transformation processing on the processed image to obtain a multi-dimensional feature vector of the processed image; and input the multi-dimensional feature vector of the processed image into a track scoring model for processing to determine the score of the object movement track, where the track scoring model is determined by training a convolutional neural network based on sample movement trajectories labeled with scores.
In a possible implementation manner, the processing unit 402 is specifically configured to:
dividing the coordinate points of the moving track of the object into a plurality of parts according to the time sequence, and mapping each part of the plurality of parts to a single-channel image to obtain a plurality of mapped single-channel images;
And superposing the plurality of mapped single-channel images according to the channel dimension of the images to obtain superposed images, and compressing the superposed images to obtain the processed images.
In a possible implementation manner, the processing unit 402 is specifically configured to:
compressing the superposed image by m times on the abscissa axis of the superposed image, and compressing the superposed image by n times on the ordinate axis of the superposed image to obtain the processed image; and m and n are positive integers.
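A hedged sketch of this m × n compression, interpreting it as summing point counts over non-overlapping pixel blocks (one plausible reading of counting the coordinate points every m pixels along x and every n pixels along y); the image sizes below are illustrative:

```python
import numpy as np

def compress(image, m, n):
    """Shrink an (H, W, K) point-count image by summing counts over
    non-overlapping n-pixel blocks along y and m-pixel blocks along x."""
    h, w, k = image.shape
    assert h % n == 0 and w % m == 0, "for simplicity, require exact tiling"
    return image.reshape(h // n, n, w // m, m, k).sum(axis=(1, 3))

# Hypothetical 8x8 single-channel count image compressed by m = n = 4
img = np.zeros((8, 8, 1), dtype=np.float32)
img[0, 0, 0] = img[1, 1, 0] = img[7, 7, 0] = 1.0
small = compress(img, m=4, n=4)  # shape (2, 2, 1)
```

Each output cell holds the number of trajectory points that fell inside the corresponding m × n pixel block, so no visit counts are lost by the reduction.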
In a possible implementation manner, the processing unit 402 is specifically configured to:
acquiring a sample moving track marked with a score;
dividing the coordinate points of the sample moving track marked with the scores into a plurality of parts according to the time sequence, and mapping each part of the plurality of parts to a single-channel image to obtain a plurality of mapped single-channel images;
superposing the plurality of mapped single-channel images according to the channel dimension of the images to obtain superposed images, compressing the superposed images by m times on the abscissa axis where the superposed images are located, and compressing the superposed images by n times on the ordinate axis where the superposed images are located to obtain compressed images; m and n are positive integers;
Carrying out image transformation processing on the compressed image to obtain a multi-dimensional feature vector of the compressed image;
and inputting the multi-dimensional feature vector of the compressed image into the convolutional neural network for training to obtain a prediction score of the sample moving track, and reversely propagating and updating the convolutional neural network through a loss function between the prediction score and the labeling score until the convolutional neural network is converged to obtain the track scoring model.
In one possible implementation, the sample movement trajectory includes a standard movement trajectory and a non-standard movement trajectory;
the processing unit 402 is further configured to:
before the sample movement track marked with the score is obtained, a standard movement track and a non-standard movement track are obtained;
marking the score of the standard movement track as a preset threshold value;
comparing the non-standard movement track coordinate point with the standard movement track coordinate point to obtain a comparison result, and marking the score of the non-standard movement track according to the comparison result; the fraction of the non-standard movement track is greater than or equal to zero and less than or equal to the preset threshold.
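One way such a comparison-based labeling could look, as an illustrative sketch only: the mean-distance formula and the `scale` constant are assumptions, not taken from the patent, which leaves the exact comparison rule open:

```python
import math

def label_score(candidate, standard, threshold=1.0, scale=10.0):
    """Label a non-standard trajectory with a score in [0, threshold] that
    shrinks as its points deviate from the standard trajectory's points.
    The mean-distance formula and `scale` are illustrative assumptions."""
    assert len(candidate) == len(standard)
    mean_dist = sum(math.dist(p, q)
                    for p, q in zip(candidate, standard)) / len(standard)
    return max(0.0, threshold - mean_dist / scale)

standard = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
perfect = label_score(standard, standard)  # identical track gets the threshold
worse = label_score([(0.0, 1.0), (1.0, 2.0), (2.0, 3.0)], standard)
```

A track identical to the standard one receives the preset threshold, and scores decrease toward zero as the point-wise deviation grows, matching the bounds stated above.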
Based on the same technical concept, an embodiment of the present invention provides a computing device, including:
a memory for storing a computer program;
and a processor, configured to call the computer program stored in the memory and execute, according to the obtained program, the steps of the method for determining the score of the object movement track.
Based on the same technical concept, embodiments of the present invention provide a computer-readable storage medium storing a computer-executable program for causing a computer to perform the steps of a method of determining a moving trajectory score of an object.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, and may be loaded onto the computer or other programmable apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Although the present application has been described in conjunction with specific features and embodiments thereof, it is evident that many alterations and modifications may be made by those skilled in the art without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A method of determining a score of a trajectory of an object, comprising:
collecting a moving track of an object;
dividing coordinate points of the moving track of the object into a plurality of parts, mapping each part of the plurality of parts to an image to obtain a plurality of mapped images, and processing the plurality of mapped images to obtain processed images;
Carrying out image transformation processing on the processed image to obtain a multi-dimensional feature vector of the processed image;
inputting the multi-dimensional feature vector of the processed image into a track scoring model for processing, and determining a score of the movement track of the object; wherein the track scoring model is determined by training a convolutional neural network based on a sample movement trajectory labeled with a score.
2. The method of claim 1, wherein the dividing the coordinate points of the movement trajectory of the object into a plurality of parts and mapping each of the plurality of parts onto an image to obtain a plurality of mapped images, and processing the plurality of mapped images to obtain a processed image comprises:
dividing the coordinate points of the moving track of the object into a plurality of parts according to the time sequence, and mapping each part of the plurality of parts to a single-channel image to obtain a plurality of mapped single-channel images;
and superposing the plurality of mapped single-channel images according to the channel dimension of the images to obtain superposed images, and compressing the superposed images to obtain the processed images.
3. The method of claim 2, wherein the compressing the superimposed image to obtain the processed image comprises:
compressing the superposed image by m times on the abscissa axis of the superposed image, and compressing the superposed image by n times on the ordinate axis of the superposed image to obtain the processed image; and m and n are positive integers.
4. The method of claim 1, wherein training a convolutional neural network based on a sample movement trajectory annotated with a score to determine the trajectory scoring model comprises:
acquiring a sample moving track marked with a score;
dividing the coordinate points of the sample moving track marked with the scores into a plurality of parts according to the time sequence, and mapping each part of the plurality of parts to a single-channel image to obtain a plurality of mapped single-channel images;
superposing the plurality of mapped single-channel images according to the channel dimension of the images to obtain superposed images, compressing the superposed images by m times on the abscissa axis where the superposed images are located, and compressing the superposed images by n times on the ordinate axis where the superposed images are located to obtain compressed images; m and n are positive integers;
Carrying out image transformation processing on the compressed image to obtain a multi-dimensional feature vector of the compressed image;
and inputting the multi-dimensional feature vector of the compressed image into the convolutional neural network for training to obtain a prediction score of the sample moving track, and reversely propagating and updating the convolutional neural network through a loss function between the prediction score and the labeling score until the convolutional neural network is converged to obtain the track scoring model.
5. The method of claim 4, wherein the sample movement trajectories include standard movement trajectories and non-standard movement trajectories;
before the obtaining of the sample movement track of the annotation score, the method further includes:
acquiring a standard movement track and a non-standard movement track;
marking the score of the standard movement track as a preset threshold value;
comparing the non-standard movement track coordinate point with the standard movement track coordinate point to obtain a comparison result, and marking the score of the non-standard movement track according to the comparison result; the fraction of the non-standard movement track is greater than or equal to zero and less than or equal to the preset threshold.
6. An apparatus for determining a score of a trajectory of an object, comprising:
the acquisition unit is used for acquiring the moving track of the object;
the processing unit is used for dividing the coordinate points of the movement trajectory of the object into a plurality of parts, mapping each part of the plurality of parts onto an image to obtain a plurality of mapped images, and processing the plurality of mapped images to obtain a processed image; carrying out image transformation processing on the processed image to obtain a multi-dimensional feature vector of the processed image; and inputting the multi-dimensional feature vector of the processed image into a track scoring model for processing, and determining a score of the movement track of the object; wherein the track scoring model is determined by training a convolutional neural network based on a sample movement trajectory labeled with a score.
7. The apparatus as claimed in claim 6, wherein said processing unit is specifically configured to:
dividing the coordinate points of the moving track of the object into a plurality of parts according to the time sequence, and mapping each part of the plurality of parts to a single-channel image to obtain a plurality of mapped single-channel images;
and superposing the plurality of mapped single-channel images according to the channel dimension of the images to obtain superposed images, and compressing the superposed images to obtain the processed images.
8. The apparatus as claimed in claim 7, wherein said processing unit is specifically configured to:
compressing the superposed image by m times on the abscissa axis of the superposed image, and compressing the superposed image by n times on the ordinate axis of the superposed image to obtain the processed image; and m and n are positive integers.
9. A computing device, comprising:
a memory for storing a computer program;
a processor for calling a computer program stored in said memory, for executing the method of any one of claims 1 to 5 in accordance with the obtained program.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer-executable program for causing a computer to execute the method of any one of claims 1 to 5.
CN202010628034.0A 2020-07-01 2020-07-01 Method and device for determining object movement track fraction Pending CN111862144A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010628034.0A CN111862144A (en) 2020-07-01 2020-07-01 Method and device for determining object movement track fraction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010628034.0A CN111862144A (en) 2020-07-01 2020-07-01 Method and device for determining object movement track fraction

Publications (1)

Publication Number Publication Date
CN111862144A true CN111862144A (en) 2020-10-30

Family

ID=73151835

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010628034.0A Pending CN111862144A (en) 2020-07-01 2020-07-01 Method and device for determining object movement track fraction

Country Status (1)

Country Link
CN (1) CN111862144A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112734704A (en) * 2020-12-29 2021-04-30 上海索验智能科技有限公司 Skill training evaluation method under real objective based on neural network machine learning recognition

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102397680A (en) * 2011-12-06 2012-04-04 北京市莱科智多教育科技有限公司 System and method for training fine movement
CN104064051A (en) * 2014-06-23 2014-09-24 银江股份有限公司 Locating information dynamic matching method for passenger portable mobile terminal and taken bus
CN107335192A (en) * 2017-05-26 2017-11-10 深圳奥比中光科技有限公司 Move supplemental training method, apparatus and storage device
EP3336746A1 (en) * 2016-12-15 2018-06-20 Nederlandse Organisatie voor toegepast- natuurwetenschappelijk onderzoek TNO System and method of video content filtering
CN108288027A (en) * 2017-12-28 2018-07-17 新智数字科技有限公司 A kind of detection method of picture quality, device and equipment
CN108710885A (en) * 2018-03-29 2018-10-26 百度在线网络技术(北京)有限公司 The detection method and device of target object
CN109034509A (en) * 2017-06-08 2018-12-18 株式会社日立制作所 Operating personnel's evaluation system, operating personnel's evaluating apparatus and evaluation method
CN109063568A (en) * 2018-07-04 2018-12-21 复旦大学 A method of the figure skating video auto-scoring based on deep learning
CN109080142A (en) * 2018-10-11 2018-12-25 郑州市中心医院 A kind of 3D printer spray head motion profile detection device
CN109711285A (en) * 2018-12-11 2019-05-03 百度在线网络技术(北京)有限公司 Training, test method and the device of identification model
CN109937343A (en) * 2017-06-22 2019-06-25 百度时代网络技术(北京)有限公司 Appraisal framework for the prediction locus in automatic driving vehicle traffic forecast
CN110123333A (en) * 2019-04-15 2019-08-16 努比亚技术有限公司 A kind of method, wearable device and the storage medium of wearable device synkinesia
CN110163084A (en) * 2019-04-08 2019-08-23 睿视智觉(厦门)科技有限公司 Operator action measure of supervision, device and electronic equipment
CN110443288A (en) * 2019-07-19 2019-11-12 浙江大学城市学院 A kind of track similarity calculation method based on sequence study
CN110717154A (en) * 2018-07-11 2020-01-21 中国银联股份有限公司 Method and device for processing characteristics of motion trail and computer storage medium

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102397680A (en) * 2011-12-06 2012-04-04 北京市莱科智多教育科技有限公司 System and method for training fine movement
CN104064051A (en) * 2014-06-23 2014-09-24 银江股份有限公司 Locating information dynamic matching method for passenger portable mobile terminal and taken bus
US20200012866A1 (en) * 2016-12-15 2020-01-09 Nederlandse Organisatie Voor Toegepast-Natuurwetenschappelijk Onderzoek Tno System and method of video content filtering
EP3336746A1 (en) * 2016-12-15 2018-06-20 Nederlandse Organisatie voor toegepast- natuurwetenschappelijk onderzoek TNO System and method of video content filtering
CN107335192A (en) * 2017-05-26 2017-11-10 深圳奥比中光科技有限公司 Move supplemental training method, apparatus and storage device
CN109034509A (en) * 2017-06-08 2018-12-18 株式会社日立制作所 Operating personnel's evaluation system, operating personnel's evaluating apparatus and evaluation method
CN109937343A (en) * 2017-06-22 2019-06-25 百度时代网络技术(北京)有限公司 Appraisal framework for the prediction locus in automatic driving vehicle traffic forecast
CN108288027A (en) * 2017-12-28 2018-07-17 新智数字科技有限公司 A kind of detection method of picture quality, device and equipment
CN108710885A (en) * 2018-03-29 2018-10-26 百度在线网络技术(北京)有限公司 The detection method and device of target object
CN109063568A (en) * 2018-07-04 2018-12-21 复旦大学 A method of the figure skating video auto-scoring based on deep learning
CN110717154A (en) * 2018-07-11 2020-01-21 中国银联股份有限公司 Method and device for processing characteristics of motion trail and computer storage medium
CN109080142A (en) * 2018-10-11 2018-12-25 郑州市中心医院 A kind of 3D printer spray head motion profile detection device
CN109711285A (en) * 2018-12-11 2019-05-03 百度在线网络技术(北京)有限公司 Training, test method and the device of identification model
CN110163084A (en) * 2019-04-08 2019-08-23 睿视智觉(厦门)科技有限公司 Operator action measure of supervision, device and electronic equipment
CN110123333A (en) * 2019-04-15 2019-08-16 努比亚技术有限公司 A kind of method, wearable device and the storage medium of wearable device synkinesia
CN110443288A (en) * 2019-07-19 2019-11-12 浙江大学城市学院 A kind of track similarity calculation method based on sequence study

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MARTIN DIMITRIEVSKI ET. AL: "Behavioral Pedestrian Tracking Using a Camera and LiDAR Sensors on a Moving Vehicle", HTTPS://DOI.ORG/10.3390/S19020391, 18 January 2019 (2019-01-18), pages 1 - 34 *
ZHANG XIANYANG; LIU GANG; MA XIAOLONG; CHEN JIAN; LI ZHAOLIN: "Sea surface ship trajectory prediction algorithm based on variational auto-encoding", Application Research of Computers, no. 1, 30 June 2020 (2020-06-30), pages 122 - 125 *
GAO YA: "Location prediction for moving object trajectory data", Information Science and Technology, 15 February 2020 (2020-02-15), pages 136 - 1256 *

Similar Documents

Publication Publication Date Title
Jana et al. YOLO based Detection and Classification of Objects in video records
US20220366576A1 (en) Method for target tracking, electronic device, and storage medium
JP6188400B2 (en) Image processing apparatus, program, and image processing method
WO2021036373A1 (en) Target tracking method and device, and computer readable storage medium
CN109977895B (en) Wild animal video target detection method based on multi-feature map fusion
CN112446363A (en) Image splicing and de-duplication method and device based on video frame extraction
CN102779157B (en) Method and device for searching images
CN111640089A (en) Defect detection method and device based on feature map center point
US20200125898A1 (en) Methods and systems of segmentation of a document
CN109034136A (en) Image processing method, device, picture pick-up device and storage medium
CN104517113A (en) Image feature extraction method and device and image sorting method and device
CN112336342A (en) Hand key point detection method and device and terminal equipment
US10937150B2 (en) Systems and methods of feature correspondence analysis
CN112215079B (en) Global multistage target tracking method
Teng et al. Generative robotic grasping using depthwise separable convolution
CN111144215B (en) Image processing method, device, electronic equipment and storage medium
Avola et al. A shape comparison reinforcement method based on feature extractors and f1-score
EP3043315B1 (en) Method and apparatus for generating superpixels for multi-view images
CN111862144A (en) Method and device for determining object movement track fraction
CN109657577B (en) Animal detection method based on entropy and motion offset
JP6393495B2 (en) Image processing apparatus and object recognition method
CN115937249A (en) Twin network-based multi-prediction output aligned target tracking method and device
CN115272393A (en) Video stream target tracking method and device for unmanned aerial vehicle and storage medium
WO2020237674A1 (en) Target tracking method and apparatus, and unmanned aerial vehicle
CN113592906A (en) Long video target tracking method and system based on annotation frame feature fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination