CN114708381A - Motion trail generation method and device based on three-dimensional model and server

Info

Publication number
CN114708381A
CN114708381A
Authority
CN
China
Prior art keywords
target
dimensional model
target object
video data
motion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210232116.2A
Other languages
Chinese (zh)
Inventor
崔岩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Germany Zhuhai Artificial Intelligence Institute Co ltd
4Dage Co Ltd
Original Assignee
China Germany Zhuhai Artificial Intelligence Institute Co ltd
4Dage Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Germany Zhuhai Artificial Intelligence Institute Co ltd, 4Dage Co Ltd filed Critical China Germany Zhuhai Artificial Intelligence Institute Co ltd
Priority to CN202210232116.2A
Publication of CN114708381A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/292 Multi-camera tracking

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The application is applicable to the technical field of image processing, and provides a motion trajectory generation method based on a three-dimensional model, which comprises the following steps: acquiring video data; obtaining a target motion trajectory corresponding to a target object according to the video data; acquiring a scene image according to the target motion trajectory; generating a three-dimensional model according to the scene image; adding the target motion trajectory to the three-dimensional model; and displaying the three-dimensional model with the added target motion trajectory to a user. In this way, the user can monitor the motion trajectory of the target object intuitively and accurately.

Description

Motion trajectory generation method and device based on three-dimensional model and server
Technical Field
The application belongs to the technical field of image processing, and particularly relates to a motion trajectory generation method and device based on a three-dimensional model, and a server.
Background
With the development of intelligent video surveillance, a large amount of information about moving objects can be observed through video monitoring. However, in some special scenarios, video monitoring alone cannot satisfy the user's need to monitor the motion trajectory of a target object intuitively and accurately.
Disclosure of Invention
The embodiments of the application provide a motion trajectory generation method and device based on a three-dimensional model, and a server, which can solve the technical problem that, in the prior art, a user cannot intuitively and accurately monitor the motion trajectory of a target object.
In a first aspect, an embodiment of the present application provides a motion trajectory generation method based on a three-dimensional model, including:
acquiring video data;
obtaining a target motion track corresponding to a target object according to the video data;
acquiring a scene image according to the target motion track;
generating a three-dimensional model according to the scene image;
adding the target motion trajectory to the three-dimensional model;
and displaying the three-dimensional model added with the target motion trail to a user.
In a possible implementation manner of the first aspect, obtaining a target motion trajectory corresponding to a target object according to the video data includes:
determining a target object in the video data;
and generating a target motion track corresponding to the target object.
In a possible implementation manner of the first aspect, determining a target object in the video data includes:
inputting the video data into a target detection model and outputting candidate objects;
identifying identity information of the candidate object;
and determining a target object in the candidate objects according to the identity information.
In a possible implementation manner of the first aspect, generating a target motion trajectory corresponding to the target object includes:
acquiring an initial motion track of a target object;
preprocessing the initial motion trail;
and carrying out anomaly detection on the preprocessed initial motion track to obtain a target motion track.
In a possible implementation manner of the first aspect, the obtaining an initial motion trajectory of the target object includes:
marking the position and the target characteristic corresponding to the target object in the video data;
and inputting the marked video data into a target tracking model, and outputting an initial motion track corresponding to a target object.
In a possible implementation manner of the first aspect, the preprocessing the initial motion trajectory includes:
compressing the initial motion trail;
carrying out similarity measurement on the compressed initial motion trail;
and clustering the initial motion tracks after the similarity measurement.
In a second aspect, an embodiment of the present application provides a motion trajectory generation apparatus based on a three-dimensional model, including:
the first acquisition module is used for acquiring video data;
the first generation module is used for obtaining a target motion track corresponding to a target object according to the video data;
the second acquisition module is used for acquiring a scene image according to the target motion track;
the second generation module is used for generating a three-dimensional model according to the scene image;
the adding module is used for adding the target motion track to the three-dimensional model;
and the display module is used for displaying the three-dimensional model added with the target motion trail to a user.
In a possible implementation manner of the second aspect, the first generating module includes:
a determining submodule for determining a target object in the video data;
a generation submodule for generating a target motion trajectory corresponding to the target object.
In a possible implementation manner of the second aspect, the determining sub-module includes:
the detection unit is used for inputting the video data into a target detection model and outputting candidate objects;
the identification unit is used for identifying the identity information of the candidate object;
and the determining unit is used for determining the target object in the candidate objects according to the identity information.
In a possible implementation manner of the second aspect, the generation submodule includes:
the acquisition unit is used for acquiring an initial motion track of the target object;
the preprocessing unit is used for preprocessing the initial motion trail;
and the anomaly detection unit is used for performing anomaly detection on the preprocessed initial motion track to obtain a target motion track.
In a possible implementation manner of the second aspect, the obtaining unit includes:
the marking subunit is used for marking the position and the target characteristic corresponding to the target object in the video data;
and the tracking subunit is used for inputting the marked video data into a target tracking model and outputting an initial motion track corresponding to the target object.
In a possible implementation manner of the second aspect, the preprocessing unit includes:
a compressing subunit, configured to compress the initial motion trajectory;
the similarity measurement unit is used for carrying out similarity measurement on the compressed initial motion trail;
and the clustering unit is used for clustering the initial motion trail after the similarity measurement.
In a possible implementation manner of the second aspect, the second generation module includes:
the extraction submodule is used for extracting the characteristic points of the scene image;
the generating submodule is used for generating a point cloud according to the depth information and the color information of the characteristic points;
and the three-dimensional reconstruction submodule is used for performing three-dimensional reconstruction based on the point cloud to obtain the three-dimensional model.
In a third aspect, an embodiment of the present application provides a server, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the method according to the first aspect.
In a fourth aspect, an embodiment of the present application provides a readable storage medium storing a computer program which, when executed by a processor, implements the method according to the first aspect.
Compared with the prior art, the embodiment of the application has the advantages that:
In the embodiment of the application, video data is acquired; a target motion track corresponding to a target object is obtained according to the video data; a scene image is acquired according to the target motion track; a three-dimensional model is generated according to the scene image; the target motion track is added to the three-dimensional model; and the three-dimensional model with the added target motion track is displayed to a user. In this way, the user can monitor the motion trajectory of the target object intuitively and accurately.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings based on these drawings without inventive effort.
Fig. 1 is a schematic flowchart of a method for generating a motion trajectory based on a three-dimensional model according to an embodiment of the present application;
fig. 2 is a block diagram of a three-dimensional model-based motion trajectory generation apparatus according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
The technical solutions provided in the embodiments of the present application will be described below by specific embodiments.
Referring to fig. 1, a schematic flowchart of a method for generating a motion trajectory based on a three-dimensional model provided in an embodiment of the present application is shown, by way of example and not limitation, the method may be applied to a server, and the method may include the following steps:
step S101, video data is acquired.
The video data is video monitoring data transmitted by cameras arranged in a preset area (for example, within 1 kilometer of an incident scene).
It can be understood that, in the embodiment of the present application, video monitoring data transmitted by cameras arranged in a preset area is acquired, so that the motion trajectories of a large number of objects (for example, outdoor and indoor pedestrians) can be observed. After a target object is determined, the motion trajectory of the target object is generated; at the same time, scene images captured along the motion trajectory are acquired, and a three-dimensional model is generated from the scene images. Finally, the motion trajectory of the target object is added to the three-dimensional model, so that a user (for example, a manager) can monitor the motion trajectory of the target object intuitively and accurately.
Step S102, obtaining a target motion track corresponding to the target object according to the video data.
In a specific application, obtaining a target motion trajectory corresponding to a target object according to video data includes:
in step S201, a target object in the video data is determined.
In a specific application, determining a target object in video data includes:
step S301, inputting video data to the target detection model, and outputting candidate objects.
Illustratively, the video sequence is vectorized to obtain a plurality of vectorized, characterized consecutive images; the feature similarity of adjacent images is evaluated with a selective search algorithm, and images with high similarity are merged into the same candidate box; the candidate box is input into a CNN (convolutional neural network) to obtain depth features; finally, SVM (support vector machine) classification and RNN (recurrent neural network) regression are performed on the depth features, and the candidate objects are determined from the classification result.
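The patent gives no code for this stage; the following is a minimal sketch of the region-proposal step only, assuming OpenCV with the opencv-contrib modules is available. The frame filename and the proposal cap are illustrative assumptions, and the downstream CNN feature extraction and SVM classification are assumed to be implemented elsewhere.
```python
import cv2

def propose_candidate_boxes(frame, max_proposals=200):
    # selective search merges visually similar regions into candidate boxes
    ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation()
    ss.setBaseImage(frame)            # BGR frame taken from the video sequence
    ss.switchToSelectiveSearchFast()
    boxes = ss.process()              # array of (x, y, w, h) region proposals
    return boxes[:max_proposals]      # keep the top proposals for the CNN stage

frame = cv2.imread("frame_0001.jpg")  # hypothetical frame extracted from the video
for (x, y, w, h) in propose_candidate_boxes(frame)[:5]:
    print(x, y, w, h)
```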
Step S302, identifying the identity information of the candidate object.
Exemplarily, feature points of the image region corresponding to the candidate object are extracted; the feature points are encoded to obtain feature vector values; and the feature vector values are matched against a preset face matching library to obtain the identity information of the candidate object.
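A hedged sketch of this matching step is given below. The vector dimensionality, the distance threshold, and the contents of the face library are assumptions; a real system would obtain the encodings from a trained face-embedding network.
```python
import numpy as np

def identify(candidate_vec, face_library, max_dist=0.6):
    """face_library: dict mapping an identity name to its stored feature vector."""
    best_name, best_dist = None, float("inf")
    for name, stored_vec in face_library.items():
        dist = np.linalg.norm(candidate_vec - stored_vec)  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    # reject the match if even the closest library entry is too far away
    return best_name if best_dist < max_dist else None
```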
Step S303, determining a target object in the candidate objects according to the identity information.
It will be appreciated that the target object is determined from the candidate objects based on the identity information in the database.
Step S202, generating a target motion track corresponding to the target object.
In a specific application, generating a target motion trajectory corresponding to a target object includes:
in step S401, an initial motion trajectory of the target object is acquired.
In a specific application, the obtaining of the initial motion trajectory of the target object includes:
step S501, marking the corresponding position and the target characteristic of the target object in the video data.
Step S502, the marked video data is input into a target tracking model, and an initial motion track corresponding to a target object is output.
Exemplarily, candidate boxes are obtained in each frame by a YOLO target detection algorithm, and the similarity between candidate boxes in adjacent frames is calculated from the position and target features corresponding to the target object in the video data; a similarity measure such as the Mahalanobis distance or the cosine distance is used as the basis for target management; finally, the targets are assigned by the Hungarian algorithm or the KM (Kuhn-Munkres) algorithm, and two targets with high similarity are associated into one trajectory to obtain the initial motion trajectory.
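A sketch of the association step follows, under stated assumptions: tracks and detections are represented by appearance feature vectors, cosine distance is the similarity measure, and SciPy's linear_sum_assignment solves the same assignment problem as the Hungarian/KM algorithm. The cost threshold is an illustrative value.
```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_feats, det_feats, max_cost=0.4):
    # cosine-distance cost matrix between existing tracks and new detections
    t = track_feats / np.linalg.norm(track_feats, axis=1, keepdims=True)
    d = det_feats / np.linalg.norm(det_feats, axis=1, keepdims=True)
    cost = 1.0 - t @ d.T
    rows, cols = linear_sum_assignment(cost)  # optimal one-to-one assignment
    # keep only pairs similar enough to extend an existing trajectory
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < max_cost]
```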
Step S402, preprocessing an initial motion track.
In a specific application, the preprocessing of the initial motion trajectory comprises:
step S601, compress the initial motion trajectory.
Step S602, performing similarity measurement on the compressed initial motion trajectory.
And step S603, clustering the initial motion tracks after the similarity measurement.
Exemplarily, first, a trajectory compression algorithm (such as a static meshing method) is adopted to replace the original coordinate points with coordinate points that preserve the trajectory structure, under the condition that the compressed trajectory remains similar to the original trajectory, so that the number of trajectory data points is reduced as much as possible. Then, a similarity calculation method (such as the Euclidean distance or the edit distance) is adopted to measure the similarity of trajectories by the distance between them, and trajectories with high similarity are merged together. Finally, a trajectory clustering algorithm (such as the K-Means algorithm) is used to divide the large number of acquired trajectories into relatively homogeneous clusters, obtaining the preprocessed initial motion trajectory.
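A simplified sketch of the three preprocessing steps, assuming each trajectory is a sequence of (x, y) points: resampling to a fixed length stands in for the compression step, and plain Euclidean distance with K-Means stands in for the similarity measurement and clustering named above.
```python
import numpy as np
from sklearn.cluster import KMeans

def compress(track, n_points=32):
    # keep n_points evenly spaced coordinates that preserve the track shape
    idx = np.linspace(0, len(track) - 1, n_points).astype(int)
    return np.asarray(track, dtype=float)[idx]

def cluster_tracks(tracks, n_clusters=5, n_points=32):
    # flatten every compressed track so Euclidean distance compares whole shapes
    flat = np.stack([compress(t, n_points).ravel() for t in tracks])
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(flat)
    return flat, km.labels_, km.cluster_centers_
```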
Step S403, performing anomaly detection on the preprocessed initial motion track to obtain a target motion track.
Exemplarily, the preprocessed data is divided into a training set and a test set according to a certain proportion; a clustering algorithm is selected for learning, and a normal motion trajectory pattern is established. Whether the motion of the current object is normal is then judged against the trained normal motion trajectory pattern, so as to obtain the target motion trajectory.
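Continuing the K-Means sketch above, one hedged way to realize the normality check is to learn a distance threshold on the training set and flag any trajectory whose distance to the nearest cluster centre exceeds it; the quantile value is an assumption, not part of the patent.
```python
import numpy as np

def fit_threshold(train_flat, centers, quantile=0.95):
    # distance of every training trajectory to its nearest cluster centre
    dists = np.min(np.linalg.norm(train_flat[:, None] - centers[None], axis=2), axis=1)
    return np.quantile(dists, quantile)  # bound on "normal" distances

def is_abnormal(track_flat, centers, threshold):
    # a track far from every learned pattern is treated as abnormal
    return np.min(np.linalg.norm(centers - track_flat, axis=1)) > threshold
```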
Step S103, acquiring a scene image according to the target motion track.
It can be understood that, according to the target motion trajectory, scene images corresponding to the scenes through which the target object passes are retrieved from the local database, where the scene images may be captured by a depth camera.
Step S104, generating a three-dimensional model according to the scene image.
In a specific application, generating a three-dimensional model according to a scene image comprises:
step S701 extracts feature points of the scene image.
Illustratively, the feature points of the scene image are extracted according to a preset feature extraction algorithm, where the preset feature extraction algorithm may be a corner detection algorithm, such as Harris corner detection or FAST corner detection, or a blob feature point detection algorithm, such as the SIFT or SURF extraction algorithm.
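A minimal sketch of this step with OpenCV; SIFT is used because it ships with recent opencv-python releases, and FAST or Harris detectors could be substituted as the text notes. The image filename is a hypothetical example.
```python
import cv2

img = cv2.imread("scene_0001.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical scene image
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)
print(f"{len(keypoints)} feature points, descriptor shape {descriptors.shape}")
```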
Step S702, generating a point cloud according to the depth information and the color information of the feature points.
Illustratively, the feature points are processed directly according to the SFM (structure from motion) algorithm: the depth information corresponding to the feature points is calculated, and the color information corresponding to the feature points is extracted, so that the feature points carrying depth and color information are regarded as a point cloud.
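The patent leaves the SFM internals unspecified; the sketch below shows only the final back-projection step, assuming a pinhole camera with known intrinsics (fx, fy, cx, cy), a per-pixel depth map, and an aligned color image.
```python
import numpy as np

def to_point_cloud(depth, color, fx, fy, cx, cy):
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx              # back-project each pixel into camera space
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    colors = color.reshape(-1, 3)
    valid = points[:, 2] > 0           # drop pixels without a depth estimate
    return points[valid], colors[valid]
```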
Step S703, performing three-dimensional reconstruction based on the point cloud to obtain a three-dimensional model.
Illustratively, a three-dimensional reconstruction algorithm (e.g., the MVE algorithm) is used to extract a triangulated surface from the point cloud, thereby generating the three-dimensional model.
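MVE itself is a standalone pipeline rather than a library call; purely as an illustrative substitute, the sketch below uses Open3D's Poisson reconstruction to turn a colored point cloud, such as the one from the previous step, into a triangle mesh. The random demo arrays and output filename are stand-in assumptions.
```python
import numpy as np
import open3d as o3d

# demo data standing in for the output of to_point_cloud() above
points = np.random.rand(2000, 3)
colors = np.random.rand(2000, 3) * 255

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)
pcd.colors = o3d.utility.Vector3dVector(colors / 255.0)  # Open3D expects colors in [0, 1]
pcd.estimate_normals()                                   # Poisson reconstruction needs normals
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)
o3d.io.write_triangle_mesh("scene_model.ply", mesh)      # hypothetical output file
```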
Step S105, adding the target motion track to the three-dimensional model.
Illustratively, the target motion trajectory is relocalized and mapped from its current coordinate system into the coordinate system of the three-dimensional model.
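A sketch of this mapping, assuming the registration between the world frame and the model frame is already known as a rotation R (3x3) and a translation t (3,); the transform itself would come from the reconstruction, not from this snippet.
```python
import numpy as np

def map_track_to_model(track_xyz, R, t):
    """track_xyz: (N, 3) trajectory points in the current coordinate system."""
    return track_xyz @ R.T + t  # rigid transform into model coordinates
```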
Step S106, displaying the three-dimensional model with the added target motion track to the user.
Illustratively, the user sends a scene ID to the server through the user terminal, and the server returns the three-dimensional model with the added target motion trajectory to the user terminal, which displays it to the user.
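The patent does not specify the transport; purely for illustration, a minimal Flask sketch of the scene-ID lookup might look as follows, where the /model/<scene_id> route and the per-scene .ply files are assumptions.
```python
from pathlib import Path
from flask import Flask, abort, send_file

app = Flask(__name__)
MODEL_DIR = Path("models")  # hypothetical store of per-scene model files

@app.route("/model/<scene_id>")
def get_model(scene_id):
    model_path = MODEL_DIR / f"{scene_id}.ply"  # model with the trajectory added
    if not model_path.exists():
        abort(404)                              # unknown scene ID
    return send_file(model_path)
```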
In the embodiment of the application, video data is acquired; a target motion track corresponding to a target object is obtained according to the video data; a scene image is acquired according to the target motion track; a three-dimensional model is generated according to the scene image; the target motion track is added to the three-dimensional model; and the three-dimensional model with the added target motion track is displayed to a user. In this way, the user can monitor the motion trajectory of the target object intuitively and accurately.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 2 shows a block diagram of a motion trajectory generation device based on a three-dimensional model according to an embodiment of the present application, which corresponds to the method described in the foregoing embodiment, and only shows portions related to the embodiment of the present application for convenience of description.
Referring to fig. 2, the apparatus includes:
a first obtaining module 21, configured to obtain video data;
the first generating module 22 is configured to obtain a target motion trajectory corresponding to a target object according to the video data;
the second obtaining module 23 is configured to obtain a scene image according to the target motion trajectory;
a second generating module 24, configured to generate a three-dimensional model according to the scene image;
an adding module 25, configured to add the target motion trajectory to the three-dimensional model;
and the display module 26 is used for displaying the three-dimensional model added with the target motion trail to a user.
In one possible implementation, the first generating module includes:
a determining submodule for determining a target object in the video data;
a generation submodule for generating a target motion trajectory corresponding to the target object.
In one possible implementation, the determining sub-module includes:
the detection unit is used for inputting the video data into a target detection model and outputting candidate objects;
the identification unit is used for identifying the identity information of the candidate object;
and the determining unit is used for determining the target object in the candidate objects according to the identity information.
In one possible implementation, the generation submodule includes:
the acquisition unit is used for acquiring an initial motion track of the target object;
the preprocessing unit is used for preprocessing the initial motion trail;
and the anomaly detection unit is used for performing anomaly detection on the preprocessed initial motion track to obtain a target motion track.
In one possible implementation, the obtaining unit includes:
the marking subunit is used for marking the position and the target characteristic corresponding to the target object in the video data;
and the tracking subunit is used for inputting the marked video data into a target tracking model and outputting an initial motion track corresponding to the target object.
In one possible implementation, the preprocessing unit includes:
a compressing subunit, configured to compress the initial motion trajectory;
the similarity measurement unit is used for carrying out similarity measurement on the compressed initial motion trail;
and the clustering unit is used for clustering the initial motion trail after the similarity measurement.
In one possible implementation, the second generating module includes:
the extraction submodule is used for extracting the characteristic points of the scene image;
the generating submodule is used for generating a point cloud according to the depth information and the color information of the characteristic points;
and the three-dimensional reconstruction submodule is used for performing three-dimensional reconstruction based on the point cloud to obtain the three-dimensional model.
It should be noted that, since the information interaction between the above devices/units, their execution processes, and other details are based on the same concept as the method embodiments of the present application, their specific functions and technical effects can be found in the method embodiment section and are not repeated here.
Fig. 3 is a schematic structural diagram of a server according to an embodiment of the present application. As shown in fig. 3, the server 3 of this embodiment includes: at least one processor 30, a memory 31 and a computer program 32 stored in the memory 31 and executable on the at least one processor 30, the processor 30 implementing the steps of any of the various method embodiments described above when executing the computer program 32.
The server 3 may be a computing device such as a cloud server. The server may include, but is not limited to, a processor 30 and a memory 31. Those skilled in the art will appreciate that fig. 3 is merely an example of the server 3 and does not constitute a limitation on it; the server may include more or fewer components than shown, combine some components, or use different components, such as input and output devices, network access devices, etc.
The Processor 30 may be a Central Processing Unit (CPU), and may also be another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 31 may in some embodiments be an internal storage unit of the server 3, such as a hard disk or a memory of the server 3. The memory 31 may also be an external storage device of the server 3 in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the server 3. Further, the memory 31 may also include both an internal storage unit and an external storage device of the server 3. The memory 31 is used for storing an operating system, an application program, a BootLoader (BootLoader), data, and other programs, such as program codes of the computer program. The memory 31 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only used for distinguishing one functional unit from another, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The embodiments of the present application further provide a readable storage medium, preferably a computer readable storage medium, which stores a computer program; when the computer program is executed by a processor, the steps in the above method embodiments are implemented.
The embodiments of the present application further provide a computer program product which, when run on a mobile terminal, enables the mobile terminal to implement the steps of the above method embodiments.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the embodiments of the methods described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, etc. The computer readable medium may include at least: any entity or device capable of carrying the computer program code to the server, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB disk, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, computer-readable media may not include electrical carrier signals or telecommunications signals, in accordance with legislation and patent practice.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed server and method may be implemented in other ways. For example, the above-described apparatus/network device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A motion trail generation method based on a three-dimensional model is characterized by comprising the following steps:
acquiring video data;
obtaining a target motion track corresponding to a target object according to the video data;
acquiring a scene image according to the target motion track;
generating a three-dimensional model according to the scene image;
adding the target motion trajectory to the three-dimensional model;
and displaying the three-dimensional model added with the target motion trail to a user.
2. The method for generating a motion trajectory based on a three-dimensional model according to claim 1, wherein obtaining a target motion trajectory corresponding to a target object according to the video data comprises:
determining a target object in the video data;
and generating a target motion track corresponding to the target object.
3. The three-dimensional model-based motion trajectory generation method of claim 2, wherein determining the target object in the video data comprises:
inputting the video data into a target detection model and outputting candidate objects;
identifying identity information of the candidate object;
and determining a target object in the candidate objects according to the identity information.
4. The method for generating a motion trail based on a three-dimensional model according to claim 2, wherein generating a target motion trail corresponding to the target object comprises:
acquiring an initial motion track of a target object;
preprocessing the initial motion trail;
and carrying out anomaly detection on the preprocessed initial motion track to obtain a target motion track.
5. The three-dimensional model-based motion trail generation method according to claim 4, wherein obtaining an initial motion trail of the target object comprises:
marking the corresponding position and the target characteristic of a target object in the video data;
and inputting the marked video data into a target tracking model, and outputting an initial motion track corresponding to a target object.
6. The three-dimensional model-based motion trajectory generation method of claim 4, wherein preprocessing the initial motion trajectory comprises:
compressing the initial motion trail;
carrying out similarity measurement on the compressed initial motion trail;
and clustering the initial motion tracks after the similarity measurement.
7. The three-dimensional model-based motion trajectory generation method according to claim 1, wherein generating a three-dimensional model from the scene image comprises:
extracting feature points of the scene image;
generating a point cloud according to the depth information and the color information of the feature points;
and performing three-dimensional reconstruction based on the point cloud to obtain the three-dimensional model.
8. A motion trajectory generation device based on a three-dimensional model, comprising:
the first acquisition module is used for acquiring video data;
the first generation module is used for obtaining a target motion track corresponding to a target object according to the video data;
the second acquisition module is used for acquiring a scene image according to the target motion track;
the second generation module is used for generating a three-dimensional model according to the scene image;
the adding module is used for adding the target motion track to the three-dimensional model;
and the display module is used for displaying the three-dimensional model added with the target motion trail to a user.
9. A server comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 7 when executing the computer program.
10. A storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the method according to any one of claims 1 to 7.
CN202210232116.2A 2022-03-10 2022-03-10 Motion trail generation method and device based on three-dimensional model and server Pending CN114708381A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210232116.2A CN114708381A (en) 2022-03-10 2022-03-10 Motion trail generation method and device based on three-dimensional model and server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210232116.2A CN114708381A (en) 2022-03-10 2022-03-10 Motion trail generation method and device based on three-dimensional model and server

Publications (1)

Publication Number Publication Date
CN114708381A (zh) 2022-07-05

Family

ID=82168764

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210232116.2A Pending CN114708381A (en) 2022-03-10 2022-03-10 Motion trail generation method and device based on three-dimensional model and server

Country Status (1)

Country Link
CN (1) CN114708381A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination