CN114782496A - Object tracking method and device, storage medium and electronic device

Info

Publication number: CN114782496A
Application number: CN202210694635.0A
Authority: CN (China)
Other languages: Chinese (zh)
Legal status: Pending
Prior art keywords: point cloud, target, frame, determining, target point
Inventors: 倪华健, 彭垚, 赵之健, 林亦宁
Assignee: Hangzhou Shanma Zhiqing Technology Co Ltd
Application filed by Hangzhou Shanma Zhiqing Technology Co Ltd

Classifications

    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06F18/25 Fusion techniques
    • G06T7/11 Region-based segmentation
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10012 Stereo images
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]

Abstract

The embodiment of the invention provides an object tracking method and device, a storage medium and an electronic device, wherein the method comprises the following steps: fusing first data acquired by a first device and a first point cloud acquired by a second device to obtain a first target point cloud frame; predicting, based on the first target point cloud frame, first predicted position information, in a second target point cloud frame, of each first object included in the first target point cloud frame, wherein the second target point cloud frame is obtained by fusing second data acquired by the first device and a second point cloud acquired by the second device; determining first actual position information of each second object included in the second target point cloud frame; and determining, based on each piece of first predicted position information and each piece of first actual position information, the object included in the second target point cloud frame that is the same as the first object. The method and the device solve the problem of inaccurate object tracking in the related art and achieve the effect of improving object tracking accuracy.

Description

Object tracking method and device, storage medium and electronic device
Technical Field
The embodiment of the invention relates to the field of communication, in particular to a method and a device for tracking an object, a storage medium and an electronic device.
Background
In the related art, a monocular camera is generally used for object tracking. However, under conditions such as strong light, dim light and traffic congestion, a detection algorithm based purely on images is prone to missed detections, and a tracking algorithm that relies heavily on 2D detection boxes is unstable.
Therefore, the problem of inaccurate object tracking exists in the related art.
In view of the above problems in the related art, no effective solution has been proposed.
Disclosure of Invention
The embodiment of the invention provides a method and a device for tracking an object, a storage medium and an electronic device, which are used for at least solving the problem of inaccurate object tracking in the related art.
According to an embodiment of the present invention, there is provided a tracking method of an object, including: fusing first data acquired by first equipment and first point cloud acquired by second equipment to acquire a first target point cloud frame, wherein the first data is acquired by shooting a target area by the first equipment at a target moment, the first point cloud is acquired by shooting the target area by the second equipment at the target moment, and the angle of shooting the target area by the first equipment is the same as the angle of shooting the target area by the second equipment; predicting, based on the first target point cloud frame, first predicted position information, in a second target point cloud frame, of each first object included in the first target point cloud frame, wherein the second target point cloud frame is a point cloud frame obtained by fusing second data acquired by the first equipment and second point cloud acquired by the second equipment, the second data is data acquired after the first data and adjacent to the first data, and the second point cloud is a point cloud acquired after the first point cloud and adjacent to the first point cloud; determining first actual position information of each second object included in the second target point cloud frame; determining an object included in the second target point cloud frame that is the same as the first object based on each of the first predicted location information and each of the first actual location information.
According to another embodiment of the present invention, there is provided an apparatus for tracking an object, including: the system comprises a fusion module, a first target point cloud frame and a second target point cloud frame, wherein the fusion module is used for fusing first data acquired by first equipment and first point cloud acquired by second equipment to acquire a first target point cloud frame, the first data is data acquired by shooting a target area by the first equipment at a target moment, the first point cloud is point cloud acquired by shooting the target area by the second equipment at the target moment, and the angle of shooting the target area by the first equipment is the same as the angle of shooting the target area by the second equipment; a prediction module, configured to predict, based on the first target point cloud frame, first predicted position information of each first object included in the first target point cloud frame in a second target point cloud frame, where the second target point cloud frame is a point cloud frame obtained by fusing second data acquired by the first device and a second point cloud acquired by the second device, the second data is data acquired after the first data and adjacent to the first data, and the second point cloud is a point cloud acquired after the first point cloud and adjacent to the first point cloud; a determining module for determining first actual position information of each second object included in the second target point cloud frame; a tracking module to determine an object included in the second target point cloud frame that is the same as the first object based on each of the first predicted location information and each of the first actual location information.
According to yet another embodiment of the invention, there is also provided a computer-readable storage medium having a computer program stored therein, wherein the computer program, when executed by a processor, implements the steps of the method as set forth in any of the above.
According to yet another embodiment of the present invention, there is also provided an electronic device, including a memory in which a computer program is stored and a processor configured to execute the computer program to perform the steps in any of the above method embodiments.
According to the invention, the first data acquired by the first device and the first point cloud acquired by the second device are fused to obtain a first target point cloud frame; first predicted position information, in a second target point cloud frame, of each first object included in the first target point cloud frame is predicted from the first target point cloud frame, wherein the second target point cloud frame is obtained by fusing second data acquired by the first device and a second point cloud acquired by the second device, the second data being acquired after and adjacent to the first data, and the second point cloud being acquired after and adjacent to the first point cloud; first actual position information of each second object included in the second target point cloud frame is determined; and the object included in the second target point cloud frame that is the same as the first object is determined according to each piece of the first predicted position information and the first actual position information. When the second target point cloud frame is determined, the data acquired by the first device and the data acquired by the second device are fused, and the object that is the same as the first object is determined from the fused data; that is, when object tracking is performed, data acquired by different devices are integrated, and the devices photograph the target area from the same angle, so that the accuracy of determining the tracked object is improved. Therefore, the problem of inaccurate object tracking in the related art can be solved, and the effect of improving object tracking accuracy is achieved.
Drawings
Fig. 1 is a block diagram of a hardware configuration of a mobile terminal of a tracking method of an object according to an embodiment of the present invention;
FIG. 2 is a flow diagram of a method of tracking an object according to an embodiment of the invention;
FIG. 3 is a schematic diagram of the operation of a target network model according to an embodiment of the invention;
FIG. 4 is a flow diagram of a method for tracking objects in accordance with a specific embodiment of the present invention;
fig. 5 is a block diagram of a tracking apparatus of an object according to an embodiment of the present invention.
Detailed Description
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings in conjunction with the embodiments.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
The method embodiments provided in the embodiments of the present application may be executed in a mobile terminal, a computer terminal, or a similar computing device. Taking an example of the method running on a mobile terminal, fig. 1 is a hardware structure block diagram of the mobile terminal of the object tracking method according to the embodiment of the present invention. As shown in fig. 1, the mobile terminal may include one or more (only one shown in fig. 1) processors 102 (the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), and a memory 104 for storing data, wherein the mobile terminal may further include a transmission device 106 for communication functions and an input-output device 108. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration, and does not limit the structure of the mobile terminal. For example, the mobile terminal may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
The memory 104 may be used for storing computer programs, for example, software programs and modules of application software, such as computer programs corresponding to the object tracking method in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer programs stored in the memory 104, so as to implement the above-mentioned method. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices via a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
In the present embodiment, a method for tracking an object is provided, and fig. 2 is a flowchart of a method for tracking an object according to an embodiment of the present invention, where as shown in fig. 2, the flowchart includes the following steps:
step S202, fusing first data acquired by first equipment and first point cloud acquired by second equipment to acquire a first target point cloud frame, wherein the first data is acquired by shooting a target area by the first equipment at a target moment, the first point cloud is acquired by shooting the target area by the second equipment at the target moment, and the angle of the target area shot by the first equipment is the same as the angle of the target area shot by the second equipment;
step S204, predicting, based on the first target point cloud frame, first predicted position information, in a second target point cloud frame, of each first object included in the first target point cloud frame, wherein the second target point cloud frame is a point cloud frame obtained by fusing second data acquired by the first equipment and second point cloud acquired by the second equipment, the second data is data acquired after the first data and adjacent to the first data, and the second point cloud is a point cloud acquired after the first point cloud and adjacent to the first point cloud;
step S206, determining first actual position information of each second object included in the second target point cloud frame;
step S208, determining the same object as the first object included in the second target point cloud frame based on each of the first predicted position information and each of the first actual position information.
In the above-described embodiments, the first device may be an image pickup device such as a camera (for example, a monocular camera) or a video recorder. The first data may be an image, a video, or the like captured by the camera device. The second device may be a radar, such as a lidar, a microwave radar or a millimeter-wave radar. The first device and the second device can be installed at the same height, with the same orientation and at adjacent positions, so that the angles at which the first device and the second device photograph the target area are the same. Of course, the first device and the second device may also be arranged with the same orientation at different positions, with their shooting angles of the target area adjusted to be the same.
In the above-described embodiment, the first object and the second object may be a person, a vehicle, or the like. When the first object and the second object are persons, the first device and the second device may be a camera device and a radar device installed at a shopping mall, a traffic post, or the like. When the first object and the second object are vehicles, the first device and the second device may be an image pickup device and a radar device installed at a traffic post or the like. Wherein the target area may be an overlapping area of the photographing areas of the first and second devices. For example, the shooting area of the first device may be a target area, and the shooting area of the second device may include the target area, i.e., the shooting area of the second device is larger than the shooting area of the first device.
In the above embodiment, time synchronization and joint calibration can be performed on the laser radar and the monocular camera, which are installed at the same height, in the same orientation and at adjacent positions, and the intrinsic and extrinsic parameters mapping the laser radar coordinate system to the camera are calculated, so that the target point cloud obtained by laser radar scanning can be correctly projected onto the camera image. The first data and the first point cloud may be the data and point cloud obtained by the first device and the second device photographing the target area at the same moment; the first data and the first point cloud obtained at the same moment can be fused to obtain the first target point cloud frame. A first predicted position, in the second target point cloud frame, of each first object included in the first target point cloud frame is predicted from the first target point cloud frame. The first object may be an object located in the target area: the first device photographs the target area, and when the target area contains one or more first objects, the first data includes those first objects. Similarly, the first point cloud also includes the first objects, of which there may be one or more. It should be noted that the first device and the second device acquire data at the same sampling interval.
In the above embodiment, the second data acquired by the first device after acquiring the first data and adjacent to the first data may be acquired, and the second point cloud acquired by the second device after acquiring the first point cloud and adjacent to the first point cloud may be acquired. And fusing the second data and the second point cloud to obtain a second target point cloud frame. After the first target point cloud frame and the second target point cloud frame are obtained, a first predicted position of the first object in the second target point cloud frame can be predicted according to the first target point cloud frame. And determining a first actual position of each second object included in the second point cloud frame according to the first target point cloud frame and the second target point cloud frame, and determining an object which is the same as each first object included in the second target point cloud frame according to the first predicted position and the first actual position.
In the above embodiment, the first target point cloud frame and the second target point cloud frame may be input into a pre-trained network model, and the first predicted position and the first actual position are determined by the network model.
Optionally, the execution subject of the above steps may be a background processor or other devices with similar processing capabilities, and may also be a machine integrated with at least a data processing device, where the data processing device may include a terminal such as a computer, a mobile phone, and the like, but is not limited thereto.
According to the invention, the first data acquired by the first device and the first point cloud acquired by the second device are fused to obtain a first target point cloud frame; first predicted position information, in a second target point cloud frame, of each first object included in the first target point cloud frame is predicted from the first target point cloud frame, wherein the second target point cloud frame is a point cloud frame obtained by fusing second data acquired by the first device and a second point cloud acquired by the second device, the second data being acquired after and adjacent to the first data, and the second point cloud being acquired after and adjacent to the first point cloud; first actual position information of each second object included in the second target point cloud frame is determined; and the object included in the second target point cloud frame that is the same as the first object is determined according to each piece of the first predicted position information and the first actual position information. When the second target point cloud frame is determined, the data acquired by the first device and the data acquired by the second device are fused, and the object that is the same as the first object is determined from the fused data; that is, when object tracking is performed, data acquired by different devices are integrated, and the devices photograph the target area from the same angle, so that the accuracy of determining the tracked object is improved. Therefore, the problem of inaccurate object tracking in the related art can be solved, and the effect of improving object tracking accuracy is achieved.
In an exemplary embodiment, fusing the first data acquired by the first device and the first point cloud acquired by the second device to obtain the first target point cloud frame includes: mapping each pixel point included in the first data to a coordinate system where the first point cloud is located to obtain a target pixel point; performing the following for each target point included in the first point cloud to determine a target vector for each of the target points: determining a coordinate value and a response intensity value of the target point, determining a color parameter value of the target pixel point corresponding to the target point, and determining the coordinate value, the response intensity value and the color parameter value as the target vector of the target point; and determining the point cloud formed by the target points whose target vectors have been determined as the first target point cloud frame. In the present embodiment, when the first data and the first point cloud are fused, each pixel point included in the first data can be projected into the coordinate system where the first point cloud is located to obtain target pixel points located in that coordinate system, and the coordinate value and response intensity value of each target point in the first point cloud are determined. The response intensity value may be a laser response intensity value when the second device is a lidar, and a reflection intensity value when the second device is a microwave radar. In the coordinate system of the first point cloud, each target point has three-dimensional coordinates (x, y, z); after the response intensity value is added as a parameter, each target point can be represented as (x, y, z, a). The coordinates of each target pixel point in the coordinate system of the first point cloud are then determined, the point in the first point cloud whose coordinates are the same as those of the target pixel point is found, and the coordinate value, the response intensity value and the color parameter value of that target pixel point are determined as the target vector of the point. The color parameter values may comprise the r, g, b color values, so the target vector can be expressed as (x, y, z, a, r, g, b).
In the above embodiment, each point (x, y, z, a) of the point cloud (where x, y and z are the three-dimensional coordinates of the i-th point in the point cloud coordinate system and a is the laser response intensity value of the i-th point) may also be projected onto the image plane of the camera, so that a one-to-one correspondence between point cloud points and image pixels is obtained, i.e., each point corresponds to one pixel. Thereby, each point cloud value can be expanded to a 7-dimensional vector (x, y, z, a, r, g, b), in which r, g, b are the values of the corresponding image pixel.
In the above embodiment, after the target vector of each target point is determined, the target vector corresponding to each target point may be labeled in the first point cloud, and the point cloud formed by all the target points labeled with the target vectors is determined as the first target point cloud frame.
It should be noted that the determination method of the second target point cloud frame is the same as the determination method of the first target point cloud frame, and details are not described herein.
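The fusion step described above (projecting points onto the image plane and appending the color of the matching pixel) can be illustrated by the following minimal sketch. It is an assumption-based illustration, not the patented implementation: the 3x4 lidar-to-image projection matrix P is assumed to come from the joint calibration step, and the name fuse_frame is illustrative.

```python
# Minimal sketch of image/point-cloud fusion: expand each point (x, y, z, a) to (x, y, z, a, r, g, b).
import numpy as np

def fuse_frame(points, image, P):
    """points: (N, 4) array of (x, y, z, a); image: (H, W, 3) RGB; P: assumed 3x4 projection matrix."""
    xyz1 = np.hstack([points[:, :3], np.ones((len(points), 1))])  # homogeneous lidar coordinates
    uvw = xyz1 @ P.T                                              # project to the image plane
    in_front = uvw[:, 2] > 0                                      # keep points in front of the camera
    uv = uvw[in_front, :2] / uvw[in_front, 2:3]                   # perspective divide
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, image.shape[1] - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, image.shape[0] - 1)
    rgb = image[v, u].astype(np.float32)                          # color of the corresponding pixel
    return np.hstack([points[in_front], rgb])                     # rows of (x, y, z, a, r, g, b)
```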
In one exemplary embodiment, predicting, based on the first target point cloud frame, first predicted position information of each first object included in the first target point cloud frame in a second target point cloud frame comprises: determining size information of the first object based on the target vector of each target point included in the first target point cloud frame; determining position information of the first object based on the size information; determining a target motion velocity of the first object based on the first target point cloud frame; and determining the first predicted position information based on the position information and the target motion velocity. In this embodiment, the size information of the first object may be determined from the target vector of each target point included in the first target point cloud frame, the position information of the first object may be determined from the size information, the target motion velocity of the first object may be determined, and the first predicted position information may be determined from the position information and the target motion velocity. The position information of the first object may include the position of the 3D box of the first object and the rotation angle of the 3D box, and the position of the 3D box may include the coordinates of its center point and the size of the 3D box.
In the above-described embodiment, the size information, the center point position information, the rotation angle, and the like of the first object may be determined from the target vector of each target point. The size information may be size information of a 3D frame of the first object, and the size information may include length, width, height, rotation angle, and the like, where the rotation angle is a rotation angle of the 3D frame with respect to a z-axis of the lidar coordinate system.
In one exemplary embodiment, determining the target motion velocity of the first object based on the first target point cloud frame comprises: acquiring the second target point cloud frame; inputting the first target point cloud frame and the second target point cloud frame into a target network model, and determining a first predicted target frame of each first object included in the first target point cloud frame and a second predicted target frame of the first object in the second target point cloud frame, wherein the target network model is obtained by machine learning by using multiple groups of training data, and each group of training data in the multiple groups of training data comprises adjacent point cloud frame pairs and calibration frame parameters of the objects included in each point cloud frame; determining the target motion velocity of the first object based on the first predicted target box and the second predicted target box. In this embodiment, the target motion speed may be determined according to the target network model, and the first target point cloud frame and the second target point cloud frame are input into the target network model to determine the target motion speed.
In the above embodiment, the position information and the target motion velocity of each first object may also be determined by using a pre-trained target network model, so as to determine the first predicted position information. When training the target network model, a data set acquired by a first device and a point cloud set acquired by a second device may be collected first, where the number of data items included in the data set is the same as the number of point clouds included in the point cloud set, and the data item and the point cloud with the same index were acquired at the same moment. Each point (x, y, z, a) of a point cloud (where x, y and z are the three-dimensional coordinates of the i-th point in the point cloud coordinate system and a is the laser response intensity value of the i-th point) is mapped onto the data plane with the same index, such as an image plane, giving a one-to-one correspondence with image pixels, i.e., each point corresponds to one pixel. Thereby, each point cloud value can be expanded to a 7-dimensional vector (x, y, z, a, r, g, b), in which r, g, b are the values of the corresponding image pixel.
After each frame of point cloud has been given r, g and b color information, the street point cloud videos synchronously collected by the camera and the laser radar can be labeled manually, and each person in the point cloud frame is given a 3D box (cx_k, cy_k, cz_k, w_k, h_k, l_k, θ_k) and a unique id, wherein (cx_k, cy_k, cz_k) represents the position of the center point of the k-th 3D box (the x-axis corresponds to the horizontal direction, the y-axis to the vertical direction, and the z-axis to the distance direction), (w_k, h_k, l_k) represents the width, height and length of the k-th 3D box, and θ_k represents the rotation angle of the k-th 3D box with respect to the z-axis of the lidar coordinate system.
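For concreteness, one such labeled box could be held in a simple data structure like the sketch below; the field names are illustrative assumptions, not taken from the patent.

```python
# Hypothetical container for one labeled 3D box and its track id.
from dataclasses import dataclass

@dataclass
class Box3D:
    track_id: int   # unique id assigned during manual labeling
    cx: float       # center, x-axis (horizontal direction)
    cy: float       # center, y-axis (vertical direction)
    cz: float       # center, z-axis (distance direction)
    w: float        # width
    h: float        # height
    l: float        # length
    yaw: float      # rotation angle about the lidar z-axis
```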
Two adjacent frames of point clouds are then extracted from the labeled point cloud video to form the training set of the detection and tracking task, which is used to learn to detect the 3D box and motion trajectory of each vehicle target in every point cloud frame. Assuming a point cloud video consists of N point cloud frames P_1, ..., P_N, N-1 point cloud frame pairs can be extracted from the video for training, i.e., {(P_1, P_2), (P_2, P_3), ..., (P_{N-1}, P_N)}.
For the constructed training set of point cloud pairs, a deep-learning 3D backbone network structure simultaneously extracts the 3D features F_t and F_{t+1} of the voxelized previous and next frame point clouds under the bird's-eye view (y-axis). Each F is a matrix of dimensions W x L x D, where W and L are respectively the width and length of the point cloud after voxelization and down-sampling by the stride set by the network structure, and D is the dimension of each 3D feature.
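The adjacent-pair construction mentioned above can be sketched as follows (an illustrative assumption; frames stands for the labeled point cloud frames in temporal order):

```python
# Build the N-1 adjacent-frame training pairs described above.
def make_training_pairs(frames):
    """frames: list of N labeled point cloud frames; returns [(P1, P2), (P2, P3), ...]."""
    return [(frames[i], frames[i + 1]) for i in range(len(frames) - 1)]
```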
Several head sub-network structures are then attached to the 3D point cloud features of the previous and next frames, and are trained to predict, respectively, the center position (cx, cy, cz) of each vehicle, its width, height and length (w, h, l), its rotation angle θ, and its motion velocity (for the point cloud of frame t a velocity v is predicted, and for the point cloud of frame t+1 the velocity -v is predicted, since the two are relative to each other). The operation diagram of the target network model can be seen in fig. 3.
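A minimal PyTorch-style sketch of such prediction heads is given below. It assumes the backbone has already produced a BEV feature map of shape (B, D, W, L); the use of 1x1 convolutions and the exact output channels are assumptions rather than the patent's concrete design.

```python
# Hedged sketch of the multi-head prediction structure described above (assumptions only).
import torch
import torch.nn as nn

class DetectionHeads(nn.Module):
    def __init__(self, feat_dim: int):
        super().__init__()
        self.center = nn.Conv2d(feat_dim, 3, kernel_size=1)    # (cx, cy, cz)
        self.size = nn.Conv2d(feat_dim, 3, kernel_size=1)      # (w, h, l)
        self.yaw = nn.Conv2d(feat_dim, 1, kernel_size=1)       # rotation about the lidar z-axis
        self.velocity = nn.Conv2d(feat_dim, 2, kernel_size=1)  # planar velocity in the BEV grid

    def forward(self, bev_features: torch.Tensor) -> dict:
        return {
            "center": self.center(bev_features),
            "size": self.size(bev_features),
            "yaw": self.yaw(bev_features),
            "velocity": self.velocity(bev_features),
        }
```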
After the training of the whole deep learning network model is finished, the trained network weights can be used at inference time to detect the 3D boxes and motion velocities of all vehicles in each adjacent point cloud frame pair of the point cloud video.
Under the bird's-eye view (y-axis), each vehicle can be represented as a center point. Thus, for the t-th point cloud frame, namely the first target point cloud frame, the center position and velocity of each vehicle detected by the deep learning network model can be obtained, and a simple calculation on the center position value and velocity value of each vehicle in the current frame predicts the center position of that vehicle in the (t+1)-th frame, i.e. in the second target point cloud frame; this predicted center position is the first predicted position information.
In the same way, for the (t+1)-th frame, the deep learning network model can detect the center position of each vehicle in that frame, i.e. the first actual position information.
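The prediction step can be sketched as follows (illustrative names; the centers and velocities are assumed to be BEV coordinates produced by the model, and dt is the sampling interval between adjacent frames):

```python
# Propagate each detected center of frame t forward by one sampling interval.
import numpy as np

def predict_next_centers(centers_t, velocities_t, dt):
    """centers_t: (M, 2) BEV centers; velocities_t: (M, 2) predicted velocities; dt: seconds."""
    return np.asarray(centers_t) + np.asarray(velocities_t) * dt
```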
In one exemplary embodiment, determining the target motion velocity of the first object based on the first predicted target frame and the second predicted target frame comprises: determining a first coordinate of a first mark point of the first prediction target frame; determining a second coordinate of a second mark point of the second prediction target frame; determining a movement distance of the first object based on the first coordinate and the second coordinate; determining a sampling time interval of the first device or the second device; determining a ratio of the movement distance to the sampling time interval as the target movement speed. In this embodiment, a first predicted target frame of the first object and a second predicted target frame of the first object in the second target point cloud frame may be determined, a distance between a center point of the first predicted target frame and a center point of the second predicted target frame may be determined, or a distance between a target vertex of the first predicted target frame and a target vertex of the second predicted target frame may be determined, such as a distance between an upper left vertex of the first predicted target frame and an upper left vertex of the second predicted target frame. And determining the sampling time interval of the first equipment or the second equipment, and determining the ratio of the distance to the sampling time interval as the target movement speed. The first prediction target frame and the second prediction target frame can be 3D frames, and the first mark point and the second mark point can be points of the same type. For example, when the first marker point is the center point of the first prediction target frame, the second marker point is also the center point of the second prediction target frame. When the first marking point is a certain vertex of the first prediction target frame, the second marking point is also a certain vertex of the second prediction target frame.
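A minimal sketch of this speed estimate follows; the marker points are assumed to be matching points (for example the two center points) of the first and second predicted target boxes, and the function name is illustrative.

```python
# Speed = distance between matching marker points of the two predicted boxes / sampling interval.
import numpy as np

def target_speed(marker_frame1, marker_frame2, sample_interval):
    distance = np.linalg.norm(np.asarray(marker_frame2) - np.asarray(marker_frame1))
    return distance / sample_interval
```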
In one exemplary embodiment, determining the same object as the first object included in the second target point cloud frame based on each of the first predicted location information and each of the first actual location information comprises: determining a target distance between each of the first predicted location information and each of the first actual location information; determining first sub-target distances, included in the target distances, that are less than a preset threshold; determining the first sub-actual position information corresponding to the minimum distance among the first sub-target distances; and determining the second object corresponding to the first sub-actual position information as the same object as the first object. In the present embodiment, after the first predicted position information and the first actual position information are obtained, the target distance between each piece of first predicted position information and each piece of first actual position information may be calculated. The target distance may be a Euclidean distance, a cosine distance, or the like. That is, the Euclidean distances between all the predicted center points of the t-th frame and all the actual center points of the (t+1)-th frame are calculated, and center point pairs whose distance is smaller than the predetermined threshold α can be considered to be the same vehicle (the same id) in the preceding and following frames (if there are several candidate points with distance smaller than α, the one with the smallest distance is matched).
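A simple sketch of this matching rule is given below; it is an illustrative greedy nearest-center matcher under the threshold α, not necessarily the exact matching procedure of the patent.

```python
# For each predicted center, match the nearest actual center if it is closer than alpha.
import numpy as np

def match_centers(predicted, actual, alpha):
    """predicted: (M, 2) centers; actual: (K, 2) centers. Returns {pred_idx: actual_idx}."""
    matches = {}
    for i, p in enumerate(np.asarray(predicted)):
        dists = np.linalg.norm(np.asarray(actual) - p, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < alpha:
            matches[i] = j        # same id in the preceding and following frames
    return matches
```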
In one exemplary embodiment, after determining the target distance between each of the first predicted location information and each of the first actual location information, the method further comprises: determining, when the target distances are all greater than a predetermined threshold, whether an object identical to the first object is included in a predetermined number of third target point cloud frames, wherein the third target point cloud frames are generated after the second target point cloud frame, and the first of the predetermined number of third target point cloud frames is adjacent to the second target point cloud frame; in the case that the same object as the first object exists in the predetermined number of third target point cloud frames, allocating identification information to the first object; and deleting the first object in the case that no object identical to the first object exists in the predetermined number of third target point cloud frames. In the present embodiment, a center point in the t-th frame for which all calculated distance values are larger than the preset threshold α can be considered a newly appearing or lost vehicle; its id is retained for three frames, and if it can be matched within the following three frames it is treated as a new id, otherwise it is considered lost and is discarded.
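A hedged sketch of this retention rule follows; the counter-based bookkeeping and the names are assumptions, and it simply keeps an unmatched track alive for three frames before dropping it as lost.

```python
# Keep unmatched tracks alive for up to three frames, then drop them as lost.
MAX_MISSES = 3

def update_tracks(tracks, matches, detections):
    """tracks: {track_id: {"center": ..., "misses": int}}; matches: {track_id: detection_idx}."""
    for tid, track in list(tracks.items()):
        if tid in matches:
            track["center"] = detections[matches[tid]]
            track["misses"] = 0              # matched again: the id is confirmed
        else:
            track["misses"] += 1
            if track["misses"] > MAX_MISSES:
                del tracks[tid]              # not matched within three frames: lost id
    return tracks
```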
The following describes a tracking method for an object with reference to a specific embodiment:
fig. 4 is a flowchart of a tracking method for an object according to an embodiment of the present invention, as shown in fig. 4, the flowchart includes:
step S402, two adjacent frames of point cloud frames are obtained: a previous frame of point cloud (corresponding to the first target point cloud frame), and a subsequent frame of point cloud (corresponding to the second target point cloud frame).
In step S404, the previous frame point cloud and the next frame point cloud are input to the deep learning network (corresponding to the target network model).
In step S406, the center point of the previous frame (corresponding to the position information), the velocity (corresponding to the target motion velocity), and the center point of the next frame (corresponding to the first actual position information) are determined.
In step S408, the center point of the next frame is predicted to obtain a predicted center point (corresponding to the first predicted position information).
And step S410, performing Euclidean distance calculation between the central point of the next frame and the predicted central point.
In step S412, the threshold value is determined, and if the threshold value is smaller than the threshold value, step S414 is executed, and if the threshold value is larger than the threshold value, step S416 is executed.
Step S414, the same id.
Step S416, new id or missing id.
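Tying the steps above together, a high-level sketch of one tracking iteration might look like the following; all helper names (fuse_frame, predict_next_centers, match_centers) refer to the illustrative snippets earlier in this description and are assumptions, not the patent's API.

```python
# One iteration of steps S402-S416, under the assumptions of the earlier sketches.
def track_step(prev_points, prev_image, next_points, next_image, P, model, dt, alpha):
    prev_frame = fuse_frame(prev_points, prev_image, P)     # S402: previous fused point cloud frame
    next_frame = fuse_frame(next_points, next_image, P)     # S402: next fused point cloud frame
    det_prev, det_next = model(prev_frame, next_frame)      # S404/S406: centers, sizes, velocities
    predicted = predict_next_centers(det_prev["centers"], det_prev["velocities"], dt)  # S408
    matches = match_centers(predicted, det_next["centers"], alpha)                     # S410/S412
    return matches                                          # S414/S416: same id vs. new/lost id
```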
In the foregoing embodiments, as the manufacturing cost of lidar devices, particularly hybrid solid-state lidar, decreases year by year, their application to the field of intelligent transportation is receiving increasing attention. Compared with a traditional monocular camera, a lidar can obtain high-precision point cloud detection data, is not easily affected by illumination intensity, and under occlusion does not suffer from the crowding of targets caused by perspective projection in a monocular camera. Given the low cost, stability and reliability of the monocular camera, together with the rapid development of deep learning technology, combining the advantages of the two has become the mainstream software and hardware solution in current intelligent transportation. Vehicle detection and tracking is a key part of an intelligent traffic management system, and its accuracy plays a decisive role. Most products currently on the market use video images acquired by monocular cameras for vehicle detection and tracking, but an image-based detection algorithm is prone to missed detections under conditions such as strong light, dim light and traffic congestion, and a tracking algorithm that relies heavily on 2D detection boxes is unstable. Adding the 3D point cloud data acquired by a lidar device can remedy this well: the street point cloud video jointly acquired by the lidar device and the monocular camera is labeled, and each person in the video data is given a 3D box and a unique id; then deep learning training is performed on the labeled point cloud video, the 3D features of each voxelized point cloud are extracted under the bird's-eye view (y-axis), and the 3D box (center position, width, height, length and rotation angle) and the motion velocity value of each vehicle are detected through a plurality of head sub-networks; finally, the detected center positions and velocity values of all vehicles in the current point cloud frame are used to perform center-point tracking prediction for the targets in the next point cloud frame, distance matching is performed against all tracks in the existing vehicle track library, and each 3D detection box is finally assigned to the correct track to complete the tracking. In particular, by adding laser point cloud equipment, using 3D point cloud data and adopting deep learning to detect vehicles, the shortcomings of the traditional monocular camera are effectively compensated, the tracking algorithm is simplified into point tracking, and the accuracy of street vehicle detection and tracking is greatly improved.
Through the description of the foregoing embodiments, it is clear to those skilled in the art that the method according to the foregoing embodiments may be implemented by software plus a necessary general hardware platform, and certainly may also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
In this embodiment, a tracking apparatus for an object is further provided. The apparatus is used to implement the foregoing embodiments and preferred implementations, and what has already been described is not repeated here. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the following embodiments is preferably implemented in software, an implementation in hardware or a combination of software and hardware is also possible and contemplated.
Fig. 5 is a block diagram of a structure of an apparatus for tracking an object according to an embodiment of the present invention, as shown in fig. 5, the apparatus including:
a fusion module 52, configured to fuse first data obtained by a first device and first point cloud obtained by a second device to obtain a first target point cloud frame, where the first data is data obtained by the first device shooting a target area at a target time, the first point cloud is point cloud obtained by the second device shooting the target area at the target time, and an angle of the first device shooting the target area is the same as an angle of the second device shooting the target area;
a prediction module 54, configured to predict, based on the first target point cloud frame, first predicted position information of each first object included in the first target point cloud frame in a second target point cloud frame, where the second target point cloud frame is a point cloud frame obtained by fusing second data acquired by the first device and a second point cloud acquired by the second device, the second data is data acquired after the first data and adjacent to the first data, and the second point cloud is a point cloud acquired after the first point cloud and adjacent to the first point cloud;
a determining module 56 for determining first actual position information of each second object included in the second target point cloud frame;
a tracking module 58 for determining an object included in the second target point cloud frame that is the same as the first object based on each of the first predicted location information and each of the first actual location information.
In an exemplary embodiment, the fusion module 52 may fuse the first data acquired by the first device and the first point cloud acquired by the second device to obtain the first target point cloud frame by: mapping each pixel point included in the first data to a coordinate system where the first point cloud is located to obtain a target pixel point; performing the following for each target point comprised in the first point cloud to determine a target vector for each of the target points: determining coordinate values and response intensity values of the target points, determining color parameter values of the target pixel points corresponding to the target points, and determining the coordinate values, the response intensity values and the color parameter values as target vectors of the target points; determining the point cloud formed by the target points of which the target vectors are determined as the first target point cloud frame.
In one exemplary embodiment, prediction module 54 may enable prediction of first predicted location information of each first object included in the first target point cloud frame in a second target point cloud frame based on the first target point cloud frame by: determining size information of the first object based on a target vector of each target point included in the first target point cloud frame; determining position information of the first object based on the size information; determining a target motion velocity of the first object based on the first target point cloud frame; determining the first predicted position information based on the position information and the target motion velocity.
In one exemplary embodiment, the prediction module 54 may enable determining a target motion velocity of the first object based on the first target point cloud frame by: acquiring the second target point cloud frame; inputting the first target point cloud frame and the second target point cloud frame into a target network model, and determining a first predicted target frame of each first object included in the first target point cloud frame and a second predicted target frame of the first object in the second target point cloud frame, wherein the target network model is obtained by machine learning by using multiple sets of training data, and each set of training data in the multiple sets of training data comprises adjacent point cloud frame pairs and calibration frame parameters of objects included in each point cloud frame; determining the target motion velocity of the first object based on the first predicted target box and the second predicted target box.
In one exemplary embodiment, prediction module 54 may determine the target motion velocity of the first object based on the first predicted target block and the second predicted target block by: determining a first coordinate of a first mark point of the first prediction target frame; determining a second coordinate of a second mark point of the second prediction target frame; determining a movement distance of the first object based on the first coordinate and the second coordinate; determining a sampling time interval of the first device or the second device; and determining the ratio of the moving distance to the sampling time interval as the target motion speed.
In one exemplary embodiment, the tracking module 58 may enable determining the same object included in the second target point cloud frame as the first object based on each of the first predicted location information and each of the first actual location information by: determining a target distance between each of the first predicted location information and each of the first actual location information; determining first sub-target distances, included in the target distances, that are less than a preset threshold; determining the first sub-actual position information corresponding to the minimum distance among the first sub-target distances; and determining the second object corresponding to the first sub-actual position information as the same object as the first object.
In one exemplary embodiment, the apparatus may be further configured to, after determining the target distance between each of the first predicted location information and each of the first actual location information, include: determining whether a predetermined number of third target point cloud frames comprise the same object as the first object or not when the target distances are all larger than the predetermined threshold, wherein the third target point cloud frames are generated after the second target point cloud frame, and a first third target point cloud frame included in the predetermined number of third target point cloud frames is adjacent to the second target point cloud frame; in the case that the same object as the first object exists in the predetermined number of the third target point cloud frames, allocating identification information to the first object; deleting the first object in the case that the same object as the first object does not exist in the predetermined number of the third target point cloud frames.
It should be noted that, the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are respectively located in different processors in any combination.
Embodiments of the present invention also provide a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the method as set forth in any of the above.
In an exemplary embodiment, the computer-readable storage medium may include, but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
In an exemplary embodiment, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
For specific examples in this embodiment, reference may be made to the examples described in the foregoing embodiments and exemplary implementations, and details of this embodiment are not repeated herein.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented in a general purpose computing device, they may be centralized in a single computing device or distributed across a network of multiple computing devices, and they may be implemented in program code that is executable by a computing device, such that they may be stored in a memory device and executed by a computing device, and in some cases, the steps shown or described may be executed in an order different from that shown or described herein, or they may be separately fabricated into individual integrated circuit modules, or multiple modules or steps therein may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method for tracking an object, comprising:
fusing first data acquired by first equipment and first point cloud acquired by second equipment to obtain a first target point cloud frame, wherein the first data is acquired by shooting a target area by the first equipment at a target moment, the first point cloud is acquired by shooting the target area by the second equipment at the target moment, and the angle of shooting the target area by the first equipment is the same as the angle of shooting the target area by the second equipment;
predicting, based on the first target point cloud frame, first predicted position information, in a second target point cloud frame, of each first object included in the first target point cloud frame, wherein the second target point cloud frame is a point cloud frame obtained by fusing second data acquired by the first equipment and second point cloud acquired by the second equipment, the second data is data acquired after the first data and adjacent to the first data, and the second point cloud is a point cloud acquired after the first point cloud and adjacent to the first point cloud;
determining first actual position information of each second object included in the second target point cloud frame;
determining, based on each piece of the first predicted position information and each piece of the first actual position information, an object included in the second target point cloud frame that is the same as the first object.
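For orientation only, the following non-limiting sketch outlines the overall flow recited in claim 1 above. It is not part of the original disclosure; the function names and the callable-injection structure are editorial assumptions.

```python
def track_objects(frames, fuse, predict, detect, match):
    """Sketch of the claim 1 flow with the four steps injected as callables.

    frames  : iterable of (image_data, point_cloud) pairs captured at the same
              moments, and from the same angle, by the first and second devices
    fuse    : builds a fused target point cloud frame from one pair
    predict : predicts where each tracked first object will be in the next frame
    detect  : returns the actual positions of the second objects in a frame
    match   : associates predictions with detections and updates the tracks
    """
    fused = [fuse(data, cloud) for data, cloud in frames]
    tracks = {}
    for frame_t, frame_next in zip(fused, fused[1:]):
        predicted = predict(tracks, frame_t, frame_next)   # first predicted positions
        actual = detect(frame_next)                        # first actual positions
        tracks = match(predicted, actual, tracks)          # same-object determination
    return tracks
```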
2. The method of claim 1, wherein fusing the first data acquired by the first device and the first point cloud acquired by the second device to obtain the first target point cloud frame comprises:
mapping each pixel point included in the first data to a coordinate system where the first point cloud is located to obtain a target pixel point;
performing the following for each target point included in the first point cloud to determine a target vector for each of the target points: determining a coordinate value and a response intensity value of the target point, determining a color parameter value of the target pixel point corresponding to the target point, and determining the coordinate value, the response intensity value and the color parameter value as a target vector of the target point;
determining, as the first target point cloud frame, the point cloud formed by the target points for which the target vectors have been determined.
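A minimal sketch of the fusion of claim 2 above follows; it is not part of the original disclosure. The claim maps image pixels into the coordinate system of the point cloud, whereas the sketch performs the equivalent association by projecting points into the image plane using an assumed 3x4 projection matrix; all identifiers are illustrative.

```python
import numpy as np

def fuse_point_cloud_with_image(points_xyz, intensity, image, projection):
    """Build per-point target vectors [x, y, z, intensity, r, g, b].

    points_xyz : (N, 3) target point coordinates in the point cloud frame
    intensity  : (N,)   per-point response intensity values
    image      : (H, W, 3) RGB image shot from the same angle at the same moment
    projection : (3, 4) assumed calibration matrix mapping homogeneous point
                 coordinates to pixel coordinates
    """
    points_xyz = np.asarray(points_xyz, dtype=float)
    intensity = np.asarray(intensity, dtype=float)
    projection = np.asarray(projection, dtype=float)
    n = points_xyz.shape[0]

    homogeneous = np.hstack([points_xyz, np.ones((n, 1))])      # (N, 4)
    uvw = homogeneous @ projection.T                             # (N, 3)
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)                  # pixel indices

    h, w, _ = image.shape
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)

    colors = np.zeros((n, 3))
    colors[valid] = image[uv[valid, 1], uv[valid, 0]]            # color parameter values

    target_vectors = np.hstack([points_xyz, intensity[:, None], colors])
    return target_vectors[valid]                                 # fused target point cloud frame
```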
3. The method of claim 1, wherein predicting, based on the first target point cloud frame, first predicted position information of each first object included in the first target point cloud frame in a second target point cloud frame comprises:
determining size information of the first object based on a target vector of each target point included in the first target point cloud frame;
determining position information of the first object based on the size information;
determining a target motion velocity of the first object based on the first target point cloud frame;
determining the first predicted position information based on the position information and the target motion velocity.
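The prediction of claim 3 above can be pictured as advancing an object's current position by its target motion velocity over one sampling interval. The sketch below is illustrative only and not part of the original disclosure; taking the centroid of an object's points as its position is an editorial assumption.

```python
import numpy as np

def object_position(object_points):
    """Position information of a first object, here taken as the centroid of
    the (x, y, z) part of its target vectors (an assumed reading)."""
    return np.asarray(object_points, dtype=float)[:, :3].mean(axis=0)

def predict_position(position_xyz, velocity_xyz, dt):
    """First predicted position: the current position advanced by the target
    motion velocity over one sampling interval dt (in seconds)."""
    return np.asarray(position_xyz, dtype=float) + np.asarray(velocity_xyz, dtype=float) * dt
```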
4. The method of claim 3, wherein determining the target motion velocity of the first object based on the first target point cloud frame comprises:
acquiring the second target point cloud frame;
inputting the first target point cloud frame and the second target point cloud frame into a target network model, and determining a first predicted target box of each first object included in the first target point cloud frame and a second predicted target box of the first object in the second target point cloud frame, wherein the target network model is obtained through machine learning using multiple groups of training data, and each group of the multiple groups of training data comprises a pair of adjacent point cloud frames and calibration frame parameters of the objects included in each point cloud frame;
determining the target motion velocity of the first object based on the first predicted target box and the second predicted target box.
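The following dataclass, not part of the original disclosure, merely illustrates the shape of one group of training data for the target network model of claim 4 above: a pair of adjacent fused point cloud frames together with the calibration frame parameters of the objects in each frame. Field names and types are assumptions.

```python
from dataclasses import dataclass
from typing import List, Tuple
import numpy as np

@dataclass
class TrainingSample:
    """One group of training data for the target network model (illustrative)."""
    adjacent_frames: Tuple[np.ndarray, np.ndarray]  # pair of adjacent fused point cloud frames
    boxes_first: List[np.ndarray]                   # calibration frame parameters, earlier frame
    boxes_second: List[np.ndarray]                  # calibration frame parameters, later frame
```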
5. The method of claim 4, wherein determining the target motion velocity of the first object based on the first predicted target box and the second predicted target box comprises:
determining a first coordinate of a first mark point of the first predicted target box;
determining a second coordinate of a second mark point of the second predicted target box;
determining a moving distance of the first object based on the first coordinate and the second coordinate;
determining a sampling time interval of the first device or the second device;
and determining the ratio of the moving distance to the sampling time interval as the target motion velocity.
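As a sketch of claim 5 above (not part of the original disclosure), the target motion speed is the ratio of the distance moved between the two mark points to the sampling time interval; taking the centers of the predicted target boxes as the mark points is one possible, assumed choice.

```python
import numpy as np

def target_motion_speed(first_mark_point, second_mark_point, sampling_interval):
    """Ratio of the moving distance between the two mark points to the
    sampling time interval of the first or second device (in seconds)."""
    first = np.asarray(first_mark_point, dtype=float)
    second = np.asarray(second_mark_point, dtype=float)
    moving_distance = np.linalg.norm(second - first)
    return moving_distance / sampling_interval
```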
6. The method of claim 1, wherein determining, based on each piece of the first predicted position information and each piece of the first actual position information, the object included in the second target point cloud frame that is the same as the first object comprises:
determining a target distance between each piece of the first predicted position information and each piece of the first actual position information;
determining first sub-target distances that are included in the target distances and are less than a predetermined threshold;
determining first sub-actual position information corresponding to a minimum sub-distance among the first sub-target distances;
and determining the second object corresponding to the first sub-actual position information as the same object as the first object.
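A minimal nearest-neighbour sketch of the matching in claim 6 above follows; it is not part of the original disclosure, and the array-based interface is an assumption.

```python
import numpy as np

def match_prediction(predicted_position, actual_positions, threshold):
    """Return the index of the second object matching a first object's
    predicted position, or None if every distance exceeds the threshold
    (the situation addressed by claim 7)."""
    actual = np.asarray(actual_positions, dtype=float)
    predicted = np.asarray(predicted_position, dtype=float)
    distances = np.linalg.norm(actual - predicted, axis=1)       # target distances
    candidates = np.flatnonzero(distances < threshold)           # first sub-target distances
    if candidates.size == 0:
        return None
    return int(candidates[np.argmin(distances[candidates])])     # minimum sub-distance
```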
7. The method of claim 6, wherein after determining the target distance between each piece of the first predicted position information and each piece of the first actual position information, the method further comprises:
in a case that all of the target distances are larger than the predetermined threshold, determining whether a predetermined number of third target point cloud frames comprise the same object as the first object, wherein the third target point cloud frames are generated after the second target point cloud frame, and the first one of the predetermined number of third target point cloud frames is adjacent to the second target point cloud frame;
in the case that the same object as the first object exists in the predetermined number of the third target point cloud frames, allocating identification information to the first object;
deleting the first object in the absence of the same object as the first object in the predetermined number of the third target point cloud frames.
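The track-management rule of claim 7 above can be sketched as follows (not part of the original disclosure); the dict-based track record and the reading that a single reappearance within the predetermined number of later frames suffices are editorial assumptions.

```python
def confirm_or_delete(track, found_in_later_frames, predetermined_number, new_id):
    """Keep an unmatched first object only if it reappears in the third
    target point cloud frames; otherwise delete it.

    track                  : dict describing the unmatched first object
    found_in_later_frames  : booleans, one per third target point cloud frame,
                             True where the same object was found
    predetermined_number   : how many later frames are checked
    new_id                 : identification information to allocate on success
    """
    checked = list(found_in_later_frames)[:predetermined_number]
    if any(checked):
        track["id"] = new_id       # allocate identification information
        return track
    return None                    # delete the first object
```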
8. An apparatus for tracking an object, comprising:
a fusion module, configured to fuse first data acquired by a first device and a first point cloud acquired by a second device to obtain a first target point cloud frame, wherein the first data is data acquired by the first device by shooting a target area at a target moment, the first point cloud is a point cloud acquired by the second device by shooting the target area at the target moment, and the angle at which the first device shoots the target area is the same as the angle at which the second device shoots the target area;
a prediction module, configured to predict, based on the first target point cloud frame, first predicted position information of each first object included in the first target point cloud frame in a second target point cloud frame, where the second target point cloud frame is a point cloud frame obtained by fusing second data acquired by the first device and second point cloud acquired by the second device, the second data is data acquired after the first data and adjacent to the first data, and the second point cloud is a point cloud acquired after the first point cloud and adjacent to the first point cloud;
a determining module, configured to determine first actual position information of each second object included in the second target point cloud frame;
a tracking module, configured to determine, based on each piece of the first predicted position information and each piece of the first actual position information, an object included in the second target point cloud frame that is the same as the first object.
9. A computer-readable storage medium having a computer program stored therein, wherein the computer program, when executed by a processor, implements the steps of the method as claimed in any one of claims 1 to 7.
10. An electronic device comprising a memory and a processor, wherein the memory stores a computer program, and the processor is arranged to execute the computer program so as to perform the method of any one of claims 1 to 7.
CN202210694635.0A 2022-06-20 2022-06-20 Object tracking method and device, storage medium and electronic device Pending CN114782496A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210694635.0A CN114782496A (en) 2022-06-20 2022-06-20 Object tracking method and device, storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210694635.0A CN114782496A (en) 2022-06-20 2022-06-20 Object tracking method and device, storage medium and electronic device

Publications (1)

Publication Number Publication Date
CN114782496A (en) 2022-07-22

Family

ID=82421221

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210694635.0A Pending CN114782496A (en) 2022-06-20 2022-06-20 Object tracking method and device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN114782496A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3064899A1 (en) * 2015-03-06 2016-09-07 Airbus DS GmbH Tracking in an indoor environment
CN108765455A (en) * 2018-05-24 2018-11-06 中国科学院光电技术研究所 A kind of target tenacious tracking method based on TLD algorithms
CN110246159A (en) * 2019-06-14 2019-09-17 湖南大学 The 3D target motion analysis method of view-based access control model and radar information fusion
CN112154444A (en) * 2019-10-17 2020-12-29 深圳市大疆创新科技有限公司 Target detection and tracking method, system, movable platform, camera and medium
CN111209840A (en) * 2019-12-31 2020-05-29 浙江大学 3D target detection method based on multi-sensor data fusion
CN111339880A (en) * 2020-02-19 2020-06-26 北京市商汤科技开发有限公司 Target detection method and device, electronic equipment and storage medium
CN112200129A (en) * 2020-10-28 2021-01-08 中国人民解放军陆军航空兵学院陆军航空兵研究所 Three-dimensional target detection method and device based on deep learning and terminal equipment
CN113887376A (en) * 2021-09-27 2022-01-04 中汽创智科技有限公司 Target detection method, device, medium and equipment
CN114611635A (en) * 2022-05-11 2022-06-10 北京闪马智建科技有限公司 Object identification method and device, storage medium and electronic device
CN114627112A (en) * 2022-05-12 2022-06-14 宁波博登智能科技有限公司 Semi-supervised three-dimensional target labeling system and method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117152197A (en) * 2023-10-30 2023-12-01 成都睿芯行科技有限公司 Method and system for determining tracking object and method and system for tracking
CN117152197B (en) * 2023-10-30 2024-01-23 成都睿芯行科技有限公司 Method and system for determining tracking object and method and system for tracking

Similar Documents

Publication Publication Date Title
JP2023523243A (en) Obstacle detection method and apparatus, computer device, and computer program
CN112070807B (en) Multi-target tracking method and electronic device
Berrio et al. Camera-LIDAR integration: Probabilistic sensor fusion for semantic mapping
CN112233177B (en) Unmanned aerial vehicle pose estimation method and system
CN111753757B (en) Image recognition processing method and device
CN115376109B (en) Obstacle detection method, obstacle detection device, and storage medium
Munoz-Banon et al. Targetless camera-lidar calibration in unstructured environments
CN111091023B (en) Vehicle detection method and device and electronic equipment
CN112562005A (en) Space calibration method and system
CN114969221A (en) Method for updating map and related equipment
CN111899279A (en) Method and device for detecting motion speed of target object
CN114782496A (en) Object tracking method and device, storage medium and electronic device
CN114299230A (en) Data generation method and device, electronic equipment and storage medium
CN114611635B (en) Object identification method and device, storage medium and electronic device
CN116977806A (en) Airport target detection method and system based on millimeter wave radar, laser radar and high-definition array camera
CN115471574B (en) External parameter determination method and device, storage medium and electronic device
CN111899277A (en) Moving object detection method and device, storage medium and electronic device
CN112396630A (en) Method and device for determining state of target object, storage medium and electronic device
CN111754388A (en) Picture construction method and vehicle-mounted terminal
CN113902047B (en) Image element matching method, device, equipment and storage medium
WO2023283929A1 (en) Method and apparatus for calibrating external parameters of binocular camera
EP4078087B1 (en) Method and mobile entity for detecting feature points in an image
CN112598736A (en) Map construction based visual positioning method and device
CN112433193A (en) Multi-sensor-based mold position positioning method and system
CN111372051A (en) Multi-camera linkage blind area detection method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220722

RJ01 Rejection of invention patent application after publication