CN115546263A - Cross-mirror target tracking method, device, equipment and medium applied to vehicle


Info

Publication number: CN115546263A
Application number: CN202211287331.9A
Authority: CN (China)
Prior art keywords: target, detection, cross, tracking, mirror
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 孙炼杰, 金涛, 倪守诚, 杨秋红, 朱月萍
Current Assignee: Chongqing Changan Automobile Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Chongqing Changan Automobile Co Ltd
Application filed by Chongqing Changan Automobile Co Ltd
Priority application: CN202211287331.9A


Classifications

    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/292 Image analysis › Analysis of motion › Multi-camera tracking
    • G06T7/277 Image analysis › Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G06T2207/10016 Image acquisition modality › Video; Image sequence
    • G06T2207/20081 Special algorithmic details › Training; Learning
    • G06T2207/30252 Subject of image › Vehicle exterior; Vicinity of vehicle


Abstract

The application relates to a cross-mirror target tracking method, device, equipment and medium, wherein the method comprises the following steps: acquiring the video frames captured by each camera, and performing target detection on the video frames to determine detection targets; tracking the detection targets in the video frames captured by the cameras arranged at intervals with the same tracker, so as to generate a plurality of tracking units; and inputting the tracking results corresponding to the two trackers into a pre-trained target recognition model for recognition, so as to determine the tracking units belonging to the same detection target, wherein the target recognition model is obtained by retraining with training data containing cross-mirror targets. In this way, the accuracy with which different cameras identify the same target can be improved, thereby ensuring the environment perception effect.

Description

Cross-mirror target tracking method, device, equipment and medium applied to vehicle
Technical Field
The present disclosure relates to the field of automotive environment sensing technologies, and in particular, to a method, an apparatus, a device, and a medium for tracking a target across mirrors for a vehicle.
Background
With the development of technology, target tracking is increasingly applied to human-computer interaction, automatic monitoring, video retrieval, traffic detection and vehicle navigation. The task of target tracking is to determine the geometric state of a target in a video stream, including its position and shape size. In current technical solutions, cameras may be disposed on a vehicle to acquire video stream information around the vehicle and to identify and track surrounding targets (such as pedestrians or vehicles), so as to realize environment perception. However, when a plurality of cameras are arranged on the vehicle for sensing the environment, two adjacent cameras often capture the same target, and the same target captured by different cameras is easily recognized as different targets, which affects the accuracy of environment perception. Therefore, how to improve the accuracy with which different cameras identify the same target, so as to ensure the environment perception effect, has become a technical problem to be solved urgently.
Disclosure of Invention
The application provides a cross-mirror target tracking method, device, equipment and medium applied to a vehicle, which solve the problem in the related art that the same target captured by different cameras is easily identified as different targets, affecting the accuracy of environment perception. The method can improve the accuracy with which different cameras identify the same target, thereby ensuring the environment perception effect.
The embodiment of the first aspect of the application provides a cross-mirror target tracking method applied to a vehicle, wherein a plurality of cameras are arranged along the circumferential direction of the vehicle, the fields of view of adjacent cameras partially overlap, and the fields of view of the cameras arranged at intervals are mutually independent. The method comprises the following steps: acquiring the video frames captured by each camera, and performing target detection on the video frames to determine a detection target; tracking the detection target in the video frames captured by the cameras arranged at intervals with the same tracker, so as to generate a plurality of tracking units; and inputting the tracking results corresponding to the two trackers into a pre-trained target recognition model for recognition, so as to determine the tracking units belonging to the same detection target, wherein the target recognition model is obtained by retraining with training data containing a cross-mirror target.
According to the technical means, the plurality of cameras are arranged along the circumferential direction of the vehicle in the embodiment of the application, the fields of view of the adjacent cameras are partially overlapped, the fields of view of the cameras arranged at intervals are mutually independent, the video frames shot by the cameras are obtained, target detection is carried out on the video frames to determine the detection targets, the detection targets in the videos shot by the cameras arranged at intervals are subjected to target tracking by the same tracker to generate the plurality of tracking units, and therefore the accuracy of subsequent identification can be improved. And inputting the tracking results corresponding to the two trackers into a pre-trained target recognition model for recognition so as to determine the tracking units belonging to the same detection target, wherein the target recognition model is obtained by retraining training data with a cross-mirror target, so that the accuracy of recognition of the same detection target can be improved, and the environment perception effect is further ensured.
Optionally, after determining tracking units belonging to the same target, the method further comprises: and distributing the same identification information to the tracking units belonging to the same detection target for association.
According to the technical means, the embodiment of the application can realize the association of the tracking units belonging to the same detection target.
Optionally, retraining the target recognition model includes: acquiring training data with a cross-mirror target and a pre-trained basic recognition model; and retraining the basic recognition model with the training data with the cross-mirror target, so that the basic recognition model can accurately recognize the same detection target in video frames captured by different cameras.
According to the technical means, the accuracy of the target recognition model in recognizing the cross-mirror target can be improved.
Optionally, acquiring training data with a cross-mirror target comprises: detecting whether a detection target enters or leaves the overlapping field of view between a camera and an adjacent camera; and if so, storing the detection data corresponding to the current camera as training data with a cross-mirror target.
According to the technical means, the training data with the cross-mirror target can be automatically acquired without manual marking.
Optionally, storing the detection data corresponding to the current camera as training data with a cross-mirror target includes: acquiring the detection data corresponding to the current camera; and storing, as training data with a cross-mirror target, the detection data in which the ratio of the size of the actual detection frame corresponding to the detection target to the size of the expected detection frame reaches a preset threshold.
According to the technical means, the effectiveness of the training data can be ensured by screening the training data with the cross-mirror target.
The embodiment of the second aspect of the present application provides a cross-mirror target tracking device applied to a vehicle, wherein a plurality of cameras are arranged along the circumferential direction of the vehicle, the fields of view of adjacent cameras partially overlap, and the fields of view of the cameras arranged at intervals are mutually independent. The device comprises: a target detection module, configured to acquire the video frames captured by each camera and perform target detection on the video frames so as to determine a detection target; a target tracking module, configured to track the detection target in the video frames captured by the cameras arranged at intervals with the same tracker so as to generate a plurality of tracking units; and a target matching module, configured to input the tracking results corresponding to the two trackers into a pre-trained target recognition model for recognition so as to determine the tracking units belonging to the same detection target, wherein the target recognition model is obtained by retraining with training data containing a cross-mirror target.
Optionally, after determining tracking units belonging to the same target, the target matching module is further configured to: and distributing the same identification information to the tracking units belonging to the same detection target for association.
Optionally, the system further comprises a model training module, wherein the model training module is configured to: acquire training data with a cross-mirror target and a pre-trained basic recognition model; and retrain the basic recognition model with the training data with the cross-mirror target, so that the basic recognition model can accurately recognize the same detection target in video frames captured by different cameras.
Optionally, the model training module is configured to: detecting whether a detection target enters or leaves an overlapped view field between the camera and an adjacent camera; if so, storing the detection data corresponding to the current camera as training data with a cross-mirror target.
Optionally, the model training module is configured to: acquiring detection data corresponding to a current camera; and storing the detection data, of which the proportion of the size of the actual detection frame corresponding to the detection target to the size of the expected detection frame reaches a preset threshold value, as training data with the cross-mirror target.
An embodiment of a third aspect of the present application provides an electronic device, including: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor executing the program to implement the cross-mirror target tracking method applied to a vehicle as described in the above embodiments.
A fourth aspect of the present application provides a computer-readable storage medium storing computer instructions for causing a computer to execute the cross-mirror target tracking method applied to a vehicle as described in the above embodiments.
In the embodiment of the application, a plurality of cameras may be arranged along the circumferential direction of the vehicle, with partial overlap between the fields of view of adjacent cameras and mutual independence between the fields of view of the cameras arranged at intervals. The video frames captured by each camera are obtained, and target detection is performed on them to determine the detection targets; the detection targets in the video frames captured by the cameras arranged at intervals are then tracked by the same tracker to generate a plurality of tracking units, so that the accuracy of subsequent identification can be improved. The tracking results corresponding to the two trackers are input into a pre-trained target recognition model for recognition so as to determine the tracking units belonging to the same detection target, wherein the target recognition model is obtained by retraining with training data containing a cross-mirror target, so that the accuracy of recognizing the same detection target can be improved and the environment perception effect further ensured.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flowchart of a cross-mirror target tracking method applied to a vehicle according to an embodiment of the present application;
FIG. 2 is a schematic view of a camera mount and corresponding field of view of a vehicle according to one embodiment of the present application;
FIG. 3 is a schematic view of a camera view with objects entering overlapping fields of view according to one embodiment of the present application;
FIG. 4 is a schematic diagram of tracker allocation according to one embodiment of the present application;
FIG. 5 is a schematic diagram of dynamic target tracking according to one embodiment of the present application;
FIG. 6 is a diagram illustrating invocation of a primary function interface according to one embodiment of the present application;
FIG. 7 is a schematic view of a video frame with a cross-mirror target according to an embodiment of the present application;
FIG. 8 is a schematic view of a cross-mirror target tracking device applied to a vehicle according to an embodiment of the present application;
fig. 9 is an exemplary diagram of an electronic device according to an embodiment of the application.
Detailed Description
Reference will now be made in detail to the embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the accompanying drawings are illustrative and intended to explain the present application and should not be construed as limiting the present application.
A cross-mirror target tracking method, device, apparatus, and medium applied to a vehicle according to an embodiment of the present application will be described below with reference to the accompanying drawings. In order to solve the problem that the accuracy of environment perception is affected due to the fact that the same target shot by different cameras is easily recognized as different targets in the background technology, the application provides a cross-mirror target tracking method applied to a vehicle. And inputting the tracking results corresponding to the two trackers into a pre-trained target recognition model for recognition so as to determine the tracking units belonging to the same detection target, wherein the target recognition model is obtained by retraining training data with a cross-mirror target, so that the accuracy of recognition of the same detection target can be improved, and the environment perception effect is further ensured.
Specifically, fig. 1 is a schematic flowchart of a cross-mirror target tracking method applied to a vehicle according to an embodiment of the present disclosure. In the embodiment, a plurality of cameras are arranged along the circumferential direction of the vehicle, the fields of view of the adjacent cameras are partially overlapped, and the fields of view of the cameras arranged at intervals are mutually independent.
The field of view may be the horizontal shooting range of a camera (for example, a horizontal field angle FOV of 90°). By adjustment, the fields of view of adjacently disposed cameras may partially overlap; that is, each camera has both an independent field of view and an overlapping field of view shared with its adjacent cameras, while the fields of view of cameras disposed at intervals are independent of each other and do not overlap.
Taking fig. 2 as an example, six cameras, namely camera1-front, camera2-front-right, camera3-rear-right, camera4-rear, camera5-rear-left and camera6-front-left, are arranged on the vehicle along the circumferential direction of the vehicle. By adjustment, each camera has an independent field of view, and there is also an overlapping field of view with adjacent cameras, e.g., between camera1-front and camera6-front-left, between camera6-front-left and camera5-rear-left, and so on. Moreover, the fields of view of the cameras arranged at intervals are independent of each other, for example, between camera1-front and camera5-rear-left, between camera6-front-left and camera4-rear, and the like, and the fields of view do not overlap.
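The ring geometry above can be sketched numerically. The following is a minimal sketch under illustrative assumptions not stated in the patent (a 90° horizontal FOV and 60° spacing between the six cameras); with those numbers, adjacent cameras overlap by 30° while cameras arranged at intervals do not overlap at all.

```python
# Illustrative sketch of the six-camera ring: with an assumed 90-degree
# horizontal FOV and cameras every 60 degrees, adjacent cameras share a
# 30-degree overlapping field of view, while alternate (interval) cameras
# have fully independent fields of view.

def overlap_deg(center_a, center_b, fov_deg=90.0):
    """Angular overlap (degrees) between two cameras on the ring."""
    diff = abs(center_a - center_b) % 360.0
    diff = min(diff, 360.0 - diff)        # shortest angular separation
    return max(0.0, fov_deg - diff)

# Headings for camera1-front ... camera6-front-left at 60-degree spacing.
headings = {"camera1-front": 0, "camera2-front-right": 60,
            "camera3-rear-right": 120, "camera4-rear": 180,
            "camera5-rear-left": 240, "camera6-front-left": 300}

adjacent = overlap_deg(headings["camera1-front"], headings["camera2-front-right"])
interval = overlap_deg(headings["camera1-front"], headings["camera3-rear-right"])
# adjacent -> 30.0 (overlapping field of view), interval -> 0.0 (independent)
```

In practice the overlap would be established by camera calibration rather than idealized headings; the sketch only shows why interval cameras can safely share one tracker.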
With continued reference to fig. 1, the method for tracking a vehicle across a mirror target at least includes steps S110 to S130, which are described in detail as follows:
In step S110, a video frame captured by each camera is obtained, and target detection is performed on the video frame to determine a detection target.
In this embodiment, each camera may obtain video stream information within its own shooting range, and perform framing processing on the video stream information (for example, dividing into 30 frames per second, etc.) to obtain a corresponding set of video frames. According to the video frames acquired by the cameras, target detection can be performed on the video frames to identify detection targets contained in the video frames, wherein the detection targets can comprise vehicles, pedestrians, cyclists and the like.
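A minimal sketch of this per-camera detection step follows; `detector` is a placeholder for the actual detection model (the embodiment later mentions a YOLOv5-based model), and the class names and data structures are assumptions for illustration.

```python
from typing import Callable, Dict, List, Tuple

Box = Tuple[int, int, int, int]      # (x1, y1, x2, y2) in pixels
Detection = Tuple[str, Box]          # e.g. ("pedestrian", (10, 20, 50, 80))

# The embodiment tracks vehicles, pedestrians and cyclists.
TARGET_CLASSES = {"vehicle", "pedestrian", "cyclist"}

def detect_targets(frame, detector: Callable[[object], List[Detection]]) -> List[Detection]:
    """Run the detector on one video frame and keep only the target classes."""
    return [(cls, box) for cls, box in detector(frame) if cls in TARGET_CLASSES]

def detect_all_cameras(frames: Dict[str, object], detector) -> Dict[str, List[Detection]]:
    """Apply target detection to the latest frame from every camera."""
    return {cam: detect_targets(frame, detector) for cam, frame in frames.items()}
```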
It should be understood that when an object enters the overlapped view field, both cameras corresponding to the overlapped view field can capture the object, so that the detected object is identified when the video frames acquired by the respective cameras are captured. For example, as shown in FIG. 3, when an object enters the overlapping fields of view between the cameras camera3-rear-right and camera4-rear, both camera3-rear-right and camera4-rear will capture the object.
In step S120, the same tracker is used for target tracking on the detection target in the video frame captured by the cameras set at intervals to generate a plurality of tracking units.
In this embodiment, the same tracker may be used for cameras arranged at intervals, and taking fig. 2 as an example, one set of trackers is used for tracking targets in the target detection results of camera1, camera3, and camera5, and another set of trackers is used for tracking targets in the remaining other cameras, namely camera2, camera4, and camera6 (as shown in fig. 4, camera1, camera3, and camera5 use tracker a for tracking targets, and camera2, camera4, and camera6 use tracker B for tracking). In other words, three video frames shot by cameras arranged at intervals form a large image, and target tracking is realized by using one tracker. Thus, each tracker can perform target tracking on each detection target captured by the camera and generate a corresponding tracking unit.
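The alternating assignment can be sketched as follows; the tracker labels A and B follow fig. 4, while the mapping function itself is an illustrative assumption.

```python
# Sketch of the alternating tracker assignment described above: cameras at
# even ring positions share tracker "A", cameras at odd positions share
# tracker "B", so a single tracker never receives two views of the same
# overlapping field of view.

CAMERAS = ["camera1-front", "camera2-front-right", "camera3-rear-right",
           "camera4-rear", "camera5-rear-left", "camera6-front-left"]

def assign_trackers(cameras):
    """Map each camera name to tracker 'A' or 'B' by its position in the ring."""
    return {cam: ("A" if i % 2 == 0 else "B") for i, cam in enumerate(cameras)}

assignment = assign_trackers(CAMERAS)
# camera1/3/5 -> "A" and camera2/4/6 -> "B", matching fig. 4.
```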
In one example, the tracking algorithm of the tracker may adopt the Deep SORT algorithm, which is commonly used in multi-target tracking. It mainly comprises predicting tracks with a Kalman filter, matching the predicted tracks to the detections in the current frame using the Hungarian algorithm (including cascade matching and IOU matching), and then updating the Kalman filter.
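The association stage can be illustrated with a simplified IOU-based assignment. The sketch below brute-forces the optimal pairing over permutations instead of using the Hungarian algorithm proper, and omits the Kalman prediction and appearance features, so it is a stand-in for the matching step, not the Deep SORT implementation.

```python
from itertools import permutations

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / float(union) if inter else 0.0

def match(tracks, detections, min_iou=0.3):
    """Pair predicted track boxes with current detections, maximising total IOU."""
    if not tracks or not detections:
        return []
    n = min(len(tracks), len(detections))
    best, best_score = [], -1.0
    for perm in permutations(range(len(detections)), n):
        pairs = [(t, d) for t, d in zip(range(len(tracks)), perm)
                 if iou(tracks[t], detections[d]) >= min_iou]
        score = sum(iou(tracks[t], detections[d]) for t, d in pairs)
        if score > best_score:
            best, best_score = pairs, score
    return best    # list of (track_index, detection_index)
```

The brute-force search is only tractable for small numbers of targets; a real tracker would use an optimal assignment solver for the same cost matrix.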
Because target tracking adopts a tracking-by-detection mode, if all six cameras shared a single tracker, matching errors would easily occur in the Hungarian matching (performed after Kalman filtering) whenever a target appears in an overlapping field of view, and the IDs of the same detection target could not be unified; adopting two trackers therefore avoids such Hungarian matching errors.
In step S130, the tracking results corresponding to the two trackers are input to a pre-trained target recognition model for recognition, so as to determine the tracking units belonging to the same detection target, where the target recognition model is obtained by retraining training data with a cross-mirror target.
The target recognition model may be a recognition model for recognizing the tracking results of different trackers so as to determine the tracking units belonging to the same detection target. Notably, the target recognition model can be retrained with training data containing cross-mirror targets to improve its accuracy in detecting them. A cross-mirror target is a target entering or leaving an overlapping field of view: as shown in fig. 3, a target in the overlapping field of view between camera3-rear-right and camera4-rear is a cross-mirror target; likewise, as shown in fig. 5, a target leaving the overlapping field of view and entering an independent field of view is also a cross-mirror target.
In this embodiment, the tracking results corresponding to the two trackers may be input to a pre-trained target recognition model for recognition, and the tracking results may include, but are not limited to, a video frame sequence of six cameras, corresponding target detection results, calibration parameters of the cameras, and the like. The target recognition model can correspondingly output human body tracking results, vehicle tracking results and corresponding attributes. It should be noted that, after training, the target recognition model may associate tracking units belonging to the same detection target, so as to avoid the occurrence of repeated recognition.
In an embodiment, after determining tracking units belonging to the same target, the method further comprises:
and distributing the same identification information to the tracking units belonging to the same detection target for association.
In this embodiment, tracking units belonging to the same target determined by the target recognition model may be assigned the same identification information for association. Such as the pedestrian object shown in fig. 3, may be assigned the same ID number to determine the association between different tracking cells. Therefore, the tracking units belonging to the same detection target are associated through the same identification information, the same detection target can be conveniently identified, and the environment perception effect is further ensured.
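A minimal sketch of this association is shown below; the dictionary structure of a tracking unit and the counter-based ID source are illustrative assumptions, not the patent's data model.

```python
import itertools

# Monotonic ID source; each group of tracking units that the recognition
# model reports as one physical target receives one shared ID from it.
_next_id = itertools.count(1)

def associate(units_of_same_target):
    """Assign one shared ID to all tracking units of the same detection target."""
    shared_id = next(_next_id)
    for unit in units_of_same_target:
        unit["id"] = shared_id
    return shared_id

# Example: the pedestrian of fig. 3, seen by both tracker A and tracker B.
unit_a = {"tracker": "A", "camera": "camera3-rear-right"}
unit_b = {"tracker": "B", "camera": "camera4-rear"}
associate([unit_a, unit_b])      # unit_a["id"] == unit_b["id"]
```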
Based on the foregoing embodiments, in one embodiment of the present application, retraining the target recognition model includes:
acquiring training data with a cross-mirror target and a pre-trained basic recognition model;
and retraining the basic recognition model with the training data with the cross-mirror target, so that the basic recognition model can accurately recognize the same detection target in video frames captured by different cameras, thereby obtaining the target recognition model.
In this embodiment, the basic recognition model may be a recognition model trained on common training data. In one example, the basic recognition model may be constructed and trained with the YOLOv5 algorithm, using pre-labeled training samples supplemented by the KITTI dataset for basic training. By counting and setting dynamic target sizes, the basic recognition model focuses only on pedestrian, cyclist and vehicle targets.
After the basic recognition model is trained, training data with the mirror-crossing target can be obtained, and the trained basic recognition model is retrained to improve the accuracy of the basic recognition model in recognizing and matching the mirror-crossing target, namely after retraining, the target recognition model can accurately recognize the same detection target in video frames shot by different cameras.
In an embodiment of the present application, the hardware inference platform may be built as a heterogeneous computing platform comprising an NVIDIA Jetson Xavier dual-module processor and an Infineon TC297 functional safety processor, with each camera running at a frame rate of 20 FPS; its main function interface is shown in fig. 6.
Based on the above embodiments, in one embodiment of the present application, acquiring training data with a cross-mirror objective comprises:
detecting whether a detection target enters or leaves an overlapped view field between the camera and an adjacent camera;
if yes, storing the detection data corresponding to the current camera as training data with a cross-mirror target.
In this embodiment, a preliminary tracker may be preset to detect whether a detection target enters or leaves the overlapping field of view between a camera and an adjacent camera; if so, the detection data corresponding to the current camera can be automatically saved as training data with a cross-mirror target. Specifically, for a cross-mirror target, only the data within the camera's shooting range is retained, and out-of-range data is discarded (as shown in fig. 7, where the rear-view camera, i.e. the rear view angle in the figure, has no cross-mirror target). In this way, training data with cross-mirror targets can be acquired without manual labeling, improving the acquisition efficiency of the training data.
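The data-collection trigger can be sketched as below. The overlap region is modelled as a horizontal pixel strip at the edge of the image; in practice the strip bounds would come from camera calibration, so the structure here is an illustrative assumption.

```python
def in_overlap(box, overlap_x_range):
    """True if a detection box (x1, y1, x2, y2) intersects the overlap strip."""
    lo, hi = overlap_x_range
    return box[0] < hi and box[2] > lo

def collect_training_frames(detections, overlap_x_range, save):
    """Save detection data for every target inside the overlapping field of view."""
    for det in detections:
        if in_overlap(det["box"], overlap_x_range):
            save(det)            # stored as cross-mirror training data
```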
In one embodiment of the present application, storing detection data corresponding to a current camera as training data with a cross-mirror target includes:
acquiring detection data corresponding to a current camera;
and storing the detection data, of which the proportion of the size of the actual detection frame corresponding to the detection target to the size of the expected detection frame reaches a preset threshold value, as training data with the cross-mirror target.
In this embodiment, when a cross-mirror target is detected, the detection data corresponding to the current camera can be obtained and screened; specifically, the detection data in which the ratio of the size of the actual detection frame corresponding to the detection target to the size of the expected detection frame reaches the preset threshold may be stored as training data with a cross-mirror target.
It should be understood that through the screening, a certain area of the cross-mirror target can be ensured to be occupied in the video frame, so that the effectiveness of training data is improved, and the influence on subsequent retraining effect due to the fact that the cross-mirror target is too small in the video frame and cannot be subjected to target recognition and tracking is avoided.
It should be noted that the predetermined threshold value may be determined by a person skilled in the art according to prior experience, and may be, for example, 0.5 or 0.6. The above numbers are merely exemplary and are not intended to be limiting.
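The screening rule can be sketched as follows; the expected detection-frame areas per class are illustrative assumptions (the text says they would come from statistics on dynamic target sizes), and the 0.5 default threshold follows the example values above.

```python
# Hypothetical expected detection-frame areas per target class (pixels^2);
# in practice these would be derived from the target-size statistics
# mentioned in the embodiment.
EXPECTED_AREA = {"pedestrian": 40 * 100, "cyclist": 60 * 100, "vehicle": 120 * 90}

def keep_for_training(cls, box, threshold=0.5):
    """Keep a detection if its actual box area reaches the expected-area ratio."""
    area = (box[2] - box[0]) * (box[3] - box[1])
    return area / EXPECTED_AREA[cls] >= threshold
```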
According to the cross-mirror target tracking method applied to the vehicle, the video frames obtained by shooting through the cameras are obtained, target detection is carried out on the video frames to determine the detection target, the same tracker is adopted for carrying out target tracking on the detection target in the videos shot through the cameras arranged at intervals to generate the plurality of tracking units, and therefore the accuracy of follow-up identification can be improved. And inputting the tracking results corresponding to the two trackers into a pre-trained target recognition model for recognition so as to determine the tracking units belonging to the same detection target, wherein the target recognition model is obtained by retraining training data with a cross-mirror target, so that the accuracy of recognition of the same detection target can be improved, and the environment perception effect is further ensured.
Next, a cross-mirror target tracking device applied to a vehicle according to an embodiment of the present application will be described with reference to the drawings.
Fig. 8 is a block schematic diagram of a cross-mirror target tracking device applied to a vehicle according to an embodiment of the present application.
The plurality of cameras are arranged along the circumferential direction of the vehicle, the fields of view of the adjacent cameras are partially overlapped, and the fields of view of the cameras arranged at intervals are mutually independent. As shown in fig. 8, the cross-mirror target tracking device applied to a vehicle includes:
the target detection module 810 is configured to obtain video frames captured by the cameras, and perform target detection on the video frames to determine a detection target;
a target tracking module 820, configured to perform target tracking on the detected target in the video frame captured by the cameras set at intervals by using the same tracker to generate a plurality of tracking units;
and a target matching module 830, configured to input the tracking results corresponding to the two trackers into a pre-trained target recognition model for recognition, so as to determine the tracking units belonging to the same detection target, where the target recognition model is obtained by retraining with training data containing a cross-mirror target.
Optionally, after determining the tracking units belonging to the same target, the target matching module 830 is further configured to: assign the same identification information to the tracking units belonging to the same detection target so as to associate them.
Optionally, the device further includes a model training module 840, where the model training module 840 is configured to: acquire training data having a cross-mirror target and a pre-trained basic recognition model; and retrain the basic recognition model with the training data having the cross-mirror target, so that the basic recognition model can accurately recognize the same detection target in video frames captured by different cameras.
Optionally, the model training module 840 is configured to: detect whether a detection target enters or leaves the overlapped field of view between a camera and an adjacent camera; and if so, store the detection data corresponding to the current camera as training data having a cross-mirror target.
Optionally, the model training module 840 is configured to: acquire the detection data corresponding to the current camera; and store, as training data having a cross-mirror target, the detection data in which the ratio of the size of the actual detection frame corresponding to the detection target to the size of the expected detection frame reaches a preset threshold.
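The training-data filter described above can be sketched as follows. This is a hedged illustration, assuming a simplified geometric notion of the overlapped field of view (a lateral band at each image edge) and an illustrative area-ratio threshold of 0.6; none of these names or numbers come from the patent.

```python
# Hypothetical sketch of collecting cross-mirror training samples: keep a
# detection only when it lies in the (assumed) overlapped field of view and
# its actual detection frame is not overly truncated relative to the
# expected frame. Boxes are (x, y, w, h); the 0.2 band and 0.6 threshold
# are illustrative, not values from the patent.

def in_overlap_region(box, frame_width, overlap_frac=0.2):
    """True if the box centre falls in the lateral band assumed to be
    shared with an adjacent camera."""
    cx = box[0] + box[2] / 2.0
    band = frame_width * overlap_frac
    return cx < band or cx > frame_width - band

def collect_training_data(detections, frame_width, threshold=0.6):
    """detections: list of (actual_box, expected_box) pairs."""
    samples = []
    for actual, expected in detections:
        if not in_overlap_region(actual, frame_width):
            continue
        ratio = (actual[2] * actual[3]) / float(expected[2] * expected[3])
        if ratio >= threshold:  # frame not overly clipped at the image edge
            samples.append(actual)
    return samples

dets = [
    ((10, 50, 40, 40), (10, 50, 50, 50)),    # in overlap, ratio 0.64 -> kept
    ((300, 50, 40, 40), (300, 50, 50, 50)),  # image centre -> skipped
    ((10, 50, 10, 10), (10, 50, 50, 50)),    # too truncated (0.04) -> skipped
]
print(collect_training_data(dets, frame_width=640))  # [(10, 50, 40, 40)]
```

The size-ratio check reflects the intent of the preset threshold above: a target half-clipped by the image boundary yields a poor training sample, so only detections whose actual frame is close to the expected full frame are stored.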
It should be noted that the explanation of the embodiment of the cross-mirror target tracking method applied to the vehicle is also applicable to the cross-mirror target tracking device applied to the vehicle in the embodiment, and is not repeated herein.
According to the cross-mirror target tracking device applied to the vehicle, video frames captured by the cameras are obtained, and target detection is performed on the video frames to determine a detection target; the same tracker is then used to perform target tracking on the detection target in the videos captured by the cameras arranged at intervals, so as to generate a plurality of tracking units, which can improve the accuracy of subsequent recognition. The tracking results corresponding to the two trackers are input into a pre-trained target recognition model for recognition, so as to determine the tracking units belonging to the same detection target. Since the target recognition model is obtained by retraining with training data having a cross-mirror target, the accuracy of recognizing the same detection target can be improved, thereby ensuring the environment perception effect.
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device may include:
a memory 901, a processor 902 and a computer program stored on the memory 901 and executable on the processor 902.
The processor 902, when executing the program, implements the cross-mirror target tracking method applied to the vehicle provided in the above-described embodiments.
Further, the electronic device includes:
a communication interface 903 for communication between the memory 901 and the processor 902.
A memory 901 for storing computer programs executable on the processor 902.
The memory 901 may comprise a high-speed RAM memory, and may also comprise a non-volatile memory, such as at least one magnetic disk memory.
If the memory 901, the processor 902, and the communication interface 903 are implemented independently, the communication interface 903, the memory 901, and the processor 902 may be connected to each other through a bus and perform communication with each other. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 9, but this does not indicate only one bus or one type of bus.
Alternatively, in specific implementation, if the memory 901, the processor 902 and the communication interface 903 are integrated into one chip, the memory 901, the processor 902 and the communication interface 903 may complete mutual communication through an internal interface.
The processor 902 may be a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
Embodiments of the present application also provide a computer-readable storage medium having stored thereon a computer program, which when executed by a processor, implements the above cross-mirror target tracking method applied to a vehicle.
In the description herein, reference to the terms "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, the schematic representations of these terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine different embodiments or examples, and features of different embodiments or examples, described in this specification, provided they do not contradict each other.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "N" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing the steps of a custom logic function or process. Alternate implementations are included within the scope of the preferred embodiments of the present application, in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those skilled in the art of implementing the embodiments of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch the instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). Further, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, for instance via optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one of or a combination of the following techniques known in the art may be used: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit having appropriate combinational logic gates, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), and the like.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be implemented by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, performs one of or a combination of the steps of the method embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. The integrated module, if implemented in the form of a software functional module and sold or used as a separate product, may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. While embodiments of the present application have been shown and described above, it will be understood that the above embodiments are exemplary and should not be construed as limiting the present application and that changes, modifications, substitutions and alterations in the above embodiments may be made by those of ordinary skill in the art within the scope of the present application.

Claims (12)

1. A cross-mirror target tracking method applied to a vehicle is characterized in that a plurality of cameras are arranged along the circumferential direction of the vehicle, the fields of view of the adjacent cameras are partially overlapped, and the fields of view of the cameras arranged at intervals are mutually independent;
the method comprises the following steps:
acquiring video frames shot by each camera, and carrying out target detection on the video frames to determine a detection target;
target tracking is carried out on the detection target in the video frame shot by the cameras arranged at intervals by adopting the same tracker so as to generate a plurality of tracking units;
and inputting the tracking results corresponding to the two trackers into a pre-trained target recognition model for recognition so as to determine the tracking units belonging to the same detection target, wherein the target recognition model is obtained by retraining with training data having a cross-mirror target.
2. The method of claim 1, wherein after determining tracking units belonging to the same target, the method further comprises:
and distributing the same identification information to the tracking units belonging to the same detection target for association.
3. The method of claim 1 or 2, wherein retraining the target recognition model comprises:
acquiring training data having a cross-mirror target and a pre-trained basic recognition model;
and retraining the basic recognition model with the training data having the cross-mirror target, so that the basic recognition model can accurately recognize the same detection target in video frames captured by different cameras.
4. The method of claim 3, wherein acquiring training data having a cross-mirror objective comprises:
detecting whether a detection target enters or leaves an overlapped view field between the camera and an adjacent camera;
if so, storing the detection data corresponding to the current camera as training data with a cross-mirror target.
5. The method of claim 4, wherein storing the detection data corresponding to the current camera as training data with a cross-mirror goal comprises:
acquiring detection data corresponding to a current camera;
and storing the detection data, of which the proportion of the size of the actual detection frame corresponding to the detection target to the size of the expected detection frame reaches a preset threshold value, as training data with the cross-mirror target.
6. A cross-mirror target tracking device applied to a vehicle is characterized in that a plurality of cameras are arranged along the circumferential direction of the vehicle, the fields of view of the adjacent cameras are partially overlapped, and the fields of view of the cameras arranged at intervals are mutually independent;
the device comprises:
the target detection module is used for acquiring video frames shot by the cameras and carrying out target detection on the video frames so as to determine a detection target;
the target tracking module is used for performing target tracking on the detection target in the video frame shot by the cameras arranged at intervals by adopting the same tracker so as to generate a plurality of tracking units;
and the target matching module is used for inputting the tracking results corresponding to the two trackers into a pre-trained target recognition model for recognition so as to determine the tracking units belonging to the same detection target, wherein the target recognition model is obtained by retraining with training data having a cross-mirror target.
7. The apparatus of claim 6, wherein after determining tracking units belonging to the same target, the target matching module is further configured to:
and distributing the same identification information to the tracking units belonging to the same detection target for association.
8. The apparatus of claim 6 or 7, further comprising a model training module to:
acquiring training data having a cross-mirror target and a pre-trained basic recognition model;
and retraining the basic recognition model with the training data having the cross-mirror target, so that the basic recognition model can accurately recognize the same detection target in video frames captured by different cameras.
9. The apparatus of claim 8, wherein the model training module is configured to:
detecting whether a detection target enters or leaves an overlapped view field between the camera and an adjacent camera;
if yes, storing the detection data corresponding to the current camera as training data having a cross-mirror target.
10. The apparatus of claim 9, wherein the model training module is configured to:
acquiring detection data corresponding to a current camera;
and storing the detection data, of which the proportion of the size of the actual detection frame corresponding to the detection target to the size of the expected detection frame reaches a preset threshold value, as training data with the cross-mirror target.
11. An electronic device, comprising: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor executing the program to implement the cross-mirror target tracking method applied to a vehicle according to any one of claims 1 to 5.
12. A computer-readable storage medium having stored thereon a computer program, characterized in that the program is executed by a processor for implementing the cross-mirror target tracking method applied to a vehicle according to any one of claims 1 to 5.
CN202211287331.9A 2022-10-20 2022-10-20 Cross-mirror target tracking method, device, equipment and medium applied to vehicle Pending CN115546263A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211287331.9A CN115546263A (en) 2022-10-20 2022-10-20 Cross-mirror target tracking method, device, equipment and medium applied to vehicle


Publications (1)

Publication Number Publication Date
CN115546263A true CN115546263A (en) 2022-12-30

Family

ID=84736011



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination