CN112052807A - Vehicle position detection method, device, electronic equipment and storage medium - Google Patents

Vehicle position detection method, device, electronic equipment and storage medium

Info

Publication number
CN112052807A
Authority
CN
China
Prior art keywords
vehicle
image
key point
position detection
detected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010949703.4A
Other languages
Chinese (zh)
Other versions
CN112052807B (en)
Inventor
谭昶
贾若然
傅云翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Iflytek Information Technology Co Ltd
Original Assignee
Iflytek Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Iflytek Information Technology Co Ltd
Priority to CN202010949703.4A
Publication of CN112052807A
Application granted
Publication of CN112052807B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/25 - Fusion techniques
    • G06F18/253 - Fusion techniques of extracted features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/01 - Detecting movement of traffic to be counted or controlled
    • G08G1/017 - Detecting movement of traffic to be counted or controlled identifying vehicles
    • G08G1/0175 - Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30232 - Surveillance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30236 - Traffic on road, railway or crossing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 - Detecting or categorising vehicles

Abstract

The embodiment of the invention provides a vehicle position detection method, a vehicle position detection device, electronic equipment and a storage medium. The method comprises the following steps: determining an image containing a vehicle to be detected; and determining a key point position detection result of the vehicle to be detected based on the vehicle image feature, the vehicle orientation feature and the vehicle type feature of the vehicle to be detected in the image. In the vehicle position detection method, device, electronic equipment and storage medium, the position information of the vehicle to be detected is represented by its key point position detection result, which improves the accuracy of vehicle position detection, makes the method applicable to cameras with different shooting angles and improves the utilization rate of video data resources. At the same time, the influence of the vehicle orientation and the vehicle type on the position distribution of each key point on the vehicle is fully considered, which improves the accuracy of vehicle key point detection.

Description

Vehicle position detection method, device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method and an apparatus for detecting a vehicle position, an electronic device, and a storage medium.
Background
Vehicle position detection is an important step in detecting traffic violations by vehicles. The existing vehicle position detection method represents a vehicle as a target detection frame and detects the position of the vehicle in real time based on target detection and target tracking.
The existing vehicle position detection method requires that the shooting angle of the camera be aligned with the vehicle travelling direction, and cameras with oblique shooting angles easily produce a large number of misjudgments. As a result, a large number of cameras deployed in a city cannot be used for traffic violation detection, and the utilization rate of video data resources is low.
Disclosure of Invention
The embodiment of the invention provides a vehicle position detection method and device, electronic equipment and a storage medium, which are used to overcome the defects of the existing vehicle position detection method, namely its strict requirement on the camera shooting angle and the resulting low utilization rate of video data resources.
The embodiment of the invention provides a vehicle position detection method, which comprises the following steps:
determining an image containing a vehicle to be detected;
and determining a key point position detection result of the vehicle to be detected based on the vehicle image feature, the vehicle orientation feature and the vehicle type feature of the vehicle to be detected in the image.
According to the vehicle position detection method of an embodiment of the present invention, the determining a detection result of a key point position of the vehicle to be detected based on the vehicle image feature, the vehicle orientation feature, and the vehicle type feature of the vehicle to be detected in the image specifically includes:
respectively carrying out vehicle type identification and vehicle orientation identification on the vehicle to be detected based on the vehicle image characteristics of the vehicle to be detected in the image to obtain the vehicle type characteristics and the vehicle orientation characteristics of the vehicle to be detected;
and detecting key points of the vehicle to be detected based on the vehicle image characteristics, the vehicle type characteristics and the vehicle orientation characteristics to obtain a key point position detection result of the vehicle to be detected.
According to one embodiment of the present invention, the method for detecting a vehicle position, which detects a key point based on the vehicle image feature, the vehicle type feature, and the vehicle orientation feature to obtain a detection result of the key point of the vehicle to be detected, specifically includes:
performing feature fusion on the vehicle image feature, the vehicle type feature and the vehicle orientation feature based on an attention mechanism to obtain a vehicle fusion feature;
and detecting key points based on the vehicle fusion characteristics to obtain a detection result of the key point positions of the vehicle to be detected.
According to the vehicle position detection method of an embodiment of the present invention, the determining a detection result of a key point position of the vehicle to be detected based on the vehicle image feature, the vehicle orientation feature, and the vehicle type feature of the vehicle to be detected in the image specifically includes:
inputting the image into a vehicle position detection model to obtain a key point position detection result of the vehicle to be detected, which is output by the vehicle position detection model;
the vehicle position detection model is obtained by training based on a sample image, and a sample key point position detection result, a sample vehicle orientation and a sample vehicle type of a sample vehicle in the sample image.
According to the vehicle position detection method of one embodiment of the present invention, the loss function of the vehicle position detection model includes a localization loss function and an orientation type loss function;
wherein the localization loss function is determined based on a sample thermodynamic diagram for each sample keypoint in a sample keypoint location detection result of the sample image and a predictive thermodynamic diagram for each predictive keypoint of the sample image;
the predicted thermodynamic diagram of each predicted key point of the sample image is obtained by performing non-maximum suppression on the thermodynamic diagram output by the vehicle position detection model based on the sample image.
According to the vehicle position detection method of an embodiment of the present invention, after determining the detection result of the key point position of the vehicle to be detected based on the vehicle image feature, the vehicle orientation feature and the vehicle type feature of the vehicle to be detected in the image, the method further includes:
selecting a plurality of candidate images similar to the image;
correcting the detection result of the key point position of the vehicle to be detected based on the key point conversion matrix of each candidate image and the key point conversion matrix of the image;
wherein the key point conversion matrix is determined based on a target detection frame of a vehicle in a corresponding image and position information of each key point of the vehicle.
According to an embodiment of the present invention, the method for detecting a vehicle position, which corrects a detection result of a keypoint location of the vehicle to be detected based on the keypoint conversion matrix of each candidate image and the keypoint conversion matrix of the image, specifically includes:
selecting a target key point conversion matrix which is closest to the key point conversion matrix of the image from the key point conversion matrix of each candidate image;
determining a final key point conversion matrix of the image based on the key point conversion matrix of the image and the target key point conversion matrix;
and determining a final key point position detection result of the vehicle to be detected based on the target detection frame of the vehicle to be detected in the image and the final key point conversion matrix.
An embodiment of the present invention further provides a vehicle position detecting device, including:
an image determination unit for determining an image containing a vehicle to be detected;
and the vehicle position detection unit is used for determining a key point position detection result of the vehicle to be detected based on the vehicle image feature, the vehicle orientation feature and the vehicle type feature of the vehicle to be detected in the image.
An embodiment of the present invention further provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of any one of the vehicle position detection methods described above when executing the program.
Embodiments of the present invention also provide a non-transitory computer readable storage medium, on which a computer program is stored, the computer program, when being executed by a processor, implementing the steps of the vehicle position detecting method according to any one of the above.
According to the vehicle position detection method, the vehicle position detection device, the electronic equipment and the storage medium provided by the embodiments of the invention, the position information of the vehicle to be detected is represented by the key point position detection result of the vehicle to be detected, which improves the accuracy of vehicle position detection, makes the method applicable to cameras with different shooting angles and improves the utilization rate of video data resources. The key point position detection result of the vehicle to be detected is determined based on the vehicle image feature, the vehicle orientation feature and the vehicle type feature of the vehicle to be detected in the image, so the influence of the vehicle orientation and the vehicle type on the position distribution of each key point on the vehicle is fully considered and the accuracy of vehicle key point detection is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a schematic flow chart of a vehicle position detection method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of vehicle key points provided by an embodiment of the present invention;
fig. 3 is a schematic flowchart of a method for determining a detection result of a keypoint location according to an embodiment of the present invention;
fig. 4 is a schematic flowchart of a method for determining a detection result of a keypoint location according to another embodiment of the present invention;
fig. 5 is a schematic flowchart of a method for correcting a detection result of a keypoint location according to an embodiment of the present invention;
fig. 6 is a schematic flowchart of a method for determining a final detection result of a keypoint location according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a vehicle position detection model according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a residual error module according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a vehicle position detection apparatus according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
With the development of digital acquisition and storage technology, video surveillance systems have been rapidly popularized. The growing number of cameras deployed in cities produces a large amount of video data, and large-scale, full-coverage traffic violation detection can be realized based on the video data collected by these video surveillance systems.
Vehicle position detection is an important step in detecting traffic violations by vehicles. The existing vehicle position detection method represents a vehicle as a target detection frame and detects the position of the vehicle in real time based on target detection and target tracking.
The existing vehicle position detection method requires that the shooting angle of the camera be aligned with the vehicle travelling direction. When the shooting angle of the camera is oblique and therefore deviates from the vehicle travelling direction, or when the vehicle is a large vehicle such as a bus or a truck, the target detection frame of the vehicle cannot reflect the real spatial position of the vehicle. In that case, when the vehicle has not actually pressed a solid line but its target detection frame intersects the solid line, the existing vehicle position detection method will still report a line-pressing traffic violation, which causes a large number of misjudgments in traffic violation detection. As a result, a large number of cameras deployed in a city cannot be used for traffic violation detection, and the utilization rate of video data resources is low.
To this end, an embodiment of the present invention provides a vehicle position detecting method, and fig. 1 is a schematic flow chart of the vehicle position detecting method provided in the embodiment of the present invention, as shown in fig. 1, the method includes:
At step 110, an image containing the vehicle to be detected is determined.
Specifically, the vehicle to be detected may be any vehicle whose position needs to be detected, and the image containing the vehicle to be detected is acquired by a camera installed in advance. For example, target detection may be performed on one frame of the video data captured by the camera, or on a single image captured by the camera, and a target area containing the vehicle to be detected may be cropped out based on the target detection frame of the vehicle to be detected, so as to obtain the image containing the vehicle to be detected. Alternatively, target detection may be performed on one frame of the video data captured by the camera, or on a single image captured by the camera, and if a vehicle is detected, that frame or image is taken as the image containing the vehicle to be detected.
At step 120, a key point position detection result of the vehicle to be detected is determined based on the vehicle image feature, the vehicle orientation feature and the vehicle type feature of the vehicle to be detected in the image.
Specifically, the target detection frame currently used for vehicle position detection is only a rectangular frame containing the vehicle and cannot reflect the real spatial position of the vehicle. The key points of the vehicle, in contrast, are located on the vehicle surface, so the coordinates of each key point on the vehicle can represent the real spatial position of the vehicle more accurately.
Here, the key point of the vehicle may be specifically a left front wheel, a right front wheel, a left rear wheel, a right rear wheel, a left front lamp, a right front lamp, a left rear lamp, a right rear lamp, a front license plate, a rear license plate, a front emblem, a rear emblem, and the like of the vehicle. Fig. 2 is a schematic diagram of key points of a vehicle according to an embodiment of the present invention, and as shown in fig. 2, the key points of the vehicle may include: the apexes 11 and 12 of the left front lamp, the apexes 21 and 22 of the right front lamp, the apexes 31 and 32 of the front license plate, the apexes 41 and 42 of the lower edge of the vehicle, the apexes 51 and 52 of the upper edge of the vehicle, the center 61 of the right front wheel, and the center 62 of the right rear wheel.
For example, four key points may be respectively and correspondingly selected from the front left wheel, the front right wheel, the rear left wheel and the rear right wheel, four key points may also be respectively and correspondingly selected from the front left lamp, the front right lamp, the rear left lamp and the rear right lamp, and four vertices of the upper edge or the lower edge of the vehicle may also be directly selected.
When the image is captured, some key points on the vehicle to be detected are occluded by the vehicle itself, so all key points of the vehicle often do not appear in the image. The orientation of the vehicle in the image determines which key points appear in the image and the relative positional relationship between them. For example, when the vehicle in the image faces the camera head-on, only the key point of the left front wheel and the key point of the right front wheel appear in the image, and these key points lie in the same horizontal direction. When the vehicle in the image is seen from the side, only the key point of the left front wheel and the key point of the left rear wheel, or the key point of the right front wheel and the key point of the right rear wheel, appear in the image, and these key points also lie in the same horizontal direction. When the vehicle in the image is oriented at 45° from the front, only the key point of the left front wheel and the key point of the left rear wheel, or the key point of the right front wheel and the key point of the right rear wheel, appear in the image, and the line connecting these key points forms a 45° angle with the horizontal direction.
In addition, the type of the vehicle also affects the distribution of key point positions relative to the vehicle. For example, in a sedan or a medium passenger car, the key point of the rear wheels is located at the rear of the vehicle body; in a large passenger bus, the key point of the rear wheels is located just behind the middle of the vehicle body; in a small truck, the key point of the rear wheels is located towards the rear of the middle of the vehicle; and in a large truck, the key point of the rear wheels is located at the rear of the vehicle.
Considering the influence of the vehicle orientation and the vehicle type on the position distribution of each key point on the vehicle, the vehicle image feature can be supplemented with the vehicle orientation feature and the vehicle type feature when determining the key point position detection result of the vehicle to be detected, where the vehicle orientation feature and the vehicle type feature may themselves be determined from the vehicle image feature. Here, the vehicle image feature is a feature representation of the image information of the vehicle to be detected; the vehicle orientation feature is a feature representation of the orientation of the vehicle to be detected in the image, which may include front, back, side, front side 45°, rear side 45° and the like; and the vehicle type feature is a feature representation of the type of the vehicle to be detected, which may include large passenger car, medium passenger car, large truck, small truck, sedan and the like.
Here, the key point position detection result of the vehicle to be detected may include the position information of all key points on the vehicle to be detected, including the position information of key points that are occluded in the image. The position information of any key point can be represented as the coordinates of the key point, or as a thermodynamic diagram (heatmap) of the key point. The value of any point in the thermodynamic diagram of a key point represents the probability that this point is the key point: the higher the value, the higher the probability. The position of the key point can therefore be determined from the point values in its thermodynamic diagram, for example by taking the position of the point with the largest value as the position of the key point.
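As an illustration of the thermodynamic-diagram (heatmap) representation described above, the following minimal sketch shows how a key point position could be read out as the location of the largest point value. It assumes the heatmap is a NumPy array of shape H × W; the function name and the optional rejection threshold are illustrative assumptions, not part of the patent.

```python
import numpy as np

def keypoint_from_heatmap(heatmap: np.ndarray, min_score: float = 0.0):
    """Return (x, y, score) of the highest-valued point in a key point heatmap.

    heatmap: array of shape (H, W); each point value is the probability that
    the corresponding pixel is the key point. Returns None if even the best
    point falls below min_score (a hypothetical rejection threshold).
    """
    idx = np.argmax(heatmap)                        # flat index of the largest point value
    y, x = np.unravel_index(idx, heatmap.shape)     # convert to (row, column)
    score = float(heatmap[y, x])
    if score < min_score:
        return None
    return int(x), int(y), score
```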
Because the position information of each key point on the vehicle to be detected can reflect the real spatial position of the vehicle to be detected, representing the position of the vehicle by the position information of its key points, instead of by its target detection frame as in the prior art, improves the accuracy of vehicle position detection and, in turn, the accuracy of traffic violation detection based on that position information. It also lowers the requirement on the camera shooting angle, so the method can be applied to cameras with different shooting angles, improving the utilization rate of video data resources.
Moreover, determining the key point position detection result of the vehicle to be detected based on the vehicle image feature, the vehicle orientation feature and the vehicle type feature fully considers the influence of the vehicle orientation and the vehicle type on the position distribution of each key point on the vehicle, and improves the accuracy of vehicle key point detection.
According to the method provided by the embodiment of the invention, the position information of the vehicle to be detected is represented by the key point position detection result of the vehicle to be detected, which improves the accuracy of vehicle position detection, makes the method applicable to cameras with different shooting angles and improves the utilization rate of video data resources. The key point position detection result is determined based on the vehicle image feature, the vehicle orientation feature and the vehicle type feature of the vehicle to be detected in the image, so the influence of the vehicle orientation and the vehicle type on the position distribution of each key point on the vehicle is fully considered and the accuracy of vehicle key point detection is improved.
The obtained key point position detection result of the vehicle to be detected can be matched against the road in the image for traffic violation detection of the vehicle to be detected, and can also be used to detect the position of the vehicle to be detected in vehicle navigation, driving-test scenarios and the like; the embodiment of the present invention is not particularly limited in this respect.
Based on the above embodiment, in the method, the image including the vehicle to be tested may be determined based on the following steps:
firstly, the existing vehicle position detection method based on the target detection frame is adopted to carry out traffic violation detection, when the traffic violation behavior of the vehicle is detected, the vehicle is taken as a vehicle to be detected, a trigger frame image for outputting the detection result in video data is obtained, an image containing the vehicle to be detected is determined based on the target detection frame of the vehicle in the trigger frame image, so that the vehicle position detection is carried out on the image, and the key point position detection result of the vehicle to be detected is output.
By combining the existing vehicle position detection method in this way, only a vehicle for which a traffic violation has already been detected is taken as the vehicle to be detected, and vehicle position detection is performed on the image containing that vehicle, so that vehicle position detection does not have to be performed on every frame of the video data captured by the camera, which greatly reduces the amount of computation. In addition, rechecking the traffic violation detection result obtained with the prior art against the key point position detection result of the vehicle to be detected can effectively filter out misjudgments of traffic violation detection caused by inaccurate vehicle position information.
Based on any of the above embodiments, fig. 3 is a schematic flow chart of the method for determining a detection result of a keypoint location provided by the embodiment of the present invention, as shown in fig. 3, step 120 specifically includes:
step 121, respectively carrying out vehicle type identification and vehicle orientation identification on the vehicle to be detected based on the vehicle image characteristics of the vehicle to be detected in the image to obtain vehicle type characteristics and vehicle orientation characteristics of the vehicle to be detected;
and step 122, detecting key points of the vehicle to be detected based on the vehicle image characteristics, the vehicle type characteristics and the vehicle orientation characteristics to obtain a key point position detection result of the vehicle to be detected.
Specifically, before step 121 is executed, feature extraction may be performed on the image to obtain the vehicle image feature of the vehicle to be detected in the image. After the vehicle image feature of the vehicle to be detected is obtained, the vehicle type and the vehicle orientation of the vehicle to be detected can be identified based on the vehicle image feature, yielding the vehicle type feature and the vehicle orientation feature of the vehicle to be detected.
For example, further feature extraction may be performed on the vehicle image feature by a vehicle orientation layer and a vehicle type layer respectively, yielding the vehicle orientation feature output by the vehicle orientation layer and the vehicle type feature output by the vehicle type layer. Here, the vehicle orientation layer and the vehicle type layer may share the same network structure; for example, each may consist of one CNN (Convolutional Neural Network) and one global max pooling layer.
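A minimal PyTorch sketch of such a branch is given below, assuming the shared vehicle image feature is a tensor of shape (N, C, H, W). The layer sizes, channel counts and class counts are illustrative assumptions rather than values taken from the patent.

```python
import torch
import torch.nn as nn

class AttributeBranch(nn.Module):
    """One CNN block followed by global max pooling, as described for the
    vehicle orientation layer and the vehicle type layer (sizes assumed)."""

    def __init__(self, in_channels: int, feat_channels: int, num_classes: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, feat_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(feat_channels),
            nn.ReLU(inplace=True),
        )
        self.pool = nn.AdaptiveMaxPool2d(1)            # global max pooling
        self.classifier = nn.Linear(feat_channels, num_classes)

    def forward(self, image_feat: torch.Tensor):
        feat_map = self.conv(image_feat)
        attr_feat = self.pool(feat_map).flatten(1)     # (N, feat_channels) attribute feature
        logits = self.classifier(attr_feat)            # predicted orientation or type
        return attr_feat, logits

# Hypothetical usage: five orientation classes and five vehicle type classes
orientation_layer = AttributeBranch(in_channels=256, feat_channels=128, num_classes=5)
type_layer = AttributeBranch(in_channels=256, feat_channels=128, num_classes=5)
```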
On the basis, the key point detection is carried out on the vehicle to be detected based on the vehicle image characteristic, the vehicle type characteristic and the vehicle orientation characteristic, and the key point position detection result of the vehicle to be detected is output.
Based on any of the above embodiments, fig. 4 is a schematic flowchart of a method for determining a detection result of a keypoint location provided by an embodiment of the present invention, as shown in fig. 4, step 122 specifically includes:
and 1221, performing feature fusion on the vehicle image features, the vehicle type features and the vehicle orientation features based on the attention mechanism to obtain vehicle fusion features.
Specifically, based on an attention mechanism, attention transformations are applied to the vehicle type feature and the vehicle orientation feature respectively to obtain a fusion weight for the vehicle type feature and a fusion weight for the vehicle orientation feature. The vehicle type feature and the vehicle orientation feature are then fused with the vehicle image feature according to these fusion weights, and the resulting vehicle image feature, which now incorporates the vehicle type feature and the vehicle orientation feature, is output as the vehicle fusion feature.
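The patent does not spell out the exact attention formulation in this section; the sketch below is one plausible reading in which the orientation feature and the type feature are each projected to per-channel fusion weights that rescale the vehicle image feature. All layer shapes, the sigmoid gating and the residual addition are assumptions.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Fuse the vehicle image, type and orientation features with
    channel-attention weights (one plausible interpretation only)."""

    def __init__(self, img_channels: int, attr_dim: int):
        super().__init__()
        # attention transforms: attribute feature -> per-channel fusion weight
        self.type_attn = nn.Sequential(nn.Linear(attr_dim, img_channels), nn.Sigmoid())
        self.orient_attn = nn.Sequential(nn.Linear(attr_dim, img_channels), nn.Sigmoid())

    def forward(self, image_feat, type_feat, orient_feat):
        # image_feat: (N, C, H, W); type_feat, orient_feat: (N, attr_dim)
        w_type = self.type_attn(type_feat).unsqueeze(-1).unsqueeze(-1)        # (N, C, 1, 1)
        w_orient = self.orient_attn(orient_feat).unsqueeze(-1).unsqueeze(-1)  # (N, C, 1, 1)
        # vehicle fusion feature: image feature re-weighted by both attributes,
        # with a residual connection so the original image information is kept
        return image_feat * w_type * w_orient + image_feat
```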
And 1222, performing key point detection based on the vehicle fusion characteristics to obtain a key point position detection result of the vehicle to be detected.
Specifically, the vehicle fusion feature contains not only the image information of the vehicle to be detected, but also its orientation information and type information. Based on the image information contained in the vehicle fusion feature, the key points of the vehicle to be detected that appear in the image can be detected; on this basis, the key points that are occluded in the image can also be detected with the help of the orientation and type information contained in the vehicle fusion feature. Therefore, the key point position detection result obtained by performing key point detection on the vehicle fusion feature includes the position information of all key points on the vehicle to be detected.
Based on any of the above embodiments, in the method, step 120 specifically includes:
inputting the image into a vehicle position detection model to obtain a key point position detection result of the vehicle to be detected, which is output by the vehicle position detection model;
the vehicle position detection model is obtained by training based on the sample image, and the detection result of the sample key point position of the sample vehicle in the sample image, the orientation of the sample vehicle and the type of the sample vehicle.
Specifically, after obtaining the image including the vehicle to be detected, the vehicle position detection for the vehicle to be detected in the image may be specifically implemented by a vehicle position detection model obtained through pre-training. The vehicle position detection model is used for extracting vehicle image features of a vehicle to be detected in an image, determining vehicle orientation features and vehicle type features based on the vehicle image features, assisting the vehicle image features with the vehicle orientation features and the vehicle type features, and determining a key point position detection result of the vehicle to be detected.
Before step 120 is executed, the vehicle position detection model may be obtained by training in advance, specifically as follows. First, a large number of sample images are collected, and the sample key point position detection result, the sample vehicle orientation and the sample vehicle type of the sample vehicle in each sample image are annotated. Here, the sample key point position detection result may include the sample coordinates of each sample key point on the sample vehicle, and may also include a sample thermodynamic diagram of each sample key point, where the sample thermodynamic diagram of any sample key point may be generated from the labelled coordinates of that sample key point and a Gaussian kernel.
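For the sample thermodynamic diagrams mentioned above, a common way to generate a heatmap from a labelled coordinate and a Gaussian kernel is sketched below; the heatmap size and the standard deviation sigma are assumptions.

```python
import numpy as np

def gaussian_heatmap(x: int, y: int, height: int, width: int, sigma: float = 2.0) -> np.ndarray:
    """Sample thermodynamic diagram for one labelled key point at (x, y):
    a 2-D Gaussian centred on the annotated coordinate with peak value 1."""
    xs = np.arange(width)[None, :]    # (1, W)
    ys = np.arange(height)[:, None]   # (H, 1)
    return np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2.0 * sigma ** 2))
```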
And then, inputting the sample image and the detection result of the position of the sample key point of the sample vehicle in the sample image, the orientation of the sample vehicle and the type of the sample vehicle into the initial model for training, thereby obtaining the vehicle position detection model.
According to any one of the above embodiments, in the method, the loss function of the vehicle position detection model includes a positioning loss function and an orientation type loss function;
the positioning loss function is determined based on the sample thermodynamic diagram of each sample key point in the sample key point position detection result of the sample image and the prediction thermodynamic diagram of each prediction key point of the sample image; the predicted thermodynamic diagram of each predicted key point of the sample image is obtained by carrying out non-maximum suppression on the thermodynamic diagram output by the vehicle position detection model based on the sample image.
Specifically, during training, a sample image is input into the vehicle position detection model. The model extracts the sample vehicle image feature of the sample vehicle in the sample image, determines the sample vehicle orientation feature and the sample vehicle type feature based on the sample vehicle image feature, and outputs the predicted vehicle orientation and the predicted vehicle type of the sample vehicle based on the sample vehicle orientation feature and the sample vehicle type feature respectively.
Further, the vehicle position detection model outputs a predicted sample vehicle position detection result of the sample vehicle based on the sample vehicle image feature, the sample vehicle orientation feature, and the sample vehicle type feature. The predicted sample vehicle position detection result may include an output thermodynamic diagram for each predicted keypoint on the sample vehicle, and accordingly, the sample keypoint position detection result may include a sample thermodynamic diagram for each sample keypoint.
After obtaining the predicted vehicle orientation and the predicted vehicle type output by the vehicle position detection model based on the sample image, and the predicted sample vehicle position detection result, the orientation type loss function of the vehicle position detection model may be determined based on the predicted vehicle orientation and the pre-labeled sample vehicle orientation, and the predicted vehicle type and the pre-labeled sample vehicle type.
In addition, because the vehicle position detection model outputs the position information of each key point on the vehicle, the training process only needs to be concerned with whether the predicted key points in the sample image are accurate, i.e. the difference between each predicted key point and the corresponding sample key point should be as small as possible; the accuracy of other points in the output thermodynamic diagram that are unrelated to the predicted key points does not need to be considered.
Therefore, non-maximum suppression can be applied to the output thermodynamic diagram based on its point values, keeping the larger point values and suppressing the smaller ones, for example by setting points whose values are smaller than a preset threshold to zero. This yields the predicted thermodynamic diagram of the predicted key point, in which only a few candidate points of the predicted key point have non-zero values and all other points are zero.
After the predicted thermodynamic diagram of each predicted key point is obtained, the localization loss function of the vehicle position detection model is determined based on the sample thermodynamic diagram of each sample key point and the predicted thermodynamic diagram of each predicted key point. The localization loss function only needs to compare each candidate point in the predicted thermodynamic diagram with the corresponding point in the sample thermodynamic diagram, rather than comparing every point of the two diagrams one by one, which reduces the interference of points unrelated to the predicted key points on model training, greatly reduces the amount of computation of the loss function and speeds up model training.
After the orientation type loss function and the localization loss function are obtained, they can be combined into the loss function of the vehicle position detection model, for example by taking their sum or their weighted sum. The model parameters of the vehicle position detection model are adjusted continuously to minimize this loss function, thereby realizing multi-objective training of the vehicle position detection model.
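A PyTorch sketch of how the loss described above might be assembled is given below: the output thermodynamic diagram is thresholded (the simple form of suppression described above, setting small point values to zero), the localization loss compares only the surviving candidate points with the sample thermodynamic diagram, and the orientation/type loss is added with an assumed weight. The threshold value, the choice of MSE and cross-entropy, and the weights are all assumptions.

```python
import torch
import torch.nn.functional as F

def localization_loss(pred_heatmap, sample_heatmap, threshold=0.1):
    """Compare only the candidate points that survive suppression
    (point values below the preset threshold are set to zero)."""
    suppressed = torch.where(pred_heatmap >= threshold, pred_heatmap,
                             torch.zeros_like(pred_heatmap))
    mask = suppressed > 0
    if mask.sum() == 0:                                  # no candidate survived
        return F.mse_loss(pred_heatmap, sample_heatmap)
    return F.mse_loss(suppressed[mask], sample_heatmap[mask])

def total_loss(pred_heatmap, sample_heatmap,
               orient_logits, orient_label, type_logits, type_label,
               w_loc=1.0, w_attr=1.0):
    """Weighted sum of the localization loss and the orientation type loss."""
    loc = localization_loss(pred_heatmap, sample_heatmap)
    attr = (F.cross_entropy(orient_logits, orient_label)
            + F.cross_entropy(type_logits, type_label))
    return w_loc * loc + w_attr * attr
```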
According to the method provided by the embodiment of the invention, the sample vehicle orientation and the sample vehicle type are used as additional supervision information for model training, which improves the precision of the vehicle position detection model; determining the localization loss function based on the predicted thermodynamic diagram obtained by applying non-maximum suppression to the output thermodynamic diagram greatly reduces the amount of computation of the loss function and speeds up model training.
The vehicle position detection model determines the key point position detection result of the vehicle to be detected based on the vehicle image feature, the vehicle orientation feature and the vehicle type feature of the vehicle to be detected in the image. Because this result depends on the accuracy of the model's feature extraction, a certain deviation may exist between the position information of each key point in the detection result and the actual position information.
To this end, based on any one of the above embodiments, fig. 5 is a schematic flowchart of a method for correcting a detection result of a keypoint location provided by an embodiment of the present invention, as shown in fig. 5, the step 120 further includes:
step 131, selecting a plurality of candidate images similar to the image.
Specifically, candidate images similar to the image containing the vehicle to be detected may be retrieved from an image library, for example by taking images whose vehicle image features have a high similarity to the vehicle image feature of the vehicle to be detected as candidate images; candidate images may also be selected from a large number of sample images by image retrieval. The embodiment of the present invention does not specifically limit the source of the candidate images or how they are obtained. Preferably, the number of candidate images may range from 3 to 10.
Step 132, correcting the detection result of the key point position of the vehicle to be detected based on the key point conversion matrix of each candidate image and the key point conversion matrix of the image;
wherein the key point conversion matrix is determined based on the object detection frame of the vehicle in the corresponding image and the position information of each key point of the vehicle.
Specifically, after the candidate images are obtained, a key point conversion matrix between the target detection frame and the key points in each candidate image can be established based on the coordinates of the four vertices of the target detection frame of the vehicle in the candidate image and the coordinates of each key point on that vehicle; the key point conversion matrix of a candidate image represents the position distribution of each key point on the vehicle relative to the vehicle. The target detection frame of the vehicle in a candidate image may be labelled manually or obtained by performing target detection on the candidate image, and the coordinates of the key points of the vehicle in the candidate image may be labelled manually.
Similarly, target detection is performed on the image containing the vehicle to be detected to obtain the target detection frame of the vehicle to be detected in the image, and the coordinates of each key point on the vehicle to be detected are determined from the key point position detection result output by the vehicle position detection model for the image. The key point conversion matrix of the image is then determined based on the coordinates of the vertices of the target detection frame of the vehicle to be detected and the coordinates of each key point on the vehicle to be detected; this matrix represents the position distribution of each key point on the vehicle to be detected relative to the vehicle to be detected.
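The patent does not give the concrete form of the key point conversion matrix. One minimal interpretation, sketched below, expresses each key point as a linear combination of the four vertices of the target detection frame and solves for the combination weights by least squares; the shapes and the least-squares choice are assumptions.

```python
import numpy as np

def keypoint_conversion_matrix(box_vertices: np.ndarray, keypoints: np.ndarray) -> np.ndarray:
    """box_vertices: (4, 2) corners of the target detection frame;
    keypoints: (K, 2) key point coordinates of the same vehicle.
    Returns a (K, 4) matrix W with W @ box_vertices ~= keypoints, i.e. each
    key point expressed relative to the detection frame (minimum-norm solution)."""
    w_t, *_ = np.linalg.lstsq(box_vertices.T, keypoints.T, rcond=None)
    return w_t.T

def apply_conversion_matrix(matrix: np.ndarray, box_vertices: np.ndarray) -> np.ndarray:
    """Recover key point coordinates from a detection frame via the matrix."""
    return matrix @ box_vertices
```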
Since the candidate image is similar to the image containing the vehicle to be detected, the vehicle type and the vehicle orientation of the vehicle in the candidate image and the vehicle to be detected in the image are substantially the same, and the position distribution of each key point on the vehicle in the candidate image relative to the vehicle and the position distribution of each key point on the vehicle to be detected in the image relative to the vehicle to be detected are also substantially the same.
Considering that the coordinates of each key point on the vehicle in a candidate image are labelled manually and are therefore relatively accurate, the key point conversion matrix of a candidate image reflects the true position distribution of the key points relative to the vehicle more faithfully. The key point conversion matrix of the image can therefore be corrected based on the key point conversion matrices of the candidate images, and the key point position detection result of the vehicle to be detected can then be corrected based on the corrected key point conversion matrix of the image and the target detection frame of the vehicle to be detected.
Here, the key point conversion matrix that is closest to the key point conversion matrix of the image among the key point conversion matrices of the candidate images may be used as the corrected key point conversion matrix of the image; alternatively, the corrected key point conversion matrix of the image may be determined based on the average of the key point conversion matrices of all candidate images and the key point conversion matrix of the image.
According to the method provided by the embodiment of the invention, the candidate image similar to the image is adopted to correct the detection result of the vehicle key point position of the vehicle to be detected, so that the accuracy of the vehicle key point detection is further improved.
Based on any of the above embodiments, fig. 6 is a schematic flow chart of the method for determining the final keypoint location detection result provided by the embodiment of the present invention, as shown in fig. 6, step 132 specifically includes:
step 1321, selecting a target key point transformation matrix closest to the key point transformation matrix of the image from the key point transformation matrices of each candidate image;
step 1322, determining a final key point transformation matrix of the image based on the key point transformation matrix of the image and the target key point transformation matrix;
and step 1323, determining a final key point position detection result of the vehicle to be detected based on the target detection frame and the final key point conversion matrix of the vehicle to be detected in the image.
Specifically, after the key point conversion matrix of each candidate image is obtained, the one closest to the key point conversion matrix of the image is selected as the target key point conversion matrix. The key point conversion matrix of the image is then corrected with the target key point conversion matrix to obtain the final key point conversion matrix of the image.
Specifically, the final key point conversion matrix of the image, matric_final, can be calculated by the following formula:

matric_final = ε * matric_ori + (1 - ε) * matric_dst

where matric_ori is the key point conversion matrix of the image, matric_dst is the target key point conversion matrix, and ε is a preset smoothing coefficient.
After the final key point conversion matrix of the image is obtained, coordinate conversion is performed on the coordinates of the vertices of the target detection frame of the vehicle to be detected using the final key point conversion matrix, and the resulting coordinates of each key point are taken as the final key point position detection result of the vehicle to be detected.
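Under the same assumed matrix form as above, the smoothing formula and the final coordinate conversion can be sketched as follows; the value of epsilon is an assumption.

```python
import numpy as np

def final_keypoints(matric_ori: np.ndarray, matric_dst: np.ndarray,
                    box_vertices: np.ndarray, epsilon: float = 0.5) -> np.ndarray:
    """matric_ori: key point conversion matrix of the image;
    matric_dst: target (closest candidate) key point conversion matrix;
    epsilon: preset smoothing coefficient in [0, 1] (value assumed).
    Returns the corrected key point coordinates of the vehicle to be detected."""
    matric_final = epsilon * matric_ori + (1.0 - epsilon) * matric_dst
    return matric_final @ box_vertices     # coordinate conversion of the frame vertices
```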
Based on any of the above embodiments, fig. 7 is a schematic structural diagram of a vehicle position detection model provided in an embodiment of the present invention. As shown in fig. 7, the vehicle position detection model includes a base network and two branches, namely a vehicle orientation layer and a vehicle type layer. The base network includes an image feature extraction layer and a key point detection layer, and the vehicle orientation layer and the vehicle type layer together form the vehicle feature extraction layer. The base network is specifically a stacked hourglass network, and the vehicle orientation layer and the vehicle type layer have the same network structure, each comprising a CNN and a global max pooling layer. In fig. 7, a rectangle denotes a residual module, an upward arrow denotes an up-sampling operation, and a downward arrow denotes a down-sampling operation.
After the image containing the vehicle to be detected is obtained, it is resized to a fixed size, for example 256 × 256, and a feature map corresponding to the image is computed. For example, the image may be passed through a convolution layer with a convolution kernel size kernel_size of 7 and a stride of 2, followed by a pooling layer with a pooling size pool_size of 2, yielding a 64 × 64 feature map.
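A PyTorch sketch of this preprocessing stem is shown below; the input and output channel counts are assumptions, while the kernel size, stride, pooling size and the 256 × 256 to 64 × 64 size change follow the description above.

```python
import torch
import torch.nn as nn

stem = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3),  # 256 x 256 -> 128 x 128
    nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=2),                            # 128 x 128 -> 64 x 64
)

image = torch.randn(1, 3, 256, 256)     # image resized to the fixed 256 x 256 size
feature_map = stem(image)               # -> (1, 64, 64, 64) feature map
```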
The feature map is then fed into the hourglass network, where it is split into upper half paths and a lower half path, both of which contain several residual modules that progressively extract deeper features from the feature map. Each upper half path contains three residual modules for feature extraction, and the four upper half paths correspond to feature maps of four different sizes, the largest being the original size of the feature map; the lower half path performs down-sampling followed by up-sampling during feature extraction. Before each down-sampling, an upper half path branches off to retain the information at the current size; after each up-sampling, the up-sampled data is added to the data retained at the corresponding larger size. Three residual modules are arranged between two successive down-sampling operations, and one residual module is arranged between two successive additions.
After the vehicle image feature of the vehicle to be detected has been extracted by the residual modules of the lower half path, it is fed into the vehicle orientation layer and the vehicle type layer respectively, yielding the vehicle orientation feature output by the vehicle orientation layer and the vehicle type feature output by the vehicle type layer, which are then fed back into the hourglass network. During training, the vehicle orientation layer determines a predicted vehicle orientation (Orientation) from the vehicle orientation feature, and the vehicle type layer determines a predicted vehicle type (Type) from the vehicle type feature. Based on the attention mechanism, the vehicle image feature, the vehicle orientation feature and the vehicle type feature are fused to obtain the vehicle fusion feature; the hourglass network continues feature extraction on the vehicle fusion feature and outputs the key point position detection result. For example, the key point position detection result may be a 64 × 64 × 4 feature map, where each channel corresponds to the thermodynamic diagram of one key point; a feature map with 4 channels thus corresponds to 4 key points.
Based on any of the above embodiments, fig. 8 is a schematic structural diagram of a residual module according to an embodiment of the present invention. As shown in fig. 8, the residual module (Residual Module) comprises two paths. The left path is a convolution path used to extract higher-level features; it consists of three convolution layers with different convolution kernel sizes connected in series, with a batch normalization layer and a ReLU layer inserted between the convolution layers. The right path is a skip path used to retain the information of the original level; it contains only one convolution layer. In fig. 8, M is the number of input channels, N is the number of output channels, and k is the convolution kernel size.
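A PyTorch sketch of such a residual module is given below. The 1-3-1 kernel sizes and the halved intermediate channel count are common hourglass choices used here as assumptions; only the overall structure (a three-convolution path with batch normalization and ReLU, plus a single-convolution skip path) follows the description above.

```python
import torch
import torch.nn as nn

class ResidualModule(nn.Module):
    """Residual module: a convolution path of three convolutions with batch
    normalization and ReLU in between, plus a skip path with one convolution.
    M is the number of input channels, N the number of output channels."""

    def __init__(self, m: int, n: int):
        super().__init__()
        mid = n // 2
        self.conv_path = nn.Sequential(
            nn.Conv2d(m, mid, kernel_size=1),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, kernel_size=3, padding=1),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, n, kernel_size=1),
        )
        self.skip = nn.Conv2d(m, n, kernel_size=1)   # skip path: one convolution

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv_path(x) + self.skip(x)
```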
Based on any of the above embodiments, fig. 9 is a schematic structural diagram of a vehicle position detection apparatus provided in an embodiment of the present invention, and as shown in fig. 9, the apparatus includes:
an image determining unit 910 for determining an image containing a vehicle to be tested;
and a vehicle position detection unit 920, configured to determine a detection result of a key point position of the vehicle to be detected based on the vehicle image feature, the vehicle orientation feature, and the vehicle type feature of the vehicle to be detected in the image.
According to the device provided by the embodiment of the invention, the position information of the vehicle to be detected is represented by the key point position detection result of the vehicle to be detected, so that the accuracy of vehicle position detection is improved, the device can be suitable for cameras with different shooting angles, and the utilization rate of video data resources is improved; the method and the device for detecting the key points of the vehicle determine the detection result of the key points of the vehicle to be detected based on the vehicle image characteristics, the vehicle orientation characteristics and the vehicle type characteristics of the vehicle to be detected in the image, fully consider the influence of the orientation of the vehicle and the type of the vehicle on the position distribution of each key point on the vehicle, and improve the accuracy of the detection of the key points of the vehicle.
Based on any of the above embodiments, the vehicle position detection unit 920 specifically includes:
the vehicle feature extraction module is used for respectively carrying out vehicle type identification and vehicle orientation identification on the vehicle to be detected based on the vehicle image features of the vehicle to be detected in the image to obtain the vehicle type features and the vehicle orientation features of the vehicle to be detected;
and the key point position detection module is used for detecting key points of the vehicle to be detected based on the vehicle image characteristics, the vehicle type characteristics and the vehicle orientation characteristics to obtain a key point position detection result of the vehicle to be detected.
Based on any one of the above embodiments, the key point position detection module specifically includes:
the attention feature fusion submodule is used for carrying out feature fusion on the vehicle image features, the vehicle type features and the vehicle orientation features based on an attention mechanism to obtain vehicle fusion features;
and the result output submodule is used for carrying out key point detection based on the vehicle fusion characteristics to obtain a key point position detection result of the vehicle to be detected.
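As an illustration of what the result output submodule might do with the per-channel heat maps, the sketch below takes the peak of each channel and rescales it to the input resolution to obtain one (x, y) position per key point. This decoding convention, the 256-pixel input size and the function name decode_heatmaps are assumptions; the embodiments only state that key point detection is performed on the vehicle fusion features.

```python
import torch

def decode_heatmaps(heatmaps, input_size=256):
    """Read one (x, y) position and one score per channel from (B, K, H, W) heat maps."""
    b, k, h, w = heatmaps.shape
    flat = heatmaps.view(b, k, -1)
    scores, idx = flat.max(dim=-1)                                            # peak value per key point
    ys = torch.div(idx, w, rounding_mode="floor").float() * (input_size / h)  # peak row, rescaled
    xs = (idx % w).float() * (input_size / w)                                 # peak column, rescaled
    return torch.stack([xs, ys], dim=-1), scores                              # (B, K, 2), (B, K)

coords, scores = decode_heatmaps(torch.randn(2, 4, 64, 64))
print(coords.shape, scores.shape)  # torch.Size([2, 4, 2]) torch.Size([2, 4])
```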
Based on any of the above embodiments, the vehicle position detection unit 920 is specifically configured to:
inputting the image into a vehicle position detection model to obtain a key point position detection result of the vehicle to be detected, which is output by the vehicle position detection model;
the vehicle position detection model is obtained by training based on a sample image, and a sample key point position detection result, a sample vehicle orientation and a sample vehicle type of a sample vehicle in the sample image.
According to any one of the above embodiments, in the apparatus, the loss function of the vehicle position detection model includes a localization loss function and an orientation type loss function;
wherein the localization loss function is determined based on the sample thermodynamic diagram of each sample key point in the sample key point position detection result of the sample image and the predicted thermodynamic diagram of each predicted key point of the sample image; the predicted thermodynamic diagram of each predicted key point of the sample image is obtained by performing non-maximum suppression on the thermodynamic diagram output by the vehicle position detection model for the sample image.
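The following is a minimal PyTorch-style sketch of how such a training objective could look: the localization term compares the non-maximum-suppressed predicted thermodynamic diagrams with Gaussian sample thermodynamic diagrams, and the orientation type term uses cross-entropy on the predicted vehicle orientation and vehicle type. The mean-squared-error form, the Gaussian rendering, the loss weights and all function names are assumptions introduced for illustration only.

```python
import torch
import torch.nn.functional as F

def heatmap_nms(pred, kernel=3):
    """Non-maximum suppression on heat maps: keep only local maxima."""
    pooled = F.max_pool2d(pred, kernel, stride=1, padding=kernel // 2)
    return pred * (pooled == pred).float()

def gaussian_heatmap(h, w, cx, cy, sigma=2.0):
    """Render the sample heat map of one key point as a Gaussian peak."""
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    return torch.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

def total_loss(pred_heatmaps, gt_heatmaps, orient_logits, orient_label,
               type_logits, type_label, w_orient=0.1, w_type=0.1):
    """Localization loss plus orientation/type loss (forms and weights assumed)."""
    loc = F.mse_loss(heatmap_nms(pred_heatmaps), gt_heatmaps)
    orient = F.cross_entropy(orient_logits, orient_label)
    vtype = F.cross_entropy(type_logits, type_label)
    return loc + w_orient * orient + w_type * vtype

# Toy usage: one sample, 4 key points, 8 orientation bins, 5 vehicle types
gt = torch.stack([gaussian_heatmap(64, 64, cx, cy)
                  for cx, cy in [(10, 12), (50, 12), (10, 55), (50, 55)]]).unsqueeze(0)
loss = total_loss(torch.rand(1, 4, 64, 64), gt,
                  torch.randn(1, 8), torch.tensor([3]),
                  torch.randn(1, 5), torch.tensor([1]))
print(loss.item())
```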
Based on any embodiment above, the apparatus further comprises:
the key point position detection result correction unit is used for selecting a plurality of candidate images similar to the image;
and correcting the key point position detection result of the vehicle to be detected based on the key point conversion matrix of each candidate image and the key point conversion matrix of the image; wherein the key point conversion matrix is determined based on the target detection frame of the vehicle in the corresponding image and the position information of each key point of the vehicle.
Based on any of the above embodiments, the keypoint location detection result correction unit is specifically configured to:
selecting a target key point conversion matrix which is closest to the key point conversion matrix of the image from the key point conversion matrix of each candidate image;
determining a final key point conversion matrix of the image based on the key point conversion matrix of the image and the target key point conversion matrix;
and determining a final key point position detection result of the vehicle to be detected based on the target detection frame and the final key point conversion matrix of the vehicle to be detected in the image.
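For illustration, the plain Python/NumPy sketch below shows one possible form of this correction. It assumes the key point conversion matrix is simply the set of key point coordinates normalised by the target detection frame, and that the final matrix is a weighted blend of the image's own matrix and the nearest candidate matrix; both assumptions, the alpha weight and the toy candidate values go beyond what the embodiments specify.

```python
import numpy as np

def keypoint_matrix(box, keypoints):
    """Assumed form of the key point conversion matrix: key point coordinates
    normalised by the target detection frame, one row per key point."""
    x0, y0, x1, y1 = box
    kp = np.asarray(keypoints, dtype=float)
    return (kp - [x0, y0]) / [x1 - x0, y1 - y0]         # shape (K, 2)

def correct_keypoints(box, keypoints, candidate_matrices, alpha=0.5):
    """Pick the nearest candidate matrix, blend it with the image's own matrix,
    and map the result back through the target detection frame."""
    m = keypoint_matrix(box, keypoints)
    dists = [np.linalg.norm(m - c) for c in candidate_matrices]
    target = candidate_matrices[int(np.argmin(dists))]  # closest candidate matrix
    final = alpha * m + (1 - alpha) * target            # final key point conversion matrix
    x0, y0, x1, y1 = box
    return final * [x1 - x0, y1 - y0] + [x0, y0]        # corrected key point positions

# Toy usage with one hypothetical candidate image
box = (100, 200, 300, 350)
kps = [(120, 330), (280, 335), (130, 240), (270, 245)]
candidates = [keypoint_matrix((0, 0, 200, 150),
                              [(15, 128), (185, 132), (28, 42), (172, 45)])]
print(correct_keypoints(box, kps, candidates))
```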
Fig. 10 illustrates a schematic diagram of the physical structure of an electronic device. As shown in fig. 10, the electronic device may include: a processor (processor) 1010, a communication interface (Communications Interface) 1020, a memory (memory) 1030, and a communication bus 1040, wherein the processor 1010, the communication interface 1020 and the memory 1030 communicate with each other via the communication bus 1040. The processor 1010 may invoke logic instructions in the memory 1030 to perform a vehicle position detection method comprising: determining an image containing a vehicle to be detected; and determining a key point position detection result of the vehicle to be detected based on the vehicle image features, the vehicle orientation features and the vehicle type features of the vehicle to be detected in the image.
Furthermore, the logic instructions in the memory 1030 may be implemented as software functional units and, when sold or used as an independent product, stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In another aspect, an embodiment of the present invention further provides a computer program product, where the computer program product includes a computer program stored on a non-transitory computer-readable storage medium; the computer program includes program instructions which, when executed by a computer, enable the computer to execute the vehicle position detection method provided by the above method embodiments, the method comprising: determining an image containing a vehicle to be detected; and determining a key point position detection result of the vehicle to be detected based on the vehicle image feature, the vehicle orientation feature and the vehicle type feature of the vehicle to be detected in the image.
In yet another aspect, an embodiment of the present invention further provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the vehicle position detection method provided by the above embodiments, the method comprising: determining an image containing a vehicle to be detected; and determining a key point position detection result of the vehicle to be detected based on the vehicle image feature, the vehicle orientation feature and the vehicle type feature of the vehicle to be detected in the image.
The above-described embodiments of the apparatus are merely illustrative: the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A vehicle position detection method characterized by comprising:
determining an image containing a vehicle to be detected;
and determining a key point position detection result of the vehicle to be detected based on the vehicle image feature, the vehicle orientation feature and the vehicle type feature of the vehicle to be detected in the image.
2. The vehicle position detection method according to claim 1, wherein the determining a detection result of the key point position of the vehicle to be detected based on the vehicle image feature, the vehicle orientation feature, and the vehicle type feature of the vehicle to be detected in the image specifically includes:
respectively carrying out vehicle type identification and vehicle orientation identification on the vehicle to be detected based on the vehicle image characteristics of the vehicle to be detected in the image to obtain the vehicle type characteristics and the vehicle orientation characteristics of the vehicle to be detected;
and detecting key points of the vehicle to be detected based on the vehicle image characteristics, the vehicle type characteristics and the vehicle orientation characteristics to obtain a key point position detection result of the vehicle to be detected.
3. The vehicle position detection method according to claim 2, wherein the performing the key point detection based on the vehicle image feature, the vehicle type feature, and the vehicle orientation feature to obtain the key point position detection result of the vehicle to be detected specifically includes:
performing feature fusion on the vehicle image feature, the vehicle type feature and the vehicle orientation feature based on an attention mechanism to obtain a vehicle fusion feature;
and detecting key points based on the vehicle fusion characteristics to obtain a detection result of the key point positions of the vehicle to be detected.
4. The vehicle position detection method according to any one of claims 1 to 3, wherein the determining a key point position detection result of the vehicle to be detected based on the vehicle image feature, the vehicle orientation feature, and the vehicle type feature of the vehicle to be detected in the image specifically includes:
inputting the image into a vehicle position detection model to obtain a key point position detection result of the vehicle to be detected, which is output by the vehicle position detection model;
the vehicle position detection model is obtained by training based on a sample image, and a sample key point position detection result, a sample vehicle orientation and a sample vehicle type of a sample vehicle in the sample image.
5. The vehicle position detection method according to claim 4, characterized in that the loss function of the vehicle position detection model includes a localization loss function and an orientation type loss function;
wherein the localization loss function is determined based on a sample thermodynamic diagram for each sample keypoint in a sample keypoint location detection result of the sample image and a predictive thermodynamic diagram for each predictive keypoint of the sample image;
the predicted thermodynamic diagram of each predicted key point of the sample image is obtained by performing non-maximum suppression on the thermodynamic diagram output by the vehicle position detection model based on the sample image.
6. The vehicle position detection method according to any one of claims 1 to 3, wherein after the determining of the key point position detection result of the vehicle to be detected based on the vehicle image feature, the vehicle orientation feature, and the vehicle type feature of the vehicle to be detected in the image, the method further comprises:
selecting a plurality of candidate images similar to the image;
correcting the detection result of the key point position of the vehicle to be detected based on the key point conversion matrix of each candidate image and the key point conversion matrix of the image;
wherein the key point conversion matrix is determined based on a target detection frame of a vehicle in a corresponding image and position information of each key point of the vehicle.
7. The vehicle position detection method according to claim 6, wherein the correcting the key point position detection result of the vehicle to be detected based on the key point conversion matrix of each candidate image and the key point conversion matrix of the image specifically includes:
selecting a target key point conversion matrix which is closest to the key point conversion matrix of the image from the key point conversion matrix of each candidate image;
determining a final key point conversion matrix of the image based on the key point conversion matrix of the image and the target key point conversion matrix;
and determining a final key point position detection result of the vehicle to be detected based on the target detection frame of the vehicle to be detected in the image and the final key point conversion matrix.
8. A vehicle position detecting apparatus, characterized by comprising:
an image determination unit for determining an image containing a vehicle to be detected;
and the vehicle position detection unit is used for determining a key point position detection result of the vehicle to be detected based on the vehicle image feature, the vehicle orientation feature and the vehicle type feature of the vehicle to be detected in the image.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the vehicle position detection method according to any of claims 1 to 7 are implemented when the processor executes the program.
10. A non-transitory computer-readable storage medium having stored thereon a computer program, which, when being executed by a processor, carries out the steps of the vehicle position detection method according to any one of claims 1 to 7.
CN202010949703.4A 2020-09-10 2020-09-10 Vehicle position detection method, device, electronic equipment and storage medium Active CN112052807B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010949703.4A CN112052807B (en) 2020-09-10 2020-09-10 Vehicle position detection method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010949703.4A CN112052807B (en) 2020-09-10 2020-09-10 Vehicle position detection method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112052807A true CN112052807A (en) 2020-12-08
CN112052807B CN112052807B (en) 2022-06-10

Family

ID=73610899

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010949703.4A Active CN112052807B (en) 2020-09-10 2020-09-10 Vehicle position detection method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112052807B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150186715A1 (en) * 2013-12-29 2015-07-02 Motorola Mobility Llc Method and Device for Detecting a Seating Position in a Vehicle
CN109800321A (en) * 2018-12-24 2019-05-24 银江股份有限公司 A kind of bayonet image vehicle retrieval method and system
CN110189397A (en) * 2019-03-29 2019-08-30 北京市商汤科技开发有限公司 A kind of image processing method and device, computer equipment and storage medium
CN110059623A (en) * 2019-04-18 2019-07-26 北京字节跳动网络技术有限公司 Method and apparatus for generating information
CN110287936A (en) * 2019-07-02 2019-09-27 北京字节跳动网络技术有限公司 Image detecting method, device, equipment and storage medium
CN110490256A (en) * 2019-08-20 2019-11-22 中国计量大学 A kind of vehicle checking method based on key point thermal map
CN111259971A (en) * 2020-01-20 2020-06-09 上海眼控科技股份有限公司 Vehicle information detection method and device, computer equipment and readable storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ZHONGDAO WANG ET AL: "Orientation Invariant Feature Embedding and Spatial Temporal Regularization", 《2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION》 *
LIU KAI ET AL: "A Survey of Vehicle Re-identification Technology", 《CHINESE JOURNAL OF INTELLIGENT SCIENCE AND TECHNOLOGY》 *
XIE YU ET AL: "A Survey of Keypoint-based Object Detection Algorithms", 《INFORMATION TECHNOLOGY & STANDARDIZATION》 *
ZHENG TINGTING ET AL: "A Survey of Anchor-Free Object Detection Models Based on Key Points", 《COMPUTER SYSTEMS & APPLICATIONS》 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112528938A (en) * 2020-12-22 2021-03-19 四川云从天府人工智能科技有限公司 Vehicle detection model training and detection method, device and computer storage medium thereof
CN112784817A (en) * 2021-02-26 2021-05-11 上海商汤科技开发有限公司 Method, device and equipment for detecting lane where vehicle is located and storage medium
CN114998424A (en) * 2022-08-04 2022-09-02 中国第一汽车股份有限公司 Vehicle window position determining method and device and vehicle

Also Published As

Publication number Publication date
CN112052807B (en) 2022-06-10

Similar Documents

Publication Publication Date Title
CN112052807B (en) Vehicle position detection method, device, electronic equipment and storage medium
CN113033604B (en) Vehicle detection method, system and storage medium based on SF-YOLOv4 network model
CN111738032B (en) Vehicle driving information determination method and device and vehicle-mounted terminal
CN113554643B (en) Target detection method and device, electronic equipment and storage medium
CN111091023A (en) Vehicle detection method and device and electronic equipment
CN112036385B (en) Library position correction method and device, electronic equipment and readable storage medium
CN113052159A (en) Image identification method, device, equipment and computer storage medium
CN115205803A (en) Automatic driving environment sensing method, medium and vehicle
CN112837404B (en) Method and device for constructing three-dimensional information of planar object
CN111144361A (en) Road lane detection method based on binaryzation CGAN network
CN116543143A (en) Training method of target detection model, target detection method and device
CN113591543B (en) Traffic sign recognition method, device, electronic equipment and computer storage medium
CN114359859A (en) Method and device for processing target object with shielding and storage medium
CN113963233A (en) Target detection method and system based on double-stage convolutional neural network
CN113793364A (en) Target tracking method and device, computer equipment and storage medium
Selçuk et al. Development of a Traffic Speed Limit Sign Detection System Based on Yolov4 Network
CN112613370A (en) Target defect detection method, device and computer storage medium
CN116503695B (en) Training method of target detection model, target detection method and device
CN115063594B (en) Feature extraction method and device based on automatic driving
CN114897987B (en) Method, device, equipment and medium for determining vehicle ground projection
CN111815667B (en) Method for detecting moving target with high precision under camera moving condition
CN111251994B (en) Method and system for detecting objects around vehicle
CN113887294A (en) Method and device for detecting wheel grounding point, electronic equipment and storage medium
CN113808208A (en) Train positioning method and system with safe function, electronic equipment and storage medium
CN115546752A (en) Lane line marking method and device for high-precision map, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant