CN116127165A - Vehicle position updating method and device, storage medium and electronic device - Google Patents
- Publication number: CN116127165A
- Application number: CN202211733173.5A
- Authority: CN (China)
- Prior art keywords: vehicle, target, moment, determining, position information
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/907—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/909—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
Abstract
Embodiments of the invention provide a vehicle position updating method and device, a storage medium, and an electronic device. The method comprises: determining predicted position information of a target vehicle at a second moment according to position information of the target vehicle at a first moment, wherein the second moment is later than the first moment; acquiring the vehicles located within a target range at the second moment to obtain a first vehicle set, wherein the target range is determined according to the position information of the target vehicle at the first moment; and determining, in the first vehicle set according to the predicted position information, a first vehicle that matches the target vehicle, and determining the position information of the first vehicle as the position information of the target vehicle at the second moment. The embodiments thereby solve the problem in the related art that position updates of a target vehicle are inaccurate.
Description
Technical Field
The embodiment of the invention relates to the field of vehicle detection, in particular to a vehicle position updating method, a vehicle position updating device, a storage medium and an electronic device.
Background
With the continuous optimization and upgrading of deep learning network models and research adapted to vehicle-specific problems, video-based vehicle detection and tracking have largely been solved, and the real-time detection and tracking capability for vehicle targets has improved greatly. However, high-precision target detection and tracking cannot be achieved in every scene. For example, during rush hour or when vehicles queue at a red light, vehicles in a lane become too dense, and multiple detections, missed detections, and target-ID jumps of vehicle targets frequently occur. These errors make the position updates of the target vehicle inaccurate and thereby limit the accuracy of vehicle counting. In short, the related art suffers from inaccurate position updates of the target vehicle.
For the problem of inaccurate position updating of a target vehicle in the related art, no effective solution has yet been proposed.
Disclosure of Invention
The embodiment of the invention provides a vehicle position updating method, a vehicle position updating device, a storage medium and an electronic device, which are used for at least solving the problem of inaccurate position updating of a target vehicle in the related technology.
According to an embodiment of the present invention, there is provided a position updating method of a vehicle, including: determining predicted position information of a target vehicle at a second moment according to position information of the target vehicle at a first moment, wherein the second moment is later than the first moment; acquiring vehicles in a target range at the second moment to obtain a first vehicle set, wherein the target range is determined according to the position information of the target vehicle at the first moment; and determining a first vehicle matched with the target vehicle in the first vehicle set according to the predicted position information, and determining the position information of the first vehicle as the position information of the target vehicle at the second moment.
According to still another embodiment of the present invention, there is also provided a position updating apparatus of a vehicle, including: the first determining module is used for determining predicted position information of the target vehicle at a second moment according to the position information of the target vehicle at the first moment, wherein the second moment is later than the first moment; the acquisition module is used for acquiring vehicles positioned in a target range at the second moment to obtain a first vehicle set, wherein the target range is determined according to the position information of the target vehicle at the first moment; and the second determining module is used for determining a first vehicle matched with the target vehicle in the first vehicle set according to the predicted position information, and determining the position information of the first vehicle as the position information of the target vehicle at the second moment.
According to a further embodiment of the invention, there is also provided a computer readable storage medium having stored therein a computer program, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
According to a further embodiment of the invention, there is also provided an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
According to the invention, the position information of the target vehicle at the second moment is predicted, and the first vehicle matching the target vehicle is determined in the first vehicle set according to the predicted position information. In this way, the vehicle matching the target vehicle is accurately found among the vehicles identified at the second moment, and the position information of the first vehicle is determined as the position information of the target vehicle at the second moment, so that the position of the target vehicle is accurately updated across moments. This solves the problem of inaccurate position updates in the related art and achieves the effect of improving the position-update accuracy for the target vehicle.
Drawings
Fig. 1 is a block diagram of a mobile terminal hardware structure of a location updating method of a vehicle according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method of updating a location of a vehicle according to an embodiment of the invention;
FIG. 3 is a schematic illustration of a target vehicle detection zone according to an embodiment of the present invention;
FIG. 4 is a schematic illustration of determining a first set of vehicles according to an embodiment of the invention;
FIG. 5 is a flow chart of a vehicle location update according to an embodiment of the invention;
fig. 6 is a block diagram of a position updating apparatus of a vehicle according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings in conjunction with the embodiments.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order.
The method embodiments provided in the embodiments of the present application may be performed in a mobile terminal, a computer terminal or similar computing device. Taking the operation on a mobile terminal as an example, fig. 1 is a block diagram of a mobile terminal hardware structure of a method for updating a location of a vehicle according to an embodiment of the present invention. As shown in fig. 1, a mobile terminal may include one or more (only one is shown in fig. 1) processors 102 (the processor 102 may include, but is not limited to, a microprocessor MCU or a processing device such as a programmable logic device FPGA) and a memory 104 for storing data, wherein the mobile terminal may also include a transmission device 106 for communication functions and an input-output device 108. It will be appreciated by those skilled in the art that the structure shown in fig. 1 is merely illustrative and not limiting of the structure of the mobile terminal described above. For example, the mobile terminal may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
The memory 104 may be used to store computer programs, such as software programs of application software and modules, such as computer programs corresponding to the method for updating the position of the vehicle in the embodiment of the present invention, and the processor 102 executes the computer programs stored in the memory 104 to perform various functional applications and data processing, that is, to implement the method described above. Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located relative to the processor 102, which may be connected to the mobile terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission means 106 is arranged to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet wirelessly.
In this embodiment, a method for updating a position of a vehicle is provided, and fig. 2 is a flowchart of a method for updating a position of a vehicle according to an embodiment of the present invention, as shown in fig. 2, the flowchart includes the following steps:
step S202, according to the position information of a target vehicle at a first moment, determining the predicted position information of the target vehicle at a second moment, wherein the second moment is later than the first moment;
step S204, obtaining vehicles in a target range at the second moment to obtain a first vehicle set, wherein the target range is determined according to the position information of the target vehicle at the first moment;
and step S206, determining a first vehicle matched with the target vehicle in the first vehicle set according to the predicted position information, and determining the position information of the first vehicle as the position information of the target vehicle at the second moment.
In this embodiment, the target vehicle is a vehicle identified in a first image captured at the first moment. The position of the target vehicle in the first image is mapped into actual space to obtain its position information at the first moment, and its position at the second moment is then predicted from that information, yielding the predicted position information of the target vehicle at the second moment.
A plurality of vehicles are identified in a second picture captured at the second moment, and the position of each such vehicle in actual space at the second moment is determined. Target tracking then finds, among the vehicles identified in the second picture, the first vehicle, i.e. the one that is the target vehicle, and the position information of the first vehicle at the second moment is determined as the position information of the target vehicle at the second moment.
Rather than matching the target vehicle against every vehicle identified in the second picture to determine which one it is, the vehicles identified in the second picture are first screened: since the position of the target vehicle at the first moment is known, its position at the second moment cannot be far from it. A target range is therefore determined from the position of the target vehicle at the first moment, and the vehicles located within the target range at the second moment are added to the first vehicle set. The first vehicle matching the target vehicle is then searched for in the first vehicle set according to the predicted position information, the first vehicle is determined to be the target vehicle, and the position information of the first vehicle is determined as the position information of the target vehicle at the second moment.
Through the above steps, the position information of the target vehicle at the second moment is predicted, the first vehicle matching the target vehicle is determined in the first vehicle set according to the predicted position information, the vehicle matching the target vehicle is accurately found among the vehicles identified at the second moment, and the position information of the first vehicle is determined as the position information of the target vehicle at the second moment. The position of the target vehicle is thus updated accurately across moments, which solves the problem of inaccurate position updates in the related art and improves the position-update accuracy for the target vehicle.
Optionally, a specific target vehicle detection area may be set. Fig. 3 is a schematic diagram of the target vehicle detection area according to an embodiment of the present invention. As shown in fig. 3, the detection area may be placed at a preset distance from an intersection, where adjacent vehicles are clearly separated and vehicle features are more distinct; there, the accuracy of video-based vehicle target detection is high and the probability of missed or false detections is low, which provides accurate vehicle counts and position information for the vehicle-statistics stage.
Vehicle detection is performed in the target detection area to obtain the current number and positions of the vehicles in it, and the target sequence at the current moment with its corresponding target-frame sequence is established:

(V_trg1, Box_trg1), (V_trg2, Box_trg2), (V_trg3, Box_trg3), ...

where V_trgi denotes the vehicle ID and Box_trgi denotes the corresponding target-frame position:

Box_trgi = (u_tl, u_tr, u_bl, u_br)

whose components denote the pixel coordinates of the upper-left, upper-right, lower-left and lower-right corner points of the frame.

The center position U_t of the video target frame is obtained, and the actual spatial position information X_t corresponding to the video target is obtained by combining it with the camera calibration system:

X_t = f(U_t)

where f(·) is the camera calibration mapping.
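As a concrete illustration of the mapping above, the sketch below computes the target-frame center U_t and applies a calibration function f modeled as a planar homography. The matrix H, the box coordinates, and the 0.05 m/pixel scale are illustrative assumptions, not values from the patent.

```python
# Sketch: target-frame center extraction and pixel-to-world mapping X_t = f(U_t).
# The homography H below is a toy calibration (pure scaling), not the patent's.

def box_center(box):
    """Center pixel of a frame given as (top-left, top-right, bottom-left, bottom-right) corners."""
    (x_tl, y_tl), (x_tr, y_tr), (x_bl, y_bl), (x_br, y_br) = box
    return ((x_tl + x_tr + x_bl + x_br) / 4.0, (y_tl + y_tr + y_bl + y_br) / 4.0)

def pixel_to_world(u, H):
    """Apply a 3x3 homography H (nested lists) to a pixel u = (px, py)."""
    px, py = u
    x = H[0][0] * px + H[0][1] * py + H[0][2]
    y = H[1][0] * px + H[1][1] * py + H[1][2]
    w = H[2][0] * px + H[2][1] * py + H[2][2]
    return (x / w, y / w)

# Illustrative calibration: 0.05 m per pixel in both axes.
H = [[0.05, 0.0, 0.0], [0.0, 0.05, 0.0], [0.0, 0.0, 1.0]]
box = ((100, 200), (140, 200), (100, 230), (140, 230))
U_t = box_center(box)          # (120.0, 215.0)
X_t = pixel_to_world(U_t, H)   # ≈ (6.0, 10.75) in this toy calibration
```

In a real deployment f would come from the camera calibration procedure in step 3 of the flow below, not from a hand-written matrix.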
In an optional embodiment, after the determining the position information of the first vehicle as the position information of the target vehicle at the second moment, the method further includes: determining whether the target vehicle has reached a preset position according to the position information of the target vehicle at the second moment; in the case that the target vehicle has reached the preset position, adding the target vehicle to a target vehicle set and recording its arrival time as the second moment, wherein the target vehicle set records the vehicles that have reached the preset position and the arrival time corresponding to each vehicle; and determining the traffic flow in a preset time period according to the number of vehicles in the target vehicle set that reached the preset position within the preset time period.
In this embodiment, after the position information of the target vehicle at the second moment is determined, whether the target vehicle reaches the preset position is determined according to the position information of the target vehicle at the second moment, and if the target vehicle reaches the preset position, the target vehicle is added to the target vehicle set, and the time when the target vehicle reaches the preset position is recorded.
The preset position can be a line arranged on the road, the number of vehicles crossing the line is counted, the traffic flow is counted through the number of the vehicles in the target vehicle set, the repeated counting is not carried out, and the accuracy of traffic flow counting is guaranteed.
And when the traffic flow in the preset time period is counted, determining the traffic flow in the preset time period by determining the number of vehicles in the preset time period when the target vehicle set reaches the preset position.
Alternatively, whether the target vehicle has reached the preset position may be determined by the following test:

x_t ≥ c

where c is the value of the preset position and x_t is the position information of the target vehicle at the second moment, taken along the direction of travel. When the inequality holds, it is determined that the target vehicle has reached the preset position.
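A minimal sketch of the de-duplicated cross-line counting described above. The dictionary used as the target vehicle set and the `position >= c` form of the crossing test are assumptions for illustration.

```python
# Sketch: count each vehicle at most once after it crosses the preset line c,
# then compute traffic flow as the number of arrivals in a time window.

def update_counts(arrivals, vehicle_id, position, t, c):
    """Record vehicle_id's arrival time t once it has passed line c; never twice."""
    if position >= c and vehicle_id not in arrivals:
        arrivals[vehicle_id] = t

def flow_in_window(arrivals, t_start, t_end):
    """Traffic flow = number of recorded arrivals inside [t_start, t_end]."""
    return sum(1 for t in arrivals.values() if t_start <= t <= t_end)

arrivals = {}
update_counts(arrivals, "V1", 12.0, t=1.0, c=10.0)  # crosses the line, counted
update_counts(arrivals, "V1", 15.0, t=2.0, c=10.0)  # already counted, ignored
update_counts(arrivals, "V2", 8.0,  t=2.0, c=10.0)  # not yet across
print(flow_in_window(arrivals, 0.0, 5.0))           # 1
```

Because the set keys on the stable vehicle ID, repeated frames of the same vehicle past the line do not inflate the count, which is the repeated-counting guarantee the text describes.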
In an optional embodiment, the determining, according to the position information of the target vehicle at the first moment, the predicted position information of the target vehicle at the second moment includes: acquiring a first target frame used for identifying the target vehicle in a first image, wherein the first image is an image obtained by shooting at the first moment by a target shooting device; determining the position information of the target vehicle at the first moment according to the coordinate information of the first target frame; and inputting the position information of the target vehicle at the first moment into a target prediction model to obtain the predicted position information of the target vehicle at the second moment.
In the present embodiment, when predicting the position information of the target vehicle at the second moment, its position information at the first moment is determined first: the target shooting equipment is fixed at a fixed position and captures images from there; the target vehicle is identified in the first image captured at the first moment and marked with a first target frame; the position information of the target vehicle at the first moment is determined from the coordinate information of that target frame in the first image; this position information is then input into the target prediction model, which predicts the position and outputs the predicted position information of the target vehicle at the second moment.
After the target shooting device is fixed, calibration is performed through a calibration system in the target shooting device, so that coordinates of pixel points in an image shot by the target shooting device can be mapped to positions in an actual space.
The target prediction model outputs predicted position information x (k) of the target vehicle at the second time by inputting position information x (k-1) of the target vehicle at the first time.
The target prediction model may be a Kalman filter model. The Kalman filter algorithm tracks and predicts target position information in real time from the state model and the observation model of the system. In this embodiment a discrete dynamic system is selected, consisting of a q-dimensional dynamic system and an r-dimensional observation system. The state model of the q-dimensional dynamic system is:

x(k) = A x(k-1) + w(k-1)

where w(k) is the state-model error. The covariance matrix of w(k) is:

Q(k) = E[w(k) w(k)^T]

The observation model of the r-dimensional observation system is:

y(k) = C x(k) + v(k)

where C is the observation-system model matrix, v(k) is the observation-model error, and the covariance matrix of v(k) is:

R(k) = E[v(k) v(k)^T]

Covariance prediction equation of the Kalman filter:

P1(k) = A P(k-1) A^T + Q(k)

Filter gain equation:

K(k) = P1(k) C^T [C P1(k) C^T + R(k)]^(-1)

Filtering covariance equation:

P(k) = P1(k) - K(k) C P1(k)

Filter estimation equation:

x̂(k) = x̂1(k) + K(k) [y(k) - C x̂1(k)]

Predictive estimation equation:

x̂1(k) = A x̂(k-1)

The state vector x(k) in the Kalman filter model is:

x(k) = [x, y, v_x, v_y]^T

and the measurement vector y(k) is:

y(k) = [x', y']^T

With sampling interval Δt, the system parameter A is the state transition matrix of the constant-velocity model implied by this state vector:

A = | 1  0  Δt  0 |
    | 0  1  0  Δt |
    | 0  0  1   0 |
    | 0  0  0   1 |
in an alternative embodiment, the acquiring the vehicles within the target range at the second moment, to obtain the first vehicle set, includes: acquiring a second target frame set, wherein target frames in the second target frame set are in one-to-one correspondence with vehicles in a second vehicle set, the second target frame set comprises target frames in a second image, the target frames are used for identifying all vehicles in the second vehicle set, the second vehicle set comprises vehicles identified in the second image, and the second image is an image shot by target shooting equipment at the second moment; determining position information of each vehicle in the second vehicle set at the second moment according to the coordinate information of each target frame in the second target frame set; and searching vehicles with the position information at the second moment in the target range in the second vehicle set, and determining the searched vehicles as the first vehicle set.
In this embodiment, all vehicles identified in a second image captured by a target capturing device at a second time constitute a second vehicle set, each vehicle is identified in the second image using a target frame, all target frames in the second image constitute a second target frame set, position information of each vehicle in the second vehicle set at the second time is determined by coordinate information of each target frame in the second image in the second target frame set, and then, vehicles whose position information at the second time is within the target range are determined in the second vehicle set as a first vehicle set according to position information of each vehicle in the second vehicle set at the second time.
FIG. 4 is a schematic diagram of determining the first vehicle set according to an embodiment of the present invention. As shown in fig. 4, five vehicles (labeled 1, 2, 3, 4, 5, respectively) are identified in the second image, and their position information at the second moment is x_{2,1}, x_{2,2}, x_{2,3}, x_{2,4}, x_{2,5}, respectively. The target range is shown as a dashed circle in fig. 4; vehicle 2 and vehicle 3 are within the target range, so the first vehicle set includes vehicle 2 and vehicle 3.
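The screening illustrated in fig. 4 can be sketched as a simple radius filter. The circular shape of the target range, the radius value, and the coordinates below are illustrative assumptions.

```python
# Sketch: reduce the second vehicle set to the first vehicle set by keeping
# only vehicles whose second-moment position lies within radius r of the
# target vehicle's first-moment position.

import math

def first_vehicle_set(second_set, target_pos, r):
    """second_set: {vehicle_id: (x, y)} at the second moment; target_pos: target at the first moment."""
    tx, ty = target_pos
    return {vid for vid, (x, y) in second_set.items()
            if math.hypot(x - tx, y - ty) <= r}

second_set = {1: (0.0, 9.0), 2: (4.0, 3.0), 3: (5.0, 5.0),
              4: (20.0, 1.0), 5: (0.0, 30.0)}
print(first_vehicle_set(second_set, target_pos=(5.0, 4.0), r=3.0))  # {2, 3}
```

Only the surviving candidates go on to the loss-value matching below, which is the point of the preliminary screening.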
In an alternative embodiment, the determining a first vehicle in the first vehicle set that matches the target vehicle includes: determining a matching relation loss value of each vehicle in the first vehicle set and the target vehicle, wherein the matching relation loss value is used for representing an error of matching of each vehicle in the first vehicle set and the target vehicle; and determining the vehicle with the minimum matching relation loss value in the first vehicle set as the first vehicle matched with the target vehicle.
In the present embodiment, the target vehicle is subjected to a matching operation with each vehicle in the first vehicle set, and a matching relationship loss value is calculated, and the vehicle with the smallest loss value is determined as the vehicle that matches the target vehicle.
Suppose, as in fig. 4, the first vehicle set includes vehicle 2 and vehicle 3, and the matching relation loss value of each with the target vehicle is calculated: say the loss value of vehicle 2 is 0.2 and that of vehicle 3 is 0.1. Since the matching relation loss value of vehicle 3 is smaller, vehicle 3 matches the target vehicle, that is, vehicle 3 is the target vehicle.
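Selecting the best match then reduces to an argmin over the candidates' loss values; the ids and loss values below are illustrative.

```python
# Sketch: the candidate with the smallest matching-relation loss value is
# taken to be the target vehicle.

def best_match(losses):
    """losses: {vehicle_id: matching-relation loss value}."""
    return min(losses, key=losses.get)

print(best_match({"A": 0.2, "B": 0.1}))  # B
```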
In an alternative embodiment, the determining a matching relationship loss value between each vehicle in the first vehicle set and the target vehicle includes performing the following operations for each vehicle in the first vehicle set, with each such vehicle taken in turn as the current vehicle: determining a first loss value according to the position information of the current vehicle at the second moment and the predicted position information of the target vehicle at the second moment, wherein the first loss value represents a centroid loss value between the current vehicle and the target vehicle; determining a second loss value according to a current target frame used for identifying the current vehicle in the second image and the first target frame, wherein the first target frame is a target frame used for identifying the target vehicle in a first image, the first image is an image captured by the target shooting equipment at the first moment, and the second loss value represents an overlapping-area loss value between the current target frame and the first target frame; determining a third loss value according to the current target frame and the first target frame, wherein the third loss value represents an area-similarity loss value between the current vehicle and the target vehicle; and determining a matching relationship loss value of the current vehicle and the target vehicle according to the first loss value, the second loss value and the third loss value.
In this embodiment, the matching relationship loss value is determined from loss values along three different dimensions: a first loss value, a second loss value, and a third loss value, where the first loss value represents a centroid loss value, the second loss value represents an overlap-area loss value, and the third loss value represents an area-similarity loss value.
normalizing the first loss value, the second loss value and the third loss value:
D(n,m)=D(n,m)/maxD(n,*)
L(n,m)=L(n,m)/maxL(n,*)
ΔS(n,m)=ΔS(n,m)/maxΔS(n,*)
where n represents the target vehicle, m represents one of the vehicles, D (n, m) is the first loss value, L (n, m) is the second loss value, and Δs (n, m) is the third loss value.
The matching relation loss value is as follows:
loss(n,m)=αD(n,m)+βL(n,m)+γΔS(n,m)
wherein, alpha, beta and gamma are all preset values.
In an optional embodiment, the determining the second loss value according to the current target frame, used for identifying the current vehicle in the second image, and the first target frame includes: determining an overlapping area of the current target frame and the first target frame, and a first area of the first target frame; determining the ratio of the overlapping area to the first area; and determining the difference between 1 and that ratio as the second loss value.
In this embodiment, the second loss value is calculated from the current target frame and the first target frame:

L(n, m) = 1 - S / S_n

where S is the overlapping area of the current target frame and the first target frame, and S_n is the first area, i.e. the area of the first target frame.
In an optional embodiment, the determining a third loss value according to the current target frame and the first target frame includes: determining a second area of the current target frame and a first area of the first target frame; and determining the third loss value according to the difference between the first area and the second area.
In this embodiment, the third loss value is determined from the current target frame and the first target frame:

ΔS(n, m) = |S_m - S_n|

where S_m is the second area of the current target frame and S_n is the first area of the first target frame.
Optionally, the first loss value is determined from the position information of the current vehicle at the second moment and the predicted position information of the target vehicle at the second moment:

D(n, m) = ||x_m - x̂_n||

where x_m is the position information of the current vehicle at the second moment and x̂_n is the predicted position information of the target vehicle at the second moment.
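The three loss terms and their weighted combination can be sketched as below. The exact formulas in the patent's dropped images are not fully recoverable, so the Euclidean centroid distance, the 1 − overlap ratio, the absolute area difference, and the weights α, β, γ used here are assumed readings of the surrounding text.

```python
# Sketch: centroid loss D, overlap loss L, area-similarity loss ΔS, and the
# combined matching-relation loss loss(n,m) = αD + βL + γΔS.

import math

def centroid_loss(pos_m, pred_n):
    """D(n, m): distance between current vehicle position and predicted target position."""
    return math.hypot(pos_m[0] - pred_n[0], pos_m[1] - pred_n[1])

def area(box):
    """Axis-aligned box given as (x1, y1, x2, y2)."""
    return max(0.0, box[2] - box[0]) * max(0.0, box[3] - box[1])

def overlap_loss(box_m, box_n):
    """L(n, m) = 1 - S / S_n, with S the intersection area and S_n the first frame's area."""
    ix = max(0.0, min(box_m[2], box_n[2]) - max(box_m[0], box_n[0]))
    iy = max(0.0, min(box_m[3], box_n[3]) - max(box_m[1], box_n[1]))
    return 1.0 - (ix * iy) / area(box_n)

def area_loss(box_m, box_n):
    """ΔS(n, m): area-similarity loss taken as the absolute area difference."""
    return abs(area(box_m) - area(box_n))

def match_loss(d, l, ds, alpha=0.4, beta=0.4, gamma=0.2):
    """loss(n, m) = αD + βL + γΔS; weights are illustrative preset values."""
    return alpha * d + beta * l + gamma * ds

box_n = (0.0, 0.0, 4.0, 4.0)        # first target frame
box_m = (1.0, 1.0, 5.0, 5.0)        # current target frame
print(overlap_loss(box_m, box_n))   # 1 - 9/16 = 0.4375
print(area_loss(box_m, box_n))      # 0.0 (same size)
```

In the method's matching step, each raw loss would first be normalized by its per-candidate maximum (D(n,m)/max D(n,*), etc.) before the weighted sum, exactly as in the normalization equations above.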
It will be apparent that the embodiments described above are merely some, but not all, embodiments of the invention.
The invention is illustrated below with reference to examples:
fig. 5 is a schematic flow chart of a location update of a vehicle according to an embodiment of the present invention, as shown in fig. 5, including:
and step 1, starting. The program is started. Step 2 is skipped.
And 2, initializing. Setting a preset position, a target detection area and the like, and jumping to the step 3.
Step 3: the camera calibration is carried out, the corresponding relation between the video pixel value and the actual space distance is mainly solved, and the video calibration precision influences the accuracy of the follow-up traffic flow statistics. And (4) jumping to the step (4).
Step 4: video target vehicle detection. The number and the positions of the target vehicles are detected by adopting an artificial intelligence method such as deep learning and the like, and unique ID is given to each target vehicle. Step 5 is skipped.
Step 5: video target vehicle tracking. The purpose of this step is that the same target vehicle has a stable ID. Step 6 is skipped.
Step 6: and extracting the position point of the target frame. And extracting a target frame center point based on the target frame position in the video. And mapping the center point of the target frame into an actual position under the world coordinate system through a calibration algorithm, and jumping to the step 7.
Step 7: and acquiring the number and state information of the target vehicles in the target detection area according to the target actual position information. Step 8 is skipped.
Step 8: and carrying out target vehicle prediction updating based on a Kalman filtering algorithm. Step 9 is skipped.
Step 9: a first set of vehicles is obtained based on a preset range. Step 10 is skipped.
Step 10: a loss value is calculated. For each vehicle in the first set of vehicles, a loss value with the target vehicle is calculated. Step 11 is skipped.
Step 11: the best match is selected. And selecting the vehicle with the smallest loss value from all the loss values to determine as the target vehicle. Step 12 is skipped.
Step 12: and judging whether the vehicle reaches a preset position, and carrying out traffic flow statistics. If the vehicle does not reach the preset position, the step is skipped to step 8 to further predict the target state, and if not, the group is skipped to step 13.
Step 13: and (5) ending.
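The prediction and updating of step 8 can be sketched with a minimal constant-velocity Kalman filter. This is a sketch only: the state layout [px, py, vx, vy], the time step, and the noise covariances Q and R are illustrative assumptions, not values from the embodiment.

```python
import numpy as np

def kalman_predict(x, P, F, Q):
    """Predict the state at the second moment from the state at the first moment."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def kalman_update(x, P, z, H, R):
    """Correct the prediction with the matched vehicle's measured position z."""
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)   # observe position only
Q = 0.01 * np.eye(4)                                 # process noise (assumed)
R = 0.1 * np.eye(2)                                  # measurement noise (assumed)

x = np.array([0.0, 0.0, 1.0, 0.5])   # position (0, 0), velocity (1, 0.5)
P = np.eye(4)
x, P = kalman_predict(x, P, F, Q)    # predicted position (1.0, 0.5)
```

After the best match is selected in step 11, the matched vehicle's measured position would be fed back through `kalman_update` before the loop returns to step 8.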
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or by means of hardware alone, although in many cases the former is preferred. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, etc.) to perform the method according to the embodiments of the present invention.
Also provided in this embodiment is a position updating apparatus of a vehicle, fig. 6 is a block diagram of a position updating apparatus of a vehicle according to an embodiment of the present invention, as shown in fig. 6, the apparatus including:
a first determining module 602, configured to determine predicted position information of a target vehicle at a second time according to position information of the target vehicle at a first time, where the second time is later than the first time;
an obtaining module 604, configured to obtain a first vehicle set by obtaining a vehicle that is located in a target range at the second time, where the target range is determined according to position information of the target vehicle at the first time;
a second determining module 606, configured to determine, according to the predicted position information, a first vehicle that matches the target vehicle in the first vehicle set, and determine position information of the first vehicle as position information of the target vehicle at the second time.
In an optional embodiment, the device is further configured to determine whether the target vehicle reaches a preset position according to position information of the target vehicle at the second moment; adding the target vehicle to a target vehicle set under the condition that the target vehicle reaches the preset position, and determining the arrival time of the target vehicle as the second moment, wherein the target vehicle set records the vehicle reaching the preset position and the arrival time corresponding to each vehicle; and determining the traffic flow in a preset time period according to the number of the vehicles reaching the preset position in the preset time period in the target vehicle set.
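The traffic flow statistic described above can be sketched as follows: the target vehicle set records each vehicle that reached the preset position together with its arrival time (the second moment), and the flow in a preset time period is the number of arrivals inside that window. The names `arrivals` and `count_flow` are illustrative, not from the embodiment.

```python
def count_flow(arrivals, start, end):
    """arrivals: dict mapping vehicle ID -> arrival time (the second moment).

    Returns the number of vehicles that reached the preset position
    within the time period [start, end].
    """
    return sum(1 for t in arrivals.values() if start <= t <= end)

arrivals = {"car-1": 3.0, "car-2": 7.5, "car-3": 12.0}
flow = count_flow(arrivals, 0.0, 10.0)   # vehicles arriving in [0, 10]
```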
In an optional embodiment, the apparatus is further configured to obtain a first target frame used to identify the target vehicle in a first image, where the first image is an image obtained by capturing, by a target capturing device, at the first time; determining the position information of the target vehicle at the first moment according to the coordinate information of the first target frame; and inputting the position information of the target vehicle at the first moment into a target prediction model to obtain the predicted position information of the target vehicle at the second moment.
In an optional embodiment, the above apparatus is further configured to obtain a second set of target frames, where target frames in the second set of target frames are in one-to-one correspondence with vehicles in a second set of vehicles, and the second set of target frames includes target frames in a second image for identifying each vehicle in the second set of vehicles, and the second set of vehicles includes the vehicles identified in the second image, and the second image is an image obtained by capturing, by a target capturing device, at the second time; determining position information of each vehicle in the second vehicle set at the second moment according to the coordinate information of each target frame in the second target frame set; and searching vehicles with the position information at the second moment in the target range in the second vehicle set, and determining the searched vehicles as the first vehicle set.
In an optional embodiment, the foregoing apparatus is further configured to determine a loss of matching relationship between each vehicle in the first vehicle set and the target vehicle, where the loss of matching relationship is used to represent an error in matching each vehicle in the first vehicle set with the target vehicle; and determining the vehicle with the minimum matching relation loss value in the first vehicle set as the first vehicle matched with the target vehicle.
In an alternative embodiment, the apparatus is further configured to perform the following operations for each vehicle in the first set of vehicles, where the vehicle being operated on is the current vehicle: determining a first loss value according to the position information of the current vehicle at the second moment and the predicted position information of the target vehicle at the second moment, wherein the first loss value represents a centroid loss value between the current vehicle and the target vehicle; determining a second loss value according to a current target frame used for identifying the current vehicle in the second image and a first target frame, wherein the first target frame is a target frame used for identifying the target vehicle in a first image, the first image is an image obtained by shooting at the first moment through the target shooting equipment, and the second loss value represents an overlapping area loss value between the current target frame and the first target frame; determining a third loss value according to the current target frame and the first target frame, wherein the third loss value represents an area similarity loss value between the current vehicle and the target vehicle; and determining a matching relation loss value of the current vehicle and the target vehicle according to the first loss value, the second loss value and the third loss value.
In an optional embodiment, the foregoing apparatus is further configured to determine an overlapping area of the current target frame and the first target frame and a first area of the first target frame; determining a ratio of the overlapping area and the first area; and determining the difference between the ratio and 1 as the second loss value.
In an optional embodiment, the above apparatus is further configured to determine a second area of the current target frame and a first area of the first target frame; and determining the third loss value according to the difference between the first area and the second area.
It should be noted that each of the above modules may be implemented by software or hardware, and for the latter, it may be implemented by, but not limited to: the modules are all located in the same processor; alternatively, the above modules may be located in different processors in any combination.
Embodiments of the present invention also provide a computer readable storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
In one exemplary embodiment, the computer readable storage medium may include, but is not limited to: a usb disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a removable hard disk, a magnetic disk, or an optical disk, or other various media capable of storing a computer program.
An embodiment of the invention also provides an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
In an exemplary embodiment, the electronic apparatus may further include a transmission device connected to the processor, and an input/output device connected to the processor.
Specific examples in this embodiment may refer to the examples described in the foregoing embodiments and exemplary implementations; details are not repeated here.
It will be appreciated by those skilled in the art that the modules or steps of the invention described above may be implemented by a general-purpose computing device. They may be concentrated on a single computing device or distributed across a network of computing devices, and they may be implemented in program code executable by computing devices, so that they may be stored in a storage device and executed by those devices. In some cases, the steps shown or described may be performed in a different order than shown or described herein; alternatively, they may be separately fabricated into individual integrated circuit modules, or multiple modules or steps among them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the principle of the present invention should be included in the protection scope of the present invention.
Claims (10)
1. A method of updating a position of a vehicle, comprising:
determining predicted position information of a target vehicle at a second moment according to position information of the target vehicle at a first moment, wherein the second moment is later than the first moment;
acquiring vehicles in a target range at the second moment to obtain a first vehicle set, wherein the target range is determined according to the position information of the target vehicle at the first moment;
and determining a first vehicle matched with the target vehicle in the first vehicle set according to the predicted position information, and determining the position information of the first vehicle as the position information of the target vehicle at the second moment.
2. The method according to claim 1, wherein after the determining the position information of the first vehicle as the position information of the target vehicle at the second moment, the method further comprises:
determining whether the target vehicle reaches a preset position according to the position information of the target vehicle at the second moment;
adding the target vehicle to a target vehicle set under the condition that the target vehicle reaches the preset position, and determining the arrival time of the target vehicle as the second moment, wherein the target vehicle set records the vehicle reaching the preset position and the arrival time corresponding to each vehicle;
and determining the traffic flow in a preset time period according to the number of the vehicles reaching the preset position in the preset time period in the target vehicle set.
3. The method of claim 1, wherein determining predicted location information of the target vehicle at the second time based on the location information of the target vehicle at the first time comprises:
acquiring a first target frame used for identifying the target vehicle in a first image, wherein the first image is an image obtained by shooting at the first moment by a target shooting device;
determining the position information of the target vehicle at the first moment according to the coordinate information of the first target frame;
and inputting the position information of the target vehicle at the first moment into a target prediction model to obtain the predicted position information of the target vehicle at the second moment.
4. The method of claim 1, wherein the obtaining vehicles within a target range at the second time to obtain a first set of vehicles comprises:
acquiring a second target frame set, wherein target frames in the second target frame set are in one-to-one correspondence with vehicles in a second vehicle set, the second target frame set comprises target frames in a second image, the target frames are used for identifying all vehicles in the second vehicle set, the second vehicle set comprises vehicles identified in the second image, and the second image is an image shot by target shooting equipment at the second moment;
determining position information of each vehicle in the second vehicle set at the second moment according to the coordinate information of each target frame in the second target frame set;
and searching vehicles with the position information at the second moment in the target range in the second vehicle set, and determining the searched vehicles as the first vehicle set.
5. The method of claim 4, wherein the determining a first vehicle in the first set of vehicles that matches the target vehicle comprises:
determining a matching relation loss value of each vehicle in the first vehicle set and the target vehicle, wherein the matching relation loss value is used for representing an error of matching of each vehicle in the first vehicle set and the target vehicle;
and determining the vehicle with the minimum matching relation loss value in the first vehicle set as the first vehicle matched with the target vehicle.
6. The method of claim 5, wherein the determining a matching relationship loss value for each vehicle in the first set of vehicles with the target vehicle comprises:
performing the following operations for each vehicle in the first set of vehicles, where the vehicle being operated on is the current vehicle:
determining a first loss value according to the position information of the current vehicle at the second moment and the predicted position information of the target vehicle at the second moment, wherein the first loss value represents a centroid loss value between the current vehicle and the target vehicle;
determining a second loss value according to a current target frame used for identifying the current vehicle in the second image and a first target frame, wherein the first target frame is a target frame used for identifying the target vehicle in a first image, the first image is an image obtained by shooting at the first moment through the target shooting equipment, and the second loss value represents an overlapping area loss value between the current target frame and the first target frame;
determining a third loss value according to the current target frame and the first target frame, wherein the third loss value represents an area similarity loss value between the current vehicle and the target vehicle;
and determining a matching relation loss value of the current vehicle and the target vehicle according to the first loss value, the second loss value and the third loss value.
7. The method of claim 6, wherein:
the determining a second loss value according to the current target frame and the first target frame used for identifying the current vehicle in the second image includes: determining an overlapping area of the current target frame and the first target frame and a first area of the first target frame; determining a ratio of the overlapping area and the first area; determining a difference between the ratio and 1 as the second loss value;
the determining a third loss value according to the current target frame and the first target frame includes: determining a second area of the current target frame and a first area of the first target frame; and determining the third loss value according to the difference between the first area and the second area.
8. A position updating apparatus of a vehicle, characterized by comprising:
the first determining module is used for determining predicted position information of the target vehicle at a second moment according to the position information of the target vehicle at the first moment, wherein the second moment is later than the first moment;
the acquisition module is used for acquiring vehicles positioned in a target range at the second moment to obtain a first vehicle set, wherein the target range is determined according to the position information of the target vehicle at the first moment;
and the second determining module is used for determining a first vehicle matched with the target vehicle in the first vehicle set according to the predicted position information, and determining the position information of the first vehicle as the position information of the target vehicle at the second moment.
9. A computer readable storage medium, characterized in that a computer program is stored in the computer readable storage medium, wherein the computer program, when being executed by a processor, implements the steps of the method according to any of the claims 1 to 7.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method of any one of claims 1 to 7 when the computer program is executed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211733173.5A CN116127165A (en) | 2022-12-30 | 2022-12-30 | Vehicle position updating method and device, storage medium and electronic device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116127165A true CN116127165A (en) | 2023-05-16 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||