CN103969466A - Method for measuring speed of vehicle and corresponding terminal - Google Patents


Info

Publication number: CN103969466A (application CN201410195239.9A; granted as CN103969466B)
Inventor: 靳锐敏
Assignee (applicant): China United Network Communications Group Co Ltd
Original language: Chinese (zh)

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)
Abstract

The invention provides a method for measuring the speed of a vehicle, and a corresponding terminal. In the method, an outside scene is shot multiple times to obtain multiple images, and the parameters of each shot are recorded, including the image distance, the focal length and the shooting time. Taking two images as a group, the images are divided into one or more groups, and the two images of each group are processed as follows: objects are extracted from both images and matched, and one or more of the objects successfully matched on the two images are selected and determined as the basic objects for speed measurement; the offset distance of the basic object between the two images is then calculated and, combined with the image distance and focal length of each shot and the time difference obtained from the shooting times, the vehicle speed is calculated. The corresponding terminal comprises a shooting module, a grouping module, a selection module and a calculation module. With this terminal a passenger can measure the speed of the vehicle and conveniently know how fast it is travelling.

Description

Method for realizing vehicle speed measurement and corresponding terminal
Technical Field
The present invention relates to speed measurement, and more particularly, to a method for implementing vehicle speed measurement and a corresponding terminal.
Background
At present, two methods are generally adopted for vehicle speed measurement:
One uses a sensing device installed in the vehicle to calculate the speed from an operating parameter. The speedometer, for example, derives the vehicle speed from the rotational speed of the transmission. This method calculates the speed accurately and in real time, but has the defect that only the driver can conveniently see the result; it is not convenient for other passengers.
The other is radar speed measurement, in which the movement speed of the measured object is calculated from the frequency shift of the received reflected wave. A radar transmitter erected beside a road transmits a beam toward oncoming traffic and receives the echo reflected by a vehicle; the vehicle speed is obtained by analyzing the echo, and if it exceeds a set value a camera is instructed to shoot (with a flash triggered at night). This approach is mainly used by traffic authorities to monitor speeding.
At present, no method is available for passengers to measure the speed of the vehicle themselves, so as to know the driving speed at any time.
Disclosure of Invention
The invention aims to provide a method for realizing vehicle speed measurement by using a terminal and a corresponding terminal.
In order to solve the above technical problem, the present invention provides a method for implementing vehicle speed measurement, which is applied to a terminal with a shooting function, and the method includes:
shooting an external scene for multiple times to obtain a multi-frame image, and recording parameters shot each time, wherein the parameters comprise an image distance, a focal length and shooting time;
dividing the multi-frame images into one or more groups by taking two frame images as one group, and carrying out the following processing on the two frame images in the same group:
extracting objects from the two images and matching them, selecting one or more objects from the objects successfully matched on the two images, and determining them as basic objects for speed measurement; and
calculating the offset distance of the basic object between the two images, and calculating the vehicle speed by combining the image distance and focal length at which each image was shot with the time difference obtained from the shooting times.
Preferably,
the method for selecting one or more objects from the objects successfully matched on the two frames of images to determine the objects as basic objects for speed measurement comprises the following steps:
and identifying the object successfully matched on the two frames of images according to a preset identification model of the static object, and determining the object as a basic object for speed measurement if the object is identified as the static object.
Preferably,
the selecting one or more objects from the objects successfully matched on the two images comprises: when the number of successfully matched objects exceeds the number of objects to be selected, selecting the one or more objects in one of the following modes:
displaying the matched objects on a display screen for the user to choose from, the one or more objects being those selected by the user; or
selecting the one or more objects in order of proximity to the middle of the image; or
selecting the one or more objects in descending order of size; or
selecting the one or more objects in descending order of matching degree; or
selecting the one or more objects in descending order of a weighted combination of object size and matching degree.
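As an illustration only (not part of the claims), the weighted size-and-matching-degree mode above can be sketched as follows; the field names, the 0.5/0.5 weights and the top-k parameter are assumptions:

```python
# Rank candidate objects by a weighted combination of object size and
# matching degree, then keep the top-k as basic objects for speed measurement.
def select_base_objects(candidates, k=1, w_size=0.5, w_match=0.5):
    """candidates: list of dicts with 'size' (pixel area) and 'match' (0..1)."""
    if not candidates:
        return []
    max_size = max(c["size"] for c in candidates) or 1
    # Normalize size so it is comparable with the 0..1 matching degree.
    return sorted(
        candidates,
        key=lambda c: w_size * c["size"] / max_size + w_match * c["match"],
        reverse=True,  # largest weighted score first
    )[:k]

objs = [
    {"name": "tree", "size": 1200, "match": 0.9},
    {"name": "sign", "size": 300, "match": 0.99},
    {"name": "house", "size": 5000, "match": 0.7},
]
print(select_base_objects(objs, k=2))
```

Any monotone scoring function would do here; the point is only that size and matching degree are combined into a single ranking before the top objects are kept.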
Preferably,
the calculated vehicle speed is based on the following formula:
$$v = \frac{\sqrt{\left(\dfrac{f_1 v_1}{v_1 - f_1}\right)^2 + \left(\dfrac{f_2 v_2}{v_2 - f_2}\right)^2 - \dfrac{f_1 f_2\left(v_2^2 + v_1^2 - \Delta s^2\right)}{\left(v_1 - f_1\right)\left(v_2 - f_2\right)}}}{t_2 - t_1}$$
wherein v is the vehicle speed, $t_1$ is the shooting time of image one of the two images, $t_2$ is the shooting time of image two, $f_1$ is the focal length when image one was shot, $f_2$ is the focal length when image two was shot, $v_1$ is the image distance of image one calculated from the basic object, $v_2$ is the image distance of image two calculated from the basic object, and $\Delta s$ is the offset distance of the basic object between the two images.
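For illustration, the formula can be transcribed directly into code; the variable names follow the definitions above, and the sample values in the usage are assumptions (metres and seconds):

```python
import math

def vehicle_speed(f1, v1, f2, v2, ds, t1, t2):
    """Vehicle speed from the patent's formula.

    f1, f2: focal lengths of the two shots; v1, v2: image distances
    calculated from the basic object; ds: offset of the basic object
    between the two images (all in the same length unit);
    t1 < t2: shooting times in seconds.
    """
    term = (
        (f1 * v1 / (v1 - f1)) ** 2
        + (f2 * v2 / (v2 - f2)) ** 2
        - f1 * f2 * (v2 ** 2 + v1 ** 2 - ds ** 2) / ((v1 - f1) * (v2 - f2))
    )
    # term is the squared travelled distance S^2 from the derivation.
    return math.sqrt(term) / (t2 - t1)
```

In the symmetric case $f_1 = f_2$, $v_1 = v_2$ the expression under the root collapses to $(u\,\Delta s / v)^2$ with $u = fv/(v-f)$, i.e. the offset divided by the magnification, which is a convenient sanity check.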
Preferably,
when two images are taken as a group and the multiple images form a single group, the two images of that group are processed, the vehicle speed is calculated, and that speed is taken as the speed measurement result;
when two images are taken as a group and the multiple images are divided into several groups, the two images of each group are processed and, after the vehicle speeds are calculated, the method further comprises: averaging the calculated vehicle speeds to obtain the speed measurement result.
Correspondingly, the invention also provides a terminal with a shooting function, which comprises:
the shooting module is used for shooting an external scene for multiple times to obtain a multi-frame image and recording parameters shot each time, wherein the parameters comprise an image distance, a focal length and shooting time;
the grouping module is used for grouping two frames of images into one group or a plurality of groups;
the selection module is used for extracting objects from the two images of the same group and matching them, selecting one or more objects from the objects successfully matched on the two images, and determining them as the basic objects for speed measurement on the two images;
and the calculating module is used for calculating, for the two images of the same group, the offset distance of the basic object between the two images, and calculating the vehicle speed by combining the image distance and focal length at which each image was shot with the time difference obtained from the shooting times.
Preferably,
the selection module selects one or more objects from the objects successfully matched on the two frames of images, and determines the objects as basic objects for speed measurement, and the selection module comprises:
and identifying the object successfully matched on the two frames of images according to a preset identification model of the static object, and determining the object as a basic object for speed measurement if the object is identified as the static object.
Preferably,
the selection module selecting one or more objects from the objects successfully matched on the two images comprises: when the number of successfully matched objects exceeds the number of objects to be selected, selecting the one or more objects in one of the following modes:
displaying the matched objects on a display screen for the user to choose from, the one or more objects being those selected by the user; or
selecting the one or more objects in order of proximity to the middle of the image; or
selecting the one or more objects in descending order of size; or
selecting the one or more objects in descending order of matching degree; or
selecting the one or more objects in descending order of a weighted combination of object size and matching degree.
Preferably,
the calculation module calculates the vehicle speed based on the following formula:
$$v = \frac{\sqrt{\left(\dfrac{f_1 v_1}{v_1 - f_1}\right)^2 + \left(\dfrac{f_2 v_2}{v_2 - f_2}\right)^2 - \dfrac{f_1 f_2\left(v_2^2 + v_1^2 - \Delta s^2\right)}{\left(v_1 - f_1\right)\left(v_2 - f_2\right)}}}{t_2 - t_1}$$
wherein v is the vehicle speed, $t_1$ is the shooting time of image one of the two images, $t_2$ is the shooting time of image two, $f_1$ is the focal length when image one was shot, $f_2$ is the focal length when image two was shot, $v_1$ is the image distance of image one calculated from the basic object, $v_2$ is the image distance of image two calculated from the basic object, and $\Delta s$ is the offset distance of the basic object between the two images.
Preferably,
the grouping module divides the multiple images into a single group of two images; the calculation module processes the two images of that group and, after calculating the vehicle speed, takes it as the speed measurement result; or
the grouping module divides the multiple images into several groups of two images; the calculation module processes the two images of each group, calculates the vehicle speeds, and then averages them to obtain the speed measurement result.
With the above scheme, the terminal's camera function is used to shoot the outside scene, and the speed is calculated from the images and related parameters of two shots. The speed can thus be measured automatically at any time and place, and a user can know the running speed of the vehicle at any moment.
Drawings
FIG. 1 is a flow chart of a method for measuring speed according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating the parameters associated with two shots according to an embodiment of the present invention;
fig. 3 is a functional block diagram of a terminal according to an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail below with reference to the accompanying drawings. It should be noted that the embodiments and features of the embodiments of the present invention may be arbitrarily combined with each other without conflict.
The invention uses a terminal with a shooting function, such as a mobile terminal, inside a running vehicle and calculates the vehicle speed from the data generated by two shots. The calculation is based on the convex-lens imaging principle and on information extraction techniques.
Convex lens imaging principle:
the most basic gaussian imaging formula in optics is: 1/u +1/v is 1/f, i.e. the reciprocal of the object distance plus the reciprocal of the image distance is equal to the reciprocal of the focal length. Wherein u is the object distance, which is the distance from the object to the optical center of the lens; v is the image distance, which refers to the distance between the image and the optical center of the lens; f is the focal length, which means the distance from the optical center of the lens to the focal point of the light collection, i.e. the distance from the center of the lens to the imaging plane of the film or CCD in the camera.
For convex-lens imaging (a camera is a convex-lens imaging system): when the object distance is infinite, the image distance equals the focal length and the image lies on the focal plane (the camera focused at infinity); when the object distance is between infinity and twice the focal length, the image distance is between one and two focal lengths and a reduced real image is formed (the usual case for a camera; near twice the focal length this is macro shooting); when the object distance equals twice the focal length, the image distance equals the object distance and the image is the same size as the object (the 1:1 macro case); when the object distance is between one and two focal lengths, the image distance is greater than twice the focal length and a magnified real image is formed; when the object distance equals the focal length, the image distance is infinite and the rays leave the lens parallel, so no image is formed; when the object distance is less than the focal length, the image distance is negative, i.e. a virtual image is formed on the same side as the object (as in a magnifying glass).
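As a minimal illustration of the Gaussian formula, the equation 1/u + 1/v = 1/f can be solved for the image distance, and the regimes described above checked numerically (the 50 mm focal length is just an example):

```python
def image_distance(u, f):
    """Gaussian lens formula 1/u + 1/v = 1/f solved for the image
    distance v. u: object distance, f: focal length (same units, u != f)."""
    return f * u / (u - f)

f = 50.0  # focal length in mm (illustrative)
print(image_distance(1e9, f))    # object at "infinity": v approaches f
print(image_distance(2 * f, f))  # u = 2f: v = 2f, the 1:1 macro case
print(image_distance(75.0, f))   # f < u < 2f: v > 2f, magnified real image
print(image_distance(25.0, f))   # u < f: v < 0, virtual image
```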
The information extraction technology comprises the following steps:
information extraction is a process of obtaining useful information from observation data, and can be mainly divided into detection and estimation, but also includes information extraction processes in system identification and pattern recognition. There is a kind of information hidden in the voice, text or graphic image, which needs to be extracted through image processing. The invention mainly relates to a technology for extracting an object from an image, wherein the same object on a plurality of frames of images is used as a basic object for measuring speed. If the moving objects in the shot scene are more or larger, the function of identifying the objects in the image can be started, and static objects such as houses, trees and the like are selected as basic objects for speed measurement according to the identification result. The above-mentioned extraction of objects from images and identification can be performed by using various existing techniques, and the present invention is not limited thereto.
Example one
In this embodiment, a person on a travelling vehicle uses a terminal held fixed (the terminal must not move or rotate during shooting) to shoot the scene outside the window several times within a short period. Objects are extracted from the captured images, the same object appearing on the images is taken as the basic object for speed measurement, and the vehicle speed is calculated from the distance the object moves.
As shown in fig. 1, the method for implementing vehicle speed measurement in this embodiment is applied to a terminal with a shooting function, such as a mobile phone or a camera, and comprises:
step 110, shooting an external scene for multiple times to obtain a multi-frame image, and recording parameters of each shooting, wherein the parameters comprise an image distance, a focal length and a shooting time;
the multiple shooting of the terminal can be automatic continuous shooting of the terminal or multiple shooting triggered manually by a user, and the terminal needs to automatically record shooting time to calculate time difference. The multiple shooting can be two times of shooting, or can be more times of shooting, and the speed can be measured by two times of shooting, but the multiple shooting can also be carried out to obtain more accurate results.
Step 120, dividing the multi-frame images into one or more groups by taking two frames of images as one group;
when two frame images are taken, the plurality of frame images can be grouped only.
When a plurality of frame images are taken, two frame images may be selected as one group, or two frame images may be selected as one group, and the plurality of frame images are divided into a plurality of groups, and at this time, adjacent images may be taken as one group, but the present invention is not limited thereto, and for example, when 3 frame images are taken, a first frame image and a second frame image are selected as one group, a first frame image and a third frame image are selected as one group, and the like.
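The grouping rule can be sketched as follows; pairing adjacent frames when more than two exist is an illustrative choice, since the text allows any pairing:

```python
def group_frames(frames):
    """Divide captured frames into groups of two. With exactly two frames
    there is a single group; with more, adjacent frames are paired here,
    although any pairing (e.g. frame 1 with frame 3) is equally valid."""
    if len(frames) < 2:
        return []  # not enough frames to measure speed
    if len(frames) == 2:
        return [(frames[0], frames[1])]
    return [(frames[i], frames[i + 1]) for i in range(len(frames) - 1)]

print(group_frames(["img1", "img2", "img3"]))
```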
Step 130, extracting objects from the two images of the same group, matching the objects, selecting one or more objects from the successfully matched objects on the two images, and determining the objects as basic objects for speed measurement;
when the object extracted from the two images is matched (the matched object is on different images), the matching can be performed based on one characteristic of the object, such as shape, or simultaneously based on multiple characteristics of the object, such as shape, color, shape, gray level and the like, so as to improve the matching accuracy.
In this step, if the terminal does not find a matched object on the two frames of images, a prompt may be given: the same object is not found, and the speed cannot be measured. If a matched object is found, the object is used as a basic object for speed measurement.
If the number of objects successfully matched on the two images exceeds the number of objects to be selected, one or more objects are selected in one of the following ways:
selecting the one or more objects in descending order of size; or
selecting the one or more objects in descending order of matching degree; or
selecting the one or more objects in descending order of a weighted combination of object size and matching degree; or
in one example, selecting the one or more objects in order of proximity to the middle of the image. A user generally aims the middle of the lens at a target object when shooting, and in the speed-measurement case that target object is the basic object, so this mode makes it easier to automatically identify the object chosen by the user as the basic object; or
in one example, having the terminal display the matched objects on the screen for the user to choose from, the one or more objects being those selected by the user. Although this adds some operations, it avoids the calculation error caused by taking a moving object as the basic object.
Step 140, for the two images of the same group, calculating the offset distance of the basic object for speed measurement between the two images, and calculating the vehicle speed by combining the image distance and focal length at which each image was shot with the time difference obtained from the shooting times.
Fig. 2 is a schematic diagram of the parameters related to the two shots. The arrow in the figure indicates the direction of travel of the vehicle. Assume the selected basic object is a tree; the planes containing $A_1$ and $A_2$ are the imaging planes; $B$ and $C$ are the optical centers of the convex lens at the two shooting moments; and the vertical distances from $B$ and $C$ to the imaging planes are the focal lengths of the two shots.
In this example, image one is shot at time $t_1$ with focal length $f_1$, and the position of the tree on image one is $A_1$ (this point may be chosen as the geometric center, leftmost, rightmost, topmost or bottommost point of the basic object). $AB$ is the object distance of image one calculated from the basic object, denoted $u_1$, and $BA_1$ is the image distance of image one calculated from the basic object, denoted $v_1$. Image two is shot at time $t_2$ with focal length $f_2$, and the position of the tree on image two is $A_2$; $AC$ is the object distance of image two, denoted $u_2$, and $CA_2$ is the image distance of image two, denoted $v_2$. The values $f_1$, $v_1$ and $f_2$, $v_2$ are recorded by the terminal. $BC$ is the distance $S$ travelled by the vehicle during $t_2 - t_1$ (the change of focal length between image one and image two is negligible).
In Fig. 2, the dashed line $CD$ is drawn parallel to $BA_1$; $A_2D$ is the offset distance of the basic object (the tree) between the two images, denoted $\Delta s$.
By the convex-lens imaging geometry: $\angle A_2CD = \angle BAC$.
From the Gaussian imaging formula $1/u + 1/v = 1/f$ it follows that $u = fv/(v - f)$.
By the law of cosines in triangle $A_2CD$:
$$\cos\angle A_2CD = \frac{A_2C^2 + DC^2 - A_2D^2}{2 \cdot A_2C \cdot DC}$$
so that, with $A_2C = v_2$, $DC = BA_1 = v_1$ and $A_2D = \Delta s$,
$$\angle A_2CD = \arccos\frac{v_2^2 + v_1^2 - \Delta s^2}{2 v_2 v_1}$$
and therefore
$$\angle BAC = \angle A_2CD = \arccos\frac{v_2^2 + v_1^2 - \Delta s^2}{2 v_2 v_1}.$$
Further, $AB = u_1$ and $AC = u_2$. By the law of cosines, $BC^2 = AB^2 + AC^2 - 2 \cdot AB \cdot AC \cdot \cos\angle BAC$, hence
$$BC = \sqrt{AB^2 + AC^2 - 2 \cdot AB \cdot AC \cdot \cos\angle BAC}$$
that is:
$$S = BC = \sqrt{u_1^2 + u_2^2 - 2 u_1 u_2 \cdot \frac{v_2^2 + v_1^2 - \Delta s^2}{2 v_2 v_1}}$$
Substituting $u = fv/(v - f)$ gives
$$S = \sqrt{\left(\frac{f_1 v_1}{v_1 - f_1}\right)^2 + \left(\frac{f_2 v_2}{v_2 - f_2}\right)^2 - \frac{f_1 f_2\left(v_2^2 + v_1^2 - \Delta s^2\right)}{\left(v_1 - f_1\right)\left(v_2 - f_2\right)}}$$
The vehicle running speed $v$ is therefore:
$$v = \frac{S}{t_2 - t_1} = \frac{\sqrt{\left(\dfrac{f_1 v_1}{v_1 - f_1}\right)^2 + \left(\dfrac{f_2 v_2}{v_2 - f_2}\right)^2 - \dfrac{f_1 f_2\left(v_2^2 + v_1^2 - \Delta s^2\right)}{\left(v_1 - f_1\right)\left(v_2 - f_2\right)}}}{t_2 - t_1}$$
the above equation is a theoretical equation for calculating the vehicle speed v, and a simplified algorithm or an approximate algorithm derived from the equation may be used when calculating based on the equation.
If in step 130 a single object is selected from the successfully matched objects and determined as the basic object for speed measurement, then in step 140 the vehicle speed calculated for that basic object is taken as the vehicle speed obtained from the two images of the group.
If in step 130 several objects are selected and determined as basic objects for speed measurement, then in step 140 a vehicle speed is calculated for each basic object and the average (e.g. arithmetic or weighted) of these speeds is taken as the vehicle speed obtained from the two images of the group. This can improve the accuracy of the result.
When two images are taken as a group and the multiple images form a single group, the two images of that group are processed and the calculated vehicle speed is taken as the speed measurement result. In particular, if 3 or more images were shot and the vehicle speed cannot be calculated from the current group (for example, no basic object can be determined), another two images can be selected as a new group and processing continues; the measurement is considered failed only if no pair of images yields a vehicle speed.
When two frames of images are taken as a group and the multiple frames are divided into several groups, the two images of each group are processed; after the per-group vehicle speeds are calculated, the method further comprises averaging the calculated vehicle speeds to obtain the speed measurement result.
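The pairing, retry, and averaging logic above can be sketched as follows. Here `speed_from_pair` stands in for the per-pair calculation of step 140 and is assumed to return `None` when no basic object can be determined for that pair:

```python
from itertools import combinations

def measure_speed(frames, speed_from_pair):
    """frames: per-shot records; speed_from_pair(a, b) returns a
    vehicle speed or None when no basic object could be determined.
    Every pairing is tried, so a failed pair does not fail the whole
    measurement; successful per-group speeds are averaged."""
    speeds = [s for a, b in combinations(frames, 2)
              if (s := speed_from_pair(a, b)) is not None]
    if not speeds:
        return None  # no pairwise combination yielded a speed
    return sum(speeds) / len(speeds)  # average of the group results

# Toy stand-in: frames are numbers, "speed" is their difference.
print(measure_speed([1, 2, 4], lambda a, b: b - a))
```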
Fig. 3 is a functional block diagram of the terminal according to the embodiment, and as shown in the figure, the terminal includes:
the shooting module 10 is configured to shoot an external scene multiple times to obtain multiple frames of images and to record the parameters of each shot, where the parameters include the image distance, the focal length, and the shooting time;
the grouping module 20 is configured to take two frames of images as a group and divide the multiple frames of images into one or more groups;
the selection module 30 is configured, for the two frames of images in the same group, to extract objects from the two images and match them, select one or more of the objects successfully matched on the two images, and determine them as the basic objects for speed measurement; and
the calculation module 40 is configured, for the two frames of images in the same group, to calculate the offset distance of each basic object for speed measurement on the two images, and to calculate the vehicle speed by combining the image distances and focal lengths at which the two images were shot with the time difference obtained from their shooting times.
Preferably,
the selection module 30 selecting one or more objects from the objects successfully matched on the two frames of images includes: when the number of objects successfully matched on the two frames of images exceeds the number of objects to be selected, selecting the one or more objects in one of the following ways:
displaying the matched objects on a display screen for selection by the user, the one or more objects being those selected by the user; or
selecting the one or more objects in order of decreasing distance from the middle (i.e. the center point) of the image; or
selecting the one or more objects in descending order of size; or
selecting the one or more objects in descending order of matching degree; or
selecting the one or more objects in descending order of a weighted combination of object size and matching degree.
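The last selection mode can be sketched as follows. The dictionary keys `'size'` and `'match'` and the equal default weights are illustrative assumptions; the patent does not fix the weighting:

```python
def pick_basic_objects(matched, count, w_size=0.5, w_match=0.5):
    """Select up to `count` basic objects from the successfully
    matched objects, ranked by a weighted combination of object size
    and matching degree (the last selection mode listed above)."""
    if len(matched) <= count:
        return list(matched)  # fewer candidates than needed: take all
    ranked = sorted(matched,
                    key=lambda o: w_size * o['size'] + w_match * o['match'],
                    reverse=True)  # weighted score, large to small
    return ranked[:count]

objs = [{'size': 0.2, 'match': 0.9},
        {'size': 0.8, 'match': 0.7},
        {'size': 0.5, 'match': 0.4}]
print(pick_basic_objects(objs, 2))
```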
Preferably, the calculation module 40 calculates the vehicle speed based on the following formula:
$$v = \frac{\sqrt{\left(\dfrac{f_1 v_1}{v_1 - f_1}\right)^2 + \left(\dfrac{f_2 v_2}{v_2 - f_2}\right)^2 - \dfrac{f_1 f_2\left(v_2^2 + v_1^2 - \Delta s^2\right)}{(v_1 - f_1)(v_2 - f_2)}}}{t_2 - t_1}$$
wherein:
v is the vehicle speed; $t_1$ and $t_2$ are the shooting times of the first and second images; $f_1$ and $f_2$ are the focal lengths when the first and second images were shot; $v_1$ and $v_2$ are the image distances of the first and second images calculated from the basic object; and $\Delta s$ is the offset distance of the basic object on the two frames of images;
preferably, the first and second liquid crystal films are made of a polymer,
when the selection module 30 selects a single object from the objects successfully matched on the two images and determines it as the basic object for speed measurement, the calculation module 40 takes the vehicle speed calculated for that basic object as the vehicle speed calculated from the two images in the same group;
when the selection module 30 selects multiple objects from the successfully matched objects and determines them as basic objects for speed measurement, the calculation module 40 calculates the vehicle speed for each basic object and takes the average of the calculated speeds as the vehicle speed calculated from the two images in the same group.
Preferably,
the grouping module divides the multiple frames of images into a single group of two frames, and the calculation module processes the two images of that group and takes the calculated vehicle speed as the speed measurement result; or
the grouping module divides the multiple frames of images into several groups of two frames each, and the calculation module processes the two images of each group, calculates the per-group vehicle speeds, and then averages them to obtain the speed measurement result.
Example two
In the first embodiment, if the terminal selects the basic object for speed measurement automatically, a moving object is easily chosen as the basic object when the external scene contains many moving objects or large ones. In this embodiment, the terminal therefore presets recognition models of stationary objects and adds an object recognition function: once the function is enabled, the matched objects are recognized after matching is complete, and an object is used as a basic object for speed measurement only if it is recognized as a stationary object.
The method for implementing vehicle speed measurement in this embodiment is applied to a terminal with a shooting function, and is basically the same as the method in the first embodiment, except that:
For the two frames of images in the same group, objects are extracted from the two images and matched; when one or more objects are selected from the successfully matched objects, a condition is added relative to the first embodiment: the selected object must be a stationary object. The terminal therefore performs image recognition on each successfully matched object using preset recognition models of stationary objects (for example, models of trees, houses, and the like), and determines an object as a basic object for speed measurement only if it is recognized as stationary. Recognition is usually carried out in three steps: feature extraction, in which the important characteristics of the various images are described numerically; feature selection, in which a few comprehensive indicators are chosen from the many characteristics of a class of images; and object recognition, in which the object is matched against the preset recognition models of stationary objects to decide which class it belongs to.
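The extract-select-match sequence can be sketched as a nearest-model test. The feature vectors, the Euclidean-distance matching, and the threshold are all illustrative assumptions; the patent only prescribes the sequence of steps, not a concrete recognition algorithm:

```python
def is_stationary(obj_features, models, threshold=0.5):
    """Decide whether a matched object is a stationary object by
    comparing its (already extracted and selected) feature vector
    against preset recognition models, e.g. trees and houses."""
    def dist(a, b):
        # Euclidean distance between feature vectors (assumed metric).
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    # Accept the object as a speed-measurement basic object only if
    # it lies close enough to at least one stationary-object model.
    return any(dist(obj_features, m) < threshold for m in models.values())

models = {'tree': [0.9, 0.1], 'house': [0.2, 0.8]}  # assumed models
print(is_stationary([0.85, 0.15], models))  # close to the 'tree' model
```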
Accordingly, the terminal of this embodiment is substantially the same as that of the first embodiment, except that: for the two frames of images in the same group, when the selection module selects one or more objects from the objects successfully matched on the two images and determines them as the basic objects for speed measurement, it recognizes the successfully matched objects using preset recognition models of stationary objects and determines an object as a basic object for speed measurement only if it is recognized as a stationary object.
In both embodiments, when the terminal displays the calculated vehicle speed to the user, it may also display information about the basic object used, such as an image of the basic object (in the second embodiment, the recognized object name may be used), so that the user knows which object the calculation was based on and can judge the accuracy of the result.
With the scheme of these embodiments, the terminal's camera function is used to shoot the external scene, and the speed is calculated from the images and the related parameters of two shots, so the vehicle speed can be measured automatically anytime and anywhere, and the user can learn the running speed of the vehicle at any time.
It will be understood by those skilled in the art that all or part of the steps of the above methods may be implemented by instructing the relevant hardware through a program, and the program may be stored in a computer readable storage medium, such as a read-only memory, a magnetic or optical disk, and the like. Alternatively, all or part of the steps of the foregoing embodiments may also be implemented by using one or more integrated circuits, and accordingly, each module/unit in the foregoing embodiments may be implemented in the form of hardware, and may also be implemented in the form of a software functional module. The present invention is not limited to any specific form of combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method for realizing vehicle speed measurement is applied to a terminal with a shooting function, and comprises the following steps:
shooting an external scene for multiple times to obtain a multi-frame image, and recording parameters shot each time, wherein the parameters comprise an image distance, a focal length and shooting time;
dividing the multi-frame images into one or more groups by taking two frame images as one group, and carrying out the following processing on the two frame images in the same group:
extracting objects from the two frames of images, matching the objects, selecting one or more objects from the successfully matched objects on the two frames of images, and determining the objects as basic objects for speed measurement; and
and calculating the offset distance of the basic object on the two frames of images, and calculating the vehicle speed by combining the image distance and the focal distance when the two frames of images are shot and the time difference obtained according to the shooting time.
2. The method of claim 1, wherein:
the method for selecting one or more objects from the objects successfully matched on the two frames of images to determine the objects as basic objects for speed measurement comprises the following steps:
and identifying the object successfully matched on the two frames of images according to a preset identification model of the static object, and determining the object as a basic object for speed measurement if the object is identified as the static object.
3. The method of claim 1 or 2, wherein:
the selecting one or more objects from the objects successfully matched on the two frames of images comprises: when the number of objects successfully matched on the two frames of images exceeds the number of objects to be selected, selecting the one or more objects according to one of the following modes:
displaying the matched objects on a display screen for selection by a user, wherein the one or more objects are selected by the user; or
Selecting the one or more objects in order of decreasing distance to the middle of the image; or
Selecting the one or more objects in order of size; or
Selecting the one or more objects in order of high and low matching degrees; or
And selecting the one or more objects according to the order from large to small after the object size and the matching degree are subjected to weighted operation.
4. The method of claim 1, wherein:
the calculated vehicle speed is based on the following formula:
$$v = \frac{\sqrt{\left(\dfrac{f_1 v_1}{v_1 - f_1}\right)^2 + \left(\dfrac{f_2 v_2}{v_2 - f_2}\right)^2 - \dfrac{f_1 f_2\left(v_2^2 + v_1^2 - \Delta s^2\right)}{(v_1 - f_1)(v_2 - f_2)}}}{t_2 - t_1}$$
wherein:
v is the vehicle speed; $t_1$ and $t_2$ are the shooting times of the first and second images; $f_1$ and $f_2$ are the focal lengths when the first and second images were shot; $v_1$ and $v_2$ are the image distances of the first and second images calculated from the basic object; and $\Delta s$ is the offset distance of the basic object on the two frames of images.
5. The method of claim 1 or 4, wherein:
taking two frames of images as a group, processing the two frames of images of the group when dividing the multi-frame images into a group, calculating the vehicle speed, and taking the vehicle speed as a speed measurement result;
when two frames of images are taken as a group and the multi-frame images are divided into a plurality of groups, the two frames of images in each group are processed, and after the vehicle speed is calculated, the method further comprises the following steps: and averaging the calculated speeds of the plurality of vehicles to obtain a speed measurement result.
6. A terminal having a photographing function, comprising:
the shooting module is used for shooting an external scene for multiple times to obtain a multi-frame image and recording parameters shot each time, wherein the parameters comprise an image distance, a focal length and shooting time;
the grouping module is used for taking two frames of images as a group and dividing the multiple frames of images into one or more groups;
the selection module is used for extracting objects from the two images of the same group and matching the objects, selecting one or more objects from the successfully matched objects on the two images, and determining the objects as basic objects for measuring the speed on the two images;
and the calculating module is used for calculating the offset distance of a basic object for measuring the speed on the two frames of images for the two frames of images in the same group, and calculating the vehicle speed by combining the image distance and the focal length when the two frames of images are shot and the time difference obtained according to the shooting time.
7. The terminal of claim 6, wherein:
the selection module selects one or more objects from the objects successfully matched on the two frames of images, and determines the objects as basic objects for speed measurement, and the selection module comprises:
and identifying the object successfully matched on the two frames of images according to a preset identification model of the static object, and determining the object as a basic object for speed measurement if the object is identified as the static object.
8. The terminal according to claim 6 or 7, characterized by:
the selection module selects one or more objects from the objects successfully matched on the two frames of images, and comprises the following steps: when the number of objects successfully matched on the two frames of images exceeds the number of objects to be selected, one or more objects are selected according to one of the following modes:
displaying the matched objects on a display screen for selection by a user, wherein the one or more objects are selected by the user; or
Selecting the one or more objects in order of decreasing distance to the middle of the image; or
Selecting the one or more objects in order of size; or
Selecting the one or more objects in order of high and low matching degrees; or
And selecting the one or more objects according to the order from large to small after the object size and the matching degree are subjected to weighted operation.
9. The terminal of claim 8, wherein:
the calculation module calculates the vehicle speed based on the following formula:
$$v = \frac{\sqrt{\left(\dfrac{f_1 v_1}{v_1 - f_1}\right)^2 + \left(\dfrac{f_2 v_2}{v_2 - f_2}\right)^2 - \dfrac{f_1 f_2\left(v_2^2 + v_1^2 - \Delta s^2\right)}{(v_1 - f_1)(v_2 - f_2)}}}{t_2 - t_1}$$
wherein:
v is the vehicle speed; $t_1$ and $t_2$ are the shooting times of the first and second images; $f_1$ and $f_2$ are the focal lengths when the first and second images were shot; $v_1$ and $v_2$ are the image distances of the first and second images calculated from the basic object; and $\Delta s$ is the offset distance of the basic object on the two frames of images.
10. The terminal according to claim 6 or 9, characterized by:
the grouping module is used for grouping two frames of images into one group and dividing the multiple frames of images into one group; the calculation module processes the two frames of images of the group, and after the vehicle speed is calculated, the vehicle speed is used as a speed measurement result; or
The grouping module is used for grouping two frames of images into one group and dividing the multiple frames of images into multiple groups; the calculation module processes the two frames of images of each group, calculates the vehicle speed, and then averages the calculated vehicle speeds to obtain a speed measurement result.
CN201410195239.9A 2014-05-09 2014-05-09 Method for measuring speed of vehicle and corresponding terminal Active CN103969466B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410195239.9A CN103969466B (en) 2014-05-09 2014-05-09 Method for measuring speed of vehicle and corresponding terminal


Publications (2)

Publication Number Publication Date
CN103969466A true CN103969466A (en) 2014-08-06
CN103969466B CN103969466B (en) 2017-02-01

Family

ID=51239213

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410195239.9A Active CN103969466B (en) 2014-05-09 2014-05-09 Method for measuring speed of vehicle and corresponding terminal

Country Status (1)

Country Link
CN (1) CN103969466B (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104200675A (en) * 2014-08-25 2014-12-10 安徽建筑大学 Vehicle speed measurement method based on invariant feature matching
CN104714048A (en) * 2015-03-30 2015-06-17 上海斐讯数据通信技术有限公司 Detection method and mobile terminal for movement speed of moving object
CN105158496A (en) * 2015-08-31 2015-12-16 广东欧珀移动通信有限公司 Object movement speed measurement method and device
CN105301279A (en) * 2015-10-09 2016-02-03 广东欧珀移动通信有限公司 Speed measurement method and speed measurement device based on camera, and mobile terminal
CN106033605A (en) * 2015-03-18 2016-10-19 章志成 Method of using single-frame automobile-motion fuzzy image to test automobile speed
CN106803350A (en) * 2017-03-06 2017-06-06 中山大学 A kind of vehicle speed detection method and device based on camera shooting time difference
CN111009135A (en) * 2019-12-03 2020-04-14 北京百度网讯科技有限公司 Method and device for determining vehicle running speed and computer equipment
CN111369709A (en) * 2020-04-03 2020-07-03 中信戴卡股份有限公司 Driving scene determination method, device, computer, storage medium and system
CN112991769A (en) * 2021-02-03 2021-06-18 中科视语(北京)科技有限公司 Traffic volume investigation method and device based on video
WO2022007122A1 (en) * 2020-07-08 2022-01-13 谢超奇 Group migration speed measurement system and method
CN114125305A (en) * 2021-12-01 2022-03-01 西安维沃软件技术有限公司 Shooting method, device and equipment
CN114463988A (en) * 2020-11-09 2022-05-10 浙江宇视科技有限公司 Image acquisition method and device, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060078164A1 (en) * 2004-10-08 2006-04-13 Huei-Yung Lin Measurement method using blurred images
CN101187671A (en) * 2007-12-27 2008-05-28 北京中星微电子有限公司 Method and device for determining automobile driving speed


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Gu Baiyuan: "Research on a Safe Vehicle-Distance Early-Warning System Based on Monocular Vision", China Doctoral Dissertations Full-text Database, Engineering Science and Technology II *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104200675A (en) * 2014-08-25 2014-12-10 安徽建筑大学 Vehicle speed measurement method based on invariant feature matching
CN104200675B (en) * 2014-08-25 2016-11-23 安徽建筑大学 Vehicle speed measurement method based on invariant feature matching
CN106033605A (en) * 2015-03-18 2016-10-19 章志成 Method of using single-frame automobile-motion fuzzy image to test automobile speed
CN104714048B (en) * 2015-03-30 2017-11-21 上海斐讯数据通信技术有限公司 A kind of detection method and mobile terminal for mobile object translational speed
CN104714048A (en) * 2015-03-30 2015-06-17 上海斐讯数据通信技术有限公司 Detection method and mobile terminal for movement speed of moving object
CN105158496A (en) * 2015-08-31 2015-12-16 广东欧珀移动通信有限公司 Object movement speed measurement method and device
CN105301279B (en) * 2015-10-09 2018-06-01 广东欧珀移动通信有限公司 A kind of speed measurement method based on camera, device and mobile terminal
CN105301279A (en) * 2015-10-09 2016-02-03 广东欧珀移动通信有限公司 Speed measurement method and speed measurement device based on camera, and mobile terminal
CN106803350A (en) * 2017-03-06 2017-06-06 中山大学 A kind of vehicle speed detection method and device based on camera shooting time difference
CN111009135A (en) * 2019-12-03 2020-04-14 北京百度网讯科技有限公司 Method and device for determining vehicle running speed and computer equipment
CN111369709A (en) * 2020-04-03 2020-07-03 中信戴卡股份有限公司 Driving scene determination method, device, computer, storage medium and system
US11691625B2 (en) 2020-04-03 2023-07-04 Citic Dicastal Co., Ltd. Driving scene determining method and apparatus, computer, storage medium, and system
WO2022007122A1 (en) * 2020-07-08 2022-01-13 谢超奇 Group migration speed measurement system and method
CN114463988A (en) * 2020-11-09 2022-05-10 浙江宇视科技有限公司 Image acquisition method and device, electronic equipment and storage medium
CN114463988B (en) * 2020-11-09 2023-10-03 浙江宇视科技有限公司 Image acquisition method, device, electronic equipment and storage medium
CN112991769A (en) * 2021-02-03 2021-06-18 中科视语(北京)科技有限公司 Traffic volume investigation method and device based on video
CN114125305A (en) * 2021-12-01 2022-03-01 西安维沃软件技术有限公司 Shooting method, device and equipment

Also Published As

Publication number Publication date
CN103969466B (en) 2017-02-01

Similar Documents

Publication Publication Date Title
CN103969466B (en) Method for measuring speed of vehicle and corresponding terminal
CN110322702B (en) Intelligent vehicle speed measuring method based on binocular stereo vision system
CN105989593B (en) The method and device of particular vehicle tachometric survey is carried out in video record
CN102510734B (en) Pupil detection device and pupil detection method
EP1744292B1 (en) Method for determining position and speed of vehicles
CN106375706B (en) method and device for measuring speed of moving object by using double cameras and mobile terminal
JP2022103234A (en) Information processing device, information processing method, and program
CN107272021A (en) The object detection of the image detection region defined using radar and vision
CN105321342B (en) A kind of intersection vehicles queue length detection method based on video of taking photo by plane
CN105574552A (en) Vehicle ranging and collision early warning method based on monocular vision
JP2016522415A (en) Visually enhanced navigation
JP2010511212A (en) Method and apparatus for identifying and locating planar objects in an image
US11971961B2 (en) Device and method for data fusion between heterogeneous sensors
CN113255578B (en) Traffic identification recognition method and device, electronic equipment and storage medium
KR101735557B1 (en) System and Method for Collecting Traffic Information Using Real time Object Detection
US20160091297A1 (en) Operating device, operating method, and program therefor
CN111976601B (en) Automatic parking method, device, equipment and storage medium
CN112598743B (en) Pose estimation method and related device for monocular vision image
JP2011513876A (en) Method and system for characterizing the motion of an object
JP2020160840A (en) Road surface defect detecting apparatus, road surface defect detecting method, road surface defect detecting program
CN104463240A (en) Method and device for controlling list interface
CN115235493A (en) Method and device for automatic driving positioning based on vector map
CN109407080A (en) Vehicle distance measuring system based on binocular camera and distance measuring method thereof
CN110345924A (en) A kind of method and apparatus that distance obtains
US7330569B2 (en) Measurement method using blurred images

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant