CN114071013B - Target snapshot and tracking method and device for vehicle-mounted camera


Publication number
CN114071013B
Authority
CN
China
Prior art keywords: target, camera, detail camera, captured, vehicle
Prior art date
Legal status
Active
Application number
CN202111191031.6A
Other languages
Chinese (zh)
Other versions
CN114071013A (en)
Inventor
刘明
陈明珠
杨增启
李广义
刘峰明
郑元
邱慧慧
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN202111191031.6A
Publication of CN114071013A
Application granted
Publication of CN114071013B
Legal status: Active
Anticipated expiration

Classifications

    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/61 Control of cameras or camera modules based on recognised objects
    • H04N 23/695 Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H04N 23/90 Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • Y02T 10/40 Engine management systems

Abstract

The application relates to a target snapshot and tracking method for a vehicle-mounted camera, wherein the vehicle-mounted camera comprises a panoramic camera and a detail camera. The method comprises: controlling the panoramic camera to identify a target to be captured; after the panoramic camera determines the target to be captured, acquiring the position of the target to be captured and adjusting the position of the detail camera according to the position of the target to be captured; detecting a real-time relative motion state between the target to be captured and the detail camera; and, based on the real-time relative motion state, selecting different tracking control strategies to control the detail camera to track and capture the target to be captured. By having the panoramic camera and the detail camera work cooperatively, the detail camera is controlled to lock onto and track the target to be captured after the panoramic camera recognizes and selects it, so that the capture quality of the target to be captured is improved and the problem of poor capture quality in the prior art is solved.

Description

Target snapshot and tracking method and device for vehicle-mounted camera
Technical Field
The present disclosure relates to the field of image processing, and in particular, to a method and apparatus for capturing and tracking a target of a vehicle-mounted camera.
Background
At present, artificial intelligence is increasingly widely applied in the field of law enforcement. To strengthen the monitoring and management of motor vehicles and pedestrians on the road, intelligent monitoring systems are installed on many law enforcement devices to capture targets such as faces or vehicles, which facilitates follow-up management and tracking. In existing intelligent monitoring systems, during the moving snapshot process of a law enforcement device, the captured image of the target is blurred or the target is lost because the device moves too fast or jolts while travelling, which seriously affects the law enforcement snapshot effect.
Aiming at the problem of poor snapshot quality of a target to be snapshot in the prior art, no effective solution is proposed at present.
Disclosure of Invention
In this embodiment, a method, an apparatus, a system, an electronic apparatus, and a storage medium for capturing and tracking a target of a vehicle-mounted camera are provided, so as to solve the problem of poor capturing quality of a target to be captured in the related art.
In a first aspect, in this embodiment, there is provided a target snapshot and tracking method for an in-vehicle camera including a panoramic camera and a detail camera, the method comprising,
controlling the panoramic camera to identify a target to be snap shot;
After the panoramic camera determines the target to be captured, acquiring the position of the target to be captured, and adjusting the position of the detail camera according to the position of the target to be captured;
detecting a real-time relative motion state between the target to be captured and the detail camera;
and selecting different tracking control strategies to control the detail camera to track and snapshot the target to be snapshot based on the real-time relative motion state.
In some of these embodiments, the detecting a real-time relative motion state between the object to be snap-shot and the detail camera includes,
and determining the real-time relative motion state between the target to be captured and the detail camera according to the change of the pixel points of the target to be captured in the two adjacent image frames acquired by the detail camera.
In some embodiments, the determining the real-time relative motion state between the object to be captured and the detail camera according to the change of the pixel points of the object to be captured in two adjacent image frames acquired by the detail camera comprises,
when the pixel point change of the target to be captured in two adjacent image frames acquired by the detail camera is larger than a preset first threshold value, determining that the real-time relative motion state is relative motion;
And when the pixel point change of the target to be captured in two adjacent image frames acquired by the detail camera is smaller than or equal to the first threshold value, determining that the real-time relative motion state is relatively static.
In some embodiments, when the real-time relative motion state is a relative motion, the selecting different tracking control strategies based on the real-time relative motion state controls the detail camera to track and capture the target to be captured, including,
calculating a position coordinate value of the object to be captured in an image frame of the detail camera, calculating a position deviation value of the object to be captured at the i-th moment according to the position coordinate values of the object to be captured in the image frames of the detail camera acquired at the i-th moment and the (i-1)-th moment, obtaining a tracking speed of the detail camera according to the position deviation value, and adjusting a movement speed of the detail camera according to the tracking speed.
In some of these embodiments, the deriving the tracking speed of the detail camera from the positional deviation value includes,
the tracking speed of the detail camera comprises a horizontal tracking speed V_x and a vertical tracking speed V_y, and the horizontal tracking speed V_x and the vertical tracking speed V_y are respectively calculated according to the following formulas:
V_x = K_p*e_ix + K_i*∑e_ix + K_d*(e_ix - e_(i-1)x)
V_y = K_p*e_iy + K_i*∑e_iy + K_d*(e_iy - e_(i-1)y)
wherein e_ix is the horizontal position deviation value of the target to be captured at the i-th moment, e_iy is the vertical position deviation value of the target to be captured at the i-th moment, e_(i-1)x is the horizontal position deviation value of the target to be captured at the (i-1)-th moment, e_(i-1)y is the vertical position deviation value of the target to be captured at the (i-1)-th moment, i is an integer greater than 1, and K_p, K_i, K_d are speed adjustment parameters.
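A minimal sketch of this speed calculation, assuming the horizontal and vertical position deviations are measured in pixels at each sampling moment (the class and parameter names below are illustrative, not taken from the patent):

```python
class TrackingSpeedController:
    """PID-style controller: turns the pixel position deviation of the target in the
    detail-camera image into horizontal/vertical tracking speeds V_x and V_y."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd   # speed adjustment parameters K_p, K_i, K_d
        self.sum_ex = self.sum_ey = 0.0          # accumulated deviations (sum term)
        self.prev_ex = self.prev_ey = 0.0        # deviations at moment i-1

    def update(self, e_ix, e_iy):
        """e_ix / e_iy: horizontal / vertical position deviation at moment i."""
        self.sum_ex += e_ix
        self.sum_ey += e_iy
        v_x = self.kp * e_ix + self.ki * self.sum_ex + self.kd * (e_ix - self.prev_ex)
        v_y = self.kp * e_iy + self.ki * self.sum_ey + self.kd * (e_iy - self.prev_ey)
        self.prev_ex, self.prev_ey = e_ix, e_iy
        return v_x, v_y
```

For example, `TrackingSpeedController(kp=0.5, ki=0.01, kd=0.1).update(12, -3)` would return the pan/tilt speeds for a target 12 pixels right of and 3 pixels above the image center; the actual parameter values would have to be tuned for the specific cradle head.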
In some embodiments, the vehicle-mounted camera further comprises a gyroscope, and when the real-time relative motion state is relatively static, the selecting different tracking control strategies based on the real-time relative motion state controls the detail camera to track and capture the target to be captured, including,
and acquiring the motion speed of the detail camera and the angular speed of the gyroscope at the i-th moment, calculating the target speed of the detail camera according to the motion speed of the detail camera and the angular speed of the gyroscope, and adjusting the motion speed of the detail camera according to the target speed.
In some of these embodiments, after said adjusting the position of said detail camera in accordance with the position of said object to be snap-shot, said method further comprises,
Detecting whether a curve or an obstacle exists in a motion path of a vehicle provided with the vehicle-mounted camera;
and if a curve or an obstacle exists, predicting the motion path of the vehicle, and performing motion compensation on the detail camera according to the prediction result.
In some embodiments, when there is an obstacle in the motion path of the vehicle, predicting the motion path of the vehicle, and performing motion compensation on the detail camera according to a prediction result, where the motion compensation includes identifying the obstacle in the motion path of the vehicle, detecting a distance between the vehicle and the obstacle, and calculating a time when the vehicle reaches the obstacle according to a motion speed of the vehicle; and calculating the compensation acceleration and the compensation speed of the detail camera according to the time of the vehicle reaching the obstacle and the position deviation value of the target to be snap-shot, and adjusting the motion parameters of the detail camera according to the compensation acceleration and the compensation speed.
In some of these embodiments, the compensation acceleration and compensation speed of the detail camera are calculated from the time the vehicle reaches the obstacle and the position deviation value of the object to be snap-shot, including,
Calculating the compensation acceleration and the compensation speed of the detail camera according to the following formula;
jerk = Amax/t
a_k = a_(k-1) + jerk*T
v_k = v_(k-1) + a_k*T
k = t/T
v_(k-1) = K_p*e_(k-1) + K_i*∑e_(k-1) + K_d*(e_(k-1) - e_(k-2))
wherein Amax is the maximum acceleration allowed by the detail camera, t is the time for the vehicle to reach the obstacle, jerk is an intermediate variable, a_k is the compensation acceleration of the detail camera, v_k is the compensation speed of the detail camera, T is the discrete sampling period, K_p, K_i, K_d are speed adjustment parameters, e_k is the position deviation value of the target to be captured at the k-th moment, and k is an integer greater than 2.
In some of these embodiments, after the panoramic camera determines the object to be snap shot,
identifying a target contour of the target to be snap-shot in the image acquired by the panoramic camera, and obtaining a first position coordinate of the target contour in the image acquired by the panoramic camera;
obtaining a second position coordinate of the target contour in the image acquired by the detail camera according to a preset position coordinate corresponding relation between the panoramic camera and the detail camera;
obtaining a target position coordinate according to the second position coordinate and a first included angle, wherein the first included angle is an included angle between the light supplementing lamp and the detail camera;
and controlling the detail camera to move to the target position coordinates.
In some of these embodiments, the method further comprises,
when shake of the detail camera is detected, calculating the moving distance of the anti-shake device of the detail camera according to the following formula:
D = f*tan(α)/SR
wherein f is the focal length of the detail camera, D is the moving distance of the anti-shake device, α is the shake angle of the detail camera, and SR is the image stabilization sensitivity;
and controlling the anti-shake device to move reversely according to the moving distance of the anti-shake device.
In a second aspect, in this embodiment, there is provided a target capturing and tracking apparatus for an in-vehicle camera, the apparatus including:
the identification module is used for controlling the panoramic camera to identify the target to be captured;
the position adjustment module is used for acquiring the position of the target to be snapped after the panoramic camera determines the target to be snapped, and adjusting the position of the detail camera according to the position of the target to be snapped;
the detection module is used for detecting the real-time relative motion state between the target to be snap shot and the detail camera;
and the tracking snapshot module is used for selecting different tracking control strategies to control the detail camera to carry out tracking snapshot on the target to be snapshot based on the real-time relative motion state.
In a third aspect, in this embodiment there is provided a target capture and tracking system for an in-vehicle camera, the system comprising,
the vehicle-mounted camera comprises a mounting seat, a cradle head supported by the mounting seat, a panoramic camera and a detail camera arranged on the cradle head, and a control unit arranged in the cradle head, wherein the control unit is used for executing the target snapshot and tracking method for the vehicle-mounted camera according to the first aspect.
In a fourth aspect, in this embodiment, there is provided an electronic device, including a memory and a processor, where the memory stores a computer program, and the processor is configured to run the computer program to perform the target capturing and tracking method for an in-vehicle camera according to the first aspect.
In a fifth aspect, in this embodiment, there is provided a computer readable storage medium having a computer program stored thereon, where the computer program is executed by a processor, and the steps of the target capturing and tracking method for an in-vehicle camera according to the first aspect are performed by the processor.
Compared with the related art, the target capturing and tracking method, apparatus, system, electronic device and storage medium for a vehicle-mounted camera provided by this embodiment have the panoramic camera and the detail camera work cooperatively: after the panoramic camera recognizes and selects the target to be captured, the detail camera is controlled to lock onto and track the target to be captured, so that the capture quality of the target to be captured is improved.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the other features, objects, and advantages of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
fig. 1 is a block diagram of a system for capturing and tracking a target of an in-vehicle camera of the present embodiment.
Fig. 2 is a block diagram of another system for capturing and tracking targets for an in-vehicle camera of the present embodiment.
Fig. 3 is a flowchart of a target capturing and tracking method for an in-vehicle camera of the present embodiment.
Fig. 4 is a flowchart of a method for detecting a real-time relative motion state between an object to be captured and a detail camera according to the present embodiment.
Fig. 5 is a flowchart of another object capturing and tracking method for an in-vehicle camera of the present embodiment.
Detailed Description
For a clearer understanding of the objects, technical solutions and advantages of the present application, the present application is described and illustrated below with reference to the accompanying drawings and examples.
Unless defined otherwise, technical or scientific terms used herein shall have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terms "a," "an," "the," "these," and the like in this application are not intended to be limiting in number, but rather are singular or plural. The terms "comprising," "including," "having," and any variations thereof, as used in the present application, are intended to cover a non-exclusive inclusion; for example, a process, method, and system, article, or apparatus that comprises a list of steps or modules (units) is not limited to the list of steps or modules (units), but may include other steps or modules (units) not listed or inherent to such process, method, article, or apparatus. The terms "connected," "coupled," and the like in this application are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. Reference to "a plurality" in this application means two or more. "and/or" describes an association relationship of an association object, meaning that there may be three relationships, e.g., "a and/or B" may mean: a exists alone, A and B exist together, and B exists alone. Typically, the character "/" indicates that the associated object is an "or" relationship. The terms "first," "second," "third," and the like, as referred to in this application, merely distinguish similar objects and do not represent a particular ordering of objects.
In this embodiment, a system for capturing and tracking an object of a vehicle-mounted camera is provided, and fig. 1 is a block diagram of a system for capturing and tracking an object of a vehicle-mounted camera in this embodiment, as shown in fig. 1, and the system includes:
the system comprises a mounting seat, a cradle head 2 supported by the mounting seat, a panoramic camera 1, a detail camera 3 and a light supplementing device 4 which are arranged on the cradle head, wherein the system is arranged on a vehicle through the mounting seat, and a system main control unit is arranged in the cradle head 2 and is used for completing target snapshot and tracking of a vehicle-mounted camera.
According to the target snapshot and tracking system provided by the embodiment, after the target to be snapshot is identified and selected in a mode that the panoramic camera and the detail camera work cooperatively, the target to be snapshot is locked and tracked, and the detail snapshot effect is improved.
In some of these embodiments, the vehicle may be a law enforcement vehicle.
There is also provided in this embodiment a system for capturing and tracking an object of an in-vehicle camera, and fig. 2 is a block diagram of another system for capturing and tracking an object of an in-vehicle camera according to this embodiment, as shown in fig. 2, including:
the system comprises a mounting seat, a cradle head 2 supported by the mounting seat, a panoramic camera 1, a detail camera 3 and a light supplementing device 4 which are arranged on the cradle head, wherein the system is arranged on a vehicle through the mounting seat, and a system main control unit is arranged in the cradle head 2 and is used for completing target snapshot and tracking of a vehicle-mounted camera.
The panoramic camera 1 comprises a sensor, a stitching unit and a panoramic camera main control unit. The panoramic camera main control unit is connected with the splicing unit. The splicing unit splices the images acquired by the sensors and then transmits the spliced images to the panoramic camera main control unit, and the splicing unit can be realized by adopting an FPGA (Field Programmable Gate Array ) chip. The panoramic camera main control unit can provide various intelligent rules, and according to preset target characteristics, the panoramic camera 1 recognizes the target with the corresponding target characteristics as the target to be captured. For example, the target features are set as people or vehicles, after the panoramic camera 1 recognizes the people or vehicles, the recognized objects are optimized and then used as targets to be captured, and the targets to be captured are sent to the system main control unit for further capturing and tracking. The panoramic camera main control unit can be realized by adopting an SOC (System-on-a-Chip) Chip.
The detail camera 3 comprises a zoom lens and a sensor, wherein the zoom lens can change focal length within a certain range, so that different wide and narrow field angles, images with different sizes and different scene ranges are obtained, the zoom lens is connected with the sensor, the sensor is connected with a system main control unit, and the acquired images are sent to the system main control unit.
The cradle head 2 is internally provided with a system main control unit, a cradle head singlechip, a motor and a gyroscope. The system main control unit is connected with the panoramic camera main control unit to acquire an image of a target to be captured; the cradle head singlechip is connected with the system main control unit to control the motor according to the instruction of the system main control unit, thereby controlling the movement of the detail camera 3 and the light supplementing device 4. The interior of the cradle head 2 can be provided with a plurality of motors, including a horizontal motor and a vertical motor, the motors are respectively connected with the detail camera 3 and the light supplementing device 4, the light supplementing device 4 can be a light supplementing lamp or a flash lamp and the like, and the motors are used for receiving commands of the cradle head singlechip to control the movement of the detail camera 3 and the light supplementing device 4 in the horizontal direction and the movement of the light supplementing device in the vertical direction. The gyroscope is connected with the cradle head single chip microcomputer and is used for monitoring the motion parameters of the vehicle, the motion parameters of the vehicle are sent to the system main control unit through the cradle head single chip microcomputer, and the system main control unit determines the motion state of the vehicle through the motion parameters, such as whether the motion of the vehicle is stable or not. After the system main control unit collects the target to be captured collected by the panoramic camera main control unit, the position of the target to be captured is obtained, the cradle head single-chip microcomputer is controlled according to the position of the target to be captured, the cradle head single-chip microcomputer controls the motor to adjust the position of the detail camera 3, so that the target to be captured is at the central position of an image shot by the detail camera 3, the real-time relative motion state between the target to be captured and the detail camera 3 is detected, and different tracking control strategies are selected to control the detail camera 3 to track and capture the target to be captured based on the real-time relative motion state. The system main control unit can be realized by adopting an SOC chip.
According to the target snapshot and tracking system provided by the embodiment, through the mode that the panoramic camera and the detail camera work cooperatively, after the system main control unit collects the target to be snapshot collected by the panoramic camera main control unit, the position of the target to be snapshot is obtained, the position of the detail camera is adjusted according to the position of the target to be snapshot, so that the real-time relative motion state between the target to be snapshot and the detail camera is further detected, different tracking control strategies are selected to control the detail camera to track the target to be snapshot based on the real-time relative motion state, the target to be snapshot is locked and tracked, and the detail snapshot effect is improved.
In some of these embodiments, the detail camera and the light supplementing device share a rotational axis such that the detail camera vertical rotational angle and the light supplementing device rotational angle are identical. The light supplementing device can be a light supplementing lamp or a flash lamp.
In this embodiment, a method for capturing and tracking a target of a vehicle-mounted camera is provided, fig. 3 is a flowchart of a method for capturing and tracking a target of a vehicle-mounted camera according to this embodiment, and as shown in fig. 3, the flowchart includes the following steps:
In step S302, the panoramic camera is controlled to identify the object to be snap shot. And the system main control unit controls the panoramic camera to identify the target to be snap shot. It should be noted that the panoramic camera may provide various intelligent rules, and according to preset target characteristics, the panoramic camera recognizes that the target with the corresponding target characteristics is the target to be captured. If the target features are set to be people or vehicles, after the panoramic camera identifies the people or vehicles, the identified target objects are optimized and then serve as targets to be captured, and the targets to be captured are sent to the system main control unit for further capturing and tracking.
Step S304, after the panoramic camera determines the target to be captured, the position of the target to be captured is obtained, and the position of the detail camera is adjusted according to the position of the target to be captured. After the panoramic camera determines the target to be captured, the system main control unit enters a detail tracking and capturing mode and acquires the position of the target to be captured. The position of the target to be captured may be calculated by the panoramic camera according to the acquired image of the target to be captured, and the position coordinates of the target to be captured are then sent to the system main control unit. The system main control unit adjusts the position of the detail camera according to the position of the target to be captured so that the detail camera faces the target to be captured and the target to be captured is located at the central position of the image shot by the detail camera.
In one embodiment, after the position of the object to be captured is obtained, the position of the detail camera and the zoom magnification of the detail camera are adjusted according to the position of the object to be captured, so that the object to be captured is located at the central position of the image shot by the detail camera, and the adjusted zoom magnification is kept unchanged in the whole capturing and tracking process. Adjusting the magnification of the zoom of the detail camera may be achieved by setting a multiple of the focal length change of the zoom lens of the detail camera.
Step S306, detecting a real-time relative motion state between the object to be snap shot and the detail camera. The system main control unit detects the real-time relative motion state between the target to be snapped and the detail camera, and can determine the real-time relative motion state between the target to be snapped and the detail camera through the change of the pixel points of the target to be snapped in the two adjacent image frames acquired by the detail camera.
Step S308, selecting different tracking control strategies to control the detail camera to track and capture the target to be captured based on the real-time relative motion state. The system main control unit selects different tracking control strategies to control the detail camera to track and capture the target to be captured based on the real-time relative motion state detected in step S306. After the above steps, the position of the target to be captured is always kept at the center of the picture of the detail camera, and the snapshot of the target is triggered and completed.
Through the above steps, with the panoramic camera and the detail camera working cooperatively, the system main control unit acquires the target to be captured collected by the panoramic camera main control unit, obtains the position of the target to be captured, and adjusts the position of the detail camera according to that position; it then detects the real-time relative motion state between the target to be captured and the detail camera and, based on that state, selects different tracking control strategies to control the detail camera to track and capture the target, so that the target to be captured is locked and tracked and the detail capture effect is improved.
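The overall flow of steps S302 to S308 can be sketched as a simple control loop; the callables passed in below stand for the modules described above and are illustrative placeholders, not functions defined by the patent:

```python
def snapshot_and_track(identify_target, aim_detail_camera, detect_motion_state,
                       track_relative_motion, track_relatively_static, snapshot_done):
    """Top-level loop of the target snapshot and tracking method."""
    target_position = identify_target()    # step S302: panoramic camera finds the target
    if target_position is None:
        return
    aim_detail_camera(target_position)     # step S304: center the target in the detail camera
    while not snapshot_done():
        state = detect_motion_state()      # step S306: pixel-change based state detection
        if state == "relative_motion":     # step S308: pick the matching strategy
            track_relative_motion()        # PID on the image-plane position deviation
        else:
            track_relatively_static()      # gyroscope-based reverse compensation
```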
In this embodiment, a method for detecting a real-time relative motion state between a target to be captured and a detail camera is provided, and fig. 4 is a flowchart of a method for detecting a real-time relative motion state between a target to be captured and a detail camera in this embodiment, as shown in fig. 4, where the flowchart includes the following steps:
S402, keeping the zoom magnification of the detail camera unchanged. In the foregoing embodiment, after the position of the object to be captured is acquired, the position of the detail camera and the zoom magnification of the detail camera are adjusted according to the position of the object to be captured, so that the object to be captured is located at the central position of the image captured by the detail camera. The zoom magnification after adjustment is kept unchanged, and the real-time relative motion state between the target to be captured and the detail camera is judged in the subsequent steps.
S404, when the fluctuation of the output data of the gyroscope is not large, recording the pixel point size M1 of the picture occupied by the target to be snap shot selected in the current frame image of the detail camera; the gyroscope is used for monitoring the motion condition of the vehicle, and when the fluctuation of output data of the gyroscope is not large, the current motion is stable, so that the influence of new shake and other factors on the judgment of the size of a subsequent target is reduced.
S406, updating the pixel point size M2 of the picture occupied by the same target to be snap-shot in the monitoring image in real time according to the sampling time T;
S408, comparing the contour size change of the same target to be captured at the two sampling moments, and judging whether the pixel point change exceeds a given threshold N, i.e. the relation between |M2-M1| and N; the value of N can be an empirical value and is related to the parameters of the camera.
S410, if |M2-M1| > N, judging that the real-time relative motion state between the selected target to be captured and the detail camera is relative motion; otherwise, judging that the real-time relative motion state between the selected target to be captured and the detail camera is relatively static. |M2-M1| > N means that the contour size of the target to be captured changes greatly between the two frame images, indicating that the target to be captured and the detail camera are in relative motion.
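A compact sketch of the decision in steps S404 to S410, assuming a helper already returns the number of pixels the target occupies in a frame (names are illustrative):

```python
def relative_motion_state(pixels_m1, pixels_m2, threshold_n):
    """Compare the target's pixel footprint at the previous (M1) and current (M2)
    sampling moments; a change above the empirical threshold N means relative motion."""
    if abs(pixels_m2 - pixels_m1) > threshold_n:
        return "relative_motion"
    return "relatively_static"
```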
In some embodiments, when the real-time relative motion state between the object to be captured and the detail camera is relative motion, the detail camera is controlled to track and capture the object to be captured according to the following tracking control strategy, so as to ensure the capturing effect of the object:
Calculating a position coordinate value of the object to be captured in an image frame of the detail camera, calculating a position deviation value of the object to be captured at the i-th moment according to the position coordinate values of the object to be captured in the image frames of the detail camera acquired at the i-th moment and the (i-1)-th moment, obtaining the tracking speed of the detail camera according to the position deviation value, and adjusting the movement speed of the detail camera according to the tracking speed.
In some embodiments, when the real-time relative motion state between the object to be captured and the detail camera is relative motion, the detail camera is controlled to track and capture the object to be captured according to the following tracking control strategy, so as to ensure the capturing effect of the object:
in the process of the target to be captured moving, first the horizontal and vertical coordinate position values of the target to be captured in the previous and current frame images are calculated by a three-dimensional positioning method, and the position deviation values of the target to be captured between the current sampling moment and the previous moment are obtained; then the horizontal tracking speed V_x and the vertical tracking speed V_y are respectively calculated according to the following formulas:
V_x = K_p*e_ix + K_i*∑e_ix + K_d*(e_ix - e_(i-1)x)
V_y = K_p*e_iy + K_i*∑e_iy + K_d*(e_iy - e_(i-1)y)
wherein e_ix is the horizontal position deviation value of the target to be captured at the i-th moment, e_iy is the vertical position deviation value of the target to be captured at the i-th moment, e_(i-1)x is the horizontal position deviation value of the target to be captured at the (i-1)-th moment, e_(i-1)y is the vertical position deviation value of the target to be captured at the (i-1)-th moment, and K_p, K_i, K_d are speed adjustment parameters.
The system main control unit performs acceleration and deceleration control on the detail camera according to the horizontal tracking speed V_x and the vertical tracking speed V_y, generating speed curves from 0 to the target speeds V_x and V_y according to a selected curve model such as an S-curve, thereby adjusting the detail camera position in real time so that the target can remain centered in the detail camera image without being lost.
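The patent does not fix a particular curve model; one common choice for a smooth 0-to-target speed ramp is a smoothstep-style S-curve, sketched below as an assumption rather than the exact profile used:

```python
def s_curve_speed(target_speed, elapsed, ramp_time):
    """Smooth ramp from 0 to target_speed with zero slope at both ends (S-shaped profile)."""
    if elapsed >= ramp_time:
        return target_speed
    x = elapsed / ramp_time                           # normalised progress in [0, 1]
    return target_speed * (3 * x ** 2 - 2 * x ** 3)   # smoothstep polynomial
```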
In some embodiments, when the real-time relative motion state between the object to be captured and the detail camera is relatively static, the detail camera is controlled to track and capture the object to be captured according to the following tracking control strategy, so as to ensure the capturing effect of the object:
and acquiring the motion speed of the detail camera and the angular speed of the gyroscope at the i-th moment, calculating the target speed of the detail camera according to the motion speed of the detail camera and the angular speed of the gyroscope, and adjusting the motion speed of the detail camera according to the target speed.
In some embodiments, when the real-time relative motion state between the object to be captured and the detail camera is relatively static, the detail camera is controlled to track and capture the object to be captured according to the following tracking control strategy, so as to ensure the capturing effect of the object:
A speed or position real-time reverse compensation mode is adopted to eliminate the situation that the target to be captured leaves the picture due to vehicle vibration and the like. First, the angular velocity values V_Gx and V_Gy output in real time by the X axis and Y axis of the gyroscope are collected, together with the current horizontal speed V_Cx and vertical speed V_Cy of the detail camera; the horizontal movement speed of the detail camera is controlled according to the X-axis angular velocity value of the gyroscope, the vertical movement of the detail camera is controlled according to the Y-axis angular velocity value of the gyroscope, and the current horizontal target speed value V_Tx and vertical target speed value V_Ty of the detail camera are updated in real time according to the following formulas:
V_Tx = V_Cx - V_Gx
V_Ty = V_Cy - V_Gy
This strategy directly calculates the tracking speed of the detail camera from the gyroscope speed information already used for anti-shake, and does not need to compute the tracking speed in real time as in the case where the real-time relative motion state between the target to be captured and the detail camera is relative motion in the foregoing embodiment; this saves computation and is convenient and simple.
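A minimal sketch of this reverse compensation, assuming the gyroscope angular velocities have already been converted into the same units as the cradle-head motor speeds:

```python
def gyro_compensated_speeds(v_cx, v_cy, v_gx, v_gy):
    """Subtract the gyroscope-measured disturbance (V_Gx, V_Gy) from the detail
    camera's current speeds (V_Cx, V_Cy) so the target stays centered while the
    target and the camera are relatively static."""
    v_tx = v_cx - v_gx   # horizontal target speed V_Tx
    v_ty = v_cy - v_gy   # vertical target speed V_Ty
    return v_tx, v_ty
```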
In this embodiment, there is also provided a target capturing and tracking method for an in-vehicle camera, and fig. 5 is a flowchart of another target capturing and tracking method for an in-vehicle camera according to this embodiment, as shown in fig. 5, where the flowchart includes the following steps:
S502, controlling the panoramic camera to identify a target to be captured and identify an obstacle or a curve on a driving path;
s504, judging whether an obstacle or a curve exists on the driving path; judging whether an obstacle exists on a vehicle driving path or a large turn exists on the vehicle driving path according to images acquired by the panoramic camera and the detail camera;
S506, if an obstacle or a curve exists, predicting the motion of the target to be captured and adopting a tracking control strategy with advance compensation; if no obstacle or curve exists, further judging the relative motion state between the target to be captured and the detail camera;
s508, when the real-time relative motion state between the target to be captured and the detail camera is relative motion, the detail camera is controlled to track and capture the target to be captured by adopting a tracking control strategy during the relative motion; specific tracking control strategies have been described in detail in the above embodiments and will not be described here.
S510, when the real-time relative motion state between the target to be captured and the detail camera is relatively static, the detail camera is controlled by adopting a tracking control strategy in the relatively static state to track and capture the target to be captured; specific tracking control strategies have been described in detail in the above embodiments and will not be described here.
In some of these embodiments, curve and obstacle recognition is introduced for curves or obstacle avoidance scenarios, and the following tracking snapshot strategy is employed:
In such scenes the instantaneous speed and acceleration changes increase, so the position deviation e_k of the target to be captured between the previous and current frame images, and the instantaneous output speed of the gyroscope, become so large that the tracking speed of the detail camera calculated as above is too large and may exceed the maximum speed V_max allowed by the device in the horizontal or vertical direction; moreover, limited by the maximum acceleration of the detail camera, the camera cannot accelerate to the required target speed instantaneously, the whole acceleration process is too slow, tracking lags behind, and the target to be captured is lost. Therefore, target motion trajectory prediction is introduced and compensation control is performed in advance: the trajectory and motion direction of the moving target are recognized ahead of time, and the cradle head is controlled to move in advance according to the target motion direction. The specific process is as follows:
first, an obstacle on the road where the vehicle is traveling is identified by images acquired by the panoramic camera and the detail camera, and the current distance S of the obstacle from the vehicle is measured by means of the binocular distance principle.
According to the formula S = V*t, the time t = S/V for reaching the obstacle is calculated, where t is the time available for the detail camera to perform tracking speed compensation before reaching the obstacle or the turn, and V is the vehicle speed.
The motion parameters of the detail camera under the advance compensation strategy are then calculated. Considering that the acceleration cannot change suddenly, and in order to ensure that the speed can be rapidly increased to a large enough compensation speed to complete the compensation of the gyroscope speed when bypassing an obstacle or the like, the cradle head acceleration should just reach the maximum acceleration A_max allowed by the detail camera at the moment the vehicle reaches the obstacle or encounters the large turn. The compensation acceleration and the compensation speed of the detail camera are calculated according to the following formulas:
jerk = A_max/t
a_k = a_(k-1) + jerk*T
v_k = v_(k-1) + a_k*T
k = t/T
v_(k-1) = K_p*e_(k-1) + K_i*∑e_(k-1) + K_d*(e_(k-1) - e_(k-2))
wherein A_max is the maximum acceleration allowed by the detail camera, t is the time for the vehicle to reach the obstacle, jerk is an intermediate variable, a_k is the compensation acceleration of the detail camera, v_k is the compensation speed of the detail camera, T is the discrete sampling period, K_p, K_i, K_d are speed adjustment parameters, and e_k is the position deviation value of the target to be captured at the k-th moment.
The parameters are recorded and stored in real time, and the detail camera is controlled to move according to the motion parameters, so that the compensation effect can be achieved.
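Putting the distance measurement, the time t = S/V and the jerk-limited ramp together, the advance-compensation parameters could be generated as in the sketch below; the function name and the way the initial speed v_(k-1) is supplied are assumptions for illustration:

```python
def advance_compensation_profile(a_max, distance_s, vehicle_speed, sample_period, v_init):
    """Jerk-limited acceleration/speed profile so that the cradle head reaches the
    maximum allowed acceleration A_max exactly when the vehicle reaches the obstacle."""
    t = distance_s / vehicle_speed     # time until the obstacle or curve is reached
    jerk = a_max / t                   # jerk = A_max / t
    steps = int(t / sample_period)     # k = t / T discrete control steps
    a, v = 0.0, v_init                 # v_init: v_(k-1) from the PID-style formula above
    profile = []
    for _ in range(steps):
        a += jerk * sample_period      # a_k = a_(k-1) + jerk * T
        v += a * sample_period         # v_k = v_(k-1) + a_k * T
        profile.append((a, v))
    return profile
```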
In the embodiment, aiming at complex scenes such as large-amplitude turning or obstacle avoidance, tracking compensation is performed by adopting an advanced motion compensation strategy, and the tracking snapshot effect and scene applicability under the scenes are improved.
In some embodiments, for a scene with low illumination at night, for a clearer target to be captured, a tracking capture strategy with cooperative processing of the following detail camera and the light supplementing device is adopted:
Firstly, the illumination condition of the current scene is identified by means of a light-sensing device or the like; in a low-illumination night scene, the objects in the 360-degree scene seen by the panoramic camera are not clear. At this time, the intelligent processing module of the panoramic camera can first recognize only the rough contour information of each target, obtain the coordinate position of each target contour in the panoramic camera, and then obtain the horizontal position coordinate p_1 and the vertical position coordinate p_2 of each target contour in the detail camera according to a coordinate calibration algorithm, and control the detail camera to be precisely positioned horizontally at the target position (p_1+θ, p_2), i.e. control the detail camera to move to the target position. The angle θ is the included angle between the light supplementing device and the detail camera. After the detail camera moves to the target position, the light supplementing device is aligned with the target identified by the panoramic camera and is turned on, so that the panoramic camera can capture a clear target. After the light supplementing operation for each target has been completed in turn, the target of interest is accurately selected from the multiple candidate targets for snapshot and storage: after the supplementing light is aimed at the preferred target, the panoramic camera is triggered to complete the snapshot, and the detail camera is immediately controlled to move reversely by θ degrees from the current coordinate position so that the detail camera is aimed at the preferred target to be captured; tracking and capturing of the preferred target under low illumination are then completed according to the tracking and capturing steps described above.
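A rough sketch of the coordinate handling in this low-illumination strategy is given below; the callables for moving the cradle head, switching the light and triggering the panoramic snapshot are hypothetical placeholders:

```python
def illuminate_then_lock(p1, p2, theta, move_ptz, switch_light, panorama_snapshot):
    """Aim the light supplement (offset from the detail camera by theta) at a dim
    target, let the panoramic camera capture it, then swing back by theta so the
    detail camera itself is aimed at the target."""
    move_ptz(p1 + theta, p2)   # detail camera at (p1 + theta, p2) => illuminator points at target
    switch_light(True)         # turn on the light supplementing device
    panorama_snapshot()        # panoramic camera captures the now-lit target
    move_ptz(p1, p2)           # reverse by theta: the detail camera is now aimed at the target
```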
In the embodiment, aiming at a complex scene with low illumination, a detail camera and a light supplementing device cooperative control strategy is adopted to carry out tracking compensation, so that tracking snapshot effect and scene applicability in the scene are improved.
In some embodiments, during the process of the detail camera tracking the preferred target to be captured in real time, the lens group inside the detail camera is controlled to translate for optical anti-shake, so that during tracking and capturing the target can always be kept at the center of the picture and the image quality remains stable without jitter. The lens of the detail camera is provided with an optical anti-shake member. When the system main control unit detects that the gyroscope data change, the compensation distance of the optical anti-shake member in the lens is calculated according to the amount of change of the gyroscope data and the current focal length of the detail camera, so as to drive the anti-shake lens and keep the final imaging stable.
When shake of the detail camera is detected, the moving distance of the anti-shake lens of the detail camera is calculated according to the following formula:
D = f*tan(α)/SR
wherein f is the focal length of the detail camera, D is the moving distance of the anti-shake device, α is the shake angle of the detail camera, and SR is the image stabilization sensitivity. f is the focal length at the moment the image pickup apparatus shakes; the image stabilization sensitivity SR is the distance that the intersection point of the optical axis of the OIS lens group and the focal plane moves on the focal plane for every 1 mm of movement of the OIS lens group, and SR is a constant once the lens is fixed. The value of f/SR is therefore a set of preset data related to the focal length.
Further, α is the shake angle when the imaging apparatus, i.e. the detail camera, shakes, i.e. the shake data obtained from the gyroscope. In most cases α is smaller than 0.2°, so by the small-angle approximation
tan(α) ≈ α
and therefore
D = f*α/SR
The anti-shake lens is then controlled to move by the distance D in the direction opposite to the shake direction, and a stable image is output.
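As a small numerical sketch of this anti-shake calculation (the function name and example values are illustrative):

```python
import math

def ois_lens_shift(focal_length_mm, shake_angle_deg, stabilization_sensitivity):
    """Moving distance D of the anti-shake lens group, D = f * tan(alpha) / SR; for the
    small shake angles discussed above this is effectively D = f * alpha / SR."""
    alpha = math.radians(shake_angle_deg)
    return focal_length_mm * math.tan(alpha) / stabilization_sensitivity

# Example: f = 25 mm, shake angle 0.1 degrees, SR = 1.2
# ois_lens_shift(25.0, 0.1, 1.2) -> about 0.036 mm, applied opposite to the shake direction
```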
According to this embodiment, during the process of the detail camera tracking the preferred target to be captured in real time, anti-shake is performed by controlling the lens inside the detail camera to translate, so that the target to be captured can always be kept at the center of the picture during tracking and capturing, and the image quality remains stable without jitter.
It should be noted that the steps illustrated in the above-described flow or flow diagrams of the figures may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flow diagrams, in some cases, the steps illustrated or described may be performed in an order other than that illustrated herein.
In this embodiment, a target capturing and tracking device for a vehicle-mounted camera is further provided, where the device is used to implement the foregoing embodiments and preferred embodiments, and the description is omitted, and the device includes:
the identification module is used for controlling the panoramic camera to identify the target to be snap shot;
the position adjustment module is used for acquiring the position of the target to be captured after the panoramic camera determines the target to be captured, and adjusting the position of the detail camera according to the position of the target to be captured;
the detection module is used for detecting the real-time relative motion state between the target to be captured and the detail camera;
and the tracking snapshot module is used for selecting different tracking control strategies to control the detail camera to track and snapshot the target to be snapshot based on the real-time relative motion state.
The above-described respective modules may be functional modules or program modules, and may be implemented by software or hardware. For modules implemented in hardware, the various modules described above may be located in the same processor; or the above modules may be located in different processors in any combination.
There is also provided in this embodiment an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
It should be noted that, specific examples in this embodiment may refer to examples described in the foregoing embodiments and alternative implementations, and are not described in detail in this embodiment.
In addition, in combination with the target capturing and tracking method for the vehicle-mounted camera provided in the above embodiment, a storage medium may be further provided in this embodiment to implement the method. The storage medium has a computer program stored thereon; the computer program, when executed by a processor, implements any of the methods of the embodiments described above for target capture and tracking for an in-vehicle camera.
It should be understood that the specific embodiments described herein are merely illustrative of this application and are not intended to be limiting. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present application, are within the scope of the present application in light of the embodiments provided herein.
It is evident that the drawings are only examples or embodiments of the present application, from which the present application can also be adapted to other similar situations by a person skilled in the art without the inventive effort. In addition, it should be appreciated that while the development effort might be complex and lengthy, it would nevertheless be a routine undertaking of design, fabrication, or manufacture for those of ordinary skill having the benefit of this disclosure, and thus should not be construed as an admission of insufficient detail.
The term "embodiment" in this application means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive. It will be clear or implicitly understood by those of ordinary skill in the art that the embodiments described in this application can be combined with other embodiments without conflict.
The above examples only represent a few embodiments of the present application, which are described in more detail and are not to be construed as limiting the scope of the patent. It should be noted that it would be apparent to those skilled in the art that various modifications and improvements could be made without departing from the spirit of the present application, which would be within the scope of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (12)

1. A method for capturing and tracking targets for an in-vehicle camera, the in-vehicle camera comprising a panoramic camera and a detail camera, the method comprising,
Controlling the panoramic camera to identify a target to be snap shot;
after the panoramic camera determines the target to be captured, acquiring the position of the target to be captured, and adjusting the position of the detail camera according to the position of the target to be captured;
detecting a real-time relative motion state between the target to be captured and the detail camera;
based on the real-time relative motion state, selecting different tracking control strategies to control the detail camera to track and snapshot the target to be snapshot;
after the position of the detail camera is adjusted according to the position of the target to be snap shot, the method further comprises,
detecting whether a curve or an obstacle exists in a motion path of a vehicle provided with the vehicle-mounted camera;
if so, predicting the motion path of the vehicle, and performing motion compensation on the detail camera according to a prediction result and the maximum acceleration allowed by the detail camera;
when an obstacle exists in the motion path of the vehicle, predicting the motion path of the vehicle, and performing motion compensation on the detail camera according to a prediction result and the maximum acceleration allowed by the detail camera, wherein the motion compensation comprises the steps of identifying the obstacle in the motion path of the vehicle, detecting the distance between the vehicle and the obstacle, and calculating the time for the vehicle to reach the obstacle according to the motion speed of the vehicle; calculating the compensation acceleration and the compensation speed of the detail camera according to the time of the vehicle reaching the obstacle and the position deviation value of the target to be captured, and adjusting the motion parameters of the detail camera according to the compensation acceleration and the compensation speed;
The position deviation value of the target to be captured is a position coordinate value in an image frame of a detail camera acquired at the ith moment and the (i-1) th moment of the target to be captured, and the calculated position deviation value of the ith moment of the target to be captured;
calculating the compensation acceleration and the compensation speed of the detail camera according to the time of the vehicle reaching the obstacle and the position deviation value of the target to be snap shot, including,
calculating the compensation acceleration and the compensation speed of the detail camera according to the following formula;
jerk = Amax/t
a_k = a_(k-1) + jerk*T
v_k = v_(k-1) + a_k*T
k = t/T
v_(k-1) = K_p*e_(k-1) + K_i*∑e_(k-1) + K_d*(e_(k-1) - e_(k-2))
wherein the formula for the compensation speed of the detail camera based on the speed adjustment parameters is used to calculate the initial value of the compensation speed of the detail camera; Amax is the maximum acceleration allowed by the detail camera, t is the time for the vehicle to reach the obstacle, jerk is an intermediate variable, a_k is the compensation acceleration of the detail camera, v_k is the compensation speed of the detail camera, T is the discrete sampling period, K_p, K_i, K_d are speed adjustment parameters, e_k is the position deviation value of the target to be captured at the k-th moment, and k is an integer greater than 2.
2. The method of claim 1, wherein the detecting of the real-time relative motion state between the target to be captured and the detail camera comprises,
determining the real-time relative motion state between the target to be captured and the detail camera according to the change of the pixel points of the target to be captured in two adjacent image frames acquired by the detail camera.
3. The method according to claim 2, wherein the determining of the real-time relative motion state between the target to be captured and the detail camera according to the change of the pixel points of the target to be captured in the two adjacent image frames acquired by the detail camera comprises,
when the pixel point change of the target to be captured in two adjacent image frames acquired by the detail camera is larger than a preset first threshold value, determining that the real-time relative motion state is relative motion;
and when the pixel point change of the target to be captured in two adjacent image frames acquired by the detail camera is smaller than or equal to the first threshold value, determining that the real-time relative motion state is relatively static.
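As a rough illustration of the decision in claims 2 and 3, the sketch below compares the target region in two adjacent detail-camera frames. The claims do not fix how the pixel-point change is measured; the mean absolute grayscale difference used here is an assumption, as are the function and parameter names.

```python
import numpy as np

def relative_motion_state(prev_crop, curr_crop, first_threshold):
    """Classify the real-time relative motion state between the target to be
    captured and the detail camera from two adjacent image frames.

    prev_crop, curr_crop -- grayscale crops of the target region in two
                            adjacent frames acquired by the detail camera
    first_threshold      -- the preset first threshold of claim 3
    """
    diff = np.abs(curr_crop.astype(np.float32) - prev_crop.astype(np.float32))
    change = float(diff.mean())  # assumed measure of pixel-point change
    return "relative motion" if change > first_threshold else "relatively stationary"
```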
4. The method of claim 3, wherein, when the real-time relative motion state is relative motion, the selecting of different tracking control strategies to control the detail camera to track and capture the target to be captured based on the real-time relative motion state comprises,
calculating a position coordinate value of the target to be captured in an image frame of the detail camera, calculating a position deviation value of the target to be captured at the i-th moment according to the position coordinate values of the target to be captured in the image frames of the detail camera acquired at the i-th moment and the (i-1)-th moment, obtaining a tracking speed of the detail camera according to the position deviation value, and adjusting the movement speed of the detail camera according to the tracking speed.
5. The method of claim 4, wherein the obtaining of the tracking speed of the detail camera according to the position deviation value comprises,
the tracking speed of the detail camera comprises a horizontal tracking speed V_x and a vertical tracking speed V_y, and the horizontal tracking speed V_x and the vertical tracking speed V_y are respectively calculated according to the following formulas;
V_x = K_p * e_ix + K_i * Σe_ix + K_d * (e_ix - e_(i-1)x)
V_y = K_p * e_iy + K_i * Σe_iy + K_d * (e_iy - e_(i-1)y)
wherein e_ix is the horizontal position deviation value of the target to be captured at the i-th moment, e_iy is the vertical position deviation value of the target to be captured at the i-th moment, e_(i-1)x is the horizontal position deviation value of the target to be captured at the (i-1)-th moment, e_(i-1)y is the vertical position deviation value of the target to be captured at the (i-1)-th moment, i is an integer greater than 1, and K_p, K_i and K_d are speed adjusting parameters.
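The two formulas above are a discrete PID-style controller. A minimal Python sketch follows; the accumulated sums Σe_ix and Σe_iy are passed in by the caller, and all names are illustrative rather than taken from the patent.

```python
def tracking_speed(e_ix, e_iy, e_prev_x, e_prev_y, sum_ex, sum_ey, kp, ki, kd):
    """Horizontal and vertical tracking speeds of the detail camera (claim 5 sketch).

    e_ix, e_iy         -- horizontal/vertical position deviation at the i-th moment
    e_prev_x, e_prev_y -- deviations at the (i-1)-th moment
    sum_ex, sum_ey     -- accumulated deviations up to the i-th moment
    kp, ki, kd         -- speed adjusting parameters K_p, K_i, K_d
    """
    v_x = kp * e_ix + ki * sum_ex + kd * (e_ix - e_prev_x)
    v_y = kp * e_iy + ki * sum_ey + kd * (e_iy - e_prev_y)
    return v_x, v_y
```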
6. The method of claim 3, wherein the vehicle-mounted camera further comprises a gyroscope, and wherein, when the real-time relative motion state is relatively stationary, the selecting of different tracking control strategies to control the detail camera to track and capture the target to be captured based on the real-time relative motion state comprises,
acquiring the motion speed of the detail camera and the angular speed of the gyroscope at the i-th moment, calculating a target speed of the detail camera according to the motion speed of the detail camera and the angular speed of the gyroscope, and adjusting the motion speed of the detail camera according to the target speed.
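Claim 6 states only that the target speed is calculated from the detail camera's motion speed and the gyroscope's angular speed; it does not give the combination. The sketch below assumes the simplest reading, cancelling the shake measured by the gyroscope, purely for illustration.

```python
def target_speed_when_stationary(camera_speed_i, gyro_angular_speed_i):
    """Assumed target speed of the detail camera when the target to be captured
    is relatively stationary: keep the line of sight steady by subtracting the
    gyroscope-measured rotation rate (an interpretation, not the patent's formula).
    """
    return camera_speed_i - gyro_angular_speed_i
```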
7. The method of claim 1, further comprising, after the panoramic camera determines the target to be captured,
identifying a target contour of the target to be captured in the image acquired by the panoramic camera, and obtaining a first position coordinate of the target contour in the image acquired by the panoramic camera;
obtaining a second position coordinate of the target contour in the image acquired by the detail camera according to a preset position coordinate corresponding relation between the panoramic camera and the detail camera;
obtaining a target position coordinate according to the second position coordinate and a first included angle, wherein the first included angle is an included angle between a light supplementing lamp and the detail camera;
and controlling the detail camera to move to the target position coordinates.
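A sketch of the coordinate hand-off in claim 7 follows. Using a 3x3 homography H for the preset position coordinate correspondence, and applying the first included angle as a pan offset, are both assumptions of this sketch; the claim only requires that such a correspondence and angle exist.

```python
import numpy as np

def detail_camera_target_position(first_coord, H, first_included_angle):
    """Map the target contour position from the panoramic image to the detail
    camera and adjust it for the light supplementing lamp (claim 7 sketch).

    first_coord          -- (x, y) of the target contour in the panoramic image
    H                    -- assumed 3x3 homography standing in for the preset
                            position coordinate correspondence
    first_included_angle -- angle between the light supplementing lamp and the
                            detail camera, applied here as a pan offset (assumed)
    """
    x, y = first_coord
    u, v, w = H @ np.array([x, y, 1.0])
    second_x, second_y = u / w, v / w      # second position coordinate
    return second_x + first_included_angle, second_y  # assumed target position
```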
8. The method of claim 1, further comprising,
when shake of the detail camera is detected, calculating the moving distance of an anti-shake device of the detail camera according to the following formula;
[formula presented as an image in the original publication; it gives the moving distance D of the anti-shake device in terms of the focal length f, the shake angle α, and the image stabilization sensitivity SR]
wherein f is the focal length of the detail camera, D is the moving distance of the anti-shake device, alpha is the shake angle of the detail camera, and SR is the image stabilization sensitivity;
and controlling the anti-shake device to move reversely according to the moving distance of the anti-shake device.
9. A target snapshot and tracking device for a vehicle-mounted camera, the device comprising:
the identification module is used for controlling the panoramic camera to identify a target to be captured;
the position adjustment module is used for acquiring the position of the target to be captured after the panoramic camera determines the target to be captured, and adjusting the position of the detail camera according to the position of the target to be captured;
the detection module is used for detecting the real-time relative motion state between the target to be captured and the detail camera;
the tracking snapshot module is used for selecting different tracking control strategies to control the detail camera to track and capture the target to be captured based on the real-time relative motion state;
the obstacle avoidance module is used for detecting whether a curve or an obstacle exists in the motion path of the vehicle provided with the vehicle-mounted camera after the position adjustment module adjusts the position of the detail camera according to the position of the target to be captured; if so, predicting the motion path of the vehicle, and performing motion compensation on the detail camera according to a prediction result and the maximum acceleration allowed by the detail camera; when an obstacle exists in the motion path of the vehicle, predicting the motion path of the vehicle, and performing motion compensation on the detail camera according to a prediction result and the maximum acceleration allowed by the detail camera, wherein the motion compensation comprises the steps of identifying the obstacle in the motion path of the vehicle, detecting the distance between the vehicle and the obstacle, and calculating the time for the vehicle to reach the obstacle according to the motion speed of the vehicle; calculating the compensation acceleration and the compensation speed of the detail camera according to the time for the vehicle to reach the obstacle and the position deviation value of the target to be captured, and adjusting the motion parameters of the detail camera according to the compensation acceleration and the compensation speed;
the position deviation value of the target to be captured at the i-th moment is calculated from the position coordinate values of the target to be captured in the image frames of the detail camera acquired at the i-th moment and the (i-1)-th moment;
the calculating of the compensation acceleration and the compensation speed of the detail camera according to the time for the vehicle to reach the obstacle and the position deviation value of the target to be captured includes,
calculating the compensation acceleration and the compensation speed of the detail camera according to the following formula;
jerk = Amax / t
a_k = a_(k-1) + jerk * T
v_k = v_(k-1) + a_k * T
k = t / T
v_(k-1) = K_p * e_(k-1) + K_i * Σe_(k-1) + K_d * (e_(k-1) - e_(k-2))
wherein the formula for v_(k-1), which is based on the speed adjusting parameters, is used to calculate the initial value of the compensation speed of the detail camera; Amax is the maximum acceleration allowed by the detail camera, t is the time for the vehicle to reach the obstacle, jerk is an intermediate variable, a_k is the compensation acceleration of the detail camera, v_k is the compensation speed of the detail camera, T is a discrete sampling period, K_p, K_i and K_d are speed adjusting parameters, e_k is the position deviation value of the target to be captured at the k-th moment, and k is an integer greater than 2.
10. A target snapshot and tracking system for a vehicle-mounted camera, the system comprising,
the vehicle-mounted camera, wherein the vehicle-mounted camera comprises a mounting seat, a pan-tilt head supported by the mounting seat, a panoramic camera and a detail camera arranged on the pan-tilt head, and a control unit arranged inside the pan-tilt head, and the control unit is used for executing the target snapshot and tracking method for a vehicle-mounted camera according to any one of claims 1 to 8.
11. An electronic device, comprising a memory and a processor, wherein a computer program is stored in the memory, and the processor is arranged to run the computer program to perform the target snapshot and tracking method for a vehicle-mounted camera according to any one of claims 1 to 8.
12. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the target snapshot and tracking method for a vehicle-mounted camera according to any one of claims 1 to 8.
CN202111191031.6A 2021-10-13 2021-10-13 Target snapshot and tracking method and device for vehicle-mounted camera Active CN114071013B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111191031.6A CN114071013B (en) 2021-10-13 2021-10-13 Target snapshot and tracking method and device for vehicle-mounted camera

Publications (2)

Publication Number Publication Date
CN114071013A CN114071013A (en) 2022-02-18
CN114071013B true CN114071013B (en) 2023-06-20

Family

ID=80234659

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111191031.6A Active CN114071013B (en) 2021-10-13 2021-10-13 Target snapshot and tracking method and device for vehicle-mounted camera

Country Status (1)

Country Link
CN (1) CN114071013B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006004188A (en) * 2004-06-17 2006-01-05 Daihatsu Motor Co Ltd Obstacle recognition method and obstacle recognition device
CN106705992A (en) * 2015-11-12 2017-05-24 北京自动化控制设备研究所 Biaxial optical fiber inertial navigation system rapid self-calibration self-alignment method
CN111372037A (en) * 2018-12-25 2020-07-03 杭州海康威视数字技术股份有限公司 Target snapshot system and method
CN112644488A (en) * 2020-12-30 2021-04-13 清华大学苏州汽车研究院(吴江) Adaptive cruise system

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7650253B2 (en) * 2008-05-08 2010-01-19 L-3 Communications Corporation Accelerometer and method for error compensation
CN101598559A (en) * 2008-06-05 2009-12-09 广东电子工业研究院有限公司 The location compensation method of Vehicular continuous navigation device and the optimized Algorithm of locator data
CN206650789U (en) * 2017-04-24 2017-11-17 福州大学至诚学院 A kind of automatic energy saving system based on video monitor
CN108881703B (en) * 2017-05-09 2020-07-21 杭州海康威视数字技术股份有限公司 Anti-shake control method and device
CN109151375B (en) * 2017-06-16 2020-07-24 杭州海康威视数字技术股份有限公司 Target object snapshot method and device and video monitoring equipment
CN107878453B (en) * 2017-11-07 2019-07-30 长春工业大学 A kind of automobile emergency collision avoidance integral type control method for hiding dynamic barrier
CN110293967B (en) * 2019-05-21 2020-08-07 重庆长安汽车股份有限公司 Low-speed active safety execution control method and system for automobile
CN112683269B (en) * 2020-12-07 2022-05-03 电子科技大学 MARG attitude calculation method with motion acceleration compensation


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant