CN111179305B - Object position estimation method and object position estimation device thereof - Google Patents
- Publication number
- CN111179305B (Application CN201811344830.0A)
- Authority
- CN
- China
- Prior art keywords
- coordinate value
- picture
- later
- previous
- position estimation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/04—Synchronising
Abstract
The invention discloses an object position estimation method with a time stamp alignment function, applied to an object position estimation device. The object position estimation method comprises: obtaining a plurality of first pictures of a first camera; setting a first preset time point; taking out, from the plurality of first pictures, a first previous picture and a first later picture closest to the first preset time point; obtaining a first previous coordinate value of an object in the first previous picture and a first later coordinate value of the object in the first later picture; and calculating a first estimated coordinate value of the object at the first preset time point by using the first previous coordinate value and the first later coordinate value. The invention overcomes the time stamp gap of metadata between different cameras and effectively improves the accuracy of object tracking.
Description
Technical Field
The present invention relates to an object position estimation method and an object position estimation device, and more particularly, to an object position estimation method with a time stamp alignment function and an object position estimation device thereof.
Background
With the advancement of technology, multi-camera surveillance systems are widely deployed over large areas and environments. Because the field of view of a single camera cannot cover the entire monitoring area, a traditional multi-camera surveillance system points each camera at a different region, and when tracking an object, the object's moving track within the monitoring area is judged from the surveillance images of the different cameras. To obtain an accurate moving track, a traditional multi-camera surveillance system sets time synchronization for all cameras, so that each camera acquires a surveillance picture at the required time point. However, the time point and capture frequency at which each camera acquires its surveillance frames are affected by hardware configuration and the network environment, so a gap remains between the metadata time stamps of different cameras. The larger the time stamp gap, the poorer the accuracy of object tracking. How to overcome this defect and design an object tracking method that precisely aligns the metadata of objects from multiple cameras is therefore an important development goal of the surveillance industry.
Disclosure of Invention
The present invention relates to an object position estimation method with a time stamp alignment function and an object position estimation apparatus thereof.
The invention discloses an object position estimation method with a time stamp alignment function, which comprises: obtaining a plurality of first pictures of a first camera; setting a first preset time point; taking out, from the plurality of first pictures, a first previous picture and a first later picture closest to the first preset time point; obtaining a first previous coordinate value of an object in the first previous picture and a first later coordinate value of the object in the first later picture; and calculating a first estimated coordinate value of the object at the first preset time point by using the first previous coordinate value and the first later coordinate value.
The invention also discloses an object position estimation device with the time stamp alignment function, which comprises a receiver and a processor. The receiver acquires the pictures generated by the cameras. The processor, electrically connected to the receiver, obtains a plurality of first pictures of the first camera, sets a first preset time point, takes out from the plurality of first pictures a first previous picture and a first later picture closest to the first preset time point, obtains a first previous coordinate value of an object in the first previous picture and a first later coordinate value of the object in the first later picture, and calculates a first estimated coordinate value of the object at the first preset time point by using the first previous coordinate value and the first later coordinate value.
The fields of view of the cameras may partially overlap or not overlap at all, and time synchronization is set in advance. The object position estimation method and the object position estimation device set a plurality of preset time points according to a specific frequency, obtain the metadata of each object in the pictures captured before and after each camera's preset time points, derive the object metadata at the preset time points by interpolation or another operation function, and thereby estimate time-aligned object metadata. The invention overcomes the time stamp gap of metadata between different cameras and effectively improves the accuracy of object tracking.
Drawings
FIG. 1 is a functional block diagram of an object position estimation device according to an embodiment of the invention.
Fig. 2 is a flowchart of an object position estimation method according to an embodiment of the invention.
Fig. 3 is a schematic diagram of images obtained by the video camera according to the embodiment of the invention in different time sequences.
Fig. 4 is a schematic diagram of an object position estimation device and a camera according to an embodiment of the invention.
FIGS. 5 and 6 are schematic diagrams of converting previous coordinate values and later coordinate values into estimated coordinate values according to an embodiment of the present invention.
Fig. 7 is a schematic diagram of a spliced picture according to an embodiment of the invention.
Wherein reference numerals are as follows:
10. Object position estimation device
12. Receiver
14. Processor
16. First camera
18. Second camera
Is1 First pictures
Is2 Second pictures
Ip1 First previous picture
In1 First later picture
Ip2 Second previous picture
In2 Second later picture
Ip3 Third previous picture
In3 Third later picture
Ip4 Fourth previous picture
In4 Fourth later picture
Iu Unused picture
I', I″ Stitched pictures
O, O' Objects
Cp1 First previous coordinate value
Cn1 First later coordinate value
Ce1 First estimated coordinate value
Cp2 Second previous coordinate value
Cn2 Second later coordinate value
Ce2 Second estimated coordinate value
T1 first preset time point
T2 second preset time point
Steps S200, S202, S204, S206, S208, S210, S212, S214
Detailed Description
Referring to fig. 1, fig. 1 is a functional block diagram of an object position estimation device 10 according to an embodiment of the invention. When multiple cameras capture images of the same monitoring range from different viewing angles, the operating systems of the cameras have completed time synchronization, but the frames obtained by the different cameras still carry slight time differences. The object position estimation device 10 has a time stamp alignment function for aligning the object metadata in the frames acquired by the plurality of cameras, thereby improving object tracking accuracy. The object position estimation device 10 may be a server that collects the image data of a plurality of cameras and performs time stamp alignment; it may also be a camera with special functions, capable of receiving the monitoring pictures of other cameras and performing time stamp alignment against its own monitoring pictures.
Referring to fig. 1 to 4, fig. 2 is a flowchart of an object position estimation method according to an embodiment of the invention, fig. 3 is a schematic diagram of the pictures acquired by the cameras at different time sequences according to an embodiment of the invention, and fig. 4 is a schematic diagram of the object position estimation device 10 and the cameras according to an embodiment of the invention. The object position estimation device 10 may include a receiver 12 and a processor 14 electrically connected to each other. The receiver 12 acquires the pictures generated by the other cameras. The object position estimation method performed by the processor 14 is applicable to the object position estimation device 10 shown in fig. 1. Although the first camera 16 and the second camera 18 have completed time synchronization, the number of frames captured per second (frames per second, FPS) by each camera varies over time because the system performance and network connection quality of the cameras differ, as shown in fig. 3.
Regarding the object position estimation method, first, step S200 is performed, and the processor 14 determines the time stamp alignment frequency; for example, when the capture rate of the first camera 16 and the second camera 18 is 60 fps, the alignment frequency may be 60 fps, 30 fps, or another value. Next, steps S202 and S204 are executed: the processor 14 obtains the first pictures Is1 of the first camera 16 and the second pictures Is2 of the second camera 18 through the receiver 12, and sets the first preset time point T1 according to the time stamp alignment frequency. Then, step S206 is performed to take out the first previous picture Ip1 and the first later picture In1 closest to the first preset time point T1 from the first pictures Is1, and the second previous picture Ip2 and the second later picture In2 closest to the first preset time point T1 from the second pictures Is2.
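The frame selection in steps S204 to S206 can be sketched as a search over one camera's capture timestamps for the two frames that bracket the preset time point. The patent does not prescribe a search algorithm; the function name and structure below are our own illustration.

```python
from bisect import bisect_left

def nearest_frames(timestamps, preset_t):
    """Return indices of the frames closest before and after preset_t.

    `timestamps` is the sorted list of capture times for one camera;
    this mirrors step S206, which takes the previous and later pictures
    nearest the preset time point. Names are illustrative, not from the
    patent.
    """
    i = bisect_left(timestamps, preset_t)
    if i == 0 or i == len(timestamps):
        return None  # the preset time falls outside the captured range
    return i - 1, i  # index of previous picture, index of later picture

# Camera frames arrive at slightly irregular times (e.g. ~60 fps with jitter)
ts = [0.000, 0.017, 0.034, 0.052, 0.068]
print(nearest_frames(ts, 0.040))  # frames at 0.034 and 0.052 bracket t = 0.040
```

Running the same search per camera with a shared preset time point is what lets the later interpolation step compare positions from both cameras at one common timestamp.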
Next, step S208 is performed to analyze the objects O and O' in the previous pictures Ip1, Ip2 and the later pictures In1, In2 using object tracking techniques, and to obtain the first previous coordinate value Cp1 of the object O in the first previous picture Ip1 and its first later coordinate value Cn1 in the first later picture In1, as well as the second previous coordinate value Cp2 of the object O' in the second previous picture Ip2 and its second later coordinate value Cn2 in the second later picture In2. Then, step S210 is performed to calculate a first estimated coordinate value Ce1 of the object O at the first preset time point T1 by using the first previous coordinate value Cp1 and the first later coordinate value Cn1, and to calculate a second estimated coordinate value Ce2 of the object O' at the first preset time point T1 by using the second previous coordinate value Cp2 and the second later coordinate value Cn2.
After the first estimated coordinate value Ce1 and the second estimated coordinate value Ce2 are obtained, steps S212 and S214 are executed: a stitched picture I' is generated by stitching one of the first pictures Is1 with one of the second pictures Is2, and the first estimated coordinate value Ce1 of the object O and the second estimated coordinate value Ce2 of the object O' are displayed on the stitched picture I'. In this way, the object position estimation device 10 can present the traveling tracks of the objects O and O' tracked by the two cameras 16 and 18 on the stitched picture I', which is convenient for the user to observe or for other computing applications. This embodiment preferably stitches the first previous picture Ip1 with the second later picture In2, but practical applications are not limited thereto.
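Steps S212 and S214 can be illustrated with a minimal side-by-side stitch. The patent does not specify the stitching algorithm, so this sketch simply concatenates the two frames horizontally and reports the x-offset needed to draw the second camera's estimated coordinate value on the stitched picture; all names are our own assumptions.

```python
import numpy as np

def stitch_frames(frame_a, frame_b):
    """Place two camera frames of equal height side by side.

    Returns the stitched picture and the x-offset to add to coordinates
    from the second camera so they land in the correct half. This is a
    stand-in for steps S212-S214, not the patent's actual stitching.
    """
    stitched = np.hstack([frame_a, frame_b])
    x_offset = frame_a.shape[1]  # width of the first frame in pixels
    return stitched, x_offset

a = np.zeros((4, 6, 3), dtype=np.uint8)   # 6-pixel-wide frame from camera 1
b = np.zeros((4, 8, 3), dtype=np.uint8)   # 8-pixel-wide frame from camera 2
img, dx = stitch_frames(a, b)
print(img.shape, dx)  # (4, 14, 3) 6
```

An estimated coordinate (x, y) from the second camera would then be drawn at (x + dx, y) on the stitched picture.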
In particular, the first estimated coordinate value Ce1 is the predicted position of the object O at the first preset time point T1, and the second estimated coordinate value Ce2 is the predicted position of the object O' at the first preset time point T1, so the first estimated coordinate value Ce1 and the second estimated coordinate value Ce2 can be regarded as temporally aligned. Even if some slight error remains in practice, the object position estimation method of the present invention reduces the error to a minimum, so the two values can be treated as belonging to the same time point.
Referring to fig. 3 to 7, fig. 5 and 6 are schematic diagrams of converting the previous coordinate values and the later coordinate values into estimated coordinate values, and fig. 7 is a schematic diagram of a stitched picture according to an embodiment of the present invention. The object position estimation method of the present invention can obtain the first estimated coordinate value Ce1 (x3, y3) between the first previous coordinate value Cp1 (x1, y1) and the first later coordinate value Cn1 (x2, y2) by interpolation. The first previous picture Ip1 is taken at time point Tp1 and the first later picture In1 at time point Tn1, so that x3 = x1 + (x2 - x1) × (T1 - Tp1) / (Tn1 - Tp1) and y3 = y1 + (y2 - y1) × (T1 - Tp1) / (Tn1 - Tp1). The second previous coordinate value Cp2 and the second later coordinate value Cn2 are converted into the second estimated coordinate value Ce2 in the same manner as the first estimated coordinate value Ce1, so the description is not repeated. The operation used to obtain the estimated coordinate value is not limited to interpolation; other operation functions may also be used.
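The interpolation above can be written directly as a small function. The formula is the linear interpolation described in this paragraph; the function name and argument layout are our own.

```python
def estimate_coordinate(cp, cn, tp, tn, t):
    """Linearly interpolate an object's position at preset time t.

    cp = (x1, y1) is the previous coordinate value in the picture taken
    at tp; cn = (x2, y2) is the later coordinate value in the picture
    taken at tn, with tp <= t <= tn.
    """
    ratio = (t - tp) / (tn - tp)          # fraction of the interval elapsed
    x = cp[0] + (cn[0] - cp[0]) * ratio   # x3 = x1 + (x2-x1)(T1-Tp1)/(Tn1-Tp1)
    y = cp[1] + (cn[1] - cp[1]) * ratio   # y3 = y1 + (y2-y1)(T1-Tp1)/(Tn1-Tp1)
    return (x, y)

# Object moves from (10, 20) at t = 1.00 to (14, 28) at t = 1.04;
# estimate its position at the preset time point t = 1.02.
print(estimate_coordinate((10, 20), (14, 28), 1.00, 1.04, 1.02))  # (12.0, 24.0)
```

Because both cameras are interpolated to the same preset time point T1, the resulting Ce1 and Ce2 are directly comparable even though the cameras captured their frames at different instants.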
The object position estimation method may further set one or more preset time points later than the first preset time point T1 and obtain estimated coordinate values of the objects O and O' at those time points. Taking the second preset time point T2 as an example, T2 can be determined according to the time stamp alignment frequency set in step S200. Among the first pictures Is1, a third previous picture Ip3 and a third later picture In3 are closest to the second preset time point T2, immediately before and after it. The position of the object O in the third previous picture Ip3 is a third previous coordinate value (not indicated in the drawings), and its position in the third later picture In3 is a third later coordinate value (not indicated in the drawings). The third previous coordinate value and the third later coordinate value may be used to generate a third estimated coordinate value (not shown in the drawings) by interpolation or another operation function. Likewise, the previous and later coordinate values of the object O' in the fourth previous picture Ip4 and the fourth later picture In4 closest to the second preset time point T2 among the second pictures Is2 can be converted to generate another estimated coordinate value. The third previous picture Ip3 may be stitched with the fourth later picture In4 to generate a stitched picture I″, and the third estimated coordinate value of the object O and the other estimated coordinate value of the object O' are displayed on the stitched picture I″.
If one or more unused pictures Iu exist between the first later picture In1 and the third previous picture Ip3, the object position estimation method of the present invention may simply discard the data of the object O in the unused pictures Iu; that is, the coordinate values of the object O in the unused pictures Iu are not used in calculating the third estimated coordinate value. An unused picture Iu is defined as any picture between the two preset time points T1 and T2 that is not used for object position estimation. Alternatively, the estimated coordinate value of the object O at the second preset time point T2 may be calculated by a specific operation function that combines the unused picture Iu with the third previous picture Ip3 and the third later picture In3; for example, the unused picture Iu is given a lower weight, the third previous picture Ip3 and the third later picture In3 are given higher weights, and the three pictures are used together to calculate the estimated coordinate value. The object position estimation method may also analyze and compare the unused picture Iu with the third previous picture Ip3, and take one of them to match with the third later picture In3 for object position estimation.
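The weighted variant mentioned above can be sketched as a weighted average of the object's positions in the three pictures. The patent names no concrete operation function, so both the weighted-average form and the weight values below are assumptions for illustration only.

```python
def weighted_estimate(samples):
    """Combine object positions from several pictures into one estimate.

    `samples` is a list of ((x, y), weight) pairs. This is one plausible
    reading of the weighted scheme in the text: the unused picture Iu
    receives a low weight, the bracketing pictures higher weights.
    """
    total_w = sum(w for _, w in samples)
    x = sum(c[0] * w for c, w in samples) / total_w
    y = sum(c[1] * w for c, w in samples) / total_w
    return (x, y)

# Illustrative weights (not from the patent): Iu counts for little,
# Ip3 and In3 dominate the estimate.
print(weighted_estimate([((12.0, 24.0), 0.2),   # position in unused picture Iu
                         ((10.0, 20.0), 1.0),   # position in Ip3
                         ((14.0, 28.0), 1.0)])) # position in In3
```

A more faithful variant could fit a weighted line over (time, position) samples and evaluate it at T2, but the patent leaves the operation function open.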
In an embodiment of the present invention, if the object position estimation device 10 is a server electrically connected to a plurality of cameras, the time stamp alignment frequency can be set freely, and the estimated coordinate values can be calculated steadily at each preset time point. If the object position estimation device 10 is a camera that also receives the image data of other cameras, image capturing and object position estimation may be performed alternately; for example, the estimated coordinate value of one preset time point is calculated first, and the estimated coordinate value of the next preset time point is calculated afterwards, so the intervals between preset time points may differ slightly. The settings of the time stamp alignment frequency and the preset time points may vary with the hardware of the object position estimation device 10 and its operational performance, and are not described separately.
In summary, the fields of view of the plurality of cameras may partially overlap or not overlap at all, and time synchronization is completed in advance. The object position estimation method and the object position estimation device set a plurality of preset time points according to a specific frequency, obtain the metadata of each object in the pictures before and after each camera's preset time points, derive the object metadata at the preset time points by interpolation or another operation function, and thereby estimate time-aligned object metadata. Compared with the prior art, the invention overcomes the time stamp gap of metadata between different cameras and effectively improves the accuracy of object tracking.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (6)
1. An object position estimation method, characterized in that the object position estimation method comprises:
setting the time synchronization of the first camera and the second camera;
acquiring pictures shot by the first camera as a plurality of first pictures;
acquiring pictures shot by the second camera as a plurality of second pictures;
setting a first preset time point according to a time stamp alignment frequency;
taking out a first previous picture and a first later picture closest to the first preset time point from the plurality of first pictures;
acquiring a first previous coordinate value of an object O in the first previous picture and a first later coordinate value of the object O in the first later picture;
calculating a first estimated coordinate value of the object O at the first preset time point by using the first previous coordinate value and the first later coordinate value;
taking out a second previous picture and a second later picture closest to the first preset time point from the plurality of second pictures;
obtaining a second previous coordinate value of an object O' in the second previous picture and a second later coordinate value of the object O' in the second later picture;
calculating a second estimated coordinate value of the object O' at the first preset time point by using the second previous coordinate value and the second later coordinate value;
generating a stitched picture according to the first previous picture and the second later picture; and
displaying the first estimated coordinate value of the object O and the second estimated coordinate value of the object O' on the stitched picture.
2. The method of claim 1 wherein the object position estimation method uses interpolation to calculate the first estimated coordinate value between the first previous coordinate value and the first later coordinate value.
3. The object position estimation method of claim 1, further comprising:
setting a second preset time point later than the first preset time point;
taking out a third previous picture and a third later picture closest to the second preset time point from the plurality of first pictures;
obtaining a third previous coordinate value of the object O in the third previous frame and a third later coordinate value of the object O in the third later frame; and
and calculating a third estimated coordinate value of the object O at the second preset time point by using the third previous coordinate value and the third later coordinate value.
4. The object position estimation method of claim 3, further comprising:
judging whether an unused picture exists between the first later picture and the third previous picture; and
if so, discarding the data of the object O in the unused picture.
5. The method of claim 3, wherein the second preset time point is set according to the time stamp alignment frequency.
6. An object position estimation device, characterized in that the object position estimation device comprises:
a receiver for acquiring a picture generated by the camera; and
a processor electrically connected to the receiver for performing the object position estimation method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811344830.0A CN111179305B (en) | 2018-11-13 | 2018-11-13 | Object position estimation method and object position estimation device thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111179305A CN111179305A (en) | 2020-05-19 |
CN111179305B (en) | 2023-11-14
Family
ID=70649977
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811344830.0A Active CN111179305B (en) | 2018-11-13 | 2018-11-13 | Object position estimation method and object position estimation device thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111179305B (en) |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7450735B1 (en) * | 2003-10-16 | 2008-11-11 | University Of Central Florida Research Foundation, Inc. | Tracking across multiple cameras with disjoint views |
CN101344965A (en) * | 2008-09-04 | 2009-01-14 | 上海交通大学 | Tracking system based on binocular camera shooting |
CN101808230A (en) * | 2009-02-16 | 2010-08-18 | 杭州恒生数字设备科技有限公司 | Unified coordinate system used for digital video monitoring |
CN103716594A (en) * | 2014-01-08 | 2014-04-09 | 深圳英飞拓科技股份有限公司 | Panorama splicing linkage method and device based on moving target detecting |
CN104063867A (en) * | 2014-06-27 | 2014-09-24 | 浙江宇视科技有限公司 | Multi-camera video synchronization method and multi-camera video synchronization device |
CN104318588A (en) * | 2014-11-04 | 2015-01-28 | 北京邮电大学 | Multi-video-camera target tracking method based on position perception and distinguish appearance model |
CN104766291A (en) * | 2014-01-02 | 2015-07-08 | 株式会社理光 | Method and system for calibrating multiple cameras |
KR20160014413A (en) * | 2014-07-29 | 2016-02-11 | 주식회사 일리시스 | The Apparatus and Method for Tracking Objects Based on Multiple Overhead Cameras and a Site Map |
JP2016099941A (en) * | 2014-11-26 | 2016-05-30 | 日本放送協会 | System and program for estimating position of object |
CN105828045A (en) * | 2016-05-12 | 2016-08-03 | 浙江宇视科技有限公司 | Method and device for tracking target by using spatial information |
CN106023139A (en) * | 2016-05-05 | 2016-10-12 | 北京圣威特科技有限公司 | Indoor tracking and positioning method based on multiple cameras and system |
CN107343165A (en) * | 2016-04-29 | 2017-11-10 | 杭州海康威视数字技术股份有限公司 | A kind of monitoring method, equipment and system |
CN107613159A (en) * | 2017-10-12 | 2018-01-19 | 北京工业职业技术学院 | Image temporal calibration method and system |
CN108111818A (en) * | 2017-12-25 | 2018-06-01 | 北京航空航天大学 | Moving target active perception method and apparatus based on multiple-camera collaboration |
CN108734739A (en) * | 2017-04-25 | 2018-11-02 | 北京三星通信技术研究有限公司 | The method and device generated for time unifying calibration, event mark, database |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8335345B2 (en) * | 2007-03-05 | 2012-12-18 | Sportvision, Inc. | Tracking an object with multiple asynchronous cameras |
WO2016164118A2 (en) * | 2015-04-10 | 2016-10-13 | Robert Bosch Gmbh | Object position measurement with automotive camera using vehicle motion data |
Non-Patent Citations (1)
Title |
---|
A survey of object tracking technology; Gao Wen et al.; Chinese Optics (《中国光学》); 2014-06-30; Vol. 7, No. 3, pp. 365-375 *
Also Published As
Publication number | Publication date |
---|---|
CN111179305A (en) | 2020-05-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9521362B2 (en) | View rendering for the provision of virtual eye contact using special geometric constraints in combination with eye-tracking | |
US10867166B2 (en) | Image processing apparatus, image processing system, and image processing method | |
KR102239530B1 (en) | Method and camera system combining views from plurality of cameras | |
Hedborg et al. | Structure and motion estimation from rolling shutter video | |
US9001222B2 (en) | Image processing device, image processing method, and program for image processing for correcting displacement between pictures obtained by temporally-continuous capturing | |
CN111327788B (en) | Synchronization method, temperature measurement method and device of camera set and electronic system | |
US20110043691A1 (en) | Method for synchronizing video streams | |
US20060125920A1 (en) | Matching un-synchronized image portions | |
US11132538B2 (en) | Image processing apparatus, image processing system, and image processing method | |
TW200818916A (en) | Wide-area site-based video surveillance system | |
US20070263000A1 (en) | Method, Systems And Computer Product For Deriving Three-Dimensional Information Progressively From A Streaming Video Sequence | |
US10593059B1 (en) | Object location estimating method with timestamp alignment function and related object location estimating device | |
Rao et al. | Real-time speed estimation of vehicles from uncalibrated view-independent traffic cameras | |
CN111179305B (en) | Object position estimation method and object position estimation device thereof | |
JPH07505033A (en) | Mechanical method for compensating nonlinear image transformations, e.g. zoom and pan, in video image motion compensation systems | |
JP2017103564A (en) | Control apparatus, control method, and program | |
WO2018179119A1 (en) | Image analysis apparatus, image analysis method, and recording medium | |
KR102238794B1 (en) | Method for increasing film speed of video camera | |
US20150350669A1 (en) | Method and apparatus for improving estimation of disparity in a stereo image pair using a hybrid recursive matching processing | |
TWI718437B (en) | Object location estimating method with timestamp alignment function and related object location estimating device | |
CN113781560B (en) | Viewpoint width determining method, device and storage medium | |
CN113763472B (en) | Viewpoint width determining method and device and storage medium | |
Thangarajah et al. | Vision-based registration for augmented reality-a short survey | |
JPH09145368A (en) | Moving and tracing method for object by stereoscopic image | |
CN111447403B (en) | Video display method, device and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||