WO2016183954A1 - Method and device for calculating a motion trajectory, and terminal - Google Patents

Method and device for calculating a motion trajectory, and terminal

Info

Publication number
WO2016183954A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
cameras
specified targets
specified
target
Prior art date
Application number
PCT/CN2015/087411
Other languages
English (en)
French (fr)
Inventor
姜伟
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 filed Critical 中兴通讯股份有限公司
Publication of WO2016183954A1 publication Critical patent/WO2016183954A1/zh

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01P MEASURING LINEAR OR ANGULAR SPEED, ACCELERATION, DECELERATION, OR SHOCK; INDICATING PRESENCE, ABSENCE, OR DIRECTION, OF MOVEMENT
    • G01P3/00 Measuring linear or angular speed; Measuring differences of linear or angular speeds
    • G01P3/36 Devices characterised by the use of optical means, e.g. using infrared, visible, or ultraviolet light
    • G01P3/38 Devices characterised by the use of optical means, e.g. using infrared, visible, or ultraviolet light using photographic means

Definitions

  • the present invention relates to the field of communications, and in particular to a method, device, and terminal for calculating a motion trajectory.
  • the methods for measuring the speed of a target in the related art include: (1) speed measurement with a single camera combined with laser-pen ranging, which relies on the laser pointer being perpendicular to the film plus an angle computed from the camera's framing result, thereby achieving distance and speed measurement; (2) global positioning system (GPS) speed measurement, which locates the target with GPS and uses the position change between two consecutive fixes and the time interval between the two measurements to achieve distance and speed measurement; (3) laser speedometers, radar speedometers, and infrared speedometers.
  • GPS speed measurement requires a GPS signal, can only measure the speed of the mobile terminal itself and not the speed of other targets, and is affected by the GPS signal.
  • the main purpose of the embodiments of the present invention is to provide a method and device for calculating a motion trajectory, and a terminal, so as to at least solve the problem in the related art that measuring the speed of a specific target is cumbersome.
  • a method for calculating a motion trajectory includes: acquiring, at predetermined time intervals through multiple cameras of a terminal device, information of one or more specified targets in the spatial region simultaneously covered by the multiple cameras; and calculating motion trajectories of the one or more specified targets according to the acquired information of the one or more specified targets, the predetermined time, and parameter information of the multiple cameras.
  • calculating the motion trajectories of the one or more specified targets according to the acquired information of the specified targets, the predetermined time, and the parameter information of the multiple cameras includes: calculating spatial position information of the one or more specified targets relative to the terminal from the information of the one or more specified targets and the parameter information of the multiple cameras according to a preset rule; and calculating the motion trajectories of the one or more specified targets from multiple items of the spatial position information and the predetermined time.
  • the multiple cameras acquire the information of the one or more specified targets in groups of two; in that case, calculating the spatial position information of the one or more specified targets relative to the terminal from the information of the one or more specified targets and the parameter information of the multiple cameras includes: calculating the spatial position information of the one or more specified targets relative to the terminal from the relative distance between the two cameras and the focal lengths of the two cameras.
  • after the motion trajectories are calculated, the method further includes: presenting the motion trajectory information of the one or more specified targets in real time on the display screen of the terminal; and deleting unwanted motion trajectories from the display screen, and/or triggering an operation of calculating the motion trajectory of a newly added specified target.
  • the motion trajectory information includes: the speed of the one or more specified targets relative to the terminal and the direction of motion of the one or more specified targets relative to the terminal.
  • an apparatus for calculating a motion trajectory includes: an acquisition module configured to acquire, at predetermined time intervals through multiple cameras of the terminal device, information of one or more specified targets in the spatial region simultaneously covered by the multiple cameras; and a calculation module configured to calculate motion trajectories of the one or more specified targets according to the acquired information of the one or more specified targets, the predetermined time, and parameter information of the multiple cameras.
  • the calculation module includes: a first calculation unit configured to calculate spatial position information of the one or more specified targets relative to the terminal from the information of the one or more specified targets and the parameter information of the multiple cameras according to a preset rule; and a second calculation unit configured to calculate the motion trajectories of the one or more specified targets from multiple items of the spatial position information and the predetermined time.
  • the multiple cameras acquire the information of the one or more specified targets in groups of two; the first calculation unit is further configured to calculate the spatial position information of the one or more specified targets relative to the terminal from the relative distance between the two cameras and the focal lengths of the two cameras.
  • the apparatus further includes: a presentation module configured to present the motion trajectory information of the one or more specified targets in real time on the display screen of the terminal; and a management module configured to delete unwanted motion trajectories from the display screen, and/or perform an operation of calculating the motion trajectory of a newly added specified target.
  • the motion trajectory information includes: the speed of the one or more specified targets relative to the terminal and the direction of motion of the one or more specified targets relative to the terminal.
  • a terminal for calculating a motion trajectory includes: multiple cameras configured to acquire, at predetermined time intervals, information of one or more specified targets in the spatial region simultaneously covered by the multiple cameras; and a processor configured to calculate motion trajectories of the one or more specified targets according to the acquired information of the one or more specified targets, the predetermined time, and parameter information of the multiple cameras.
  • the processor is further configured to calculate spatial position information of the one or more specified targets relative to the terminal from the information of the one or more specified targets and the parameter information of the multiple cameras according to a preset rule, and to calculate the motion trajectories of the one or more specified targets from multiple items of the spatial position information and the predetermined time.
  • the multiple cameras of the terminal acquire information of one or more specified targets at predetermined time intervals, and the motion trajectories of the one or more specified targets can then be calculated from the acquired information of the one or more specified targets, the parameter information of the multiple cameras, and the predetermined time; the whole calculation process does not need to actively emit electromagnetic waves, and the user does not need to aim precisely at the target to track it, as long as the targets are within the spatial region that the multiple cameras can cover simultaneously. This solves the problem in the related art that the methods used to measure the speed of a specific target are relatively cumbersome, thereby improving the user experience.
  • FIG. 1 is a flowchart of a method for calculating a motion trajectory according to an embodiment of the present invention;
  • FIG. 2 is a structural block diagram of a device for calculating a motion trajectory according to an embodiment of the present invention;
  • FIG. 3 is a first block diagram of an optional structure of a device for calculating a motion trajectory according to an embodiment of the present invention;
  • FIG. 4 is a second block diagram of an optional structure of a device for calculating a motion trajectory according to an embodiment of the present invention;
  • FIG. 5 is a structural block diagram of an application terminal using a dual-camera ranging function according to an alternative embodiment of the present invention;
  • FIG. 6 is a flowchart of a method for an application terminal to acquire the motion trajectory of a specified target according to an alternative embodiment of the present invention;
  • FIG. 7 is a schematic diagram of an application terminal acquiring the relative position of a scene point P according to an alternative embodiment of the present invention.
  • FIG. 1 is a flowchart of a method for calculating a motion trajectory according to an embodiment of the present invention. As shown in FIG. 1, the steps of the method include:
  • Step S102: acquiring, at predetermined time intervals through multiple cameras of the terminal device, information of one or more specified targets in the spatial region simultaneously covered by the multiple cameras;
  • Step S104: calculating motion trajectories of the one or more specified targets according to the acquired information of the one or more specified targets, the predetermined time, and parameter information of the multiple cameras.
  • in the above steps S102 and S104, the multiple cameras of the terminal acquire information of one or more specified targets at predetermined time intervals, and the motion trajectories of the one or more specified targets can then be calculated from the acquired information of the one or more specified targets, the parameter information of the cameras, and the predetermined time; the whole calculation process does not need to actively emit electromagnetic waves, and the user does not need to aim precisely at the target to track it, as long as the targets are within the spatial region the multiple cameras can cover simultaneously. This solves the problem in the related art that the method used to measure the speed of a specific target is relatively cumbersome, thereby improving the user experience.
  • an optional implementation manner of this embodiment can be achieved by:
  • Step S11: calculating spatial position information of the one or more specified targets relative to the terminal from the information of the one or more specified targets and the parameter information of the multiple cameras according to a preset rule;
  • Step S12: calculating the motion trajectories of the one or more specified targets from the multiple items of spatial position information and the predetermined time.
  • the multiple cameras may acquire the information of the one or more specified targets in groups of two, in which case the step can be implemented as follows: calculating the spatial position information of the one or more specified targets relative to the terminal from the relative distance between the two cameras and the focal lengths of the two cameras.
  • first, from the information transmitted by the multiple cameras, the relative position of the specified target with respect to the mobile phone, i.e. the three-dimensional spatial position (X1, Y1, Z1) of the specified target, is calculated.
  • then, after a delay of time T, target information is acquired again through the multiple cameras and the relative position of the specified target with respect to the mobile phone is calculated a second time, i.e. the three-dimensional spatial position (X2, Y2, Z2) of the specified target, and so on; the position of the target point is acquired once every interval T, thereby yielding a motion curve of the target;
  • the method in this embodiment may further include:
  • Step S21: presenting the motion trajectory information of the one or more specified targets in real time on the display screen of the terminal;
  • Step S22: deleting unwanted motion trajectories from the display screen, and/or triggering an operation of calculating the motion trajectory of a newly added specified target.
  • the motion trajectory information includes: the speed of the one or more specified targets relative to the terminal and the direction of motion of the one or more specified targets relative to the terminal.
  • the device includes: an acquisition module 22 configured to acquire, at predetermined time intervals through multiple cameras of the terminal device, information of one or more specified targets in the spatial region simultaneously covered by the multiple cameras; and a calculation module 24, coupled to the acquisition module 22 and configured to calculate motion trajectories of the one or more specified targets according to the acquired information of the one or more specified targets, the predetermined time, and parameter information of the multiple cameras.
  • the calculation module 24 includes: a first calculation unit 32 configured to calculate spatial position information of the one or more specified targets relative to the terminal from the information of the one or more specified targets and the parameter information of the multiple cameras according to a preset rule; and a second calculation unit 34, coupled to the first calculation unit 32 and configured to calculate the motion trajectories of the one or more specified targets from the multiple items of spatial position information and the predetermined time.
  • the multiple cameras acquire the information of the one or more specified targets in groups of two; the first calculation unit 32 is further configured to calculate the spatial position information of the one or more specified targets relative to the terminal from the relative distance between the two cameras and the focal lengths of the two cameras.
  • the device further includes: a presentation module 42, coupled to the calculation module 24 and configured to present the motion trajectory information of the one or more specified targets in real time on the display screen of the terminal; and a management module 44, coupled to the presentation module 42 and configured to delete unwanted motion trajectories from the display screen, and/or trigger an operation of calculating the motion trajectory of a newly added specified target.
  • the motion trajectory information includes: the speed of the one or more specified targets relative to the terminal and the direction of motion of the one or more specified targets relative to the terminal.
  • the embodiment of the present invention further provides a terminal for calculating a motion trajectory, which includes: multiple cameras configured to acquire, at predetermined time intervals, information of one or more specified targets in the spatial region simultaneously covered by the multiple cameras; and a processor configured to calculate motion trajectories of the one or more specified targets according to the acquired information of the one or more specified targets, the predetermined time, and parameter information of the multiple cameras.
  • the processor is further configured to calculate spatial position information of the one or more specified targets relative to the terminal from the information of the one or more specified targets and the parameter information of the multiple cameras according to a preset rule, and to calculate the motion trajectories of the one or more specified targets from the multiple items of spatial position information and the predetermined time.
  • the processor is further configured, when the multiple cameras acquire the information of the one or more specified targets in groups of two, to calculate the spatial position information of the one or more specified targets relative to the terminal from the relative distance between the two cameras and the focal lengths of the two cameras.
  • the terminal further includes: a display configured to present the motion trajectory information of the one or more specified targets in real time.
  • the processor is further configured to delete unwanted motion trajectories from the display screen, and/or perform an operation of calculating the motion trajectory of a newly added specified target.
  • the motion trajectory information includes: the speed of the one or more specified targets relative to the terminal and the direction of motion of the one or more specified targets relative to the terminal.
  • FIG. 5 is a structural block diagram of an application terminal that utilizes the dual camera ranging function according to an alternative embodiment of the present invention.
  • the application terminal includes: a camera, a user operation interface, a storage unit, and a central processing unit;
  • the cameras can be two or more independent cameras fixed in the same plane, or multiple cameras that can themselves be displaced and can frame the same direction at the same time (for example, one fixed camera plus one camera that can rotate 180 degrees about a rotation axis). The dual cameras can frame the scene in real time and acquire information of the target area.
  • the user operation interface is configured for the user to select a feature target (specified target) to be tracked; the feature target can be selected on a 3D photograph or in the camera viewfinder interface.
  • the central processing unit is configured to calculate relative position information of the feature target selected by the user, and, from multiple sets of relative position information, to calculate the target's trajectory, moving speed, moving direction, and other information.
  • the storage unit is configured to store feature values that can serve as a password, including time information, geographic position information, feature target characteristics, target motion trajectory, target moving speed, moving direction, and other information.
  • the feature target is determined by the user and the relative position of the target is calculated, so that the target's movement trajectory, moving speed, moving direction, and other information are calculated from the multiple feature positions acquired in succession.
  • no electromagnetic waves need to be actively emitted and the user does not need to aim precisely at the target; tracking and speed measurement can be achieved anywhere within the spatial region that the dual cameras cover simultaneously. The approach is affected by natural light, but speed measurement and positioning can be achieved without GPS signals, so its adaptability is strong; it is integrated on the mobile terminal, so portability is good.
  • the mobile terminal itself is stationary, and the moving target is observed and tracked, thereby calculating the moving speed of the tracking target.
  • This kind of scene can be used to track the movement patterns of moving objects such as moving birds, insects, athletes on the field, model airplanes, cars, etc.
  • the mobile terminal itself is moving and observes and tracks a stationary target, thereby calculating the mobile terminal's own moving speed; this scenario can be used to measure the terminal's own speed.
  • the mobile terminal itself is moving, and the observation target is also moving, thereby calculating the relative motion speed.
  • This kind of scene can calculate the relative speed of objects in motion, such as two cars, two planes, two pedestrians, and the relative speed of the cheetah to capture the antelope.
  • FIG. 6 is a flowchart of a method for an application terminal to acquire the motion trajectory of a specified target according to an alternative embodiment of the present invention. The application terminal has dual cameras and stores the detailed focal-length ranges of the two cameras (fixed-focus or zoom) and information on their relative position. As shown in FIG. 6, the steps of the method include:
  • Step S602: the user enters the target speed-measurement function through the mobile phone menu;
  • Step S604: the mobile phone starts the cameras and captures the scene currently photographed by the phone;
  • Step S606: the current real-time camera view is displayed on the phone;
  • Step S608: the user taps the touch screen to select a feature target;
  • the user can select multiple points on the interface as needed so that their speeds are measured separately, add targets to be tracked at any time, and delete targets that no longer need to be tracked.
  • the selection of the feature target is explained as follows: if a point in the middle of a monochrome plane is taken as the feature point, subsequent feature matching is difficult because the feature is not distinctive. Therefore, when the user selects a feature target on the screen, points with the following characteristics can be recommended to the user:
  • an endpoint of a color-block boundary, for example the lower or upper endpoint of the right-hand edge of a blue square.
  • the center point of a distinctive logo on a plane, for example the center of a mobile phone logo; the center point of the logo is used in the calculation.
  • the center point of a small color block, for example the center point of a computer monitor's power indicator.
  • the initial feature values may change during continuous motion, and the features of the tracked target can be continuously corrected using the data from each camera framing.
  • for example, if the target's brightness changes significantly because of reflected sunlight during the movement but its shape and expected position remain consistent with the previously stored feature data, the target can still remain locked.
  • as an example, for a model airplane, five features (nose, wing, tail, front wheel, and rear wheel) can be selected to record the target's feature information; at each positioning, if three or more of these features are successfully identified, the target is confirmed. This helps track a target whose attitude keeps changing, prevents the target from being lost during tracking, and improves tracking accuracy.
  • the relative positions of the component parts of the model airplane can be determined in advance, either by calculation during framing or by inputting preset parameters; once the relative positions of three components have been acquired, the spatial positions of the remaining components can be calculated and the corresponding positions obtained.
  • the information in the camera framing interface corresponding to those spatial positions can then be compared with the original feature information in storage, and if part of the feature information has changed, the model's feature information can be updated (for example, if the model airplane's nose is hit by a tomato and the originally blue nose becomes red, the new nose feature is recorded in storage; if the new nose feature is consistent over three consecutive framings, it is confirmed that the nose feature has changed; if the nose feature keeps changing on every framing, nose tracking has been lost and the nose can no longer serve as a reference positioning feature). This helps track targets that keep changing attitude, color, or shape, and improves the adaptability of the tracking device.
  • Step S610: according to the feature target selected by the user, the user operation interface presents the detailed information of the specific feature target to the user and asks the user to confirm it;
  • Step S612: after the user confirms the feature target, the cameras collect the information of the current feature target together with the cameras' own parameter information and send them to the central processing unit;
  • Step S614: the central processing unit calculates the relative position of the feature target with respect to the mobile phone from the information transmitted by the cameras.
  • this relative position is the three-dimensional spatial position (X1, Y1, Z1) of the feature target. After a delay of time T, target information is acquired again through the cameras and the relative position of the feature target with respect to the mobile phone is calculated a second time, i.e. the three-dimensional spatial position (X2, Y2, Z2) of the feature target, and so on; the position of the target point is acquired once every interval T, thereby yielding a motion curve of the target;
  • dual-camera spatial positioning determines the position of a feature point in space relative to the camera group by using the relative distance between the two cameras, the parallax between the two independent images of the feature point, and the focal-length information of the cameras at that time.
  • FIG. 7 is a schematic diagram of an application terminal acquiring a relative position of a scene P according to an alternative embodiment of the present invention.
  • the two cameras are in the same horizontal plane while photographing the scene P.
  • Pl and Pr are obtained on the two photographs taken, and each of these two points has plane coordinates relative to the center point of its photograph. By the principle of similar triangles, the relative positions of Ol and Or are easily known from the relative positions of the two cameras on the mobile terminal.
  • the vector directions of OlPl and OrPr correspond exactly to those of OlP and OrP.
  • from the above information, the position of point P relative to the camera group can easily be calculated.
  • Step S616: using the motion curve of the target and the time interval T, the central processing unit calculates the target's speed and at the same time obtains the target's direction of motion relative to the mobile terminal.
  • Step S618: the calculated speed, direction of motion, and other information of the feature target are displayed to the user in real time on the mobile phone screen.
  • when the user notices an interesting target while using the mobile phone, the corresponding function of the phone can be used to track the target and determine its motion trajectory, for example to observe how an insect moves.
  • step S604, capturing the scene currently photographed by the phone, can be replaced by importing a recorded 3D video and selecting the target to be tracked in the video, from which the target's position and motion trajectory are calculated.
  • in this way, target tracking can be processed with a delay and reproduced afterwards, which is convenient, for example, for long-term observation of an area: when no one is watching, personnel can afterwards quickly find what needs to be observed. For example, to observe how an insect moves, the mobile phone can be left in place to record automatically and the video analyzed afterwards. The tracking process can also be reconstructed from 3D video recorded by other devices; when restoring from 3D video recorded by other devices, the real-time focal length of the recording device's cameras and the relative position of each camera must be known to complete the calculation of the feature target's position.
  • this embodiment uses dual cameras; if the number of cameras is increased to build a camera array, for example 2*2 cameras, two sets of spatial positioning data can be calculated separately. If one camera is damaged, a large difference immediately appears between the two data sets, reminding the user that a camera is faulty, that the ranging function no longer works properly, and that manual maintenance is required. The target tracking system thus gains the ability to verify its own results, improving the system's fault tolerance.
  • the phone's own GPS function and gyroscope can also be used to locate the phone itself in the GPS system; once the phone's position has been determined, it can be combined with the calculated relative position of the feature target with respect to the phone and with the phone's attitude to compute the feature target's position in the GPS system, so that the target can be marked on a world map.
  • in this way, position information relative to the mobile phone is converted into information relative to the earth's latitude and longitude, widening the ways in which the information can be applied.
  • the calculation may also be performed in reverse: the motion trajectory of a target relative to the camera array is input, and the corresponding information is used to reconstruct the target's position in a set of videos, i.e. to draw the corresponding target in the 3D video.
  • this is equivalent to adding a moving object in the post-production animation of a 3D film; the technique itself belongs to animation production, and incorporating it here enables intuitive comparison of multiple trajectories. For example, for a physics experiment the motion of an object can be calculated in advance and rendered as a digital animation; during the actual experiment, while the real motion trajectory is being recorded, the precalculated trajectory is played to the user, providing a good experience of viewing the actual result and the calculated result at the same time.
  • the calculated speed, direction of motion, and other information of the feature target are displayed to the user in real time on the mobile phone screen.
  • the display here can take the form of a real-time bubble next to the tracked target that shows the target's motion information.
  • the target's already-acquired position in the photograph can thus be used to express the target information vividly and effectively; the display is particularly effective when tracking multiple targets.
  • the process of manually selecting the feature target by the user can be replaced by pre-storing feature information in software, so that feature targets appearing on the screen are found automatically.
  • the automatic locking of feature targets can then be used to achieve automated tracking.
  • applying the invention to a security-area monitoring system makes it possible to identify an intruding target and track it automatically.
  • by following the target's direction of movement, the camera can actively adjust its viewing angle to keep tracking the target.
  • when the feature target leaves the camera's monitoring range, the last position where the target appeared can also be located and the camera covering that position notified to continue tracking the target.
  • when the user enters the target speed-measurement function through the mobile phone menu, a menu for measuring the mobile terminal's own speed can be selected.
  • after entering the menu, the user selects a stationary feature target in the image captured by the camera.
  • with the same speed calculation as above, the target's direction and speed of movement are obtained; by the principle of relative motion, reversing the corresponding vector directly gives the mobile terminal's own direction and speed of movement. In this way the user can measure his or her own speed; for example, when riding in a car, observing a fixed roadside object is enough to complete the measurement and learn the current speed.
  • when entering the target speed-measurement function, the user can also choose a relative-speed menu.
  • after entering the menu, the user selects a moving feature target in the image captured by the camera.
  • with the same speed calculation as in the preferred embodiment, the target's direction and speed of movement are obtained; this value is the relative speed between the user and the target.
  • for example, a user on a carousel can measure his or her own speed.
  • if the selected target is another horse moving on the same carousel, the relative speed is close to 0.
  • if the user selects a target outside the carousel, the speed obtained is the user's own speed as seen by off-site spectators.
  • in particular, since the inner column of the carousel also rotates, in the direction opposite to that of the horses, selecting the inner column as the feature target yields the speed of movement relative to the inner column, which is greater than the carousel's speed relative to the scenery outside.
  • speed changes that users normally can only feel are thus fed back numerically and more intuitively through the mobile terminal, which can give users more enjoyment.
  • a storage medium is further provided in which the above-mentioned software is stored; the storage medium includes but is not limited to an optical disk, a floppy disk, a hard disk, a rewritable memory, and the like.
  • the modules or steps of the present invention can be implemented by a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network composed of multiple computing devices. Optionally, they can be implemented with program code executable by computing devices, so that they can be stored in a storage device and executed by a computing device, and in some cases the steps shown or described can be performed in an order different from that here,
  • or the steps shown or described can be made separately into individual integrated circuit modules, or several of the modules or steps can be made into a single integrated circuit module. Thus, the invention is not limited to any specific combination of hardware and software.
  • the multiple cameras of the terminal acquire information of one or more specified targets at predetermined time intervals, and the motion trajectories of the one or more specified targets can then be calculated from the acquired information of the one or more specified targets, the parameter information of the multiple cameras, and the predetermined time; the whole calculation process does not need to actively emit electromagnetic waves, and the user does not need to aim precisely at the target to track it, as long as the targets are within the spatial region that the multiple cameras can cover simultaneously. This solves the problem in the related art that the method used to measure the speed of a specific target is relatively cumbersome, thereby improving the user experience.

Abstract

A method and device for calculating a motion trajectory, and a terminal. The calculation method includes: acquiring, at predetermined time intervals through multiple cameras of a terminal device, information of one or more specified targets in the spatial region simultaneously covered by the multiple cameras (S102); and calculating motion trajectories of the one or more specified targets according to the acquired information of the one or more specified targets, the predetermined time, and parameter information of the multiple cameras (S104). The whole calculation process requires neither actively emitting electromagnetic waves nor precise aiming at the target by the user; it is sufficient for the multiple cameras to perform the corresponding calculation on the specified targets in the spatial region they cover simultaneously. This solves the problem in the related art that measuring the speed of a specific target is cumbersome, thereby improving the user experience.

Description

Method and device for calculating a motion trajectory, and terminal
Technical field
The present invention relates to the field of communications, and in particular to a method and device for calculating a motion trajectory, and a terminal.
Background art
Methods for measuring the speed of a target in the related art include: (1) speed measurement with a single camera combined with laser-pen ranging, which relies on the laser pointer being perpendicular to the film plus an angle computed from the camera's framing result, thereby achieving distance and speed measurement; (2) global positioning system (GPS) speed measurement, which locates the target with GPS and uses the position change between two consecutive fixes and the time interval between the two measurements to achieve distance and speed measurement; (3) laser speedometers, radar speedometers, and infrared speedometers.
It can be seen that several of the methods in the related art are based on electromagnetic-wave reflection: the time difference between emitting and receiving the electromagnetic wave is multiplied by the propagation speed of the wave to compute the distance to the target, thereby achieving distance and speed measurement. The above methods have the following problems:
A. Single-camera ranging requires the user to manually aim the rangefinder at the target; when applied on a mobile terminal it requires additional laser-pen hardware, and having to aim at the target makes operation inflexible.
B. GPS speed measurement requires a GPS signal, can only measure the speed of the mobile terminal itself and cannot measure the speed of other targets, and is affected by the GPS signal.
C. Laser, radar, and infrared speedometers, when applied on a mobile terminal, all require an additional transmitter that actively emits electromagnetic waves, which increases power consumption, and the device must be aimed at the target during use.
For the problem in the related art that the methods used to measure the speed of a specific target are relatively cumbersome, no effective solution has yet been proposed.
Summary of the invention
The main purpose of the embodiments of the present invention is to provide a method and device for calculating a motion trajectory, and a terminal, so as to at least solve the problem in the related art that measuring the speed of a specific target is cumbersome.
According to one aspect of the embodiments of the present invention, a method for calculating a motion trajectory is provided, including: acquiring, at predetermined time intervals through multiple cameras of a terminal device, information of one or more specified targets in the spatial region simultaneously covered by the multiple cameras; and calculating motion trajectories of the one or more specified targets according to the acquired information of the one or more specified targets, the predetermined time, and parameter information of the multiple cameras.
Optionally, calculating the motion trajectories of the one or more specified targets according to the acquired information of the specified targets, the predetermined time, and the parameter information of the multiple cameras includes: calculating spatial position information of the one or more specified targets relative to the terminal from the information of the one or more specified targets and the parameter information of the multiple cameras according to a preset rule; and calculating the motion trajectories of the one or more specified targets from multiple items of the spatial position information and the predetermined time.
Optionally, the multiple cameras acquire the information of the one or more specified targets in groups of two; in that case, calculating the spatial position information of the one or more specified targets relative to the terminal from the information of the one or more specified targets and the parameter information of the multiple cameras includes: calculating the spatial position information of the one or more specified targets relative to the terminal from the relative distance between the two cameras and the focal lengths of the two cameras.
Optionally, after the motion trajectories of the one or more specified targets are calculated from multiple items of the spatial position information and the predetermined time, the method further includes: presenting the motion trajectory information of the one or more specified targets in real time on the display screen of the terminal; and deleting unwanted motion trajectories from the display screen, and/or triggering an operation of calculating the motion trajectory of a newly added specified target.
Optionally, the motion trajectory information includes: the speed of the one or more specified targets relative to the terminal and the direction of motion of the one or more specified targets relative to the terminal.
According to another aspect of the embodiments of the present invention, a device for calculating a motion trajectory is provided, including: an acquisition module configured to acquire, at predetermined time intervals through multiple cameras of a terminal device, information of one or more specified targets in the spatial region simultaneously covered by the multiple cameras; and a calculation module configured to calculate motion trajectories of the one or more specified targets according to the acquired information of the one or more specified targets, the predetermined time, and parameter information of the multiple cameras.
Optionally, the calculation module includes: a first calculation unit configured to calculate spatial position information of the one or more specified targets relative to the terminal from the information of the one or more specified targets and the parameter information of the multiple cameras according to a preset rule; and a second calculation unit configured to calculate the motion trajectories of the one or more specified targets from multiple items of the spatial position information and the predetermined time.
Optionally, the multiple cameras acquire the information of the one or more specified targets in groups of two; the first calculation unit is further configured to calculate the spatial position information of the one or more specified targets relative to the terminal from the relative distance between the two cameras and the focal lengths of the two cameras.
Optionally, after the motion trajectories of the one or more specified targets are calculated from multiple items of the spatial position information and the predetermined time, the device further includes: a presentation module configured to present the motion trajectory information of the one or more specified targets in real time on the display screen of the terminal; and a management module configured to delete unwanted motion trajectories from the display screen, and/or perform an operation of calculating the motion trajectory of a newly added specified target.
Optionally, the motion trajectory information includes: the speed of the one or more specified targets relative to the terminal and the direction of motion of the one or more specified targets relative to the terminal.
According to yet another aspect of the embodiments of the present invention, a terminal for calculating a motion trajectory is provided, including: multiple cameras configured to acquire, at predetermined time intervals, information of one or more specified targets in the spatial region simultaneously covered by the multiple cameras; and a processor configured to calculate motion trajectories of the one or more specified targets according to the acquired information of the one or more specified targets, the predetermined time, and parameter information of the multiple cameras.
Optionally, the processor is further configured to calculate spatial position information of the one or more specified targets relative to the terminal from the information of the one or more specified targets and the parameter information of the multiple cameras according to a preset rule, and to calculate the motion trajectories of the one or more specified targets from multiple items of the spatial position information and the predetermined time.
In the embodiments of the present invention, the multiple cameras of the terminal acquire information of one or more specified targets at predetermined time intervals, and the motion trajectories of the one or more specified targets can then be calculated from the acquired information of the one or more specified targets, the parameter information of the multiple cameras, and the predetermined time. The whole calculation process requires neither actively emitting electromagnetic waves nor precise aiming at the target by the user; the targets only need to be within the spatial region that the multiple cameras can cover simultaneously. This solves the problem in the related art that the methods used to measure the speed of a specific target are relatively cumbersome, thereby improving the user experience.
Brief description of the drawings
The drawings described here are provided for a further understanding of the present invention and form part of this application. The exemplary embodiments of the present invention and their description are used to explain the present invention and do not constitute an improper limitation of the present invention. In the drawings:
FIG. 1 is a flowchart of a method for calculating a motion trajectory according to an embodiment of the present invention;
FIG. 2 is a structural block diagram of a device for calculating a motion trajectory according to an embodiment of the present invention;
FIG. 3 is a first block diagram of an optional structure of a device for calculating a motion trajectory according to an embodiment of the present invention;
FIG. 4 is a second block diagram of an optional structure of a device for calculating a motion trajectory according to an embodiment of the present invention;
FIG. 5 is a structural block diagram of an application terminal using a dual-camera ranging function according to an alternative embodiment of the present invention;
FIG. 6 is a flowchart of a method for an application terminal to acquire the motion trajectory of a specified target according to an alternative embodiment of the present invention;
FIG. 7 is a schematic diagram of an application terminal acquiring the relative position of a scene point P according to an alternative embodiment of the present invention.
Detailed description of the embodiments
It should be noted that, as long as they do not conflict, the embodiments in this application and the features of those embodiments can be combined with each other. The present invention is described in detail below with reference to the drawings and in combination with the embodiments.
This embodiment provides a method for calculating a motion trajectory. FIG. 1 is a flowchart of the method for calculating a motion trajectory according to an embodiment of the present invention. As shown in FIG. 1, the steps of the method include:
Step S102: acquiring, at predetermined time intervals through multiple cameras of a terminal device, information of one or more specified targets in the spatial region simultaneously covered by the multiple cameras;
Step S104: calculating motion trajectories of the one or more specified targets according to the acquired information of the one or more specified targets, the predetermined time, and parameter information of the multiple cameras.
In steps S102 and S104 of this embodiment, the multiple cameras of the terminal acquire information of one or more specified targets at predetermined time intervals, and the motion trajectories of the one or more specified targets can then be calculated from the acquired information of the one or more specified targets, the parameter information of the multiple cameras, and the predetermined time. The whole calculation process requires neither actively emitting electromagnetic waves nor precise aiming at the target by the user; the targets only need to be within the spatial region that the multiple cameras can cover simultaneously. This solves the problem in the related art that the methods used to measure the speed of a specific target are relatively cumbersome, thereby improving the user experience.
As for the way in which the motion trajectories of the one or more specified targets are calculated in this embodiment from the acquired information of the specified targets, the predetermined time, and the parameter information of the multiple cameras, an optional implementation can be realized as follows:
Step S11: calculating spatial position information of the one or more specified targets relative to the terminal from the information of the one or more specified targets and the parameter information of the multiple cameras according to a preset rule;
Step S12: calculating the motion trajectories of the one or more specified targets from the multiple items of spatial position information and the predetermined time.
For step S11, in one application scenario of this embodiment the multiple cameras acquire the information of the one or more specified targets in groups of two, in which case the step can be realized as follows: calculating the spatial position information of the one or more specified targets relative to the terminal from the relative distance between the two cameras and the focal lengths of the two cameras.
For steps S11 and S12, in one application scenario of this embodiment they can be realized as follows:
First, from the information transmitted by the multiple cameras, the relative position of the specified target with respect to the mobile phone, i.e. the three-dimensional spatial position (X1, Y1, Z1) of the specified target, is calculated.
Then, after a delay of time T, target information is acquired again through the multiple cameras and the relative position of the specified target with respect to the mobile phone is calculated a second time, i.e. the three-dimensional spatial position (X2, Y2, Z2) of the specified target, and so on; the position of the target point is acquired once every interval T, thereby yielding a motion curve of the target.
Finally, from the motion curve of the target and the time interval T, the speed of the specified target can be calculated, and the direction of motion of the target relative to the mobile terminal can be obtained at the same time.
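To make this calculation concrete, the following Python sketch (not part of the patent; function and variable names are illustrative) computes the speed and direction of motion for each interval from successive three-dimensional positions sampled every T seconds:

    import math

    def trajectory_speeds(positions, interval_t):
        # positions: successive (X, Y, Z) coordinates of the target relative to
        # the terminal, one sample every interval_t seconds
        results = []
        for (x1, y1, z1), (x2, y2, z2) in zip(positions, positions[1:]):
            dx, dy, dz = x2 - x1, y2 - y1, z2 - z1
            dist = math.sqrt(dx * dx + dy * dy + dz * dz)
            speed = dist / interval_t                       # metres per second
            direction = (dx / dist, dy / dist, dz / dist) if dist > 0 else (0.0, 0.0, 0.0)
            results.append((speed, direction))
        return results

    # Example: three positions sampled every T = 0.5 s
    curve = [(0.0, 0.0, 2.0), (0.1, 0.0, 2.0), (0.25, 0.05, 2.1)]
    for speed, direction in trajectory_speeds(curve, 0.5):
        print(speed, direction)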
In another optional implementation of this embodiment, after the motion trajectories of the one or more specified targets are calculated from the multiple items of spatial position information and the predetermined time, the method of this embodiment may further include:
Step S21: presenting the motion trajectory information of the one or more specified targets in real time on the display screen of the terminal;
Step S22: deleting unwanted motion trajectories from the display screen, and/or triggering an operation of calculating the motion trajectory of a newly added specified target.
The motion trajectory information includes: the speed of the one or more specified targets relative to the terminal and the direction of motion of the one or more specified targets relative to the terminal.
FIG. 2 is a structural block diagram of a device for calculating a motion trajectory according to an embodiment of the present invention. As shown in FIG. 2, the device includes: an acquisition module 22 configured to acquire, at predetermined time intervals through multiple cameras of a terminal device, information of one or more specified targets in the spatial region simultaneously covered by the multiple cameras; and a calculation module 24, coupled to the acquisition module 22 and configured to calculate motion trajectories of the one or more specified targets according to the acquired information of the one or more specified targets, the predetermined time, and parameter information of the multiple cameras.
FIG. 3 is a first block diagram of an optional structure of the device for calculating a motion trajectory according to an embodiment of the present invention. As shown in FIG. 3, the calculation module 24 includes: a first calculation unit 32 configured to calculate spatial position information of the one or more specified targets relative to the terminal from the information of the one or more specified targets and the parameter information of the multiple cameras according to a preset rule; and a second calculation unit 34, coupled to the first calculation unit 32 and configured to calculate the motion trajectories of the one or more specified targets from the multiple items of spatial position information and the predetermined time.
Optionally, the multiple cameras acquire the information of the one or more specified targets in groups of two; the first calculation unit 32 is then further configured to calculate the spatial position information of the one or more specified targets relative to the terminal from the relative distance between the two cameras and the focal lengths of the two cameras.
FIG. 4 is a second block diagram of an optional structure of the device for calculating a motion trajectory according to an embodiment of the present invention. As shown in FIG. 4, after the motion trajectories of the one or more specified targets are calculated from the multiple items of spatial position information and the predetermined time, the device further includes: a presentation module 42, coupled to the calculation module 24 and configured to present the motion trajectory information of the one or more specified targets in real time on the display screen of the terminal; and a management module 44, coupled to the presentation module 42 and configured to delete unwanted motion trajectories from the display screen, and/or trigger an operation of calculating the motion trajectory of a newly added specified target.
The motion trajectory information includes: the speed of the one or more specified targets relative to the terminal and the direction of motion of the one or more specified targets relative to the terminal.
An embodiment of the present invention further provides a terminal for calculating a motion trajectory. The terminal includes: multiple cameras configured to acquire, at predetermined time intervals, information of one or more specified targets in the spatial region simultaneously covered by the multiple cameras; and a processor configured to calculate motion trajectories of the one or more specified targets according to the acquired information of the one or more specified targets, the predetermined time, and parameter information of the multiple cameras.
Optionally, the processor is further configured to calculate spatial position information of the one or more specified targets relative to the terminal from the information of the one or more specified targets and the parameter information of the multiple cameras according to a preset rule, and to calculate the motion trajectories of the one or more specified targets from the multiple items of spatial position information and the predetermined time.
Optionally, the processor is further configured, when the multiple cameras acquire the information of the one or more specified targets in groups of two, to calculate the spatial position information of the one or more specified targets relative to the terminal from the relative distance between the two cameras and the focal lengths of the two cameras.
Optionally, after the motion trajectories of the one or more specified targets are calculated from the multiple items of spatial position information and the predetermined time, the terminal further includes: a display configured to present the motion trajectory information of the one or more specified targets in real time on the display screen of the terminal.
In addition, after the motion trajectories of the one or more specified targets are calculated from the multiple items of spatial position information and the predetermined time, the processor is further configured to delete unwanted motion trajectories from the display screen, and/or perform an operation of calculating the motion trajectory of a newly added specified target.
Optionally, the motion trajectory information includes: the speed of the one or more specified targets relative to the terminal and the direction of motion of the one or more specified targets relative to the terminal.
The present invention is illustrated below through this alternative embodiment.
This alternative embodiment provides an application terminal using a dual-camera ranging function. FIG. 5 is a structural block diagram of the application terminal using the dual-camera ranging function according to an alternative embodiment of the present invention. As shown in FIG. 5, the application terminal includes: cameras, a user operation interface, a storage unit, and a central processing unit.
The cameras can be two or more independent cameras fixed in the same plane, or multiple cameras that can themselves be displaced and can frame the same direction at the same time (for example, one fixed camera plus one camera that can rotate 180 degrees about a rotation axis). The dual cameras can frame the scene in real time and acquire information of the target area.
The user operation interface is configured for the user to select a feature target (specified target) to be tracked; the feature target can be selected on a 3D photograph or in the camera viewfinder interface.
The central processing unit is configured to calculate relative position information of the feature target selected by the user and, from multiple sets of relative position information, to calculate the target's trajectory, moving speed, moving direction, and other information.
The storage unit is configured to store feature values that can serve as a password, including time information, geographic position information, feature target characteristics, target motion trajectory, target moving speed, moving direction, and other information.
In this alternative embodiment, images are collected by the cameras, the user determines the feature target, and the relative position of the target is calculated, so that the target's movement trajectory, moving speed, moving direction, and other information are calculated from the multiple feature positions acquired in succession. No electromagnetic waves need to be actively emitted and the user does not need to aim precisely at the target; tracking and speed measurement can be achieved anywhere within the spatial region that the dual cameras cover simultaneously. The approach is affected by natural light, but speed measurement and positioning can be achieved without GPS signals, so its adaptability is strong; it is integrated on the mobile terminal, so portability is good.
Scenarios to which this alternative embodiment applies include:
1. The mobile terminal itself is stationary and observes and tracks a moving target, thereby calculating the moving target's speed. This scenario can be used to track and observe the motion patterns of moving objects in everyday life, such as flying birds, insects, athletes on a field, model airplanes, and cars.
2. The mobile terminal itself is moving and observes and tracks a stationary target, thereby calculating the mobile terminal's own speed of motion. This scenario can be used to calculate the mobile terminal's own speed.
3. The mobile terminal itself is moving and the observed target is also moving, so the relative speed of motion is calculated. This scenario can be used to calculate the relative speed of moving objects, for example two cars, two airplanes, two pedestrians, or a cheetah chasing an antelope.
An alternative embodiment of the present invention is described in detail below with reference to the drawings.
FIG. 6 is a flowchart of a method for an application terminal to acquire the motion trajectory of a specified target according to an alternative embodiment of the present invention. The application terminal has dual cameras and stores the detailed focal-length ranges of the two cameras (fixed-focus or zoom) and information on their relative position. As shown in FIG. 6, the steps of the method include:
Step S602: the user enters the target speed-measurement function through the mobile phone menu;
Step S604: the mobile phone starts the cameras and captures the scene currently photographed by the phone;
Step S606: the current real-time camera view is displayed on the phone;
Step S608: the user taps the touch screen to select a feature target;
Here the user can select multiple points on the interface as needed so that their speeds are measured separately, add targets to be tracked at any time, and delete targets that no longer need to be tracked.
The selection of the feature target is explained as follows: if a point in the middle of a monochrome plane is taken as the feature point, subsequent feature matching is difficult because the feature is not distinctive. Therefore, when the user selects a feature target on the screen, points with the following characteristics can be recommended to the user:
A. An isolated point of a different color within a large color block, for example a clearly visible black dot on a white handkerchief;
B. An endpoint of a color-block boundary, for example the lower or upper endpoint of the right-hand edge of a blue square;
C. The center of a distinctive logo on a plane, for example the center of a mobile phone logo, which is used as the point in the calculation;
D. The intersection of two line segments, for example the intersection of the edges formed by two stacked books;
E. The center of a small color block, for example the center of a computer monitor's power indicator.
In addition, since the target moves continuously, the initial feature values may change during the motion; the features of the tracked target can be continuously corrected using the data from each camera framing. For example, if the target's brightness changes significantly because of reflected sunlight during the movement but its shape and expected position remain consistent with the previously stored feature data, the target can still remain locked.
As an example, for a model airplane, five features (nose, wing, tail, front wheel, and rear wheel) can be selected to record the target's feature information. At each positioning, if three or more of these features are successfully identified, the target is confirmed. This helps track a target whose attitude keeps changing, prevents the target from being lost during tracking, and improves tracking accuracy. A sketch of this rule is given after this paragraph.
The relative positions of the component parts of the model airplane can be determined in advance, either by calculation during framing or by inputting preset parameters. Once the relative positions of three components have been acquired, the spatial positions of the remaining components can be calculated, and the information in the camera framing interface corresponding to those positions can be compared with the original feature information in storage. If part of the feature information has changed, the model's feature information can be updated (for example, if the model airplane's nose is hit by a tomato and the originally blue nose becomes red, the new nose feature is recorded in storage; if the new nose feature is consistent over three consecutive framings, it is confirmed that the nose feature has changed; if the nose feature keeps changing on every framing, nose tracking has been lost and the nose can no longer serve as a reference positioning feature). This helps track targets that keep changing attitude, color, or shape, and improves the adaptability of the tracking device.
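The "three of five features" confirmation and the feature-update behavior described above could be sketched as follows. This is an illustration only; the matching and consistency functions are assumed to be supplied elsewhere:

    def confirm_target(stored_features, observed_features, match_fn, min_matches=3):
        # stored_features: dict such as {"nose": ..., "wing": ..., "tail": ...,
        # "front_wheel": ..., "rear_wheel": ...}; the target is confirmed when at
        # least min_matches of them are identified in the current framing
        matched = sum(
            1 for name, feat in stored_features.items()
            if name in observed_features and match_fn(feat, observed_features[name])
        )
        return matched >= min_matches

    def maybe_update_feature(history, new_feature, consistent_fn, needed=3):
        # record the newly observed feature; once it has stayed consistent for
        # `needed` consecutive framings, accept it as the updated stored feature
        if history and not consistent_fn(history[-1], new_feature):
            history.clear()                 # streak broken, start counting again
        history.append(new_feature)
        return history[-1] if len(history) >= needed else None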
Step S610: according to the feature target selected by the user, the user operation interface presents the detailed information of the specific feature target to the user and asks the user to confirm it;
Step S612: after the user confirms the feature target, the cameras collect the information of the current feature target together with the cameras' own parameter information and send them to the central processing unit;
Step S614: the central processing unit calculates the relative position of the feature target with respect to the mobile phone from the information transmitted by the cameras.
This relative position is the three-dimensional spatial position (X1, Y1, Z1) of the feature target. After a delay of time T, target information is acquired again through the cameras and the relative position of the feature target with respect to the mobile phone is calculated a second time, i.e. the three-dimensional spatial position (X2, Y2, Z2) of the feature target, and so on; the position of the target point is acquired once every interval T, thereby yielding a motion curve of the target.
Dual-camera spatial positioning determines the position of a feature point in space relative to the camera group by using the relative distance between the two cameras, the parallax between the two independent images of the feature point, and the focal-length information of the cameras at that time.
FIG. 7 is a schematic diagram of an application terminal acquiring the relative position of a scene point P according to an alternative embodiment of the present invention. As shown in FIG. 7, the two cameras lie in the same horizontal plane and photograph the scene point P at the same time. Pl and Pr are obtained on the two photographs, and each of these two points has plane coordinates relative to the center of its photograph. By the principle of similar triangles, the relative positions of Ol and Or are easily known from the relative positions of the two cameras on the mobile terminal. The vector directions of OlPl and OrPr correspond exactly to those of OlP and OrP. From the above information, the position of point P relative to the camera group can easily be calculated.
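The geometry of FIG. 7 corresponds to standard two-view triangulation for an aligned camera pair. As a rough sketch (assuming, beyond what the patent states, that both cameras are identical, horizontally aligned, and that image coordinates are measured from each photo's center in pixels), the depth of P follows from similar triangles:

    def triangulate_point(baseline_m, focal_px, pl, pr):
        # baseline_m: distance between the camera centers Ol and Or, in metres
        # focal_px:   focal length of both cameras, expressed in pixels
        # pl, pr:     (x, y) coordinates of Pl and Pr relative to the center of
        #             the left and right photos, in pixels
        disparity = pl[0] - pr[0]                 # horizontal parallax between the views
        if disparity <= 0:
            raise ValueError("point must lie in front of the camera pair")
        z = baseline_m * focal_px / disparity     # depth, by similar triangles
        x = pl[0] * z / focal_px                  # back-project the left image point
        y = pl[1] * z / focal_px
        return (x, y, z)                          # position of P in the left camera frame

    # Example: 6 cm baseline, 1400 px focal length, 35 px of disparity
    print(triangulate_point(0.06, 1400, (120.0, -40.0), (85.0, -40.0)))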
Step S616: using the motion curve of the target and the time interval T, the central processing unit calculates the target's speed and at the same time obtains the target's direction of motion relative to the mobile terminal.
Step S618: the calculated speed, direction of motion, and other information of the feature target are displayed to the user in real time on the mobile phone screen.
With this alternative embodiment, when the user notices an interesting target while using the mobile phone, the corresponding function of the phone can be used to track the target and determine its motion trajectory, for example to observe how an insect moves.
It should be noted that step S604, "capturing the scene currently photographed by the phone", can be replaced by importing a recorded 3D video, selecting the target to be tracked in the video, and calculating the target's position and motion trajectory from it.
In this way, target tracking can be processed with a delay and reproduced afterwards, which is convenient, for example, for long-term observation of an area: when no one is watching, personnel can afterwards quickly find what needs to be observed. For example, to observe how an insect moves, the mobile phone can be left in place to record automatically and the video analyzed afterwards. The tracking process can also be reconstructed from 3D video recorded by other devices; when restoring from 3D video recorded by other devices, the real-time focal length of the recording device's cameras and the relative position of each camera must be known to complete the calculation of the feature target's position.
In addition, this alternative embodiment uses dual cameras. If the number of cameras is increased to build a camera array, for example 2*2 cameras, two sets of spatial positioning data can be calculated separately. In that case, if one camera is damaged, a large difference immediately appears between the two data sets, reminding the user that a camera is faulty, that the ranging function no longer works properly, and that manual maintenance is required. The target tracking system thus gains the ability to verify its own results, improving the system's fault tolerance.
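A minimal sketch of that self-check: each camera pair in the array produces its own position estimate, and a large disagreement flags a probable camera fault. The 0.2 m threshold is an assumed parameter, not a value from the patent:

    def cross_check(estimate_a, estimate_b, max_diff_m=0.2):
        # estimate_a, estimate_b: (X, Y, Z) positions computed independently by
        # two camera pairs of the array
        diff = sum((a - b) ** 2 for a, b in zip(estimate_a, estimate_b)) ** 0.5
        if diff > max_diff_m:
            return False, "camera fault suspected: estimates differ by %.2f m" % diff
        return True, "estimates agree"

    print(cross_check((1.02, 0.40, 2.41), (1.05, 0.38, 2.44)))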
Furthermore, use of the phone's own GPS function and of its gyroscope can be added to locate the phone itself in the GPS system. Once the phone's own position has been determined, it can be combined with the calculated relative position of the feature target with respect to the phone and with the phone's attitude to compute the feature target's position in the GPS system, so that the target can be marked on a world map. By adding GPS information, position information relative to the phone is converted into information relative to the earth's latitude and longitude, widening the ways in which the information can be applied.
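One possible way to realize this conversion to latitude and longitude, assuming the target offset has already been rotated into an East-North-Up frame using the phone's attitude and that the offset is small enough for a flat-earth approximation (both assumptions are mine, not the patent's):

    import math

    EARTH_RADIUS_M = 6378137.0   # WGS-84 equatorial radius

    def target_lat_lon(phone_lat, phone_lon, offset_enu_m):
        # offset_enu_m: (east, north, up) offset of the target from the phone, in metres
        east, north, _up = offset_enu_m
        dlat = math.degrees(north / EARTH_RADIUS_M)
        dlon = math.degrees(east / (EARTH_RADIUS_M * math.cos(math.radians(phone_lat))))
        return phone_lat + dlat, phone_lon + dlon

    # Target 30 m east and 40 m north of a phone located at (31.23, 121.47)
    print(target_lat_lon(31.23, 121.47, (30.0, 40.0, 0.0)))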
Based on the above, the calculation can also be performed in reverse: the motion trajectory of a target relative to the camera array is input, and the corresponding information is used to reconstruct the target's position in a set of videos, i.e. to draw the corresponding target in the 3D video. This is equivalent to adding a moving object during the post-production animation of a 3D film; the technique itself belongs to animation production, and incorporating it here enables intuitive comparison of multiple trajectories. For example, for a physics experiment the motion of an object can be calculated in advance and rendered as a digital animation; during the actual experiment, while the real motion trajectory is being recorded, the precalculated trajectory is played to the user, providing a good experience of viewing the actual result and the calculated result at the same time.
Regarding step S618 of this alternative embodiment, "the calculated speed, direction of motion, and other information of the feature target are displayed to the user in real time on the mobile phone screen", the display can take the form of a real-time bubble next to the tracked target that shows the target's motion information. The target's already-acquired position in the image can thus be used to express the target information vividly and effectively, and the display is particularly effective when tracking multiple targets.
Regarding step S608 of this alternative embodiment, "the user taps the touch screen to select a feature target", the manual selection can be replaced by pre-storing feature information in software so that feature targets appearing on the screen are found automatically. The automatic locking of feature targets can then be used to achieve automated tracking. For example, applying the invention to a security-area monitoring system makes it possible to identify an intruding target and track it automatically. At the same time, by following the target's direction of movement, the camera can actively adjust its viewing angle so as to keep tracking the target; when the feature target leaves the camera's monitoring range, the last position where the target appeared can be located and the camera covering that position notified to continue tracking the target.
As for step S602 of this alternative embodiment, "the user enters the target speed-measurement function through the mobile phone menu", the user can instead choose a menu for measuring the mobile terminal's own speed. After entering the menu, the user selects a stationary feature target in the image captured by the camera. With the same speed calculation as in the best embodiment, the target's direction and speed of movement are obtained; by the principle of relative motion, the opposite of the corresponding vector is easily obtained, which is precisely the mobile terminal's own direction and speed of movement. In this way the user can measure his or her own speed; for example, when riding in a car, observing a fixed roadside object is enough to complete the measurement and learn the current speed.
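That relative-motion step is just a sign change once the tracked feature is known to be stationary; a one-line illustration (names are mine, not the patent's):

    def terminal_velocity_from_static_target(target_velocity):
        # if the tracked feature target is stationary (e.g. a fixed roadside object),
        # the terminal's own velocity is the opposite of the target's measured
        # velocity relative to the terminal
        return tuple(-v for v in target_velocity)

    # The roadside object appears to move at (-16.7, 0.0, 0.0) m/s relative to the
    # phone, so the phone itself moves at about 16.7 m/s (about 60 km/h) in +x.
    print(terminal_velocity_from_static_target((-16.7, 0.0, 0.0)))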
In addition, for "the user enters the target speed-measurement function through the mobile phone menu", the user can also choose a relative-speed menu. After entering the menu, the user selects a moving feature target in the image captured by the camera. With the same speed calculation as in the best embodiment, the target's direction and speed of movement are obtained; this value is the relative speed between the user and the target. A user on a carousel can measure his or her own speed in this way: if the selected target is another horse moving on the same carousel, the relative speed is close to 0; if the user selects a target outside the carousel, the speed obtained is the user's own speed as seen by off-site spectators. In particular, since the inner column of the carousel also rotates, in the direction opposite to that of the horses, selecting the inner column of the carousel as the feature target yields the speed of movement relative to the inner column, which is greater than the carousel's speed relative to the scenery outside. Speed changes that users normally can only feel are thus fed back numerically and more intuitively through the mobile terminal, which can give users more enjoyment.
In another embodiment, software is also provided, which is used to execute the technical solutions described in the above embodiments and preferred implementations.
In another embodiment, a storage medium is also provided, in which the above software is stored; the storage medium includes but is not limited to an optical disk, a floppy disk, a hard disk, a rewritable memory, and the like.
Obviously, those skilled in the art should understand that the modules or steps of the present invention described above can be implemented with a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network formed by multiple computing devices. Optionally, they can be implemented with program code executable by computing devices, so that they can be stored in a storage device and executed by a computing device; in some cases the steps shown or described can be performed in an order different from that here, or they can be made into individual integrated circuit modules, or several of the modules or steps can be made into a single integrated circuit module. In this way, the present invention is not limited to any specific combination of hardware and software.
The above are only optional embodiments of the present invention and are not intended to limit the present invention; for those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.
Industrial applicability
In this embodiment, the multiple cameras of the terminal acquire information of one or more specified targets at predetermined time intervals, and the motion trajectories of the one or more specified targets can then be calculated from the acquired information of the one or more specified targets, the parameter information of the multiple cameras, and the predetermined time. The whole calculation process requires neither actively emitting electromagnetic waves nor precise aiming at the target by the user; the targets only need to be within the spatial region that the multiple cameras can cover simultaneously. This solves the problem in the related art that the methods used to measure the speed of a specific target are relatively cumbersome, thereby improving the user experience.

Claims (12)

  1. A method for calculating a motion trajectory, comprising:
    acquiring, at predetermined time intervals through multiple cameras of a terminal device, information of one or more specified targets in the spatial region simultaneously covered by the multiple cameras;
    calculating motion trajectories of the one or more specified targets according to the acquired information of the one or more specified targets, the predetermined time, and parameter information of the multiple cameras.
  2. The method according to claim 1, wherein calculating the motion trajectories of the one or more specified targets according to the acquired information of the specified targets, the predetermined time, and the parameter information of the multiple cameras comprises:
    calculating spatial position information of the one or more specified targets relative to the terminal from the information of the one or more specified targets and the parameter information of the multiple cameras according to a preset rule;
    calculating the motion trajectories of the one or more specified targets from multiple items of the spatial position information and the predetermined time.
  3. The method according to claim 2, wherein, when the multiple cameras acquire the information of the one or more specified targets in groups of two, calculating the spatial position information of the one or more specified targets relative to the terminal from the information of the one or more specified targets and the parameter information of the multiple cameras comprises:
    calculating the spatial position information of the one or more specified targets relative to the terminal from the relative distance between the two cameras and the focal lengths of the two cameras.
  4. The method according to claim 2, wherein, after the motion trajectories of the one or more specified targets are calculated from multiple items of the spatial position information and the predetermined time, the method further comprises:
    presenting the motion trajectory information of the one or more specified targets in real time on a display screen of the terminal;
    deleting unwanted motion trajectories from the display screen, and/or performing an operation of calculating the motion trajectory of a newly added specified target.
  5. The method according to claim 4, wherein the motion trajectory information comprises: the speed of the one or more specified targets relative to the terminal and the direction of motion of the one or more specified targets relative to the terminal.
  6. A device for calculating a motion trajectory, comprising:
    an acquisition module configured to acquire, at predetermined time intervals through multiple cameras of a terminal device, information of one or more specified targets in the spatial region simultaneously covered by the multiple cameras;
    a calculation module configured to calculate motion trajectories of the one or more specified targets according to the acquired information of the one or more specified targets, the predetermined time, and parameter information of the multiple cameras.
  7. The device according to claim 6, wherein the calculation module comprises:
    a first calculation unit configured to calculate spatial position information of the one or more specified targets relative to the terminal from the information of the one or more specified targets and the parameter information of the multiple cameras according to a preset rule;
    a second calculation unit configured to calculate the motion trajectories of the one or more specified targets from multiple items of the spatial position information and the predetermined time.
  8. The device according to claim 7, wherein
    the first calculation unit is further configured, when the multiple cameras acquire the information of the one or more specified targets in groups of two, to calculate the spatial position information of the one or more specified targets relative to the terminal from the relative distance between the two cameras and the focal lengths of the two cameras.
  9. The device according to claim 7, wherein, after the motion trajectories of the one or more specified targets are calculated from multiple items of the spatial position information and the predetermined time, the device further comprises:
    a presentation module configured to present the motion trajectory information of the one or more specified targets in real time on a display screen of the terminal;
    a management module configured to delete unwanted motion trajectories from the display screen, and/or perform an operation of calculating the motion trajectory of a newly added specified target.
  10. The device according to claim 9, wherein the motion trajectory information comprises: the speed of the one or more specified targets relative to the terminal and the direction of motion of the one or more specified targets relative to the terminal.
  11. A terminal, comprising:
    multiple cameras configured to acquire, at predetermined time intervals, information of one or more specified targets in the spatial region simultaneously covered by the multiple cameras;
    a processor configured to calculate motion trajectories of the one or more specified targets according to the acquired information of the one or more specified targets, the predetermined time, and parameter information of the multiple cameras.
  12. The terminal according to claim 11, wherein
    the processor is further configured to calculate spatial position information of the one or more specified targets relative to the terminal from the information of the one or more specified targets and the parameter information of the multiple cameras according to a preset rule, and to calculate the motion trajectories of the one or more specified targets from multiple items of the spatial position information and the predetermined time.
PCT/CN2015/087411 2015-05-21 2015-08-18 Method and device for calculating a motion trajectory, and terminal WO2016183954A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510262805.8A CN106289180A (zh) 2015-05-21 2015-05-21 Method and device for calculating a motion trajectory, and terminal
CN201510262805.8 2015-05-21

Publications (1)

Publication Number Publication Date
WO2016183954A1 true WO2016183954A1 (zh) 2016-11-24

Family

ID=57319254

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/087411 WO2016183954A1 (zh) 2015-05-21 2015-08-18 Method and device for calculating a motion trajectory, and terminal

Country Status (2)

Country Link
CN (1) CN106289180A (zh)
WO (1) WO2016183954A1 (zh)


Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108280848A (zh) * 2017-12-05 2018-07-13 中国农业科学院蜜蜂研究所 一种用于研究传粉昆虫访花行为的方法及系统
CN109361931A (zh) * 2018-11-16 2019-02-19 北京中竞鸽体育文化发展有限公司 一种赛事直播中进行提示的方法及系统
CN110782476B (zh) * 2019-11-06 2022-08-02 杭州益昊农业科技有限公司 一种昆虫运动轨迹的测定方法及其测定装置
CN111986224B (zh) * 2020-08-05 2024-01-05 七海行(深圳)科技有限公司 一种目标行为预测追踪方法及装置
CN113324559B (zh) * 2021-05-10 2023-03-21 青岛海尔空调器有限总公司 一种运动计步方法、装置及空气处理设备


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101179707A (zh) * 2007-09-21 2008-05-14 清华大学 无线网络视频图像多视角协作目标跟踪测量方法
CN102853820A (zh) * 2011-07-01 2013-01-02 中国钢铁股份有限公司 高炉的落料轨迹的量测方法
CN202382732U (zh) * 2011-12-12 2012-08-15 上海理工大学 物体运动轨迹的识别装置
CN103808308A (zh) * 2014-02-27 2014-05-21 西南大学 一种家蚕吐丝行为数据的自动采集方法

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108829247A (zh) * 2018-06-01 2018-11-16 北京市商汤科技开发有限公司 基于视线跟踪的交互方法及装置、计算机设备
CN112702571A (zh) * 2020-12-18 2021-04-23 福建汇川物联网技术科技股份有限公司 一种监控方法及装置

Also Published As

Publication number Publication date
CN106289180A (zh) 2017-01-04

Similar Documents

Publication Publication Date Title
WO2016183954A1 (zh) 运动轨迹的计算方法及装置、终端
US11697046B2 (en) System and method for three dimensional object tracking using combination of radar and image data
US11656635B2 (en) Heading generation method and system of unmanned aerial vehicle
US10582008B2 (en) Highly accurate baseball pitch speed detector using widely available smartphones
JP6719466B2 (ja) 放送においてサーモグラフィー特性を表示するためのシステム及び方法
Raneri Enhancing forensic investigation through the use of modern three-dimensional (3D) imaging technologies for crime scene reconstruction
US10898757B1 (en) Three dimensional object tracking using combination of radar speed data and two dimensional image data
US9448067B2 (en) System and method for photographing moving subject by means of multiple cameras, and acquiring actual movement trajectory of subject based on photographed images
CN108810473B (zh) 一种在移动平台上实现gps映射摄像机画面坐标的方法及系统
US9270885B2 (en) Method, system, and computer program product for gamifying the process of obtaining panoramic images
US9662564B1 (en) Systems and methods for generating three-dimensional image models using game-based image acquisition
JP4758842B2 (ja) 映像オブジェクトの軌跡画像合成装置、映像オブジェクトの軌跡画像表示装置およびそのプログラム
US10217228B2 (en) Method, system and non-transitory computer-readable recording medium for measuring ball spin
US20130058532A1 (en) Tracking An Object With Multiple Asynchronous Cameras
US20230343001A1 (en) Object trajectory simulation
US10564250B2 (en) Device and method for measuring flight data of flying objects using high speed video camera and computer readable recording medium having program for performing the same
JP2005268847A (ja) 画像生成装置、画像生成方法、および画像生成プログラム
US10751569B2 (en) System and method for 3D optical tracking of multiple in-flight golf balls
WO2017092432A1 (zh) 一种虚拟现实交互方法、装置和系统
JP2016218626A (ja) 画像管理装置、画像管理方法およびプログラム
KR102298047B1 (ko) 디지털 콘텐츠를 녹화하여 3d 영상을 생성하는 방법 및 장치
JP2005258792A (ja) 画像生成装置、画像生成方法、および画像生成プログラム
WO2023100220A1 (ja) 映像処理装置、方法およびプログラム
Ponomarev et al. ORDSLAM dataset for comparison of outdoor RGB-D SLAM algorithms
CN116684651A (zh) 一种基于数字建模的体育赛事云展播和控制平台

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 15892332; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 15892332; Country of ref document: EP; Kind code of ref document: A1)