CN104103081A - Virtual multi-camera target tracking video material generation method - Google Patents


Info

Publication number
CN104103081A
Authority
CN
China
Prior art keywords
camera
target
video
virtual
production method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410332803.7A
Other languages
Chinese (zh)
Inventor
刘贵喜
王康
段红岩
张音哲
王亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201410332803.7A
Publication of CN104103081A
Legal status: Pending

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention relates to a virtual multi-camera target tracking video material generation method. Aimed at the shortage of research video source material and the insufficiency of scene variation in multi-camera target tracking, the invention uses visual simulation technology to faithfully reproduce the complex environments of real-world multi-camera target tracking by creating the required scene and customizing the attributes of the tracked target and the cameras; even tracking video material that rarely occurs in reality can be created. The method conveniently simulates specific complex conditions that are difficult to stage in reality during target tracking, such as shadows, target occlusion, illumination variation, target deformation, texture change, and the appearance and disappearance of targets, and it resolves the three-dimensional realization of multi-camera calibration, collaborative operation, synchronization, and network layout. By processing the frame data of the camera channels, a virtual multi-camera target tracking video is generated, providing substantial help for the research and verification of target tracking algorithms.

Description

A virtual multi-camera target tracking video material generation method
Technical field
The invention belongs to the field of video material generation for virtual scenes, and specifically relates to a virtual multi-camera target tracking video material generation method.
Background technology
With the development of video moving-target detection and tracking technology, the limited field of view of a single camera can no longer satisfy the requirement of tracking moving targets over a wide area for extended periods, and multi-camera multi-target tracking has attracted considerable attention in wide-area surveillance. Research and verification of multi-camera tracking algorithms are currently limited to a great extent by the available video source material: because of difficulties such as multi-camera motion and object matching, researchers cannot easily record such video themselves, material for verifying algorithms is hard to obtain, and specific scenarios such as shadows, target occlusion, target deformation, and feature change cannot be reproduced well. At the same time, difficult problems such as multi-camera calibration, collaborative operation, and synchronization are affected by many environmental factors, and existing video source material is largely confined to pedestrians and road vehicles; material for high-altitude multi-target tracking, such as cooperative target tracking by multiple unmanned aerial vehicles, is rare.
The development of computer graphics has made realistic three-dimensional presentation possible, and the scenes produced by visual simulation engines are increasingly lifelike, which also benefits from advances in hardware and software; techniques such as lighting, materials, shadows, and shaders have matured steadily. Video generation based on virtual reality technology therefore offers a good way to produce multi-camera target tracking video material. It can reproduce complex conditions in target tracking, such as target occlusion, shadows, illumination variation, target motion, deformation, texture change, and the appearance and disappearance of targets in the field of view, for the research and verification of target tracking algorithms.
Summary of the invention
The object of the invention is to provide a virtual multi-camera target tracking video material generation method.
To achieve the above object, the technical solution adopted by the invention is a virtual multi-camera target tracking video material generation method characterized in that: according to the demands of target tracking algorithm research and verification, three-dimensional models are chosen from a scene database to build a customized scene; the customized scene allows the setting of the target environment, target behavior, and camera behavior and the layout of the multi-camera network, so as to produce the complex environments required by multi-camera target tracking algorithm research; the cameras synchronously acquire pictures through designated camera channels, and the result is output as video files.
The above target environment comprises occlusion, shadows, and illumination variation. The occlusion includes trees, buildings, cloud layers, dust, and other objects blocking the target; the shadows are cast near the target by setting the position and angle of the light source so that buildings, trees, billboards, and other objects throw shadows; the illumination variation means setting the position, angle, and intensity of the light source so that the illumination changes from strong to weak or from weak to strong.
The above target behavior comprises target motion, target deformation, target texture change, and the appearance and disappearance of the target. Target motion includes target translation, rotation, and the target stopping and moving again. Target deformation means a large change in target morphology, such as a tank turret rotating, a missile transporter erecting its launcher, the target being destroyed, and other changes. Target texture change means a marked change of the surface texture features of the target, such as a military vehicle receiving camouflage paint. The appearance and disappearance of the target refer to the target entering and leaving the field of view of a camera.
The above camera behavior comprises camera motion, camera attitude adjustment, focal length adjustment, and camera disturbance. Camera motion means the camera translating in different directions. Camera attitude adjustment means setting the attitude angles of the camera, including the pitch angle and the azimuth angle. Focal length adjustment means zooming the camera's field of view in and out; the focal length corresponds to the camera parameter matrix, and the camera can be calibrated through this correspondence. Camera disturbance refers to the interference of the environment with the lens while the camera moves, including lens shake, mist on the lens, and dirt on the lens.
The above multi-camera network layout sets five cameras, located respectively above the target at a controlled height and to its front, rear, left, and right within a 90-degree range, with the behavior of each camera adjusted appropriately so that the target is photographed synchronously from each direction.
The above camera channels and video output take the field-of-view picture of each virtual camera, transmit every frame of each virtual camera synchronously and in real time through a pipeline to the graphics rendering hardware for display, and store the frame data in a buffer; the resolution, frame rate, video coding, and compression mode are set, the data are converted into a video stream of a mainstream format, and the stream is provided to researchers together with the camera parameters and target model samples for algorithm research.
The attribute setting of the above target and cameras is realized in two ways. One is pre-definition: the target environment changes, target behavior changes, camera behavior changes, and multi-camera network layout adjustments to be produced during tracking are set in advance by code or another specific means. The other is real-time control: after the scene is rendered, the target environment, target behavior, camera behavior, and multi-camera network layout are controlled and changed in real time through input devices such as a keyboard and mouse. Both schemes can carry out the setting process under the auxiliary observation of scene keyframes.
The above virtual target tracking video generation method is applicable not only to single-target tracking but equally to multi-target tracking.
The above virtual multi-camera video generation method outputs the video of each camera channel; camera videos of different directions can be chosen from them for the research and verification of target tracking algorithms, so the method is applicable not only to multi-camera target tracking but equally to single-camera target tracking.
The beneficial effects of the invention are as follows:
Aimed at the shortage of research video source material and the insufficiency of scene variation in multi-camera target tracking, the invention uses visual simulation technology to faithfully reproduce the complex environments of real-world multi-camera target tracking by creating the required scene and customizing the attributes of the tracked target and the cameras; even tracking video material that rarely occurs in reality can be created. The invention conveniently simulates specific complex conditions that are difficult to stage in reality during target tracking, such as shadows, target occlusion, illumination variation, target deformation, texture change, and the appearance and disappearance of targets, and it resolves the three-dimensional realization of multi-camera calibration, collaborative operation, synchronization, and network layout. By processing the frame data of the camera channels, a virtual multi-camera target tracking video is generated, providing substantial help for the research and verification of target tracking algorithms.
Brief description of the drawings
The invention is further described below in conjunction with an embodiment:
Fig. 1 shows the scene presentation of virtual multi-camera target tracking;
Fig. 2 shows the composition of the customized scene;
Fig. 3 shows the composition of the target environment;
Fig. 4 shows the composition of the target behavior;
Fig. 5 shows the composition of the camera behavior;
Fig. 6 shows the layout of the multi-camera network;
Fig. 7 shows the channel video output parameters.
Description of reference numerals:
101-105: cameras; 106-110: tracking pictures; 111: scene; 112: target to be tracked; 201: customized scene; 202: target environment; 203: target behavior; 204: camera behavior; 205: multi-camera network layout; 206: channel video output; 301: occlusion; 302: shadow; 303: illumination; 304: trees; 305: buildings; 306: cloud layer; 307: dust; 308: other occluding objects; 309: shadow-casting building; 310: shadow-casting trees; 311: billboard; 312: other shadow-casting objects; 313: strong to weak; 314: weak to strong; 401: target motion; 402: target deformation; 403: target texture change; 404: target appearance and disappearance; 405: translation; 406: rotation; 407: stop; 408: renewed motion; 409: tank turret rotation; 410: launcher erection; 411: target destroyed; 412: other changes; 413: camouflage paint; 501: camera motion; 502: attitude adjustment; 503: focal length adjustment; 504: camera disturbance; 505: lens shake; 506: mist on lens; 507: dirt on lens; 601: above; 602: front; 603: rear; 604: left; 605: right; 701: resolution; 702: frame rate; 703: video coding; 704: compression mode.
Embodiment
This embodiment elaborates the system composition and part of the implementation process.
In the virtual multi-camera target tracking video material generation method provided by the invention, a scene is built as shown in Fig. 1: the target 112 to be tracked moves in the scene 111; the cameras 101-105, located respectively above, in front of, behind, to the left of, and to the right of the target, track the target picture and produce the tracking pictures 106-110, which are transferred to the display end through channels. The number of cameras in the example is not limited to five; the concrete number can be selected by the user.
According to the demands of target tracking algorithm research and verification, three-dimensional models are chosen from a scene database to build a customized scene 201; the customized scene allows the setting of the target environment 202, target behavior 203, and camera behavior 204 and the layout of the multi-camera network 205, so as to produce the complex environments required by multi-camera target tracking algorithm research, as shown in Fig. 2. The channel video output module 206 lets the cameras synchronously acquire pictures through designated camera channels and outputs the result as video files.
The target environment 202, as shown in Fig. 3, comprises occlusion 301, shadows 302, and illumination 303 variation. The occlusion 301 includes trees 304, buildings 305, cloud layers 306, dust 307, and other objects 308 blocking the target; the shadows 302 are cast near the target by setting the position and angle of the light source so that shadow-casting buildings 309, shadow-casting trees 310, billboards 311, and other shadow-casting objects 312 throw their shadows; the illumination 303 variation means setting the position, angle, and intensity of the light source so that the illumination changes from strong to weak 313 or from weak to strong 314.
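The strong-to-weak and weak-to-strong illumination variation can be scripted as keyframe interpolation of the light source intensity. The helper below is an illustrative sketch under assumed names, not part of the patent:

```python
# Illustrative sketch (assumed helper): script the light source so the rendered
# sequence exhibits a strong-to-weak or weak-to-strong illumination change.
# Intensity is linearly interpolated between (time, intensity) keyframes.

def illumination_at(t, keys):
    """Return the light intensity at time t from (time, intensity) keyframes."""
    keys = sorted(keys)
    if t <= keys[0][0]:
        return keys[0][1]
    if t >= keys[-1][0]:
        return keys[-1][1]
    for (t0, i0), (t1, i1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            return i0 + (i1 - i0) * (t - t0) / (t1 - t0)

# Strong to weak over the first 10 s, then weak to strong again.
ramp = [(0.0, 1.0), (10.0, 0.1), (20.0, 1.0)]
```

The same keyframe pattern extends naturally to the light's position and angle.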
The target behavior 203, as shown in Fig. 4, comprises target motion 401, target deformation 402, target texture change 403, and target appearance and disappearance 404. Target motion 401 includes target translation 405, rotation 406, and the target stopping 407 and moving again 408. Target deformation 402 means a large change in target morphology, such as a tank turret rotating 409, a missile transporter erecting its launcher 410, the target being destroyed 411, and other changes 412. Target texture change 403 means a marked change of the surface texture features of the target, such as a military vehicle receiving camouflage paint 413. Target appearance and disappearance 404 refer to the target entering and leaving the field of view of a camera.
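Target translation together with the stop-and-move-again behavior can be modeled as a timed waypoint path. This is an illustrative sketch with assumed names, not the patent's implementation:

```python
# Illustrative sketch (assumed helper): a timed waypoint path for the tracked
# target. Linear interpolation gives translation; two consecutive waypoints
# with the same position model "stop, then move again".

def target_position(t, waypoints):
    """waypoints: list of (time, x, y) sorted by time; returns (x, y) at t."""
    if t <= waypoints[0][0]:
        return waypoints[0][1:]
    if t >= waypoints[-1][0]:
        return waypoints[-1][1:]
    for (t0, x0, y0), (t1, x1, y1) in zip(waypoints, waypoints[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            return (x0 + a * (x1 - x0), y0 + a * (y1 - y0))

# Move 10 m east in 5 s, stand still for 3 s, then move 6 m north.
path = [(0.0, 0.0, 0.0), (5.0, 10.0, 0.0), (8.0, 10.0, 0.0), (12.0, 10.0, 6.0)]
```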
The camera behavior 204, as shown in Fig. 5, comprises camera motion 501, camera attitude adjustment 502, focal length adjustment 503, and camera disturbance 504. Camera motion 501 means the camera translating in different directions. Camera attitude adjustment 502 means setting the attitude angles of the camera, including the pitch angle and the azimuth angle. Focal length adjustment 503 means zooming the camera's field of view in and out; the focal length corresponds to the camera parameter matrix, and the camera can be calibrated through this correspondence. Camera disturbance 504 refers to the interference of the environment with the lens while the camera moves, including lens shake 505, mist on the lens 506, and dirt on the lens 507.
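The correspondence between focal length and the camera parameter matrix can be sketched with the standard pinhole model; the sensor dimensions and image size below are assumed example values, not patent parameters:

```python
# Illustrative sketch of the focal-length <-> camera-parameter-matrix
# correspondence, using the standard pinhole model.

def intrinsic_matrix(focal_mm, sensor_w_mm, sensor_h_mm, img_w_px, img_h_px):
    """Pinhole intrinsic matrix K; zooming (changing focal_mm) rescales fx, fy."""
    fx = focal_mm * img_w_px / sensor_w_mm   # focal length in horizontal pixels
    fy = focal_mm * img_h_px / sensor_h_mm   # focal length in vertical pixels
    cx, cy = img_w_px / 2.0, img_h_px / 2.0  # principal point at image center
    return [[fx, 0.0, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]]
```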
The parameters of the above scene customization 201, namely the settings of the target environment 202, target behavior 203, camera behavior 204, and multi-camera layout 205, can be realized by scene simulation methods: a three-dimensional scene and a world coordinate system are built, the coordinates and motion modes of the target and the cameras are set, target views are obtained at different attitude angles, and visual simulation means are used to simulate the occlusion, shadows, illumination, target deformation, texture change, and other elements of the scene customization in this embodiment. This technology is prior art and comparatively mature, and is not explained here.
The calibration of a camera is carried out by building a landmark model and solving the parameter matrix inversely through a standard procedure. Specifically, a large "H" landmark is built in the three-dimensional scene, and the coordinates of its 12 corner points in the world coordinate system are obtained; a virtual camera is built, several sets of camera coordinates are taken in the world coordinate system in turn, and the viewpoint is directed at the landmark "H". Using the coordinates of the 12 corner points of "H" in the camera channel of fixed resolution, together with the 12 corner coordinates in the world coordinate system and the sets of camera coordinates, the parameter matrix of the camera is solved inversely according to the translation and rotation transforms between the coordinate systems.
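A minimal sketch of this inverse solution can be given with the standard Direct Linear Transform (DLT), which recovers the 3x4 projection matrix up to scale from world/image correspondences such as the 12 landmark corners; the function names and the synthetic test setup are illustrative assumptions, not the patent's exact procedure:

```python
# Illustrative sketch: Direct Linear Transform (DLT) camera calibration from
# N >= 6 world/image point correspondences (e.g. the 12 corners of the 'H'
# landmark). Each correspondence contributes two linear equations in the 12
# entries of the projection matrix P; the null vector of the stacked system
# (smallest singular vector) gives P up to scale.
import numpy as np

def dlt_projection_matrix(world_pts, image_pts):
    """Solve P (3x4, up to scale) from world (X,Y,Z) and image (u,v) points."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    P = Vt[-1].reshape(3, 4)
    return P / np.linalg.norm(P)

def project(P, X):
    """Project a world point through P and dehomogenize to pixel coordinates."""
    x = P @ np.append(np.asarray(X, dtype=float), 1.0)
    return x[:2] / x[2]
```

With noiseless, non-coplanar points the recovered matrix reprojects the corners exactly (up to floating-point error).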
The layout 205 of the multi-camera network, as shown in Fig. 6, sets five cameras, located respectively above the target at a controlled height 601 and to its front 602, rear 603, left 604, and right 605 within a 90-degree range, with the behavior of each camera adjusted appropriately so that the target is photographed synchronously from each direction. By adjusting within their respective ranges and changing their attitude angles, the five cameras can be combined into video groups of target pictures from different directions.
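The five-camera placement can be sketched as one camera above the target and four around it, each turned to face the target. The geometry and names below are illustrative assumptions, not the patent's exact coordinates:

```python
# Illustrative sketch (assumed geometry): one camera above the target and four
# around it, plus the azimuth each surrounding camera needs to face the target.
import math

def five_camera_layout(target, height, radius):
    """Positions for the top/front/rear/left/right cameras around a target."""
    tx, ty, tz = target
    return {
        "top":   (tx, ty, tz + height),
        "front": (tx + radius, ty, tz),
        "rear":  (tx - radius, ty, tz),
        "left":  (tx, ty - radius, tz),
        "right": (tx, ty + radius, tz),
    }

def azimuth_to_target(cam_xy, target_xy):
    """Azimuth (degrees, 0 = +x axis, counterclockwise) from camera to target."""
    dx, dy = target_xy[0] - cam_xy[0], target_xy[1] - cam_xy[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0
```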
The channel video output 206, as shown in Fig. 7, takes the field-of-view picture of each virtual camera, transmits every frame of each virtual camera synchronously and in real time through a pipeline to the graphics rendering hardware for display, and stores the frame data in a buffer; the resolution 701, frame rate 702, video coding 703, and compression mode 704 are set, the data are converted into a video stream of a mainstream format, and the stream is provided to researchers together with the camera parameters and target model samples for algorithm research.
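The per-camera channel can be sketched as a buffered output with the four declared stream parameters; this is an assumed class for illustration, with the actual encoding to a mainstream video format stubbed out:

```python
# Illustrative sketch (assumed class): a per-camera output channel that buffers
# frames synchronously and records the stream parameters (resolution, frame
# rate, video coding, compression mode). Real encoding is not performed here.
from dataclasses import dataclass, field

@dataclass
class ChannelOutput:
    width: int = 1280
    height: int = 720
    fps: int = 25
    codec: str = "h264"          # video coding (assumed default)
    compression: str = "lossy"   # compression mode (assumed default)
    buffer: list = field(default_factory=list)

    def push_frame(self, frame):
        """Append one frame of channel data to the buffer."""
        self.buffer.append(frame)

    def duration_seconds(self):
        """Length of the buffered video at the configured frame rate."""
        return len(self.buffer) / self.fps
```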
The attribute setting of the target and cameras is realized in two ways. One is pre-definition: the target environment 202 changes, target behavior 203 changes, camera behavior 204 changes, and multi-camera network layout 205 adjustments to be produced during tracking are set in advance by code or another specific means. The other is real-time control: after the scene is rendered, the target environment 202, target behavior 203, camera behavior 204, and multi-camera network layout 205 are controlled and changed in real time through input devices such as a keyboard and mouse. Both schemes can carry out the setting process under the auxiliary observation of scene keyframes.
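The pre-defined control mode can be sketched as a queue of timed scene events dispatched as simulation time advances; the real-time mode would instead push events from keyboard/mouse handlers into the same queue. The class and event names are illustrative assumptions:

```python
# Illustrative sketch (assumed class): pre-defined control mode as a priority
# queue of timed events (environment, target, camera, and layout changes).
import heapq

class ScenarioScript:
    def __init__(self):
        self._events = []   # heap of (time, sequence, action)
        self._seq = 0       # tie-breaker keeps same-time events in insert order

    def schedule(self, t, action):
        """Queue an action to fire at simulation time t."""
        heapq.heappush(self._events, (t, self._seq, action))
        self._seq += 1

    def advance_to(self, t):
        """Pop and return every action whose scheduled time is <= t."""
        fired = []
        while self._events and self._events[0][0] <= t:
            fired.append(heapq.heappop(self._events)[2])
        return fired
```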
The virtual multi-camera video generation method of the invention, as a virtual target tracking video generation method, is applicable not only to single-target tracking but equally to multi-target tracking.
The virtual multi-camera video generation method of the invention outputs five camera channel videos; camera videos of different directions can be chosen from them for the research and verification of target tracking algorithms, so the method is applicable not only to multi-camera target tracking but equally to single-camera target tracking.
Aimed at the shortage of research video source material and the insufficiency of scene variation in multi-camera target tracking, the invention uses visual simulation technology to faithfully reproduce the complex environments of real-world multi-camera target tracking by creating the required scene and customizing the attributes of the tracked target and the cameras; even tracking video material that rarely occurs in reality can be created. The invention conveniently simulates specific complex conditions that are difficult to stage in reality during target tracking, such as shadows, target occlusion, illumination variation, target deformation, texture change, and the appearance and disappearance of targets, and it resolves the three-dimensional realization of multi-camera calibration, collaborative operation, synchronization, and network layout. By processing the frame data of the camera channels, a virtual multi-camera target tracking video is generated, providing substantial help for the research and verification of target tracking algorithms.
The parts not described in detail in this embodiment belong to conventional means well known in the industry and are not narrated one by one here.
The above examples are only illustrations of the invention and do not constitute a restriction of its scope of protection; every design identical or similar to the invention belongs to its scope of protection.

Claims (9)

1. A virtual multi-camera target tracking video material generation method, characterized in that: according to the demands of target tracking algorithm research and verification, three-dimensional models are chosen from a scene database to build a customized scene; the customized scene allows the setting of the target environment, target behavior, and camera behavior and the layout of the multi-camera network, so as to produce the complex environments required by multi-camera target tracking algorithm research; the cameras synchronously acquire pictures through designated camera channels, and the result is output as video files.
2. The virtual multi-camera target tracking video material generation method according to claim 1, characterized in that: the target environment comprises occlusion, shadows, and illumination variation; the occlusion includes trees, buildings, cloud layers, dust, and other objects blocking the target; the shadows are cast near the target by setting the position and angle of the light source so that buildings, trees, billboards, and other objects throw shadows; the illumination variation means setting the position, angle, and intensity of the light source so that the illumination changes from strong to weak or from weak to strong.
3. The virtual multi-camera target tracking video material generation method according to claim 1, characterized in that: the target behavior comprises target motion, target deformation, target texture change, and the appearance and disappearance of the target; the target motion includes target translation, rotation, and the target stopping and moving again; the target deformation means a large change in target morphology, such as a tank turret rotating, a missile transporter erecting its launcher, the target being destroyed, and other changes; the target texture change means a marked change of the surface texture features of the target, such as a military vehicle receiving camouflage paint; the appearance and disappearance of the target refer to the target entering and leaving the field of view of a camera.
4. The virtual multi-camera target tracking video material generation method according to claim 1, characterized in that: the camera behavior comprises camera motion, camera attitude adjustment, focal length adjustment, and camera disturbance; the camera motion means the camera translating in different directions; the camera attitude adjustment means setting the attitude angles of the camera, including the pitch angle and the azimuth angle; the focal length adjustment means zooming the camera's field of view in and out, the focal length corresponding to the camera parameter matrix so that the camera can be calibrated through this correspondence; the camera disturbance refers to the interference of the environment with the lens while the camera moves, including lens shake, mist on the lens, and dirt on the lens.
5. The virtual multi-camera target tracking video material generation method according to claim 1, characterized in that: the multi-camera network layout sets five cameras, located respectively above the target at a controlled height and to its front, rear, left, and right within a 90-degree range, with the behavior of each camera adjusted appropriately so that the target is photographed synchronously from each direction.
6. The virtual multi-camera target tracking video material generation method according to claim 1, characterized in that: the camera channels and video output take the field-of-view picture of each virtual camera, transmit every frame of each virtual camera synchronously and in real time through a pipeline to the graphics rendering hardware for display, and store the frame data in a buffer; the resolution, frame rate, video coding, and compression mode are set, the data are converted into a video stream of a mainstream format, and the stream is provided to researchers together with the camera parameters and target model samples for algorithm research.
7. The virtual multi-camera target tracking video material generation method according to claim 1, characterized in that: the attribute setting of the target and cameras is realized in two ways: one is pre-definition, in which the target environment changes, target behavior changes, camera behavior changes, and multi-camera network layout adjustments to be produced during tracking are set in advance by code or another specific means; the other is real-time control, in which, after the scene is rendered, the target environment, target behavior, camera behavior, and multi-camera network layout are controlled and changed in real time through keyboard and mouse input devices; both schemes can carry out the setting process under the auxiliary observation of scene keyframes.
8. The virtual multi-camera target tracking video material generation method according to claim 1, characterized in that: the virtual target tracking video generation method is applicable not only to single-target tracking but equally to multi-target tracking.
9. The virtual multi-camera target tracking video material generation method according to claim 1, characterized in that: the virtual multi-camera video generation method outputs the video of each camera channel, from which camera videos of different directions are chosen for the research and verification of target tracking algorithms, so the method is applicable not only to multi-camera target tracking but equally to single-camera target tracking.
CN201410332803.7A 2014-07-14 2014-07-14 Virtual multi-camera target tracking video material generation method Pending CN104103081A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410332803.7A CN104103081A (en) 2014-07-14 2014-07-14 Virtual multi-camera target tracking video material generation method


Publications (1)

Publication Number Publication Date
CN104103081A true CN104103081A (en) 2014-10-15

Family

ID=51671201

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410332803.7A Pending CN104103081A (en) 2014-07-14 2014-07-14 Virtual multi-camera target tracking video material generation method

Country Status (1)

Country Link
CN (1) CN104103081A (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105094373A (en) * 2015-07-30 2015-11-25 深圳汇达高科科技有限公司 Gesture collection device for manipulating industrial robot and corresponding gesture collection method
CN105487692A (en) * 2014-12-22 2016-04-13 哈尔滨安天科技股份有限公司 Controller switching method and system based on three-dimensional display space
CN106097788A (en) * 2016-08-22 2016-11-09 上嘉(天津)文化传播有限公司 A kind of VR video teaching system based on Zigbee module
WO2017113577A1 (en) * 2015-12-31 2017-07-06 幸福在线(北京)网络技术有限公司 Method for playing game scene in real-time and relevant apparatus and system
CN107452060A (en) * 2017-06-27 2017-12-08 西安电子科技大学 Full angle automatic data collection generates virtual data diversity method
WO2018028048A1 (en) * 2016-08-12 2018-02-15 南方科技大学 Virtual reality content generation method and apparatus
CN108136954A (en) * 2015-09-14 2018-06-08 法雷奥照明公司 For projecting image onto the projecting method for motor vehicles in projection surface
CN109685002A (en) * 2018-12-21 2019-04-26 创新奇智(广州)科技有限公司 A kind of dataset acquisition method, system and electronic device
PL423499A1 (en) * 2017-11-17 2019-05-20 Politechnika Warszawska Method for creation of a calibration grid using the high-resolution display unit
CN109934907A (en) * 2019-02-14 2019-06-25 深兰科技(上海)有限公司 A kind of sample generating method, device, medium and equipment
CN109963148A (en) * 2017-12-25 2019-07-02 浙江宇视科技有限公司 Video flowing test method, apparatus and system
CN110178375A (en) * 2016-12-13 2019-08-27 乐威指南公司 The system and method for minimizing cover of the coverage diagram to media asset by predicting the movement routine of the object of interest of media asset and avoiding placing coverage diagram in movement routine
CN111654676A (en) * 2020-06-10 2020-09-11 上海趣人文化传播有限公司 Cooperative shooting system and shooting method thereof
CN111739137A (en) * 2020-05-26 2020-10-02 复旦大学 Method for generating three-dimensional attitude estimation data set

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7602404B1 (en) * 1998-04-17 2009-10-13 Adobe Systems, Incorporated Method and apparatus for image assisted modeling of three-dimensional scenes
CN102760303A (en) * 2012-07-24 2012-10-31 南京仕坤文化传媒有限公司 Shooting technology and embedding method for virtual reality dynamic scene video
CN103679800A (en) * 2013-11-21 2014-03-26 北京航空航天大学 System for generating virtual scenes of video images and method for constructing frame of system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
周浩 (Zhou Hao): "Research on Video Object Detection and Tracking Algorithms in Complex Scenes", China Doctoral Dissertations Full-text Database, Information Science and Technology Series *
王莉莉 (Wang Lili): "Research on Key Technologies in a Virtual Studio Prototype System", China Excellent Master's and Doctoral Theses Full-text Database (Master's), Information Science and Technology Series *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105487692A (en) * 2014-12-22 2016-04-13 哈尔滨安天科技股份有限公司 Controller switching method and system based on three-dimensional display space
CN105094373A (en) * 2015-07-30 2015-11-25 深圳汇达高科科技有限公司 Gesture collection device for manipulating industrial robot and corresponding gesture collection method
CN108136954A (en) * 2015-09-14 2018-06-08 法雷奥照明公司 Projection method for a motor vehicle for projecting an image onto a projection surface
CN108136954B (en) * 2015-09-14 2021-06-11 法雷奥照明公司 Projection method for a motor vehicle for projecting an image onto a projection surface
WO2017113577A1 (en) * 2015-12-31 2017-07-06 幸福在线(北京)网络技术有限公司 Method for playing game scene in real-time and relevant apparatus and system
WO2018028048A1 (en) * 2016-08-12 2018-02-15 南方科技大学 Virtual reality content generation method and apparatus
CN106097788A (en) * 2016-08-22 2016-11-09 上嘉(天津)文化传播有限公司 VR video teaching system based on a Zigbee module
CN110178375A (en) * 2016-12-13 2019-08-27 乐威指南公司 Systems and methods for minimizing obstruction of a media asset by an overlay by predicting a path of movement of an object of interest of the media asset and avoiding placement of the overlay in the path of movement
US11611794B2 (en) 2016-12-13 2023-03-21 Rovi Guides, Inc. Systems and methods for minimizing obstruction of a media asset by an overlay by predicting a path of movement of an object of interest of the media asset and avoiding placement of the overlay in the path of movement
CN107452060A (en) * 2017-06-27 2017-12-08 西安电子科技大学 Method for generating diverse virtual data through full-angle automatic acquisition
PL423499A1 (en) * 2017-11-17 2019-05-20 Politechnika Warszawska Method for creation of a calibration grid using the high-resolution display unit
CN109963148A (en) * 2017-12-25 2019-07-02 浙江宇视科技有限公司 Video stream testing method, device and system
CN109963148B (en) * 2017-12-25 2020-08-28 浙江宇视科技有限公司 Video stream testing method, device and system
CN109685002A (en) * 2018-12-21 2019-04-26 创新奇智(广州)科技有限公司 Dataset acquisition method, system and electronic device
CN109934907A (en) * 2019-02-14 2019-06-25 深兰科技(上海)有限公司 Sample generation method, device, medium and equipment
CN111739137A (en) * 2020-05-26 2020-10-02 复旦大学 Method for generating a three-dimensional pose estimation dataset
CN111654676A (en) * 2020-06-10 2020-09-11 上海趣人文化传播有限公司 Cooperative shooting system and shooting method thereof
CN111654676B (en) * 2020-06-10 2021-11-19 上海趣人文化传播有限公司 Cooperative shooting system and shooting method thereof

Similar Documents

Publication Publication Date Title
CN104103081A (en) Virtual multi-camera target tracking video material generation method
JP6275362B1 (en) 3D graphic generation, artificial intelligence verification / learning system, program and method
CN107341832B (en) Multi-view switching shooting system and method based on infrared positioning system
US11514654B1 (en) Calibrating focus/defocus operations of a virtual display based on camera settings
CN104299245B (en) Augmented reality tracking method based on a neural network
CN109783914B (en) Preprocessing dynamic modeling method and device based on virtual reality simulation
CN104463859B (en) Real-time video stitching method based on tracking of specified points
CN105513087A (en) Laser aiming and tracking equipment and method for controlling same
CN102314708B (en) Optical field sampling and simulating method by utilizing controllable light source
CN107784038A (en) Method for annotating sensor data
CN109186552A (en) Method and system for specifying the layout of image acquisition devices
Ganoni et al. A framework for visually realistic multi-robot simulation in natural environment
Yu et al. Intelligent visual-IoT-enabled real-time 3D visualization for autonomous crowd management
CN105139433B (en) Infrared dim and small target image sequence simulation method based on a mean value model
Adithya et al. Augmented reality approach for paper map visualization
CN105139432A (en) Infrared dim and small target image simulation method based on a Gaussian model
Roth et al. Next-generation 3D visualization for visual surveillance
Jiao et al. Lce-calib: automatic lidar-frame/event camera extrinsic calibration with a globally optimal solution
Thieling et al. Scalable sensor models and simulation methods for seamless transitions within system development: From first digital prototype to final real system
Ghasemi et al. Control a drone using hand movement in ROS based on single shot detector approach
CN103093491A (en) Highly realistic rendering method for three-dimensional models combining virtuality and reality based on multi-view video
CN109389538A (en) Intelligent campus management system based on AR technology
Zoellner et al. Reality Filtering: A Visual Time Machine in Augmented Reality.
CN108346183A (en) Method and system for AR origin reference positioning
CN104199314A (en) Method of intelligent simulation testing for robots

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2014-10-15