WO2019174429A1 - Video map engine system - Google Patents

Video map engine system

Info

Publication number
WO2019174429A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
augmented reality
real
client
label
Prior art date
Application number
PCT/CN2019/074378
Other languages
English (en)
French (fr)
Inventor
陈声慧
徐冠杰
孟超伟
胡伟健
林显敬
钟鉴荣
刘卓峰
邓志钊
江盛欣
罗克俊
高文国
宁细晚
丘春森
黄仝宇
汪刚
宋一兵
侯玉清
刘双广
Original Assignee
高新兴科技集团股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 高新兴科技集团股份有限公司
Priority to US16/461,382 (US10909766B2)
Priority to EP19721186.5A (EP3567832A4)
Publication of WO2019174429A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/131 Protocols for games, networked simulations or virtual reality
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/21805 Source of audio or video content, e.g. local disk arrays enabling multiple viewpoints, e.g. using a plurality of cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/54 Interprogram communication
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/53 Network services using third party service providers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02 Services making use of location information
    • H04W4/021 Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences

Definitions

  • the present invention relates to the field of map engine technologies, and in particular, to a video map engine system.
  • from the application-layer perspective, the map engine is a set of function libraries that drive and manage geographic data and provide rendering, querying, and related functions; application-layer software only needs to call the function interfaces provided by the map engine to accomplish its tasks with ease.
  • existing map engine systems use two-dimensional and three-dimensional maps as the base map and add labels on the base map, so that users can understand the scene through the information on the labels.
  • two-dimensional maps use two-dimensional tile maps (i.e., images simulating the terrain) and three-dimensional maps use three-dimensional model maps (i.e., three-dimensional graphics simulating the terrain), but these are merely simulated maps and cannot truly show the user a real-time picture of the scene.
  • the present invention provides a video map engine system, which can achieve a video map effect in which real-time video serves as the base map and augmented reality labels are presented at target positions on the base map.
  • a video map engine system includes a configuration management client, a plurality of video devices, a video access server, an augmented reality processor, and an augmented reality client; the augmented reality client is connected to the configuration management client, the video access server, and the augmented reality processor, respectively;
  • the configuration management client is configured to configure and save parameters of the video device, and send parameters of the video device to the augmented reality client;
  • a plurality of the video devices are used for capturing real-time video, wherein some of the video devices are also used to capture real-time video for augmented reality;
  • the video access server is connected to a plurality of the video devices, and configured to send the real-time video to the augmented reality client;
  • the augmented reality processor is configured to generate an augmented reality tag with a target location and send the generated augmented reality tag to the augmented reality client, and also to delete an augmented reality tag with a target location and feed the deletion information back to the augmented reality client;
  • the augmented reality client includes a processing circuit configured to:
  • when the target location of the augmented reality tag is detected to be GPS coordinates, calculate, from the parameters of the video device and the GPS coordinates of the tag, the video coordinates at which the tag is presented in the real-time video, integrate the tag with the real-time video sent by the video access server, and use the calculated video coordinates to render the augmented reality tag at the corresponding position in the real-time video;
  • when the target location of the augmented reality tag is detected to be video coordinates, integrate the tag with the real-time video sent by the video access server and use the video coordinates directly to render the tag at the corresponding position in the live video.
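  • for illustration only, the two branches above can be pictured with a short sketch; the patent publishes no code, and the Label type, field names, and projection callback below are hypothetical:

```python
# A minimal sketch of the GPS-vs-video-coordinate dispatch; all names are
# assumptions made for illustration, not taken from the patent.
from dataclasses import dataclass
from typing import Callable, Tuple, Union

GpsCoord = Tuple[float, float]     # (latitude, longitude)
VideoCoord = Tuple[int, int]       # (x, y) pixel position in the frame

@dataclass
class Label:
    target: Union[GpsCoord, VideoCoord]
    is_gps: bool                   # True if target is GPS, False if video coords
    text: str = ""

def place_label(label: Label,
                project: Callable[[GpsCoord], VideoCoord]) -> VideoCoord:
    """Return the pixel position at which the label is rendered."""
    if label.is_gps:
        # GPS target: the video coordinates must be calculated first.
        return project(label.target)
    # Video-coordinate target: used directly, no calculation needed.
    return label.target
```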
  • when the target location carried by the augmented reality tag is already video coordinates, the tag can use those coordinates directly to appear at the corresponding position in the real-time video, without any coordinate calculation.
  • when the target position carried by the augmented reality tag is GPS coordinates, the video coordinates need to be calculated to determine the position of the tag in the live video.
  • while the real-time video captured by the video devices is played on the augmented reality client, the augmented reality tags can be presented at the corresponding positions in the real-time video, thereby achieving a video map effect in which the real-time video serves as the base map and augmented reality labels are presented at target positions on the base map.
  • the augmented reality processor can also delete the augmented reality tags that have been generated, and the user can conveniently manage the augmented reality tags when using the video map.
  • the parameters of the video device include an azimuth P of the target location relative to the spatial location of the video device, a vertical angle T, and a zoom factor Z of the video device.
  • using the projection principle, the position of the augmented reality tag in the real-time video can be calculated from the P and T values of the target position relative to the video device, the Z value of the video device, and the GPS coordinates of the tag; that is, the video coordinates of the tag are calculated, determining where the tag is rendered in the live video.
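  • as one plausible realization, purely for illustration, such a projection could be sketched as follows; the patent discloses no formulas, so the flat-earth GPS approximation, the base field of view, and every name here are assumptions:

```python
# Hedged sketch of a "projection principle" under an idealized PTZ pinhole
# camera whose horizontal field of view narrows linearly with the zoom Z.
import math

EARTH_RADIUS = 6371000.0  # mean Earth radius, metres

def gps_to_angles(cam, target):
    """Azimuth/elevation of a GPS target (lat, lon, alt) seen from the camera."""
    d_north = math.radians(target[0] - cam[0]) * EARTH_RADIUS
    d_east = math.radians(target[1] - cam[1]) * EARTH_RADIUS * math.cos(math.radians(cam[0]))
    azimuth = math.degrees(math.atan2(d_east, d_north)) % 360.0
    elevation = math.degrees(math.atan2(target[2] - cam[2], math.hypot(d_north, d_east)))
    return azimuth, elevation

def angles_to_pixel(azimuth, elevation, P, T, Z, width, height, base_hfov=60.0):
    """Map target angles to pixel coordinates given pan P, tilt T, zoom Z.
    base_hfov (degrees at Z = 1) is an illustrative assumption."""
    hfov = base_hfov / Z                        # zoom narrows the field of view
    vfov = hfov * height / width
    dx = (azimuth - P + 180.0) % 360.0 - 180.0  # signed pan offset, degrees
    dy = elevation - T                          # signed tilt offset, degrees
    x = width / 2.0 + (dx / hfov) * width
    y = height / 2.0 - (dy / vfov) * height
    return int(round(x)), int(round(y))
```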
  • the configuration management client determines whether the current values of P, T, and Z are consistent with the values of P, T, and Z it has saved; if they are inconsistent, the augmented reality client recalculates, from the current values of P, T, and Z and the position information of the augmented reality tag, the position at which the tag is presented in the real-time video, so that the tag is presented at the corresponding new position, and the configuration management client updates the saved values of P, T, and Z; if they are consistent, the position of the augmented reality tag rendered in the live video does not change.
  • while the video device captures real-time video, its lens may move or rotate; that is, the values of P, T, and Z of the video device relative to the target position may change, and the position at which the corresponding augmented reality tag is presented in the real-time video will change as well.
  • the configuration management client first determines whether the P, T, and Z values of the video device have changed. If they have changed, the augmented reality client needs to recalculate the position of the augmented reality tag in the real-time video; otherwise the tag would not move with the target position while the real-time video is playing. If there is no change, the target location has not moved during live video playback, the augmented reality client does not need to recalculate the position of the tag, and the tag remains in its original position.
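  • a hedged sketch of this consistency check follows; the dictionary-based P/T/Z state and the recalc callback are illustrative assumptions:

```python
# Sketch of the configuration management client's P/T/Z consistency check.
def on_ptz_report(saved, current, labels, recalc):
    """saved, current: dicts with keys 'P', 'T', 'Z';
    recalc(label, ptz) recomputes a label's video coordinates."""
    if current != saved:
        # Lens moved, rotated, or zoomed: GPS-anchored tags must be
        # re-projected, or they would stay glued to stale pixel positions.
        for label in labels:
            if label.is_gps:
                label.video_pos = recalc(label, current)
        saved.update(current)   # the client updates its saved P, T, Z values
    # If nothing changed, every tag keeps its original position.
```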
  • the augmented reality label is composed of one or more points; the point, or one of the points, is the GPS coordinate point or video coordinate point of the target position.
  • the augmented reality label can be a single point, a line consisting of two points, or a surface composed of multiple points.
  • when the label is a single point, that point is the GPS coordinate point or video coordinate point of the target position.
  • when the label consists of multiple points, one of the points is the GPS coordinate point or video coordinate point of the target position, thereby forming an augmented reality label with a target location.
  • the augmented reality label includes one or more of a point label, a line label, a circle label, a polygon label, an arrow label, an icon label, and a text label.
  • users can freely create different kinds of augmented reality labels through the augmented reality processor, which makes it convenient to identify different target objects in the real-time video and helps the user manage and monitor the scene.
  • the video map engine system further includes a data access server, configured to access a third-party information system and receive the data it sends; the augmented reality client is connected to the data access server.
  • the processing circuit of the augmented reality client is configured to integrate the data sent by the third-party information system with the real-time video sent by the video access server.
  • the data access server provides an interface through which the data of the third-party information system can be transmitted to the augmented reality client and presented in the real-time video, making it easier for the user to employ the third-party information system to assist in on-site management and monitoring.
  • the data sent by the third-party information system carries location information.
  • the augmented reality client calculates, from the values of P, T, and Z and the location information, the position at which the data sent by the third-party information system is presented in the real-time video.
  • from the values of P, T, and Z of the video device relative to the target position and the location information carried by the third-party data, the position of that data in the real-time video can be calculated.
  • while the real-time video captured by the video device is played on the augmented reality client, the data of the third-party information system can be presented at the corresponding position in the real-time video, thereby achieving a video map effect in which real-time video serves as the base map and the third-party information system is presented on the base map.
  • the data access server provides an active access data service and a passive access data service, and the third-party information system accesses through an active access data service or a passive access data service.
  • the active access data service accesses a third-party information system through the SDK, API interface, or database provided by the third-party information system; the passive access data service accesses the third-party information system through the Http API interface provided by the passive access data service.
  • the data access server provides an active access data service and a passive access data service, so that on the development platform of the video map engine the user can flexibly choose the access mode for a third-party information system according to the characteristics of the system to be accessed.
  • the video access server accesses the video device through an SDK, an API interface, or a 28281 protocol.
  • the video access server provides a variety of interfaces, so that the user can flexibly select the access mode of the video device in combination with the characteristics of the video device on the development platform of the video map engine system.
  • from the values of P, T, and Z of the video device relative to the target position, the position of the target in the real-time video can be calculated and the augmented reality label presented at the corresponding position, achieving a video map effect in which real-time video serves as the base map and augmented reality labels appear at target positions;
  • the augmented reality label can move as the target position on the video map moves;
  • the data of third-party information systems can also be presented on the video map, which helps the user employ third-party information systems to assist in on-site management and monitoring;
  • FIG. 1 is a diagram showing the basic architecture of Embodiment 1 of the present invention.
  • FIG. 2 is an architecture diagram of accessing a third-party information system according to Embodiment 2 of the present invention.
  • FIG. 3 is a schematic structural diagram of a processing line according to an embodiment of the present invention.
  • unless otherwise explicitly stated and defined, the terms "mounting" and "connecting" are to be understood broadly: a connection may be fixed, detachable, or integral; it may be a mechanical connection or an electrical connection; and it may be direct, indirect through an intermediate medium, or internal communication between two elements.
  • the specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific circumstances.
  • "configuration management client" and "augmented reality client" refer to hardware devices by which data or information may be transmitted; they may refer to any object (e.g., a device, a sensor, etc.) that has an addressable interface (e.g., an Internet Protocol (IP) address, a Bluetooth (registered trademark) identifier (ID), a near-field communication (NFC) ID, etc.) and can transmit information to one or more other devices over a wired or wireless connection.
  • "configuration management client" and "augmented reality client" may have passive communication interfaces, such as quick response (QR) codes, radio-frequency identification (RFID) tags, NFC tags, etc., or active communication interfaces, such as modems, transceivers, transmitter-receivers, etc.
  • the data orchestration device can include, but is not limited to, a cell phone, a desktop computer, a laptop, a tablet computer, and the like.
  • a video map engine system includes a configuration management client 101, a plurality of video devices 102, a video access server 103, an augmented reality processor 104, and an augmented reality client 105;
  • the configuration management client 101 is used for configuring and saving parameters of the video device 102, the parameters including the azimuth P and vertical angle T of the target location relative to the spatial location of the video device 102, and the zoom factor Z of the video device 102;
  • a plurality of video devices 102 are used for capturing real-time video, wherein some of the video devices 102 are also used to capture real-time video for augmented reality;
  • the video access server 103 is connected to the plurality of video devices 102 for transmitting real-time video to the augmented reality client 105;
  • the augmented reality processor 104 is configured to generate an augmented reality tag with a target location and send the generated augmented reality tag to the augmented reality client 105, and also to delete the augmented reality tag with the target location and feed the deletion information back to the augmented reality client 105;
  • the augmented reality client 105 is connected to the configuration management client 101, the video access server 103, and the augmented reality processor 104, respectively.
  • the augmented reality client includes a processing circuit, and the processing circuit of the augmented reality client is configured to:
  • when the target location of the augmented reality tag is detected to be GPS coordinates, the video coordinates at which the tag is displayed in the real-time video are calculated from the parameters of the video device 102 and the GPS coordinates of the tag; the tag is integrated with the real-time video sent by the video access server 103, and the calculated video coordinates are used to make the tag appear at the corresponding position in the real-time video;
  • when the target location of the augmented reality tag is detected to be video coordinates, the tag is integrated with the real-time video sent by the video access server 103, and the video coordinates are used directly to render the tag at the corresponding position in the real-time video.
  • when the target location carried by the augmented reality tag is already video coordinates, the tag can use those coordinates directly to appear at the corresponding position in the real-time video, without any coordinate calculation.
  • when the target position carried by the augmented reality tag is GPS coordinates, the video coordinates need to be calculated to determine the position of the tag in the live video.
  • while the real-time video captured by the video device 102 is played on the augmented reality client 105, the augmented reality tag is presented at the corresponding position in the real-time video, thereby realizing the video map effect of using the real-time video as a base map and presenting the augmented reality tag at the target position on the base map.
  • the augmented reality processor 104 can also delete the augmented reality tag that has been generated, and the user can conveniently manage the augmented reality tag when using the video map.
  • the parameters of the video device 102 include the azimuth P of the target position relative to the spatial position at which the video device 102 is located, the vertical angle T, and the zoom factor Z of the video device 102.
  • using the projection principle, the position of the augmented reality tag in the real-time video can be calculated from the P and T values of the target position relative to the video device 102, the Z value of the video device 102, and the GPS coordinates of the tag; that is, the video coordinates of the tag are calculated, determining where the tag is presented in the real-time video.
  • the configuration management client 101 determines whether the current values of P, T, and Z are consistent with the values of P, T, and Z it has saved; if not, the configuration management client 101 sends a change instruction to the augmented reality client 105.
  • in response to the change instruction, the augmented reality client 105 recalculates, from the current P, T, and Z values and the position information of the augmented reality tag, the position at which the tag is presented in the real-time video, so that the tag is presented at the corresponding new location, and the configuration management client 101 updates the saved values of P, T, and Z; if consistent, the position of the augmented reality tag presented in the real-time video does not change.
  • while the video device 102 captures real-time video, its lens may move or rotate; that is, the values of P, T, and Z of the video device 102 relative to the target position may change, and the position at which the corresponding augmented reality tag is presented in the real-time video will change as well.
  • the configuration management client 101 first determines whether the P, T, and Z values of the video device 102 have changed. If they have changed, the augmented reality client 105 needs to recalculate the position of the tag in the real-time video; otherwise the tag would not move with the target location while the live video is playing. If there is no change, the target location has not moved during real-time video playback, the augmented reality client 105 does not need to recalculate the position of the tag, and the tag remains in its original position.
  • the augmented reality tag is composed of one or more points; the point, or one of the points, is the GPS coordinate point or video coordinate point of the target position.
  • the augmented reality label can be a single point, a line consisting of two points, or a surface composed of multiple points.
  • when the label is a single point, that point is the GPS coordinate point or video coordinate point of the target position.
  • when the label consists of multiple points, one of the points is the GPS coordinate point or video coordinate point of the target position, thereby forming an augmented reality label with a target location.
  • the augmented reality labels include point labels, line labels, circle labels, polygon labels, arrow labels, icon labels, and text labels.
  • the user can freely create different kinds of augmented reality tags through the augmented reality processor 104, so that the user can identify different target objects in the real-time video, which is beneficial to the user to manage and monitor the scene.
  • the user can modify different label attributes for different kinds of augmented reality labels:
  • Point labels can modify point styles, including alignment, color, transparency, size, etc.
  • Line labels can modify line styles, including alignment, color, transparency, thickness, etc.
  • circle labels can modify the center coordinates, horizontal diameter, vertical diameter, edge style, fill style, etc.;
  • Polygon labels can modify size, edge style, fill style, etc.
  • Arrow labels can modify start point, end point, arrow width, arrow height, tail connection width, edge style, fill style, etc.
  • Icon labels can modify icon styles
  • Text labels can modify text center coordinates, text styles (such as bold, underline, italic, font size, font color).
  • the video map engine system further includes a data access server 106 for accessing the third-party information system 107 and receiving the data it sends; the augmented reality client 105 is connected to the data access server 106 and integrates the data sent by the third-party information system 107 with the real-time video sent by the video access server 103.
  • the data access server 106 provides an interface for the data of the third party information system 107 to be transmitted to the augmented reality client 105 and presented in the live video, facilitating the user to utilize the third party information system 107 to assist in on-site management and monitoring.
  • the third-party information system 107 can be a police system, including a police car, a policeman, a device, etc.; or a traffic system, including a traffic light, a traffic police car, a speedometer, and the like.
  • the data sent by the third-party information system 107 carries location information.
  • the augmented reality client 105 calculates, from the values of P, T, and Z and the location information, the position at which the data sent by the third-party information system 107 is presented in the real-time video, and integrates that data with the real-time video sent by the video access server 103, so that the data is presented at the corresponding location in the live video.
  • from the values of P, T, and Z of the video device 102 relative to the target position and the location information carried by the third-party data, the location of the data of the third-party information system 107 in the live video can be calculated.
  • while the real-time video captured by the video device 102 is played on the augmented reality client 105, the data of the third-party information system 107 can be presented at the corresponding position in the real-time video, thereby realizing the video map effect of using the real-time video as a base map and presenting the third-party information system 107 on the base map.
  • the data access server 106 provides an active access data service and a passive access data service, and the third-party information system 107 accesses through an active access data service or a passive access data service.
  • the active access data service specifically accesses the third-party information system 107 through the SDK, API interface, or database provided by the third-party information system 107; the passive access data service specifically accesses the third-party information system 107 through the Http API interface provided by the passive access data service.
  • the data access server 106 provides an active access data service and a passive access data service, so that on the development platform of the video map engine system the user can flexibly select the access mode of the third-party information system 107 in combination with the characteristics of the system that needs to be accessed.
  • the video access server 103 accesses the video device 102 through the SDK, API interface or 28281 protocol.
  • the video access server 103 provides a variety of interfaces for the user to flexibly select the access mode of the video device 102 in conjunction with the features of the video device 102 on the development platform of the video map engine system.
  • the control device comprises: at least one processor 11, such as a CPU; at least one network interface 14 or other user interface 13; a memory 15; and at least one communication bus 12, the communication bus 12 being used to implement connection and communication between these components.
  • the user interface 13 can optionally include a USB interface and other standard interfaces and wired interfaces.
  • Network interface 14 may optionally include a Wi-Fi interface as well as other wireless interfaces.
  • the memory 15 may contain high speed RAM memory and may also include non-volatile memory, such as at least one disk memory.
  • the memory 15 can optionally include at least one storage device located remotely from the aforementioned processor 11.
  • the memory 15 stores the following elements, executable modules or data structures, or a subset or an extended set thereof:
  • the operating system 151 includes various system programs, such as a battery management system and the like, for implementing various basic services and processing hardware-based tasks;
  • the processor 11 is configured to invoke the program 152 stored in the memory 15 to implement various functions described in the foregoing embodiments, such as integrating the augmented reality tag with the real-time video sent by the video access server 103.
  • the computer program can be partitioned into one or more modules/units that are stored in the memory and executed by the processor to implement the present invention.
  • the one or more modules/units may be a series of computer program instruction segments capable of performing a particular function, the instruction segments being used to describe the execution of the computer program in the processing circuit.
  • the processing circuit may include, but is not limited to, the processor 11 and the memory 15. It will be understood by those skilled in the art that the schematic diagram is merely an example of a processing circuit and does not constitute a limitation; the processing circuit may include more or fewer components than those illustrated, or combine certain components, or use different components.
  • the control device may also include an input and output device, a network access device, a bus, and the like.
  • the processor 11 may be a central processing unit (CPU), or may be other general-purpose processors, a digital signal processor (DSP), an application specific integrated circuit (ASIC), Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, etc.
  • the general-purpose processor may be a microprocessor or any conventional processor; the processor 11 is the control center of the control device, connecting the various parts of the entire processing circuit using various interfaces and lines.
  • the memory 15 can be used to store the computer program and/or modules; the processor 11 implements the various functions of the control device by running or executing the computer program and/or modules stored in the memory 15 and recalling data stored in the memory.
  • the memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, applications required by at least one function, and the like, and the data storage area may store data created according to the use of the mobile phone, and the like.
  • the memory may include high-speed random access memory, and may also include non-volatile memory such as a hard disk, memory, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, at least one disk storage device, a flash memory device, or other volatile solid-state storage device.
  • if the modules/units integrated in the processing circuit are implemented in the form of software functional units and sold or used as standalone products, they can be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the foregoing embodiments of the present invention may also be completed by a computer program instructing related hardware.
  • the computer program may be stored in a computer readable storage medium. The steps of the various method embodiments described above may be implemented when the program is executed by the processor.
  • the computer program comprises computer program code, which may be in the form of source code, object code form, executable file or some intermediate form.
  • the computer readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM). , random access memory (RAM, Random Access Memory) and software distribution media. It should be noted that the content contained in the computer readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction.

Abstract

The present invention relates to a video map engine system, comprising a configuration management client for configuring and saving video device parameters, video devices for capturing real-time video, a video access server, an augmented reality processor, and an augmented reality client. The parameters of a video device include the values of P, T, and Z of the video device relative to the target position; the augmented reality client calculates, from these P, T, and Z values and the target position carried by the augmented reality label generated by the augmented reality processor, the position at which the label is presented in the real-time video, so that the label is presented at the corresponding position in the real-time video, thereby achieving a video map effect in which real-time video serves as the base map and augmented reality labels are presented at target positions on the base map.

Description

Video map engine system
Technical Field
The present invention relates to the field of map engine technologies, and in particular to a video map engine system.
Background Art
From the application-layer perspective, a map engine is a set of function libraries that drive and manage geographic data and provide rendering, querying, and related functions; application-layer software only needs to call the function interfaces provided by the map engine to accomplish its tasks with relative ease. Existing map engine systems use two-dimensional or three-dimensional maps as the base map and add labels on the base map, so that users can understand the on-site situation through the information on the labels. Two-dimensional maps use two-dimensional tile maps (i.e., images simulating the terrain), and three-dimensional maps use three-dimensional model maps (i.e., three-dimensional graphics simulating the terrain); however, these are merely simulated maps and cannot truly show the user a real-time picture of the scene.
Summary of the Invention
In order to overcome at least one of the above-described deficiencies of the prior art, the present invention provides a video map engine system that can achieve a video map effect in which real-time video serves as the base map and augmented reality labels are presented at target positions on the base map.
To achieve the object of the present invention, the following technical solution is adopted:
A video map engine system includes a configuration management client, a plurality of video devices, a video access server, an augmented reality processor, and an augmented reality client; the augmented reality client is connected to the configuration management client, the video access server, and the augmented reality processor, respectively;
the configuration management client is configured to configure and save the parameters of the video devices and send the parameters of the video devices to the augmented reality client;
the plurality of video devices are configured to capture real-time video, and some of the video devices are also used to capture real-time video for augmented reality;
the video access server is connected to the plurality of video devices and configured to send the real-time video to the augmented reality client;
the augmented reality processor is configured to generate augmented reality labels carrying a target position and send the generated labels to the augmented reality client, and is also configured to delete augmented reality labels carrying a target position and feed the deletion information back to the augmented reality client;
the augmented reality client includes a processing circuit, and the processing circuit is configured to:
when the target position of an augmented reality label is detected to be GPS coordinates, calculate, from the parameters of the video device and the GPS coordinates of the label, the video coordinates at which the label is presented in the real-time video, integrate the label with the real-time video sent by the video access server, and use the calculated video coordinates to present the label at the corresponding position in the real-time video;
when the target position of an augmented reality label is detected to be video coordinates, integrate the label with the real-time video sent by the video access server and use the video coordinates directly to present the label at the corresponding position in the real-time video.
When the target position carried by an augmented reality label is already video coordinates, the label can use those coordinates directly to be presented at the corresponding position in the real-time video, and no coordinate calculation is needed. When the target position carried by the label is GPS coordinates, the video coordinates must be calculated to determine where the label is presented in the real-time video.
While the real-time video captured by the video devices is played on the augmented reality client, the augmented reality labels can be presented at the corresponding positions in the real-time video, thereby achieving a video map effect in which the real-time video serves as the base map and augmented reality labels are presented at target positions on the base map.
Besides generating augmented reality labels, the augmented reality processor can also delete labels that have already been generated, so that users can conveniently manage the labels while using the video map.
Further, the parameters of a video device include the azimuth P and the vertical angle T of the target position relative to the spatial position of the video device, and the zoom factor Z of the video device.
Using the projection principle, the position of an augmented reality label in the real-time video can be calculated from the P and T values of the target position relative to the video device, the Z value of the video device, and the GPS coordinates of the label; that is, the video coordinates of the label are calculated, thereby determining where the label is presented in the real-time video.
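The patent invokes the projection principle without publishing formulas. Purely as an assumption-laden sketch under an idealized PTZ pinhole model, the mapping could take the following form, where (p, t) denote the azimuth and elevation of the target derived from its GPS coordinates, θ₀ a base horizontal field of view, and W × H the frame size; all of these symbols are illustrative, not the patent's:

```latex
% Hedged sketch: an idealized PTZ projection, not the patented formulas.
\theta_h = \frac{\theta_0}{Z}, \qquad
\theta_v = \theta_h \cdot \frac{H}{W}, \qquad
x = \frac{W}{2} + \frac{p - P}{\theta_h}\, W, \qquad
y = \frac{H}{2} - \frac{t - T}{\theta_v}\, H .
```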
Further, the configuration management client determines whether the current values of P, T, and Z are consistent with the values of P, T, and Z it has saved; if they are inconsistent, the augmented reality client recalculates, from the current values of P, T, and Z and the position information of the augmented reality label, the position at which the label is presented in the real-time video, so that the label is presented at the corresponding new position, and the configuration management client updates the saved values of P, T, and Z; if they are consistent, the position at which the label is presented in the real-time video does not change.
While a video device captures real-time video, its lens may move or rotate; that is, the values of P, T, and Z of the video device relative to the target position change, and the position at which the corresponding augmented reality label is presented in the real-time video changes as well. The configuration management client first determines whether the P, T, and Z values of the video device have changed. If they have changed, the augmented reality client must recalculate the position at which the label is presented in the real-time video; otherwise the label would not move with the target position while the real-time video is playing. If they have not changed, the target position has not moved during playback, the augmented reality client does not need to recalculate the label's position, and the label stays in its original position.
Further, the augmented reality label consists of one or more points; the point, or one of the points, is the GPS coordinate point or video coordinate point of the target position.
An augmented reality label can be a single point, a line formed by two points, or a surface formed by multiple points. When the label is a single point, that point is the GPS coordinate point or video coordinate point of the target position; when the label consists of multiple points, one of the points is the GPS coordinate point or video coordinate point of the target position, thus forming an augmented reality label carrying a target position.
Further, the augmented reality label includes one or more of a point label, a line label, a circle label, a polygon label, an arrow label, an icon label, and a text label.
Users can freely create different kinds of augmented reality labels through the augmented reality processor, which makes it convenient to identify different target objects in the real-time video and helps users manage and monitor the scene.
Further, the video map engine system further includes a data access server for accessing a third-party information system and receiving the data it sends; the augmented reality client is connected to the data access server, and the processing circuit of the augmented reality client is configured to integrate the data sent by the third-party information system with the real-time video sent by the video access server for presentation.
The data access server provides interfaces so that the data of a third-party information system can also be transmitted to the augmented reality client and presented in the real-time video, which helps users employ third-party information systems to assist with on-site management and monitoring.
Further, the data sent by the third-party information system carries position information; the augmented reality client calculates, from the values of P, T, and Z and the position information, the position at which that data is presented in the real-time video, and integrates the data sent by the third-party information system with the real-time video sent by the video access server, so that the data is presented at the corresponding position in the real-time video.
From the values of P, T, and Z of the video device relative to the target position and the position information carried by the data of the third-party information system, the position of that data in the real-time video can be calculated; while the real-time video captured by the video devices is played on the augmented reality client, the data of the third-party information system can be presented at the corresponding positions in the real-time video, thereby achieving a video map effect in which real-time video serves as the base map and the third-party information system is presented on the base map.
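As an illustration, overlaying such location-bearing records on a frame might look like the sketch below; OpenCV is used only for demonstration, project() stands for the P/T/Z projection described above, and the record fields are assumptions:

```python
# Sketch of integrating location-bearing third-party records into a frame.
import cv2

def overlay_third_party(frame, records, project):
    """records: dicts with 'lat'/'lon'; project maps (lat, lon) to pixels."""
    h, w = frame.shape[:2]
    for rec in records:
        x, y = project((rec["lat"], rec["lon"]))
        if 0 <= x < w and 0 <= y < h:          # draw only targets in view
            cv2.circle(frame, (x, y), 6, (0, 0, 255), -1)
            cv2.putText(frame, str(rec.get("id", "")), (x + 8, y),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)
    return frame
```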
Further, the data access server provides an active data access service and a passive data access service, and third-party information systems are accessed through either the active or the passive data access service.
Further, the active data access service accesses a third-party information system through the SDK, API interface, or database provided by that system; the passive data access service accesses the third-party information system through the Http API interface provided by the passive data access service itself.
The data access server provides both active and passive data access services, so that on the development platform of the video map engine users can flexibly choose the access mode for a third-party information system according to the characteristics of the system to be accessed.
Further, the video access server accesses video devices through an SDK, an API interface, or the 28281 protocol.
The video access server provides multiple interfaces, so that on the development platform of the video map engine system users can flexibly choose the access mode of a video device according to its characteristics.
Compared with the prior art, the technical solution of the present invention has the following beneficial effects:
(1) From the values of P, T, and Z of the video device relative to the target position, the position of the target position in the real-time video can be calculated and the augmented reality label presented at the corresponding position, achieving a video map effect in which real-time video serves as the base map and augmented reality labels are presented at target positions on the base map;
(2) augmented reality labels can move as the target positions move on the video map;
(3) users can freely create or delete augmented reality labels on the video map;
(4) the data of third-party information systems can also be presented on the video map, which helps users employ third-party information systems to assist with on-site management and monitoring;
(5) multiple interface modes are provided, and users can choose different interface modes to access video devices and third-party information systems according to actual needs.
Brief Description of the Drawings
FIG. 1 is a diagram of the basic architecture of Embodiment 1 of the present invention.
FIG. 2 is an architecture diagram of accessing a third-party information system according to Embodiment 2 of the present invention.
FIG. 3 is a schematic structural diagram of a processing circuit provided by an embodiment of the present invention.
Detailed Description of the Embodiments
The drawings are for illustrative purposes only and shall not be construed as limiting this patent;
to better illustrate the embodiments, some components in the drawings may be omitted, enlarged, or reduced, and do not represent the dimensions of the actual product;
it will be understood by those skilled in the art that certain well-known structures in the drawings, and their descriptions, may be omitted.
In the description of the present invention, unless otherwise stated, "a plurality of" means two or more.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified and defined, the terms "mounting" and "connecting" are to be understood broadly: a connection may be fixed, detachable, or integral; it may be a mechanical connection or an electrical connection; and it may be direct, indirect through an intermediate medium, or internal communication between two elements. Those of ordinary skill in the art can understand the specific meanings of the above terms in the present invention according to the specific circumstances.
Various aspects of the present invention are disclosed in the following description and the related drawings. Alternative solutions can be devised without departing from the scope of the present invention. In addition, well-known elements of the present invention will not be described in detail, or will be omitted, so as not to obscure the relevant details of the present invention.
Further, many aspects are described in terms of sequences of actions to be performed by, for example, elements of a computing device. It should be recognized that the various actions described herein can be performed by specific circuits (e.g., an application-specific integrated circuit (ASIC)), by program instructions executed by one or more processors, or by a combination of both. In addition, the sequences of actions described herein can be considered to be embodied entirely within any form of computer-readable storage medium in which a corresponding set of computer instructions is stored, such that execution of those instructions causes an associated processor to perform the functions described herein. Therefore, the various aspects of the present invention may be embodied in many different forms, all of which are considered to fall within the scope of the claimed subject matter. The "configuration management client" and "augmented reality client" described herein refer to hardware devices by which data or information can be transmitted; they may refer to any object (e.g., a device, a sensor, etc.) that has an addressable interface (e.g., an Internet Protocol (IP) address, a Bluetooth (registered trademark) identifier (ID), a near-field communication (NFC) ID, etc.) and can transmit information to one or more other devices over a wired or wireless connection. The "configuration management client" and "augmented reality client" may have a passive communication interface, such as a quick response (QR) code, a radio-frequency identification (RFID) tag, an NFC tag, or the like, or an active communication interface, such as a modem, a transceiver, a transmitter-receiver, or the like. For example, the data orchestration device may include, but is not limited to, a cell phone, a desktop computer, a laptop, a tablet computer, and the like.
The technical solution of the present invention is further described below with reference to the drawings and embodiments.
Embodiment 1
As shown in FIG. 1, a video map engine system includes a configuration management client 101, a plurality of video devices 102, a video access server 103, an augmented reality processor 104, and an augmented reality client 105;
the configuration management client 101 is used to configure and save the parameters of the video devices 102, the parameters including the azimuth P and the vertical angle T between the target position and the spatial position of the video device 102, and the zoom factor Z of the video device 102; the plurality of video devices 102 are used to capture real-time video, and some of the video devices 102 are also used to capture real-time video for augmented reality;
the video access server 103 is connected to the plurality of video devices 102 and used to send real-time video to the augmented reality client 105;
the augmented reality processor 104 is used to generate augmented reality labels carrying a target position and send the generated labels to the augmented reality client 105, and also to delete augmented reality labels carrying a target position and feed the deletion information back to the augmented reality client 105;
the augmented reality client 105 is connected to the configuration management client 101, the video access server 103, and the augmented reality processor 104, respectively; the augmented reality client includes a processing circuit, and the processing circuit of the augmented reality client is configured to:
when the target position of an augmented reality label is detected to be GPS coordinates, calculate, from the parameters of the video device 102 and the GPS coordinates of the label, the video coordinates at which the label is presented in the real-time video, integrate the label with the real-time video sent by the video access server 103, and use the calculated video coordinates to present the label at the corresponding position in the real-time video;
when the target position of an augmented reality label is detected to be video coordinates, integrate the label with the real-time video sent by the video access server 103 and use the video coordinates directly to present the label at the corresponding position in the real-time video.
When the target position carried by an augmented reality label is already video coordinates, the label can use those coordinates directly to be presented at the corresponding position in the real-time video, and no coordinate calculation is needed. When the target position carried by the label is GPS coordinates, the video coordinates must be calculated to determine where the label is presented in the real-time video.
While the real-time video captured by the video devices 102 is played on the augmented reality client 105, the augmented reality labels are presented at the corresponding positions in the real-time video, thereby achieving a video map effect in which the real-time video serves as the base map and augmented reality labels are presented at target positions on the base map.
Besides generating augmented reality labels, the augmented reality processor 104 can also delete labels that have already been generated, so that users can conveniently manage the labels while using the video map.
In Embodiment 1, the parameters of the video device 102 include the azimuth P and the vertical angle T of the target position relative to the spatial position of the video device 102, and the zoom factor Z of the video device 102.
Using the projection principle, the position of an augmented reality label in the real-time video can be calculated from the P, T, and Z values of the target position relative to the video device 102 and the GPS coordinates of the label; that is, the video coordinates of the label are calculated, thereby determining where the label is presented in the real-time video.
In Embodiment 1, the configuration management client 101 determines whether the current values of P, T, and Z are consistent with the values of P, T, and Z it has saved; if they are inconsistent, the configuration management client 101 sends a change instruction to the augmented reality client 105, and in response to the change instruction the augmented reality client 105 recalculates, from the current values of P, T, and Z and the position information of the augmented reality label, the position at which the label is presented in the real-time video, so that the label is presented at the corresponding new position, and the configuration management client 101 updates the saved values of P, T, and Z; if they are consistent, the position at which the label is presented in the real-time video does not change.
While a video device 102 captures real-time video, its lens may move or rotate; that is, the values of P, T, and Z of the video device 102 relative to the target position change, and the position at which the corresponding augmented reality label is presented in the real-time video changes as well. The configuration management client 101 first determines whether the P, T, and Z values of the video device 102 have changed. If they have changed, the augmented reality client 105 must recalculate the position at which the label is presented in the real-time video; otherwise the label would not move with the target position while the real-time video is playing. If they have not changed, the target position has not moved during playback, the augmented reality client 105 does not need to recalculate the label's position, and the label stays in its original position.
In Embodiment 1, the augmented reality label consists of one or more points; the point, or one of the points, is the GPS coordinate point or video coordinate point of the target position.
An augmented reality label can be a single point, a line formed by two points, or a surface formed by multiple points. When the label is a single point, that point is the GPS coordinate point or video coordinate point of the target position; when the label consists of multiple points, one of the points is the GPS coordinate point or video coordinate point of the target position, thus forming an augmented reality label carrying a target position.
In Embodiment 1, the augmented reality labels include point labels, line labels, circle labels, polygon labels, arrow labels, icon labels, and text labels.
Users can freely create different kinds of augmented reality labels through the augmented reality processor 104, which makes it convenient to identify different target objects in the real-time video and helps users manage and monitor the scene.
Specifically, in the process of creating augmented reality labels, the user can modify different label attributes for different kinds of labels (a style sketch follows the list below):
a point label allows modifying the point style, including alignment, color, transparency, size, etc.;
a line label allows modifying the line style, including alignment, color, transparency, thickness, etc.;
a circle label allows modifying the center coordinates, horizontal diameter, vertical diameter, edge style, fill style, etc.;
a polygon label allows modifying the size, edge style, fill style, etc.;
an arrow label allows modifying the start point, end point, arrow width, arrow height, tail connection width, edge style, fill style, etc.;
an icon label allows modifying the icon style;
a text label allows modifying the text center coordinates and the text style (e.g., bold, underline, italic, font size, font color).
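A minimal sketch of how such style attributes might be modeled follows; the attribute names mirror the list above but are otherwise assumptions:

```python
# Illustrative attribute model for label styling; not the patent's data model.
from dataclasses import dataclass

@dataclass
class LineStyle:
    alignment: str = "center"
    color: str = "#FF0000"
    transparency: float = 0.0    # 0 = opaque, 1 = fully transparent
    thickness: int = 2

@dataclass
class TextStyle:
    bold: bool = False
    underline: bool = False
    italic: bool = False
    font_size: int = 14
    font_color: str = "#FFFFFF"

# e.g. a user thickening the stroke of a line label:
style = LineStyle()
style.thickness = 4
```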
Embodiment 2
As shown in FIG. 2, on the basis of Embodiment 1, the video map engine system further includes a data access server 106 for accessing a third-party information system 107 and receiving the data it sends; the augmented reality client 105 is connected to the data access server 106 and integrates the data sent by the third-party information system 107 with the real-time video sent by the video access server 103 for presentation.
The data access server 106 provides interfaces so that the data of the third-party information system 107 can also be transmitted to the augmented reality client 105 and presented in the real-time video, which helps users employ the third-party information system 107 to assist with on-site management and monitoring.
The third-party information system 107 may be a police system, including police cars, police officers, equipment, etc., or a traffic system, including traffic lights, traffic police cars, speed meters, etc.
In Embodiment 2, the data sent by the third-party information system 107 carries position information; the augmented reality client 105 calculates, from the values of P, T, and Z and the position information, the position at which that data is presented in the real-time video, and integrates the data sent by the third-party information system 107 with the real-time video sent by the video access server 103, so that the data is presented at the corresponding position in the real-time video.
From the values of P, T, and Z of the video device 102 relative to the target position and the position information carried by the data of the third-party information system 107, the position of that data in the real-time video can be calculated; while the real-time video captured by the video devices 102 is played on the augmented reality client 105, the data of the third-party information system 107 can be presented at the corresponding positions, thereby achieving a video map effect in which real-time video serves as the base map and the third-party information system 107 is presented on the base map.
In Embodiment 2, the data access server 106 provides an active data access service and a passive data access service, and the third-party information system 107 is accessed through either the active or the passive data access service.
In Embodiment 2, the active data access service specifically accesses the third-party information system 107 through the SDK, API interface, or database provided by the third-party information system 107; the passive data access service specifically accesses the third-party information system 107 through the Http API interface provided by the passive data access service.
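A minimal sketch of the passive path follows; Flask is chosen purely for illustration, and the route and payload shape are assumptions, since the patent only states that an Http API interface is provided:

```python
# Sketch of the passive data access service: the data access server exposes
# an Http API and third parties push data to it.
from flask import Flask, jsonify, request

app = Flask(__name__)
inbox = []   # records to be forwarded to the augmented reality client 105

@app.route("/api/thirdparty/data", methods=["POST"])
def receive_data():
    record = request.get_json(force=True)
    if "lat" not in record or "lon" not in record:
        return jsonify(error="position information required"), 400
    inbox.append(record)
    return jsonify(status="accepted"), 200

if __name__ == "__main__":
    app.run(port=8080)
```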
The data access server 106 provides both active and passive data access services, so that on the development platform of the video map engine system users can flexibly choose the access mode of the third-party information system 107 according to the characteristics of the system to be accessed.
In Embodiment 2, the video access server 103 accesses the video devices 102 through an SDK, an API interface, or the 28281 protocol.
The video access server 103 provides multiple interfaces, so that on the development platform of the video map engine system users can flexibly choose the access mode of a video device 102 according to its characteristics.
The same or similar reference numerals correspond to the same or similar components;
the positional relationships depicted in the drawings are for illustrative purposes only and shall not be construed as limiting this patent.
Referring to FIG. 3, a schematic diagram of a processing circuit provided by one embodiment of the present invention, the control device includes: at least one processor 11, such as a CPU; at least one network interface 14 or other user interface 13; a memory 15; and at least one communication bus 12, the communication bus 12 being used to implement connection and communication between these components. The user interface 13 may optionally include a USB interface, other standard interfaces, and wired interfaces. The network interface 14 may optionally include a Wi-Fi interface and other wireless interfaces. The memory 15 may include high-speed RAM, and may also include non-volatile memory, such as at least one disk storage device. The memory 15 may optionally include at least one storage device located remotely from the aforementioned processor 11.
In some implementations, the memory 15 stores the following elements, executable modules or data structures, or a subset or an extended set thereof:
an operating system 151, containing various system programs, such as a battery management system, for implementing various basic services and processing hardware-based tasks;
a program 152.
Specifically, the processor 11 is configured to invoke the program 152 stored in the memory 15 to implement the various functions described in the above embodiments, for example integrating an augmented reality label with the real-time video sent by the video access server 103.
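As a rough illustration of what program 152 might do at runtime, the sketch below pulls frames, draws the labels, and displays the composite; the stream URL, the label objects, and project() are assumptions, and OpenCV is used only for demonstration:

```python
# Minimal frame loop integrating augmented reality labels with live video.
import cv2

def run(stream_url, labels, project):
    cap = cv2.VideoCapture(stream_url)       # real-time video as the base map
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        for label in labels:
            x, y = project(label.target) if label.is_gps else label.target
            cv2.drawMarker(frame, (int(x), int(y)), (0, 255, 0))
            cv2.putText(frame, label.text, (int(x) + 8, int(y)),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
        cv2.imshow("video map", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```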
Exemplarily, the computer program may be divided into one or more modules/units, which are stored in the memory and executed by the processor to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing particular functions, the instruction segments being used to describe the execution process of the computer program in the processing circuit.
The processing circuit may include, but is not limited to, the processor 11 and the memory 15. Those skilled in the art will understand that the schematic diagram is merely an example of a processing circuit and does not constitute a limitation; the processing circuit may include more or fewer components than illustrated, or combine certain components, or use different components; for example, the control device may also include input and output devices, network access devices, buses, and the like.
The processor 11 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general-purpose processor may be a microprocessor or any conventional processor; the processor 11 is the control center of the control device and connects the various parts of the entire processing circuit using various interfaces and lines.
The memory 15 may be used to store the computer program and/or modules; the processor 11 implements the various functions of the control device by running or executing the computer program and/or modules stored in the memory 15 and recalling the data stored in the memory. The memory may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, applications required by at least one function, etc., and the data storage area may store data created according to the use of the mobile phone, etc. In addition, the memory may include high-speed random-access memory, and may also include non-volatile memory, such as a hard disk, memory, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, at least one disk storage device, a flash memory device, or other volatile solid-state storage device.
If the modules/units integrated in the processing circuit are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of the present invention may also be completed by instructing the relevant hardware through a computer program; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, it can implement the steps of each of the above method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction.
Obviously, the above embodiments of the present invention are merely examples given to clearly illustrate the present invention and are not intended to limit its implementations. Those of ordinary skill in the art can make other changes or modifications in different forms on the basis of the above description. It is neither necessary nor possible to enumerate all implementations here. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall fall within the protection scope of the claims of the present invention.

Claims (10)

  1. A video map engine system, characterized by comprising a configuration management client, a plurality of video devices, a video access server, an augmented reality processor, and an augmented reality client; the augmented reality client is connected to the configuration management client, the video access server, and the augmented reality processor, respectively;
    the configuration management client is configured to configure and save the parameters of the video devices and send the parameters of the video devices to the augmented reality client;
    the plurality of video devices are configured to capture real-time video, and some of the video devices are also used to capture real-time video for augmented reality;
    the video access server is connected to the plurality of video devices and configured to send the real-time video to the augmented reality client;
    the augmented reality processor is configured to generate augmented reality labels carrying a target position and send the generated labels to the augmented reality client, and is also configured to delete augmented reality labels carrying a target position and feed the deletion information back to the augmented reality client;
    the augmented reality client includes a processing circuit, and the processing circuit is configured to:
    when the target position of an augmented reality label is detected to be GPS coordinates, calculate, from the parameters of the video device and the GPS coordinates of the label, the video coordinates at which the label is presented in the real-time video, integrate the label with the real-time video sent by the video access server, and use the calculated video coordinates to present the label at the corresponding position in the real-time video;
    when the target position of an augmented reality label is detected to be video coordinates, integrate the label with the real-time video sent by the video access server and use the video coordinates directly to present the label at the corresponding position in the real-time video.
  2. The video map engine system according to claim 1, characterized in that the parameters of a video device include the azimuth P and the vertical angle T of the target position relative to the spatial position of the video device, and the zoom factor Z of the video device.
  3. The video map engine system according to claim 2, characterized in that the configuration management client determines whether the current values of P, T, and Z are consistent with the values of P, T, and Z saved by the configuration management client; if they are inconsistent, the augmented reality client recalculates, from the current values of P, T, and Z and the target position of the augmented reality label, the new position at which the label is presented in the real-time video, so that the label is presented at the corresponding new position, and the configuration management client updates the saved values of P, T, and Z; if they are consistent, the position at which the label is presented in the real-time video does not change.
  4. The video map engine system according to any one of claims 1 to 3, characterized in that the augmented reality label consists of one or more points; the point, or one of the points, is the GPS coordinate point or video coordinate point of the target position.
  5. The video map engine system according to claim 4, characterized in that the augmented reality label includes one or more of a point label, a line label, a circle label, a polygon label, an arrow label, an icon label, and a text label.
  6. The video map engine system according to claim 2, characterized by further comprising a data access server for accessing a third-party information system and receiving the data sent by the third-party information system; the augmented reality client is connected to the data access server, and the processing circuit of the augmented reality client is configured to integrate the data sent by the third-party information system with the real-time video sent by the video access server for presentation.
  7. The video map engine system according to claim 6, characterized in that the data sent by the third-party information system carries position information; the augmented reality client calculates, from the values of P, T, and Z and the position information, the position at which the data sent by the third-party information system is presented in the real-time video, and integrates the data sent by the third-party information system with the real-time video sent by the video access server, so that the data is presented at the corresponding position in the real-time video.
  8. The video map engine system according to claim 6 or 7, characterized in that the data access server provides an active data access service and a passive data access service, and the third-party information system is accessed through either the active or the passive data access service.
  9. The video map engine system according to claim 8, characterized in that the active data access service accesses the third-party information system through an SDK, API interface, or database provided by the third-party information system, and the passive data access service accesses the third-party information system through an Http API interface provided by the passive data access service.
  10. The video map engine system according to claim 9, characterized in that the video access server accesses the video devices through an SDK, an API interface, or the 28281 protocol.
PCT/CN2019/074378 2018-03-15 2019-02-01 Video map engine system WO2019174429A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US16/461,382 US10909766B2 (en) 2018-03-15 2019-02-01 Video map engine system
EP19721186.5A EP3567832A4 (en) 2018-03-15 2019-02-01 VIDEO MAP ENGINE SYSTEM

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810215658.2 2018-03-15
CN201810215658.2A CN108712362B (zh) 2018-03-15 2018-03-15 Video map engine system

Publications (1)

Publication Number Publication Date
WO2019174429A1 true WO2019174429A1 (zh) 2019-09-19

Family

ID=63866188

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/074378 WO2019174429A1 (zh) 2018-03-15 2019-02-01 Video map engine system

Country Status (4)

Country Link
US (1) US10909766B2 (zh)
EP (1) EP3567832A4 (zh)
CN (1) CN108712362B (zh)
WO (1) WO2019174429A1 (zh)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108712362B (zh) * 2018-03-15 2021-03-16 高新兴科技集团股份有限公司 Video map engine system
CN109889785B (zh) * 2019-02-26 2021-01-01 高新兴科技集团股份有限公司 Unity-based virtual simulation method for POI label display
CN110267087B (zh) * 2019-06-14 2022-03-11 高新兴科技集团股份有限公司 Dynamic label adding method, device and system


Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8635307B2 (en) * 2007-02-08 2014-01-21 Microsoft Corporation Sensor discovery and configuration
US7777783B1 (en) * 2007-03-23 2010-08-17 Proximex Corporation Multi-video navigation
US8839121B2 (en) * 2009-05-06 2014-09-16 Joseph Bertolami Systems and methods for unifying coordinate systems in augmented reality applications
US8331611B2 (en) * 2009-07-13 2012-12-11 Raytheon Company Overlay information over video
US20110102460A1 (en) * 2009-11-04 2011-05-05 Parker Jordan Platform for widespread augmented reality and 3d mapping
WO2011063034A1 (en) * 2009-11-17 2011-05-26 Rtp, Llc Systems and methods for augmented reality
US20130142384A1 (en) * 2011-12-06 2013-06-06 Microsoft Corporation Enhanced navigation through multi-sensor positioning
US9367961B2 (en) * 2013-04-15 2016-06-14 Tencent Technology (Shenzhen) Company Limited Method, device and storage medium for implementing augmented reality
JP2015095686A (ja) * 2013-11-08 2015-05-18 キヤノン株式会社 Imaging apparatus, imaging system, control method of imaging apparatus, control method of imaging system, and program
CN104331929B (zh) * 2014-10-29 2018-02-02 深圳先进技术研究院 Crime scene restoration method based on video map and augmented reality
US10319128B2 (en) * 2016-09-26 2019-06-11 Rockwell Automation Technologies, Inc. Augmented reality presentation of an industrial environment

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN205232356U * 2015-11-17 2016-05-11 高新兴科技集团股份有限公司 Augmented reality monitoring system based on a mobile terminal
CN106027960A * 2016-05-13 2016-10-12 深圳先进技术研究院 Positioning system and method
CN107426065A * 2017-04-22 2017-12-01 高新兴科技集团股份有限公司 Three-dimensional prevention and control system
CN107770496A * 2017-11-03 2018-03-06 中国民用航空总局第二研究所 Intelligent aircraft monitoring method, device and system on panoramic video
CN108712362A * 2018-03-15 2018-10-26 高新兴科技集团股份有限公司 Video map engine system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3567832A4 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114071246A (zh) * 2020-07-29 2022-02-18 Media augmented reality labeling method, computer device and storage medium
CN114071246B (zh) * 2020-07-29 2024-04-16 Media augmented reality labeling method, computer device and storage medium

Also Published As

Publication number Publication date
CN108712362A (zh) 2018-10-26
EP3567832A4 (en) 2019-12-11
CN108712362B (zh) 2021-03-16
US10909766B2 (en) 2021-02-02
EP3567832A1 (en) 2019-11-13
US20200258304A1 (en) 2020-08-13

Similar Documents

Publication Publication Date Title
WO2019174429A1 (zh) Video map engine system
KR102344482B1 (ko) Geo-fence evaluation system
KR102493509B1 (ko) System for tracking engagement of media items
KR102272256B1 (ko) Virtual vision system
US20190101407A1 (en) Navigation method and device based on augmented reality, and electronic device
US10863310B2 (en) Method, server and terminal for information interaction
CN113330484A (zh) Virtual surface modification
KR20240033161A (ko) Redundant tracking system
CN114341780A (zh) Context-based virtual object rendering
KR102558866B1 (ko) Deriving audiences through filter activity
TW201643818A (zh) Display data processing method, apparatus and system
KR20170043537A (ko) Geo-fence notification subscription technique
CN111295898B (zh) Motion-based display content mapping control
WO2023179346A1 (zh) Special effect image processing method and apparatus, electronic device, and storage medium
CN107084740A (zh) Navigation method and device
KR20230021722A (ko) Bidirectional bridge for web view
KR20230104676A (ko) Use of portrait images in augmented reality components
CN110248165B (zh) Label display method, apparatus, device and storage medium
CN105224553A (zh) Method for displaying real-time traffic conditions on a mobile phone map
TWI767225B (zh) Anchor point sharing method and apparatus, system, electronic device, and computer-readable storage medium
CN116684540A (zh) Method, device and medium for presenting augmented reality data
CN105373542A (zh) Mobile phone map annotation method based on real-time dynamic rendering
WO2023235399A1 (en) External messaging function for an interaction system
CN111970545A (zh) Advertising resource download method, apparatus and storage medium

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2019721186

Country of ref document: EP

Effective date: 20190502

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19721186

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE