CN115439635A - Method and equipment for presenting mark information of target object - Google Patents

Method and equipment for presenting mark information of target object

Info

Publication number: CN115439635A
Authority: CN (China)
Prior art keywords: position information, information, augmented reality, target object, equipment
Application number: CN202210762152.XA
Other languages: Chinese (zh)
Other versions: CN115439635B
Inventors: 廖春元, 黄海波, 韩磊, 梅岭
Current Assignee: Hiscene Information Technology Co Ltd
Original Assignee: Hiscene Information Technology Co Ltd
Application filed by Hiscene Information Technology Co Ltd
Priority: CN202210762152.XA (CN115439635B); PCT/CN2022/110489 (WO2024000733A1)
Legal status: Granted; Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G06T 19/006 - Mixed reality
    • G06T 11/00 - 2D [Two Dimensional] image generation
    • G06T 11/60 - Editing figures and text; Combining figures or text

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application aims to provide a method and equipment for presenting mark information of a target object. The method comprises: acquiring a scene image captured by an unmanned aerial vehicle device; acquiring a user operation performed by a command user of a command device on a target object in the scene image, and generating mark information for the target object based on the user operation, wherein the mark information comprises corresponding mark content and image position information of the mark content in the scene image. The image position information is used to determine the geographic position information of the target object and to present the mark content superimposed in the real scene of the augmented reality device of an on-duty user, the augmented reality device and the command device being in a cooperative execution state of the same cooperative task. The application combines spatial computing technology, augmented reality technology and the command system, enriching the forms of command while greatly improving command efficiency, thereby providing a good command environment for users.

Description

Method and equipment for presenting mark information of target object
Technical Field
The present application relates to the field of communications, and in particular, to a technique for presenting label information of a target object.
Background
With the rapid rise of the unmanned aerial vehicle industry, the application scenarios of unmanned aerial vehicles are becoming clearer. For example, police drone equipment is favored in practical operations for its high-altitude panoramic video acquisition: the drone transmits a high-altitude panoramic picture to a command center over a video link for global command and scheduling of major events. However, existing command-and-dispatch systems offer only a single front-end and back-end command means, relying mainly on two-way audio and video calls, text and pictures, which is inefficient and lacks intuitiveness in complex and changeable scenes.
Disclosure of Invention
An object of the present application is to provide a method and apparatus for presenting mark information of a target object.
According to an aspect of the present application, there is provided a method of presenting mark information of a target object, the method including:
acquiring a scene image captured by an unmanned aerial vehicle device;
acquiring a user operation performed by a command user of a command device on a target object in the scene image, and generating mark information for the target object based on the user operation, wherein the mark information includes corresponding mark content and image position information of the mark content in the scene image, the image position information is used to determine geographic position information of the target object and to present the mark content superimposed in the real scene of an augmented reality device of an on-duty user, and the augmented reality device and the command device are in a cooperative execution state of the same cooperative task.
According to another aspect of the present application, there is provided a method for presenting mark information of a target object, applied to an augmented reality device, wherein the method includes:
acquiring first pose information of the augmented reality device used by an on-duty user, wherein the first pose information includes first position information and first attitude information of the augmented reality device, the first pose information is used, in combination with geographic position information of a corresponding target object, to determine overlay position information of the target object in the real scene of the augmented reality device, and the mark content of the target object is presented superimposed in the real scene based on the overlay position information.
According to an aspect of the present application, there is provided a command device that presents mark information of a target object, wherein the device includes:
a first module, configured to acquire a scene image captured by an unmanned aerial vehicle device;
a second module, configured to acquire a user operation performed by a command user of the command device on a target object in the scene image, and to generate mark information for the target object based on the user operation, wherein the mark information includes corresponding mark content and image position information of the mark content in the scene image, the image position information is used to determine the geographic position information of the target object and to present the mark content superimposed in the real scene of the augmented reality device of the on-duty user, and the augmented reality device and the command device are in a cooperative execution state of the same cooperative task.
According to another aspect of the present application, there is provided an augmented reality device that presents mark information of a target object, wherein the device includes:
a first pose information module, configured to acquire first pose information of the augmented reality device used by an on-duty user, wherein the first pose information includes first position information and first attitude information of the augmented reality device, the first pose information is used, in combination with geographic position information of the corresponding target object, to determine overlay position information of the target object in the real scene of the augmented reality device, and the mark content related to the target object is presented superimposed in the real scene based on the overlay position information.
According to an aspect of the present application, there is provided a computer apparatus, wherein the apparatus comprises:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform the steps of a method as described in any one of the above.
According to an aspect of the application, there is provided a computer-readable storage medium having a computer program/instructions stored thereon, characterized in that the computer program/instructions, when executed, cause a system to perform the steps of the method as described in any of the above.
According to an aspect of the application, there is provided a computer program product comprising computer programs/instructions, characterized in that the computer programs/instructions, when executed by a processor, implement the steps of the method as described in any of the above.
Compared with the prior art, the present application acquires a scene image through the command device and determines the mark information of a target object based on the scene image, so that the mark content of the mark information is presented superimposed in the real scene of the augmented reality device. This combines spatial computing technology, augmented reality technology and the command system, enriches the forms of command while greatly improving command efficiency, and thereby provides a good command environment for users.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments made with reference to the following drawings:
FIG. 1 illustrates a flow diagram of a method of presenting tagging information for a target object according to one embodiment of the application;
FIG. 2 illustrates a flow diagram of a method of presenting tagging information for a target object, according to another embodiment of the present application;
FIG. 3 illustrates an equipment configuration diagram of a command device according to an embodiment of the present application;
FIG. 4 illustrates a device structure diagram of an augmented reality device according to another embodiment of the present application;
FIG. 5 illustrates an exemplary system that can be used to implement the various embodiments described in this application.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
The present application is described in further detail below with reference to the attached figures.
In a typical configuration of the present application, the terminal, the device serving the network, and the trusted party each include one or more processors (e.g., Central Processing Units (CPUs)), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory, random access memory (RAM), and/or non-volatile memory in a computer-readable medium, such as read-only memory (ROM) or flash memory. Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PCM), programmable random access memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device.
The device referred to in this application includes, but is not limited to, a user device, a network device, or a device formed by integrating a user device and a network device through a network. The user device includes, but is not limited to, any mobile electronic product capable of human-computer interaction with a user (e.g., through a touch panel), such as a smartphone or a tablet computer, and the mobile electronic product may employ any operating system, such as the Android operating system or the iOS operating system. The network device includes an electronic device capable of automatically performing numerical calculation and information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an Application Specific Integrated Circuit (ASIC), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like. The network device includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud of multiple servers; here, the cloud is composed of a large number of computers or network servers based on cloud computing, a kind of distributed computing in which a virtual supercomputer consists of a collection of loosely coupled computers. The network includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN, a wireless ad hoc network, and the like. Preferably, the device may also be a program running on the user device, the network device, or a device formed by integrating the user device with the network device, the touch terminal, or the network device with the touch terminal through a network.
Of course, those skilled in the art will appreciate that the foregoing is by way of example only, and that other existing or future devices, which may be suitable for use in the present application, are also encompassed within the scope of the present application and are hereby incorporated by reference.
In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
Fig. 1 shows a method for presenting mark information of a target object, applied to a command device, according to an aspect of the present application, where the method includes step S101 and step S102. In step S101, a scene image captured by the unmanned aerial vehicle device is acquired; in step S102, a user operation of a command user of a command device with respect to a target object in the scene image is obtained, and tag information with respect to the target object is generated based on the user operation, where the tag information includes corresponding tag content and image location information of the tag content in the scene image, the image location information is used to determine geographic location information of the target object and to superimpose and present the tag content in a real scene of an augmented reality device of an on-duty user, and the augmented reality device and the command device are in a cooperative execution state of the same cooperative task. The command device includes, but is not limited to, a user device, a network device, and a device formed by integrating the user device and the network device via a network. The user equipment includes, but is not limited to, any mobile electronic product capable of human-computer interaction with a user, such as a mobile phone, a personal computer, a tablet computer, and the like; the network device includes, but is not limited to, a computer, a network host, a single network server, a plurality of network server sets, or a cloud of multiple servers.
The command device establishes communication connection with corresponding unmanned aerial vehicle devices, augmented reality devices and the like, and transmits related data through the communication connection. In some cases, the command device and the drone device and/or the augmented reality device are in a cooperative execution state of the same cooperative task, where the cooperative task refers to a certain task that is jointly completed by multiple devices according to certain constraint conditions (e.g., a spatial distance from a target object, a time constraint, physical conditions on the devices themselves, or a task execution sequence, etc.) to achieve a certain criterion, and the task may be generally decomposed into multiple subtasks and distributed to each device in the system, and the distributed subtasks are respectively completed by each device, so that the progress of the overall task progress of the cooperative task is achieved. And the corresponding command equipment serves as a control center of the cooperative task system in the execution process of the corresponding cooperative task, and the subtasks and the like of each equipment in the cooperative task are regulated and controlled. The task participating devices of the collaborative task comprise commanding devices, one or more unmanned aerial vehicle devices and one or more augmented reality devices, and corresponding commanding devices are correspondingly operated by commanding users; the unmanned aerial vehicle equipment can acquire images or fly based on acquisition instructions/flight path planning instructions and the like sent by the command equipment, and can also control the unmanned aerial vehicle equipment through ground control equipment of the unmanned aerial vehicle equipment by a corresponding unmanned aerial vehicle flyer, the ground control equipment receives and presents control instructions sent by the command equipment, and the control of the unmanned aerial vehicle equipment is realized by the control operation of the unmanned aerial vehicle flyer; augmented reality equipment is worn and is controlled by corresponding user on duty, and augmented reality equipment includes but not limited to augmented reality glasses, augmented reality helmet etc.. Certainly, in some cases, besides the participation of the command device, the augmented reality device and/or the unmanned aerial vehicle device, the cooperative task may also be performed by the network device for three-way data transmission, data processing, and the like, for example, the unmanned aerial vehicle device sends the corresponding scene image to the corresponding network device, and the command device and/or the augmented reality device and the like acquire the scene image through the network device.
Specifically, in step S101, an image of a scene captured by the drone device is acquired. For example, the unmanned aerial vehicle device is an unmanned aerial vehicle operated by using a radio remote control device and a self-contained program control device, and has the advantages of small volume, low manufacturing cost, convenient use, low requirement on the battlefield environment, strong battlefield viability and the like. The unmanned aerial vehicle device can acquire scene images of a specific area, for example, the unmanned aerial vehicle device acquires the scene images of the corresponding area in the flight process based on a preset flight route or a predetermined target point, the unmanned aerial vehicle device can record shooting pose information corresponding to the unmanned aerial vehicle device when the scene images are acquired in the scene image acquisition process, and the shooting pose information comprises shooting position information, shooting attitude information and the like of a shooting device of the unmanned aerial vehicle device when the scene images are acquired. Unmanned aerial vehicle equipment or the ground controlgear who corresponds can send this scene image to network equipment to send to corresponding equipment etc. by network equipment, perhaps unmanned aerial vehicle equipment or the ground controlgear who corresponds can directly send the scene image to this corresponding equipment etc. with the communication connection who corresponds equipment, and wherein, corresponding equipment is including commander's equipment and/or augmented reality equipment. In some cases, in the process of sending the scene image, the drone device also sends the camera pose information corresponding to the scene image to the corresponding device or the network device, for example, the camera pose information is sent to the command device and/or the augmented reality device or the network device. Specifically, for example, the network device may forward, based on the cooperative task, a scene image acquired by the unmanned aerial vehicle device in the cooperative task execution state to the command device and/or the augmented reality device, and the like; or, the unmanned aerial vehicle device transmits the acquired scene image to the network device in real time, the commanding device and/or the augmented reality device sends an image acquisition request about the unmanned aerial vehicle device to the network device, the corresponding image acquisition request contains the device identification information of the unmanned aerial vehicle device, the network device responds to the image acquisition request, the scene image acquired by the unmanned aerial vehicle device is called based on the device identification information of the unmanned aerial vehicle device, and the scene image is sent to the commanding device and/or the augmented reality device. After the command device and/or the augmented reality device acquires the corresponding scene image, the scene image is presented in a corresponding display device (e.g., a display screen, a projector, etc.).
In step S102, a user operation of a command user of a command device with respect to a target object in the scene image is obtained, and tag information about the target object is generated based on the user operation, where the tag information includes corresponding tag content and image location information of the tag content in the scene image, the image location information is used to determine geographic location information of the target object and to overlay and present the tag content in a real scene of an augmented reality device of a duty user, and the augmented reality device and the command device are in a cooperative execution state of the same cooperative task. For example, the command device includes a data acquisition device for acquiring user operations of the command user, such as a keyboard, a mouse, a touch screen or a touch pad, an image acquisition unit, a voice input unit, and the like. For example, the user operation of the command user may be a gesture action or a voice instruction of the command user, and the mark information is generated by recognizing the gesture action or the voice instruction; for another example, the user operation of the command user may be a direct operation on the scene image by using a device such as a keyboard, a mouse, a touch screen, or a touch pad, for example, the command user performs a frame selection, a doodle, or other editing information (e.g., editing text, adding 2D or 3D model information, etc.) on the presented scene image by using the mouse, and the like. In some embodiments, the command device may present, at the same time as presenting the image of the scene captured by the drone device, an operation interface related to the image of the scene, and the command user may operate a control in the operation interface to implement marking of the target object in the image of the scene, for example, the command device may capture marking information generated by the user regarding a marking operation in the image of the scene about the target object, specifically, the marking information includes, but is not limited to, a frame selection, a graffiti, or other editing information added to a specific area/a specific position/a specific target in the image of the scene. The target object is used to indicate a physical object or the like, such as a pedestrian, a vehicle, a geographical location/area, a building, a street or other identifying object, etc., contained in the tagged region in the scene image corresponding to the tagging information. The corresponding mark information includes mark content determined by user operation and image position information of the mark content in the scene image, the mark content is determined by information added by the user operation in the scene image, including but not limited to a square, a circle, a line, a point, an arrow, a picture/video, an animation, a three-dimensional model, and the like, preferably, the mark content further includes parameter information, such as color, thickness, and the like, and the corresponding image position information is used for indicating coordinate information of the mark information in a corresponding image/pixel coordinate system of the scene image, and the coordinate information can be a region coordinate set of a mark region where the target object is located or position coordinates corresponding to a specific position, and the like.
In some cases, the image position information is used to determine the geographic position information of the target object and to present the mark content superimposed in the real scene of the augmented reality device of the on-duty user. For example, any participating device in the cooperative task (the command device, the unmanned aerial vehicle device, the augmented reality device, the network device, or the like) may calculate and determine, based on the image position information and the camera pose information when the scene image was captured, the geographic position information corresponding to the image position information in the geographic coordinate system of the real world. The geographic coordinate system generally refers to a coordinate system consisting of longitude, latitude and altitude, which can indicate any position on the earth. Different reference ellipsoids may be used in different regions, and even if the same ellipsoid is used, the orientation or even the size of the ellipsoid may be adjusted to make it better fit the local geoid. This requires different geodetic datum systems for identification, such as the CGCS2000 and WGS84 geographic coordinate systems commonly used in China. Among them, WGS84 is currently the most widely used geographic coordinate system, and is also the coordinate system used by the widely deployed GPS global satellite positioning system. The three-dimensional rectangular coordinate system includes, but is not limited to, a station-center coordinate system, a navigation coordinate system, an NWU coordinate system, and the like. Specifically, the camera position information corresponding to the scene image can be obtained, and at the same time the spatial position information of a plurality of map points can be obtained, where the spatial position information includes the spatial coordinate information of the corresponding map points in the three-dimensional rectangular coordinate system.
Under the condition that the three-dimensional rectangular coordinate system is known, coordinate transformation corresponding to the conversion of the geographic position information from the geographic coordinate system to the three-dimensional rectangular coordinate system is also known, and map points in the geographic coordinate system can be converted into the three-dimensional rectangular coordinate system based on the known coordinate transformation information, so that corresponding spatial position information is determined based on the geographic coordinate information of the map points; further, the target spatial position information of the target point in the three-dimensional rectangular coordinate system is determined according to the spatial position information of the plurality of map points, the target point image position information, the image pickup position information and the image pickup posture information, for example, after the known spatial position information of the plurality of map points, the target point image position information, the image pickup position information and the corresponding image pickup posture information are obtained, since the internal reference of the image pickup device is known, a spatial ray of the image position information corresponding to the target point through the camera optical center can be constructed based on the camera imaging model, and the target spatial position information of the target point is determined based on the spatial ray, the spatial position information of the plurality of map points and the image pickup position information. For example, it may be assumed that the image position information is perpendicular to the plane where the camera negative is located (e.g., the optical axis corresponding to the center of the image of the unmanned aerial vehicle is perpendicular to the plane where the camera negative is located, etc.), so as to determine corresponding spatial ray information based on the normal vector of the plane where the negative is located and the image position information, determine a corresponding intersection point based on the spatial ray information and ground information composed of a plurality of map points, and use the spatial coordinate information of the intersection point as target spatial position information of a target point, etc. Of course, if the pixel corresponding to the image position information is not located in the image center, there is an error between the normal vector determined based on the negative film and the actual ray vector, and at this time, we need to determine the vector information of the spatial target ray corresponding to the image position information through the imaging model of the camera, the image position information, and the shooting posture information, where the spatial target ray is described by the optical center coordinates and the vector information of the ray. After the computer device determines the vector information of the corresponding spatial target ray, it may calculate an intersection point of the ray with respect to the ground based on the vector information of the target ray, the camera position information, and the spatial position information of the plurality of map points, thereby taking the spatial coordinate information of the intersection point as the target spatial position information of the target point, and the like. 
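As a concrete illustration of the coordinate transformation described above, the following is a minimal sketch, in Python, of converting WGS84 geodetic coordinates into a local east-north-up (ENU) rectangular frame via Earth-centered, Earth-fixed coordinates; the constants are the standard WGS84 ellipsoid parameters, and all function names and sample coordinates are illustrative rather than taken from the disclosure.

import math

WGS84_A = 6378137.0                # WGS84 semi-major axis, metres
WGS84_E2 = 6.69437999014e-3        # first eccentricity squared

def geodetic_to_ecef(lon_deg, lat_deg, alt_m):
    # Geodetic (WGS84) coordinates to Earth-centered, Earth-fixed Cartesian coordinates.
    lon, lat = math.radians(lon_deg), math.radians(lat_deg)
    n = WGS84_A / math.sqrt(1.0 - WGS84_E2 * math.sin(lat) ** 2)
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - WGS84_E2) + alt_m) * math.sin(lat)
    return x, y, z

def ecef_to_enu(xyz, ref_lon_deg, ref_lat_deg, ref_alt_m):
    # ECEF coordinates to a local east-north-up frame anchored at the reference point.
    lon, lat = math.radians(ref_lon_deg), math.radians(ref_lat_deg)
    ox, oy, oz = geodetic_to_ecef(ref_lon_deg, ref_lat_deg, ref_alt_m)
    dx, dy, dz = xyz[0] - ox, xyz[1] - oy, xyz[2] - oz
    east = -math.sin(lon) * dx + math.cos(lon) * dy
    north = (-math.sin(lat) * math.cos(lon) * dx
             - math.sin(lat) * math.sin(lon) * dy
             + math.cos(lat) * dz)
    up = (math.cos(lat) * math.cos(lon) * dx
          + math.cos(lat) * math.sin(lon) * dy
          + math.sin(lat) * dz)
    return east, north, up

# Hypothetical reference (e.g. the drone takeoff point) and one map point, lon/lat/alt.
reference = (121.4800, 31.2200, 15.0)
map_point_ecef = geodetic_to_ecef(121.4810, 31.2210, 12.0)
print(ecef_to_enu(map_point_ecef, *reference))   # map point expressed in the local frame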
Finally, the geographic coordinate information of the target point in a geographic coordinate system (such as a geodetic coordinate system) is determined based on the target spatial position information of the target point. For example, after the computer device determines the target spatial position information of the target point, the coordinate information in the three-dimensional rectangular coordinate system can be converted from the three-dimensional spatial coordinate system to a geographic coordinate system (e.g., WGS84 coordinate system) and stored, thereby facilitating subsequent calculation. In some embodiments, determining the target spatial position information of the target point in the three-dimensional rectangular coordinate system according to the vector information of the target ray, the imaging position information and the spatial position information of the plurality of map points includes: acquiring optical center space position information of an optical center of the camera device in a three-dimensional rectangular coordinate system based on the camera position information; determining a target map point closest to the target ray from the map points according to the vector information of the target ray, the spatial position information of the map points and the optical center spatial position information; two map points are taken from other map points except the target map point in the plurality of map points, a corresponding space triangle is formed with the target map point, and a corresponding space intersection point is determined according to the target ray and the corresponding space triangle; and taking the space coordinate information of the space intersection point as the target space position information of the target point.
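The ray-casting step itself might be sketched as follows, assuming the image position, the camera intrinsic matrix K, the camera-to-world rotation R_wc and the optical center are all expressed consistently in the local rectangular frame; the Moller-Trumbore test is used here as one standard way of intersecting the spatial target ray with a triangle formed from three map points. All symbols and values are illustrative, not taken from the disclosure.

import numpy as np

def pixel_ray(u, v, K, R_wc):
    # Unit direction, in the local world frame, of the ray through pixel (u, v).
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction in the camera frame
    d_world = R_wc @ d_cam
    return d_world / np.linalg.norm(d_world)

def ray_triangle_intersection(origin, direction, v0, v1, v2, eps=1e-9):
    # Moller-Trumbore test; returns the 3D intersection point, or None if there is none.
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1.dot(p)
    if abs(det) < eps:
        return None                                    # ray is parallel to the triangle
    inv_det = 1.0 / det
    s = origin - v0
    u = s.dot(p) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = direction.dot(q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = e2.dot(q) * inv_det
    return origin + t * direction if t > 0.0 else None

# Hypothetical intrinsics, pose and three neighbouring map points (local frame, metres).
K = np.array([[1200.0, 0.0, 960.0], [0.0, 1200.0, 540.0], [0.0, 0.0, 1.0]])
R_wc = np.diag([1.0, -1.0, -1.0])        # nadir-looking camera: optical axis points straight down
optical_center = np.array([0.0, 0.0, 100.0])
tri = (np.array([-50.0, -50.0, 0.0]), np.array([80.0, -40.0, 0.0]), np.array([0.0, 90.0, 0.0]))

direction = pixel_ray(980.0, 560.0, K, R_wc)
target = ray_triangle_intersection(optical_center, direction, *tri)
print(target)   # target spatial position; converted back to WGS84 for storage, as described above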
Or determining the current position information of the target point in a camera coordinate system according to the image position information of the target point on the unmanned aerial vehicle scene image and the camera internal reference information of the unmanned aerial vehicle; and determining the geographical position information of the target point in the geographical coordinate system according to the current position information of the target point in the camera coordinate system and the external parameters of the camera determined based on the shooting parameter information when the unmanned aerial vehicle shoots the scene image, wherein the shooting parameter information comprises but is not limited to the resolution of an image pick-up device of the unmanned aerial vehicle device, the field angle, the rotation angle of the camera, the flight height of the unmanned aerial vehicle and the like. If the marking area of the target object is only 1 point, the geographical position information corresponding to the target object is the geographical position information corresponding to the target object; if the target object is located in the marked area of a plurality of points, in some embodiments, the target point is used to indicate one point of the marked area where the target object is located, and based on the geographic location information of each point in the marked area where the target object is located, we can determine the geographic coordinate set of the target object, and thus determine the geographic location information of the target object. In other embodiments, the target point is used to indicate one or more key points (e.g., corner coordinates or circle centers, etc.) in the marked area where the target object is located, based on the geographic location information of the one or more key points, we can determine the set of geographic coordinates of the target object, e.g., by calculating coordinate expressions of line segments corresponding to each edge based on the spatial coordinates of a plurality of corner points, thereby determining the set of coordinates corresponding to each edge, and summing the sets of coordinates of each edge can determine the geographic location information of the target object.
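For the key-point case just described, a minimal sketch of building the geographic coordinate set of a marked region from its corner coordinates, by sampling the line segment of every edge, could look like the following; the function name and sampling density are illustrative, and a real system might instead rasterize the region or work in a projected plane.

def edge_coordinate_set(corners, samples_per_edge=10):
    # corners: list of (lon, lat) tuples in drawing order; returns sample points on the boundary.
    boundary = []
    n = len(corners)
    for i in range(n):
        (lon0, lat0), (lon1, lat1) = corners[i], corners[(i + 1) % n]
        for k in range(samples_per_edge):
            t = k / samples_per_edge
            boundary.append((lon0 + t * (lon1 - lon0), lat0 + t * (lat1 - lat0)))
    return boundary

# Hypothetical rectangular marked region described by four corner coordinates (WGS84 lon/lat).
region = [(121.4800, 31.2200), (121.4810, 31.2200), (121.4810, 31.2207), (121.4800, 31.2207)]
print(len(edge_coordinate_set(region)))   # 40 boundary samples approximating the region outline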
The determination of the geographic position information may occur at the command device, or at the unmanned aerial vehicle device, the augmented reality device, or the network device. For example, and preferably, the command device calculates and determines the geographic position information of the target object according to the image position information determined by the user operation of the command user on the target object in the scene image and the camera pose information corresponding to the scene image. As another example, after the command device determines the corresponding image position information, it sends the image position information to the unmanned aerial vehicle device/augmented reality device/network device, and that device calculates and determines the geographic position information of the target object based on the corresponding scene image and the camera pose information corresponding to the scene image. For example, the cooperative task may include, in addition to the participation of the respective execution sides, a network device side for data transmission and data processing. In some cases, after determining the corresponding mark information based on the user operation of the command user, the command device sends the mark information to the corresponding network device; the network device receives the mark information, and calculates and determines the geographic position information of the target object based on the image position information in the mark information and the camera pose information, corresponding to the scene image, transmitted to the network device by the unmanned aerial vehicle device. On the one hand, the network device may return the geographic position information to the command device, so that the command device presents the mark content of the target object in a superimposed manner based on the geographic position information; for example, the mark content of the target object is tracked and superimposed in the real-time scene image captured by the unmanned aerial vehicle and acquired by the command device, or superimposed in the real-time real scene corresponding to the augmented reality device and acquired by the command device, or presented in an electronic map about the target object presented by the command device. On the other hand, the network device may further determine the overlay position information and return it to the command device, so that the command device presents the mark content of the target object in a superimposed manner based on the overlay position information. Here, a projection of the geographic coordinate system (such as the equirectangular projection, the Mercator projection, the Gauss-Krüger projection, the Lambert projection, and the like) is used as a 2D plane description to form the map. The electronic map follows a geographic coordinate system protocol and is a mapping of the geographic coordinate system, and the mapping relation is known; that is, given a point in the geographic coordinate system, its map position on the electronic map can be determined.
If map location information on the electronic map is known, the location in the geographic coordinate system can also be determined from the location information.
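As one concrete instance of this known mapping and its inverse, the following sketch uses the Web Mercator projection; the Gauss-Krüger, Lambert or equirectangular projections mentioned above would follow the same pattern with different formulas. Names and sample coordinates are illustrative.

import math

MERCATOR_RADIUS = 6378137.0   # sphere radius used by Web Mercator, metres

def wgs84_to_web_mercator(lon_deg, lat_deg):
    # Geographic coordinates to planar map coordinates (metres).
    x = MERCATOR_RADIUS * math.radians(lon_deg)
    y = MERCATOR_RADIUS * math.log(math.tan(math.pi / 4.0 + math.radians(lat_deg) / 2.0))
    return x, y

def web_mercator_to_wgs84(x, y):
    # Inverse mapping: map position back to geographic coordinates (e.g. for operation marks).
    lon = math.degrees(x / MERCATOR_RADIUS)
    lat = math.degrees(2.0 * math.atan(math.exp(y / MERCATOR_RADIUS)) - math.pi / 2.0)
    return lon, lat

x, y = wgs84_to_web_mercator(121.4800, 31.2200)       # hypothetical target position
print((x, y), web_mercator_to_wgs84(x, y))            # round-trips to the original lon/lat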
In some embodiments, after the geographic position information is determined, the geographic position information may be directly sent to an augmented reality device of the on-duty user by a corresponding determining device (such as a command device and an unmanned aerial vehicle device), or forwarded to the augmented reality device through a network device, and the augmented reality device locally calculates and determines superimposed position information of the geographic position information superimposed and displayed in a current real scene picture of the augmented reality device, for example, after the command device/unmanned aerial vehicle device/network device acquires the corresponding geographic position information, the geographic position information is sent to the augmented reality device, and the augmented reality device may determine screen position information and the like of a display screen superimposed with a marker content based on the received geographic position information, current on-duty camera position information and the like, where the on-duty camera position information includes camera position information, camera attitude information and the like of a camera device of the augmented reality device, and the camera position information is used for indicating the current geographic position information and the like of the on-duty user. If the calculation process of the geographic position information occurs at the augmented reality device end, the augmented reality device reserves the geographic position information and sends the geographic position information to other devices, or sends the geographic position information to network devices and sends the geographic position information to other devices through the network devices. In other embodiments, the determined geographic position information is not sent to the augmented reality device of the on-duty user, but the overlay position information, which is displayed in an overlay manner on the current real scene picture of the augmented reality device, of the geographic position information is directly sent to the augmented reality device. After any device in the cooperative task acquires the geographic position information, the superposition position information of the geographic position information superposed and displayed in the current real scene picture of the augmented reality device can be calculated and determined based on the geographic position information and the on-duty camera shooting pose information of the camera device of the augmented reality device, and the superposition position information is used for indicating the display position information of the mark content in the display screen of the augmented reality device, such as a screen/image/pixel coordinate point or set of a screen/image/pixel coordinate system corresponding to the display screen. 
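A minimal sketch of this overlay computation, assuming the target's geographic position and the on-duty camera pose have already been expressed in a shared local east-north-up frame and that the augmented reality display can be approximated by a pinhole model with intrinsic matrix K, might look like the following; real AR runtimes typically expose a view/projection matrix instead, so all names and values here are purely illustrative.

import numpy as np

def overlay_position(target_enu, cam_pos_enu, R_wc, K):
    # Returns (u, v) screen coordinates of the target, or None if it lies behind the camera.
    p_cam = R_wc.T @ (np.asarray(target_enu) - np.asarray(cam_pos_enu))   # world -> camera frame
    if p_cam[2] <= 0.0:
        return None                                   # behind the wearer, nothing to draw
    uvw = K @ p_cam
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

K = np.array([[800.0, 0.0, 640.0], [0.0, 800.0, 360.0], [0.0, 0.0, 1.0]])   # display intrinsics
cam_pos = np.array([0.0, 0.0, 1.7])                   # wearer's eye height in the local frame
R_wc = np.array([[1.0, 0.0, 0.0],                     # camera looks north: x=east, y=down, z=north
                 [0.0, 0.0, 1.0],
                 [0.0, -1.0, 0.0]])
target = np.array([2.0, 20.0, 1.0])                   # target roughly 20 m to the north
print(overlay_position(target, cam_pos, R_wc, K))     # screen position at which the mark is drawn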
Similarly, in some embodiments, after a certain device side (e.g., a network device/an augmented reality device/an unmanned aerial vehicle device/a command device) determines geographic position information of a target object, the geographic position information may be directly sent to other device sides, and the other device determines, at a local side, overlay position information of the geographic position information in a real-time scene of the augmented reality device/real-time scene image position information in a real-time scene image/map position information in an electronic map, so as to overlay and present the mark information in the electronic map that overlays and presents the mark information/displays the real-time scene image corresponding to the augmented reality device, the unmanned aerial vehicle device, and/or the command device; in other embodiments, a certain device side (e.g., a network device/an augmented reality device/a drone device/a director device) may further determine overlay position information corresponding to geographic position information in a real scene of the augmented reality device/real-time scene image position information in a real-time scene image/map position information in an electronic map, and send the overlay position information to another device side, so as to overlay and present the tag information in the electronic map overlaid and presenting the tag information in the real-time scene image overlaid and presented in the augmented reality device, the drone device, and/or the director device. In some embodiments, the geographic location information is further used to determine real-time image location information of the target object in a real-time scene image captured by the drone device, and to superimpose and present the markup content in a real-time scene image presented by the augmented reality device and/or drone device. For example, the marker information about the target object may be calculated based on the image position information in the marker information to obtain corresponding geographic position information, and then stored in a storage database (for example, a commanding device/augmented reality device/unmanned aerial vehicle device performs local storage or a network device end establishes a corresponding network storage database, etc.), so that when the marker information is called, the geographic position information corresponding to the marker information is called at the same time, and calculation conversion and the like are performed based on the geographic position information to perform other position information (for example, real-time image position information in a real-time scene image of the unmanned aerial vehicle device or real-time superposition position information in a real-time acquired real scene of the augmented reality device, etc.). For example, the drone device side may send the corresponding real-time scene image to the command device/augmented reality device directly through a communication connection, or send the corresponding real-time scene image to the command device or the augmented reality device via the network device, and the corresponding augmented reality device may present the real-time scene image in the display screen, for example, present the real-time scene image in the display screen in a video perspective manner, or present the real-time scene image in a certain screen area in the display screen. 
In order to facilitate the tracking, overlaying and presenting of the marker information in the real-time scene image, the unmanned aerial vehicle device acquires real-time flight camera shooting pose information corresponding to the real-time scene image, in some embodiments, the corresponding augmented reality device/command device can directly acquire the real-time flight camera shooting pose information and the like of the real-time scene image through communication connection with the unmanned aerial vehicle device or through a forwarding mode of network equipment, and in combination with the calculated and determined geographical position information and the like, the overlaying position information and the like in the corresponding real-time scene image can be calculated at a local end, and the marker content and the like are tracked, overlaid and presented in the presented real-time scene image. For example, when the unmanned aerial vehicle is at a certain position (such as a takeoff position), the unmanned aerial vehicle is set as an origin of a three-dimensional rectangular coordinate system (such as a station center coordinate system and a navigation coordinate system); converting the geographical position information corresponding to the marking information into the three-dimensional rectangular coordinate system; acquiring the real-time flying geographic position and attitude information of the unmanned aerial vehicle, converting the geographic position of the unmanned aerial vehicle into the three-dimensional rectangular coordinate system, and determining a rotation matrix from the three-dimensional rectangular coordinate system to an unmanned aerial vehicle camera coordinate system based on the attitude information of the unmanned aerial vehicle; and determining and displaying real-time image position information of the marking information in a real-time scene image acquired by the unmanned aerial vehicle based on the three-dimensional rectangular coordinate of the marking information, the three-dimensional rectangular coordinate corresponding to the position of the unmanned aerial vehicle, the rotation matrix and camera internal parameters of the unmanned aerial vehicle. In other embodiments, a certain device side (e.g., a command device/augmented reality device/unmanned aerial vehicle device/network device side) acquires real-time flight camera pose information and the like of a real-time scene image, and in combination with geographical location information and the like which is calculated and determined by the marker information, superposition location information and the like in the real-time scene image corresponding to the marker information can be calculated, and then the superposition location information is sent to other device sides, so that the marker content and the like are presented in the real-time scene image presented by the other device sides in a tracking and superposing manner. In some embodiments, the method further includes step S103 (not shown), and in step S103, geographic position information of the target object is determined based on the image position information and the camera pose information of the scene image. 
For example, after the command device determines the corresponding mark information based on the user operation of the command user, it calculates and determines the geographic position information of the target object based on the image position information in the mark information and the camera pose information, corresponding to the scene image, transmitted by the unmanned aerial vehicle device, and then sends the geographic position information directly to the other execution devices of the cooperative task, such as the augmented reality device and the unmanned aerial vehicle device, or sends it to the network device, which forwards it to the other execution devices of the cooperative task. On the one hand, the command device may send the geographic position information to the other execution devices of the cooperative task, so that these devices further determine the overlay position information based on the geographic position information and present the mark content of the target object in a superimposed manner; for example, the mark content of the target object is tracked and superimposed in the real-time scene image captured by the unmanned aerial vehicle and acquired by the augmented reality device, or superimposed in the real-time real scene corresponding to the augmented reality device, or presented in an electronic map related to the target object presented by the augmented reality device. On the other hand, the command device may further determine the overlay position information and return it to the other execution devices, so that they present the mark content of the target object in a superimposed manner based on the overlay position information.
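The real-time tracking flow enumerated earlier (taking the takeoff point as the origin of a local east-north-up frame, converting the stored geographic position of the mark and the drone's real-time position into that frame, building a world-to-camera rotation from the reported attitude, and projecting through the camera intrinsics) might be sketched as follows. A local flat-earth approximation and a nadir-looking camera are assumed purely to keep the example short; all names and values are illustrative.

import math
import numpy as np

EARTH_RADIUS = 6378137.0

def geo_to_local_enu(lon_deg, lat_deg, alt_m, origin):
    # Small-area approximation of geodetic -> ENU relative to the takeoff origin.
    lon0, lat0, alt0 = origin
    east = math.radians(lon_deg - lon0) * EARTH_RADIUS * math.cos(math.radians(lat0))
    north = math.radians(lat_deg - lat0) * EARTH_RADIUS
    return np.array([east, north, alt_m - alt0])

def project_to_image(point_enu, drone_enu, R_wc, K):
    # Projects an ENU point into the drone image; returns pixel (u, v) or None if behind the camera.
    p_cam = R_wc.T @ (point_enu - drone_enu)          # R_wc: camera-to-world rotation from telemetry
    if p_cam[2] <= 0.0:
        return None
    uvw = K @ p_cam
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

origin = (121.4800, 31.2200, 15.0)                    # hypothetical takeoff lon/lat/alt
mark_enu = geo_to_local_enu(121.4808, 31.2204, 12.0, origin)     # stored mark position
drone_enu = geo_to_local_enu(121.4801, 31.2201, 115.0, origin)   # real-time drone position
R_wc = np.diag([1.0, -1.0, -1.0])                     # nadir-looking camera, as a simple example
K = np.array([[1200.0, 0.0, 960.0], [0.0, 1200.0, 540.0], [0.0, 0.0, 1.0]])
print(project_to_image(mark_enu, drone_enu, R_wc, K)) # real-time image position of the mark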
In some embodiments, the method further includes a step S104 (not shown). In step S104, an electronic map of the scene in which the target object is located is presented; map position information of the target object in the electronic map is determined according to the geographic position information of the target object, and the mark content is presented in the electronic map based on the map position information. For example, the command device may invoke an electronic map of the scene in which the target object is located; for example, the command device determines, according to the geographic position information of the target object, an electronic map near that geographic position from a local database or the network device side, and presents the electronic map. The command device may also obtain the map position information of the target object in the electronic map, for example, by performing projection conversion at the local end based on the geographic position information to determine the map position information in the corresponding electronic map, or by receiving map position information returned by other device ends (such as the network device, the unmanned aerial vehicle device, or the augmented reality device). The command device can present the electronic map through the corresponding display device, and present the mark content of the mark information in the area corresponding to the map position information in the electronic map, thereby presenting the mark information of the target object superimposed in the electronic map.
In some embodiments, the geographic location information is further used to present the tagged content superimposed in an electronic map presented by the augmented reality device and/or drone device regarding the scene in which the target object is located. For example, the geographic location information may be determined by calculation at a command device end, an augmented reality device end, or an unmanned aerial vehicle device end, or may be determined by calculation at a network device end. The corresponding command equipment, the unmanned aerial vehicle equipment or the augmented reality equipment can present the electronic map of the scene where the target object is located through the respective display devices, and acquire the map position information of the target object based on the geographical position information, so that the mark content is superposed and presented in the respective presented electronic maps, and the mark information added to the target object in the scene image shot by the unmanned aerial vehicle equipment is synchronously presented at the corresponding position of the target object in the electronic map. The map location information may be obtained by performing projection conversion determination on the local end of each device based on the geographic location information corresponding to the mark information, or may be returned to each device after being calculated by the network device, or may be sent to other device ends after being calculated by a certain device end, or the like.
In some embodiments, the method further includes step S105 (not shown), in step S105, acquiring and presenting an electronic map, and determining operation marker information of an operation object in the electronic map based on a user operation of the command user on the operation object, where the operation marker information includes corresponding operation marker content and operation map location information of the operation marker content in the electronic map, where the operation map location information is used to determine operation geographic location information of the operation object and to superimpose and present the operation marker content in a real scene of the augmented reality device and/or a scene image captured by the drone device. For example, the user operation of the command user may be a gesture action or a voice instruction of the command user, and the operation mark information is generated by recognizing the gesture action or the voice instruction; for another example, the user operation of the command user may be a direct operation on the electronic map by using a device such as a keyboard, a mouse, a touch screen, or a touch pad, for example, the command user performs a frame selection, a doodle, or other editing information (e.g., editing text, adding 2D or 3D model information, etc.) on the presented electronic map through the mouse, and the like. In some embodiments, for example, the command device can call an electronic map of a target object on a local side or a network device side, the command device can present an operation interface related to the electronic map while presenting the electronic map, the command device can mark the electronic map through the operation interface, such as framing a part of an area in the electronic map or selecting one or more position identifiers, and the like, the command device can determine a corresponding area or position identifier as an operation object based on a user operation of the command device, and generate operation mark information corresponding to the operation object, the operation mark information includes corresponding operation mark content and operation map position information of the operation mark content in the electronic map, and the operation mark content is determined by information added by the user operation on the electronic map, including but not limited to a square, a circle, a line, a point, an arrow, a picture/video, an animation, a three-dimensional model, and the like, and preferably, the operation mark content further includes parameter information, such as color, thickness, and the like. The operation map position information is not related to the map position information of the target object, and may be the same position or different positions. 
Based on the operation map position information, the operation geographic position information corresponding to the operation object can be obtained through calculation; for example, the corresponding operation geographic position information is calculated locally at the command device end, or the operation map position information is sent to the network device, the augmented reality device or the unmanned aerial vehicle device, which calculates and determines the corresponding operation geographic position information. Preferably, the command device calculates and determines the operation geographic position information corresponding to the operation object according to the operation map position information determined by the user operation of the command user on the operation object in the electronic map. As another example, after the command device determines the corresponding operation map position information, it sends the operation map position information to the unmanned aerial vehicle device/augmented reality device/network device, and that device calculates and determines the operation geographic position information of the operation object based on the corresponding operation map position information. For example, the network device receives the operation map position information sent by the command device, and determines the operation geographic position information corresponding to the operation map position information based on the inverse of the projection transformation. On the one hand, the network device may return the operation geographic position information to the command device, so that the command device presents the mark content of the operation object in a superimposed manner based on the operation geographic position information; for example, the mark content of the operation object is presented in a real-time electronic map about the operation object presented at the command device side, or superimposed in the real-time real scene corresponding to the augmented reality device and acquired by the command device, or superimposed in the real-time scene image captured by the unmanned aerial vehicle and acquired by the command device. On the other hand, the network device may further determine the overlay position information and return it to the command device, so that the command device presents the mark content of the operation object in a superimposed manner based on the overlay position information.
In order to implement global command and scheduling by the command device, the augmented reality device and/or the drone device side may also superimpose and present the operation mark information in images acquired in real time, based on the corresponding operation geographic position information. In some embodiments, for example, the augmented reality device calculates and determines the operation geographic position information locally, or receives it from another device (network device/command device/drone device), and then, based on the current on-duty camera pose information, calculates and determines the corresponding superimposition position information of that geographic position in the real scene of the augmented reality device, so as to superimpose and present the operation mark information in the real scene of the augmented reality device; for another example, the augmented reality device displays a real-time scene image shot by the drone device, calculates the operation geographic position information locally or receives it from another device (network device/command device/drone device), and, based on the real-time flight camera pose information corresponding to the real-time scene image shot by the drone device, calculates and determines the real-time scene image position information of that geographic position in the real-time scene image, so as to superimpose and present the operation mark information in the real-time scene image displayed by the augmented reality device; for another example, the augmented reality device calculates the operation geographic position information locally or receives it from another device (network device/command device/drone device), and calculates and determines the map position information of that geographic position in the electronic map presented by the augmented reality device, so as to superimpose and present the operation mark information in the electronic map of the augmented reality device; similarly, for example, the drone device calculates the operation geographic position information locally or receives it from another device (network device/command device/augmented reality device), and, based on the real-time flight camera pose information corresponding to the real-time scene image it shoots, calculates and determines the real-time scene image position information of that geographic position in the corresponding real-time scene image, so as to superimpose and present the operation mark information in the real-time scene image of the drone device.
As another example, the drone device calculates and determines the operation geographic position information locally or receives it from another device (network device/command device/augmented reality device), and calculates and determines the map position information of that geographic position in the electronic map presented by the drone device, so as to superimpose and present the operation mark information in the electronic map of the drone device. For another example, the drone device calculates the operation geographic position information locally or receives it from another device (network device/command device/augmented reality device), and, based on the current on-duty camera pose information, calculates and determines the corresponding superimposition position information of that geographic position in the real scene of the augmented reality device as acquired by the drone device, so as to superimpose and present the operation mark information in that real scene acquired by the drone device. In other embodiments, a certain device side (e.g., the network device/augmented reality device/drone device/command device) may further determine the superimposition position information of the operation geographic position information in the real scene of the augmented reality device, the real-time scene image position information in the real-time scene image, and/or the map position information in the electronic map, and send that position information to another device side, so as to superimpose and present the operation mark information in the real scene, the real-time scene image, and/or the electronic map corresponding to the augmented reality device, the drone device, and/or the command device. Through the above, the operation mark information added to the operation object in the electronic map is synchronously presented at the corresponding position of the operation object in the real scene of the augmented reality device and/or in the scene image shot by the drone device.
As in some embodiments, the method further includes a step S106 (not shown); in step S106, the operation geographic position information of the operation object is determined based on the operation map position information. For example, the command device determines, according to the operation map position information and based on the inverse of the projection transformation, the operation geographic position information corresponding to that map position. In some embodiments, the command device sends the operation geographic position information directly to the other execution devices of the cooperative task, such as the augmented reality device and the drone device, or sends it to the network device, which forwards it to the other execution devices. For example, another execution device calculates and determines, based on the current on-duty camera pose information, the superimposition position information of the operation geographic position information in the real scene of the augmented reality device, so that the operation mark information is superimposed and presented in the real scene corresponding to that execution device, for example, the mark content of the operation object is superimposed and presented in the real scene corresponding to the augmented reality device, or in the real scene of the augmented reality device as acquired by the command device or the drone device; for another example, another execution device calculates and determines, based on the real-time flight camera pose information corresponding to the real-time scene image shot by the drone device, the real-time scene image position information of the operation geographic position information in that real-time scene image, so that the operation mark information is superimposed and presented in the real-time scene image displayed by that execution device; for another example, another execution device calculates and determines the map position information of the operation geographic position information in the electronic map, so that the operation mark information is superimposed and presented in the electronic map displayed by that execution device. In other embodiments, the command device side may further determine the superimposition position information of the operation geographic position information in the real scene of the augmented reality device, the real-time scene image position information in the real-time scene image, and/or the map position information in the electronic map, and send it to the other execution devices (such as the augmented reality device and the drone device), so as to superimpose and present the operation mark information in the real scene, real-time drone picture, and/or electronic map corresponding to those execution devices.
In some embodiments, the operation geographic position information is further used for superimposing and presenting the operation mark content in an electronic map, presented by the augmented reality device and/or the drone device, of the scene in which the operation object is located. For example, the drone device or the augmented reality device may further call up an electronic map of the scene where the operation object is located. In some embodiments, the drone device or the augmented reality device acquires the corresponding operation geographic position information from another device side (such as the command device or the network device) and superimposes and presents the operation mark content in the corresponding electronic map based on that information; for example, the ground control center corresponding to the drone device presents the corresponding electronic map and determines the corresponding operation map position information through projection conversion based on the acquired operation geographic position information, so that the operation mark content of the operation object is superimposed and presented at that operation map position in the electronic map; for example, the augmented reality device presents the corresponding electronic map through its display screen and determines the corresponding operation map position information through projection conversion based on the acquired operation geographic position information, likewise superimposing and presenting the operation mark content of the operation object at that operation map position. In some embodiments, the drone device or the augmented reality device may instead obtain, from another device side (such as the command device or the network device), the operation map position information corresponding to the operation geographic position information, so as to superimpose and present the operation mark content of the operation object at that position in the presented electronic map.
In some embodiments, the method further includes step S107 (not shown); in step S107, first map position information of the augmented reality device and/or second map position information of the drone device is obtained, and the augmented reality device is identified in the electronic map based on the first map position information and/or the drone device is identified based on the second map position information. For example, the augmented reality device includes a corresponding position sensing apparatus (e.g., a position sensor) and can acquire first geographic position information of the augmented reality device; similarly, the drone device includes a corresponding position sensing apparatus and can acquire second geographic position information of the drone device. The command device may acquire the first geographic position information and/or the second geographic position information and determine, based on the projection transformation, the corresponding first map position information and/or second map position information. Alternatively, the network device may receive the first geographic position information and/or the second geographic position information uploaded by the augmented reality device and/or the drone device, determine the corresponding first map position information and/or second map position information based on the projection transformation, and send them to the command device, so that the command device performs position identification in the electronic map. After the command device acquires the first map position information of the augmented reality device and/or the second map position information of the drone device, it can identify the augmented reality device in the electronic map based on the first map position information and/or identify the drone device based on the second map position information, for example, by displaying the avatar or serial number of the on-duty user corresponding to the augmented reality device at the first map position, and/or by displaying the device avatar or serial number of the drone device, or the avatar or serial number of its operator, at the second map position.
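A minimal sketch of how the device identification described above might be computed, reusing a forward projection from geographic coordinates to map coordinates; the class, field, and function names are assumptions, not part of the disclosure.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class DeviceMarker:
    device_id: str
    label: str      # e.g. the on-duty user's avatar/serial number, or the drone operator's name
    lon: float
    lat: float

def device_map_positions(
    markers: List[DeviceMarker],
    geographic_to_map: Callable[[float, float], Tuple[float, float]],
) -> Dict[str, Tuple[float, float, str]]:
    """Project each device's geographic position into the electronic map's coordinate
    system and keep its label, so the UI layer can draw the identification at that spot."""
    return {
        m.device_id: (*geographic_to_map(m.lon, m.lat), m.label)
        for m in markers
    }

# Usage with the forward projection sketched earlier (any map projection would do):
# positions = device_map_positions(
#     [DeviceMarker("ar-21", "on-duty user A", 121.47, 31.23)],
#     geographic_to_map=geographic_to_mercator,
# )
```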
In some embodiments, the method further includes step S108 (not shown); in step S108, an electronic map of the scene in which the target object is located is acquired and presented, where the electronic map includes device identification information of a plurality of candidate drone devices; in step S101, based on a calling operation of the command user on one of the pieces of device identification information of the plurality of candidate drone devices, a scene image shot by the drone device corresponding to that piece of device identification information is obtained, and that drone device is in the cooperative execution state of the cooperative task. For example, the command user may call up an electronic map of the scene in which the target object is located, e.g., an electronic map of the task area from the local side or the network device side. If there are multiple candidate drone devices in the current area of the electronic map, the command device may identify them in the electronic map; for example, the command device obtains the map position information of each drone device based on its geographic position information and presents the device identification information of each drone device at the corresponding map position in the electronic map, where the device identification information includes but is not limited to the device serial number, avatar, or number of the drone device, or the avatar, name, or number of the drone operator. The command user can click the device identification information corresponding to a candidate drone device on the operation interface of the electronic map, whereby that candidate drone device is determined as a participating device for executing the cooperative task and the image it acquires is called up as the corresponding scene image. Alternatively, all the candidate drone devices are already participating devices of the cooperative task, and the command device, based on the command user's operation, calls up a suitable drone device from the candidates at any time to acquire a scene image, so as to realize multi-angle observation, tracking, and the like corresponding to the map.
In some embodiments, the method further includes step S109 (not shown); in step S109, a task creation operation of the command user is obtained, where the task creation operation includes a selection operation on device identification information of the drone device and/or device identification information of the augmented reality device, and the task creation operation is used to establish a collaborative task involving the command device and the drone device and/or the augmented reality device. For example, the collaborative task is created by the command user of the command device: the command user may obtain the device identification information of one or more drone devices and of one or more augmented reality devices in the current area and add, in the corresponding task creation interface, the device identification information of the devices expected to execute the collaborative task; alternatively, the command user may input corresponding constraint conditions in the creation interface, and one or more suitable pieces of device identification information are determined from the one or more drone devices and/or augmented reality devices based on those constraints. The corresponding task creation process is then carried out based on the selected device identification information, for example by establishing communication connections with the drone devices and/or augmented reality devices corresponding to that identification information, so that those devices execute the corresponding collaborative task together with the command device; or a corresponding task creation request is generated based on the one or more pieces of device identification information and sent to the network device, which forwards the task creation request to the drone devices and/or augmented reality devices corresponding to that identification information, and if a confirmation operation regarding the task creation request is obtained, the collaborative task involving the command device and those drone devices and/or augmented reality devices is created.
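Purely as an illustration of what such a task creation request might carry, the sketch below models it as a small serializable structure; the field names, the confirmation flow, and the use of JSON are assumptions rather than details taken from the disclosure.

```python
import json
from dataclasses import dataclass, field, asdict
from typing import List

@dataclass
class TaskCreationRequest:
    task_id: str
    commander_device_id: str
    drone_device_ids: List[str] = field(default_factory=list)
    ar_device_ids: List[str] = field(default_factory=list)
    constraints: dict = field(default_factory=dict)  # e.g. {"max_distance_km": 5}

    def to_message(self) -> str:
        """Serialize for sending to the network device, which forwards the request to each
        selected drone/augmented reality device for confirmation."""
        return json.dumps(asdict(self))

# Example: a request pairing one command device with one drone and two AR headsets
req = TaskCreationRequest(
    task_id="task-001",
    commander_device_id="cmd-01",
    drone_device_ids=["uav-07"],
    ar_device_ids=["ar-21", "ar-22"],
)
payload = req.to_message()
```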
In some embodiments, the collaborative task includes a plurality of subtasks, the augmented reality device belongs to the execution devices of a target subtask, and the target subtask is one of the plurality of subtasks. For example, the corresponding collaborative task includes a plurality of subtasks that divide the work among the devices; for instance, a capture collaborative task for a target person may be planned as a capture subtask, an interception subtask, and a monitoring subtask. The command device may issue different task instructions for different subtasks, for example a task instruction about the capture route for the capture subtask and a task instruction about the interception route for the interception subtask. By selecting a subtask, the command device may issue the corresponding execution instruction to all execution devices of that subtask, for example establishing communication connections with all execution devices of the same subtask at the same time and performing voice instruction scheduling. In some embodiments, the method further includes step S110 (not shown); in step S110, the subtask execution instruction related to the target subtask is sent to all execution devices of the target subtask, so that the execution instruction is presented by all execution devices of the target subtask. For example, the command user may send instructions to all on-duty devices of a certain subtask of the collaborative task, providing a single-task, multi-person collaboration mode. The multiple execution devices of a subtask can be organized into a small group by forming a team, for example, the command user determines the corresponding team members/team devices based on his or her selection, or the execution devices participating in the current collaborative task determine the corresponding members/team devices in advance. The command device can select the group corresponding to a specific subtask by obtaining a click operation of the command user, or by a voice instruction, thereby changing the scheduling mode of the current collaborative task from overall command to directional command; at this time, the mark information determined by the command user's operation on the target object in the scene image and/or the operation mark information determined by the command user's operation on the operation object in the electronic map is issued to all execution devices in the selected group, so that all execution devices of the target subtask present the execution instruction, while the execution devices of non-selected groups cannot obtain the mark information or the operation mark information. Of course, in some cases, the command device may also schedule a single device based on a touch operation of the command user on that device's identification information, for example by issuing a corresponding scheduling instruction to that single device.
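A minimal sketch of the directional dispatch described above: mark information is delivered only to the execution devices of the selected subtask group, so devices outside the group never receive it. The group roster, message format, and send() callback are illustrative assumptions.

```python
from typing import Callable, Dict, List

# Assumed roster: subtask name -> device ids of its execution devices
SUBTASK_GROUPS: Dict[str, List[str]] = {
    "capture":   ["ar-21", "ar-22", "uav-07"],
    "intercept": ["ar-23"],
    "monitor":   ["uav-08"],
}

def dispatch_mark_info(target_subtask: str,
                       mark_info: dict,
                       send: Callable[[str, dict], None]) -> None:
    """Send the mark/operation-mark information only to devices of the target subtask."""
    for device_id in SUBTASK_GROUPS.get(target_subtask, []):
        send(device_id, mark_info)  # devices of non-selected groups are never contacted

# Usage: dispatch_mark_info("capture", {"content": "arrow", "geo": (121.47, 31.23)}, sender)
```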
Fig. 2 shows a method for presenting mark information of a target object according to an aspect of the present application, applied to an augmented reality device. The method includes step S201; in step S201, first pose information of the augmented reality device being used by the on-duty user is obtained, where the first pose information includes first position information and first attitude information of the augmented reality device, and the first pose information is used to determine, in combination with the geographic position information of the corresponding target object, the superimposition position information of the target object in the real scene of the augmented reality device, and to superimpose and present the mark content about the target object in the real scene based on that superimposition position information.
For example, the on-duty user indicates the wearing user of an augmented reality device that is executing the same collaborative task as the corresponding command device and/or drone device. The geographic position information of the target object can be determined by combining the image position information, determined by the command device based on the command user's operation in the scene image, with the flight camera pose information of the scene image, or by subjecting the map position information, determined by the command user's operation on the target object in the electronic map, to projection transformation. The geographic position information may be obtained by local calculation at the corresponding device (such as the augmented reality device, the drone device, or the command device), or may be calculated by the network device. In some embodiments, the augmented reality device side obtains the geographic position information of the target object and then determines, based on its real-time pose information, the superimposition position information of the target object in the real scene of the augmented reality device; for example, the local side of the augmented reality device calculates and determines the geographic position information of the target object, or the command device/drone device sends the calculated geographic position information directly to the augmented reality device, or the network device calculates it and sends it to the augmented reality device. In other embodiments, another device side (e.g., the network device, the drone device, or the command device) determines the superimposition position information of the target object in the real scene of the augmented reality device based on the geographic position information and the real-time pose information of the augmented reality device, and sends that superimposition position information to the augmented reality device.
The augmented reality device can acquire in real time the first pose information corresponding to its camera apparatus, where the first pose information consists of the first position information and the first attitude information. From the first pose information and the geographic position information, the superimposition position information of the target object in the real scene picture of the augmented reality device can be calculated and determined, and the corresponding mark content is superimposed and presented in the display screen of the augmented reality device based on that superimposition position information. Specifically, a three-dimensional rectangular coordinate system (such as a station-center (local east-north-up) coordinate system or a navigation coordinate system) is established with a certain position of the augmented reality device (such as the on-duty user's starting position) as the origin; the geographic position information of the mark information is converted into this three-dimensional rectangular coordinate system; the real-time geographic position and attitude information of the augmented reality device is acquired, the geographic position of the augmented reality device is converted into the same three-dimensional rectangular coordinate system, and the rotation matrix from the three-dimensional rectangular coordinate system to the camera coordinate system of the augmented reality device is determined based on the attitude information of the augmented reality device; finally, the superimposition position information of the mark information on the screen of the augmented reality device is determined based on the three-dimensional rectangular coordinates of the mark information, the three-dimensional rectangular coordinates corresponding to the position of the augmented reality device, the rotation matrix, and the camera parameters of the augmented reality device. The calculation of the superimposition position information may take place locally in the augmented reality device, or may be completed by another device (such as the network device, the command device, or the drone device) based on the geographic position information and the first pose information and then returned to the augmented reality device.
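The following is a hedged numerical sketch of the screen-projection procedure just described (geodetic position → local east-north-up frame → camera frame → pixel), using a simple pinhole camera model; the flat-earth ENU approximation and all parameter names are assumptions made for illustration, not the disclosed implementation.

```python
import numpy as np

EARTH_RADIUS_M = 6378137.0

def geodetic_to_enu(lon, lat, alt, lon0, lat0, alt0):
    """Approximate conversion of a geodetic point to a local east-north-up frame centered
    at (lon0, lat0, alt0); adequate over the short ranges of an on-duty scene."""
    d_east  = np.radians(lon - lon0) * EARTH_RADIUS_M * np.cos(np.radians(lat0))
    d_north = np.radians(lat - lat0) * EARTH_RADIUS_M
    d_up    = alt - alt0
    return np.array([d_east, d_north, d_up])

def project_to_screen(mark_geo, device_geo, R_enu_to_cam, fx, fy, cx, cy, origin_geo):
    """Return (u, v) pixel coordinates of the mark, or None if it is behind the camera.

    mark_geo, device_geo, origin_geo: (lon, lat, alt) tuples
    R_enu_to_cam: 3x3 rotation matrix derived from the device's attitude information
    fx, fy, cx, cy: pinhole camera intrinsics of the augmented reality device
    """
    p_mark   = geodetic_to_enu(*mark_geo,   *origin_geo)
    p_device = geodetic_to_enu(*device_geo, *origin_geo)
    p_cam = R_enu_to_cam @ (p_mark - p_device)   # point expressed in the camera frame
    if p_cam[2] <= 0:                            # behind the image plane, nothing to draw
        return None
    u = fx * p_cam[0] / p_cam[2] + cx
    v = fy * p_cam[1] / p_cam[2] + cy
    return u, v
```

The same projection applies when the flight camera pose of the drone device is substituted for the augmented reality device's pose, which is how the later image-overlay steps can reuse one routine.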
In some cases, the live-action picture of the augmented reality device may be transmitted to the corresponding command device and/or drone device and presented in the display apparatus of the command device and/or the control apparatus of the drone device. Similarly, the mark content of the corresponding target object is displayed in the display apparatus of the command device/the drone control apparatus based on the corresponding superimposition position information, where that superimposition position information may be determined by the network device/the augmented reality device and sent to the command device and/or the drone control apparatus, or may be calculated and determined locally by the command device/the drone control apparatus based on the first pose information (e.g., sent directly by the augmented reality device or forwarded by the network device) and the geographic position information of the target object. As in some embodiments, the method further includes step S202 (not shown); in step S202, the first pose information is sent to the corresponding network device, where the augmented reality device and the command device are in the cooperative execution state of the same collaborative task; and the mark content to be superimposed for the target object in the real scene of the augmented reality device, together with the superimposition position information of that mark content, as returned by the network device, is received, where the superimposition position information is determined by the first pose information and the geographic position information of the target object, the geographic position information is determined by the image position information of the target object related to the mark content in the scene image shot by the corresponding drone device, the mark content and the image position information are determined by the user operation of the corresponding command device on the scene image, and the command device, the drone device, and the augmented reality device are in the cooperative execution state of the same collaborative task. For example, the calculation of the corresponding superimposition position information may be performed at the network device: the augmented reality device uploads its first pose information to the network device, and the network device calculates and determines the corresponding superimposition position information based on the first pose information and the geographic position information of the target object, where the geographic position information may be calculated by the network device based on the flight camera pose information of the drone device and the image position information of the target object, or may be calculated by the command device/the drone device/the augmented reality device based on that same information and then received by the network device.
In some embodiments, the collaborative task includes a plurality of subtasks, the augmented reality device belongs to the execution devices of a target subtask, and the target subtask is one of the plurality of subtasks; the method further includes step S203 (not shown); in step S203, the subtask execution instruction about the target subtask, sent by the command device to the augmented reality device, is received and presented, where the subtask execution instruction is presented to all execution devices of the target subtask. For example, the corresponding collaborative task includes a plurality of subtasks that divide the work among the devices; for instance, a capture collaborative task for a target person may be planned as a capture subtask, an interception subtask, a monitoring subtask, and the like. The command device may issue different task instructions for different subtasks, for example a task instruction about the capture route for the capture subtask and a task instruction about the interception route for the interception subtask. By selecting a subtask, the command device may issue the corresponding execution instruction to all execution devices of that subtask, for example establishing communication connections with all execution devices of the same subtask at the same time and performing voice instruction scheduling. The command device can send instructions to all on-duty devices of a certain subtask of the collaborative task, providing a single-task, multi-person collaboration mode. The multiple execution devices of a subtask can be organized into a group by forming a team, for example, the command user determines the corresponding team members/team devices based on his or her selection, or the execution devices participating in the current collaborative task determine the corresponding members/team devices in advance. The command device can select the group corresponding to a specific subtask by obtaining a click operation of the command user, or by a voice instruction, thereby changing the scheduling mode of the current collaborative task from overall command to directional command; at this time, the command user issues the mark information determined by the user operation on the target object in the scene image and/or the operation mark information determined by the user operation on the operation object in the electronic map to all execution devices in the selected group, so that all execution devices of the target subtask present the execution instruction, while the execution devices of non-selected groups cannot obtain the mark information or the operation mark information. Of course, in some cases, the command device may also schedule a single device based on a touch operation of the command user on that device's identification information, for example by issuing a corresponding scheduling instruction to that single device.
In some embodiments, the method further includes step S204 (not shown); in step S204, the scene image of the target object shot by the corresponding drone device and the image position information of the corresponding mark content in that scene image are acquired; the scene image is presented and the mark content is superimposed and displayed on the scene image according to the image position information. For example, the augmented reality device may further obtain the scene image of the target object captured by the drone device, where the scene image may be obtained directly from the drone device side or called up from the network device based on the device identification information of the drone device. The corresponding scene image can be sent directly by the drone device side to the command device/augmented reality device through a communication connection, or sent to the command device or the augmented reality device through the network device, and the augmented reality device can present the scene image in its display screen, for example in a video see-through manner or within a certain screen area of the display screen. In some embodiments, the augmented reality device directly acquires the flight camera pose information of the scene image, through a communication connection with the drone device or via forwarding by the network device, and, combined with the calculated geographic position information of the target object, calculates the image position information in the corresponding scene image and superimposes and presents the mark content in the presented scene image. In other embodiments, another device (such as the network device, the command device, or the drone device) acquires the flight camera pose information of the scene image and, combined with the calculated geographic position information of the target object, calculates the superimposition position information in the corresponding scene image and sends it to the augmented reality device, so that the mark content is superimposed and presented in the scene image presented by the augmented reality device.
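As a small illustration of superimposing mark content on a presented scene image once the image position information is known, the sketch below draws a simple marker with OpenCV; the marker style, colors, and function name are assumptions introduced for this example.

```python
import cv2
import numpy as np

def overlay_mark(scene_image: np.ndarray, image_pos: tuple, label: str) -> np.ndarray:
    """Draw a circular marker and a text label at the mark's image position (u, v)."""
    out = scene_image.copy()
    u, v = int(image_pos[0]), int(image_pos[1])
    cv2.circle(out, (u, v), 12, (0, 0, 255), 2)                  # red circle around the mark
    cv2.putText(out, label, (u + 16, v - 8),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)   # label next to it
    return out
```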
In some embodiments, the method further includes step S205 (not shown); in step S205, the electronic map of the scene where the target object is located and the map position information of the target object in the electronic map are acquired; the electronic map is presented and the mark content is superimposed and displayed in the electronic map based on the map position information. For example, the augmented reality device side may call up the electronic map of the scene in which the target object is located, such as by determining, according to the geographic position information of the target object, an electronic map near that position from a local database or the network device side, and presenting it. The augmented reality device may further obtain the map position information of the target object in the electronic map, for example the local side performs projection conversion based on the geographic position information to determine the map position information in the corresponding electronic map, or another device side (such as the network device, the command device, or the drone device) performs that projection conversion and sends the resulting map position information to the augmented reality device. The augmented reality device can present the electronic map through the corresponding display apparatus and present the mark content of the mark information in the area corresponding to the map position information in the electronic map, thereby superimposing and presenting the mark information of the target object in the electronic map.
The foregoing mainly describes embodiments of the method for presenting mark information of a target object according to the present application; the present application further provides specific devices capable of implementing the above embodiments, which are described below with reference to Fig. 3 and Fig. 4.
Fig. 3 illustrates a command device 100 for presenting mark information of a target object according to an aspect of the present application, where the device includes a one-one module 101 and a one-two module 102. The one-one module 101 is configured to acquire a scene image captured by a drone device; the one-two module 102 is configured to acquire a user operation of a command user of the command device on a target object in the scene image, and generate mark information on the target object based on the user operation, where the mark information includes corresponding mark content and image position information of the mark content in the scene image, the image position information is used to determine geographic position information of the target object and to superimpose and present the mark content in a real scene of an augmented reality device of an on-duty user, and the augmented reality device and the command device are in a cooperative execution state of the same cooperative task.
In some embodiments, the geographic position information is further used to determine real-time image position information of the target object in the real-time scene image captured by the drone device, and to superimpose and present the mark content in the real-time scene image presented by the augmented reality device and/or the drone device.
Here, the specific implementation of the one-one module 101 and the one-two module 102 shown in Fig. 3 is the same as or similar to the embodiments of step S101 and step S102 shown in Fig. 1, and is therefore not repeated here and is incorporated herein by reference.
In some embodiments, the device further comprises a one-three module (not shown) for determining the geographic position information of the target object based on the image position information and the camera pose information of the scene image.
In some embodiments, the device further comprises a one-four module (not shown) for presenting an electronic map of the scene in which the target object is located; and determining the map position information of the target object in the electronic map according to the geographic position information of the target object, and presenting the mark content in the electronic map based on the map position information.
In some embodiments, the geographic position information is further used to superimpose and present the mark content in an electronic map, presented by the augmented reality device and/or the drone device, of the scene in which the target object is located.
In some embodiments, the device further includes a one-five module (not shown) configured to acquire and present an electronic map, and determine operation mark information of an operation object in the electronic map based on a user operation of the command user on the operation object, where the operation mark information includes corresponding operation mark content and operation map position information of the operation mark content in the electronic map, and the operation map position information is used to determine operation geographic position information of the operation object and to superimpose and present the operation mark content in the real scene of the augmented reality device and/or in the scene image captured by the drone device. In some embodiments, the device further includes a one-six module (not shown) for determining the operation geographic position information of the operation object based on the operation map position information.
In some embodiments, the operation geographic position information is further used to superimpose and present the operation mark content in an electronic map, presented by the augmented reality device and/or the drone device, of the scene in which the operation object is located.
In some embodiments, the device further includes a one-seven module (not shown) configured to obtain first map position information of the augmented reality device and/or second map position information of the drone device, and identify the augmented reality device in the electronic map based on the first map position information and/or identify the drone device based on the second map position information.
In some embodiments, the device further comprises a one-eight module (not shown) for obtaining and presenting an electronic map of the scene in which the target object is located, where the electronic map includes device identification information of a plurality of candidate drone devices; the one-one module 101 is configured to obtain, based on a calling operation of the command user on one of the pieces of device identification information of the candidate drone devices, the scene image shot by the drone device corresponding to that piece of device identification information, where that drone device is in the cooperative execution state of the cooperative task.
In some embodiments, the device further includes a one-nine module (not shown) configured to obtain a task creation operation of the command user, where the task creation operation includes a selection operation on the device identification information of the drone device and/or the device identification information of the augmented reality device, and the task creation operation is used to establish a collaborative task involving the command device and the drone device and/or the augmented reality device.
In some embodiments, the collaborative task includes a plurality of subtasks, the augmented reality device belongs to the execution devices of a target subtask, and the target subtask is one of the plurality of subtasks. In some embodiments, the device further includes a one-ten module (not shown) for sending the subtask execution instruction related to the target subtask to all execution devices of the target subtask, so that the execution instruction is presented by all execution devices of the target subtask.
Here, the specific implementation of the one-three module to the one-ten module is the same as or similar to the embodiments of steps S103 to S110, and is therefore not repeated here and is incorporated herein by reference.
Fig. 4 shows an augmented reality device for presenting mark information of a target object according to an aspect of the present application. The device includes a two-one module 201 for acquiring first pose information of the augmented reality device being used by the on-duty user, where the first pose information includes first position information and first attitude information of the augmented reality device, and the first pose information is used to determine, in combination with the geographic position information of the corresponding target object, the superimposition position information of the target object in the real scene of the augmented reality device, and to superimpose and present the mark content about the target object in the real scene based on that superimposition position information.
Here, the specific implementation of the two-one module 201 is the same as or similar to the embodiment of step S201, and is therefore not repeated here and is incorporated herein by reference.
In some embodiments, the device further includes a two-two module (not shown) for sending the first pose information to the corresponding network device, where the augmented reality device and the command device are in the cooperative execution state of the same collaborative task; and for receiving the mark content to be superimposed for the target object in the real scene of the augmented reality device and the superimposition position information of that mark content, as returned by the network device, where the superimposition position information is determined by the first pose information and the geographic position information of the target object, the geographic position information is determined by the image position information of the target object related to the mark content in the scene image shot by the corresponding drone device, the mark content and the image position information are determined by the user operation of the corresponding command device on the scene image, and the command device, the drone device, and the augmented reality device are in the cooperative execution state of the same collaborative task.
In some embodiments, the collaborative task includes a plurality of subtasks, the augmented reality device belongs to the execution devices of a target subtask, and the target subtask is one of the plurality of subtasks; the device further includes a two-three module (not shown) configured to receive and present the subtask execution instruction about the target subtask sent by the command device to the augmented reality device, where the subtask execution instruction is presented to all execution devices of the target subtask.
In some embodiments, the device further includes a two-four module (not shown) for acquiring the scene image of the target object shot by the corresponding drone device and the image position information of the corresponding mark content in that scene image; and for presenting the scene image and superimposing and displaying the mark content on the scene image according to the image position information.
In some embodiments, the device further includes a two-five module (not shown) configured to obtain the electronic map of the scene in which the target object is located and the map position information of the target object in the electronic map; and to present the electronic map and superimpose and display the mark content in the electronic map based on the map position information.
Here, the specific implementation of the two-two module to the two-five module is the same as or similar to the embodiments of steps S202 to S205, and is therefore not repeated here and is incorporated herein by reference.
In addition to the methods and apparatus described in the embodiments above, the present application also provides a computer readable storage medium storing computer code that, when executed, performs the method as described in any of the preceding claims.
The present application also provides a computer program product, which when executed by a computer device, performs the method of any of the preceding claims.
The present application further provides a computer device, comprising:
one or more processors;
a memory for storing one or more computer programs;
the one or more computer programs, when executed by the one or more processors, cause the one or more processors to implement the method as recited in any preceding claim.
FIG. 5 illustrates an exemplary system that can be used to implement the various embodiments described herein.
In some embodiments, as shown in FIG. 5, the system 300 can be implemented as any of the devices in each of the described embodiments. In some embodiments, system 300 may include one or more computer-readable media (e.g., system memory or NVM/storage 320) having instructions and one or more processors (e.g., processor(s) 305) coupled with the one or more computer-readable media and configured to execute the instructions to implement modules to perform the actions described herein.
For one embodiment, system control module 310 may include any suitable interface controllers to provide any suitable interface to at least one of processor(s) 305 and/or any suitable device or component in communication with system control module 310.
The system control module 310 may include a memory controller module 330 to provide an interface to the system memory 315. Memory controller module 330 may be a hardware module, a software module, and/or a firmware module.
System memory 315 may be used, for example, to load and store data and/or instructions for system 300. For one embodiment, system memory 315 may include any suitable volatile memory, such as suitable DRAM. In some embodiments, the system memory 315 may comprise double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, system control module 310 may include one or more input/output (I/O) controllers to provide an interface to NVM/storage 320 and communication interface(s) 325.
For example, NVM/storage 320 may be used to store data and/or instructions. NVM/storage 320 may include any suitable non-volatile memory (e.g., flash memory) and/or may include any suitable non-volatile storage device(s) (e.g., one or more Hard Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives).
NVM/storage 320 may include storage resources that are physically part of the device on which system 300 is installed or may be accessed by the device and not necessarily part of the device. For example, NVM/storage 320 may be accessible over a network via communication interface(s) 325.
Communication interface(s) 325 may provide an interface for system 300 to communicate over one or more networks and/or with any other suitable device. System 300 may wirelessly communicate with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols.
For one embodiment, at least one of the processor(s) 305 may be packaged together with logic for one or more controller(s) of the system control module 310, such as memory controller module 330. For one embodiment, at least one of the processor(s) 305 may be packaged together with logic for one or more controllers of the system control module 310 to form a System In Package (SiP). For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with logic for one or more controller(s) of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with logic for one or more controller(s) of the system control module 310 to form a system on chip (SoC).
In various embodiments, system 300 may be, but is not limited to being: a server, a workstation, a desktop computing device, or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.). In various embodiments, system 300 may have more or fewer components and/or different architectures. For example, in some embodiments, system 300 includes one or more cameras, a keyboard, a Liquid Crystal Display (LCD) screen (including a touch screen display), a non-volatile memory port, multiple antennas, a graphics chip, an Application Specific Integrated Circuit (ASIC), and speakers.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, implemented using Application Specific Integrated Circuits (ASICs), general purpose computers or any other similar hardware devices. In one embodiment, the software programs of the present application may be executed by a processor to implement the steps or functions described above. As such, the software programs (including associated data structures) of the present application can be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Additionally, some of the steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
Additionally, some portions of the present application may be applied as a computer program product, such as computer program instructions, which, when executed by a computer, may invoke or provide the method and/or solution according to the present application through the operation of the computer. Those skilled in the art will appreciate that the form in which the computer program instructions reside on a computer-readable medium includes, but is not limited to, source files, executable files, installation package files, and the like, and that the manner in which the computer program instructions are executed by a computer includes, but is not limited to: the computer directly executes the instruction, or the computer compiles the instruction and then executes the corresponding compiled program, or the computer reads and executes the instruction, or the computer reads and installs the instruction and then executes the corresponding installed program. In this regard, computer readable media can be any available computer readable storage media or communication media that can be accessed by a computer.
Communication media includes media by which communication signals, including, for example, computer readable instructions, data structures, program modules, or other data, are transmitted from one system to another. Communication media may include conductive transmission media such as cables and wires (e.g., fiber optics, coaxial, etc.) and wireless (non-conductive transmission) media capable of propagating energy waves such as acoustic, electromagnetic, RF, microwave, and infrared. Computer readable instructions, data structures, program modules, or other data may be embodied in a modulated data signal, for example, in a wireless medium such as a carrier wave or similar mechanism such as is embodied as part of spread spectrum techniques. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The modulation may be analog, digital or hybrid modulation techniques.
By way of example, and not limitation, computer-readable storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. For example, computer-readable storage media include, but are not limited to, volatile memory such as random access memory (RAM, DRAM, SRAM); non-volatile memory such as flash memory, various read-only memories (ROM, PROM, EPROM, EEPROM), magnetic and ferromagnetic/ferroelectric memories (MRAM, FeRAM); magnetic and optical storage devices (hard disk, tape, CD, DVD); or other media now known or later developed that are capable of storing computer-readable information/data for use by a computer system.
An embodiment according to the present application comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform a method and/or a solution according to the aforementioned embodiments of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.

Claims (23)

1. A method for presenting mark information of a target object, applied to a command device, wherein the method comprises:
acquiring a scene image captured by an unmanned aerial vehicle device;
acquiring a user operation performed by a command user of the command device on a target object in the scene image, and generating mark information on the target object based on the user operation, wherein the mark information comprises corresponding mark content and image position information of the mark content in the scene image, the image position information is used for determining geographic position information of the target object and for displaying the mark content in an overlaid manner in a real scene of an augmented reality device of a duty user, and the augmented reality device and the command device are in a collaborative execution state of the same collaborative task.
2. The method of claim 1, wherein the geographic position information is further used for determining real-time image position information of the target object in real-time scene images captured by the unmanned aerial vehicle device, and for presenting the mark content in an overlaid manner in the real-time scene images presented by the augmented reality device and/or the unmanned aerial vehicle device.
3. The method of claim 1, wherein the method further comprises:
determining the geographic position information of the target object based on the image position information and camera pose information of the scene image.
4. The method of claim 1, wherein the method further comprises:
presenting an electronic map of a scene where the target object is located;
determining map position information of the target object in the electronic map according to the geographic position information of the target object, and presenting the mark content in the electronic map based on the map position information.
5. The method of claim 1, wherein the geographic position information is further used for presenting the mark content in an overlaid manner in an electronic map, presented by the augmented reality device and/or the unmanned aerial vehicle device, of the scene in which the target object is located.
6. The method of claim 1, wherein the method further comprises:
the method comprises the steps of obtaining and presenting an electronic map, and determining operation mark information of an operation object in the electronic map based on user operation of the operation object by a command user, wherein the operation mark information comprises corresponding operation mark content and operation map position information of the operation mark content in the electronic map, and the operation map position information is used for determining operation geographic position information of the operation object and displaying the operation mark content in a real scene of augmented reality equipment and/or a scene image shot by unmanned aerial vehicle equipment in an overlapping mode.
7. The method of claim 6, wherein the method further comprises:
determining the operation geographic position information of the operation object based on the operation map position information.
8. The method of claim 6, wherein the operation geographic position information is further used for presenting the operation mark content in an overlaid manner in an electronic map, presented by the augmented reality device and/or the unmanned aerial vehicle device, of the scene in which the operation object is located.
9. The method of any of claims 4 to 8, wherein the method further comprises:
acquiring first map position information of the augmented reality device and/or second map position information of the unmanned aerial vehicle device, and identifying, in the electronic map, the augmented reality device based on the first map position information and/or the unmanned aerial vehicle device based on the second map position information.
10. The method of claim 1, wherein the method further comprises:
acquiring and presenting an electronic map of the scene where the target object is located, wherein the electronic map comprises device identification information of a plurality of candidate unmanned aerial vehicle devices;
wherein the acquiring a scene image captured by an unmanned aerial vehicle device comprises:
acquiring, based on a calling operation performed by the command user on the device identification information of one of the candidate unmanned aerial vehicle devices, a scene image captured by the unmanned aerial vehicle device corresponding to that device identification information, wherein the unmanned aerial vehicle device is in the collaborative execution state of the collaborative task.
11. The method of claim 1, wherein the method further comprises:
acquiring a task creation operation of the command user, wherein the task creation operation comprises a selection operation on device identification information of the unmanned aerial vehicle device and/or device identification information of the augmented reality device, and the task creation operation is used for establishing a collaborative task among the command device, the unmanned aerial vehicle device and/or the augmented reality device.
12. The method of claim 11, wherein the collaborative task includes a plurality of subtasks, the augmented reality device belonging to one of the execution devices of a target subtask, the target subtask belonging to one of the plurality of subtasks.
13. The method of claim 12, wherein the method further comprises:
sending a subtask execution instruction related to the target subtask to all execution devices of the target subtask, so that all the execution devices of the target subtask present the execution instruction.
14. A method for presenting mark information of a target object, applied to an augmented reality device, wherein the method comprises:
acquiring first pose information of the augmented reality device used by a duty user, wherein the first pose information comprises first position information and first attitude information of the augmented reality device, the first pose information is used for determining, in combination with geographic position information of a corresponding target object, superposition position information of the target object in a real scene of the augmented reality device, and mark content related to the target object is presented in an overlaid manner in the real scene based on the superposition position information.
15. The method of claim 14, wherein the method further comprises:
sending the first pose information to a corresponding network device, wherein the augmented reality device and the command device are in a collaborative execution state of the same collaborative task;
receiving mark content to be overlaid and overlay position information of the mark content, which are returned by the network device and relate to a target object in the real scene of the augmented reality device, wherein the overlay position information is determined from the first pose information and geographic position information of the target object, the geographic position information is determined from image position information, in a scene image captured by a corresponding unmanned aerial vehicle device, of the target object related to the mark content, the mark content and the image position information are determined by a user operation on the scene image at a corresponding command device, and the command device, the unmanned aerial vehicle device and the augmented reality device are in a collaborative execution state of the same collaborative task.
16. The method of claim 14 or 15, wherein the collaborative task comprises a plurality of subtasks, the augmented reality device belonging to one of the execution devices of a target subtask, the target subtask belonging to one of the plurality of subtasks; wherein the method further comprises:
receiving and presenting a subtask execution instruction which is sent by the command device to the augmented reality device and is related to the target subtask, wherein the subtask execution instruction is presented to all execution devices of the target subtask.
17. The method of claim 14, wherein the method further comprises:
acquiring a scene image of the target object captured by a corresponding unmanned aerial vehicle device, and image position information of corresponding mark content in the scene image;
presenting the scene image, and displaying the mark content on the scene image in an overlaid manner according to the image position information.
18. The method of claim 14, wherein the method further comprises:
acquiring an electronic map of the scene where the target object is located and map position information of the target object in the electronic map;
presenting the electronic map, and displaying the mark content in the electronic map in an overlaid manner based on the map position information.
19. A command device for presenting mark information of a target object, wherein the device comprises:
a first module, used for acquiring a scene image captured by an unmanned aerial vehicle device;
a second module, used for acquiring a user operation performed by a command user of the command device on a target object in the scene image and generating mark information on the target object based on the user operation, wherein the mark information comprises corresponding mark content and image position information of the mark content in the scene image, the image position information is used for determining geographic position information of the target object and for displaying the mark content in an overlaid manner in a real scene of an augmented reality device of a duty user, and the augmented reality device and the command device are in a collaborative execution state of the same collaborative task.
20. An augmented reality device for presenting mark information of a target object, wherein the device comprises:
the system comprises a first module and a second module, wherein the first attitude information is used for acquiring first attitude information of an augmented reality device used by a duty user, the first attitude information comprises first position information and first attitude information of the augmented reality device, the first attitude information is used for determining superposed position information of a target object in a real scene of the augmented reality device by combining with geographic position information of the corresponding target object, and the superposed position information is used for superposing and presenting mark content related to the target object in the real scene.
21. A computer device, wherein the device comprises:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform the steps of the method of any one of claims 1 to 18.
22. A computer-readable storage medium having a computer program/instructions stored thereon, wherein the computer program/instructions, when executed, cause a system to perform the steps of the method according to any one of claims 1 to 18.
23. A computer program product comprising computer program/instructions, characterized in that the computer program/instructions, when executed by a processor, implement the steps of the method of any one of claims 1 to 18.
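The following sketches are editorial illustrations added for readability; they form no part of the claims, and every function and variable name in them is hypothetical. First, a minimal sketch of the kind of computation recited in claim 3, assuming a pinhole camera model, a locally flat ground plane and a shared local East-North-Up (ENU) frame (none of which the claim requires); converting the ENU result to latitude/longitude would additionally use the unmanned aerial vehicle device's GNSS reference point.

    import numpy as np

    def pixel_to_ground_enu(u, v, K, R_wc, cam_pos_enu, ground_z=0.0):
        # Back-project pixel (u, v) of the drone scene image onto the ground plane z = ground_z.
        # K           -- 3x3 intrinsic matrix of the drone camera
        # R_wc        -- 3x3 rotation from the camera frame to the world (ENU) frame
        # cam_pos_enu -- camera position in the ENU frame, in metres
        K = np.asarray(K, dtype=float)
        R_wc = np.asarray(R_wc, dtype=float)
        cam_pos_enu = np.asarray(cam_pos_enu, dtype=float)
        ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray in the camera frame
        ray_world = R_wc @ ray_cam                           # the same ray in the ENU frame
        t = (ground_z - cam_pos_enu[2]) / ray_world[2]       # scale needed to reach the ground plane
        return cam_pos_enu + t * ray_world                   # target position in ENU coordinates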
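In the same illustrative spirit, a hedged sketch of the superposition computation described in claims 14 and 20: the first pose information (first position information plus first attitude information) of the augmented reality device is combined with the target object's geographic position, here assumed to be already expressed in the same local ENU frame, to obtain the superposition position at which the mark content is drawn.

    import numpy as np

    def superposition_position(target_enu, device_pos_enu, R_dw, K_display):
        # Project the target object's position into the augmented reality device's view.
        # device_pos_enu -- first position information (device position in the shared ENU frame)
        # R_dw           -- first attitude information, as a rotation from the ENU frame to the device frame
        # K_display      -- 3x3 intrinsics of the device's virtual rendering camera
        target_enu = np.asarray(target_enu, dtype=float)
        device_pos_enu = np.asarray(device_pos_enu, dtype=float)
        R_dw = np.asarray(R_dw, dtype=float)
        K_display = np.asarray(K_display, dtype=float)
        p_dev = R_dw @ (target_enu - device_pos_enu)  # target expressed in the device frame
        if p_dev[2] <= 0:
            return None                               # target is behind the wearer, nothing to overlay
        uvw = K_display @ p_dev
        return float(uvw[0] / uvw[2]), float(uvw[1] / uvw[2])  # superposition (pixel) position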
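Finally, claims 4 and 6 move between geographic position information and a position on an electronic map. One common, though by no means required, choice is the Web Mercator projection used by many electronic maps; the sketch below assumes WGS-84 latitude/longitude and is illustrative only.

    import math

    _EARTH_RADIUS_M = 6378137.0  # WGS-84 equatorial radius used by Web Mercator

    def latlon_to_map(lat_deg, lon_deg):
        # Geographic position -> planar map position in metres (the claim 4 direction).
        x = _EARTH_RADIUS_M * math.radians(lon_deg)
        y = _EARTH_RADIUS_M * math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
        return x, y

    def map_to_latlon(x, y):
        # Planar map position -> geographic position (the claim 6 direction).
        lon = math.degrees(x / _EARTH_RADIUS_M)
        lat = math.degrees(2 * math.atan(math.exp(y / _EARTH_RADIUS_M)) - math.pi / 2)
        return lat, lon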
CN202210762152.XA 2022-06-30 2022-06-30 Method and equipment for presenting marking information of target object Active CN115439635B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210762152.XA CN115439635B (en) 2022-06-30 2022-06-30 Method and equipment for presenting marking information of target object
PCT/CN2022/110489 WO2024000733A1 (en) 2022-06-30 2022-08-05 Method and device for presenting marker information of target object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210762152.XA CN115439635B (en) 2022-06-30 2022-06-30 Method and equipment for presenting marking information of target object

Publications (2)

Publication Number Publication Date
CN115439635A true CN115439635A (en) 2022-12-06
CN115439635B CN115439635B (en) 2024-04-26

Family

ID=84240888

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210762152.XA Active CN115439635B (en) 2022-06-30 2022-06-30 Method and equipment for presenting marking information of target object

Country Status (2)

Country Link
CN (1) CN115439635B (en)
WO (1) WO2024000733A1 (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108769517A (en) * 2018-05-29 2018-11-06 亮风台(上海)信息科技有限公司 A kind of method and apparatus carrying out remote assistant based on augmented reality
CN109656319A (en) * 2018-11-22 2019-04-19 亮风台(上海)信息科技有限公司 A kind of action of ground for rendering auxiliary information method and apparatus
CN109656259A (en) * 2018-11-22 2019-04-19 亮风台(上海)信息科技有限公司 It is a kind of for determining the method and apparatus of the image location information of target object
CN110365666A (en) * 2019-07-01 2019-10-22 中国电子科技集团公司第十五研究所 Multiterminal fusion collaboration command system of the military field based on augmented reality
CN112017304A (en) * 2020-09-18 2020-12-01 北京百度网讯科技有限公司 Method, apparatus, electronic device, and medium for presenting augmented reality data
CN112639682A (en) * 2018-08-24 2021-04-09 脸谱公司 Multi-device mapping and collaboration in augmented reality environments
WO2021075878A1 (en) * 2019-10-18 2021-04-22 주식회사 도넛 Augmented reality record service provision method and user terminal
CN113741698A (en) * 2021-09-09 2021-12-03 亮风台(上海)信息科技有限公司 Method and equipment for determining and presenting target mark information
CN114116110A (en) * 2021-07-20 2022-03-01 上海诺司纬光电仪器有限公司 Intelligent interface based on augmented reality
CN114332417A (en) * 2021-12-13 2022-04-12 亮风台(上海)信息科技有限公司 Method, device, storage medium and program product for multi-person scene interaction
CN114529690A (en) * 2020-10-30 2022-05-24 北京字跳网络技术有限公司 Augmented reality scene presenting method and device, terminal equipment and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104457704B (en) * 2014-12-05 2016-05-25 北京大学 Based on the unmanned aerial vehicle object locating system and the method that strengthen geography information
US9471059B1 (en) * 2015-02-17 2016-10-18 Amazon Technologies, Inc. Unmanned aerial vehicle assistant
CN109388230A (en) * 2017-08-11 2019-02-26 王占奎 AR fire-fighting emergent commands deduction system platform, AR fire helmet
CN108303994B (en) * 2018-02-12 2020-04-28 华南理工大学 Group control interaction method for unmanned aerial vehicle
CN109561282B (en) * 2018-11-22 2021-08-06 亮风台(上海)信息科技有限公司 Method and equipment for presenting ground action auxiliary information
CN110248157B (en) * 2019-05-25 2021-02-05 亮风台(上海)信息科技有限公司 Method and equipment for scheduling on duty
CN111625091B (en) * 2020-05-14 2021-07-20 佳都科技集团股份有限公司 Label overlapping method and device based on AR glasses


Also Published As

Publication number Publication date
WO2024000733A1 (en) 2024-01-04
CN115439635B (en) 2024-04-26

Similar Documents

Publication Publication Date Title
US9558559B2 (en) Method and apparatus for determining camera location information and/or camera pose information according to a global coordinate system
AU2018450490B2 (en) Surveying and mapping system, surveying and mapping method and device, and apparatus
CN109459029B (en) Method and equipment for determining navigation route information of target object
RU2741443C1 (en) Method and device for sampling points selection for surveying and mapping, control terminal and data storage medium
CN107450088A (en) A kind of location Based service LBS augmented reality localization method and device
CN113345028B (en) Method and equipment for determining target coordinate transformation information
CN109561282B (en) Method and equipment for presenting ground action auxiliary information
US10733777B2 (en) Annotation generation for an image network
US20230162449A1 (en) Systems and methods for data transmission and rendering of virtual objects for display
WO2020103023A1 (en) Surveying and mapping system, surveying and mapping method, apparatus, device and medium
CN109656319B (en) Method and equipment for presenting ground action auxiliary information
CN115439528B (en) Method and equipment for acquiring image position information of target object
AU2018450426B2 (en) Method and device for planning sample points for surveying and mapping, control terminal and storage medium
KR20160007473A (en) Method, system and recording medium for providing augmented reality service and file distribution system
CN113869231A (en) Method and equipment for acquiring real-time image information of target object
CN110248157B (en) Method and equipment for scheduling on duty
CN111527375B (en) Planning method and device for surveying and mapping sampling point, control terminal and storage medium
CN114549766A (en) Real-time AR visualization method, device, equipment and storage medium
CN115460539B (en) Method, equipment, medium and program product for acquiring electronic fence
CN115439635B (en) Method and equipment for presenting marking information of target object
JP2022509082A (en) Work control system, work control method, equipment and devices
CN115565092A (en) Method and equipment for acquiring geographical position information of target object
CN115760964A (en) Method and equipment for acquiring screen position information of target object
CN118092710A (en) Human-computer interaction method, device and computer equipment for augmented reality of information of power transmission equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 201210 7th Floor, No. 1, Lane 5005, Shenjiang Road, China (Shanghai) Pilot Free Trade Zone, Pudong New Area, Shanghai

Applicant after: HISCENE INFORMATION TECHNOLOGY Co.,Ltd.

Address before: Room 501 / 503-505, 570 shengxia Road, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai, 201203

Applicant before: HISCENE INFORMATION TECHNOLOGY Co.,Ltd.

GR01 Patent grant