CN115439635B - Method and equipment for presenting marking information of target object - Google Patents

Method and equipment for presenting marking information of target object

Info

Publication number
CN115439635B
CN115439635B (application CN202210762152.XA)
Authority
CN
China
Prior art keywords
position information
information
augmented reality
equipment
command
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210762152.XA
Other languages
Chinese (zh)
Other versions
CN115439635A (en)
Inventor
廖春元
黄海波
韩磊
梅岭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hiscene Information Technology Co Ltd
Original Assignee
Hiscene Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hiscene Information Technology Co Ltd filed Critical Hiscene Information Technology Co Ltd
Priority to CN202210762152.XA
Priority to PCT/CN2022/110489 (WO2024000733A1)
Publication of CN115439635A
Application granted
Publication of CN115439635B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/60 Editing figures and text; Combining figures or text

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application aims to provide a method and device for presenting marker information of a target object, the method specifically comprising: acquiring a scene image captured by an unmanned aerial vehicle device; and acquiring a user operation of a command user of a command device on a target object in the scene image, and generating marker information about the target object based on the user operation, wherein the marker information comprises corresponding marker content and image position information of the marker content in the scene image, the image position information is used for determining geographic position information of the target object and for superimposing and presenting the marker content in the live view of an augmented reality device of a duty user, and the augmented reality device and the command device are in a cooperative execution state of the same cooperative task. The application combines spatial computing technology, augmented reality technology and a command system, enriches the forms of command, greatly improves command efficiency, and provides a good command environment for users.

Description

Method and equipment for presenting marking information of target object
Technical Field
The present application relates to the field of communications, and in particular, to a technique for presenting tag information of a target object.
Background
With the rapid rise of the unmanned aerial vehicle industry, the paths for applying unmanned aerial vehicles have become increasingly clear. For example, police unmanned aerial vehicle devices are favored in actual-combat applications for high-altitude panoramic video acquisition: the unmanned aerial vehicle device transmits high-altitude panoramic pictures to a command center over an image link for global command and scheduling of major events. However, existing command and scheduling systems offer only limited front-end and rear-end command means, relying mainly on two-way audio/video calls, text and pictures; for complex and changeable scenes they are inefficient and lack intuitiveness.
Disclosure of Invention
An object of the present application is to provide a method and apparatus for presenting marking information of a target object.
According to one aspect of the present application, there is provided a method of presenting marker information of a target object, the method comprising:
Acquiring a scene image shot by unmanned aerial vehicle equipment;
And acquiring a user operation of a command user of the command device on a target object in the scene image, and generating marker information about the target object based on the user operation, wherein the marker information comprises corresponding marker content and image position information of the marker content in the scene image, the image position information is used for determining geographic position information of the target object and for superimposing and presenting the marker content in the live view of an augmented reality device of a duty user, and the augmented reality device and the command device are in a cooperative execution state of the same cooperative task.
According to another aspect of the present application, there is provided a method of presenting marker information of a target object, applied to an augmented reality device, wherein the method includes:
The method comprises: acquiring first pose information of an augmented reality device being used by a duty user, wherein the first pose information comprises first position information and first attitude information of the augmented reality device, and the first pose information is used, in combination with geographic position information of a target object, to determine superposition position information of the target object in the live view of the augmented reality device and to superimpose and present marker content related to the target object in the live view based on the superposition position information.
According to one aspect of the present application, there is provided a command device presenting tag information of a target object, wherein the device comprises:
a first module, configured to acquire a scene image captured by the unmanned aerial vehicle device;
a second module, configured to acquire a user operation of a command user of the command device on a target object in the scene image and to generate marker information about the target object based on the user operation, wherein the marker information comprises corresponding marker content and image position information of the marker content in the scene image, the image position information is used for determining geographic position information of the target object and for superimposing and presenting the marker content in the live view of an augmented reality device of a duty user, and the augmented reality device and the command device are in a cooperative execution state of the same cooperative task.
According to another aspect of the present application, there is provided an augmented reality device presenting marker information of a target object, wherein the device includes:
a module configured to acquire first pose information of an augmented reality device being used by a duty user, wherein the first pose information comprises first position information and first attitude information of the augmented reality device, and the first pose information is used, in combination with geographic position information of a target object, to determine superposition position information of the target object in the live view of the augmented reality device and to superimpose and present marker content related to the target object in the live view based on the superposition position information.
According to one aspect of the present application, there is provided a computer apparatus, wherein the apparatus comprises:
A processor; and
A memory arranged to store computer executable instructions which, when executed, cause the processor to perform the steps of any of the methods described above.
According to one aspect of the present application there is provided a computer readable storage medium having stored thereon a computer program/instruction which, when executed, causes a system to perform the steps of a method as described in any of the above.
According to one aspect of the present application there is provided a computer program product comprising computer programs/instructions which when executed by a processor implement the steps of a method as described in any of the preceding.
Compared with the prior art, the present application acquires a scene image through the command device and determines marker information of a target object based on the scene image, so that the marker content of the marker information is superimposed and presented in the live view of the augmented reality device. This combines spatial computing technology, augmented reality technology and the command system, enriches the forms of command, greatly improves command efficiency, and provides a good command environment for users.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the accompanying drawings in which:
FIG. 1 illustrates a flow chart of a method of presenting marking information of a target object according to one embodiment of the application;
FIG. 2 illustrates a flow chart of a method of presenting marking information of a target object in accordance with another embodiment of the application;
FIG. 3 illustrates a device architecture diagram of a command device according to one embodiment of the application;
fig. 4 shows a device structure diagram of an augmented reality device according to another embodiment of the application;
FIG. 5 illustrates an exemplary system that may be used to implement various embodiments described in the present application.
The same or similar reference numbers in the drawings refer to the same or similar parts.
Detailed Description
The application is described in further detail below with reference to the accompanying drawings.
In one exemplary configuration of the application, the terminal, the device of the service network, and the trusted party each include one or more processors (e.g., central processing units, CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory. Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PCM), programmable random access memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
The device includes, but is not limited to, a user device, a network device, or a device formed by integrating a user device and a network device through a network. The user device includes, but is not limited to, any mobile electronic product capable of human-computer interaction with a user (for example, through a touch pad), such as a smart phone or a tablet computer; the mobile electronic product may use any operating system, such as the Android operating system or the iOS operating system. The network device includes an electronic device capable of automatically performing numerical calculation and information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a programmable logic device (PLD), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like. The network device includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud of servers; here, the cloud is composed of a large number of computers or network servers based on cloud computing, where cloud computing is a kind of distributed computing: a virtual supercomputer composed of a group of loosely coupled computers. The network includes, but is not limited to, the Internet, wide area networks, metropolitan area networks, local area networks, VPN networks, wireless ad hoc networks, and the like. Preferably, the device may also be a program running on the user device, the network device, or a device formed by integrating the user device and the network device, the touch terminal, or the network device and the touch terminal through a network.
Of course, those skilled in the art will appreciate that the above devices are merely examples; other existing devices or devices that may appear in the future, insofar as they are applicable to the present application, are also intended to be within the scope of the present application and are incorporated herein by reference.
In the description of the present application, the meaning of "a plurality" is two or more unless explicitly defined otherwise.
Fig. 1 shows a method of presenting marker information of a target object according to an aspect of the present application, applied to a command device, wherein the method comprises step S101 and step S102. In step S101, a scene image shot by an unmanned aerial vehicle device is acquired; in step S102, a user operation of a command user of a command device about a target object in the scene image is acquired, and marker information about the target object is generated based on the user operation, wherein the marker information includes corresponding marker content and image position information of the marker content in the scene image, the image position information is used for determining geographic position information of the target object and overlaying and presenting the marker content in a real scene of an augmented reality device of a duty user, and the augmented reality device and the command device are in a collaborative execution state of the same collaborative task. The command device includes, but is not limited to, a user device, a network device, and a device formed by integrating the user device and the network device through a network. The user equipment comprises, but is not limited to, any mobile electronic product which can perform man-machine interaction with a user, such as a mobile phone, a personal computer, a tablet personal computer and the like; the network device includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud of multiple servers.
The command device establishes communication connections with the corresponding unmanned aerial vehicle device, augmented reality device and the like, and related data transmission is carried out through these communication connections. In some cases, the command device and the unmanned aerial vehicle device and/or the augmented reality device are in a collaborative execution state of the same collaborative task, where a collaborative task refers to a task that is jointly completed by a plurality of devices according to certain constraint conditions (for example, spatial distance from a target object, time constraints, physical conditions of the devices, or the order of task execution) and is aimed at achieving a certain criterion; such a task can generally be decomposed into a plurality of subtasks that are distributed to each device in the system, and each device completes its assigned subtasks, thereby advancing the overall progress of the collaborative task. The corresponding command device serves as the control center of the collaborative task system during execution of the collaborative task, and regulates the subtasks of each device in the collaborative task. The task-participating devices of the collaborative task comprise the command device, one or more unmanned aerial vehicle devices and one or more augmented reality devices, and the corresponding command device is operated by a command user; the unmanned aerial vehicle device can acquire images or fly based on acquisition instructions/flight path planning instructions sent by the command device, or the unmanned aerial vehicle device can be controlled by a corresponding drone pilot through the ground control equipment of the unmanned aerial vehicle device, where the ground control equipment receives and presents the control instructions sent by the command device and the control operations of the drone pilot are used to control the unmanned aerial vehicle device; the augmented reality device is worn and controlled by the corresponding duty user, and the augmented reality device includes, but is not limited to, augmented reality glasses, an augmented reality helmet, and the like. Of course, in some cases, besides the participation of the command device, the augmented reality device and/or the unmanned aerial vehicle device, the collaborative task may also involve a network device that performs three-party data transmission and data processing; for example, the unmanned aerial vehicle device sends the corresponding scene image to the corresponding network device, and the command device and/or the augmented reality device acquire the scene image through the network device.
Specifically, in step S101, a scene image captured by the unmanned aerial vehicle device is acquired. For example, unmanned aerial vehicle equipment refers to unmanned aerial vehicle operated by using radio remote control equipment and a self-provided program control device, and has the advantages of small size, low cost, convenient use, low requirements on battle environment, stronger battlefield survivability and the like. The unmanned aerial vehicle device can collect scene images of specific areas, for example, the unmanned aerial vehicle device collects scene images of corresponding areas in the flight process based on a preset flight route or a predetermined target place, the unmanned aerial vehicle device can record shooting pose information corresponding to the unmanned aerial vehicle device when the scene images are collected in the scene image collecting process, and the shooting pose information comprises shooting position information, shooting pose information and the like of a shooting device of the unmanned aerial vehicle device when the scene images are collected. The unmanned aerial vehicle device or the corresponding ground control device may send the scene image to the network device and by the network device to the corresponding device or the like, or the unmanned aerial vehicle device or the corresponding ground control device may directly send the scene image to the corresponding device or the like in communication connection with the corresponding device, wherein the corresponding device comprises a command device and/or an augmented reality device. In some cases, in the process of sending the scene image, the unmanned aerial vehicle device may also send the camera pose information corresponding to the scene image to a corresponding device or a network device, for example, send the camera pose information to a command device and/or an augmented reality device or a network device, etc. Specifically, for example, the network device may forward, based on the cooperative task, a scene image acquired by the unmanned aerial vehicle device in a state of performing the cooperative task to the command device and/or the augmented reality device, and the like; or the unmanned aerial vehicle device transmits the acquired scene image to the network device in real time, the command device and/or the augmented reality device send an image acquisition request about the unmanned aerial vehicle device to the network device, the corresponding image acquisition request contains the device identification information of the unmanned aerial vehicle device, and the network device responds to the image acquisition request, and invokes the scene image acquired by the unmanned aerial vehicle device based on the device identification information of the unmanned aerial vehicle device and sends the scene image to the command device and/or the augmented reality device. After the command device and/or augmented reality device acquires the corresponding scene image, the scene image is presented in a corresponding display device (e.g., display screen, projector, etc.).
In step S102, a user operation of a command user of a command device about a target object in the scene image is acquired, and marker information about the target object is generated based on the user operation, wherein the marker information includes corresponding marker content and image position information of the marker content in the scene image, the image position information is used for determining geographic position information of the target object and overlaying and presenting the marker content in a real scene of an augmented reality device of a duty user, and the augmented reality device and the command device are in a collaborative execution state of the same collaborative task. For example, the command device comprises data acquisition means for acquiring user operations of a command user, such as a keyboard, a mouse, a touch screen or a touch pad, an image acquisition unit, a voice input unit, etc. For example, the user operation of the command user may be a gesture action or a voice instruction of the command user, and the tag information is generated by recognizing the gesture action or the voice instruction; as another example, the user operation of the command user may be a direct operation on the scene image by using a device such as a keyboard, a mouse, a touch screen, or a touch pad, for example, the command user performs operations such as selecting a frame, graffiti, or adding other editing information (for example, editing text, adding 2D or 3D model information, etc.) on the presented scene image through the mouse. In some embodiments, the command device may present an operation interface related to the scene image at the same time of presenting the scene image collected by the unmanned aerial vehicle device, and the command user may operate a control in the operation interface, so as to implement marking of the target object in the scene image, for example, the command device may collect mark information related to the target object generated by the user about marking operation in the scene image, specifically, the mark information includes, but is not limited to, framing, graffiti, or adding other editing information for a specific area/specific position/specific target in the scene image. The target object is used to indicate a physical object or the like contained in a marked area in the scene image corresponding to the marking information, such as pedestrians, vehicles, geographic locations/areas, buildings, streets or other identified objects, etc. The corresponding mark information includes mark content determined by user operation and image position information of the mark content in the scene image, wherein the mark content is determined by the added information of the scene image operated by the user, including but not limited to a square frame, a circle, a line, a point, an arrow, a picture/video, an animation, a three-dimensional model and the like, preferably, the mark content further includes parameter information such as color, thickness and the like, and the corresponding image position information is used for indicating coordinate information of the mark information in a corresponding image/pixel coordinate system of the scene image, and the coordinate information can be a region coordinate set of a mark region where the target object is located or position coordinates corresponding to a specific position and the like.
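To make the structure of such mark information concrete, the following is a minimal, hypothetical sketch in Python (the patent does not prescribe any data format; all class and field names here are illustrative assumptions): it simply pairs the marker content chosen by the command user with the pixel coordinates of that content in the scene image.

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class MarkerContent:
    """What the command user drew or attached: a box, doodle, arrow, text, model id, ..."""
    kind: str                       # e.g. "box", "doodle", "arrow", "text", "model"
    payload: str = ""               # e.g. label text or an asset identifier
    color: str = "#FF0000"          # optional style parameters
    thickness: int = 2


@dataclass
class MarkerInfo:
    """Marker information generated for one target object in one scene image."""
    content: MarkerContent
    # Pixel coordinates of the mark in the scene image: a single point, a region
    # outline, or the corner points of a bounding box.
    image_positions: List[Tuple[float, float]] = field(default_factory=list)
    scene_image_id: str = ""        # which drone frame the mark was drawn on


# Example: the command user drags a red box around a vehicle in frame "frame_0421".
marker = MarkerInfo(
    content=MarkerContent(kind="box", payload="suspect vehicle"),
    image_positions=[(812.0, 430.0), (1024.0, 430.0), (1024.0, 592.0), (812.0, 592.0)],
    scene_image_id="frame_0421",
)
```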
In some cases, the image position information is used to determine geographic position information of the target object and to superimpose and present the marker content in the live view of the augmented reality device of the duty user. For example, any one of the participating devices in the collaborative task (the command device, the unmanned aerial vehicle device, the augmented reality device, a network device, etc.) may determine, based on the image position information and the camera pose information at the time the scene image was acquired, the geographic position information of the image position information in the geographic coordinate system corresponding to the real world. A geographic coordinate system generally refers to a coordinate system consisting of longitude, latitude and altitude, and is capable of indicating any location on the earth. Different reference ellipsoids may be used in different regions, and even the same ellipsoid may be adjusted in azimuth or even in size so that it better fits the local ground level. This requires the use of different geodetic datums for identification, such as the CGCS2000 and WGS84 geographic coordinate systems commonly used in China; WGS84 is currently the most widely used geographic coordinate system and is also the coordinate system used by the widely deployed GPS global satellite positioning system. Three-dimensional rectangular coordinate systems include, but are not limited to, a station-center (local ENU) coordinate system, a navigation coordinate system, an NWU coordinate system and the like. Specifically, when the camera pose information corresponding to a scene image is acquired, spatial position information of a plurality of map points can also be acquired, where the spatial position information comprises the spatial coordinate information of the corresponding map points in the three-dimensional rectangular coordinate system.
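As a hedged illustration of the conversion between a geographic coordinate system and such a three-dimensional rectangular coordinate system, the sketch below converts WGS84 geodetic coordinates to a local East-North-Up (station-center) frame via ECEF; treating ENU as the rectangular frame and anchoring it at the drone take-off position are assumptions made only for this example.

```python
import numpy as np

# WGS84 ellipsoid constants
_A = 6378137.0                  # semi-major axis (m)
_F = 1.0 / 298.257223563        # flattening
_E2 = _F * (2.0 - _F)           # first eccentricity squared


def wgs84_to_ecef(lat_deg: float, lon_deg: float, h: float) -> np.ndarray:
    """Geodetic (lat, lon, height) -> Earth-centered Earth-fixed (ECEF) coordinates."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    n = _A / np.sqrt(1.0 - _E2 * np.sin(lat) ** 2)
    return np.array([
        (n + h) * np.cos(lat) * np.cos(lon),
        (n + h) * np.cos(lat) * np.sin(lon),
        (n * (1.0 - _E2) + h) * np.sin(lat),
    ])


def ecef_to_enu(p_ecef: np.ndarray, origin_llh: tuple) -> np.ndarray:
    """ECEF point -> local East-North-Up coordinates about a chosen origin (lat, lon, h)."""
    lat0, lon0 = np.radians(origin_llh[0]), np.radians(origin_llh[1])
    d = p_ecef - wgs84_to_ecef(*origin_llh)
    # Rotation from the ECEF axes to the local ENU axes at the origin
    r = np.array([
        [-np.sin(lon0),                np.cos(lon0),               0.0],
        [-np.sin(lat0) * np.cos(lon0), -np.sin(lat0) * np.sin(lon0), np.cos(lat0)],
        [ np.cos(lat0) * np.cos(lon0),  np.cos(lat0) * np.sin(lon0), np.sin(lat0)],
    ])
    return r @ d


# Example: express a map point in an ENU frame anchored at a hypothetical take-off position.
takeoff = (31.2304, 121.4737, 15.0)
map_point_enu = ecef_to_enu(wgs84_to_ecef(31.2310, 121.4745, 10.0), takeoff)
```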
When the three-dimensional rectangular coordinate system is known, the coordinate transformation for converting geographic position information from the geographic coordinate system into the three-dimensional rectangular coordinate system is also known; based on this known coordinate transformation, the map points can be converted from the geographic coordinate system into the three-dimensional rectangular coordinate system, so that the corresponding spatial position information is determined from the geographic coordinate information of the map points. Further, the target spatial position information of the target point in the three-dimensional rectangular coordinate system is determined according to the spatial position information of the map points, the image position information of the target point, the imaging position information and the imaging attitude information. For example, once the known spatial position information of the map points, the known image position information of the target point, the known imaging position information and the known imaging attitude information are acquired, and because the intrinsic parameters of the imaging device are known, a spatial ray passing through the camera optical center and corresponding to the image position information of the target point can be constructed based on the camera imaging model, and the target spatial position information of the target point can be determined based on this spatial ray, the spatial position information of the map points and the imaging attitude information. For example, it may be assumed that the image position information is perpendicular to the image plane of the camera (for example, the optical axis corresponding to the image center of the unmanned aerial vehicle camera is perpendicular to the image plane), so that the corresponding spatial ray information is determined based on the normal vector of the image plane and the image position information; a corresponding intersection point is then determined based on the spatial ray information and the ground information composed of the plurality of map points, and the spatial coordinate information of the intersection point is used as the target spatial position information of the target point. Of course, if the pixel corresponding to the image position information is not located at the center of the image, there is an error between the normal vector determined from the image plane and the actual ray vector; in this case the vector information of the spatial target ray corresponding to the image position information needs to be determined from the imaging model of the camera, the image position information and the imaging pose information, where the spatial target ray is described by the optical-center coordinates and the vector information of the ray. After the computer device determines the vector information of the corresponding spatial target ray, the intersection point of the ray with the ground may be calculated based on the vector information of the target ray, the imaging position information and the spatial position information of the plurality of map points, so that the spatial coordinate information of the intersection point is taken as the target spatial position information of the target point.
Finally, geographic coordinate information of the target point in a geographic coordinate system (such as a geodetic coordinate system) is determined based on the target spatial position information of the target point. For example, after the computer device determines the target spatial position information of the target point, the coordinate information in the three-dimensional rectangular coordinate system can be converted from the three-dimensional spatial coordinate system to a geographic coordinate system (for example, WGS84 coordinate system) and stored for facilitating subsequent calculation. Wherein in some embodiments, determining the target spatial position information of the target point in the three-dimensional rectangular coordinate system according to the vector information of the target ray, the imaging position information and the spatial position information of the plurality of map points comprises: acquiring optical center space position information of an optical center of the image pickup device in a three-dimensional rectangular coordinate system based on the image pickup position information; determining a target map point closest to the target ray from the map points according to the target ray vector information, the spatial position information of the map points and the optical center spatial position information; two map points are taken from other map points except the target map point in the plurality of map points, a corresponding space triangle is formed by the two map points and the target map point, and a corresponding space intersection point is determined according to the target ray and the corresponding space triangle; and taking the space coordinate information of the space intersection point as target space position information of the target point.
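The ray-casting step described in the two preceding paragraphs can be sketched as follows (a simplification under stated assumptions, not the patent's exact procedure): the marked pixel is back-projected through the camera optical center using the known intrinsic matrix and the camera-to-world rotation obtained from the imaging attitude, and the resulting ray is intersected with the ground. A single flat ground plane here stands in for the map-point/triangle intersection described above, and the intrinsics and rotation in the example are hypothetical.

```python
import numpy as np


def pixel_ray_world(u: float, v: float, k: np.ndarray, r_wc: np.ndarray) -> np.ndarray:
    """Direction (in the world/ENU frame) of the ray from the optical center through pixel (u, v).

    k    : 3x3 camera intrinsic matrix
    r_wc : 3x3 rotation from camera frame to world frame (from the drone's imaging attitude)
    """
    d_cam = np.linalg.inv(k) @ np.array([u, v, 1.0])   # back-project the pixel
    d_world = r_wc @ d_cam
    return d_world / np.linalg.norm(d_world)


def intersect_ground(cam_pos: np.ndarray, ray_dir: np.ndarray, ground_z: float) -> np.ndarray:
    """Intersect the ray with a (locally flat) ground plane z = ground_z.

    In the fuller formulation above, the "ground" is built from nearby map points
    (e.g. a triangle around the map point closest to the ray); a single plane is the
    simplest stand-in and is assumed here only for illustration.
    """
    t = (ground_z - cam_pos[2]) / ray_dir[2]
    if t <= 0:
        raise ValueError("ray does not hit the ground in front of the camera")
    return cam_pos + t * ray_dir


# Example with hypothetical numbers: a nadir-looking camera 120 m above the ENU origin.
k = np.array([[3000.0,    0.0, 2000.0],
              [   0.0, 3000.0, 1500.0],
              [   0.0,    0.0,    1.0]])
r_wc = np.array([[1.0,  0.0,  0.0],      # camera x -> East
                 [0.0, -1.0,  0.0],      # camera y (down in the image) -> South
                 [0.0,  0.0, -1.0]])     # camera z (optical axis)      -> straight down
cam_pos = np.array([0.0, 0.0, 120.0])
target_enu = intersect_ground(cam_pos, pixel_ray_world(2310.0, 1640.0, k, r_wc), ground_z=0.0)
# target_enu can then be converted back to WGS84 to obtain the target's geographic position.
```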
Alternatively, the current position information of the target point in the camera coordinate system can be determined according to the image position information of the target point in the unmanned aerial vehicle scene image and the intrinsic parameters of the unmanned aerial vehicle camera; the geographic position information of the target point in the geographic coordinate system is then determined according to the current position information of the target point in the camera coordinate system and the camera extrinsic parameters determined from the shooting parameter information of the scene image captured by the unmanned aerial vehicle, where the shooting parameter information includes, but is not limited to, the resolution of the imaging device of the unmanned aerial vehicle device, the field of view, the rotation angle of the camera, the flight altitude of the unmanned aerial vehicle, and the like. If the marked area where the target object is located contains only one point, the geographic position information corresponding to that target point is the geographic position information corresponding to the target object; if the marked area where the target object is located contains a plurality of points, in some embodiments the target point is used to indicate a point of the marked area where the target object is located, and based on the geographic position information of each point in the marked area the geographic coordinate set of the target object can be determined, thereby determining the geographic position information of the target object. In other embodiments, the target point is used to indicate one or more key points (such as the coordinates of corner points or of a circle center) in the marked area where the target object is located; based on the geographic position information of the one or more key points, the geographic coordinate set of the target object can be determined, for example by calculating the coordinate expression of the line segment corresponding to each edge from the spatial coordinates of a plurality of corner points, so as to determine the coordinate set corresponding to each edge; summarizing the coordinate sets of all edges then determines the geographic position information of the target object.
The determination of the geographic position information can take place at the command device, or at the unmanned aerial vehicle device, the augmented reality device or a network device. For example, preferably, the command device calculates and determines the geographic position information of the target object according to the image position information determined by the user operation of the command user on the target object in the scene image and the camera pose information corresponding to the scene image. As another example, after the command device determines the corresponding image position information, the image position information is sent to the unmanned aerial vehicle device/the augmented reality device/the network device, and the corresponding unmanned aerial vehicle device/augmented reality device/network device calculates and determines the geographic position information of the target object based on the corresponding scene image and the camera pose information corresponding to the scene image. Taking the case where the network device calculates and determines the geographic position information of the target object as an example, the command device sends the image position information to the corresponding network device and receives the geographic position information determined by the network device based on the image position information and the camera pose information of the scene image. For example, in addition to the participation of the respective execution terminals, the collaborative task also involves a network device for data transmission and data processing. In some cases, after determining the corresponding marker information based on the user operation of the command user, the command device sends the marker information to the corresponding network device; the network device receives the marker information and calculates and determines the geographic position information of the target object based on the image position information in the marker information and the camera pose information corresponding to the scene image transmitted to the network device by the unmanned aerial vehicle device. On the one hand, the network device may return the geographic position information to the command device, so that the command device superimposes and presents the marker content of the target object based on the geographic position information, for example, tracking and superimposing the marker content of the target object in the real-time scene image captured by the unmanned aerial vehicle and acquired by the command device, or superimposing and presenting the marker content of the target object in the real-time live view of the augmented reality device acquired by the command device, or presenting the marker content of the target object in an electronic map, presented by the command device, of the area where the target object is located, and so on. On the other hand, the network device can further determine the superposition position information and return it to the command device, so that the command device can superimpose and present the marker content of the target object based on the superposition position information. A projection of the geographic coordinate system (such as an equirectangular projection, a Mercator projection, a Gauss-Krüger projection, a Lambert projection, etc.) is a 2D plane description, which forms a map.
The electronic map follows a geographic coordinate system convention and is a mapping of the geographic coordinate system with a known mapping relation; that is, given a certain point in the geographic coordinate system, its map position in the electronic map can be determined. Conversely, if the map position information on the electronic map is known, the corresponding position in the geographic coordinate system can also be determined from that position information.
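As one concrete example of such a mapping, the sketch below uses the Web Mercator scheme common in electronic maps (the patent only requires that some known projection be used, so this particular choice is an assumption): the forward function turns WGS84 coordinates into map pixel coordinates, and the inverse recovers geographic coordinates from a map position.

```python
import math

TILE_SIZE = 256  # pixels per tile in the Web Mercator tiling scheme


def geo_to_map_pixel(lat_deg: float, lon_deg: float, zoom: int) -> tuple:
    """WGS84 (lat, lon) -> global pixel coordinates on a Web-Mercator electronic map."""
    scale = TILE_SIZE * (2 ** zoom)
    x = (lon_deg + 180.0) / 360.0 * scale
    siny = math.sin(math.radians(lat_deg))
    y = (0.5 - math.log((1 + siny) / (1 - siny)) / (4 * math.pi)) * scale
    return x, y


def map_pixel_to_geo(x: float, y: float, zoom: int) -> tuple:
    """Inverse mapping: map pixel coordinates back to WGS84 (lat, lon)."""
    scale = TILE_SIZE * (2 ** zoom)
    lon = x / scale * 360.0 - 180.0
    n = math.pi - 2.0 * math.pi * y / scale
    lat = math.degrees(math.atan(math.sinh(n)))
    return lat, lon


# Round trip for a hypothetical target position at zoom level 18.
px = geo_to_map_pixel(31.2310, 121.4745, zoom=18)
print(px, map_pixel_to_geo(px[0], px[1], zoom=18))
```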
In some embodiments, after the geographic position information is determined, the geographic position information may be directly sent to the augmented reality device of the duty user by a corresponding determining device (such as a command device and an unmanned aerial vehicle device), or forwarded to the augmented reality device through a network device, and the local end of the augmented reality device calculates and determines the superposition position information of the geographic position information, which is displayed in the current live-action picture of the augmented reality device in a superposition manner, for example, after the command device/unmanned aerial vehicle device/network device acquires the corresponding geographic position information, the geographic position information is sent to the augmented reality device, and the augmented reality device may determine, based on the received geographic position information and the current duty shooting pose information, the screen position information of the display screen and the like, where the duty shooting pose information includes shooting position information and shooting pose information of a shooting device of the augmented reality device, and the like, and the shooting position information is used for indicating the current geographic position information of the duty user and the like. If the calculation process of the geographic position information occurs at the augmented reality equipment end, the augmented reality equipment reserves the geographic position information and sends the geographic position information to other equipment, or sends the geographic position information to network equipment, and the network equipment sends the geographic position information to other equipment. In other embodiments, the geographic location information is not transmitted to the augmented reality device of the duty user after being determined, but is transmitted to the augmented reality device directly as superimposed location information that is superimposed and displayed in the current live-action screen of the augmented reality device. After any device in the collaborative task acquires the geographic position information, the superposition position information of the geographic position information, which is displayed in the current live-action picture of the augmented reality device in a superposition manner, can be calculated and determined based on the geographic position information and the on-duty shooting pose information of the shooting device of the augmented reality device, and the superposition position information is used for indicating the display position information of the mark content in the display screen of the augmented reality device, such as a screen/image/pixel coordinate point or set of the display screen corresponding to a screen/image/pixel coordinate system. 
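A minimal sketch of this overlay computation on the augmented reality device is given below, assuming the geographic positions of the target and of the duty user have already been converted into a common local ENU frame and that the head pose is available as yaw/pitch angles; the field-of-view values, display resolution and the linear angle-to-pixel mapping are illustrative simplifications rather than any actual device's rendering pipeline.

```python
import math


def geo_overlay_on_screen(target_enu, wearer_enu, yaw_deg, pitch_deg,
                          hfov_deg=40.0, vfov_deg=25.0, width=1280, height=720):
    """Place a target's local ENU position on the AR display of the duty user.

    yaw/pitch describe where the wearer's head (camera) is pointing; hfov/vfov and the
    pixel resolution of the display are hypothetical device parameters. Returns screen
    (x, y) pixels, or None if the target lies outside the current view.
    """
    de = target_enu[0] - wearer_enu[0]
    dn = target_enu[1] - wearer_enu[1]
    du = target_enu[2] - wearer_enu[2]
    azimuth = math.degrees(math.atan2(de, dn))                    # bearing from north
    elevation = math.degrees(math.atan2(du, math.hypot(de, dn)))  # angle above horizon
    # Angular offset between the line of sight to the target and the view direction
    d_az = (azimuth - yaw_deg + 180.0) % 360.0 - 180.0
    d_el = elevation - pitch_deg
    if abs(d_az) > hfov_deg / 2 or abs(d_el) > vfov_deg / 2:
        return None
    x = (d_az / hfov_deg + 0.5) * width      # linear angle-to-pixel approximation
    y = (0.5 - d_el / vfov_deg) * height
    return x, y


# Hypothetical example: target 80 m north-east of the duty user, head facing north-east.
print(geo_overlay_on_screen((60.0, 55.0, 2.0), (0.0, 0.0, 1.7), yaw_deg=45.0, pitch_deg=0.0))
```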
Likewise, in some embodiments, after determining the geographic location information of the target object, a certain device side (such as a network device/an augmented reality device/an unmanned aerial vehicle device/a command device) may directly send the geographic location information to other device sides, where the other device determines, at a local side, corresponding superposition location information of the geographic location information in a live view of the augmented reality device/live view image location information in a live view image/map location information in an electronic map, so that the label information is superposed and presented in the live view image corresponding to the augmented reality device, the unmanned aerial vehicle device and/or the command device; in other embodiments, a device side (such as a network device/an augmented reality device/an unmanned aerial vehicle device/a command device) may further determine the corresponding superposition position information of the geographic position information in the live view of the augmented reality device/the live view image position information in the live view image/the map position information in the electronic map, and send the superposition position information to other device sides, so that the label information is superposed and presented in the live view image of the augmented reality device, the unmanned aerial vehicle device and/or the command device, and the label information is superposed and presented in the electronic map displayed. In some embodiments, the geographic location information is further used to determine real-time image location information of the target object in a real-time scene image captured by the drone device, and to superimpose and present the marker content in the real-time scene image presented by the augmented reality device and/or drone device. For example, regarding the tag information of the target object, after the corresponding geographic position information is calculated based on the image position information in the tag information, the geographic position information can be stored in a storage database (for example, the command device/the augmented reality device/the unmanned aerial vehicle device performs local storage or the network device end sets up a corresponding network storage database, etc.), when the tag information is conveniently called, the geographic position information corresponding to the tag information is called at the same time, and based on the geographic position information, calculation conversion and the like are performed on other position information (for example, real-time image position information in a real-time scene image of the unmanned aerial vehicle device or real-time superposition position information in a real-time acquired real-time scene of the augmented reality device, etc.). For example, the drone device side may send the corresponding real-time scene image directly to the command device/augmented reality device through a communication connection, or to the command device or the augmented reality device via a network device, etc., and the corresponding augmented reality device may present the real-time scene image in a display screen, for example, in a video perspective manner, or in a certain screen area in the display screen, etc. 
In order to facilitate tracking, overlaying and presenting the marking information in the real-time scene image, the unmanned aerial vehicle device acquires real-time flight shooting pose information corresponding to the real-time scene image, in some embodiments, the corresponding augmented reality device/command device can acquire real-time flight shooting pose information and the like of the real-time scene image directly through communication connection with the unmanned aerial vehicle device or through a network device forwarding mode, and in combination with the calculated and determined geographic position information and the like, the overlaying position information and the like in the corresponding real-time scene image can be calculated at a local end, and the marking content and the like are tracked, overlaid and presented in the presented real-time scene image. For example, setting the origin of a three-dimensional rectangular coordinate system (such as a station center coordinate system, a navigation coordinate system and the like) when the unmanned aerial vehicle is at a certain position (such as a take-off position); converting geographic position information corresponding to the marking information into the three-dimensional rectangular coordinate system; acquiring the real-time flight geographic position and attitude information of the unmanned aerial vehicle, converting the geographic position of the unmanned aerial vehicle into the three-dimensional rectangular coordinate system, and determining a rotation matrix from the three-dimensional rectangular coordinate system to the unmanned aerial vehicle camera coordinate system based on the attitude information of the unmanned aerial vehicle; and determining and presenting the real-time image position information of the marking information in the real-time scene image acquired by the unmanned aerial vehicle based on the three-dimensional rectangular coordinates of the marking information, the three-dimensional rectangular coordinates corresponding to the position of the unmanned aerial vehicle, the rotation matrix and the camera internal parameters of the unmanned aerial vehicle. In other embodiments, a device side (for example, a command device/an augmented reality device/an unmanned aerial vehicle device/a network device side) acquires real-time flight shooting pose information of a real-time scene image, and the like, and combines the real-time flight shooting pose information with the geographic position information which is calculated and determined by the mark information, and the like, so that superposition position information and the like in the mark information corresponding to the real-time scene image can be calculated, and then the superposition position information is sent to other device sides, so that the mark content and the like can be tracked, superposed and presented in the real-time scene image presented by the other device sides. In some embodiments, the method further comprises step S103 (not shown), in step S103 geographic location information of the target object is determined based on the image location information, the camera pose information of the scene image. 
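The projection steps enumerated above can be sketched as follows, assuming the marker's geographic position and the drone's real-time position have already been converted into the ENU frame anchored at the take-off point (for example with the conversion sketched earlier), and that the camera-to-world rotation derived from the drone/gimbal attitude and the camera intrinsic matrix are given; the numbers in the example are hypothetical.

```python
import numpy as np


def project_marker_to_frame(marker_enu: np.ndarray,
                            drone_enu: np.ndarray,
                            r_wc: np.ndarray,
                            k: np.ndarray):
    """Project a marker's ENU position into the drone's current video frame.

    marker_enu : marker position in the ENU frame anchored at the take-off point
    drone_enu  : real-time drone (camera) position in the same ENU frame
    r_wc       : 3x3 camera-to-world rotation derived from the drone/gimbal attitude
    k          : 3x3 camera intrinsic matrix
    Returns (u, v) pixel coordinates, or None if the marker is behind the camera.
    """
    p_cam = r_wc.T @ (marker_enu - drone_enu)     # world -> camera frame
    if p_cam[2] <= 0:                             # behind the image plane
        return None
    uvw = k @ (p_cam / p_cam[2])                  # pinhole projection
    return float(uvw[0]), float(uvw[1])


# Hypothetical example: nadir camera 100 m above the take-off point, marker 30 m east.
k = np.array([[2000.0, 0.0, 960.0], [0.0, 2000.0, 540.0], [0.0, 0.0, 1.0]])
r_wc = np.array([[1.0, 0.0, 0.0], [0.0, -1.0, 0.0], [0.0, 0.0, -1.0]])
uv = project_marker_to_frame(np.array([30.0, 0.0, 0.0]),
                             np.array([0.0, 0.0, 100.0]), r_wc, k)
```

Essentially the same projection, driven by the on-duty camera pose of the augmented reality device instead of the drone pose, can also yield the superposition position information for a video-see-through live view.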
For example, after determining the corresponding marker information based on the user operation of the command user, the command device calculates and determines the geographic position information of the target object based on the image position information in the marker information and the camera pose information corresponding to the scene image transmitted by the unmanned aerial vehicle device, and then sends the geographic position information directly to the other execution devices of the collaborative task, such as the augmented reality device and the unmanned aerial vehicle device, or sends the geographic position information to the network device, which forwards it to the other execution devices of the collaborative task. On the one hand, the command device may send the geographic position information to the other execution devices of the collaborative task so that those devices further determine the superposition position information based on the geographic position information and superimpose and present the marker content of the target object, for example, tracking and superimposing the marker content of the target object in the real-time scene image captured by the unmanned aerial vehicle and acquired by the augmented reality device, or superimposing and presenting the marker content of the target object in the real-time live view of the augmented reality device, or presenting the marker content of the target object in an electronic map of the area where the target object is located, and so on. On the other hand, the command device can further determine the superposition position information and return it to the other execution devices, so that the other execution devices superimpose and present the marker content of the target object based on the superposition position information.
In some embodiments, the method further includes step S104 (not shown), in step S104, presenting an electronic map of a scene in which the target object is located; and determining map position information of the target object in the electronic map according to the geographic position information of the target object, and presenting the marking content in the electronic map based on the map position information. For example, the command device side may call an electronic map of a scene where the target object is located, e.g., the command device determines an electronic map near the geographic location information from a local database or the network device side according to the geographic location information where the target object is located, and presents the electronic map. The command device may further obtain map location information of the target object in the electronic map, for example, perform projection conversion on the local terminal based on the geographical location information to determine map location information in the corresponding electronic map, or receive map location information returned by other device terminals (such as a network device, an unmanned aerial vehicle device, and an augmented reality device), and so on. The command device can display the electronic map through the corresponding display device, and display the mark content of the mark information in the map position information corresponding area in the electronic map, so that the mark information of the target object is overlapped and displayed in the electronic map.
In some embodiments, the geographic location information is further used to superimpose and present the marker content in an electronic map presented by the augmented reality device and/or the drone device with respect to a scene in which the target object is located. For example, the geographic location information may be determined by calculation at the command device side/the augmented reality device side/the unmanned aerial vehicle device side, or may be determined by calculation at the network device side. The corresponding command device, unmanned aerial vehicle device or augmented reality device can present the electronic map of the scene where the target object is located through the respective display device, and obtain the map position information of the target object based on the geographic position information, so that the mark content is overlapped and presented in the respective presented electronic map, and the mark information added to the target object in the scene image shot by the unmanned aerial vehicle device is synchronously presented at the corresponding position of the target object in the electronic map. The map location information may be obtained by performing projection conversion determination on the local end of each device based on geographical location information corresponding to the tag information, or may be returned to each device after calculation by the network device, or may be sent to other device after calculation by a certain device end, etc.
In some embodiments, the method further includes step S105 (not shown), in step S105, an electronic map is acquired and presented, and operation marker information of the operation object is determined based on a user operation of the operation object in the electronic map by the command user, where the operation marker information includes corresponding operation marker content and operation map position information of the operation marker content in the electronic map, and the operation map position information is used to determine operation geographic position information of the operation object and superimpose and present the operation marker content in a live view of the augmented reality device and/or a scene image captured by an unmanned aerial vehicle device. For example, the user operation of the command user may be a gesture action or a voice instruction of the command user, and the operation mark information is generated by recognizing the gesture action or the voice instruction; for another example, the user operation of the command user may be a direct operation on the electronic map by using a device such as a keyboard, a mouse, a touch screen, or a touch pad, for example, the command user performs operations such as selecting frames, graffiti or adding other editing information (for example, editing text, adding 2D or 3D model information, etc.) on the presented electronic map through the mouse. In some embodiments, for example, the command device can call an electronic map of a local end or a network device end about a target object, the command device can present an operation interface about the electronic map while presenting the electronic map, a command user can mark the electronic map through the operation interface, such as framing a part of areas in the electronic map or selecting one or more position identifications, etc., the command device can determine the corresponding areas or the position identifications as operation objects based on user operation of the command user, and generate operation mark information corresponding to the operation objects, wherein the operation mark information comprises operation mark content and operation map position information of the operation mark content in the electronic map, etc., the operation mark content is determined by information added by a user operating on the electronic map, including but not limited to boxes, circles, lines, dots, arrows, pictures/videos, animations, three-dimensional models, etc., and preferably, the operation mark content also comprises parameter information, such as colors, thickness, etc. The operation map position information is not related to the map position information of the target object, and may be the same position or different positions. 
Based on the operation map position information, operation geographic position information corresponding to the operation object can be obtained through calculation, for example, the corresponding operation geographic position information is calculated locally at a command device end, or the operation map position information is sent to network equipment, augmented reality equipment or unmanned aerial vehicle equipment to calculate and determine the corresponding operation geographic position information, and the like, preferably, the command device calculates and determines the operation geographic position information corresponding to the operation object according to the operation map position information determined by the user operation of the operation object in the electronic map; in another example, after the command device determines the corresponding operation map location information, the command device sends the operation map location information to the unmanned aerial vehicle device/augmented reality device/network device, the corresponding unmanned aerial vehicle device/augmented reality device/network device calculates and determines the operation geographic location information of the operation object based on the corresponding operation map location information, and the like, and the network device calculates and determines the operation geographic location information of the operation object to introduce the operation geographic location information as an example, and the command device sends the operation map location information to the corresponding network device and receives the operation geographic location information determined by the network device based on the operation map location information. For example, the network device receives the operation map position information sent by the command device, and determines operation geographic position information corresponding to the operation map position information based on inverse transformation of the projective transformation. In one aspect, the network device may return the operation geographic location information to the command device, so that the command device performs overlay presentation on the tag content of the operation object based on the operation geographic location information, for example, the tag content of the operation object is presented in a real-time electronic map about the operation object presented at the command device end, or the tag content of the operation object is overlaid and presented in a real-time scene corresponding to the augmented reality device acquired by the command device, or the tag content of the operation object is tracked and overlaid and presented in a real-time scene image captured by the unmanned aerial vehicle acquired by the command device. On the other hand, the network device can further determine superposition position information, and return the superposition position to the command-providing device, so that the command-providing device can perform superposition presentation on the mark content of the operation object based on the superposition position information. 
In order to realize global command scheduling by the command device, the augmented reality device end and/or the unmanned aerial vehicle device end can also superimpose and present the operation marker information in the image acquired in real time based on the corresponding operation geographic position information. In some embodiments, for example, the augmented reality device calculates and determines the operation geographic position information at the local end or receives the operation geographic position information sent by another device (network device/command device/unmanned aerial vehicle device), and calculates and determines the corresponding superposition position information of the operation geographic position information in the live view of the augmented reality device based on the current on-duty camera pose information, so that the operation marker information is superimposed and presented in the live view of the augmented reality device. For another example, the augmented reality device displays the real-time scene image shot by the unmanned aerial vehicle device; the augmented reality device calculates and determines the operation geographic position information at the local end or receives it from another device (network device/command device/unmanned aerial vehicle device), and calculates and determines the real-time scene image position information of the operation geographic position information in the real-time scene image based on the real-time flight shooting pose information corresponding to the real-time scene image shot by the unmanned aerial vehicle device, so that the operation marker information is superimposed and presented in the real-time scene image displayed by the augmented reality device. For another example, the augmented reality device calculates and determines the operation geographic position information at the local end or receives it from another device (network device/command device/unmanned aerial vehicle device), and calculates and determines the map position information of the operation geographic position information in the electronic map presented by the augmented reality device, so as to superimpose and present the operation marker information in the electronic map of the augmented reality device. Likewise, for example, the unmanned aerial vehicle device calculates and determines the operation geographic position information at the local end or receives it from another device (network device/command device/augmented reality device), and calculates and determines the real-time scene image position information of the operation geographic position information in the corresponding real-time scene image based on the real-time flight shooting pose information corresponding to the real-time scene image shot by the unmanned aerial vehicle device, so that the operation marker information is superimposed and presented in the real-time scene image of the unmanned aerial vehicle device.
For another example, the unmanned aerial vehicle device calculates and determines the operation geographic position information at the local end or receives the operation geographic position information sent by another device (network device/command device/augmented reality device), and calculates and determines the map position information of the operation geographic position information in the electronic map presented by the unmanned aerial vehicle device, so that the operation marker information is superimposed and presented in the electronic map of the unmanned aerial vehicle device. For another example, the unmanned aerial vehicle device calculates and determines the operation geographic position information at the local end or receives it from another device (network device/command device/augmented reality device), and calculates and determines, based on the current on-duty camera pose information, the corresponding superposition position information of the operation geographic position information in the live view of the augmented reality device acquired by the unmanned aerial vehicle device, so as to superimpose and present the operation marker information in that live view of the augmented reality device acquired by the unmanned aerial vehicle device. In other embodiments, one device end (such as the network device/augmented reality device/unmanned aerial vehicle device/command device) may further determine the corresponding superposition position information of the operation geographic position information in the live view of the augmented reality device, the corresponding real-time scene image position information in the real-time scene image, or the map position information in the electronic map, and send it to the other device ends, so that the operation marker information is superimposed and presented in the live view or real-time scene image corresponding to the augmented reality device, the unmanned aerial vehicle device and/or the command device, and in the electronic map displayed by those devices. Through the above technique, the operation marker information added to the operation object in the electronic map is synchronously presented at the corresponding position of the operation object in the live view of the augmented reality device and/or in the scene image shot by the unmanned aerial vehicle device.
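Each of the presentation paths above first needs the operation geographic position information expressed in a local three-dimensional rectangular coordinate system anchored at the observing camera (the drone camera or the augmented reality device). A minimal sketch of that conversion is given below, assuming WGS-84 geodetic coordinates and a standard ECEF-to-ENU transformation; the function names and sample coordinates are illustrative only.

```python
import math

A = 6378137.0          # WGS-84 semi-major axis (m)
E2 = 6.69437999014e-3  # WGS-84 first eccentricity squared

def geodetic_to_ecef(lat_deg: float, lon_deg: float, alt_m: float):
    """WGS-84 geodetic coordinates -> Earth-centred Earth-fixed (ECEF) metres."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + alt_m) * math.sin(lat)
    return x, y, z

def ecef_to_enu(point_ecef, origin_geodetic):
    """ECEF point -> local East-North-Up coordinates about the given origin."""
    lat0, lon0, alt0 = origin_geodetic
    ox, oy, oz = geodetic_to_ecef(lat0, lon0, alt0)
    dx, dy, dz = point_ecef[0] - ox, point_ecef[1] - oy, point_ecef[2] - oz
    lat, lon = math.radians(lat0), math.radians(lon0)
    east = -math.sin(lon) * dx + math.cos(lon) * dy
    north = (-math.sin(lat) * math.cos(lon) * dx
             - math.sin(lat) * math.sin(lon) * dy
             + math.cos(lat) * dz)
    up = (math.cos(lat) * math.cos(lon) * dx
          + math.cos(lat) * math.sin(lon) * dy
          + math.sin(lat) * dz)
    return east, north, up

# Operation geographic position expressed relative to the drone (or AR device) camera
marker_enu = ecef_to_enu(geodetic_to_ecef(31.2305, 121.4740, 15.0),
                         (31.2300, 121.4735, 120.0))
```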
As in some embodiments, the method further comprises step S106 (not shown). In step S106, the operation geographic position information of the operation object is determined based on the operation map position information. For example, the command device determines the operation geographic position information corresponding to the operation map position information according to the operation map position information and based on the inverse of the projection transformation. In some embodiments, the command device directly sends the operation geographic position information to the other execution devices of the collaborative task, such as the augmented reality device and the unmanned aerial vehicle device, or sends the operation geographic position information to the network device, and the network device forwards it to the other execution devices of the collaborative task. For example, the other execution devices calculate and determine the superposition position information corresponding to the operation geographic position information in the live view of the augmented reality device based on the current on-duty camera pose information, so that the operation marker information is superimposed and presented in the live view corresponding to those execution devices, such as superimposing and presenting the marker content of the operation object in the live view of the augmented reality device acquired by the command device or the unmanned aerial vehicle device. For another example, based on the real-time flight shooting pose information corresponding to the real-time scene image shot by the unmanned aerial vehicle device, the other execution devices calculate and determine the real-time scene image position information of the operation geographic position information in the real-time scene image, so that the operation marker information is superimposed and presented in the real-time scene image displayed by the other execution devices. For another example, the other execution devices calculate and determine the map position information of the operation geographic position information in the electronic map, so that the operation marker information is superimposed and presented in the electronic map displayed by the other execution devices. In other embodiments, the command device end may further determine the superposition position information corresponding to the operation geographic position information in the live view of the augmented reality device, the real-time scene image position information in the real-time scene image, or the map position information in the electronic map, and send it to the other execution devices (such as the augmented reality device, the unmanned aerial vehicle device, etc.), so as to superimpose and present the operation marker information in the live view, the real-time unmanned aerial vehicle picture or the electronic map corresponding to the other execution devices.
In some embodiments, the operation geographic position information is further used for superimposing and presenting the operation marker content in an electronic map presented by the augmented reality device and/or the unmanned aerial vehicle device, where the electronic map relates to the scene in which the operation object is located. For example, the unmanned aerial vehicle device or the augmented reality device may also call the electronic map of the scene where the operation object is located. In some embodiments, the unmanned aerial vehicle device or the augmented reality device obtains the corresponding operation geographic position information from another device end (such as the command device or the network device, etc.) and superimposes and presents the operation marker content in the corresponding electronic map based on the operation geographic position information. For example, the ground control center corresponding to the unmanned aerial vehicle device may present the corresponding electronic map, determine the corresponding operation map position information through projection conversion based on the obtained operation geographic position information, and thereby superimpose and present the operation marker content of the operation object at the operation map position information of the electronic map. For another example, the augmented reality device may present the corresponding electronic map on its display screen, determine the corresponding operation map position information through projection conversion based on the acquired operation geographic position information, and thereby superimpose and present the operation marker content of the operation object at the operation map position information of the electronic map. In some embodiments, the unmanned aerial vehicle device or the augmented reality device may acquire the operation map position information corresponding to the operation geographic position information from another device end (such as the command device or the network device, etc.), so as to superimpose and present the operation marker content of the operation object at the operation map position information of the presented electronic map.
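The projection conversion from operation geographic position information to operation map position information mentioned above can be illustrated, under the same assumption that the electronic map uses the Web Mercator (EPSG:3857) projection, by the forward counterpart of the earlier inverse-projection sketch.

```python
import math

EARTH_RADIUS = 6378137.0  # metres; assumption: the electronic map is EPSG:3857

def geographic_to_mercator(lat_deg: float, lon_deg: float) -> tuple:
    """Forward projection used to place marker content on the electronic map."""
    x = EARTH_RADIUS * math.radians(lon_deg)
    y = EARTH_RADIUS * math.log(math.tan(math.pi / 4.0 + math.radians(lat_deg) / 2.0))
    return x, y

# Place the operation marker content of the operation object on the map
map_x, map_y = geographic_to_mercator(31.2305, 121.4740)
```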
In some embodiments, the method further comprises step S107 (not shown). In step S107, first map position information of the augmented reality device and/or second map position information of the unmanned aerial vehicle device are acquired, and the augmented reality device is identified in the electronic map based on the first map position information and/or the unmanned aerial vehicle device is identified based on the second map position information. For example, the augmented reality device includes a corresponding position sensing device (e.g., a position sensor, etc.) that can acquire first geographic position information of the augmented reality device; similarly, the unmanned aerial vehicle device includes a corresponding position sensing device that can acquire second geographic position information of the unmanned aerial vehicle device. The command device may acquire the first geographic position information or the second geographic position information, and determine, based on projection conversion, the first map position information corresponding to the first geographic position information and/or the second map position information corresponding to the second geographic position information. Alternatively, the network device may receive the first geographic position information and/or the second geographic position information uploaded by the augmented reality device and/or the unmanned aerial vehicle device, determine the corresponding first map position information and/or second map position information based on projection conversion, and send the first map position information and/or the second map position information to the corresponding command device, so that the command device performs position identification in the electronic map. After the command device obtains the first map position information of the augmented reality device and/or the second map position information of the unmanned aerial vehicle device, the augmented reality device can be identified in the electronic map based on the first map position information and/or the unmanned aerial vehicle device can be identified based on the second map position information, for example, the augmented reality device is identified by presenting, at the first map position, the avatar or number of the duty user or the serial number of the corresponding augmented reality device, and/or the unmanned aerial vehicle device is identified by presenting, at the second map position, the device avatar or number of the unmanned aerial vehicle device or the avatar or number of the unmanned aerial vehicle pilot.
In some embodiments, the method further includes step S108 (not shown). In step S108, an electronic map of the scene in which the target object is located is acquired and presented, where the electronic map includes device identification information of a plurality of candidate unmanned aerial vehicle devices; in step S101, based on a call operation of the command user with respect to one piece of device identification information of the candidate unmanned aerial vehicle devices, a scene image shot by the unmanned aerial vehicle device corresponding to that device identification information is acquired, where the unmanned aerial vehicle device is in the collaborative execution state of the collaborative task. For example, the command device may invoke the electronic map of the scene in which the target object is located, e.g., the command device invokes the electronic map of the task area from the local end or the network device end. If a plurality of candidate unmanned aerial vehicle devices exist in the current area of the electronic map, the command device may identify the plurality of unmanned aerial vehicle devices in the electronic map; for example, the command device may obtain the map position information of each unmanned aerial vehicle device based on the geographic position information of each unmanned aerial vehicle device, and present the device identification information of each unmanned aerial vehicle device at the corresponding map position information of the electronic map, where the corresponding device identification information includes, but is not limited to, the device serial number, avatar or number of the unmanned aerial vehicle device, or the avatar, name, number, etc. of the unmanned aerial vehicle pilot. The command user can click the device identification information corresponding to a candidate unmanned aerial vehicle device on the operation interface of the electronic map; by calling the image acquired by that candidate unmanned aerial vehicle device as the corresponding scene image, the candidate unmanned aerial vehicle device is determined to be a participating device for executing the collaborative task. Alternatively, the candidate unmanned aerial vehicle devices are all participating devices of the collaborative task, and the command device calls a suitable unmanned aerial vehicle device from the candidate unmanned aerial vehicle devices at any time, based on the operation of the command user, to acquire scene images, so as to realize multi-angle observation, tracking and the like of the corresponding positions in the map.
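Purely as an illustration of the call operation described above, the following sketch models a registry of candidate unmanned aerial vehicle devices and the selection of one of them as a participating device; the data structure and function are hypothetical and not part of the patent.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class CandidateDrone:
    device_id: str         # device identification information shown on the map
    pilot: str             # name or number of the drone pilot
    lat: float
    lon: float
    in_task: bool = False  # whether it is in the collaborative execution state

def call_drone(candidates: Dict[str, CandidateDrone], device_id: str) -> CandidateDrone:
    """Handle the command user's call operation on one piece of device
    identification information: the selected drone becomes a participating
    device, and its captured image is then used as the scene image.
    The registry and this function are hypothetical, not part of the patent.
    """
    drone = candidates[device_id]
    drone.in_task = True   # joins the collaborative execution state of the task
    return drone           # the caller would then subscribe to its video feed

candidates = {"UAV-07": CandidateDrone("UAV-07", "pilot-03", 31.2304, 121.4737)}
selected = call_drone(candidates, "UAV-07")
```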
In some embodiments, the method further comprises step S109 (not shown). In step S109, a task creation operation of the command user is obtained, where the task creation operation includes a selection operation on device identification information of the unmanned aerial vehicle device and/or device identification information of the augmented reality device, and the task creation operation is used to establish a collaborative task involving the command device and the unmanned aerial vehicle device and/or the augmented reality device. For example, the collaborative task is created by the command user of the command device: the command user can acquire the device identification information of one or more unmanned aerial vehicle devices and of one or more augmented reality devices in the current area, and add, in a corresponding task creation interface, the device identification information of the devices the command user wishes to have perform the collaborative task; alternatively, the command user can input corresponding constraint conditions in the creation interface, and one or more pieces of suitable device identification information are determined from the one or more unmanned aerial vehicle devices and/or the one or more augmented reality devices based on the constraint conditions, so that the corresponding task creation process is carried out based on the one or more pieces of device identification information. For example, a communication connection is established with the unmanned aerial vehicle devices and/or augmented reality devices corresponding to the one or more pieces of device identification information, so that the command device and these devices jointly execute the corresponding collaborative task; or a corresponding task creation request is generated based on the one or more pieces of device identification information and sent to the network device, the network device sends the corresponding task creation request to the unmanned aerial vehicle devices and/or augmented reality devices corresponding to the one or more pieces of device identification information, and if a confirmation operation regarding the task creation request is acquired, a collaborative task involving the command device and the unmanned aerial vehicle devices and/or augmented reality devices corresponding to the one or more pieces of device identification information is created, and so on.
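As an illustration of such a task creation request, the sketch below shows one possible message shape and the confirmation check before the collaborative task is created; all field and function names are assumptions, not taken from the original.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TaskCreationRequest:
    """Collaborative-task creation request sent from the command device to the
    network device. All field names are illustrative assumptions."""
    command_device_id: str
    drone_device_ids: List[str] = field(default_factory=list)    # selected drone devices
    ar_device_ids: List[str] = field(default_factory=list)       # selected augmented reality devices
    constraints: Dict[str, float] = field(default_factory=dict)  # e.g. {"max_distance_km": 5.0}

def confirm_and_create(request: TaskCreationRequest, confirmations: Dict[str, bool]) -> bool:
    """Create the collaborative task only if every selected device has confirmed
    the task creation request (a hypothetical confirmation flow)."""
    selected = request.drone_device_ids + request.ar_device_ids
    return bool(selected) and all(confirmations.get(dev, False) for dev in selected)

request = TaskCreationRequest("CMD-1", ["UAV-07"], ["AR-12"])
created = confirm_and_create(request, {"UAV-07": True, "AR-12": True})
```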
In some implementations, the collaborative task includes a plurality of subtasks, the augmented reality device is one of the execution devices of a target subtask, and the target subtask is one of the plurality of subtasks. For example, the corresponding collaborative task includes a plurality of subtasks, e.g., different task branches for different devices; for instance, a capture collaborative task for a target person may be planned as a capture subtask, an interception subtask, a monitoring subtask, etc. The command device may issue different task instructions to different subtasks, for example, task instructions about the capture route for the capture subtask and task instructions about the interception route for the interception subtask. The command device may, by selecting different subtasks, issue the corresponding execution instructions to all execution devices of those subtasks, for example, establishing communication connections with all execution devices of the same subtask at the same time and scheduling them by voice instruction. In some embodiments, the method further includes step S110 (not shown); in step S110, the subtask execution instruction about the target subtask is sent to all execution devices of the target subtask, so that the execution instruction is presented by all execution devices of the target subtask. For example, the command device may send instructions to all duty devices of a certain subtask of the collaborative task, in a single-task, multi-personnel collaboration mode. The plurality of execution devices of a subtask can complete grouping in a formation manner; for example, the command device may determine the corresponding group personnel/group devices based on the selection of the command personnel, or the execution devices participating in the current collaborative task have the corresponding group personnel/group devices predetermined. The command device can select the group corresponding to a specific subtask through a click operation of the command user, or through a voice instruction, and so on, changing the scheduling mode of the current collaborative task from global command to directional command; at this time, the marker information determined by the user operation on the target object in the scene image by the command user and/or the operation marker information determined by the user operation on the operation object in the electronic map are issued to all execution devices in the selected group, so that all execution devices of the target subtask can present the execution instruction, while the execution devices in the non-selected groups cannot acquire the marker information and the operation marker information. Of course, in some cases, the command device may also realize scheduling of the single device corresponding to a specific piece of device identification information based on a touch operation of the command user on that device identification information, for example, issuing a corresponding scheduling instruction to that single device.
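The directional-command behaviour described above amounts to filtering recipients by subtask membership before dispatching the instruction or marker information. The following sketch illustrates that filtering with a hypothetical membership table and transport stub.

```python
from typing import Dict, List

# Hypothetical subtask membership table: subtask name -> device identification information
SUBTASK_MEMBERS: Dict[str, List[str]] = {
    "capture":   ["AR-12", "AR-13", "UAV-07"],
    "intercept": ["AR-21", "UAV-09"],
    "monitor":   ["AR-30"],
}

def send(device_id: str, payload: dict) -> None:
    """Stand-in for the real communication link to an execution device."""
    print(f"-> {device_id}: {payload}")

def dispatch_to_subtask(target_subtask: str, payload: dict) -> List[str]:
    """Directional command: deliver the subtask execution instruction (or the
    marker/operation marker information) only to the execution devices of the
    target subtask, so devices in non-selected groups never receive it."""
    recipients = SUBTASK_MEMBERS.get(target_subtask, [])
    for device_id in recipients:
        send(device_id, payload)
    return recipients

dispatch_to_subtask("capture", {"instruction": "advance along capture route A"})
```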
Fig. 2 shows a method of presenting marker information of a target object according to an aspect of the present application, applied to an augmented reality device. The method comprises step S201, in which first pose information of the augmented reality device being used by a duty user is acquired, where the first pose information includes first position information and first attitude information of the augmented reality device, the first pose information is used to determine, in combination with the geographic position information of the corresponding target object, the superposition position information of the target object in the real scene of the augmented reality device, and the marker content about the target object is superimposed and presented in the real scene based on the superposition position information.
For example, the duty user refers to the wearing user of an augmented reality device that is performing the same collaborative task as the corresponding command device and/or unmanned aerial vehicle device. The geographic position information of the target object can be determined by the command device based on the image position information determined by the user operation of the command user in the scene image and the flight shooting pose information of the scene image, and can also be determined by the command device through projection conversion of the map position information determined by the user operation on the target object in the electronic map. The geographic position information may be calculated locally by a corresponding device (e.g., the augmented reality device, the unmanned aerial vehicle device, the command device) or may be calculated by the network device. In some embodiments, the augmented reality device end may acquire the geographic position information of the target object and then determine the superposition position information of the target object in the real scene of the augmented reality device based on the real-time pose information of the augmented reality device; for example, the local end of the augmented reality device calculates and determines the geographic position information of the target object, or the geographic position information is calculated by the command device/unmanned aerial vehicle device and sent directly to the augmented reality device, or the geographic position information is calculated and determined by the network device and sent to the augmented reality device, and so on. In other embodiments, another device end (e.g., the network device, the unmanned aerial vehicle device, the command device) may determine the superposition position information of the target object in the real scene of the augmented reality device based on the geographic position information and the real-time pose information of the augmented reality device, and send the superposition position information to the augmented reality device.
The augmented reality device can acquire, in real time, the first pose information corresponding to the camera device of the augmented reality device, where the first pose information includes the first position information and the first attitude information of the augmented reality device. According to the first pose information and the geographic position information, the superposition position information of the target object in the live-action picture of the augmented reality device can be calculated and determined, and the corresponding marker content and the like are superimposed and presented in the display screen of the augmented reality device based on the superposition position information. Specifically: the origin of a three-dimensional rectangular coordinate system of the augmented reality device (such as a station-center coordinate system, a navigation coordinate system, etc.) is set at a certain position (such as the starting position of the duty user); the geographic position information of the marker information is converted into the three-dimensional rectangular coordinate system; the real-time geographic position and attitude information of the augmented reality device are acquired, the geographic position of the augmented reality device is converted into the three-dimensional rectangular coordinate system, and the rotation matrix from the three-dimensional rectangular coordinate system to the camera coordinate system of the augmented reality device is determined based on the attitude information of the augmented reality device; and the superposition position information of the marker information in the screen of the augmented reality device is determined based on the three-dimensional rectangular coordinates of the marker information, the three-dimensional rectangular coordinates corresponding to the position of the augmented reality device, the rotation matrix and the camera intrinsic parameters of the augmented reality device. This superposition position calculation may occur locally on the augmented reality device, or may be performed by another device (such as the network device, the command device, the unmanned aerial vehicle device, etc.) based on the geographic position information and the first pose information and then returned to the augmented reality device.
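As a concrete, simplified illustration of the rotation-matrix and camera-intrinsics step described above, the following sketch (Python with NumPy) projects marker coordinates that are already expressed in the shared three-dimensional rectangular coordinate system (for example, via an East-North-Up conversion such as the one sketched earlier) onto the display of the augmented reality device. The attitude convention and the pinhole camera model are assumptions made for illustration and are not prescribed by the patent.

```python
import numpy as np

def world_to_camera_rotation(yaw: float, pitch: float, roll: float) -> np.ndarray:
    """Rotation matrix from the local three-dimensional rectangular (East-North-Up)
    coordinate system to the camera coordinate system.

    Assumed convention (not specified in the patent): at zero attitude the camera
    looks due north and is level; yaw/pitch/roll are in radians, applied in
    Z-Y-X order to the device body frame.
    """
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    body_to_world = rz @ ry @ rx
    # fixed alignment from body axes (x east, y north, z up at zero attitude)
    # to camera axes (x right, y down, z forward)
    body_to_camera = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, -1.0], [0.0, 1.0, 0.0]])
    return body_to_camera @ body_to_world.T

def overlay_position(marker_xyz, device_xyz, attitude, intrinsics):
    """Screen position of the marker content on the augmented reality display.

    marker_xyz, device_xyz: marker and device coordinates in the shared
    three-dimensional rectangular coordinate system (metres).
    attitude: (yaw, pitch, roll) of the device camera in radians.
    intrinsics: 3x3 camera intrinsic matrix of the augmented reality device.
    Returns pixel coordinates (u, v), or None if the marker is behind the camera.
    """
    rotation = world_to_camera_rotation(*attitude)
    p_cam = rotation @ (np.asarray(marker_xyz, float) - np.asarray(device_xyz, float))
    if p_cam[2] <= 0.0:  # behind the camera: nothing to superimpose
        return None
    uvw = intrinsics @ p_cam
    return float(uvw[0] / uvw[2]), float(uvw[1] / uvw[2])

K = np.array([[1400.0, 0.0, 960.0],
              [0.0, 1400.0, 540.0],
              [0.0, 0.0, 1.0]])
# Marker 30 m north and 12 m east of the origin; device at the origin, looking north
print(overlay_position([12.0, 30.0, 2.0], [0.0, 0.0, 1.7], (0.0, 0.0, 0.0), K))
```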
In some cases, the live view of the augmented reality device may be transmitted to the corresponding command device and/or unmanned aerial vehicle device and presented in the display of the command device and/or of the control device of the unmanned aerial vehicle device. Similarly, in the display device of the command device/unmanned aerial vehicle control device, the marker content of the corresponding target object is presented based on the corresponding superposition position information, where the corresponding superposition position information may be determined by the network device/augmented reality device and sent to the corresponding command device and/or unmanned aerial vehicle control device, or may be calculated at the respective local ends by the corresponding command device/unmanned aerial vehicle control device based on the first pose information (sent directly by the augmented reality device or forwarded by the network device, for example) and the geographic position information of the target object. As in some embodiments, the method further comprises step S202 (not shown): in step S202, the first pose information is sent to the corresponding network device, where the augmented reality device and the command device are in the collaborative execution state of the same collaborative task; and the marker content of the target object to be superimposed in the live view of the augmented reality device and the superposition position information of the marker content, returned by the network device, are received, where the superposition position information is determined by the first pose information and the geographic position information of the target object, the geographic position information is determined by the image position information of the target object in the scene image shot by the corresponding unmanned aerial vehicle device, the marker content and the image position information are determined by the user operation at the corresponding command device, and the command device, the unmanned aerial vehicle device and the augmented reality device are in the collaborative execution state of the same collaborative task. For example, the calculation of the corresponding superposition position information may be performed at the network device: the augmented reality device uploads the corresponding first pose information to the network device, and the network device determines the corresponding superposition position information based on the first pose information and the geographic position information of the target object, where the geographic position information may be determined by the network device based on the flight shooting pose information of the unmanned aerial vehicle device and the image position information of the target object, or may be received from the command device/unmanned aerial vehicle device/augmented reality device, which determines it based on the flight shooting pose information of the unmanned aerial vehicle device and the image position information of the target object.
In some implementations, the collaborative task includes a plurality of subtasks, the augmented reality device is one of the execution devices of a target subtask, and the target subtask is one of the plurality of subtasks; the method further includes step S203 (not shown), in which a subtask execution instruction about the target subtask, sent by the command device to the augmented reality device, is received and presented, where the subtask execution instruction is presented to all execution devices of the target subtask. For example, the corresponding collaborative task includes a plurality of subtasks, e.g., different task branches for different devices; for instance, a capture collaborative task for a target person may be planned as a capture subtask, an interception subtask, a monitoring subtask, etc. The command device may issue different task instructions to different subtasks, for example, task instructions about the capture route for the capture subtask and task instructions about the interception route for the interception subtask. The augmented reality device used by the duty user is one of the execution devices of the target subtask, which is one of the plurality of subtasks, and the command device can, by selecting different subtasks, send the corresponding execution instructions to all execution devices of those subtasks, for example, establishing communication connections with all execution devices of the same subtask at the same time, performing voice instruction scheduling, and the like. The command device may send instructions to all duty devices of a certain subtask of the collaborative task, in a single-task, multi-personnel collaboration mode. The plurality of execution devices of a subtask can complete grouping in a formation manner; for example, the command device may determine the corresponding group personnel/group devices based on the selection of the command personnel, or the execution devices participating in the current collaborative task have the corresponding group personnel/group devices predetermined. The command device can select the group corresponding to a specific subtask through a click operation of the command user, or through a voice instruction, and so on, changing the scheduling mode of the current collaborative task from global command to directional command; at this time, the marker information determined by the user operation on the target object in the scene image by the command user and/or the operation marker information determined by the user operation on the operation object in the electronic map are issued to all execution devices in the selected group, so that all execution devices of the target subtask can present the execution instruction, while the execution devices in the non-selected groups cannot acquire the marker information and the operation marker information. Of course, in some cases, the command device may also realize scheduling of the single device corresponding to a specific piece of device identification information based on a touch operation of the command user on that device identification information, for example, issuing a corresponding scheduling instruction to that single device.
In some embodiments, the method further includes step S204 (not shown). In step S204, the scene image about the target object captured by the corresponding unmanned aerial vehicle device and the image position information of the corresponding marker content in the scene image are acquired; and the scene image is presented and the marker content is superimposed and presented on the scene image according to the image position information. For example, the augmented reality device may further acquire the scene image about the target object captured by the unmanned aerial vehicle device, where the scene image may be acquired directly from the unmanned aerial vehicle device end, or may be called from the network device based on the device identification information of the unmanned aerial vehicle device. The unmanned aerial vehicle device end can send the corresponding scene image directly to the command device/augmented reality device through a communication connection, or send it to the command device or the augmented reality device through the network device, and the corresponding augmented reality device can present the scene image in the display screen, for example, in a video see-through manner, or in a certain screen area of the display screen, and so on. In order to facilitate the superimposed presentation of the marker information in the scene image, the unmanned aerial vehicle device acquires the flight shooting pose information corresponding to the scene image. In some embodiments, the corresponding augmented reality device may acquire the flight shooting pose information of the scene image directly through the communication connection with the unmanned aerial vehicle device or through forwarding by the network device, and, in combination with the geographic position information of the target object determined through calculation, the image position information in the corresponding scene image can be calculated and the marker content superimposed and presented in the presented scene image. In other embodiments, another device (such as the network device, the command device, or the unmanned aerial vehicle device) acquires the flight shooting pose information of the scene image and, in combination with the calculated geographic position information of the target object, calculates the superposition position information in the corresponding scene image, sends the superposition position information to the augmented reality device, and the marker content and the like are superimposed and presented in the scene image presented by the augmented reality device.
In some embodiments, the method further includes step S205 (not shown). In step S205, an electronic map of the scene where the target object is located and the map position information of the target object in the electronic map are acquired; and the electronic map is presented and the marker content is superimposed and presented in the electronic map based on the map position information. For example, the augmented reality device end may call the electronic map of the scene where the target object is located, e.g., the augmented reality device determines the electronic map near the geographic position information from a local database or from the network device end according to the geographic position information of the target object, and presents the electronic map. The augmented reality device may further acquire the map position information of the target object in the electronic map, for example, by performing projection conversion at the local end based on the geographic position information to determine the map position information in the corresponding electronic map, or another device end (such as the network device, the command device, or the unmanned aerial vehicle device) performs projection conversion based on the geographic position information to determine the map position information in the corresponding electronic map and sends the map position information to the augmented reality device. The augmented reality device can display the electronic map through the corresponding display device and present the marker content of the marker information in the area corresponding to the map position information in the electronic map, so that the marker information of the target object is superimposed and presented in the electronic map.
The foregoing mainly describes embodiments of the method of the present application for presenting the marker information of the target object. The present application further provides specific devices capable of implementing the foregoing embodiments, which are described below with reference to Fig. 3 and Fig. 4.
Fig. 3 shows a command device 100 for presenting marker information of a target object according to an aspect of the application, wherein the device comprises a one-one module 101 and a one-two module 102. The one-one module 101 is configured to acquire a scene image shot by the unmanned aerial vehicle device; the one-two module 102 is configured to obtain a user operation of a command user of the command device with respect to a target object in the scene image, and generate, based on the user operation, marker information about the target object, where the marker information includes corresponding marker content and image position information of the marker content in the scene image, the image position information is used to determine geographic position information of the target object and to superimpose and present the marker content in the real scene of the augmented reality device of a duty user, and the augmented reality device and the command device are in a collaborative execution state of the same collaborative task.
In some embodiments, the geographic location information is further used to determine real-time image location information of the target object in a real-time scene image captured by the drone device, and to superimpose and present the marker content in the real-time scene image presented by the augmented reality device and/or drone device.
Here, the specific embodiments of the one-one module 101 and the one-two module 102 shown in Fig. 3 are the same as or similar to the embodiments of step S101 and step S102 shown in Fig. 1, and are therefore not described in detail and are incorporated herein by reference.
In some embodiments, the apparatus further comprises a one-three module (not shown) for determining the geographic position information of the target object based on the image position information and the camera pose information of the scene image.
In some embodiments, the apparatus further comprises a one-four module (not shown) for presenting an electronic map of the scene in which the target object is located; and determining the map position information of the target object in the electronic map according to the geographic position information of the target object, and presenting the marker content in the electronic map based on the map position information.
In some embodiments, the geographic location information is further used to superimpose and present the marker content in an electronic map presented by the augmented reality device and/or the drone device with respect to a scene in which the target object is located.
In some embodiments, the device further includes a one-five module (not shown) configured to acquire and present an electronic map, and determine operation marker information of an operation object based on a user operation of the command user on the operation object in the electronic map, where the operation marker information includes corresponding operation marker content and operation map position information of the operation marker content in the electronic map, and the operation map position information is used to determine operation geographic position information of the operation object and to superimpose and present the operation marker content in the live view of the augmented reality device and/or a scene image captured by the unmanned aerial vehicle device. In some embodiments, the apparatus further comprises a one-six module (not shown) for determining the operation geographic position information of the operation object based on the operation map position information.
In some embodiments, the operation geographic location information is further used for overlaying and presenting the operation marker content in an electronic map presented by the augmented reality device and/or the unmanned aerial vehicle device, wherein the electronic map is related to a scene in which the operation object is located.
In some embodiments, the device further comprises a one-seven module (not shown) for obtaining first map position information of the augmented reality device and/or second map position information of the unmanned aerial vehicle device, and identifying, in the electronic map, the augmented reality device based on the first map position information and/or the unmanned aerial vehicle device based on the second map position information.
In some embodiments, the device further includes a one-eight module (not shown) for acquiring and presenting an electronic map of the scene in which the target object is located, where the electronic map includes device identification information of a plurality of candidate unmanned aerial vehicle devices; the one-one module 101 is configured to acquire, based on a call operation of the command user with respect to one piece of device identification information of the candidate unmanned aerial vehicle devices, a scene image captured by the unmanned aerial vehicle device corresponding to that device identification information, where the unmanned aerial vehicle device is in the collaborative execution state of the collaborative task.
In some embodiments, the device further includes a one-nine module (not shown) configured to obtain a task creation operation of the command user, where the task creation operation includes a selection operation on device identification information of the unmanned aerial vehicle device and/or device identification information of the augmented reality device, and the task creation operation is used to establish a collaborative task involving the command device and the unmanned aerial vehicle device and/or the augmented reality device.
In some implementations, the collaborative task includes a plurality of subtasks, the augmented reality device is one of the execution devices of a target subtask, and the target subtask is one of the plurality of subtasks. In some embodiments, the device further comprises a one-ten module (not shown) for sending the subtask execution instruction about the target subtask to all execution devices of the target subtask, so as to present the execution instruction through all execution devices of the target subtask.
Here, the embodiments corresponding to the one-three module through the one-ten module are the same as or similar to the embodiments of step S103 through step S110, and are therefore not described in detail and are incorporated herein by reference.
Fig. 4 illustrates an augmented reality device for presenting marker information of a target object according to an aspect of the application. The device comprises a two-one module 201 for obtaining first pose information of the augmented reality device being used by a duty user, where the first pose information comprises first position information and first attitude information of the augmented reality device, the first pose information is used for determining, in combination with the geographic position information of the corresponding target object, the superposition position information of the target object in the real scene of the augmented reality device, and the marker content related to the target object is superimposed and presented in the real scene based on the superposition position information.
Here, the specific implementation of the two-one module 201 is the same as or similar to the embodiment of step S201 and is therefore not described in detail and is incorporated herein by reference.
In some embodiments, the device further comprises a two-two module (not shown) for sending the first pose information to the corresponding network device, where the augmented reality device and the command device are in the collaborative execution state of the same collaborative task; and for receiving the marker content of the target object to be superimposed in the live view of the augmented reality device and the superposition position information of the marker content, returned by the network device, where the superposition position information is determined by the first pose information and the geographic position information of the target object, the geographic position information is determined by the image position information of the target object in the scene image shot by the corresponding unmanned aerial vehicle device, the marker content and the image position information are determined by the user operation at the corresponding command device, and the command device, the unmanned aerial vehicle device and the augmented reality device are in the collaborative execution state of the same collaborative task.
In some implementations, the collaborative task includes a plurality of subtasks, the augmented reality device is one of the execution devices of a target subtask, and the target subtask is one of the plurality of subtasks; the device further comprises a two-three module (not shown) for receiving and presenting the subtask execution instruction about the target subtask sent by the command device to the augmented reality device, where the subtask execution instruction is presented to all execution devices of the target subtask.
In some embodiments, the device further includes a two-four module (not shown) for acquiring the scene image about the target object captured by the corresponding unmanned aerial vehicle device and the image position information of the corresponding marker content in the scene image; and for presenting the scene image and superimposing and presenting the marker content on the scene image according to the image position information.
In some embodiments, the apparatus further includes a two-five module (not shown) for acquiring an electronic map of the scene in which the target object is located and the map position information of the target object in the electronic map; and for presenting the electronic map and superimposing and presenting the marker content in the electronic map based on the map position information.
Here, the specific implementations of the two-two module through the two-five module are the same as or similar to the embodiments of step S202 through step S205, and are therefore not described in detail and are incorporated herein by reference.
In addition to the methods and apparatus described in the above embodiments, the present application also provides a computer-readable storage medium storing computer code which, when executed, performs a method as described in any one of the preceding claims.
The application also provides a computer program product which, when executed by a computer device, performs a method as claimed in any preceding claim.
The present application also provides a computer device comprising:
One or more processors;
A memory for storing one or more computer programs;
The one or more computer programs, when executed by the one or more processors, cause the one or more processors to implement the method of any preceding claim.
FIG. 5 illustrates an exemplary system that may be used to implement various embodiments described in the present disclosure.
In some embodiments, as shown in FIG. 5, the system 300 can function as any of the devices of the various described embodiments. In some embodiments, system 300 may include one or more computer-readable media (e.g., system memory or NVM/storage 320) having instructions and one or more processors (e.g., processor(s) 305) coupled with the one or more computer-readable media and configured to execute the instructions to implement the modules to perform the actions described in the present application.
For one embodiment, the system control module 310 may include any suitable interface controller to provide any suitable interface to at least one of the processor(s) 305 and/or any suitable device or component in communication with the system control module 310.
The system control module 310 may include a memory controller module 330 to provide an interface to the system memory 315. Memory controller module 330 may be a hardware module, a software module, and/or a firmware module.
The system memory 315 may be used, for example, to load and store data and/or instructions for the system 300. For one embodiment, system memory 315 may include any suitable volatile memory, such as, for example, a suitable DRAM. In some embodiments, the system memory 315 may comprise a double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, system control module 310 may include one or more input/output (I/O) controllers to provide an interface to NVM/storage 320 and communication interface(s) 325.
For example, NVM/storage 320 may be used to store data and/or instructions. NVM/storage 320 may include any suitable nonvolatile memory (e.g., flash memory) and/or may include any suitable nonvolatile storage device(s) (e.g., one or more Hard Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives).
NVM/storage 320 may include storage resources that are physically part of the device on which system 300 is installed or which may be accessed by the device without being part of the device. For example, NVM/storage 320 may be accessed over a network via communication interface(s) 325.
Communication interface(s) 325 may provide an interface for system 300 to communicate over one or more networks and/or with any other suitable device. The system 300 may wirelessly communicate with one or more components of a wireless network in accordance with any of one or more wireless network standards and/or protocols.
For one embodiment, at least one of the processor(s) 305 may be packaged together with logic of one or more controllers (e.g., memory controller module 330) of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be packaged together with logic of one or more controllers of the system control module 310 to form a System In Package (SiP). For one embodiment, at least one of the processor(s) 305 may be integrated on the same die as logic of one or more controllers of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with logic of one or more controllers of the system control module 310 to form a system on chip (SoC).
In various embodiments, the system 300 may be, but is not limited to being: a server, workstation, desktop computing device, or mobile computing device (e.g., laptop computing device, handheld computing device, tablet, netbook, etc.). In various embodiments, system 300 may have more or fewer components and/or different architectures. For example, in some embodiments, system 300 includes one or more cameras, keyboards, Liquid Crystal Display (LCD) screens (including touch screen displays), non-volatile memory ports, multiple antennas, graphics chips, Application Specific Integrated Circuits (ASICs), and speakers.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, e.g., using Application Specific Integrated Circuits (ASIC), a general purpose computer or any other similar hardware device. In one embodiment, the software program of the present application may be executed by a processor to perform the steps or functions described above. Likewise, the software programs of the present application (including associated data structures) may be stored on a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. In addition, some steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
Furthermore, portions of the present application may be implemented as a computer program product, such as computer program instructions, which when executed by a computer, may invoke or provide methods and/or techniques in accordance with the present application by way of operation of the computer. Those skilled in the art will appreciate that the form of computer program instructions present in a computer readable medium includes, but is not limited to, source files, executable files, installation package files, etc., and accordingly, the manner in which the computer program instructions are executed by a computer includes, but is not limited to: the computer directly executes the instruction, or the computer compiles the instruction and then executes the corresponding compiled program, or the computer reads and executes the instruction, or the computer reads and installs the instruction and then executes the corresponding installed program. Herein, a computer-readable medium may be any available computer-readable storage medium or communication medium that can be accessed by a computer.
Communication media includes media whereby a communication signal containing, for example, computer readable instructions, data structures, program modules, or other data, is transferred from one system to another. Communication media may include conductive transmission media such as electrical cables and wires (e.g., optical fibers, coaxial, etc.) and wireless (non-conductive transmission) media capable of transmitting energy waves, such as acoustic, electromagnetic, RF, microwave, and infrared. Computer readable instructions, data structures, program modules, or other data may be embodied as a modulated data signal, for example, in a wireless medium, such as a carrier wave or similar mechanism, such as that embodied as part of spread spectrum technology. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The modulation may be analog, digital or hybrid modulation techniques.
By way of example, and not limitation, computer-readable storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. For example, computer-readable storage media include, but are not limited to, volatile memory, such as random access memory (RAM, DRAM, SRAM); and non-volatile memory such as flash memory, various read only memory (ROM, PROM, EPROM, EEPROM), magnetic and ferromagnetic/ferroelectric memory (MRAM, feRAM); and magnetic and optical storage devices (hard disk, tape, CD, DVD); or other now known media or later developed computer-readable information/data that can be stored for use by a computer system.
An embodiment according to the application comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to operate a method and/or a solution according to the embodiments of the application as described above.
It will be evident to those skilled in the art that the application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude a plurality. A plurality of units or means recited in the apparatus claims can also be implemented by means of one unit or means in software or hardware. The terms first, second, etc. are used to denote a name, but not any particular order.

Claims (21)

1. A method of presenting tag information of a target object, applied to a command device, wherein the method comprises:
Acquiring a scene image shot by unmanned aerial vehicle equipment;
Acquiring a user operation of a command user of the command equipment on a target object in the scene image, and generating mark information about the target object based on the user operation, wherein the mark information comprises corresponding mark content and image position information of the mark content in the scene image, the image position information is used for determining geographic position information of the target object and superposing and presenting the mark content in a real scene of augmented reality equipment of a duty user, and the augmented reality equipment and the command equipment are in a cooperative execution state of the same cooperative task; wherein screen position information at which the mark content is superposed on a display screen of the augmented reality equipment is determined based on the geographic position information and duty shooting pose information, the duty shooting pose information comprises shooting position information and shooting posture information of a shooting device of the augmented reality equipment, and the screen position information is determined by: taking the position of the augmented reality equipment as the origin of a three-dimensional rectangular coordinate system, and converting the geographic position information into the three-dimensional rectangular coordinate system; acquiring the shooting position information and the shooting posture information of the shooting device of the augmented reality equipment, converting the shooting position information into the three-dimensional rectangular coordinate system, and determining a rotation matrix from the three-dimensional rectangular coordinate system to a camera coordinate system of the augmented reality equipment based on the shooting posture information; and determining the screen position information at which the mark content is superposed on the display screen of the augmented reality equipment based on the three-dimensional rectangular coordinates of the mark information, the three-dimensional rectangular coordinates corresponding to the shooting position information, the rotation matrix and the camera intrinsic parameters of the augmented reality equipment;
Wherein the method further comprises:
Acquiring and presenting an electronic map, and determining operation mark information of an operation object based on a user operation of the command user on the operation object in the electronic map, wherein the operation mark information comprises corresponding operation mark content and operation map position information of the operation mark content in the electronic map, the operation map position information is used for determining operation geographic position information of the operation object and superposing and presenting the operation mark content in a live view of the augmented reality equipment and/or a scene image shot by the unmanned aerial vehicle equipment, the operation object comprises an area or position mark corresponding to the user operation in the electronic map, and the operation geographic position information of the operation object is determined from the operation map position information based on the inverse of the projection transformation.
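Editor's note: the screen-position computation recited in claim 1 corresponds to a standard camera-projection pipeline. The following is a minimal sketch, assuming WGS-84 geographic coordinates, a local East-North-Up (ENU) rectangular frame whose origin is the augmented reality device, a Z-Y-X attitude convention and a pinhole camera model; none of these conventions, nor the function names, are prescribed by the claims.

import numpy as np

EARTH_RADIUS = 6378137.0  # metres (WGS-84 equatorial radius)

def geodetic_to_enu(lat, lon, alt, origin_lat, origin_lon, origin_alt):
    # Approximate conversion of a geographic position into the three-dimensional
    # rectangular (ENU) coordinate system whose origin is the AR device position.
    east = np.radians(lon - origin_lon) * EARTH_RADIUS * np.cos(np.radians(origin_lat))
    north = np.radians(lat - origin_lat) * EARTH_RADIUS
    up = alt - origin_alt
    return np.array([east, north, up])

def enu_to_camera_rotation(yaw, pitch, roll):
    # Rotation matrix from the ENU frame to the camera frame, built from the
    # shooting posture information (angles in radians, Z-Y-X order assumed).
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return (Rz @ Ry @ Rx).T  # transpose gives the world-to-camera rotation

def project_to_screen(p_mark_enu, p_cam_enu, R_enu_to_cam, K):
    # Project the marker's rectangular coordinates onto the display screen using
    # the rotation matrix and the camera intrinsic matrix K.
    p_cam = R_enu_to_cam @ (p_mark_enu - p_cam_enu)
    if p_cam[2] <= 0:           # target is behind the camera; nothing to overlay
        return None
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]     # screen position (u, v) in pixels

In this sketch, K is the 3x3 intrinsic matrix of the AR device's camera; the returned (u, v) pair plays the role of the screen position information at which the mark content is overlaid.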
2. The method of claim 1, wherein the geographic location information is further used to determine real-time image location information of the target object in a real-time scene image captured by the drone device, and to superimpose and present the marker content in the real-time scene image presented by the augmented reality device and/or drone device.
3. The method of claim 1, wherein the method further comprises:
and determining the geographic position information of the target object based on the image position information and the shooting pose information of the scene image.
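Editor's note: claim 3 recovers the target's geographic position from the image position information and the shooting pose of the drone camera. One common way to do this, shown below as a minimal sketch under the assumption that the target lies on a ground plane of known altitude, is to back-project the pixel into a world-frame ray and intersect it with that plane; this particular geometry is an illustrative choice, not one mandated by the claim.

import numpy as np

def pixel_to_ground(u, v, K, R_cam_to_world, cam_pos_world, ground_alt=0.0):
    # Back-project pixel (u, v) of the drone image into a world-frame ray and
    # intersect it with the horizontal plane z = ground_alt (local ENU frame).
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    ray_world = R_cam_to_world @ ray_cam
    if abs(ray_world[2]) < 1e-9:
        return None                       # ray is parallel to the ground plane
    t = (ground_alt - cam_pos_world[2]) / ray_world[2]
    if t <= 0:
        return None                       # intersection lies behind the camera
    return cam_pos_world + t * ray_world  # target position (East, North, Up)

The resulting local coordinates can then be converted back to latitude and longitude by inverting the geodetic-to-ENU approximation used in the sketch after claim 1.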
4. The method of claim 1, wherein the method further comprises:
Presenting an electronic map of a scene where the target object is located;
And determining map position information of the target object in the electronic map according to the geographic position information of the target object, and presenting the marking content in the electronic map based on the map position information.
5. The method of claim 1, wherein the geographic location information is further used to overlay the marker content in an electronic map presented by the augmented reality device and/or drone device regarding a scene in which the target object is located.
6. The method of claim 1, wherein the method further comprises:
And determining the operation geographic position information of the operation object based on the operation map position information.
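Editor's note: claims 1, 4 and 6 move between geographic coordinates and electronic-map coordinates through a projection transformation and its inverse. The sketch below uses the spherical Web Mercator projection as an illustrative choice of that transformation; the claims do not fix which map projection is used.

import math

R = 6378137.0  # metres

def lonlat_to_map(lon_deg, lat_deg):
    # Forward projection: geographic position information -> map position information.
    x = math.radians(lon_deg) * R
    y = math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2)) * R
    return x, y

def map_to_lonlat(x, y):
    # Inverse projection: operation map position -> operation geographic position.
    lon = math.degrees(x / R)
    lat = math.degrees(2 * math.atan(math.exp(y / R)) - math.pi / 2)
    return lon, lat

Here map_to_lonlat is the inverse of the projection transformation invoked when a command user marks an area or position directly on the electronic map.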
7. The method of claim 1, wherein the operational geographic location information is further used to superimpose and present the operational marker content in an electronic map presented by the augmented reality device and/or drone device regarding a scene in which the operational object is located.
8. The method of any of claims 4 to 7, wherein the method further comprises:
And acquiring first map position information of the augmented reality equipment and/or second map position information of the unmanned aerial vehicle equipment, and identifying the augmented reality equipment based on the first map position information and/or identifying the unmanned aerial vehicle equipment based on the second map position information in the electronic map.
9. The method of claim 1, wherein the method further comprises:
acquiring and presenting an electronic map of a scene where a target object is located, wherein the electronic map comprises equipment identification information of a plurality of candidate unmanned aerial vehicle equipment;
The acquiring the scene image shot by the unmanned aerial vehicle equipment comprises the following steps:
Acquiring, based on a calling operation of the command user on device identification information of one of the candidate unmanned aerial vehicle devices, a scene image shot by the unmanned aerial vehicle device corresponding to that device identification information, wherein the unmanned aerial vehicle device is in a cooperative execution state of the cooperative task.
10. The method of claim 1, wherein the method further comprises:
And acquiring task creation operation of the command user, wherein the task creation operation comprises a selection operation of equipment identification information about the unmanned aerial vehicle equipment and/or equipment identification information of the augmented reality equipment, and the task creation operation is used for establishing cooperative tasks about the command equipment, the unmanned aerial vehicle equipment and/or the augmented reality equipment.
11. The method of claim 10, wherein the collaborative task comprises a plurality of subtasks, the augmented reality device belonging to one of the execution devices of a target subtask, the target subtask belonging to one of the plurality of subtasks.
12. The method of claim 11, wherein the method further comprises:
and sending the subtask execution instruction about the target subtask to all the execution devices of the target subtask so as to present the execution instruction on all the execution devices of the target subtask.
13. A method of presenting marker information of a target object for use in an augmented reality device, wherein the method comprises:
Acquiring first pose information of augmented reality equipment being used by a duty user, wherein the first pose information comprises first position information and first posture information of the augmented reality equipment, the first pose information is used for determining, in combination with geographic position information of a target object, superposition position information of the target object in the live-action of the augmented reality equipment, and mark content related to the target object is superposed and presented in the live-action based on the superposition position information; wherein the superposition position information is determined by: taking the position of the augmented reality equipment as the origin of a three-dimensional rectangular coordinate system, and converting the geographic position information into the three-dimensional rectangular coordinate system; acquiring the first position information and the first posture information of the augmented reality equipment, converting the first position information into the three-dimensional rectangular coordinate system, and determining a rotation matrix from the three-dimensional rectangular coordinate system to a camera coordinate system of the augmented reality equipment based on the first posture information; and determining the superposition position information of the target object in the live-action of the augmented reality equipment based on the three-dimensional rectangular coordinates of the mark information, the three-dimensional rectangular coordinates corresponding to the first position information, the rotation matrix and the camera intrinsic parameters of the augmented reality equipment;
Wherein a command device acquires and presents an electronic map, and determines operation mark information of an operation object based on a user operation of a command user on the operation object in the electronic map, the operation mark information comprising corresponding operation mark content and operation map position information of the operation mark content in the electronic map, wherein the operation map position information is used for determining operation geographic position information of the operation object and superposing and presenting the operation mark content in a scene image shot by the augmented reality equipment and/or the unmanned aerial vehicle equipment, the operation object comprises an area or position mark corresponding to the user operation in the electronic map, and the operation geographic position information of the operation object is determined from the operation map position information based on the inverse of the projection transformation.
14. The method of claim 13, wherein the method further comprises:
transmitting the first pose information to corresponding network equipment; wherein the augmented reality device and the command device are in a collaborative execution state of the same collaborative task;
And receiving the mark content to be overlapped in the live-action of the augmented reality equipment and the overlapped position information of the mark content, which are returned by the network equipment, of the target object, wherein the overlapped position information is determined by the first pose information and the geographic position information of the target object, the geographic position information is determined by the image position information of the target object in a scene image shot by corresponding unmanned aerial vehicle equipment, the mark content and the image position information are determined by the user operation of corresponding command equipment, and the command equipment, the unmanned aerial vehicle equipment and the augmented reality equipment are in the cooperative execution state of the same cooperative task.
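Editor's note: claim 14 describes an exchange in which the augmented reality device uploads its first pose information to the network device and receives back the mark content together with its superposition position. A minimal sketch of such messages is given below; every field name is an illustrative assumption rather than a format defined by the patent.

from dataclasses import dataclass

@dataclass
class FirstPose:
    # First pose information uploaded by the AR device to the network device.
    lat: float
    lon: float
    alt: float      # first position information
    yaw: float
    pitch: float
    roll: float     # first posture information

@dataclass
class MarkOverlay:
    # Returned by the network device for rendering in the live-action view.
    mark_content: str   # content drawn by the command user, e.g. text or an icon id
    screen_u: float     # superposition position on the display screen
    screen_v: float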
15. The method of claim 14, wherein the collaborative task comprises a plurality of subtasks, the augmented reality device belonging to one of the execution devices of a target subtask, the target subtask belonging to one of the plurality of subtasks; wherein the method further comprises:
And receiving and presenting a subtask execution instruction about the target subtask, which is sent to the augmented reality device by the command device, wherein the subtask execution instruction is presented to all execution devices of the target subtask.
16. The method of claim 13, wherein the method further comprises:
acquiring a scene image, which corresponds to the unmanned aerial vehicle equipment and is shot by the unmanned aerial vehicle equipment, about the target object, and image position information of corresponding mark content in the scene image;
And presenting the scene image and superposing and presenting the marked content on the scene image according to the image position information.
17. The method of claim 13, wherein the method further comprises:
Acquiring an electronic map of a scene where the target object is located and map position information of the target object in the electronic map;
and presenting the electronic map and superposing and presenting the marked content in the electronic map based on the map position information.
18. A command device for presenting tag information of a target object, wherein the device comprises:
the one-to-one module is used for acquiring a scene image shot by the unmanned aerial vehicle equipment;
A second module, configured to acquire a user operation of a command user of the command device on a target object in the scene image and to generate, based on the user operation, mark information about the target object, wherein the mark information comprises corresponding mark content and image position information of the mark content in the scene image, the image position information is used for determining geographic position information of the target object and superposing and presenting the mark content in a real scene of an augmented reality device of a duty user, and the augmented reality device and the command device are in a cooperative execution state of the same cooperative task; wherein screen position information at which the mark content is superposed on a display screen of the augmented reality device is determined based on the geographic position information and duty image capturing pose information, and the screen position information is determined by: taking the position of the augmented reality device as the origin of a three-dimensional rectangular coordinate system, and converting the geographic position information into the three-dimensional rectangular coordinate system; acquiring image capturing position information and image capturing posture information of an image capturing device of the augmented reality device, converting the image capturing position information into the three-dimensional rectangular coordinate system, and determining a rotation matrix from the three-dimensional rectangular coordinate system to a camera coordinate system of the augmented reality device based on the image capturing posture information; and determining the screen position information at which the mark content is superposed on the display screen of the augmented reality device based on the three-dimensional rectangular coordinates of the mark information, the three-dimensional rectangular coordinates corresponding to the image capturing position information, the rotation matrix and the camera intrinsic parameters of the augmented reality device;
Wherein the device is further configured to acquire and present an electronic map, and to determine operation mark information of an operation object based on a user operation of the command user on the operation object in the electronic map, the operation mark information comprising corresponding operation mark content and operation map position information of the operation mark content in the electronic map, wherein the operation map position information is used for determining operation geographic position information of the operation object and superposing and presenting the operation mark content in a live-action of the augmented reality device and/or a scene image shot by the unmanned aerial vehicle device, the operation object comprises an area or position mark corresponding to the user operation in the electronic map, and the operation geographic position information of the operation object is determined from the operation map position information based on the inverse of the projection transformation.
19. An augmented reality device that presents marker information of a target object, wherein the device comprises:
A second module, configured to acquire first pose information of the augmented reality device being used by a duty user, wherein the first pose information comprises first position information and first posture information of the augmented reality device, the first pose information is used for determining, in combination with geographic position information of a corresponding target object, superposition position information of the target object in the live-action of the augmented reality device, and mark content related to the target object is superposed and presented in the live-action based on the superposition position information; wherein the superposition position information is determined by: taking the position of the augmented reality device as the origin of a three-dimensional rectangular coordinate system, and converting the geographic position information into the three-dimensional rectangular coordinate system; acquiring the first position information and the first posture information of the augmented reality device, converting the first position information into the three-dimensional rectangular coordinate system, and determining a rotation matrix from the three-dimensional rectangular coordinate system to a camera coordinate system of the augmented reality device based on the first posture information; and determining the superposition position information of the target object in the live-action of the augmented reality device based on the three-dimensional rectangular coordinates of the mark information, the three-dimensional rectangular coordinates corresponding to the first position information, the rotation matrix and the camera intrinsic parameters of the augmented reality device;
Wherein the command device acquires and presents an electronic map, and determines operation mark information of an operation object based on a user operation of a command user on the operation object in the electronic map, the operation mark information comprising corresponding operation mark content and operation map position information of the operation mark content in the electronic map, wherein the operation map position information is used for determining operation geographic position information of the operation object and superposing and presenting the operation mark content in a scene image shot by the augmented reality device and/or the unmanned aerial vehicle device, the operation object comprises an area or position mark corresponding to the user operation in the electronic map, and the operation geographic position information of the operation object is determined from the operation map position information based on the inverse of the projection transformation.
20. A computer device, wherein the device comprises:
A processor; and
A memory arranged to store computer executable instructions which, when executed, cause the processor to perform the steps of the method of any one of claims 1 to 17.
21. A computer readable storage medium having stored thereon a computer program/instructions which, when executed, cause a system to perform the steps of the method according to any of claims 1 to 17.
CN202210762152.XA 2022-06-30 2022-06-30 Method and equipment for presenting marking information of target object Active CN115439635B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210762152.XA CN115439635B (en) 2022-06-30 2022-06-30 Method and equipment for presenting marking information of target object
PCT/CN2022/110489 WO2024000733A1 (en) 2022-06-30 2022-08-05 Method and device for presenting marker information of target object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210762152.XA CN115439635B (en) 2022-06-30 2022-06-30 Method and equipment for presenting marking information of target object

Publications (2)

Publication Number Publication Date
CN115439635A CN115439635A (en) 2022-12-06
CN115439635B true CN115439635B (en) 2024-04-26

Family

ID=84240888

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210762152.XA Active CN115439635B (en) 2022-06-30 2022-06-30 Method and equipment for presenting marking information of target object

Country Status (2)

Country Link
CN (1) CN115439635B (en)
WO (1) WO2024000733A1 (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108769517A (en) * 2018-05-29 2018-11-06 亮风台(上海)信息科技有限公司 A kind of method and apparatus carrying out remote assistant based on augmented reality
CN109656259A (en) * 2018-11-22 2019-04-19 亮风台(上海)信息科技有限公司 It is a kind of for determining the method and apparatus of the image location information of target object
CN109656319A (en) * 2018-11-22 2019-04-19 亮风台(上海)信息科技有限公司 A kind of action of ground for rendering auxiliary information method and apparatus
CN110365666A (en) * 2019-07-01 2019-10-22 中国电子科技集团公司第十五研究所 Multiterminal fusion collaboration command system of the military field based on augmented reality
CN112017304A (en) * 2020-09-18 2020-12-01 北京百度网讯科技有限公司 Method, apparatus, electronic device, and medium for presenting augmented reality data
CN112639682A (en) * 2018-08-24 2021-04-09 脸谱公司 Multi-device mapping and collaboration in augmented reality environments
WO2021075878A1 (en) * 2019-10-18 2021-04-22 주식회사 도넛 Augmented reality record service provision method and user terminal
CN113741698A (en) * 2021-09-09 2021-12-03 亮风台(上海)信息科技有限公司 Method and equipment for determining and presenting target mark information
CN114116110A (en) * 2021-07-20 2022-03-01 上海诺司纬光电仪器有限公司 Intelligent interface based on augmented reality
CN114332417A (en) * 2021-12-13 2022-04-12 亮风台(上海)信息科技有限公司 Method, device, storage medium and program product for multi-person scene interaction
CN114529690A (en) * 2020-10-30 2022-05-24 北京字跳网络技术有限公司 Augmented reality scene presenting method and device, terminal equipment and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104457704B (en) * 2014-12-05 2016-05-25 北京大学 Based on the unmanned aerial vehicle object locating system and the method that strengthen geography information
US9471059B1 (en) * 2015-02-17 2016-10-18 Amazon Technologies, Inc. Unmanned aerial vehicle assistant
CN109388230A (en) * 2017-08-11 2019-02-26 王占奎 AR fire-fighting emergent commands deduction system platform, AR fire helmet
CN108303994B (en) * 2018-02-12 2020-04-28 华南理工大学 Group control interaction method for unmanned aerial vehicle
CN109561282B (en) * 2018-11-22 2021-08-06 亮风台(上海)信息科技有限公司 Method and equipment for presenting ground action auxiliary information
CN110288207A (en) * 2019-05-25 2019-09-27 亮风台(上海)信息科技有限公司 It is a kind of that the method and apparatus of scene information on duty is provided
CN111625091B (en) * 2020-05-14 2021-07-20 佳都科技集团股份有限公司 Label overlapping method and device based on AR glasses


Also Published As

Publication number Publication date
CN115439635A (en) 2022-12-06
WO2024000733A1 (en) 2024-01-04

Similar Documents

Publication Publication Date Title
EP3885871B1 (en) Surveying and mapping system, surveying and mapping method and apparatus, device and medium
RU2741443C1 (en) Method and device for sampling points selection for surveying and mapping, control terminal and data storage medium
KR101583286B1 (en) Method, system and recording medium for providing augmented reality service and file distribution system
CN109459029B (en) Method and equipment for determining navigation route information of target object
CN109561282B (en) Method and equipment for presenting ground action auxiliary information
US10733777B2 (en) Annotation generation for an image network
CN112469967B (en) Mapping system, mapping method, mapping device, mapping apparatus, and recording medium
KR101600456B1 (en) Method, system and recording medium for providing augmented reality service and file distribution system
CN109656319B (en) Method and equipment for presenting ground action auxiliary information
CN115439528B (en) Method and equipment for acquiring image position information of target object
CN110248157B (en) Method and equipment for scheduling on duty
CN111527375B (en) Planning method and device for surveying and mapping sampling point, control terminal and storage medium
CN109618131B (en) Method and equipment for presenting decision auxiliary information
CN115460539B (en) Method, equipment, medium and program product for acquiring electronic fence
CN115439635B (en) Method and equipment for presenting marking information of target object
EP3885940A1 (en) Job control system, job control method, apparatus, device and medium
CN115760964B (en) Method and equipment for acquiring screen position information of target object
CN118092710A (en) Human-computer interaction method, device and computer equipment for augmented reality of information of power transmission equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 201210 7th Floor, No. 1, Lane 5005, Shenjiang Road, China (Shanghai) Pilot Free Trade Zone, Pudong New Area, Shanghai

Applicant after: HISCENE INFORMATION TECHNOLOGY Co.,Ltd.

Address before: Room 501 / 503-505, 570 shengxia Road, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai, 201203

Applicant before: HISCENE INFORMATION TECHNOLOGY Co.,Ltd.

GR01 Patent grant