WO2024000746A1 - Method, device, medium and program product for obtaining an electronic fence - Google Patents

Method, device, medium and program product for obtaining an electronic fence

Info

Publication number
WO2024000746A1
Authority
WO
WIPO (PCT)
Prior art keywords
fence
electronic fence
duty
information
target
Prior art date
Application number
PCT/CN2022/111993
Other languages
English (en)
French (fr)
Inventor
廖春元
黄伟华
陈嘉伟
孙智沛
Original Assignee
亮风台(上海)信息科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 亮风台(上海)信息科技有限公司
Publication of WO2024000746A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00: Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/02: Services making use of location information
    • H04W 4/021: Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
    • H04W 4/023: Services making use of location information using mutual or relative location information between multiple location based services [LBS] targets or of distance thresholds
    • H04W 4/024: Guidance services
    • H04W 4/025: Services making use of location information using location based information parameters
    • H04W 4/029: Location-based management or tracking services

Definitions

  • This application relates to the field of communications, and in particular to a technology for acquiring a target electronic fence.
  • An electronic fence is a virtual fence, different from a traditional physical fence structure. In practical applications, UAV systems often face complex flight airspace constraints; the airspace constraints here are electronic fences.
  • For a drone, it is necessary to ensure that its flight path stays within the range allowed by the electronic fence at every stage of a mission.
  • One purpose of this application is to provide a method, device, medium and program product for obtaining a target electronic fence.
  • A method for obtaining a target electronic fence, applied to a command device, wherein the target electronic fence includes corresponding target fence attributes and target image position information of the target area in the scene image. The target image position information is used to determine the geographical location information of the target area and to perform collision detection between the on-duty user's augmented reality device and/or drone device and the target electronic fence, where the augmented reality device and the command device are in a collaborative execution state of the same collaborative task.
  • A method for obtaining a target electronic fence, applied to a network device, the method including: obtaining the electronic fence set corresponding to a collaborative task, wherein the electronic fence set includes at least one electronic fence, and each electronic fence includes corresponding fence attributes and geographical location information of the fence area. The duty equipment of the collaborative task includes an on-duty user's augmented reality device and/or a drone device; the geographical location information of the fence area is used for collision detection between the duty equipment and the electronic fence.
  • A command device that presents a target electronic fence, the device including:
  • a module 1-1 for obtaining scene images captured by a drone device;
  • a module 1-2 for obtaining the user operation of the command user of the command device on the target area in the scene image, and generating a target electronic fence for the target area based on the user operation, wherein the target electronic fence includes corresponding target fence attributes and target image position information of the target area in the scene image. The target image position information is used to determine the geographical location information of the target area and to perform collision detection between the on-duty user's augmented reality device and/or drone device and the target electronic fence, where the augmented reality device and the command device are in a collaborative execution state of the same collaborative task.
  • A network device that presents a target electronic fence, the device including:
  • a module 2-1 for obtaining the electronic fence set corresponding to the collaborative task, wherein the electronic fence set includes at least one electronic fence, and each electronic fence includes corresponding fence attributes and geographical location information of the fence area. The duty equipment of the collaborative task includes the on-duty user's augmented reality device and/or drone device; the geographical location information of the fence area is used for collision detection between the duty equipment and the electronic fence.
  • a computer device, wherein the device includes:
  • a memory arranged to store computer-executable instructions which, when executed, cause the processor to perform the steps of any of the methods described above.
  • a computer-readable storage medium on which a computer program/instruction is stored, characterized in that, when executed, the computer program/instruction causes the system to perform the steps of any one of the methods described above.
  • a computer program product including a computer program/instruction, characterized in that when the computer program/instruction is executed by a processor, the steps of any of the above methods are implemented.
  • This application obtains the target image position information of the electronic fence based on the scene image collected by the drone device and the command user's user operation, and quickly and accurately performs collision detection between the target electronic fence and the augmented reality device and/or drone device, achieving multi-terminal linkage, saving computing resources on each terminal, and providing a good business processing environment for every terminal of the system's tasks.
  • Figure 1 shows a flow chart of a method for obtaining an electronic fence according to an embodiment of the present application
  • Figure 2 shows a flow chart of a method for obtaining an electronic fence according to another embodiment of the present application
  • Figure 3 shows an example diagram of a ray method according to an embodiment of the present application
  • Figure 4 shows an example diagram for determining the shortest distance according to an embodiment of the present application:
  • Figure 4a is an example of p in the middle of line segment AB;
  • Figure 4b is an example of p on the right side of line segment AB;
  • Figure 4c is an example of p on the left side of line segment AB;
  • Figure 5 shows a device structure diagram of a command device according to an embodiment of the present application
  • Figure 6 shows a device structure diagram of a network device according to an embodiment of the present application
  • Figure 7 illustrates an example system that may be used to implement various embodiments described in this application.
  • the terminal, the device of the service network and the trusted party all include one or more processors (for example, a central processing unit, CPU), input/output interfaces, network interfaces and memory.
  • Memory may include non-permanent memory in computer-readable media, random access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory. Memory is an example of a computer-readable medium.
  • Computer-readable media include permanent and non-permanent, removable and non-removable media, and can store information by any method or technology.
  • Information may be computer-readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PCM), programmable random access memory (PRAM), static random-access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic tape cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
  • the equipment referred to in this application includes but is not limited to user equipment, network equipment, or equipment composed of user equipment and network equipment integrated through a network.
  • the user equipment includes but is not limited to any mobile electronic product that can perform human-computer interaction with the user (for example, through a touch panel), such as smart phones and tablet computers, and the mobile electronic product can use any operating system, such as the Android operating system or the iOS operating system.
  • the network device includes an electronic device that can automatically perform numerical calculation and information processing according to preset or stored instructions, and its hardware includes but is not limited to microprocessors, application-specific integrated circuits (ASIC), programmable logic devices (PLD), field-programmable gate arrays (FPGA), digital signal processors (DSP), embedded devices, etc.
  • the network equipment includes but is not limited to a computer, a network host, a single network server, a set of multiple network servers, or a cloud composed of multiple servers; here, the cloud is composed of a large number of computers or network servers based on cloud computing, where cloud computing is a type of distributed computing: a virtual supercomputer composed of a group of loosely coupled computers.
  • the network includes but is not limited to the Internet, wide area network, metropolitan area network, local area network, VPN network, wireless self-organizing network (AdHoc network), etc.
  • the device may also be a program running on the user equipment, the network equipment, or a device formed by integrating user equipment with network equipment, or network equipment with a touch terminal, through a network.
  • Figure 1 shows a method for obtaining an electronic fence according to an aspect of the present application, applied to a command device, and the method includes steps S101 and S102.
  • In step S101, obtain the scene image captured by the drone device;
  • In step S102, obtain the user operation of the command user of the command device on the target area in the scene image, and generate a target electronic fence for the target area based on the user operation, wherein the target electronic fence includes corresponding target fence attributes and target image position information of the target area in the scene image. The target image position information is used to determine the geographical location information of the target area and to detect collisions between the on-duty user's augmented reality device and/or drone device and the target electronic fence, where the augmented reality device and the command device are in a collaborative execution state of the same collaborative task.
  • command equipment includes but is not limited to user equipment, network equipment, and equipment formed by integrating user equipment and network equipment through a network.
  • the user equipment includes but is not limited to any mobile electronic product that can interact with the user, such as mobile phones, personal computers, tablets, etc.;
  • the network equipment includes but is not limited to computers, network hosts, single network servers, a collection of multiple network servers, or a cloud of servers.
  • the command equipment has established a communication connection with the corresponding drone equipment and/or augmented reality equipment, etc., and relevant data is transmitted through the communication connection.
  • the command device and the drone device and/or the augmented reality device are in a collaborative execution state of the same collaborative task.
  • The collaborative task refers to a task that multiple devices complete together, based on certain constraints (for example, the spatial distance to the target object, time constraints, the physical conditions of the devices themselves, or the task execution sequence), with the goal of achieving a certain criterion. Such a task can usually be decomposed into multiple subtasks and assigned to each device in the system; each device completes its assigned subtasks, thereby advancing the overall progress of the collaborative task.
  • the corresponding command device acts as the control center of the collaborative task system, regulating the subtasks of each device in the collaborative task.
  • the task participating devices of the collaborative task include a command device, one or more drone devices, and/or one or more augmented reality devices, and the corresponding command device is operated by the command user; the drone device may collect images or fly based on the acquisition instructions/flight path planning instructions sent by the command device.
  • The corresponding UAV pilot can also control the UAV device through its ground control equipment: the ground control equipment receives and presents the control instructions sent by the command device, and the drone is controlled through the pilot's control operations. The augmented reality device is worn and controlled by the corresponding on-duty user, and includes but is not limited to augmented reality glasses, augmented reality helmets, etc.
  • The collaborative task can also involve a network device for three-party data transmission and data processing. For example, the drone device sends the corresponding scene image to the network device, and the command device and/or augmented reality device obtains the scene image through the network device.
  • In step S101, a scene image captured by a drone device is obtained.
  • UAV equipment refers to unmanned aircraft controlled by radio remote-control equipment and self-contained program control devices. It has the advantages of small size, low cost, ease of use, low requirements on the combat environment, and strong battlefield survivability.
  • The drone device can collect scene images of a specific area; for example, it collects scene images of the corresponding area during flight based on a preset flight route or a predetermined target location. When the drone device collects a scene image, the camera pose information of the drone at collection time is also recorded, where the camera pose information includes the camera position information and camera attitude information of the drone's camera device when the scene image was collected.
  • The UAV device or its corresponding ground control equipment can send the scene image to the network device, which then forwards it to the corresponding devices; alternatively, the UAV device or its ground control equipment can send the scene image directly to the corresponding devices through a direct communication connection, where the corresponding devices include the command device and/or the augmented reality device. The UAV device also sends the camera pose information corresponding to the scene image to the corresponding device or network device, for example to the command device and/or augmented reality device or network device.
  • The network device can forward the scene image collected by the drone device in the collaborative-task execution state to the command device and/or the augmented reality device; or the drone device can transmit the collected scene image to the network device in real time, the command device and/or augmented reality device sends the network device an image acquisition request about the drone device containing the drone's device identification information, and the network device, in response to the request, retrieves the scene image collected by that drone based on the identification information and sends it to the command device and/or augmented reality device.
  • After the command device and/or the augmented reality device obtains the corresponding scene image, the scene image is presented on the corresponding display device (e.g., display screen, projector, etc.).
  • In step S102, obtain the user operation of the command user of the command device on the target area in the scene image, and generate a target electronic fence for the target area based on the user operation, wherein the target electronic fence includes corresponding target fence attributes and target image position information of the target area in the scene image. The target image position information is used to determine the geographical location information of the target area and to perform collision detection between the on-duty user's augmented reality device and/or drone device and the target electronic fence, where the augmented reality device and the command device are in a collaborative execution state of the same collaborative task.
  • the command device includes a data collection device for acquiring user operations of the command user, such as a keyboard, mouse, touch screen or trackpad, image capture unit, voice input unit, etc.
  • For example, the command user's user operation can be a gesture movement or a voice instruction, and the target electronic fence can be generated by recognizing the gesture movement or voice instruction; for another example, the command user's user operation can be a direct operation on the scene image with a keyboard, mouse, touch screen, trackpad or similar device, such as the command user using the mouse to box-select or scribble on the target area in the presented scene image, thereby generating the corresponding target electronic fence.
  • While presenting the scene image collected by the drone device, the command device can also present an operation interface for the scene image, and the command user can operate the controls in that interface to obtain the target electronic fence of the target area in the scene image. For example, the command device can collect the user's box-selection operation on the target area in the scene image to generate a target electronic fence for that area. The target electronic fence includes, but is not limited to, the prohibited entry or exit range of a target area in the scene image (such as a specific range/specific location/specific target, etc.) determined by operations such as box selection and scribbling.
  • The corresponding target electronic fence includes the target area determined by the user operation and the image position information of that target area in the scene image. The target area is determined by the information the user operation adds to the scene image, including but not limited to boxes, circles, other polygons, or custom polygons composed of multiple points selected by the user. The corresponding operation interface contains area-selection tools of preset shapes (for example, a box, a circle, or other polygons); based on the user's selection of a preset shape and its placement at a specific position/area of the scene image, the selected target area is determined, for example by selecting a specific position as the center of the circle/polygon and inputting or dragging out a certain distance as the corresponding radius/circumradius of the polygon.
  • the target electronic fence includes target fence attribute information.
  • The target fence attributes include the fence's no-entry or no-exit attribute; the fence attribute information includes but is not limited to whether the electronic fence is a no-entry fence or a no-exit fence. A no-entry fence indicates that the target area enclosed by the fence is an area that must not be entered; a no-exit fence indicates that the target area enclosed by the fence is an area that must not be left.
  • The target fence attributes also include but are not limited to the warning distance information of the fence, for example assigning the same default warning distance to each fence based on default settings, or assigning different warning distances input by the user.
  • The target fence attributes include but are not limited to task identification information used to indicate the collaborative task with which the target electronic fence has an established mapping relationship. Each collaborative task has a corresponding electronic fence set, each electronic fence in the set has a mapping relationship with the collaborative task, and the command device can call one or more electronic fences corresponding to the collaborative task.
  • If the command device is in the execution state of the collaborative task and generates a target electronic fence based on a user operation, the command device establishes a mapping relationship between the target electronic fence and the collaborative task, and adds the target electronic fence to the corresponding electronic fence set.
  • the target fence attributes include but are not limited to identification information, color information, etc. of the target electronic fence.
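  • As a minimal illustration of the fence records and per-task fence sets described above, a hypothetical Python sketch might look like the following; every name and field is illustrative, not taken from the application:

```python
from dataclasses import dataclass

# Hypothetical sketch of the fence records and per-task fence sets described
# above; every name and field here is illustrative, not from the application.
@dataclass
class ElectronicFence:
    fence_id: str              # identification information of the fence
    attribute: str             # "no_entry" or "no_exit" fence attribute
    warning_distance_m: float  # warning distance information
    color: str                 # color information for presentation
    task_id: str               # collaborative task the fence is mapped to
    area_geo: list             # geographical location information of the area

# Each collaborative task has a corresponding electronic fence set.
fence_sets: dict[str, list[ElectronicFence]] = {}

def add_fence(fence: ElectronicFence) -> None:
    """Establish the fence-to-task mapping and add the fence to the set."""
    fence_sets.setdefault(fence.task_id, []).append(fence)
```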
  • the target image position information is used to indicate the coordinate information of the target area of the target electronic fence in the corresponding image/pixel coordinate system of the scene image.
  • the coordinate information may be a set of regional coordinates of the target area, etc.
  • The target image position information is used to determine the geographical location information of the target area and to detect collisions between the on-duty user's augmented reality device and/or drone device and the target electronic fence. For example, any participating device in the collaborative task (command device, drone device, augmented reality device or network device, etc.) can calculate, from the target image position information and the camera pose information at the time the scene image was collected, the geographical location information of the target area in the geographical coordinate system corresponding to the real world.
  • A geographical coordinate system generally refers to a coordinate system consisting of longitude, latitude and altitude, which can mark any location on the earth. Different regions may use different reference ellipsoids; even when the same ellipsoid is used, its orientation and even size may be adjusted so that the ellipsoid better fits the local geoid. This requires different geodetic datum systems for identification, such as the CGCS2000 and WGS84 geographical coordinate systems often used in China. WGS84 is currently the most widely used geographical coordinate system and is also the coordinate system used by the GPS global satellite positioning system.
  • Three-dimensional rectangular coordinate systems include but are not limited to: station center coordinate system, navigation coordinate system, NWU coordinate system, etc.
  • The spatial position information of multiple map points can also be obtained, where the spatial position information includes the spatial coordinate information of the corresponding map point in the three-dimensional rectangular coordinate system. The coordinate transformation from the geographical coordinate system to the three-dimensional rectangular coordinate system is known, so the map points can be converted into the three-dimensional rectangular coordinate system and their spatial position information determined from their geographical coordinate information. Further, the target spatial position information of the target object in the three-dimensional rectangular coordinate system is determined according to the spatial position information of the multiple map points, the image position information of the target object, the camera position information and the camera attitude information; for example, the target spatial position information of the target object is determined from the spatial position information of the known map points, the image position information of the target object, and the camera position information.
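  • To make the geographic-to-Cartesian conversion concrete, here is a minimal Python sketch assuming the WGS84 ellipsoid and an east-north-up (ENU) station-center frame anchored at a reference point; the function names are illustrative and this is not the application's actual implementation:

```python
import numpy as np

A = 6378137.0           # WGS84 semi-major axis (m)
E2 = 6.69437999014e-3   # WGS84 first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Convert geodetic (lat, lon, height) to earth-centered Cartesian."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    n = A / np.sqrt(1 - E2 * np.sin(lat) ** 2)  # prime vertical radius
    x = (n + h) * np.cos(lat) * np.cos(lon)
    y = (n + h) * np.cos(lat) * np.sin(lon)
    z = (n * (1 - E2) + h) * np.sin(lat)
    return np.array([x, y, z])

def ecef_to_enu(p_ecef, ref_lat_deg, ref_lon_deg, ref_h):
    """Express an ECEF point in the ENU frame anchored at the reference."""
    lat, lon = np.radians(ref_lat_deg), np.radians(ref_lon_deg)
    ref = geodetic_to_ecef(ref_lat_deg, ref_lon_deg, ref_h)
    # Rotation from ECEF into the local east-north-up frame.
    r = np.array([
        [-np.sin(lon),               np.cos(lon),                0],
        [-np.sin(lat) * np.cos(lon), -np.sin(lat) * np.sin(lon), np.cos(lat)],
        [ np.cos(lat) * np.cos(lon),  np.cos(lat) * np.sin(lon), np.sin(lat)],
    ])
    return r @ (p_ecef - ref)
```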
  • For example, a spatial ray can be constructed from the image position information and the normal vector of the plane in which the camera film lies (for example, the optical axis through the center of the drone image is perpendicular to the film plane); the corresponding intersection point is then determined from this spatial ray information and the ground information composed of the multiple map points, and the spatial coordinate information of the intersection point is taken as the target spatial position information of the target object. The spatial target ray is described by the optical-center coordinates and the vector information of the ray. After the computer device determines the vector information of the spatial target ray, it can calculate the intersection point of the ray with the ground based on the ray's vector information, the camera position information, and the spatial position information of the multiple map points, and take the spatial coordinate information of the intersection as the target spatial position information of the target object. Finally, the geographical coordinate information of the target object in the geographical coordinate system (such as the geodetic coordinate system) is determined from its target spatial position information; for example, after the computer device determines the target spatial position information, the coordinates in the three-dimensional rectangular coordinate system can be converted into a geographical coordinate system (for example, the WGS84 coordinate system) and stored to facilitate subsequent calculations.
  • Determining the target spatial position information of the target object in the three-dimensional rectangular coordinate system based on the vector information of the target ray, the camera position information, and the spatial position information of multiple map points includes: obtaining, based on the camera position information, the optical-center spatial position information of the camera device's optical center in the three-dimensional rectangular coordinate system; determining, from the target ray's vector information, the spatial position information of the multiple map points and the optical-center spatial position information, the target map point closest to the target ray among the multiple map points; and taking that map point's spatial coordinate information as the target spatial position information of the target object.
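  • The ray construction and ground intersection can be sketched as follows, assuming a pinhole camera model and approximating the map-point ground surface with a single horizontal plane; this is an illustrative simplification, not the application's exact procedure:

```python
import numpy as np

def pixel_ray_direction(u, v, K, R_wc):
    """Direction, in world coordinates, of the ray through pixel (u, v).

    K is the 3x3 camera intrinsic matrix; R_wc rotates camera-frame
    vectors into the world (three-dimensional rectangular) frame.
    """
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # camera-frame direction
    d_world = R_wc @ d_cam
    return d_world / np.linalg.norm(d_world)

def intersect_ground(cam_pos, d_world, ground_z=0.0):
    """Intersect the spatial target ray with a horizontal ground plane.

    cam_pos is the optical-center position in the world frame; the
    application instead intersects with a surface built from map points,
    which this flat-plane version approximates.
    """
    if abs(d_world[2]) < 1e-9:
        return None                      # ray parallel to the ground
    t = (ground_z - cam_pos[2]) / d_world[2]
    if t < 0:
        return None                      # intersection behind the camera
    return cam_pos + t * d_world         # target spatial position
```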
  • the target object is used to indicate a point in the target area.
  • the target object is used to indicate one or more key points of the target area (for example, corner point coordinates or center of a circle, etc.).
  • The geographical coordinate set of the target area can then be obtained, for example by calculating the coordinate expression of the line segment corresponding to each edge from the spatial coordinates of multiple corner points, determining the coordinate set corresponding to each edge, and summarizing the coordinate sets of all edges to obtain the geographical location information of the target area.
  • the determination of the geographical location information can occur on the command equipment side, or on the drone equipment, augmented reality equipment, or network equipment side, etc.
  • For example, the command device calculates the geographical location information of the target area from the target image position information determined by the command user's operation on the target area in the scene image and the camera pose information corresponding to the scene image. For another example, after the command device determines the corresponding target image position information, it sends the information to the drone device/augmented reality device/network device, and that device calculates the geographical location information of the target area from the corresponding scene image and its camera pose information. As another example, the command device sends the target image location information to the corresponding network device and receives back the geographical location information that the network device determined from the target image location information and the camera pose information of the scene image.
  • The collaborative task can also include a network device end for data transmission and data processing.
  • After the command device determines the corresponding target electronic fence based on the command user's user operation, it sends the target electronic fence, or the fence's target image position information, to the corresponding network device; the network device receives it and, based on the target image position information and the camera pose information of the scene image transmitted by the drone device to the network device, calculates the geographical location information of the target area.
  • The network device can return the geographical location information to the command device so that the command device can overlay the electronic fence on the target area based on that information, for example tracking and overlaying the fence in the real-time scene image captured by the drone and obtained by the command device, overlaying the fence of the target area in the real scene corresponding to the augmented reality device as obtained by the command device, or presenting the fence of the target area in the electronic map presented by the command device. Alternatively, the network device can further determine the overlay position information itself and return it to the command device, so that the command device overlays the electronic fence on the target area based on the overlay position information.
  • An electronic map is formed by projecting the geographical coordinate system (using, for example, an equirectangular projection, Mercator projection, Gauss-Krüger projection, or Lambert projection) onto a 2D plane. The electronic map follows the geographical coordinate system protocol and is a mapping of the geographical coordinate system, and the mapping relationship is known: given a point in the geographical coordinate system, its map position in the electronic map can be determined, and given map location information on the electronic map, the corresponding location in the geographical coordinate system can likewise be determined.
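  • A minimal sketch of such a known, invertible mapping, assuming a simple equirectangular projection (the application may use any of the projections listed above); the radius constant and function names are illustrative:

```python
import math

EARTH_R = 6371000.0  # mean earth radius in metres (illustrative)

def geo_to_map(lat_deg, lon_deg, ref_lat_deg=0.0):
    """Project a geographic point onto the 2D map plane."""
    x = math.radians(lon_deg) * EARTH_R * math.cos(math.radians(ref_lat_deg))
    y = math.radians(lat_deg) * EARTH_R
    return x, y

def map_to_geo(x, y, ref_lat_deg=0.0):
    """Invert the projection: map-plane position back to geographic."""
    lon = math.degrees(x / (EARTH_R * math.cos(math.radians(ref_lat_deg))))
    lat = math.degrees(y / EARTH_R)
    return lat, lon
```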
  • The collision detection refers to determining, based on the duty location information of the collaborative task's duty equipment and the geographical location information of the target electronic fence, whether the duty equipment is inside/outside the fence of the target electronic fence, or within the fence's early-warning range. For example, collision detection can be performed directly on the longitude and latitude in the corresponding geographical location information, to determine whether the duty equipment's longitude and latitude fall within/outside/inside the warning range of the target area. Of course, to facilitate calculation and achieve accurate collision detection, the duty location information and the geographical location information of the target electronic fence can also be converted into the same plane rectangular coordinate system (such as a map coordinate system or any two-dimensional plane rectangular coordinate system).
  • Determining whether the duty equipment is within the internal/external/warning range of the target electronic fence based on the duty two-dimensional position information and the fence's two-dimensional position information includes: performing collision detection on the two sets of two-dimensional position information to determine whether the duty two-dimensional position information satisfies the fence attribute information of the target electronic fence; if it does not satisfy the fence attribute information, generating a fence alarm event for the target electronic fence; if it does satisfy the fence attribute information, determining the corresponding distance difference from the duty two-dimensional position information and the fence's two-dimensional position information, and, if the distance difference is less than or equal to the warning distance threshold, generating a fence warning event for the target electronic fence. A sketch of this flow follows.
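  • A minimal Python sketch of this alarm/warning flow, assuming hypothetical helpers point_in_polygon and distance_to_fence (sketched in the examples that follow) and a string-valued fence attribute; names and return values are illustrative:

```python
def check_fence(duty_xy, fence_polygon, attribute, warning_distance):
    """Collision check in a shared plane rectangular coordinate system.

    attribute is "no_entry" or "no_exit"; point_in_polygon and
    distance_to_fence are sketched in the following examples.
    """
    inside = point_in_polygon(duty_xy, fence_polygon)
    violated = inside if attribute == "no_entry" else not inside
    if violated:
        return "fence_alarm"            # fence attribute not satisfied
    # Attribute satisfied: compare the distance difference to the threshold.
    if distance_to_fence(duty_xy, fence_polygon) <= warning_distance:
        return "fence_warning"          # within the early-warning range
    return "ok"
```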
  • When the fence area is circular, the collision detection includes: calculating, from the duty two-dimensional position information and the two-dimensional position information of the center of the fence area, the distance from the real-time duty position to the center of the circle; determining the radius of the circle from the two-dimensional position information of the center and of any point on the circle; and determining, from the distance, the radius and the fence attribute information of the target electronic fence, whether the duty two-dimensional position information satisfies the fence attribute information of the target electronic fence, as sketched below.
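  • A sketch of the circular-fence check under the same illustrative assumptions; math.dist computes the Euclidean distance between two points:

```python
import math

def check_circular_fence(duty_xy, center_xy, point_on_circle, attribute):
    """Circular-fence variant of the check described above.

    Returns True when the duty position violates the fence attribute.
    """
    dist = math.dist(duty_xy, center_xy)            # duty position to centre
    radius = math.dist(center_xy, point_on_circle)  # centre to any rim point
    inside = dist <= radius
    # no_entry is violated inside the circle; no_exit outside it.
    return inside if attribute == "no_entry" else not inside
```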
  • When the fence area is polygonal, the collision detection includes: determining the corresponding duty ray information from the duty two-dimensional position information using the ray method, and determining, from the duty ray information, the number of intersection points between the duty ray and the fence area of the target electronic fence; and determining the area relationship between the duty two-dimensional position information and the fence area of the target electronic fence from the number of intersection points. The inside-area relationship indicates that the duty two-dimensional position information is within the fence area of the target electronic fence. The ray method is sketched below.
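  • A minimal sketch of the ray method (the even-odd rule): a horizontal ray is cast from the duty position and the crossings with the polygon's edges are counted; an odd count means the position lies inside the fence area. Function and variable names are illustrative:

```python
def point_in_polygon(p, polygon):
    """Ray method: cast a horizontal ray from p and count edge crossings.

    polygon is a list of (x, y) vertices; an odd number of intersections
    means p lies inside the fence area.
    """
    x, y = p
    inside = False
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        # Does this edge straddle the horizontal line through p,
        # with the crossing point to the right of p?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside
```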
  • When the shape of the fence area of the target electronic fence is a polygon, determining the corresponding distance difference based on the duty two-dimensional position information and the fence's two-dimensional position information includes: calculating the distance from the duty two-dimensional position information to each edge of the fence-area polygon, and taking the shortest such distance as the distance difference (see Figure 4 and the sketch below).
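  • A sketch of the shortest-distance computation corresponding to Figures 4a-4c: the projection parameter t distinguishes whether the closest point to p lies within segment AB or at one of its endpoints. Names are illustrative:

```python
import math

def point_segment_distance(p, a, b):
    """Shortest distance from point p to segment AB (Figures 4a-4c).

    t in [0, 1]: foot of the perpendicular falls inside AB (Figure 4a);
    t clamped at 1: p is beyond B; t clamped at 0: p is beyond A
    (Figures 4b and 4c, one endpoint is the closest point).
    """
    ax, ay = a
    bx, by = b
    px, py = p
    abx, aby = bx - ax, by - ay
    ab2 = abx * abx + aby * aby
    if ab2 == 0:
        return math.dist(p, a)                   # degenerate segment
    t = ((px - ax) * abx + (py - ay) * aby) / ab2
    t = max(0.0, min(1.0, t))                    # clamp onto the segment
    return math.dist(p, (ax + t * abx, ay + t * aby))

def distance_to_fence(p, polygon):
    """Minimum distance from p to any edge of the polygonal fence area."""
    n = len(polygon)
    return min(point_segment_distance(p, polygon[i], polygon[(i + 1) % n])
               for i in range(n))
```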
  • Duty equipment includes the participating equipment of the collaborative task other than the network device and the command device, such as augmented reality equipment and/or drone equipment.
  • The collision detection process can occur on the network device side, with the corresponding results returned to the other device sides, or it can occur locally on the command equipment, drone equipment, augmented reality equipment, etc.
  • In addition to collision calculation, the geographical location information is also used to overlay the target electronic fence in the real scene on the augmented reality device side and/or in the scene image of the drone device. In some embodiments, the geographical location information is also used to determine the superimposed position information of the target electronic fence in the real scene of the on-duty user's augmented reality device, so as to overlay the target area in that real scene. In some embodiments, after the geographical location information is determined, the determining device (such as a command device, a drone device, or a network device) can send it directly to the on-duty user's augmented reality device, or forward it to the augmented reality device via a network device.
  • The local end of the augmented reality device then calculates the overlay position information with which the geographical location information is displayed in the device's current live scene, so that the target electronic fence is superimposed on that live scene. For example, the command device/UAV device/network device obtains the corresponding geographical location information and sends it to the augmented reality device, and the augmented reality device determines, from the received geographical location information and the current on-duty camera pose information, the screen position information at which the target area is superimposed on the display screen.
  • the on-duty camera pose information includes the camera position information and camera posture information of the camera device of the augmented reality device.
  • the camera position information is used to indicate the current geographical location of the on-duty user.
  • In some cases, the augmented reality device holds the geographical location information and sends it to other devices, or sends it to the network device, which distributes it to the other equipment.
  • Alternatively, the geographical location information is not sent to the on-duty user's augmented reality device; instead, another device directly calculates the overlay position information with which the geographical location information is to be superimposed on the augmented reality device's current live scene, and sends that overlay position information to the augmented reality device.
  • After any device in the collaborative task obtains the geographical location information, it can calculate, from the geographical location information and the on-duty camera pose information of the augmented reality device's camera, how to superimpose and display the geographical location information on the augmented reality device's current live scene.
  • The overlay position information is used to indicate the display position information of the target area of the target electronic fence in the display screen of the augmented reality device, such as coordinate points or sets in the screen/image/pixel coordinate system of the display screen.
  • In some embodiments, a certain device (such as the network device/augmented reality device/drone device/command device) locally determines, from the geographical location information, the corresponding superimposed position information in the real scene of the augmented reality device, the real-time scene image position information in the real-time scene image, or the map position information in the electronic map, so that the target area is superimposed on the real scene, on the displayed real-time scene image, or on the displayed electronic map corresponding to the augmented reality device, the drone device and/or the command device. In other embodiments, a certain device (such as the network device/augmented reality device/drone device/command device) can further determine, from the geographical location information, the superimposed position information in the real scene of the augmented reality device, the real-time scene image position information in the real-time scene image, or the map location information in the electronic map, and send it to the other devices, so that the target area is overlaid in the real scene, the real-time scene image, or the electronic map displayed by those devices.
  • The geographical location information is also used to determine the real-time scene image position information of the target area in the real-time scene image captured by the drone device, so that the target area is superimposed on the real-time scene image presented by the augmented reality device and/or the drone device.
  • The corresponding geographical location information can be calculated from the target image position information of the target area and then stored in a storage database (for example, local storage on the command equipment/augmented reality equipment/drone equipment, or a corresponding network storage database on the network device side), so that when the target electronic fence is called, its geographical location information is called with it and can be converted into other location information (for example, the real-time scene image position information in the drone device's real-time scene image, or the real-time superimposed position information in the real scene collected in real time by the augmented reality device).
  • The UAV device can send the corresponding real-time scene image directly to the command device/augmented reality device through the communication connection, or send it via a network device, and the corresponding augmented reality device can present the real-time scene image on its display screen, for example in a video see-through manner, or within a certain screen area of the display.
  • The drone device obtains the real-time flight camera pose information corresponding to the real-time scene image. The corresponding augmented reality device/command device can obtain this real-time flight camera pose information directly through its communication connection with the UAV device, or forwarded by the network device, and, combined with the already-calculated geographical location information, can locally calculate the superimposed position information of the target electronic fence's target area in the real-time scene image.
  • The drone device can likewise calculate, on its local side, the superimposed position information of the target electronic fence's target area in the real-time scene image from the real-time flight camera pose information and the calculated geographical location information, and the target electronic fence is then tracked and superimposed in the real-time scene image presented by the augmented reality device/command device/drone device, etc.
  • For example, when the drone is at a certain position (such as the take-off position), that position is set as the origin of a three-dimensional rectangular coordinate system (such as a station center coordinate system or navigation coordinate system); the geographical location information corresponding to the target electronic fence is converted into this coordinate system; the geographical location and attitude information of the drone's real-time flight are obtained, the drone's geographical location is converted into the same coordinate system, and the rotation matrix from the three-dimensional rectangular coordinate system to the drone camera coordinate system is determined from the drone's attitude information; then, from the fence's three-dimensional rectangular coordinates, the drone's three-dimensional rectangular coordinates, the rotation matrix and the drone's camera intrinsic parameters, the real-time scene image position information of the target area in the real-time scene image collected by the drone is determined and presented. A sketch of the projection step follows.
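  • A minimal sketch of the final projection step, assuming the fence vertices and drone position have already been converted into the shared three-dimensional rectangular frame (for example with the ENU helpers sketched earlier); the rotation matrix and intrinsic matrix K are given inputs, and all names are illustrative:

```python
import numpy as np

def project_fence_point(p_local, drone_pos_local, R_cam_from_local, K):
    """Project one fence vertex into the drone's real-time scene image.

    p_local and drone_pos_local are three-dimensional rectangular
    coordinates (e.g. an ENU frame anchored at the take-off position);
    R_cam_from_local is the rotation from that frame into the drone
    camera frame derived from the drone's attitude; K is the 3x3 camera
    intrinsic matrix.
    """
    p_cam = R_cam_from_local @ (p_local - drone_pos_local)
    if p_cam[2] <= 0:
        return None                   # behind the camera, not visible
    uvw = K @ p_cam
    return uvw[0] / uvw[2], uvw[1] / uvw[2]  # pixel position in the image
```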
  • Alternatively, a certain device (for example, the command device/augmented reality device/drone device/network device) obtains the real-time flight camera pose information of the real-time scene image and, combined with the geographical location information already calculated for the target electronic fence, calculates the fence's overlay position information in the real-time scene image, then sends that overlay position information to the other devices so that the target electronic fence is tracked and overlaid in the real-time scene images they present.
  • In some embodiments, the method further includes step S103 (not shown): in step S103, the geographical location information of the target area is determined based on the target image position information and the camera pose information of the scene image.
  • After the command device determines the corresponding target electronic fence based on the command user's user operation, it calculates the geographical location information of the target area from the target image position information of the fence's target area and the camera pose information corresponding to the scene image transmitted by the drone device, and then either sends the geographical location information directly to the other executing devices of the collaborative task, such as the augmented reality device and the drone device, or sends it to the network device, which forwards it to the other executing devices.
  • The command device can send the geographical location information to the other executing devices of the collaborative task so that they can further determine overlay position information from it and overlay the target electronic fence on the target area: for example, the target electronic fence is tracked and superimposed on the real-time scene image captured by the drone as obtained by the augmented reality device, superimposed on the real scene corresponding to the augmented reality device, or presented in the electronic map of the target area shown on the augmented reality device.
  • The command device can also determine the superimposed position information itself and return it to the other executing devices, so that they overlay the target electronic fence based on that information.
  • In some embodiments, the method further includes step S104 (not shown): in step S104, an electronic map corresponding to the collaborative task is presented; the map location information of the target electronic fence is determined according to its geographical location information, and the target electronic fence is presented in the electronic map based on the map location information.
  • The command device can use the task identification information of the collaborative task or the location of the target area to call, from the local end or a network device, the electronic map of the scene where the collaborative task is located. For example, based on the geographical location information of the target area, the command device determines, locally or via the network device, the electronic map near that geographical location and presents it; or the command device/network device stores the task area corresponding to each task and establishes a mapping relationship between each task area and the corresponding task identification information, so the command device can call the corresponding electronic map locally or from the network device based on the task identification information.
  • The command device can also obtain the map location information of the target electronic fence in the electronic map, for example by performing projection conversion locally based on the geographical location information to determine the map location information, or by receiving the map location information returned by other devices (such as network equipment, drone devices, or augmented reality devices). The command device can then present the electronic map through the corresponding display device and present the target electronic fence in the area corresponding to the map location information, thereby overlaying the target electronic fence on the electronic map.
  • The geographical location information of the target electronic fence is also used to overlay the target electronic fence on the electronic map, presented by the augmented reality device and/or the drone device, of the scene where the target area is located. The geographical location information may be calculated on the command device end/augmented reality device end/drone device end, or on the network device end.
  • The corresponding command equipment, drone equipment or augmented reality equipment can present an electronic map of the target area's scene through their respective display devices and obtain the map location information of the target area from the geographical location information, thereby overlaying the target electronic fence on the respective electronic maps; this realizes synchronous presentation, in the electronic map, of the target electronic fence added to the target area in the scene image captured by the drone device. The map location information can be determined by projection conversion from the target area's geographical location information on each device's local side, calculated by the network device and returned to each device, or calculated by one device and then sent to the others.
  • In some embodiments, the method further includes step S105 (not shown): in step S105, an electronic map corresponding to the collaborative task is obtained and presented, and an operation electronic fence for an operation area is determined based on the command user's user operation on that area in the electronic map, wherein the operation electronic fence includes the corresponding operation fence attributes and the operation map location information of the operation area in the electronic map. The operation map location information is used to determine the operation geographical location information of the operation area and to detect collisions between the on-duty user's augmented reality device and/or drone device and the operation electronic fence.
  • For example, the command user's user operation can be a gesture movement or voice instruction, and the operation electronic fence can be generated by recognizing it; for another example, the user operation can be a direct operation on the electronic map with a keyboard, mouse, touch screen, trackpad or similar device, such as the command user using the mouse to select a specific area/specific location/specific target on the presented electronic map to generate the corresponding electronic fence.
  • For example, the command device can call up an electronic map of the scene from its local side or from the network device and, while presenting the electronic map, present an operation interface for it.
  • The command user can mark specific areas, locations, or targets in the electronic map through the operation interface, for example by box-selecting part of the area in the electronic map.
  • The command device can then determine the corresponding area as the operation area based on the command user's user operation and generate an operation electronic fence for the operation area.
  • The operation electronic fence includes the corresponding operation fence attributes and the operation map location information of the operation area in the electronic map.
  • The operation map location information is independent of the map location information of the target area; the two may refer to the same location or to different locations.
  • The specific implementations of the fence attribute information, the geographical-location calculation, and the corresponding overlay presentation of the operation electronic fence are the same as or similar to those of the target area's fence attribute information, geographical-location calculation, and overlay presentation, and are not repeated here.
  • In some embodiments, the command device determines the operation geographical location information of the operation area based on the operation map location information.
  • The operation geographical location information is also used to overlay the operation area on the electronic map of the scene presented by the augmented reality device and/or the drone device.
  • The operation geographical location information is also used to overlay the operation area in the real scene of the augmented reality device and/or in the scene image captured by the drone device.
  • For example, the command device determines the operation geographical location information of the operation area and sends it to the other executing devices of the collaborative task (such as the augmented reality device and the drone device).
  • On the drone side, the real-time scene image position of the operation area in the real-time scene image captured by the drone is calculated locally from the operation geographical location information and the drone's real-time flight camera pose information.
  • On the augmented reality side, the overlay position corresponding to the real scene of the augmented reality device is calculated from the operation geographical location information and the camera pose information of the augmented reality device's camera.
  • In some embodiments, the target electronic fence and/or the corresponding operation electronic fence are used to update or establish the electronic fence set of the collaborative task, wherein the electronic fence set includes at least one electronic fence, each electronic fence includes the corresponding fence attributes and the geographical location information of its fence area, the target electronic fence or the operation electronic fence is one of the at least one electronic fence, and the operation electronic fence is determined based on the command user's user operation on the operation area in the electronic map.
  • For example, each collaborative task has corresponding task identification information, which uniquely identifies the task, such as a task number, name, or image.
  • Each collaborative task stores a corresponding electronic fence set in the corresponding database. The electronic fence set is bound to the task identification information of the corresponding collaborative task.
  • The electronic fence set contains one or more electronic fences, for example electronic fences determined based on the command user's user operations.
  • Each electronic fence includes corresponding fence attribute information and geographical location information.
  • the fence attribute information includes the corresponding warning distance threshold.
  • the fence attribute information includes the no-entry or no-exit attributes of the fence.
  • the database that stores the electronic fence collection of collaborative tasks may be set on the command device side, or may be set on the network device side, etc.
  • If the collaborative task has already generated a corresponding electronic fence set (for example, from preset electronic fences) before the target electronic fence and/or operation electronic fence is obtained, the electronic fence set can be updated with the target electronic fence and/or operation electronic fence; if the collaborative task has not yet established a mapped electronic fence set at that point, an electronic fence set can be created for the target electronic fence and/or operation electronic fence and then updated with other electronic fence information determined subsequently.
  • the number of target electronic fences can be one or more, and the number of operating electronic fences can also be one or more, which are not limited here.
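As an illustration of the data relationships described above (an electronic fence set bound to a task identifier, each fence carrying its attributes and location), here is a minimal sketch; all names are hypothetical, not from the source:

```python
from dataclasses import dataclass, field
from enum import Enum

class FenceRule(Enum):
    NO_ENTRY = "no_entry"   # duty devices must stay outside the fence area
    NO_EXIT = "no_exit"     # duty devices must stay inside the fence area

@dataclass
class ElectronicFence:
    fence_id: str
    rule: FenceRule                           # no-entry / no-exit attribute
    warning_distance_m: float                 # early warning distance threshold
    boundary_geo: list[tuple[float, float]]   # (lon, lat) vertices of the fence area

@dataclass
class FenceSet:
    task_id: str                              # bound to the collaborative task's identifier
    fences: dict[str, ElectronicFence] = field(default_factory=dict)

    def upsert(self, fence: ElectronicFence) -> None:
        """Create or update a fence, e.g. a target fence or an operation fence."""
        self.fences[fence.fence_id] = fence
```

Keying fences by identifier makes both the "update" and "establish" paths above a single upsert operation.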
  • the method further includes step S106 (not shown).
  • In step S106, fence early warning prompt information of the duty device about one of the at least one electronic fence is obtained and presented, wherein the fence early warning prompt information is used to indicate that the real-time duty location information of the duty device satisfies the fence attribute information of that electronic fence and that the distance difference from that electronic fence is less than or equal to the early warning distance threshold.
  • the duty equipment includes the duty user's augmented reality equipment and/or the drone equipment.
  • The duty devices are the devices, other than the command device and the network device, that are in a moving state and/or a task execution state, such as the augmented reality device worn by a duty user or the drone device controlled by a drone pilot.
  • In some embodiments, the fence early warning prompt information is generated on the command device side.
  • The command device obtains the real-time duty location information of the duty devices and, based on that information and the geographical location information of each electronic fence in the electronic fence set, calculates whether the real-time duty location information of a duty device satisfies the fence attribute information of one of the at least one electronic fence and whether the distance difference from that electronic fence is less than or equal to the early warning distance threshold; if so, the command device generates and displays the fence early warning prompt information.
  • In other embodiments, the fence early warning prompt information is generated on the network device side.
  • The network device obtains the real-time duty location information of the duty devices and, based on that information and the geographical location information of each electronic fence in the electronic fence set, calculates whether the real-time duty location information of a duty device satisfies the fence attribute information of one of the at least one electronic fence and whether the distance difference from that electronic fence is less than or equal to the early warning distance threshold; if so, the network device generates the fence early warning prompt information and sends it to the command device for presentation and subsequent processing.
  • The fence early warning prompt information is calculated for each duty device in the collaborative task against each electronic fence in the collaborative task's electronic fence set.
  • The fence early warning prompt information also includes the device identification information of the duty device (for example, device number, name, or the corresponding user's number and name) and, further, the fence identification information of the electronic fence (for example, fence number, name, or coordinate position); it can be displayed directly on the display screen of the command device or the duty device.
  • The fence early warning prompt information can also be displayed on the display screen of the command device or the duty device in the form of a timeline; the display position is not limited.
  • In some embodiments, the collaborative task includes multiple subtasks, and each subtask has its corresponding subtask duty devices and a subtask electronic fence set, where the subtask electronic fence set only constrains the no-entry or no-exit ranges of the duty devices of that subtask. In that case, the fence early warning prompt information of a subtask is computed only between each duty device of the subtask and each electronic fence in the subtask electronic fence set.
  • Duty devices that do not belong to the subtask are not checked against the electronic fences in the subtask's electronic fence set, and electronic fences outside the subtask's electronic fence set are not checked against the subtask's duty devices.
  • The real-time duty location information of the duty device satisfying the fence attribute information of one of the at least one electronic fence indicates that the real-time duty location information matches that fence's attribute: if the fence attribute information indicates a no-entry fence, the real-time duty location information is outside the area enclosed by the electronic fence; if it indicates a no-exit fence, the real-time duty location information is inside the enclosed area.
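As a minimal sketch of this decision logic (function and parameter names are illustrative, not from the source): a position that matches the fence attribute can still trigger an early warning if it is close enough to the boundary, while a position that violates the attribute triggers an alarm.

```python
from typing import Optional

def fence_event(is_inside: bool, rule: str, boundary_distance_m: float,
                warning_distance_m: float) -> Optional[str]:
    """Classify one duty position against one fence.

    rule: "no_entry" (must stay outside) or "no_exit" (must stay inside).
    is_inside: whether the 2D duty position lies inside the fence area.
    boundary_distance_m: nearest distance from the duty position to the boundary.
    """
    satisfied = (rule == "no_entry" and not is_inside) or \
                (rule == "no_exit" and is_inside)
    if not satisfied:
        return "alarm"        # attribute violated -> fence alarm prompt
    if boundary_distance_m <= warning_distance_m:
        return "warning"      # within the early warning range -> fence warning prompt
    return None               # no event
```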
  • the method further includes step S107 (not shown).
  • In step S107, fence alarm prompt information of the duty device about one of the at least one electronic fence is obtained and presented, wherein the fence alarm prompt information is used to indicate that the real-time duty location information of the duty device does not satisfy the fence attribute information of that electronic fence.
  • The duty device includes the duty user's augmented reality device and/or drone device.
  • The real-time duty location information of the duty device failing to satisfy the fence attribute information of one of the at least one electronic fence means, for example, that if the fence attribute information indicates a no-entry fence, the real-time duty location information is inside the area enclosed by the electronic fence, and if it indicates a no-exit fence, the real-time duty location information is outside the enclosed area.
  • In such cases, the command device or the network device generates corresponding fence alarm prompt information; the command device can present locally generated fence alarm prompt information or receive and present fence alarm prompt information sent by the network device.
  • Figure 2 shows a method for obtaining an electronic fence according to another aspect of the present application, applied to a network device, and the method includes step S201.
  • In step S201, the electronic fence set corresponding to the collaborative task is obtained, wherein the electronic fence set includes at least one electronic fence, and each electronic fence includes the corresponding fence attributes and the geographical location information of its fence area; the duty devices of the collaborative task include the duty user's augmented reality device and/or drone device, and the geographical location information of the fence area is used for collision detection between the duty devices and the electronic fences.
  • For example, the network device receives the target electronic fence and/or the operation electronic fence determined by the command user's user operation and, based on them, establishes or updates the stored electronic fence set of the collaborative task in the database.
  • The fence attributes of the electronic fences and the calculation and presentation of the geographical location information are implemented in the same or a similar manner as in the foregoing embodiments and are not described again here.
  • the method further includes step S202 (not shown).
  • In step S202, the real-time duty location information of the duty devices is obtained, and the corresponding fence early warning event or fence alarm event is determined based on the real-time duty location information and the geographical location information of the at least one electronic fence.
  • For example, the network device can receive the real-time duty location information uploaded by each duty device over its communication connection with that device, and perform collision detection based on the real-time duty location information and the geographical location information of the at least one electronic fence, thereby determining the corresponding fence early warning event or fence alarm event.
  • For ease of computation, the duty location information and the geographical location information of the electronic fence can be converted into the same plane rectangular coordinate system (such as a map coordinate system or any two-dimensional plane rectangular coordinate system), thereby determining the corresponding duty two-dimensional position information and the two-dimensional position information of the electronic fence; based on these, it is determined whether the duty device is inside the electronic fence, outside it, or within its early warning range.
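A minimal sketch of this conversion, assuming a simple equirectangular (local tangent plane) approximation is acceptable for fence-sized areas; the patent only requires some common plane rectangular coordinate system, so this particular choice is illustrative:

```python
import math

def to_local_xy(lon_deg: float, lat_deg: float,
                origin_lon: float, origin_lat: float) -> tuple[float, float]:
    """Equirectangular approximation: map lon/lat to meters in a plane
    rectangular coordinate system anchored at an arbitrary local origin.
    Elevation is deliberately ignored, matching the 2D detection above."""
    r = 6371000.0  # mean Earth radius in meters
    x = math.radians(lon_deg - origin_lon) * r * math.cos(math.radians(origin_lat))
    y = math.radians(lat_deg - origin_lat) * r
    return x, y

# Duty device and fence vertices projected into the same 2D frame:
origin = (121.470, 31.230)
duty_xy = to_local_xy(121.472, 31.231, *origin)
fence_xy = [to_local_xy(lon, lat, *origin) for lon, lat in
            [(121.471, 31.230), (121.473, 31.230), (121.472, 31.232)]]
```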
  • Here, the duty devices are the devices, other than the command device and the network device, that are in a moving state and/or a task execution state, such as the augmented reality device worn by a duty user or the drone device controlled by a drone pilot.
  • In some embodiments, the fence early warning prompt information is generated on the network device side.
  • The network device obtains the real-time duty location information of the duty devices and, based on that information and the geographical location information of each electronic fence in the electronic fence set, calculates whether the real-time duty location information of a duty device satisfies the fence attribute information of one of the at least one electronic fence and whether the distance difference from that electronic fence is less than or equal to the early warning distance threshold; if so, the network device generates the fence early warning prompt information and sends it to the command device for presentation and subsequent processing.
  • The fence early warning prompt information is calculated for each duty device in the collaborative task against each electronic fence in the collaborative task's electronic fence set.
  • If the conditions are met, fence early warning prompt information about that duty device and that electronic fence is generated.
  • The fence early warning prompt information also includes the device identification information of the duty device (for example, device number, name, or the corresponding user's number and name) and, further, the fence identification information of the electronic fence (for example, fence number, name, or coordinate position).
  • In some embodiments, the collaborative task includes multiple subtasks, and each subtask has its corresponding subtask duty devices and subtask electronic fence set, where the subtask electronic fence set only constrains the duty devices of that subtask; the fence early warning prompt information of a subtask is then computed only between the duty devices of the subtask and the electronic fences in the subtask electronic fence set.
  • In some embodiments, determining the corresponding fence early warning event or fence alarm event based on the real-time duty location and the geographical location information of the at least one electronic fence includes: converting the real-time duty location into a plane rectangular coordinate system to determine the corresponding real-time duty two-dimensional location information; determining the two-dimensional location information of the at least one electronic fence based on its geographical location information; and determining the corresponding fence early warning event or fence alarm event based on the real-time duty two-dimensional location information and the two-dimensional location information of the at least one electronic fence. Collision detection directly on three-dimensional geographical location information requires a large amount of computation and cannot ignore the distance effect caused by elevation; projecting into two dimensions avoids both issues.
  • It should be noted that the real-time duty two-dimensional location information can satisfy the fence attribute information of multiple electronic fences at the same time; for example, it can simultaneously be outside several no-entry fences, or it can be outside an electronic fence whose attribute is no-entry while also being inside an electronic fence whose attribute is no-exit.
  • If there is an electronic fence whose fence attribute information is not satisfied, the network device generates corresponding fence alarm prompt information.
  • The fence alarm prompt information also includes the device identification information of the duty device (for example, device number, name, or the corresponding user's number and name) and the fence identification information of that electronic fence (for example, fence number, name, or coordinate position), and it is used to indicate that the duty device does not satisfy the fence attribute information of that electronic fence.
  • If the network device determines that the real-time duty two-dimensional location information satisfies the fence attribute information of a given electronic fence, it further determines the distance difference between the real-time duty two-dimensional location information and the two-dimensional location information of that fence; specifically, the nearest distance from the real-time duty two-dimensional position to the fence boundary is taken as the distance difference and compared with the preset early warning threshold. If the distance difference is less than or equal to the preset early warning threshold, corresponding fence early warning prompt information is generated.
  • The fence early warning prompt information also includes the device identification information of the duty device (for example, device number, name, or the corresponding user's number and name) and, further, the fence identification information of that electronic fence (for example, fence number, name, or coordinate position); it is used to indicate that the duty device has entered the early warning range of that electronic fence.
  • In some embodiments, the shape of the fence area of a given electronic fence is circular. The collision detection then includes: calculating the distance from the real-time duty two-dimensional position to the circle center based on the two-dimensional position information of the center of the fence area; determining the radius of the circle from the two-dimensional position information of the center and of any point on the circle; and determining, from this distance, the radius, and the fence attribute information of the electronic fence, whether the real-time duty two-dimensional position information satisfies the fence attribute information.
  • Specifically, if the duty two-dimensional location is outside the circle and the fence is a no-entry fence, the fence attribute information is satisfied; if it is outside the circle and the fence is a no-exit fence, the fence attribute information is not satisfied; if it is inside the circle and the fence is a no-exit fence, the fence attribute information is satisfied; and if it is inside the circle and the fence is a no-entry fence, the fence attribute information is not satisfied.
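The circle case above can be sketched as follows (illustrative names; the alarm/warning split follows the attribute semantics described earlier):

```python
import math
from typing import Optional

def circle_fence_event(duty_xy, center_xy, rim_point_xy,
                       rule: str, warning_distance_m: float) -> Optional[str]:
    """Collision check for a circular fence area.

    The radius is recovered from the circle center and any point on the rim;
    rule is "no_entry" or "no_exit"."""
    radius = math.dist(center_xy, rim_point_xy)
    d = math.dist(duty_xy, center_xy)          # duty position to circle center
    inside = d <= radius
    satisfied = (rule == "no_entry" and not inside) or \
                (rule == "no_exit" and inside)
    if not satisfied:
        return "alarm"
    if abs(d - radius) <= warning_distance_m:  # nearest distance to the circle boundary
        return "warning"
    return None
```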
  • The distance difference between the electronic fence and the corresponding duty two-dimensional position information can be obtained from the quantities computed above, for example as the absolute difference between the distance from the duty position to the circle center and the radius.
  • In some embodiments, the shape of the fence area of a given electronic fence is a polygon. The collision detection then includes: based on the real-time duty two-dimensional position information, using the ray method to determine corresponding duty ray information; determining the number of intersections between the duty ray and the fence area of the electronic fence; determining, from the number of intersections, the inside-outside relationship between the real-time duty two-dimensional position and the fence area, where the inside-outside relationship indicates whether the real-time duty two-dimensional position is inside or outside the fence area; and determining, from the inside-outside relationship and the fence attribute information of the electronic fence, whether the real-time duty two-dimensional position information satisfies the fence attribute information.
  • For example, for an electronic fence whose fence area is a polygon, the ray method is used to determine whether the duty two-dimensional position is inside or outside the fence. As shown in Figure 3, a ray is cast from the duty two-dimensional position, and the number of intersections between this ray and all edges of the fence polygon is counted: an odd number of intersections means the duty two-dimensional position is inside the polygon, and an even number means it is outside.
  • Subsequently, based on whether the fence attribute information of the electronic fence indicates a no-entry fence or a no-exit fence, it is determined whether the duty two-dimensional location information satisfies the fence attribute information: outside a no-entry fence or inside a no-exit fence, the attribute is satisfied; outside a no-exit fence or inside a no-entry fence, it is not.
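A minimal sketch of the ray method described above, casting a horizontal ray in the +x direction (the source does not fix the ray direction, so this choice is illustrative):

```python
def point_in_polygon(p, polygon) -> bool:
    """Ray-casting test: cast a horizontal ray from p toward +x and count how
    many polygon edges it crosses; an odd count means p is inside (cf. Figure 3)."""
    x, y = p
    inside = False
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        # Does this edge straddle the horizontal line through p?
        if (y1 > y) != (y2 > y):
            # x coordinate where the edge crosses that horizontal line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside
```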
  • In some embodiments, determining the corresponding distance difference based on the real-time duty two-dimensional position information and the two-dimensional location information of a given electronic fence includes: calculating the distance between the real-time duty two-dimensional position and each edge of the fence-area polygon, thereby obtaining multiple distances; taking the smallest of these distances as the distance difference; and determining, based on whether the distance difference is less than or equal to the early warning distance threshold, whether to generate a fence early warning event for that electronic fence.
  • For example, the network device calculates the distance between the real-time duty two-dimensional position and each edge of the polygon and then uses a sorting algorithm (such as bubble sort, quick sort, insertion sort, or Shell sort) to find the shortest distance; this shortest distance is taken as the distance difference and compared with the early warning distance threshold of the electronic fence. If the distance difference is less than or equal to the early warning distance threshold, the duty device is determined to be within the early warning range of the electronic fence.
  • A vector algorithm is used to calculate the distance from a point to a line segment. Let the target point (such as the duty two-dimensional position) be p and the line segment be AB. As shown in Figure 4, if the projection of p onto the line through A and B falls between A and B (Figure 4a), the distance is the perpendicular distance from p to AB; if the projection falls to the right of the segment (Figure 4b), the distance is the length of pB; and if it falls to the left of the segment (Figure 4c), the distance is the length of pA.
  • In this way, the network device can calculate the distance between the duty device and each edge of the electronic fence, determine the minimum of these distances as the distance difference between the duty device and the electronic fence, and compare the distance difference with the preset early warning threshold to determine whether the duty device is within the early warning range of the electronic fence.
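The vector algorithm and the minimum-over-edges step can be sketched as follows; note that taking the minimum directly makes the sorting step mentioned above unnecessary, which is a simplification, not the patent's stated procedure:

```python
import math

def point_segment_distance(p, a, b) -> float:
    """Vector form of the three cases in Figure 4: clamp the projection of
    p onto line AB to the segment, then measure to the closest point."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    abx, aby = bx - ax, by - ay
    ab_len2 = abx * abx + aby * aby
    if ab_len2 == 0.0:                 # degenerate segment: A == B
        return math.dist(p, a)
    t = ((px - ax) * abx + (py - ay) * aby) / ab_len2
    t = max(0.0, min(1.0, t))          # t < 0 -> case 4c (A); t > 1 -> case 4b (B)
    closest = (ax + t * abx, ay + t * aby)
    return math.dist(p, closest)

def distance_to_polygon_boundary(p, polygon) -> float:
    """Distance difference compared against the early warning threshold:
    the minimum distance from p to any edge of the fence polygon."""
    n = len(polygon)
    return min(point_segment_distance(p, polygon[i], polygon[(i + 1) % n])
               for i in range(n))
```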
  • the method further includes step S203 (not shown).
  • In step S203, prompt information corresponding to the fence early warning event and/or fence alarm event is generated according to a preset time interval, and the prompt information is delivered to the participating devices of the collaborative task.
  • For example, the network device starts a background thread that takes elements from the real-time duty location information queue and performs collision detection calculations based on the real-time duty location information and the geographical location information of the electronic fences.
  • The existing or all electronic fences of the task are loaded from the database into an in-memory electronic fence data structure, and collision detection is performed against this data and the real-time duty location information of the duty devices.
  • In some embodiments, the network device assigns a rate limiter to each duty device.
  • Before a location request is processed, a token must first be obtained from the rate limiter; only requests that obtain a token are released to the collision detection service for processing, and requests that fail to obtain a token are not processed. For example, one token is issued per preset interval, so a location is processed every n seconds.
  • The rate limiter is a flow control service implemented with the token bucket algorithm.
  • The token bucket algorithm adds tokens to the bucket at a fixed rate (for example, once per preset time interval).
  • Here the capacity of the token bucket is 1, and one token is added to the bucket every n seconds.
  • If n = 0, there is no rate limiting: every location point obtains a token and enters the collision calculation service.
  • Assume n = 5: only one token is added to the bucket within each 5-second window. Even if multiple location processing requests arrive within those 5 seconds (a location processing request is generated whenever the position of the duty terminal or the drone changes), only one of them obtains the token and enters the collision calculation service.
  • In other words, the network device performs collision detection on changes in duty location, but the results of the collision detection are presented at preset intervals. With this technical solution, even if multiple alarms or early warnings are generated for a duty location within one preset time interval, only one is sent, which provides better reference value for the participating devices of the collaborative task. In some cases, different time intervals can be set for different duty devices; in other cases, different time intervals can be set for fence early warning events and fence alarm events.
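A minimal sketch of the capacity-1 token bucket described above (illustrative names; a production limiter would likely need locking for concurrent location updates):

```python
import time

class DutyRateLimiter:
    """Token bucket with capacity 1: one token becomes available every
    interval_s seconds, so at most one location request per duty device
    enters the collision detection service per interval. interval_s == 0
    disables rate limiting, as described above."""

    def __init__(self, interval_s: float):
        self.interval_s = interval_s
        self.last_release = float("-inf")

    def try_acquire(self) -> bool:
        if self.interval_s <= 0:
            return True                  # n = 0: no rate limiting
        now = time.monotonic()
        if now - self.last_release >= self.interval_s:
            self.last_release = now      # consume the single token
            return True
        return False                     # request is dropped, not queued

# One limiter per duty device, e.g. keyed by a device identifier:
limiters = {"drone-01": DutyRateLimiter(5.0)}

def on_location_update(device_id, location):
    if limiters[device_id].try_acquire():
        pass  # forward `location` to the collision detection service
```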
  • Figure 5 shows a command device for acquiring an electronic fence according to an aspect of the present application.
  • The device includes a one-one module 101 and a one-two module 102.
  • The one-one module 101 is used to obtain the scene image captured by the drone device.
  • The one-two module 102 is used to obtain the user operation of the command user of the command device on the target area in the scene image and to generate a target electronic fence for the target area based on the user operation, wherein the target electronic fence includes the corresponding target fence attributes and the target image position information of the target area in the scene image; the target image position information is used to determine the geographical location information of the target area and is used for collision detection between the duty user's augmented reality device and/or drone device and the target electronic fence.
  • the augmented reality device and the command device are in a collaborative execution state of the same collaborative task.
  • the fence attribute information also includes the no-entry or no-exit attribute of the fence.
  • the fence attribute information includes but is not limited to the electronic fence being a no-enter fence or the electronic fence being a no-exit fence.
  • In some embodiments, the geographical location information is also used to determine the overlay position information of the target electronic fence in the real scene of the duty user's augmented reality device, so that the target electronic fence is overlaid on the target area in the real scene of the augmented reality device.
  • In some embodiments, the geographical location information is also used to determine the real-time target image position information of the target area in the real-time scene image captured by the drone device, so that the target electronic fence is overlaid on the real-time scene image presented by the augmented reality device and/or the drone device.
  • Here, the specific implementation of the one-one module 101 and the one-two module 102 shown in Figure 5 is the same as or similar to the embodiments of step S101 and step S102 shown in Figure 1, and is therefore not described again but incorporated herein by reference.
  • In some embodiments, the device further includes a module (not shown) for presenting an electronic map corresponding to the collaborative task, determining the map location information of the target electronic fence according to the geographical location information of the target electronic fence, and presenting the target electronic fence in the electronic map based on the map location information.
  • In some embodiments, the target electronic fence and/or the corresponding operation electronic fence are used to update or establish the electronic fence set of the collaborative task, wherein the electronic fence set includes at least one electronic fence, each electronic fence includes the corresponding fence attributes and the geographical location information of its fence area, the target electronic fence or the operation electronic fence is one of the at least one electronic fence, and the operation electronic fence is determined based on the command user's user operation on the operation area in the electronic map.
  • In some embodiments, the device further includes a module (not shown) for obtaining and presenting fence early warning prompt information of the duty device about one of the at least one electronic fence, wherein the fence early warning prompt information is used to indicate that the real-time duty location information of the duty device satisfies the fence attribute information of that electronic fence and that the distance difference from that electronic fence is less than or equal to the early warning distance threshold; the duty device includes the duty user's augmented reality device and/or drone device.
  • In some embodiments, the device further includes a module (not shown) for obtaining and presenting fence alarm prompt information of the duty device about one of the at least one electronic fence, wherein the fence alarm prompt information is used to indicate that the real-time duty location information of the duty device does not satisfy the fence attribute information of that electronic fence; the duty device includes the duty user's augmented reality device and/or drone device.
  • FIG. 6 shows a network device for acquiring an electronic fence according to another aspect of the present application.
  • the device includes a two-one module 201.
  • The two-one module 201 is used to obtain the electronic fence set corresponding to the collaborative task, wherein the electronic fence set includes at least one electronic fence, and each electronic fence includes the corresponding fence attributes and the geographical location information of its fence area; the duty devices of the collaborative task include the duty user's augmented reality device and/or drone device, and the geographical location information of the fence area is used for collision detection between the duty devices and the electronic fences.
  • In some embodiments, the device further includes a two-two module (not shown) for obtaining the real-time duty location information of the duty devices and determining the corresponding fence early warning event or fence alarm event based on the real-time duty location information and the geographical location information of the at least one electronic fence.
  • In some embodiments, determining the corresponding fence early warning event or fence alarm event based on the real-time duty location and the geographical location information of the at least one electronic fence includes: converting the real-time duty location into a plane rectangular coordinate system to determine the corresponding real-time duty two-dimensional location information; determining the two-dimensional location information of the at least one electronic fence based on its geographical location information; and determining the corresponding fence early warning event or fence alarm event based on the real-time duty two-dimensional location information and the two-dimensional location information of the at least one electronic fence.
  • In some embodiments, the shape of the fence area of a given electronic fence is a polygon, and the collision detection includes: based on the real-time duty two-dimensional position information, using the ray method to determine corresponding duty ray information; determining the number of intersections between the duty ray and the fence area of the electronic fence; determining, from the number of intersections, the inside-outside relationship between the real-time duty two-dimensional position and the fence area, where the inside-outside relationship indicates whether the real-time duty two-dimensional position is inside or outside the fence area; and determining, from the inside-outside relationship and the fence attribute information of the electronic fence, whether the real-time duty two-dimensional position information satisfies the fence attribute information.
  • In some embodiments, determining the corresponding distance difference based on the real-time duty two-dimensional position information and the two-dimensional location information of a given electronic fence includes: calculating the distance between the real-time duty two-dimensional position and each edge of the fence-area polygon, thereby obtaining multiple distances; taking the smallest of these distances as the distance difference; and determining, based on whether the distance difference is less than or equal to the early warning distance threshold, whether to generate a fence early warning event for that electronic fence.
  • In some embodiments, the device further includes a two-three module (not shown) for generating prompt information corresponding to fence early warning events and/or fence alarm events according to a preset time interval and delivering the prompt information to the participating devices of the collaborative task.
  • The present application also provides a computer-readable storage medium storing computer code; when the computer code is executed, the method described in the preceding items is performed.
  • The present application also provides a computer program product; when the computer program product is executed by a computer device, the method described in the preceding items is performed.
  • The present application also provides a computer device, including: one or more processors; and a memory for storing one or more computer programs; when the one or more computer programs are executed by the one or more processors, the one or more processors are caused to implement the method described in any one of the preceding items.
  • system 300 can serve as any of the above-mentioned devices in each of the described embodiments.
  • In some embodiments, system 300 may include one or more computer-readable media (e.g., system memory or NVM/storage device 320) having instructions, and one or more processors (e.g., processor(s) 305) coupled to the one or more computer-readable media and configured to execute the instructions to implement the modules and perform the actions described herein.
  • For one embodiment, system control module 310 may include any suitable interface controller to provide any appropriate interface to at least one of the processor(s) 305 and/or to any suitable device or component in communication with system control module 310.
  • System control module 310 may include a memory controller module 330 to provide an interface to system memory 315 .
  • Memory controller module 330 may be a hardware module, a software module, and/or a firmware module.
  • System memory 315 may be used, for example, to load and store data and/or instructions for system 300 .
  • system memory 315 may include any suitable volatile memory, such as suitable DRAM.
  • In one embodiment, system memory 315 may include double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
  • system control module 310 may include one or more input/output (I/O) controllers to provide interfaces to NVM/storage device 320 and communication interface(s) 325 .
  • NVM/storage device 320 may be used to store data and/or instructions.
  • NVM/storage device 320 may include any suitable non-volatile memory (e.g., flash memory) and/or any suitable non-volatile storage device(s) (e.g., one or more hard disk drives (HDDs), one or more compact disc (CD) drives, and/or one or more digital versatile disc (DVD) drives).
  • Communication interface(s) 325 may provide an interface for system 300 to communicate over one or more networks and/or with any other suitable device.
  • System 300 may wirelessly communicate with one or more components of a wireless network in accordance with any of one or more wireless network standards and/or protocols.
  • At least one of the processor(s) 305 may be packaged with the logic of one or more controllers of the system control module 310 (e.g., memory controller module 330). For one embodiment, at least one of the processor(s) 305 may be packaged together with the logic of one or more controllers of the system control module 310 to form a system-in-package (SiP). For one embodiment, at least one of the processor(s) 305 may be integrated on the same die as the logic of one or more controllers of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with the logic of one or more controllers of the system control module 310 to form a system on a chip (SoC).
  • In various embodiments, system 300 may be, but is not limited to, a server, a workstation, a desktop computing device, or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.). In various embodiments, system 300 may have more or fewer components and/or a different architecture. For example, in some embodiments, system 300 includes one or more cameras, keyboards, liquid crystal display (LCD) screens (including touch screen displays), non-volatile memory ports, multiple antennas, graphics chips, application-specific integrated circuits (ASICs), and speakers.
  • the present application may be implemented in software and/or a combination of software and hardware, for example, using an application specific integrated circuit (ASIC), a general purpose computer or any other similar hardware device.
  • the software program of the present application can be executed by a processor to implement the steps or functions described above.
  • the software program of the present application (including related data structures) may be stored in a computer-readable recording medium, such as a RAM memory, a magnetic or optical drive or a floppy disk and similar devices.
  • some steps or functions of the present application may be implemented using hardware, for example, as a circuit that cooperates with a processor to perform each step or function.
  • Part of the present application may be applied as a computer program product, such as computer program instructions; when the instructions are executed by a computer, methods and/or technical solutions according to the present application may be invoked or provided through the operation of the computer.
  • the form in which computer program instructions exist in a computer-readable medium includes but is not limited to source files, executable files, installation package files, etc.
  • The manner in which computer program instructions are executed by a computer includes but is not limited to: the computer directly executes the instructions; the computer compiles the instructions and then executes the corresponding compiled program; the computer reads and executes the instructions; or the computer reads and installs the instructions and then executes the corresponding installed program.
  • the computer-readable medium may be any available computer-readable storage medium or communication medium that can be accessed by the computer.
  • Communication media includes the medium whereby communication signals containing, for example, computer readable instructions, data structures, program modules or other data are transmitted from one system to another system.
  • Communication media may include conducted transmission media, such as cables and wires (e.g., fiber optics, coaxial, etc.), and wireless (unguided transmission) media that can propagate energy waves, such as acoustic, electromagnetic, RF, microwave, and infrared.
  • Computer readable instructions, data structures, program modules, or other data may be embodied, for example, as a modulated data signal in a wireless medium, such as a carrier wave or a similar mechanism such as that embodied as part of spread spectrum technology.
  • modulated data signal refers to a signal in which one or more characteristics are altered or set in a manner that encodes information in the signal. Modulation can be analog, digital or hybrid modulation techniques.
  • Computer-readable storage media may include volatile and nonvolatile, removable and non-removable storage media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data.
  • Removable and non-removable media include, but are not limited to: volatile memory such as random access memory (RAM, DRAM, SRAM); non-volatile memory such as flash memory, various read-only memories (ROM, PROM, EPROM, EEPROM), and magnetic and ferromagnetic/ferroelectric memories (MRAM, FeRAM); and magnetic and optical storage devices (hard disks, tapes, CDs, DVDs), as well as other media now known or later developed that can store computer-readable information/data for use by a computer system.
  • An embodiment according to the present application includes a device, which includes a memory for storing computer program instructions and a processor for executing the program instructions; when the computer program instructions are executed by the processor, the device is triggered to operate based on the aforementioned methods and/or technical solutions according to multiple embodiments of the present application.


Abstract

A method, device, medium, and program product for obtaining a target electronic fence. The method includes: obtaining a scene image captured by a drone device (S101); obtaining a user operation of the command user of the command device (100) on a target area in the scene image, and generating a target electronic fence for the target area based on the user operation, wherein the target electronic fence includes corresponding target fence attributes and target image position information of the target area in the scene image, and the target image position information is used to determine the geographical location information of the target area and for collision detection between the duty user's augmented reality device and/or drone device and the target electronic fence (S102). While achieving multi-terminal coordination, computing resources of each terminal are saved, providing a good service processing environment for each terminal of the system task.

Description

This application is based on and claims priority to CN application No. 202210778277.1, filed on 2022-06-30, the disclosure of which is incorporated herein in its entirety.
Brief Description of Drawings

Other features, objects, and advantages of the present application will become more apparent by reading the following detailed description of non-limiting embodiments made with reference to the drawings:

Figure 1 shows a flowchart of a method for obtaining an electronic fence according to one embodiment of the present application;

Figure 2 shows a flowchart of a method for obtaining an electronic fence according to another embodiment of the present application;

Figure 3 shows an example diagram of the ray method according to one embodiment of the present application;

Figure 4 shows an example diagram of determining the shortest distance according to one embodiment of the present application, where Figure 4a shows p falling between A and B on segment AB, Figure 4b shows p to the right of segment AB, and Figure 4c shows p to the left of segment AB;

Figure 5 shows a device structure diagram of a command device according to one embodiment of the present application;

Figure 6 shows a device structure diagram of a network device according to one embodiment of the present application;

Figure 7 shows an exemplary system that can be used to implement the various embodiments described in the present application.

The same or similar reference numerals in the drawings represent the same or similar components.
具体实施方式
下面结合附图对本申请作进一步详细描述。
在本申请一个典型的配置中,终端、服务网络的设备和可信方均包括一个或多个处理器(例如,中央处理器(Central Processing Unit,CPU))、输入/输出接口、网络接口和内存。
内存可能包括计算机可读介质中的非永久性存储器,随机存取存储器(Random Access Memory,RAM)和/或非易失性内存等形式,如只读存储器(Read Only Memory,ROM)或闪存(Flash Memory)。内存是计算机可读介质的示例。
计算机可读介质包括永久性和非永久性、可移动和非可移动媒体可以由任何方法或技术来实现信息存储。信息可以是计算机可读指令、数据结构、程序的模块或其他数据。计算机的存储介质的例子包括,但不限于相变内存(Phase-Change Memory,PCM)、可编程随机存取存储器(Programmable Random Access Memory,PRAM)、静态随机存取存储器(Static Random-Access Memory,SRAM)、动态随机存取存储器(Dynamic Random Access Memory,DRAM)、其他类型的随机存取存储器(RAM)、只读存储器(ROM)、电可擦除可编程只读存储器(Electrically-Erasable Programmable Read-Only Memory,EEPROM)、快闪记忆体或其他内存技术、只读 光盘只读存储器(Compact Disc Read-Only Memory,CD-ROM)、数字多功能光盘(Digital Versatile Disc,DVD)或其他光学存储、磁盒式磁带,磁带磁盘存储或其他磁性存储设备或任何其他非传输介质,可用于存储可以被计算设备访问的信息。
本申请所指设备包括但不限于用户设备、网络设备、或用户设备与网络设备通过网络相集成所构成的设备。所述用户设备包括但不限于任何一种可与用户进行人机交互(例如通过触摸板进行人机交互)的移动电子产品,例如智能手机、平板电脑等,所述移动电子产品可以采用任意操作系统,如Android操作系统、iOS操作系统等。其中,所述网络设备包括一种能够按照事先设定或存储的指令,自动进行数值计算和信息处理的电子设备,其硬件包括但不限于微处理器、专用集成电路(Application Specific Integrated Circuit,ASIC)、可编程逻辑器件(Programmable Logic Device,PLD)、现场可编程门阵列(Field Programmable Gate Array,FPGA)、数字信号处理器(Digital Signal Processor,DSP)、嵌入式设备等。所述网络设备包括但不限于计算机、网络主机、单个网络服务器、多个网络服务器集或多个服务器构成的云;在此,云由基于云计算(Cloud Computing)的大量计算机或网络服务器构成,其中,云计算是分布式计算的一种,由一群松散耦合的计算机集组成的一个虚拟超级计算机。所述网络包括但不限于互联网、广域网、城域网、局域网、VPN网络、无线自组织网络(AdHoc网络)等。优选地,所述设备还可以是运行于所述用户设备、网络设备、或用户设备与网络设备、网络设备、触摸终端或网络设备与触摸终端通过网络相集成所构成的设备上的程序。
当然,本领域技术人员应能理解上述设备仅为举例,其他现有的或今后可能出现的设备如可适用于本申请,也应包含在本申请保护范围以内,并在此以引用方式包含于此。
在本申请的描述中,“多个”的含义是两个或者更多,除非另有明确具体的限定。
图1示出了根据本申请的一个方面的一种获取电子围栏的方法,应用于指挥设备,该方法包括步骤S101和步骤S102。在步骤S101中,获取无人机设备拍摄的场景图像;在步骤S102中,获取指挥设备的指挥用户关于所述场景图像中目标区域的用户操作,基于所述用户操作生成关于所述目标区域的目标电子围栏,其中,所述目标电子围栏包括对应的目标围栏属性及所述目标区域在所述场景图像中的目标图像位置信息,所述目标图像位置信息用于确定所述目标区域的地理位置信息并用于对执勤用户的增强现实设备和/或无人机设备与所述目标电子围栏进行碰撞检测,所述增强现实设备与 所述指挥设备处于同一协同任务的协同执行状态。例如,指挥设备包括但不限于用户设备、网络设备、用户设备与网络设备通过网络相集成所构成的设备。所述用户设备包括但不限于任何一种可与用户进行人机交互的移动电子产品,例如手机、个人电脑、平板电脑等;所述网络设备包括但不限于计算机、网络主机、单个网络服务器、多个网络服务器集或多个服务器构成的云。
所述指挥设备与对应无人机设备和/或增强现实设备等建立了通信连接,通过该通信连接进行相关数据的传输等。在一些情形下,该指挥设备与无人机设备和/或增强现实设备处于同一协同任务的协同执行状态,所述协同任务是指多个设备根据一定约束条件(例如,与目标对象的空间距离、时间约束、关于设备自身物理条件或者任务执行顺序等)以实现某个准则为目标的,共同完成的某项任务,该任务通常可以被分解成多个子任务,并分配至系统中的各个设备,由各个设备分别去完成被分配的子任务,从而实现该协同任务的总任务进度的推进。对应协同任务在执行过程中由对应指挥设备充当该协同任务系统的控制中心,对协同任务中各个设备的子任务等进行调控。在此,所述协同任务的任务参与设备包括指挥设备、一个或多个无人机设备和/或一个或多个增强现实设备,对应指挥设备由指挥用户进行相应操作;无人机设备可以是基于指挥设备发送的采集指令/飞行路径规划指令等进行图像采集或者飞行等,还可以是由对应无人机飞手通过无人机设备的地面控制设备对无人机设备进行控制,该地面控制设备接收并呈现该指挥设备发送的控制指令,并由无人机飞手的控制操作实现对无人机设备的控制等;所述增强现实设备由对应执勤用户穿戴并进行控制,增强现实设备包括但不限于增强现实眼镜、增强现实头盔等。当然,在一些情形下,所述协同任务除了指挥设备、增强现实设备和/或无人机设备参与之外,还可以由网络设备进行三方数据传输和数据处理等,例如,无人机设备将对应场景图像发送至对应网络设备,指挥设备和/或增强现实设备等通过网络设备获取场景图像。
具体而言,在步骤S101中,获取无人机设备拍摄的场景图像。例如,无人机设备是指利用无线电遥控设备和自备的程序控制装置操纵的不载人飞行器,具有体积小、造价低、使用方便、对作战环境要求低、战场生存能力较强等优点。所述无人机设备可以采集特定区域的场景图像,例如,无人机设备基于预设飞行路线或者预先确定的目标地点在飞行过程中采集对应区域的场景图像,该无人机设备在采集场景图像过程中会记录该场景图像被采集时该无人机设备对应的摄像位姿信息,该摄像位姿信息包括该无人机 设备的摄像装置在采集场景图像时的摄像位置信息及摄像姿态信息等。无人机设备或对应的地面控制设备可以将该场景图像发送至网络设备,并由网络设备发送至对应设备等,或者无人机设备或对应的地面控制设备可以与对应设备的通信连接直接将场景图像发送至该对应设备等,其中,对应设备包括指挥设备和/或增强现实设备。在一些情形下,该无人机设备在发送场景图像的过程中,还会将该场景图像对应的摄像位姿信息发送至对应设备或网络设备,如将该摄像位姿信息发送至指挥设备和/或增强现实设备或者网络设备等。具体地,例如,网络设备可以基于该协同任务,将处于协同任务执行状态的无人机设备采集的场景图像转发至指挥设备和/或增强现实设备等;或者,无人机设备实时将采集的场景图像传输至网络设备,指挥设备和/或增强现实设备向网络设备发送关于该无人机设备的图像获取请求,对应图像获取请求包含该无人机设备的设备标识信息,网络设备响应于图像获取请求,基于该无人机设备的设备标识信息调取该无人机设备采集的场景图像并将该场景图像发送至指挥设备和/或增强现实设备等。指挥设备和/或增强现实设备获取到对应场景图像后,在对应显示装置(例如,显示屏、投影仪等)中呈现该场景图像。
在步骤S102中,获取指挥设备的指挥用户关于所述场景图像中目标区域的用户操作,基于所述用户操作生成关于所述目标区域的目标电子围栏,其中,所述目标电子围栏包括对应的目标围栏属性及所述目标区域在所述场景图像中的目标图像位置信息,所述目标图像位置信息用于确定所述目标区域的地理位置信息并用于对执勤用户的增强现实设备和/或无人机设备与所述目标电子围栏进行碰撞检测,所述增强现实设备与所述指挥设备处于同一协同任务的协同执行状态。例如,指挥设备包括数据采集装置,用于获取指挥用户的用户操作,例如,键盘、鼠标、触摸屏或者触控板、图像采集单元、语音输入单元等。例如,指挥用户的用户操作可以是指挥用户的手势动作或者语音指令,通过识别手势动作或语音指令生成目标电子围栏;又如,指挥用户的用户操作可以是利用键盘、鼠标、触摸屏或者触控板等设备对场景图像直接的操作,如指挥用户通过鼠标在呈现的场景图像上的目标区域进行框选、涂鸦等操作生成对应目标电子围栏。在一些实施例中,指挥设备在呈现无人机设备采集的场景图像同时,还可以呈现关于该场景图像的操作界面,指挥用户可以在操作界面中的控件进行操作,从而获取对场景图像中目标区域的目标电子围栏,例如,指挥设备可以采集用户关于场景图像中的目标区域的框选操作生成关于该目标区域的目标电子围栏,具体地,目标电子围栏包括但不限于对于 场景图像中目标区域(如特定范围/特定位置/特定目标等)的框选、涂鸦等操作确定的禁入范围或者禁出范围。对应目标电子围栏包括用户操作确定的目标区域及该目标区域在场景图像中的图像位置信息,该目标区域由用户操作在场景图像的添加的信息确定,包括但不限于方框、圆形、其他多边形或者用户选取的多个点组成的自定义多边形等,例如,对应操作界面包含预设形状(例如,方框、圆形或者其他多边形等)的区域框选方式,基于用户关于预设形状的选中操作并基于场景图像的特定位置/区域的框选从确定该选中的目标区域,如选择特定位置作为圆心/多边形的中心,输入或者拉取一定距离作为对应的半径/多边形的外接圆半径等,从而确定对应目标区域,或者选择特定位置作为圆心/多边形的中心,选择一个或多个角点作为圆周上的点/多边形的角点等,从而确定对应的目标区域等;还如,基于用户关于多个位置点的选取操作,基于选取顺序依次连接多个点并形成闭合图案,从而确定对应的目标区域。
目标电子围栏包括目标围栏属性信息,在一些实施方式中,目标围栏属性包括围栏的禁入或者禁出属性,如围栏属性信息包括但不限于所述电子围栏为禁入围栏或所述电子围栏为禁出围栏。例如,所述禁入围栏用于指示该围栏圈出的目标区域为禁入区域;所述禁出围栏用于指示该围栏圈出的目标区域为禁出区域。在一些情形下,我们可以通过不同的颜色对于禁入围栏和禁出围栏进行较为明显的区分,如通过黑色围栏标识该围栏为禁入围栏,通过白色围栏标识对应围栏为禁出围栏等,当然,本领域技术人员应能理解,其他颜色标识禁入/禁出属性也同样适用于本申请,在此不做限制。在另一些实施例中,目标围栏属性包括但不限于该围栏的预警距离信息,例如,基于默认设置为每个围栏信息赋予相同的默认预警距离信息等,或者,基于用户输入的不同的预警距离确定每个围栏信息的预警距离信息,还如,根据目标区域的形状和/或对应区域总范围等综合确定对应的预警距离信息等,具体地,根据目标区域的最小外接圆的半径的一定比例确定预警距离信息等。在另一些实施例中,目标围栏属性包括但不限于用于指示目标电子围栏的建立了映射关系的协同任务的任务标识信息,例如,每个协同任务存在对应电子围栏集合,该电子围栏集合中每个电子围栏均与该协同任务建立了映射关系,当对该协同任务进行处理时,指挥设备可以调用该协同任务对应的一个或多个电子围栏等,若处于该协同任务的执行状态时,指挥设备基于用户操作生成了目标电子围栏,则指挥设备将该目标电子围栏与该协同任务建立映射关系,将该目标电子围栏添加至对应电子围栏集合中。在另一些实施例中,该目标围栏属性包括但不限于目标电子围栏的标识信息、 颜色信息等。
所述目标图像位置信息用于指示该目标电子围栏的目标区域在场景图像对应图像/像素坐标系中的坐标信息,该坐标信息可以是该目标区域的区域坐标集合等。在一些情形下,该所述目标图像位置信息用于确定所述目标区域的地理位置信息并用于对执勤用户的增强现实设备和/或无人机设备与目标电子围栏进行碰撞检测,例如,协同任务中的任一参与设备(指挥设备、无人机设备、增强现实设备或者网络设备等)可以基于该目标图像位置信息及场景图像被采集时的摄像位姿信息计算确定该目标图像位置信息在现实世界对应的地理坐标系的地理位置信息。地理坐标系一般是指由经度、纬度和高度组成的坐标系,能够标示地球上的任何一个位置。不同地区可能会使用不同的参考椭球体,即使是使用相同的椭球体,也可能会为了让椭球体更好地吻合当地的大地水准面,而调整椭球体的方位,甚至大小。这就需要使用不同的大地测量系统(Geodetic datum)来标识,例如,我国经常使用的CGCS2000与WGS84地理坐标系等。其中,WGS84为一种地理坐标系,是目前最流行的地理坐标系统,也是目前广泛使用的GPS全球卫星定位系统使用的坐标系。三维直角坐标系包括但不限于:站心坐标系、导航坐标系、NWU坐标系等。具体地,在获取场景图像对应的摄像位姿信息等同时,还可以获取多个地图点的空间位置信息,其中,所述空间位置信息包括对应地图点在三维直角坐标系中的空间坐标信息。在三维直角坐标系已知的情况下,地理位置信息从地理坐标系转换至三维直角坐标系对应的坐标变换也为已知的,基于该已知的坐标变换信息我们可以将处于地理坐标系中的地图点转换至三维直角坐标系中,从而基于地图点的地理坐标信息确定对应空间位置信息;进一步地,根据所述多个地图点的空间位置信息、目标对象的图像位置信息、所述摄像位置信息以及所述摄像姿态信息确定所述目标对象在所述三维直角坐标系的目标空间位置信息,如在获取到已知的多个地图点的空间位置信息、所述目标对象的图像位置信息、所述摄像位置信息以及对应摄像姿态信息之后,由于摄像装置的内参已知,我们可以基于相机成像模型构建由相机光心通过目标对象对应的图像位置信息的空间射线,基于该空间射线、多个地图点的空间位置信息及摄像位置信息确定目标对象的目标空间位置信息。例如,我们可以假设该图像位置信息与该相机底片所在平面垂直(例如,无人机图像中心对应光轴与相机底片所在平面垂直等),从而基于该底片所在平面的法向量及该图像位置信息确定对应空间射线信息,从而基于该空间射线信息及多个地图点组成的地面信息确定相应的交点,将该交点的空间坐标信息作为目标 对象的目标空间位置信息等。当然,若对应图像位置信息对应像素未处于图像中心,则基于底片确定的法向量与实际射线向量存在误差,此时,我们需要通过相机的成像模型、图像位置信息和摄像姿态信息确定对应图像位置信息的空间目标射线的向量信息,其中,空间目标射线由光心坐标和射线的向量信息描述。计算机设备确定对应空间目标射线的向量信息之后,可以基于该目标射线的向量信息、摄像位置信息以及多个地图点的空间位置信息,计算射线相对于地面的交点,从而将该交点的空间坐标信息作为目标对象的目标空间位置信息等。最后,基于目标对象的目标空间位置信息确定目标对象在地理坐标系(如大地坐标系等)中的地理坐标信息。例如,计算机设备确定目标对象的目标空间位置信息之后,利用可以将三维直角坐标系下的坐标信息从三维空间坐标系转成地理坐标系(例如,WGS84坐标系)并存储,便于后续的计算。其中,在一些实施例中,根据目标射线的向量信息、摄像位置信息以及多个地图点的空间位置信息确定目标对象在三维直角坐标系的目标空间位置信息包括:基于摄像位置信息获取摄像装置的光心在三维直角坐标系中的光心空间位置信息;根据目标射线向量信息、多个地图点的空间位置信息、光心空间位置信息从多个地图点中确定距离目标射线最近的目标地图点;从多个地图点中除目标地图点之外的其他地图点中取两个地图点,与目标地图点构成对应空间三角形,并根据目标射线及对应空间三角形确定对应空间交点;将空间交点的空间坐标信息作为目标对象的目标空间位置信息。或者,我们可以根据目标对象在无人机场景图像上的图像位置信息,以及无人机相机内参信息,确定目标对象在相机坐标系中的当前位置信息;根据目标对象在相机坐标系中的当前位置信息,以及基于无人机拍摄的场景图像时的拍摄参数信息确定的相机的外参,从而确定目标对象在地理坐标系中的地理位置信息,其中,所述拍摄参数信息包括但不限于无人机设备的摄像装置的分辨率、视场角、相机的旋转角度以及无人机的飞行高度等。其中,在一些实施例中,目标对象用于指示该目标区域的一个点,基于目标区域中每个点的地理位置信息,我们可以确定该目标区域的地理坐标集合,从而确定该目标区域的地理位置信息。在另一些实施例中,目标对象用于指示该目标区域的一个或多个关键点(例如,角点坐标或者圆心等),基于该一个或多个关键点的地理位置信息,我们可以确定该目标区域的地理坐标集合,如以基于多个角点的空间坐标计算每条边对应的线段的坐标表达式,从而确定每条边对应的坐标集合,对每条边的坐标集合进行汇总可以确定该目标区域的地理位置信息。
The geographic position information may be determined at the command device, or at the UAV device, the augmented reality device, or the network device. For example, preferably, the command device computes the target region's geographic position from the target image position information determined by the commanding user's operation on the target region of the scene image and from the scene image's camera pose information. Alternatively, after determining the target image position information, the command device sends it to the UAV device / augmented reality device / network device, which computes the target region's geographic position from the corresponding scene image and its camera pose information. Taking computation at the network device as an example: the command device sends the target image position information to the network device and receives the geographic position information the network device determines from the target image position information and the scene image's camera pose information. Besides the users at each execution end, the cooperative task also involves a network device end for data transmission and data processing. In some cases, after the command device determines the target electronic fence from the commanding user's operation, it sends the fence, or the fence's target image position information, to the network device; the network device receives it and computes the target region's geographic position from the target image position information and the camera pose information the UAV device transmitted for that scene image. On the one hand, the network device can return the geographic position to the command device so that the command device overlays the target region's electronic fence based on it, e.g., tracking and overlaying the fence on the real-time scene images the command device receives from the UAV, overlaying it on the augmented reality device's live view as received by the command device, or presenting it in the electronic map of the target region presented by the command device. On the other hand, the network device can further determine the overlay position information and return it to the command device, which overlays the fence based on the overlay position information. Here, the geographic coordinate system is projected (e.g., via equirectangular, Mercator, Gauss-Krüger, or Lambert projection) onto a 2D plane to form a map. An electronic map follows the geographic coordinate system convention and is a mapping of it with a known mapping relation: given a point in the geographic coordinate system, its map position in the electronic map can be determined, and given a map position in the electronic map, the corresponding position in the geographic coordinate system can likewise be determined.
Collision detection means determining, from the duty position information of a duty device of the cooperative task and the geographic position information of the target electronic fence, whether the duty device is inside or outside the fence, or within the fence's warning range. For example, collision detection can be performed directly on the latitude/longitude in the geographic position information, determining whether the duty device's latitude/longitude lies inside or outside the target region's latitude/longitude range or within its warning range. Of course, for convenience and precision of computation, the duty position information and the fence's geographic position information can be transformed into a common plane rectangular coordinate system (e.g., a map coordinate system or any 2D plane rectangular coordinate system), giving duty 2D position information and fence 2D position information, from which it is determined whether the duty device is inside/outside the fence or within its warning range. Determining this from the duty 2D position and the fence 2D position comprises: performing collision detection on the duty 2D position and the fence 2D position to determine whether the duty 2D position satisfies the fence's attribute information; if not, generating a fence alarm event for the fence; if so, determining the distance difference between the duty 2D position and the fence 2D position, and generating a fence warning event for the fence if the distance difference is less than or equal to the warning distance threshold. If the fence region is circular, collision detection comprises: computing the distance from the real-time duty 2D position to the circle center from the duty 2D position and the center's 2D position; determining the circle's radius from the center's 2D position and any point on the circle; and determining from the distance, the radius, and the fence attribute information whether the duty 2D position satisfies the fence attribute information. If the fence region is a polygon, collision detection comprises: using the ray-casting method from the duty 2D position to determine the duty ray information and the number of intersections between the duty ray and the fence region; determining from the intersection count the inside/outside relation between the duty 2D position and the fence region, the relation indicating whether the duty 2D position is inside or outside the fence region; and determining from that relation and the fence attribute information whether the duty 2D position satisfies the fence attribute information. If the fence region is a polygon, determining the distance difference comprises: computing the distance from the duty 2D position to each edge of the polygon, giving multiple distances; taking the minimum as the distance difference; and deciding whether to generate a fence warning event according to whether the distance difference is less than or equal to the warning distance threshold. In some cases alarm events take priority over warning events. Duty devices are the participating devices of the cooperative task other than the network device and the command device, e.g., augmented reality devices and/or UAV devices. The collision detection process may run at the network device with results returned to the other devices, or locally at the command device, UAV device, or augmented reality device; the following embodiments take computation at the network device as an example, and those skilled in the art should understand that they apply equally when the process runs at the command device, UAV device, or augmented reality device. Of course, while executing the cooperative task, besides collision detection against the target electronic fence, collision detection is performed in the same way between all electronic fences in the task's fence set and one or more duty devices to determine the corresponding collision results.
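The alarm/warning decision described above can be summarized in a few lines. In this sketch (names assumed, not from the patent) the shape-specific containment and boundary-distance tests are injected as callables; circle and polygon versions of those tests are sketched later in this section:

```python
from typing import Callable, Tuple

Point = Tuple[float, float]

def check_fence(pos: Point,
                attribute: str,                          # "no_entry"/"no_exit" (assumed)
                inside: Callable[[Point], bool],         # shape-specific containment test
                boundary_dist: Callable[[Point], float], # distance to fence boundary
                warn_threshold: float) -> str:
    """Return "alarm" if the fence attribute is violated, "warning" if within
    the warning band, "ok" otherwise; alarms take priority over warnings."""
    satisfied = inside(pos) if attribute == "no_exit" else not inside(pos)
    if not satisfied:
        return "alarm"
    if boundary_dist(pos) <= warn_threshold:
        return "warning"
    return "ok"
```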
In some embodiments, besides collision computation, the geographic position information is also used to overlay the target electronic fence in the augmented reality device's live view and/or the UAV device's scene images. For instance, the geographic position information is further used to determine overlay position information of the target electronic fence in the live view of the duty user's augmented reality device, and the target region is overlaid in that live view. In some embodiments, once determined, the geographic position information can be sent by the determining device (command device, UAV device, or network device) directly, or forwarded via the network device, to the duty user's augmented reality device, which locally computes the overlay position at which the geographic position is displayed in its current live view and overlays the target electronic fence there. For example, having obtained the geographic position information, the command device / UAV device / network device sends it to the augmented reality device, which determines the screen position at which the target region is overlaid on its display from the received geographic position information and its current duty camera pose information, the duty camera pose information comprising the camera position and attitude of the augmented reality device's camera, the camera position indicating the duty user's current geographic position. If the geographic position computation happens at the augmented reality device, the augmented reality device keeps the geographic position information and sends it to the other devices, directly or via the network device. In other embodiments the geographic position information is not sent to the augmented reality device; instead, the overlay position at which it would be displayed in the device's current live view is computed elsewhere and sent to the device. Any device in the cooperative task that obtains the geographic position information can compute, from it and the duty camera pose information of the augmented reality device's camera, the overlay position information of the fence in the device's current live view; the overlay position information indicates the display position of the fence's target region on the augmented reality device's display, e.g., a point or set of points in the display's screen/image/pixel coordinate system. Likewise, in some embodiments, once one device (network / augmented reality / UAV / command device) determines the target region's geographic position, it can send it directly to the other devices, each of which locally determines the corresponding overlay position in the augmented reality device's live view, the real-time scene image position in the real-time scene images, or the map position in the electronic map, thereby overlaying the target region in the live view, real-time scene images, or electronic map presented by the augmented reality device, UAV device, and/or command device. In other embodiments one device further determines those overlay / real-time-image / map positions itself and sends them to the other devices for presentation. In some embodiments the geographic position information is further used to determine the target region's real-time scene image position information in the real-time scene images captured by the UAV device, and the target region is overlaid on the real-time scene images presented by the augmented reality device and/or UAV device. For example, once the geographic position corresponding to the target region's target image position has been computed, it can be stored in a storage database (e.g., local storage at the command / augmented reality / UAV device, or a network storage database set up at the network device), so that whenever the target electronic fence is retrieved, its geographic position is retrieved with it and used to compute other positions (e.g., real-time scene image positions in the UAV device's images, or real-time overlay positions in the augmented reality device's live capture). For example, the UAV device can send its real-time scene images to the command device / augmented reality device directly over a communication connection, or via the network device; the augmented reality device can present the images on its display, e.g., via video see-through, or within a certain screen region of the display. To facilitate tracking and overlaying the target electronic fence in the real-time scene images, the UAV device acquires real-time flight camera pose information for those images. In some embodiments the augmented reality device / command device obtains the real-time flight camera pose information directly over its connection with the UAV device or via the network device and, combined with the already computed geographic position information, locally computes the overlay position of the fence's target region in the real-time scene images; likewise the UAV device can compute it locally from the real-time flight camera pose information and the geographic position information, so that the fence is tracked and overlaid in the real-time scene images presented by the augmented reality device / command device / UAV device. For example: set the UAV's position at some moment (e.g., the takeoff position) as the origin of a 3D rectangular coordinate system (e.g., station-centric or navigation coordinates); transform the fence's geographic position into that system; obtain the UAV's real-time geographic position and attitude, transform the UAV position into the same system, and determine from the attitude the rotation matrix from the 3D rectangular coordinate system to the UAV camera coordinate system; then determine and present the target region's real-time scene image position in the UAV's real-time scene images from the fence's 3D rectangular coordinates, the UAV's 3D rectangular coordinates, the rotation matrix, and the UAV camera intrinsics. In other embodiments, one device (command / augmented reality / UAV / network device) obtains the real-time flight camera pose information, computes from it and the fence's computed geographic position the fence's overlay position in the real-time scene images, and sends that overlay position to the other devices, which track and overlay the fence in the real-time scene images they present. In some embodiments the method further includes step S103 (not shown): in step S103, the target region's geographic position information is determined from the target image position information and the scene image's camera pose information. For example, after the command device determines the target electronic fence from the commanding user's operation, it computes the target region's geographic position from the fence's target image position information and the camera pose information transmitted by the UAV device for the scene image, then sends the geographic position directly to the other executing devices of the cooperative task, such as the augmented reality and UAV devices, or sends it to the network device for distribution to them. On the one hand, the command device can send the geographic position to the other executing devices so they can further determine overlay positions and overlay the target electronic fence, e.g., tracking and overlaying it in the real-time UAV scene images received at the augmented reality device, overlaying it in the augmented reality device's own real-time live view, or presenting it in the electronic map of the target region presented by the augmented reality device. On the other hand, the command device can itself determine the overlay position information and return it to the other executing devices, which overlay the target electronic fence based on it.
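The overlay computation sketched in that example (local 3D rectangular frame, rotation from attitude, camera intrinsics) is essentially a pinhole projection. A minimal sketch under those assumptions (all names hypothetical):

```python
import numpy as np

def project_to_image(point_w: np.ndarray, cam_pos_w: np.ndarray,
                     R_cw: np.ndarray, K: np.ndarray):
    """Project a fence vertex from the local 3D rectangular frame into the image.

    point_w  : fence vertex in the local frame (e.g., origin at takeoff)
    cam_pos_w: camera position in the same frame
    R_cw     : rotation, world frame -> camera frame (from the UAV attitude)
    K        : 3x3 camera intrinsic matrix
    """
    p_cam = R_cw @ (np.asarray(point_w, float) - np.asarray(cam_pos_w, float))
    if p_cam[2] <= 0:
        return None                           # behind the camera, not visible
    uvw = K @ p_cam
    return uvw[0] / uvw[2], uvw[1] / uvw[2]   # pixel coordinates (u, v)
```

Projecting every vertex of the fence polygon this way yields the overlay positions at which the fence is drawn over the real-time scene image or the AR view.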
In some embodiments the method further includes step S104 (not shown): in step S104, the electronic map corresponding to the cooperative task is presented; map position information of the target electronic fence is determined from the fence's geographic position information, and the fence is presented in the electronic map based on the map position information. For example, the command device can use the task identification information of the cooperative task, or the position of the target region, to retrieve from local storage or from the network device the electronic map of the scene where the task takes place: e.g., from the target region's geographic position, the command device determines the electronic map near that position, from local storage or the network device, and presents it; or the command device / network device stores the task region of each task and establishes a mapping between each task region and its task identifier, and the command device retrieves the corresponding electronic map by task identifier, from local storage or the network device. The command device can also obtain the fence's map position in the electronic map, e.g., by projecting the geographic position locally, or by receiving the map position returned by another device (network device, UAV device, augmented reality device). The command device can present the electronic map via its display and present the target electronic fence at the region of the map corresponding to the map position, thereby overlaying the fence in the electronic map.
In some embodiments, the target electronic fence's geographic position information is further used to overlay the target electronic fence in the electronic map of the scene of the target region as presented by the augmented reality device and/or UAV device. For example, the geographic position information may be computed at the command device / augmented reality device / UAV device, or at the network device. The command device, UAV device, or augmented reality device can each present the electronic map of the target region's scene via its own display and obtain the target region's map position from the geographic position information, thereby overlaying the target electronic fence in each presented electronic map, so that a fence added to the target region in a UAV scene image is presented synchronously on the target region in the electronic map. The map position can be obtained at each device locally by projecting the target region's geographic position, computed by the network device and returned to each device, or computed by one device and sent to the others.
In some embodiments the method further includes step S105 (not shown): in step S105, the electronic map corresponding to the cooperative task is acquired and presented, and an operation electronic fence of an operation region is determined based on the commanding user's operation on the operation region in the electronic map, wherein the operation electronic fence includes corresponding operation fence attributes and operation map position information of the operation region in the electronic map, the operation map position information being used to determine the operation region's operation geographic position information and to perform collision detection between the operation electronic fence and the duty user's augmented reality device and/or UAV device. For example, the commanding user's operation may be a gesture or a voice command, recognized to generate the operation electronic fence; or it may be a direct operation on the electronic map via keyboard, mouse, touch screen, or touch pad, e.g., the commanding user box-selects or scribbles over a specific region/position/target on the presented map with the mouse to generate the corresponding operation electronic fence. In some embodiments, the command device can retrieve the electronic map of the scene from local storage or from the network device and, while presenting it, also presents an operation interface for the map, through which the commanding user marks specific regions/positions/targets in the electronic map, e.g., box-selects part of the map; the command device determines the corresponding region as the operation region based on the user operation and generates its operation electronic fence, which includes the corresponding operation fence attributes and the operation region's operation map position information in the electronic map. The operation map position is unrelated to the target region's map position; the two may be the same position or different positions. The specific embodiments of the operation electronic fence's attribute information, geographic position computation, and overlay presentation are the same as or similar to those described above for the target region and are incorporated here by reference rather than repeated. For example, the command device determines the operation region's operation geographic position information from the operation map position information. For example, the operation geographic position information is further used to overlay the operation region in the electronic map of the operation region's scene presented by the augmented reality device and/or UAV device. For example, the operation geographic position information is further used to overlay the operation region in the augmented reality device's live view and/or the scene images captured by the UAV device. For example, the command device sends the determined operation geographic position information to the other executing devices of the cooperative task (e.g., augmented reality and UAV devices), and each of them locally computes the real-time scene image position in the UAV's real-time images from the operation geographic position and the UAV's real-time flight camera pose, the overlay position in the augmented reality device's live view from the operation geographic position and the AR device's camera pose, and/or the map position in the electronic map from the operation geographic position, thereby overlaying the operation region in the respective real-time UAV view / live view / electronic map.
In some embodiments the target electronic fence and/or the corresponding operation electronic fence is used to update or establish the electronic fence set of the cooperative task, wherein the fence set includes at least one electronic fence, each comprising corresponding fence attributes and the geographic position information of its fence region; the target electronic fence or the operation electronic fence is one of the at least one electronic fence, and the operation electronic fence is determined from the commanding user's operation on an operation region in the electronic map. For example, each cooperative task has task identification information marking its uniqueness, such as a task number, name, or image. Each cooperative task has a corresponding electronic fence set stored in a database, bound to the task's identification information and containing one or more electronic fences: e.g., target electronic fences generated from the commanding user's operations on UAV images, operation electronic fences generated from the user's operations on the electronic map, or preset fences for certain special regions of the task. Each fence includes its fence attribute information and geographic position information; in some cases the attribute information includes the corresponding warning distance threshold, and in some cases the fence's no-entry or no-exit property. The database storing the task's fence set may reside at the command device or at the network device. If, before the target and/or operation electronic fence is acquired, the task has already generated a fence set from preset fences, the fence set is updated with the target and/or operation electronic fence; if the task has not yet established a fence-set mapping, the fence set is established from the target and/or operation electronic fence and updated with subsequently determined fences. The number of target electronic fences may be one or more, as may the number of operation electronic fences; no limitation is intended here.
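A fence set keyed by task identifier could be as simple as the following sketch (an in-memory stand-in for the database described above; all names hypothetical):

```python
from collections import defaultdict

fence_sets: dict[str, list] = defaultdict(list)  # task identifier -> fence list

def register_fence(task_id: str, fence) -> None:
    """Establish the task's fence set on first use, then update it."""
    fence_sets[task_id].append(fence)
```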
In some embodiments the method further includes step S106 (not shown): in step S106, fence warning prompt information of a corresponding duty device regarding one of the at least one electronic fence is acquired and presented, wherein the warning prompt indicates that the duty device's real-time duty position information satisfies the fence attribute information of that fence and that the distance difference to that fence is less than or equal to the warning distance threshold, the duty device comprising the duty user's augmented reality device and/or the UAV device. For example, duty devices denote the devices, other than the command device and the network device, that are in a moving and/or task-executing state, e.g., an augmented reality device worn by a duty user, or a UAV device controlled by a UAV pilot. The warning prompt may be generated at the command device: the command device obtains the duty device's real-time duty position and computes, from it and the geographic positions of the fences in the fence set, whether the real-time duty position satisfies the attribute information of one of the fences with a distance difference to that fence less than or equal to the warning distance threshold; if so, the command device generates and presents the warning prompt. Alternatively, the warning prompt is generated at the network device: the network device obtains the real-time duty position and performs the same computation; if the condition holds, it generates the warning prompt and sends it to the command device for presentation and follow-up handling. In some embodiments this computation is performed for every duty device in the cooperative task against every fence in the task's fence set; if some duty device and some fence are found to meet the above condition, a warning prompt about that device and fence is generated. The warning prompt further includes the duty device's identification information (e.g., device number or name, or the number or name of the device's user), and further still the fence's identification information (e.g., fence number, name, or coordinate position). The warning prompt may be displayed directly on the screen of the command device or the duty device, or displayed there as a timeline; the display position is not limited. In other embodiments the cooperative task includes multiple subtasks, each with its own subtask duty devices and subtask fence set, the subtask's fence set defining the no-entry or no-exit ranges only for that subtask's duty devices; warning prompts for a subtask are then computed for each duty device of that subtask against each fence of its subtask fence set, duty devices outside the subtask are not checked against the subtask's fences, and fences outside the subtask's fence set are not checked against the subtask's duty devices.
A duty device's real-time duty position satisfying the fence attribute information of one of the fences indicates that the real-time position matches the fence's attribute information: e.g., if the attribute information indicates the fence is a no-entry fence, the real-time position lies outside the region the fence encloses; if it indicates a no-exit fence, the real-time position lies inside. If a duty device's real-time duty position does not satisfy some fence's attribute information, corresponding fence alarm prompt information is generated to alert the commanding user that one of the task's duty devices violates one of the fences' attribute information and needs to be commanded and dispatched. In some embodiments the method further includes step S107 (not shown): in step S107, fence alarm prompt information of the duty device regarding one of the at least one electronic fence is acquired and presented, wherein the alarm prompt indicates that the duty device's real-time duty position information does not satisfy that fence's attribute information, the duty device comprising the duty user's augmented reality device and/or the UAV device. For example, if the real-time duty position does not satisfy a fence's attribute information, e.g., the attribute information indicates a no-entry fence and the position lies inside the enclosed region, or indicates a no-exit fence and the position lies outside it, the command device / network device generates the corresponding alarm prompt, and the command device presents the locally generated alarm prompt or receives and presents the one sent by the network device.
In some cases the aforementioned fence warning prompt and/or fence alarm prompt can also be sent to the duty devices and presented on the displays of the cooperative task's duty devices, to alert the corresponding duty user that their current position is within the warning distance or does not satisfy the fence attribute information. Specifically, after the command device generates the warning and/or alarm prompt locally, it sends the prompt to the duty device concerned or to all duty devices of the cooperative task; likewise, after the network device generates the warning and/or alarm prompt, it sends the prompt to the corresponding duty device or to all duty devices of the task at the same time as sending it to the command device.
Figure 2 shows a method for acquiring an electronic fence according to another aspect of this application, applied to a network device; the method includes step S201. In step S201, the electronic fence set of the corresponding cooperative task is acquired, wherein the fence set includes at least one electronic fence, each comprising corresponding fence attributes and geographic position information of its fence region; the duty devices of the cooperative task include the duty user's augmented reality device and/or UAV device, and the fence region's geographic position information is used to perform collision detection between the duty devices and the fences. For example, the network device, as the data transmission and data processing device of the cooperative task, receives the target and/or operation electronic fences determined from the commanding user's operations, and establishes or updates the stored fence set of the task in the database based on them. The specific embodiments of the fence attributes and of computing and presenting the geographic position information are the same as or similar to those above and are not repeated here.
In some embodiments the method further includes step S202 (not shown): in step S202, the duty devices' real-time duty position information is acquired, and the corresponding fence warning event or fence alarm event is determined from the real-time duty positions and the geographic position information of the at least one fence. For example, the network device can receive, over its communication connections with the duty devices, the real-time duty positions they upload, and perform collision detection against the fences' geographic positions to determine the corresponding warning or alarm events. Collision detection here means determining, from a duty device's duty position and a fence's geographic position, whether the device is inside or outside the fence, or within the fence's warning range. It can be performed directly on the latitude/longitude in the geographic positions, determining whether the duty device's latitude/longitude falls within the fence region's latitude/longitude range; or, for convenience and precision of computation, the duty positions and the fences' geographic positions can be transformed into a common plane rectangular coordinate system (e.g., a map coordinate system or any 2D plane rectangular coordinate system), giving duty 2D positions and fence 2D positions from which the inside/outside/warning-range relations are determined. Duty devices denote the devices, other than the command device and the network device, that are in a moving and/or task-executing state, e.g., an augmented reality device worn by a duty user or a UAV device controlled by a UAV pilot. The warning prompt generation happens at the network device: the network device obtains a duty device's real-time duty position and computes, from it and the geographic positions of the fences in the fence set, whether the real-time position satisfies the attribute information of one of the fences with a distance difference less than or equal to the warning distance threshold; if so, it generates the warning prompt and sends it to the command device for presentation and follow-up handling. In some embodiments the computation runs for every duty device of the task against every fence of the task's fence set; when some device and fence meet the condition, a warning prompt about the pair is generated, further including the device's identification information (e.g., device number or name, or the device user's number or name) and, further still, the fence's identification information (e.g., fence number, name, or coordinate position). In other embodiments the cooperative task comprises multiple subtasks, each with its own subtask duty devices and subtask fence set, the subtask's fence set defining the no-entry or no-exit ranges only for that subtask's duty devices; the warning computation then pairs each duty device of the subtask only with the fences of its subtask fence set, duty devices outside the subtask are not checked against the subtask's fences, and fences outside the subtask's fence set are not checked against the subtask's duty devices. A real-time duty position satisfying a fence's attribute information indicates that the position matches the attribute information: e.g., outside the enclosed region for a no-entry fence, inside it for a no-exit fence. If a duty device's real-time position does not satisfy some fence's attribute information, the network device generates the corresponding alarm prompt and sends it to the command device for presentation and follow-up handling, alerting the commanding user that one of the task's duty devices violates one of the fences' attribute information and needs to be commanded and dispatched.
In some embodiments, determining the corresponding fence warning or alarm event from the real-time duty position and the geographic position information of the at least one fence comprises: transforming the real-time duty position into a plane rectangular coordinate system to determine the corresponding real-time duty 2D position information; determining the 2D position information of the at least one fence from its geographic position information; and determining the corresponding warning or alarm event from the real-time duty 2D position information and the fences' 2D position information. For example, collision detection on 3D geographic positions is computationally heavy and cannot ignore the distance effects introduced by elevation; therefore the real-time duty position and the fences' geographic positions are transformed (e.g., by projection) into a 2D plane rectangular coordinate system (e.g., a map coordinate system or any plane rectangular coordinate system), yielding the duty 2D position and the fences' 2D positions. For example, the fences' geographic positions and the real-time duty position undergo the forward Mercator projection to obtain the fences' 2D positions and the real-time duty 2D position in the plane rectangular coordinate system. The network device then performs collision detection on the real-time duty 2D position and the fences' 2D positions to determine whether a fence warning or alarm event occurs. In some cases alarm events take priority over warning events.
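The forward Mercator step mentioned above, in its common spherical (Web Mercator) form; using the WGS84 equatorial radius as the projection sphere is a standard simplification, not something the patent prescribes:

```python
import math

R_EARTH = 6378137.0  # WGS84 equatorial radius, meters

def mercator_forward(lon_deg: float, lat_deg: float) -> tuple[float, float]:
    """Forward spherical Mercator projection: lon/lat degrees -> plane meters."""
    x = R_EARTH * math.radians(lon_deg)
    y = R_EARTH * math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
    return x, y
```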
In some embodiments, determining the corresponding fence warning or alarm event from the real-time duty 2D position information and the fences' 2D position information comprises: performing collision detection on the real-time duty 2D position and the 2D position of a given fence among the at least one fence to determine whether the duty 2D position satisfies that fence's attribute information; if it does not, generating a fence alarm event for that fence; if it does, determining the distance difference between the duty 2D position and that fence's 2D position, and generating a fence warning event for the fence if the distance difference is less than or equal to the warning distance threshold. For example, the network device performs collision detection on the real-time duty 2D position and the 2D positions of the cooperative task's fences to determine whether the duty device satisfies a given fence's attribute information, the given fence being one of all the task's fences. A real-time duty 2D position may satisfy the attribute information of one or several fences simultaneously: e.g., it may be outside several no-entry fences at once, or outside a no-entry fence while inside a no-exit fence. If some fence is not satisfied, the network device generates the corresponding fence alarm prompt, which further includes the duty device's identification information (e.g., device number or name, or the device user's number or name) and, further still, the fence's identification information (e.g., fence number, name, or coordinate position), the alarm prompt indicating that the duty device does not satisfy that fence's attribute information.
If the network device determines that the real-time duty 2D position satisfies a given fence's attribute information, it further determines the distance difference between the duty 2D position and the fence's 2D position, specifically taking the shortest distance from the duty 2D position to the fence's boundary as the distance difference, and compares it with the preset warning threshold; if the distance difference is less than or equal to the preset warning threshold, it generates the corresponding fence warning prompt, which further includes the duty device's identification information (e.g., device number or name, or the device user's number or name) and, further still, the fence's identification information (e.g., fence number, name, or coordinate position), the warning prompt indicating that the duty device has entered the fence's warning range.
In some embodiments, the fence region of the given electronic fence is circular, and the collision detection comprises: computing, from the real-time duty 2D position and the 2D position of the fence region's center, the distance from the duty 2D position to the center; determining the circle's radius from the center's 2D position and the 2D position of any point on the circle; and determining, from the distance, the radius, and the fence's attribute information, whether the real-time duty 2D position satisfies the fence attribute information. For example, for a fence whose region is circular, whether the duty device satisfies the fence attribute information is computed by deciding whether the duty 2D position lies inside the fence. Let a point on the circle be p1(x1, y1), the center be o(x, y), and the duty 2D position be p2(x2, y2); the radius r is

$$r = \sqrt{(x_1 - x)^2 + (y_1 - y)^2}$$

and the distance l from the duty 2D position to the center is

$$l = \sqrt{(x_2 - x)^2 + (y_2 - y)^2}$$
If l − r > 0, the duty 2D position is outside the fence's circle; if l − r ≤ 0, it is inside. Then, according to whether the fence's attribute information indicates a no-entry or a no-exit fence, it is determined whether the duty 2D position satisfies the attribute information: if the position is outside the circle and the fence is a no-entry fence, the attribute information is satisfied; if the position is outside the circle and the fence is a no-exit fence, it is not satisfied; if the position is inside the circle and the fence is a no-exit fence, it is satisfied; if the position is inside the circle and the fence is a no-entry fence, it is not satisfied.
Further, when the real-time duty 2D position satisfies the fence's attribute information, the distance difference between the fence and the duty 2D position can be obtained as |l − r| above, and comparing that distance difference with the corresponding preset warning threshold determines whether the duty 2D position is within the fence's warning range.
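Putting the circle test together, a sketch with assumed attribute labels; `math.dist` computes the Euclidean distances r and l defined above:

```python
import math

def circle_fence_check(p2: tuple, center: tuple, p1: tuple,
                       attribute: str, warn_threshold: float) -> str:
    """Circle-fence collision test following the formulas above.

    p2     : duty 2D position (x2, y2)
    center : circle center o = (x, y)
    p1     : any point on the circle, used to recover the radius
    """
    r = math.dist(center, p1)          # radius
    l = math.dist(center, p2)          # duty position to center
    inside = (l - r) <= 0
    satisfied = inside if attribute == "no_exit" else not inside
    if not satisfied:
        return "alarm"
    if abs(l - r) <= warn_threshold:   # distance difference to the boundary
        return "warning"
    return "ok"
```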
In some embodiments, the fence region of the given electronic fence is a polygon, and the collision detection comprises: using the ray-casting method from the real-time duty 2D position to determine the corresponding duty ray information, and determining from the duty ray information the number of intersections between the duty ray and the fence region of the given fence; determining from the intersection count the inside/outside relation between the real-time duty 2D position and the fence region, the relation indicating whether the position is inside or outside the fence region; and determining from that relation and the fence's attribute information whether the real-time duty 2D position satisfies the fence attribute information. For example, for a fence whose region is a polygon, whether the duty device satisfies the fence attribute information is computed by using the ray-casting method to decide whether the duty 2D position is inside or outside the fence. As shown in Figure 3, a ray is cast from the duty 2D position and the number of its intersections with all edges of the fence polygon is counted; if the intersection counts on both sides of the position along the line are both odd, the position is inside the polygon, otherwise it is outside. Then, according to whether the fence's attribute information indicates a no-entry or no-exit fence, satisfaction is determined as before: outside and no-entry, satisfied; outside and no-exit, not satisfied; inside and no-exit, satisfied; inside and no-entry, not satisfied.
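A sketch of the ray-casting containment test; this is the standard single-ray variant (count crossings on one side of the point), which is equivalent to the both-sides-odd criterion described above for points not lying on an edge:

```python
def point_in_polygon(p: tuple, polygon: list) -> bool:
    """Ray casting: cast a horizontal ray from p to the right and count how
    many polygon edges it crosses; an odd count means p is inside."""
    x, y = p
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                        # edge straddles the ray's line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:                             # crossing to the right of p
                inside = not inside
    return inside
```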
In some embodiments, determining the corresponding distance difference from the real-time duty 2D position and the given fence's 2D position comprises: computing the distance from the duty 2D position to each edge of the fence region's polygon, thereby determining multiple distances; taking the minimum of those distances as the distance difference; and deciding, from whether the distance difference is less than or equal to the warning distance threshold, whether to generate a fence warning event for the fence. For example, the network device computes the distance from the real-time duty 2D position to each edge of the polygon, finds the shortest of them using a sorting algorithm (e.g., bubble sort, quicksort, insertion sort, or Shell sort), takes it as the distance difference, and compares it with the fence's warning distance threshold; if the distance difference is less than or equal to the threshold, the duty device is within the fence's warning range. Specifically, as in the example shown in Figure 4, the point-to-segment distance is computed vectorially: let the target point (e.g., the duty 2D position) be p and the segment be AB; drop a perpendicular from p to AB with foot C, and let θ be the angle between vectors AP and AB. Computing the normalized projection r of AP onto AB allows the relation of p to AB to be judged:

$$r = \frac{\overrightarrow{AP} \cdot \overrightarrow{AB}}{|\overrightarrow{AB}|^2}$$
As shown in Figure 4a, if 0 < r < 1 the projection C falls on AB and the perpendicular distance |PC| is the distance from p to AB; as shown in Figure 4b, if r ≥ 1 the projection falls beyond B and |BP| is the distance from p to AB; as shown in Figure 4c, if r ≤ 0 the projection falls before A and |AP| is the distance from p to AB. For example, the network device computes the distance from the duty device to each edge of the fence, determines the minimum of the multiple distances, takes that minimum as the duty-device-to-fence distance difference, and compares it with the preset warning threshold to determine whether the duty device is within the fence's warning range.
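The projection-ratio computation above translates directly into code. A sketch (names assumed) that also derives the polygon distance difference as the minimum over all edges; a simple `min` stands in for the sorting algorithms mentioned above:

```python
import math

def point_segment_distance(p: tuple, a: tuple, b: tuple) -> float:
    """Distance from point p to segment AB via the projection ratio r."""
    ax, ay = a; bx, by = b; px, py = p
    ab2 = (bx - ax) ** 2 + (by - ay) ** 2
    if ab2 == 0:                              # degenerate segment: A == B
        return math.dist(p, a)
    r = ((px - ax) * (bx - ax) + (py - ay) * (by - ay)) / ab2
    if r <= 0:
        return math.dist(p, a)                # projection falls before A
    if r >= 1:
        return math.dist(p, b)                # projection falls beyond B
    c = (ax + r * (bx - ax), ay + r * (by - ay))  # foot of the perpendicular
    return math.dist(p, c)

def polygon_distance_difference(p: tuple, polygon: list) -> float:
    """Minimum distance from p to any polygon edge (the distance difference)."""
    n = len(polygon)
    return min(point_segment_distance(p, polygon[i], polygon[(i + 1) % n])
               for i in range(n))
```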
In some embodiments the method further includes step S203 (not shown): in step S203, prompt information corresponding to fence warning events and/or fence alarm events is generated at a preset time interval and delivered to the participating devices of the cooperative task. For example, the network device starts a background thread that takes elements off the real-time duty position queue and performs collision detection computations against the fences' geographic positions. For example, from the task identification information of the task the duty device is currently executing, the stored or complete set of the task's fences is loaded from the database into the fence data, and collision detection runs on that fence data and the duty device's real-time duty positions. In some embodiments the network device assigns each duty device a rate limiter: when a real-time duty position carries an update flag (the position has changed), a token must first be obtained from the limiter, and only requests that obtain a token are passed on to the collision detection service; requests without a token are not processed, e.g., one token is issued per preset interval, so a position is processed once every n seconds. Specifically, the limiter is a traffic-control service implemented with the token bucket algorithm, which adds tokens to a bucket at a fixed rate (e.g., at the preset time interval); here the bucket capacity is set to 1 and a token is added every n seconds. With n = 0 there is no limiting, and every position point obtains a token and enters the collision computation service; with n = 5, only one token is added to the bucket within 5 seconds, so even if several position-processing requests arrive in those 5 seconds (a position-processing request is produced whenever the duty end's or the UAV's position changes), only one of them obtains a token and enters the collision computation service. In other embodiments the network device performs collision detection on every duty position change but presents the detection results at the preset interval. With this scheme, even if a duty position produces multiple alarms or warnings within the preset time interval, only one is sent, which provides a more useful reference for the task's participating devices. In some cases different duty devices can be given different time intervals; in other cases different time intervals can be set for fence warning events and fence alarm events.
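A minimal sketch of the capacity-1 token bucket described above (one limiter per duty device; the 5-second interval and all names are illustrative):

```python
import time

class TokenBucket:
    """Capacity-1 token bucket: at most one grant per `interval` seconds."""
    def __init__(self, interval: float):
        self.interval = interval
        self.last_grant = float("-inf")

    def try_acquire(self) -> bool:
        now = time.monotonic()
        if self.interval <= 0 or now - self.last_grant >= self.interval:
            self.last_grant = now
            return True        # token granted: forward to collision detection
        return False           # no token: drop this position update

limiters: dict[str, TokenBucket] = {}   # one limiter per duty device

def on_position_update(device_id: str, position) -> None:
    bucket = limiters.setdefault(device_id, TokenBucket(interval=5.0))
    if bucket.try_acquire():
        pass  # hand `position` to the collision-detection service here
```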
The above has mainly introduced the specific embodiments of the method for acquiring an electronic fence according to one aspect of this application; in addition, this application also provides specific devices capable of implementing the above embodiments, which are introduced below with reference to Figures 5 and 6.
Figure 5 shows a command device for acquiring an electronic fence according to one aspect of this application; the device includes a one-one module 101 and a one-two module 102. The one-one module 101 is used to acquire a scene image captured by a UAV device; the one-two module 102 is used to acquire a user operation of the command device's commanding user on a target region in the scene image and to generate a target electronic fence for the target region based on the user operation, wherein the target electronic fence includes corresponding target fence attributes and target image position information of the target region in the scene image, the target image position information is used to determine the target region's geographic position information and to perform collision detection between the target electronic fence and the duty user's augmented reality device and/or UAV device, and the augmented reality device and the command device are in a cooperative execution state of the same cooperative task.
In some embodiments the fence attribute information further includes the fence's no-entry or no-exit property, e.g., the attribute information includes, without limitation, that the electronic fence is a no-entry fence or that the electronic fence is a no-exit fence.
In some embodiments the geographic position information is further used to determine overlay position information of the target electronic fence in the live view of the duty user's augmented reality device, and the target region is overlaid in the augmented reality device's live view.
In some embodiments the geographic position information is further used to determine real-time image position information of the target region in the real-time scene images captured by the UAV device, and the target electronic fence is overlaid in the real-time scene images presented by the augmented reality device and/or UAV device.
Here, the specific embodiments of the one-one module 101 and one-two module 102 shown in Figure 5 are the same as or similar to the embodiments of step S101 and step S102 shown in Figure 1, and are incorporated here by reference rather than repeated.
In some embodiments the device further includes a one-three module (not shown) for determining the target region's geographic position information based on the image position information and the scene image's camera pose information.
In some embodiments the device further includes a one-four module (not shown) for presenting the electronic map corresponding to the cooperative task, determining the target electronic fence's map position information from its geographic position information, and presenting the fence in the electronic map based on the map position information.
In some embodiments the target electronic fence's geographic position information is further used to overlay the target electronic fence in the electronic map of the scene of the target region as presented by the augmented reality device and/or UAV device.
In some embodiments the device further includes a one-five module (not shown) for acquiring and presenting the electronic map corresponding to the cooperative task and determining an operation electronic fence of an operation region based on the commanding user's operation on the operation region in the electronic map, wherein the operation electronic fence includes corresponding operation fence attributes and the operation region's operation map position information in the electronic map, the operation map position information being used to determine the operation region's operation geographic position information and to perform collision detection between the operation electronic fence and the duty user's augmented reality device and/or UAV device.
In some embodiments the target electronic fence and/or the corresponding operation electronic fence is used to update or establish the cooperative task's electronic fence set, wherein the fence set includes at least one electronic fence, each comprising corresponding fence attributes and the geographic position information of its fence region; the target electronic fence or the operation electronic fence is one of the at least one electronic fence, and the operation electronic fence is determined from the commanding user's operation on an operation region in the electronic map.
In some embodiments the device further includes a one-six module (not shown) for acquiring and presenting fence warning prompt information of a corresponding duty device regarding one of the at least one electronic fence, wherein the warning prompt indicates that the duty device's real-time duty position information satisfies that fence's attribute information and that the distance difference to that fence is less than or equal to the warning distance threshold, the duty device comprising the duty user's augmented reality device and/or the UAV device.
In some embodiments the device further includes a one-seven module (not shown) for acquiring and presenting fence alarm prompt information of the duty device regarding one of the at least one electronic fence, wherein the alarm prompt indicates that the duty device's real-time duty position information does not satisfy that fence's attribute information, the duty device comprising the duty user's augmented reality device and/or the UAV device.
Here, the specific embodiments corresponding to the one-three through one-seven modules are the same as or similar to the embodiments of steps S103 through S107 above, and are incorporated here by reference rather than repeated.
Figure 6 shows a network device for acquiring an electronic fence according to another aspect of this application; the device includes a two-one module 201. The two-one module 201 is used to acquire the electronic fence set of the corresponding cooperative task, wherein the fence set includes at least one electronic fence, each comprising corresponding fence attributes and the geographic position information of its fence region; the cooperative task's duty devices include the duty user's augmented reality device and/or UAV device; and the fence region's geographic position information is used to perform collision detection between the duty devices and the fences.
Here, the specific embodiment corresponding to the two-one module 201 is the same as or similar to the embodiment of step S201 above, and is incorporated here by reference rather than repeated.
In some embodiments the device further includes a two-two module (not shown) for acquiring the duty devices' real-time duty position information and determining the corresponding fence warning event or fence alarm event from the real-time duty position information and the geographic position information of the at least one fence.
In some embodiments, determining the corresponding fence warning or alarm event from the real-time duty position and the geographic position information of the at least one fence comprises: transforming the real-time duty position into a plane rectangular coordinate system to determine the corresponding real-time duty 2D position information; determining the 2D position information of the at least one fence from its geographic position information; and determining the corresponding fence warning or alarm event from the real-time duty 2D position information and the fences' 2D position information.
In some embodiments, determining the corresponding fence warning or alarm event from the real-time duty 2D position information and the fences' 2D position information comprises: performing collision detection on the real-time duty 2D position and the 2D position of a given fence among the at least one fence to determine whether the duty 2D position satisfies that fence's attribute information; if it does not, generating a fence alarm event for that fence; if it does, determining the distance difference between the duty 2D position and that fence's 2D position, and generating a fence warning event for the fence if the distance difference is less than or equal to the warning distance threshold.
In some embodiments, the fence region of the given fence is circular, and the collision detection comprises: computing, from the real-time duty 2D position and the 2D position of the fence region's center, the distance from the duty 2D position to the center; determining the circle's radius from the center's 2D position and the 2D position of any point on the circle; and determining, from the distance, the radius, and the fence's attribute information, whether the real-time duty 2D position satisfies the fence attribute information.
In some embodiments, the fence region of the given fence is a polygon, and the collision detection comprises: using the ray-casting method from the real-time duty 2D position to determine the corresponding duty ray information, and determining from it the number of intersections between the duty ray and the fence region of the given fence; determining from the intersection count the inside/outside relation between the real-time duty 2D position and the fence region, the relation indicating whether the position is inside or outside the fence region; and determining, from that relation and the fence's attribute information, whether the real-time duty 2D position satisfies the fence attribute information.
In some embodiments, determining the corresponding distance difference from the real-time duty 2D position and the given fence's 2D position comprises: computing the distance from the duty 2D position to each edge of the fence region's polygon, thereby determining multiple distances; taking the minimum of those distances as the distance difference; and deciding, from whether the distance difference is less than or equal to the warning distance threshold, whether to generate a fence warning event for the fence.
In some embodiments the device further includes a two-three module (not shown) for generating prompt information corresponding to fence warning events and/or fence alarm events at a preset time interval and delivering the prompt information to the participating devices of the cooperative task.
Here, the specific embodiments corresponding to the two-two and two-three modules are the same as or similar to the embodiments of steps S202 and S203 above, and are incorporated here by reference rather than repeated.
In addition to the methods and devices introduced in the above embodiments, this application also provides a computer-readable storage medium storing computer code which, when executed, performs the method of any of the preceding items.
This application also provides a computer program product which, when executed by a computer device, performs the method of any of the preceding items.
This application also provides a computer device, the computer device comprising:
one or more processors;
a memory for storing one or more computer programs;
the one or more computer programs, when executed by the one or more processors, causing the one or more processors to implement the method of any of the preceding items.
Figure 7 shows an exemplary system that can be used to implement the various embodiments described in this application.
As shown in Figure 7, in some embodiments system 300 can serve as any of the above devices in the described embodiments. In some embodiments, system 300 can include one or more computer-readable media having instructions (e.g., system memory or NVM/storage device 320) and one or more processors (e.g., processor(s) 305) coupled to the one or more computer-readable media and configured to execute the instructions to implement modules that perform the actions described in this application.
For one embodiment, the system control module 310 can include any suitable interface controller to provide any suitable interface to at least one of the processor(s) 305 and/or to any suitable device or component in communication with the system control module 310.
The system control module 310 can include a memory controller module 330 to provide an interface to the system memory 315. The memory controller module 330 can be a hardware module, a software module, and/or a firmware module.
The system memory 315 can be used, for example, to load and store data and/or instructions for system 300. For one embodiment, the system memory 315 can include any suitable volatile memory, e.g., suitable DRAM. In some embodiments, the system memory 315 can include double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, the system control module 310 can include one or more input/output (I/O) controllers to provide interfaces to the NVM/storage device 320 and the communication interface(s) 325.
For example, the NVM/storage device 320 can be used to store data and/or instructions. The NVM/storage device 320 can include any suitable non-volatile memory (e.g., flash memory) and/or any suitable non-volatile storage device(s) (e.g., one or more hard disk drives (HDDs), one or more compact disc (CD) drives, and/or one or more digital versatile disc (DVD) drives).
The NVM/storage device 320 can include storage resources that are physically part of the device on which system 300 is installed, or that are accessible to the device without being part of it. For example, the NVM/storage device 320 can be accessed over a network via the communication interface(s) 325.
The communication interface(s) 325 can provide an interface for system 300 to communicate over one or more networks and/or with any other suitable device. System 300 can communicate wirelessly with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols.
For one embodiment, at least one of the processor(s) 305 can be packaged together with the logic of one or more controllers of the system control module 310 (e.g., the memory controller module 330). For one embodiment, at least one of the processor(s) 305 can be packaged together with the logic of one or more controllers of the system control module 310 to form a system in package (SiP). For one embodiment, at least one of the processor(s) 305 can be integrated on the same die with the logic of one or more controllers of the system control module 310. For one embodiment, at least one of the processor(s) 305 can be integrated on the same die with the logic of one or more controllers of the system control module 310 to form a system on chip (SoC).
In various embodiments, system 300 can be, without limitation, a server, a workstation, a desktop computing device, or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.). In various embodiments, system 300 can have more or fewer components and/or a different architecture. For example, in some embodiments system 300 includes one or more cameras, a keyboard, a liquid crystal display (LCD) screen (including a touch-screen display), a non-volatile memory port, multiple antennas, a graphics chip, an application-specific integrated circuit (ASIC), and a speaker.
Note that this application can be implemented in software and/or a combination of software and hardware, e.g., using an application-specific integrated circuit (ASIC), a general-purpose computer, or any other similar hardware device. In one embodiment, the software program of this application can be executed by a processor to realize the steps or functions described above. Likewise, the software programs of this application (including related data structures) can be stored in a computer-readable recording medium, e.g., RAM, a magnetic or optical drive, a floppy disk, or similar devices. In addition, some steps or functions of this application can be implemented in hardware, e.g., as circuits that cooperate with a processor to perform the individual steps or functions.
In addition, part of this application can be applied as a computer program product, e.g., computer program instructions which, when executed by a computer, can invoke or provide the methods and/or technical solutions according to this application through the computer's operation. Those skilled in the art should understand that the forms in which computer program instructions exist in a computer-readable medium include, without limitation, source files, executable files, and installation package files; correspondingly, the ways in which computer program instructions are executed by a computer include, without limitation: the computer executing the instructions directly; the computer compiling the instructions and then executing the corresponding compiled program; the computer reading and executing the instructions; or the computer reading and installing the instructions and then executing the corresponding installed program. Here, a computer-readable medium can be any available computer-readable storage medium or communication medium accessible to the computer.
Communication media include media by which communication signals containing, e.g., computer-readable instructions, data structures, program modules, or other data are transmitted from one system to another. Communication media can include guided transmission media (such as cables and wires, e.g., optical fiber, coaxial, etc.) and wireless (unguided transmission) media capable of propagating energy waves, such as acoustic, electromagnetic, RF, microwave, and infrared media. Computer-readable instructions, data structures, program modules, or other data can be embodied, for example, as a modulated data signal in a wireless medium (such as a carrier wave or a similar mechanism embodied as part of spread spectrum technology). The term "modulated data signal" means a signal whose one or more characteristics are altered or set in such a way as to encode information in the signal. The modulation can be an analog, digital, or hybrid modulation technique.
By way of example and not limitation, computer-readable storage media can include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storing information such as computer-readable instructions, data structures, program modules, or other data. For example, computer-readable storage media include, without limitation: volatile memory, such as random access memory (RAM, DRAM, SRAM); non-volatile memory, such as flash memory, various read-only memories (ROM, PROM, EPROM, EEPROM), and magnetic and ferromagnetic/ferroelectric memories (MRAM, FeRAM); magnetic and optical storage devices (hard disks, tapes, CDs, DVDs); and other media now known or later developed capable of storing computer-readable information/data for use by a computer system.
Here, an embodiment according to this application includes an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein, when the computer program instructions are executed by the processor, the apparatus is triggered to run the methods and/or technical solutions based on the multiple embodiments of this application described above.
For those skilled in the art, it is clear that this application is not limited to the details of the above exemplary embodiments and can be realized in other specific forms without departing from the spirit or essential characteristics of this application. The embodiments should therefore be regarded in all respects as exemplary and non-limiting; the scope of this application is defined by the appended claims rather than by the above description, and it is therefore intended that all changes falling within the meaning and range of equivalency of the claims be embraced within this application. No reference sign in the claims should be construed as limiting the claim concerned. Moreover, the word "comprising" obviously does not exclude other elements or steps, and the singular does not exclude the plural. Multiple units or apparatuses recited in an apparatus claim can also be realized by one unit or apparatus through software or hardware. Words such as "first" and "second" denote names and do not denote any particular order.

Claims (24)

  1. A method for acquiring an electronic fence, applied to a command device, the method comprising:
    acquiring a scene image captured by an unmanned aerial vehicle (UAV) device;
    acquiring a user operation of a commanding user of the command device on a target region in the scene image, and generating a target electronic fence for the target region based on the user operation, wherein the target electronic fence comprises corresponding target fence attributes and target image position information of the target region in the scene image, the target image position information is used to determine geographic position information of the target region and to perform collision detection between the target electronic fence and an augmented reality device and/or UAV device of a duty user, and the augmented reality device and the command device are in a cooperative execution state of the same cooperative task.
  2. The method of claim 1, wherein the method further comprises:
    determining the geographic position information of the target region based on the target image position information and camera pose information of the scene image.
  3. The method of claim 1 or 2, wherein the geographic position information is further used to determine real-time scene image position information of the target electronic fence in real-time scene images captured by the UAV device, and the target region is overlaid in the real-time scene images presented by the augmented reality device and/or the UAV device.
  4. The method of claim 1 or 2, wherein the geographic position information is further used to determine overlay position information of the target electronic fence in the live view of the duty user's augmented reality device, and the target region is overlaid in the augmented reality device's live view.
  5. The method of claim 1, wherein the method further comprises:
    presenting an electronic map corresponding to the cooperative task;
    determining map position information of the target electronic fence from the geographic position information of the target electronic fence, and presenting the target electronic fence in the electronic map based on the map position information.
  6. The method of claim 1 or 5, wherein the geographic position information of the target electronic fence is further used to overlay the target electronic fence in the electronic map of the cooperative task presented by the augmented reality device and/or UAV device.
  7. The method of claim 1, wherein the method further comprises:
    acquiring and presenting the electronic map corresponding to the cooperative task, and determining an operation electronic fence of an operation region based on a user operation of the commanding user on the operation region in the electronic map, wherein the operation electronic fence comprises corresponding operation fence attributes and operation map position information of the operation region in the electronic map, and the operation map position information is used to determine operation geographic position information of the operation region and to perform collision detection between the operation electronic fence and the duty user's augmented reality device and/or UAV device.
  8. The method of claim 1, wherein the target electronic fence and/or the corresponding operation electronic fence is used to update or establish an electronic fence set of the cooperative task, wherein the electronic fence set comprises at least one electronic fence, each electronic fence comprising corresponding fence attributes and geographic position information of a fence region, the target electronic fence or the operation electronic fence being one of the at least one electronic fence, and the operation electronic fence being determined based on a user operation of the commanding user on an operation region in the electronic map.
  9. The method of claim 8, wherein the method further comprises:
    acquiring and presenting fence warning prompt information of a corresponding duty device regarding one of the at least one electronic fence, wherein the fence warning prompt information indicates that real-time duty position information of the duty device satisfies fence attribute information of the one of the at least one electronic fence and that a distance difference to the one of the at least one electronic fence is less than or equal to a warning distance threshold, the duty device comprising the duty user's augmented reality device and/or the UAV device.
  10. The method of claim 8, wherein the method further comprises:
    acquiring and presenting fence alarm prompt information of the corresponding duty device regarding one of the at least one electronic fence, wherein the fence alarm prompt information indicates that the real-time duty position information of the duty device does not satisfy the fence attribute information of the one of the at least one electronic fence, the duty device comprising the duty user's augmented reality device and/or the UAV device.
  11. The method of claim 9 or 10, wherein the fence attribute information comprises any one of the following:
    the electronic fence is a no-entry fence;
    the electronic fence is a no-exit fence.
  12. A method for acquiring an electronic fence, applied to a network device, the method comprising:
    acquiring an electronic fence set of a corresponding cooperative task, wherein the electronic fence set comprises at least one electronic fence, each electronic fence comprising corresponding fence attributes and geographic position information of a fence region, wherein duty devices of the cooperative task comprise a duty user's augmented reality device and/or UAV device, and the geographic position information of the fence region is used to perform collision detection between the duty devices and the electronic fence.
  13. The method of claim 12, wherein the method further comprises:
    acquiring real-time duty position information of the duty device;
    determining a corresponding fence warning event or fence alarm event based on the real-time duty position information and the geographic position information of the at least one electronic fence.
  14. The method of claim 13, wherein determining the corresponding fence warning event or fence alarm event based on the real-time duty position and the geographic position information of the at least one electronic fence comprises:
    transforming the real-time duty position into a plane rectangular coordinate system to determine corresponding real-time duty two-dimensional (2D) position information;
    determining 2D position information of the at least one electronic fence from the geographic position information of the at least one electronic fence;
    determining the corresponding fence warning event or fence alarm event based on the real-time duty 2D position information and the 2D position information of the at least one electronic fence.
  15. The method of claim 14, wherein determining the corresponding fence warning event or fence alarm event based on the real-time duty 2D position information and the 2D position information of the at least one electronic fence comprises:
    performing collision detection based on the real-time duty 2D position information and the 2D position information of a given electronic fence among the at least one electronic fence, to determine whether the real-time duty 2D position information satisfies fence attribute information of the given electronic fence;
    if the fence attribute information of the given electronic fence is not satisfied, generating a fence alarm event regarding the given electronic fence;
    if the fence attribute information of the given electronic fence is satisfied, determining a corresponding distance difference based on the real-time duty 2D position information and the 2D position information of the given electronic fence, and, if the distance difference is less than or equal to a warning distance threshold, generating a fence warning event regarding the given electronic fence.
  16. The method of claim 15, wherein the fence region of the given electronic fence is circular in shape, and the collision detection comprises:
    computing distance information from the real-time duty 2D position information to the center of the circle based on the real-time duty 2D position information and 2D position information of the center of the fence region;
    determining the radius of the circle from the 2D position information of the center and 2D position information of any point on the circle;
    determining whether the real-time duty 2D position information satisfies the fence attribute information of the given electronic fence based on the distance information, the radius of the circle, and the fence attribute information of the given electronic fence.
  17. The method of claim 15, wherein the fence region of the given electronic fence is polygonal in shape, and the collision detection comprises:
    determining corresponding duty ray information from the real-time duty 2D position information using the ray-casting method, and determining, based on the duty ray information, the number of intersections between the duty ray and the fence region of the given electronic fence;
    determining, from the number of intersections, an inside/outside relation between the real-time duty 2D position information and the fence region of the given electronic fence, wherein the inside/outside relation indicates whether the real-time duty 2D position information is inside or outside the fence region of the given electronic fence;
    determining whether the real-time duty 2D position information satisfies the fence attribute information of the given electronic fence based on the inside/outside relation and the fence attribute information of the given electronic fence.
  18. The method of claim 17, wherein determining the corresponding distance difference based on the real-time duty 2D position information and the 2D position information of the given electronic fence comprises:
    computing the distance from the real-time duty 2D position information to each edge of the polygon of the fence region, thereby determining multiple distances;
    determining the minimum of the multiple distances and taking that minimum distance as the corresponding distance difference;
    determining whether to generate a fence warning event regarding the given electronic fence according to whether the distance difference is less than or equal to the warning distance threshold.
  19. The method of any one of claims 12 to 18, wherein the method further comprises:
    generating prompt information corresponding to fence warning events and/or fence alarm events at a preset time interval, and delivering the prompt information to participating devices of the cooperative task.
  20. A command device for presenting a target electronic fence, the device comprising:
    a one-one module for acquiring a scene image captured by a UAV device;
    a one-two module for acquiring a user operation of a commanding user of the command device on a target region in the scene image and generating a target electronic fence for the target region based on the user operation, wherein the target electronic fence comprises corresponding target fence attributes and target image position information of the target region in the scene image, the target image position information is used to determine geographic position information of the target region and to perform collision detection between the target electronic fence and a duty user's augmented reality device and/or UAV device, and the augmented reality device and the command device are in a cooperative execution state of the same cooperative task.
  21. A network device for presenting a target electronic fence, the device comprising:
    a two-one module for acquiring an electronic fence set of a corresponding cooperative task, wherein the electronic fence set comprises at least one electronic fence, each electronic fence comprising corresponding fence attributes and geographic position information of a fence region, wherein duty devices of the cooperative task comprise a duty user's augmented reality device and/or UAV device, and the geographic position information of the fence region is used to perform collision detection between the duty devices and the electronic fence.
  22. A computer device, wherein the device comprises:
    a processor; and
    a memory arranged to store computer-executable instructions which, when executed, cause the processor to perform the steps of the method of any one of claims 1 to 19.
  23. A computer-readable storage medium having computer programs/instructions stored thereon, characterized in that the computer programs/instructions, when executed, cause a system to perform the steps of the method of any one of claims 1 to 19.
  24. A computer program product comprising computer programs/instructions, characterized in that the computer programs/instructions, when executed by a processor, implement the steps of the method of any one of claims 1 to 19.
PCT/CN2022/111993 2022-06-30 2022-08-12 Method, device, medium and program product for acquiring an electronic fence WO2024000746A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210778277.1 2022-06-30
CN202210778277.1A CN115460539B (zh) 2022-06-30 2022-06-30 Method, device, medium and program product for acquiring an electronic fence

Publications (1)

Publication Number Publication Date
WO2024000746A1 true WO2024000746A1 (zh) 2024-01-04

Family

ID=84297218

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/111993 WO2024000746A1 (zh) 2022-06-30 2022-08-12 Method, device, medium and program product for acquiring an electronic fence

Country Status (2)

Country Link
CN (1) CN115460539B (zh)
WO (1) WO2024000746A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116934057B * 2023-09-15 2023-12-08 深圳优立全息科技有限公司 Camera placement method, apparatus and device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180343538A1 (en) * 2017-05-25 2018-11-29 International Business Machines Corporation Responding to changes in social traffic in a geofenced area
CN109561282A * 2018-11-22 2019-04-02 亮风台(上海)信息科技有限公司 Method and device for presenting ground action auxiliary information
CN109656364A * 2018-08-15 2019-04-19 亮风台(上海)信息科技有限公司 Method and device for presenting augmented reality content on user equipment
CN110248157A * 2019-05-25 2019-09-17 亮风台(上海)信息科技有限公司 Method and device for duty scheduling
CN112995894A * 2021-02-09 2021-06-18 中国农业大学 UAV monitoring system and method
CN113741698A * 2021-09-09 2021-12-03 亮风台(上海)信息科技有限公司 Method and device for determining and presenting target marker information
CN114186011A * 2021-12-14 2022-03-15 广联达科技股份有限公司 Electronic-fence-based management method and apparatus, computer device and storage medium

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104574722A * 2013-10-12 2015-04-29 北京航天长峰科技工业集团有限公司 Multi-sensor-based harbor security control system
US9335764B2 (en) * 2014-05-27 2016-05-10 Recreational Drone Event Systems, Llc Virtual and augmented reality cockpit and operational control systems
WO2016154943A1 (en) * 2015-03-31 2016-10-06 SZ DJI Technology Co., Ltd. Systems and methods for geo-fencing device communications
US11453494B2 (en) * 2016-05-20 2022-09-27 Skydio, Inc. Unmanned aerial vehicle area surveying
CN107783554A * 2016-08-26 2018-03-09 北京臻迪机器人有限公司 UAV flight control method and apparatus
US12096308B2 (en) * 2016-12-15 2024-09-17 Conquer Your Addiction Llc Systems and methods for conducting/defending digital warfare or conflict
CN107132852B * 2017-03-31 2019-10-25 西安戴森电子技术有限公司 UAV supervision cloud platform based on a BeiDou geofence differential positioning module
US10905105B2 (en) * 2018-06-19 2021-02-02 Farm Jenny LLC Farm asset tracking, monitoring, and alerts
CA3156348A1 (en) * 2018-10-12 2020-04-16 Armaments Research Company Inc. REMOTE GUN MONITORING AND SUPPORT SYSTEM
CN109656259A * 2018-11-22 2019-04-19 亮风台(上海)信息科技有限公司 Method and device for determining image position information of a target object
CN109459029B * 2018-11-22 2021-06-29 亮风台(上海)信息科技有限公司 Method and device for determining navigation route information of a target object
CN109669474B * 2018-12-21 2022-02-15 国网安徽省电力有限公司淮南供电公司 Prior-knowledge-based adaptive hover position optimization algorithm for multi-rotor UAVs
CN111385738A * 2018-12-27 2020-07-07 北斗天地股份有限公司 Position monitoring method and apparatus
CN109949576A * 2019-04-24 2019-06-28 英华达(南京)科技有限公司 Traffic monitoring method and system
CN111783579B * 2020-06-19 2022-05-06 江苏濠汉信息技术有限公司 Construction worker fence-crossing detection system based on UAV visual analysis
KR20220036399A (ko) * 2020-09-14 2022-03-23 금오공과대학교 산학협력단 Mixed reality monitoring system using wearable devices
CN112287928A * 2020-10-20 2021-01-29 深圳市慧鲤科技有限公司 Prompting method and apparatus, electronic device and storage medium
CN112861725A * 2021-02-09 2021-05-28 深圳市慧鲤科技有限公司 Navigation prompting method and apparatus, electronic device and storage medium
CN113391639A * 2021-06-28 2021-09-14 苏州追风者航空科技有限公司 Outdoor space sightseeing method and system
CN114295135A * 2021-12-22 2022-04-08 中寰卫星导航通信有限公司 Position information determination method and apparatus, and storage medium

Also Published As

Publication number Publication date
CN115460539B (zh) 2023-12-15
CN115460539A (zh) 2022-12-09

Similar Documents

Publication Publication Date Title
US11698449B2 (en) User interface for displaying point clouds generated by a LiDAR device on a UAV
RU2741443C1 (ru) Способ и устройство для планирования точек выборки для съёмки и картографирования, терминал управления и носитель для хранения данных
AU2018450490B2 (en) Surveying and mapping system, surveying and mapping method and device, and apparatus
KR102129408B1 (ko) 공공 지도 또는 외부 지도와 매칭되는 무인 비행체에 의해 촬영된 이미지의 레이어로부터 측량 데이터를 획득하는 방법 및 장치
WO2024000733A1 (zh) Method and device for presenting marker information of a target object
CN109459029B (zh) Method and device for determining navigation route information of a target object
US10733777B2 (en) Annotation generation for an image network
AU2018449839B2 (en) Surveying and mapping method and device
CN109656319B (zh) Method and device for presenting ground action auxiliary information
CN109561282A (zh) Method and device for presenting ground action auxiliary information
CN113378605B (zh) Multi-source information fusion method and apparatus, electronic device and storage medium
WO2023051027A1 (zh) Method and device for acquiring real-time image information of a target object
WO2024000746A1 (zh) Method, device, medium and program product for acquiring an electronic fence
CN110248157B (zh) Method and device for duty scheduling
AU2018450016B2 (en) Method and apparatus for planning sample points for surveying and mapping, control terminal and storage medium
KR102012361B1 (ko) Method and apparatus for providing a digital moving map service for the safe operation of unmanned aerial vehicles
AU2018450271B2 (en) Operation control system, and operation control method and device
CN116299534A (zh) Vehicle pose determination method, apparatus, device and storage medium
CN115760964B (zh) Method and device for acquiring screen position information of a target object
US11906322B2 (en) Environment map management device, environment map management system, environment map management method, and program
US20230281836A1 (en) Object movement imaging

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22948838

Country of ref document: EP

Kind code of ref document: A1