CN117268343A - Remote target searching method and device, augmented reality equipment and storage medium


Info

Publication number
CN117268343A
Authority
CN
China
Prior art keywords
target
searched
camera
positioning information
instruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311226833.5A
Other languages
Chinese (zh)
Inventor
黄旭伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Luxshare Precision Technology Nanjing Co Ltd
Original Assignee
Luxshare Precision Technology Nanjing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Luxshare Precision Technology Nanjing Co Ltd filed Critical Luxshare Precision Technology Nanjing Co Ltd
Priority to CN202311226833.5A priority Critical patent/CN117268343A/en
Publication of CN117268343A publication Critical patent/CN117268343A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04 Interpretation of pictures
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a remote target searching method, a remote target searching device, augmented reality equipment and a storage medium. The method comprises the following steps: when a target searching instruction is detected, remotely controlling the view finding range of a camera on the intelligent equipment in a region to be searched according to the target searching instruction and collecting scene images in the view finding range; creating a three-dimensional scene graph in the view finding range according to the scene image returned by the camera, and displaying the three-dimensional scene graph; when a target selection instruction is detected, selecting a target to be searched from the three-dimensional scene graph according to the target selection instruction, and acquiring current positioning information of the camera and current relative position information of the target to be searched relative to the camera; and determining the positioning information of the target to be searched according to the current positioning information and the current relative position information of the camera, so that the user can search for the target as if physically present in the region to be searched, and the remote target can be found quickly and accurately.

Description

Remote target searching method and device, augmented reality equipment and storage medium
Technical Field
The present invention relates to the field of augmented reality technologies, and in particular, to a method and apparatus for remotely searching for a target, an augmented reality device, and a storage medium.
Background
In everyday life and work, it is often necessary to find a person or object within a certain area, for example locating a vehicle in a large parking lot, finding keys in a room, or looking for a person on a street. This is straightforward if the area can be searched in person, but remotely finding a target is a difficult task when it is inconvenient or undesirable to visit the area to be searched.
If a target needs to be found remotely, the images acquired by a camera are generally screened visually to determine whether the target to be found is present. However, if the scene of the area to be searched is complex and the positions of the various objects or people are cluttered, searching for the target is very difficult and consumes considerable time and effort. Visual screening based on the acquired images alone is far from sufficient when the target must be located quickly and accurately.
Disclosure of Invention
The invention provides a remote target searching method, a remote target searching device, augmented reality equipment and a storage medium, which solve the problem that remote target searching is very difficult, so that a user can search as if physically present in the region to be searched and a remote target can be found quickly and accurately.
According to one aspect of the invention, a remote searching method of a target is provided, and the remote searching method is applied to an augmented reality device, wherein the augmented reality device establishes remote communication connection with an intelligent device in an area to be searched; the area to be searched comprises a target to be searched; the intelligent device comprises a camera; the method comprises the following steps:
when a target searching instruction is detected, remotely controlling the view finding range of a camera on the intelligent equipment in the region to be searched according to the target searching instruction, and collecting scene images in the view finding range;
creating a three-dimensional scene graph in the view finding range according to the scene image returned by the camera, and displaying the three-dimensional scene graph;
when a target selection instruction is detected, selecting a target to be searched from the three-dimensional scene graph according to the target selection instruction, and acquiring current positioning information of the camera and current relative position information of the target to be searched relative to the camera;
and determining the positioning information of the target to be searched according to the current positioning information of the camera and the current relative position information.
According to another aspect of the present invention, there is provided a remote finding apparatus of a target, integrated in an augmented reality device, the augmented reality device establishing a remote communication connection with an intelligent device in an area to be found; the area to be searched comprises a target to be searched; the intelligent device comprises a camera; the device comprises:
The control module is used for remotely controlling the view finding range of the camera on the intelligent equipment in the region to be found according to the target finding instruction when the target finding instruction is detected, and collecting scene images in the view finding range;
the display module is used for creating a three-dimensional scene graph in the view finding range according to the scene image returned by the camera and displaying the three-dimensional scene graph;
the acquisition module is used for selecting a target to be searched from the three-dimensional scene graph according to the target selection instruction when the target selection instruction is detected, and acquiring current positioning information of the camera and current relative position information of the target to be searched relative to the camera;
and the positioning module is used for determining the positioning information of the target to be searched according to the current positioning information of the camera and the current relative position information.
According to another aspect of the present invention, there is provided an augmented reality device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the remote finding method of the object according to any one of the embodiments of the present invention.
According to another aspect of the present invention, there is provided a computer readable storage medium storing computer instructions for causing a processor to execute a remote search method for achieving the object according to any of the embodiments of the present invention.
According to the technical scheme of the embodiment, when the target searching instruction is detected, the view finding range of the camera on the intelligent device in the area to be searched is remotely controlled according to the target searching instruction, and scene images in the view finding range are collected; a three-dimensional scene graph in the view finding range is created according to the scene image returned by the camera and displayed; when a target selection instruction is detected, the target to be searched is selected from the three-dimensional scene graph according to the target selection instruction, and the current positioning information of the camera and the current relative position information of the target to be searched relative to the camera are acquired; and the positioning information of the target to be searched is determined according to the current positioning information and the current relative position information of the camera. The intelligent device in the area to be searched is controlled by the augmented reality device to acquire images of that area, the augmented reality device creates a three-dimensional scene graph of the area, and the target to be searched is found in the three-dimensional scene graph and its positioning obtained. This solves the problem that remotely searching for a target is very difficult, and lets the user search the area as if physically present there, so that the remote target can be found quickly and accurately.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for remote finding of an object according to a first embodiment of the present invention;
FIG. 2 is a flow chart of a method for remote searching of an object according to a second embodiment of the present invention;
FIG. 3 is a schematic architecture diagram of a remote finding system for targets;
fig. 4 is a flowchart of a method for obtaining current positioning information and current relative position information of a camera according to a second embodiment of the present invention;
FIG. 5 is a flow chart of a method for determining a target found path provided by an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a remote searching device for objects according to a third embodiment of the present invention;
fig. 7 is a schematic structural diagram of an augmented reality device implementing a remote finding method of an object of an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example I
Fig. 1 is a flowchart of a remote target searching method according to an embodiment of the present invention. The method may be performed by a remote target searching device, which may be implemented in hardware and/or software and may be configured in an augmented reality device.
The remote target searching method provided by the embodiment of the invention is executed by an augmented reality device. An augmented reality device is a device based on augmented reality (AR) technology, which applies various technical means to superimpose computer-generated virtual objects, or non-geometric information about real objects, onto a real-world scene, thereby augmenting the real world. The augmented reality device may be, for example, an AR helmet, AR glasses, and the like.
In the embodiment of the invention, the augmented reality equipment establishes remote communication connection with the intelligent equipment in the area to be searched; the area to be searched contains the target to be searched; the intelligent device comprises a camera. The intelligent device in the area to be searched can be an immovable camera, various robots, intelligent trolleys or unmanned aerial vehicles, etc., and of course, the intelligent device can also be other electronic devices which are provided with cameras and can collect images of surrounding environment.
As shown in fig. 1, the method includes:
and S110, when a target searching instruction is detected, remotely controlling the view finding range of a camera on the intelligent device in the region to be searched according to the target searching instruction, and acquiring a scene image in the view finding range.
The target searching instruction is an instruction indicating that a target to be searched needs to be found; it can be generated through a key operation or voice trigger on the augmented reality equipment, or forwarded through a server. The area to be searched is the area in which the target to be searched is looked for, for example a venue, a room or another place of a certain size; the area to be searched may be set manually by the user. The view finding range of the camera on the intelligent device in the area to be searched is determined by the pose of the intelligent device, the mounting position of the camera on the intelligent device and the pose of the camera. By controlling the intelligent device and the camera to change their poses, the view finding range of the camera in the area to be searched can be changed, so that the portion of the area currently being searched changes.
In this embodiment, when the target searching instruction is detected, the augmented reality device remotely controls the intelligent device in the region to be searched according to the target searching instruction, so that the camera of the intelligent device can change the pose under the remote control of the target searching instruction, thereby changing the view finding range of the camera in the region to be searched, enabling the camera to collect scene images of different view finding ranges in the region to be searched, and transmitting the scene images back to the augmented reality device.
In an optional implementation manner of the embodiment of the present invention, when a target searching instruction is detected, remotely controlling a view range of a camera on the intelligent device in the area to be searched according to the target searching instruction, including: when a target searching instruction is detected, the target searching instruction is sent to a server; and forwarding the target searching instruction to the intelligent equipment through a server so as to control the viewing range of a camera on the intelligent equipment in the area to be searched and acquire a scene image in the viewing range.
In another optional implementation manner of the embodiment of the present invention, when a target searching instruction is detected, remotely controlling a view range of a camera on the intelligent device in the area to be searched according to the target searching instruction, including: and when the target searching instruction is detected, the target searching instruction is sent to the intelligent equipment, so that the intelligent equipment controls the view finding range of the camera in the area to be searched and acquires the scene image in the view finding range.
S120, creating a three-dimensional scene graph in a view finding range according to the scene image returned by the camera, and displaying the three-dimensional scene graph.
Wherein the three-dimensional scene graph is a three-dimensional graph with depth information created by an augmented reality device from a scene image using augmented reality techniques.
In this embodiment, after receiving a scene image returned by the intelligent device, the augmented reality device uses augmented reality technology to superimpose non-geometric information about the real objects in the scene image onto the real-world scene, thereby enhancing the scene image, creating a three-dimensional scene graph with depth information, and displaying the three-dimensional scene graph in the picture display area of the augmented reality device. The user can thus view the three-dimensional scene graph of the area to be searched through the augmented reality device as if physically present in that area, and can quickly and accurately find the target to be searched within it.
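As one illustration, a three-dimensional representation with depth information can be obtained by back-projecting a per-pixel depth map through a pinhole camera model. The Python sketch below shows this standard computation; the intrinsic parameters fx, fy, cx, cy and the function name are illustrative assumptions, and the embodiment's actual scene-graph creation may rely on other augmented reality techniques.

    import numpy as np

    def depth_to_point_cloud(depth, fx, fy, cx, cy):
        """Back-project a per-pixel depth map (meters) into camera-frame 3D points
        using a pinhole model -- a minimal sketch of how a three-dimensional scene
        representation with depth information could be built from a scene image."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        return np.stack([x, y, z], axis=-1)   # shape (h, w, 3)

    # Toy example: a flat surface 2 m in front of a 640x480 camera.
    depth = np.full((480, 640), 2.0)
    cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
    print(cloud.shape)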
In an alternative implementation of the embodiment of the present invention, the user does not know where the object to be found is located in the area to be searched. The user remotely controls the view finding range of the camera on the intelligent device in the region to be searched and the collection of scene images within that range, and searches for the target in the three-dimensional scene graph that the augmented reality device creates from those scene images and displays. If the target to be found is not found, the camera on the intelligent device is controlled to change its view finding range and collect a scene image in the next view finding range, and the search is repeated until the target to be searched is found or the whole region to be searched has been traversed.
In another alternative implementation of the embodiment of the present invention, the user knows where the object to be found is located in the area to be searched. The position of the target to be searched is marked and identified in the three-dimensional scene graph through an Ultra Wide Band (UWB) positioning system, and a track route to the target is constructed. Guided by the track route, the user controls the view finding range of the camera on the intelligent device in the region to be searched, collects scene images within that range, searches for the target in the three-dimensional scene graph, and confirms whether the target to be searched is at the identified position.
And S130, selecting a target to be searched from the three-dimensional scene graph according to the target selection instruction when the target selection instruction is detected, and acquiring current positioning information of the camera and current relative position information of the target to be searched relative to the camera.
The object to be found can be a person, an animal, an object, a building or the like. The target selection instruction is an instruction for selecting a target to be found from a three-dimensional scene graph displayed by the augmented reality device; the target selection instruction may be a gesture instruction, a click trigger instruction, a voice instruction, or the like.
The current positioning information of the camera is the positioning information of the camera at the moment of collecting the image containing the target to be searched, and the positioning information can be the positioning information of the camera under the coordinate system of the area to be searched, or the positioning information can be obtained through GPS positioning. The current relative position information of the target to be searched relative to the camera is the positioning information of the target to be searched relative to the camera at the moment of collecting the image containing the target to be searched.
In this embodiment, after finding a target to be found from a three-dimensional scene graph displayed by the augmented reality device, the user issues a target selection instruction to the augmented reality device. When the augmented reality device detects the target selection instruction, the augmented reality device identifies the target selection instruction, and determines the target to be searched corresponding to the target selection instruction in the three-dimensional scene graph, so that the target to be searched is selected. After a user selects a target to be searched, current positioning information of a camera which collects an image containing the target to be searched and current relative position information of the target to be searched relative to the camera in an actual scene are obtained.
And S140, determining the positioning information of the target to be searched according to the current positioning information and the current relative position information of the camera.
After determining the current positioning information of the camera and the current relative position information of the target to be searched relative to the camera, the embodiment can further determine the positioning information of the target to be searched in the searching area according to the current positioning information and the current relative position information. The positioning information of the target to be searched can be positioning information under the coordinate system of the area to be searched, and also can be GPS positioning information.
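As a minimal illustration of this step, assuming the camera's positioning is expressed as a position plus a heading in the coordinate system of the area to be searched, the target's positioning can be obtained by rotating its camera-frame offset into that coordinate system and adding the camera position. The sketch below is a planar (yaw-only) simplification, not the embodiment's full computation; all names and values are assumptions.

    import math

    def locate_target(cam_xyz, cam_yaw_deg, rel_xyz_cam):
        """Combine the camera's current positioning information (position in the
        area-to-be-searched coordinate system plus heading) with the target's
        current relative position in the camera frame (x forward, y left) to
        obtain the target's positioning information. A full solution would use
        the complete camera orientation (roll/pitch/yaw)."""
        yaw = math.radians(cam_yaw_deg)
        dx, dy, dz = rel_xyz_cam
        # Rotate the camera-frame offset into the area coordinate system (yaw only).
        gx = cam_xyz[0] + dx * math.cos(yaw) - dy * math.sin(yaw)
        gy = cam_xyz[1] + dx * math.sin(yaw) + dy * math.cos(yaw)
        gz = cam_xyz[2] + dz
        return (gx, gy, gz)

    # Camera at (10, 4, 1.5) m heading 90 degrees; target 3 m ahead, 1 m to the left.
    print(locate_target((10.0, 4.0, 1.5), 90.0, (3.0, 1.0, 0.0)))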
According to the technical scheme of the embodiment, when the target searching instruction is detected, the view finding range of the camera on the intelligent device in the area to be searched is remotely controlled according to the target searching instruction, and scene images in the view finding range are collected; a three-dimensional scene graph in the view finding range is created according to the scene image returned by the camera and displayed; when a target selection instruction is detected, the target to be searched is selected from the three-dimensional scene graph according to the target selection instruction, and the current positioning information of the camera and the current relative position information of the target to be searched relative to the camera are acquired; and the positioning information of the target to be searched is determined according to the current positioning information and the current relative position information of the camera, so that the user can search for the target as if physically present in the region to be searched, and the remote target can be found quickly and accurately.
Example II
Fig. 2 is a flowchart of a remote target searching method according to a second embodiment of the present invention. This embodiment further refines the foregoing embodiment: the augmented reality device also establishes a communication connection with a terminal device. FIG. 3 is a schematic architecture diagram of a remote target searching system. As shown in fig. 3, the remote searching system includes the intelligent device 1, the augmented reality device 2 and the terminal device 3, and the communication connection between the augmented reality device 2 and the terminal device 3 can be established through the server 4. The terminal device may include any augmented reality device such as an AR helmet and AR glasses, and may further include a head-mounted display, a smart phone, a smart watch, and the like, but is not limited thereto. On this basis, after the positioning information of the object to be found is determined according to the positioning information of the camera and the relative position information, the method further comprises: acquiring positioning information of the terminal device and a three-dimensional imaging space model of the area to be searched; determining a target searching path according to the positioning information of the terminal device, the positioning information of the target to be searched and the three-dimensional imaging space model; and sending the target searching path to the terminal device.
As shown in fig. 2, the method includes:
s210, when the target searching instruction is detected, the target searching instruction is sent to the server.
The augmented reality device can be connected with the intelligent device in the area to be searched through the server, and remote control of the augmented reality device on the intelligent device is achieved.
S220, forwarding a target searching instruction to the intelligent device through the server so as to control the view finding range of a camera on the intelligent device in the area to be searched, and collecting scene images in the view finding range.
In this embodiment, after receiving the target searching instruction, the server forwards it to the smart device. The intelligent device can then move, change the camera angle or switch cameras according to the instruction, thereby changing the view finding range of the camera in the area to be searched, and collect scene images in the view finding range through the camera. By setting a sufficient number of cameras at different positions in the area to be searched and adjusting the view finding range of the cameras of the intelligent equipment in the area, all scenes in the area to be searched can be collected, which makes it easier to find the target to be searched.
In an optional embodiment of the present invention, controlling a view range of a camera on the smart device in the area to be searched includes at least one of:
(1) Controlling the intelligent device to switch the picture between at least two cameras under the condition that the intelligent device comprises at least two cameras so as to adjust the view finding range of the cameras of the intelligent device in the area to be found;
(2) Under the condition that the cameras on the intelligent equipment are rotatable, controlling the intelligent equipment to adjust the rotation angle or focal length of a single camera so as to adjust the view finding range of the camera of the intelligent equipment in the area to be found;
(3) Controlling the intelligent equipment to move under the condition that the intelligent equipment is movable so as to adjust the view finding range of a camera of the intelligent equipment in the area to be found;
(4) And under the condition that the camera on the intelligent device is not movable, controlling the intelligent device to adjust the focal length of the single camera so as to adjust the view finding range of the camera of the intelligent device in the area to be found.
In this embodiment, the view finding range of the camera in the area to be found may be adjusted according to whether the intelligent device is movable, whether the camera on the intelligent device is rotatable or adjustable in focal length, and the number of cameras on the intelligent device, by adopting any one of the above methods, or a combination of at least two of the above methods. For example, for an indoor intelligent monitoring device, the intelligent monitoring device includes a plurality of monitors fixedly mounted on a wall, and the view finding range of the camera of the intelligent device in the area to be found can be set in a combined mode of the mode (1) and the mode (4). For the robot with a plurality of cameras fixedly installed, the view finding range of the cameras of the intelligent device in the area to be found can be adjusted in a combined mode of the mode (1), the mode (3) and the mode (4). For an unmanned aerial vehicle provided with a rotatable camera, the view finding range of the camera of the intelligent device in the area to be found can be adjusted in a combined mode of the mode (2) and the mode (3).
By controlling the intelligent device to move and controlling it to adjust the rotation angle of a single camera, the view finding range of the camera can be adjusted to cover a wider portion of, or even the whole, area to be searched, so that scene images of the area from more angles can be collected. By controlling the intelligent equipment to adjust the camera focal length, the view finding range can be narrowed and a locally magnified scene image collected, which prevents a target located in a corner from being missed.
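Purely as an illustration of how the four control modes above could be combined according to device capabilities, the following sketch selects applicable modes from a hypothetical capability description; the class and field names are assumptions, and real control logic depends on the concrete device.

    from dataclasses import dataclass

    @dataclass
    class DeviceCapabilities:
        camera_count: int
        camera_rotatable: bool
        device_movable: bool

    def plan_view_adjustment(caps: DeviceCapabilities):
        """Pick, in a capability-driven order, which of the control modes (1)-(4)
        described above could be combined to change the view finding range."""
        actions = []
        if caps.camera_count >= 2:
            actions.append("switch between cameras")                # mode (1)
        if caps.camera_rotatable:
            actions.append("rotate camera / change focal length")   # mode (2)
        if caps.device_movable:
            actions.append("move the intelligent device")           # mode (3)
        if not caps.device_movable and not caps.camera_rotatable:
            actions.append("adjust focal length of fixed camera")   # mode (4)
        return actions

    # A drone with one rotatable camera -> modes (2) and (3), as in the text.
    print(plan_view_adjustment(DeviceCapabilities(1, True, True)))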
S230, creating a three-dimensional scene graph in a view finding range according to the scene image returned by the camera, and displaying the three-dimensional scene graph.
S240, selecting a target to be searched from the three-dimensional scene graph according to the target selection instruction when the target selection instruction is detected.
The three-dimensional scene graph displayed by the augmented reality device contains all captured objects, and the target to be searched needs to be selected from it.
In an optional embodiment of the present invention, when a first target selection instruction is detected, a target to be found corresponding to a trigger position of the first target selection instruction in the three-dimensional scene graph is obtained; the first target selection instruction is a trigger operation instruction.
Wherein the first target selection instruction is an instruction generated by a triggering operation of the user. For example, a user triggers a target in a three-dimensional scene graph displayed by an augmented reality device, generating a first target selection instruction.
In this embodiment, when the augmented reality device detects the first target selection instruction, the augmented reality device obtains a trigger position of the first target selection instruction in the three-dimensional scene graph, and selects a target to be found in the three-dimensional scene graph by comparing targets corresponding to the trigger position in the three-dimensional scene graph.
The embodiment supports a man-machine interaction mode of triggering operation, and is convenient for a user to select a target to be found in a three-dimensional scene graph displayed by the augmented reality device.
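As an illustrative sketch of resolving the trigger position to an object, the following code performs a simple 2D hit test against hypothetical screen-space bounding boxes of the displayed objects; a real implementation would instead ray-cast into the three-dimensional scene graph. The object names and box coordinates are invented for the example.

    def pick_target(trigger_uv, object_boxes):
        """Resolve a trigger-operation position (u, v) to the object it lands on.
        Each candidate object is a hypothetical 2D screen-space bounding box
        (u_min, v_min, u_max, v_max)."""
        u, v = trigger_uv
        for name, (u0, v0, u1, v1) in object_boxes.items():
            if u0 <= u <= u1 and v0 <= v <= v1:
                return name
        return None

    boxes = {"car_A": (100, 200, 180, 260), "car_B": (300, 210, 370, 280)}
    print(pick_target((320, 240), boxes))   # -> "car_B"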
In another optional embodiment of the present invention, when a second target selection instruction is detected, target identification information corresponding to the second target selection instruction is obtained, and a target to be found mapped in the three-dimensional scene graph by the target identification information is determined; the second target selection instruction is a voice instruction.
Wherein the second target selection instruction is an instruction obtained by voice recognition of the user. For example, target identification information is added to each target in a three-dimensional scene graph displayed by the augmented reality device, a user selects target identification information corresponding to a target to be searched displayed in the three-dimensional scene graph in a voice mode, and a second target selection instruction is determined by recognizing voice sent by the user.
In this embodiment, when the augmented reality device detects the second target selection instruction, the augmented reality device identifies target identification information corresponding to the second target selection instruction, queries a data mapping library according to the target identification information, and determines a target to be found, which is mapped in the three-dimensional scene graph by the target identification information.
The embodiment supports a human-computer interaction mode of voice, and is convenient for a user to select a target to be found in a three-dimensional scene graph displayed by the augmented reality device.
S250, acquiring current positioning information of the camera and current relative position information of a target to be searched relative to the camera; and determining the positioning information of the target to be searched according to the current positioning information and the current relative position information of the camera.
In this embodiment, after the augmented reality device determines the target to be found in the region to be found, the positioning information of the target to be found is determined according to the current positioning information and the current relative position information of the camera.
In an optional embodiment of this embodiment, obtaining current positioning information of the camera and current relative position information of the object to be found relative to the camera includes:
s251, acquiring current positioning information of the intelligent equipment and current relative position information of the camera relative to the intelligent equipment;
S252, determining current positioning information of the camera according to the current positioning information of the intelligent device and the current relative position information of the camera relative to the intelligent device;
s253, obtaining depth information of a target to be searched in a three-dimensional scene graph acquired by a camera;
s254, determining the current relative position information of the target to be searched relative to the camera according to the depth information.
The depth information of the target to be searched in the three-dimensional scene graph acquired by the camera may be obtained through a Time of Flight (ToF) technology, for example, by transmitting continuous infrared light pulses with specific wavelength to the target, receiving the light signals returned by the target by a specific sensor, and calculating the Time of Flight or phase difference of the light to and fro, thereby obtaining the depth information of the target.
In this embodiment, fig. 4 is a flowchart of a method for obtaining current positioning information and current relative position information of a camera according to a second embodiment of the present invention. As shown in fig. 4, first, current positioning information of an intelligent device at the moment of acquiring a target image to be searched is acquired according to a GPS positioning system carried on the intelligent device; acquiring relative position information of the camera on the intelligent equipment; and then, determining the current positioning information of the camera according to the relative position information of the camera on the intelligent device and the current positioning information of the intelligent device. And secondly, calculating the current relative position information of the target to be searched relative to the camera according to the depth information corresponding to the target to be searched in the three-dimensional scene graph acquired by the camera.
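The following sketch illustrates steps S251-S254 under simplifying assumptions: the camera's current positioning is approximated as the intelligent device's positioning plus a fixed mounting offset (device rotation ignored), and the target's current relative position is recovered from its pixel coordinates and ToF depth through a pinhole model. All parameter names and values are illustrative.

    def camera_position(device_xyz, mount_offset_xyz):
        """S251/S252: current camera positioning = intelligent-device positioning
        plus the camera's (assumed fixed) mounting offset on the device.
        Sketch only; the device body's rotation is ignored here."""
        return tuple(d + o for d, o in zip(device_xyz, mount_offset_xyz))

    def target_offset_from_depth(u, v, depth_m, fx, fy, cx, cy):
        """S253/S254: turn the ToF depth of the selected target's pixel (u, v)
        into its current relative position in the camera frame (pinhole model)."""
        x = (u - cx) * depth_m / fx
        y = (v - cy) * depth_m / fy
        return (x, y, depth_m)

    cam = camera_position((120.0, 45.0, 0.0), (0.1, 0.0, 1.2))
    rel = target_offset_from_depth(400, 250, 6.5, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
    print(cam, rel)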
S260, acquiring positioning information of the terminal equipment and a three-dimensional imaging space model of the area to be searched.
The three-dimensional imaging space model is obtained by modeling the area to be searched based on a three-dimensional imaging technology.
S270, determining a target searching path according to the positioning information of the terminal equipment, the positioning information of the target to be searched and the three-dimensional imaging space model, and sending the target searching path to the terminal equipment.
Because the embodiment of the invention determines the positioning information of the target to be searched in the region to be searched in a remote control mode of the augmented reality equipment, the positioning information may need to be sent to another terminal equipment later so that a user carrying the terminal equipment can go to the region to be searched according to the positioning information of the target to be searched to find the target to be searched. Therefore, in order to solve the above problems, the embodiment of the invention can acquire the positioning of the terminal equipment and the three-dimensional imaging space model of the area to be searched, and determine the target searching path between the terminal equipment and the target to be searched according to the positioning information of the terminal equipment, the positioning information of the target to be searched and the three-dimensional imaging space model, so that a party carrying the terminal equipment can quickly find the target to be searched under the guidance of the target searching path.
In an optional embodiment of the invention, a target searching path that avoids obstacles in the region to be searched is generated by a path planning technique according to the positioning information of the terminal equipment, the positioning information of the target to be searched and the three-dimensional imaging space model.
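As one possible illustration of such path planning, the sketch below runs a breadth-first search over an occupancy grid assumed to be derived from the three-dimensional imaging space model, returning a path from the terminal device's cell to the target's cell that avoids obstacle cells; the embodiment itself does not prescribe any specific planning algorithm.

    from collections import deque

    def plan_path(grid, start, goal):
        """Breadth-first search on an occupancy grid: cells marked 1 are obstacles
        derived from the three-dimensional imaging space model, 0 is free space.
        Returns a list of grid cells from start to goal, or None if unreachable."""
        rows, cols = len(grid), len(grid[0])
        queue, came_from = deque([start]), {start: None}
        while queue:
            cur = queue.popleft()
            if cur == goal:
                path = []
                while cur is not None:
                    path.append(cur)
                    cur = came_from[cur]
                return path[::-1]
            r, c = cur
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                        and (nr, nc) not in came_from:
                    came_from[(nr, nc)] = cur
                    queue.append((nr, nc))
        return None

    grid = [[0, 0, 0, 0],
            [1, 1, 0, 1],
            [0, 0, 0, 0],
            [0, 1, 1, 0]]
    print(plan_path(grid, (0, 0), (3, 3)))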
In another optional embodiment of the present invention, the determining the target finding path according to the positioning information of the terminal device, the positioning information of the target to be found, and the three-dimensional imaging spatial model includes:
s271, displaying the identification of the terminal equipment in the three-dimensional imaging space model according to the positioning information of the terminal equipment, and displaying the identification of the target to be searched in the three-dimensional imaging space model according to the positioning information of the target to be searched;
s272, tracking a motion track of the triggering operation on the three-dimensional imaging space model between the identification of the terminal equipment and the identification of the target to be searched when the triggering operation on the three-dimensional imaging space model displayed in the augmented reality equipment is detected;
s273, determining the motion trail as a target finding path.
The triggering operation of the user on the three-dimensional imaging space model displayed in the augmented reality device can be a finger triggering operation, for example clicking the position of the identifier of the terminal device on the displayed model and then moving the finger to the position of the identifier of the target to be searched while simulating walking and direction within the area to be searched. It can also be an eye-movement triggering operation, for example keeping the eye focus on the position of the identifier of the terminal device on the displayed model for a preset time, and then moving the eye focus to the position of the identifier of the object to be searched while simulating movement and direction within the area to be searched, holding it there for a further preset time.
In this embodiment, fig. 5 is a flowchart of a method for determining a target searching path according to an embodiment of the present invention. As shown in fig. 5, on the three-dimensional imaging space model displayed in the augmented reality device, the identifier of the terminal device is displayed according to the positioning information of the terminal device, and the identifier of the target to be searched is displayed according to the positioning information of the target to be searched, so that the user can clearly see the spatial positions in the area to be searched of both the party carrying the terminal device and the target to be searched. The user can then perform a triggering operation on the three-dimensional imaging space model in the augmented reality device; this operation forms a motion trail between the identifier of the terminal device and the identifier of the target to be searched, simulating the path along which the party carrying the terminal device would find the target in the area to be searched. A trajectory tracking algorithm tracks the motion trail of the triggering operation on the three-dimensional imaging space model between the two identifiers, yielding the target searching path by which the terminal device can find the target to be searched.
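To illustrate S272-S273 under stated assumptions, the sketch below accepts a tracked motion trail (a list of sampled trigger positions in model coordinates) as the target searching path only when it starts near the terminal device's identifier and ends near the target's identifier. The snap radius and the absence of jitter filtering are simplifications, not part of the claimed method.

    import math

    def trail_to_path(trail, terminal_mark, target_mark, snap_radius=0.5):
        """Accept the tracked motion trail of the trigger operation as the target
        searching path only if it starts at the terminal device's identifier and
        ends at the target's identifier (within a snap radius, in model coords)."""
        if not trail:
            return None
        if math.dist(trail[0], terminal_mark) > snap_radius:
            return None
        if math.dist(trail[-1], target_mark) > snap_radius:
            return None
        # Snap the endpoints exactly onto the two identifiers.
        return [terminal_mark] + trail[1:-1] + [target_mark]

    trail = [(0.1, 0.0), (1.0, 0.5), (2.0, 1.0), (2.9, 1.9)]
    print(trail_to_path(trail, (0.0, 0.0), (3.0, 2.0)))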
In addition, a path modification function may also be provided so that the party using the augmented reality device can modify the target searching path after it is generated.
In this embodiment, the user's triggering operation on the three-dimensional imaging space model corresponding to the region to be searched simulates the path along which the party carrying the terminal device would search for the target in the region to be searched, so that this party can quickly find the target to be searched under the guidance of the target searching path.
According to the technical scheme of the embodiment, when the target searching instruction is detected, the view finding range of the camera on the intelligent device in the area to be searched is remotely controlled according to the target searching instruction, and scene images in the view finding range are collected; a three-dimensional scene graph in the view finding range is created according to the scene image returned by the camera and displayed; when a target selection instruction is detected, the target to be searched is selected from the three-dimensional scene graph according to the target selection instruction, and the current positioning information of the camera and the current relative position information of the target to be searched relative to the camera are acquired; the positioning information of the target to be searched is determined according to the current positioning information and the current relative position information of the camera; the positioning information of the terminal equipment and the three-dimensional imaging space model of the region to be searched are acquired; and the target searching path is determined according to the positioning information of the terminal equipment, the positioning information of the target to be searched and the three-dimensional imaging space model, and is sent to the terminal device. In this way, a target searching path between the terminal device and the target to be searched is determined from the positioning of the target remotely found by the augmented reality device and the positioning of the terminal device located in the region to be searched, and is sent to the terminal device, so that the party using the terminal device can quickly find the target to be searched under the guidance of the target searching path.
Example III
Fig. 6 is a schematic structural diagram of a remote target searching device according to a third embodiment of the present invention. As shown in fig. 6, the device is configured in an augmented reality device, and the augmented reality device establishes a remote communication connection with a smart device in an area to be searched; the area to be searched comprises a target to be searched; the intelligent device comprises a camera. The device comprises a control module 310, a display module 320, a selection module 330, and a positioning module 340, wherein,
the control module 310 is configured to, when detecting a target seeking instruction, remotely control a view range of a camera on the smart device in the area to be sought according to the target seeking instruction, and collect a scene image in the view range;
the display module 320 is configured to create a three-dimensional scene graph in the view-finding range according to the scene image returned by the camera, and display the three-dimensional scene graph;
the selecting module 330 is configured to select, when a target selection instruction is detected, a target to be found from the three-dimensional scene graph according to the target selection instruction, and obtain current positioning information of the camera and current relative position information of the target to be found relative to the camera;
And the positioning module 340 is configured to determine positioning information of the target to be found according to the current positioning information of the camera and the current relative position information.
According to the technical scheme of the embodiment, when the target searching instruction is detected, the view finding range of the camera on the intelligent device in the area to be searched is remotely controlled according to the target searching instruction, and scene images in the view finding range are collected; a three-dimensional scene graph in the view finding range is created according to the scene image returned by the camera and displayed; when a target selection instruction is detected, the target to be searched is selected from the three-dimensional scene graph according to the target selection instruction, and the current positioning information of the camera and the current relative position information of the target to be searched relative to the camera are acquired; and the positioning information of the target to be searched is determined according to the current positioning information and the current relative position information of the camera, so that the user can search for the target as if physically present in the region to be searched, and the remote target can be found quickly and accurately.
Optionally, the control module 310 includes:
an instruction sending unit, configured to send a target finding instruction to a server when the target finding instruction is detected;
And the control unit is used for forwarding the target searching instruction to the intelligent equipment through the server so as to control the view finding range of the camera on the intelligent equipment in the area to be searched.
Optionally, the control unit is specifically configured to:
controlling the intelligent device to switch the picture between at least two cameras under the condition that the intelligent device comprises at least two cameras so as to adjust the view finding range of the cameras of the intelligent device in the area to be found;
under the condition that the cameras on the intelligent equipment are rotatable, controlling the intelligent equipment to adjust the rotation angle or focal length of a single camera so as to adjust the view finding range of the camera of the intelligent equipment in the area to be found;
controlling the intelligent equipment to move under the condition that the intelligent equipment is movable so as to adjust the view finding range of a camera of the intelligent equipment in the area to be found;
and under the condition that the camera on the intelligent device is not movable, controlling the intelligent device to adjust the focal length of the single camera so as to adjust the view finding range of the camera of the intelligent device in the area to be found.
Optionally, the selecting module 330 includes:
the first determining unit is used for acquiring a target to be searched corresponding to a triggering position of a first target selection instruction in the three-dimensional scene graph when the first target selection instruction is detected; the first target selection instruction is a trigger operation instruction;
the second determining unit is used for acquiring target identification information corresponding to a second target selection instruction when the second target selection instruction is detected, and determining a target to be searched, which is mapped in the three-dimensional scene graph by the target identification information; the second target selection instruction is a voice instruction.
Optionally, the selecting module 330 includes:
the position acquisition unit is used for acquiring current positioning information of the intelligent equipment and current relative position information of the camera relative to the intelligent equipment;
a third determining unit, configured to determine current positioning information of the camera according to current positioning information of the intelligent device and current relative position information of the camera relative to the intelligent device;
the depth acquisition unit is used for acquiring depth information of the target to be searched in the three-dimensional scene graph acquired by the camera;
and the fourth determining unit is used for determining the current relative position information of the target to be searched relative to the camera according to the depth information.
Optionally, the augmented reality device further establishes a communication connection with a terminal device;
optionally, the apparatus further includes:
the acquisition module is used for acquiring the positioning information of the terminal equipment and the three-dimensional imaging space model of the area to be searched after determining the positioning information of the object to be searched according to the positioning information of the camera and the relative position information;
the path determining module is used for determining a target searching path according to the positioning information of the terminal equipment, the positioning information of the target to be searched and the three-dimensional imaging space model;
and the path sending module is used for sending the target finding path to the terminal equipment.
Optionally, the path determining module is specifically configured to:
displaying the identification of the terminal equipment in the three-dimensional imaging space model according to the positioning information of the terminal equipment, and displaying the identification of the target to be searched in the three-dimensional imaging space model according to the positioning information of the target to be searched;
tracking a motion trail of triggering operation on the three-dimensional imaging space model between the identification of the terminal equipment and the identification of the target to be searched when the triggering operation on the three-dimensional imaging space model displayed in the augmented reality equipment is detected;
And determining the motion trail as a target finding path.
The remote searching device for the target provided by the embodiment of the invention can execute the remote searching method for the target provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the executing method.
Example IV
Fig. 7 shows a schematic diagram of the architecture of an augmented reality device 10 that may be used to implement an embodiment of the present invention. Augmented reality devices are intended to represent devices based on augmented reality technology, such as wearable devices (e.g., helmets, glasses, watches, etc.) and other similar computing devices based on augmented reality technology. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 7, the augmented reality device 10 includes at least one processor 11, and a memory such as a Read Only Memory (ROM) 12, a Random Access Memory (RAM) 13, etc. communicatively connected to the at least one processor 11, wherein the memory stores a computer program executable by the at least one processor, and the processor 11 can perform various appropriate actions and processes according to the computer program stored in the Read Only Memory (ROM) 12 or the computer program loaded from the storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data required for the operation of the augmented reality device 10 can also be stored. The processor 11, the ROM 12 and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to bus 14.
Various components in the augmented reality device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the augmented reality device 10 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 11 performs the various methods and processes described above, such as a remote finding method of the target.
In some embodiments, the remote finding method of the target may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the augmented reality device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into RAM 13 and executed by processor 11, one or more steps of the above-described remote finding method of the object may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the remote finding method of the target in any other suitable way (e.g., by means of firmware).
Various implementations of the systems and techniques described herein may be realized in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor and can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out the methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowcharts and/or block diagrams to be implemented. A computer program may execute entirely on a machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer-readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer-readable storage medium may be a machine-readable signal medium. More specific examples of a machine-readable storage medium include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described herein can be implemented on an augmented reality device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which the user can provide input to the augmented reality device. Other kinds of devices may also be used to provide for interaction with the user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described herein can be implemented in a computing system that includes a back-end component (e.g., a data server), or a middleware component (e.g., an application server), or a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described herein), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include local area networks (LANs), wide area networks (WANs), blockchain networks, and the Internet.
The computing system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also known as a cloud computing server or cloud host, which is a host product in the cloud computing service system and overcomes the drawbacks of high management difficulty and weak service expansibility found in traditional physical hosts and VPS (virtual private server) services.
It should be appreciated that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, as long as the desired results of the technical solution of the present invention are achieved; no limitation is imposed herein.
The above specific embodiments do not limit the scope of protection of the present invention. It should be apparent to those skilled in the art that various modifications, combinations, sub-combinations, and substitutions may be made depending on design requirements and other factors. Any modifications, equivalent substitutions, and improvements made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (10)

1. A remote target searching method, characterized in that the method is applied to an augmented reality device, wherein the augmented reality device establishes a remote communication connection with an intelligent device in an area to be searched; the area to be searched comprises a target to be searched; the intelligent device comprises a camera; and the method comprises the following steps:
when a target searching instruction is detected, remotely controlling a view finding range of the camera on the intelligent device in the area to be searched according to the target searching instruction, and collecting scene images within the view finding range;
creating a three-dimensional scene graph of the view finding range according to the scene images returned by the camera, and displaying the three-dimensional scene graph;
when a target selection instruction is detected, selecting the target to be searched from the three-dimensional scene graph according to the target selection instruction, and acquiring current positioning information of the camera and current relative position information of the target to be searched relative to the camera;
and determining positioning information of the target to be searched according to the current positioning information of the camera and the current relative position information.
2. The method according to claim 1, wherein, when a target searching instruction is detected, remotely controlling the view finding range of the camera on the intelligent device in the area to be searched according to the target searching instruction comprises:
when the target searching instruction is detected, sending the target searching instruction to a server;
and forwarding, by the server, the target searching instruction to the intelligent device, so as to control the view finding range of the camera on the intelligent device in the area to be searched.
3. The method according to claim 1 or 2, wherein controlling the view finding range of the camera on the intelligent device in the area to be searched comprises at least one of the following:
under the condition that the intelligent device comprises at least two cameras, controlling the intelligent device to switch the picture between the at least two cameras, so as to adjust the view finding range of the cameras of the intelligent device in the area to be searched;
under the condition that the camera on the intelligent device is rotatable, controlling the intelligent device to adjust the rotation angle or the focal length of a single camera, so as to adjust the view finding range of the camera of the intelligent device in the area to be searched;
under the condition that the intelligent device is movable, controlling the intelligent device to move, so as to adjust the view finding range of the camera of the intelligent device in the area to be searched;
and under the condition that the camera on the intelligent device is not movable, controlling the intelligent device to adjust the focal length of the single camera, so as to adjust the view finding range of the camera of the intelligent device in the area to be searched.
4. The method according to claim 1, wherein, when a target selection instruction is detected, selecting the target to be searched from the three-dimensional scene graph according to the target selection instruction comprises:
when a first target selection instruction is detected, acquiring the target to be searched corresponding to a trigger position of the first target selection instruction in the three-dimensional scene graph, wherein the first target selection instruction is a trigger operation instruction;
and when a second target selection instruction is detected, acquiring target identification information corresponding to the second target selection instruction, and determining the target to be searched to which the target identification information is mapped in the three-dimensional scene graph, wherein the second target selection instruction is a voice instruction.
5. The method according to claim 1, wherein acquiring the current positioning information of the camera and the current relative position information of the target to be searched relative to the camera comprises:
acquiring current positioning information of the intelligent device and current relative position information of the camera relative to the intelligent device;
determining the current positioning information of the camera according to the current positioning information of the intelligent device and the current relative position information of the camera relative to the intelligent device;
acquiring depth information of the target to be searched in the three-dimensional scene graph acquired by the camera;
and determining the current relative position information of the target to be searched relative to the camera according to the depth information.
6. The method according to claim 1, wherein the augmented reality device further establishes a communication connection with a terminal device; and after determining the positioning information of the target to be searched according to the current positioning information of the camera and the current relative position information, the method further comprises:
acquiring positioning information of the terminal device and a three-dimensional imaging space model of the area to be searched;
determining a target searching path according to the positioning information of the terminal device, the positioning information of the target to be searched, and the three-dimensional imaging space model;
and sending the target searching path to the terminal device.
7. The method according to claim 6, wherein determining the target searching path according to the positioning information of the terminal device, the positioning information of the target to be searched, and the three-dimensional imaging space model comprises:
displaying an identification of the terminal device in the three-dimensional imaging space model according to the positioning information of the terminal device, and displaying an identification of the target to be searched in the three-dimensional imaging space model according to the positioning information of the target to be searched;
when a trigger operation on the three-dimensional imaging space model displayed in the augmented reality device is detected, tracking a motion trail of the trigger operation on the three-dimensional imaging space model between the identification of the terminal device and the identification of the target to be searched;
and determining the motion trail as the target searching path.
8. A remote target searching device, characterized in that the device is integrated in an augmented reality device, wherein the augmented reality device establishes a remote communication connection with an intelligent device in an area to be searched; the area to be searched comprises a target to be searched; the intelligent device comprises a camera; and the device comprises:
a control module, configured to, when a target searching instruction is detected, remotely control a view finding range of the camera on the intelligent device in the area to be searched according to the target searching instruction, and collect scene images within the view finding range;
a display module, configured to create a three-dimensional scene graph of the view finding range according to the scene images returned by the camera, and display the three-dimensional scene graph;
a selection module, configured to, when a target selection instruction is detected, select the target to be searched from the three-dimensional scene graph according to the target selection instruction, and acquire current positioning information of the camera and current relative position information of the target to be searched relative to the camera;
and a positioning module, configured to determine positioning information of the target to be searched according to the current positioning information of the camera and the current relative position information.
9. An augmented reality device, the augmented reality device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor, to enable the at least one processor to perform the remote target searching method according to any one of claims 1-7.
10. A computer-readable storage medium storing computer instructions for causing a processor to perform the remote target searching method according to any one of claims 1-7.
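For illustration only, and without limiting the claims, the positioning chain recited in claims 1 and 5 can be summarized by the following short Python sketch. The Pose representation, the unit-viewing-ray-times-depth model, and all names below are assumptions made for this example.

from dataclasses import dataclass
import numpy as np

@dataclass
class Pose:
    position: np.ndarray  # (3,) translation in the world frame of the area to be searched
    rotation: np.ndarray  # (3, 3) rotation matrix mapping the local frame to the world frame

def compose(parent: Pose, child: Pose) -> Pose:
    # Chain a parent pose with a child pose expressed in the parent's frame.
    return Pose(position=parent.position + parent.rotation @ child.position,
                rotation=parent.rotation @ child.rotation)

def locate_target(device_pose: Pose, camera_offset: Pose,
                  viewing_ray_cam: np.ndarray, depth: float) -> np.ndarray:
    # Claim 5: the camera's positioning is the intelligent device's positioning
    # composed with the camera's relative position on the device; the target's
    # relative position is derived from its depth in the three-dimensional scene.
    camera_pose = compose(device_pose, camera_offset)
    relative = viewing_ray_cam / np.linalg.norm(viewing_ray_cam) * depth
    # Claim 1: target positioning = camera positioning combined with the relative position.
    return camera_pose.position + camera_pose.rotation @ relative

# Example: device at the origin, camera mounted 0.1 m forward, target selected
# straight ahead at a measured depth of 5 m, giving a world position of [0, 0, 5.1].
device = Pose(np.zeros(3), np.eye(3))
offset = Pose(np.array([0.0, 0.0, 0.1]), np.eye(3))
print(locate_target(device, offset, np.array([0.0, 0.0, 1.0]), 5.0))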
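Similarly, the path-tracing step of claims 6 and 7 can be sketched as follows; the event representation and all names are assumptions introduced for illustration and do not limit the claims.

from typing import List, Tuple

Point3D = Tuple[float, float, float]

def trace_target_searching_path(trigger_points: List[Point3D],
                                terminal_marker: Point3D,
                                target_marker: Point3D,
                                snap_radius: float = 0.2) -> List[Point3D]:
    # Track the motion trail of a trigger operation on the displayed
    # three-dimensional imaging space model: recording starts once the trail
    # touches the terminal device's identification and stops once it reaches
    # the identification of the target to be searched.
    def near(p: Point3D, q: Point3D) -> bool:
        return sum((a - b) ** 2 for a, b in zip(p, q)) <= snap_radius ** 2

    trail: List[Point3D] = []
    recording = False
    for point in trigger_points:
        if not recording and near(point, terminal_marker):
            recording = True
        if recording:
            trail.append(point)
            if near(point, target_marker):
                break
    return trail  # the motion trail used as the target searching path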
CN202311226833.5A 2023-09-21 2023-09-21 Remote target searching method and device, augmented reality equipment and storage medium Pending CN117268343A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311226833.5A CN117268343A (en) 2023-09-21 2023-09-21 Remote target searching method and device, augmented reality equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311226833.5A CN117268343A (en) 2023-09-21 2023-09-21 Remote target searching method and device, augmented reality equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117268343A 2023-12-22

Family

ID=89207527

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311226833.5A Pending CN117268343A (en) 2023-09-21 2023-09-21 Remote target searching method and device, augmented reality equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117268343A (en)

Similar Documents

Publication Publication Date Title
US11394950B2 (en) Augmented reality-based remote guidance method and apparatus, terminal, and storage medium
CN111325796B (en) Method and apparatus for determining pose of vision equipment
CN111649724B (en) Visual positioning method and device based on mobile edge calculation
US8180107B2 (en) Active coordinated tracking for multi-camera systems
JP7432595B2 (en) Cooperative virtual interface
CN111968229A (en) High-precision map making method and device
KR20220028042A (en) Pose determination method, apparatus, electronic device, storage medium and program
KR102347586B1 (en) Method for providing augmented reality user interface
CN115719436A (en) Model training method, target detection method, device, equipment and storage medium
CN109656319B (en) Method and equipment for presenting ground action auxiliary information
CN113378605B (en) Multi-source information fusion method and device, electronic equipment and storage medium
CN113910224A (en) Robot following method and device and electronic equipment
WO2023273415A1 (en) Positioning method and apparatus based on unmanned aerial vehicle, storage medium, electronic device, and product
CN113610702B (en) Picture construction method and device, electronic equipment and storage medium
WO2023088127A1 (en) Indoor navigation method, server, apparatus and terminal
CN115790621A (en) High-precision map updating method and device and electronic equipment
CN114187509B (en) Object positioning method and device, electronic equipment and storage medium
CN117268343A (en) Remote target searching method and device, augmented reality equipment and storage medium
WO2022250605A1 (en) Navigation guidance methods and navigation guidance devices
CN114721562B (en) Processing method, apparatus, device, medium and product for digital object
US20220012462A1 (en) Systems and Methods for Remote Measurement using Artificial Intelligence
CN117765194A (en) Implementation method, device, equipment and medium for on-line graph construction
CN115773759A (en) Indoor positioning method, device and equipment of autonomous mobile robot and storage medium
CN115019167A (en) Fusion positioning method, system, equipment and storage medium based on mobile terminal
CN116844133A (en) Target detection method, device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination