WO2022170736A1 - Navigation prompt method and apparatus, and electronic device, computer-readable storage medium, computer program and program product - Google Patents


Info

Publication number
WO2022170736A1
Authority
WO
WIPO (PCT)
Prior art keywords
target object
real scene
pose information
risk
collision
Prior art date
Application number
PCT/CN2021/106909
Other languages
French (fr)
Chinese (zh)
Inventor
陈思平
张国伟
Original Assignee
深圳市慧鲤科技有限公司
Application filed by 深圳市慧鲤科技有限公司
Publication of WO2022170736A1 publication Critical patent/WO2022170736A1/en

Classifications

    • G06V 20/10: Terrestrial scenes (Scenes; Scene-specific elements)
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06N 3/04: Neural network architecture, e.g. interconnection topology
    • G06N 3/08: Neural network learning methods
    • G06T 17/05: Geographic models (three-dimensional [3D] modelling)
    • G06V 20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 20/44: Event detection in video content
    • G06V 2201/07: Target detection

Definitions

  • the present disclosure relates to the technical field of navigation, and in particular, to a navigation prompting method, apparatus, electronic device, computer-readable storage medium, computer program, and program product.
  • Embodiments of the present disclosure provide at least a navigation prompting method, apparatus, electronic device, computer-readable storage medium, computer program, and program product.
  • an embodiment of the present disclosure provides a navigation prompt method, including:
  • in the case that the AR device is located within the geographic location range corresponding to the target area, detecting whether a target object exists in the real scene image;
  • an early warning prompt is given to the user through the AR device.
  • when the AR device is located in the target area, target detection can be performed on the real scene image collected by the AR device; if a target object exists in the real scene image and it is estimated that there is a risk of collision between the AR device and the target object, the AR device can be used to warn the user, so as to avoid a safety accident caused by a collision between the user and the target object during navigation, thereby improving the user's travel safety.
  • the determining whether the AR device is located within a geographic location range corresponding to the target area based on the real scene image includes:
  • the three-dimensional scene map is a map representing the real scene to which the target area belongs;
  • the first pose information of the AR device can be determined according to the real scene image and the preset three-dimensional scene map.
  • based on the first pose information of the AR device and the geographic location range corresponding to each area, it can be quickly determined whether the AR device is located in a target area requiring an alert, such as a road where vehicles pass, so as to ensure the user's safe travel.
  • judging whether there is a risk of collision between the AR device and the target object includes:
  • the relative pose information between the target object and the AR device, such as the relative distance and orientation relationship, can be determined based on the real scene image, so that the relative pose information can be used to estimate whether there is a risk of collision between the AR device and the target object, providing a guarantee for the user's safe travel.
  • the target object includes a static obstacle
  • the determining whether there is a risk of collision between the AR device and the target object based on the relative pose information includes:
  • a method for determining whether there is a risk of collision with a static obstacle is provided. For example, if the target object is a railing, then when the distance between the AR device and the railing is relatively short and the AR device is facing the railing, it is determined that there is a risk of collision because the AR device is approaching the railing.
  • the target object includes a dynamic obstacle
  • the determining whether there is a risk of collision between the AR device and the target object based on the relative pose information includes:
  • a method for determining whether there is a risk of collision with a dynamic obstacle is provided, which can determine the relative motion information between the target object and the AR device based on multiple real scene images; the relative motion information can include the relative motion direction, the relative motion speed, etc. In this way, the relative pose information and the relative motion information can be combined to estimate whether there is a risk of collision between the AR device and the target object, providing a guarantee for the user's safe travel.
  • the determining whether there is a risk of collision between the AR device and the dynamic obstacle based on the relative pose information and the relative motion information includes:
  • when the distance between the AR device and the dynamic obstacle is less than the second preset distance, determining, based on the relative motion information, whether the included angle between the driving direction of the dynamic obstacle relative to the AR device and the direction of the dynamic obstacle toward the AR device is less than a set angle threshold;
  • when the included angle is smaller than the set angle threshold, determining that there is a risk of collision between the AR device and the dynamic obstacle.
  • a method for determining whether there is a risk of collision in combination with the motion state of the AR device and/or the dynamic obstacle is provided, so that when the distance between the AR device and the dynamic obstacle is less than the second preset distance and the dynamic obstacle and the AR device are approaching each other, the risk of collision between the AR device and the dynamic obstacle can be accurately predicted.
  • the determining relative pose information between the target object and the AR device based on the real scene image includes:
  • determining the second pose information of the target object in the world coordinate system based on the detection area corresponding to the target object in the real scene image and the preset three-dimensional scene map;
  • the three-dimensional scene map is a map representing the real scene to which the target area belongs;
  • the relative pose information between the target object and the AR device can be quickly and accurately determined.
  • the performing an early warning prompt to the user through the AR device includes:
  • An early warning prompt is given to the user through the AR device in at least one of voice form, text form, animation form, warning sign, and flashing form.
  • the user can be warned in various ways, so as to improve the travel safety of the user.
  • an embodiment of the present disclosure provides a navigation prompting device, including:
  • the image acquisition part is configured to acquire the real scene image captured by the augmented reality AR device
  • a first judging part configured to judge whether the AR device is located within the geographic location range corresponding to the target area based on the real scene image
  • a target detection part configured to detect whether a target object exists in the real scene image when the AR device is located within the geographic location range corresponding to the target area
  • the second judgment part is configured to judge whether there is a risk of collision between the AR device and the target object when there is a target object in the real scene image;
  • the pre-warning and prompting part is configured to provide a pre-warning prompt to the user through the AR device when there is a risk of collision between the AR device and the target object.
  • embodiments of the present disclosure provide an electronic device, including a processor, a memory, and a bus, where the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate through the bus, and when the machine-readable instructions are executed by the processor, the navigation prompt method according to the first aspect is performed.
  • an embodiment of the present disclosure provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and the computer program is executed by a processor to execute the navigation prompt method according to the first aspect.
  • an embodiment of the present disclosure further provides a computer program, including computer-readable code; when the computer-readable code is executed in a computer device, a processor in the computer device executes the navigation prompt method described in the first aspect.
  • an embodiment of the present disclosure further provides a computer program product, including computer program instructions, the computer program instructions causing a computer to execute the navigation prompt method described in the first aspect.
  • FIG. 1A shows a schematic diagram 1 of an application scenario of an electronic device provided by an embodiment of the present disclosure
  • FIG. 1B shows a schematic diagram 2 of an application scenario of an electronic device provided by an embodiment of the present disclosure
  • FIG. 2 shows a flowchart of a navigation prompt method provided by an embodiment of the present disclosure
  • FIG. 3 shows a flowchart of a method for judging whether there is a risk of collision between an AR device and a target object provided by an embodiment of the present disclosure
  • FIG. 4 shows a schematic diagram of a scenario of an early warning prompt provided by an embodiment of the present disclosure
  • FIG. 5 shows a flowchart of another method for judging whether there is a risk of collision between an AR device and a target object provided by an embodiment of the present disclosure
  • FIG. 6 shows a schematic diagram of another early warning prompt provided by an embodiment of the present disclosure
  • FIG. 7 shows a flowchart of a method for determining relative pose information between a target object and an AR device provided by an embodiment of the present disclosure
  • FIG. 8 shows a schematic structural diagram of a navigation prompting device provided by an embodiment of the present disclosure
  • FIG. 9 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
  • a virtual indication arrow pointing to the destination can be displayed in the navigation device, and the user can walk forward according to the virtual indication arrow so as to reach the destination.
  • however, if the user only follows the virtual indication arrow, danger is likely to occur. Therefore, how to ensure the safety of the user during the navigation process is a technical problem to be solved by the present disclosure.
  • the present disclosure provides a navigation prompt method, and proposes that when the AR device is located in the target area, target detection can be performed on the real scene image collected by the AR device.
  • the AR device can be used to warn the user to avoid safety accidents caused by the collision between the user and the target object during the navigation process, thereby improving the user's travel safety.
  • the navigation prompting method provided by the embodiment of the present disclosure can be executed by an electronic device.
  • the electronic device 10 may include a processor 11 and an image acquisition part 12; in this way, the electronic device 10 can acquire a real scene image through the image acquisition part 12, use the processor 11 to analyze and process the real scene image to determine the position of the electronic device 10, and, when it is detected that the electronic device 10 may collide with another object, alert the user.
  • electronic devices may be implemented as AR devices, which may include devices capable of augmented reality, such as smartphones, tablets, and AR glasses.
  • the electronic device 10 may receive images of real scenes sent by other devices (such as AR devices) 30 through the network 20 . In this way, the electronic device 10 can locate the position of the AR device 30 based on the real scene image, and when detecting that the AR device 30 may collide with other objects, output warning information to the user through the human-computer interaction interface of the AR device 30 .
  • the electronic device may be implemented as a server.
  • the navigation prompt method includes the following S101-S105:
  • the AR device may include devices capable of augmented reality, such as smartphones, tablet computers, and AR glasses.
  • the AR device may have a built-in image acquisition component or an external image acquisition component. After the AR device enters the working state, it can capture real scene images in real time through the image acquisition component.
  • the target area may be a predetermined area with a certain driving danger, such as a road area, a landslide mountain road, a construction area, or other areas prone to danger.
  • the AR device can be positioned to determine the first pose information of the AR device in the world coordinate system, and it is then determined, based on the first pose information, whether the AR device is located within the geographic location range corresponding to the target area.
  • based on the determined first pose information of the AR device and the geographic location range, in the world coordinate system, of each area included in the three-dimensional scene map representing the real scene, it may be determined whether the first pose information of the AR device falls within the geographic location range corresponding to the target area; if it does, it can be determined that the AR device is located within the geographic location range corresponding to the target area, and otherwise the AR device is not located within the geographic location range of the target area.
  • the first pose information of the AR device can be represented by the first pose information of the image acquisition part of the AR device, which can include the position and orientation of the image acquisition part in the world coordinate system; the orientation can be represented by the included angles between the optical axis of the image acquisition component and the X, Y, and Z axes of the world coordinate system.
  • when positioning the AR device based on the real scene image captured by the AR device, the AR device may be positioned based on the real scene image and the three-dimensional scene map representing the real scene.
  • the AR device can also be positioned in combination with the built-in inertial measurement unit (IMU) of the AR device. The detailed positioning method will be introduced later.
  • the target object may be an obstacle that the user may encounter during driving, and may include static obstacles and dynamic obstacles.
  • in the case where the target area is a road, the target object may include vehicles, trees, railings, etc.
  • a pre-trained neural network for target detection can be used to detect whether a target object exists in the real scene image and to determine the detection information of the target object, such as the position information of the bounding rectangle of the target object in the image coordinate system, the position information of the center point of the bounding rectangle in the image coordinate system, or the pixel coordinate values, in the image coordinate system, of the pixel points constituting the target object.
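As an illustration only, the detection information described above might be carried in a small container like the following; the class name, fields, and box layout are assumptions, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """Illustrative container for a target object's detection info."""
    label: str       # e.g. "vehicle", "railing" (assumed label set)
    box: tuple       # (x_min, y_min, x_max, y_max) in image coordinates

    @property
    def center(self):
        # center point of the bounding rectangle in the image coordinate system
        x0, y0, x1, y1 = self.box
        return ((x0 + x1) / 2, (y0 + y1) / 2)

d = Detection("vehicle", (10, 20, 30, 60))
print(d.center)  # (20.0, 40.0)
```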
  • in the case where it is determined that a target object is detected in the real scene image, it can be further judged whether there is a risk of collision between the AR device and the target object; if there is a risk of collision, the AR device is used to warn the user.
  • when the AR device is used to alert the user, the alert may include:
  • An early warning prompt is given to the user through the AR device in at least one of voice form, text form, animation form, warning sign, and flashing form.
  • the text prompt information, animation prompt information, and warning signs may be virtual objects superimposed on the real scene image captured by the AR device, so as to achieve the purpose of prompting the user.
  • when the AR device is located in the target area, target detection can be performed on the real scene image collected by the AR device; if a target object exists in the real scene image and it is estimated that there is a risk of collision between the AR device and the target object, the AR device can be used to warn the user, so as to avoid a safety accident caused by a collision between the user and the target object during navigation, thereby improving the user's travel safety.
  • the three-dimensional scene map is a map representing the real scene to which the target area belongs;
  • if the first pose information of the AR device indicates that the AR device is located within the geographic location range corresponding to the target area, it is determined that the AR device is located within that range; otherwise, it is determined that the AR device is not located within the geographic location range corresponding to the target area.
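A minimal sketch of this membership check, assuming the geographic location range of a region is encoded as per-axis (min, max) bounds in the world coordinate system; the function and variable names are illustrative.

```python
def in_target_area(device_pos, area_range):
    """device_pos: (x, y, z) position of the AR device in the world
    coordinate system, taken from its first pose information.
    area_range: {'x': (min, max), 'y': (min, max), 'z': (min, max)},
    one possible encoding of a region's geographic location range."""
    return all(area_range[axis][0] <= p <= area_range[axis][1]
               for axis, p in zip("xyz", device_pos))

# Hypothetical road region: 50 m long, 8 m wide.
road = {"x": (0.0, 50.0), "y": (0.0, 8.0), "z": (-1.0, 3.0)}
print(in_target_area((12.0, 4.0, 0.0), road))  # True  (on the road)
print(in_target_area((60.0, 4.0, 0.0), road))  # False (outside it)
```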
  • a three-dimensional scene map representing the real scene can be generated from video or image data obtained by shooting the real scene in advance.
  • the three-dimensional scene map is generated based on the video data corresponding to the real scene, and a three-dimensional scene map that completely overlaps the real scene in the same coordinate system can be constructed; for example, a three-dimensional scene map that completely overlaps the real scene in the world coordinate system can be constructed. Therefore, the three-dimensional scene map can serve as a high-precision map of the real scene.
  • the first pose information of the AR device can be determined based on the real scene image and the pre-built three-dimensional scene map representing the real scene. The detailed process will be explained later.
  • real scene images are generally not collected continuously but at set time intervals.
  • positioning based on real scene images and the three-dimensional scene map consumes a lot of power. Therefore, when positioning the AR device to determine its first pose information, visual positioning based on real scene images can be used in combination with IMU positioning.
  • the first pose information of the AR device can be determined periodically by visual positioning, with the IMU used for positioning in between. For example, if visual positioning is performed every 10 seconds, the first pose information at the 10th, 20th, and 30th seconds is obtained by visual positioning; the first pose information at the 1st second after starting work can be estimated based on the initial pose information and the data collected by the IMU from the start of work to the 1st second, the first pose information at the 2nd second can be estimated from the first pose information at the 1st second and the data collected by the IMU from the 1st second to the 2nd second, and so on. Because the IMU accumulates error over time, the visual positioning result can be used periodically to correct it, so as to obtain first pose information with higher accuracy.
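The cadence above can be sketched in one dimension: a visual fix every `period` seconds, IMU dead reckoning in between. The 1-D state, the data layout, and all names are illustrative assumptions, not the patent's implementation.

```python
def fuse_poses(times, visual_fixes, imu_steps, period=10):
    """visual_fixes: {t: pose} available only at multiples of `period`.
    imu_steps: {t: displacement the IMU measured over (t-1, t]}."""
    poses = {}
    pose = visual_fixes[0]                  # pose when work starts
    for t in times:
        if t % period == 0 and t in visual_fixes:
            pose = visual_fixes[t]          # visual fix corrects IMU drift
        else:
            pose = pose + imu_steps[t]      # dead-reckon from the last pose
        poses[t] = pose
    return poses

# The IMU slightly over-estimates each 1 m step; the fix at t=10 corrects it.
fixes = {0: 0.0, 10: 10.0}
steps = {t: 1.02 for t in range(1, 13)}
poses = fuse_poses(range(1, 13), fixes, steps)
print(round(poses[9], 2), poses[10], round(poses[12], 2))  # 9.18 10.0 12.04
```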
  • the AR device can also be positioned based on Simultaneous Localization And Mapping (SLAM), for example by constructing a world coordinate system for the real scene in advance.
  • each area occupies a certain area in the three-dimensional scene map
  • the boundary line of the area can be marked in advance, and the position coordinates of the position points on the boundary line in the world coordinate system can be obtained.
  • the position coordinates of each position point on the boundary line in the world coordinate system may include the coordinate values of the position point along the X-axis, Y-axis, and Z-axis directions of the world coordinate system. When determining the geographic location range of the region in the world coordinate system from these position coordinates, the coordinate range of the region along the X-axis direction can be determined from the X-axis coordinate values of the position points on the boundary line, the coordinate range along the Y-axis direction from their Y-axis coordinate values, and the coordinate range along the Z-axis direction from their Z-axis coordinate values; the coordinate ranges of the region along the X-axis, Y-axis, and Z-axis directions are then taken together as the geographic location range corresponding to the region.
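The range construction above can be sketched as follows, assuming the region's geographic location range is taken as the per-axis extent of its marked boundary points; names and the dict encoding are illustrative.

```python
def region_range(boundary_points):
    """boundary_points: [(x, y, z), ...] world coordinates of the position
    points marked on the region's boundary line. Returns the per-axis
    coordinate ranges that together form the geographic location range."""
    xs, ys, zs = zip(*boundary_points)
    return {"x": (min(xs), max(xs)),
            "y": (min(ys), max(ys)),
            "z": (min(zs), max(zs))}

# Hypothetical boundary points of a road region.
boundary = [(0.0, 0.0, 0.0), (50.0, 0.0, 0.2), (50.0, 8.0, 0.1), (0.0, 8.0, 0.0)]
print(region_range(boundary)["x"])  # (0.0, 50.0)
```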
  • after the first pose information of the AR device is obtained, it can be determined whether the AR device is located within the geographic location range corresponding to the target area, according to the position coordinates of the AR device in the world coordinate system corresponding to the real scene and the geographic location range corresponding to each area.
  • the first pose information of the AR device can be determined according to the real scene image and the three-dimensional scene map representing the real scene; according to the first pose information of the AR device and the geographic location range corresponding to each area, it can be quickly determined whether the AR device is located in a target area requiring an alert, such as a road where vehicles pass, so as to ensure the user's safe travel.
  • the relative pose information between the AR device and the target object may include the relative distance and relative angle between the AR device and the target object. The relative distance can be represented by the distance between the optical center of the image acquisition component of the AR device and a target position point of the target object in the world coordinate system; the target position point may be the center position point of the target object, or the point on the boundary of the target object closest to the optical center. The relative angle can be represented by the angle between the direction from the optical center of the image acquisition component of the AR device to the target position point of the target object and the orientation of the optical axis.
  • whether there is a risk of collision between the AR device and the target object may be determined based on the relative distance and relative angle, and the manner of determining whether there is a risk of collision will be described later.
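The relative distance and relative angle just described can be sketched with simple vector math; the function name and argument conventions are assumptions for illustration.

```python
import math

def relative_distance_and_angle(optical_center, optical_axis, target_point):
    """Distance from the optical center to the target position point, and the
    angle (degrees) between the optical-axis orientation and the direction
    from the optical center to that point."""
    direction = [t - c for t, c in zip(target_point, optical_center)]
    distance = math.dist(target_point, optical_center)
    dot = sum(a * b for a, b in zip(optical_axis, direction))
    norm = math.hypot(*optical_axis) * distance
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return distance, angle

# Device at the origin looking along +X; target 3 m ahead and 4 m to the side.
dist, ang = relative_distance_and_angle((0, 0, 0), (1, 0, 0), (3, 4, 0))
print(dist, round(ang, 2))  # 5.0 53.13
```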
  • the relative pose information may include the pose of the target object relative to the AR device and the pose of the AR device relative to the target object, for example, the target object approaching the AR device, the AR device approaching the target object, or both approaching each other, in which case it can be determined that the relative poses of the two are getting closer and closer. These can be determined through the real scene images collected by the AR device; the determination process is described later.
  • the relative pose information between the target object and the AR device, such as the relative distance and orientation relationship, can be determined based on the real scene image, so that the relative pose information can be used to estimate whether there is a risk of collision between the AR device and the target object, providing a guarantee for the user's safe travel.
  • the target object includes a static obstacle.
  • the following steps S2021 to S2023 are included:
  • the static obstacles may be stationary objects, such as railings, trees, and steps, captured by the AR device while the user is traveling in the target area.
  • the first preset distance may be set according to experience, so that the user can be prompted through the AR device in advance to avoid collision with a static obstacle when the user continues to drive in the current driving direction.
  • the position coordinates of the target object in the camera coordinate system corresponding to the AR device can be determined according to the real scene image collected by the AR device, and then, according to the direction of the optical axis of the AR device, it can be determined whether the AR device is facing the static obstacle.
  • the risk level can be determined based on the distance between the AR device and the static obstacle.
  • when the AR device prompts the user, it can give corresponding early warning prompts according to different risk levels. For example, when the risk level is low, the AR device can provide an early warning prompt in a single way; when the risk level is high, the AR device can provide early warning prompts in multiple ways.
  • FIG. 4 is a schematic diagram of a scene in which the target area is a construction area, the target object is a railing, and the user is prompted through an AR device.
  • virtual text information used to warn the user, such as "There is a railing ahead, please pay attention to avoid collision", can be displayed to remind the user that there is danger ahead and to pay attention to safety, so as to prompt the user to stop approaching or take a detour.
  • a method for determining whether there is a risk of collision with a static obstacle is provided. For example, if the target object is a railing, then when the distance between the AR device and the railing is relatively short and the AR device is facing the railing, it is determined that there is a risk of collision because the AR device is approaching the railing.
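One possible reading of the static-obstacle check and risk-level grading above, as a sketch: risk exists when the obstacle is closer than the first preset distance and the device is facing it, and the level follows from the distance. The threshold values are assumptions, not figures from the patent.

```python
def static_obstacle_risk(distance_m, facing_angle_deg,
                         first_preset_distance=3.0, facing_threshold_deg=30.0):
    """Returns None (no risk), 'low', or 'high'.
    distance_m: distance between the AR device and the static obstacle.
    facing_angle_deg: angle between the optical axis and the direction
    toward the obstacle; small values mean the device is facing it."""
    facing = facing_angle_deg < facing_threshold_deg
    if not facing or distance_m >= first_preset_distance:
        return None
    return "high" if distance_m < first_preset_distance / 2 else "low"

print(static_obstacle_risk(2.5, 10.0))  # low
print(static_obstacle_risk(1.0, 10.0))  # high
print(static_obstacle_risk(2.5, 80.0))  # None (not facing the railing)
```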
  • the target object includes dynamic obstacles.
  • when judging whether there is a risk of collision between the AR device and the target object based on the relative pose information, as shown in FIG. 5, the following S301 to S302 may be included:
  • the relative movement information may include the movement direction of the AR device relative to the dynamic obstacle, or the relative movement direction of the dynamic obstacle relative to the AR device, and may also include the relative movement speed.
  • the relative pose information between the AR device and the dynamic obstacle can be determined, and the relative motion information between the AR device and the dynamic obstacle can be determined through the relative pose information corresponding to multiple real scene images.
  • the position coordinates of the dynamic obstacle in the camera coordinate system corresponding to the AR device can be determined, and then the driving direction and driving speed of the dynamic obstacle relative to the AR device can be determined.
  • in the case where the target object is a dynamic obstacle, considering that both the dynamic obstacle and the AR device can move in the target area, when the two are approaching each other it can be determined that there is a risk of collision between the AR device and the dynamic obstacle, and conversely, that there is no risk of collision.
  • a method for determining whether there is a risk of collision with a dynamic obstacle is provided, which can determine the relative motion information between the target object and the AR device based on multiple real scene images; the relative motion information can include the relative motion direction, the relative motion speed, etc. In this way, the relative pose information and the relative motion information can be combined to estimate whether there is a risk of collision between the AR device and the target object, providing a guarantee for the user's safe travel.
  • the dynamic obstacles may include vehicles, pedestrians, etc.
  • the second preset distance may be set based on experience, subject to a safety distance that gives the user sufficient obstacle avoidance time. For example, if the duration needed to avoid the target object is n1 seconds, and the target object can travel m meters within n1 seconds, the second preset distance needs to be set greater than m meters.
  • in addition, the second preset distance needs to be larger than the above-mentioned first preset distance when it is set.
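The sizing rule above can be sketched in code. The function name, its parameters, and the one-metre safety margin are illustrative assumptions, not part of the embodiment; the only constraints taken from the text are that the distance must exceed both m = speed × n1 and the first preset distance:

```python
def min_second_preset_distance(speed_m_per_s: float,
                               avoid_time_s: float,
                               first_preset_distance_m: float,
                               margin_m: float = 1.0) -> float:
    """Smallest admissible second preset distance.

    The obstacle covers speed * avoid_time metres during the avoidance
    window, so the second preset distance must exceed that travel distance
    and also the first preset distance used for static obstacles.
    """
    travel_m = speed_m_per_s * avoid_time_s
    return max(travel_m, first_preset_distance_m) + margin_m

# e.g. a vehicle at 5 m/s, a 2 s avoidance window, a 3 m static threshold
print(min_second_preset_distance(5.0, 2.0, 3.0))  # 11.0
```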
  • based on the real scene image collected by the AR device at the previous moment, the position coordinates of the dynamic obstacle in the camera coordinate system corresponding to the AR device at the previous moment can be determined; then, based on the real scene image collected by the AR device at the current moment, the position coordinates of the dynamic obstacle in that camera coordinate system at the current moment can be determined.
  • from these position coordinates, the driving direction of the dynamic obstacle relative to the AR device can be determined. For example, the direction pointing from the position coordinates of the dynamic obstacle at the previous moment toward its position coordinates at the current moment can be taken as the driving direction of the dynamic obstacle relative to the AR device.
  • the direction of the dynamic obstacle toward the AR device can be determined from the position coordinates of the dynamic obstacle in the camera coordinate system where the AR device is located and the position coordinates of the AR device in that camera coordinate system; for example, the direction pointing from the position coordinates of the dynamic obstacle toward the position coordinates of the AR device in the camera coordinate system can be taken as the direction of the dynamic obstacle toward the AR device.
  • the set angle threshold can be set empirically to determine whether the dynamic obstacle and the AR device are gradually approaching each other; for example, when the included angle between the driving direction of the dynamic obstacle relative to the AR device and the direction of the dynamic obstacle toward the AR device is less than the set angle threshold, it can be determined that the dynamic obstacle and the AR device are gradually approaching.
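The angle test just described can be sketched as follows. The function name, the 2D ground-plane coordinates, and the 30-degree default threshold are illustrative assumptions; the logic follows the text: compute the obstacle's driving direction from its previous and current positions, the obstacle-to-device direction, and compare the angle between them to the threshold:

```python
import math

def is_approaching(prev_pos, curr_pos, device_pos, angle_threshold_deg=30.0):
    """Judge whether a dynamic obstacle is closing in on the AR device.

    prev_pos / curr_pos: obstacle coordinates (ground plane) in the device's
    camera coordinate system at the previous and current moments.
    device_pos: the AR device's coordinates in the same system.
    """
    # driving direction: previous position -> current position
    driving = (curr_pos[0] - prev_pos[0], curr_pos[1] - prev_pos[1])
    # direction of the obstacle toward the AR device
    toward = (device_pos[0] - curr_pos[0], device_pos[1] - curr_pos[1])
    dot = driving[0] * toward[0] + driving[1] * toward[1]
    norm = math.hypot(*driving) * math.hypot(*toward)
    if norm == 0.0:
        return False  # obstacle not moving, or already at the device
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return angle < angle_threshold_deg

# obstacle moving from (0, 10) to (0, 8) while the device sits at the origin
print(is_approaching((0, 10), (0, 8), (0, 0)))  # True
```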
  • the risk level may be determined based on the distance and the relative movement speed between the AR device and the dynamic obstacle.
  • the mapping relationship between different distances and risk scores, as well as the mapping relationship between different relative movement speeds and risk scores can be established in advance.
  • the current risk score can be determined according to the predetermined mapping relationship between different distances and risk scores, and the mapping relationship between different relative motion speeds and risk scores, and further, the risk level can be determined according to the risk score.
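A minimal sketch of such pre-established mappings might look like the following. Every threshold, score, and level name here is a placeholder assumption; the embodiment only requires that mappings from distance and relative speed to risk scores be built in advance and combined into a risk level:

```python
def risk_level(distance_m: float, rel_speed_m_per_s: float) -> str:
    """Map distance and relative motion speed to a risk level via score tables.

    All cut-offs below are illustrative; a closer obstacle and a faster
    relative approach each contribute a higher score.
    """
    if distance_m < 5:
        distance_score = 3
    elif distance_m < 15:
        distance_score = 2
    else:
        distance_score = 1

    if rel_speed_m_per_s > 10:
        speed_score = 3
    elif rel_speed_m_per_s > 3:
        speed_score = 2
    else:
        speed_score = 1

    total = distance_score + speed_score
    if total >= 5:
        return "high"
    if total >= 3:
        return "medium"
    return "low"

print(risk_level(4.0, 12.0))   # high
print(risk_level(20.0, 1.0))   # low
```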
  • the corresponding early warning can be prompted according to different risk levels.
  • when the risk level is high, early warnings can be given in various ways through the AR device.
  • FIG. 6 is a schematic diagram of a scene in which the target area is a road, the target object is a vehicle, and the user is prompted through the AR device.
  • for example, virtual text information used to warn the user, such as "there is a vehicle in the road ahead, please pay attention to avoid collision", can be displayed on the display screen of the AR device to remind the user that there is danger ahead and to pay attention to safety, thereby prompting the user to stop approaching or to take a detour.
  • this provides a method for determining whether there is a risk of collision in combination with the motion state of the AR device and/or the dynamic obstacle, so that when the distance between the AR device and the dynamic obstacle is less than the second preset distance and the dynamic obstacle and the AR device are approaching each other, the risk of collision between the AR device and the dynamic obstacle can be accurately predicted.
  • determining the relative pose information between the target object and the AR device based on the real scene image may include the following S401 to S403:
  • based on the real scene image, the detection area corresponding to the target object in the real scene image, and the preset three-dimensional scene map, determine the second pose information of the target object in the world coordinate system; wherein the three-dimensional scene map is a map representing the real scene to which the target area belongs;
  • the three-dimensional scene map here and the above-mentioned three-dimensional scene map may be the same scene map, and the construction process is described in the following description.
  • the AR device is positioned to obtain the first pose information of the AR device in the world coordinate system.
  • the process of locating the AR device may be similar to the process of determining the first pose information of the AR device in S1011, as described later; alternatively, the first pose information may be determined by a positioning sensor provided on the AR device itself, such as a global satellite navigation device and an inertial measurement unit.
  • the three-dimensional scene map representing the real scene includes multiple feature points of the real scene, and multiple feature points of the real scene can likewise be extracted from the real scene image. By matching the feature points contained in the real scene image against the feature points contained in the three-dimensional scene map, the position coordinates, in the world coordinate system, of the feature points that constitute the target object can be determined from the three-dimensional scene map; that is, the position coordinates in the world coordinate system of the feature points of the target object contained in the detection area are obtained, based on which the second pose information of the target object in the world coordinate system can be determined.
  • after obtaining the second pose information of the target object in the world coordinate system, the relative pose information between the target object and the AR device can be determined based on the first pose information of the AR device in the world coordinate system and the second pose information of the target object in the world coordinate system.
  • the relative pose information between the target object and the AR device can be quickly and accurately determined.
  • the method may further include:
  • based on the first pose information of the AR device in the world coordinate system, obtained after positioning the AR device, and the pose information of the target object in the image coordinate system corresponding to the real scene image, the relative pose information between the detected target object and the AR device is determined.
  • based on the pose information of the target object in the image coordinate system, the pose information of the target object in the camera coordinate system corresponding to the AR device can be determined. Further, based on the first pose information of the AR device in the world coordinate system obtained after positioning the AR device, the external parameters corresponding to the image acquisition component of the AR device can be determined, and based on these external parameters, the second pose information of the target object in the world coordinate system can be determined. Finally, based on the first pose information of the AR device and the second pose information of the target object in the world coordinate system, the relative pose information between the AR device and the target object can be determined.
  • the following processes S501 to S505 are included:
  • S501: Perform target detection on the real scene image, and determine the pose information, in the image coordinate system, of each pixel that constitutes the target object in the real scene image.
  • a pre-trained target detection neural network can be used to perform target detection on the real scene image, determining the target object contained in the real scene image and the pose information of each pixel that constitutes the target object in the image coordinate system, where the pose information of a pixel in the image coordinate system refers to the pixel coordinate value of that pixel in the image coordinate system.
  • the depth image includes depth information of each pixel that constitutes the target object in the real scene image.
  • the depth image corresponding to the real scene image can be determined from the collected real scene image and a pre-trained neural network for determining depth images, so as to obtain the depth information of each pixel that constitutes the target object in the real scene image.
  • an image coordinate system can be established for the collected real scene image, and the pose information of each pixel that constitutes the target object, that is, its pixel coordinate value in the image coordinate system, can be determined based on the constructed image coordinate system.
  • the parameter information of the image acquisition component may include internal parameters and external parameters of the image acquisition component of the AR device, wherein the internal parameters are fixed parameters of the image acquisition component, and the external parameters can be determined from the first pose information of the AR device. The internal parameters can be used to convert the coordinate value of a pixel in the image coordinate system into a coordinate value in the camera coordinate system; the external parameters can be used to convert the coordinate value of a pixel in the camera coordinate system into a coordinate value in the world coordinate system.
  • based on the internal parameters of the image acquisition component, the pixel coordinate value of each pixel that constitutes the target object in the image coordinate system can be converted into coordinate values along the X-axis and Y-axis directions in the camera coordinate system; combined with the depth information, the coordinate value of the pixel along the Z-axis direction in the camera coordinate system can be obtained. Then, based on the external parameters of the image acquisition component, the coordinate value of the pixel in the camera coordinate system can be converted into a coordinate value in the world coordinate system, that is, the three-dimensional coordinate information of the pixel in the world coordinate system is obtained.
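The two-step conversion above (internal parameters for image-to-camera, external parameters for camera-to-world) can be sketched as follows, assuming a standard pinhole camera model. The intrinsic matrix K and the example camera-to-world pose are illustrative values, not taken from the embodiment:

```python
import numpy as np

def pixel_to_world(u, v, depth, K, R_wc, t_wc):
    """Back-project a pixel (u, v) with known depth into world coordinates.

    K: 3x3 camera intrinsic matrix (the internal parameters).
    R_wc, t_wc: camera-to-world rotation and translation (the external
    parameters, derivable from the AR device's first pose information).
    depth: the pixel's Z value in the camera frame, from the depth image.
    """
    # internal parameters: pixel -> camera frame (X, Y from u, v; Z from depth)
    x_cam = (u - K[0, 2]) * depth / K[0, 0]
    y_cam = (v - K[1, 2]) * depth / K[1, 1]
    p_cam = np.array([x_cam, y_cam, depth])
    # external parameters: camera frame -> world frame
    return R_wc @ p_cam + t_wc

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
# camera aligned with the world axes, placed 1.5 units above the world origin
p = pixel_to_world(320, 240, 2.0, K, np.eye(3), np.array([0.0, 0.0, 1.5]))
print(p)  # the principal-point pixel maps to (0, 0, 3.5)
```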
  • the three-dimensional detection information corresponding to the target object may be determined based on a pre-trained three-dimensional target detection neural network.
  • from the three-dimensional coordinate information of the pixels in the world coordinate system, the three-dimensional point cloud data corresponding to the target object can be obtained; then, based on the three-dimensional point cloud data and the pre-built three-dimensional target detection neural network, the three-dimensional detection information corresponding to the target object is determined, and the second pose information of the target object in the world coordinate system is then determined from the three-dimensional detection information.
  • the three-dimensional detection information of the target object may include the position coordinates of the center point of the target object in the world coordinate system, the length, width and height of the 3D detection frame of the target object, and the angles between the set positive direction of the target object and each coordinate axis of the world coordinate system. The set positive direction of the target object can be the direction of the vehicle head, and the position coordinates of the center point of the target object in the world coordinate system together with this heading direction can be used as the second pose information of the target object in the world coordinate system.
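The shape of this three-dimensional detection information, and how the second pose information is read out of it, can be sketched as follows; the container name, field names, and example values are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Detection3D:
    """Illustrative container for the 3D detection information above."""
    center_world: tuple    # center point position (x, y, z) in world coords
    size_lwh: tuple        # length, width, height of the 3D detection frame
    heading_angles: tuple  # angles between the object's set positive
                           # direction (e.g. the vehicle head) and each
                           # world coordinate axis

def second_pose(det: Detection3D):
    """Second pose information: the center position plus the heading."""
    return det.center_world, det.heading_angles

det = Detection3D(center_world=(12.0, 3.0, 0.0),
                  size_lwh=(4.5, 1.8, 1.5),
                  heading_angles=(0.0, 90.0, 90.0))
print(second_pose(det))  # ((12.0, 3.0, 0.0), (0.0, 90.0, 90.0))
```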
  • real-time mapping and positioning can also be performed based on SLAM to obtain a three-dimensional scene map of the real scene where the AR device is located and the first pose information of the AR device.
  • combined with the three-dimensional scene map, the second pose information of the target object in the world coordinate system can be obtained, so that the relative pose information between the AR device and the target object can be determined from the first pose information of the AR device and the second pose information of the target object.
  • the three-dimensional scene map mentioned above can be pre-built in the following ways, including S601 to S603:
  • a multi-angle aerial photography of the real scene can be performed in advance by a drone, and a large number of real scene sample images corresponding to the real scene can be obtained.
  • an initial three-dimensional scene model is generated based on the extracted multiple feature points and a pre-stored three-dimensional sample map matching the real scene; wherein the three-dimensional sample map is a pre-stored three-dimensional map representing the appearance features of the real scene.
  • the feature points extracted for each real scene sample image may be points that can represent key information of that real scene sample image, for example, feature points representing the building outline information.
  • the pre-stored three-dimensional sample map of the real scene may be a pre-set, dimensioned three-dimensional map that can characterize the morphological features of the real scene, such as a Computer Aided Design (CAD) three-dimensional drawing representing the morphological features of the real scene.
  • the feature point cloud composed of the extracted feature points can form a three-dimensional model representing the real scene. The feature points in this feature point cloud are unitless, so the three-dimensional model composed of the feature point cloud is also unitless. After aligning the feature point cloud with the three-dimensional sample map with scale annotations that characterizes the morphological features of the real scene, the initial three-dimensional scene model corresponding to the real scene is obtained.
  • the generated initial three-dimensional scene model may be distorted; the alignment process can then be completed through calibration feature points on the real scene and the initial three-dimensional scene model, so that a three-dimensional scene model with high accuracy can be obtained.
  • S6032 Determine the real position coordinates of the calibration feature points in the real two-dimensional map corresponding to the real scene, and adjust the position coordinates of each feature point in the initial three-dimensional scene model based on the real position coordinates corresponding to each calibration feature point.
  • some feature points representing the spatial position points of the edges and corners of a building can be selected as the calibration feature points; then, based on the real position coordinates corresponding to each calibration feature point and the position coordinates of that calibration feature point in the initial three-dimensional scene model, a position coordinate adjustment amount is determined, and the position coordinates of each feature point in the initial three-dimensional scene model are corrected based on this adjustment amount, so that a three-dimensional scene model with high accuracy can be obtained.
  • the real position coordinates corresponding to a calibration feature point may refer to the position coordinates of that feature point in the world coordinate system corresponding to the real scene to which the target area belongs. In this way, the obtained three-dimensional scene map contains the position coordinates of each feature point in the world coordinate system.
  • the AR device can be positioned based on the real scene image captured by the AR device and the three-dimensional scene map; that is, the first pose information of the AR device can be determined from the real scene image and the three-dimensional scene map.
  • S701: Extract the feature points contained in the real scene image, together with the feature points extracted from each real scene sample image when the three-dimensional scene map was pre-built;
  • S702: Determine, based on the extracted feature points, the target real scene sample image with the highest similarity to the real scene image;
  • S703 Determine the first pose information of the AR device based on the shooting pose information corresponding to the target real scene sample image.
  • the feature points in the real scene image and the feature points of each real scene sample image extracted when the three-dimensional scene map was pre-built can be used to find the target real scene sample image with the highest similarity to the real scene image. For example, based on the feature information of the feature points of the real scene image and the feature information of the feature points of each real scene sample image, the similarity value between the real scene image and each real scene sample image can be determined, and the real scene sample image whose similarity value is the highest and exceeds a similarity threshold is taken as the target real scene sample image.
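The retrieval step above can be sketched as follows, with each image reduced to a single aggregate feature descriptor for brevity; a real implementation would match many per-feature-point descriptors, and the cosine metric and threshold value are assumptions:

```python
import numpy as np

def find_target_sample(query_desc, sample_descs, sim_threshold=0.8):
    """Pick the sample image most similar to the query real scene image.

    Returns the index of the sample whose descriptor has the highest
    similarity to the query, provided that similarity exceeds the
    threshold; otherwise returns None (no target sample image found).
    """
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    sims = [cosine(query_desc, d) for d in sample_descs]
    best = int(np.argmax(sims))
    return best if sims[best] > sim_threshold else None

query = np.array([1.0, 0.0, 1.0])
samples = [np.array([0.0, 1.0, 0.0]),   # dissimilar sample
           np.array([1.0, 0.1, 1.0])]   # near-duplicate of the query
print(find_target_sample(query, samples))  # 1
```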
  • the first pose information of the AR device may be determined based on the shooting pose information corresponding to the target real scene sample image.
  • S7032 Determine the first pose information of the AR device based on the relative pose information and the shooting pose information corresponding to the target real scene sample image.
  • the target object contained in the target real scene sample image with the highest similarity to the real scene image is the same target object as that contained in the real scene image; for example, if the target object contained in the real scene image is building A, the target object contained in the target real scene sample image is also building A.
  • the relative pose information here is the relative pose information between building A in the real scene image and building A in the target real scene sample image.
  • the three-dimensional detection information corresponding to the target object in the target real scene sample image and in the real scene image can each be determined, and the relative pose information is then determined from the separately determined three-dimensional detection information of the target object in the two images.
  • the shooting pose information corresponding to the target real scene sample image can be directly used here as the first pose information of the AR device.
  • the embodiments of the present disclosure also provide a navigation prompting apparatus corresponding to the navigation prompting method; for its implementation, reference may be made to the implementation of the method, and repeated descriptions are omitted.
  • the navigation prompt device includes:
  • the image acquisition part 801 is configured to acquire the real scene image captured by the augmented reality AR device;
  • the first judging part 802 is configured to judge whether the AR device is located within the geographic location range corresponding to the target area based on the real scene image;
  • the target detection part 803 is configured to detect whether there is a target object in the real scene image when the AR device is located within the geographic location range corresponding to the target area;
  • the second judging part 804 is configured to judge whether there is a risk of collision between the AR device and the target object when the target object exists in the real scene image;
  • the early warning prompting part 805 is configured to provide early warning prompting to the user through the AR device when there is a risk of collision between the AR device and the target object.
  • the first judging part 802 is further configured to determine the first pose information of the AR device in the world coordinate system based on the real scene image and the preset three-dimensional scene map, wherein the three-dimensional scene map is a map representing the real scene to which the target area belongs; and to determine, based on the first pose information of the AR device and the geographic location range, in the world coordinate system, of each area associated with the three-dimensional scene map, whether the AR device is located within the geographic location range corresponding to the target area.
  • the second judgment part 804 is further configured to determine the relative pose information between the target object and the AR device based on the real scene image; based on the relative pose information, determine the AR device and the target object Whether there is a risk of collision between them.
  • the target object includes a static obstacle
  • the second judging part 804 is further configured to judge, based on the relative pose information, whether the distance between the AR device and the static obstacle is less than the first preset distance; when the distance between the AR device and the static obstacle is less than the first preset distance, to determine whether the AR device is facing the static obstacle; and if the AR device is facing the static obstacle, to determine that there is a risk of collision between the AR device and the static obstacle.
  • the target object includes a dynamic obstacle
  • the second judging part 804 is further configured to determine relative motion information between the AR device and the dynamic obstacle based on a plurality of real scene images, and to determine, based on the relative pose information and the relative motion information, whether there is a risk of collision between the AR device and the dynamic obstacle.
  • the second judging part 804 is further configured to judge, based on the relative pose information, whether the distance between the AR device and the dynamic obstacle is less than the second preset distance; when the distance between the two is less than the second preset distance, to determine, based on the relative motion information, whether the included angle between the driving direction of the dynamic obstacle relative to the AR device and the direction of the dynamic obstacle toward the AR device is less than the set angle threshold; and when the included angle is less than the set angle threshold, to determine that there is a risk of collision between the AR device and the dynamic obstacle.
  • the second judging part 804 is further configured to determine the second pose information of the target object in the world coordinate system according to the real scene image, the detection area corresponding to the target object in the real scene image, and the preset three-dimensional scene map, wherein the three-dimensional scene map is a map representing the real scene to which the target area belongs; to locate the AR device to obtain the first pose information of the AR device in the world coordinate system; and to determine the relative pose information between the target object and the AR device based on the first pose information and the second pose information.
  • the early warning prompting part 805 is further configured to provide early warning prompts to the user through the AR device in at least one of voice form, text form, animation form, warning symbols, and flashing form.
  • an embodiment of the present disclosure further provides an electronic device 900 .
  • the schematic structural diagram of the electronic device 900 provided by the embodiment of the present disclosure includes:
  • the communication between the processor 91 and the memory 92 is through the bus 93, so that the processor 91 executes the following instructions : Obtain the real scene image captured by the augmented reality AR device; based on the real scene image, determine whether the AR device is located within the geographic location range corresponding to the target area; if the AR device is located within the geographic location range corresponding to the target area, detect the real scene Whether there is a target object in the image; if there is a target object in the real scene image, determine whether there is a risk of collision between the AR device and the target object; if there is a risk of collision between the AR device and the target object, through the AR device Alert users.
  • Embodiments of the present disclosure further provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is run by a processor, the steps of the navigation prompting method described in the above method embodiments are executed.
  • the storage medium may be a volatile or non-volatile computer-readable storage medium.
  • Embodiments of the present disclosure further provide a computer program including computer-readable code; when the computer-readable code is executed in an electronic device, a processor in the electronic device executes steps for implementing the above-mentioned navigation prompting method.
  • An embodiment of the present disclosure further provides a computer program product, where the computer program product includes computer program instructions, and the computer program instructions can be used to execute the steps of the navigation prompt method described in the above method embodiments.
  • the above-mentioned computer program product can be realized by means of hardware, software or a combination thereof.
  • the computer program product is embodied as a computer storage medium, and in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK) and the like.
  • the units described as separate components may or may not be physically separated, and components shown as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the functions, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a processor-executable non-volatile computer-readable storage medium.
  • the computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
  • the aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.


Abstract

A navigation prompt method and apparatus, and an electronic device, a computer-readable storage medium, a computer program and a program product. The navigation prompt method comprises: acquiring a real scene image captured by an augmented reality (AR) device (S101); on the basis of the real scene image, determining whether the AR device is located within a geographical position range corresponding to a target region (S102); when the AR device is within the geographical position range corresponding to the target region, detecting whether a target object is present in the real scene image (S103); if a target object is present in the real scene image, determining whether there is a risk of a collision occurring between the AR device and the target object (S104); and when there is a risk of a collision occurring between the AR device and the target object, providing an early warning prompt for a user by means of the AR device (S105).

Description

导航提示方法、装置、电子设备、计算机可读存储介质、计算机程序及程序产品Navigation prompt method, apparatus, electronic device, computer readable storage medium, computer program and program product
相关申请的交叉引用CROSS-REFERENCE TO RELATED APPLICATIONS
本公开要求在2021年02月09日提交中国专利局、申请号为202110176459.7、申请名称为“一种导航提示方法、装置、电子设备及存储介质”的中国专利申请的优先权,该中国专利申请的全部内容在此以全文引入的方式引入本公开。This disclosure claims the priority of the Chinese patent application with the application number of 202110176459.7 and the application title of "A Navigation Prompting Method, Device, Electronic Device and Storage Medium" filed with the China Patent Office on February 9, 2021. The Chinese patent application The entire contents of is hereby incorporated by reference into this disclosure in its entirety.
技术领域technical field
本公开涉及导航技术领域,尤其涉及一种导航提示方法、装置、电子设备、计算机可读存储介质、计算机程序及程序产品。The present disclosure relates to the technical field of navigation, and in particular, to a navigation prompting method, apparatus, electronic device, computer-readable storage medium, computer program, and program product.
背景技术Background technique
随着计算机技术的发展,出现了越来越多的用于导航的应用程序,给人们的出行生活带来了方便。With the development of computer technology, more and more applications for navigation have appeared, which brings convenience to people's travel life.
用于导航的应用程序在为人们的出行带来导航的方便的同时,如何提高用户在使用导航时的安全,为亟需解决的问题。While the application for navigation brings the convenience of navigation to people's travel, how to improve the safety of users when using navigation is an urgent problem to be solved.
发明内容SUMMARY OF THE INVENTION
本公开实施例至少提供一种导航提示方法、装置、电子设备、计算机可读存储介质、计算机程序及程序产品。Embodiments of the present disclosure provide at least a navigation prompting method, apparatus, electronic device, computer-readable storage medium, computer program, and program product.
In a first aspect, an embodiment of the present disclosure provides a navigation prompting method, including:
obtaining a real scene image captured by an Augmented Reality (AR) device;
determining, based on the real scene image, whether the AR device is located within the geographic range corresponding to a target area;
in a case where the AR device is located within the geographic range corresponding to the target area, detecting whether a target object is present in the real scene image;
in a case where a target object is present in the real scene image, determining whether there is a risk of collision between the AR device and the target object;
in a case where there is a risk of collision between the AR device and the target object, issuing a warning prompt to the user through the AR device.
In the embodiments of the present disclosure, it is proposed that when the AR device is located in the target area, target detection can be performed on the real scene image collected by the AR device. When a target object is present in the real scene image and it is estimated that there is a risk of collision between the AR device and the target object, the user can be warned through the AR device, avoiding safety accidents caused by the user colliding with the target object during navigation and thereby improving the user's travel safety.
In a possible implementation, determining, based on the real scene image, whether the AR device is located within the geographic range corresponding to the target area includes:
determining first pose information of the AR device in the world coordinate system based on the real scene image and a preset three-dimensional scene map, the three-dimensional scene map being a map representing the real scene to which the target area belongs;
determining whether the AR device is located within the geographic range corresponding to the target area based on the first pose information and the geographic range, in the world coordinate system, of each area associated with the three-dimensional scene map.
In the embodiments of the present disclosure, it is proposed that the first pose information of the AR device can be determined from the real scene image and a preset three-dimensional scene map. Because the three-dimensional scene map can contain the geographic range corresponding to each area, it can be quickly determined, from the first pose information of the AR device and the geographic range corresponding to each area, whether the AR device is located in a target area for which warning prompts should be issued, such as a road used by vehicles, thereby ensuring the user's safe travel.
In a possible implementation, in a case where a target object is present in the real scene image, determining whether there is a risk of collision between the AR device and the target object includes:
determining relative pose information between the target object and the AR device based on the real scene image;
determining whether there is a risk of collision between the AR device and the target object based on the relative pose information.
In the embodiments of the present disclosure, it is proposed that the relative pose information between the target object and the AR device, such as their relative distance and orientation relationship, can be determined based on the real scene image. In this way, whether there is a risk of collision between the AR device and the target object can be estimated from the relative pose information, providing a safeguard for the user's safe travel.
In a possible implementation, the target object includes a static obstacle, and determining whether there is a risk of collision between the AR device and the target object based on the relative pose information includes:
determining, based on the relative pose information, whether the distance between the AR device and the static obstacle is less than a first preset distance;
in a case where the distance between the AR device and the static obstacle is less than the first preset distance, determining whether the AR device is facing the static obstacle;
in a case where the AR device is facing the static obstacle, determining that there is a risk of collision between the AR device and the static obstacle.
The embodiments of the present disclosure provide a way of determining whether there is a risk of collision with a static obstacle. For example, if the target object is a railing, it can be determined that there is a risk of the AR device colliding with the railing when the distance between the AR device and the railing is short and the AR device is facing the railing.
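The static-obstacle check above (distance below a preset threshold, plus a "facing" test) can be sketched as follows. The distance threshold, the cosine threshold used to decide "facing", and the use of the camera's optical-axis direction as the device orientation are illustrative assumptions, not values fixed by the disclosure:

```python
import numpy as np

def static_collision_risk(device_pos, device_forward, obstacle_pos,
                          min_distance=2.0, facing_cos_threshold=0.5):
    """Return a truthy value if the device is close to and facing a static obstacle.

    All positions are 3-D world coordinates. `min_distance` (metres) stands in
    for the "first preset distance"; `facing_cos_threshold` is an assumed cutoff
    for how directly the device must point at the obstacle.
    """
    to_obstacle = np.asarray(obstacle_pos, dtype=float) - np.asarray(device_pos, dtype=float)
    distance = np.linalg.norm(to_obstacle)
    if distance >= min_distance:
        return False  # too far away to warn about
    # "Facing" = small angle between the device's forward (optical-axis)
    # direction and the direction from the device to the obstacle.
    cos_angle = np.dot(device_forward, to_obstacle) / (
        np.linalg.norm(device_forward) * distance)
    return cos_angle > facing_cos_threshold
```

With these thresholds, an obstacle one metre directly ahead triggers the warning, while the same obstacle behind the device, or five metres away, does not.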
In a possible implementation, the target object includes a dynamic obstacle, and determining whether there is a risk of collision between the AR device and the target object based on the relative pose information includes:
determining relative motion information between the AR device and the dynamic obstacle based on a plurality of real scene images;
determining whether there is a risk of collision between the AR device and the dynamic obstacle based on the relative pose information and the relative motion information.
The embodiments of the present disclosure provide a way of determining whether there is a risk of collision with a dynamic obstacle. The relative motion information between the target object and the AR device, which may include the relative motion direction and relative motion speed, can be determined from a plurality of real scene images. In this way, the relative pose information and the relative motion information can be combined to estimate whether there is a risk of collision between the AR device and the target object, providing a safeguard for the user's safe travel.
In a possible implementation, determining whether there is a risk of collision between the AR device and the dynamic obstacle based on the relative pose information and the relative motion information includes:
determining, based on the relative pose information, whether the distance between the AR device and the dynamic obstacle is less than a second preset distance;
in a case where the distance between the AR device and the dynamic obstacle is less than the second preset distance, determining, based on the relative motion information, whether the included angle between the travel direction of the dynamic obstacle relative to the AR device and the direction from the dynamic obstacle toward the AR device is less than a set angle threshold;
in a case where the included angle is less than the set angle threshold, determining that there is a risk of collision between the AR device and the dynamic obstacle.
The embodiments of the present disclosure provide a way of determining whether there is a risk of collision that takes into account the motion state of the AR device and/or the dynamic obstacle: when the distance between the AR device and the dynamic obstacle is less than the second preset distance and the dynamic obstacle and the AR device are approaching each other, it can be accurately estimated that there is a risk of collision between the AR device and the dynamic obstacle.
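The dynamic-obstacle test above (distance below the second preset distance, plus an included-angle test between the obstacle's relative travel direction and the direction from the obstacle toward the device) can be sketched as below. The 10 m distance and 30° angle threshold are illustrative, and the relative velocity is assumed to have been estimated from the obstacle's positions in consecutive scene images:

```python
import numpy as np

def dynamic_collision_risk(device_pos, obstacle_pos, obstacle_velocity,
                           min_distance=10.0, max_angle_deg=30.0):
    """Return a truthy value if a moving obstacle is close and heading at the device.

    `obstacle_velocity` is the obstacle's velocity relative to the device
    (world-frame components). Thresholds are assumed for illustration.
    """
    to_device = np.asarray(device_pos, dtype=float) - np.asarray(obstacle_pos, dtype=float)
    distance = np.linalg.norm(to_device)
    if distance >= min_distance:
        return False  # outside the second preset distance
    speed = np.linalg.norm(obstacle_velocity)
    if speed == 0.0:
        return False  # obstacle not moving relative to the device
    # Included angle between the obstacle's travel direction and the
    # direction from the obstacle toward the device.
    cos_angle = np.dot(obstacle_velocity, to_device) / (speed * distance)
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return angle < max_angle_deg
```

An obstacle five metres away moving straight toward the device yields an included angle of 0° and triggers the warning; the same obstacle moving away (angle 180°) or fifty metres away does not.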
In a possible implementation, determining relative pose information between the target object and the AR device based on the real scene image includes:
determining second pose information of the target object in the world coordinate system according to the real scene image, the detection area corresponding to the target object in the real scene image, and a preset three-dimensional scene map, the three-dimensional scene map being a map representing the real scene to which the target area belongs;
localizing the AR device to obtain first pose information of the AR device in the world coordinate system;
determining the relative pose information between the target object and the AR device according to the first pose information and the second pose information.
In the embodiments of the present disclosure, it is proposed that by determining the respective pose information of the target object and the AR device in the world coordinate system, the relative pose information between the target object and the AR device can be determined quickly and accurately.
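Combining the two world-frame poses into a relative pose is a standard transform composition. The sketch below assumes each pose is given as a 4x4 homogeneous transform (rotation plus translation); the disclosure does not fix a particular pose representation:

```python
import numpy as np

def relative_pose(device_pose, target_pose):
    """Compute the target's pose expressed in the device's frame.

    Both inputs are 4x4 homogeneous world-frame transforms. The relative
    transform maps target-frame coordinates into the device frame:
        T_rel = inv(T_device) @ T_target
    Also returns the relative distance (norm of the translation part).
    """
    t_rel = np.linalg.inv(device_pose) @ target_pose
    distance = np.linalg.norm(t_rel[:3, 3])
    return t_rel, distance
```

For example, with the device at the world origin and the target translated by (3, 4, 0), the relative distance comes out as 5.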
In a possible implementation, issuing a warning prompt to the user through the AR device includes:
issuing a warning prompt to the user through the AR device in at least one of voice form, text form, animation form, a warning sign, and a flashing form.
In the embodiments of the present disclosure, it is proposed that the user can be warned in a variety of ways, so as to improve the user's travel safety.
In a second aspect, an embodiment of the present disclosure provides a navigation prompting apparatus, including:
an image acquisition part configured to obtain a real scene image captured by an augmented reality (AR) device;
a first judging part configured to determine, based on the real scene image, whether the AR device is located within the geographic range corresponding to a target area;
a target detection part configured to detect, in a case where the AR device is located within the geographic range corresponding to the target area, whether a target object is present in the real scene image;
a second judging part configured to determine, in a case where a target object is present in the real scene image, whether there is a risk of collision between the AR device and the target object;
a warning prompt part configured to issue, in a case where there is a risk of collision between the AR device and the target object, a warning prompt to the user through the AR device.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including a processor, a memory, and a bus. The memory stores machine-readable instructions executable by the processor. When the electronic device runs, the processor communicates with the memory through the bus, and the machine-readable instructions, when executed by the processor, perform the navigation prompting method according to the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored, the computer program, when run by a processor, performing the navigation prompting method according to the first aspect.
In a fifth aspect, an embodiment of the present disclosure further provides a computer program, including computer-readable code which, when run in a computer device, causes a processor in the computer device to perform the navigation prompting method according to the first aspect.
In a sixth aspect, an embodiment of the present disclosure further provides a computer program product, including computer program instructions that cause a computer to perform the navigation prompting method according to the first aspect.
In order to make the above objects, features, and advantages of the present disclosure more apparent and easier to understand, preferred embodiments are described in detail below with reference to the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
In order to explain the technical solutions of the embodiments of the present disclosure more clearly, the drawings required in the embodiments are briefly introduced below. The drawings here are incorporated into and constitute a part of the specification; they illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be understood that the following drawings only show some embodiments of the present disclosure and therefore should not be regarded as limiting its scope. For those of ordinary skill in the art, other related drawings can be obtained from these drawings without creative effort.
FIG. 1A shows a first schematic diagram of an application scenario of an electronic device provided by an embodiment of the present disclosure;
FIG. 1B shows a second schematic diagram of an application scenario of an electronic device provided by an embodiment of the present disclosure;
FIG. 2 shows a flowchart of a navigation prompting method provided by an embodiment of the present disclosure;
FIG. 3 shows a flowchart of a method for determining whether there is a risk of collision between an AR device and a target object provided by an embodiment of the present disclosure;
FIG. 4 shows a schematic diagram of a warning prompt scenario provided by an embodiment of the present disclosure;
FIG. 5 shows a flowchart of another method for determining whether there is a risk of collision between an AR device and a target object provided by an embodiment of the present disclosure;
FIG. 6 shows a schematic diagram of another warning prompt scenario provided by an embodiment of the present disclosure;
FIG. 7 shows a flowchart of a method for determining relative pose information between a target object and an AR device provided by an embodiment of the present disclosure;
FIG. 8 shows a schematic structural diagram of a navigation prompting apparatus provided by an embodiment of the present disclosure;
FIG. 9 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
DETAILED DESCRIPTION
In order to make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The components of the embodiments of the present disclosure generally described and illustrated in the drawings herein may be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments of the present disclosure provided in the accompanying drawings is not intended to limit the scope of the disclosure as claimed, but merely represents selected embodiments of the disclosure. Based on the embodiments of the present disclosure, all other embodiments obtained by those skilled in the art without creative effort fall within the protection scope of the present disclosure.
It should be noted that similar reference numerals and letters refer to similar items in the following drawings; therefore, once an item is defined in one drawing, it does not require further definition and explanation in subsequent drawings.
The term "and/or" herein merely describes an association relationship and indicates that three relationships may exist; for example, A and/or B may mean three cases: A exists alone, both A and B exist, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality; for example, including at least one of A, B, and C may mean including any one or more elements selected from the set consisting of A, B, and C.
In a typical navigation scenario, a virtual arrow pointing toward the destination can be displayed on the navigation device, and the user can reach the destination by walking along this virtual arrow. While walking along the navigation route, if the user is in a dangerous target area and still walks mechanically following the arrow, danger is likely to occur. Therefore, how to ensure the user's safety during navigation is a technical problem to be solved by the present disclosure.
Based on the above research, the present disclosure provides a navigation prompting method and proposes that when the AR device is located in a target area, target detection can be performed on the real scene image collected by the AR device. When a target object is present in the real scene image and it is estimated that there is a risk of collision between the AR device and the target object, the user can be warned through the AR device, avoiding safety accidents caused by the user colliding with the target object during navigation and thereby improving the user's travel safety.
The navigation prompting method provided by the embodiments of the present disclosure can be executed by an electronic device.
In some possible implementations, as shown in FIG. 1A, the electronic device 10 may include a processor 11 and an image acquisition component 12. In this way, the electronic device 10 can collect a real scene image through the image acquisition component 12 and analyze the real scene image through the processor 11 to determine the position of the electronic device 10; when it is detected that the electronic device 10 may collide with another object, the user is alerted. For example, the electronic device may be implemented as an AR device, and the AR device may include devices capable of augmented reality such as smartphones, tablets, and AR glasses.
In some possible implementations, as shown in FIG. 1B, the electronic device 10 may receive, through a network 20, a real scene image sent by another device (for example, an AR device) 30. In this way, the electronic device 10 can locate the position of the AR device 30 based on the real scene image and, when detecting that the AR device 30 may collide with another object, output warning information to the user through the human-computer interaction interface of the AR device 30. For example, the electronic device may be implemented as a server.
Referring to FIG. 2, which is a flowchart of the navigation prompting method provided by an embodiment of the present disclosure, the method includes the following steps S101 to S105:
S101: Obtain a real scene image captured by an augmented reality (AR) device.
Exemplarily, the AR device may include devices capable of augmented reality such as smartphones, tablets, and AR glasses. The AR device may have a built-in image acquisition component or be connected to an external one. After the AR device enters the working state, it can capture real scene images in real time through the image acquisition component.
S102: Determine, based on the real scene image, whether the AR device is located within the geographic range corresponding to a target area.
Exemplarily, the target area may be a preset area posing a certain travel risk, such as a road area, a mountain road prone to landslides, a construction area, or another hazard-prone area.
Exemplarily, based on the real scene image, the AR device can be localized to determine the first pose information of the AR device in the world coordinate system, and it can then be determined, based on the first pose information, whether the AR device is located within the geographic range corresponding to the target area.
In some embodiments of the present disclosure, it can be determined, based on the determined first pose information of the AR device and the geographic range, in the world coordinate system, of each area contained in the three-dimensional scene map representing the real scene, whether the first pose information of the AR device falls within the geographic range corresponding to the target area. If it does, it can be determined that the AR device is located within the geographic range corresponding to the target area; otherwise, the AR device is not located within that geographic range.
Considering that the real scene image is collected by the image acquisition component of the AR device, the first pose information of the AR device can be represented by the first pose information of that image acquisition component, which may include the position coordinates and attitude data of the image acquisition component in the world coordinate system corresponding to the real scene. The position coordinates can be represented by the coordinates of the image acquisition component in the world coordinate system; the attitude data can be represented by the orientation of the image acquisition component, which in turn can be represented by the included angles between the optical axis of the image acquisition component and the X, Y, and Z axes of the world coordinate system.
In some embodiments of the present disclosure, when localizing the AR device based on the real scene image captured by it, the AR device can be localized based on the real scene image and the three-dimensional scene map representing the real scene. In addition, the built-in Inertial Measurement Unit (IMU) of the AR device can also be used in the localization process; the detailed localization method will be introduced later.
S103: In a case where the AR device is located within the geographic range corresponding to the target area, detect whether a target object is present in the real scene image.
Exemplarily, the target object may be an obstacle that the user may collide with while traveling, and may include static obstacles and dynamic obstacles. For example, if the target area is a road, the target object may include vehicles, trees, railings, and the like.
Exemplarily, a pre-trained neural network for target detection can be used to detect whether a target object is present in the real scene image and to determine the detection information of the target object, such as the position information, in the image coordinate system, of the bounding rectangle of the target object, the position information of the center point of the bounding rectangle in the image coordinate system, or the pixel coordinate values, in the image coordinate system, of the pixels constituting the target object.
S104: In a case where a target object is present in the real scene image, determine whether there is a risk of collision between the AR device and the target object.
S105: In a case where there is a risk of collision between the AR device and the target object, issue a warning prompt to the user through the AR device.
In a case where it is determined that a target object is detected in the real scene image, it can further be determined whether there is a risk of collision between the AR device and the target object (the determination process is detailed later). In a case where it is determined that such a risk exists, a warning prompt is then issued to the user through the AR device.
Exemplarily, issuing a warning prompt to the user through the AR device may include:
issuing a warning prompt to the user through the AR device in at least one of voice form, text form, animation form, a warning sign, and a flashing form.
The text prompt information, animation prompt information, and warning signs may be virtual objects superimposed on the real scene; the prompting virtual objects can be superimposed on the real scene image captured by the AR device, so as to prompt the user.
In addition to warning the user with visual information such as text prompts, animation prompts, and warning signs, the user can also be prompted in voice and/or flashing form, which effectively reminds the user to pay attention to the surrounding environment and improves the user's travel safety.
In the embodiments of the present disclosure, it is proposed that when the AR device is located in the target area, target detection can be performed on the real scene image collected by the AR device. When a target object is present in the real scene image and it is estimated that there is a risk of collision between the AR device and the target object, the user can be warned through the AR device, avoiding safety accidents caused by the user colliding with the target object during navigation and thereby improving the user's travel safety.
The above S101 to S104 will be described in detail below with reference to embodiments.
For the above S102, determining, based on the real scene image, whether the AR device is located within the geographic range corresponding to the target area may include the following S1011 to S1013:
S1011: Determine first pose information of the AR device in the world coordinate system based on the real scene image and a preset three-dimensional scene map, the three-dimensional scene map being a map representing the real scene to which the target area belongs.
S1012: Determine whether the AR device is located within the geographic range corresponding to the target area based on the first pose information of the AR device and the geographic range, in the world coordinate system, of each area associated with the three-dimensional scene map.
S1013: If the first pose information of the AR device indicates that the AR device is located within the geographic range corresponding to the target area, determine that the AR device is located within that geographic range; if the first pose information indicates that the AR device is not located within the geographic range corresponding to the target area, determine that the AR device is not located within that geographic range.
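The membership test in S1012/S1013 reduces to checking whether the device's position falls inside a region of the map. One common way to do this, assuming each area's geographic range is stored as a polygon projected onto the ground plane (a representation assumed here for illustration, not specified by the disclosure), is the ray-casting point-in-polygon test:

```python
def point_in_region(point, polygon):
    """Ray-casting test: is a 2-D point inside a polygonal region?

    `point` is an (x, y) ground-plane position; `polygon` is a list of
    (x, y) vertices describing one area's boundary from the scene map.
    """
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count how many polygon edges a horizontal ray from the point crosses;
        # an odd number of crossings means the point is inside.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```

Running this test for the device position against the polygon of each mapped target area implements the decision in S1013.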
Exemplarily, a three-dimensional scene map representing the real scene can be generated from video or image data captured of the real scene in advance; the generation method is detailed later. Because the three-dimensional scene map is generated from video data of the real scene, it can be constructed so that it coincides exactly with the real scene in the same coordinate system, for example the world coordinate system, and can therefore be used as a high-precision map of the real scene.

Exemplarily, after the real scene image captured by the AR device is obtained, the first pose information of the AR device can first be determined based on that image and the pre-built three-dimensional scene map representing the real scene; the detailed process is described later.

In addition, real scene images are generally captured at set time intervals rather than continuously, and positioning based on real scene images and the three-dimensional scene map consumes considerable power. Therefore, when locating the AR device to determine its first pose information, visual positioning based on real scene images can be combined with IMU-based positioning.
Exemplarily, the first pose information of the AR device can be determined periodically by visual positioning, with IMU-based positioning used in between. For example, if visual positioning is performed every 10 seconds, the first pose information at start-up and at the 10th, 20th and 30th seconds is obtained by visual positioning. The first pose information at the 1st second after start-up can be estimated from the pose at start-up together with the data collected by the AR device's IMU between start-up and the 1st second; likewise, the first pose information at the 2nd second can be estimated from the pose at the 1st second and the IMU data collected between the 1st and 2nd seconds. As errors accumulate over time and the IMU-based pose becomes inaccurate, it can be corrected by visual positioning to obtain first pose information of higher accuracy.
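The periodic visual correction with IMU dead reckoning described above can be sketched as follows; `PoseEstimator`, the 10-second period, and the simple constant-velocity integration are illustrative assumptions, not the disclosed implementation:

```python
import numpy as np

VISUAL_PERIOD = 10.0  # seconds between visual (map-based) fixes, per the example above

class PoseEstimator:
    """Fuses periodic visual localization with IMU dead reckoning (hypothetical sketch)."""

    def __init__(self, initial_position, initial_velocity):
        self.position = np.asarray(initial_position, dtype=float)
        self.velocity = np.asarray(initial_velocity, dtype=float)
        self.time_since_fix = 0.0

    def imu_step(self, accel, dt):
        # Dead reckoning: integrate IMU acceleration to propagate the pose between fixes.
        self.velocity = self.velocity + np.asarray(accel, dtype=float) * dt
        self.position = self.position + self.velocity * dt
        self.time_since_fix += dt

    def needs_visual_fix(self):
        # Accumulated IMU drift is bounded by correcting every VISUAL_PERIOD seconds.
        return self.time_since_fix >= VISUAL_PERIOD

    def visual_fix(self, position_from_map):
        # Replace the drifting IMU estimate with the map-based visual localization result.
        self.position = np.asarray(position_from_map, dtype=float)
        self.time_since_fix = 0.0
```

In this sketch the IMU propagates the pose cheaply between the power-hungry visual fixes, matching the trade-off described in the paragraph above.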
In addition, the AR device can also be located by Simultaneous Localization And Mapping (SLAM). For example, a world coordinate system is constructed for the real scene in advance; after the AR device enters the real scene, its initial first pose information in the world coordinate system is obtained, along with the real scene images it captures. As the device moves and captures further real scene images, a three-dimensional scene map of the real scene is built in real time and the device is localized against it, yielding the first pose information of the AR device at different moments.

Exemplarily, each area occupies a certain extent in the three-dimensional scene map. The boundary line of the area can be annotated in advance and the position coordinates, in the world coordinate system, of the points on the boundary line obtained; in this way the geographic range of the area in the world coordinate system is determined.

Exemplarily, the position coordinates of each point on the boundary line in the world coordinate system may include its coordinate values along the X-axis, Y-axis and Z-axis. In some embodiments of the present disclosure, when determining the geographic range of the area from these coordinates, the coordinate range of the area along the X-axis is determined from the X-axis coordinate values of the boundary points, the range along the Y-axis from their Y-axis values, and the range along the Z-axis from their Z-axis values; the three ranges together are taken as the geographic range corresponding to the area.
After the first pose information of the AR device is obtained, whether the AR device is within the geographic range corresponding to the target area can be determined from the position coordinates of the AR device in the world coordinate system of the real scene and the geographic range corresponding to each area.

In the embodiments of the present disclosure, it is proposed that the first pose information of the AR device can be determined from the real scene image and the three-dimensional scene map representing the real scene. Since the map can be associated with the geographic range of each area, it can be quickly determined, from the first pose information and those ranges, whether the AR device is within a target area requiring a warning prompt, such as a road open to vehicles, thereby helping to ensure the user's safety.
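The boundary-point range derivation and containment check described above might look like the following minimal sketch; the axis-aligned treatment of the boundary and all names are assumptions:

```python
def area_range_from_boundary(boundary_points):
    """Axis-aligned coordinate ranges (X, Y, Z) derived from the annotated
    boundary points of an area, as described above (simplified sketch)."""
    xs, ys, zs = zip(*boundary_points)
    return (min(xs), max(xs)), (min(ys), max(ys)), (min(zs), max(zs))

def device_in_area(device_position, area_range):
    """True if the device's world-coordinate position lies inside all three ranges."""
    return all(lo <= c <= hi for c, (lo, hi) in zip(device_position, area_range))
```

With this, the per-area check of S1012/S1013 reduces to one containment test per annotated area.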
In one embodiment, for the above S104, when a target object is present in the real scene image, determining whether there is a risk of collision between the AR device and the target object includes, as shown in Figure 3, the following S201 to S202:

S201: Based on the real scene image, determine relative pose information between the target object and the AR device.

Exemplarily, the relative pose information between the AR device and the target object may include the relative distance and relative angle between them. The relative distance can be expressed as the distance, in the world coordinate system, between the optical center of the AR device's image acquisition component and a target position point of the target object; the target position point may be the center point of the target object, or the point on its boundary closest to the optical center. The relative angle can be expressed as the angle between the direction from the optical center to the target position point and the orientation of the optical axis.
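Under the definitions above (optical center, target position point, optical axis), the relative distance and relative angle could be computed as in this sketch; the function name and inputs are illustrative:

```python
import math
import numpy as np

def relative_distance_and_angle(optical_center, optical_axis, target_point):
    """Distance from the camera optical center to the target position point, and the
    angle (radians) between the optical axis and the direction toward that point."""
    to_target = np.asarray(target_point, dtype=float) - np.asarray(optical_center, dtype=float)
    distance = float(np.linalg.norm(to_target))
    axis = np.asarray(optical_axis, dtype=float)
    # Clamp to [-1, 1] to guard against floating-point overshoot before acos.
    cos_a = np.dot(to_target, axis) / (distance * np.linalg.norm(axis))
    angle = math.acos(max(-1.0, min(1.0, float(cos_a))))
    return distance, angle
```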
S202: Based on the relative pose information, determine whether there is a risk of collision between the AR device and the target object.

Exemplarily, whether there is a risk of collision between the AR device and the target object can be determined from the relative distance and relative angle; the manner of making this determination is described later.

Exemplarily, the relative pose information may include the pose of the target object relative to the AR device and the pose of the AR device relative to the target object. For example, if the target object approaches the AR device, the AR device approaches the target object, or both approach each other, it can be determined that their relative poses are getting closer and closer. All of this can be determined from the real scene images collected by the AR device; the determination process is described later.

In the embodiments of the present disclosure, it is proposed that the relative pose information between the target object and the AR device, such as relative distance and orientation, can be determined from the real scene image. The risk of collision between the AR device and the target object can then be estimated from this relative pose information, helping to ensure the user's safe travel.
In one embodiment, the target object includes a static obstacle. For the above S202, determining, based on the relative pose information, whether there is a risk of collision between the AR device and the target object includes the following S2021 to S2023:

S2021: Based on the relative pose information, determine whether the distance between the AR device and the static obstacle is less than a first preset distance.

Exemplarily, a static obstacle may be a stationary obstacle captured by the AR device while the user travels through the target area, such as a railing, a tree, or steps.

Exemplarily, the first preset distance can be set empirically, so that the user can be warned through the AR device early enough to avoid a collision with the static obstacle if the user continues in the current direction of travel.

S2022: If the distance between the AR device and the static obstacle is less than the first preset distance, determine whether the AR device is facing the static obstacle.

S2023: If the AR device is facing the static obstacle, determine that there is a risk of collision between the AR device and the static obstacle.
Exemplarily, the position coordinates of the target object in the camera coordinate system corresponding to the AR device can be determined from the real scene image collected by the AR device, and whether the AR device is facing the static obstacle can then be determined from the pointing direction of the AR device's optical axis.

Exemplarily, when there is a risk of collision between the AR device and the static obstacle, a risk level can be determined from the distance between them: the smaller the distance, the higher the risk level. When warning the user through the AR device, the warning can correspond to the risk level; for example, at a lower risk level the AR device may warn in a single way, while at a higher risk level it may warn in multiple ways.

Exemplarily, Figure 4 is a schematic diagram of a scene in which the user is prompted through the AR device when the target area is a construction area and the target object is a railing. When the user approaches the railing, the display screen of the AR device can show virtual text used to warn the user, such as "There is a railing ahead; please take care to avoid a collision", reminding the user of the danger ahead and prompting the user to stop approaching or take a detour.

In the embodiments of the present disclosure, a manner of determining whether there is a risk of collision with a static obstacle is provided. For example, when the target object is a railing, it can be determined that there is a risk of collision when the distance between the AR device and the railing is small and the AR device is facing the railing.
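A minimal sketch of the static-obstacle check in S2021 to S2023 and the distance-based risk level; the threshold values and the two-level banding are illustrative assumptions:

```python
def static_obstacle_risk(distance, facing_angle,
                         first_preset_distance=5.0, facing_threshold=0.3):
    """S2021-S2023 sketch: risk requires being both near the static obstacle and
    facing it (small angle, in radians, between the optical axis and the direction
    toward the obstacle). Threshold values are illustrative."""
    return distance < first_preset_distance and facing_angle < facing_threshold

def static_risk_level(distance, first_preset_distance=5.0):
    """Smaller distance -> higher risk level (0 = none, 1 = low, 2 = high)."""
    if distance >= first_preset_distance:
        return 0
    return 2 if distance < first_preset_distance / 2 else 1
```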
In another embodiment, the target object includes a dynamic obstacle. For the above S202, determining, based on the relative pose information, whether there is a risk of collision between the AR device and the target object may include, as shown in Figure 5, the following S301 to S302:

S301: Based on multiple real scene images, determine relative motion information between the AR device and the dynamic obstacle.

Exemplarily, the relative motion information may include the direction of motion of the AR device relative to the dynamic obstacle, or of the dynamic obstacle relative to the AR device, and may also include the relative speed.

Exemplarily, the relative pose information between the AR device and the dynamic obstacle can be determined from a single real scene image, and the relative motion information can be determined from the relative pose information corresponding to each of multiple real scene images. For example, based on two adjacent real scene images, the position coordinates of the dynamic obstacle in the camera coordinate system of the AR device can be determined, from which the travel direction and speed of the dynamic obstacle relative to the AR device can be derived.
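Estimating the relative motion information from the obstacle's camera-frame positions in two adjacent frames, as just described, could be sketched as follows; the function name and the fixed frame interval are assumptions:

```python
import numpy as np

def relative_motion(prev_position, curr_position, dt):
    """Relative travel direction (unit vector) and speed of the dynamic obstacle,
    from its positions in the AR device's camera coordinate system in two adjacent
    real scene images captured dt seconds apart."""
    displacement = np.asarray(curr_position, dtype=float) - np.asarray(prev_position, dtype=float)
    distance = float(np.linalg.norm(displacement))
    if distance == 0.0:
        return np.zeros_like(displacement), 0.0  # obstacle did not move between frames
    return displacement / distance, distance / dt
```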
S302: Based on the relative pose information and the relative motion information, determine whether there is a risk of collision between the AR device and the dynamic obstacle.

Exemplarily, when the target object is a dynamic obstacle, both the dynamic obstacle and the AR device may move within the target area. When the two are approaching each other, it can be determined that there is a risk of collision between the AR device and the dynamic obstacle; otherwise, it can be determined that there is no such risk.

In the embodiments of the present disclosure, a manner of determining whether there is a risk of collision with a dynamic obstacle is provided. The relative motion information between the target object and the AR device, which may include the relative direction of motion and relative speed, can be determined from multiple real scene images. The relative pose information and relative motion information can then be combined to estimate whether there is a risk of collision between the AR device and the target object, helping to ensure the user's safe travel.
For the above S302, determining, based on the relative pose information and the relative motion information, whether there is a risk of collision between the AR device and the dynamic obstacle includes the following S3021 to S3023:

S3021: Based on the relative pose information, determine whether the distance between the AR device and the dynamic obstacle is less than a second preset distance.

Exemplarily, dynamic obstacles may include vehicles, pedestrians and the like. The second preset distance can be set empirically, so as to leave the user a safety margin corresponding to a sufficient obstacle-avoidance time. For example, if the target object needs n1 seconds to avoid an obstacle and can travel m meters in n1 seconds, the second preset distance needs to be set greater than m meters.

Exemplarily, when the target object is a dynamic obstacle and the AR device and the target object move toward each other, the danger of a collision is greater than when the target object is a static obstacle. Therefore, the second preset distance needs to be set larger than the first preset distance mentioned above.
S3022: If the distance between the AR device and the dynamic obstacle is less than the second preset distance, determine, based on the relative motion information, whether the angle between the travel direction of the dynamic obstacle relative to the AR device and the direction from the dynamic obstacle toward the AR device is less than a set angle threshold.

Exemplarily, the position coordinates of the dynamic obstacle in the camera coordinate system of the AR device at the previous moment can be determined from the real scene image collected at the previous moment, and its position coordinates at the current moment from the real scene image collected at the current moment. The travel direction of the dynamic obstacle relative to the AR device can then be determined from these two sets of coordinates; for example, the direction pointing from the obstacle's position at the previous moment to its position at the current moment can be taken as the travel direction of the dynamic obstacle relative to the AR device.

Exemplarily, the direction from the dynamic obstacle toward the AR device can be determined from the position coordinates of the dynamic obstacle and of the AR device in the camera coordinate system of the AR device; for example, the direction pointing from the obstacle's position coordinates to the AR device's position coordinates in that coordinate system can be taken as the direction from the dynamic obstacle toward the AR device.

Exemplarily, the set angle threshold can be set empirically and is used to judge whether the dynamic obstacle and the AR device are gradually approaching each other. For example, when the angle between the travel direction of the dynamic obstacle relative to the AR device and the direction from the dynamic obstacle toward the AR device is less than the set angle threshold, it can be determined that the dynamic obstacle and the AR device are gradually approaching each other.

S3023: If the angle is less than the set angle threshold, determine that there is a risk of collision between the AR device and the dynamic obstacle.
When the dynamic obstacle and the AR device are gradually approaching each other, it can be estimated that there is a risk of collision between them.
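The steps S3021 to S3023 above can be sketched as a single check; the second preset distance and angle threshold values are illustrative assumptions:

```python
import math
import numpy as np

def angle_between(u, v):
    # Angle in radians between two non-zero vectors, clamped before acos.
    cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return math.acos(max(-1.0, min(1.0, float(cos_a))))

def dynamic_collision_risk(obstacle_prev, obstacle_curr, device_position,
                           second_preset_distance=15.0, angle_threshold=math.pi / 6):
    """S3021-S3023 sketch: a risk exists when the obstacle is within the second
    preset distance and its travel direction points, within the angle threshold,
    toward the AR device. Threshold values are illustrative."""
    obstacle_curr = np.asarray(obstacle_curr, dtype=float)
    device_position = np.asarray(device_position, dtype=float)
    # S3021: obstacles beyond the second preset distance pose no immediate risk.
    if np.linalg.norm(device_position - obstacle_curr) >= second_preset_distance:
        return False
    travel_dir = obstacle_curr - np.asarray(obstacle_prev, dtype=float)
    if not travel_dir.any():
        return False  # obstacle is stationary relative to the device
    toward_device = device_position - obstacle_curr
    # S3022/S3023: compare the travel direction with the direction toward the device.
    return angle_between(travel_dir, toward_device) < angle_threshold
```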
Exemplarily, when there is a risk of collision between the AR device and the dynamic obstacle, the risk level can be determined from the distance and relative speed between them; for example, the smaller the distance and the faster the relative speed, the higher the risk level. For instance, mappings between different distances and risk scores, and between different relative speeds and risk scores, can be established in advance. After the distance and relative speed between the AR device and the dynamic obstacle are obtained, the current risk score can be determined from these predetermined mappings, and the risk level further determined from the risk score.

In some embodiments of the present disclosure, when warning the user through the AR device, the warning can correspond to the risk level; for example, at a lower risk level the AR device may warn in a single way, while at a higher risk level it may warn in multiple ways.
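A hypothetical version of the distance-to-score and speed-to-score mappings and the score-to-level mapping described above; every band and value here is invented for illustration:

```python
def risk_score(distance, relative_speed):
    """Closer and faster -> higher combined score; the band boundaries (meters, m/s)
    are illustrative stand-ins for the predetermined mappings mentioned above."""
    distance_score = 3 if distance < 5 else 2 if distance < 10 else 1
    speed_score = 3 if relative_speed > 5 else 2 if relative_speed > 2 else 1
    return distance_score + speed_score

def risk_level(score):
    """Map the combined score to a warning level (1 = low, 2 = medium, 3 = high)."""
    if score >= 5:
        return 3
    if score >= 4:
        return 2
    return 1
```

A low level might trigger a single warning modality (text overlay), while a high level might combine several (text, sound, vibration).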
Exemplarily, Figure 6 is a schematic diagram of a scene in which the user is prompted through the AR device when the target area is a road and the target object is a vehicle. When the user approaches the vehicle, the display screen of the AR device can show virtual text used to warn the user, such as "There is a vehicle on the road ahead; please take care to avoid a collision", reminding the user of the danger ahead and prompting the user to stop approaching or take a detour.

In the embodiments of the present disclosure, a manner of determining whether there is a risk of collision by taking the motion state of the AR device and/or the dynamic obstacle into account is provided. The risk of collision between the AR device and the dynamic obstacle can be accurately estimated when the distance between them is less than the second preset distance and they are approaching each other.
For the relative pose information between the target object and the AR device mentioned above, in one embodiment, determining the relative pose information based on the real scene image may include, as shown in Figure 7, the following S401 to S403:

S401: Determine second pose information of the target object in the world coordinate system from the real scene image, the detection area corresponding to the target object in the real scene image, and a preset three-dimensional scene map, where the three-dimensional scene map is a map representing the real scene to which the target area belongs.

Exemplarily, the three-dimensional scene map here may be the same scene map as the one described above; the construction process is detailed later.

S402: Locate the AR device to obtain its first pose information in the world coordinate system.

Exemplarily, the process of locating the AR device here may be similar to the process of determining its first pose information in S1011 above (detailed later), or the pose may be determined by positioning sensors on the AR device itself, such as its own global navigation satellite equipment and inertial measurement unit.

S403: Determine the relative pose information between the target object and the AR device from the first pose information and the second pose information.
Exemplarily, the three-dimensional scene map representing the real scene contains multiple feature points of the real scene, and multiple feature points of the real scene can likewise be extracted from the real scene image. By matching the feature points contained in the real scene image against those contained in the three-dimensional scene map, the position coordinates, in the world coordinate system of the map, of the feature points constituting the target object can be determined; that is, the position coordinates in the world coordinate system of the feature points of the target object within the detection area are obtained. From these, the second pose information of the target object in the world coordinate system can be determined.

After the second pose information of the target object in the world coordinate system is obtained, the relative pose information between the target object and the AR device can be determined from the first pose information of the AR device and the second pose information of the target object, both in the world coordinate system.

In the embodiments of the present disclosure, it is proposed that by determining the respective pose information of the target object and the AR device in the world coordinate system, the relative pose information between them can be determined quickly and accurately.
In one embodiment, determining the relative pose information between the target object and the AR device based on the real scene image may further include:

determining the relative pose information between the detected target object and the AR device from the first pose information of the AR device in the world coordinate system, obtained after locating the AR device, and the pose information of the target object in the image coordinate system corresponding to the real scene image.

Exemplarily, the pose information of the target object in the camera coordinate system of the AR device can be determined from its pose information in the image coordinate system of the real scene image and the intrinsic parameters of the AR device's image acquisition component. Further, the extrinsic parameters of the image acquisition component can be determined from the first pose information of the AR device in the world coordinate system obtained after locating the AR device, and the second pose information of the target object in the world coordinate system can be determined from those extrinsic parameters. Finally, the relative pose information between the AR device and the target object in the world coordinate system can be determined from the first pose information of the AR device and the second pose information of the target object.
In some embodiments of the present disclosure, determining the relative pose information between the detected target object and the AR device from the first pose information of the AR device in the world coordinate system, obtained after locating the AR device, and the pose information of the target object in the image coordinate system of the real scene image includes the following S501 to S505:

S501: Perform target detection on the real scene image and determine the pose information, in the image coordinate system, of each pixel constituting the target object in the real scene image.

Here, a pre-trained target detection neural network can be used to perform target detection on the real scene image, determining the target object contained in the image and the pose information, in the image coordinate system, of each pixel constituting the target object, where the pose information of a pixel in the image coordinate system refers to its pixel coordinate values in that coordinate system.

S502: Based on the real scene image, determine the depth image corresponding to it; the depth image contains the depth information of each pixel constituting the target object in the real scene image.

Exemplarily, the depth image corresponding to the real scene image can be determined from the collected real scene image and a pre-trained neural network for depth estimation, thereby obtaining the depth information of each pixel constituting the target object in the real scene image.
S503,基于现实场景图像中构成目标对象的每个像素点在图像坐标系下的位姿信息、该像素点的深度信息以及AR设备中的图像采集部件的参数信息,确定该像素点在世界坐标系下的三维坐标信息。S503, based on the pose information of each pixel that constitutes the target object in the real scene image in the image coordinate system, the depth information of the pixel, and the parameter information of the image acquisition component in the AR device, determine that the pixel is in world coordinates 3D coordinate information under the system.
示例性地,针对采集到的现实场景图像可以建立图像坐标系,基于构建的图像坐标系可以确定构成目标对象的每个像素点在图像坐标系下的位姿信息,即在图像坐标系下的像素坐标值。Exemplarily, an image coordinate system can be established for the collected real scene image, and the pose information of each pixel that constitutes the target object in the image coordinate system can be determined based on the constructed image coordinate system, that is, the pose information in the image coordinate system can be determined. Pixel coordinate value.
图像采集部件的参数信息可以包括AR设备的图像采集部件的内部参数和外部参数,其中,内部参数为AR设备的图像采集部件的固定参数,外部参数可以通过AR设备的第一位姿信息确定,其中,该内部参数可以用于将像素点在图像坐标系下的坐标值转化为在相机坐标系下的坐标值;外部参数可以用于将像素点在相机坐标系下的坐标值转换为在世界坐标系下的坐标值。The parameter information of the image acquisition part may include internal parameters and external parameters of the image acquisition part of the AR device, wherein the internal parameters are fixed parameters of the image acquisition part of the AR device, and the external parameters can be determined by the first pose information of the AR device, Among them, the internal parameter can be used to convert the coordinate value of the pixel in the image coordinate system into the coordinate value in the camera coordinate system; the external parameter can be used to convert the coordinate value of the pixel in the camera coordinate system into the coordinate value in the world Coordinate values in the coordinate system.
示例性地,这里可以通过AR设备中的图像采集部件的内部参数,对构成目标对象的每个像素点在图像坐标系下的像素坐标值转换至在相机坐标系下沿X轴和Y轴方向的坐标值,基于像素点的深度信息,可以得到该像素点在相机坐标系下沿Z轴方向的坐标值,然后再基于图像采集部件的外部参数,将像素点在相机坐标系下的坐标值转化至在世界坐标系下的坐标值,即得到像素点在世界坐标系下的三维坐标信息。Exemplarily, through the internal parameters of the image acquisition component in the AR device, the pixel coordinate value of each pixel that constitutes the target object in the image coordinate system can be converted to the X-axis and Y-axis directions in the camera coordinate system. Based on the depth information of the pixel point, the coordinate value of the pixel point along the Z-axis direction in the camera coordinate system can be obtained, and then based on the external parameters of the image acquisition component, the coordinate value of the pixel point in the camera coordinate system can be obtained. Convert to the coordinate value in the world coordinate system, that is, get the three-dimensional coordinate information of the pixel point in the world coordinate system.
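The image-to-camera-to-world conversion described above can be sketched with a standard pinhole camera model. The intrinsic matrix, rotation, and translation below are illustrative placeholders rather than values from the disclosure:

```python
import numpy as np

def pixel_to_world(u, v, depth, K, R, t):
    """Back-project an image pixel (u, v) with known depth into world coordinates.

    K: 3x3 intrinsic matrix; R, t: camera-to-world rotation and translation
    (the "extrinsic parameters" derived from the AR device's first pose).
    """
    # Image coordinates -> camera coordinates: X and Y from the intrinsics,
    # Z directly from the pixel's depth information.
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    x_cam = (u - cx) * depth / fx
    y_cam = (v - cy) * depth / fy
    p_cam = np.array([x_cam, y_cam, depth])
    # Camera coordinates -> world coordinates via the extrinsic parameters.
    return R @ p_cam + t

# Example with an identity camera pose (camera frame coincides with world frame).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
p = pixel_to_world(320.0, 240.0, 2.0, K, np.eye(3), np.zeros(3))
# A pixel at the principal point back-projects onto the optical axis: (0, 0, 2).
```

With a non-identity R and t, the same function maps the camera-frame point into whatever world frame the AR device was localized in.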
S504: Based on the three-dimensional coordinate information, in the world coordinate system, of each pixel constituting the target object, determine the second pose information of the target object in the world coordinate system.
Exemplarily, after the three-dimensional coordinate information of each pixel constituting the target object in the world coordinate system has been obtained, the three-dimensional detection information corresponding to the target object can be determined with a pre-trained three-dimensional target detection neural network.
For example, the three-dimensional point cloud data corresponding to the target object can be obtained from the three-dimensional coordinate information of each pixel constituting the target object in the world coordinate system; the three-dimensional detection information corresponding to the target object is then determined based on this point cloud data and a pre-built three-dimensional target detection neural network, and the second pose information of the target object in the world coordinate system is in turn determined from that three-dimensional detection information.
Exemplarily, the three-dimensional detection information of the target object may include the position coordinates of the center point of the target object in the world coordinate system, the length, width, and height of the 3D detection box of the target object, and the angles between the designated forward direction of the target object and the coordinate axes of the world coordinate system. The designated forward direction of the target object may be the direction the vehicle front faces; the center point position coordinates of the target object in the world coordinate system and its front-facing direction can then be taken as the second pose information of the target object in the world coordinate system.
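As an illustration only (the disclosure does not fix a concrete encoding), the second pose information could be read off such three-dimensional detection information as a center point plus a unit heading vector derived from a yaw angle:

```python
import numpy as np

def pose_from_detection(center, yaw):
    """Reduce 3D detection information to second pose information: the center
    coordinates and a unit heading vector in the world XY plane.

    yaw is the assumed angle (radians) between the object's designated forward
    direction (e.g. the vehicle front) and the world X-axis.
    """
    heading = np.array([np.cos(yaw), np.sin(yaw), 0.0])
    return np.asarray(center, dtype=float), heading

center, heading = pose_from_detection([12.0, 5.0, 0.4], yaw=np.pi / 2)
# A yaw of 90 degrees means the object faces along the world Y-axis.
```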
S505: Determine the relative pose information between the detected target object and the AR device according to the first pose information of the AR device in the world coordinate system, obtained by positioning the AR device, and the determined second pose information of the target object in the world coordinate system.
In another implementation, real-time mapping and positioning can also be performed based on SLAM, yielding a three-dimensional scene map of the real scene in which the AR device is located as well as the first pose information of the AR device. Based on the three-dimensional scene map constructed in real time, the second pose information of the target object in the world coordinate system can be obtained, so that the relative pose information between the AR device and the target object can be determined from the first pose information of the AR device and the second pose information of the target object.
The three-dimensional scene map mentioned several times above can be pre-built as follows, including steps S601 to S603:
S601: Acquire multiple real scene sample images.
Exemplarily, the real scene, for example a city, can be photographed in advance from multiple angles by a drone, yielding a large number of real scene sample images corresponding to that real scene.
S602: Based on the multiple real scene sample images, construct an initial three-dimensional scene model representing the real scene.
For S602, generating the initial three-dimensional scene model corresponding to the real scene based on the multiple real scene sample images may include the following steps S6021 to S6022:
S6021: Extract multiple feature points from each acquired real scene sample image.
S6022: Generate the initial three-dimensional scene model based on the extracted feature points and a pre-stored three-dimensional sample map matching the real scene, where the three-dimensional sample map is a pre-stored three-dimensional drawing representing the morphological features of the real scene.
In some embodiments of the present disclosure, the feature points extracted from each real scene sample image may be points capable of representing the key information of that sample image; for example, for a sample image containing a building, the feature points here may be points representing the contour information of the building.
Exemplarily, the pre-stored three-dimensional sample map matching the real scene may include a pre-prepared, dimensioned three-dimensional drawing capable of representing the morphological features of the real scene, such as a computer-aided design (CAD) three-dimensional drawing of the scene.
For the real scene, when enough feature points have been extracted, the feature point cloud they form can constitute a three-dimensional model representing the scene. The feature points in this cloud carry no units, so the three-dimensional model they form is also unitless; after the feature point cloud is aligned with the dimensioned three-dimensional drawing that represents the morphological features of the real scene, the initial three-dimensional scene model corresponding to the real scene is obtained.
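The unit-fixing part of that alignment can be sketched as estimating a single metric scale factor from point correspondences between the unitless feature point cloud and the dimensioned drawing. The correspondences are assumed to be given here, which in practice would itself require a matching step:

```python
import numpy as np

def estimate_scale(cloud_pts, ref_pts):
    """Estimate the metric scale of a unitless point cloud from corresponding
    points in a dimensioned reference model (e.g. a CAD drawing).

    Both arrays are N x 3, with row i of one corresponding to row i of the other.
    The scale is the ratio of mean distances from each set's centroid.
    """
    cloud = np.asarray(cloud_pts, dtype=float)
    ref = np.asarray(ref_pts, dtype=float)
    d_cloud = np.linalg.norm(cloud - cloud.mean(axis=0), axis=1).mean()
    d_ref = np.linalg.norm(ref - ref.mean(axis=0), axis=1).mean()
    return d_ref / d_cloud

# A cloud that is the reference shrunk by a factor of 100 should recover scale 100.
ref = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 20.0, 0.0]])
scale = estimate_scale(ref / 100.0, ref)
```

A full similarity alignment would additionally estimate rotation and translation (e.g. with the Umeyama method); the sketch isolates only the step that gives the unitless model physical units.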
S603: Align the calibration feature points on the constructed initial three-dimensional scene model with the calibration feature points corresponding to the real scene to generate the three-dimensional scene map.
The generated initial three-dimensional model may exhibit distortion; the alignment process can then be completed using the calibration feature points of the real scene and the calibration feature points on the initial three-dimensional scene model, so that a three-dimensional scene model of higher accuracy is obtained.
For S603, aligning the calibration feature points on the constructed initial three-dimensional scene model with the calibration feature points corresponding to the real scene to generate the three-dimensional scene map includes steps S6031 to S6032:
S6031: Extract, from the initial three-dimensional scene model corresponding to the real scene, calibration feature points representing multiple spatial position points of the real scene.
S6032: Determine the real position coordinates of the calibration feature points in the real two-dimensional map corresponding to the real scene, and adjust the position coordinates of each feature point in the initial three-dimensional scene model based on the real position coordinates corresponding to each calibration feature point.
Exemplarily, some feature points representing spatial position points at the edges and corners of buildings can be selected as the calibration feature points. Then, based on the real position coordinates corresponding to a calibration feature point and the position coordinates of that calibration feature point in the initial three-dimensional scene model, a position coordinate adjustment amount is determined; correcting the position coordinates of each feature point in the initial three-dimensional model with this adjustment amount yields a three-dimensional scene model of higher accuracy.
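One minimal way to realize that correction, under the assumption that the position coordinate adjustment amount is a single offset averaged over the calibration feature points, is:

```python
import numpy as np

def adjust_model(points, calib_model, calib_real):
    """Shift all model feature points by the mean offset between the calibration
    points' real-world positions and their positions in the initial model.

    points:      N x 3 feature points of the initial 3D model
    calib_model: M x 3 calibration points as they appear in the model
    calib_real:  M x 3 surveyed real coordinates of the same calibration points
    """
    offset = (np.asarray(calib_real, dtype=float)
              - np.asarray(calib_model, dtype=float)).mean(axis=0)
    return np.asarray(points, dtype=float) + offset

# If the whole model is displaced by (1, -2, 0), the calibration points recover
# that offset and the correction removes it.
model_pts = np.array([[1.0, -2.0, 0.0], [2.0, -1.0, 0.0]])
corrected = adjust_model(model_pts, calib_model=model_pts,
                         calib_real=model_pts + np.array([-1.0, 2.0, 0.0]))
```

A model with non-uniform distortion would need a richer correction (per-region offsets or a warp); the uniform offset is the simplest instance of the described adjustment.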
Exemplarily, the real position coordinates corresponding to a calibration feature point may refer to the position coordinates of that calibration feature point in the world coordinate system corresponding to the real scene to which the target area belongs; after alignment, the resulting three-dimensional scene map contains the position coordinates of each feature point in the world coordinate system.
After the three-dimensional scene map of the real scene has been constructed, the AR device can be positioned based on the real scene image captured by the AR device and this three-dimensional scene map. Determining the first pose information of the AR device based on the real scene image and the three-dimensional scene map may include the following steps S701 to S703:
S701: Extract the feature points contained in the real scene image, and extract the feature points of each real scene sample image used when pre-building the three-dimensional scene map.
S702: Based on the feature points corresponding to the real scene image and the feature points corresponding to each real scene sample image used when pre-building the three-dimensional scene map, determine the target real scene sample image with the highest similarity to the real scene image.
S703: Determine the first pose information of the AR device based on the shooting pose information corresponding to the target real scene sample image.
Exemplarily, after the real scene image captured by the AR device has been acquired, the target real scene sample image with the highest similarity to the real scene image can be found using the feature points of the real scene image and the feature points of each real scene sample image used when pre-building the three-dimensional scene map. For example, based on the feature information of the feature points of the real scene image and the feature information of the feature points of each sample image, a similarity value between the real scene image and each sample image can be determined, and the sample image whose similarity value is the highest and exceeds a similarity threshold is taken as the target real scene sample image.
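A minimal sketch of that retrieval step, assuming each image has already been reduced to a single global feature descriptor (in practice the feature information of the individual feature points would first be aggregated), is:

```python
import numpy as np

def find_target_sample(query_desc, sample_descs, threshold=0.8):
    """Return the index of the sample image most similar to the query image,
    or None if even the best similarity does not exceed the threshold.

    query_desc: 1-D feature descriptor of the captured real scene image
    sample_descs: list of 1-D descriptors, one per pre-built sample image
    Similarity here is cosine similarity between descriptors.
    """
    q = np.asarray(query_desc, dtype=float)
    q = q / np.linalg.norm(q)
    sims = []
    for s in sample_descs:
        s = np.asarray(s, dtype=float)
        sims.append(float(np.dot(q, s / np.linalg.norm(s))))
    best = int(np.argmax(sims))
    return best if sims[best] > threshold else None

samples = [np.array([1.0, 0.0, 0.0]),
           np.array([0.6, 0.8, 0.0]),
           np.array([0.0, 0.0, 1.0])]
idx = find_target_sample(np.array([0.58, 0.81, 0.05]), samples)  # best match: index 1
```

The cosine measure and the 0.8 threshold are illustrative; the disclosure only requires some similarity value and a similarity threshold.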
After the target real scene sample image has been determined, the first pose information of the AR device can be determined based on the shooting pose information corresponding to that target real scene sample image.
In some embodiments of the present disclosure, for S703 above, determining the first pose information of the AR device based on the shooting pose information corresponding to the target real scene sample image may include the following steps S7031 to S7032:
S7031: Determine the relative pose information between the target object in the target real scene sample image and the target object in the real scene image.
S7032: Determine the first pose information of the AR device based on the relative pose information and the shooting pose information corresponding to the target real scene sample image.
Exemplarily, the target object contained in the target real scene sample image with the highest similarity to the real scene image is the same target object as that contained in the real scene image; for example, if the target object contained in the real scene image is building A, the target object contained in the target real scene sample image is also building A. By determining the relative pose information between building A in the real scene image and building A in the target real scene sample image, the relative shooting pose information of the image acquisition component when capturing the real scene image and when capturing the target real scene sample image can be determined; the first pose information of the AR device can then be determined based on this relative shooting pose information and the shooting pose information corresponding to the target real scene sample image.
Exemplarily, when determining the relative pose data between the target object in the target real scene sample image and the target object in the real scene image, the three-dimensional detection technique mentioned above can be used to separately determine the three-dimensional detection information corresponding to the target object in the target real scene sample image and in the real scene image; the relative pose data is then determined from the two sets of three-dimensional detection information so obtained.
As a special case, when the pose information of the target object in the target real scene sample image is the same as that of the target object in the real scene image, the shooting pose information corresponding to the target real scene sample image can be used directly as the first pose information of the AR device.
Those skilled in the art can understand that, in the above methods of the implementations, the order in which the steps are written does not imply a strict execution order or impose any limitation on the implementation process; the execution order of the steps should be determined by their functions and possible internal logic.
Based on the same technical concept, the embodiments of the present disclosure further provide a navigation prompt apparatus corresponding to the navigation prompt method. Since the principle by which the apparatus in the embodiments of the present disclosure solves the problem is similar to that of the above navigation prompt method of the embodiments of the present disclosure, the implementation of the apparatus may refer to the implementation of the method, and repeated descriptions are omitted.
Referring to FIG. 8, a schematic diagram of a navigation prompt apparatus 800 provided by an embodiment of the present disclosure, the navigation prompt apparatus includes:
an image acquisition part 801, configured to acquire a real scene image captured by an augmented reality (AR) device;
a first judgment part 802, configured to judge, based on the real scene image, whether the AR device is located within a geographic location range corresponding to a target area;
a target detection part 803, configured to detect, when the AR device is located within the geographic location range corresponding to the target area, whether a target object exists in the real scene image;
a second judgment part 804, configured to judge, when a target object exists in the real scene image, whether there is a risk of collision between the AR device and the target object;
an early warning prompt part 805, configured to give an early warning prompt to the user through the AR device when there is a risk of collision between the AR device and the target object.
In a possible implementation, the first judgment part 802 is further configured to: determine the first pose information of the AR device in the world coordinate system based on the real scene image and a preset three-dimensional scene map, where the three-dimensional scene map is a map representing the real scene to which the target area belongs; and judge, based on the first pose information of the AR device and the geographic location range, in the world coordinate system, of each area associated with the three-dimensional scene map, whether the AR device is located within the geographic location range corresponding to the target area.
In a possible implementation, the second judgment part 804 is further configured to: determine the relative pose information between the target object and the AR device based on the real scene image; and judge, based on the relative pose information, whether there is a risk of collision between the AR device and the target object.
In a possible implementation, the target object includes a static obstacle, and the second judgment part 804 is further configured to: judge, based on the relative pose information, whether the distance between the AR device and the static obstacle is less than a first preset distance; when the distance between the AR device and the static obstacle is less than the first preset distance, judge whether the AR device is facing the static obstacle; and when the AR device is facing the static obstacle, determine that there is a risk of collision between the AR device and the static obstacle.
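The static-obstacle check described above can be sketched as follows, with the relative pose reduced to 2D positions plus a device heading vector; the dot-product facing test and the threshold value are illustrative assumptions:

```python
import numpy as np

def static_collision_risk(device_pos, device_heading, obstacle_pos,
                          first_preset_distance):
    """Risk exists when the static obstacle is within the preset distance AND
    the AR device's heading points toward the obstacle."""
    to_obstacle = np.asarray(obstacle_pos, dtype=float) - np.asarray(device_pos, dtype=float)
    if np.linalg.norm(to_obstacle) >= first_preset_distance:
        return False
    # "Facing" the obstacle: positive projection of the device heading onto
    # the device-to-obstacle direction.
    heading = np.asarray(device_heading, dtype=float)
    return float(np.dot(heading, to_obstacle)) > 0.0

# Obstacle 3 m ahead, threshold 5 m, device heading straight at it.
risk = static_collision_risk([0.0, 0.0], [1.0, 0.0], [3.0, 0.0],
                             first_preset_distance=5.0)
```

A stricter facing test could require the obstacle to fall within the device's field of view rather than merely in the forward half-plane.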
In a possible implementation, the target object includes a dynamic obstacle, and the second judgment part 804 is further configured to: determine relative motion information between the AR device and the dynamic obstacle based on multiple real scene images; and judge, based on the relative pose information and the relative motion information, whether there is a risk of collision between the AR device and the dynamic obstacle.
In a possible implementation, the second judgment part 804 is further configured to: judge, based on the relative pose information, whether the distance between the AR device and the dynamic obstacle is less than a second preset distance; when the distance between the AR device and the dynamic obstacle is less than the second preset distance, judge, based on the relative motion information, whether the angle between the travel direction of the dynamic obstacle relative to the AR device and the direction from the dynamic obstacle toward the AR device is less than a set angle threshold; and when the angle is less than the set angle threshold, determine that there is a risk of collision between the AR device and the dynamic obstacle.
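Similarly, the dynamic-obstacle check can be sketched as a distance test followed by an angle test between the obstacle's travel direction and the direction from the obstacle toward the device; the threshold values below are illustrative:

```python
import numpy as np

def dynamic_collision_risk(device_pos, obstacle_pos, obstacle_velocity,
                           second_preset_distance, angle_threshold_deg):
    """Risk exists when the dynamic obstacle is within the preset distance AND
    the angle between its travel direction and the direction from the obstacle
    toward the AR device is below the set angle threshold."""
    device = np.asarray(device_pos, dtype=float)
    obstacle = np.asarray(obstacle_pos, dtype=float)
    to_device = device - obstacle
    if np.linalg.norm(to_device) >= second_preset_distance:
        return False
    v = np.asarray(obstacle_velocity, dtype=float)
    cos_angle = np.dot(v, to_device) / (np.linalg.norm(v) * np.linalg.norm(to_device))
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return bool(angle < angle_threshold_deg)

# A vehicle 10 m away driving straight at the device: angle 0 deg, risk present.
risk = dynamic_collision_risk(device_pos=[0.0, 0.0], obstacle_pos=[10.0, 0.0],
                              obstacle_velocity=[-5.0, 0.0],
                              second_preset_distance=20.0, angle_threshold_deg=30.0)
```

The relative motion information (the velocity here) would come from tracking the obstacle across the multiple real scene images mentioned above.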
在一种可能的实施方式中,第二判断部分804,还被配置为根据现实场景图像、目标对象在现实场景图像中对应的检测区域、以及预设的三维场景地图,确定目标对象在世界坐标系下的第二位姿信息;其中,三维场景地图为表征目标区域所属的现实场景的地图;对AR设备进行定位,获得AR设备在世界坐标系下的第一位姿信息;基于第一位姿信息以及第二位姿信息,确定目标对象与AR设备之间的相对位姿信息。In a possible implementation, the second judging part 804 is further configured to determine the world coordinates of the target object according to the real scene image, the detection area corresponding to the target object in the real scene image, and the preset three-dimensional scene map The second pose information under the system; among them, the three-dimensional scene map is a map representing the real scene to which the target area belongs; locate the AR device to obtain the first pose information of the AR device in the world coordinate system; based on the first The pose information and the second pose information determine the relative pose information between the target object and the AR device.
在一种可能的实施方式中,预警提示部分804,还被配置为通过AR设备以语音形式、文字形式、动画形式、警示符、和闪光形式中的至少一种对用户进行预警提示。In a possible implementation, the early warning prompting part 804 is further configured to provide early warning prompts to the user in at least one of voice form, text form, animation form, warning symbol, and flashing form through the AR device.
关于装置中的各部分的处理流程、以及各部分之间的交互流程的描述可以参照上述方法实施例中的相关说明,这里不再详述。For the description of the processing flow of each part in the apparatus and the interaction flow between the various parts, reference may be made to the relevant descriptions in the foregoing method embodiments, which will not be described in detail here.
对应于图2中的导航提示方法,本公开实施例还提供了一种电子设备900,如图9所示,为本公开实施例提供的电子设备900结构示意图,包括:Corresponding to the navigation prompting method in FIG. 2 , an embodiment of the present disclosure further provides an electronic device 900 . As shown in FIG. 9 , the schematic structural diagram of the electronic device 900 provided by the embodiment of the present disclosure includes:
处理器91、存储器92、和总线93;存储器92用于存储执行指令,包括内存921和外部存储器922;这里的内存921也称内存储器,用于暂时存放处理器91中的运算数据,以及与硬盘等外部存储器922交换的数据,处理器91通过内存921与外部存储器922进行数据交换,当电子设备900运行时,处理器91与存储器92之间通过总线93通信,使得处理器91执行以下指令:获取增强现实AR设备拍摄的现实场景图像;基于现实场景图像,判断AR设备是否位于目标区域对应的地理位置范围内;在AR设备位于目标区域对应的地理位置范围内的情况下,检测现实场景图像中是否存在目标对象;在现实场景图像中存在目标对象的情况下,判断AR设备与目标对象是否存在发生碰撞的风险;在AR设备与目标对象存在发生碰撞的风险的情况下,通过AR设备对用户进行预警提示。The processor 91, the memory 92, and the bus 93; the memory 92 is used to store the execution instructions, including the memory 921 and the external memory 922; the memory 921 here is also called the internal memory, which is used to temporarily store the operation data in the processor 91, and For the data exchanged by the external memory 922 such as the hard disk, the processor 91 exchanges data with the external memory 922 through the memory 921. When the electronic device 900 is running, the communication between the processor 91 and the memory 92 is through the bus 93, so that the processor 91 executes the following instructions : Obtain the real scene image captured by the augmented reality AR device; based on the real scene image, determine whether the AR device is located within the geographic location range corresponding to the target area; if the AR device is located within the geographic location range corresponding to the target area, detect the real scene Whether there is a target object in the image; if there is a target object in the real scene image, determine whether there is a risk of collision between the AR device and the target object; if there is a risk of collision between the AR device and the target object, through the AR device Alert users.
本公开实施例还提供一种计算机可读存储介质,该计算机可读存储介质上存储有计算机程序,该计算机程序被处理器运行时执行上述方法实施例中所述的导航提示方法的步骤。其中,该存储介质可以是易失性或非易失的计算机可读取存储介质。Embodiments of the present disclosure further provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is run by a processor, the steps of the navigation prompting method described in the above method embodiments are executed. Wherein, the storage medium may be a volatile or non-volatile computer-readable storage medium.
本公开实施例还提供一种计算机程序,包括计算机可读代码,当所述计算机可读代码在电子设备中运行时,所述电子设备中的处理器执行配置为实现上述导航提示方法的步骤。Embodiments of the present disclosure further provide a computer program, including computer-readable code, when the computer-readable code is executed in an electronic device, a processor in the electronic device executes the steps configured to implement the above-mentioned navigation prompting method.
本公开实施例还提供一种计算机程序产品,该计算机程序产品包括计算机程序指令,所述计算机程序指令可用于执行上述方法实施例中所述的导航提示方法的步骤,详细可参见上述方法实施例,在此不再赘述。An embodiment of the present disclosure further provides a computer program product, where the computer program product includes computer program instructions, and the computer program instructions can be used to execute the steps of the navigation prompt method described in the above method embodiments. For details, please refer to the above method embodiments , and will not be repeated here.
其中,上述计算机程序产品可以通过硬件、软件或其结合的方式实现。在一个可选实施例中,所述计算机程序产品体现为计算机存储介质,在另一个可选实施例中,计算机程序产品体现为软件产品,例如软件开发包(Software Development Kit,SDK)等等。Wherein, the above-mentioned computer program product can be realized by means of hardware, software or a combination thereof. In an optional embodiment, the computer program product is embodied as a computer storage medium, and in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK) and the like.
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统和装置的工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。在本公开所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,又例如,多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些通信接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。Those skilled in the art can clearly understand that, for the convenience and brevity of description, for the working process of the system and device described above, reference may be made to the corresponding process in the foregoing method embodiments, and details are not repeated here. In the several embodiments provided by the present disclosure, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. The apparatus embodiments described above are only illustrative. For example, the division of the units is only a logical function division. In actual implementation, there may be other division methods. For example, multiple units or components may be combined or Can be integrated into another system, or some features can be ignored, or not implemented. On the other hand, the shown or discussed mutual coupling or direct coupling or communication connection may be through some communication interfaces, indirect coupling or communication connection of devices or units, which may be in electrical, mechanical or other forms.
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个 网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。The units described as separate components may or may not be physically separated, and components shown as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
另外,在本公开各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。In addition, each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个处理器可执行的非易失的计算机可读取存储介质中。基于这样的理解,本公开的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本公开各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。The functions, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a processor-executable non-volatile computer-readable storage medium. Based on such understanding, the technical solutions of the present disclosure can be embodied in the form of software products in essence, or the parts that contribute to the prior art or the parts of the technical solutions. The computer software products are stored in a storage medium, including Several instructions are used to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in various embodiments of the present disclosure. The aforementioned storage medium includes: U disk, mobile hard disk, read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic disk or optical disk and other media that can store program codes .
Finally, it should be noted that the above embodiments are merely implementations of the present disclosure, intended to illustrate rather than limit its technical solutions, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that any person skilled in the art may, within the technical scope disclosed herein, still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent replacements of some of the technical features; such modifications, changes, or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure, and shall all fall within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (20)

  1. A navigation prompt method, comprising:
    acquiring a real scene image captured by an augmented reality (AR) device;
    determining, based on the real scene image, whether the AR device is located within a geographic location range corresponding to a target area;
    detecting, in a case where the AR device is located within the geographic location range corresponding to the target area, whether a target object is present in the real scene image;
    determining, in a case where a target object is present in the real scene image, whether there is a risk of collision between the AR device and the target object; and
    issuing, in a case where there is a risk of collision between the AR device and the target object, an early warning prompt to a user through the AR device.
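Although the claims are implementation-agnostic, the control flow of claim 1 can be sketched as a single per-frame step. Every callable below (`locate_in_area`, `detect_target`, `has_collision_risk`, `warn`) is a hypothetical stand-in for the localization, detection, risk-judgment, and prompting components; none of these names or signatures comes from the disclosure.

```python
def navigation_prompt_step(frame, locate_in_area, detect_target,
                           has_collision_risk, warn):
    """One iteration of the claim-1 flow; returns True if a warning was issued.

    The four callables are illustrative placeholders for the method steps:
    area localization, target detection, collision-risk judgment, and the
    early warning prompt through the AR device.
    """
    if not locate_in_area(frame):
        return False                  # outside the target area: skip detection
    target = detect_target(frame)
    if target is None:
        return False                  # no target object in the real scene image
    if has_collision_risk(frame, target):
        warn()                        # early warning prompt through the AR device
        return True
    return False
```

The gating order mirrors the claim: detection only runs inside the target area, and the risk judgment only runs when a target object is found.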
  2. The navigation prompt method according to claim 1, wherein the determining, based on the real scene image, whether the AR device is located within the geographic location range corresponding to the target area comprises:
    determining, based on the real scene image and a preset three-dimensional scene map, first pose information of the AR device in a world coordinate system, wherein the three-dimensional scene map is a map representing the real scene to which the target area belongs; and
    determining, based on the first pose information and a geographic location range, in the world coordinate system, of each area associated with the three-dimensional scene map, whether the AR device is located within the geographic location range corresponding to the target area.
  3. The navigation prompt method according to claim 1 or 2, wherein the determining, in a case where a target object is present in the real scene image, whether there is a risk of collision between the AR device and the target object comprises:
    determining, based on the real scene image, relative pose information between the target object and the AR device; and
    determining, based on the relative pose information, whether there is a risk of collision between the AR device and the target object.
  4. The navigation prompt method according to claim 3, wherein the target object comprises a static obstacle, and the determining, based on the relative pose information, whether there is a risk of collision between the AR device and the target object comprises:
    determining, based on the relative pose information, whether a distance between the AR device and the static obstacle is less than a first preset distance;
    determining, in a case where the distance between the AR device and the static obstacle is less than the first preset distance, whether the AR device faces the static obstacle; and
    determining, in a case where the AR device faces the static obstacle, that there is a risk of collision between the AR device and the static obstacle.
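Outside the claim language, the static-obstacle test of claim 4 (distance below a first preset distance, plus a facing check) can be sketched as follows. The threshold values, the cosine-based facing criterion, and the function name are illustrative assumptions, not part of the claimed method.

```python
import numpy as np

def static_collision_risk(device_pos, device_forward, obstacle_pos,
                          min_distance=2.0, facing_cos=0.5):
    """Return True when the AR device is close to and facing a static obstacle.

    device_pos, obstacle_pos: 3D positions in the world coordinate system.
    device_forward: unit vector of the device's viewing direction.
    min_distance (metres) and facing_cos (cosine of the widest angle still
    counted as "facing") are placeholder thresholds, not values from the claims.
    """
    offset = np.asarray(obstacle_pos, float) - np.asarray(device_pos, float)
    distance = np.linalg.norm(offset)
    if distance >= min_distance:
        return False                     # farther than the first preset distance
    # The device "faces" the obstacle when the angle between its forward
    # direction and the offset vector is small enough.
    return bool(np.dot(device_forward, offset / distance) > facing_cos)
```

Both conditions of the claim must hold: proximity alone (for example, an obstacle directly behind the user) does not trigger a warning.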
  5. The navigation prompt method according to claim 3 or 4, wherein the target object comprises a dynamic obstacle, and the determining, based on the relative pose information, whether there is a risk of collision between the AR device and the target object comprises:
    determining, based on a plurality of real scene images, relative motion information between the AR device and the dynamic obstacle; and
    determining, based on the relative pose information and the relative motion information, whether there is a risk of collision between the AR device and the dynamic obstacle.
  6. The navigation prompt method according to claim 5, wherein the determining, based on the relative pose information and the relative motion information, whether there is a risk of collision between the AR device and the dynamic obstacle comprises:
    determining, based on the relative pose information, whether a distance between the AR device and the dynamic obstacle is less than a second preset distance;
    determining, in a case where the distance between the AR device and the dynamic obstacle is less than the second preset distance and based on the relative motion information, whether an included angle between a traveling direction of the dynamic obstacle relative to the AR device and a direction from the dynamic obstacle toward the AR device is less than a set angle threshold; and
    determining, in a case where the included angle is less than the set angle threshold, that there is a risk of collision between the AR device and the dynamic obstacle.
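The dynamic-obstacle test of claim 6 compares the obstacle's relative travel direction against the direction from the obstacle toward the device. A minimal sketch, with placeholder thresholds and a hypothetical velocity-vector representation of the relative motion information:

```python
import math
import numpy as np

def dynamic_collision_risk(device_pos, obstacle_pos, obstacle_velocity,
                           min_distance=5.0, max_angle_deg=30.0):
    """Claim-6 style check: risk exists when the dynamic obstacle is within
    the second preset distance AND its relative travel direction points
    roughly at the device. Thresholds are illustrative, not claimed values.
    """
    to_device = np.asarray(device_pos, float) - np.asarray(obstacle_pos, float)
    distance = np.linalg.norm(to_device)
    if distance >= min_distance:
        return False                     # farther than the second preset distance
    v = np.asarray(obstacle_velocity, float)   # motion relative to the device
    speed = np.linalg.norm(v)
    if speed < 1e-9:
        return False                     # not moving relative to the device
    # Included angle between the travel direction and the obstacle-to-device
    # direction; a small angle means the obstacle is heading at the user.
    cos_angle = np.clip(np.dot(v / speed, to_device / distance), -1.0, 1.0)
    return math.degrees(math.acos(cos_angle)) < max_angle_deg
```

An obstacle that is nearby but moving away yields a large included angle and therefore no warning, matching the claim's angle-threshold condition.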
  7. The navigation prompt method according to any one of claims 3 to 6, wherein the determining, based on the real scene image, relative pose information between the target object and the AR device comprises:
    determining, according to the real scene image, a detection area corresponding to the target object in the real scene image, and a preset three-dimensional scene map, second pose information of the target object in a world coordinate system, wherein the three-dimensional scene map is a map representing the real scene to which the target area belongs;
    positioning the AR device to obtain first pose information of the AR device in the world coordinate system; and
    determining, according to the first pose information and the second pose information, the relative pose information between the target object and the AR device.
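Once the first pose information (device in the world frame) and second pose information (object in the world frame) are available, one standard way to derive a relative pose is to compose the two homogeneous transforms. The 4x4 matrix representation below is an assumption for illustration; the claims do not prescribe a pose parameterization.

```python
import numpy as np

def make_pose(rotation, translation):
    """Build a 4x4 homogeneous pose from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def relative_pose(T_device_world, T_object_world):
    """Express the target object's pose in the AR device's frame:
    T_object_in_device = inv(T_device_in_world) @ T_object_in_world.
    Its translation column gives the object's position relative to the device.
    """
    return np.linalg.inv(T_device_world) @ T_object_world
```

The translation component of the result feeds directly into distance checks such as those in claims 4 and 6.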
  8. The navigation prompt method according to any one of claims 1 to 7, wherein the issuing an early warning prompt to the user through the AR device comprises:
    issuing an early warning prompt to the user through the AR device in at least one of a voice form, a text form, an animation form, a warning symbol form, or a flashing form.
  9. A navigation prompt apparatus, comprising:
    an image acquisition part, configured to acquire a real scene image captured by an augmented reality (AR) device;
    a first determination part, configured to determine, based on the real scene image, whether the AR device is located within a geographic location range corresponding to a target area;
    a target detection part, configured to detect, in a case where the AR device is located within the geographic location range corresponding to the target area, whether a target object is present in the real scene image;
    a second determination part, configured to determine, in a case where a target object is present in the real scene image, whether there is a risk of collision between the AR device and the target object; and
    an early warning prompt part, configured to issue, in a case where there is a risk of collision between the AR device and the target object, an early warning prompt to a user through the AR device.
  10. The apparatus according to claim 9, wherein the first determination part is further configured to: determine, based on the real scene image and a preset three-dimensional scene map, first pose information of the AR device in a world coordinate system, wherein the three-dimensional scene map is a map representing the real scene to which the target area belongs; and determine, based on the first pose information and a geographic location range, in the world coordinate system, of each area associated with the three-dimensional scene map, whether the AR device is located within the geographic location range corresponding to the target area.
  11. The apparatus according to claim 9 or 10, wherein the second determination part is further configured to: determine, based on the real scene image, relative pose information between the target object and the AR device; and determine, based on the relative pose information, whether there is a risk of collision between the AR device and the target object.
  12. The apparatus according to claim 11, wherein the target object comprises a static obstacle; and
    the second determination part is further configured to: determine, based on the relative pose information, whether a distance between the AR device and the static obstacle is less than a first preset distance; determine, in a case where the distance between the AR device and the static obstacle is less than the first preset distance, whether the AR device faces the static obstacle; and determine, in a case where the AR device faces the static obstacle, that there is a risk of collision between the AR device and the static obstacle.
  13. The apparatus according to claim 11 or 12, wherein the target object comprises a dynamic obstacle; and
    the second determination part is further configured to: determine, based on a plurality of real scene images, relative motion information between the AR device and the dynamic obstacle; and determine, based on the relative pose information and the relative motion information, whether there is a risk of collision between the AR device and the dynamic obstacle.
  14. The apparatus according to claim 13, wherein the second determination part is further configured to: determine, based on the relative pose information, whether a distance between the AR device and the dynamic obstacle is less than a second preset distance; determine, in a case where the distance between the AR device and the dynamic obstacle is less than the second preset distance and based on the relative motion information, whether an included angle between a traveling direction of the dynamic obstacle relative to the AR device and a direction from the dynamic obstacle toward the AR device is less than a set angle threshold; and determine, in a case where the included angle is less than the set angle threshold, that there is a risk of collision between the AR device and the dynamic obstacle.
  15. The apparatus according to any one of claims 11 to 14, wherein the second determination part is further configured to: determine, according to the real scene image, a detection area corresponding to the target object in the real scene image, and a preset three-dimensional scene map, second pose information of the target object in a world coordinate system, wherein the three-dimensional scene map is a map representing the real scene to which the target area belongs; position the AR device to obtain first pose information of the AR device in the world coordinate system; and determine, according to the first pose information and the second pose information, relative pose information between the target object and the AR device.
  16. The apparatus according to any one of claims 9 to 15, wherein the early warning prompt part is further configured to issue an early warning prompt to the user through the AR device in at least one of a voice form, a text form, an animation form, a warning symbol form, or a flashing form.
  17. An electronic device, comprising a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor communicates with the memory through the bus; and when the machine-readable instructions are executed by the processor, the navigation prompt method according to any one of claims 1 to 8 is performed.
  18. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when run by a processor, performs the navigation prompt method according to any one of claims 1 to 8.
  19. A computer program, comprising computer-readable code, wherein when the computer-readable code runs in an electronic device, a processor in the electronic device executes instructions for implementing the navigation prompt method according to any one of claims 1 to 8.
  20. A computer program product, comprising computer program instructions, wherein the computer program instructions cause a computer to perform the navigation prompt method according to any one of claims 1 to 8.
PCT/CN2021/106909 2021-02-09 2021-07-16 Navigation prompt method and apparatus, and electronic device, computer-readable storage medium, computer program and program product WO2022170736A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110176459.7 2021-02-09
CN202110176459.7A CN112861725A (en) 2021-02-09 2021-02-09 Navigation prompting method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
WO2022170736A1 true WO2022170736A1 (en) 2022-08-18

Family

ID=75989465

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/106909 WO2022170736A1 (en) 2021-02-09 2021-07-16 Navigation prompt method and apparatus, and electronic device, computer-readable storage medium, computer program and program product

Country Status (2)

Country Link
CN (1) CN112861725A (en)
WO (1) WO2022170736A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117152245A (en) * 2023-01-31 2023-12-01 荣耀终端有限公司 Pose calculation method and device

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112861725A (en) * 2021-02-09 2021-05-28 深圳市慧鲤科技有限公司 Navigation prompting method and device, electronic equipment and storage medium
CN115460320A (en) * 2021-06-09 2022-12-09 阿里巴巴新加坡控股有限公司 Navigation method, navigation device, computer storage medium and computer program product
CN115460539B (en) * 2022-06-30 2023-12-15 亮风台(上海)信息科技有限公司 Method, equipment, medium and program product for acquiring electronic fence
CN115543093B (en) * 2022-11-24 2023-03-31 浙江安吉吾知科技有限公司 Anti-collision system based on VR technology interaction entity movement

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120194554A1 (en) * 2011-01-28 2012-08-02 Akihiko Kaino Information processing device, alarm method, and program
CN110764614A (en) * 2019-10-15 2020-02-07 北京市商汤科技开发有限公司 Augmented reality data presentation method, device, equipment and storage medium
CN112180605A (en) * 2020-10-20 2021-01-05 江苏濠汉信息技术有限公司 Auxiliary driving system based on augmented reality
CN112861725A (en) * 2021-02-09 2021-05-28 深圳市慧鲤科技有限公司 Navigation prompting method and device, electronic equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112287928A (en) * 2020-10-20 2021-01-29 深圳市慧鲤科技有限公司 Prompting method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN112861725A (en) 2021-05-28

Similar Documents

Publication Publication Date Title
WO2022170736A1 (en) Navigation prompt method and apparatus, and electronic device, computer-readable storage medium, computer program and program product
US10712747B2 (en) Data processing method, apparatus and terminal
CN108571974B (en) Vehicle positioning using a camera
JP4344869B2 (en) Measuring device
EP2672232B1 (en) Method for Providing Navigation Information and Server
US9116011B2 (en) Three dimensional routing
CN110293965B (en) Parking method and control device, vehicle-mounted device and computer readable medium
EP3624002A2 (en) Training data generating method for image processing, image processing method, and devices thereof
KR20180050823A (en) Generating method and apparatus of 3d lane model
CN112287928A (en) Prompting method and device, electronic equipment and storage medium
WO2022041869A1 (en) Road condition prompt method and apparatus, and electronic device, storage medium and program product
US10304250B2 (en) Danger avoidance support program
JP2005268847A (en) Image generating apparatus, image generating method, and image generating program
CN112950790A (en) Route navigation method, device, electronic equipment and storage medium
KR102264219B1 (en) Method and system for providing mixed reality contents related to underground facilities
WO2023125363A1 (en) Automatic generation method and apparatus for electronic fence, and real-time detection method and apparatus
JP2007122247A (en) Automatic landmark information production method and system
CN113591518A (en) Image processing method, network training method and related equipment
US11373411B1 (en) Three-dimensional object estimation using two-dimensional annotations
CN112907757A (en) Navigation prompting method and device, electronic equipment and storage medium
US20210225082A1 (en) Boundary detection using vision-based feature mapping
CN112639822B (en) Data processing method and device
WO2023274270A1 (en) Robot preoperative navigation method and system, storage medium, and computer device
RU2681346C2 (en) Method and system of accurate localization of visually impaired or blind person
KR101566964B1 (en) Method of monitoring around view tracking moving object, attaratus performing the same and storage media storing the same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21925389

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 20/11/2023)