CN116400807A - Information prompting method and electronic equipment - Google Patents

Information prompting method and electronic equipment

Info

Publication number
CN116400807A
CN116400807A (application CN202310342050.7A)
Authority
CN
China
Prior art keywords
information
color
determining
color information
condition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310342050.7A
Other languages
Chinese (zh)
Inventor
孙文涛 (Sun Wentao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN202310342050.7A priority Critical patent/CN116400807A/en
Publication of CN116400807A publication Critical patent/CN116400807A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the application provides an information prompting method and an electronic device. The method includes the following steps: obtaining image information of a first object, where the first object is a real object in a real space; performing data processing on the image information to obtain first information corresponding to the first object, where the first information includes a first corresponding relation between first spatial information and first color information of the first object; obtaining second information corresponding to a second object, where the second object is a virtual object in a virtual space, and the second information includes a second corresponding relation between second spatial information and second color information of the second object; and if a collision risk is determined to exist based on the first corresponding relation and the second corresponding relation, outputting prompt information related to the collision risk.

Description

Information prompting method and electronic equipment
Technical Field
The application relates to an information prompting method and electronic equipment.
Background
Due to the characteristics of augmented reality (Augmented Reality, AR) devices, components such as the light shield and optical films limit the user's visual range, so the real environment cannot be observed very clearly. The user therefore has difficulty distinguishing objects in the real environment and can easily collide with them while moving, endangering personal safety. There is currently no effective solution to this problem.
Disclosure of Invention
In view of this, a main object of the present application is to provide an information prompting method and an electronic device.
In order to achieve the above purpose, the technical scheme of the application is realized as follows:
the embodiment of the application provides an information prompting method, which comprises the following steps:
obtaining image information of a first object, wherein the first object is a real object in a real space;
performing data processing on the image information to obtain first information corresponding to the first object; the first information comprises a first corresponding relation between first space information and first color information of the first object;
obtaining second information corresponding to a second object, wherein the second object is a virtual object in a virtual space; the second information comprises a second corresponding relation between second spatial information and second color information of the second object;
and if the collision risk is determined to exist based on the first corresponding relation and the second corresponding relation, outputting prompt information related to the collision risk.
In the above aspect, the method further includes determining whether there is a collision risk by:
determining a first color distance according to the first color information and the second color information;
determining a first position distance according to the first space information and third space information of a user under the condition that the first color distance meets a first condition;
and determining that the user has collision risk with the first object under the condition that the first position distance meets a second condition.
In the above aspect, the image information includes depth information, and the method further includes:
determining point cloud information corresponding to the depth information;
determining a third corresponding relation between the first object and preset space information based on the point cloud information; the preset spatial information comprises the first spatial information;
and determining first space information corresponding to the first object in the first information based on the third corresponding relation and the first object.
In the above aspect, the image information further includes color information, and the method further includes:
obtaining contour information related to the first object according to the image information;
first color information related to the first object in the first information is determined based on the contour information and the color information.
In the above scheme, the method further comprises:
determining at least one piece of color information related to the first object, and a color ratio corresponding to each piece of color information in the at least one piece of color information;
and when the color ratio satisfies a third condition, determining color information corresponding to the color ratio satisfying the third condition as the first color information, and determining the color ratio satisfying the third condition as the first color ratio corresponding to the first color information.
In the above scheme, the method further comprises:
determining a second color distance corresponding to any two pieces of color information in the at least one piece of color information;
determining the first color information based on two pieces of color information corresponding to the second color distance under the condition that the second color distance meets a fourth condition; and determining a first color ratio corresponding to the first color information based on the color ratios of the two color information corresponding to the second color distance.
In the above aspect, the first object includes at least two pieces of first color information; the determining a first color distance according to the first color information and the second color information includes:
and determining the first color distance based on first color information and the second color information corresponding to the first color ratio in the at least two pieces of first color information when the first color ratio meets a fifth condition.
In the above aspect, the first object includes at least two pieces of first color information; the second object includes at least two second color information; the determining a first color distance according to the first color information and the second color information includes:
and determining a first color distance based on the first color information corresponding to the first color proportion in the at least two pieces of first color information and the second color information corresponding to the second color proportion in the at least two pieces of second color information when the first color proportion meets the fifth condition and the second color proportion corresponding to the second color information meets the sixth condition.
In the above aspect, before the determining the first location distance according to the first spatial information and the third spatial information of the user, the method further includes:
determining a second location distance according to the first spatial information and the second spatial information;
and determining that the first object is at risk if the second location distance meets a seventh condition.
The embodiment of the application provides electronic equipment, which comprises an image acquisition module, an information display module and a processing module; wherein:
the image acquisition module is used for acquiring image information of a first object;
the information display module is used for presenting/displaying a second object;
the processing module is used for obtaining the image information of the first object, wherein the first object is a real object in a real space; performing data processing on the image information to obtain first information corresponding to the first object; the first information comprises a first corresponding relation between first space information and first color information of the first object; obtaining second information corresponding to the second object, wherein the second object is a virtual object in a virtual space; the second information comprises a second corresponding relation between second spatial information and second color information of the second object; and if the collision risk is determined to exist based on the first corresponding relation and the second corresponding relation, outputting prompt information related to the collision risk.
The embodiment of the application provides an information prompt device, which comprises:
the first obtaining module is used for obtaining image information of a first object, wherein the first object is a real object in a real space;
the processing module is used for carrying out data processing on the image information to obtain first information corresponding to the first object; the first information comprises a first corresponding relation between first space information and first color information of the first object;
the second obtaining module is used for obtaining second information corresponding to a second object, wherein the second object is a virtual object in the virtual space; the second information comprises a second corresponding relation between second spatial information and second color information of the second object;
and the output module is used for outputting prompt information related to the collision risk if the collision risk is determined to exist based on the first corresponding relation and the second corresponding relation.
The embodiment of the application provides information prompt equipment, which comprises a memory and a processor, wherein the memory stores a computer program capable of running on the processor, and the processor realizes the method of any one of the above when executing the program.
Embodiments of the present application provide a storage medium storing executable instructions that, when executed by a processor, implement a method as described in any one of the above.
The embodiment of the application provides an information prompting method and an electronic device. The method includes the following steps: obtaining image information of a first object, where the first object is a real object in a real space; performing data processing on the image information to obtain first information corresponding to the first object, where the first information includes a first corresponding relation between first spatial information and first color information of the first object; obtaining second information corresponding to a second object, where the second object is a virtual object in a virtual space, and the second information includes a second corresponding relation between second spatial information and second color information of the second object; and if a collision risk is determined to exist based on the first corresponding relation and the second corresponding relation, outputting prompt information related to the collision risk.
Drawings
Fig. 1 is a schematic diagram illustrating the interleaving of virtual objects in a virtual scene and objects in a real environment in the related art of the information prompting method according to the embodiment of the present application;
FIG. 2 is a schematic diagram of an implementation flow of an information prompting method according to an embodiment of the present application;
FIG. 3 is a schematic workflow diagram of an information prompting method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an AR glasses user scanning a real environment and recognizing an object in the real environment according to the information prompting method of the embodiment of the present application;
FIG. 5 is a schematic diagram of analyzing the relationship between the position and color of a real and virtual object according to the collision detection result in the information prompting method according to the embodiment of the present application;
FIG. 6 is a schematic diagram of risk reminding when an AR glasses user approaches a real object in the information prompting method according to an embodiment of the present application;
fig. 7 is a schematic diagram of a composition structure of an electronic device according to an embodiment of the present application;
FIG. 8 is a schematic diagram of the composition structure of an information prompt device according to an embodiment of the present application;
fig. 9 is a schematic diagram of a hardware entity structure of an information prompting device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application more clear, the specific technical solutions of the present application will be described in further detail below with reference to the accompanying drawings in the embodiments of the present application. The following examples are illustrative of the present application, but are not intended to limit the scope of the present application.
In the related art, AR smart glasses differ from Virtual Reality (VR) smart glasses in that the user can observe the surrounding real environment through the AR glasses; if there are objects around the user, the user can see them clearly and walk around them, avoiding the risk of collision. Therefore, an AR system generally does not set up a safety wall or an electronic fence, so that the user can move freely while wearing the AR glasses, and the virtual information presented by the AR glasses is fused with the real environment to form a combined virtual-real effect.
Fig. 1 is a schematic diagram of a virtual object in a virtual scene interleaving with an object in a real environment in the related art of the information prompting method of the embodiment of the present application. As shown in Fig. 1, in some AR scenes, especially when the virtual object itself is relatively large or complex, the user needs to wear the AR glasses and move through the real environment. If the virtual object is moving, the application scene may require the user to track it, or to move in order to operate it. When the virtual object is close to an object in the real environment, the user cannot distinguish the real object; for example, with the virtual object behind and the real object in front, a user of the AR glasses moving forward cannot actively distinguish the virtual object from the real object and may therefore collide with the real object.
In the related art, anti-collision solutions are mostly aimed at VR application scenarios, in which a safety boundary or an electronic fence is defined around the user in advance, and the user can only operate the VR device within that limited range. Anti-collision solutions applicable to AR devices add multiple sensors, such as an acoustic wave generator, an acoustic wave detector and an angle sensor, to the AR or VR device, and detect the environment with a radar-like working principle based on acoustic waves or laser and feed the result back to the user. These solutions suffer from high cost, increased device volume and increased power consumption, and their practical effect is poor.
In view of the shortcomings of the related art, the embodiments of the present application provide an information prompting method and an electronic device. Based on methods such as visual image processing and object recognition, when it is detected that the user approaches a real object whose distance and color are close to those of a virtual object, the user is prompted about the movement danger through voice, visual prompts and the like.
The embodiment of the application provides an information prompting method. The functions realized by the method can be implemented by a processor in an information prompting device calling program code, and the program code can be stored in a computer storage medium.
Fig. 2 is a schematic flow chart of an implementation of an information prompting method in an embodiment of the present application, as shown in fig. 2, where the method includes:
step 201: obtaining image information of a first object, wherein the first object is a real object in a real space;
step 202: performing data processing on the image information to obtain first information corresponding to the first object; the first information comprises a first corresponding relation between first space information and first color information of the first object;
step 203: obtaining second information corresponding to a second object, wherein the second object is a virtual object in a virtual space; the second information comprises a second corresponding relation between second spatial information and second color information of the second object;
step 204: and if the collision risk is determined to exist based on the first corresponding relation and the second corresponding relation, outputting prompt information related to the collision risk.
In step 201, the information prompting method may be determined according to practical situations, which is not limited herein. As an example, the information prompting method may include a method of preventing an AR smart glasses user from colliding with a real object.
The image information may be determined according to actual situations, and is not limited herein. As an example, the image information may be an environmental image including a real space of the first object.
The obtaining the image information of the first object may be that the surrounding environment in the real space including the first object is scanned to obtain image information including the first object.
In step 202, the first spatial information may be determined according to practical situations, which is not limited herein. As an example, the first spatial information may include information of a position, an orientation, and a size of the first object.
The performing data processing on the image information to obtain first information corresponding to the first object may be that object identification is performed on the image information to obtain the first spatial information corresponding to the first object in the first information, and image recognition is performed on the image information to obtain the first color information corresponding to the first object in the first information.
In step 203, the second spatial information may be determined according to practical situations, which is not limited herein. As an example, the second spatial information may include information of a position, an orientation, and a size of the second object.
The obtaining the second information corresponding to the second object may be determining, based on a preset code, second spatial information and second color information corresponding to the second object in the second information. The preset code may be determined according to practical situations, and is not limited herein. As an example, the preset code may include a generation code of the second object.
In step 204, the prompt information may be determined according to practical situations, which is not limited herein. As an example, the prompt information may include voice information, and may also include visual information. The voice information may include information prompting a first object at which a collision risk exists at a first distance in a first direction of a location of the user. The visual information may include highlight region information obtained by rendering a contour of the first object at risk of collision.
The embodiment of the application provides an information prompting method, which can output prompt information indicating a risk of collision with a real object when the color of the real object is detected to be close to that of a virtual object, thereby warning the user of the danger ahead.
In an alternative embodiment of the present application, the method further comprises determining whether there is a risk of collision by:
determining a first color distance according to the first color information and the second color information;
determining a first position distance according to the first space information and third space information of a user under the condition that the first color distance meets a first condition;
and determining that the user has a collision risk with the first object under the condition that the first position distance meets a second condition.
In this embodiment, the determining the first color distance according to the first color information and the second color information may be that the first color distance is obtained by performing an operation on the first color information and the second color information based on a first preset formula. The first preset formula may be determined according to practical situations, and is not limited herein. As an example, the first preset formula may include a color distance formula.
The first condition may be determined according to practical situations, and is not limited herein. As an example, the first condition may include the first color distance being less than a first color threshold. The first color threshold may be determined according to practical situations, and is not limited herein.
The determining the first location distance according to the first spatial information and the third spatial information of the user may be that the first location distance is obtained by calculating the first spatial information and the third spatial information based on a second preset formula. The second preset formula may be determined according to practical situations, and is not limited herein. As an example, the second preset formula may include a two-point inter-position distance formula.
The second condition may be determined according to practical situations, and is not limited herein. As one example, the second condition may include the first location distance being less than a first location threshold. The first position threshold may be determined according to practical situations, and is not limited herein.
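As an illustrative sketch only (not part of the disclosed embodiments), the two-step check described above can be written as follows, assuming a Euclidean color distance in RGB space for the "color distance formula", a Euclidean distance between two 3D points for the "position distance formula", and hypothetical threshold values; all function names and thresholds are assumptions:

```python
import math

def color_distance(c1, c2):
    # Euclidean distance in RGB space; one possible "color distance formula"
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

def position_distance(p1, p2):
    # Straight-line distance between two 3D points
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

def collision_risk(first_color, second_color, object_pos, user_pos,
                   color_threshold=50.0, position_threshold=1.0):
    # First condition: the real object's color is close to the virtual
    # object's, so the user may fail to distinguish them
    if color_distance(first_color, second_color) >= color_threshold:
        return False
    # Second condition: the user is within the position threshold of the
    # real object
    return position_distance(object_pos, user_pos) < position_threshold
```

In this sketch the first condition is modelled as the color distance being below `color_threshold`, and the second condition as the user's distance to the real object being below `position_threshold`; both threshold values are placeholders.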
In an optional embodiment of the application, the image information comprises depth information, the method further comprising:
determining point cloud information corresponding to the depth information;
determining a third corresponding relation between the first object and preset space information based on the point cloud information; the preset spatial information comprises the first spatial information;
and determining first space information corresponding to the first object in the first information based on the third corresponding relation and the first object.
In this embodiment, the depth information may be determined according to practical situations, which is not limited herein. As an example, the depth information may include a depth value of each pixel point in the image information. The determining the point cloud information corresponding to the depth information may be determining a spatial coordinate of each pixel point in the image information; and determining point cloud information corresponding to the depth information based on the space coordinates and the depth information.
The determining, based on the point cloud information, the third corresponding relationship between the first object and the preset spatial information may be that, based on the point cloud information, grid information corresponding to the image information is established, where the grid information includes the third corresponding relationship between the first object and the preset spatial information.
The third correspondence may be determined according to practical situations, and is not limited herein. As an example, the third correspondence may include that the first object has a correspondence with first spatial information in the preset spatial information, and that the first object does not have a correspondence with second spatial information other than the first spatial information in the preset spatial information.
The determining, based on the third correspondence and the first object, the first spatial information corresponding to the first object in the first information may be determining, based on a correspondence between the first object in the third correspondence and the first spatial information in the preset spatial information, the first spatial information corresponding to the first object in the first information.
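A minimal sketch of determining point cloud coordinates from depth information, assuming a pinhole camera model with intrinsic parameters fx, fy, cx and cy (the text does not specify a camera model, so this model and the function name are assumptions):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    # Back-project every pixel (u, v) with depth z to camera-space (x, y, z)
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    z = depth.astype(np.float64)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # shape (h, w, 3)
```

The resulting per-pixel 3D points can then be grouped (for example into a grid or mesh) to associate the first object with its spatial information, as described above.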
In an optional embodiment of the present application, the image information further includes color information, and the method further includes:
obtaining contour information related to the first object according to the image information;
first color information related to the first object in the first information is determined based on the contour information and the color information.
In this embodiment, the obtaining the profile information related to the first object according to the image information may be processing the image information by using a first preset manner to obtain the profile information related to the first object. The first preset manner may be determined according to actual situations, and is not limited herein. As an example, the first preset manner may include an edge detection method.
The determining, based on the profile information and the color information, the first color information related to the first object in the first information may be that color information corresponding to the profile information is processed by using a second preset manner, so as to obtain first color information related to the first object. The second preset manner may be determined according to practical situations, and is not limited herein. As an example, the second preset manner may include a color change detection method.
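The extraction of color information within an object contour can be sketched as below, assuming the contour has already been converted into a boolean pixel mask (for example by edge detection followed by region filling); the function name and data representation are hypothetical:

```python
import numpy as np

def colors_within_contour(image, mask):
    # image: (h, w, 3) array; mask: (h, w) boolean array marking pixels
    # that lie inside the object contour
    pixels = image[mask]                            # (n, 3) object pixels
    colors, counts = np.unique(pixels, axis=0, return_counts=True)
    ratios = counts / counts.sum()                  # pixel ratio per color
    order = np.argsort(-ratios)                     # most frequent first
    return colors[order], ratios[order]
```

This yields each distinct color inside the contour together with its pixel ratio, which the subsequent conditions operate on.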
In an alternative embodiment of the present application, the method further comprises:
determining at least one piece of color information related to the first object, and a color ratio corresponding to each piece of color information in the at least one piece of color information;
and when the color ratio satisfies a third condition, determining color information corresponding to the color ratio satisfying the third condition as the first color information, and determining the color ratio satisfying the third condition as the first color ratio corresponding to the first color information.
In this embodiment, the determining at least one color information related to the first object may be that color information corresponding to the profile information is processed by using the second preset manner to obtain at least one color information related to the first object.
The determining the color ratio corresponding to each color information in the at least one color information may be performing statistical analysis on the at least one color information to obtain the color ratio corresponding to each color information.
The third condition may be determined according to practical situations, and is not limited herein. As an example, the third condition may include the color ratio being greater than a first ratio threshold. The first proportional threshold may be determined according to practical situations, and is not limited herein. The first ratio threshold may be a fixed value, may be an average value of at least one of the color ratios, or may be a median value of at least one of the color ratios.
In some embodiments, at least one of the color ratios may be ranked from large to small, resulting in a ranking result; and determining the color proportion of the second sorting or the third sorting in the sorting result as the first proportion threshold value.
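One way to realize the third condition is to keep the colors whose pixel ratio exceeds a threshold; as noted above, the threshold may be a fixed value or a statistic such as the mean of the ratios. A hypothetical sketch, with the mean used as the default:

```python
def select_first_colors(colors, ratios, ratio_threshold=None):
    # Third condition: keep colors whose pixel ratio exceeds the threshold.
    # The threshold may be fixed or a statistic of the ratios; the mean is
    # used here as one of the options mentioned in the text.
    if ratio_threshold is None:
        ratio_threshold = sum(ratios) / len(ratios)
    return [(c, r) for c, r in zip(colors, ratios) if r > ratio_threshold]
```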
In an alternative embodiment of the present application, the method further comprises:
determining a second color distance corresponding to any two pieces of color information in the at least one piece of color information;
determining the first color information based on two pieces of color information corresponding to the second color distance under the condition that the second color distance meets a fourth condition; and determining a first color ratio corresponding to the first color information based on the color ratios of the two color information corresponding to the second color distance.
In this embodiment, the determining the second color distance corresponding to any two color information in the at least one color information may be that any two color information in the at least one color information is calculated based on the first preset formula, so as to obtain the second color distance.
The fourth condition may be determined according to practical situations, and is not limited herein. As an example, the fourth condition may include the second color distance being less than a second color threshold. The second color threshold may be determined according to practical situations, and is not limited herein.
The determining the first color information based on the two pieces of color information corresponding to the second color distance may be: comparing the color ratios of the two pieces of color information; when one piece has the larger color ratio, determining that piece as the first color information; and when the two color ratios are equal, determining either of the two pieces as the first color information.
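The merging of two close colors under the fourth condition might be sketched as follows. Keeping the color with the larger ratio follows the text; summing the two ratios into the merged first color ratio is an assumption about how the combined ratio is derived, and the threshold value is a placeholder:

```python
import math

def euclid(c1, c2):
    # Euclidean RGB distance, as an assumed color distance formula
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

def merge_similar_colors(entries, dist_fn=euclid, second_color_threshold=30.0):
    # entries: list of (color, ratio) pairs for one object.
    # Fourth condition: when two colors are closer than the threshold,
    # keep the color with the larger ratio and combine the two ratios.
    merged = list(entries)
    i = 0
    while i < len(merged):
        j = i + 1
        while j < len(merged):
            (c1, r1), (c2, r2) = merged[i], merged[j]
            if dist_fn(c1, c2) < second_color_threshold:
                merged[i] = (c1 if r1 >= r2 else c2, r1 + r2)
                del merged[j]
            else:
                j += 1
        i += 1
    return merged
```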
In an alternative embodiment of the present application, the first object comprises at least two first color information; the determining a first color distance according to the first color information and the second color information includes:
and determining the first color distance based on first color information and the second color information corresponding to the first color ratio in the at least two pieces of first color information when the first color ratio meets a fifth condition.
In this embodiment, the fifth condition may be determined according to practical situations, which is not limited herein. As one example, the fifth condition may include the first color ratio being greater than a second ratio threshold. The second ratio threshold may be determined according to practical situations, and is not limited herein. The second ratio threshold may be 25% or 15%.
The determining the first color distance based on the first color information corresponding to the first color ratio in the at least two pieces of first color information and the second color information may be: in the case that the first color ratio satisfies the fifth condition, determining the first color distance based on the first color information, among the at least two pieces of first color information, whose first color ratio satisfies the fifth condition, and the second color information.
As one example, the at least two pieces of first color information may be three pieces of first color information, and two of the first color ratios may satisfy the fifth condition; two first color distances are then determined based on the first color information corresponding to the first color ratios satisfying the fifth condition among the three pieces of first color information and the second color information, respectively.
In some embodiments, the first object comprises at least two first color information; the method further comprises the steps of: and determining that the user and the first object are not at collision risk under the condition that the first color distance corresponding to any one of the at least two pieces of first color information does not meet the first condition.
In an alternative embodiment of the present application, the first object comprises at least two first color information; the second object includes at least two second color information; the determining a first color distance according to the first color information and the second color information includes:
and determining a first color distance based on the first color information corresponding to the first color proportion in the at least two pieces of first color information and the second color information corresponding to the second color proportion in the at least two pieces of second color information when the first color proportion meets the fifth condition and the second color proportion corresponding to the second color information meets the sixth condition.
In this embodiment, the sixth condition may be determined according to practical situations, and is not limited herein. As one example, the sixth condition may include the second color ratio being greater than a third ratio threshold. The third ratio threshold may be determined according to practical situations, and is not limited herein. The third ratio threshold may be 25% or 15%.
In the case where the first color ratio satisfies the fifth condition and the second color ratio corresponding to the second color information satisfies the sixth condition, the determining the first color distance based on the first color information corresponding to the first color ratio among the at least two pieces of first color information and the second color information corresponding to the second color ratio among the at least two pieces of second color information may be: determining the first color distance based on the first color information, among the at least two pieces of first color information, whose first color ratio satisfies the fifth condition, and the second color information, among the at least two pieces of second color information, whose second color ratio satisfies the sixth condition.
As one example, the at least two pieces of first color information may be three pieces of first color information, and the at least two pieces of second color information may be three pieces of second color information; two of the first color ratios may satisfy the fifth condition and two of the second color ratios may satisfy the sixth condition. Four first color distances are then determined based on the first color information corresponding to the first color ratios satisfying the fifth condition among the three pieces of first color information and the second color information corresponding to the second color ratios satisfying the sixth condition among the three pieces of second color information, respectively.
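The cross-pairing in the example above can be sketched as follows (illustrative Python; the squared-RGB distance anticipates the Cd formula given later in this description, and the function name and the 25% ratio threshold are assumptions):

```python
def cross_pair_distances(first_colors, second_colors, ratio_threshold=0.25):
    """For every first color and every second color whose color ratio
    exceeds the threshold, compute a squared-RGB color distance.
    Two qualifying colors on each side yield four distances."""
    def sqdist(c1, c2):
        return sum((a - b) ** 2 for a, b in zip(c1, c2))

    firsts = [c for c, r in first_colors if r > ratio_threshold]
    seconds = [c for c, r in second_colors if r > ratio_threshold]
    return [sqdist(f, s) for f in firsts for s in seconds]
```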
In some embodiments, the second object includes at least one color information; the method further comprises the steps of: determining a third color distance corresponding to any two pieces of color information in at least one piece of color information of the second object; determining the second color information based on two pieces of color information corresponding to the third color distance when the third color distance satisfies an eighth condition; and determining a second color ratio corresponding to the second color information based on the color ratios of the two color information corresponding to the third color distance.
The step of determining the second color information and the second color ratio of the second object in this embodiment may refer to the description of determining the first color information and the first color ratio of the first object in the foregoing embodiments, and is not repeated herein.
In some embodiments, the first object comprises at least two first color information; the second object includes at least two second color information; the method further comprises the steps of: and determining that the collision risk between the user and the first object does not exist under the condition that the first color distance corresponding to any one of the at least two first color information and any one of the at least two second color information does not meet the first condition.
In an optional embodiment of the application, before the determining the first location distance according to the first spatial information and the third spatial information of the user, the method further comprises:
determining a second location distance according to the first spatial information and the second spatial information;
and determining that the first object is at risk if the second location distance meets a seventh condition.
In this embodiment, the determining the second location distance according to the first spatial information and the second spatial information may be that the second location distance is obtained by performing an operation on the first spatial information and the obtained second spatial information based on the second preset formula.
The seventh condition may be determined according to practical situations, and is not limited herein. As one example, the seventh condition may include the second location distance being less than a second position threshold. The second position threshold may be determined according to practical situations, and is not limited herein. The second position threshold may be 0.3 m.
The embodiment of the application relies on the sensors already carried by the AR glasses device, so no new sensor needs to be added; the scheme is concise and efficient and avoids additional cost and power consumption. The characteristics of the AR glasses are fully exploited: under most conditions the user is unaware of the anti-collision system, since the user can consciously observe the surrounding environment through the AR glasses and avoid risks, and the risk prompt is triggered only in the specific case where the colors are close, the virtual object and the real object are close to each other, and the user is close to the real object. User safety is thus protected while, under most conditions, the device remains silent, so the immersion of the AR experience is not noticeably impaired.
To facilitate understanding of the embodiments of the present application, a method for preventing an AR smart glasses user from colliding with a real object is described below as an example.
Fig. 3 is a schematic workflow diagram of an information prompting method according to an embodiment of the present application. As shown in Fig. 3, the method mainly includes scanning the environment and performing data preprocessing, identifying potential risk points in the system, and judging risks and giving prompts.
The first part, scanning the environment and performing data preprocessing: scan the real environment, identify objects in the real environment, and extract the grids and color information of the real objects.
Step (1): the AR glasses user scans the surrounding real environment using the AR glasses.
Step (2): a depth camera mounted on the AR glasses, such as a structured-light three-dimensional camera, a Time-of-Flight (TOF) camera, or an RGBD camera, collects depth information of objects in the real environment. The depth information identifies the three-dimensional coordinate information of the pixel points corresponding to the RGB image. In general, the depth camera generates an RGB image of the shooting scene and synchronously generates a depth map, which records the depth value of each pixel point. According to the principle of pinhole camera imaging, combined with the intrinsic and extrinsic parameters of the camera, the three-dimensional space coordinates corresponding to each pixel point in the depth map can be calculated. After the three-dimensional space coordinates of the pixel points are obtained and combined with the RGB image synchronously collected by the depth camera, each pixel point contains both coordinate information in three-dimensional space and color information, so the point cloud corresponding to the depth map can be constructed. Grid information (Mesh) is generated after the point cloud is built; the grid information includes the real objects and the correspondence with the position information of the real objects.
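The back-projection from a depth-map pixel to a three-dimensional camera-space coordinate can be sketched as follows (a minimal pinhole-model illustration; extrinsic parameters and lens distortion are omitted, and the intrinsic parameter values used below are hypothetical):

```python
def pixel_to_camera_xyz(u, v, depth, fx, fy, cx, cy):
    """Back-project depth-map pixel (u, v) with depth value `depth`
    into camera-space coordinates using the pinhole camera model,
    where (fx, fy) are focal lengths and (cx, cy) the principal point."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

A pixel at the principal point maps to a point straight ahead of the camera at the measured depth.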
Step (3): an RGB color camera mounted on the AR glasses performs image segmentation on the obtained image. The segmentation mainly comprises obtaining the outer contour of an object according to an edge detection method, and obtaining the internal color of the object according to the color change of the object, so as to obtain the real object in the image. The real object is then analyzed to obtain the colors of the segmented object image's texture. In implementation, the analysis can obtain the RGB color information of the whole object by counting the RGB values of the image pixels.
Step (3.1): if the segmented object image is of a single color, establish a mapping from the real object grid to the color, i.e., the correspondence between the spatial information of the real object and its color, forming (Pi, Ci), where Pi represents the spatial information of the object with sequence number i, including position, orientation, size, and the like, and Ci represents the color of the object with sequence number i, expressed as an RGB value.
Step (3.2): if the segmented object image is composed of multiple colors, set a threshold, select the 2–3 colors with the largest distribution of color values, and establish the real object grid-to-color mapping, i.e., (Pi, Di, Ei, Fi), where Pi represents the spatial information of the object with sequence number i; Di represents the first color and first proportion corresponding to the object with sequence number i; Ei represents the second color and second proportion; and Fi represents the third color and third proportion. If only two colors exist in the object image, Fi = 0; if more than three colors exist, the three colors with the largest distribution proportions are taken. When computing the proportions, if the difference between two different colors is small, the two colors are merged: in implementation, the color with the larger proportion is taken as the merged color and the statistical proportions of the two colors are added; if the two colors have equal proportions, either one is taken as the merged color.
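The selection of the 2–3 dominant colors with the Fi = 0 padding can be sketched as follows (illustrative Python over a flat list of RGB pixel tuples; the merging of near-identical colors described above is assumed to have been applied beforehand, and the function name is an assumption):

```python
from collections import Counter

def dominant_colors(pixels, max_colors=3):
    """Return up to `max_colors` (color, ratio) pairs, largest first,
    padding with (None, 0.0) to mirror the Fi = 0 case of the text."""
    counts = Counter(pixels)
    total = len(pixels)
    result = [(color, count / total)
              for color, count in counts.most_common(max_colors)]
    while len(result) < max_colors:
        result.append((None, 0.0))  # fewer than max_colors distinct colors
    return result
```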
Step (3.3): when judging whether colors are close, the squared differences of the RGB components can be computed. The color distance is defined as: Cd = (r1 − r2) × (r1 − r2) + (g1 − g2) × (g1 − g2) + (b1 − b2) × (b1 − b2), where r1 represents the r value of the first color; r2 represents the r value of the second color; g1 represents the g value of the first color; g2 represents the g value of the second color; b1 represents the b value of the first color; and b2 represents the b value of the second color. Finally, the color distance between the two color values is compared with a set threshold; when the color distance is smaller than the threshold, the colors are judged to be close.
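The color distance Cd of step (3.3) translates directly into code (the threshold value used in the test is only a placeholder; the application leaves the actual threshold to practical situations):

```python
def color_distance(c1, c2):
    """Squared Euclidean RGB distance, as in the Cd formula of step (3.3)."""
    r1, g1, b1 = c1
    r2, g2, b2 = c2
    return (r1 - r2) ** 2 + (g1 - g2) ** 2 + (b1 - b2) ** 2

def colors_close(c1, c2, threshold):
    """Two colors are judged close when Cd is below the set threshold."""
    return color_distance(c1, c2) < threshold
```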
Fig. 4 is a schematic diagram of an AR glasses user scanning a real environment and identifying objects in the real environment, as shown in fig. 4, in this embodiment, the user identifies three objects of red, green and blue after scanning the environment, and obtains grid space information corresponding to the real objects.
The second part, identifying potential risk points in the system: perform collision detection along the direction the AR glasses user faces. If both a real object grid and a virtual object exist in the collision detection result, judge the distance and color information between the real object grid and the virtual object, and record the position of the real object grid if they are close in distance and similar in color. Fig. 5 is a schematic diagram of analyzing the position and color relationship between real and virtual objects according to the collision detection result in the information prompting method according to the embodiment of the present application.
Step (1): in the virtual scene, with the AR glasses or the user's head as the starting point, rays (Raycast) are emitted along the direction the AR glasses face to detect collisions with objects in the scene.
Step (2): compare the colors of the virtual object and the object image. If the virtual object is monochrome, the following cases exist:
and (2.1) if the segmented object image is single-color, comparing the difference between the color of the virtual object and the single color of the object image, calculating the color distance between the single color of the object image and the color of the virtual object, comparing the color distance with a set color threshold value, and if the color distance is larger than the color threshold value, indicating that the difference between the color of the virtual object and the single color of the object image is larger, wherein a user can observe different colors of the virtual object and the real object through the AR glasses, and when the user approaches the virtual object, the risk of collision between the user and the real object is smaller. If the difference between the color of the virtual object and the single color of the object image is small, the virtual object is hard to distinguish from the real object by the user, and the risk is high.
Step (2.2): if the segmented object image is multicolored, compare the difference between the virtual object color and the real object colors (Di, Ei, Fi). Judge whether the proportion of each color in (Di, Ei, Fi) is greater than a proportion threshold, which may be set to 25%. Calculate the color distance between each color in (Di, Ei, Fi) whose proportion exceeds the threshold and the virtual object color, and compare the color distance with the color threshold. If the color distance is greater than the color threshold, the color difference between the virtual object and each color in (Di, Ei, Fi) exceeding the 25% proportion threshold is large, so the user can observe the different colors of the virtual object and the real object through the AR glasses, and the risk is small. If the color difference between the color of the virtual object and a color in (Di, Ei, Fi) exceeding the 25% proportion threshold is small, the user has difficulty distinguishing the virtual object from the real object, the risk is high, and the position of the real object is judged to be a high risk point.
Step (3): if the virtual object is multicolored, record the virtual object colors as (Vi, Wi, Ui) after merging similar colors, and compare the virtual object colors (Vi, Wi, Ui) with the real object colors (Di, Ei, Fi). Judge whether the proportion of each color in (Vi, Wi, Ui) is greater than a proportion threshold, which may be set to 15%, and likewise whether the proportion of each color in (Di, Ei, Fi) is greater than the proportion threshold. For any pair formed by a color in (Vi, Wi, Ui) exceeding the threshold and a color in (Di, Ei, Fi) exceeding the threshold, calculate the color distance and compare it with the color threshold. If every such color distance is greater than the color threshold, the color difference between the virtual object and the real object is large, the user can distinguish them through the AR glasses, and the risk is small. Only when the virtual object colors (Vi, Wi, Ui) and the real object colors (Di, Ei, Fi) are relatively close is the position of the real object judged to be a high risk point.
Step (4): if the collision detection finds that the grid model of a real object and a virtual object exist at the same time and their colors are relatively close, judge the distance between the grid model of the real object and the virtual object. If the distance is smaller than a set threshold, which in a typical embodiment may be set to 0.3 meter, determine that the position of the real object poses a safety risk, and record the risk point corresponding to the real object in the system. If the distance between the grid model of the real object and the virtual object is large, the edges of the real object's grid model remain unoccluded and the user can observe the surrounding environment through the AR glasses, so no risk point needs to be recorded.
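The combined decision of step (4) — colors close and distance below the threshold — can be sketched as follows (illustrative Python; the 0.3 m distance threshold comes from the text, while the color threshold value and function name are placeholders):

```python
import math

def is_risk_point(real_color, virtual_color, real_pos, virtual_pos,
                  color_threshold=2500, distance_threshold=0.3):
    """A real object is recorded as a risk point only when its color is
    close to the virtual object's AND the two are within the distance
    threshold; either condition alone does not trigger a record."""
    cd = sum((a - b) ** 2 for a, b in zip(real_color, virtual_color))
    close_color = cd < color_threshold
    close_distance = math.dist(real_pos, virtual_pos) < distance_threshold
    return close_color and close_distance
```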
Step (5): the risk points may be recorded in the form (Ri, Qi), where Ri represents the coordinates of the i-th risk point, which may be set to the centroid coordinates of the real object grid, and Qi represents the principal color value of the real object determined above.
The third part, judging risks and giving prompts: when the user moves within a certain range of the real object, the user is reminded, by voice, visual prompt, or other means, to pay attention to the danger ahead. Fig. 6 is a schematic diagram of a risk prompting method for an AR glasses user approaching a real object according to an embodiment of the present application.
Step (1): during the movement of the AR glasses, calculate the distance between the current position and each risk point. When the distance is less than the threshold, an audible cue is given and a visual feedback cue is synchronized.
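The per-frame check of step (1) can be sketched against the (Ri, Qi) risk-point records as follows (illustrative Python; the 1.0 m prompt distance is a hypothetical threshold, as the text does not fix one here):

```python
import math

def risk_prompt_needed(user_pos, risk_points, threshold=1.0):
    """Return the first (Ri, Qi) risk-point record within `threshold`
    of the user's current position, or None if the user is clear."""
    for point, color in risk_points:
        if math.dist(user_pos, point) < threshold:
            return point, color  # trigger audible + visual prompt
    return None
```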
Step (2): the visual prompt may be a contour line outlined from the real object grid. In particular implementations, as shown in Fig. 6, part of a highlight region may be rendered on the sides of the grid by a shader.
Step (3): when the real object changes, including its spatial position or color, the grid and color information are re-established after rescanning and re-identification, so that the collision detection result and the judgment of risk points can be updated in real time while the user moves. Specifically, in the case that the spatial position of the real object changes, whether the distance between the real object and the virtual object is smaller than the threshold is judged again; in the case that the color of the real object changes, whether the colors of the real object and the virtual object are close is judged again. When the user's orientation changes, the collision detection and risk point acquisition processes are updated, and the distance between the user's real-time position and the risk points is then judged under the updated data.
Fig. 7 is a schematic diagram of the composition structure of the electronic device according to the embodiment of the present application. As shown in Fig. 7, the electronic device 700 includes an image acquisition module 701, an information display module 702, and a processing module 703; wherein:
The image acquisition module 701 is configured to acquire image information of a first object;
the information display module 702 is configured to present/display a second object;
the processing module 703 is configured to obtain image information of the first object, where the first object is a real object in real space; performing data processing on the image information to obtain first information corresponding to the first object; the first information comprises a first corresponding relation between first space information and first color information of the first object; obtaining second information corresponding to the second object, wherein the second object is a virtual object in a virtual space; the second information comprises a second corresponding relation between second spatial information and second color information of the second object; and if the collision risk is determined to exist based on the first corresponding relation and the second corresponding relation, outputting prompt information related to the collision risk.
In this embodiment, the image acquisition module 701 may be determined according to practical situations, which is not limited herein. As an example, the image acquisition module may be a real object information acquisition module, configured to, after scanning an environment through a sensor carried by AR glasses, establish a mesh model of a real object, segment an image frame captured by a camera, analyze colors on the image frame, and establish a mapping between a spatial attribute and a color attribute of the real object.
The information display module 702 may be determined according to practical situations, and is not limited herein. As an example, the information display module may be a module for presenting a second object in the virtual space to be displayed or presented in a real environment.
The processing module 703 may be determined according to practical situations, and is not limited herein. As an example, the processing module may establish the risk point dataset by comparing the color and spatial distance of the virtual object to the real object, and determine whether the distance of the current user to the risk point is less than a threshold.
The electronic device 700 further comprises a feedback module 704, configured to give audio and visual feedback in the case that the distance from the current user to the risk point is less than the threshold.
An embodiment of the present application provides an information presentation device, fig. 8 is a schematic diagram of a composition structure of the information presentation device of the embodiment of the present application, as shown in fig. 8, and the device 800 includes:
a first obtaining module 801, configured to obtain image information of a first object, where the first object is a real object in a real space;
a processing module 802, configured to perform data processing on the image information to obtain first information corresponding to the first object; the first information comprises a first corresponding relation between first space information and first color information of the first object;
A second obtaining module 803, configured to obtain second information corresponding to a second object, where the second object is a virtual object in a virtual space; the second information comprises a second corresponding relation between second spatial information and second color information of the second object;
and the output module 804 is configured to output prompt information related to the collision risk if it is determined that the collision risk exists based on the first correspondence and the second correspondence.
In other embodiments, the apparatus 800 further comprises a first determining module, a second determining module, and a third determining module; wherein:
the first determining module is used for determining a first color distance according to the first color information and the second color information;
the second determining module is configured to determine, in the case that the first color distance meets a first condition, a first location distance according to the first spatial information and third spatial information of a user;
the third determining module is configured to determine that the user has a collision risk with the first object if the first location distance meets a second condition.
In other embodiments, the image information includes depth information, and the processing module 802 is further configured to determine point cloud information corresponding to the depth information; determining a third corresponding relation between the first object and preset space information based on the point cloud information; the preset spatial information comprises the first spatial information; and determining first space information corresponding to the first object in the first information based on the third corresponding relation and the first object.
In other embodiments, the image information further includes color information, and the processing module 802 is further configured to obtain profile information related to the first object according to the image information; first color information related to the first object in the first information is determined based on the contour information and the color information.
In other embodiments, the apparatus 800 further comprises a fourth determining module and a fifth determining module; wherein:
the fourth determining module is configured to determine at least one color information related to the first object, and a color ratio corresponding to each color information in the at least one color information;
the fifth determining module is configured to determine, in the case that the color ratio satisfies a third condition, the color information corresponding to the color ratio satisfying the third condition as the first color information, and to determine the color ratio satisfying the third condition as the first color ratio corresponding to the first color information.
In other embodiments, the apparatus 800 further comprises a sixth determining module and a seventh determining module; wherein:
the sixth determining module is configured to determine a second color distance corresponding to any two color information in the at least one color information;
The seventh determining module is configured to determine, when the second color distance meets a fourth condition, the first color information based on two color information corresponding to the second color distance; and determining a first color ratio corresponding to the first color information based on the color ratios of the two color information corresponding to the second color distance.
In other embodiments, the first object includes at least two first color information; the first determining module is further configured to determine, based on first color information and the second color information corresponding to a first color ratio of the at least two first color information, the first color distance if the first color ratio satisfies a fifth condition.
In other embodiments, the first object includes at least two first color information; the second object includes at least two second color information; the first determining module is further configured to determine, when the first color ratio meets a fifth condition and the second color ratio corresponding to the second color information meets a sixth condition, a first color distance based on first color information corresponding to the first color ratio of the at least two pieces of first color information and second color information corresponding to the second color ratio of the at least two pieces of second color information.
In other embodiments, before the determining the first location distance according to the first spatial information and the third spatial information of the user, the apparatus 800 further comprises an eighth determining module and a ninth determining module; wherein:
the eighth determining module is configured to determine a second location distance according to the first spatial information and the second spatial information;
the ninth determining module is configured to determine that the first object is at risk if the second location distance meets a seventh condition.
The description of the apparatus embodiments above is similar to that of the method embodiments above, with similar advantageous effects as the method embodiments. For technical details not disclosed in the device embodiments of the present application, please refer to the description of the method embodiments of the present application for understanding.
It should be noted that, in the embodiment of the present application, if the information prompting method is implemented in the form of a software functional module and is sold or used as a separate product, it may also be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be embodied essentially in the form of a software product stored in a storage medium, including instructions for causing an information prompting device (which may include a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a magnetic disk, an optical disk, or other media capable of storing program code. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.
Correspondingly, the embodiment of the application provides information prompt equipment, which comprises a memory and a processor, wherein the memory stores a computer program capable of running on the processor, and the processor realizes the method of any one of the above when executing the program.
Correspondingly, the embodiment of the application provides a storage medium, wherein the storage medium stores executable instructions, and when the executable instructions are executed by a processor, the method of any one of the above is realized.
It should be noted here that: the description of the storage medium and apparatus embodiments above is similar to that of the method embodiments described above, with similar benefits as the method embodiments. For technical details not disclosed in the embodiments of the storage medium and the apparatus of the present application, please refer to the description of the method embodiments of the present application for understanding.
It should be noted that Fig. 9 is a schematic diagram of the hardware entity structure of an information prompting device according to an embodiment of the present application. As shown in Fig. 9, the hardware entity of the information prompting device 900 includes a processor 901 and a memory 903; optionally, the information prompting device 900 may further include a communication interface 902.
It is to be appreciated that the memory 903 can be volatile memory, non-volatile memory, or both. The non-volatile memory may be Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), ferromagnetic random access memory (FRAM), Flash Memory, magnetic surface memory, an optical disk, or Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be disk memory or tape memory. The volatile memory may be Random Access Memory (RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM). The memory 903 described in the embodiments of the present application is intended to include, without being limited to, these and any other suitable types of memory.
The method disclosed in the embodiments of the present application may be applied to, or implemented by, the processor 901. The processor 901 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 901 or by instructions in the form of software. The processor 901 may be a general-purpose processor, a Digital Signal Processor (DSP), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The processor 901 may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor or any conventional processor. The steps of the method disclosed in the embodiments of the present application may be directly embodied as being completed by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium, and the storage medium is located in the memory 903; the processor 901 reads the information in the memory 903 and completes the steps of the foregoing method in combination with its hardware.
In an exemplary embodiment, the information prompting device may be implemented by one or more Application-Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field-Programmable Gate Arrays (FPGAs), general-purpose processors, controllers, Micro Controller Units (MCUs), microprocessors, or other electronic components, for performing the aforementioned methods.
In the several embodiments provided in the present application, it should be understood that the disclosed methods and devices may be implemented in other manners. The device embodiment described above is merely illustrative; for example, the division of the units is merely a division by logical function, and there may be other division manners in actual implementation, for example: multiple units or components may be combined or may be integrated into another system, or some features may be omitted or not performed. In addition, the coupling or communication connections between the components illustrated or discussed may be indirect coupling or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of this embodiment.
Those of ordinary skill in the art will appreciate that all or part of the steps for implementing the above method embodiments may be completed by program instructions instructing the relevant hardware. The foregoing program may be stored in a computer-readable storage medium, and when executed, performs the steps of the above method embodiments. The aforementioned storage medium includes various media capable of storing program code, such as a removable storage device, a Read-Only Memory (ROM), a magnetic disk, or an optical disk.
Alternatively, the integrated units described in the embodiments of the present application may be stored in a computer-readable storage medium if implemented in the form of software functional units and sold or used as standalone products. Based on such understanding, the technical solutions of the embodiments of the present application may be embodied, in essence or in the part contributing to the related art, in the form of a software product. The software product is stored in a storage medium and includes instructions for causing an information prompting device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a removable storage device, a ROM, a magnetic disk, or an optical disk.
The information prompting method, device, and computer storage medium described in the examples above are merely illustrations of the embodiments of the present application and are not limiting; any information prompting method, device, and computer storage medium consistent with these embodiments falls within the protection scope of the present application.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present application, the sequence numbers of the foregoing processes do not imply an order of execution; the order of execution of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application. The foregoing embodiment numbers of the present application are merely for description and do not represent the relative merits of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or device that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or device that comprises the element.
The foregoing is merely an implementation of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art could readily conceive of changes or substitutions within the technical scope disclosed in the present application, and such changes and substitutions are intended to be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. An information prompting method, the method comprising:
obtaining image information of a first object, wherein the first object is a real object in a real space;
performing data processing on the image information to obtain first information corresponding to the first object; the first information comprises a first corresponding relation between first space information and first color information of the first object;
obtaining second information corresponding to a second object, wherein the second object is a virtual object in a virtual space; the second information comprises a second corresponding relation between second spatial information and second color information of the second object;
and if the collision risk is determined to exist based on the first corresponding relation and the second corresponding relation, outputting prompt information related to the collision risk.
2. The method of claim 1, further comprising determining whether there is a risk of collision by:
determining a first color distance according to the first color information and the second color information;
determining a first position distance according to the first space information and third space information of a user under the condition that the first color distance meets a first condition;
and determining that the user has collision risk with the first object under the condition that the first position distance meets a second condition.
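For illustration only, the two-stage check recited in claim 2 could be sketched as follows. The sketch assumes Euclidean distances in RGB space and in 3D space, and hypothetical numeric thresholds standing in for the claim's unspecified "first condition" and "second condition":

```python
import math

# Hypothetical thresholds; the claims do not specify numeric values.
COLOR_DIST_MAX = 60.0    # "first condition": colors this close are "similar"
POSITION_DIST_MAX = 0.5  # "second condition": user this close (metres) is "near"

def color_distance(c1, c2):
    """Euclidean distance between two RGB triples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

def position_distance(p1, p2):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

def collision_risk(first_color, second_color, first_pos, user_pos):
    """Claim-2-style check: the real and virtual colors must be similar
    AND the user must be close to the real object for a risk to be reported."""
    if color_distance(first_color, second_color) > COLOR_DIST_MAX:
        return False  # object is visually distinct from the virtual object
    return position_distance(first_pos, user_pos) < POSITION_DIST_MAX
```

The intent modelled here is that a real object whose color closely matches the virtual object may go unnoticed by the user, so proximity to such an object warrants a prompt.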
3. The method of claim 1 or 2, wherein the image information comprises depth information, the method further comprising:
determining point cloud information corresponding to the depth information;
determining a third corresponding relation between the first object and preset space information based on the point cloud information; the preset spatial information comprises the first spatial information;
and determining first space information corresponding to the first object in the first information based on the third corresponding relation and the first object.
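As a non-authoritative sketch of the depth-to-point-cloud step in claim 3, assuming a pinhole camera model with hypothetical intrinsics `fx, fy, cx, cy` (the claims do not prescribe a camera model):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (in metres) to camera-frame 3D points.
    depth is an (H, W) array; the result is an (H, W, 3) array of points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx  # standard pinhole back-projection
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)
```

The resulting point cloud could then be matched against preset spatial regions to obtain the third correspondence recited in the claim.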
4. The method of claim 2, wherein the image information further comprises color information, the method further comprising:
obtaining contour information related to the first object according to the image information;
and determining first color information related to the first object in the first information based on the contour information and the color information.
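For illustration, the contour-based color extraction of claim 4 might look like the following, assuming the contour has already been converted into a boolean pixel mask (e.g., by a library such as OpenCV, which the claims do not prescribe) and that the mean RGB value is an adequate summary color:

```python
import numpy as np

def mean_color_in_contour(image, mask):
    """Average RGB inside the object's contour.

    image: (H, W, 3) array of RGB values.
    mask:  (H, W) boolean array, True for pixels inside the contour.
    """
    pixels = image[mask]          # select only the object's pixels
    return tuple(pixels.mean(axis=0))
```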
5. The method of claim 4, the method further comprising:
determining at least one piece of color information related to the first object, and a color ratio corresponding to each piece of color information in the at least one piece of color information;
and when the color ratio satisfies a third condition, determining color information corresponding to the color ratio satisfying the third condition as the first color information, and determining the color ratio satisfying the third condition as the first color ratio corresponding to the first color information.
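A minimal sketch of the color-ratio filter in claim 5, assuming quantized RGB tuples and a hypothetical 20% area threshold as the "third condition":

```python
from collections import Counter

# Hypothetical "third condition": colors covering at least 20% of the
# object's pixels are kept as first color information.
RATIO_MIN = 0.2

def dominant_colors(pixel_colors):
    """Return [(color, ratio), ...] for colors whose area ratio passes
    the threshold. pixel_colors is a flat list of quantized RGB tuples."""
    total = len(pixel_colors)
    counts = Counter(pixel_colors)
    return [(c, n / total) for c, n in counts.items() if n / total >= RATIO_MIN]
```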
6. The method of claim 5, the method further comprising:
determining a second color distance corresponding to any two pieces of color information in the at least one piece of color information;
determining the first color information based on two pieces of color information corresponding to the second color distance under the condition that the second color distance meets a fourth condition; and determining a first color ratio corresponding to the first color information based on the color ratios of the two color information corresponding to the second color distance.
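The merging step of claim 6 could be sketched as a greedy pass that averages any two colors closer than a hypothetical "fourth condition" threshold and sums their ratios:

```python
import math

# Hypothetical "fourth condition": colors closer than this are treated as one.
MERGE_DIST = 30.0

def merge_similar(color_ratios):
    """Greedy single pass over [(color, ratio), ...]: if a color is within
    MERGE_DIST of an already-kept color, replace the pair with their
    channel-wise average and sum the two ratios."""
    merged = []
    for color, ratio in color_ratios:
        for i, (mc, mr) in enumerate(merged):
            if math.dist(color, mc) < MERGE_DIST:
                new_c = tuple((a + b) / 2 for a, b in zip(color, mc))
                merged[i] = (new_c, mr + ratio)
                break
        else:
            merged.append((color, ratio))
    return merged
```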
7. The method of claim 5, wherein the first object comprises at least two pieces of first color information, and the determining a first color distance according to the first color information and the second color information comprises:
determining the first color distance based on the second color information and the first color information, among the at least two pieces of first color information, whose corresponding first color ratio meets a fifth condition.
8. The method of claim 5, wherein the first object comprises at least two pieces of first color information and the second object comprises at least two pieces of second color information, and the determining a first color distance according to the first color information and the second color information comprises:
determining the first color distance based on the first color information, among the at least two pieces of first color information, whose first color ratio meets the fifth condition, and the second color information, among the at least two pieces of second color information, whose corresponding second color ratio meets a sixth condition.
9. The method of claim 2, wherein prior to the determining a first position distance according to the first spatial information and third spatial information of a user, the method further comprises:
determining a second position distance according to the first spatial information and the second spatial information;
and determining that the first object presents a collision risk if the second position distance meets a seventh condition.
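As an illustrative reading of claim 9, the second position distance acts as a cheap pre-filter: only real objects close enough to the virtual object proceed to the more expensive color and user-distance checks. A sketch, with a hypothetical threshold for the "seventh condition":

```python
import math

# Hypothetical "seventh condition": only real objects within this distance
# (metres) of the virtual object are considered further.
SCREEN_DIST = 1.5

def candidates(real_objects, virtual_pos):
    """Pre-filter real objects by their distance to the virtual object.
    real_objects is a list of dicts with a "pos" entry (3D tuple)."""
    return [obj for obj in real_objects
            if math.dist(obj["pos"], virtual_pos) < SCREEN_DIST]
```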
10. An electronic device, comprising an image acquisition module, an information display module, and a processing module; wherein:
the image acquisition module is used for acquiring image information of a first object;
the information display module is used for displaying a second object;
the processing module is used for obtaining the image information of the first object, wherein the first object is a real object in a real space; performing data processing on the image information to obtain first information corresponding to the first object; the first information comprises a first corresponding relation between first space information and first color information of the first object; obtaining second information corresponding to the second object, wherein the second object is a virtual object in a virtual space; the second information comprises a second corresponding relation between second spatial information and second color information of the second object; and if the collision risk is determined to exist based on the first corresponding relation and the second corresponding relation, outputting prompt information related to the collision risk.
CN202310342050.7A 2023-03-31 2023-03-31 Information prompting method and electronic equipment Pending CN116400807A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310342050.7A CN116400807A (en) 2023-03-31 2023-03-31 Information prompting method and electronic equipment


Publications (1)

Publication Number Publication Date
CN116400807A true CN116400807A (en) 2023-07-07

Family

ID=87009781




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination