CN113608614A - Display method, augmented reality device, equipment and computer-readable storage medium - Google Patents


Info

Publication number
CN113608614A
CN113608614A (application CN202110898485.0A)
Authority
CN
China
Prior art keywords
vehicle
information
current moment
positioning
dimensional scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110898485.0A
Other languages
Chinese (zh)
Inventor
孙红亮
徐立
王子彬
揭志伟
Current Assignee
Shanghai Sensetime Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Sensetime Intelligent Technology Co Ltd filed Critical Shanghai Sensetime Intelligent Technology Co Ltd
Priority to CN202110898485.0A priority Critical patent/CN113608614A/en
Publication of CN113608614A publication Critical patent/CN113608614A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/29: Geographical information databases
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14: Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/006: Mixed reality

Abstract

The embodiment discloses a display method, an AR device, equipment, and a computer-readable storage medium. The method comprises the following steps: acquiring a three-dimensional scene map of a preset area; in response to the vehicle traveling in the preset area, acquiring image information at the current moment collected by an image acquisition device on the vehicle, wherein the image acquisition device on the vehicle is used for collecting images outside the vehicle; positioning the vehicle based on the image information at the current moment and the three-dimensional scene map to obtain attribute data of the vehicle at the current moment, wherein the attribute data at least comprises position information of the vehicle in the three-dimensional scene map; acquiring, from virtual information of each position of interest preset in the three-dimensional scene map, target virtual information matched with the attribute data of the vehicle at the current moment; and displaying an AR effect in which the image information at the current moment and the target virtual information are superimposed.

Description

Display method, augmented reality device, equipment and computer-readable storage medium
Technical Field
The present disclosure relates to image processing technology, and in particular, but not exclusively, to a display method, an Augmented Reality (AR) device, an apparatus, and a computer-readable storage medium.
Background
At present, in scenes such as tourist attractions, parks, science and technology parks, large amusement parks, closed communities, campuses, resorts, and urban pedestrian streets, visitors can see the actual objects in the scenic spots and parks. However, the display effect of such scenes mostly depends on the explanation of a guide, and is therefore not intuitive or rich enough.
Disclosure of Invention
The disclosed embodiments provide a display method, an AR device, equipment, and a computer-readable storage medium.
The embodiment of the disclosure provides a display method, which comprises the following steps:
acquiring a three-dimensional scene map of a preset area;
in response to the vehicle traveling in the preset area, acquiring image information at the current moment collected by an image acquisition device on the vehicle, wherein the image acquisition device on the vehicle is used for collecting images outside the vehicle; positioning the vehicle based on the image information at the current moment and the three-dimensional scene map to obtain attribute data of the vehicle at the current moment, wherein the attribute data at least comprises position information of the vehicle in the three-dimensional scene map;
Acquiring target virtual information matched with attribute data of the vehicle at the current moment from virtual information of each interested position preset in the three-dimensional scene map;
and displaying the AR effect of the superposition of the image information at the current moment and the target virtual information.
It can be understood that, when the image information at the current moment is collected by the image acquisition device of the vehicle, the vehicle is positioned based on that image information and the three-dimensional scene map, so the position information of the vehicle at the current moment can be accurately determined; the target virtual information can then be determined according to the position information of the vehicle, and an AR effect in which the image information at the current moment and the target virtual information are superimposed is displayed.
In some embodiments of the present disclosure, the method further comprises:
in response to the vehicle traveling in the preset area, acquiring positioning information at the current moment collected by a positioning device on the vehicle;
the positioning the vehicle based on the image information of the current moment and the three-dimensional scene map to obtain the attribute data of the vehicle at the current moment comprises:
and positioning the vehicle based on the positioning information of the current moment, the image information of the current moment and the three-dimensional scene map to obtain the attribute data of the vehicle at the current moment.
In this way, by combining the positioning information collected by the positioning device, the position of the vehicle in the three-dimensional scene map at the current moment can be obtained more accurately.
In some embodiments of the present disclosure, the locating the vehicle based on the positioning information at the current time, the image information at the current time, and the three-dimensional scene map to obtain attribute data of the vehicle at the current time includes:
acquiring calibration parameters of the image acquisition equipment; determining a visual positioning result of the vehicle at the current moment according to the calibration parameters, the image information at the current moment and the three-dimensional scene map; and performing fusion processing according to the positioning information of the current moment and the visual positioning result of the vehicle at the current moment to obtain attribute data of the vehicle at the current moment.
In this way, by combining the calibration parameters of the image acquisition device, the visual positioning result of the vehicle can be accurately determined, so that the visual positioning result and the positioning information collected by the positioning device can be accurately fused, improving the positioning precision.
In some embodiments of the present disclosure, the acquiring calibration parameters of the image capturing device includes:
acquiring a standard image acquired by the image acquisition equipment at a preset position, and acquiring standard positioning information acquired by the positioning equipment at the preset position; and calibrating the parameters of the image acquisition equipment according to the standard image and the standard positioning information to obtain the calibration parameters of the image acquisition equipment.
Therefore, the parameters of the image acquisition equipment can be accurately calibrated by acquiring the standard image and the standard positioning information.
In some embodiments of the present disclosure, the positioning device comprises an inertial positioning device and/or a GPS device.
It can be seen that, by incorporating the positioning information from the inertial positioning device and/or the GPS device, the vehicle position can be located more accurately.
In some embodiments of the present disclosure, the virtual information of each position of interest preset in the three-dimensional scene map includes virtual information of a respective visual orientation of each position of interest; the attribute data further includes orientation information of the vehicle in the three-dimensional scene map;
correspondingly, the obtaining of the target virtual information matched with the attribute data of the vehicle at the current moment from the virtual information of each interested position preset in the three-dimensional scene map includes:
and determining target virtual information matched with the position information and the orientation information of the vehicle at the current moment in the virtual information of each interested position.
It can be seen that, in the embodiment of the present disclosure, the target virtual information that matches both the position information and the orientation information of the vehicle is determined, so that the virtual content can be displayed more accurately.
The disclosed embodiment also provides a display device, which comprises a first acquisition module, a processing module, a second acquisition module and a display module, wherein,
the first acquisition module is used for acquiring a three-dimensional scene map of a preset area;
the processing module is used for, in response to the vehicle traveling in the preset area, acquiring image information at the current moment collected by an image acquisition device on the vehicle, wherein the image acquisition device on the vehicle is used for collecting images outside the vehicle; and positioning the vehicle based on the image information at the current moment and the three-dimensional scene map to obtain attribute data of the vehicle at the current moment, wherein the attribute data at least comprises position information of the vehicle in the three-dimensional scene map;
the second acquisition module is used for acquiring target virtual information matched with the attribute data of the vehicle at the current moment from virtual information of each interested position preset in the three-dimensional scene map;
and the display module is used for displaying the AR effect of the superposition of the image information at the current moment and the target virtual information.
In some embodiments of the present disclosure, the processing module is further configured to, in response to a situation that the vehicle travels in the preset area, obtain positioning information of a current time, which is acquired by a positioning device on the vehicle;
the processing module is configured to locate the vehicle based on the image information at the current time and the three-dimensional scene map to obtain attribute data of the vehicle at the current time, and includes:
and positioning the vehicle based on the positioning information of the current moment, the image information of the current moment and the three-dimensional scene map to obtain the attribute data of the vehicle at the current moment.
In some embodiments of the present disclosure, the processing module is configured to locate the vehicle based on the positioning information at the current time, the image information at the current time, and the three-dimensional scene map, and obtain attribute data of the vehicle at the current time, including:
acquiring calibration parameters of the image acquisition equipment; determining a visual positioning result of the vehicle at the current moment according to the calibration parameters, the image information at the current moment and the three-dimensional scene map; and performing fusion processing according to the positioning information of the current moment and the visual positioning result of the vehicle at the current moment to obtain attribute data of the vehicle at the current moment.
In some embodiments of the present disclosure, the processing module, configured to obtain calibration parameters of the image capturing device, includes:
acquiring a standard image acquired by the image acquisition equipment at a preset position, and acquiring standard positioning information acquired by the positioning equipment at the preset position; and calibrating the parameters of the image acquisition equipment according to the standard image and the standard positioning information to obtain the calibration parameters of the image acquisition equipment.
In some embodiments of the present disclosure, the positioning device comprises an inertial positioning device and/or a GPS device.
In some embodiments of the present disclosure, the virtual information of each position of interest preset in the three-dimensional scene map includes virtual information of a respective visual orientation of each position of interest; the attribute data further includes orientation information of the vehicle in the three-dimensional scene map;
the second obtaining module is configured to obtain, from virtual information of each location of interest preset in the three-dimensional scene map, target virtual information matched with the attribute data of the vehicle at the current time, and includes:
and determining target virtual information matched with the position information and the orientation information of the vehicle at the current moment in the virtual information of each interested position.
The disclosed embodiments also provide an electronic device comprising a processor and a memory for storing a computer program capable of running on the processor; wherein the processor is configured to run the computer program to perform any one of the above methods.
Embodiments of the present disclosure also provide a computer storage medium having a computer program stored thereon, where the computer program is executed by a processor to implement any one of the above methods.
The display method, the AR device, the equipment, and the computer-readable storage medium provided by the embodiments of the present disclosure comprise: acquiring a three-dimensional scene map of a preset area; in response to the vehicle traveling in the preset area, acquiring image information at the current moment collected by an image acquisition device on the vehicle, wherein the image acquisition device on the vehicle is used for collecting images outside the vehicle; positioning the vehicle based on the image information at the current moment and the three-dimensional scene map to obtain attribute data of the vehicle at the current moment, wherein the attribute data at least comprises position information of the vehicle in the three-dimensional scene map; acquiring, from virtual information of each position of interest preset in the three-dimensional scene map, target virtual information matched with the attribute data of the vehicle at the current moment; and displaying an AR effect in which the image information at the current moment and the target virtual information are superimposed.
Therefore, when the image information at the current moment is collected by the image acquisition device of the vehicle, the vehicle is positioned based on that image information and the three-dimensional scene map, so the position information of the vehicle at the current moment can be accurately determined; the target virtual information can then be determined according to the position information of the vehicle, and an AR effect in which the image information at the current moment and the target virtual information are superimposed is displayed.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a schematic diagram of a display system according to an embodiment of the present disclosure;
FIG. 2 is a flow chart of a display method of an embodiment of the present disclosure;
FIG. 3 is another flow chart of a display method of an embodiment of the present disclosure;
fig. 4 is a schematic view of a vehicle and drone of an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of AR effects shown in an embodiment of the present disclosure;
FIG. 6 is yet another flow chart of a display method of an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of a display device according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
In the related art, the tourist guide vehicle is a type of regional electric vehicle: a special sightseeing vehicle developed for tourist attractions, parks, science and technology parks, large amusement parks, closed communities, campuses, holiday villages, urban pedestrian streets, and other regions. It is an environmentally friendly electric vehicle used for short rides, and can also serve as a shuttle for large enterprises such as industrial parks, science and technology parks, and government industrial parks. At present, a guide vehicle in scenes such as scenic spots and gardens serves only as a means of transport; it has no other guide function and depends on a guide's explanation, so the display effect is not intuitive or rich enough.
In view of the above technical problems, the technical solutions of the embodiments of the present disclosure are provided.
The present disclosure will be described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the examples provided herein are merely illustrative of the present disclosure and are not intended to limit the present disclosure. In addition, the embodiments provided below are some embodiments for implementing the disclosure, not all embodiments for implementing the disclosure, and the technical solutions described in the embodiments of the disclosure may be implemented in any combination without conflict.
AR technology skillfully fuses virtual information with the real world: an augmented reality scene, that is, virtual information fused into a real scene, is presented in an AR device. A presentation picture of the virtual information may be rendered directly so as to fuse with the real scene; for example, a set of virtual tea sets may be presented with the display effect of being placed on a real desktop in the real scene. Alternatively, a display picture in which a presentation special effect of the virtual information is fused with an image of the real scene may be presented. How AR technology can be used to present the display effect of scenes such as scenic spots and parks is described below with reference to the following specific embodiments.
The display method provided by the embodiment of the disclosure can be applied to a local server of a vehicle, the local server is a part of a display system, and the display system is exemplarily described below.
Fig. 1 is a schematic structural diagram of a display system 100 in an embodiment of the present disclosure. Referring to fig. 1, an AR device 101 and a local server 102 are located in a vehicle; the local server 102 may be connected to a cloud server 104 through a network 103, and the network 103 may be a wide area network, an Internet-of-Vehicles network, or another type of local area network. In a real scene, at least one image acquisition device for capturing images outside the vehicle is installed on the vehicle; the image acquisition device may be a camera or the like.
In some embodiments, the local server 102 may be an independent physical server, or may be a server cluster or a distributed system formed by a plurality of physical servers; cloud server 104 may be a cloud server that provides basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, web services, cloud communications, middleware services, domain name services, security services, Content Delivery Networks (CDNs), and big data and artificial intelligence platforms.
It should be noted that the application scenario shown in fig. 1 is only one exemplary scenario of the above presentation method, and the embodiment of the present disclosure is not limited thereto.
Fig. 2 is a flowchart of a presentation method according to an embodiment of the disclosure, and as shown in fig. 2, the flowchart may include:
step 201: and acquiring a three-dimensional scene map of a preset area.
In the embodiment of the present disclosure, the preset area may be an area such as a tourist attraction, a park, a scientific and technological park, a large amusement park, a closed community, a campus, a resort, and a city pedestrian street, and the vehicle may travel along a certain route in the preset area.
In some embodiments, a vehicle has a locating device mounted thereon; in practical application, referring to fig. 3, in the process of driving a vehicle in a preset area, image acquisition equipment and positioning equipment can be used for acquiring mapping data, so that three-dimensional scene map construction is realized according to the acquired mapping data. Here, the mapping data may include an image acquired by the image acquisition device and positioning information acquired by the positioning device.
Illustratively, the positioning device may be a satellite positioning device, an inertial positioning device, or a combination of both; the satellite positioning device may be a device based on the Global Positioning System (GPS), Galileo, GLONASS, or BeiDou.
In some embodiments, referring to fig. 3 and 4, in the process that the vehicle travels in the preset area, images may also be acquired by using image acquisition devices in the vehicle 401 and the unmanned aerial vehicle 402 at the same time, and the three-dimensional scene map construction may be implemented according to the images acquired by the image acquisition devices in the vehicle 401 and the unmanned aerial vehicle 402 and the positioning information acquired by the positioning device. For example, referring to fig. 4, during driving of vehicle 401, drone 402 may be suspended above vehicle 401; of course, the drone 402 may be in other locations as well, and the disclosed embodiments are not limited.
In some embodiments, the local server 102 may obtain mapping data collected by the image collecting device and the positioning device, and upload the mapping data to the cloud server 104 through the network 103; the cloud server 104 can construct a three-dimensional scene map of a preset area according to the mapping data; cloud server 104 may also send the three-dimensional scene map of the preset area to local server 102.
Step 202: the method comprises the steps of responding to the condition that a vehicle runs in a preset area, and obtaining image information of the current moment, which is collected by image collection equipment on the vehicle; and positioning the vehicle based on the image information and the three-dimensional scene map at the current moment to obtain attribute data of the vehicle at the current moment, wherein the attribute data at least comprises position information of the vehicle in the three-dimensional scene map.
In the embodiment of the disclosure, after the three-dimensional scene map of the preset area is acquired, the driver can drive the vehicle in the preset area again, so that the image information at the current moment collected by the image acquisition device on the vehicle can be acquired.
Here, the space in which the vehicle is located may be understood as a real space, and the space in which the image information at the current moment is located may be understood as a pixel space; the three-dimensional scene map corresponds to the real space. The correspondence between the pixel space and the real space can be determined from the distance between an object in the image at the current moment and the image acquisition device, together with the parameters of the image acquisition device. According to this correspondence, the coordinate system of the image information at the current moment can be converted into the same coordinate system as that of the three-dimensional scene map, thereby aligning the two coordinate systems. Since the image information at the current moment is related to the attribute data (e.g., position) of the vehicle at the current moment, after this alignment the attribute data of the vehicle at the current moment can be determined in the three-dimensional scene map; that is, the vehicle can be positioned.
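The coordinate-system alignment described above can be illustrated with a minimal pinhole-camera sketch (not part of the patent; the function name, matrix names, and values are assumptions): a pixel with a known depth is back-projected into camera coordinates using the intrinsic matrix, then transformed into the map frame using the camera pose.

```python
import numpy as np

def pixel_to_map(u, v, depth, K, T_map_cam):
    """Back-project pixel (u, v) with known depth into the map frame.

    K          -- 3x3 camera intrinsic matrix (from calibration)
    T_map_cam  -- 4x4 pose of the camera in the map frame
    """
    # Pixel -> camera coordinates (pinhole model)
    xyz_cam = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    # Camera -> map coordinates (homogeneous transform)
    xyz_map = T_map_cam @ np.append(xyz_cam, 1.0)
    return xyz_map[:3]
```

With an identity camera pose, a pixel at the principal point maps straight down the optical axis, which is a quick sanity check for the alignment.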
Step 203: and acquiring target virtual information matched with attribute data of the vehicle at the current moment from virtual information of each interested position preset in the three-dimensional scene map.
In some embodiments, after the cloud server constructs the three-dimensional scene map, the map may be displayed to a user. In practical applications, a vehicle-mounted AR rendering engine can capture the live-action picture in real time, render AR virtual content onto the captured picture, and simulate physical effects such as depth of field and gravity. A user can load the three-dimensional map of the current driving road section through a three-dimensional editor for editing, thereby obtaining the virtual information of each position of interest in the three-dimensional scene map. After the attribute data of the vehicle at the current moment is determined, the local server can acquire, through interaction with the cloud server, the target virtual information matched with the attribute data of the vehicle at the current moment.
In other embodiments, the local server may display the three-dimensional scene map to the user after acquiring the three-dimensional scene map; the user can edit the virtual information of each interested position in the three-dimensional scene map. After the local server determines the attribute data of the vehicle at the current moment, the target virtual information matched with the attribute data of the vehicle at the current moment can be determined in the virtual information of each interested position in the three-dimensional scene map.
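As an illustrative sketch only (the POI table, names, and distance threshold below are invented for the example and are not taken from the disclosure), matching target virtual information to the vehicle's position can be as simple as a nearest-position-of-interest lookup in map coordinates:

```python
import math

# Hypothetical table of positions of interest: map coordinates plus virtual content.
POINTS_OF_INTEREST = [
    {"name": "fountain", "pos": (10.0, 5.0), "virtual_info": "fountain history overlay"},
    {"name": "pavilion", "pos": (40.0, 12.0), "virtual_info": "pavilion culture overlay"},
]

def match_virtual_info(vehicle_pos, max_dist=15.0):
    """Return the virtual info of the nearest POI within max_dist metres, else None."""
    best, best_d = None, max_dist
    for poi in POINTS_OF_INTEREST:
        d = math.dist(vehicle_pos, poi["pos"])
        if d < best_d:
            best, best_d = poi["virtual_info"], d
    return best
```

A real system would also filter on the vehicle's orientation (as the later orientation-matching embodiment describes), but the distance gate alone conveys the idea.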
Step 204: and displaying the AR effect of the superposition of the image information at the current moment and the target virtual information.
In the embodiment of the disclosure, after the local server acquires the image information and the target virtual information at the current moment, the local server may display an AR effect obtained by superimposing the image information and the target virtual information at the current moment through a display screen of the AR device. Fig. 5 is a schematic diagram of an AR effect displayed by an AR device in an embodiment of the present disclosure.
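The superimposition itself can be pictured as a per-pixel alpha blend of the rendered virtual layer over the camera frame; the sketch below is a generic illustration (the array shapes and mask convention are assumptions, not the patent's implementation):

```python
import numpy as np

def overlay_ar(frame, virtual_layer, alpha_mask):
    """Alpha-blend a rendered virtual layer onto a camera frame.

    frame, virtual_layer -- HxWx3 uint8 images
    alpha_mask           -- HxW float array in [0, 1]; 0 keeps the real scene
    """
    a = alpha_mask[..., None]  # broadcast the mask over colour channels
    blended = (1.0 - a) * frame + a * virtual_layer
    return blended.astype(np.uint8)
```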
In practical applications, the steps 201 to 203 may be implemented based on a Processor of a local server, and the step 204 may be implemented based on a Processor of the local server and an AR Device, where the Processor may be at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a Central Processing Unit (CPU), a controller, a microcontroller, and a microprocessor.
It can be understood that, when the image information at the current moment is collected by the image acquisition device of the vehicle, the vehicle is positioned based on that image information and the three-dimensional scene map, so the position information of the vehicle at the current moment can be accurately determined; the target virtual information can then be determined according to the position information of the vehicle, and an AR effect in which the image information at the current moment and the target virtual information are superimposed is displayed.
Further, when the preset area is a scenic spot, a park, an industrial park, or the like, the display system of the embodiment of the present disclosure can meet the requirements for passenger shuttle service and guided explanation in the preset area, and can display content such as industrial development, economic planning, and science and technology culture in a visual and intelligent manner.
In some embodiments, the current-time positioning information collected by the positioning device on the vehicle may be acquired in response to the vehicle traveling in the preset area.
Correspondingly, the vehicle can be positioned based on the positioning information at the current moment, the image information at the current moment and the three-dimensional scene map, and the attribute data of the vehicle at the current moment is obtained.
In the embodiment of the present disclosure, referring to fig. 3, during the running of the vehicle, the image capturing device and the positioning device may be used to capture real-time data, where the real-time data may include image information captured by the image capturing device and positioning information captured by the positioning device. After the image information at the current time is obtained, the positioning information at the current time and the image information at the current time can be compared with the three-dimensional scene map to obtain the position information of the vehicle at the current time in the three-dimensional scene map.
By combining the positioning information collected by the positioning device, the position of the vehicle in the three-dimensional scene map at the current moment can be obtained more accurately.
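As a rough sketch of this idea (purely illustrative; the keyframe fields, coordinate units, and the 50 m search radius are assumptions, not part of the disclosure), the coarse positioning fix can be used to restrict visual matching to nearby map keyframes:

```python
import math

def candidate_keyframes(map_keyframes, positioning_fix, radius_m=50.0):
    """Keep only the map keyframes within radius_m of the coarse
    positioning fix, so visual matching searches a small region."""
    fx, fy = positioning_fix
    return [kf for kf in map_keyframes
            if math.hypot(kf["x"] - fx, kf["y"] - fy) <= radius_m]
```

Visual localization would then only compare the current image against this candidate set instead of the whole three-dimensional scene map.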
In some embodiments, locating a vehicle based on the positioning information at the current time, the image information at the current time, and the three-dimensional scene map, and obtaining attribute data of the vehicle at the current time may include: firstly, obtaining calibration parameters of image acquisition equipment; then, determining a visual positioning result of the vehicle at the current moment according to the calibration parameters, the image information at the current moment and the three-dimensional scene map; and finally, performing fusion processing according to the positioning information of the current moment and the visual positioning result of the vehicle at the current moment to obtain attribute data of the vehicle at the current moment.
For example, after the positioning information at the current moment is determined, a position area where the vehicle is located can be determined in the three-dimensional scene map according to that positioning information. After the calibration parameters of the image acquisition device are obtained, the image information at the current moment can be calibrated according to the calibration parameters to obtain a calibration result, and the calibration result can be compared with the three-dimensional scene map to obtain the visual positioning result of the vehicle at the current moment. By combining the visual positioning result of the vehicle at the current moment with the position area of the vehicle, the position information of the vehicle at the current moment in the three-dimensional scene map can be accurately determined.
It can be understood that a linear relationship exists between the image acquired by the image acquisition device and the real object in three-dimensional space, and this relationship is determined by the parameters of the image acquisition device. The calibration parameters of the image acquisition device can be obtained by calibrating the device, thereby determining the mapping relationship between the image acquired by the image acquisition device and the real object in three-dimensional space.
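A minimal sketch of that linear (pinhole) mapping is given below; the intrinsic matrix K and the pose (R, t) values are hypothetical, not taken from the disclosure:

```python
def project_point(K, R, t, pw):
    """Project a 3-D world point pw into pixel coordinates using the
    pinhole model: homogeneous pixel = K * (R * pw + t)."""
    # transform the world point into the camera frame
    pc = [sum(R[i][j] * pw[j] for j in range(3)) + t[i] for i in range(3)]
    # apply the intrinsic matrix and dehomogenise
    uvw = [sum(K[i][j] * pc[j] for j in range(3)) for i in range(3)]
    return (uvw[0] / uvw[2], uvw[1] / uvw[2])

# hypothetical intrinsics: focal length 100 px, principal point (320, 240)
K = [[100.0, 0.0, 320.0],
     [0.0, 100.0, 240.0],
     [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # identity rotation
t = [0.0, 0.0, 0.0]
```

Inverting this known mapping against the three-dimensional scene map is what allows a visual positioning result to be recovered from an image.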
For example, referring to fig. 3, after the visual positioning result of the vehicle at the current moment and the positioning information collected by the positioning device are obtained, the two may be fused to obtain the attribute data of the vehicle at the current moment. Because the attribute data represents a positioning result obtained by fusing the visual positioning result with the positioning information collected by the positioning device, the positioning accuracy of the vehicle is high.
It can be understood that the embodiment of the disclosure can improve the positioning accuracy of the vehicle by fusing the visual positioning result and the positioning information acquired by the positioning device.
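The disclosure does not specify a particular fusion algorithm; as one assumed illustration, the two position estimates could be combined per axis by inverse-variance weighting, so the more reliable source dominates:

```python
def fuse_positions(visual_pos, visual_var, device_pos, device_var):
    """Inverse-variance weighted fusion of a visual positioning result
    and a positioning-device fix, applied independently per axis."""
    w_v = 1.0 / visual_var   # weight of the visual estimate
    w_d = 1.0 / device_var   # weight of the positioning-device estimate
    return [(w_v * v + w_d * d) / (w_v + w_d)
            for v, d in zip(visual_pos, device_pos)]
```

With equal variances the fused estimate is the midpoint of the two; in practice a Kalman-style filter would also propagate the uncertainty over time.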
In some embodiments, obtaining the calibration parameters of the image acquisition device may be implemented as follows: acquiring a standard image collected by the image acquisition device at a preset position, and acquiring standard positioning information collected by the positioning device at the preset position; and calibrating the parameters of the image acquisition device according to the standard image and the standard positioning information to obtain the calibration parameters of the image acquisition device.
Here, the preset position may include at least one position, and it can be seen that by acquiring the standard image and the standard positioning information, the parameter of the image capturing device can be accurately calibrated.
For example, referring to fig. 3, before the vehicle travels in the preset area, calibration data may be collected in advance at a preset position by the image collecting device and the positioning device, where the calibration data includes a standard image collected by the image collecting device and standard positioning information collected by the positioning device during a calibration process of the image collecting device; according to the calibration data, the calibration parameters of the image acquisition equipment can be obtained.
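As a toy example of how a standard image plus standard positioning information can pin down a calibration parameter (the actual calibration procedure is not specified in the disclosure; the single-landmark focal-length recovery below, and all its values, are assumptions):

```python
def estimate_focal_length(u_pixel, cx, x_world, z_world):
    """Recover the focal length (in pixels) from one landmark whose
    camera-frame position (x_world, z_world) is known from the standard
    positioning information and whose pixel column u_pixel is observed
    in the standard image: u = cx + f * x / z  =>  f = (u - cx) * z / x."""
    return (u_pixel - cx) * z_world / x_world
```

In practice many landmarks at several preset positions would be observed and the full set of intrinsic and extrinsic parameters solved jointly, for example with a standard checkerboard calibration routine.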
In some embodiments, the positioning device may comprise an inertial positioning device.
For example, the inertial positioning device may be a GPS inertial navigation positioning device. Referring to fig. 6, the image acquired by the image acquisition device may be subjected to visual positioning to obtain a visual positioning result; high-precision inertial navigation positioning may be performed according to the information collected by the inertial positioning device to obtain inertial navigation positioning information; and, in the local server, the visual positioning result and the inertial navigation positioning information may be fused according to the calibration parameters to obtain the attribute data of the vehicle.
It can be seen that the accurate positioning of the vehicle position is facilitated by combining the positioning information of the inertial positioning device.
In some embodiments, the virtual information of each position of interest preset in the three-dimensional scene map comprises virtual information of each visual orientation of the position of interest; that is, for any one location of interest in the three-dimensional scene map, the corresponding virtual information may be set in multiple visual orientations.
The attribute data of the vehicle may further include orientation information of the vehicle in the three-dimensional scene map.
Accordingly, obtaining target virtual information matched with the attribute data of the vehicle at the current moment from the virtual information of each interested position preset in the three-dimensional scene map may include: determining, among the virtual information of each position of interest, the target virtual information that matches the position information and the orientation information of the vehicle at the current moment.
Referring to fig. 3 and 6, the positioning information may be obtained according to the visual positioning result and the positioning device, and attribute data including the position and the orientation of the vehicle in the three-dimensional scene map may be determined, so that the target virtual information may be determined in the virtual information of each interested position in the three-dimensional scene map according to the attribute data of the vehicle. After the target virtual information is determined, the AR effect of the superposition of the image information at the current moment and the target virtual information can be displayed in a virtual-real fusion mode.
It can be seen that, in the embodiment of the present disclosure, the target virtual information matched with both the position information and the orientation information of the vehicle is determined from the two perspectives of position and orientation, so that virtual content can be displayed more accurately.
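A sketch of this position-and-orientation matching follows; the POI table layout, the 30 m distance gate, and the 90° field of view are all illustrative assumptions rather than details of the disclosure:

```python
import math

def match_virtual_info(poi_table, vehicle_pos, heading_deg,
                       max_dist=30.0, fov_deg=90.0):
    """Return the virtual content of the nearest position of interest that
    lies within the vehicle's field of view, choosing the entry whose
    preset visual orientation is closest to the actual viewing bearing."""
    best = None
    for poi in poi_table:
        dx = poi["x"] - vehicle_pos[0]
        dy = poi["y"] - vehicle_pos[1]
        dist = math.hypot(dx, dy)
        if dist > max_dist:
            continue  # too far away to display
        bearing = math.degrees(math.atan2(dy, dx)) % 360.0
        off_axis = abs((bearing - heading_deg + 180.0) % 360.0 - 180.0)
        if off_axis > fov_deg / 2.0:
            continue  # outside the camera's field of view
        if best is None or dist < best[0]:
            best = (dist, poi, bearing)
    if best is None:
        return None
    _, poi, bearing = best
    # pick the preset visual orientation closest to the viewing bearing
    orientation = min(poi["virtual_info"],
                      key=lambda a: abs((a - bearing + 180.0) % 360.0 - 180.0))
    return poi["virtual_info"][orientation]
```

The per-orientation dictionary mirrors the idea that each position of interest carries separate virtual information for each visual orientation.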
On the basis of the display method provided by the foregoing embodiment, the embodiment of the present disclosure further provides a display device.
Fig. 7 is a schematic structural diagram of a display device according to an embodiment of the disclosure, and as shown in fig. 7, the display device may include: a first acquisition module 701, a processing module 702, a second acquisition module 703 and a presentation module 704, wherein,
a first obtaining module 701, configured to obtain a three-dimensional scene map of a preset area;
the processing module 702 is configured to, in response to a situation that the vehicle runs in the preset area, acquire image information of a current moment acquired by an image acquisition device on the vehicle, where the image acquisition device on the vehicle is configured to acquire an image outside the vehicle; positioning the vehicle based on the image information at the current moment and the three-dimensional scene map to obtain attribute data of the vehicle at the current moment, wherein the attribute data at least comprises position information of the vehicle in the three-dimensional scene map;
a second obtaining module 703, configured to obtain, from virtual information of each location of interest preset in the three-dimensional scene map, target virtual information that matches the attribute data of the vehicle at the current time;
a display module 704, configured to display an AR effect obtained by superimposing the image information at the current time and the target virtual information.
In some embodiments of the present disclosure, the processing module 702 is further configured to, in response to a situation that the vehicle travels in the preset area, obtain positioning information of a current time, which is collected by a positioning device on the vehicle;
the processing module 702 is configured to locate the vehicle based on the image information at the current time and the three-dimensional scene map, and obtain attribute data of the vehicle at the current time, where the processing module includes:
and positioning the vehicle based on the positioning information of the current moment, the image information of the current moment and the three-dimensional scene map to obtain the attribute data of the vehicle at the current moment.
In some embodiments of the present disclosure, the processing module is configured to locate the vehicle based on the positioning information at the current time, the image information at the current time, and the three-dimensional scene map, and obtain attribute data of the vehicle at the current time, including:
acquiring calibration parameters of the image acquisition equipment; determining a visual positioning result of the vehicle at the current moment according to the calibration parameters, the image information at the current moment and the three-dimensional scene map; and performing fusion processing according to the positioning information of the current moment and the visual positioning result of the vehicle at the current moment to obtain attribute data of the vehicle at the current moment.
In some embodiments of the present disclosure, the processing module 702 is configured to obtain calibration parameters of the image capturing apparatus, including:
acquiring a standard image acquired by the image acquisition equipment at a preset position, and acquiring standard positioning information acquired by the positioning equipment at the preset position; and calibrating the parameters of the image acquisition equipment according to the standard image and the standard positioning information to obtain the calibration parameters of the image acquisition equipment.
In some embodiments of the present disclosure, the positioning device comprises an inertial positioning device.
In some embodiments of the present disclosure, the virtual information of each position of interest preset in the three-dimensional scene map includes virtual information of a respective visual orientation of each position of interest; the attribute data further includes orientation information of the vehicle in the three-dimensional scene map;
the second obtaining module 703 is configured to obtain, from the virtual information of each interested location preset in the three-dimensional scene map, target virtual information matched with the attribute data of the vehicle at the current time, where the target virtual information includes:
and determining target virtual information matched with the position information and the orientation information of the vehicle at the current moment in the virtual information of each interested position.
The first obtaining module 701, the processing module 702, and the second obtaining module 703 may be implemented based on a processor of a local server, and the displaying module 704 may be implemented based on a processor of a local server and a display of an AR device.
In addition, each functional module in this embodiment may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware or a form of a software functional module.
Based on such understanding, the technical solution of the present embodiment, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, which includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the method of the present embodiment. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Specifically, the computer program instructions corresponding to a display method in the present embodiment may be stored on a storage medium such as an optical disc, a hard disc, a usb disk, or the like, and when the computer program instructions corresponding to a display method in the storage medium are read or executed by an electronic device, any one of the display methods in the foregoing embodiments is implemented.
Based on the same technical concept as the foregoing embodiment, referring to fig. 8, an embodiment of the present invention provides an electronic device 80, which may include: a memory 81, a processor 82, and a computer program stored on the memory 81 and executable on the processor 82; wherein,
a memory 81 for storing computer programs and data;
a processor 82 for executing the computer program stored in the memory to implement any of the presentation methods of the previous embodiments.
In practical applications, the memory 81 may be a volatile memory such as a Random Access Memory (RAM); or a non-volatile memory, such as a ROM, a flash memory, a Hard Disk Drive (HDD), or a Solid-State Drive (SSD); or a combination of the above types of memories, and it provides instructions and data to the processor 82.
The processor 82 may be at least one of ASIC, DSP, DSPD, PLD, FPGA, CPU, controller, microcontroller, and microprocessor.
The disclosure relates to the field of augmented reality. By acquiring image information of a target object in a real environment, relevant features, states, and attributes of the target object are detected or identified by means of various vision-related algorithms, so as to obtain an AR effect combining virtual and real content matched with a specific application. For example, the target object may involve a face, limbs, gestures, or actions associated with a human body, or a marker associated with an object, or a sand table, display area, or display item associated with a venue or place. The vision-related algorithms may involve visual localization, Simultaneous Localization And Mapping (SLAM), three-dimensional reconstruction, image registration, background segmentation, key point extraction and tracking of objects, pose or depth detection of objects, and the like. The specific application may involve not only interactive scenarios such as navigation, explanation, reconstruction, and virtual-effect superposition display related to real scenes or articles, but also special-effect processing related to people, such as interactive scenarios of makeup beautification, body beautification, special-effect display, and virtual model display.
The detection or identification processing of the relevant characteristics, states and attributes of the target object can be realized through the convolutional neural network. The convolutional neural network is a network model obtained by performing model training based on a deep learning framework.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
The foregoing description of the various embodiments is intended to highlight the differences between the embodiments; the same or similar parts may be referred to each other and are not repeated herein for brevity.
The methods disclosed in the method embodiments provided by the present disclosure may be combined arbitrarily without conflict to obtain new method embodiments.
Features disclosed in the various product embodiments provided by the disclosure may be combined in any combination to yield new product embodiments without conflict.
The features disclosed in the various method or apparatus embodiments provided by the present disclosure may be combined arbitrarily, without conflict, to arrive at new method embodiments or apparatus embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A method of displaying, the method comprising:
acquiring a three-dimensional scene map of a preset area;
responding to the condition that the vehicle runs in the preset area, and acquiring image information of the current moment acquired by an image acquisition device on the vehicle, wherein the image acquisition device on the vehicle is used for acquiring images outside the vehicle; positioning the vehicle based on the image information at the current moment and the three-dimensional scene map to obtain attribute data of the vehicle at the current moment, wherein the attribute data at least comprises position information of the vehicle in the three-dimensional scene map;
acquiring target virtual information matched with attribute data of the vehicle at the current moment from virtual information of each interested position preset in the three-dimensional scene map;
and displaying the AR effect of the superposition of the image information at the current moment and the target virtual information.
2. The method of claim 1, further comprising:
responding to the condition that the vehicle runs in the preset area, and acquiring positioning information of the current moment acquired by positioning equipment on the vehicle;
the positioning the vehicle based on the image information of the current moment and the three-dimensional scene map to obtain the attribute data of the vehicle at the current moment comprises:
and positioning the vehicle based on the positioning information of the current moment, the image information of the current moment and the three-dimensional scene map to obtain the attribute data of the vehicle at the current moment.
3. The method according to claim 2, wherein the locating the vehicle based on the location information of the current time, the image information of the current time and the three-dimensional scene map to obtain the attribute data of the vehicle at the current time comprises:
acquiring calibration parameters of the image acquisition equipment; determining a visual positioning result of the vehicle at the current moment according to the calibration parameters, the image information at the current moment and the three-dimensional scene map; and performing fusion processing according to the positioning information of the current moment and the visual positioning result of the vehicle at the current moment to obtain attribute data of the vehicle at the current moment.
4. The method according to claim 3, wherein the obtaining calibration parameters of the image acquisition device comprises:
acquiring a standard image acquired by the image acquisition equipment at a preset position, and acquiring standard positioning information acquired by the positioning equipment at the preset position; and calibrating the parameters of the image acquisition equipment according to the standard image and the standard positioning information to obtain the calibration parameters of the image acquisition equipment.
5. The method according to any of claims 2 to 4, wherein the positioning device comprises an inertial positioning device and/or a GPS device.
6. The method according to claim 1, wherein the virtual information of each position of interest preset in the three-dimensional scene map comprises virtual information of each visual orientation of each position of interest; the attribute data further includes orientation information of the vehicle in the three-dimensional scene map;
correspondingly, the obtaining of the target virtual information matched with the attribute data of the vehicle at the current moment from the virtual information of each interested position preset in the three-dimensional scene map includes:
and determining target virtual information matched with the position information and the orientation information of the vehicle at the current moment in the virtual information of each interested position.
7. A presentation apparatus, comprising a first acquisition module, a processing module, a second acquisition module, and a presentation module, wherein,
the first acquisition module is used for acquiring a three-dimensional scene map of a preset area;
the processing module is used for responding to the condition that the vehicle runs in the preset area, acquiring image information of the current moment acquired by image acquisition equipment on the vehicle, wherein the image acquisition equipment on the vehicle is used for acquiring images outside the vehicle; positioning the vehicle based on the image information at the current moment and the three-dimensional scene map to obtain attribute data of the vehicle at the current moment, wherein the attribute data at least comprises position information of the vehicle in the three-dimensional scene map;
the second acquisition module is used for acquiring target virtual information matched with the attribute data of the vehicle at the current moment from virtual information of each interested position preset in the three-dimensional scene map;
and the display module is used for displaying the AR effect of the superposition of the image information at the current moment and the target virtual information.
8. The device of claim 7, wherein the processing module is further configured to, in response to a situation that the vehicle travels in the preset area, obtain positioning information of a current time, which is collected by a positioning device on the vehicle;
the processing module is configured to locate the vehicle based on the image information at the current time and the three-dimensional scene map to obtain attribute data of the vehicle at the current time, and includes:
and positioning the vehicle based on the positioning information of the current moment, the image information of the current moment and the three-dimensional scene map to obtain the attribute data of the vehicle at the current moment.
9. An electronic device comprising a processor and a memory for storing a computer program operable on the processor; wherein,
the processor is configured to run the computer program to perform the method of any of claims 1 to 6.
10. A computer storage medium on which a computer program is stored, characterized in that the computer program realizes the method of any one of claims 1 to 6 when executed by a processor.
CN202110898485.0A 2021-08-05 2021-08-05 Display method, augmented reality device, equipment and computer-readable storage medium Pending CN113608614A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110898485.0A CN113608614A (en) 2021-08-05 2021-08-05 Display method, augmented reality device, equipment and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110898485.0A CN113608614A (en) 2021-08-05 2021-08-05 Display method, augmented reality device, equipment and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN113608614A true CN113608614A (en) 2021-11-05

Family

ID=78307236

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110898485.0A Pending CN113608614A (en) 2021-08-05 2021-08-05 Display method, augmented reality device, equipment and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN113608614A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116188680A (en) * 2022-12-21 2023-05-30 金税信息技术服务股份有限公司 Dynamic display method and device for gun in-place state
DE102022108773A1 (en) 2022-04-11 2023-10-12 Volkswagen Aktiengesellschaft User interface, means of transport and method for displaying a virtual three-dimensional navigation map


Similar Documents

Publication Publication Date Title
US11386672B2 (en) Need-sensitive image and location capture system and method
CN109583415B (en) Traffic light detection and identification method based on fusion of laser radar and camera
US11501104B2 (en) Method, apparatus, and system for providing image labeling for cross view alignment
CN111566664A (en) Method, apparatus and system for generating synthetic image data for machine learning
WO2020043081A1 (en) Positioning technique
US11024054B2 (en) Method, apparatus, and system for estimating the quality of camera pose data using ground control points of known quality
EP3644013B1 (en) Method, apparatus, and system for location correction based on feature point correspondence
CN113608614A (en) Display method, augmented reality device, equipment and computer-readable storage medium
Zhou et al. Developing and testing robust autonomy: The university of sydney campus data set
US20200272847A1 (en) Method, apparatus, and system for generating feature correspondence from camera geometry
WO2023123837A1 (en) Map generation method and apparatus, electronic device, and storage medium
CN112650772A (en) Data processing method, data processing device, storage medium and computer equipment
CN115164918A (en) Semantic point cloud map construction method and device and electronic equipment
CN105444773A (en) Navigation method and system based on real scene recognition and augmented reality
CN110827340B (en) Map updating method, device and storage medium
EP4202835A1 (en) Method, apparatus, and system for pole extraction from optical imagery
US20210334307A1 (en) Methods and systems for generating picture set from video
US20220122316A1 (en) Point cloud creation
CN107734383A (en) A kind of method that real-time geographic information is merged in mobile video
CN113724397A (en) Virtual object positioning method and device, electronic equipment and storage medium
CN111882675A (en) Model presentation method and device, electronic equipment and computer storage medium
US20240013554A1 (en) Method, apparatus, and system for providing machine learning-based registration of imagery with different perspectives
EP4202833A1 (en) Method, apparatus, and system for pole extraction from a single image
Sugimoto et al. Intersection Prediction from Single 360 {\deg} Image via Deep Detection of Possible Direction of Travel
WO2023053485A1 (en) Information processing device, information processing method, and information processing program

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination