CN114088059A - Map information acquisition method based on intelligent street lamp and construction method of environment map


Info

Publication number: CN114088059A
Application number: CN202010744309.7A
Authority: CN (China)
Legal status: Pending
Prior art keywords: information, street lamp, intelligent street, image, local environment
Other languages: Chinese (zh)
Inventors: 陆凡, 肖洪波, 付铭明
Applicant and current assignee: Zhuhai Xingke Hechuang Technology Co., Ltd.
Priority application: CN202010744309.7A

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 11/00 — Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G — PHYSICS
    • G03 — PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B — APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B 15/00 — Special procedures for taking photographs; Apparatus therefor
    • G03B 15/02 — Illuminating scene
    • G03B 15/03 — Combinations of cameras with lighting apparatus; Flash units
    • G03B 15/05 — Combinations of cameras with electronic flash apparatus; Electronic flash units
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 — Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 — Protocols
    • H04L 67/06 — Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 — Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 — Camera processing pipelines; Components thereof

Abstract

An embodiment of the application provides a map information acquisition method based on an intelligent street lamp, and also discloses a method and a system for constructing an environment map. The map information acquisition method comprises the following steps: obtaining, based on an image acquisition tool installed on an intelligent street lamp body, image information within a preset range centered on the intelligent street lamp body; performing pose adjustment on the image information according to pose information obtained by the image acquisition tool when the image information was acquired, so as to generate a local environment map centered on the intelligent street lamp body; and sending the local environment map to an environment map control end for constructing an environment map, wherein the local environment map contains identification information of the intelligent street lamp body from which it was obtained. The method reduces the transmission pressure of large data streams and enables the environment map control end to draw an environment map of the whole area from the correspondence between the identification information of each intelligent street lamp and its local environment map.

Description

Map information acquisition method based on intelligent street lamp and construction method of environment map
Technical Field
The application relates to the field of map construction, and in particular to a map information acquisition method based on an intelligent street lamp. The application also relates to a method and a system for constructing an environment map.
Background
In prior-art map construction methods, picture data is usually collected on site by operators with a collection vehicle, or obtained by an unmanned aerial vehicle performing mobile scanning, and transmitted back to a processing center; the collected picture data is then processed there to construct a map.
However, such methods cannot collect field information in a timely manner. In an urban environment, road traffic, buildings and other conditions change constantly, and a map obtained in this way can hardly reflect those changes as they occur.
Disclosure of Invention
The application provides a map information acquisition method based on an intelligent street lamp and a method for constructing an environment map; the construction method builds on the map information acquisition method. Together they facilitate timely acquisition and timely processing of field image data. The application also provides a system for constructing an environment map.
The application provides a map information acquisition method based on an intelligent street lamp, comprising: obtaining, based on an image acquisition tool installed on an intelligent street lamp body, image information within a preset range centered on the intelligent street lamp body; performing pose adjustment on the image information according to pose information obtained by the image acquisition tool when the image information was acquired, so as to generate a local environment map centered on the intelligent street lamp body; and sending the local environment map to an environment map control end for constructing an environment map, wherein the local environment map contains identification information of the intelligent street lamp body from which it was obtained.
Optionally, the pose information includes a position relationship between the image capture tool and the intelligent street lamp body, and a position relationship between the image capture tool and the environment where the image capture tool is located.
Optionally, the method further includes: and pre-storing the geographical position information of the intelligent street lamp body corresponding to the identification information of the intelligent street lamp body.
Optionally, the obtaining of the image information within the preset range with the intelligent street lamp body as the center further includes: recording time information for obtaining the image information; in the step of generating the local environment map with the intelligent street lamp body as the center, the local environment map includes the time information of obtaining the image information.
Optionally, the performing pose adjustment on the image information includes: determining an initial pose of the image acquisition tool in a first spherical coordinate system with the intelligent street lamp body as a center; and matching the pose information when the image acquisition tool acquires the image information with the initial pose of the image acquisition tool, and adjusting the pose information when the image acquisition tool acquires the image information into the target pose information of the image information acquired by the image acquisition tool in the first spherical coordinate system.
Optionally, the first spherical coordinate system includes the following factors: taking the contact position of the intelligent street lamp body and the ground as an origin of coordinates; taking the direction vertical to the ground as the Z axis of the first spherical coordinate system; taking a direction parallel to the road advancing direction of the ground as an X axis of a first spherical coordinate system; taking a direction on one side perpendicular to the road advancing direction of the ground as a Y axis of a first spherical coordinate system; and determining the pose information of the image acquisition tool according to the first spherical coordinate system, and taking the pose information as the initial pose of the image acquisition tool in the first spherical coordinate system with the intelligent street lamp body as the center.
Optionally, the pose information obtained by the image capture tool when obtaining the image information is pose information of the image information obtained by the image capture tool in a second spherical coordinate system with the image capture tool as a center.
Optionally, the second spherical coordinate system includes the following factors: taking the initial pose of the image acquisition tool as a coordinate origin; taking the direction vertical to the ground as a Z1 axis of a second spherical coordinate system; setting the direction parallel to the X axis of the first spherical coordinate system as the X1 axis of the second spherical coordinate system; setting the direction parallel to the Y axis of the first spherical coordinate system as the Y1 axis of the second spherical coordinate system; and determining pose information when the image acquisition tool acquires the image information according to the second spherical coordinate system.
Optionally, the matching the pose information obtained when the image information is acquired by the image capture tool with the initial pose of the image capture tool includes: determining distance offset and angle offset between pose information when the image acquisition tool acquires image information and an initial pose of the image acquisition tool according to the second spherical coordinate system; and determining target pose information of the image information acquired by the image acquisition tool in the first spherical coordinate system according to the initial pose of the image acquisition tool, the distance offset and the angle offset of the image information.
The application also provides a method for constructing an environment map, comprising: obtaining local environment maps from the intelligent street lamp bodies; parsing the local environment maps to obtain the identification information of the intelligent street lamp corresponding to each local environment map; obtaining the geographical position information of each intelligent street lamp according to its identification information and using it as the geographical position information of the corresponding local environment map; and deriving the spatial relative relationship between the local environment maps from their geographical position information, then splicing the local environment maps according to that relationship to obtain a complete map of the whole area.
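As an illustration only, this construction flow can be sketched in a few lines of Python; the type and function names (LocalMap, geo_registry, stitch) are assumptions introduced for the example, not identifiers from the application.

from dataclasses import dataclass

@dataclass
class LocalMap:
    lamp_id: str        # identification information of the intelligent street lamp body
    image_data: bytes   # pose-adjusted local environment map payload
    timestamp: float    # time at which the underlying image information was obtained

def build_full_map(local_maps, geo_registry, stitch):
    """Assemble a complete map of the whole area from per-lamp local maps.

    geo_registry: pre-stored mapping lamp_id -> (latitude, longitude)
    stitch:       callable that splices geo-referenced local maps into one map
    """
    geo_tagged = []
    for m in local_maps:
        # Parse the identification information and attach the pre-stored
        # geographical position of that street lamp to its local map.
        lat, lon = geo_registry[m.lamp_id]
        geo_tagged.append((m, (lat, lon)))
    # The spatial relative relationship between local maps follows from the
    # positions; splicing them yields the complete map of the whole area.
    return stitch(geo_tagged)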
Optionally, the obtaining the local environment map from each intelligent street lamp body further includes: analyzing the local environment map of each intelligent street lamp body, and acquiring time information corresponding to each local environment map of each intelligent street lamp body; and obtaining the time relative relation between the local environment maps of each intelligent street lamp body according to the time information corresponding to the local environment maps of each intelligent street lamp body.
Optionally, the obtaining a complete map of the whole area according to the geographical location information of each local environment map includes: clustering the local environment maps of each intelligent street lamp body according to a time sequence according to the relative relation of time among the local environment maps of each intelligent street lamp body; according to the local environment maps of the intelligent street lamps under the same time condition and the relative spatial relationship, the local environment maps are spliced to obtain a complete map of the whole area under the same time condition.
Optionally, the local environment map obtained from each intelligent street lamp body is obtained by the following method: the method comprises the steps that based on an image acquisition tool installed on an intelligent street lamp body, image information in a preset range with the street lamp body as a center is obtained; and adjusting the pose of the image information according to the pose information obtained by the image acquisition tool when the image information is acquired, so as to generate a local environment map with the intelligent street lamp body as the center.
Optionally, the pose information includes a position relationship between the image capture tool and the intelligent street lamp body, and a position relationship between the image capture tool and the environment where the image capture tool is located.
Optionally, the pose adjustment of the image information includes: determining an initial pose of the image acquisition tool in a first spherical coordinate system with the intelligent street lamp body as a center; and matching the pose information when the image acquisition tool acquires the image information with the initial pose of the image acquisition tool, and adjusting the pose information when the image acquisition tool acquires the image information into the target pose information of the image information acquired by the image acquisition tool in the first spherical coordinate system.
Optionally, the first spherical coordinate system includes the following factors: taking the contact position of the intelligent street lamp body and the ground as an origin of coordinates; taking the direction vertical to the ground as the Z axis of the first spherical coordinate system; taking a direction parallel to the road advancing direction of the ground as an X axis of a first spherical coordinate system; taking a direction on one side perpendicular to the road advancing direction of the ground as a Y axis of a first spherical coordinate system; and determining the pose information of the image acquisition tool according to the first spherical coordinate system, and taking the pose information as the initial pose of the image acquisition tool in the first spherical coordinate system with the intelligent street lamp body as the center.
Optionally, the pose information obtained by the image capture tool when obtaining the image information is pose information of the image information obtained by the image capture tool in a second spherical coordinate system with the image capture tool as a center.
Optionally, the second spherical coordinate system includes the following factors: taking the initial pose of the image acquisition tool as a coordinate origin; taking the direction vertical to the ground as a Z1 axis of a second spherical coordinate system; setting the direction parallel to the X axis of the first spherical coordinate system as the X1 axis of the second spherical coordinate system; setting the direction parallel to the Y axis of the first spherical coordinate system as the Y1 axis of the second spherical coordinate system; and determining pose information when the image acquisition tool acquires the image information according to the second spherical coordinate system.
Optionally, the matching the pose information obtained when the image information is acquired by the image capture tool with the initial pose of the image capture tool includes: determining distance offset and angle offset between pose information when the image acquisition tool acquires image information and an initial pose of the image acquisition tool according to the second spherical coordinate system; and determining target pose information of the image information acquired by the image acquisition tool in the first spherical coordinate system according to the initial pose of the image acquisition tool, the distance offset and the angle offset of the image information.
The present application further provides a system for constructing an environment map, comprising an environment map information acquisition end and an environment map control end. The environment map information acquisition end obtains image information within a preset range centered on the intelligent street lamp body, generates a local environment map centered on the intelligent street lamp body according to the pose information of the image information, and sends the local environment map of the intelligent street lamp body to the environment map control end. The environment map control end obtains the local environment maps from the intelligent street lamps, parses them to obtain the geographical position information of each intelligent street lamp, and splices the local environment maps according to that geographical position information to obtain a complete map of the whole area.
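A minimal structural sketch of the two ends of such a system is given below; the class and method names (MapInfoCollector, capture_panorama, pose_of) are illustrative assumptions rather than components named in the application.

class MapInfoCollector:
    """Runs on the intelligent street lamp body (environment map information acquisition end)."""

    def __init__(self, lamp_id, camera, pose_estimator):
        self.lamp_id = lamp_id                # identification information
        self.camera = camera                  # image acquisition tool
        self.pose_estimator = pose_estimator  # e.g. a SLAM-style pose source

    def collect_local_map(self):
        images = self.camera.capture_panorama()   # hypothetical call
        poses = [self.pose_estimator.pose_of(img) for img in images]
        # Pose adjustment and local map generation would happen here.
        return {"lamp_id": self.lamp_id, "images": images, "poses": poses}


class MapControlServer:
    """Environment map control end: merges local maps into a complete map."""

    def __init__(self, geo_registry):
        self.geo_registry = geo_registry      # lamp_id -> (latitude, longitude)

    def locate(self, local_maps):
        # Attach the pre-stored geographical position of each lamp; splicing
        # of the located maps is omitted in this sketch.
        return [(m, self.geo_registry[m["lamp_id"]]) for m in local_maps]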
Optionally, the environment map information collecting end includes: the system comprises an image acquisition end and an image processing end; the image acquisition end is used for acquiring image information within a preset range with the intelligent street lamp body as a center based on an image acquisition tool installed on the intelligent street lamp body and sending the image information to the image processing end; the image processing terminal is used for adjusting the pose of the image information according to the pose information obtained by the image acquisition tool when the image information is acquired, and generating a local environment map with the intelligent street lamp body as the center; sending the local environment map to an environment map control end for constructing an environment map; the local environment map comprises identification information corresponding to the intelligent street lamp for obtaining the local environment map.
Optionally, when obtaining the local environment maps from the intelligent street lamp bodies, the environment map control terminal is further configured to analyze the local environment map of each intelligent street lamp body, and obtain time information corresponding to each local environment map of each intelligent street lamp body; and obtaining the time relative relation between the local environment maps of each intelligent street lamp body according to the time information corresponding to the local environment maps of each intelligent street lamp body.
Optionally, the environment map control end is specifically configured to cluster the local environment maps of each intelligent street lamp body according to a time sequence according to a relative time relationship between the local environment maps of each intelligent street lamp body when obtaining a complete map of the whole area according to the geographical location information of each local environment map; according to the local environment maps of the intelligent street lamps under the same time condition and the relative spatial relationship, the local environment maps are spliced to obtain a complete map of the whole area under the same time condition.
Optionally, the pose information includes a position relationship between the image capture tool and the intelligent street lamp body, and a position relationship between the image capture tool and the environment where the image capture tool is located.
Optionally, when the image processing end adjusts the pose of the image information, the image processing end is specifically configured to determine an initial pose of the image capture tool in a first spherical coordinate system with the intelligent street lamp body as a center; and matching the pose information when the image acquisition tool acquires the image information with the initial pose of the image acquisition tool, and adjusting the pose information when the image acquisition tool acquires the image information into the target pose information of the image information acquired by the image acquisition tool in the first spherical coordinate system.
Optionally, the first spherical coordinate system includes the following factors: taking the contact position of the intelligent street lamp body and the ground as an origin of coordinates; taking the direction vertical to the ground as the Z axis of the first spherical coordinate system; taking a direction parallel to the road advancing direction of the ground as an X axis of a first spherical coordinate system; taking a direction on one side perpendicular to the road advancing direction of the ground as a Y axis of a first spherical coordinate system; the image processing terminal is specifically configured to determine pose information of the image capture tool according to the first spherical coordinate system, and use the pose information as an initial pose of the image capture tool in the first spherical coordinate system with the intelligent street lamp body as a center.
Optionally, the pose information obtained by the image capture tool when obtaining the image information is pose information of the image information obtained by the image capture tool in a second spherical coordinate system with the image capture tool as a center.
Optionally, the second spherical coordinate includes the following factors: taking the initial pose of the image acquisition tool as a coordinate origin; taking the direction vertical to the ground as a Z1 axis of a second spherical coordinate system; setting the direction parallel to the X axis of the first spherical coordinate system as the X1 axis of the second spherical coordinate system; setting the direction parallel to the Y axis of the first spherical coordinate system as the Y1 axis of the second spherical coordinate system; the image processing terminal is specifically configured to determine pose information when the image acquisition tool acquires image information according to the second spherical coordinate system.
Optionally, when the image processing end matches the pose information obtained when the image acquisition tool acquires the image information with the initial pose of the image acquisition tool, the image processing end is specifically configured to determine, according to the second spherical coordinate system, a distance offset and an angle offset between the pose information obtained when the image acquisition tool acquires the image information and the initial pose of the image acquisition tool; and determining target pose information of the image information acquired by the image acquisition tool in the first spherical coordinate system according to the initial pose of the image acquisition tool, the distance offset and the angle offset of the image information.
To summarize, the application provides a map information acquisition method based on an intelligent street lamp, comprising: obtaining, based on an image acquisition tool installed on an intelligent street lamp body, image information within a preset range centered on the intelligent street lamp body; performing pose adjustment on the image information according to pose information obtained by the image acquisition tool when the image information was acquired, so as to generate a local environment map centered on the intelligent street lamp body; and sending the local environment map to an environment map control end for constructing an environment map, wherein the local environment map contains identification information of the intelligent street lamp body from which it was obtained.
With this technical scheme, environment information can be acquired at any time by the image acquisition tool of the intelligent street lamp, so the resulting map reflects the latest environment information and the lag between the map and the actual geographic environment is effectively reduced. In addition, collecting the image information with an image acquisition tool mounted on the intelligent street lamp body reduces the outdoor workload of personnel and significantly shortens map generation time.
In a preferred embodiment of the application, each intelligent street lamp is provided with an image processing unit, so the collected field image data can be processed locally and in time to generate a pose-adjusted local environment map, which is then sent to the environment map control end. This avoids the mass data transmission that would result from sending raw field image data directly to the server and reduces the transmission pressure of data streams.
In a further preferred embodiment, the time at which the image information was obtained is recorded, and this time information is included in the local environment map generated for the intelligent street lamp body. The environment map control end can therefore obtain the local environment maps of each intelligent street lamp body at different times; the environment map it generates reflects the current geographic environment information and also shows how the environment changes over different time periods along the time axis, so that geographic environment information can be traced back in time.
Drawings
Fig. 1 is a schematic structural diagram of an intelligent street lamp body according to a first embodiment of the present application;
fig. 2 is a flowchart of a map information collection method based on an intelligent street lamp body according to a first embodiment of the present application;
FIG. 3 is a flowchart illustrating an implementation of step S102 in FIG. 2;
FIG. 4 is a flowchart illustrating an implementation of step S102-2 in FIG. 3;
fig. 5 is a flowchart of a method for constructing an environment map according to a second embodiment of the present application;
fig. 6 is a schematic logical structure diagram of an environment map construction system according to a third embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. However, the application can be implemented in many ways other than those described herein, and those skilled in the art can make similar modifications without departing from the spirit of the application; the application is therefore not limited to the specific implementations disclosed below.
The first embodiment of the application provides a map information acquisition method based on an intelligent street lamp. The map information acquisition method based on the intelligent street lamp provides a foundation for the construction of a subsequent environment map. Please refer to fig. 1, which is a schematic structural diagram of an intelligent street lamp body according to a first embodiment of the present application. Please refer to fig. 2, which is a flowchart illustrating a method for acquiring map information on an intelligent street lamp body according to a first embodiment of the present application.
A map information collecting method based on an intelligent street lamp according to a first embodiment of the present application is described in detail below with reference to fig. 1 and 2.
In fig. 1, the intelligent street lamp body includes: a street lamp cap 1, a street lamp post 2, an image acquisition end 3, an image processing end 4, a solar power generation film 5 and an electricity storage box 6.
The street lamp cap 1 and the image acquisition end 3 can rotate around the street lamp post 2 by a preset angle, and the street lamp post 2 is fixed to the ground where the intelligent street lamp body stands. The image acquisition end 3 acquires image information within a preset range centered on the intelligent street lamp body; the image processing end 4 processes the image information acquired by the image acquisition end 3 and generates a local environment map centered on the intelligent street lamp body. The solar power generation film 5 is attached to the street lamp post 2 and uses thin-film solar power generation technology to supply power to the components of the intelligent street lamp body; the electricity it generates is stored in the electricity storage box 6.
The above is a schematic structural diagram of the intelligent street lamp body described in fig. 1, and the map information collection method based on the intelligent street lamp is explained in detail through fig. 2.
And S101, acquiring image information within a preset range by taking the intelligent street lamp body as a center based on an image acquisition tool installed on the intelligent street lamp body.
This step obtains image information, that is, field environment information, within a preset range centered on the intelligent street lamp body. Taking each intelligent street lamp body as a center, the field environment information within a preset range of the street lamp body on which the image acquisition tool is installed is obtained. This provides the technical basis for constructing the environment map.
The image acquisition tool is used for acquiring image information within a preset range of the intelligent street lamp body on which it is installed. An image acquisition tool may combine several data acquisition devices; commonly used devices include a GPS (Global Positioning System) receiver, a lidar device and a panoramic camera. In this embodiment, a GPS receiver and a panoramic camera may be used. The panoramic camera shoots images over a 360-degree viewing angle, supports automatic shooting, automatic focusing and automatic photo synthesis, offers both automatic and manual shooting modes, and can easily capture 360-degree panoramic photos. Its working principle is to synthesize the shot pictures into a panorama; the camera system is equipped with panorama editing and design software that provides roaming editing and panoramic scene design functions.
The image information refers to the environment image information of a fixed target object and a dynamic target object which appears instantly in a preset range from the intelligent street lamp body, which is acquired by an image acquisition tool.
Obtaining image information within a preset range centered on the intelligent street lamp body, based on the image acquisition tool installed on the intelligent street lamp body, means shooting pictures of the environment around each intelligent street lamp over a 360-degree viewing angle with the image acquisition tool installed on that lamp. In a typical acquisition method, the panoramic camera automatically shoots pictures of the environment of the intelligent street lamp body at each viewing angle and then synthesizes the pictures from all viewing angles into a 360-degree panoramic picture. The obtained image information is then processed to provide the basic data for generating the local environment map of the intelligent street lamp body.
In step S101, because the image acquisition tool is mounted on the intelligent street lamp body, environment information can be collected at any time, so the resulting map reflects the latest environment information and the lag between the map and the actual geographic environment is effectively reduced. In addition, collecting the image information with an image acquisition tool mounted on the intelligent street lamp body reduces the outdoor workload of personnel and significantly shortens map generation time.
And S102, carrying out pose adjustment on the image information according to pose information obtained by the image acquisition tool when the image information is acquired, and generating a local environment map with the intelligent street lamp body as the center.
This step generates the local environment map of the intelligent street lamp from the pose information of the image information.
Pose information is a concept commonly used in image information processing. Adjusting the pose of the image information according to the pose information obtained when the image acquisition tool acquired it is the key link in generating the local environment map of the intelligent street lamp, and is therefore described in detail here.
The pose information is information reflecting a position relationship and an angle relationship between a camera and an intelligent street lamp body when an image acquisition tool (here, a panoramic camera) shoots current image information (also called a scene environment picture). Therefore, the current image information shot by each camera in the panoramic camera also reflects the position relationship and the angle relationship of the image information in the actual environment, and each scene environment picture has corresponding pose information.
Here, the pose information when the image capture tool acquires the image information is calculated by the image processing end when the image capture tool acquires the image information. In the embodiment of the application, the image processing end on the intelligent street lamp body comprises SLAM algorithm software, the intelligent street lamp body acquires image information in a preset range with the intelligent street lamp body as a center through an image acquisition tool, the image processing end processes the image information acquired by the image acquisition tool through the SLAM algorithm, and the image acquisition tool and the pose information of the image information can be obtained.
The image processing end adopts a SLAM (simultaneous localization and mapping) algorithm, which can locate the pose of the image acquisition tool in the environment within a preset range centered on the intelligent street lamp body and compute, in real time, the pose information at the moment each piece of image information is acquired.
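A minimal sketch of the pose data that such an algorithm could report for each piece of image information is shown below; the Pose fields and the track() interface are assumptions for illustration, not the actual SLAM implementation used by the image processing end.

from dataclasses import dataclass

@dataclass
class Pose:
    # Position (metres) and orientation (radians) of the camera relative to a
    # chosen origin; a simplified six-degree-of-freedom representation.
    x: float
    y: float
    z: float
    yaw: float
    pitch: float
    roll: float

def tag_images_with_pose(images, slam):
    """Pair each captured image with the pose the SLAM module reports for it;
    slam.track(image) is a hypothetical interface."""
    return [(img, slam.track(img)) for img in images]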
The position and pose information specifically comprises the position relation between the image acquisition tool and the intelligent street lamp body, and the position relation between the image acquisition tool and the environment where the image acquisition tool is located.
The position relation between the image acquisition tool and the intelligent street lamp body specifically means that the position relation and the angle relation between the camera and the center position of the intelligent street lamp body before the image acquisition tool acquires image information each time are determined by taking the contact position between the intelligent street lamp body and the ground as a first center, and the initial pose of the image acquisition tool is determined according to the relation.
The position relationship between the image acquisition tool and the environment is specifically that the initial pose of the image acquisition tool is taken as a second center, the position relationship and the angle relationship between the camera and the center position (namely the initial pose of the image acquisition tool) before the image acquisition tool acquires the image information each time are determined, and the pose information of the image information acquired by the image acquisition tool is determined according to the relationship.
The position and pose information of the image acquisition tool when acquiring the image information is the position and pose information of a camera of the image acquisition tool when acquiring the image information by taking the position of the image acquisition tool on the intelligent street lamp body as a center; accordingly, the pose information of the image information shot by the camera is the pose information of the image information in a coordinate system with the image acquisition tool as the center.
The local environment map of the intelligent street lamp body is established by taking the contact position of the intelligent street lamp body and the ground as the center according to the position and posture information of a fixed target object and an instantaneous dynamic target object existing in a preset range of the intelligent street lamp body. The pose information here refers to target pose information in a coordinate system centered on the intelligent street lamp body, which includes the image information of the target object.
Therefore, in order to construct the local environment map of the intelligent street lamp body, the embodiment of the application adjusts the pose of the image information acquired by the image acquisition tool to obtain the target pose of the image information in the coordinate system with the intelligent street lamp body as the center. Please refer to fig. 3, which is a flowchart illustrating a method for adjusting the pose information in step S102 of fig. 2.
Step S102-1: and determining the initial pose of the image acquisition tool in a first spherical coordinate system taking the intelligent street lamp body as the center.
The method is used for obtaining the initial pose of the image acquisition tool in the intelligent street lamp body and indicating the position relation between the image acquisition tool and the intelligent street lamp body.
The initial pose refers to pose information of the image acquisition tool in the first spherical coordinate system. The camera of the image capture tool (the panoramic camera employed in the embodiment of the present application) can adjust the shooting angle, shooting distance, and the like of the camera each time the image information is captured, that is, when the image capture tool captures each piece of image information, the image capture tool has a corresponding initial pose in the first spherical coordinate system.
There are various methods for obtaining the initial pose of the image acquisition tool on the intelligent street lamp body. The present application provides a simple one.
In the step, a first spherical coordinate system is established to represent the initial pose of the image acquisition tool in the intelligent street lamp body and to represent the position relation between the image acquisition tool and the intelligent street lamp body.
The first spherical coordinate system is used for constructing a local environment map with the intelligent street lamp body as the center. The first spherical coordinate system described in the present application includes the following factors:
a) taking the contact position of the intelligent street lamp body and the ground as an origin of coordinates;
b) taking the direction vertical to the ground as the Z axis of the first spherical coordinate system;
c) taking a side direction parallel to the road traveling direction of the ground as an X axis of the first spherical coordinate system;
d) and regarding a direction on one side perpendicular to a road traveling direction of the ground as a Y-axis of the first spherical coordinate system.
To construct a local environment map centered on the intelligent street lamp body, the target pose information, in the first spherical coordinate system, of the image information obtained in step S101 is acquired first; the local environment map of the intelligent street lamp body is then constructed from the target pose information of all the image information within the preset range of the lamp.
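As a small illustration, the first coordinate system and the camera's initial pose in it could be represented as follows; the mounting height and field names are assumptions introduced for the example.

from dataclasses import dataclass

@dataclass
class InitialPose:
    """Initial pose of the image acquisition tool in the first coordinate
    system: origin where the lamp post meets the ground, Z perpendicular to
    the ground, X along the road, Y across it (factors a) to d) above)."""
    x: float = 0.0          # the camera sits on the pole axis
    y: float = 0.0
    z: float = 6.0          # assumed mounting height in metres (illustrative)
    azimuth: float = 0.0    # current rotation of the lamp cap around the pole
    elevation: float = 0.0  # current tilt of the camera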
However, the pose information when the image capture tool acquires the image information in step S102 is the pose information of the image information centered on the image capture tool. Therefore, adjusting the pose information when the image acquisition tool acquires the image information to the target pose information of the image information in the first spherical coordinate system is a key link.
Step S102-2: and matching the pose information when the image acquisition tool acquires the image information with the initial pose of the image acquisition tool, and adjusting the pose information when the image acquisition tool acquires the image information into the target pose information of the image information acquired by the image acquisition tool in the first spherical coordinate system.
The method is used for acquiring the target pose information of the image information acquired by the image acquisition tool in the first coordinate system, and providing basic information for constructing the local environment map of the intelligent street lamp body.
In order to acquire the target pose information of the image information in the first spherical coordinate system, the step adopts a method of matching the pose information when the image acquisition tool acquires the image information with the initial pose of the image acquisition tool.
Here, based on the explanation of the pose information when the image acquisition tool acquires image information given under step S102, the way this pose information is obtained is described in detail. The present application provides a simple solution.
In the step, a second spherical coordinate system is established to represent pose information when the image acquisition tool acquires the image information.
The second spherical coordinate system comprises the following factors:
a) taking the initial pose of the image acquisition tool as a coordinate origin;
b) the direction perpendicular to the ground is taken as the Z1 axis of the second spherical coordinate system;
c) setting the direction parallel to the X-axis of the first spherical coordinate system as the X1-axis of the second spherical coordinate system;
d) the direction parallel to the Y-axis of the first spherical coordinate system is the Y1-axis of the second spherical coordinate system.
The pose information at the moment the image acquisition tool acquires the image information is obtained according to the second spherical coordinate system, and it includes the pose information of the image information in that coordinate system. Therefore, in the second spherical coordinate system, the pose information of the image information has a corresponding relationship with the initial pose of the image acquisition tool.
Please refer to fig. 4 for a specific implementation method for matching the pose information of the image capturing tool when the image capturing tool obtains the image information with the initial pose of the image capturing tool.
Step S102-21: and determining the distance offset and the angle offset between the pose information when the image acquisition tool acquires the image information and the initial pose of the image acquisition tool according to the second spherical coordinate system.
The step is used for determining the pose information of the image information acquired by the image acquisition tool in the second spherical coordinate system.
The distance offset and the angle offset between the pose information when the image acquisition tool acquires the image information and the initial pose of the image acquisition tool refer to the coordinate position of the image information in the second spherical coordinate system.
The distance offset refers to, for image information obtained by a camera of the image acquisition tool under preset adjustment parameters, the offset of that image information from the coordinate origin along the X1, Y1 and Z1 axes, which determines its coordinate position in the second spherical coordinate system. The angle offset refers to the angle between the camera of the image acquisition tool and the coordinate axes under those adjustment parameters.
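A short sketch of how a distance offset plus two angle offsets could be turned into X1/Y1/Z1 displacements in the second coordinate system is given below; the azimuth/elevation angle convention is an assumption made for the example.

import math

def offset_to_displacement(distance, azimuth, elevation):
    """Convert the distance offset and angle offsets of a piece of image
    information, measured from the second coordinate system's origin (the
    initial pose of the image acquisition tool), into displacements along
    the X1, Y1 and Z1 axes."""
    dx = distance * math.cos(elevation) * math.cos(azimuth)
    dy = distance * math.cos(elevation) * math.sin(azimuth)
    dz = distance * math.sin(elevation)
    return dx, dy, dz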
The pose information of the image acquisition tool in the second spherical coordinate system is thus represented by the distance offset and the angle offset, in preparation for obtaining, in step S102-22, the target pose information of the image information in the first spherical coordinate system from those offsets.
Step S102-22: and determining target pose information of the image information acquired by the image acquisition tool in the first spherical coordinate system according to the initial pose of the image acquisition tool, the distance offset and the angle offset of the image information.
The method comprises the step of generating target pose information of the image information in the first spherical coordinate system according to the distance offset and the angle offset of the image information in the second spherical coordinate system.
In a specific implementation of this step, the relationship between the image information's distance offset and angle offset in the second spherical coordinate system and that system's coordinate origin (the initial pose of the image acquisition tool) is used: taking the initial pose of the image acquisition tool as the reference coordinate, the image information is moved in the first spherical coordinate system by the distance offset and angle offset, yielding the target pose information of the image information in the first spherical coordinate system.
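Because the axes of the two coordinate systems are parallel by construction, this adjustment can be sketched as a simple translation; the sketch below builds on the offset_to_displacement() example above and is an illustrative simplification, not the claimed algorithm itself.

def to_first_frame(initial_pose_xyz, displacement):
    """Shift a displacement measured in the second coordinate system by the
    camera's initial pose to obtain the target position of the image
    information in the first (lamp-centred) coordinate system."""
    x0, y0, z0 = initial_pose_xyz
    dx, dy, dz = displacement
    return (x0 + dx, y0 + dy, z0 + dz)

# Example (all numbers illustrative):
# target = to_first_frame((0.0, 0.0, 6.0), offset_to_displacement(12.0, 0.4, -0.3))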
The above steps S102-21 to S102-22 describe in detail that the pose information obtained by the image capture tool when obtaining the image information is matched with the initial pose of the image capture tool, so as to obtain the target pose information of the image information in the first spherical coordinate system.
In step S102-2, matching the pose information obtained when the image acquisition tool acquires the image information with the initial pose of the image acquisition tool determines, on one hand, the positional relationship between the image acquisition tool and the intelligent street lamp body from the initial pose of the image acquisition tool in the first spherical coordinate system; on the other hand, it establishes the pose relationship between the initial pose of the image acquisition tool and the pose information of the image information in the second spherical coordinate system. According to these two correspondences, the pose information of the image information in the second spherical coordinate system is adjusted into the target pose information of the image information in the first spherical coordinate system, providing the basic information for constructing the local environment map of the intelligent street lamp body.
In step S102, an initial pose of the image capture tool is obtained by establishing the first spherical coordinate system, and a second spherical coordinate system is further established according to the initial pose, so as to obtain pose information of the image information captured by the image capture tool in the second spherical coordinate system. According to the connection relation of the image acquisition tool in the first spherical coordinate system and the second spherical coordinate system, the position and posture information of the image information in the second spherical coordinate system is adjusted into target position and posture information of the image information in the first spherical coordinate system, and a local environment map of the intelligent street lamp body is generated in the first spherical coordinate system.
Step S103: sending the local environment map to an environment map control end for constructing an environment map; the local environment map comprises identification information corresponding to the intelligent street lamp body for obtaining the local environment map.
The method comprises the steps of sending a local environment map generated by an intelligent street lamp body end to an environment map control end, and providing a data basis for the environment map control end to construct an environment map.
Taking each intelligent street lamp body as a unit, the local environment map of each intelligent street lamp body is sent to the environment map control end, which makes it convenient for the control end to manage them. The local environment map contains the identification information of the corresponding intelligent street lamp body, and the geographical position information of that intelligent street lamp body is stored in advance against this identification information.
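A pre-stored registry of this kind can be as simple as a lookup table; the identifiers and coordinates below are invented for illustration.

LAMP_GEO_REGISTRY = {
    "lamp-0001": (22.2769, 113.5678),   # (latitude, longitude), made-up values
    "lamp-0002": (22.2771, 113.5681),
}

def geo_position_of(lamp_id):
    """Return the pre-stored geographical position for a lamp's identification information."""
    return LAMP_GEO_REGISTRY[lamp_id]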
In the step S102, each intelligent street lamp can process the acquired field image data locally and timely to generate a position and posture processed local environment map; step S103 sends the local environment map of each intelligent street lamp to the environment map control end, so that the problem of mass data transmission caused by directly transmitting field image data to the server can be avoided, and the transmission pressure of data streams is reduced.
In addition, in step S101, the obtaining of the image information within the preset range with the intelligent street lamp body as the center further includes: recording time information for obtaining the image information; therefore, in step S102, in the step of generating the local environment map centered on the intelligent street lamp body, the local environment map includes the time information of obtaining the image information.
Because the image acquisition tool is installed on the intelligent street lamp body, it can acquire image information within the preset range of the lamp around the clock. Accordingly, the local environment map of the intelligent street lamp body can contain local environment maps for different times.
With the time information attached to the local environment map in step S102, the environment map control end in step S103 can obtain the local environment maps of each intelligent street lamp body at different times. The environment map it generates can therefore not only reflect the current geographic environment information, but also show how the environment map changes over different time periods along the time axis, so that geographic environment information can be traced back in time.
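One simple way for the control end to exploit that time information is to bucket the received local maps by capture time before splicing, as sketched below; the hour-sized bucket and the timestamp attribute (as in the LocalMap sketch earlier) are assumptions for the example.

from collections import defaultdict

def group_by_hour(local_maps):
    """Cluster local maps by the hour in which their image information was
    obtained, so that one complete map can be assembled per time slot and
    changes traced along the time axis."""
    buckets = defaultdict(list)
    for m in local_maps:
        buckets[int(m.timestamp // 3600)].append(m)
    return dict(buckets)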
With the method provided by the first embodiment, the intelligent street lamp body can acquire environment information at any time, so the resulting map reflects the latest environment information and the lag between the map and the actual geographic environment is effectively reduced. In addition, the intelligent street lamp body processes the collected field image data in time to generate a pose-adjusted local environment map and sends that local environment map, rather than the raw field image data, to the environment map control end, which reduces the transmission pressure of data streams. Moreover, the local environment map sent to the environment map control end contains time information, so the environment map generated by the control end reflects the current geographic environment information and the changes of the environment map over different time periods can be obtained along the time axis.
A second embodiment of the present application provides a method for constructing an environment map, which can support map-lookup functions for users in practical applications and serve as a basis for other map-related functions.
Please refer to fig. 5, which is a flowchart illustrating a second embodiment of the present application, and parts of this embodiment that are the same as the parts of the first embodiment will not be repeated, please refer to corresponding parts in the first embodiment. The method for constructing the environment map provided by the second embodiment is described below with reference to fig. 5.
Step S201, a local environment map from each intelligent street lamp body is obtained.
The step is used for obtaining the local environment map of the intelligent street lamp body.
The local environment map of each intelligent street lamp is used for reflecting the environment information in a preset range away from each intelligent street lamp by taking each intelligent street lamp body as a unit. The method for generating the local environment map of the intelligent street lamp has been described above, and is not described in detail herein.
The embodiment aims at obtaining the local environment maps from the intelligent street lamps, and aims to construct an environment map of the whole area according to the local environment maps of the intelligent street lamps, so that various possible practical application modes can be realized on the basis.
And S202, analyzing the local environment maps, and acquiring identification information of the intelligent street lamps corresponding to each local environment map.
The method comprises the steps of obtaining identification information of corresponding intelligent street lamps in a local environment map, and establishing identification corresponding relation between the intelligent street lamps and the local environment map according to the identification information.
The identification information is information which can represent characteristic factors of the intelligent street lamp, and the identification information can comprise various information or various forms.
After the local environment maps of the intelligent street lamps are obtained in step S201, the local environment maps cannot be directly used to construct the environment maps, and the obtained local environment maps need to be analyzed to obtain the identification information of the intelligent street lamps corresponding to the local environment maps, and the corresponding local environment maps are combined and spliced according to the identification information of each intelligent street lamp.
And step S203, acquiring the geographical position information of each intelligent street lamp according to the identification information of the intelligent street lamp, and taking the geographical position information as the geographical position information of the corresponding local environment map.
The step is used for acquiring the geographical position information of the intelligent street lamp according to the identification information of the intelligent street lamp.
The geographical position information of the intelligent street lamp indicates the position of the intelligent street lamp, and usually comprises longitude and latitude information of the intelligent street lamp.
In step S203, adjacent intelligent street lamps and their corresponding local environment maps are screened out according to the geographical location information of each intelligent street lamp body, a correspondence between the local environment maps and the geographical location information is established, and the corresponding local environment maps are connected.
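As a purely illustrative sketch, resolving the identification information of an intelligent street lamp to its registered geographical position could be done with a simple lookup table; the identifiers and coordinates below are hypothetical, and a real deployment would query a managed registry.

    # Hypothetical registry mapping lamp identification information to
    # (latitude, longitude) pairs.
    LAMP_REGISTRY = {
        "lamp_001": (22.2700, 113.5700),
        "lamp_002": (22.2702, 113.5712),
    }

    def geolocate(lamp_id):
        """Return the geographical position recorded for an intelligent street lamp."""
        return LAMP_REGISTRY[lamp_id]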
And step S204, obtaining the spatial relative relation among the local environment maps according to the geographical position information of the local environment maps, splicing the local environment maps according to the spatial relative relation, and obtaining a complete map of the whole area.
The step is used for establishing a complete map of the whole area based on the geographical position information of each local environment map.
The spatial relative relationship between the local environment maps is obtained by clustering the local environment map of each intelligent street lamp body with the local environment maps of its adjacent street lamps according to the geographical position information acquired in step S203, and establishing the position relationship at the joint between the local environment map of the intelligent street lamp body and the local environment map of the adjacent street lamp.
In addition, the local environment map of the adjacent street lamp may be a local environment map of a street lamp body adjacent to the intelligent street lamp body in any one of a longitude direction and a latitude direction in the geographical location information.
The splicing means stitching all local environment maps together based on image stitching technology; image stitching is a technology for combining a plurality of images with overlapping parts (which may be acquired at different times, from different viewing angles or by different sensors) into a seamless panoramic image or a high-resolution image.
The image stitching technology mainly comprises two key links, namely image registration and image fusion. Image registration is the basis of image fusion, and the computational load of image registration algorithms is generally very large, so the development of image stitching technology depends to a great extent on innovation in image registration technology. There are many image stitching methods, and their algorithm steps differ to some degree, but the overall process is the same. Generally, image stitching mainly comprises the following five steps (an illustrative sketch follows the list):
(1) image pre-processing
This includes basic digital image processing operations (such as denoising, edge extraction and histogram processing), establishing a matching template for the image, and applying certain transforms to the image (such as the Fourier transform or wavelet transform).
(2) Image registration
A matching strategy is adopted to find, in the reference image, the positions corresponding to the templates or feature points of the images to be stitched, and thereby determine the transformation relation between the two images.
(3) Establishing transformation model
The parameter values of the mathematical model are calculated according to the correspondence between the templates or image features, so as to establish the mathematical transformation model between the two images.
(4) Unified coordinate transformation
The images to be stitched are converted into the coordinate system of the reference image according to the established mathematical transformation model, completing the unified coordinate transformation.
(5) Fusion reconstruction
The overlapping areas of the images to be stitched are fused to obtain a smooth, seamless, stitched and reconstructed panoramic image.
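For illustration only, the five steps above are approximated below with the high-level stitcher of the OpenCV library; using OpenCV, and the file names shown, are assumptions of this sketch and are not specified by this application.

    import cv2

    def stitch_local_maps(image_paths):
        """Stitch overlapping local environment map images into one panorama.
        OpenCV's Stitcher internally performs registration, transform estimation,
        unified coordinate transformation and fusion reconstruction."""
        images = [cv2.imread(p) for p in image_paths]
        stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
        status, panorama = stitcher.stitch(images)
        if status != cv2.Stitcher_OK:
            raise RuntimeError("stitching failed with status %d" % status)
        return panorama

    # Example (hypothetical files): two adjacent lamps sharing an overlapping area.
    # panorama = stitch_local_maps(["lamp_001.jpg", "lamp_002.jpg"])
    # cv2.imwrite("full_area_map.jpg", panorama)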
Splicing the local environment maps means stitching the local environment map of each intelligent street lamp body with the local environment maps of its adjacent street lamps according to the spatial relative relationship between them, specifically the position relationship at the joint between the local environment map of the intelligent street lamp body and the local environment map of the adjacent street lamp. By analogy, the local environment maps of all intelligent street lamp bodies are stitched in this way to form a complete map of the whole area.
In one implementation of step S204, the local environment maps with similar geographical position information are ordered according to the correspondence between the local environment maps of the intelligent street lamps and the geographical position information, the spatial relative relationship between the local environment maps is obtained, and the local environment maps are stitched according to the spatial relative relationship to obtain the complete map of the whole area.
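A minimal sketch of this ordering step is given below, assuming the lamps stand along a single road so that a one-key sort by longitude yields adjacency; the LocalMap record and its fields are hypothetical, and a general deployment would need true nearest-neighbour matching.

    from dataclasses import dataclass

    @dataclass
    class LocalMap:
        lamp_id: str       # identification information of the intelligent street lamp
        latitude: float
        longitude: float
        image_path: str    # pose-adjusted local environment map image

    def order_along_road(local_maps):
        """Sort local maps so that geographically adjacent lamps become neighbours."""
        return sorted(local_maps, key=lambda m: (m.longitude, m.latitude))

    def adjacent_pairs(ordered_maps):
        """Yield pairs of adjacent local maps whose joints are to be registered."""
        for left, right in zip(ordered_maps, ordered_maps[1:]):
            yield left, right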
Further, step S202 includes: analyzing the local environment map of each intelligent street lamp body, and acquiring time information corresponding to each local environment map of each intelligent street lamp body; and obtaining the time relative relation between the local environment maps of each intelligent street lamp body according to the time information corresponding to the local environment maps of each intelligent street lamp body.
This step is used to obtain the time information corresponding to each local environment map of each intelligent street lamp body, and to arrange the local environment maps of each intelligent street lamp body on a time axis in chronological order.
It should be noted that obtaining the time information corresponding to each local environment map of each intelligent street lamp body means, specifically, obtaining the local environment maps corresponding to each intelligent street lamp body under different time conditions. For example, in order to record the change of plants in a scenic spot during flowering, local environment maps of the intelligent street lamp body beside the plants can be obtained in different time periods, and the local environment maps of those time periods are stitched according to their relative time relationship.
Correspondingly, obtaining a complete map of the whole area in step S204 according to the geographical location information of each local environment map includes: clustering the local environment maps of each intelligent street lamp body in chronological order according to the relative time relationship between them; and stitching the local environment maps of the intelligent street lamps under the same time condition according to their spatial relative relationship, so as to obtain a complete map of the whole area under that time condition.
In this step, on the basis of step S204, in which the complete map of the whole area is established according to the spatial relative relationship of the local environment maps of each intelligent street lamp body, the complete map of the whole area at a given time is further established in the time dimension from the local environment maps of each intelligent street lamp at that time; complete maps of the whole area at different times can therefore be obtained.
For example, in order to record the scenery of red leaves on a certain road section of a scenic spot, the local environment maps of each intelligent street lamp in different time periods are clustered according to the relative time relationship between the local environment maps of each intelligent street lamp body on the road section. If a user needs to acquire the red-leaf scenery of the whole road section in the current time period, firstly, the local environment maps of each intelligent street lamp in the current time period are screened, then, the spatial relative relationship among the local environment maps is acquired according to the geographical position information of each local environment map, and the local environment maps in the current time period are spliced according to the spatial relative relationship, so that the complete environment map of the whole road section in the current time period is acquired.
Further, if the user needs to acquire the red leaf scenery of the whole road section within one day, on the basis of acquiring the complete environment map of the whole road section in the current time period, the local environment maps in other time periods are sequentially spliced to respectively acquire the complete environment maps of the whole road section in other time periods, and finally the complete environment map of the red leaf scenery change of the whole road section in each time period within one day is acquired.
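To illustrate the time-dimension clustering described above, the following sketch groups (capture time, local map) pairs into time buckets before each bucket is stitched spatially; the one-hour bucket size and the data shown are assumptions made for the example.

    from collections import defaultdict
    from datetime import datetime

    def cluster_by_period(stamped_maps, period_seconds=3600):
        """Group (capture_time, local_map) pairs into time buckets; each bucket is
        then ordered spatially and stitched into one complete map for that period."""
        buckets = defaultdict(list)
        for captured_at, local_map in stamped_maps:
            bucket = int(captured_at.timestamp()) // period_seconds
            buckets[bucket].append(local_map)
        return buckets

    # Example usage with hypothetical data:
    # buckets = cluster_by_period([(datetime(2020, 7, 29, 8, 5), map_a),
    #                              (datetime(2020, 7, 29, 8, 40), map_b)])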
In addition, optionally, the local environment map obtained from each intelligent street lamp body is obtained by: acquiring, based on an image acquisition tool installed on the intelligent street lamp body, image information within a preset range with the intelligent street lamp body as a center; and adjusting the pose of the image information according to the pose information obtained by the image acquisition tool when the image information is acquired, so as to generate a local environment map with the intelligent street lamp body as the center.
Optionally, the pose information includes a position relationship between the image capture tool and the intelligent street lamp body, and a position relationship between the image capture tool and the environment where the image capture tool is located.
Optionally, the pose adjustment of the image information includes: determining an initial pose of the image acquisition tool in a first spherical coordinate system with the intelligent street lamp body as a center; and matching the pose information when the image acquisition tool acquires the image information with the initial pose of the image acquisition tool, and adjusting the pose information when the image acquisition tool acquires the image information into the target pose information of the image information acquired by the image acquisition tool in the first spherical coordinate system.
Optionally, the first spherical coordinate system includes the following factors: taking the contact position of the intelligent street lamp body and the ground as an origin of coordinates; taking the direction vertical to the ground as the Z axis of the first spherical coordinate system; taking a direction parallel to the road advancing direction of the ground as an X axis of a first spherical coordinate system; taking a direction on one side perpendicular to the road advancing direction of the ground as a Y axis of a first spherical coordinate system; and determining the pose information of the image acquisition tool according to the first spherical coordinate system, and taking the pose information as the initial pose of the image acquisition tool in the first spherical coordinate system with the intelligent street lamp body as the center.
Optionally, the pose information obtained by the image capture tool when obtaining the image information is pose information of the image information obtained by the image capture tool in a second spherical coordinate system with the image capture tool as a center.
Optionally, the second spherical coordinate system includes the following factors: taking the initial pose of the image acquisition tool as a coordinate origin; taking the direction vertical to the ground as a Z1 axis of a second spherical coordinate system; setting the direction parallel to the X axis of the first spherical coordinate system as the X1 axis of the second spherical coordinate system; setting the direction parallel to the Y axis of the first spherical coordinate system as the Y1 axis of the second spherical coordinate system; and determining pose information when the image acquisition tool acquires the image information according to the second spherical coordinate system.
Optionally, the matching the pose information obtained when the image information is acquired by the image capture tool with the initial pose of the image capture tool includes: determining distance offset and angle offset between pose information when the image acquisition tool acquires image information and an initial pose of the image acquisition tool according to the second spherical coordinate system; and determining target pose information of the image information acquired by the image acquisition tool in the first spherical coordinate system according to the initial pose of the image acquisition tool, the distance offset and the angle offset of the image information.
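The matching between the two coordinate systems can be illustrated with a simplified sketch that treats the offsets as a three-dimensional translation plus a rotation about the vertical axis; this parameterisation is an assumption of the example, since the application does not fix a concrete formula.

    import math

    def target_pose_in_lamp_frame(initial_pose, distance_offset, angle_offset_deg):
        """Map a pose measured in the second (capture-tool-centred) coordinate
        system into the first (street-lamp-centred) coordinate system.

        initial_pose     -- (x, y, z) of the image capture tool in the first system
        distance_offset  -- (dx, dy, dz) measured in the second system
        angle_offset_deg -- rotation of the capture tool about the Z1 axis
        """
        x0, y0, z0 = initial_pose
        dx, dy, dz = distance_offset
        theta = math.radians(angle_offset_deg)
        # Rotate the offset by the angle offset, then translate by the initial pose.
        gx = x0 + dx * math.cos(theta) - dy * math.sin(theta)
        gy = y0 + dx * math.sin(theta) + dy * math.cos(theta)
        gz = z0 + dz
        return (gx, gy, gz, angle_offset_deg)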
The second embodiment provides a method for constructing an environment map: the received local environment map of each intelligent street lamp body is analyzed, and the local environment maps are spliced, taking each intelligent street lamp body as a unit, according to the geographical location information of the intelligent street lamp body corresponding to each local environment map. This avoids processing excessive data streams and helps to quickly construct a complete map of the whole area. In addition, the different time relationships among the multiple local environment maps of each intelligent street lamp body can be analyzed, so that complete maps of the whole area at different times can be established, which can be used for map retrieval in practical applications or for observing changes of the environment at different times.
A third embodiment of the present application provides an environment map construction system, which is used to implement the methods provided in the first and second embodiments.
Referring to fig. 6, a schematic diagram of a logical structure of the environment map building system is provided.
As shown in fig. 6, the environment mapping system 100 includes: an environment map information acquisition terminal 101 and an environment map control terminal 102.
The environment map information acquisition terminal 101 is configured to acquire image information within a preset range with the intelligent street lamp body as a center, generate a local environment map with the intelligent street lamp body as the center according to pose information of the image information, and send the local environment map of the intelligent street lamp body to the environment map control terminal 102.
The above describes the environment map information acquisition terminal 101 from a functional perspective; its main device body is an intelligent street lamp body. The function of the intelligent street lamp body as the environment map information acquisition terminal can be clearly understood in combination with fig. 6 and the method provided in the first embodiment, and is not repeated here.
The environment map information acquisition terminal 101 comprises an image acquisition terminal 101-1 and an image processing terminal 101-2.
The image acquisition terminal 101-1 is used for acquiring image information within a preset range with the intelligent street lamp body as a center based on an image acquisition tool installed on the intelligent street lamp body, and sending the image information to the image processing terminal 101-2.
The image processing terminal 101-2 is used for adjusting the pose of the image information according to the pose information obtained by the image acquisition tool when the image information is acquired, and generating a local environment map with the intelligent street lamp body as the center; sending the local environment map to an environment map control end for constructing an environment map; the local environment map comprises identification information corresponding to the intelligent street lamp for obtaining the local environment map.
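Purely as an illustration of the interface between the image processing terminal 101-2 and the environment map control terminal 102, the sketch below sends the pose-adjusted local environment map together with the identification and time information as a JSON payload; the field names, the use of JSON over HTTP and the endpoint are all assumptions of the example, not part of this application.

    import json
    import urllib.request

    def send_local_map(control_url, lamp_id, captured_at_iso, map_image_bytes):
        """Upload one local environment map record to the environment map control end."""
        payload = {
            "lamp_id": lamp_id,                 # identification information
            "captured_at": captured_at_iso,     # time information
            "local_map": map_image_bytes.hex()  # pose-adjusted local environment map
        }
        request = urllib.request.Request(
            control_url,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(request) as response:
            return response.status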
The environment map control terminal 102 is configured to obtain local environment maps from the intelligent street lamps, analyze the local environment maps, obtain geographical location information of the intelligent street lamps, splice the local environment maps according to the geographical location information of the local environment maps, and obtain a complete map of the whole area.
The above describes the environment map control terminal 102 from a functional perspective; its function can be clearly understood in combination with fig. 6 and the methods provided in the first and second embodiments, and is not repeated here.
Optionally, when obtaining the local environment maps from the intelligent street lamp bodies, the environment map control terminal is further configured to analyze the local environment map of each intelligent street lamp body, and obtain time information corresponding to each local environment map of each intelligent street lamp body; and obtaining the time relative relation between the local environment maps of each intelligent street lamp body according to the time information corresponding to the local environment maps of each intelligent street lamp body.
Optionally, the environment map control end is specifically configured to cluster the local environment maps of each intelligent street lamp body according to a time sequence according to a relative time relationship between the local environment maps of each intelligent street lamp body when obtaining a complete map of the whole area according to the geographical location information of each local environment map; according to the local environment maps of the intelligent street lamps under the same time condition and the relative spatial relationship, the local environment maps are spliced to obtain a complete map of the whole area under the same time condition.
Optionally, the pose information includes a position relationship between the image capture tool and the intelligent street lamp body, and a position relationship between the image capture tool and the environment where the image capture tool is located.
Optionally, when the image processing end adjusts the pose of the image information, the image processing end is specifically configured to determine an initial pose of the image capture tool in a first spherical coordinate system with the intelligent street lamp body as a center; and matching the pose information when the image acquisition tool acquires the image information with the initial pose of the image acquisition tool, and adjusting the pose information when the image acquisition tool acquires the image information into the target pose information of the image information acquired by the image acquisition tool in the first spherical coordinate system.
Optionally, the first spherical coordinate system includes the following factors: taking the contact position of the intelligent street lamp body and the ground as an origin of coordinates; taking the direction vertical to the ground as the Z axis of the first spherical coordinate system; taking a direction parallel to the road advancing direction of the ground as an X axis of a first spherical coordinate system; taking a direction on one side perpendicular to the road advancing direction of the ground as a Y axis of a first spherical coordinate system; the image processing terminal is specifically configured to determine pose information of the image capture tool according to the first spherical coordinate system, and use the pose information as an initial pose of the image capture tool in the first spherical coordinate system with the intelligent street lamp body as a center.
Optionally, the pose information obtained by the image capture tool when obtaining the image information is pose information of the image information obtained by the image capture tool in a second spherical coordinate system with the image capture tool as a center.
Optionally, the second spherical coordinate system includes the following factors: taking the initial pose of the image acquisition tool as a coordinate origin; taking the direction vertical to the ground as a Z1 axis of a second spherical coordinate system; setting the direction parallel to the X axis of the first spherical coordinate system as the X1 axis of the second spherical coordinate system; setting the direction parallel to the Y axis of the first spherical coordinate system as the Y1 axis of the second spherical coordinate system; the image processing terminal is specifically configured to determine pose information when the image acquisition tool acquires image information according to the second spherical coordinate system.
Optionally, when the image processing end matches the pose information obtained when the image acquisition tool acquires the image information with the initial pose of the image acquisition tool, the image processing end is specifically configured to determine, according to the second spherical coordinate system, a distance offset and an angle offset between the pose information obtained when the image acquisition tool acquires the image information and the initial pose of the image acquisition tool; and determining target pose information of the image information acquired by the image acquisition tool in the first spherical coordinate system according to the initial pose of the image acquisition tool, the distance offset and the angle offset of the image information.
Through this system, the map information acquisition method based on intelligent street lamps provided by the first embodiment and the construction method of the environment map provided by the second embodiment can be better realized. In the first aspect, the image acquisition tool of the intelligent street lamp can acquire environmental information at any time, so that the resulting map reflects the latest environmental information and the situation in which map information lags behind the actual geographic environment is effectively reduced. In the second aspect, each intelligent street lamp can process the acquired field image data locally and in time to generate a pose-adjusted local environment map, and then send that local environment map, rather than the raw field image data, to the environment map control end, which avoids the mass data transmission caused by sending field image data directly to a server and reduces the transmission pressure of data streams. In the third aspect, the environment map control end can obtain the local environment maps of each intelligent street lamp body at different times, so that the environment map it generates can reflect the current geographic environment and the changes of the environment map over different time periods can be obtained along the time axis, allowing the geographic environment information to be traced back in time.
Although the present application has been described with reference to the preferred embodiments, they are not intended to limit the present application. Those skilled in the art can make variations and modifications without departing from the spirit and scope of the present application; therefore, the scope of protection of the present application should be determined by the claims that follow.

Claims (10)

1. A map information acquisition method based on an intelligent street lamp is characterized by comprising the following steps:
the method comprises the steps that based on an image acquisition tool installed on an intelligent street lamp body, image information in a preset range with the intelligent street lamp body as a center is obtained;
according to the pose information obtained by the image acquisition tool when the image information is acquired, performing pose adjustment on the image information to generate a local environment map with the intelligent street lamp body as the center;
sending the local environment map to an environment map control end for constructing an environment map; the local environment map comprises identification information corresponding to the intelligent street lamp body for obtaining the local environment map.
2. The map information collection method based on the intelligent street lamp according to claim 1, wherein the pose information comprises a position relationship between the image collection tool and the intelligent street lamp body and a position relationship between the image collection tool and an environment where the image collection tool is located.
3. The map information collection method based on the intelligent street lamp according to claim 1, wherein the obtaining of the image information within a preset range centered on the intelligent street lamp body further comprises: recording time information for obtaining the image information;
in the step of generating the local environment map with the intelligent street lamp body as the center, the local environment map includes the time information of obtaining the image information.
4. The intelligent street lamp-based map information acquisition method according to claim 1, wherein the pose adjustment of the image information comprises:
determining an initial pose of the image acquisition tool in a first spherical coordinate system with the intelligent street lamp body as a center;
and matching the pose information when the image acquisition tool acquires the image information with the initial pose of the image acquisition tool, and adjusting the pose information when the image acquisition tool acquires the image information into the target pose information of the image information acquired by the image acquisition tool in the first spherical coordinate system.
5. The intelligent street lamp-based map information collection method according to claim 4, wherein the first spherical coordinate system comprises the following factors: taking the contact position of the intelligent street lamp body and the ground as an origin of coordinates; taking the direction vertical to the ground as the Z axis of the first spherical coordinate system; taking a direction parallel to the road advancing direction of the ground as an X axis of a first spherical coordinate system; taking a direction on one side perpendicular to the road advancing direction of the ground as a Y axis of a first spherical coordinate system;
and determining the pose information of the image acquisition tool according to the first spherical coordinate system, and taking the pose information as the initial pose of the image acquisition tool in the first spherical coordinate system with the intelligent street lamp body as the center.
6. The map information collection method based on intelligent street lamps according to claim 5, wherein the pose information obtained by the image collection tool when the image collection tool acquires the image information is the pose information of the image information acquired by the image collection tool in a second spherical coordinate system with the image collection tool as the center.
7. The intelligent street lamp-based map information collection method according to claim 6, wherein the second spherical coordinate system comprises the following factors: taking the initial pose of the image acquisition tool as a coordinate origin; taking the direction vertical to the ground as a Z1 axis of a second spherical coordinate system; setting the direction parallel to the X axis of the first spherical coordinate system as the X1 axis of the second spherical coordinate system; setting the direction parallel to the Y axis of the first spherical coordinate system as the Y1 axis of the second spherical coordinate system;
and determining pose information when the image acquisition tool acquires the image information according to the second spherical coordinate system.
8. A method for constructing an environment map is characterized by comprising the following steps:
obtaining a local environment map from each intelligent street lamp body;
analyzing the local environment maps to obtain identification information of the intelligent street lamps corresponding to each local environment map;
acquiring geographical position information of each intelligent street lamp according to the identification information of the intelligent street lamp, and taking the geographical position information as the geographical position information of the corresponding local environment map;
and obtaining the spatial relative relationship among the local environment maps according to the geographical position information of the local environment maps, and splicing the local environment maps according to the spatial relative relationship to obtain a complete map of the whole area.
9. The method for constructing the environment map according to claim 8, wherein the obtaining of the local environment map from each intelligent street lamp body further comprises: analyzing the local environment map of each intelligent street lamp body, and acquiring time information corresponding to each local environment map of each intelligent street lamp body;
and obtaining the time relative relation between the local environment maps of each intelligent street lamp body according to the time information corresponding to the local environment maps of each intelligent street lamp body.
10. An environment map construction system, comprising: the system comprises an environment map information acquisition end and an environment map control end;
the environment map information acquisition end is used for acquiring image information within a preset range with the intelligent street lamp body as the center, generating a local environment map with the intelligent street lamp body as the center according to the position and posture information of the image information, and sending the local environment map of the intelligent street lamp body to the environment map control end;
the environment map control terminal is used for obtaining local environment maps from the intelligent street lamps, analyzing the local environment maps, obtaining geographic position information of the intelligent street lamps, splicing the local environment maps according to the geographic position information of the local environment maps, and obtaining a complete map of the whole area.
CN202010744309.7A 2020-07-29 2020-07-29 Map information acquisition method based on intelligent street lamp and construction method of environment map Pending CN114088059A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010744309.7A CN114088059A (en) 2020-07-29 2020-07-29 Map information acquisition method based on intelligent street lamp and construction method of environment map

Publications (1)

Publication Number Publication Date
CN114088059A true CN114088059A (en) 2022-02-25

Family

ID=80294920

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010744309.7A Pending CN114088059A (en) 2020-07-29 2020-07-29 Map information acquisition method based on intelligent street lamp and construction method of environment map

Country Status (1)

Country Link
CN (1) CN114088059A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080304707A1 (en) * 2007-06-06 2008-12-11 Oi Kenichiro Information Processing Apparatus, Information Processing Method, and Computer Program
US20140300775A1 (en) * 2013-04-05 2014-10-09 Nokia Corporation Method and apparatus for determining camera location information and/or camera pose information according to a global coordinate system
CN105973264A (en) * 2016-07-21 2016-09-28 触景无限科技(北京)有限公司 Intelligent blind guiding system
CN106162144A (en) * 2016-07-21 2016-11-23 触景无限科技(北京)有限公司 A kind of visual pattern processing equipment, system and intelligent machine for overnight sight
CN106197429A (en) * 2016-07-21 2016-12-07 触景无限科技(北京)有限公司 A kind of Multi-information acquisition location equipment and system
CN107229690A (en) * 2017-05-19 2017-10-03 广州中国科学院软件应用技术研究所 Dynamic High-accuracy map datum processing system and method based on trackside sensor
CN107728637A (en) * 2017-12-02 2018-02-23 广东容祺智能科技有限公司 A kind of UAS of intelligent adjustment camera angle
CN108802785A (en) * 2018-08-24 2018-11-13 清华大学 Vehicle method for self-locating based on High-precision Vector map and monocular vision sensor
CN115695720A (en) * 2022-09-13 2023-02-03 触景无限科技(北京)有限公司 Intelligent monitoring method and device for smart city road and electronic equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination