WO2023045271A1 - Two-dimensional map generation method and apparatus, terminal device and storage medium - Google Patents

Two-dimensional map generation method and apparatus, terminal device and storage medium

Info

Publication number
WO2023045271A1
WO2023045271A1 (PCT/CN2022/080520)
Authority
WO
WIPO (PCT)
Prior art keywords
image
dimensional
point cloud
information
depth image
Prior art date
Application number
PCT/CN2022/080520
Other languages
English (en)
Chinese (zh)
Inventor
陈紫荣
王琳
Original Assignee
奥比中光科技集团股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 奥比中光科技集团股份有限公司
Publication of WO2023045271A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/95 Retrieval from the web
    • G06F 16/953 Querying, e.g. by the use of web search engines
    • G06F 16/9537 Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/95 Retrieval from the web
    • G06F 16/953 Querying, e.g. by the use of web search engines
    • G06F 16/9538 Presentation of query results
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds

Definitions

  • the present invention relates to the field of positioning technology, in particular to a two-dimensional map generation method, device, terminal equipment and storage medium.
  • location positioning services on existing mobile devices are based on location fusion technology, which combines GPS signals, cellular network base station signals, and the like, and can provide reasonably accurate positioning in both indoor and outdoor scenarios.
  • the positioning refers to a physical location in the world, which may be specific to a country, city, district, or street.
  • current location services are basically provided by telecom operators, or by proprietary companies that cooperate with them. To embed a location positioning service into another system (such as an app with related positioning functions), a developer must, in addition to paying the above service provider, use the dedicated interface it provides for development; the development process is cumbersome, and the overall development progress can stagnate due to over-reliance on the existing location services, thereby affecting users' use of the positioning function.
  • in view of the above defects of the prior art, the technical problem to be solved by the present invention is to provide a two-dimensional map generation method, device, terminal equipment and storage medium, aiming to solve problems in the prior art such as the cumbersome process and slow progress of developing positioning functions.
  • the present invention provides a method for generating a two-dimensional map, wherein the method includes:
  • the acquiring the depth image and the color image of the target scene, and determining the mapping relationship between the depth image and the color image include:
  • a mapping relationship between the depth image and the color image is determined according to the first pixel information and the second pixel information.
  • the acquiring the depth image and the color image of the target scene, and determining the mapping relationship between the depth image and the color image further includes:
  • the pose information of the color camera corresponding to the color image is associated with the pose information of the depth camera corresponding to the depth image.
  • the acquiring a two-dimensional point cloud image corresponding to the depth image according to the depth image, and determining a target map frame corresponding to the two-dimensional point cloud image includes:
  • the target map frame is determined according to the two-dimensional point cloud image.
  • the determining the target map frame according to the two-dimensional point cloud image includes:
  • determining, according to the two-dimensional point cloud image, the coordinate information of each trajectory point in the two-dimensional point cloud image;
  • the determining the target map frame according to the two-dimensional point cloud image further includes:
  • the obtaining the location label information in each area in the target map frame includes:
  • the boundary information of each area, the position information of track points in each area, and the projection relationship between track points in each area and three-dimensional point cloud data are used as the position tag information.
  • the embodiment of the present invention also provides a two-dimensional map generation device for positioning, the device includes:
  • An image acquisition module configured to acquire depth images and color images of the target scene
  • mapping relationship determination module configured to determine the mapping relationship between the depth image and the color image
  • a target map frame determination module configured to acquire a two-dimensional point cloud image corresponding to the depth image according to the depth image, and determine a target map frame corresponding to the two-dimensional point cloud image;
  • a two-dimensional map generation module configured to acquire location label information in each region in the target map frame, and to bind the location label information with the pose information of the color camera corresponding to the color image in each region according to the location label information and the mapping relationship, to obtain a two-dimensional map.
  • an embodiment of the present invention further provides a terminal device, wherein the terminal device includes a memory, a processor, and a two-dimensional map generation program stored in the memory and operable on the processor; when the processor executes the two-dimensional map generation program, the steps of the two-dimensional map generation method described in any one of the above solutions are implemented.
  • an embodiment of the present invention further provides a computer-readable storage medium, where a two-dimensional map generation program is stored on the computer-readable storage medium; when the two-dimensional map generation program is executed by a processor, the steps of the two-dimensional map generation method in any one of the above solutions are implemented.
  • the present invention provides a two-dimensional map generation method.
  • the present invention first obtains the depth image and the color image of the target scene. Since the depth image and the color image are obtained based on the same target scene, the mapping relationship between the depth image and the color image can be determined. Since an image can be converted into point cloud data through its coordinates, the present invention can obtain the two-dimensional point cloud image corresponding to the depth image and determine the target map frame corresponding to the two-dimensional point cloud image; the target map frame includes all point cloud data of the target scene. Next, the present invention acquires the position label information in each area in the target map frame, and the position label information is used to reflect the position information of the track points in each area in the target map.
  • the two-dimensional point cloud image is obtained by processing the depth image, and there is a mapping relationship between the depth image and the color image; therefore, based on the mapping relationship, the present invention can bind the position label information with the pose information of the color camera corresponding to the color image in each area, to obtain a two-dimensional map.
  • the generated two-dimensional map can reflect the position information of the track points in each area, so that when a user obtains a color image of the same target scene, the pose information of the corresponding color camera can be obtained, and the corresponding location label information can further be obtained from the two-dimensional map, so as to achieve precise positioning. It can be seen that the present invention can quickly generate a two-dimensional map with a simple process, and the generated two-dimensional map can be reused, providing users with more convenient positioning services.
  • FIG. 1 is a flow chart of a specific implementation of a method for generating a two-dimensional map provided by an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of camera trajectories in a target scene of a floor building in a two-dimensional map generation method provided by an embodiment of the present invention.
  • Fig. 3 is a schematic side view of a camera track in a target scene of a floor building in a two-dimensional map generation method provided by an embodiment of the present invention.
  • FIG. 4 is a two-dimensional point cloud image of a target scene of a floor building in the two-dimensional map generation method provided by the embodiment of the present invention.
  • FIG. 5 is a schematic diagram of a target map frame determined from the two-dimensional point cloud image in FIG. 4 in the method for generating a two-dimensional map provided by an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of performing a gridding operation on the target map frame in FIG. 5 in the method for generating a two-dimensional map provided by an embodiment of the present invention.
  • Fig. 7 is a schematic diagram of camera trajectories of a floor in a floor building in the method for generating a two-dimensional map provided by an embodiment of the present invention.
  • Fig. 8 is a functional block diagram of a two-dimensional map generation device provided by an embodiment of the present invention.
  • FIG. 9 is a functional block diagram of an internal structure of a terminal device provided by an embodiment of the present invention.
  • This embodiment provides a method for generating a two-dimensional map.
  • a two-dimensional map can be quickly generated; the process is simple, and the generated two-dimensional map can be reused, providing users with more convenient positioning services.
  • this embodiment first acquires a depth image and a color image of a target scene. Since the depth image and the color image are acquired based on the same target scene, the mapping relationship between the depth image and the color image can be determined. Since an image can be converted into point cloud data through its coordinates, this embodiment can obtain the two-dimensional point cloud image corresponding to the depth image and determine the target map frame corresponding to the two-dimensional point cloud image; the target map frame includes all point cloud data of the target scene. Next, in this embodiment, the position label information in each area in the target map frame is acquired, and the position label information is used to reflect the position information of the track points in each area in the target map.
  • the two-dimensional point cloud image is obtained by processing the depth image, and there is a mapping relationship between the depth image and the color image, so this embodiment can bind the position label information with the pose information of the color camera corresponding to the color image in each area based on the mapping relationship, to obtain a two-dimensional map.
  • the generated two-dimensional map can reflect the position information of the track points in each area, so that when a user obtains a color image of the same target scene, the pose information of the corresponding color camera can be obtained, and the corresponding location label information can further be obtained from the two-dimensional map, so as to achieve precise positioning.
  • the depth image and the color image of the 3-story building may be acquired, and then the mapping relationship between the depth image and the color image is determined. Then, according to the depth image of the 3-story building, the corresponding two-dimensional point cloud image is obtained, and a target map frame is determined, and the target map frame includes all point cloud data of the 3-story building. Then the location label information in each area in the target map frame is obtained, and the location label information can reflect the location information of the track points on each floor of the 3-story building. Therefore, the location tag information can be bound with the pose information of the color camera corresponding to the color image in each area to obtain a two-dimensional map of the three-story building.
  • after the user obtains a color image of the 3-story building, the pose information of the corresponding color camera can be obtained, and the corresponding location label information can further be obtained according to the two-dimensional map of the 3-story building, that is, which floor and which location, so as to achieve precise positioning.
  • the method for generating a two-dimensional map in this embodiment can be applied to a terminal device, and the terminal device can be an intelligent product such as a computer, a mobile phone, or a tablet.
  • the two-dimensional map generation method in this embodiment includes the following steps:
  • Step S100 acquiring a depth image and a color image of a target scene, and determining a mapping relationship between the depth image and the color image.
  • the depth image in this embodiment is also called a range image, which refers to an image whose pixel values are the distances (depths) from the depth camera to the points in the target scene; it directly reflects the geometry of the visible surfaces of the target scene.
  • the depth image can be converted into point cloud data through coordinate conversion, and regular point cloud data carrying the necessary information can also be back-calculated into a depth image.
  • the color image in this embodiment is an image captured by a color camera. In this embodiment, since the depth image and the color image are obtained based on the same target scene, the mapping relationship between the depth image and the color image can be determined after both images are obtained.
  • this embodiment includes the following steps when determining the mapping relationship between the depth image and the color image:
  • Step S101 acquiring a frame of color image each time a frame of depth image of the target scene is acquired;
  • Step S102 acquiring the first pixel information of the color image and the second pixel information of the depth image respectively;
  • Step S103 according to the first pixel information and the second pixel information, determine the mapping relationship between the depth image and the color image.
  • this embodiment acquires multiple frames of depth images and color images of the target scene, and the depth images and color images are acquired synchronously. That is, each time a frame of depth image of the target scene is acquired, a frame of color image is acquired as well, which ensures that the depth image and the color image are based on the same target scene at the same moment, so that the obtained mapping relationship is more accurate. After the depth image and the color image are obtained, the first pixel information of the color image and the second pixel information of the depth image can be obtained respectively.
  • this embodiment can align the depth image and the color image, and then determine the mapping relationship between the depth image and the color image according to the first pixel information and the second pixel information.
  • the mapping relationship refers to the mapping between each pixel in the depth image and the corresponding pixel in the color image; that is, when the first pixel information in the color image is known, the second pixel information in the depth image can be determined according to the mapping relationship.
  • the pose information of the color camera corresponding to the color image and the pose information of the depth camera corresponding to the depth image can be respectively obtained.
  • the pose information in this embodiment reflects the track points of the color camera and the depth camera. Since there is a mapping relationship between the depth image and the color image, this embodiment can associate the pose information of the color camera corresponding to the color image with the pose information of the depth camera corresponding to the depth image according to the mapping relationship. Therefore, when the pose information of the color camera corresponding to the color image is known, the pose information of the depth camera corresponding to the depth image can be determined.
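  • For illustration only, a per-pixel mapping of this kind can be sketched as follows in numpy; K_depth, K_color, R and t are assumed names for the two cameras' intrinsic matrices and the depth-to-color extrinsic rotation and translation, which the embodiment presumes are known from calibration but does not spell out.

```python
import numpy as np

def depth_pixel_to_color_pixel(u, v, z, K_depth, K_color, R, t):
    """Map a depth pixel (u, v) with depth z to its color-image pixel."""
    # Back-project the depth pixel into a 3D point in the depth camera frame.
    p_depth = z * (np.linalg.inv(K_depth) @ np.array([u, v, 1.0]))
    # Transform the point into the color camera frame.
    p_color = R @ p_depth + t
    # Project onto the color image plane and dehomogenize.
    uvw = K_color @ p_color
    return uvw[0] / uvw[2], uvw[1] / uvw[2]
```

Applying such a mapping to every depth pixel would yield the pixel-to-pixel mapping relationship described above.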
  • Step S200 according to the depth image, acquire a two-dimensional point cloud image corresponding to the depth image, and determine a target map frame corresponding to the two-dimensional point cloud image.
  • this embodiment can convert the depth image into a two-dimensional point cloud image, and then determine the target map frame from the two-dimensional point cloud image.
  • the target map frame contains all the trajectory points in the two-dimensional point cloud image, which is beneficial to generate a two-dimensional map based on the target map frame in subsequent steps.
  • Step S201 transforming the depth image into 3D point cloud data, the 3D point cloud data carries identifications of different colors;
  • Step S202 using the 3D point cloud data marked with different colors to obtain a 2D point cloud image
  • Step S203 Determine the target map frame according to the two-dimensional point cloud image.
  • first, the depth image is converted into three-dimensional point cloud data, using the pixel information of each pixel in the depth image (that is, the above-mentioned second pixel information).
  • the three-dimensional point cloud data are calculated by the following back-projection:
  • x_s = (u - u_0) · d_x · z / f'
  • y_s = (v - v_0) · d_y · z / f'
  • z_s = z
  • where (x_s, y_s, z_s) are the three-dimensional coordinates of the point cloud in the depth camera coordinate system; z is the depth of each pixel; (u, v) are the pixel coordinates; (u_0, v_0) are the coordinates of the principal point of the image; d_x and d_y are the physical dimensions of the depth camera's sensor pixels in the two directions; and f' is the focal length (in millimeters).
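  • As a rough sketch, this back-projection can be applied to an entire depth image at once. The following vectorized numpy version follows the notation of the formula (u_0, v_0, d_x, d_y, f'); the color_id argument is a hypothetical per-frame tag anticipating the color identifications discussed next.

```python
import numpy as np

def depth_to_point_cloud(depth, u0, v0, dx, dy, f, color_id=0.0):
    """Convert an HxW depth image into an (N, 4) array [x_s, y_s, z_s, color_id]."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinate grids
    z = depth.astype(float)
    x = (u - u0) * dx * z / f   # x_s = (u - u_0) * d_x * z / f'
    y = (v - v0) * dy * z / f   # y_s = (v - v_0) * d_y * z / f'
    c = np.full_like(z, color_id)                   # per-point color identification
    pts = np.stack([x, y, z, c], axis=-1).reshape(-1, 4)
    return pts[pts[:, 2] > 0]   # drop pixels without a valid depth reading
```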
  • the 3D point cloud data in this embodiment carry different color marks.
  • when the target scene is a multi-story building (such as a 3-story building), different color identifications can be set for the trajectory points corresponding to the acquired color images, so as to distinguish the data and simplify the calculation.
  • specifically, this embodiment sets different color identifications for the trajectory points of each floor, as shown by the camera trajectories in the target scene of the floor building in Figure 2 and Figure 3; as can be seen from Figure 3, the 6 blocks correspond to the data of 3 floors and 3 stairwells.
  • alternatively, the 3D point cloud data in this embodiment may not carry the color identifications, but this leads to problems such as the large amount of undifferentiated data and slow calculation and processing mentioned above; this is not limited here.
  • to obtain the two-dimensional point cloud image, the depth component of the 3D point cloud data can be directly discarded, or the 3D point cloud data can be normalized along the z-axis so that each normalized point has a z-coordinate of 1. It should be noted that the normalized z-coordinate can also be any other constant value, as long as the normalized point cloud is planar; this is not limited here.
  • specifically, this embodiment projects the three-dimensional point cloud data with different color identifications into the same coordinate system (the X-Y coordinate system) to obtain the two-dimensional point cloud image, as shown in Figure 4. Since the 2D point cloud image is obtained by projecting the 3D point cloud data, there is a definite projection relationship between the 2D point cloud image and the 3D point cloud data. In one implementation, after the two-dimensional point cloud image is obtained, this embodiment performs noise reduction processing on it.
  • specifically, this embodiment establishes an X-Y coordinate system, calculates the point density in each grid cell using a 20 cm square as the unit, and then performs median filtering on the point densities in the grid, which can effectively remove flying points and realize the noise reduction of the two-dimensional point cloud image.
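  • As an illustrative sketch of this step, the code below bins the projected points into 20 cm grid cells, median-filters the per-cell point densities, and drops points whose filtered neighbourhood density is too low; the min_density threshold is an assumption, since the embodiment does not state exactly how the filtered density map is applied to individual points.

```python
import numpy as np
from scipy.ndimage import median_filter

def project_and_denoise(points, cell=0.2, min_density=1.0):
    """Project 3D points onto the X-Y plane and remove flying points."""
    xy = points[:, :2]
    x_edges = np.arange(xy[:, 0].min(), xy[:, 0].max() + cell, cell)
    y_edges = np.arange(xy[:, 1].min(), xy[:, 1].max() + cell, cell)
    # Point density per 20 cm x 20 cm grid cell.
    density, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=[x_edges, y_edges])
    smoothed = median_filter(density, size=3)   # median filter over 3x3 cells
    # An isolated flying point sits in a cell whose neighbourhood is empty,
    # so its median-filtered density falls below the threshold.
    ix = np.clip(np.searchsorted(x_edges, xy[:, 0], side="right") - 1,
                 0, density.shape[0] - 1)
    iy = np.clip(np.searchsorted(y_edges, xy[:, 1], side="right") - 1,
                 0, density.shape[1] - 1)
    return points[smoothed[ix, iy] >= min_density]
```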
  • a target map frame is determined from the two-dimensional point cloud image.
  • the target map frame in this embodiment needs to contain all the trajectory points in the two-dimensional point cloud image, and in a specific application, the target map frame can be set as a rectangular bounding box.
  • this implementation can set the target map frame as the minimum circumscribed rectangle that contains all trajectory points in the two-dimensional point cloud image; the minimum circumscribed rectangle refers to the smallest rectangle that can contain all the trajectory points in the 2D point cloud image.
  • when determining the target map frame, firstly, according to the two-dimensional point cloud image, the coordinate information of each trajectory point in the two-dimensional point cloud image is determined. Then, based on the coordinate information of each track point, the maximum abscissa point, the minimum abscissa point, the maximum ordinate point, and the minimum ordinate point are determined. Finally, according to these four points, the rectangular bounding box, that is, the minimum circumscribed rectangle, is determined, and the rectangular bounding box is used as the target map frame, as shown in Figure 5.
  • in another implementation, a covariance matrix of the mean-centered trajectory points in the two-dimensional point cloud image can be calculated, and an SVD decomposition performed on the covariance matrix to obtain the eigenvectors corresponding to the two eigenvalues; these give the two axis directions of the rectangular bounding box, and the boundary values of the trajectory points along the two directions then determine the boundary of the rectangular bounding box.
  • on this basis, this embodiment can also appropriately expand the rectangular bounding box for different target scenarios, enabling the rectangular bounding box to express more information.
  • in addition, the target map frame can be corrected so that it is aligned with the X-Y coordinate system, allowing the target map frame to be gridded in the next step.
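  • For illustration, both bounding strategies can be sketched as follows: an axis-aligned minimum circumscribed rectangle taken from the coordinate extremes, and an oriented rectangle whose axes come from an SVD of the covariance matrix of the mean-centered trajectory points; the expansion margin is an assumed parameter reflecting the optional enlargement mentioned above.

```python
import numpy as np

def axis_aligned_frame(xy, margin=0.0):
    """Minimum circumscribed rectangle (optionally expanded by a margin)."""
    return (xy[:, 0].min() - margin, xy[:, 0].max() + margin,
            xy[:, 1].min() - margin, xy[:, 1].max() + margin)

def oriented_frame(xy):
    """Oriented rectangle: two axis directions plus extents along each axis."""
    centered = xy - xy.mean(axis=0)        # mean-center the trajectory points
    cov = np.cov(centered, rowvar=False)   # 2x2 covariance matrix
    axes, _, _ = np.linalg.svd(cov)        # columns: the two eigenvector axes
    proj = centered @ axes                 # coordinates in the rectangle frame
    return axes, proj.min(axis=0), proj.max(axis=0)
```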
  • Step S300 obtaining the location tag information in each region in the target map frame, and binding the location tag information with the pose information of the color camera corresponding to the color image in each region according to the location tag information and the mapping relationship, to obtain a two-dimensional map.
  • this embodiment divides the target map frame into regions, and obtains location label information in each region in the target map frame.
  • the location label information reflects the location information of the track points in each area in the target map.
  • the two-dimensional point cloud image is obtained by processing the depth image, and there is a mapping relationship between the depth image and the color image; therefore, based on the mapping relationship, this embodiment can bind the position label information with the pose information of the color camera corresponding to the color image in each area, to obtain a two-dimensional map.
  • this embodiment includes the following steps when acquiring location tag information:
  • Step S301 performing a grid operation on the target map frame to obtain each area in the target map frame;
  • Step S302 obtaining the boundary information of each area, the position information of the track points in each area, and the projection relationship between the track points in each area and the three-dimensional point cloud data;
  • Step S303 using the boundary information of each area, the position information of the track points in each area, and the projection relationship between the track points in each area and the three-dimensional point cloud data as position label information.
  • since the target map frame in this embodiment is a rectangular bounding box, it can be evenly divided into grid cells; the number of grid cells is determined according to the size of the target scene corresponding to the two-dimensional point cloud image and the required positioning accuracy. The higher the actual demand for positioning accuracy, the more grid cells the rectangular bounding box is divided into, so that the pose information of the camera corresponding to each grid cell can be accurately matched later.
  • each area in the target map frame is then obtained, as shown in Figure 6; the dotted lines in Figure 6 are the grid boundaries, and each grid cell is an area. Since the frame is uniformly divided into grid cells, the boundary information of each area can be obtained.
  • each area is numbered based on its boundary information, so every grid cell in the target map frame has a number. Then the boundary information of each area, the position information of the track points in each area (such as floor information), and the projection relationship between the track points in each area and the three-dimensional point cloud data are obtained and set as the position label information, so that the position label information of every area in the target map frame is obtained.
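  • A rough sketch of this gridding and labelling step follows; the grid counts nx and ny, the cell numbering scheme, and the floor_ids/cloud_ids inputs are assumptions, since the text fixes only what each label must carry (boundary information, track-point positions such as floors, and projection links into the 3D point cloud).

```python
import numpy as np

def build_position_labels(frame, track_xy, floor_ids, cloud_ids, nx=10, ny=10):
    """Number the grid cells of the target map frame and collect labels per cell."""
    xmin, xmax, ymin, ymax = frame
    sx, sy = (xmax - xmin) / nx, (ymax - ymin) / ny   # cell size on each axis
    ix = np.clip(((track_xy[:, 0] - xmin) / sx).astype(int), 0, nx - 1)
    iy = np.clip(((track_xy[:, 1] - ymin) / sy).astype(int), 0, ny - 1)
    labels = {}
    for k, (cx, cy) in enumerate(zip(ix, iy)):
        cell = int(cy) * nx + int(cx)                 # the cell's grid number
        rec = labels.setdefault(cell, {
            "bounds": (xmin + cx * sx, ymin + cy * sy,            # boundary info
                       xmin + (cx + 1) * sx, ymin + (cy + 1) * sy),
            "track_points": [], "floors": set(), "cloud_ids": []})
        rec["track_points"].append(tuple(track_xy[k]))  # track-point position
        rec["floors"].add(int(floor_ids[k]))            # e.g. floor information
        rec["cloud_ids"].append(int(cloud_ids[k]))      # link into the 3D cloud
    return labels
```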
  • the two-dimensional point cloud image is obtained based on the processing of the depth image, and there is a mapping relationship between the depth image and the color image.
  • this embodiment can bind the position label information with the pose information of the color camera corresponding to the color image in each area based on the mapping relationship, to obtain a two-dimensional map, as shown in Figure 7.
  • Figure 7 shows the camera trajectory map of one floor when the target scene is a 3-story building; in the camera trajectory map, different colors indicate that the trajectory points are located in different grid areas. This annotated trajectory map is the resulting two-dimensional map.
  • the two-dimensional map generated in this embodiment can reflect the position information of the track points in each area, so that when a user obtains a color image of the same target scene, the pose information of the corresponding color camera (and thus the track point) can be obtained, and the corresponding location tag information can further be obtained according to the two-dimensional map, so as to achieve precise positioning. It can be seen that this embodiment can quickly generate a two-dimensional map with a simple process, and the generated two-dimensional map can be reused to provide users with more convenient positioning services.
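  • As a final illustration, a positioning query against such a map could look like the sketch below; it reuses the hypothetical build_position_labels() output from the earlier sketch and assumes the pose of the color camera has already been recovered from the query color image.

```python
def locate(labels, frame, pose_xy, nx=10, ny=10):
    """Return the position label info for the grid cell containing pose_xy."""
    xmin, xmax, ymin, ymax = frame
    cx = min(max(int((pose_xy[0] - xmin) / (xmax - xmin) * nx), 0), nx - 1)
    cy = min(max(int((pose_xy[1] - ymin) / (ymax - ymin) * ny), 0), ny - 1)
    return labels.get(cy * nx + cx)   # None if no track point ever hit this cell
```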
  • this embodiment further provides a two-dimensional map generation device, as shown in FIG. 8 .
  • the device of this embodiment includes: an image acquisition module 10 , a mapping relationship determination module 20 , a target map frame determination module 30 and a two-dimensional map generation module 40 .
  • the image collection module 10 is used to collect the depth image and the color image of the target scene;
  • the mapping relationship determination module 20 is used to determine the mapping relationship between the depth image and the color image.
  • the target map frame determining module 30 is configured to acquire a two-dimensional point cloud image corresponding to the depth image according to the depth image, and determine a target map frame corresponding to the two-dimensional point cloud image.
  • the two-dimensional map generation module 40 is configured to obtain the location tag information in each area in the target map frame, and to bind the location tag information with the pose information of the color camera corresponding to the color image in each area according to the location tag information and the mapping relationship, to obtain a two-dimensional map.
  • the present invention further provides a terminal device, the functional block diagram of which may be shown in FIG. 9 .
  • the terminal equipment includes a processor, a memory, a network interface, a display screen, and a temperature sensor connected through a system bus.
  • the processor of the terminal device is used to provide calculation and control capabilities.
  • the memory of the terminal device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system and computer programs.
  • the internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage medium.
  • the network interface of the terminal device is used to communicate with external terminals through a network connection.
  • the display screen of the terminal device may be a liquid crystal display screen or an electronic ink display screen, and the temperature sensor of the terminal device is pre-set inside the terminal device to detect the operating temperature of its internal components.
  • in one embodiment, a terminal device is provided, which includes a memory, a processor, and a two-dimensional map generation program stored in the memory and operable on the processor; when the processor executes the two-dimensional map generation program, the steps of the two-dimensional map generation method described in the above solutions are implemented.
  • Nonvolatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), etc.
  • the present invention discloses a two-dimensional map generation method, device, terminal equipment, and storage medium.
  • the method includes: acquiring a depth image and a color image of a target scene, and determining the mapping relationship between the depth image and the color image; according to the depth image, obtaining the two-dimensional point cloud image corresponding to the depth image and determining the target map frame corresponding to the two-dimensional point cloud image; and obtaining the location tag information in each region of the target map frame, and according to the location tag information and the mapping relationship, binding the location tag information with the pose information of the color camera corresponding to the color image in each region to obtain a two-dimensional map.
  • the invention can quickly construct a two-dimensional map, and the constructed two-dimensional map can be reused.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention discloses a two-dimensional map generation method and apparatus, a terminal device, and a storage medium. The method comprises: acquiring a depth image and a color image of a target scene, and determining a mapping relationship between the depth image and the color image; according to the depth image, acquiring a two-dimensional point cloud image corresponding to the depth image, and determining a target map frame corresponding to the two-dimensional point cloud image; and acquiring location label information in each region in the target map frame, and, according to the location label information and the mapping relationship, binding the location label information with the pose information of a color camera corresponding to the color image in each region to obtain a two-dimensional map. According to the present invention, a two-dimensional map can be constructed quickly, and the constructed two-dimensional map can be reused.
PCT/CN2022/080520 2021-09-24 2022-03-13 Two-dimensional map generation method and apparatus, terminal device and storage medium WO2023045271A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111122466.5A CN114004882A (zh) 2021-09-24 2021-09-24 Two-dimensional map generation method and apparatus, terminal device and storage medium
CN202111122466.5 2021-09-24

Publications (1)

Publication Number Publication Date
WO2023045271A1 true WO2023045271A1 (fr) 2023-03-30

Family

ID=79921854

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/080520 WO2023045271A1 (fr) 2022-03-13 Two-dimensional map generation method and apparatus, terminal device and storage medium

Country Status (2)

Country Link
CN (1) CN114004882A (fr)
WO (1) WO2023045271A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116597074A (zh) * 2023-04-18 2023-08-15 五八智能科技(杭州)有限公司 Multi-sensor information fusion method, system, apparatus and medium
CN116883584A (zh) * 2023-05-29 2023-10-13 东莞市捷圣智能科技有限公司 Digital-model-based trajectory generation method and apparatus, electronic device and storage medium

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114004882A (zh) * 2021-09-24 2022-02-01 奥比中光科技集团股份有限公司 Two-dimensional map generation method and apparatus, terminal device and storage medium
CN114663612A (zh) * 2022-03-24 2022-06-24 北京百度网讯科技有限公司 High-precision map construction method and apparatus, and electronic device
CN115308716A (zh) * 2022-10-12 2022-11-08 深圳市其域创新科技有限公司 Scanning device and control method therefor

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107741234A (zh) * 2017-10-11 2018-02-27 深圳勇艺达机器人有限公司 Vision-based offline map construction and positioning method
CN110243375A (zh) * 2019-06-26 2019-09-17 汕头大学 Method for simultaneously constructing a two-dimensional map and a three-dimensional map
WO2021010784A2 (fr) * 2019-07-17 2021-01-21 주식회사 유진로봇 Apparatus and method for performing object image generation, object recognition, and environment learning by a mobile robot
WO2021017314A1 (fr) * 2019-07-29 2021-02-04 浙江商汤科技开发有限公司 Information processing method, information positioning method and apparatus, electronic device and storage medium
CN114004882A (zh) * 2021-09-24 2022-02-01 奥比中光科技集团股份有限公司 Two-dimensional map generation method and apparatus, terminal device and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107741234A (zh) * 2017-10-11 2018-02-27 深圳勇艺达机器人有限公司 Vision-based offline map construction and positioning method
CN110243375A (zh) * 2019-06-26 2019-09-17 汕头大学 Method for simultaneously constructing a two-dimensional map and a three-dimensional map
WO2021010784A2 (fr) * 2019-07-17 2021-01-21 주식회사 유진로봇 Apparatus and method for performing object image generation, object recognition, and environment learning by a mobile robot
WO2021017314A1 (fr) * 2019-07-29 2021-02-04 浙江商汤科技开发有限公司 Information processing method, information positioning method and apparatus, electronic device and storage medium
CN114004882A (zh) * 2021-09-24 2022-02-01 奥比中光科技集团股份有限公司 Two-dimensional map generation method and apparatus, terminal device and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHEN CHAO, LI QIANG; YAN QING: "Mobile Robot Simultaneous Localization and Mapping Based on Heterogeneous Sensor Information Fusion", SCIENCE TECHNOLOGY AND ENGINEERING, ZHONGGUO JISHU JINGJI YANJIUHUI, CN, vol. 18, no. 13, 8 May 2018 (2018-05-08), pages 86-91, XP093054349, ISSN: 1671-1815 *
XIN, GUANXI: "Research on Simultaneous Localization and Mapping Based on RGB-D Camera", CHINA MASTER'S THESES FULL-TEXT DATABASE, INFORMATION TECHNOLOGY, no. 02, 1 July 2016 (2016-07-01), CN, pages 1 - 64, XP009544915 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116597074A (zh) * 2023-04-18 2023-08-15 五八智能科技(杭州)有限公司 Multi-sensor information fusion method, system, apparatus and medium
CN116883584A (zh) * 2023-05-29 2023-10-13 东莞市捷圣智能科技有限公司 Digital-model-based trajectory generation method and apparatus, electronic device and storage medium
CN116883584B (zh) * 2023-05-29 2024-03-26 东莞市捷圣智能科技有限公司 Digital-model-based trajectory generation method and apparatus, electronic device and storage medium

Also Published As

Publication number Publication date
CN114004882A (zh) 2022-02-01

Similar Documents

Publication Publication Date Title
WO2023045271A1 (fr) Two-dimensional map generation method and apparatus, terminal device and storage medium
CN112894832B (zh) 三维建模方法、装置、电子设备和存储介质
CN111436208B (zh) 一种测绘采样点的规划方法、装置、控制终端及存储介质
CN108932051B (zh) 增强现实图像处理方法、装置及存储介质
CN111383279B (zh) 外参标定方法、装置及电子设备
CN112184890B (zh) 一种应用于电子地图中的摄像头精准定位方法及处理终端
US10097753B2 (en) Image data processing method and apparatus
US11682170B2 (en) Generating three-dimensional geo-registered maps from image data
WO2018153313A1 (fr) Stereoscopic camera and height acquisition method therefor, and height acquisition system
CN109472829B (zh) 一种物体定位方法、装置、设备和存储介质
CN105043354B (zh) 一种利用摄像头成像对移动目标精准定位的系统
US11238647B2 (en) Apparatus for building map using machine learning and image processing
WO2016155110A1 (fr) Method and system for correcting image perspective distortion
CN110517209B (zh) 数据处理方法、装置、系统以及计算机可读存储介质
CN110033046B (zh) 一种计算特征匹配点分布可信度的量化方法
WO2023087860A1 (fr) Method and apparatus for generating target trajectory, and electronic device and medium
CN115376109B (zh) 障碍物检测方法、障碍物检测装置以及存储介质
CN116086411B (zh) 数字地形图生成方法、装置、设备和可读存储介质
CN112422653A (zh) 基于位置服务的场景信息推送方法、系统、存储介质及设备
EP3875902B1 (fr) Planning method and apparatus for surveying and mapping sampling points, control terminal and storage medium
CN112509135B (zh) 元素标注方法、装置、设备、存储介质及计算机程序产品
JP2014099055A (ja) 検出装置、検出方法、及びプログラム
CN111210471B (zh) 一种定位方法、装置及系统
CN112767498A (zh) 相机标定方法、装置和电子设备
Dlesk et al. Possibilities of processing archival photogrammetric images captured by Rollei 6006 metric camera using current method

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 22871336

Country of ref document: EP

Kind code of ref document: A1