CN111435538A - Positioning method, positioning system, and computer-readable storage medium - Google Patents


Info

Publication number
CN111435538A
CN111435538A (application CN201910032046.4A)
Authority
CN
China
Prior art keywords: information, current, image, positioning, scene
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910032046.4A
Other languages
Chinese (zh)
Inventor
温加睿
蒋如意
段勃勃
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai OFilm Smart Car Technology Co Ltd
Original Assignee
Shanghai OFilm Smart Car Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Shanghai OFilm Smart Car Technology Co Ltd filed Critical Shanghai OFilm Smart Car Technology Co Ltd
Priority to CN201910032046.4A priority Critical patent/CN111435538A/en
Publication of CN111435538A publication Critical patent/CN111435538A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/26 Navigation; Navigational instruments specially adapted for navigation in a road network
    • G01C 21/28 Navigation in a road network with correlation of data from several navigational instruments
    • G01C 21/30 Map- or contour-matching
    • G01C 21/32 Structuring or formatting of map data

Abstract

The invention provides a positioning method, a positioning system, and a non-volatile computer-readable storage medium. The positioning method of an embodiment of the invention includes the following steps: acquiring a surround-view image of a current scene; obtaining current scene information, current road characteristic information, and current obstacle information from the surround-view image; matching the current scene information, current road characteristic information, and current obstacle information against an existing virtual map to obtain positioning information; and outputting the positioning information. The positioning method, positioning system, and non-volatile computer-readable storage medium of the embodiments of the invention match the existing virtual map by identifying the current scene information, current road characteristic information, and current obstacle information in the surround-view image. Because multiple environmental factors are matched during positioning, the influence of environmental factors on positioning is reduced and positioning accuracy is improved.

Description

Positioning method, positioning system, and computer-readable storage medium
Technical Field
The present invention relates to the field of positioning technologies, and in particular, to a positioning method, a positioning system, and a non-volatile computer-readable storage medium.
Background
Image-based positioning systems are affected by many environmental factors during positioning, resulting in poor positioning accuracy.
Disclosure of Invention
Embodiments of the present invention provide a positioning method, a positioning system, and a non-volatile computer-readable storage medium.
The positioning method of the embodiment of the invention includes the following steps: acquiring a surround-view image of a current scene; obtaining current scene information, current road characteristic information, and current obstacle information from the surround-view image; matching the current scene information, current road characteristic information, and current obstacle information against an existing virtual map to obtain positioning information; and outputting the positioning information.
The positioning method of the embodiment of the invention matches the existing virtual map by identifying the current scene information, current road characteristic information, and current obstacle information in the surround-view image. Because multiple environmental factors are matched during positioning, the influence of environmental factors on positioning is reduced and positioning accuracy is improved.
In some embodiments, the positioning method further comprises: and acquiring the virtual map. Therefore, the virtual map is acquired before positioning is started, and preparation is made for subsequent positioning.
In some embodiments, obtaining the virtual map includes: acquiring video images at different positions; obtaining collected scene information, collected road characteristic information, and collected obstacle information corresponding to the different positions from the video images; and generating the virtual map from the collected scene information, collected road characteristic information, and collected obstacle information of the different positions.
In this way, by capturing video images of the current scene, the collected scene information, collected road characteristic information, and collected obstacle information of different positions of the current scene can be obtained. The collected information is comprehensive, so a high-precision virtual map can be built.
In some embodiments, obtaining the current scene information, current road characteristic information, and current obstacle information from the surround-view image includes: processing the surround-view image to identify scene attribute information; preprocessing the surround-view image according to the scene attribute information to generate a preprocessed image; and processing the preprocessed image to obtain the current scene information, current road characteristic information, and current obstacle information.
In this way, the surround-view image is preprocessed according to the scene attribute information, with different preprocessing methods used for different scene attributes. This removes the influence of the scene attributes on the surround-view image, yields a clear image, and helps improve positioning accuracy.
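As an illustration of this idea, the following sketch (our own, not code from the patent) dispatches a different preprocessing step depending on the identified scene attribute. The attribute labels ("night"/"day") and the mean-brightness heuristic are hypothetical stand-ins for a real scene-attribute classifier.

```python
# Illustrative sketch only (not code from the patent): dispatch a different
# preprocessing step depending on the identified scene attribute. The
# attribute labels and brightness heuristic are hypothetical.

def identify_scene_attribute(image):
    """Toy classifier: call the scene 'night' when mean brightness is low."""
    pixels = [p for row in image for p in row]
    return "night" if sum(pixels) / len(pixels) < 80 else "day"

def brighten(image, gain=2):
    """Simple gain-based brightening for low-light scenes, clipped to 255."""
    return [[min(255, p * gain) for p in row] for row in image]

def preprocess(image):
    """Preprocess the surround-view image according to its scene attribute."""
    if identify_scene_attribute(image) == "night":
        return brighten(image)
    return image  # daytime scenes are left as-is in this sketch

dark = [[40, 50], [60, 70]]
print(preprocess(dark))  # [[80, 100], [120, 140]]
```

A real system would substitute recognition of attributes such as rain, backlight, or glare, with a corresponding denoising or exposure-correction step for each.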
In some embodiments, processing the preprocessed image to obtain the current scene information, current road characteristic information, and current obstacle information includes: processing the preprocessed image with an image processing algorithm and a deep learning algorithm to obtain the current scene information, current road characteristic information, and current obstacle information.
Here, because both an image processing algorithm and a deep learning algorithm are applied to the preprocessed image, the road characteristic information is identified more accurately.
In some embodiments, the positioning method further comprises: selecting a working mode; and entering the step of acquiring the surround-view image of the current scene when the working mode is a positioning mode. Here, the user enters the positioning mode directly by selecting the working mode, without first entering the preprocessing mode to acquire the virtual map, so positioning can be performed quickly.
In some embodiments, the positioning method further comprises: selecting a working mode; and entering the step of acquiring the video images at different positions when the working mode is a preprocessing mode. Here, the user can actively build a virtual map of a designated scene through the preprocessing mode. This approach is robust and does not require a high-precision virtual map from a map supplier, which saves cost.
The positioning system of an embodiment of the invention includes an image acquisition device and one or more processors. The image acquisition device is configured to acquire a surround-view image of a current scene. The processor is configured to obtain current scene information, current road characteristic information, and current obstacle information from the surround-view image; match the current scene information, current road characteristic information, and current obstacle information against an existing virtual map to obtain positioning information; and output the positioning information.
The positioning system of the embodiment of the invention matches the existing virtual map by identifying the current scene information, current road characteristic information, and current obstacle information in the surround-view image. Because multiple environmental factors are matched during positioning, the influence of environmental factors on positioning is reduced and positioning accuracy is improved.
In certain embodiments, the processor is further configured to obtain the virtual map. Therefore, the virtual map is acquired before positioning is started, and preparation is made for subsequent positioning.
In some embodiments, the image acquisition device is further configured to capture video images at different positions, and the processor is further configured to obtain the collected scene information, collected road characteristic information, and collected obstacle information corresponding to the different positions from the video images, and to generate the virtual map from the collected scene information, collected road characteristic information, and collected obstacle information of the different positions.
In this way, the positioning system captures video images of the current scene through the image acquisition device, and the processor can obtain the collected scene information, collected road characteristic information, and collected obstacle information of different positions from the video images. The collected information is comprehensive, so a high-precision virtual map can be built.
In some embodiments, the processor is further configured to process the surround-view image to identify scene attribute information, preprocess the surround-view image according to the scene attribute information to generate a preprocessed image, and process the preprocessed image to obtain the current scene information, current road characteristic information, and current obstacle information.
In this way, the processor preprocesses the surround-view image according to the scene attribute information, using different preprocessing methods for different scene attributes. This removes the influence of the scene attributes on the surround-view image, yields a clear image, and helps improve positioning accuracy.
In some embodiments, the processor is further configured to process the preprocessed image with an image processing algorithm and a deep learning algorithm to obtain the current scene information, current road characteristic information, and current obstacle information.
Here, because the processor applies both an image processing algorithm and a deep learning algorithm to the preprocessed image, the road characteristic information is identified more accurately.
In some embodiments, the processor is further configured to select a working mode and to control the image acquisition device to capture the surround-view image of the current scene when the working mode is a positioning mode. Here, the user enters the positioning mode directly by selecting the working mode, without first entering the preprocessing mode to acquire the virtual map, so positioning can be performed quickly.
In some embodiments, the processor is further configured to select a working mode and to control the image acquisition device to capture the video images at the different positions when the working mode is a preprocessing mode. Here, the user can enter the preprocessing mode by selecting the working mode and actively build a virtual map of a designated scene. This approach is robust and does not require a high-precision virtual map from a map supplier, which saves cost.
One or more non-transitory computer-readable storage media embodying computer-executable instructions that, when executed by one or more processors, cause the processors to perform the above-described location methods of embodiments of the present invention.
The positioning method, positioning system, and non-volatile computer-readable storage medium of the embodiments of the invention match the existing virtual map by identifying the current scene information, current road characteristic information, and current obstacle information in the surround-view image. Because multiple environmental factors are matched during positioning, the influence of environmental factors on positioning is reduced and positioning accuracy is improved.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the embodiments are briefly described below. The drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flow chart of a positioning method according to some embodiments of the present invention;
FIG. 2 is a schematic diagram of a positioning system module in accordance with certain embodiments of the present invention;
FIG. 3 is a schematic structural view of a positioning carrier according to certain embodiments of the present invention;
FIG. 4 is a schematic flow chart of a positioning method according to some embodiments of the present invention;
FIGS. 5-8 are schematic views of a positioning method according to some embodiments of the invention;
FIGS. 9-12 are schematic flow charts of positioning methods according to certain embodiments of the present invention; and
FIG. 13 is a schematic diagram of the connection of a computer-readable storage medium to a processor in accordance with certain embodiments of the invention.
Detailed Description
In the description of the present invention, it is to be understood that the terms "first", "second" and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any number of technical features indicated. Thus, features defined as "first", "second", may explicitly or implicitly include one or more of the described features. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; may be mechanically connected, may be electrically connected or may be in communication with each other; either directly or indirectly through intervening media, either internally or in any other relationship. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
The following disclosure provides many different embodiments or examples for implementing different features of the invention. To simplify the disclosure of the present invention, the components and arrangements of specific examples are described below. Of course, they are merely examples and are not intended to limit the present invention. Furthermore, the present invention may repeat reference numerals and/or letters in the various examples, such repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. In addition, the present invention provides examples of various specific processes and materials, but one of ordinary skill in the art may recognize applications of other processes and/or uses of other materials.
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
Referring to fig. 1 and fig. 2, a positioning method according to an embodiment of the present invention includes:
012: acquiring a panoramic image of a current scene;
014: obtaining current scene information, current road characteristic information, and current obstacle information from the surround-view image;
016: matching the current scene information, current road characteristic information, and current obstacle information against the existing virtual map to obtain positioning information; and
018: and outputting the positioning information.
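The four steps can be sketched end to end as follows. This is a minimal illustration of our own, not the patent's implementation: the "image" is a dict of labelled frames, and both the feature extraction and the exact-match map lookup are hypothetical stand-ins for real recognition and matching algorithms.

```python
# Minimal end-to-end sketch of steps 012-018 (illustrative only).
# All data structures and extraction logic are hypothetical stand-ins.

def acquire_surround_image(cameras):
    """Step 012: merge per-camera frames into one surround-view image."""
    return dict(cameras)

def extract_current_info(surround_image):
    """Step 014: toy extraction of scene, road, and obstacle information."""
    return {
        "scene": surround_image.get("front", "unknown"),
        "road": "lane_markings",
        "obstacles": ["curb"],
    }

def match_to_map(info, virtual_map):
    """Step 016: return the map position whose stored info equals the query."""
    for position, stored in virtual_map.items():
        if stored == info:
            return position
    return None  # no portion of the map matched

def locate(cameras, virtual_map):
    """Steps 012-018 chained: the returned value is the positioning info."""
    return match_to_map(extract_current_info(acquire_surround_image(cameras)),
                        virtual_map)

virtual_map = {"space_8": {"scene": "parking_lot",
                           "road": "lane_markings",
                           "obstacles": ["curb"]}}
print(locate({"front": "parking_lot"}, virtual_map))  # space_8
```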
The positioning system 100 of an embodiment of the present invention includes an image acquisition device 10 and one or more processors 20. The image acquisition device 10 is configured to acquire a surround-view image of the current scene; the processor 20 is configured to obtain current scene information, current road characteristic information, and current obstacle information from the surround-view image, match them against the existing virtual map to obtain positioning information, and output the positioning information.
That is, step 012 can be implemented by the image acquisition device 10, and steps 014, 016, and 018 can be implemented by the processor 20.
Specifically, the positioning system 100 acquires a surround-view image of the current scene in real time through the image acquisition device 10; the surround-view image is the current frame of the current scene captured by the image acquisition device 10. For example, the image acquisition device 10 includes a plurality of cameras 12, each capturing images in a different orientation. This improves image acquisition efficiency, and the images captured by the cameras 12 in the multiple orientations can be synthesized into the surround-view (panoramic) image of the current frame; here, "surround view" refers to a field of view of 180 degrees or more, for example 180 degrees, 240 degrees, 360 degrees, 480 degrees, 720 degrees, and so on. There may be one or more processors 20, for example one, two, or three. After the image acquisition device 10 acquires the surround-view image, the processor 20 of the positioning system 100 obtains from it the current scene information (i.e., approximate position information of the current scene, such as a parking lot or a residential area), the current road characteristic information (such as road marking lines and traffic lights), and the current obstacle information (static obstacles such as curbs and roadside buildings, and dynamic obstacles such as moving pedestrians and vehicles). The processor then matches the current scene information, current road characteristic information, and current obstacle information with an existing virtual map (the virtual map M1 shown in fig. 5 or the virtual map M2 shown in fig. 7). For example, matching the current scene information, road characteristic information, and obstacle information against the scene information, road characteristic information, and obstacle information of portion P1 of the virtual map M1 yields a similarity. The processor 20 determines whether this similarity is greater than a predetermined similarity; if so, it determines that the current position matches that portion of the virtual map (portion P1 of M1, shown in fig. 6) and derives the positioning information from the position information corresponding to portion P1 in the virtual map M1. Because the matching uses multiple environmental characteristics of the current scene, the matching accuracy is high and the positioning accuracy can be improved. Finally, the processor 20 of the positioning system 100 outputs the positioning information, which may be transmitted to different output devices: for example, to an audio device that outputs audio positioning information to the user (such as a voice broadcast), or to a video device that outputs video positioning information (such as a map and current-position information shown on a display screen).
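The similarity test described above can be sketched as follows. The Jaccard measure over feature sets and the 0.8 threshold are our own illustrative choices; the patent specifies only that a similarity is compared against a predetermined similarity.

```python
# Hedged sketch of the similarity test: compute a similarity between the
# current information and one map portion (P1), and accept the match only
# when it exceeds a predetermined similarity. Jaccard similarity over
# feature sets is an illustrative choice, not the patent's measure.

def jaccard(a, b):
    """Similarity of two feature sets: |intersection| / |union|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def matches(current_features, map_portion_features, predetermined=0.8):
    """Return True when similarity exceeds the predetermined threshold."""
    return jaccard(current_features, map_portion_features) > predetermined

p1 = ["parking_lot", "lane_markings", "curb", "pillar"]
now = ["parking_lot", "lane_markings", "curb", "pillar"]
print(matches(now, p1))  # True: identical feature sets, similarity 1.0
```

Matching whole feature sets rather than a single cue is what lets several environmental factors vote on the position at once.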
In summary, the positioning method and positioning system 100 of the embodiments of the present invention match the existing virtual map by identifying the current scene information, current road characteristic information, and current obstacle information in the surround-view image, and match multiple environmental factors during positioning, thereby reducing the influence of environmental factors on positioning and improving positioning accuracy.
In some embodiments, referring to fig. 3, the positioning system 100 may be applied to a positioning carrier 1000. The carrier 1000 may be any movable device, such as an automobile, an unmanned aerial vehicle, an unmanned ship, or a robot, or even a wearable device worn by a person to provide positioning as the person moves. The embodiments of the present invention are described taking the positioning system 100 applied to an automobile 1000 as an example. In one example, the image acquisition device 10 includes six cameras 12 mounted respectively on the two sides of the head, the two sides of the body, and the two sides of the tail of the automobile 1000, and the processor 20 is mounted on one side of the body of the automobile 1000 and is communicatively connected (by wired or wireless connection) with the six cameras 12. The automobile 1000 carries the image acquisition device 10 as it moves on the road and constructs surround-view images along the traveling route; the field of view of a surround-view image may be 180 degrees, 240 degrees, 360 degrees, 480 degrees, 720 degrees, and so on.
Referring to fig. 4, in some embodiments, the positioning method further includes the following steps:
011: and acquiring a virtual map.
In certain embodiments, the processor 20 is also configured to obtain a virtual map.
That is, step 011 can be implemented by processor 20.
Specifically, before the positioning system 100 performs positioning, a virtual map needs to be obtained, and the virtual map may be generated in several ways. In one embodiment, a virtual map (for example an AutoNavi, Baidu, or Tencent map, or the automobile manufacturer's own map) is pre-installed and stored in the central control unit of the automobile 1000 when it leaves the factory. Such a pre-installed virtual map is built in advance by mapping different scenes, and a relatively accurate map generation device can be used to produce a high-precision virtual map, improving positioning precision for the user. The processor 20 may first read the virtual map directly from the central control unit of the automobile 1000; then, when step 016 is executed, the processor 20 matches the read virtual map with the obtained current scene information, current road characteristic information, and current obstacle information to obtain the positioning information.
In another embodiment, the central control unit of the automobile 1000 reads a virtual map (for example an AutoNavi, Baidu, or Tencent map, or the automobile manufacturer's own map) from the cloud or from an external device (such as a mobile phone or a navigation device). The virtual map in the cloud or external device is likewise built in advance by mapping different scenes, or a precise map generation device can be used to generate a high-precision virtual map, improving positioning precision for the user. After the central control unit of the automobile 1000 reads the virtual map, the processor 20 may read it directly from the central control unit; then, when step 016 is executed, the processor 20 matches the read virtual map with the obtained current scene information, current road characteristic information, and current obstacle information to obtain the positioning information.
In another embodiment, referring to fig. 5, to improve robustness across different scenes in different regions, the positioning system 100 may, under the user's operation, actively pre-construct a virtual map of a scene the user needs to locate in. In one example, when a user needs to construct a virtual map M1 of a parking lot, the user drives the automobile 1000 into the parking lot shown in fig. 5 and, under the prompting of the positioning system 100, moves it through the different positions of the parking lot: for instance, entering from the entrance and then passing along the passage past parking spaces 1 to 8 in turn. During this drive, the image acquisition device 10 continuously captures video images, and the processor 20 constructs the virtual map M1 of the whole parking lot from them. The virtual map M1 contains map information for the different positions the user passed (the entrance, the passage, and parking spaces 1 to 8), and the map information includes the scene information, road characteristic information, and obstacle information of each position. The processor 20 can read the pre-constructed virtual map M1; then, when step 016 is executed, the processor 20 matches the pre-constructed virtual map M1 with the acquired current scene information, current road characteristic information, and current obstacle information to obtain the positioning information. More specifically, referring to fig. 6, when the automobile 1000 equipped with the positioning system 100 actually enters parking space 8, the image acquisition device 10 acquires the surround-view image of the current scene, the processor 20 obtains the current scene information, current road characteristic information, and current obstacle information from it, and the processor matches them with the virtual map M1 to obtain the positioning information: the automobile 1000 is located at parking space 8 of the parking lot. If the scene is instead a park, a hotel, a street, or the like, the positioning carrier 1000 (the automobile 1000) likewise constructs a virtual map of that scene in advance, and steps 012, 014, 016, and 018 are executed when the carrier actually enters the scene, with the virtual map used in step 016 being the pre-constructed one. In this way, the positioning system 100 adapts to different scenes and constructs virtual maps corresponding to the scenes the user needs, giving good robustness.
In yet another embodiment, referring to fig. 7, suppose a user needs virtual maps of three scenes: North Xinhua Road, parking lot 1, and parking lot 2. The user first drives through parking lot 1 to capture video images of its different positions, then moves along North Xinhua Road to capture its video images, and finally arrives at parking lot 2 and captures its video images. The positioning system 100 then generates virtual map 1 of parking lot 1, virtual map 2 of North Xinhua Road, and virtual map 3 of parking lot 2 from the respective video images; that is, the positioning system 100 generates a separate virtual map for each scene. Of course, the positioning system 100 can also generate a single virtual map M2 covering the three scenes (parking lot 1, parking lot 2, and North Xinhua Road) from the video images of parking lot 1, North Xinhua Road, and parking lot 2. The virtual map M2 contains not only the map information of virtual maps 1, 2, and 3 but also their relative positional relationship (that is, of parking lot 1, North Xinhua Road, and parking lot 2), and can therefore be used to implement a navigation function for the automobile 1000. Referring to fig. 8, when the automobile 1000 equipped with the positioning system 100 navigates based on the virtual map M2, for example along a route from parking lot 1 to the position shown in fig. 8, the image acquisition device 10 acquires the surround-view image of the current scene during the movement, and the processor 20 obtains the current scene information, current road characteristic information, and current obstacle information from it and matches them with the virtual map M2 to output positioning information in real time, so that the user knows the position of the automobile 1000 at any time. When the automobile 1000 reaches the position shown in fig. 8, the output positioning information indicates that the automobile 1000 is located at the entrance of parking lot 2.
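The real-time use of the combined map during navigation can be sketched as a stream of match results, one per observation. This is a hypothetical illustration of our own; the position labels, the exact-match lookup, and the observation encoding are all stand-ins.

```python
# Sketch (hypothetical) of real-time localization against virtual map M2
# during navigation: each new surround-view observation is matched against
# the map and the positioning information is output immediately.

def localize_stream(observations, virtual_map):
    """Yield the matched map position (or None) for each observation."""
    for info in observations:
        match = None
        for position, stored in virtual_map.items():
            if stored == info:
                match = position
                break
        yield match

m2 = {
    "parking_lot_1_exit": {"scene": "parking_lot", "road": "exit_line"},
    "north_xinhua_road": {"scene": "street", "road": "lane_markings"},
    "parking_lot_2_entrance": {"scene": "parking_lot", "road": "entry_line"},
}
route = [
    {"scene": "parking_lot", "road": "exit_line"},
    {"scene": "street", "road": "lane_markings"},
    {"scene": "parking_lot", "road": "entry_line"},
]
print(list(localize_stream(route, m2)))
# ['parking_lot_1_exit', 'north_xinhua_road', 'parking_lot_2_entrance']
```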
Referring to fig. 4 and 9, in some embodiments, step 011 includes the steps of:
0112: acquiring video images at different positions;
0114: acquiring collected scene information, collected road characteristic information, and collected obstacle information corresponding to the different positions according to the video images; and
0116: generating a virtual map according to the collected scene information, collected road characteristic information, and collected obstacle information of the different positions.
In some embodiments, the image acquisition device 10 is further configured to acquire video images of different positions; the processor 20 is further configured to acquire the collected scene information, collected road characteristic information, and collected obstacle information corresponding to the different positions according to the video images, and to generate a virtual map according to the collected scene information, collected road characteristic information, and collected obstacle information of the different positions.
That is, step 0112 may be implemented by the image acquisition apparatus 10. Step 0114 and step 0116 may be implemented by processor 20.
Specifically, when no corresponding virtual map exists for the current scene, the user controls the vehicle 1000 to move through different positions of the current scene under the prompt of the positioning system 100. The positioning system 100 obtains, through the image acquisition device 10, video images (i.e., multiple frames of panoramic images, each position corresponding to one or more frames) of the different positions the user passes through during the movement, then obtains the collected scene information, collected road characteristic information, and collected obstacle information corresponding to the different positions of the current scene according to the video images, and then generates the virtual map of the current scene from this collected information. The approximate position information of the current position can be obtained according to the collected scene information; information such as marking lines, road signs, and traffic lights at the current position can be obtained according to the collected road characteristic information; and the static obstacle information and dynamic obstacle information of the current position can be obtained according to the collected obstacle information. Because dynamic obstacles change continuously, only static obstacles need to be considered when constructing the virtual map, whereas dynamic obstacles must be considered during positioning in order to prompt the user to avoid them, thereby improving driving safety. Therefore, a high-precision virtual map of a designated scene can be established simply according to the needs of the user, with strong robustness; a high-precision virtual map from a high-precision map supplier is not needed, which saves cost. More specifically, please refer to fig. 5. A user needs to construct a virtual map of a parking lot. When the user controls the vehicle 1000 to enter the parking lot shown in fig. 5, the user controls the vehicle 1000 to move through different positions of the parking lot under the prompt of the positioning system 100, for example entering from the entrance and then passing the parking spaces 1 to 8 in sequence through the passage. The image acquisition device 10 continuously obtains video images, the processor 20 obtains the collected scene information, collected road characteristic information, and collected obstacle information corresponding to the different positions of the parking lot according to the video images, and then generates the virtual map M1 of the parking lot from this collected information, in the same manner as described above.
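The map-construction step can be illustrated with the following sketch. It assumes each position passed during the calibration drive yields one record of collected information, and that dynamic obstacles are discarded while static obstacles are stored, as the text notes; the function and field names are illustrative, not from the patent.

```python
def build_virtual_map(collected):
    """collected: list of per-position records with keys 'position',
    'scene', 'road', 'static_obstacles', 'dynamic_obstacles'.
    Returns a map keyed by position; dynamic obstacles are dropped
    because they change continuously and do not belong in the map."""
    vmap = {}
    for record in collected:
        vmap[record["position"]] = {
            "scene": record["scene"],
            "road": record["road"],
            # only static obstacles are stored in the virtual map
            "obstacles": record["static_obstacles"],
        }
    return vmap

# Hypothetical records from a drive through the fig. 5 parking lot.
drive = [
    {"position": "entrance", "scene": {"gate"}, "road": {"arrow"},
     "static_obstacles": {"pillar"}, "dynamic_obstacles": {"pedestrian"}},
    {"position": "slot 1", "scene": {"wall"}, "road": {"slot-line-1"},
     "static_obstacles": set(), "dynamic_obstacles": {"moving car"}},
]

m1 = build_virtual_map(drive)
print(sorted(m1))                   # ['entrance', 'slot 1']
print(m1["entrance"]["obstacles"])  # {'pillar'}; the pedestrian is dropped
```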
In other embodiments, the user controls the vehicle 1000 to move through different positions of different scenes to obtain virtual maps of the different scenes; for example, the positioning system 100 may obtain video images of different positions of multiple scenes and then generate a corresponding virtual map for each scene and/or generate one virtual map that covers the multiple scenes. More specifically, please refer to fig. 7. A user needs to construct virtual maps of three scenes, namely Xinhua North Road, the parking lot 1, and the parking lot 2. First, the user enters the parking lot 1 to obtain video images of different positions of the parking lot 1; the processor 20 obtains the collected scene information, collected road characteristic information, and collected obstacle information corresponding to the different positions of the parking lot 1 according to the video images, and then generates the virtual map 1 of the parking lot 1 from this collected information. Then, the vehicle moves along Xinhua North Road to obtain video images of Xinhua North Road; the processor 20 obtains the collected scene information, collected road characteristic information, and collected obstacle information corresponding to different positions of Xinhua North Road according to the video images, and then generates the virtual map 2 of Xinhua North Road from this collected information. Finally, the vehicle arrives at the parking lot 2 to obtain video images of the parking lot 2; the processor 20 obtains the collected scene information, collected road characteristic information, and collected obstacle information corresponding to different positions of the parking lot 2 according to the video images, and then generates the virtual map 3 of the parking lot 2 from this collected information. That is, the positioning system 100 generates a virtual map for each scene. Of course, the positioning system 100 can also generate a single virtual map M2 covering the three scenes (the parking lot 1, the parking lot 2, and Xinhua North Road) from the video images of the three scenes; the virtual map M2 includes not only the map information of the virtual map 1, the virtual map 2, and the virtual map 3, but also the relative position relationship of the three scenes (i.e., the parking lot 1, Xinhua North Road, and the parking lot 2).
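One way a combined map like M2 could hold both the per-scene maps and their relative position relationship is sketched below; the adjacency-list representation and the breadth-first route search are assumptions for illustration, not details given by the patent.

```python
from collections import deque

def combine_maps(scene_maps, adjacency):
    """scene_maps: {scene_name: virtual_map}; adjacency: pairs of scene
    names describing how the scenes connect physically."""
    return {"submaps": dict(scene_maps),
            "links": [tuple(sorted(p)) for p in adjacency]}

def route(combined, start, goal):
    """Breadth-first search over the scene links (navigation sketch)."""
    graph = {}
    for a, b in combined["links"]:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], set()) - seen:
            seen.add(nxt)
            queue.append(path + [nxt])
    return None

m2 = combine_maps(
    {"parking lot 1": {}, "Xinhua North Road": {}, "parking lot 2": {}},
    [("parking lot 1", "Xinhua North Road"),
     ("Xinhua North Road", "parking lot 2")],
)
print(route(m2, "parking lot 1", "parking lot 2"))
# ['parking lot 1', 'Xinhua North Road', 'parking lot 2']
```

Storing the relative relationship explicitly is what allows M2 to support navigation across scenes, which the three separate maps alone cannot do.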
In some embodiments, the corresponding virtual map is selected for matching according to the current scene information.
Specifically, the virtual maps and the scene information are in one-to-one correspondence. For example, the current scene may be a parking lot where the user parks, and there may be a plurality of such parking lots, such as the parking lot 1 in a residential area and the parking lot 2 near the user's company. When the current scene is the parking lot 1, the virtual map 1 corresponding to the parking lot 1 is selected; when the current scene is the parking lot 2, the virtual map 2 corresponding to the parking lot 2 is called. Because the virtual maps and the scene information are in one-to-one correspondence, during positioning it is not necessary to match the panoramic image of the current frame against all virtual maps, but only against the virtual map with the same scene information, which reduces the amount of calculation.
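The one-to-one scene-to-map correspondence can be sketched as a plain dictionary lookup, which is what makes the calculation saving possible: retrieval is constant-time instead of a match against every stored map. The names here are illustrative.

```python
# Hypothetical store of per-scene virtual maps.
maps_by_scene = {
    "parking lot 1": {"name": "virtual map 1"},
    "parking lot 2": {"name": "virtual map 2"},
}

def select_map(current_scene):
    """O(1) lookup of the map for the recognized scene; None means no
    map exists yet and the preprocessing mode should be entered."""
    return maps_by_scene.get(current_scene)

print(select_map("parking lot 2")["name"])  # virtual map 2
print(select_map("unknown scene"))          # None
```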
Referring to fig. 4 and 10, in some embodiments, step 014 includes:
0142: processing the panoramic image to identify scene attribute information;
0144: preprocessing the panoramic image according to the scene attribute information to generate a preprocessed image; and
0146: processing the preprocessed image to obtain the current scene information, the current road characteristic information, and the current obstacle information.
In some embodiments, the processor 20 is further configured to process the panoramic image to identify scene attribute information, preprocess the panoramic image according to the scene attribute information to generate a preprocessed image, and process the preprocessed image to obtain the current scene information, the current road characteristic information, and the current obstacle information.
Specifically, when the current scene information, the current road characteristic information, and the current obstacle information are obtained according to the panoramic image, scene attribute information (attributes of the current scene such as weather, humidity, and illumination) is first identified from the panoramic image. The panoramic image is then preprocessed according to the scene attributes to generate a preprocessed image; that is, different preprocessing methods are applied to the panoramic image according to the different scene attributes. For example, if the weather in the scene attribute information indicates rain, snow, or heavy fog, then, to ensure the accuracy of the subsequent recognition of the current scene information, the current road characteristic information, and the current obstacle information, the panoramic image is preprocessed according to the scene attribute information (a preprocessing method corresponding to rain and snow is applied on rainy and snowy days, and a preprocessing method corresponding to heavy fog is applied on foggy days) to remove the influence of the weather. In this way, a panoramic image captured in rain, snow, or fog is converted into a panoramic image with normal scene attribute information (such as the clearer image obtainable in sunny weather), so that subsequent processing of the preprocessed image can identify the current scene information, the current road characteristic information, and the current obstacle information more accurately. This improves the accuracy of matching the current information against the existing virtual map, which is beneficial to positioning accuracy.
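The attribute-driven preprocessing can be sketched as a dispatch table: the scene attribute (here, weather) selects which preprocessing routine is applied before recognition. The routines below are placeholders, since the patent does not name concrete de-raining or de-fogging algorithms.

```python
# Placeholder routines standing in for real weather-removal algorithms.
def remove_rain_snow(img):
    return {"frame": img["frame"], "weather": "clear"}

def remove_fog(img):
    return {"frame": img["frame"], "weather": "clear"}

def identity(img):
    return img

# Scene attribute -> preprocessing method, as described in the text.
PREPROCESSORS = {"rain": remove_rain_snow,
                 "snow": remove_rain_snow,
                 "fog": remove_fog}

def preprocess(panoramic_image):
    """Identify the weather attribute, then apply the matching routine
    so recognition always sees a normal-attribute image."""
    weather = panoramic_image.get("weather", "clear")
    return PREPROCESSORS.get(weather, identity)(panoramic_image)

out = preprocess({"frame": "raw pixels", "weather": "fog"})
print(out["weather"])  # 'clear': the fog influence has been removed
```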
In some embodiments, referring to fig. 4 and 11, step 0146 includes:
0148: and processing the preprocessed image by adopting an image processing algorithm and a deep learning algorithm to acquire current scene information, current road characteristic information and current obstacle information.
In some embodiments, processor 20 is further configured to process the pre-processed image using an image processing algorithm and a deep learning algorithm to obtain current scene information, current road characteristic information, and current obstacle information.
Specifically, when the current scene information, the current road characteristic information, and the current obstacle information are acquired, the preprocessed image is processed using an image processing algorithm and a deep learning algorithm, where the image processing algorithm includes feature descriptor extraction and target detection (such as road marking line detection and corner point detection), and the deep learning algorithm includes a convolutional neural network and a target detection framework. Processing the preprocessed image with both the image processing algorithm and the deep learning algorithm allows the current scene information, the current road characteristic information, and the current obstacle information to be acquired more accurately.
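The two-branch recognition step has the following structure: a classical image-processing branch and a deep-learning branch whose outputs are merged into one result. The branch bodies below are stand-ins (a real system would run descriptor extraction and a CNN detector here); only the pipeline shape is meant to be illustrative.

```python
def classic_branch(image):
    """Stand-in for feature descriptor extraction plus road-marking and
    corner detection on the preprocessed image."""
    return {"road": {"lane-line"}, "corners": 4}

def learned_branch(image):
    """Stand-in for a CNN classifier and an object-detection framework."""
    return {"scene": "parking lot", "obstacles": {"pillar"}}

def recognize(preprocessed_image):
    """Merge both branches into current scene, road characteristic,
    and obstacle information."""
    out = {}
    out.update(classic_branch(preprocessed_image))
    out.update(learned_branch(preprocessed_image))
    return out

info = recognize("preprocessed frame")
print(info["scene"], info["road"])  # parking lot {'lane-line'}
```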
In some embodiments, referring to fig. 3 and 12, the positioning method further includes:
010: selecting a working mode; when the working mode is the positioning mode, the process proceeds to step 012.
In some embodiments, the processor 20 is further configured to select a working mode and to control the image acquisition device 10 to acquire a panoramic image of the current scene when the working mode is the positioning mode.
That is, step 010 may be implemented by processor 20.
Specifically, the positioning system 100 includes a positioning mode for performing positioning and a preprocessing mode for generating a virtual map in preparation for positioning. The user can select the working mode manually: when a virtual map of the current scene already exists, the user can directly select the positioning mode. If the user has forgotten whether a virtual map was established for the current scene, then after the positioning mode is selected, the positioning system 100 identifies the current scene information and matches it against the corresponding virtual map; when no corresponding virtual map exists, the system prompts the user that no virtual map exists for the current scene and enters the preprocessing mode, so that whether a corresponding virtual map exists can be judged quickly. In the positioning mode, the positioning system 100 first obtains the panoramic image of the current scene and identifies the current scene information, the current road characteristic information, and the current obstacle information, and then matches this information against the existing virtual map to obtain the positioning information.
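The mode-selection logic described above, including the fallback from positioning to preprocessing when no map exists, can be sketched as a small decision function. The function and argument names are illustrative.

```python
def choose_mode(requested_mode, current_scene, maps_by_scene):
    """Return the working mode the system should actually enter.
    In positioning mode the system first verifies that a virtual map
    for the recognized scene exists; if not, it falls back to the
    preprocessing (map-building) mode after prompting the user."""
    if requested_mode == "preprocessing":
        return "preprocessing"
    if current_scene in maps_by_scene:
        return "positioning"
    # No map for this scene: prompt the user and build one first.
    return "preprocessing"

maps = {"parking lot 1": {}}
print(choose_mode("positioning", "parking lot 1", maps))  # positioning
print(choose_mode("positioning", "new garage", maps))     # preprocessing
```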
In some embodiments, referring again to fig. 3 and 12, the positioning method further includes:
010: selecting a working mode; when the working mode is the preprocessing mode, the process proceeds to step 0112.
In some embodiments, the processor 20 is further configured to select a working mode and to control the image acquisition device 10 to acquire video images of different positions when the working mode is the preprocessing mode.
That is, step 010 may be implemented by processor 20.
Specifically, the preprocessing mode is entered when the user selects it manually or when the positioning system 100 determines that no corresponding virtual map exists for the current scene. In the preprocessing mode, the user moves through different positions under the prompt of the positioning system 100, the positioning system 100 acquires video images (i.e., multiple frames of panoramic images, each position corresponding to one or more frames), and a virtual map is then generated from the video images; once the virtual map has been generated, the positioning mode can be entered. In this way, the user can actively establish a virtual map of a designated scene through the preprocessing mode, with strong robustness; a high-precision virtual map from a high-precision map supplier is not needed, which saves cost.
Referring to fig. 13, an embodiment of the present invention further provides one or more non-transitory computer-readable storage media 300 containing computer-executable instructions 302. When the computer-executable instructions 302 are executed by the one or more processors 20, the processors 20 perform the positioning method of any one of the above embodiments.
For example, when the computer-executable instructions 302 are executed by the processor 20, the processor 20 performs the steps of:
012: acquiring a panoramic image of a current scene;
014: acquiring current scene information, current road characteristic information, and current obstacle information according to the panoramic image;
016: matching the current scene information, the current road characteristic information, and the current obstacle information with the existing virtual map to obtain positioning information; and
018: outputting the positioning information.
As another example, when the computer-executable instructions 302 are executed by the processor 20, the processor 20 performs the steps of:
011: acquiring a virtual map.
As another example, when the computer-executable instructions 302 are executed by the processor 20, the processor 20 performs the steps of:
0112: video images of different positions are acquired.
0114: acquiring scene acquisition information, road characteristic acquisition information and barrier acquisition information corresponding to different positions according to the video image; and
0116: and generating a virtual map according to the collected scene information, the collected road characteristic information and the collected obstacle information of different positions.
In the description herein, references to the description of the terms "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example," or "some examples" or the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps in the process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, such as an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, various steps or methods may be performed by software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for performing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried out in the above method may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and the program, when executed, includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be executed in the form of hardware or in the form of a software functional module. The integrated module, if executed in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (15)

1. A positioning method, characterized in that the positioning method comprises:
acquiring a panoramic image of a current scene;
acquiring current scene information, current road characteristic information, and current obstacle information according to the panoramic image;
matching the current scene information, the current road characteristic information, and the current obstacle information with an existing virtual map to obtain positioning information; and
outputting the positioning information.
2. The positioning method according to claim 1, further comprising:
acquiring the virtual map.
3. The method according to claim 2, wherein the obtaining the virtual map comprises:
acquiring video images of different positions;
acquiring collected scene information, collected road characteristic information, and collected obstacle information corresponding to the different positions according to the video images; and
generating the virtual map according to the collected scene information, collected road characteristic information, and collected obstacle information of the different positions.
4. The positioning method according to claim 1, wherein the acquiring current scene information, current road characteristic information, and current obstacle information according to the panoramic image comprises:
processing the panoramic image to identify scene attribute information;
preprocessing the panoramic image according to the scene attribute information to generate a preprocessed image; and
processing the preprocessed image to obtain the current scene information, the current road characteristic information, and the current obstacle information.
5. The method according to claim 4, wherein the processing the pre-processed image to obtain the current scene information, the current road characteristic information, and the current obstacle information comprises:
processing the preprocessed image using an image processing algorithm and a deep learning algorithm to acquire the current scene information, the current road characteristic information, and the current obstacle information.
6. The positioning method according to claim 1, further comprising:
selecting a working mode; and
when the working mode is the positioning mode, entering the step of acquiring the panoramic image of the current scene.
7. The positioning method according to claim 3, further comprising:
selecting a working mode; and
when the working mode is a preprocessing mode, entering the step of acquiring the video images of different positions.
8. A positioning system, characterized in that the positioning system comprises:
the image acquisition device is used for acquiring a panoramic image of the current scene; and
one or more processors configured to acquire current scene information, current road characteristic information, and current obstacle information according to the panoramic image, match the current scene information, the current road characteristic information, and the current obstacle information with an existing virtual map to obtain positioning information, and output the positioning information.
9. The positioning system of claim 8, wherein the processor is further configured to obtain the virtual map.
10. The positioning system of claim 9, wherein the image acquisition device is further configured to acquire video images of different positions; the processor is further configured to acquire the collected scene information, collected road characteristic information, and collected obstacle information corresponding to the different positions according to the video images, and to generate the virtual map according to the collected scene information, collected road characteristic information, and collected obstacle information of the different positions.
11. The positioning system of claim 8, wherein the processor is further configured to process the panoramic image to identify scene attribute information, preprocess the panoramic image according to the scene attribute information to generate a preprocessed image, and process the preprocessed image to obtain the current scene information, the current road characteristic information, and the current obstacle information.
12. The positioning system of claim 11, wherein the processor is further configured to process the pre-processed image using an image processing algorithm and a deep learning algorithm to obtain the current scene information, the current road characteristic information, and the current obstacle information.
13. The positioning system of claim 8, wherein the processor is further configured to select a working mode and control the image acquisition device to acquire the panoramic image of the current scene when the working mode is the positioning mode.
14. The positioning system of claim 10, wherein the processor is further configured to select a working mode and control the image acquisition device to acquire the video images of the different positions when the working mode is a preprocessing mode.
15. One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the positioning method of any one of claims 1 to 7.
CN201910032046.4A 2019-01-14 2019-01-14 Positioning method, positioning system, and computer-readable storage medium Pending CN111435538A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910032046.4A CN111435538A (en) 2019-01-14 2019-01-14 Positioning method, positioning system, and computer-readable storage medium


Publications (1)

Publication Number Publication Date
CN111435538A true CN111435538A (en) 2020-07-21

Family

ID=71579913

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910032046.4A Pending CN111435538A (en) 2019-01-14 2019-01-14 Positioning method, positioning system, and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN111435538A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112488007A (en) * 2020-12-04 2021-03-12 深圳市优必选科技股份有限公司 Visual positioning method, device, robot and storage medium
CN112572431A (en) * 2020-12-30 2021-03-30 广州小鹏自动驾驶科技有限公司 Parking lot driving assistance method, system, equipment and storage medium
CN113343830A (en) * 2021-06-01 2021-09-03 上海追势科技有限公司 Method for rapidly repositioning vehicles in underground parking lot
CN113465619A (en) * 2021-06-01 2021-10-01 上海追势科技有限公司 Vehicle fusion positioning method based on detection data of vehicle-mounted looking-around system
CN113532450A (en) * 2021-06-29 2021-10-22 广州小鹏汽车科技有限公司 Virtual parking map data processing method and system
CN114449440A (en) * 2021-12-27 2022-05-06 上海集度汽车有限公司 Measuring method, device and system

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090202102A1 (en) * 2008-02-08 2009-08-13 Hermelo Miranda Method and system for acquisition and display of images
CN105096386A (en) * 2015-07-21 2015-11-25 中国民航大学 Method for automatically generating geographic maps for large-range complex urban environment
CN105528609A (en) * 2014-09-28 2016-04-27 江苏省兴泽实业发展有限公司 Vehicle license plate location method based on character position
CN105607635A (en) * 2016-01-05 2016-05-25 东莞市松迪智能机器人科技有限公司 Panoramic optic visual navigation control system of automatic guided vehicle and omnidirectional automatic guided vehicle
CN105793669A (en) * 2013-12-06 2016-07-20 日立汽车系统株式会社 Vehicle position estimation system, device, method, and camera device
CN105825198A (en) * 2016-03-29 2016-08-03 深圳市佳信捷技术股份有限公司 Pedestrian detection method and device
CN105946853A (en) * 2016-04-28 2016-09-21 中山大学 Long-distance automatic parking system and method based on multi-sensor fusion
CN106403964A (en) * 2016-08-30 2017-02-15 北汽福田汽车股份有限公司 Positioning navigation system and vehicle
CN106646566A (en) * 2017-01-03 2017-05-10 京东方科技集团股份有限公司 Passenger positioning method, device and system
CN107328410A (en) * 2017-06-30 2017-11-07 百度在线网络技术(北京)有限公司 Method and automobile computer for positioning automatic driving vehicle
CN107600067A (en) * 2017-09-08 2018-01-19 中山大学 A kind of autonomous parking system and method based on more vision inertial navigation fusions
WO2019000417A1 (en) * 2017-06-30 2019-01-03 SZ DJI Technology Co., Ltd. Map generation systems and methods
US20190011924A1 (en) * 2017-07-07 2019-01-10 Jianxiong Xiao System and method for navigating an autonomous driving vehicle
CN109186586A (en) * 2018-08-23 2019-01-11 北京理工大学 One kind towards dynamically park environment while position and mixing map constructing method

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090202102A1 (en) * 2008-02-08 2009-08-13 Hermelo Miranda Method and system for acquisition and display of images
CN105793669A (en) * 2013-12-06 2016-07-20 日立汽车系统株式会社 Vehicle position estimation system, device, method, and camera device
CN105528609A (en) * 2014-09-28 2016-04-27 江苏省兴泽实业发展有限公司 Vehicle license plate location method based on character position
CN105096386A (en) * 2015-07-21 2015-11-25 中国民航大学 Method for automatically generating geographic maps for large-range complex urban environment
CN105607635A (en) * 2016-01-05 2016-05-25 东莞市松迪智能机器人科技有限公司 Panoramic optic visual navigation control system of automatic guided vehicle and omnidirectional automatic guided vehicle
CN105825198A (en) * 2016-03-29 2016-08-03 深圳市佳信捷技术股份有限公司 Pedestrian detection method and device
CN105946853A (en) * 2016-04-28 2016-09-21 中山大学 Long-distance automatic parking system and method based on multi-sensor fusion
CN106403964A (en) * 2016-08-30 2017-02-15 北汽福田汽车股份有限公司 Positioning navigation system and vehicle
CN106646566A (en) * 2017-01-03 2017-05-10 京东方科技集团股份有限公司 Passenger positioning method, device and system
CN107328410A (en) * 2017-06-30 2017-11-07 百度在线网络技术(北京)有限公司 Method and automobile computer for positioning automatic driving vehicle
WO2019000417A1 (en) * 2017-06-30 2019-01-03 SZ DJI Technology Co., Ltd. Map generation systems and methods
US20190011924A1 (en) * 2017-07-07 2019-01-10 Jianxiong Xiao System and method for navigating an autonomous driving vehicle
CN107600067A (en) * 2017-09-08 2018-01-19 中山大学 Autonomous parking system and method based on multi-vision and inertial navigation fusion
CN109186586A (en) * 2018-08-23 2019-01-11 北京理工大学 Simultaneous localization and hybrid map construction method for dynamic parking environments

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112488007A (en) * 2020-12-04 2021-03-12 深圳市优必选科技股份有限公司 Visual positioning method, device, robot and storage medium
CN112488007B (en) * 2020-12-04 2023-10-13 深圳市优必选科技股份有限公司 Visual positioning method, device, robot and storage medium
CN112572431A (en) * 2020-12-30 2021-03-30 广州小鹏自动驾驶科技有限公司 Parking lot driving assistance method, system, equipment and storage medium
CN113343830A (en) * 2021-06-01 2021-09-03 上海追势科技有限公司 Method for rapidly repositioning vehicles in underground parking lot
CN113465619A (en) * 2021-06-01 2021-10-01 上海追势科技有限公司 Vehicle fusion positioning method based on detection data of a vehicle-mounted surround-view system
CN113532450A (en) * 2021-06-29 2021-10-22 广州小鹏汽车科技有限公司 Virtual parking map data processing method and system
CN114449440A (en) * 2021-12-27 2022-05-06 上海集度汽车有限公司 Measuring method, device and system
CN114449440B (en) * 2021-12-27 2023-11-17 上海集度汽车有限公司 Measurement method, device and system

Similar Documents

Publication Publication Date Title
CN111435538A (en) Positioning method, positioning system, and computer-readable storage medium
CN110174093B (en) Positioning method, device, equipment and computer readable storage medium
US11474247B2 (en) Methods and systems for color point cloud generation
CN110175498B (en) Providing map semantics of rich information to navigation metric maps
CN108271408B (en) Generating three-dimensional maps of scenes using passive and active measurements
KR101850795B1 (en) Apparatus for Parking and Vehicle
US9940527B2 (en) Driving assist system for vehicle and method thereof
US11061122B2 (en) High-definition map acquisition system
CN110390240B (en) Lane post-processing in an autonomous vehicle
CN107966158B (en) Navigation system and method for indoor parking lot
JP2014096135A (en) Moving surface boundary recognition device, mobile equipment control system using the same, moving surface boundary recognition method, and program for recognizing moving surface boundary
KR102541560B1 (en) Method and apparatus for recognizing object
CN111583335B (en) Positioning system, positioning method, and non-transitory computer readable storage medium
JP2018073275A (en) Image recognition device
CN113435232A (en) Object detection method, device, equipment and storage medium
CN110727269A (en) Vehicle control method and related product
JP5557036B2 (en) Exit determination device, exit determination program, and exit determination method
US20130147983A1 (en) Apparatus and method for providing location information
WO2020036044A1 (en) Image processing device, image processing method, and program
WO2021056185A1 (en) Systems and methods for partially updating high-definition map based on sensor data matching
CN110660113A (en) Method and device for establishing characteristic map, acquisition equipment and storage medium
US20230266469A1 (en) System and method for detecting road intersection on point cloud height map
KR20230068653A (en) Around view system for vehicle
WO2020073268A1 (en) Snapshot image to train roadmodel
WO2020073270A1 (en) Snapshot image of traffic scenario

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination