WO2019056845A1 - Road map generation method, apparatus, electronic device and computer storage medium - Google Patents
Road map generation method, apparatus, electronic device and computer storage medium
- Publication number
- WO2019056845A1 (PCT/CN2018/096332)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- road
- neural network
- map
- channel
- sub
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/20—Drawing from basic elements, e.g. lines or circles
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/38—Electronic maps specially adapted for navigation; Updating thereof
- G01C21/3804—Creation or updating of map data
- G01C21/3807—Creation or updating of map data characterised by the type of data
- G01C21/3815—Road data
- G01C21/3822—Road feature data, e.g. slope data
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/38—Electronic maps specially adapted for navigation; Updating thereof
- G01C21/3804—Creation or updating of map data
- G01C21/3833—Creation or updating of map data characterised by the source of data
- G01C21/3852—Data derived from aerial or satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/29—Geographical information databases
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/18—Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/254—Fusion techniques of classification results, e.g. of results related to same input data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/047—Probabilistic or stochastic networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N7/00—Computing arrangements based on specific mathematical models
- G06N7/01—Probabilistic graphical models, e.g. probabilistic networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/809—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/182—Network patterns, e.g. roads or rivers
Definitions
- The present application relates to the field of data processing technologies, and may relate to the field of image processing technologies; in particular, it relates to a road map generation method, apparatus, electronic device, and computer storage medium.
- Maps enable people to determine their own location or a destination's location at any time, which facilitates travel planning and makes daily life more convenient.
- the embodiment of the present application provides a technical solution for road map generation.
- a road map generating method comprising: inputting a remote sensing image into a first neural network to extract multi-channel first road feature information via the first neural network;
- The multi-channel first road feature information is input into a third neural network to extract multi-channel third road feature information via the third neural network, wherein the third neural network is a neural network trained with road direction information as supervisory information; the first road feature information and the third road feature information are fused; and a road map is generated according to the fusion result.
- The fusing of the first road feature information and the third road feature information comprises: adding or weighted-adding the first road feature information and the third road feature information; or concatenating the first road feature information and the third road feature information in series.
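The three fusion strategies named above (element-wise addition, weighted addition, and concatenation in series) can be sketched as follows. This is an illustrative sketch, not the patent's implementation; the weights in `weighted_add` and the toy feature maps are made-up values, and feature maps are represented as plain nested lists of shape [channels][height][width].

```python
# Sketch of the three fusion strategies for two multi-channel feature maps.
# Feature maps are nested lists: [channels][height][width].

def elementwise_add(a, b):
    """Element-wise addition: output keeps the same channel count."""
    return [[[x + y for x, y in zip(ra, rb)] for ra, rb in zip(ca, cb)]
            for ca, cb in zip(a, b)]

def weighted_add(a, b, wa=0.7, wb=0.3):
    """Weighted element-wise addition; the weights wa/wb are hypothetical."""
    return [[[wa * x + wb * y for x, y in zip(ra, rb)] for ra, rb in zip(ca, cb)]
            for ca, cb in zip(a, b)]

def concat_channels(a, b):
    """Concatenation in series along the channel dimension."""
    return a + b

# Two toy 2-channel, 2x2 feature maps (first and third road feature information).
first = [[[1.0, 2.0], [3.0, 4.0]], [[0.0, 1.0], [1.0, 0.0]]]
third = [[[0.5, 0.5], [0.5, 0.5]], [[1.0, 1.0], [1.0, 1.0]]]

fused_sum = elementwise_add(first, third)   # still 2 channels
fused_wgt = weighted_add(first, third)      # still 2 channels
fused_cat = concat_channels(first, third)   # 4 channels
```

Note the design trade-off: addition keeps the channel count fixed, while concatenation doubles it and leaves the combination to the next network layer.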
- The first neural network includes a second sub-neural network, wherein the second sub-neural network is a neural network trained with allowable road width information as supervisory information; the inputting of the remote sensing image into the first neural network to extract the multi-channel first road feature information via the first neural network includes: inputting the remote sensing image into the second sub-neural network to extract a multi-channel second road feature map via the second sub-neural network; the first road feature information includes the second road feature map.
- The first neural network further comprises a first sub-neural network; the inputting of the remote sensing image into the second sub-neural network to extract a multi-channel second road feature map via the second sub-neural network comprises: inputting the remote sensing image into the first sub-neural network to extract a multi-channel first road feature map via the first sub-neural network; and inputting the multi-channel first road feature map into the second sub-neural network to extract a multi-channel second road feature map via the second sub-neural network.
- The first neural network further includes a third sub-neural network; the extracting of the multi-channel second road feature map via the second sub-neural network further comprises: inputting the multi-channel second road feature map into the third sub-neural network to extract a multi-channel third road feature map via the third sub-neural network; the first road feature information includes the third road feature map.
- The allowable road width information includes an allowable road width range, where the width of each road in the road map falls within the allowable road width range; or the allowable road width information includes a predetermined road width, where the width of each road in the road map is the predetermined road width.
- The generating of a road map according to the fusion result comprises: inputting the fusion result into a fourth neural network to extract multi-channel fourth road feature information via the fourth neural network; and determining the road map according to the multi-channel fourth road feature information.
- The fourth neural network is a neural network trained with the allowable road width information as supervisory information.
- the method further includes: determining a center line of the road in the road map.
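The centerline-determination step above can be sketched with a deliberately crude stand-in: for a binary road map, mark the midpoint of each horizontal run of road pixels. The patent does not specify the algorithm; real pipelines typically use morphological skeletonization, so this pure-Python version is only a minimal illustration.

```python
# Hypothetical centerline sketch: midpoint of each horizontal run of road
# pixels in a binary road map (1 = road, 0 = background).

def centerline(mask):
    h, w = len(mask), len(mask[0])
    center = [[0] * w for _ in range(h)]
    for y in range(h):
        x = 0
        while x < w:
            if mask[y][x] == 1:
                start = x
                while x < w and mask[y][x] == 1:
                    x += 1                      # consume the run of road pixels
                center[y][(start + x - 1) // 2] = 1  # mark the run's midpoint
            else:
                x += 1
    return center

# A 3-pixel-wide vertical road; its centerline is the middle column.
road = [
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
]
center = centerline(road)
```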
- the method further includes: performing vectorization processing on the road map to obtain a road vector image.
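The vectorization step above can be illustrated as follows. This is an assumption-laden sketch, not the patent's method: it turns a one-pixel-wide raster centerline into an ordered point list and drops interior collinear points, leaving a compact road vector (polyline).

```python
# Crude vectorization sketch: raster centerline -> polyline with redundant
# collinear points removed.

def raster_to_points(mask):
    """Collect road pixels top-to-bottom, left-to-right as (x, y) points."""
    return [(x, y) for y, row in enumerate(mask)
            for x, v in enumerate(row) if v == 1]

def simplify(points):
    """Remove interior points that are collinear with their neighbours."""
    if len(points) < 3:
        return points
    out = [points[0]]
    for prev, cur, nxt in zip(points, points[1:], points[2:]):
        # Cross product is zero exactly when prev, cur, nxt are collinear.
        cross = (cur[0] - prev[0]) * (nxt[1] - prev[1]) - \
                (cur[1] - prev[1]) * (nxt[0] - prev[0])
        if cross != 0:
            out.append(cur)
    out.append(points[-1])
    return out

# A straight vertical centerline collapses to its two endpoints.
line = [[0, 1, 0], [0, 1, 0], [0, 1, 0], [0, 1, 0]]
vector = simplify(raster_to_points(line))
```

A vector representation like this is what makes the road map usable for navigation databases, since segments can be stored and queried instead of pixels.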
- The method further includes: acquiring a road direction reference map of a remote sensing image sample; inputting the remote sensing image sample, or a multi-channel road feature map of the remote sensing image sample, into a third neural network to be trained, to extract a multi-channel fourth road feature map via the third neural network to be trained; determining a road direction regression map according to the multi-channel fourth road feature map; and returning a first loss between the road direction regression map and the road direction reference map to the third neural network to be trained, to adjust the network parameters of the third neural network to be trained.
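The first loss above is described only as a loss between the road direction regression map and the road direction reference map; the exact form is not specified, so the sketch below uses a per-pixel mean squared error as one plausible choice, with made-up direction values.

```python
# Hedged sketch of the first loss: MSE between a predicted road direction
# regression map and its reference map (values e.g. angles in radians).

def direction_loss(regression_map, reference_map):
    """Mean squared error over two equal-sized per-pixel direction maps."""
    n, total = 0, 0.0
    for row_p, row_r in zip(regression_map, reference_map):
        for p, r in zip(row_p, row_r):
            total += (p - r) ** 2
            n += 1
    return total / n

pred = [[0.0, 0.5], [1.0, 1.5]]   # illustrative regression map
ref  = [[0.0, 0.5], [1.0, 2.5]]   # illustrative reference map
loss = direction_loss(pred, ref)  # only one pixel differs, by 1.0
```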
- The method further includes: acquiring an equal-width road reference map of a remote sensing image sample; inputting the remote sensing image sample, or a multi-channel road feature map of the remote sensing image sample, into a second sub-neural network to be trained, to extract a multi-channel fifth road feature map via the second sub-neural network to be trained; determining a first road probability map according to the multi-channel fifth road feature map; and returning a second loss between the first road probability map and the equal-width road reference map to the second sub-neural network to be trained, to adjust the network parameters of the second sub-neural network to be trained.
- The method further includes: acquiring an equal-width road reference map of a remote sensing image sample; inputting the remote sensing image sample, or a multi-channel road feature map of the remote sensing image sample, into a fourth neural network to be trained, to extract a multi-channel sixth road feature map via the fourth neural network to be trained; determining a second road probability map according to the multi-channel sixth road feature map; and returning a third loss between the second road probability map and the equal-width road reference map to the fourth neural network to be trained, to adjust the network parameters of the fourth neural network to be trained.
- The method further includes: acquiring an equal-width road reference map and a road direction reference map of a remote sensing image sample; inputting the remote sensing image sample, or a multi-channel road feature map of the remote sensing image sample, into a second sub-neural network to be trained, to extract a multi-channel fifth road feature map via the second sub-neural network to be trained, and determining a first road probability map according to the multi-channel fifth road feature map; inputting the remote sensing image sample, or the multi-channel road feature map of the remote sensing image sample, into a third neural network to be trained, to extract a multi-channel fourth road feature map via the third neural network to be trained, and determining a road direction regression map according to the multi-channel fourth road feature map; inputting the remote sensing image sample, or the multi-channel road feature map of the remote sensing image sample, into a fourth neural network to be trained, to extract a multi-channel sixth road feature map via the fourth neural network to be trained, and determining a second road probability map according to the multi-channel sixth road feature map; and returning a first loss between the road direction regression map and the road direction reference map, a second loss between the first road probability map and the equal-width road reference map, and a third loss between the second road probability map and the equal-width road reference map, respectively, to a neural network system including the third neural network, the second sub-neural network, and the fourth neural network, to jointly adjust the network parameters of the neural network system.
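The joint adjustment described above combines three losses into one training signal for the whole neural network system. The combination rule is not spelled out in the text; a weighted sum is the usual choice, so the sketch below assumes one, with hypothetical unit weights.

```python
# Sketch of the joint training signal: the first loss (direction), second
# loss (first road probability map), and third loss (second road probability
# map) combined into a single objective. Weights w1/w2/w3 are assumptions.

def joint_loss(first_loss, second_loss, third_loss,
               w1=1.0, w2=1.0, w3=1.0):
    return w1 * first_loss + w2 * second_loss + w3 * third_loss

# Illustrative loss values from the three branches of the system.
total = joint_loss(0.25, 0.10, 0.40)
```

Minimizing one combined objective is what lets the third neural network, second sub-neural network, and fourth neural network be adjusted jointly rather than one at a time.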
- A road map generating apparatus comprising: a first road feature information acquiring unit, configured to input a remote sensing image into a first neural network to extract multi-channel first road feature information via the first neural network; a third road feature information acquiring unit, configured to input the multi-channel first road feature information into a third neural network to extract multi-channel third road feature information via the third neural network, wherein the third neural network is a neural network trained with at least road direction information as supervisory information; an information fusion unit, configured to fuse the first road feature information and the third road feature information; and a road map generation unit, configured to generate a road map based on the fusion result.
- The information fusion unit is configured to: add or weighted-add the first road feature information and the third road feature information; or concatenate the first road feature information and the third road feature information.
- The first neural network includes a second sub-neural network, wherein the second sub-neural network is a neural network trained with allowable road width information as supervisory information; the first road feature information acquiring unit includes a first acquiring subunit, configured to input the remote sensing image into the second sub-neural network to extract a multi-channel second road feature map via the second sub-neural network; the first road feature information includes the second road feature map.
- the first neural network further includes: a first sub-neural network;
- The first road feature information acquiring unit further includes: a first acquiring subunit, configured to input the remote sensing image into the first sub-neural network to extract a multi-channel first road feature map via the first sub-neural network;
- a second acquiring subunit, configured to input the multi-channel first road feature map into the second sub-neural network to extract a multi-channel second road feature map via the second sub-neural network.
- the first neural network further includes: a third sub-neural network;
- The first road feature information acquiring unit further includes: a third acquiring subunit, configured to input the multi-channel second road feature map into the third sub-neural network to extract a multi-channel third road feature map via the third sub-neural network; the first road feature information includes the third road feature map.
- The allowable road width information includes an allowable road width range, where the width of at least one road in the road map falls within the allowable road width range; or the allowable road width information includes a predetermined road width, where the width of at least one road in the road map is the predetermined road width.
- The road map generation unit includes: a fourth acquisition subunit, configured to input the fusion result into a fourth neural network to extract multi-channel fourth road feature information via the fourth neural network; and a road map determining subunit, configured to determine the road map based on the multi-channel fourth road feature information.
- The fourth neural network is a neural network trained with the allowable road width information as supervisory information.
- the road map generation unit further includes a center line determination subunit for determining a center line of the road in the road map.
- the road map generation unit further includes: a road vector map acquisition subunit, configured to perform vectorization processing on the road map to obtain a road vector map.
- The method further includes a training unit of the third neural network, configured to: acquire a road direction reference map of the remote sensing image for training; input the remote sensing image for training, or a multi-channel road feature map thereof, into a third neural network to be trained, to extract a multi-channel fourth road feature map via the third neural network to be trained; determine a road direction regression map according to the multi-channel fourth road feature map; and return a first loss between the road direction regression map and the road direction reference map to the third neural network to be trained, to adjust the network parameters of the third neural network to be trained.
- The method further includes a training unit of the second sub-neural network, configured to: input the remote sensing image for training, or a multi-channel road feature map thereof, into a second sub-neural network to be trained, to extract a multi-channel fifth road feature map via the second sub-neural network to be trained; determine a first road probability map according to the multi-channel fifth road feature map; and return a second loss between the first road probability map and the equal-width road reference map to the second sub-neural network to be trained, to adjust the network parameters of the second sub-neural network to be trained.
- The method further includes a training unit of the fourth neural network, configured to: acquire an equal-width road reference map of the remote sensing image for training; input the remote sensing image for training, or a multi-channel road feature map thereof, into a fourth neural network to be trained, to extract a multi-channel sixth road feature map via the fourth neural network to be trained; determine a second road probability map according to the multi-channel sixth road feature map; and return a third loss between the second road probability map and the equal-width road reference map to the fourth neural network to be trained, to adjust the network parameters of the fourth neural network to be trained.
- The first loss obtained by the training unit of the third neural network, the second loss obtained by the training unit of the second sub-neural network, and the third loss obtained by the training unit of the fourth neural network are respectively returned to a neural network system including the third neural network, the second sub-neural network, and the fourth neural network, to jointly adjust the network parameters of the neural network system.
- An electronic device comprising: a memory for storing executable instructions; and a processor for communicating with the memory to execute the executable instructions so as to complete the road map generation method according to any one of the above embodiments of the present application.
- A computer storage medium for storing computer readable instructions that, when executed, perform the operations of the road map generation method according to any one of the above embodiments of the present application.
- A computer program comprising computer readable code which, when run on a device, causes a processor in the device to execute the road map generation method according to any one of the above embodiments of the present application.
- The road map generation method, apparatus, electronic device, and computer storage medium provided by the embodiments of the present application input a remote sensing image into a first neural network to obtain multi-channel first road feature information; then input the multi-channel first road feature information into a third neural network to obtain multi-channel third road feature information, wherein the third neural network is a neural network trained with road direction information as supervisory information; and then fuse the first road feature information and the third road feature information and generate a road map according to the fusion result, thereby improving the accuracy of extracting road direction features from the remote sensing image.
- FIG. 1 is an exemplary system architecture diagram in which an embodiment of the present application can be applied;
- FIG. 2 is a flow chart of a road map generation method according to an embodiment of the present application.
- FIG. 3a is a schematic diagram of an application scenario of a road map generation method according to an embodiment of the present application.
- Figure 3b is a road map obtained after extracting road features from Figure 3a;
- FIG. 4 is a schematic structural diagram of a road map generating device according to an embodiment of the present application.
- FIG. 5 is a schematic structural diagram of a server according to an embodiment of the present application.
- Embodiments of the invention may be applied to electronic devices such as terminal devices, computer systems, servers, etc., which may operate with numerous other general purpose or special purpose computing system environments or configurations.
- Examples of well-known terminal devices, computing systems, environments, and/or configurations suitable for use with electronic devices such as terminal devices, computer systems, and servers include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, networked personal computers, small computer systems, mainframe computer systems, and distributed cloud computing technology environments including any of the above, and the like.
- Electronic devices such as terminal devices, computer systems, servers, etc., can be described in the general context of computer system executable instructions (such as program modules) being executed by a computer system.
- Program modules may include routines, programs, object programs, components, logic, data structures, and the like that perform particular tasks or implement particular abstract data types.
- the computer system/server can be implemented in a distributed cloud computing environment where tasks are performed by remote processing devices that are linked through a communication network.
- program modules may be located on a local or remote computing system storage medium including storage devices.
- FIG. 1 illustrates an exemplary system architecture 100 in which a road map generation method and a road map generation device of an embodiment of the present application can be applied.
- The system architecture 100 can include a terminal device 101 (e.g., an aircraft), a terminal device 102 (e.g., a satellite), a network 103, and an electronic device 104.
- the network 103 is used to provide a medium for communication links between the terminal devices 101, 102 and the electronic device 104.
- Network 103 may include various types of connections, such as wired, wireless communication links, fiber optic cables, and the like.
- the user can interact with the electronic device 104 via the network 103 using the terminal devices 101, 102 to receive or transmit image information and the like.
- The terminal devices 101 and 102 are platforms for carrying sensors; commonly used platforms include balloons, aircraft, and artificial satellites.
- Through remote sensing, the electromagnetic wave characteristics of a target object are acquired from a long distance, and the resulting image information is transmitted, stored, corrected, and recognized, so as to finally achieve the intended functions (for example, a timing function, a positioning function, a qualitative function, and a quantitative function).
- the sensor may be, for example, an instrument for detecting electromagnetic wave characteristics of a target, and a camera, a scanner, and an imaging radar are commonly used.
- the electronic device 104 may be a server that provides various services, such as a background image processing server that acquires remote sensing images from sensors mounted on the terminal devices 101, 102.
- the background image processing server may perform processing such as analysis of the received remote sensing image and the like, and output the processing result (for example, the object detection result).
- The road map generating method provided by the embodiment of the present application may be executed by the electronic device 104. Accordingly, the road map generating device may be disposed in the electronic device 104.
- The number of terminal devices, networks, and electronic devices in FIG. 1 is merely illustrative. Depending on the needs of the implementation, there can be any number of terminal devices, networks, and electronic devices.
- the road map generation method includes:
- Step 201 Input a remote sensing image into the first neural network to extract the first road feature information of the multiple channels via the first neural network.
- The electronic device (for example, the electronic device 104 shown in FIG. 1) may acquire the remote sensing image through a wired connection or a wireless connection.
- The foregoing wireless connection manner may include, but is not limited to, a 3G/4G connection, a WiFi connection, a Bluetooth connection, a wireless metropolitan area network (WiMAX) connection, a wireless personal area network (ZigBee) connection, and an ultra wideband (UWB) connection.
- the remote sensing image is imported into the first neural network, and the first neural network is capable of extracting the multi-channel first road feature information from the remote sensing image.
- the first road feature information may be, for example, road feature information including road width extracted from the remote sensing image.
- The first neural network may include a second sub-neural network, wherein the second sub-neural network may be a neural network trained with allowable road width information as supervisory information. The inputting of the remote sensing image into the first neural network to extract the multi-channel first road feature information via the first neural network may include: inputting the remote sensing image into the second sub-neural network to extract a multi-channel second road feature map via the second sub-neural network.
- the first road feature information includes the second road feature map.
- The remote sensing image may be directly input into the second sub-neural network. Since the second sub-neural network is a neural network trained with the allowable road width information as supervisory information, it can identify road images in the remote sensing image and extract a multi-channel second road feature map of allowable width from the remote sensing image.
- The second sub-neural network may include multiple convolution layers, each of which may be followed by a normalization layer and a nonlinear layer, and finally a classification layer whose convolution kernel is of a set size, after which the multi-channel second road feature map is output.
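The repeated convolution, normalization, nonlinearity pattern described above can be sketched in miniature. This is a toy 1-D, single-channel stand-in, not the patent's network: the kernel values and input signal are made up, and real implementations would use a deep-learning framework's 2-D layers.

```python
# Toy 1-D sketch of one "convolution -> normalization -> nonlinear" stage.

def conv1d(signal, kernel):
    """Valid (no-padding) 1-D convolution."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def normalize(xs):
    """Zero-mean, unit-variance normalization over the layer output."""
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return [(x - mean) / (var ** 0.5 + 1e-8) for x in xs]

def relu(xs):
    """Nonlinear layer: rectified linear unit."""
    return [max(0.0, x) for x in xs]

# Made-up smoothing kernel applied to a made-up single-channel signal.
layer_out = relu(normalize(conv1d([0.0, 1.0, 2.0, 3.0, 2.0, 1.0],
                                  [0.25, 0.5, 0.25])))
```

In practice each stage would be a 2-D convolution over multi-channel feature maps, with the final set-size kernel acting as the per-pixel classification layer.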
- the first neural network may include: a first sub-neural network and a second sub-neural network.
- The inputting of the remote sensing image into the first neural network to extract the multi-channel first road feature information via the first neural network may include: inputting the remote sensing image into the first sub-neural network to extract a multi-channel first road feature map via the first sub-neural network; and inputting the multi-channel first road feature map into the second sub-neural network to extract a multi-channel second road feature map via the second sub-neural network, wherein the second sub-neural network is a neural network trained with allowable road width information as supervisory information.
- the first road feature information includes the second road feature map.
- the first neural network may include a first sub-neural network and a second sub-neural network.
- the first sub-neural network may, for example, extract a multi-channel first road feature map from the remote sensing image by means of convolution and downsampling.
- The first road feature map is then input to the second sub-neural network to obtain a second road feature map including multiple channels of allowable width.
- the first neural network may include: a first sub-neural network, a second sub-neural network, and a third sub-neural network.
- The inputting of the remote sensing image into the first neural network to extract the multi-channel first road feature information via the first neural network may include: inputting the remote sensing image into the first sub-neural network to extract a multi-channel first road feature map via the first sub-neural network; inputting the multi-channel first road feature map into the second sub-neural network to extract a multi-channel second road feature map via the second sub-neural network, wherein the second sub-neural network is a neural network trained with the road width information as supervisory information; and inputting the multi-channel second road feature map into the third sub-neural network to extract a multi-channel third road feature map via the third sub-neural network.
- The first road feature information includes the third road feature map.
- the first neural network may further include a first sub-neural network, a second sub-neural network, and a third sub-neural network.
- the first sub-neural network and the second sub-neural network may be the same as described in the above implementation manner.
- the multi-channel second road feature map can be input to the third sub-neural network, which denoises the second road feature map and outputs a multi-channel third road feature map. With this embodiment, a smooth road of equal width can be obtained, and the burr phenomenon occurring in the extracted road feature map due to obstacle occlusion in the remote sensing image, image sharpness, extraction precision, and the like can be alleviated.
- the allowable road width information may be an allowable road width range, and the width of at least one road (e.g., each road) in the road map falls within the allowable road width range; or the allowable road width information may be a predetermined road width, and the width of at least one road (e.g., each road) in the road map is the predetermined road width.
- the remote sensing images may be photographed at different heights.
- an allowable road width range may be set, and a width of part or all of the roads in the road map falls within the allowable road width range.
- alternatively, the road width may be set to a predetermined road width such that the width of some or all of the roads in the road map is the predetermined road width.
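The two forms of allowable road width information described above can be sketched as a simple validity check. The function name and parameters (`allowed_range`, `fixed_width`) are illustrative, not from this application.

```python
def road_widths_valid(widths, allowed_range=None, fixed_width=None):
    """Check road widths against allowable road width information.

    `allowed_range` is an inclusive (lo, hi) tuple (the allowable road
    width range); `fixed_width` is a single predetermined road width.
    """
    if allowed_range is not None:
        lo, hi = allowed_range
        return all(lo <= w <= hi for w in widths)
    if fixed_width is not None:
        return all(w == fixed_width for w in widths)
    return True  # no constraint supplied
```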
- the method may further include training the second sub-neural network, which may include, for example, the following steps:
- an equal-width road reference map (ground truth) of remote sensing image samples (i.e., remote sensing images used for training) is obtained.
- the equal-width road reference map may be a remote-sensing image pre-marked with a road of equal width, and used as supervisory information in the training process for the second sub-neural network.
- the multi-channel road feature map of the remote sensing image sample or the remote sensing image sample is input into the second sub-neural network to be trained, and the multi-channel fifth road feature map is extracted through the second sub-neural network to be trained.
- the training data of the second sub-neural network may be a remote sensing image sample or a multi-channel road feature map extracted from the remote sensing image sample; the training data is input to the second sub-neural network to be trained, which extracts corresponding road width feature information from the training data and obtains a corresponding multi-channel fifth road feature map.
- the first road probability map is determined according to the multi-channel fifth road feature map.
- the fifth road feature map may be subjected to image processing to determine the first road probability map.
- the first road probability map is used to represent a probability that at least one pixel point (for example, each pixel point) in the fifth road feature map belongs to the road.
- the first road probability map may be normalized and then processed.
- the second loss between the first road probability map and the equal-width road reference map is returned to the second sub-neural network to be trained, to adjust the network parameters of the second sub-neural network to be trained.
- the above-described equal-width road reference map can be considered as an effect map in an ideal state.
- this error can be regarded as the second loss; the second loss is transmitted back to the second sub-neural network to be trained, whose network parameters are adjusted to reduce the second loss, improving the accuracy with which the second sub-neural network to be trained extracts contour features of the road.
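This application does not give the second loss a concrete form. A minimal sketch, assuming a mean-squared-error loss between the first road probability map and the binary equal-width road reference map, might look like the following (a cross-entropy loss would be an equally plausible alternative):

```python
def second_loss(prob_map, reference_map):
    """Mean squared error between a road probability map (values in [0, 1])
    and a binary equal-width road reference map of the same shape."""
    total, count = 0.0, 0
    for p_row, r_row in zip(prob_map, reference_map):
        for p, r in zip(p_row, r_row):
            total += (p - r) ** 2
            count += 1
    return total / count
```

During training, this scalar would be back-propagated through the second sub-neural network so that its parameters move to reduce the loss.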
- the step 201 may be performed by a processor invoking a corresponding instruction stored in the memory, or may be performed by the first road feature information obtaining unit 401 being executed by the processor.
- Step 202 Input the first road feature information of the multi-channel into the third neural network, to extract the third road feature information of the multi-channel via the third neural network.
- the first road feature information may be input to the third neural network to obtain the third road feature information.
- the third road feature information may be feature information that adds direction information of the road based on the first road feature information.
- the third neural network is a neural network that is completed with at least road direction information as supervisory information training.
- the training method of the third neural network may also be included, for example, the following steps may be included:
- a road direction reference map (groundtruth) of the remote sensing image sample is obtained.
- the road direction reference map may be a remote sensing image with a road direction pre-marked, and the pre-marking manner may be a manual marking, a machine marking, or other methods.
- the multi-channel road feature map of the remote sensing image sample or the remote sensing image sample is input into the third neural network to be trained, and the fourth road feature map of the multi-channel is extracted through the third neural network to be trained.
- the training data of the third neural network may be a remote sensing image sample, or may be a multi-channel road feature map extracted from the remote sensing image sample; the training data is input into the third neural network to be trained, which may extract corresponding directional feature information from the training data and obtain a corresponding multi-channel fourth road feature map.
- the road direction regression map is determined according to the multi-channel fourth road feature map.
- the fourth road feature map may be subjected to image processing to determine a road direction regression map.
- the road direction regression map represents the values of the corresponding pixels of the multi-channel feature map, and subsequent processing may be performed directly without normalization.
- the value of a single pixel in the road direction regression map may be a number from 0 to 180, indicating the angle by which the road direction at that pixel is offset relative to a reference direction.
- the first loss between the road direction regression map and the road direction reference map is returned to the third neural network to be trained to adjust network parameters of the third neural network to be trained.
- the road direction reference map is an effect diagram of the road direction in an ideal state.
- this error can be regarded as the first loss; the first loss is transmitted back to the third neural network to be trained, whose network parameters can be adjusted to reduce the first loss, improving the accuracy with which the third neural network to be trained extracts road direction features.
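Since the regression map holds per-pixel angles in the range 0 to 180 relative to a reference direction, the first loss can be sketched as a wrap-around angular error (an undirected road at 179 degrees is nearly parallel to one at 1 degree). The exact loss form is an assumption; this application does not specify it.

```python
def angle_diff(a, b):
    """Smallest difference between two undirected road angles, in degrees."""
    d = abs(a - b) % 180
    return min(d, 180 - d)

def first_loss(regression_map, reference_map):
    """Mean wrap-around angular error over all pixels of two angle maps."""
    diffs = [angle_diff(a, b)
             for reg_row, ref_row in zip(regression_map, reference_map)
             for a, b in zip(reg_row, ref_row)]
    return sum(diffs) / len(diffs)
```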
- the step 202 may be performed by a processor invoking a corresponding instruction stored in the memory, or may be performed by a third road feature information acquisition unit 402 executed by the processor.
- Step 203 The first road feature information and the third road feature information are merged.
- the first road feature information may be road feature information of a certain road width extracted from the remote sensing image, and the third road feature information may be based on the first road feature information, and the feature information of the direction information of the road is added.
- fusing the first road feature information and the third road feature information enables the resulting road feature information to have both the road width feature and the road direction feature.
- the fusing the first road feature information and the third road feature information may include: adding or weighted-adding the first road feature information and the third road feature information; or concatenating the first road feature information and the third road feature information.
- the first road feature information may be road feature information of a certain road width extracted from the remote sensing image. Therefore, the first road feature information may be an image including a certain road width.
- the third road feature information may be an image including direction information of the road.
- the fusion of the first road feature information and the third road feature information can be realized by directly adding the pixels in the image corresponding to the first road feature information to the pixels in the image corresponding to the third road feature information, or by weighted addition according to certain weights.
- alternatively, the image corresponding to the first road feature information and the image corresponding to the third road feature information may be concatenated to realize the fusion of the first road feature information and the third road feature information.
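The three fusion options named above (addition, weighted addition, concatenation) can be sketched for single-channel feature maps represented as lists of lists; the `mode` and `weight` parameters are illustrative names, not from this application.

```python
def fuse(first, third, mode="add", weight=0.5):
    """Fuse two same-shaped single-channel feature maps."""
    if mode == "add":       # element-wise addition
        return [[a + b for a, b in zip(r1, r3)]
                for r1, r3 in zip(first, third)]
    if mode == "weighted":  # weighted element-wise addition
        return [[weight * a + (1 - weight) * b for a, b in zip(r1, r3)]
                for r1, r3 in zip(first, third)]
    if mode == "concat":    # channel-wise concatenation: stack as channels
        return [first, third]
    raise ValueError(mode)
```

Note that addition preserves the channel count while concatenation doubles it, which is why a network consuming the fused result must know which fusion mode was used.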
- the step 203 may be performed by a processor invoking a corresponding instruction stored in the memory or by an information fusion unit 403 being executed by the processor.
- Step 204 Generate a road map according to the fusion result.
- the road feature information can be provided with both the road width feature and the road direction feature.
- a road map can be generated based on the road width feature and the directional characteristics of the road.
- the generating the road map according to the fusion result may include: inputting the fusion result into the fourth neural network to extract multi-channel fourth road feature information via the fourth neural network; and determining a road map based on the multi-channel fourth road feature information.
- the fusion result of the first road feature information and the third road feature information is input to the fourth neural network, which combines the road width feature and the road direction feature to obtain the multi-channel fourth road feature information, and the road map is determined based on the multi-channel fourth road feature information.
- the fourth neural network is a neural network trained with allowable road width information as supervisory information.
- the method may further include: determining a center line of the road in the road map.
- the center line can improve the accuracy of automatic or assisted driving control such as navigation, steering, and lane keeping.
- in existing methods, due to factors such as obstacle occlusion in the remote sensing image, image sharpness, and extraction precision, the extraction of the center line at road intersections is poor, and burrs and insufficient smoothness may occur.
- with this embodiment, a smooth center line can be extracted, which alleviates the burrs and insufficient smoothness that obstacle occlusion in the remote sensing image, image sharpness, extraction precision, and the like cause in the center line extracted at road intersections.
- the method further includes: performing vectorization processing on the road map to obtain a road vector map. Through the road vector map, control commands for automatic or assisted driving control such as navigation, steering, and lane keeping can be generated.
- the occluded road may be supplemented by information such as the road width feature and the direction feature of the road to improve the accuracy of the road in the road map.
- the training method of the fourth neural network may also be included, for example, the following steps may be included:
- an equal-width road reference map of remote sensing image samples (i.e., remote sensing images used for training) is obtained.
- the multi-channel road feature map of the remote sensing image sample or the remote sensing image sample is input into the fourth neural network to be trained, and the sixth road feature map of the multi-channel is extracted through the fourth neural network to be trained.
- the second road probability map is determined according to the multi-channel sixth road feature map.
- the third loss between the second road probability map and the equal-width road reference map is returned to the fourth neural network to be trained, to adjust the network parameters of the fourth neural network to be trained.
- the training process of the fourth neural network is similar to that of the second sub-neural network described above; the related descriptions can be referred to mutually and will not be repeated here.
- the first loss, the second loss, and the third loss may be respectively returned to a neural network system including the third neural network, the second sub-neural network, and the fourth neural network, to jointly adjust the network parameters of the neural network system. For example, the following steps may be included:
- the first loss between the road direction regression map and the road direction reference map, the second loss between the first road probability map and the equal-width road reference map, and the third loss between the second road probability map and the equal-width road reference map are respectively returned to the neural network system including the third neural network, the second sub-neural network, and the fourth neural network, to jointly adjust the network parameters of the neural network system.
- jointly adjusting the network parameters of the neural network system including the third neural network, the second sub-neural network, and the fourth neural network improves the accuracy of the road width and direction in the generated road map.
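The joint adjustment described above can be sketched as combining the three losses into one scalar that is back-propagated through the combined system. The equal default weights are an assumption; this application only states that the three losses are respectively returned to jointly adjust the network parameters.

```python
def joint_loss(first_loss, second_loss, third_loss, weights=(1.0, 1.0, 1.0)):
    """Combine the direction loss and the two equal-width road losses into a
    single training objective for the joint neural network system."""
    w1, w2, w3 = weights
    return w1 * first_loss + w2 * second_loss + w3 * third_loss
```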
- the step 204 may be performed by a processor invoking a corresponding instruction stored in the memory, or may be performed by a road map generation unit 404 that is executed by the processor.
- FIG. 3a is a schematic diagram of an application scenario of a road map generation method according to the present embodiment.
- Figure 3a is an actual remote sensing image. It can be seen that Figure 3a contains information such as roads, buildings, and trees.
- the remote sensing image may be first input into the first neural network to obtain the first road feature information; then the first road feature information of the multi-channel is input into the third neural network to obtain the multi-channel Three road feature information; after that, the first road feature information and the third road feature information are merged, and a road map is generated according to the fusion result, as shown in FIG. 3b.
- the method provided by the embodiment of the present application improves the accuracy of extracting remote sensing images on road width features and road direction features.
- Any road map generating method provided by the embodiment of the present invention may be performed by any suitable device having data processing capability, including but not limited to: a terminal device, a server, and the like.
- any road map generating method provided by the embodiment of the present invention may be executed by a processor, such as the processor executing any road map generating method mentioned in the embodiment of the present invention by calling corresponding instructions stored in the memory. This will not be repeated below.
- the foregoing program may be stored in a computer readable storage medium; when executed, the program performs the steps of the foregoing method embodiments. The foregoing storage medium includes: a medium that can store program codes, such as a ROM, a RAM, a magnetic disk, or an optical disk.
- the embodiment of the present application provides a road map generating device; this device embodiment corresponds to the method embodiment shown in FIG. 2, and the device can be applied in a variety of electronic devices.
- the road map generating apparatus 400 of the present embodiment may include a first road feature information acquiring unit 401, a third road feature information acquiring unit 402, an information fusion unit 403, and a road map generating unit 404.
- the first road feature information acquiring unit 401 is configured to input the remote sensing image into the first neural network to extract the first road feature information of the multiple channels via the first neural network;
- the third road feature information acquiring unit 402 is configured to input the multi-channel first road feature information into the third neural network to extract the multi-channel third road feature information via the third neural network, wherein the third neural network is a neural network trained with the road direction information as the supervisory information;
- the information fusion unit 403 is configured to fuse the first road feature information and the third road feature information;
- the road map generating unit 404 is configured to generate a road map according to the fusion result.
- the information fusion unit 403 may be configured to: add or weighted-add the first road feature information and the third road feature information; or concatenate the first road feature information and the third road feature information.
- the first neural network may include: a second sub-neural network, wherein the second sub-neural network is a neural network trained with allowable road width information as supervisory information.
- the first road feature information acquiring unit 401 may include: a first acquiring subunit (not shown in the figure), configured to input the remote sensing image into the second sub-neural network to extract a multi-channel second road feature map via the second sub-neural network.
- the first road feature information includes the second road feature map.
- the first neural network may include: a first sub-neural network and a second sub-neural network; the first road feature information acquiring unit 401 may include: a first acquiring subunit (not shown in the figure) and a second acquiring subunit (not shown in the figure).
- the first acquiring subunit is configured to input the remote sensing image into the first sub-neural network to extract a multi-channel first road feature map via the first sub-neural network; the second acquiring subunit is configured to input the multi-channel first road feature map into the second sub-neural network to extract a multi-channel second road feature map via the second sub-neural network, wherein the second sub-neural network is a neural network trained with allowable road width information as supervisory information.
- the first road feature information includes the second road feature map.
- the first neural network may include: a first sub-neural network, a second sub-neural network, and a third sub-neural network; the first road feature information acquiring unit 401 may include: a first acquiring subunit (not shown in the figure), a second acquiring subunit (not shown in the figure), and a third acquiring subunit (not shown in the figure).
- the first acquiring subunit is configured to input the remote sensing image into the first sub-neural network to extract a multi-channel first road feature map via the first sub-neural network; the second acquiring subunit is configured to input the multi-channel first road feature map into the second sub-neural network to extract a multi-channel second road feature map via the second sub-neural network, wherein the second sub-neural network is a neural network trained with allowable road width information as supervisory information; the third acquiring subunit is configured to input the multi-channel second road feature map into the third sub-neural network to extract a multi-channel third road feature map via the third sub-neural network.
- the first road feature information includes the third road feature map.
- the allowable road width information may be an allowable road width range, and the width of each road in the road map falls within the allowable road width range; or the allowable road width information may be a predetermined road width, and the width of each road in the road map is the predetermined road width.
- the road map generating unit 404 may include: a fourth road feature information acquiring subunit (not shown in the figure) and a road map determining subunit (not shown in the figure).
- the fourth road feature information acquiring subunit is configured to input the fusion result into the fourth neural network to extract the fourth road feature information of the multiple channels via the fourth neural network; the road map determining subunit is used to The fourth road feature information of the multi-channel determines the road map.
- the fourth neural network is a neural network trained with allowable road width information as supervisory information.
- the road map generating unit 404 may further include: a center line determining subunit (not shown) for determining a center line of the road in the road map.
- the road map generating unit 404 may further include: a road vector map acquiring subunit (not shown) for performing vectorization processing on the road map to obtain a road vector map.
- the training unit (not shown) of the third neural network is configured to: acquire a road direction reference map of the remote sensing image for training; input the remote sensing image for training or its multi-channel road feature map into the third neural network to be trained, to extract a multi-channel fourth road feature map via the third neural network to be trained; determine a road direction regression map according to the multi-channel fourth road feature map; and return the first loss between the road direction regression map and the road direction reference map to the third neural network to be trained, to adjust the network parameters of the third neural network to be trained.
- the training unit (not shown) of the second sub-neural network is configured to: acquire an equal-width road reference map of the remote sensing image for training; input the remote sensing image for training or its multi-channel road feature map into the second sub-neural network to be trained, to extract a multi-channel fifth road feature map via the second sub-neural network to be trained; determine the first road probability map according to the multi-channel fifth road feature map; and return the second loss between the first road probability map and the equal-width road reference map to the second sub-neural network to be trained, to adjust the network parameters of the second sub-neural network to be trained.
- the training unit (not shown) of the fourth neural network may be configured to: acquire an equal-width road reference map of the remote sensing image for training; input the remote sensing image for training or its multi-channel road feature map into the fourth neural network to be trained, to extract a multi-channel sixth road feature map via the fourth neural network to be trained; determine a second road probability map according to the multi-channel sixth road feature map; and return the third loss between the second road probability map and the equal-width road reference map to the fourth neural network to be trained, to adjust the network parameters of the fourth neural network to be trained.
- the training unit of the second sub-neural network, the training unit of the third neural network, and the training unit of the fourth neural network may also be included, wherein:
- the training unit of the second sub-neural network is configured to: acquire an equal-width road reference map of the remote sensing image for training; input the remote sensing image for training or its multi-channel road feature map into the second sub-neural network to be trained, to extract a multi-channel fifth road feature map via the second sub-neural network to be trained; determine the first road probability map according to the multi-channel fifth road feature map; and return the second loss between the first road probability map and the equal-width road reference map to a neural network system including the third neural network, the second sub-neural network, and the fourth neural network;
- the training unit of the third neural network is configured to: acquire a road direction reference map of the remote sensing image for training; input the remote sensing image for training or its multi-channel road feature map into the third neural network to be trained, to extract a multi-channel fourth road feature map via the third neural network to be trained; determine a road direction regression map according to the multi-channel fourth road feature map; and return the first loss between the road direction regression map and the road direction reference map to a neural network system including the third neural network, the second sub-neural network, and the fourth neural network;
- the training unit of the fourth neural network is configured to: acquire an equal-width road reference map of the remote sensing image for training; input the remote sensing image for training or its multi-channel road feature map into the fourth neural network to be trained, to extract a multi-channel sixth road feature map via the fourth neural network to be trained; determine a second road probability map according to the multi-channel sixth road feature map; and return the third loss between the second road probability map and the equal-width road reference map to a neural network system including the third neural network, the second sub-neural network, and the fourth neural network, to adjust the network parameters of the neural network system jointly with the first loss and the second loss.
- An embodiment of the present application provides an electronic device, including: a memory for storing executable instructions; and a processor configured to communicate with the memory to execute the executable instructions to complete the operations of the road map generation method described in any of the above embodiments.
- the embodiment of the present application provides a computer storage medium for storing computer readable instructions, and when the instructions are executed, performing the operation of the road map generation method according to any of the above embodiments.
- the embodiment of the present application provides a computer program, including computer readable code, when the computer readable code is run on a device, the processor in the device executes a road map generation method for implementing any of the above embodiments. Operation.
- FIG. 5 there is shown a block diagram of a server 500 suitable for use in implementing the embodiments of the present application.
- the server 500 includes a central processing unit (CPU) 501, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage portion 508 into a random access memory (RAM) 503.
- in the RAM 503, various programs and data required for the operation of the server 500 are also stored.
- the CPU 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504.
- ROM 502 is an optional module.
- the RAM 503 stores executable instructions, or writes executable instructions to the ROM 502 at runtime, and the executable instructions cause the CPU 501 to execute operations corresponding to the road map generating method of any of the above embodiments.
- An input/output (I/O) interface 505 is also coupled to bus 504.
- the communication unit 512 may be integrated, or may be provided with a plurality of sub-modules (for example, a plurality of IB network cards) linked on the bus.
- the following components are connected to the I/O interface 505: an input portion 506 including a keyboard, a mouse, etc.; an output portion 507 including a liquid crystal display (LCD), a speaker, etc.; a storage portion 508 including a hard disk or the like; and a communication portion 509 including a network interface card such as a LAN card or a modem. The communication portion 509 performs communication processing via a network such as the Internet.
- Driver 510 is also coupled to I/O interface 505 as needed.
- a removable medium 511 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory or the like is mounted on the drive 510 as needed so that a computer program read therefrom is installed into the storage portion 508 as needed.
- an embodiment of the present disclosure includes a computer program product comprising a computer program tangibly embodied on a machine readable medium, the computer program comprising program code for executing the road map generation method shown in the flowchart.
- the computer program can be downloaded and installed from the network via the communication portion 509, and/or installed from the removable medium 511.
- each block of the flowchart or block diagrams can represent a module, a program segment, or a portion of code that includes one or more executable instructions.
- the functions noted in the blocks may also occur in a different order than that illustrated in the drawings. For example, two successively represented blocks may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending upon the functionality involved.
- each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts can be implemented in a dedicated hardware-based system that performs the specified function or operation. Or it can be implemented by a combination of dedicated hardware and computer instructions.
- the units involved in the embodiments of the present application may be implemented by software or by hardware.
- the described unit may also be disposed in the processor, for example, as a processor including a first road feature information acquiring unit, a third road feature information acquiring unit, an information fusion unit, and a road map generating unit.
- the names of these units do not constitute a limitation on the unit itself under certain circumstances.
- the road map generation unit may also be described as "a unit for acquiring a road map".
- the embodiment of the present application further provides a non-volatile computer storage medium, which may be a non-volatile computer storage medium included in the foregoing apparatus in the foregoing embodiment; It may also be a non-volatile computer storage medium that exists alone and is not assembled into the terminal.
- the non-volatile computer storage medium stores one or more programs, when the one or more programs are executed by a device, causing the device to: input a remote sensing image into the first neural network to pass the first neural network Extracting the first road feature information of the multi-channel; inputting the first road feature information of the multi-channel into the third neural network, to extract the third road feature information of the multi-channel via the third neural network, where the The three neural network is a neural network completed with at least road direction information as supervisory information training; the first road feature information and the third road feature information are merged; and the road map is generated according to the fusion result.
Abstract
Description
Claims (30)
- A road map generation method, comprising: inputting a remote sensing image into a first neural network to extract multi-channel first road feature information via the first neural network; inputting the multi-channel first road feature information into a third neural network to extract multi-channel third road feature information via the third neural network, where the third neural network is a neural network trained with road direction information as supervisory information; fusing the first road feature information and the third road feature information; and generating a road map according to the fusion result.
- The method according to claim 1, wherein fusing the first road feature information and the third road feature information comprises: adding or weighted-adding the first road feature information and the third road feature information; or concatenating the first road feature information and the third road feature information.
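The fusion options enumerated in claim 2 amount to simple array operations on feature maps, sketched below with dummy values; the 0.7/0.3 weights are an assumption, as the claim does not fix them.

```python
import numpy as np

# Two multi-channel road feature maps of shape (H, W, C) with dummy values.
f1 = np.ones((4, 4, 8))
f3 = np.full((4, 4, 8), 2.0)

added = f1 + f3                                   # addition
weighted = 0.7 * f1 + 0.3 * f3                    # weighted addition
concatenated = np.concatenate([f1, f3], axis=-1)  # concatenation along channels

print(added.shape, weighted.shape, concatenated.shape)
```

Addition and weighted addition preserve the channel count; concatenation doubles it, leaving a later layer to mix the two sources.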
- The method according to claim 1 or 2, wherein the first neural network comprises a second sub-neural network, the second sub-neural network being a neural network trained with allowable road width information as supervisory information; inputting the remote sensing image into the first neural network to extract multi-channel first road feature information via the first neural network comprises: inputting the remote sensing image into the second sub-neural network to extract a multi-channel second road feature map via the second sub-neural network; and the first road feature information comprises the second road feature map.
- The method according to claim 3, wherein the first neural network further comprises a first sub-neural network; inputting the remote sensing image into the second sub-neural network to extract a multi-channel second road feature map via the second sub-neural network comprises: inputting the remote sensing image into the first sub-neural network to extract a multi-channel first road feature map via the first sub-neural network; and inputting the multi-channel first road feature map into the second sub-neural network to extract the multi-channel second road feature map via the second sub-neural network.
- The method according to claim 3 or 4, wherein the first neural network further comprises a third sub-neural network; after extracting the multi-channel second road feature map via the second sub-neural network, the method further comprises: inputting the multi-channel second road feature map into the third sub-neural network to extract a multi-channel third road feature map via the third sub-neural network; and the first road feature information comprises the third road feature map.
- The method according to any one of claims 3-5, wherein the allowable road width information comprises an allowable road width range, and the width of at least one road in the road map falls within the allowable road width range; or the allowable road width information comprises a predetermined road width, and the width of at least one road in the road map is the predetermined road width.
- The method according to any one of claims 1-6, wherein generating the road map according to the fusion result comprises: inputting the fusion result into a fourth neural network to extract multi-channel fourth road feature information via the fourth neural network; and determining the road map based on the multi-channel fourth road feature information.
- The method according to claim 7, wherein the fourth neural network is a neural network trained with allowable road width information as supervisory information.
- The method according to any one of claims 1-8, further comprising, after generating the road map according to the fusion result: determining centerlines of roads in the road map.
- The method according to any one of claims 1-9, further comprising, after generating the road map according to the fusion result: vectorizing the road map to obtain a road vector map.
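Centerline determination on a binary road map is typically done by thinning (skeletonization). As an illustrative stand-in only, the toy sketch below takes the mean road-pixel column per row of a roughly vertical road; the specific mask and approach are assumptions, not the application's method.

```python
import numpy as np

# Binary road map with a 3-pixel-wide vertical road in columns 3..5.
road = np.zeros((6, 10), dtype=bool)
road[:, 3:6] = True

# Toy centerline: per row, the mean column index of the road pixels.
centerline = [(r, int(round(np.mean(np.flatnonzero(road[r])))))
              for r in range(road.shape[0]) if road[r].any()]
print(centerline)
```

A production pipeline would instead use a morphological thinning operator that works for roads of arbitrary orientation.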
- The method according to any one of claims 1-10, further comprising: acquiring a road direction reference map of a remote sensing image sample; inputting the remote sensing image sample or a multi-channel road feature map of the remote sensing image sample into a third neural network to be trained to extract a multi-channel fourth road feature map via the third neural network to be trained; determining a road direction regression map according to the multi-channel fourth road feature map; and back-propagating a first loss between the road direction regression map and the road direction reference map to the third neural network to be trained, so as to adjust network parameters of the third neural network to be trained.
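The supervision step of claim 11 can be sketched with a toy linear "network" trained by gradient descent on an L2 loss between the direction regression map and the reference map. The linear model, the L2 loss, and the learning rate are all assumptions; the claim only requires that a first loss be propagated back to adjust the network parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
features = rng.standard_normal((8, 8, 4))   # stand-in multi-channel feature map
reference = rng.standard_normal((8, 8))     # road direction reference map
w = np.zeros(4)                             # trainable parameters of the toy model

def first_loss(w):
    regression = features @ w               # road direction regression map
    return np.mean((regression - reference) ** 2)

initial = first_loss(w)
for _ in range(200):                        # plain gradient descent (assumed)
    residual = features @ w - reference
    grad = 2.0 * np.einsum('hwc,hw->c', features, residual) / reference.size
    w -= 0.1 * grad

print(initial, first_loss(w))               # loss shrinks as parameters adjust
```

The same loop shape applies to the second and third losses of claims 12 and 13, with probability maps and equal-width road reference maps in place of the direction maps.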
- The method according to any one of claims 3-11, further comprising: acquiring an equal-width road reference map of a remote sensing image sample; inputting the remote sensing image sample or a multi-channel road feature map of the remote sensing image sample into a second sub-neural network to be trained to extract a multi-channel fifth road feature map via the second sub-neural network to be trained; determining a first road probability map according to the multi-channel fifth road feature map; and back-propagating a second loss between the first road probability map and the equal-width road reference map to the second sub-neural network to be trained, so as to adjust network parameters of the second sub-neural network to be trained.
- The method according to any one of claims 7-12, further comprising: acquiring an equal-width road reference map of a remote sensing image sample; inputting the remote sensing image sample or a multi-channel road feature map of the remote sensing image sample into a fourth neural network to be trained to extract a multi-channel sixth road feature map via the fourth neural network to be trained; determining a second road probability map according to the multi-channel sixth road feature map; and back-propagating a third loss between the second road probability map and the equal-width road reference map to the fourth neural network to be trained, so as to adjust network parameters of the fourth neural network to be trained.
- The method according to any one of claims 7-10, further comprising: acquiring an equal-width road reference map and a road direction reference map of a remote sensing image sample; inputting the remote sensing image sample or a multi-channel road feature map of the remote sensing image sample into a second sub-neural network to be trained to extract a multi-channel fifth road feature map via the second sub-neural network to be trained; determining a first road probability map according to the multi-channel fifth road feature map; inputting the remote sensing image sample or the multi-channel road feature map of the remote sensing image sample into a third neural network to be trained to extract a multi-channel fourth road feature map via the third neural network to be trained; determining a road direction regression map according to the multi-channel fourth road feature map; inputting the remote sensing image sample or the multi-channel road feature map of the remote sensing image sample into a fourth neural network to be trained to extract a multi-channel sixth road feature map via the fourth neural network to be trained; determining a second road probability map according to the multi-channel sixth road feature map; and back-propagating, respectively, a first loss between the road direction regression map and the road direction reference map, a second loss between the first road probability map and the equal-width road reference map, and a third loss between the second road probability map and the equal-width road reference map to a neural network system comprising the third neural network, the second sub-neural network, and the fourth neural network, so as to jointly adjust network parameters of the neural network system.
- A road map generation apparatus, comprising: a first road feature information acquiring unit configured to input a remote sensing image into a first neural network to extract multi-channel first road feature information via the first neural network; a third road feature information acquiring unit configured to input the multi-channel first road feature information into a third neural network to extract multi-channel third road feature information via the third neural network, where the third neural network is a neural network trained with road direction information as supervisory information; an information fusion unit configured to fuse the first road feature information and the third road feature information; and a road map generating unit configured to generate a road map according to the fusion result.
- The apparatus according to claim 15, wherein the information fusion unit is configured to: add or weighted-add the first road feature information and the third road feature information; or concatenate the first road feature information and the third road feature information.
- The apparatus according to claim 15 or 16, wherein the first neural network comprises a second sub-neural network, the second sub-neural network being a neural network trained with allowable road width information as supervisory information; the first road feature information acquiring unit comprises: a first acquiring subunit configured to input the remote sensing image into the second sub-neural network to extract a multi-channel second road feature map via the second sub-neural network; and the first road feature information comprises the second road feature map.
- The apparatus according to claim 17, wherein the first neural network further comprises a first sub-neural network; the first road feature information acquiring unit further comprises: the first acquiring subunit, configured to input the remote sensing image into the first sub-neural network to extract a multi-channel first road feature map via the first sub-neural network; and a second acquiring subunit, configured to input the multi-channel first road feature map into the second sub-neural network to extract the multi-channel second road feature map via the second sub-neural network.
- The apparatus according to claim 17 or 18, wherein the first neural network further comprises a third sub-neural network; the first road feature information acquiring unit further comprises: a third acquiring subunit configured to input the multi-channel second road feature map into the third sub-neural network to extract a multi-channel third road feature map via the third sub-neural network; and the first road feature information comprises the third road feature map.
- The apparatus according to any one of claims 17-19, wherein the allowable road width information comprises an allowable road width range, and the width of at least one road in the road map falls within the allowable road width range; or the allowable road width information comprises a predetermined road width, and the width of at least one road in the road map is the predetermined road width.
- The apparatus according to any one of claims 15-20, wherein the road map generating unit comprises: a fourth acquiring subunit configured to input the fusion result into a fourth neural network to extract multi-channel fourth road feature information via the fourth neural network; and a road map determining subunit configured to determine the road map based on the multi-channel fourth road feature information.
- The apparatus according to claim 21, wherein the fourth neural network is a neural network trained with allowable road width information as supervisory information.
- The apparatus according to any one of claims 15-22, wherein the road map generating unit further comprises: a centerline determining subunit configured to determine centerlines of roads in the road map.
- The apparatus according to any one of claims 15-23, wherein the road map generating unit further comprises: a road vector map acquiring subunit configured to vectorize the road map to obtain a road vector map.
- The apparatus according to any one of claims 15-24, further comprising a training unit for the third neural network, configured to: acquire a road direction reference map of a training remote sensing image; input the training remote sensing image or its multi-channel road feature map into a third neural network to be trained to extract a multi-channel fourth road feature map via the third neural network to be trained; determine a road direction regression map according to the multi-channel fourth road feature map; and back-propagate a first loss between the road direction regression map and the road direction reference map to the third neural network to be trained, so as to adjust network parameters of the third neural network to be trained.
- The apparatus according to any one of claims 17-25, further comprising a training unit for the second sub-neural network, configured to: acquire an equal-width road reference map of a training remote sensing image; input the training remote sensing image or its multi-channel road feature map into a second sub-neural network to be trained to extract a multi-channel fifth road feature map via the second sub-neural network to be trained; determine a first road probability map according to the multi-channel fifth road feature map; and back-propagate a second loss between the first road probability map and the equal-width road reference map to the second sub-neural network to be trained, so as to adjust network parameters of the second sub-neural network to be trained.
- The apparatus according to any one of claims 21-26, further comprising a training unit for the fourth neural network, configured to: acquire an equal-width road reference map of a training remote sensing image; input the training remote sensing image or its multi-channel road feature map into a fourth neural network to be trained to extract a multi-channel sixth road feature map via the fourth neural network to be trained; determine a second road probability map according to the multi-channel sixth road feature map; and back-propagate a third loss between the second road probability map and the equal-width road reference map to the fourth neural network to be trained, so as to adjust network parameters of the fourth neural network to be trained.
- The apparatus according to any one of claims 21-24, further comprising: a training unit for the second sub-neural network, a training unit for the third neural network, and a training unit for the fourth neural network; the training unit for the second sub-neural network is configured to: acquire an equal-width road reference map of a training remote sensing image; input the training remote sensing image or its multi-channel road feature map into a second sub-neural network to be trained to extract a multi-channel fifth road feature map via the second sub-neural network to be trained; determine a first road probability map according to the multi-channel fifth road feature map; and back-propagate a second loss between the first road probability map and the equal-width road reference map to a neural network system comprising the third neural network, the second sub-neural network, and the fourth neural network; the training unit for the third neural network is configured to: acquire a road direction reference map of the training remote sensing image; input the training remote sensing image or its multi-channel road feature map into a third neural network to be trained to extract a multi-channel fourth road feature map via the third neural network to be trained; determine a road direction regression map according to the multi-channel fourth road feature map; and back-propagate a first loss between the road direction regression map and the road direction reference map to the neural network system comprising the third neural network, the second sub-neural network, and the fourth neural network; and the training unit for the fourth neural network is configured to: acquire the equal-width road reference map of the training remote sensing image; input the training remote sensing image or its multi-channel road feature map into a fourth neural network to be trained to extract a multi-channel sixth road feature map via the fourth neural network to be trained; determine a second road probability map according to the multi-channel sixth road feature map; and back-propagate a third loss between the second road probability map and the equal-width road reference map to the neural network system comprising the third neural network, the second sub-neural network, and the fourth neural network, so as to adjust network parameters of the neural network system jointly with the first loss and the second loss.
- An electronic device, comprising: a memory configured to store executable instructions; and a processor configured to communicate with the memory to execute the executable instructions so as to perform the operations of the road map generation method according to any one of claims 1 to 14.
- A computer storage medium for storing computer-readable instructions, wherein when the instructions are executed, the operations of the road map generation method according to any one of claims 1 to 14 are performed.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
SG11201909743Q SG11201909743QA (en) | 2017-09-19 | 2018-07-19 | Method and apparatus for generating road map, electronic device, and computer storage medium |
JP2019558374A JP6918139B2 (ja) | 2017-09-19 | 2018-07-19 | 道路地図生成方法、装置、電子機器およびコンピュータ記憶媒体 |
US16/655,336 US11354893B2 (en) | 2017-09-19 | 2019-10-17 | Method and apparatus for generating road map, electronic device, and computer storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710848159.2 | 2017-09-19 | ||
- CN201710848159.2A CN108230421A (zh) | 2017-09-19 | Road map generation method and apparatus, electronic device, and computer storage medium |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/655,336 Continuation US11354893B2 (en) | 2017-09-19 | 2019-10-17 | Method and apparatus for generating road map, electronic device, and computer storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019056845A1 true WO2019056845A1 (zh) | 2019-03-28 |
Family
ID=62655449
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
- PCT/CN2018/096332 WO2019056845A1 (zh) | Road map generation method and apparatus, electronic device, and computer storage medium |
Country Status (5)
Country | Link |
---|---|
US (1) | US11354893B2 (zh) |
JP (1) | JP6918139B2 (zh) |
CN (1) | CN108230421A (zh) |
SG (1) | SG11201909743QA (zh) |
WO (1) | WO2019056845A1 (zh) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN113361371A (zh) * | 2021-06-02 | 2021-09-07 | 北京百度网讯科技有限公司 | Road extraction method, apparatus, device, and storage medium |
- CN116109966A (zh) * | 2022-12-19 | 2023-05-12 | 中国科学院空天信息创新研究院 | Method for constructing a large video model for remote sensing scenarios |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN108230421A (zh) * | 2017-09-19 | 2018-06-29 | 北京市商汤科技开发有限公司 | Road map generation method and apparatus, electronic device, and computer storage medium |
- CN109376594A (zh) | 2018-09-11 | 2019-02-22 | 百度在线网络技术(北京)有限公司 | Visual perception method, apparatus, device, and medium based on an autonomous vehicle |
- CN109359598B (zh) * | 2018-10-18 | 2019-09-24 | 中国科学院空间应用工程与技术中心 | Y-shaped neural network system and method for recognizing roads in optical remote sensing images |
- CN109597862B (zh) * | 2018-10-31 | 2020-10-16 | 百度在线网络技术(北京)有限公司 | Jigsaw-based map generation method, apparatus, and computer-readable storage medium |
- CN111047856B (zh) * | 2019-04-09 | 2020-11-10 | 浙江鸣春纺织股份有限公司 | Multi-channel congestion index acquisition platform |
- CN110967028B (zh) * | 2019-11-26 | 2022-04-12 | 深圳优地科技有限公司 | Navigation map construction method and apparatus, robot, and storage medium |
- CN111353441B (zh) * | 2020-03-03 | 2021-04-23 | 成都大成均图科技有限公司 | Road extraction method and system based on position data fusion |
- CN113496182A (zh) * | 2020-04-08 | 2021-10-12 | 北京京东叁佰陆拾度电子商务有限公司 | Road extraction method and apparatus based on remote sensing images, storage medium, and device |
- CN115135964A (zh) * | 2020-06-02 | 2022-09-30 | 华为技术有限公司 | Apparatus, system, and method for detecting speed bumps and potholes on a road |
- CN112131233B (zh) * | 2020-08-28 | 2022-11-15 | 北京百度网讯科技有限公司 | Method, apparatus, device, and computer storage medium for recognizing updated roads |
- KR102352009B1 (ko) * | 2020-10-16 | 2022-01-18 | 한국항공우주연구원 | Machine-learning-based satellite image geometric correction method and system using map information |
- CN113033608A (zh) * | 2021-02-08 | 2021-06-25 | 北京工业大学 | Road extraction method and apparatus for remote sensing images |
- US11858514B2 (en) | 2021-03-30 | 2024-01-02 | Zoox, Inc. | Top-down scene discrimination |
- US11810225B2 (en) * | 2021-03-30 | 2023-11-07 | Zoox, Inc. | Top-down scene generation |
- CN113505627A (zh) * | 2021-03-31 | 2021-10-15 | 北京苍灵科技有限公司 | Remote sensing data processing method and apparatus, electronic device, and storage medium |
- WO2023277791A1 (en) * | 2021-06-30 | 2023-01-05 | Grabtaxi Holdings Pte. Ltd | Server and method for generating road map data |
- CN113421277B (zh) * | 2021-08-25 | 2021-12-14 | 中科星图股份有限公司 | Method and apparatus for road extraction and anomaly monitoring based on remote sensing images |
- JP2023132855A (ja) * | 2022-03-11 | 2023-09-22 | 日立Astemo株式会社 | Map information processing apparatus |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- US20130266214A1 (en) * | 2012-04-06 | 2013-10-10 | Brigham Young University | Training an image processing neural network without human selection of features |
- CN104915636A (zh) * | 2015-04-15 | 2015-09-16 | 北京工业大学 | Road recognition method for remote sensing images based on multi-level framework saliency features |
- CN105184270A (zh) * | 2015-09-18 | 2015-12-23 | 中国科学院遥感与数字地球研究所 | Remote sensing extraction method for road information based on a pulse-coupled neural network method |
- CN107025440A (zh) * | 2017-03-27 | 2017-08-08 | 北京航空航天大学 | Road extraction method for remote sensing images based on a novel convolutional neural network |
- CN108230421A (zh) * | 2017-09-19 | 2018-06-29 | 北京市商汤科技开发有限公司 | Road map generation method and apparatus, electronic device, and computer storage medium |
Family Cites Families (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8675995B2 (en) * | 2004-07-09 | 2014-03-18 | Terrago Technologies, Inc. | Precisely locating features on geospatial imagery |
US7660441B2 (en) * | 2004-07-09 | 2010-02-09 | Southern California, University | System and method for fusing geospatial data |
US7974814B2 (en) * | 2007-06-15 | 2011-07-05 | Raytheon Company | Multiple sensor fusion engine |
- JP4825836B2 (ja) * | 2008-03-24 | 2011-11-30 | 株式会社日立ソリューションズ | Road map data creation system |
- CN102110364B (zh) * | 2009-12-28 | 2013-12-11 | 日电(中国)有限公司 | Traffic information processing method and apparatus based on intersections and road segments |
US9416499B2 (en) * | 2009-12-31 | 2016-08-16 | Heatwurx, Inc. | System and method for sensing and managing pothole location and pothole characteristics |
US8706407B2 (en) * | 2011-03-30 | 2014-04-22 | Nokia Corporation | Method and apparatus for generating route exceptions |
US9037519B2 (en) * | 2012-10-18 | 2015-05-19 | Enjoyor Company Limited | Urban traffic state detection based on support vector machine and multilayer perceptron |
- CN103310443B (zh) * | 2013-05-20 | 2016-04-27 | 华浩博达(北京)科技股份有限公司 | Fast processing method and system for high-resolution remote sensing images |
WO2015130970A1 (en) * | 2014-02-26 | 2015-09-03 | Analog Devices, Inc. | Systems for providing intelligent vehicular systems and services |
US20160379388A1 (en) * | 2014-07-16 | 2016-12-29 | Digitalglobe, Inc. | System and method for combining geographical and economic data extracted from satellite imagery for use in predictive modeling |
US10223816B2 (en) * | 2015-02-13 | 2019-03-05 | Here Global B.V. | Method and apparatus for generating map geometry based on a received image and probe data |
US10089528B2 (en) * | 2015-08-18 | 2018-10-02 | Digitalglobe, Inc. | Movement intelligence using satellite imagery |
- CN105260699B (zh) * | 2015-09-10 | 2018-06-26 | 百度在线网络技术(北京)有限公司 | Lane line data processing method and apparatus |
EP3378222A4 (en) * | 2015-11-16 | 2019-07-03 | Orbital Insight, Inc. | MOVING VEHICLE DETECTION AND ANALYSIS USING LOW RESOLUTION REMOTE DETECTION IMAGING |
US9950700B2 (en) * | 2016-03-30 | 2018-04-24 | GM Global Technology Operations LLC | Road surface condition detection with multi-scale fusion |
US20170300763A1 (en) * | 2016-04-19 | 2017-10-19 | GM Global Technology Operations LLC | Road feature detection using a vehicle camera system |
- CN106909886B (zh) * | 2017-01-20 | 2019-05-03 | 中国石油大学(华东) | High-precision traffic sign detection method and system based on deep learning |
- CN106874894B (zh) * | 2017-03-28 | 2020-04-14 | 电子科技大学 | Human target detection method based on a region-based fully convolutional neural network |
US10869627B2 (en) * | 2017-07-05 | 2020-12-22 | Osr Enterprises Ag | System and method for fusing information related to a driver of a vehicle |
US10395144B2 (en) * | 2017-07-24 | 2019-08-27 | GM Global Technology Operations LLC | Deeply integrated fusion architecture for automated driving systems |
- 2017-09-19 CN CN201710848159.2A patent/CN108230421A/zh active Pending
- 2018-07-19 WO PCT/CN2018/096332 patent/WO2019056845A1/zh active Application Filing
- 2018-07-19 JP JP2019558374A patent/JP6918139B2/ja active Active
- 2018-07-19 SG SG11201909743Q patent/SG11201909743QA/en unknown
- 2019-10-17 US US16/655,336 patent/US11354893B2/en active Active
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN113361371A (zh) * | 2021-06-02 | 2021-09-07 | 北京百度网讯科技有限公司 | Road extraction method, apparatus, device, and storage medium |
- CN113361371B (zh) * | 2021-06-02 | 2023-09-22 | 北京百度网讯科技有限公司 | Road extraction method, apparatus, device, and storage medium |
- CN116109966A (zh) * | 2022-12-19 | 2023-05-12 | 中国科学院空天信息创新研究院 | Method for constructing a large video model for remote sensing scenarios |
- CN116109966B (zh) * | 2022-12-19 | 2023-06-27 | 中国科学院空天信息创新研究院 | Method for constructing a large video model for remote sensing scenarios |
Also Published As
Publication number | Publication date |
---|---|
US11354893B2 (en) | 2022-06-07 |
JP6918139B2 (ja) | 2021-08-11 |
CN108230421A (zh) | 2018-06-29 |
SG11201909743QA (en) | 2019-11-28 |
US20200050854A1 (en) | 2020-02-13 |
JP2020520493A (ja) | 2020-07-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019056845A1 (zh) | Road map generation method and apparatus, electronic device, and computer storage medium | |
JP7025276B2 (ja) | Localization in urban environments using road markings | |
US20180189577A1 (en) | Systems and methods for lane-marker detection | |
CN113377888B (zh) | Method for training a target detection model and detecting targets | |
KR20210151724A (ko) | Vehicle positioning method and apparatus, electronic device, storage medium, and computer program | |
CN113807350A (zh) | Target detection method, apparatus, device, and storage medium | |
CN112927234A (zh) | Point cloud semantic segmentation method and apparatus, electronic device, and readable storage medium | |
CN115616937B (zh) | Autonomous driving simulation test method, apparatus, device, and computer-readable medium | |
US20220215197A1 (en) | Data processing method and apparatus, chip system, and medium | |
CN112884837A (zh) | Road positioning method, apparatus, device, and storage medium | |
CN115019060A (zh) | Target recognition method, and training method and apparatus for a target recognition model | |
CN115471805A (zh) | Point cloud processing and deep learning model training method and apparatus, and autonomous vehicle | |
CN111766891A (zh) | Method and apparatus for controlling UAV flight | |
KR20200065590A (ko) | Lane center point detection method and apparatus for precision road maps | |
CN113378694A (zh) | Method and apparatus for generating a target detection and localization system and for target detection and localization | |
CN113379059A (zh) | Model training method for quantum data classification and quantum data classification method | |
CN116164770B (zh) | Path planning method and apparatus, electronic device, and computer-readable medium | |
CN110097600B (zh) | Method and apparatus for recognizing traffic signs | |
WO2023236601A1 (zh) | Parameter prediction method, prediction server, prediction system, and electronic device | |
CN114724116B (zh) | Vehicle passage information generation method, apparatus, device, and computer-readable medium | |
CN115311486A (zh) | Training a distilled machine learning model using a pre-trained feature extractor | |
CN114238790A (zh) | Method, apparatus, device, and storage medium for determining a maximum perception range | |
CN114549961A (zh) | Target object detection method, apparatus, device, and storage medium | |
CN115035359A (zh) | Point cloud data processing method, training data processing method, and apparatus | |
CN113946125B (zh) | Decision-making method and apparatus based on multi-source perception and control information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18857964 Country of ref document: EP Kind code of ref document: A1 |
ENP | Entry into the national phase |
Ref document number: 2019558374 Country of ref document: JP Kind code of ref document: A |
NENP | Non-entry into the national phase |
Ref country code: DE |
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 08.09.2020) |
122 | Ep: pct application non-entry in european phase |
Ref document number: 18857964 Country of ref document: EP Kind code of ref document: A1 |