CN115082772B - Location identification method, location identification device, vehicle, storage medium and chip - Google Patents

Location identification method, location identification device, vehicle, storage medium and chip

Info

Publication number
CN115082772B
Authority
CN
China
Prior art keywords
image
place
feature information
network
recognized
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210847873.0A
Other languages
Chinese (zh)
Other versions
CN115082772A
Inventor
刘洋
马雅楠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiaomi Automobile Technology Co Ltd
Original Assignee
Xiaomi Automobile Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaomi Automobile Technology Co Ltd
Priority to CN202210847873.0A
Publication of CN115082772A
Application granted
Publication of CN115082772B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; scene-specific elements
    • G06V 20/35 Categorising the entire scene, e.g. birthday party or wedding scene
    • G06V 20/38 Outdoor scenes
    • G06V 20/39 Urban scenes
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/42 Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/443 Local feature extraction by matching or filtering
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; coarse-fine approaches, e.g. multi-scale approaches; using context analysis; selection of dictionaries
    • G06V 10/757 Matching configurations of points or features
    • G06V 10/82 Image or video recognition or understanding using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to a location identification method, apparatus, vehicle, storage medium, and chip. The method comprises: acquiring an image to be recognized; acquiring global feature information and local feature information of the image to be recognized, wherein the local feature information describes the features of a target object in the image to be recognized; and determining, from a database, a place image matching the global feature information and the local feature information, and taking the place represented by the place image as the place of the image to be recognized. Because the global feature information and the local feature information are matched jointly, location recognition attends, through the local feature information, to target objects that distinguish one place from another and do not readily change over time, rather than to objects, such as green plants, that are weakly place-distinctive and whose shape easily changes over time. The accuracy of location recognition is thereby improved.

Description

Location identification method, location identification device, vehicle, storage medium and chip
Technical Field
The present disclosure relates to the field of automatic driving technologies, and in particular, to a location identification method and apparatus, a vehicle, a storage medium, and a chip.
Background
Location identification systems in the related art perform poorly on certain scenes, for example scenes dominated by large areas of green plants.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a location identification method, apparatus, vehicle, storage medium, and chip.
According to a first aspect of the embodiments of the present disclosure, there is provided a location identification method, including:
acquiring an image to be recognized;
acquiring global feature information and local feature information of the image to be recognized, wherein the local feature information describes the features of a target object in the image to be recognized;
and determining, from a database, a place image matching the global feature information and the local feature information, and taking the place represented by the place image as the place of the image to be recognized.
Optionally, the determining, from a database, a location image matching the global feature information and the local feature information, and taking a location characterized by the location image as a location of the image to be recognized includes:
determining the area ratio of non-target objects in the image to be recognized;
when the area ratio is smaller than a preset threshold, determining a place image matching the global feature information from the database, and taking the place represented by the place image as the place of the image to be recognized;
and when the area ratio is greater than or equal to the preset threshold, determining candidate place images matching the global feature information from the database, determining a first target place image from the candidate place images based on the local feature information, and taking the place represented by the first target place image as the place of the image to be recognized.
Optionally, the acquiring of the global feature information and the local feature information of the image to be recognized includes:
inputting the image to be recognized into a preset image recognition network, and acquiring the global feature information and the local feature information of the image to be recognized, wherein a local attention mechanism for the target object is applied in the preset image recognition network.
Optionally, the preset image recognition network is obtained by training in the following manner:
randomly erasing partial areas of a plurality of first place images of an original training set to form a plurality of second place images;
expanding the plurality of second place images into the original training set to obtain a new training set;
and performing model training based on the new training set to obtain the preset image recognition network.
Optionally, the preset image recognition network includes an image processing sub-network, and the performing model training based on the new training set to obtain the preset image recognition network includes:
down-sampling the place images in the new training set into low-resolution place images to obtain a low-resolution training set;
and training an initial image processing sub-network based on the low-resolution training set to obtain the image processing sub-network, wherein the initial image processing sub-network is configured to perform at least one of the following operations on the place images in the low-resolution training set: a defogging operation, a pixel adjustment operation, and a sharpening operation.
Optionally, the preset image recognition network further includes a backbone network, a global feature information generation branch network, and a local feature information generation branch network. The input end of the backbone network is connected to the output end of the image processing sub-network, and the output end of the backbone network is connected to the input end of the global feature information generation branch network and the input end of the local feature information generation branch network. The global feature information generation branch network includes a multi-scale target detection sub-network and a generalized-mean pooling sub-network, and a local attention mechanism for the target object is applied in the local feature information generation branch network. The inputting of the image to be recognized into the preset image recognition network to obtain the global feature information and the local feature information of the image to be recognized includes:
inputting the image to be recognized into the image processing sub-network; obtaining the global feature information of the image to be recognized through the backbone network, the multi-scale target detection sub-network, and the generalized-mean pooling sub-network; and obtaining the local feature information of the image to be recognized through the backbone network and the local feature information generation branch network.
Optionally, the acquiring the image to be recognized includes:
acquiring the geographic position of an image acquisition device, and determining the distance between the geographic position of the image acquisition device and the geographic position of the place represented by a second target place image in the database;
and in response to the distance being smaller than or equal to a second threshold, acquiring the image to be recognized through the image acquisition device.
According to a second aspect of the embodiments of the present disclosure, there is provided a location identification apparatus including:
an image acquisition module configured to acquire an image to be recognized;
a feature information generation module configured to acquire global feature information and local feature information of the image to be recognized, wherein the local feature information describes the features of a target object in the image to be recognized;
and a recognition result output module configured to determine, from a database, a place image matching the global feature information and the local feature information, and take the place represented by the place image as the place of the image to be recognized.
According to a third aspect of the embodiments of the present disclosure, there is provided a vehicle including:
a first processor;
a first memory for storing first processor-executable instructions;
wherein the first processor is configured to:
acquiring an image to be recognized;
acquiring global feature information and local feature information of the image to be recognized, wherein the local feature information describes the features of a target object in the image to be recognized;
and determining, from a database, a place image matching the global feature information and the local feature information, and taking the place represented by the place image as the place of the image to be recognized.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the location identification method provided by the first aspect of the present disclosure.
According to a fifth aspect of embodiments of the present disclosure, there is provided a chip comprising a second processor and an interface; the second processor is configured to read instructions to perform the steps of the location identification method provided by the first aspect of the present disclosure.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
according to the technical scheme, the local characteristic information is adopted to describe the characteristics of the target object in the image to be recognized, and the target object can be set as a landmark object such as a building, a road sign and the like which is not easy to change. When the image to be recognized is subjected to location recognition, a mode of joint matching of global feature information and local feature information is adopted, the target object which can distinguish each location and is not prone to change along with time can be noticed through the local feature information, objects which are not strong in location distinguishing performance and prone to change along with time in shape, such as green plants, cannot be noticed, and accuracy of location recognition is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow chart illustrating a method of location identification in accordance with an exemplary embodiment.
FIG. 2 is a block diagram illustrating a pre-set image recognition network according to an exemplary embodiment.
Fig. 3 is a block diagram illustrating a location identification apparatus according to an example embodiment.
FIG. 4 is a block diagram of a vehicle shown in accordance with an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the disclosure, as detailed in the appended claims.
FIG. 1 is a flow chart illustrating a location identification method according to an exemplary embodiment. The location identification method can be applied to autonomous navigation of mobile robots, visual simultaneous localization and mapping (SLAM), automatic driving, automatic parking, and the like. As shown in fig. 1, the location identification method includes the following steps.
In step S11, an image to be recognized is acquired.
The image to be recognized is an image whose place information needs to be identified. When the location identification method is applied to automatic driving or automatic parking, the image to be recognized may be an image acquired, during automatic driving or automatic parking, by a camera mounted on the vehicle. The image may include buildings, structures, green plants, people, cars, and the like.
In step S12, global feature information and local feature information of the image to be recognized are obtained, where the local feature information is used to describe features of a target object in the image to be recognized.
The target object may be a landmark object that does not easily change, such as a building or a road sign (a structure).
In step S13, a location image matching the global feature information and the local feature information is determined from a database, and a location represented by the location image is used as a location of the image to be recognized.
In this technical solution, the local feature information describes the features of a target object in the image to be recognized, and the target object can be set as a landmark object, such as a building or a road sign, that does not easily change. When location recognition is performed on the image to be recognized, the global feature information and the local feature information are matched jointly: through the local feature information, attention is paid to target objects that distinguish one place from another and do not readily change over time, rather than to objects, such as green plants, that are weakly place-distinctive and whose shape easily changes over time. The accuracy of location recognition is thereby improved.
Optionally, step S13 includes:
and determining the area ratio of the non-target object in the image to be recognized.
Wherein the non-target object is opposite to the target object. The non-target object may be an object such as a green plant that is not very distinguishable from a place (has no landmark) and is susceptible to change in form with time. The step of determining the area proportion of the non-target object in the image to be recognized may include: the method comprises the steps of obtaining the ratio of the number of pixel points included by a non-target object in an image to be recognized to the total number of the pixel points of the image to be recognized, and taking the ratio as the area ratio of the non-target object in the image to be recognized. For example, when the non-target object is green, the ratio of the number of pixels with pixel values represented as green in the image to be recognized to the total number of pixels in the image to be recognized can be obtained, and the ratio is used as the area ratio of the green in the image to be recognized.
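As an illustrative sketch of the pixel counting described above (not part of the patent text), the helper below computes the proportion of green-plant pixels in a BGR image. The HSV thresholds and the function name are assumptions chosen for the example.

```python
# Minimal sketch of the green-area-ratio computation described above.
# The HSV "green" range and helper name are illustrative assumptions.
import cv2
import numpy as np

def green_area_ratio(image_bgr: np.ndarray) -> float:
    """Ratio of green-plant pixels to all pixels in the image to be recognized."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    # Hue roughly 35-85 covers typical vegetation greens on OpenCV's 0-179 hue scale.
    lower = np.array([35, 40, 40], dtype=np.uint8)
    upper = np.array([85, 255, 255], dtype=np.uint8)
    green_mask = cv2.inRange(hsv, lower, upper)
    return float(np.count_nonzero(green_mask)) / float(green_mask.size)
```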
When the area ratio is smaller than a preset threshold, determining a place image matching the global feature information from the database, and taking the place represented by the place image as the place of the image to be recognized.
The preset threshold may be set according to actual conditions. When the area ratio is smaller than the preset threshold, the proportion of non-target objects in the image to be recognized is small; the image can then be considered to contain few objects, such as green plants, that are weakly place-distinctive (not landmark-like) and whose form easily changes over time. The place of the image to be recognized can therefore be obtained by matching the global feature information directly, saving system computing power.
When the area ratio is greater than or equal to the preset threshold, determining candidate place images matching the global feature information from the database, determining a first target place image from the candidate place images based on the local feature information, and taking the place represented by the first target place image as the place of the image to be recognized.
Conversely, when the area ratio is greater than or equal to the preset threshold, the proportion of non-target objects in the image to be recognized is large; the image can then be considered to contain many objects, such as green plants, that are weakly place-distinctive (not landmark-like) and whose form easily changes over time. Therefore, after candidate place images matching the global feature information are determined from the database, the local feature information is used to attend to target objects in the candidate place images that distinguish one place from another and do not readily change over time. This reduces the influence of such weakly place-distinctive, easily changing objects on location recognition and ensures its accuracy.
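The gating logic above can be summarized in a short sketch. The database layout, the cosine-similarity scoring, and the top-k re-ranking rule are illustrative assumptions; the patent only requires global-only matching below the threshold and global-then-local matching at or above it.

```python
# Sketch of the threshold-gated matching strategy; similarity measure,
# database layout, and top_k are assumptions, not fixed by the patent.
import numpy as np

def identify_place(global_feat, local_feat, database, area_ratio,
                   preset_threshold=0.3, top_k=10):
    """database: iterable of (place_id, global_vec, local_vec) with
    L2-normalized numpy vectors, so a dot product is cosine similarity."""
    ranked = sorted(database, key=lambda e: float(e[1] @ global_feat),
                    reverse=True)
    if area_ratio < preset_threshold:
        return ranked[0][0]            # global matching alone suffices
    candidates = ranked[:top_k]        # candidate place images
    # Re-rank the candidates by local-feature similarity to obtain the
    # first target place image.
    best = max(candidates, key=lambda e: float(e[2] @ local_feat))
    return best[0]
```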
Optionally, step S12 includes:
and inputting the image to be recognized into a preset image recognition network, and acquiring global characteristic information and local characteristic information of the image to be recognized. Wherein the preset image recognition network is applied with a local attention mechanism for the target object.
With this solution, a single preset image recognition network produces both the global feature information and the local feature information. Compared with obtaining them from two separate networks, this facilitates efficient training and deployment of the model.
Optionally, the preset image recognition network is obtained by training in the following manner:
and randomly erasing partial areas of the plurality of first place images of the original training set to form a plurality of second place images.
And expanding the plurality of second place images into the original training set to obtain a new training set.
And performing model training based on the new training set to obtain the preset image recognition network.
With this solution, the second place images, in which partial regions have been randomly erased, are used when training the preset image recognition network, so that the network is more robust to occlusions of different degrees in the image to be recognized. The solution therefore effectively alleviates the problem that recognition of images captured at the same place differs greatly when large moving targets occupy the image, and improves the accuracy of location recognition.
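A minimal sketch of the random-erasing augmentation follows; the erased-region size bounds and the zero (black) fill value are illustrative assumptions that the patent does not fix.

```python
# Sketch of random erasing on place images (H x W x C uint8 arrays).
# Erase-size bounds and the black fill are assumptions for illustration.
import numpy as np

def random_erase(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    h, w = img.shape[:2]
    eh = int(rng.integers(h // 8, h // 3))   # erased-region height
    ew = int(rng.integers(w // 8, w // 3))   # erased-region width
    y = int(rng.integers(0, h - eh))
    x = int(rng.integers(0, w - ew))
    out = img.copy()
    out[y:y + eh, x:x + ew] = 0              # erase a partial area
    return out

def expand_training_set(first_place_images):
    """Form second place images by random erasing and expand them into the
    original training set to obtain the new training set."""
    rng = np.random.default_rng(0)
    second_place_images = [random_erase(im, rng) for im in first_place_images]
    return list(first_place_images) + second_place_images
```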
Optionally, the preset image recognition network includes an image processing sub-network, and the performing model training based on the new training set to obtain the preset image recognition network includes:
and downsampling the place images in the new training set into low-resolution place images to obtain a low-resolution training set.
Training an initial image processing sub-network based on the low-resolution training set to obtain the image processing sub-network, wherein the initial image processing sub-network is configured to perform at least one of the following operations on the place images in the low-resolution training set: a defogging operation, a pixel adjustment operation, and a sharpening operation.
The pixel adjustment operation includes at least one of: white balance processing, gamma transformation, contrast adjustment, and hue adjustment.
With this solution, the initial image processing sub-network is trained in advance on low-resolution place images to obtain the image processing sub-network. During location recognition, the trained image processing sub-network performs at least one of a defogging operation, a pixel adjustment operation, and a sharpening operation on the image to be recognized, so that the image is adaptively enhanced and the accuracy of location recognition is improved; interference from sharpness, illumination, and fog on the recognition result is effectively reduced. Moreover, because the place images in the new training set are down-sampled to low resolution for training while the resulting preset image recognition network is applied to images at the original resolution, the amount of computation is reduced and computing resources are saved.
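The following sketch illustrates the kinds of operations involved: down-sampling place images to build the low-resolution training set, plus fixed-parameter stand-ins for the sharpening and pixel adjustment operations. In the patent's design the sub-network would apply such operations adaptively; the fixed parameters here are placeholders, not the trained behavior.

```python
# Illustrative, fixed-parameter stand-ins for operations the image
# processing sub-network is trained to apply adaptively.
import cv2
import numpy as np

def downsample(img: np.ndarray, scale: float = 0.5) -> np.ndarray:
    """Down-sample a place image for the low-resolution training set."""
    return cv2.resize(img, None, fx=scale, fy=scale,
                      interpolation=cv2.INTER_AREA)

def sharpen(img: np.ndarray) -> np.ndarray:
    """Unsharp masking as a simple sharpening operation."""
    blur = cv2.GaussianBlur(img, (0, 0), sigmaX=2.0)
    return cv2.addWeighted(img, 1.5, blur, -0.5, 0)

def gamma_adjust(img: np.ndarray, gamma: float = 0.8) -> np.ndarray:
    """Gamma transformation, one of the pixel adjustment operations."""
    lut = ((np.arange(256) / 255.0) ** gamma * 255.0).astype(np.uint8)
    return cv2.LUT(img, lut)
```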
Optionally, as shown in fig. 2, the preset image recognition network further includes a backbone network, a global feature information generation branch network, and a local feature information generation branch network. The input end of the backbone network is connected to the output end of the image processing sub-network, and the output end of the backbone network is connected to the input ends of the global feature information generation branch network and the local feature information generation branch network. The global feature information generation branch network includes a multi-scale target detection sub-network and a generalized-mean pooling sub-network, and a local attention mechanism for the target object is applied in the local feature information generation branch network. Step S12 includes:
inputting the image to be recognized into the image processing sub-network; obtaining the global feature information of the image to be recognized through the backbone network, the multi-scale target detection sub-network, and the generalized-mean pooling sub-network; and obtaining the local feature information of the image to be recognized through the backbone network and the local feature information generation branch network.
The multi-scale target detection sub-network may be implemented as a feature pyramid network or another structure capable of multi-scale detection.
With this solution, the multi-scale target detection sub-network is used when extracting the global feature information, so that the extracted global feature information learns richer semantic features. Images to be recognized that were captured at places of similar appearance can thus be distinguished better, improving the accuracy of location recognition.
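A minimal PyTorch sketch of the two-branch head shown in fig. 2 follows: generalized-mean (GeM) pooling for the global branch, and a simple spatial-attention module standing in for the local attention mechanism aimed at target objects. The module shapes, dimensions, and attention design are assumptions; the patent does not specify them.

```python
# Sketch of the dual-branch head, assuming a backbone that outputs a
# feature map of shape (N, C, H, W). Dimensions are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeM(nn.Module):
    """Generalized-mean pooling: ((1/HW) * sum_i x_i^p)^(1/p), learnable p."""
    def __init__(self, p: float = 3.0, eps: float = 1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.tensor(p))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (N, C, H, W)
        x = x.clamp(min=self.eps).pow(self.p)
        return F.adaptive_avg_pool2d(x, 1).pow(1.0 / self.p).flatten(1)

class LocalAttentionBranch(nn.Module):
    """Spatial attention re-weighting features toward landmark-like regions."""
    def __init__(self, channels: int, dim: int = 256):
        super().__init__()
        self.attn = nn.Conv2d(channels, 1, kernel_size=1)
        self.proj = nn.Conv2d(channels, dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (N, C, H, W)
        weights = torch.sigmoid(self.attn(x))             # (N, 1, H, W) attention map
        feats = self.proj(x) * weights                    # emphasize target objects
        return feats.flatten(2).transpose(1, 2)           # (N, H*W, dim) local descriptors
```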
Optionally, step S11 includes:
and acquiring the geographic position of the image acquisition equipment, and determining the distance between the geographic position of the image acquisition equipment and the geographic position of the place represented by the second target place image in the database.
When the location identification method is applied to automatic parking, the image acquisition device may be mounted on a vehicle; it may be, for example, a vehicle-mounted driving recorder or a camera. The geographic position of the image acquisition device may be acquired in various ways, for example through a GPS (Global Positioning System), through a mobile base station, through Wi-Fi positioning, or through an assisted global navigation satellite system. The database may pre-store at least one place image, and the second target place image may be any place image in the database or a specific place image (e.g., one specified by a user). When the location identification method is applied to automatic parking, the place images in the database may be images, pre-stored by the user, of starting places at which automatic parking begins, and the second target place image may be one of these images.
And in response to the distance being smaller than or equal to a second threshold, acquiring the image to be recognized through the image acquisition device.
The second threshold is set flexibly according to the application scenario and is not limited herein.
Accordingly, when the distance is greater than the second threshold, no image to be recognized is acquired for location recognition. That is, the image acquisition device may still capture images, but the captured images are not used for location recognition; or the image acquisition device does not capture an image at all.
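A sketch of this distance gate follows, using the haversine great-circle distance between the device's geographic position and the place represented by the second target place image. The threshold value and the capture callback are assumptions for illustration.

```python
# Sketch of the distance-gated acquisition in step S11; the second
# threshold value and capture_fn callback are illustrative assumptions.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS-84 coordinates."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def maybe_capture(device_pos, place_pos, capture_fn, second_threshold_m=50.0):
    """Acquire an image to be recognized only near the stored starting place."""
    if haversine_m(*device_pos, *place_pos) <= second_threshold_m:
        return capture_fn()   # acquire the image to be recognized
    return None               # skip acquisition to save computing resources
```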
When the location identification method is applied to automatic parking, this solution acquires the image to be recognized for location recognition only when the geographic position of the image acquisition device is detected to be close to a starting place of automatic parking, and performs location recognition on the acquired image. The starting place for automatic parking can thus be detected, laying a foundation for the subsequent automatic parking while saving computing resources. A practical use scenario may be as follows: a vehicle owner pre-stores, in the database, place images of two starting places for automatic parking, for example one near the owner's company and one near the owner's home. When the distance between the geographic position of the image acquisition device and either starting place is detected to be smaller than or equal to the second threshold, the image to be recognized is acquired and location recognition is performed on it. The starting place for automatic parking can therefore be recognized when the owner arrives at the company or at home, and the automatic parking function is then started to park automatically. In this scenario, the database may also pre-store place images of one or more other starting places for automatic parking, which is not described again here.
Fig. 3 is a block diagram illustrating a location identification apparatus according to an example embodiment. Referring to fig. 3, the apparatus includes an image acquisition module 11, a feature information generation module 12, and a recognition result output module 13.
An image acquisition module 11 configured to acquire an image to be recognized.
A feature information generating module 12 configured to obtain global feature information and local feature information of the image to be recognized, where the local feature information is used to describe features of a target object in the image to be recognized.
And the recognition result output module 13 is configured to determine a place image matched with the global feature information and the local feature information from a database, and take a place represented by the place image as a place of the image to be recognized.
In this technical solution, the local feature information describes the features of a target object in the image to be recognized, and the target object can be set as a landmark object, such as a building or a road sign, that does not easily change. When location recognition is performed on the image to be recognized, the global feature information and the local feature information are matched jointly: through the local feature information, attention is paid to target objects that distinguish one place from another and do not readily change over time, rather than to objects, such as green plants, that are weakly place-distinctive and whose shape easily changes over time. The accuracy of location recognition is thereby improved.
Optionally, the recognition result output module 13 includes:
the area ratio submodule is configured to determine the area ratio of the non-target object in the image to be recognized.
A first place recognition sub-module configured to, when the area ratio is smaller than a preset threshold, determine a place image matching the global feature information from the database and take the place represented by the place image as the place of the image to be recognized.
And a second place recognition sub-module configured to, when the area ratio is greater than or equal to the preset threshold, determine candidate place images matching the global feature information from the database, determine a first target place image from the candidate place images based on the local feature information, and take the place represented by the first target place image as the place of the image to be recognized.
With this solution, when the area ratio is smaller than the preset threshold, the proportion of non-target objects in the image to be recognized is small, that is, the image contains few weakly place-distinctive (non-landmark) objects, such as green plants, whose form easily changes over time; the place of the image can therefore be obtained by matching the global feature information directly, saving system computing power. When the area ratio is greater than or equal to the preset threshold, candidate place images matching the global feature information are determined from the database, a first target place image is determined from the candidate place images based on the local feature information, and the place represented by the first target place image is taken as the place of the image to be recognized.
Optionally, the feature information generating module 12 is configured to:
and inputting the image to be recognized into a preset image recognition network, and acquiring global characteristic information and local characteristic information of the image to be recognized. Wherein the preset image recognition network is applied with a local attention mechanism for the target object.
With this solution, a single preset image recognition network produces both the global feature information and the local feature information. Compared with obtaining them from two separate networks, this facilitates efficient training and deployment of the model.
Optionally, the apparatus further comprises a training module, the training module comprising:
an erase submodule configured to: and randomly erasing partial areas of the plurality of first place images of the original training set to form a plurality of second place images.
An expansion submodule configured to: expand the plurality of second place images into the original training set to obtain a new training set.
A first training submodule configured to: and performing model training based on the new training set to obtain the preset image recognition network.
With this solution, the second place images, in which partial regions have been randomly erased, are used when training the preset image recognition network, so that the network is more robust to occlusions of different degrees in the image to be recognized. The solution therefore effectively alleviates the problem that recognition of images captured at the same place differs greatly when large moving targets occupy the image, and improves the accuracy of location recognition.
Optionally, the preset image recognition network comprises an image processing sub-network, and the first training sub-module is configured to:
and downsampling the place images in the new training set into low-resolution place images to obtain a low-resolution training set. Training an initial image processing sub-network based on the low resolution training set to obtain the image processing sub-network, wherein the initial image processing sub-network is configured to perform at least one of the following operations on the location images in the low resolution training set: a defogging operation, a pixelation operation and a sharpening operation.
By the technical scheme, the initial image processing sub-network is trained by adopting a low-resolution place image in advance to obtain the image processing sub-network; in the process of location identification, the image processing subnetwork obtained by training is adopted to perform at least one of the following operations on the image to be identified: the method comprises the following steps of defogging operation, pixelation operation and sharpening operation, so that the image to be recognized is subjected to self-adaptive enhancement, and the accuracy of the location recognition of the image to be recognized is improved. Therefore, the technical scheme provided by the disclosure can effectively reduce the interference of definition, illumination and fog on the recognition result of the image to be recognized. In addition, in the training process, the place images in the new training set are down sampled into the place images with low resolution, the preset image recognition network is obtained through the place image training with low resolution, and the preset image recognition network is applied to the recognition of the to-be-recognized images with original resolution, so that the calculated amount is reduced, and the calculation resources are saved.
Optionally, as shown in fig. 2, the preset image recognition network further includes a backbone network, a global feature information generation branch network, and a local feature information generation branch network. The input end of the backbone network is connected to the output end of the image processing sub-network, and the output end of the backbone network is connected to the input ends of the global feature information generation branch network and the local feature information generation branch network. The global feature information generation branch network includes a multi-scale target detection sub-network and a generalized-mean pooling sub-network, and a local attention mechanism for the target object is applied in the local feature information generation branch network. The feature information generation module 12 is configured to:
input the image to be recognized into the image processing sub-network; obtain the global feature information of the image to be recognized through the backbone network, the multi-scale target detection sub-network, and the generalized-mean pooling sub-network; and obtain the local feature information of the image to be recognized through the backbone network and the local feature information generation branch network.
With this solution, the multi-scale target detection sub-network is used when extracting the global feature information, so that the extracted global feature information learns richer semantic features. Images to be recognized that were captured at places of similar appearance can thus be distinguished better, improving the accuracy of location recognition.
Optionally, the image acquisition module 11 includes a distance comparison sub-module and an image acquisition sub-module.
A distance comparison sub-module configured to acquire the geographic position of the image acquisition device and determine the distance between the geographic position of the image acquisition device and the geographic position of the place represented by a second target place image in the database.
An image acquisition sub-module configured to acquire, by the image acquisition device, an image to be identified in response to the distance being less than or equal to a second threshold.
When the location identification method is applied to automatic parking, this solution acquires the image to be recognized for location recognition only when the geographic position of the image acquisition device is detected to be close to a starting place of automatic parking, and performs location recognition on the acquired image. The starting place for automatic parking can thus be detected, laying a foundation for the subsequent automatic parking while saving computing resources. A practical use scenario may be as follows: a vehicle owner pre-stores, in the database, place images of two starting places for automatic parking, for example one near the owner's company and one near the owner's home. When the distance between the geographic position of the image acquisition device and either starting place is detected to be smaller than or equal to the second threshold, the image to be recognized is acquired and location recognition is performed on it. The starting place for automatic parking can therefore be recognized when the owner arrives at the company or at home, and the automatic parking function is then started to park automatically. In this scenario, the database may also pre-store place images of one or more other starting places for automatic parking, which is not described again here.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
The present disclosure also provides a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the location identification method provided by the present disclosure.
The apparatus may be part of a stand-alone electronic device. For example, in an embodiment, the apparatus may be an integrated circuit (IC) or a chip, where the IC may be a single IC or a collection of multiple ICs, and the chip may include, but is not limited to, a GPU (Graphics Processing Unit), a CPU (Central Processing Unit), an FPGA (Field Programmable Gate Array), a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an SoC (System on Chip), and the like. The integrated circuit or chip may be configured to execute executable instructions (or code) to implement the location identification method. The executable instructions may be stored in the integrated circuit or chip, or may be acquired from another apparatus or device; for example, the integrated circuit or chip includes a second processor, a second memory, and an interface for communicating with other apparatuses. The executable instructions may be stored in the second memory, and when executed by the second processor, implement the location identification method; alternatively, the integrated circuit or chip may receive executable instructions through the interface and transmit them to the second processor for execution, so as to implement the location identification method.
Referring to fig. 4, fig. 4 is a functional block diagram of a vehicle 600 according to an exemplary embodiment. The vehicle 600 may be configured in a fully or partially autonomous driving mode. For example, the vehicle 600 may acquire environmental information of its surroundings through the sensing system 620 and derive an automatic driving strategy based on an analysis of the surrounding environmental information to implement full automatic driving, or present the analysis result to the user to implement partial automatic driving.
Vehicle 600 may include various subsystems such as infotainment system 610, perception system 620, decision control system 630, drive system 640, and computing platform 650. Alternatively, vehicle 600 may include more or fewer subsystems, and each subsystem may include multiple components. In addition, each of the sub-systems and components of the vehicle 600 may be interconnected by wire or wirelessly.
In some embodiments, the infotainment system 610 may include a communication system 611, an entertainment system 612, and a navigation system 613.
The communication system 611 may comprise a wireless communication system that may communicate wirelessly with one or more devices, either directly or via a communication network. For example, the wireless communication system may use 3G cellular communication such as CDMA, EVDO, or GSM/GPRS, 4G cellular communication such as LTE, or 5G cellular communication. The wireless communication system may communicate with a wireless local area network (WLAN) using Wi-Fi. In some embodiments, the wireless communication system may communicate directly with a device using an infrared link, Bluetooth, or ZigBee, or may use other wireless protocols, such as various vehicular communication systems; for example, the wireless communication system may include one or more dedicated short range communications (DSRC) devices, which may carry public and/or private data communications between vehicles and/or roadside stations.
The entertainment system 612 may include a display device, a microphone, and a sound system. Based on the entertainment system, a user may listen to the radio or play music in the car, or a mobile phone may communicate with the vehicle to project its screen onto the display device. The display device may be a touch screen, and the user may operate it by touching the screen.
In some cases, the voice signal of the user may be acquired through a microphone, and certain control of the vehicle 600 by the user, such as adjusting the temperature in the vehicle, etc., may be implemented according to the analysis of the voice signal of the user. In other cases, music may be played to the user through a stereo.
The navigation system 613 may include a map service provided by a map provider, so as to provide navigation of a travel route for the vehicle 600, and may be used in conjunction with the global positioning system 621 and the inertial measurement unit 622 of the vehicle. The map service provided by the map provider may be a two-dimensional map or a high-precision map.
The sensing system 620 may include several sensors that sense information about the environment surrounding the vehicle 600. For example, the sensing system 620 may include a global positioning system 621 (which may be a GPS system, a BeiDou system, or another positioning system), an inertial measurement unit (IMU) 622, a laser radar 623, a millimeter-wave radar 624, an ultrasonic radar 625, and a camera 626. The sensing system 620 may also include sensors that monitor internal systems of the vehicle 600 (e.g., an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensor data from one or more of these sensors may be used to detect objects and their corresponding characteristics (position, shape, orientation, velocity, etc.). Such detection and identification is a critical function for the safe operation of the vehicle 600.
Global positioning system 621 is used to estimate the geographic location of vehicle 600.
The inertial measurement unit 622 is used to sense a pose change of the vehicle 600 based on the inertial acceleration. In some embodiments, inertial measurement unit 622 may be a combination of accelerometers and gyroscopes.
Lidar 623 utilizes laser light to sense objects in the environment in which vehicle 600 is located. In some embodiments, lidar 623 may include one or more laser sources, laser scanners, and one or more detectors, among other system components.
The millimeter-wave radar 624 utilizes radio signals to sense objects within the surrounding environment of the vehicle 600. In some embodiments, in addition to sensing objects, the millimeter-wave radar 624 may also be used to sense the speed and/or heading of objects.
The ultrasonic radar 625 may sense objects around the vehicle 600 using ultrasonic signals.
The camera 626 is used to capture image information of the surroundings of the vehicle 600. The camera 626 may include a monocular camera, a binocular camera, a structured light camera, a panoramic camera, and the like, and the image information it acquires may include still images or video streams.
Decision control system 630 includes a computing system 631 that makes analytical decisions based on information obtained by sensing system 620, and decision control system 630 further includes a vehicle controller 632 that controls the powertrain of vehicle 600, and a steering system 633, throttle 634, and brake system 635 for controlling vehicle 600.
The computing system 631 may be operable to process and analyze the various information acquired by the perception system 620 in order to identify objects and/or features in the environment surrounding the vehicle 600. The objects may comprise pedestrians or animals, and the objects and/or features may comprise traffic signals, road boundaries, and obstacles. The computing system 631 may use an object recognition algorithm, a Structure from Motion (SFM) algorithm, video tracking, and the like. In some embodiments, the computing system 631 may be used to map an environment, track objects, estimate the speed of objects, and so forth. The computing system 631 may analyze the acquired information and derive a control strategy for the vehicle.
The vehicle controller 632 may be used to perform coordinated control on the power battery and the engine 641 of the vehicle to improve the power performance of the vehicle 600.
The steering system 633 is operable to adjust the heading of the vehicle 600; for example, in one embodiment, it may be a steering wheel system.
The throttle 634 is used to control the operating speed of the engine 641 and thus the speed of the vehicle 600.
The brake system 635 is used to control the deceleration of the vehicle 600. The braking system 635 may use friction to slow the wheel 644. In some embodiments, the braking system 635 may convert the kinetic energy of the wheels 644 into electrical current. The braking system 635 may also take other forms to slow the rotational speed of the wheels 644 to control the speed of the vehicle 600.
The drive system 640 may include components that provide powered motion to the vehicle 600. In one embodiment, the drive system 640 may include an engine 641, an energy source 642, a transmission 643, and wheels 644. The engine 641 may be an internal combustion engine, an electric motor, an air compression engine, or other types of engine combinations, such as a hybrid engine consisting of a gasoline engine and an electric motor, a hybrid engine consisting of an internal combustion engine and an air compression engine. The engine 641 converts the energy source 642 into mechanical energy.
Examples of energy source 642 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electrical power. The energy source 642 may also provide energy to other systems of the vehicle 600.
The transmission 643 may transmit mechanical power from the engine 641 to the wheels 644. The transmission 643 may include a gearbox, a differential, and a drive shaft. In one embodiment, the transmission 643 may also include other devices, such as clutches. Wherein the drive shaft may include one or more axles that may be coupled to one or more wheels 644.
Some or all of the functionality of the vehicle 600 is controlled by the computing platform 650. The computing platform 650 can include at least one first processor 651, which first processor 651 can execute instructions 653 stored in a non-transitory computer-readable medium, such as first memory 652. In some embodiments, the computing platform 650 may also be a plurality of computing devices that control individual components or subsystems of the vehicle 600 in a distributed manner.
The first processor 651 may be any conventional processor, such as a commercially available CPU. Alternatively, the first processor 651 may also include a processor such as a Graphics Processor Unit (GPU), a Field Programmable Gate Array (FPGA), a System On Chip (SOC), an Application Specific Integrated Circuit (ASIC), or a combination thereof. Although fig. 4 functionally illustrates a processor, memory, and other elements of a computer in the same block, those skilled in the art will appreciate that the processor, computer, or memory may actually comprise multiple processors, computers, or memories that may or may not be stored within the same physical housing. For example, the memory may be a hard drive or other storage medium located in a different housing than the computer. Thus, references to a processor or computer are to be understood as including references to a collection of processors or computers or memories which may or may not operate in parallel. Rather than using a single processor to perform the steps described herein, some components, such as the steering component and the retarding component, may each have their own processor that performs only computations related to the component-specific functions.
In the disclosed embodiment, the first processor 651 may perform the location identification method described above.
In various aspects described herein, the first processor 651 may be located remotely from the vehicle and in wireless communication with the vehicle. In other aspects, some of the processes described herein are executed on a processor disposed within the vehicle and others are executed by a remote processor, including taking the steps necessary to perform a single maneuver.
In some embodiments, the first memory 652 can contain instructions 653 (e.g., program logic), which instructions 653 can be executed by the first processor 651 to perform various functions of the vehicle 600. The first memory 652 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the infotainment system 610, the perception system 620, the decision control system 630, the drive system 640.
In addition to instructions 653, first memory 652 may store data such as road maps, route information, the location, direction, speed of the vehicle, and other such vehicle data, as well as other information. Such information may be used by the vehicle 600 and the computing platform 650 during operation of the vehicle 600 in autonomous, semi-autonomous, and/or manual modes.
The computing platform 650 may control functions of the vehicle 600 based on inputs received from various subsystems (e.g., the drive system 640, the perception system 620, and the decision control system 630). For example, the computing platform 650 may utilize input from the decision control system 630 in order to control the steering system 633 to avoid obstacles detected by the perception system 620. In some embodiments, the computing platform 650 is operable to provide control over many aspects of the vehicle 600 and its subsystems.
Optionally, one or more of these components described above may be mounted or associated separately from the vehicle 600. For example, the first memory 652 may exist partially or completely separate from the vehicle 600. The above components may be communicatively coupled together in a wired and/or wireless manner.
Optionally, the components described above are only an example; in an actual application, components in the above modules may be added or removed according to actual needs, and fig. 4 should not be construed as limiting the embodiments of the present disclosure.
An autonomous automobile traveling on a roadway, such as the vehicle 600 above, may identify objects within its surrounding environment in order to determine an adjustment to its current speed. The object may be another vehicle, a traffic control device, or another type of object. In some examples, each identified object may be considered independently, and the speed adjustment for the autonomous vehicle may be determined based on the object's respective characteristics, such as its current speed, acceleration, and separation from the vehicle.
Optionally, the vehicle 600 or a sensory and computing device associated with the vehicle 600 (e.g., computing system 631, computing platform 650) may predict the behavior of an identified object based on the object's characteristics and the state of the surrounding environment (e.g., traffic, rain, ice on the road, etc.). Optionally, because the behaviors of the identified objects may depend on one another, the behavior of a single identified object may also be predicted by considering all of the identified objects together. The vehicle 600 can adjust its speed based on the predicted behavior of the identified objects. In other words, the autonomous vehicle can determine what stable state it needs to adjust to (e.g., accelerate, decelerate, or stop) based on the predicted behavior of the objects. In this process, other factors may also be considered in determining the speed of the vehicle 600, such as the lateral position of the vehicle 600 in the road being traveled, the curvature of the road, and the proximity of static and dynamic objects.
In addition to providing instructions to adjust the speed of the autonomous vehicle, the computing device may also provide instructions to modify the steering angle of the vehicle 600 to cause the autonomous vehicle to follow a given trajectory and/or maintain a safe lateral and longitudinal distance from objects in the vicinity of the autonomous vehicle (e.g., vehicles in adjacent lanes on the road).
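By way of illustration only (the following sketch is not part of the disclosure or the claims), the coarse speed-adjustment decision described above might look like the Python below; the object representation, the thresholds, and the decision rules are all assumptions introduced here:

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    distance_m: float         # current gap between vehicle and object
    closing_speed_mps: float  # positive when the gap is shrinking

def plan_speed_adjustment(objects: list[TrackedObject],
                          min_gap_m: float = 10.0,
                          safe_time_gap_s: float = 2.0) -> str:
    """Pick a coarse speed action from predicted object behavior.

    Hypothetical thresholds and rules, for illustration only.
    """
    for obj in objects:
        if obj.distance_m <= min_gap_m:
            return "stop"                      # object already too close
        if obj.closing_speed_mps > 0:
            time_to_contact = obj.distance_m / obj.closing_speed_mps
            if time_to_contact < safe_time_gap_s:
                return "decelerate"            # gap closing too quickly
    return "accelerate"                        # no object constrains speed
```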
The vehicle 600 may be any type of vehicle, such as a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a recreational vehicle, or a train, and the embodiments of the present disclosure are not particularly limited in this regard.
In another exemplary embodiment, a computer program product is also provided, which comprises a computer program executable by a programmable apparatus, the computer program having code portions for performing the above-mentioned location identification method when executed by the programmable apparatus.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A method of identifying a location, comprising:
acquiring an image to be recognized;
acquiring global feature information and local feature information of the image to be recognized, wherein the local feature information is used for describing features of a target object in the image to be recognized;
determining a place image matched with the global feature information and the local feature information from a database, and taking a place represented by the place image as a place of the image to be recognized;
wherein the determining a place image matched with the global feature information and the local feature information from a database, and taking a place represented by the place image as a place of the image to be recognized comprises:
determining an area ratio of non-target objects in the image to be recognized;
under the condition that the area ratio is smaller than a preset threshold, determining a place image matched with the global feature information from the database, and taking a place represented by the place image as the place of the image to be recognized;
and under the condition that the area ratio is larger than or equal to the preset threshold, determining candidate place images matched with the global feature information from the database, determining a first target place image from the candidate place images based on the local feature information, and taking a place represented by the first target place image as the place of the image to be recognized.
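Purely as a reader's aid (not part of the claim), a minimal Python sketch of the claim-1 matching logic could look as follows; the cosine similarity, the database layout, and the top-k re-ranking depth are assumptions:

```python
import numpy as np

def identify_place(query_global: np.ndarray,
                   query_local: np.ndarray,
                   non_target_area_ratio: float,
                   database: list,
                   area_threshold: float = 0.5,
                   top_k: int = 5) -> str:
    """Match on global features alone when non-target objects occupy
    little of the image; otherwise re-rank the global candidates using
    local (target-object) features."""
    def sim(a: np.ndarray, b: np.ndarray) -> float:
        # cosine similarity (an assumed choice of metric)
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    ranked = sorted(database, key=lambda e: sim(query_global, e["global"]), reverse=True)
    if non_target_area_ratio < area_threshold:
        return ranked[0]["place"]              # global-feature match only
    candidates = ranked[:top_k]                # candidate place images
    best = max(candidates, key=lambda e: sim(query_local, e["local"]))
    return best["place"]                       # first target place image
```

Each database entry is assumed here to be a dict with "global", "local", and "place" keys.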
2. The location identification method according to claim 1, wherein the acquiring global feature information and local feature information of the image to be recognized comprises:
inputting the image to be recognized into a preset image recognition network, and acquiring the global feature information and the local feature information of the image to be recognized, wherein a local attention mechanism for the target object is applied in the preset image recognition network.
3. The location identification method according to claim 2, wherein the preset image recognition network is trained by:
randomly erasing partial areas of a plurality of first place images of an original training set to form a plurality of second place images;
adding the plurality of second place images into the original training set to obtain a new training set;
and performing model training based on the new training set to obtain the preset image recognition network.
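A minimal sketch of the random-erasing augmentation of claim 3, assuming NumPy images and rectangular zero-filled regions (both assumptions):

```python
import random
import numpy as np

def random_erase(image: np.ndarray, max_fraction: float = 0.3) -> np.ndarray:
    """Erase a random rectangular partial area of a first place image,
    yielding a second place image for training-set expansion."""
    h, w = image.shape[:2]
    eh = random.randint(1, max(1, int(h * max_fraction)))  # erased height
    ew = random.randint(1, max(1, int(w * max_fraction)))  # erased width
    y = random.randint(0, h - eh)
    x = random.randint(0, w - ew)
    erased = image.copy()
    erased[y:y + eh, x:x + ew] = 0  # zero-fill the erased region
    return erased

# Training-set expansion in the spirit of claim 3 (hypothetical):
# new_training_set = original_set + [random_erase(img) for img in original_set]
```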
4. The location identification method according to claim 3, wherein the preset image recognition network comprises an image processing sub-network, and the performing model training based on the new training set to obtain the preset image recognition network comprises:
down-sampling the place images in the new training set into low-resolution place images to obtain a low-resolution training set;
training an initial image processing sub-network based on the low-resolution training set to obtain the image processing sub-network, wherein the initial image processing sub-network is configured to perform at least one of the following operations on a place image in the low-resolution training set: a defogging operation, a pixelation operation, and a sharpening operation.
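As one hedged reading of claim 4, the low-resolution training set might be produced by a resampling step as simple as strided subsampling; the factor and the method are assumptions, not the patented procedure:

```python
import numpy as np

def downsample(image: np.ndarray, factor: int = 4) -> np.ndarray:
    """Down-sample a place image to low resolution by keeping every
    `factor`-th pixel (a deliberately naive stand-in for whatever
    resampling the disclosure actually uses)."""
    return image[::factor, ::factor].copy()

# low_res_training_set = [downsample(img) for img in new_training_set]
```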
5. The location identification method according to claim 4, wherein the preset image recognition network further comprises a backbone network, a global feature information generation branch network, and a local feature information generation branch network; an input end of the backbone network is connected to an output end of the image processing sub-network, and an output end of the backbone network is connected to an input end of the global feature information generation branch network and to an input end of the local feature information generation branch network; the global feature information generation branch network comprises a multi-scale target detection sub-network and a generalized average pooling sub-network, and a local attention mechanism for the target object is applied in the local feature information generation branch network; and the inputting the image to be recognized into the preset image recognition network to obtain the global feature information and the local feature information of the image to be recognized comprises:
inputting the image to be recognized into the image processing sub-network, and obtaining the global feature information of the image to be recognized through the backbone network, the multi-scale target detection sub-network, and the generalized average pooling sub-network; and obtaining the local feature information of the image to be recognized through the backbone network and the local feature information generation branch network.
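To make the two-branch topology of claim 5 concrete, here is a toy PyTorch sketch; the layer sizes, the single convolution standing in for the multi-scale target detection sub-network, and the sigmoid attention map are all assumptions rather than the patented architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeM(nn.Module):
    """Generalized-mean pooling: p = 1 is average pooling; large p approaches max pooling."""
    def __init__(self, p: float = 3.0, eps: float = 1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.tensor(p))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        x = x.clamp(min=self.eps).pow(self.p)
        return F.adaptive_avg_pool2d(x, 1).pow(1.0 / self.p).flatten(1)

class PlaceRecognitionNet(nn.Module):
    """Backbone feeding a global branch (multi-scale conv + GeM pooling)
    and a local branch (spatial attention over target-object regions)."""
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.multi_scale = nn.Conv2d(feat_dim, feat_dim, 3, padding=1)  # stand-in
        self.gem = GeM()
        self.attention = nn.Conv2d(feat_dim, 1, 1)  # local attention map

    def forward(self, x: torch.Tensor):
        f = self.backbone(x)                         # shared backbone features
        global_feat = self.gem(self.multi_scale(f))  # global branch -> (B, C)
        attn = torch.sigmoid(self.attention(f))      # (B, 1, H', W')
        local_feat = (f * attn).mean(dim=(2, 3))     # local branch -> (B, C)
        return global_feat, local_feat
```

In this picture, the image processing sub-network (defogging, sharpening, etc.) would sit in front of `backbone`.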
6. The location identification method according to any one of claims 1 to 5, wherein the acquiring an image to be recognized comprises:
acquiring a geographic position of an image acquisition device, and determining a distance between the geographic position of the image acquisition device and a geographic position of a place represented by a second target place image in the database;
and in response to the distance being smaller than or equal to a second threshold, acquiring the image to be recognized through the image acquisition device.
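A minimal sketch of the distance gate in claim 6, assuming WGS-84 latitude/longitude and a haversine distance; the 50 m default for the second threshold is an assumption:

```python
import math

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6_371_000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def should_capture(device_pos: tuple, place_pos: tuple,
                   second_threshold_m: float = 50.0) -> bool:
    """Trigger image acquisition only when the image acquisition device is
    within the second threshold of a database place."""
    return haversine_m(*device_pos, *place_pos) <= second_threshold_m
```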
7. A location identification device, comprising:
an image acquisition module configured to acquire an image to be recognized;
a feature information generation module configured to acquire global feature information and local feature information of the image to be recognized, wherein the local feature information is used for describing features of a target object in the image to be recognized;
and a recognition result output module configured to determine a place image matched with the global feature information and the local feature information from a database, and take a place represented by the place image as a place of the image to be recognized, which specifically includes:
determining an area ratio of non-target objects in the image to be recognized;
under the condition that the area ratio is smaller than a preset threshold, determining a place image matched with the global feature information from the database, and taking a place represented by the place image as the place of the image to be recognized;
and under the condition that the area ratio is larger than or equal to the preset threshold, determining candidate place images matched with the global feature information from the database, determining a first target place image from the candidate place images based on the local feature information, and taking a place represented by the first target place image as the place of the image to be recognized.
8. A vehicle, characterized by comprising:
a first processor;
a first memory for storing first processor-executable instructions;
wherein the first processor is configured to:
acquire an image to be recognized;
acquire global feature information and local feature information of the image to be recognized, wherein the local feature information is used for describing features of a target object in the image to be recognized;
and determine a place image matched with the global feature information and the local feature information from a database, and take a place represented by the place image as a place of the image to be recognized;
wherein the determining a place image matched with the global feature information and the local feature information from a database, and taking a place represented by the place image as a place of the image to be recognized comprises:
determining an area ratio of non-target objects in the image to be recognized;
under the condition that the area ratio is smaller than a preset threshold, determining a place image matched with the global feature information from the database, and taking a place represented by the place image as the place of the image to be recognized;
and under the condition that the area ratio is larger than or equal to the preset threshold, determining candidate place images matched with the global feature information from the database, determining a first target place image from the candidate place images based on the local feature information, and taking a place represented by the first target place image as the place of the image to be recognized.
9. A computer-readable storage medium on which computer program instructions are stored, which program instructions, when executed by a processor, implement the steps of the method according to any one of claims 1 to 6.
10. A chip comprising a second processor and an interface; the second processor is configured to read instructions to perform the method of any one of claims 1 to 6.
CN202210847873.0A 2022-07-19 2022-07-19 Location identification method, location identification device, vehicle, storage medium and chip Active CN115082772B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210847873.0A CN115082772B (en) 2022-07-19 2022-07-19 Location identification method, location identification device, vehicle, storage medium and chip

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210847873.0A CN115082772B (en) 2022-07-19 2022-07-19 Location identification method, location identification device, vehicle, storage medium and chip

Publications (2)

Publication Number Publication Date
CN115082772A (en) 2022-09-20
CN115082772B (en) 2022-11-11

Family

ID=83259985

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210847873.0A Active CN115082772B (en) 2022-07-19 2022-07-19 Location identification method, location identification device, vehicle, storage medium and chip

Country Status (1)

Country Link
CN (1) CN115082772B (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017102672A (en) * 2015-12-01 2017-06-08 株式会社日立ソリューションズ Geographic position information specification system and geographic position information specification method
JP7153264B2 (en) * 2018-08-02 2022-10-14 三菱重工業株式会社 Image analysis system, image analysis method and image analysis program
JP2021047737A (en) * 2019-09-19 2021-03-25 エアロセンス株式会社 Information processor, information processing method, and information processing program
CN111104867B (en) * 2019-11-25 2023-08-25 北京迈格威科技有限公司 Recognition model training and vehicle re-recognition method and device based on part segmentation
CN111507381B (en) * 2020-03-31 2024-04-02 上海商汤智能科技有限公司 Image recognition method, related device and equipment
CN111967515B (en) * 2020-08-14 2024-09-06 Oppo广东移动通信有限公司 Image information extraction method, training method and device, medium and electronic equipment



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant