CN115205311A - Image processing method, image processing apparatus, vehicle, medium, and chip - Google Patents

Image processing method, image processing apparatus, vehicle, medium, and chip

Info

Publication number
CN115205311A
CN115205311A (application CN202210837775.9A)
Authority
CN
China
Prior art keywords
image
target
bird's-eye view image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210837775.9A
Other languages
Chinese (zh)
Other versions
CN115205311B (en)
Inventor
Li Wang (李旺)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiaomi Automobile Technology Co Ltd
Original Assignee
Xiaomi Automobile Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaomi Automobile Technology Co Ltd filed Critical Xiaomi Automobile Technology Co Ltd
Priority to CN202210837775.9A priority Critical patent/CN115205311B/en
Publication of CN115205311A publication Critical patent/CN115205311A/en
Application granted granted Critical
Publication of CN115205311B publication Critical patent/CN115205311B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The disclosure relates to an image processing method, an image processing apparatus, a vehicle, a medium, and a chip, and relates to the field of autonomous driving. The method comprises the following steps: acquiring a target hard example sample, wherein the target hard example sample comprises a first bird's-eye view image; determining a first partial image in the first bird's-eye view image that corresponds to a target object; and adding the first partial image to a second bird's-eye view image corresponding to a target scene to obtain a target bird's-eye view image which corresponds to the target scene and contains the target object. Thus, the image content corresponding to the target object can be determined from the acquired target hard example sample, which amounts to obtaining hard example data of the target object, and the resulting target bird's-eye view image corresponds to the target scene and contains the target object, which amounts to forming a new hard example sample for the target scene. Therefore, images corresponding to different objects can be generated for different scenes to serve as new hard example samples, real-vehicle acquisition is not needed, and the efficiency of obtaining hard example samples is improved.

Description

Image processing method, image processing apparatus, vehicle, medium, and chip
Technical Field
The present disclosure relates to the field of automatic driving, and in particular, to an image processing method, an image processing apparatus, a vehicle, a medium, and a chip.
Background
Currently, multi-camera visual perception is an important technology in the field of autonomous driving. In general, images are captured by a plurality of surround-view fisheye cameras provided on a vehicle, a perception result in the bird's-eye view space is obtained by performing a series of image processing steps on the captured images, and the perception result is then used for different tasks (for example, ground obstacle detection, parking slot detection, road surface sign detection, and the like). Therefore, obtaining high-quality images with complex information as training data plays an important role in improving task accuracy. In the related art, the images used as task training data are usually collected by a real vehicle; this real-vehicle collection takes a lot of time and labor, so the efficiency of obtaining training data is not high. For complex training data (i.e., hard example samples) in particular, the difficulty of collection and the labor consumed in screening the data further reduce the efficiency of obtaining training data.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides an image processing method, apparatus, vehicle, medium, and chip.
According to a first aspect of embodiments of the present disclosure, there is provided an image processing method, the method including:
acquiring a target hard example sample, wherein the target hard example sample comprises a first aerial view image;
determining a first partial image corresponding to a target object in the first bird's eye view image;
and adding the first partial image to a second aerial view image corresponding to a target scene to obtain a target aerial view image corresponding to the target scene and containing the target object.
Optionally, the determining a first partial image of the first bird's eye image that corresponds to the target object comprises:
inputting the first aerial view image into a pre-trained image segmentation model to obtain a segmentation image output by the image segmentation model, wherein the segmentation image is used for indicating an image area corresponding to a preset object in the first aerial view image;
determining an image area corresponding to the target object in the segmented image as a target image area;
an image area corresponding to the target image area is extracted from the first bird's-eye image as the first partial image.
Optionally, the image segmentation model is trained by:
acquiring training data, wherein the training data comprises a sample aerial view image and an annotation image corresponding to the sample aerial view image, and the annotation image is used for indicating preset objects corresponding to pixel points in the sample aerial view image;
and performing model training by taking the sample aerial view image as the input of a model and taking the marked image as the target output of the model to obtain the trained image segmentation model.
Optionally, the adding the first partial image to a second bird's eye view image corresponding to a target scene comprises:
determining an image area in the second bird's-eye view image, which has a position association relationship with the target object, as a first association area;
adding the first local image to the first associated region.
Optionally, the method further comprises:
acquiring a first acquired image corresponding to the first aerial view image, wherein the first aerial view image is a spliced image generated on the basis of the first acquired image;
determining a second partial image corresponding to the first partial image in the first collected image according to the coordinate mapping relation between the first aerial view image and the first collected image;
acquiring a second acquired image corresponding to the second aerial view image, wherein the second aerial view image is a spliced image generated on the basis of the second acquired image;
adding the second partial image to the second captured image resulting in a target captured image corresponding to the target scene and including the target object.
Optionally, the adding the second partial image to the second captured image comprises:
determining an image area in the second acquired image, which has a position association relation with the target object, as a second association area;
adding the second partial image into the second associated region.
Optionally, the target object is any one of the following:
a parking slot line, a wheel stop, a speed bump, and a zebra crossing.
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing apparatus, the apparatus including:
a first acquisition module configured to acquire a target hard example sample, the target hard example sample including a first bird's eye view image;
a first determination module configured to determine a first partial image of the first bird's eye view image that corresponds to a target object;
a first adding module configured to add the first partial image to a second bird's eye view image corresponding to a target scene to obtain a target bird's eye view image corresponding to the target scene and containing the target object.
Optionally, the first determining module includes:
a segmentation sub-module configured to input the first bird's-eye view image into a pre-trained image segmentation model, and obtain a segmented image output by the image segmentation model, wherein the segmented image is used for indicating an image area corresponding to a preset object in the first bird's-eye view image;
a first determination submodule configured to determine an image region corresponding to the target object in the divided image as a target image region;
an extraction sub-module configured to extract an image area corresponding to the target image area from the first bird's-eye image as the first partial image.
Optionally, the image segmentation model is trained by the following modules:
the second acquisition module is configured to acquire training data, wherein the training data comprises a sample aerial view image and an annotation image corresponding to the sample aerial view image, and the annotation image is used for indicating preset objects corresponding to pixel points in the sample aerial view image;
and the training module is configured to perform model training by taking the sample aerial view image as an input of a model and taking the marked image as a target output of the model so as to obtain the trained image segmentation model.
Optionally, the first adding module includes:
a second determination submodule configured to determine, as a first related area, an image area having a positional relationship with the target object in the second bird's eye view image;
a first adding sub-module configured to add the first partial image into the first associated region.
Optionally, the apparatus further comprises:
a third acquisition module configured to acquire a first captured image corresponding to the first bird's-eye view image, the first bird's-eye view image being a stitched image generated based on the first captured image;
a second determination module configured to determine a second partial image corresponding to the first partial image in the first captured image according to a coordinate mapping relationship between the first bird's eye-view image and the first captured image;
a fourth acquisition module configured to acquire a second captured image corresponding to the second bird's-eye view image, the second bird's-eye view image being a stitched image generated based on the second captured image;
a second adding module configured to add the second partial image to the second captured image, resulting in a target captured image corresponding to the target scene and including the target object.
Optionally, the second adding module includes:
a third determination submodule configured to determine, as a second association area, an image area in the second captured image that has a position association relationship with the target object;
a second adding sub-module configured to add the second partial image into the second associated region.
Optionally, the target object is any one of the following:
a parking slot line, a wheel stop, a speed bump, and a zebra crossing.
According to a third aspect of an embodiment of the present disclosure, there is provided a vehicle including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to execute the instructions in the memory to implement the steps of the image processing method provided by the first aspect of the present disclosure.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the image processing method provided by the first aspect of the present disclosure.
According to a fifth aspect of embodiments of the present disclosure, there is provided a chip comprising a processor and an interface; the processor is used for reading instructions to execute the image processing method provided by the first aspect of the disclosure.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
according to the technical scheme, after the first aerial view image in the target difficulty sample is obtained, the first local image corresponding to the target object in the first aerial view image is determined, and the first local image is added to the second aerial view image corresponding to the target scene, so that the target aerial view image corresponding to the target scene and containing the target object is obtained. Thus, based on the acquired target difficult-to-sample, the image content corresponding to the target object, that is, the first partial image, can be determined therefrom, which corresponds to the difficult-to-sample data of the target object being obtained; after that, the first partial image is added to the second bird's-eye view image corresponding to the target scene, and the obtained target bird's-eye view image corresponds to the target scene and also includes the target object, thereby forming a new difficult example sample corresponding to the target scene. Therefore, images corresponding to different objects can be generated for different scenes to serve as new difficultly-exemplified samples, real-vehicle collection is not needed, and the efficiency of obtaining the difficultly-exemplified samples is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow diagram illustrating an image processing method according to an exemplary embodiment.
Fig. 2 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment.
FIG. 3 is a functional block diagram schematic of a vehicle shown in an exemplary embodiment.
Fig. 4 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
It should be noted that all the actions of acquiring signals, information or data in the present application are performed under the premise of complying with the corresponding data protection regulation policy of the country of the location and obtaining the authorization given by the owner of the corresponding device.
In general, samples on which an algorithm (or model) has difficulty producing correct processing results (e.g., incorrect matching, incorrect recognition, incorrect detection, etc.) may be referred to as hard example samples; they can be regarded as obstacles that limit the performance of the algorithm (or model). As described in the background, in the related art the hard example samples required by a task are collected by real-vehicle acquisition, which is difficult and inefficient.
In order to solve the above technical problems, the present disclosure provides an image processing method, an image processing apparatus, a vehicle, a medium, and a chip, so as to improve the efficiency of obtaining hard example samples.
FIG. 1 is a flow diagram illustrating an image processing method according to an exemplary embodiment. As shown in fig. 1, the method provided by the present disclosure may include the following steps 11 to 13.
In step 11, a target hard example sample is obtained.
As described above, a hard example sample is a sample that an algorithm (or model) has difficulty processing (e.g., difficult to recognize, difficult to detect, etc.). Different task scenarios correspond to different hard example samples. For example, in a parking slot line recognition task, images of partially occluded parking slot lines, images of worn parking slot lines, and the like can all serve as hard example samples.
In the application scenario of the present disclosure, some images serving as hard example samples already exist, and the present disclosure aims to perform data augmentation based on these images to obtain new images that can serve as hard example samples.
The target hard example sample obtained in step 11 may include a first bird's-eye view image. In an autonomous driving scenario, image capturing devices (e.g., surround-view fisheye cameras) are usually provided on the vehicle; based on these image capturing devices, a plurality of captured images corresponding to the same capture time can be obtained, and these images are stitched and fused to obtain a corresponding bird's-eye view image.
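As an illustration of this stitching step, the following is a minimal sketch (not the implementation specified by this disclosure) of fusing several surround-view camera frames into one bird's-eye view image by per-camera inverse perspective mapping; the ground-plane homographies, the OpenCV calls, and the simple average blending in overlap regions are all assumptions.

```python
import cv2
import numpy as np

def stitch_bev(frames, homographies, bev_size=(800, 800)):
    """frames: list of HxWx3 uint8 images; homographies: list of 3x3 ground-plane maps
    (assumed to come from prior intrinsic/extrinsic calibration)."""
    bev_acc = np.zeros((*bev_size, 3), dtype=np.float32)
    weight = np.zeros(bev_size, dtype=np.float32)
    for img, H in zip(frames, homographies):
        # Warp each camera frame onto the common ground-plane grid.
        warped = cv2.warpPerspective(img, H, bev_size[::-1]).astype(np.float32)
        mask = (warped.sum(axis=2) > 0).astype(np.float32)
        bev_acc += warped * mask[..., None]
        weight += mask
    weight = np.maximum(weight, 1.0)  # avoid division by zero in empty cells
    return (bev_acc / weight[..., None]).astype(np.uint8)  # average blend in overlaps
```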
In step 12, a first partial image of the first bird's eye view image corresponding to the target object is determined.
The target object may be set according to actual requirements, for example, as an object related to the task. Target objects may include, but are not limited to, parking slot lines, wheel stops, speed bumps, zebra crossings, and the like. For example, a parking slot line recognition task needs the capability of recognizing parking slot lines, so the target object may correspondingly be set as the parking slot line. For another example, a zebra crossing detection task needs the capability of detecting zebra crossings, so the target object may be set as the zebra crossing.
In one possible embodiment, step 12 may include the steps of:
inputting the first aerial view image into a pre-trained image segmentation model to obtain a segmentation image output by the image segmentation model;
determining an image area corresponding to the target object in the segmented image as a target image area;
an image area corresponding to the target image area is extracted from the first bird's eye view image as a first partial image.
The segmented image may be used to indicate the image areas in the first bird's-eye view image that correspond to preset objects.
Illustratively, the image segmentation model may be trained by:
acquiring training data;
and performing model training by taking the sample bird's-eye view image as the input of the model and taking the annotation image as the target output of the model, so as to obtain a trained image segmentation model.
The training data can comprise a sample aerial view image and an annotation image corresponding to the sample aerial view image, and the annotation image can be used for indicating preset objects corresponding to pixel points in the sample aerial view image.
The preset objects can be various objects possibly involved in a driving scene. Illustratively, the preset objects may include, but are not limited to, the following: a parking slot line, a wheel stop, a speed bump, and a zebra crossing.
In one possible embodiment, for each pixel point in the sample bird's eye view image, it may be labeled with an N-dimensional vector (N is a positive integer), where N is the number of preset objects, and each element in the N-dimensional vector corresponds to one preset object. For example, the annotation image corresponding to the sample bird's-eye view image can be obtained by means of manual annotation.
For example, suppose the preset objects include a parking slot line, a wheel stop, and a speed bump, and the label information corresponding to a pixel point is [X1, X2, X3], where X1 corresponds to the parking slot line, X2 corresponds to the wheel stop, X3 corresponds to the speed bump, a label value of 1 indicates yes, and a label value of 0 indicates no. Then, if a certain pixel point is a pixel point corresponding to the parking slot line, the labeling information for the pixel point may be [1, 0, 0]; if a certain pixel point is not any of the parking slot line, the wheel stop, and the speed bump, the labeling information for the pixel point may be [0, 0, 0].
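For illustration, the following is a minimal sketch of how such per-pixel N-dimensional annotation vectors could be built from a class-index map; the class order, the background value of -1, and the function name are hypothetical and not taken from this disclosure.

```python
import numpy as np

# Illustrative class order assumed for this sketch only (N = 3).
PRESET_OBJECTS = ["parking_slot_line", "wheel_stop", "speed_bump"]

def encode_annotation(class_index_map):
    """class_index_map: HxW int array, -1 = background, k = index into PRESET_OBJECTS.
    Returns an HxWxN array where each pixel holds its [X1, X2, X3]-style vector."""
    h, w = class_index_map.shape
    label = np.zeros((h, w, len(PRESET_OBJECTS)), dtype=np.uint8)
    ys, xs = np.nonzero(class_index_map >= 0)
    label[ys, xs, class_index_map[ys, xs]] = 1  # e.g. a slot-line pixel becomes [1, 0, 0]
    return label
```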
After the training data required for training the image segmentation model is obtained in the above manner, model training is performed by taking the sample bird's-eye view image as the input of the model and the annotation image as the target output of the model, so as to obtain a trained image segmentation model.
In an example, in one training iteration, the sample bird's-eye view image is input into the model used in the current iteration to obtain the output of that model; a loss function is then computed from the output and the annotation image corresponding to the input sample bird's-eye view image, the model is updated using the result of the loss function, and the updated model is used in the next iteration. This is repeated until the condition for stopping training is met, and the resulting model is taken as the trained image segmentation model.
Illustratively, the model training described above may use a neural network model. As another example, the model loss function may be a cross-entropy loss function. As another example, the conditions under which the model stops training may include, but are not limited to, any of the following: the training times reach the preset times, the training duration reaches the preset duration, and the calculation result of the loss function is lower than the preset loss value.
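The following is a hedged sketch of one possible training loop consistent with the above description, using per-pixel cross-entropy loss and stopping on either an epoch budget or a loss threshold; the model architecture, optimizer, data loader, and hyperparameters are assumptions and are not specified by this disclosure.

```python
import torch
import torch.nn as nn

def train_segmentation(model, loader, max_epochs=50, min_loss=0.05, lr=1e-3, device="cuda"):
    """loader yields (bev_image, label_index_map): NxCxHxW float, NxHxW long."""
    model.to(device).train()
    criterion = nn.CrossEntropyLoss()            # per-pixel cross-entropy
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(max_epochs):              # stop when the epoch budget is reached...
        running = 0.0
        for bev, target in loader:
            bev, target = bev.to(device), target.to(device)
            logits = model(bev)                  # NxKxHxW scores over preset objects
            loss = criterion(logits, target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            running += loss.item()
        if running / max(len(loader), 1) < min_loss:   # ...or when loss falls below a preset value
            break
    return model
```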
Based on the trained image segmentation model, the first bird's-eye view image can be input into the image segmentation model to obtain the segmented image output by the model.
As noted above, the segmented image may be used to indicate the image areas in the first bird's-eye view image that correspond to preset objects. In the segmented image output by the image segmentation model, each pixel point can correspond to a multi-dimensional vector whose elements correspond one-to-one to the preset objects; each element value represents the probability that the pixel point belongs to the corresponding preset object, and the preset object with the maximum probability value is the preset object to which the pixel point belongs (that is, the preset object corresponding to the pixel point). On this basis, the preset object corresponding to each pixel point in the segmented image can be determined, and the image area corresponding to each preset object can be determined from the pixel points corresponding to the same preset object.
Based on the image region corresponding to each preset object indicated in the segmented image, the image region corresponding to the target object, i.e. the target image region, can be determined.
After the target image area is determined, its position in the segmented image can be determined; since the first bird's-eye view image and the segmented image have the same size, the corresponding image area can be extracted from the corresponding position of the first bird's-eye view image as the first partial image, that is, the image content (i.e., pixel points) corresponding to the target object in the first bird's-eye view image can be extracted.
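A minimal sketch of these two steps (per-pixel argmax over the segmentation output, then masking out the target object's pixels from the first bird's-eye view image) is given below; the probability threshold and the function name are added assumptions.

```python
import numpy as np

def extract_first_partial_image(first_bev, seg_probs, target_class, min_prob=0.5):
    """first_bev: HxWx3 uint8; seg_probs: HxWxK per-pixel probabilities over preset objects."""
    pred_class = seg_probs.argmax(axis=2)        # preset object with the maximum probability
    target_mask = (pred_class == target_class) & (seg_probs.max(axis=2) >= min_prob)
    partial = np.zeros_like(first_bev)
    partial[target_mask] = first_bev[target_mask]  # keep only the target-object pixels
    return partial, target_mask
```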
In step 13, the first partial image is added to the second bird's-eye view image corresponding to the target scene, resulting in a target bird's-eye view image corresponding to the target scene and containing the target object.
In one possible implementation, the first partial image may be added to a specified position in the second bird's eye view image. For example, the designated position may be preset according to actual requirements. As another example, the specified location may be determined manually.
In another possible embodiment, step 13 may include the steps of:
determining an image area in the second bird's-eye view image, which has a position association relationship with the target object, as a first association area;
the first partial image is added to the first associated region.
In general, a target object appears only in areas that have a position association relationship with it. For example, a wheel stop appears in a parking area but not on a driving lane. Based on this, a set of rules may be set in advance to indicate the regions having a position association relationship with each target object. Thus, when the first partial image is added to the second bird's-eye view image, the first associated region having a position association relationship with the target object can first be determined in the second bird's-eye view image, and the first partial image can then be added to the first associated region. In this way, the first partial image corresponding to the target object is prevented from being added to an inappropriate position, which improves the realism and accuracy of the target bird's-eye view image.
The region division of the second bird's-eye view image may be performed manually or by an image segmentation technique.
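The following sketch illustrates one way such a rule set and region division could be applied: a hypothetical table lists the region labels each target object may appear in, the first associated region is looked up in a region-label map of the second bird's-eye view image, and the masked pixels are pasted there with a simple top-left alignment. All names, the rule table, and the alignment strategy are assumptions, not the disclosure's prescribed method.

```python
import numpy as np

# Hypothetical rule table: regions each target object is allowed to appear in.
ALLOWED_REGIONS = {"wheel_stop": {"parking_area"}, "zebra_crossing": {"road_surface"}}

def paste_into_associated_region(second_bev, region_labels, partial, mask, target_object):
    """region_labels: HxW array naming the region of each pixel in the second bird's-eye view."""
    out = second_bev.copy()
    allowed = ALLOWED_REGIONS.get(target_object, set())
    region_mask = np.isin(region_labels, list(allowed))   # first associated region
    ys, xs = np.nonzero(mask)
    rys, rxs = np.nonzero(region_mask)
    if len(ys) == 0 or len(rys) == 0:
        return out
    # Shift the patch so its top-left corner lands at the region's top-left corner.
    dy, dx = rys.min() - ys.min(), rxs.min() - xs.min()
    ty, tx = ys + dy, xs + dx
    keep = (ty >= 0) & (ty < out.shape[0]) & (tx >= 0) & (tx < out.shape[1])
    out[ty[keep], tx[keep]] = partial[ys[keep], xs[keep]]
    return out
```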
According to the above technical scheme, after the first bird's-eye view image in the target hard example sample is obtained, the first partial image corresponding to the target object in the first bird's-eye view image is determined, and the first partial image is added to the second bird's-eye view image corresponding to the target scene, so that a target bird's-eye view image corresponding to the target scene and containing the target object is obtained. Thus, based on the acquired target hard example sample, the image content corresponding to the target object, that is, the first partial image, can be determined from it, which amounts to obtaining hard example data of the target object; after that, the first partial image is added to the second bird's-eye view image corresponding to the target scene, and the obtained target bird's-eye view image corresponds to the target scene and also contains the target object, thereby forming a new hard example sample corresponding to the target scene. Therefore, images corresponding to different objects can be generated for different scenes to serve as new hard example samples, real-vehicle acquisition is not needed, and the efficiency of obtaining hard example samples is improved.
Optionally, on the basis of the steps shown in fig. 1, the method provided by the present disclosure may further include the following steps:
acquiring a first acquisition image corresponding to the first aerial view image;
determining a second partial image corresponding to the first partial image in the first collected image according to the coordinate mapping relation between the first aerial view image and the first collected image;
acquiring a second acquisition image corresponding to the second aerial view image;
and adding the second local image into the second collected image to obtain a target collected image which corresponds to the target scene and contains the target object.
As described above, the first bird's-eye view image is a stitched image generated based on the captured images, and therefore, the captured image corresponding to the first bird's-eye view image, that is, the first captured image can be directly acquired.
Because the first bird's-eye view image is a stitched image generated based on the first captured image, mutually corresponding pixels in the two images have a coordinate mapping relationship; based on this coordinate mapping relationship, the pixel in the first captured image corresponding to each pixel in the first bird's-eye view image can be located. Furthermore, based on the coordinate mapping relationship, the second partial image corresponding to the first partial image in the first captured image can be determined, which amounts to determining the image content corresponding to the target object in the first captured image. For example, the coordinate mapping relationship may be obtained from the intrinsic and extrinsic calibration of the camera that captures the first captured image.
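As an illustration, the sketch below transfers the target-object mask from the first bird's-eye view image to the first captured image using a precomputed per-pixel lookup table as the coordinate mapping; representing the mapping as a lookup table (rather than, for example, an analytic fisheye projection) is an assumption made only for this sketch.

```python
import numpy as np

def map_mask_to_captured(bev_mask, bev_to_cam_xy, captured_shape):
    """bev_mask: HxW bool over the first bird's-eye view image;
    bev_to_cam_xy: HxWx2 float lookup giving (x, y) in the first captured image
    for each bird's-eye-view pixel (assumed precomputed from calibration)."""
    cam_mask = np.zeros(captured_shape[:2], dtype=bool)
    ys, xs = np.nonzero(bev_mask)
    cx = np.round(bev_to_cam_xy[ys, xs, 0]).astype(int)
    cy = np.round(bev_to_cam_xy[ys, xs, 1]).astype(int)
    keep = (cy >= 0) & (cy < cam_mask.shape[0]) & (cx >= 0) & (cx < cam_mask.shape[1])
    cam_mask[cy[keep], cx[keep]] = True          # pixels of the second partial image
    return cam_mask
```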
After the second partial image is determined, the second partial image may be extracted from the first captured image, that is, the image content (i.e., pixel points) of the target object in the first captured image is extracted.
The second bird's-eye view image is a stitched image generated based on the second captured image. Thus, by adding the second partial image to the second captured image, a target captured image corresponding to the target scene and including the target object can be obtained.
In one possible embodiment, the second partial image may be added to a specified position in the second captured image. For example, the designated position may be preset according to actual requirements. As another example, the specified location may be determined manually.
In another possible embodiment, adding the second partial image to the second captured image may comprise the steps of:
determining an image area in the second acquired image, which has a position association relation with the target object, as a second association area;
the second partial image is added to the second associated region.
As described above, the target object appears only in the area having the position association relationship therewith. Based on this, a set of rules may be set in advance to indicate the area having the position association relationship with the target object. Thus, when the second partial image is added to the second captured image, the second association region having a position association relationship with the target object may be first determined in the second captured image, and then the second partial image may be added to the second association region. Therefore, the situation that the second local image corresponding to the target object is added to an improper position can be avoided, and the reality and the accuracy of the target acquisition image are improved.
The region division of the second captured image may likewise be performed manually or by an image segmentation technique.
It should be noted that there is no fixed order for obtaining the first captured image and the second captured image; they may be obtained simultaneously or sequentially, which is not limited in the present disclosure.
With this scheme, based on the first partial image extracted from the first bird's-eye view image, the image content corresponding to the target object in the first captured image can be further located and extracted, and the extracted image content can be added to the second captured image of the target scene, thereby achieving data augmentation of hard example samples of captured images in the target scene.
Fig. 2 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment. As shown in fig. 2, the apparatus 20 includes:
a first acquisition module 21 configured to acquire a target hard example sample including a first bird's eye view image;
a first determining module 22 configured to determine a first partial image corresponding to the target object in the first bird's eye image;
a first adding module 23 configured to add the first partial image to a second bird's-eye view image corresponding to a target scene to obtain a target bird's-eye view image corresponding to the target scene and containing the target object.
Optionally, the first determining module 22 includes:
a segmentation sub-module configured to input the first bird's-eye view image into a pre-trained image segmentation model, and obtain a segmented image output by the image segmentation model, wherein the segmented image is used for indicating an image area corresponding to a preset object in the first bird's-eye view image;
a first determination submodule configured to determine an image region corresponding to the target object in the divided image as a target image region;
an extraction sub-module configured to extract an image area corresponding to the target image area from the first bird's-eye image as the first partial image.
Optionally, the image segmentation model is trained by the following modules:
the second acquisition module is configured to acquire training data, wherein the training data comprises a sample aerial view image and an annotation image corresponding to the sample aerial view image, and the annotation image is used for indicating preset objects corresponding to pixel points in the sample aerial view image;
and the training module is configured to perform model training by taking the sample aerial view image as an input of a model and taking the marked image as a target output of the model so as to obtain the trained image segmentation model.
Optionally, the first adding module 23 includes:
a second determination submodule configured to determine, as a first related area, an image area having a positional relationship with the target object in the second bird's eye view image;
a first adding sub-module configured to add the first partial image into the first associated region.
Optionally, the apparatus 20 further comprises:
a third acquisition module configured to acquire a first captured image corresponding to the first bird's-eye view image, the first bird's-eye view image being a stitched image generated based on the first captured image;
a second determination module configured to determine a second partial image corresponding to the first partial image in the first captured image according to a coordinate mapping relationship between the first bird's eye-view image and the first captured image;
a fourth acquisition module configured to acquire a second captured image corresponding to the second bird's-eye view image, the second bird's-eye view image being a stitched image generated based on the second captured image;
a second adding module configured to add the second partial image to the second captured image, resulting in a target captured image corresponding to the target scene and including the target object.
Optionally, the second adding module includes:
a third determination submodule configured to determine, as a second association area, an image area in the second captured image that has a position association relationship with the target object;
a second adding sub-module configured to add the second partial image into the second associated region.
Optionally, the target object is any one of the following:
a parking slot line, a wheel stop, a speed bump, and a zebra crossing.
With regard to the apparatus in the above embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be described in detail here.
Referring to fig. 3, fig. 3 is a functional block diagram of a vehicle 600 according to an exemplary embodiment. The vehicle 600 may be configured in a fully or partially autonomous driving mode. For example, the vehicle 600 may acquire environmental information around the vehicle through the sensing system 620 and derive an automatic driving strategy based on an analysis of the surrounding environmental information to implement fully automatic driving, or present the analysis results to the user to implement partially automatic driving.
The vehicle 600 may include various subsystems such as an infotainment system 610, a perception system 620, a decision control system 630, a drive system 640, and a computing platform 650. Alternatively, vehicle 600 may include more or fewer subsystems, and each subsystem may include multiple components. In addition, each of the sub-systems and components of the vehicle 600 may be interconnected by wire or wirelessly.
In some embodiments, the infotainment system 610 may include a communication system 611, an entertainment system 612, and a navigation system 613.
The communication system 611 may comprise a wireless communication system that can communicate wirelessly with one or more devices, either directly or via a communication network. For example, the wireless communication system may use 3G cellular communication such as CDMA, EV-DO, or GSM/GPRS, 4G cellular communication such as LTE, or 5G cellular communication. The wireless communication system may communicate with a wireless local area network (WLAN) using WiFi. In some embodiments, the wireless communication system may communicate directly with a device using an infrared link, Bluetooth, or ZigBee. The wireless communication system may also use other wireless protocols, such as various vehicle communication systems; for example, it may include one or more Dedicated Short Range Communications (DSRC) devices for public and/or private data communication between vehicles and/or roadside stations.
The entertainment system 612 may include a display device, a microphone, and a sound box. Based on the entertainment system, a user can listen to broadcasts or play music in the vehicle; alternatively, a mobile phone can communicate with the vehicle and project its screen onto the display device. The display device may be a touch screen, which the user can operate by touching it.
In some cases, the user's voice signal may be captured by the microphone, and certain controls of the vehicle 600 by the user, such as adjusting the in-vehicle temperature, may be implemented according to analysis of the voice signal. In other cases, music may be played to the user through the sound box.
The navigation system 613 may include a map service provided by a map provider to provide navigation of a route for the vehicle 600, and the navigation system 613 may be used in conjunction with a global positioning system 621 and an inertial measurement unit 622 of the vehicle. The map service provided by the map supplier can be a two-dimensional map or a high-precision map.
The sensing system 620 may include several types of sensors that sense information about the environment surrounding the vehicle 600. For example, the sensing system 620 may include a global positioning system 621 (the global positioning system may be a GPS system, a beidou system or other positioning system), an Inertial Measurement Unit (IMU) 622, a laser radar 623, a millimeter wave radar 624, an ultrasonic radar 625, and a camera 626. The sensing system 620 may also include sensors of internal systems of the monitored vehicle 600 (e.g., an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensor data from one or more of these sensors may be used to detect the object and its corresponding characteristics (position, shape, orientation, velocity, etc.). Such detection and identification is a critical function of the safe operation of the vehicle 600.
Global positioning system 621 is used to estimate the geographic location of vehicle 600.
The inertial measurement unit 622 is used to sense a pose change of the vehicle 600 based on the inertial acceleration. In some embodiments, the inertial measurement unit 622 may be a combination of an accelerometer and a gyroscope.
Lidar 623 utilizes laser light to sense objects in the environment in which vehicle 600 is located. In some embodiments, lidar 623 may include one or more laser sources, laser scanners, and one or more detectors, among other system components.
The millimeter-wave radar 624 utilizes radio signals to sense objects within the surrounding environment of the vehicle 600. In some embodiments, in addition to sensing objects, the millimeter-wave radar 624 may also be used to sense the speed and/or heading of objects.
The ultrasonic radar 625 may sense objects around the vehicle 600 using ultrasonic signals.
The camera 626 is used to capture image information of the surroundings of the vehicle 600. The image capturing device 626 may include a monocular camera, a binocular camera, a structured light camera, a panoramic camera, and the like, and the image information acquired by the image capturing device 626 may include still images or video stream information.
Decision control system 630 includes a computing system 631 that makes analytical decisions based on information obtained by sensing system 620, and decision control system 630 further includes a vehicle controller 632 that controls the powertrain of vehicle 600, and a steering system 633, throttle 634, and brake system 635 for controlling vehicle 600.
The computing system 631 may operate to process and analyze the various information acquired by the perception system 620 to identify targets, objects, and/or features in the environment surrounding the vehicle 600. The targets may comprise pedestrians or animals, and the objects and/or features may comprise traffic signals, road boundaries, and obstacles. The computing system 631 may use object recognition algorithms, Structure from Motion (SfM) algorithms, video tracking, and the like. In some embodiments, the computing system 631 may be used to map an environment, track objects, estimate the speed of objects, and so forth. The computing system 631 may analyze the various information obtained and derive a control strategy for the vehicle.
The vehicle controller 632 may be used to perform coordinated control on the power battery and the engine 641 of the vehicle to improve the power performance of the vehicle 600.
The steering system 633 is operable to adjust the heading of the vehicle 600. For example, in one embodiment, it may be a steering wheel system.
The throttle 634 is used to control the operating speed of the engine 641 and, in turn, the speed of the vehicle 600.
The brake system 635 is used to control the deceleration of the vehicle 600. The braking system 635 may use friction to slow the wheel 644. In some embodiments, the braking system 635 may convert the kinetic energy of the wheels 644 into electrical current. The braking system 635 may also take other forms to slow the rotational speed of the wheels 644 to control the speed of the vehicle 600.
The drive system 640 may include components that provide powered motion to the vehicle 600. In one embodiment, the drive system 640 may include an engine 641, an energy source 642, a transmission 643, and wheels 644. The engine 641 may be an internal combustion engine, an electric motor, an air compression engine, or other types of engine combinations, such as a hybrid engine consisting of a gasoline engine and an electric motor, a hybrid engine consisting of an internal combustion engine and an air compression engine. The engine 641 converts the energy source 642 into mechanical energy.
Examples of energy sources 642 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electrical power. The energy source 642 may also provide energy to other systems of the vehicle 600.
The transmission 643 may transmit mechanical power from the engine 641 to the wheels 644. The transmission 643 may include a gearbox, a differential, and a drive shaft. In one embodiment, the transmission 643 may also include other components, such as clutches. Wherein the drive shaft may include one or more axles that may be coupled to one or more wheels 644.
Some or all of the functions of the vehicle 600 are controlled by the computing platform 650. Computing platform 650 can include at least one processor 651, which processor 651 can execute instructions 653 stored in a non-transitory computer-readable medium, such as memory 652. In some embodiments, computing platform 650 may also be a plurality of computing devices that control individual components or subsystems of vehicle 600 in a distributed manner.
The processor 651 can be any conventional processor, such as a commercially available CPU. Alternatively, processor 651 may also comprise a processor such as a Graphics Processing Unit (GPU), field Programmable Gate Array (FPGA), system On Chip (SOC), application Specific Integrated Circuit (ASIC), or a combination thereof. Although fig. 3 functionally illustrates processors, memories, and other elements of the computer in the same block, one of ordinary skill in the art will appreciate that the processors, computers, or memories may actually comprise multiple processors, computers, or memories that may or may not be stored within the same physical housing. For example, the memory may be a hard drive or other storage medium located in a different enclosure than the computer. Thus, references to a processor or computer are to be understood as including references to a collection of processors or computers or memories which may or may not operate in parallel. Rather than using a single processor to perform the steps described herein, some of the components, such as the steering and deceleration components, may each have their own processor that performs only computations related to the component-specific functions.
In the disclosed embodiment, the processor 651 may perform the image processing method described above.
In various aspects described herein, the processor 651 can be located remotely from the vehicle and in wireless communication with the vehicle. In other aspects, some of the processes described herein are executed on a processor disposed within the vehicle and others are executed by a remote processor, including taking the steps necessary to perform a single maneuver.
In some embodiments, the memory 652 may contain instructions 653 (e.g., program logic), which instructions 653 may be executed by the processor 651 to perform various functions of the vehicle 600. The memory 652 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the infotainment system 610, the perception system 620, the decision control system 630, the drive system 640.
In addition to instructions 653, memory 652 may store data such as road maps, route information, the location, direction, speed of the vehicle, and other such vehicle data, as well as other information. Such information may be used by the vehicle 600 and the computing platform 650 during operation of the vehicle 600 in autonomous, semi-autonomous, and/or manual modes.
The computing platform 650 may control functions of the vehicle 600 based on inputs received from various subsystems (e.g., the drive system 640, the perception system 620, and the decision control system 630). For example, computing platform 650 may utilize input from decision control system 630 in order to control steering system 633 to avoid obstacles detected by sensing system 620. In some embodiments, the computing platform 650 is operable to provide control over many aspects of the vehicle 600 and its subsystems.
Optionally, one or more of these components described above may be mounted separately from or associated with the vehicle 600. For example, the memory 652 may exist partially or completely separate from the vehicle 600. The above components may be communicatively coupled together in a wired and/or wireless manner.
Optionally, the above components are only an example, in an actual application, components in the above modules may be added or deleted according to an actual need, and fig. 3 should not be construed as limiting the embodiment of the present disclosure.
An autonomous automobile traveling on a roadway, such as vehicle 600 above, may identify objects within its surrounding environment to determine an adjustment to the current speed. The object may be another vehicle, a traffic control device, or another type of object. In some examples, each identified object may be considered independently, and based on the respective characteristics of the object, such as its current speed, acceleration, separation from the vehicle, etc., may be used to determine the speed at which the autonomous vehicle is to be adjusted.
Optionally, the vehicle 600 or a sensing and computing device associated with the vehicle 600 (e.g., computing system 631, computing platform 650) may predict the behavior of the identified object based on characteristics of the identified object and the state of the surrounding environment (e.g., traffic, rain, ice on the road, etc.). Optionally, each of the identified objects is dependent on the behavior of each other, so all of the identified objects can also be considered together to predict the behavior of a single identified object. The vehicle 600 is able to adjust its speed based on the predicted behavior of the identified object. In other words, the autonomous vehicle is able to determine what steady state the vehicle will need to adjust to (e.g., accelerate, decelerate, or stop) based on the predicted behavior of the object. In this process, other factors may also be considered to determine the speed of the vehicle 600, such as the lateral position of the vehicle 600 in the road being traveled, the curvature of the road, the proximity of static and dynamic objects, and so forth.
In addition to providing instructions to adjust the speed of the autonomous vehicle, the computing device may also provide instructions to modify the steering angle of the vehicle 600 to cause the autonomous vehicle to follow a given trajectory and/or maintain a safe lateral and longitudinal distance from objects in the vicinity of the autonomous vehicle (e.g., vehicles in adjacent lanes on the road).
The vehicle 600 may be any type of vehicle, such as a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a recreational vehicle, a train, etc., and the embodiment of the present disclosure is not particularly limited.
The present disclosure also provides a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the image processing method provided by the present disclosure.
The apparatus described above may be a part of a stand-alone electronic device. For example, in an embodiment, the apparatus may be an integrated circuit (IC) or a chip, where the IC may be a single IC or a set of multiple ICs; the chip may include, but is not limited to, the following categories: a GPU (Graphics Processing Unit), a CPU (Central Processing Unit), an FPGA (Field Programmable Gate Array), a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an SOC (System on Chip), and the like. The integrated circuit or chip can be used to execute executable instructions (or code) to implement the image processing method described above. The executable instructions may be stored in the integrated circuit or chip, or may be retrieved from another device or apparatus; for example, the integrated circuit or chip may include a processor, a memory, and an interface for communicating with other devices. The executable instructions may be stored in the memory, and when executed by the processor, implement the image processing method described above; alternatively, the integrated circuit or chip may receive executable instructions through the interface and transmit them to the processor for execution, so as to implement the image processing method described above.
In another exemplary embodiment, a computer program product is also provided, which contains a computer program executable by a programmable apparatus, the computer program having code portions for performing the image processing method described above when executed by the programmable apparatus.
Fig. 4 is a block diagram illustrating an image processing apparatus 1900 according to an exemplary embodiment. For example, the apparatus 1900 may be provided as a server. Referring to fig. 4, the device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by the processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the image processing method described above.
The device 1900 may also include a power component 1926 configured to perform power management of the device 1900, a wired or wireless network interface 1950 configured to connect the device 1900 to a network, and an input/output interface 1958. The device 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (11)

1. An image processing method, characterized in that the method comprises:
acquiring a target difficult example sample, wherein the target difficult example sample comprises a first bird's-eye view image;
determining a first partial image corresponding to a target object in the first bird's-eye view image;
and adding the first partial image to a second bird's-eye view image corresponding to a target scene to obtain a target bird's-eye view image corresponding to the target scene and containing the target object.
2. The method of claim 1, wherein the determining a first partial image corresponding to a target object in the first bird's-eye view image comprises:
inputting the first bird's-eye view image into a pre-trained image segmentation model to obtain a segmentation image output by the image segmentation model, wherein the segmentation image is used for indicating an image area corresponding to a preset object in the first bird's-eye view image;
determining an image area corresponding to the target object in the segmentation image as a target image area;
and extracting, from the first bird's-eye view image, an image area corresponding to the target image area as the first partial image.
3. The method of claim 2, wherein the image segmentation model is trained by:
acquiring training data, wherein the training data comprises a sample bird's-eye view image and an annotation image corresponding to the sample bird's-eye view image, and the annotation image is used for indicating preset objects corresponding to pixel points in the sample bird's-eye view image;
and performing model training by taking the sample bird's-eye view image as the input of a model and taking the annotation image as the target output of the model, to obtain the trained image segmentation model.
4. The method of claim 1, wherein the adding the first partial image to a second bird's-eye view image corresponding to a target scene comprises:
determining, as a first associated region, an image area in the second bird's-eye view image that has a positional association with the target object;
and adding the first partial image to the first associated region.
5. The method of claim 1, further comprising:
acquiring a first captured image corresponding to the first bird's-eye view image, wherein the first bird's-eye view image is a stitched image generated based on the first captured image;
determining a second partial image corresponding to the first partial image in the first captured image according to a coordinate mapping relationship between the first bird's-eye view image and the first captured image;
acquiring a second captured image corresponding to the second bird's-eye view image, wherein the second bird's-eye view image is a stitched image generated based on the second captured image;
and adding the second partial image to the second captured image to obtain a target captured image corresponding to the target scene and containing the target object.
6. The method of claim 5, wherein the adding the second partial image to the second captured image comprises:
determining, as a second associated region, an image area in the second captured image that has a positional association with the target object;
and adding the second partial image to the second associated region.
7. The method according to any one of claims 1-6, wherein the target object is any one of:
a parking space line, a wheel stop, a speed bump, and a zebra crossing.
8. An image processing apparatus, characterized in that the apparatus comprises:
a first acquisition module configured to acquire a target difficult example sample, the target difficult example sample comprising a first bird's-eye view image;
a first determination module configured to determine a first partial image corresponding to a target object in the first bird's-eye view image;
and a first adding module configured to add the first partial image to a second bird's-eye view image corresponding to a target scene to obtain a target bird's-eye view image corresponding to the target scene and containing the target object.
9. A vehicle, characterized by comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to execute the instructions in the memory to implement the steps of the method of any one of claims 1 to 7.
10. A computer-readable storage medium, on which computer program instructions are stored, which program instructions, when executed by a processor, carry out the steps of the method according to any one of claims 1 to 7.
11. A chip comprising a processor and an interface; the processor is configured to read instructions to perform the method of any one of claims 1 to 7.
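
The following sketches are illustrative only and are not part of the claims or the description. They assume the bird's-eye view images are NumPy arrays and that the pixels of the target object are available as a boolean mask; all function and variable names are hypothetical. This first sketch follows the overall flow of claim 1: take the first partial image out of a hard-example bird's-eye view image and add it to a second bird's-eye view image of the target scene.

import numpy as np

def compose_target_bev(first_bev, target_mask, second_bev):
    # first_bev   : H x W x 3 bird's-eye view image containing the target object
    # target_mask : H x W boolean mask of the target object's pixels
    # second_bev  : H x W x 3 bird's-eye view image of the target scene
    # Sketch assumption: both views share the same BEV grid and resolution.
    assert first_bev.shape == second_bev.shape
    target_bev = second_bev.copy()
    target_bev[target_mask] = first_bev[target_mask]  # paste the first partial image
    return target_bev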
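
A sketch of the extraction in claim 2, assuming a semantic segmentation model has already produced a per-pixel class map (the segmentation image) for the first bird's-eye view image; the class id of the target object is a placeholder.

import numpy as np

def extract_first_partial(first_bev, class_map, target_class_id):
    # class_map : H x W integer array with one preset-object class id per pixel,
    #             i.e. the segmentation image output by the pre-trained model.
    target_mask = (class_map == target_class_id)         # the target image area
    first_partial = np.zeros_like(first_bev)
    first_partial[target_mask] = first_bev[target_mask]  # keep only the target pixels
    return first_partial, target_mask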
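
A minimal supervised training loop in the spirit of claim 3, assuming a PyTorch-style segmentation model whose output is an N x C x H x W score map and a data loader yielding (sample bird's-eye view image, annotation image) pairs with per-pixel class indices; the model and loader are placeholders.

import torch
import torch.nn as nn

def train_segmentation_model(model, loader, epochs=10, lr=1e-3, device="cpu"):
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()          # per-pixel classification loss
    model.train()
    for _ in range(epochs):
        for bev, annotation in loader:         # annotation: N x H x W class indices
            bev, annotation = bev.to(device), annotation.to(device)
            logits = model(bev)                # N x C x H x W class scores
            loss = criterion(logits, annotation)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model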
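
A sketch of the placement step in claim 4, assuming the first associated region has already been chosen (for example, the interior of a detected parking slot) and is expressed as a pixel offset from the object's original position; np.roll wraps around at the image borders, which a real implementation would avoid by clipping.

import numpy as np

def paste_into_associated_region(second_bev, first_partial, target_mask, offset):
    # offset : (dy, dx) shift from the object's position in the first bird's-eye view
    #          image to the first associated region in the second bird's-eye view image.
    dy, dx = offset
    shifted_mask = np.roll(target_mask, (dy, dx), axis=(0, 1))
    shifted_patch = np.roll(first_partial, (dy, dx), axis=(0, 1))
    target_bev = second_bev.copy()
    target_bev[shifted_mask] = shifted_patch[shifted_mask]
    return target_bev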
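
A sketch of the coordinate mapping in claims 5 and 6, assuming each captured camera image was stitched into the bird's-eye view through a planar 3x3 homography (real surround-view pipelines typically also undistort fisheye images first); the homography, the image sizes, and all names are placeholders.

import cv2
import numpy as np

def map_partial_to_captured(first_partial, target_mask, homography_cam_to_bev, captured_wh):
    # homography_cam_to_bev : 3x3 matrix used when warping the captured image into the BEV.
    # captured_wh           : (width, height) of the captured camera image.
    bev_to_cam = np.linalg.inv(homography_cam_to_bev)
    second_partial = cv2.warpPerspective(first_partial, bev_to_cam, captured_wh)
    second_mask = cv2.warpPerspective(target_mask.astype(np.uint8), bev_to_cam, captured_wh) > 0
    return second_partial, second_mask

def add_to_captured(second_captured, second_partial, second_mask):
    # Paste the mapped second partial image into the second captured image; choosing a
    # second associated region is analogous to the bird's-eye view case and omitted here.
    target_captured = second_captured.copy()
    target_captured[second_mask] = second_partial[second_mask]
    return target_captured
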
CN202210837775.9A 2022-07-15 2022-07-15 Image processing method, device, vehicle, medium and chip Active CN115205311B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210837775.9A CN115205311B (en) 2022-07-15 2022-07-15 Image processing method, device, vehicle, medium and chip

Publications (2)

Publication Number Publication Date
CN115205311A true CN115205311A (en) 2022-10-18
CN115205311B CN115205311B (en) 2024-04-05

Family

ID=83582213

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210837775.9A Active CN115205311B (en) 2022-07-15 2022-07-15 Image processing method, device, vehicle, medium and chip

Country Status (1)

Country Link
CN (1) CN115205311B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115345321A (en) * 2022-10-19 2022-11-15 小米汽车科技有限公司 Data augmentation method, data augmentation device, electronic device, and storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110115902A1 (en) * 2009-11-19 2011-05-19 Qualcomm Incorporated Orientation determination of a mobile station using side and top view images
JP2012257107A (en) * 2011-06-09 2012-12-27 Aisin Seiki Co Ltd Image generating device
CN103609100A (en) * 2011-06-09 2014-02-26 爱信精机株式会社 Image generation device
US20160119607A1 (en) * 2013-04-04 2016-04-28 Amatel Inc. Image processing system and image processing program
CN109255767A (en) * 2018-09-26 2019-01-22 北京字节跳动网络技术有限公司 Image processing method and device
CN110378201A (en) * 2019-06-05 2019-10-25 浙江零跑科技有限公司 A kind of hinged angle measuring method of multiple row vehicle based on side ring view fisheye camera input
CN111968133A (en) * 2020-07-31 2020-11-20 上海交通大学 Three-dimensional point cloud data example segmentation method and system in automatic driving scene
CN112464939A (en) * 2021-01-28 2021-03-09 知行汽车科技(苏州)有限公司 Data augmentation method, device and storage medium in target detection
CN113537085A (en) * 2021-07-20 2021-10-22 南京工程学院 Ship target detection method based on two-time transfer learning and data augmentation
CN113743434A (en) * 2020-05-29 2021-12-03 华为技术有限公司 Training method of target detection network, image augmentation method and device
CN114627438A (en) * 2020-11-26 2022-06-14 千寻位置网络有限公司 Target detection model generation method, target detection method, device and medium

Also Published As

Publication number Publication date
CN115205311B (en) 2024-04-05

Similar Documents

Publication Publication Date Title
CN114842075B (en) Data labeling method and device, storage medium and vehicle
CN112810603B (en) Positioning method and related product
CN115035494A (en) Image processing method, image processing device, vehicle, storage medium and chip
CN115123257A (en) Method and device for identifying position of road deceleration strip, vehicle, storage medium and chip
CN115205365A (en) Vehicle distance detection method and device, vehicle, readable storage medium and chip
CN115220449A (en) Path planning method and device, storage medium, chip and vehicle
CN115205311B (en) Image processing method, device, vehicle, medium and chip
CN115100630B (en) Obstacle detection method, obstacle detection device, vehicle, medium and chip
CN114842455B (en) Obstacle detection method, device, equipment, medium, chip and vehicle
CN115203457B (en) Image retrieval method, device, vehicle, storage medium and chip
CN115330923B (en) Point cloud data rendering method and device, vehicle, readable storage medium and chip
CN114782638B (en) Method and device for generating lane line, vehicle, storage medium and chip
CN115164910B (en) Travel route generation method, travel route generation device, vehicle, storage medium, and chip
CN114842440B (en) Automatic driving environment sensing method and device, vehicle and readable storage medium
CN115205848A (en) Target detection method, target detection device, vehicle, storage medium and chip
CN115042814A (en) Traffic light state identification method and device, vehicle and storage medium
CN115205179A (en) Image fusion method and device, vehicle and storage medium
CN115334111A (en) System architecture, transmission method, vehicle, medium and chip for lane recognition
CN115334109A (en) System architecture, transmission method, vehicle, medium and chip for traffic signal identification
CN114981138A (en) Method and device for detecting vehicle travelable region
CN115082772B (en) Location identification method, location identification device, vehicle, storage medium and chip
CN114822216B (en) Method and device for generating parking space map, vehicle, storage medium and chip
CN115082886B (en) Target detection method, device, storage medium, chip and vehicle
CN115063639B (en) Model generation method, image semantic segmentation device, vehicle and medium
CN115205461B (en) Scene reconstruction method and device, readable storage medium and vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant