CN115131754A - Method, system, equipment and storage medium for generalization of automatic driving scene - Google Patents

Method, system, equipment and storage medium for generalization of automatic driving scene

Info

Publication number
CN115131754A
Authority
CN
China
Prior art keywords
data
sensor
target object
inconsistent
generalization
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210603505.1A
Other languages
Chinese (zh)
Inventor
李森林
李诒雯
郝江波
邹元杰
高晟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Kotei Informatics Co Ltd
Original Assignee
Wuhan Kotei Informatics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Kotei Informatics Co Ltd filed Critical Wuhan Kotei Informatics Co Ltd
Priority to CN202210603505.1A
Publication of CN115131754A
Legal status: Pending

Classifications

    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06N20/00 Machine learning
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tessellation
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/774 Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a method, a system, a device, and a storage medium for generalizing automatic driving scenes, wherein the method comprises the following steps: acquiring multi-channel sensor source data, and performing target object perception on each channel of sensor source data to obtain a corresponding perception result, wherein the multi-channel sensors comprise a visual sensor and other sensors; comparing the perception result of the visual sensor with the perception results of the other sensors to obtain data in which the visual sensor data is inconsistent with the other sensor data, and labeling the inconsistent data; and performing generalization processing on the inconsistent data through data enhancement techniques, and generating virtual scenes from the generalized data for use in new training or training sets. The method extracts and generalizes special scenes in perception in a targeted manner, significantly improving the value of scene generalization for algorithm training.

Description

Method, system, equipment and storage medium for generalization of automatic driving scene
Technical Field
The present invention relates to the field of vehicle automatic driving technologies, and in particular to a method, a system, a device, and a storage medium for generalizing automatic driving scenarios.
Background
With the continuous development of automatic driving technology, the problems found on actual roads, i.e., special scenes, have become increasingly diverse, so creating virtual scenes to test automatic driving algorithms has attracted growing attention at the technical level. Once these virtual scenes are created, they need generalization capability, for example, continuously modifying the relevant attributes of the target object, so that the automatic driving algorithm can be tested thoroughly.
Most existing automatic driving companies lack the capability to generalize virtual scenes; even where such capability is supported, only independent or orthogonally combined generalization is performed on a few test dimensions, and most of the generalized values are mechanically sampled discrete data. Meanwhile, scene generalization produces a large number of results and a heavy processing load, yet lacks pertinence and effectiveness, and the special scenes that deserve attention, such as problem scenes and edge scenes, are insufficiently covered.
Disclosure of Invention
It is an object of the present invention to provide a method, system, device and storage medium for generalizing an automatic driving scenario to solve the problems set forth in the background art.
According to a first aspect of the present invention, there is provided a method for generalizing an automatic driving scenario, the method comprising the following steps:
acquiring multi-channel sensor source data, and performing target object perception on each channel of sensor source data to obtain a corresponding perception result, wherein the multi-channel sensors comprise a visual sensor and other sensors;
comparing the perception result of the visual sensor with the perception results of the other sensors to obtain data in which the visual sensor data is inconsistent with the other sensor data, and labeling the inconsistent data;
and performing generalization processing on the inconsistent data through data enhancement techniques, and generating virtual scenes from the generalized data for use in new training or training sets.
Optionally, the other sensors include a radar sensor and a lidar sensor, and the multi-channel sensor source data includes a visual picture collected by the visual sensor, a radar signal collected by the radar sensor, and lidar point cloud data collected by the lidar sensor.
Optionally, performing target object perception on each channel of sensor source data to obtain a corresponding perception result includes:
performing target object perception based on the visual picture, the radar signal, and the lidar point cloud data to obtain corresponding target object perception results, wherein each target object perception result comprises a target object type, size, direction, distance, and position.
Optionally, obtaining data in which the visual sensor data is inconsistent with the other sensor data by comparing the visual sensor perception result with the other sensor perception results includes:
taking the target object perception results of the radar sensor and the lidar sensor as references; if comparison of the visual sensor's target object perception result against these references shows that the target object is missed, that the target object types are inconsistent, or that the difference in target object size, direction, distance, or position exceeds a set difference value, acquiring the inconsistent visual pictures and the regions in those pictures where the inconsistent target objects are located, and labeling the regions.
Optionally, the data enhancement technique includes a first data enhancement and a second data enhancement, wherein the first data enhancement includes at least one of flipping, rotating, cropping, deforming, and scaling; the second data enhancement includes at least one of noise, blurring, color transformation, erasing, and padding.
Optionally, performing generalization processing on the inconsistent data through a data enhancement technique further includes:
adding random noise to the inconsistent data, and generalizing the image data in the inconsistent data by adding at least one of texture, illumination, light source, color transformation, erasing, and padding to obtain generalized image data; and generalizing the target object in the inconsistent data by at least one of rotating, occluding, flipping, cropping, deforming, and scaling to obtain a generalized target object form, the generalized image data and target object form being used in new training or training sets.
According to a second aspect of the present invention, there is provided a system for generalizing an automatic driving scenario, comprising a data acquisition module, a data comparison module, and a data generalization module; wherein:
the data acquisition module is used for acquiring multi-channel sensor source data and performing target object perception on each channel of sensor source data to obtain a corresponding perception result, wherein the multi-channel sensors comprise a visual sensor and other sensors;
the data comparison module is used for comparing the perception result of the visual sensor with the perception results of the other sensors to obtain data in which the visual sensor data is inconsistent with the other sensor data, and for labeling the inconsistent data;
and the data generalization module is used for performing generalization processing on the inconsistent data through data enhancement techniques, and for generating virtual scenes from the generalized data for use in new training or training sets.
According to a third aspect of the present invention, there is provided an electronic device comprising a memory and a processor, the processor implementing the steps of the method for generalizing an automatic driving scenario when executing a computer program stored in the memory.
According to a fourth aspect of the present invention, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method for generalizing an automatic driving scenario.
Compared with the prior art, the technical scheme of the present application has the following beneficial technical effects:
the method takes multimodal data sources and deep learning as the core means, improving the scene generalization effect; it extracts and generalizes special scenes in perception in a targeted manner, significantly improving the value that scene generalization brings to algorithm training.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
FIG. 1 is a flowchart of the method for generalizing an automatic driving scenario provided by the present invention;
FIG. 2 is a schematic flowchart of automatic driving scene generalization in Example 2 of the present invention.
Detailed Description
The following detailed description of embodiments of the present invention is provided in connection with the accompanying drawings and examples. The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
Example 1
Fig. 1 is a flowchart of the method for generalizing an automatic driving scenario. As shown in Fig. 1, the method includes the following steps:
Step 1: acquiring multi-channel sensor source data, and performing target object perception on each channel of sensor source data to obtain a corresponding perception result, wherein the multi-channel sensors comprise a visual sensor and other sensors;
Step 2: comparing the perception result of the visual sensor with the perception results of the other sensors to obtain data in which the visual sensor data is inconsistent with the other sensor data, and labeling the inconsistent data;
Step 3: performing generalization processing on the inconsistent data through data enhancement techniques, and generating virtual scenes from the generalized data for use in new training or training sets.
It can be understood that the invention uses multi-channel sensor source data to perceive the target object, obtains data in which the visual sensor data is inconsistent with the other sensor data by comparing their perception results, and identifies the inconsistencies through distribution-difference recognition. Moreover, the inconsistent data are generalized with deep learning as the core means, and special scenes in perception are extracted and generalized in a targeted manner, further improving the scene generalization effect.
Specifically, in Step 1, the other sensors include a radar sensor and a lidar sensor, and the multi-channel sensor source data includes a visual picture collected by the visual sensor, a radar signal collected by the radar sensor, and lidar point cloud data collected by the lidar sensor.
It should be noted that the multi-channel sensor source data may come from images of a specific target captured by millimeter-wave radar, lidar, detectors, cameras, and other imaging devices; the source of the image data is not further limited here. The image data can be understood as a set of pictures, each targeting at least one specific target under different angles, pixel colors, and other conditions; the amount of picture data is generally large and can reach the million level.
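To make the data organization concrete, the following is a minimal Python sketch of how one time-aligned frame of multi-channel sensor source data and its per-channel target object perception results could be represented. All class and field names here (SensorFrame, PerceptionResult, and so on) are illustrative assumptions, not structures defined by the invention.

```python
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class PerceptionResult:
    """Per-object perception output: type, size, direction, distance, position."""
    obj_type: str          # target object type, e.g. "pedestrian"
    size: np.ndarray       # (length, width, height) in metres
    direction: float       # heading angle in radians
    distance: float        # range from the ego vehicle in metres
    position: np.ndarray   # (x, y, z) coordinates in the ego frame

@dataclass
class SensorFrame:
    """One time-aligned frame of multi-channel sensor source data."""
    timestamp: float                 # common time base after alignment
    visual_picture: np.ndarray       # H x W x 3 camera image
    radar_signal: np.ndarray         # raw radar returns
    lidar_points: np.ndarray         # N x 3 lidar point cloud
    vision_objects: List[PerceptionResult] = field(default_factory=list)
    radar_objects: List[PerceptionResult] = field(default_factory=list)
    lidar_objects: List[PerceptionResult] = field(default_factory=list)
```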
Specifically, in Step 2, target object perception is performed based on the visual picture, the radar signal, and the lidar point cloud data respectively to obtain the corresponding target object perception results, where each result comprises the target object type, size, direction, distance, and position. Taking the target object perception results of the radar sensor and the lidar sensor as references, if comparison shows that the visual sensor misses the target object, that the target object types are inconsistent, or that the difference in target object size, direction, distance, or position exceeds a set difference value, the inconsistent visual pictures and the regions in those pictures where the inconsistent target objects are located are acquired and labeled.
In this embodiment, each channel of source data is time-aligned, and the target object detection results of all channels are compared and fused, so as to perceive and output the target object type, size, direction, distance, and position, and to determine where the visual sensor's detection results differ from those of the other sensors, including cases where the visual sensor misses the target object, judges the target object type inconsistently, or reports size, direction, distance, or position values with large deviations. The pictures found to contain missed or inconsistent detections, i.e., the special scenes, are labeled, and the corresponding regions are framed.
It should be noted that in this embodiment the target object is an unknown road obstacle, and the radar and lidar perception results are taken as the reference, i.e., assumed accurate by default. The target object perception result of the visual sensor is compared against this reference: where the target object is missed, the target object types are inconsistent, or the difference in target object size, direction, distance, or position exceeds the set difference value, the inconsistent visual picture and the region in it where the inconsistent target object is located are obtained as the distribution difference. Because the data structures of the different channels of sensor source data differ, the way the distribution difference of each channel relative to the target object is compared also differs. The inconsistent visual pictures, i.e., the special scenes, and the regions where the inconsistent target objects are located can be identified by a distribution-difference recognition method and then labeled manually to facilitate the subsequent generalization of the inconsistent data.
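A minimal sketch of this comparison logic follows, assuming the hypothetical PerceptionResult structure above and taking the fused radar/lidar results as the accurate reference; the tolerance values are illustrative stand-ins for the set difference values, not numbers from the invention.

```python
import numpy as np

def find_inconsistent(vision_objs, reference_objs,
                      size_tol=0.5, dir_tol=0.2, dist_tol=1.0, pos_tol=1.0):
    """Flag reference objects that vision missed or reported very differently."""
    flagged = []
    for ref in reference_objs:
        # nearest vision detection by position (None if vision saw nothing)
        match = min(vision_objs,
                    key=lambda v: np.linalg.norm(v.position - ref.position),
                    default=None)
        if match is None:
            flagged.append((ref, "missed detection"))
        elif match.obj_type != ref.obj_type:
            flagged.append((ref, "type mismatch"))
        elif (np.linalg.norm(match.size - ref.size) > size_tol
              or abs(match.direction - ref.direction) > dir_tol
              or abs(match.distance - ref.distance) > dist_tol
              or np.linalg.norm(match.position - ref.position) > pos_tol):
            flagged.append((ref, "deviation above set threshold"))
    return flagged  # each entry marks a special scene to label and frame
```

Frames with a non-empty flagged list would then be labeled as special scenes and their regions framed for the generalization step.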
Specifically, in Step 3, random noise is added to the inconsistent data, and the image data in the inconsistent data is generalized by adding at least one of texture, illumination, light source, color transformation, erasing, and padding, to obtain generalized image data; the target object in the inconsistent data is generalized by at least one of rotating, occluding, flipping, cropping, deforming, and scaling, to obtain a generalized target object form, and the generalized image data and target object form are used in new training or training sets.
It should be noted that the data enhancement techniques include a first data enhancement and a second data enhancement, wherein the first data enhancement includes at least one of flipping, rotating, cropping, deforming, and scaling, and the second data enhancement includes at least one of noise, blurring, color transformation, erasing, and padding. In actual selection, one may choose any one of flipping, rotating, cropping, deforming, or scaling; any one of noise, blurring, color transformation, erasing, or padding; at least two of flipping, rotating, cropping, deforming, and scaling; at least two of noise, blurring, color transformation, erasing, and padding; or at least two operations drawn from the combined set of all ten.
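As an illustration of combining the two kinds of enhancement, here is a small NumPy-only sketch that applies geometric operations from the first data enhancement (flip, rotate, crop) followed by photometric operations from the second (noise, erasing); the parameter values and helper names are assumptions for the example, not prescribed by the invention.

```python
import numpy as np

def first_enhancement(img, rng):
    """First data enhancement: flip, rotate, and crop the labeled region."""
    if rng.random() < 0.5:
        img = np.fliplr(img)                         # horizontal flip
    img = np.rot90(img, k=int(rng.integers(0, 4)))   # rotate by a multiple of 90 degrees
    h, w = img.shape[:2]
    dy = int(rng.integers(0, h // 10 + 1))
    dx = int(rng.integers(0, w // 10 + 1))
    return img[dy:h - dy, dx:w - dx]                 # random central crop

def second_enhancement(img, rng, sigma=8.0):
    """Second data enhancement: additive Gaussian noise plus random erasing."""
    out = img.astype(np.float32) + rng.normal(0.0, sigma, img.shape)
    y = int(rng.integers(0, max(1, img.shape[0] - 20)))
    x = int(rng.integers(0, max(1, img.shape[1] - 20)))
    out[y:y + 20, x:x + 20] = 0.0                    # erase (fill) a small patch
    return np.clip(out, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
region = rng.integers(0, 256, (128, 128, 3), dtype=np.uint8)  # stand-in labeled region
generalized = second_enhancement(first_enhancement(region, rng), rng)
```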
In the depth-image-based data enhancement, a minimum filter is used to filter the depth image after enhancement, which effectively eliminates the data loss caused by some points being occluded after the point cloud is transformed, makes the depth image generated after enhancement better, and improves the scene generalization effect.
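A sketch of this minimum filtering step is given below, assuming invalid depth pixels (holes left where points were occluded after the transformation) are encoded as zeros; this hole-filling convention is an assumption for illustration.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def clean_depth(depth, window=3, invalid=0.0):
    """Fill occlusion holes in an enhanced depth image with a minimum filter."""
    d = np.where(depth == invalid, np.inf, depth)    # push holes out of the minimum
    filtered = minimum_filter(d, size=window)        # local minimum over the window
    return np.where(np.isinf(filtered), invalid, filtered)  # keep unfilled holes marked
```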
Example 2
A system for generalizing an automatic driving scenario comprises a data acquisition module, a data comparison module, and a data generalization module; wherein:
the data acquisition module is used for acquiring multi-channel sensor source data and performing target object perception on each channel of sensor source data to obtain a corresponding perception result, wherein the multi-channel sensors comprise a visual sensor and other sensors;
the data comparison module is used for comparing the perception result of the visual sensor with the perception results of the other sensors to obtain data in which the visual sensor data is inconsistent with the other sensor data, and for labeling the inconsistent data;
and the data generalization module is used for performing generalization processing on the inconsistent data through data enhancement techniques, and for generating virtual scenes from the generalized data for use in new training or training sets.
It can be understood that the system for generalizing an automatic driving scenario provided by the present invention corresponds to the method for generalizing an automatic driving scenario provided by the foregoing embodiments, and the relevant technical features of the system for generalizing an automatic driving scenario may refer to the relevant technical features of the method for generalizing an automatic driving scenario, and are not described herein again.
In this embodiment, referring to Fig. 2, the generalization of the automatic driving scene specifically comprises: inputting multi-channel acquisition data, which includes the multi-channel sensor source data and is used for target object detection. In this embodiment, multiple channels of sensor source data are input, including a visual picture, a radar signal, and lidar point cloud data. Target object detection is performed on each source-data channel separately, the channels are time-aligned, and the detection results of all channels are compared and fused. The type, size, direction, distance, position, and other information of the target object are perceived and output, and the places where the visual detection results differ from those of the other sensors are determined, including cases where vision misses the target object, judges the target object type inconsistently, or reports size, direction, distance, or position values with large deviations. The pictures found to contain missed or inconsistent detections, i.e., the special scenes, are labeled, and the corresponding regions are framed. Data enhancement techniques from deep learning are then applied to the labeled regions: the target object is rotated and occluded to different degrees to generalize the scene, and the generalization is concretely expressed by adding random noise and by generalizing the image data through a series of texture, illumination, and light-source transformations. Finally, virtual scenes are generated from the generalized data for use in new training or training sets.
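Putting the three modules together, a minimal sketch of the overall flow could look as follows; the detector callables, class name, and region handling are illustrative assumptions that reuse the hypothetical helpers sketched above, not the system's actual implementation.

```python
class SceneGeneralizationPipeline:
    """Illustrative composition of the acquisition, comparison, and generalization modules."""

    def __init__(self, detectors, compare_fn, generalize_fn):
        self.detectors = detectors          # {"vision": fn, "radar": fn, "lidar": fn}
        self.compare_fn = compare_fn        # e.g. find_inconsistent sketched earlier
        self.generalize_fn = generalize_fn  # e.g. the two-stage enhancement sketch

    def run(self, frames):
        """frames: time-aligned SensorFrame objects; returns generalized scene data."""
        virtual_scenes = []
        for frame in frames:
            results = {name: det(frame) for name, det in self.detectors.items()}
            reference = results["radar"] + results["lidar"]   # fused reference
            for ref_obj, reason in self.compare_fn(results["vision"], reference):
                # in practice only the framed region around ref_obj would be used
                region = frame.visual_picture
                virtual_scenes.append((self.generalize_fn(region), reason))
        return virtual_scenes  # generalized data for new training or training sets
```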
Example 3
The embodiment of the invention provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when executing the program, the processor implements the following steps: acquiring multi-channel sensor source data, and performing target object perception on each channel of sensor source data to obtain a corresponding perception result, wherein the multi-channel sensors comprise a visual sensor and other sensors; comparing the perception result of the visual sensor with the perception results of the other sensors to obtain data in which the visual sensor data is inconsistent with the other sensor data, and labeling the inconsistent data; and performing generalization processing on the inconsistent data through data enhancement techniques, and generating virtual scenes from the generalized data for use in new training or training sets.
Example 4
The present embodiment provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the following steps: acquiring multi-channel sensor source data, and performing target object perception on each channel of sensor source data to obtain a corresponding perception result, wherein the multi-channel sensors comprise a visual sensor and other sensors; comparing the perception result of the visual sensor with the perception results of the other sensors to obtain data in which the visual sensor data is inconsistent with the other sensor data, and labeling the inconsistent data; and performing generalization processing on the inconsistent data through data enhancement techniques, and generating virtual scenes from the generalized data for use in new training or training sets.
In summary, the embodiments of the present invention provide a method, a system, a device, and a storage medium for generalizing automatic driving scenarios, which identify distribution differences and generalize the differing regions using data enhancement techniques from deep learning. Specifically, taking multimodal data sources and deep learning as the core means improves the effect of scene generalization; special scenes in perception are extracted and generalized in a targeted manner, significantly improving the value of scene generalization for algorithm training.
It should be noted that, in the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to relevant descriptions of other embodiments for parts that are not described in detail in a certain embodiment.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process, such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.

Claims (9)

1. A method for generalizing an automatic driving scenario, the method comprising the following steps:
acquiring multi-channel sensor source data, and performing target object perception on each channel of sensor source data to obtain a corresponding perception result, wherein the multi-channel sensors comprise a visual sensor and other sensors;
comparing the perception result of the visual sensor with the perception results of the other sensors to obtain data in which the visual sensor data is inconsistent with the other sensor data, and labeling the inconsistent data;
and performing generalization processing on the inconsistent data through a data enhancement technique, and generating virtual scenes from the generalized data for use in new training or training sets.
2. The method of claim 1, wherein the other sensors comprise a radar sensor and a lidar sensor, and wherein the multi-channel sensor source data comprises a visual picture collected by the visual sensor, a radar signal collected by the radar sensor, and lidar point cloud data collected by the lidar sensor.
3. The method of claim 1, wherein performing target object perception on each channel of sensor source data to obtain a corresponding perception result comprises:
performing target object perception based on the visual picture, the radar signal, and the lidar point cloud data to obtain corresponding target object perception results, wherein each target object perception result comprises a target object type, size, direction, distance, and position.
4. The method of claim 1, wherein obtaining data in which the visual sensor data is inconsistent with the other sensor data by comparing the visual sensor perception result with the other sensor perception results comprises:
taking the target object perception results of the radar sensor and the lidar sensor as references; if comparison of the visual sensor's target object perception result against these references shows that the target object is missed, that the target object types are inconsistent, or that the difference in target object size, direction, distance, or position exceeds a set difference value, acquiring the inconsistent visual pictures and the regions in those pictures where the inconsistent target objects are located, and labeling the regions.
5. The method of claim 1, wherein the data enhancement technique comprises a first data enhancement and a second data enhancement, wherein the first data enhancement comprises at least one of flipping, rotating, cropping, deforming, and scaling; and the second data enhancement comprises at least one of noise, blurring, color transformation, erasing, and padding.
6. The method of claim 1, wherein performing generalization processing on the inconsistent data through a data enhancement technique and generating virtual scenes from the generalized data for use in new training or training sets comprises:
adding random noise to the inconsistent data, and generalizing the image data in the inconsistent data by adding at least one of texture, illumination, light source, color transformation, erasing, and padding to obtain generalized image data; and generalizing the target object in the inconsistent data by at least one of rotating, occluding, flipping, cropping, deforming, and scaling to obtain a generalized target object form, the generalized image data and target object form being used in new training or training sets.
7. A system for generalizing an automatic driving scenario, comprising a data acquisition module, a data comparison module, and a data generalization module; wherein:
the data acquisition module is used for acquiring multi-channel sensor source data and performing target object perception on each channel of sensor source data to obtain a corresponding perception result, wherein the multi-channel sensors comprise a visual sensor and other sensors;
the data comparison module is used for comparing the perception result of the visual sensor with the perception results of the other sensors to obtain data in which the visual sensor data is inconsistent with the other sensor data, and for labeling the inconsistent data;
and the data generalization module is used for performing generalization processing on the inconsistent data through data enhancement techniques, and for generating virtual scenes from the generalized data for use in new training or training sets.
8. An electronic device, comprising a memory and a processor, the processor being configured to implement the steps of the method for generalizing an automatic driving scenario according to any one of claims 1 to 6 when executing a computer program stored in the memory.
9. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the method for generalizing an automatic driving scenario according to any one of claims 1 to 6.
CN202210603505.1A 2022-05-27 2022-05-27 Method, system, equipment and storage medium for generalization of automatic driving scene Pending CN115131754A (en)

Priority Applications (1)

Application Number: CN202210603505.1A
Priority Date: 2022-05-27
Filing Date: 2022-05-27
Title: Method, system, equipment and storage medium for generalization of automatic driving scene

Applications Claiming Priority (1)

Application Number: CN202210603505.1A
Priority Date: 2022-05-27
Filing Date: 2022-05-27
Title: Method, system, equipment and storage medium for generalization of automatic driving scene

Publications (1)

Publication Number: CN115131754A
Publication Date: 2022-09-30

Family

ID=83377176

Family Applications (1)

Application Number: CN202210603505.1A
Title: Method, system, equipment and storage medium for generalization of automatic driving scene (CN115131754A, pending)
Priority Date: 2022-05-27
Filing Date: 2022-05-27

Country Status (1)

Country: CN
Link: CN115131754A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination