CN117542021A - Vehicle control method and device, vehicle and storage medium - Google Patents

Vehicle control method and device, vehicle and storage medium

Info

Publication number
CN117542021A
CN117542021A (Application CN202311676673.4A)
Authority
CN
China
Prior art keywords
image
vehicle
fusion
mark point
position mark
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311676673.4A
Other languages
Chinese (zh)
Inventor
艾锐
黄佳伟
李洪杰
顾维灏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Haomo Zhixing Technology Co Ltd
Original Assignee
Haomo Zhixing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Haomo Zhixing Technology Co Ltd filed Critical Haomo Zhixing Technology Co Ltd
Priority to CN202311676673.4A priority Critical patent/CN117542021A/en
Publication of CN117542021A publication Critical patent/CN117542021A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application is applicable to the technical field of intelligent automobiles, and provides a vehicle control method, a device, a vehicle and a storage medium, wherein the method comprises the following steps: acquiring a first image set, wherein the first image set comprises continuous N frames of environment images acquired by a vehicle, and an environment image acquired by the vehicle at the current moment exists among the N frames of environment images; performing image fusion on the N frames of environment images in the first image set to obtain a first fusion image; acquiring a second fusion image, wherein the second fusion image is generated based on an environment image acquired when the position of the vehicle is at a position mark point, and the position mark point is determined based on the running distance of the vehicle before the current moment; performing image fusion on the first fusion image and the second fusion image to obtain a third fusion image; and controlling the vehicle to run based on the third fusion image. According to the method and the device, the running of the vehicle can be controlled accurately based on rich environmental information.

Description

Vehicle control method and device, vehicle and storage medium
Technical Field
The application belongs to the technical field of intelligent automobiles, and particularly relates to a vehicle control method, a device, a vehicle and a storage medium.
Background
With the development of vehicles, automatic driving and assisted driving are receiving increasing attention. During automatic driving and assisted driving, accurate detection of the obstacles around the vehicle is important. At present, when a vehicle determines an obstacle, it often relies only on the single frame image acquired at the current moment; because the information in that single frame is limited, the obstacle is often determined incompletely, so that the vehicle is controlled inaccurately.
Disclosure of Invention
The embodiment of the application provides a vehicle control method, a vehicle control device, a vehicle and a storage medium, which can solve the problem of inaccurate vehicle control caused by incomplete obstacle determination.
In a first aspect, an embodiment of the present application provides a vehicle control method, including:
acquiring a first image set, wherein the first image set comprises continuous N frames of environment images acquired by a vehicle, and the environment images acquired by the vehicle at the current moment exist in the N frames of environment images, wherein N is more than or equal to 2;
performing image fusion on the N frames of environment images in the first image set to obtain a first fusion image;
acquiring a second fusion image, wherein the second fusion image is generated based on an environment image acquired when the position of the vehicle is at a position mark point, and the position mark point is determined based on the running distance of the vehicle before the current moment;
Performing image fusion on the first fusion image and the second fusion image to obtain a third fusion image;
and controlling the vehicle to run based on the third fusion image.
In a second aspect, an embodiment of the present application provides a vehicle control apparatus, including:
the first image acquisition module is used for acquiring a first image set, wherein the first image set comprises continuous N frames of environment images acquired by a vehicle, and the environment images acquired by the vehicle at the current moment exist in the N frames of environment images, wherein N is more than or equal to 2;
the first image fusion module is used for carrying out image fusion on the N frames of environment images in the first image set to obtain a first fusion image;
the second image acquisition module is used for acquiring a second fusion image, the second fusion image is generated based on an environment image acquired when the position of the vehicle is at a position mark point, and the position mark point is determined based on the driving distance of the vehicle before the current moment;
the second image fusion module is used for carrying out image fusion on the first fusion image and the second fusion image to obtain a third fusion image;
and the vehicle control module is used for controlling the vehicle to run based on the third fusion image.
In a third aspect, an embodiment of the present application provides a terminal device, including: a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the method of any one of the above first aspects when the computer program is executed.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method of any one of the first aspects.
In a fifth aspect, embodiments of the present application provide a computer program product for, when run on a terminal device, causing the terminal device to perform the method of any one of the first aspects.
Compared with the prior art, the embodiment of the first aspect of the application has the following beneficial effects: a first image set is first obtained, and image fusion is performed on the multi-frame environment images in the first image set to obtain a first fusion image; because the first fusion image is obtained by fusing continuous multi-frame environment images, it includes the surrounding environment information of the area the vehicle has passed through over a period of time. A second fusion image is then acquired, the second fusion image being a fusion image determined when the vehicle was at a position mark point, that is, a fusion image determined according to the position of the vehicle. Finally, the first fusion image and the second fusion image are fused to obtain a third fusion image with richer information; each obstacle and its position can be determined accurately and comprehensively from the information-rich third fusion image, and the running of the vehicle can then be controlled more accurately.
It will be appreciated that the advantages of the second to fifth aspects may be found in the relevant description of the first aspect, and are not described here again.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required for the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic architecture diagram of a vehicle control method provided in an embodiment of the present application;
FIG. 2 is a flow chart of a vehicle control method according to an embodiment of the present disclosure;
FIG. 3 is a schematic view of an environment image according to an embodiment of the present application;
FIG. 4 is a flowchart illustrating a method for determining a second fused image according to an embodiment of the present disclosure;
FIG. 5 is a flow chart of a method for controlling vehicle travel according to a third fused image according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a convolutional neural network according to one embodiment of the present application;
FIG. 7 is a flow chart of a method for obstacle location determination according to an embodiment of the present application;
fig. 8 is a schematic structural view of a vehicle control apparatus provided in an embodiment of the present application;
fig. 9 is a schematic structural view of a vehicle according to an embodiment of the present application.
Detailed Description
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As used in this specification and the appended claims, the term "if" may be interpreted in context as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if the [described condition or event] is detected" may be interpreted in context as meaning "upon determining", "in response to determining", "upon detecting the [described condition or event]" or "in response to detecting the [described condition or event]".
In addition, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
At present, when a vehicle performs automatic driving or assisted driving, the surrounding environment information it obtains is incomplete, which makes automatic driving inaccurate. For example, during automatic parking the vehicle needs to adjust its position back and forth within a small range, so the environmental information collected over a period of time is largely the same and its content is not comprehensive; and if the vehicle needs to move a larger distance, it cannot determine the obstacle situation over a large-range area, so that the running efficiency of the vehicle is low and the control of the vehicle is inaccurate.
Based on the above problems, the present application proposes a vehicle control method, in which the vehicle performs driving control jointly according to the environment images collected over a short time and an environment image determined over a longer travelled distance.
Specifically, a position mark point is determined every time the vehicle travels a preset distance, and a fusion image corresponding to the position mark point is obtained from the M frames of environment images acquired when the vehicle is at that position mark point.
As shown in fig. 1, a vehicle performs short-time fusion on N environmental images acquired within a preset time period to obtain a short-time fusion image (first fusion image); acquiring a second fusion image stored before the current moment, and carrying out long-time sequence fusion on the first fusion image and the second fusion image to obtain a long-time fusion image (a third fusion image); and controlling the vehicle according to the third fusion image. The second fusion image is a fusion image obtained based on the collected environment image when the vehicle is at the last position mark point. The position detection precision of the obstacle can be improved through short-time sequence fusion; the memory of the fusion image is improved through long-time sequence fusion, so that the final fusion image can output history information, and the content of the fusion image is enriched. The vehicle control method according to the embodiment of the present application is described in detail below with reference to fig. 1.
Fig. 2 shows a schematic flowchart of a vehicle control method provided in the present application, and referring to fig. 2, the method is described in detail as follows:
s101, acquiring a first image set, wherein the first image set comprises continuous N frames of environment images acquired by a vehicle, and the environment images acquired by the vehicle at the current moment exist in the N frames of environment images, wherein N is more than or equal to 2.
In this embodiment, the vehicle acquires environment images in real time during driving, that is, the vehicle acquires an environment image at a preset acquisition interval, where the preset acquisition interval may be set as required; for example, one frame of environment image is acquired every 20 ms. The environment image includes the shape of each object, the position of each object, and the like.
After each frame of new environment image is acquired by the vehicle, the new environment image and the N-1 frame of environment image acquired before form a first image set. The value of N may be set as required, for example, the N frame of environmental images may be 100 frames of environmental images acquired within 2 s.
Specifically, the method for acquiring the first image set includes:
the environment around the vehicle is perceived by an ultrasonic radar mounted on the vehicle, and environmental data (environmental information) around the vehicle is obtained. And performing BEV (Bird's Eye View) conversion on the environment data to obtain an environment image corresponding to the environment data. Successive N frames of ambient images are determined as a first image set.
Where BEV is a perspective from above to view an object or scene, as if birds were looking down on the ground in the sky. In the field of autopilot and robotics, data acquired by sensors is typically converted into BEV representations for better object detection, path planning, etc. BEVs can reduce complex three-dimensional environments to two-dimensional images, which is particularly important for efficient computation in real-time systems. For example, the BEV environment image shown in fig. 3, where the triangles shown in the image represent objects other than vehicles.
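As a concrete illustration of this acquisition step, the following Python sketch rasterizes ultrasonic detections into a BEV image and keeps a rolling window of the latest N frames as the first image set. The grid size, resolution, detection format and all function names are illustrative assumptions rather than details taken from this application.

```python
from collections import deque

import numpy as np

# Illustrative sketch of S101 (assumed parameters): rasterize ultrasonic
# detections into a BEV occupancy image and keep a rolling window of the
# latest N frames as the first image set.

GRID_SIZE = 200        # 200 x 200 BEV cells
RESOLUTION = 0.1       # metres per cell -> a 20 m x 20 m area around the vehicle
N = 10                 # window length for short-time fusion

def to_bev_image(points_xy: np.ndarray) -> np.ndarray:
    """Rasterize (x, y) detections (metres, vehicle-centred) into a BEV image."""
    img = np.zeros((GRID_SIZE, GRID_SIZE), dtype=np.uint8)
    cols = (points_xy[:, 0] / RESOLUTION + GRID_SIZE / 2).astype(int)
    rows = (points_xy[:, 1] / RESOLUTION + GRID_SIZE / 2).astype(int)
    keep = (rows >= 0) & (rows < GRID_SIZE) & (cols >= 0) & (cols < GRID_SIZE)
    img[rows[keep], cols[keep]] = 1
    return img

first_image_set = deque(maxlen=N)   # always holds the latest N environment images

def on_new_radar_frame(points_xy: np.ndarray) -> None:
    """Called at every sampling moment; the newest frame pushes out the oldest."""
    first_image_set.append(to_bev_image(points_xy))
```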
S102, performing image fusion on the N frames of environment images in the first image set to obtain a first fusion image.
In this embodiment, because the single-frame signal collected by the vehicle is sparse and contains many false detections, a single frame of environment image carries little, and sometimes inaccurate, information about objects. To obtain more comprehensive and accurate environment information, multiple frames of environment images therefore need to be fused.
Specifically, the image fusion of the N frames of environment images essentially maps the objects in the previous N-1 frames of environment images into the environment image acquired at the current moment, so that richer and more comprehensive object information is available at the current moment.
Specifically, since the objects in each frame of environment image carry position information (for example, with the starting position of the vehicle set as the origin, every object in each frame has coordinates relative to that origin), all the objects in the N frames of environment images can be unified into the environment image acquired at the current moment according to their coordinates, thereby achieving the purpose of image fusion.
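To make the coordinate-based fusion concrete, the following sketch maps the object points of all N frames, stored in a common origin-anchored coordinate system, into the BEV grid of the current frame. The translation-only pose handling and the parameter values are simplifying assumptions; a real system would also account for the vehicle's heading.

```python
import numpy as np

# Sketch of S102 (assumptions as noted above): project the object points of all
# N frames, stored in a coordinate system anchored at the start position, into
# the BEV grid centred on the current vehicle position.

def fuse_short_time(frames, current_pose_xy, grid_size=200, resolution=0.1):
    """frames: list of (K_i, 2) arrays of object points in the origin frame."""
    fused = np.zeros((grid_size, grid_size), dtype=np.uint8)
    for pts in frames:
        rel = pts - np.asarray(current_pose_xy)     # points relative to current position
        cols = (rel[:, 0] / resolution + grid_size / 2).astype(int)
        rows = (rel[:, 1] / resolution + grid_size / 2).astype(int)
        keep = (rows >= 0) & (rows < grid_size) & (cols >= 0) & (cols < grid_size)
        fused[rows[keep], cols[keep]] = 1           # union of the objects of all N frames
    return fused
```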
And S103, acquiring a second fusion image, wherein the second fusion image is generated based on an environment image acquired when the position of the vehicle is at a position mark point, and the position mark point is determined based on the driving distance of the vehicle before the current moment.
In this embodiment, after the vehicle starts traveling from the start position, the position where the vehicle is located is recorded as a position mark point every time it travels a preset distance; the preset distance may be set as needed, for example, 5 meters or 10 meters. For the purpose of long time-sequence fusion, the time for the vehicle to travel the preset distance should be longer than the time for the vehicle to acquire the N frames of environment images; for example, if the N frames of environment images are acquired within 2 s, the vehicle should take 10 s or 15 s to travel the preset distance.
For example, if the preset distance is 5 meters, the vehicle arrives at the a position after traveling 5 meters from the initial position, and the a position is recorded as a position mark point. When the vehicle arrives at the position B, the distance between the position B and the position A is 5 meters, and the position B is marked as a position marking point.
In this embodiment, when the vehicle is at the position mark point, the vehicle obtains a second fusion image from the M environmental images acquired before and at the time. Wherein M may or may not be equal to N. Each position mark point corresponds to one second fusion image.
As an example, after the vehicle reaches the 3rd position mark point, all the environment images acquired between the 2nd position mark point and the 3rd position mark point are fused to obtain the second fused image corresponding to the 3rd position mark point; or the vehicle extracts M environment images from all the environment images acquired between the 2nd position mark point and the 3rd position mark point and fuses them to obtain the second fused image corresponding to the 3rd position mark point. The fused image corresponding to each position mark point is recorded as the second fused image corresponding to that position mark point.
Thus, the number of the second fused images acquired in step S103 is one or more, that is, the second fused image corresponding to one position mark point may be acquired, or the second fused images corresponding to a plurality of position mark points may be acquired.
As an example, step S103 may include: and acquiring a fusion image generated based on the acquired environment image when the vehicle is at an ith position mark point, wherein the ith position mark point is the last position mark point determined before the current moment.
Alternatively, step S103 may include: and acquiring a second fusion image corresponding to the ith position mark point, a second fusion image corresponding to the (i-1) th position mark point and the like.
S104, performing image fusion on the first fusion image and the second fusion image to obtain a third fusion image.
In this embodiment, the method for performing image fusion on the first fused image and the second fused image is the same as the method for performing image fusion on the N frame environment image in the step S102, please refer to the description of the step S102, and the description is omitted here.
And S105, controlling the vehicle to run based on the third fusion image.
In this embodiment, the position of the obstacle (the ground line of the obstacle) may be determined according to the third fused image, so as to determine the obstacle avoidance strategy, thereby achieving the purpose of controlling the vehicle to travel.
In the embodiment of the application, a first image set is firstly obtained, and multi-frame environmental images in the first image set are subjected to image fusion to obtain a first fusion image, wherein the first fusion image is obtained by fusing continuous multi-frame environmental images, so that the first fusion image comprises surrounding environment information of an area where a vehicle passes in a period of time; acquiring a second fused image, wherein the second fused image is a fused image determined when the vehicle is at a position mark point, the second fused image is a fused image obtained after the vehicle travels a certain distance, and finally, the first fused image and the second fused image are fused to obtain a third fused image with richer content; the vehicle running can be controlled more rapidly and accurately according to the third fusion image with rich information.
In one possible implementation manner, the second fused image acquired in the step S103 may be a second fused image corresponding to the last position mark point determined before the current time. And in the running process of the vehicle, each time a new position mark point is determined, the stored second fusion image corresponding to the last position mark point can be replaced by the second fusion image corresponding to the new position mark point, so that only the second fusion image corresponding to the last determined position mark point is stored at each moment in the vehicle. And when the images are fused, directly acquiring the stored second fused image to obtain the second fused image corresponding to the last position mark point determined before the current moment.
As shown in fig. 4, specifically, the method for determining the second fused image corresponding to the position mark point includes:
s201, when the position of the vehicle is determined to be an ith position mark point, acquiring continuous M frame environment images acquired by the vehicle, wherein the M frame environment images contain environment images acquired when the vehicle is positioned at the ith position mark point, M is more than or equal to 2, and the distance between the ith position mark point and the ith-1 th position mark point is the preset distance.
In this embodiment, the M-1 frame ambient images are all ambient images acquired before the vehicle is at the ith position mark point.
S202, performing image fusion on the M frames of environment images to obtain the second fusion image.
In this embodiment, M may be the same as N or different from N.
When M is the same as N, the second fused image determined by the vehicle at the ith position mark point is actually the first fused image determined by the vehicle at the ith position mark point.
As an example, when the vehicle travels to the C position, the first fused image at the C position is determined using the above step S101 and step S102. If the distance between the C position and the previous position mark point (the B position) is the preset distance, the C position is determined to be a new position mark point, and the first fused image determined at the C position can be directly recorded as the second fused image of the C position. The second fused image of the C position then replaces the second fused image of the previous position mark point (the B position) stored in the storage module, so that only the second fused image corresponding to the most recently determined position mark point (the C position) exists in the storage module; when image fusion is performed after the ith position mark point, the stored second fused image and the first fused image can be obtained directly for image fusion.
When M is different from N, the second fused image determined at the ith position mark point can be stored in the storage module so as to be convenient for subsequent use.
In a possible implementation manner, the second fused image may also be determined at the current moment, so the method for acquiring the second fused image in step S103 may further include:
at the current moment, the vehicle acquires an ith position mark point, a first environment image acquired when the vehicle is at the ith position mark point and M-1 second environment images acquired before the ith position mark point. And performing image fusion on the first environment image and each second environment image to obtain a second fusion image when the vehicle is positioned at the ith position mark point. The i-th position mark point is the last position mark point determined before the current moment.
In one possible implementation manner, each time the vehicle collects an environmental image, it may be calculated whether the distance between the position where the vehicle is located and the last position mark point determined previously reaches a preset distance, so as to determine whether the current position is a new position mark point.
Specifically, after step S102, the above method may further include:
Acquiring the current position of the vehicle; and when the distance between the current position and the ith position mark point is equal to the preset distance, determining the current position of the vehicle as the (i+1) th position mark point. And determining the first fusion image as a second fusion image corresponding to the (i+1) th position mark point. Or determining a second fusion image corresponding to the (i+1) th position mark point by using the continuous M-frame environment images.
In this embodiment, the position of the vehicle at the current time is acquired, and the position of the vehicle at the current time is recorded as the current position. The distance between the current position and the i-th position mark point is calculated. If the distance between the current position and the ith position mark point is equal to the preset distance, the current position can be used as a new position mark point. And determining the first fusion image determined at the current moment as a second fusion image corresponding to the (i+1) th position mark point.
When the vehicle runs between the (i+1)th position mark point and the (i+2)th position mark point (excluding the (i+1)th position mark point), the second fused image corresponding to the (i+1)th position mark point is fused with the first fused image determined at each sampling moment to obtain a third fused image.
For example, if the position of the vehicle at the 400th sampling moment is the 2nd position mark point and the position of the vehicle at the 600th sampling moment is the 3rd position mark point, then the first fused image obtained at the 401st sampling moment is fused with the second fused image determined at the 2nd position mark point to obtain a third fused image; the first fused image obtained at the 402nd sampling moment is fused with the second fused image determined at the 2nd position mark point to obtain a third fused image; and so on, until the first fused image obtained at the 600th sampling moment is fused with the second fused image determined at the 2nd position mark point to obtain a third fused image; the first fused image obtained at the 601st sampling moment is then fused with the second fused image determined at the 3rd position mark point to obtain a third fused image.
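The mark-point bookkeeping described above can be sketched as follows, assuming a straight-line distance check and that only the most recently determined second fused image is kept (the M = N case, in which the first fused image at the mark point is reused directly). The class and member names are hypothetical.

```python
import numpy as np

# Sketch of the mark-point bookkeeping (hypothetical names): when the vehicle
# has moved the preset distance from the last mark point, the current position
# becomes the next mark point and the first fused image just computed replaces
# the stored second fused image, so only the latest one is kept.

PRESET_DISTANCE = 5.0   # metres, configurable

class MarkPointTracker:
    def __init__(self, start_xy):
        self.last_mark_xy = np.asarray(start_xy, dtype=float)
        self.second_fused = None   # second fused image of the latest mark point

    def update(self, current_xy, first_fused):
        """Call once per sampling moment, after the first fused image is built."""
        if np.linalg.norm(np.asarray(current_xy) - self.last_mark_xy) >= PRESET_DISTANCE:
            self.last_mark_xy = np.asarray(current_xy, dtype=float)
            self.second_fused = first_fused.copy()   # M == N: reuse the first fused image
        return self.second_fused   # second fused image to use at this sampling moment
```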
In one possible implementation manner, in order to more accurately control the vehicle to run, the vehicle is prevented from colliding with the obstacle, after the third fused image is obtained, the feature points of the third fused image are extracted, the position of the obstacle is determined according to the feature points, and then the vehicle is controlled to run according to the position of the obstacle.
As shown in fig. 5, specifically, the implementation procedure of step S105 may include:
s1051, classifying each pixel point in the third fused image to obtain class information of each pixel point, wherein the class information is barrier or no barrier.
In the present embodiment, the third fused image is input into an instance segmentation network (a convolutional neural network) for feature extraction. Specifically, as shown in fig. 6, the instance segmentation network includes a backbone network (backbone), a decoder, and a segmentation head, and the segmentation head includes a semantic segmentation model (semantic) and an instance segmentation model (instance). The backbone network may be a MobileNet_V2 network structure and is used for downsampling the features of the third fused image; the downsampling multiple may be set as required, for example, 8 times. The feature image obtained after passing through the backbone network is input to the decoder, which upsamples the feature image; the upsampling multiple may likewise be set as required, for example, 8 times. The feature image after passing through the backbone network and the decoder is thus restored to the same resolution as the third fused image. The feature image obtained from the decoder is input into the segmentation head, which performs semantic segmentation on the input feature image using the semantic segmentation model and instance segmentation using the instance segmentation model.
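A hedged PyTorch sketch of such a network is given below: a MobileNetV2 backbone providing roughly 8-times downsampling, a decoder that upsamples back to the input resolution, and two 1x1-convolution heads for semantic segmentation and instance embeddings. The channel counts, the embedding dimension, the use of torchvision's MobileNetV2 and the 3-channel input are assumptions, not specifics of this application.

```python
import torch
import torch.nn as nn
import torchvision

# Hedged sketch of the described network: MobileNetV2 backbone (stride-8
# features), a decoder that upsamples 8x back to the input resolution, and a
# segmentation head with a semantic branch (obstacle / no obstacle) and an
# instance-embedding branch. Channel counts and the embedding dimension are
# illustrative assumptions.

class FusionSegNet(nn.Module):
    def __init__(self, embed_dim: int = 8):
        super().__init__()
        # The first MobileNetV2 blocks yield stride-8 features with 32 channels.
        self.backbone = torchvision.models.mobilenet_v2().features[:7]
        self.decoder = nn.Sequential(              # restore the input resolution
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=8, mode="bilinear", align_corners=False),
        )
        self.semantic_head = nn.Conv2d(32, 1, 1)          # obstacle logit per pixel
        self.instance_head = nn.Conv2d(32, embed_dim, 1)  # per-pixel feature representation

    def forward(self, bev):                        # bev: (B, 3, H, W), H and W divisible by 8
        feats = self.decoder(self.backbone(bev))
        return self.semantic_head(feats), self.instance_head(feats)
```

In this sketch, pixels whose semantic logit is positive are treated as obstacle pixels, and the embeddings of those pixels are clustered later (step S1053) to separate individual obstacles.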
And performing semantic segmentation on the third fusion image to obtain category information of each pixel point. For example, after semantic segmentation, the category information of each pixel point in the third fused image is 1 or 0, where 1 indicates that the pixel point has an obstacle, and 0 indicates that the pixel point has no obstacle.
S1052, performing instance segmentation on the third fused image to obtain a characteristic representation of each pixel point in the third fused image, wherein the characteristic representation is used for distinguishing different barriers.
In this embodiment, instance segmentation is used to distinguish different individuals of the same kind of object; for example, different vehicles yield different feature representations for their corresponding pixels.
S1053, obtaining the position information of the obstacle in the area where the vehicle is located according to the category information and the characteristic representation of each pixel point.
In this embodiment, the pixel points with the obstacle are extracted according to the category information of each pixel point. And then distinguishing different barriers according to the characteristic representation corresponding to the pixel points with the barriers, so as to obtain the position information of each barrier. The position information of the obstacle in the present application may include a ground line position of the obstacle.
S1054, controlling the vehicle to travel based on the position information of the obstacle.
In the present embodiment, the travel route of the vehicle is determined according to the position information of the obstacle, and the vehicle travel is controlled based on the travel route of the vehicle.
In the embodiment of the application, the category information and the feature representation of each pixel point are obtained by extracting the features of the third fusion image, so that the position information of each obstacle is determined, the vehicle running control can be accurately performed according to the position information of the obstacle, and the collision between the vehicle and the obstacle is avoided.
As shown in fig. 7, in one possible implementation, the implementation procedure of step S1053 may include:
s301, screening target pixel points from the pixel points, wherein the category information of the target pixel points is that barriers exist.
In this embodiment, the pixel points with the obstacle are selected according to the category information, and the pixel points with the obstacle are recorded as target pixel points. For example, if the category information is represented by 1 and 0, and 1 indicates that an obstacle exists, the pixel marked with 1 is extracted and marked as a target pixel.
S302, inquiring the feature representation corresponding to the target pixel point.
In this embodiment, in order to distinguish between different obstacles, a feature representation of the target pixel point is extracted.
And S303, clustering the characteristic representations corresponding to the target pixel points to obtain the target positions of the obstacles in the area where the vehicle is located.
In this embodiment, since the characteristic representations of different obstacles are different, different obstacles can be distinguished by clustering the characteristic representations corresponding to the target pixel points. Specifically, clustering the feature representations corresponding to the target pixel points to obtain candidate positions of the obstacle; and performing skeletonizing extraction and/or thinning operation on the candidate positions of each obstacle to obtain the accurate position (target position) of the obstacle.
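A sketch of steps S301 to S303 is given below, assuming DBSCAN (scikit-learn) for clustering the feature representations and skimage skeletonization for the thinning operation; the application itself does not name a particular clustering algorithm, and the parameter values are placeholders.

```python
import numpy as np
from skimage.morphology import skeletonize
from sklearn.cluster import DBSCAN

# Sketch of S301-S303 (assumed clustering algorithm and parameters): keep the
# obstacle pixels, cluster their feature representations to separate the
# obstacles, then skeletonize each obstacle mask to get a thinned target position.

def obstacle_positions(class_map, embeddings, eps=0.5, min_samples=20):
    """class_map: (H, W) 0/1 map; embeddings: (H, W, D) per-pixel features."""
    rows, cols = np.nonzero(class_map)              # S301: target pixels with obstacles
    if len(rows) == 0:
        return []
    feats = embeddings[rows, cols]                  # S302: their feature representations
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(feats)  # S303: clustering

    positions = []
    for lab in set(labels) - {-1}:                  # label -1 is DBSCAN noise
        mask = np.zeros(class_map.shape, dtype=bool)
        mask[rows[labels == lab], cols[labels == lab]] = True
        positions.append(np.argwhere(skeletonize(mask)))   # thinned target position
    return positions
```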
In one possible implementation manner, the method may further include:
environmental data around the vehicle is collected by ultrasonic radar. And performing BEV conversion on the environmental data to obtain environmental images corresponding to each sampling moment.
And acquiring continuous N frames of environment images acquired in a preset time at the current moment, wherein the N frames of environment images comprise environment images acquired at the current moment.
And carrying out image fusion on the N frames of environment images to obtain a first fusion image.
Acquiring the second fused image corresponding to the last position mark point determined before the current moment, wherein one position mark point is determined every time the vehicle travels the preset distance, and the fused image obtained by using continuous M frames of environment images when the vehicle reaches each position mark point is recorded as the second fused image corresponding to that position mark point; the M frames of environment images may include an environment image acquired when the vehicle is at the position mark point.
And performing image fusion on the first fusion image and the second fusion image to obtain a third fusion image.
Inputting the third fused image into a convolutional neural network to perform feature extraction on the third fused image to obtain category information and feature representation of each pixel point, wherein the feature extraction comprises semantic segmentation and instance segmentation, and the category information comprises barriers and no barriers.
And extracting the target pixel points, wherein the category information of the target pixel points is that barriers exist.
Extracting the characteristic representation corresponding to the target pixel point, and clustering the characteristic representation of the target pixel point to obtain candidate positions of each obstacle.
And performing skeletonizing extraction and thinning operation on candidate positions of each obstacle to obtain a target position of each obstacle.
The vehicle travel is controlled according to the target position of the obstacle.
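Tying the steps above together, the following sketch shows one sampling moment end to end, reusing the hypothetical helpers from the earlier sketches (fuse_short_time, MarkPointTracker, FusionSegNet, obstacle_positions). For simplicity the rolling window here stores the per-frame detection points in the origin coordinate system rather than rasterized images, and the long time-sequence fusion of the first and second fused images is shown as a per-cell union, mirroring the coordinate-based merge of S102; the planner interface is purely illustrative.

```python
import numpy as np
import torch

# End-to-end sketch of one sampling moment, built from the hypothetical helpers
# sketched earlier (fuse_short_time, MarkPointTracker, FusionSegNet,
# obstacle_positions). The planner object and its plan_and_drive method are
# purely illustrative placeholders.

def process_sampling_moment(points_origin_xy, pose_xy, window, tracker, net, planner):
    window.append(points_origin_xy)                            # S101: rolling N-frame set
    first_fused = fuse_short_time(list(window), pose_xy)       # S102: short-time fusion
    second_fused = tracker.update(pose_xy, first_fused)        # S103: latest mark point
    third_fused = first_fused if second_fused is None else \
        np.maximum(first_fused, second_fused)                  # S104: long time-sequence fusion

    with torch.no_grad():                                      # S105: feature extraction
        inp = torch.from_numpy(
            np.repeat(third_fused[None, None].astype(np.float32), 3, axis=1))
        sem_logits, embeddings = net(inp)
    class_map = (sem_logits[0, 0] > 0).numpy().astype(np.uint8)   # obstacle / no obstacle
    emb = embeddings[0].permute(1, 2, 0).numpy()                  # (H, W, D) representations
    targets = obstacle_positions(class_map, emb)
    planner.plan_and_drive(targets)                            # control the vehicle to run
```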
As an embodiment of the present application, taking N = 10, M = 10, and a preset distance of 100 meters as an example, the method of the present application may include:
when the vehicle starts to run, the vehicle acquires the 1 st frame of environment image.
As the vehicle travels, when it acquires the 20th frame of environment image, it performs image fusion on the 11th to 20th frames of environment images to obtain the y-th first fused image, where y = 11.
By the time the vehicle acquires the 20th frame of environment image, it has travelled 100 meters; the position at which the 20th frame of environment image is acquired is recorded as the 1st position mark point, the y-th first fused image obtained at the 1st position mark point is stored, and the stored y-th first fused image is recorded as the 1st second fused image.
When the vehicle acquires the 21st frame of environment image, it fuses the 12th to 21st frames of environment images to obtain the (y+1)-th first fused image, fuses the (y+1)-th first fused image with the 1st second fused image to obtain a third fused image, and controls its running according to the third fused image.
When the vehicle acquires the 22nd frame of environment image, it fuses the 13th to 22nd frames of environment images to obtain the (y+2)-th first fused image, fuses the (y+2)-th first fused image with the 1st second fused image to obtain a third fused image, and controls its running according to the third fused image.
Proceeding in this way, when the vehicle acquires the 40th frame of environment image, it performs image fusion on the 31st to 40th frames of environment images to obtain the h-th first fused image, and fuses the h-th first fused image with the 1st second fused image to obtain a third fused image, where h = 31.
By the time the vehicle acquires the 40th frame of environment image, it has travelled 200 meters (the distance between its position and the 1st position mark point is 100 meters); the position at which the 40th frame of environment image is acquired is recorded as the 2nd position mark point, the h-th first fused image obtained at the 2nd position mark point is stored, and the stored h-th first fused image is recorded as the 2nd second fused image.
When the vehicle acquires the 41st frame of environment image, it performs image fusion on the 32nd to 41st frames of environment images to obtain the (h+1)-th first fused image, fuses the (h+1)-th first fused image with the 2nd second fused image to obtain a third fused image, and controls its running according to the third fused image.
And determining a third fusion image at each sampling moment according to the method, and controlling the vehicle to run according to the third fusion image.
In this application, each time a frame of environment image is acquired during the running of the vehicle, a first fused image is obtained by short-time fusion (for example, of 10 continuously acquired frames of environment images), and the first fused image is then fused over a long time sequence with the most recently determined second fused image (in the example above, the second fused image determined every 100 meters of travel) to obtain a third fused image. The final fused image obtained by the vehicle therefore includes not only the environment information collected over a short time but also the environment information collected over a previous period, so that its environmental information is richer. For example, when parking, the vehicle may need to travel back and forth within a short distance, so the images acquired over a period of time are largely the same; the environmental information in the fused image then covers only a small section of the parking space (for example, only the middle or end position of the parking space), and the information in the fused image is limited. In this application, the fused image of the path previously travelled by the vehicle (the second fused image) is added, and this image may include environmental information that the current fused image lacks (for example, the environmental information of the entrance position of the parking space), so that the vehicle finally obtains the environmental information of the whole parking space; the obtained environmental information is richer, which is more favorable for controlling the running of the vehicle. It should be noted that, each time the preset distance is travelled, the second fused image may be determined from any combination of the frames of environment images acquired within the preset distance, which is not limited herein.
It should be understood that the sequence number of each step in the foregoing embodiment does not mean that the execution sequence of each process should be determined by the function and the internal logic of each process, and should not limit the implementation process of the embodiment of the present application in any way.
Corresponding to the vehicle control method described in the above embodiments, fig. 8 shows a block diagram of the vehicle control apparatus provided in the embodiment of the present application, and for convenience of explanation, only the portions relevant to the embodiment of the present application are shown.
Referring to fig. 8, the apparatus 400 may include: a first image acquisition module 410, a first image fusion module 420, a second image acquisition module 430, a second image fusion module 440, and a vehicle control module 450.
The first image acquisition module 410 is configured to acquire a first image set, where the first image set includes continuous N frames of environment images acquired by a vehicle, an environment image acquired by the vehicle at the current moment exists among the N frames of environment images, and N is greater than or equal to 2;
the first image fusion module 420 is configured to perform image fusion on the N environmental images in the first image set to obtain a first fused image;
a second image acquisition module 430, configured to acquire a second fused image, where the second fused image is generated based on an environmental image acquired when the position of the vehicle is at a position mark point, and the position mark point is determined based on a driving distance of the vehicle before the current time;
A second image fusion module 440, configured to perform image fusion on the first fused image and the second fused image to obtain a third fused image;
and a vehicle control module 450 for controlling the vehicle to run based on the third fusion image.
In one possible implementation manner, the vehicle determines one position mark point every time a preset distance is travelled, the second fused image is a fused image generated based on the acquired environment image when the vehicle is at an ith position mark point, and the ith position mark point is the last position mark point determined before the current moment.
In one possible implementation, the apparatus 400 further includes:
the third image acquisition module is used for acquiring continuous M frames of environment images acquired by the vehicle when the position of the vehicle is determined to be the ith position mark point, wherein the M frames of environment images contain an environment image acquired when the vehicle is at the ith position mark point, M is more than or equal to 2, and the distance between the ith position mark point and the (i-1)th position mark point is the preset distance;
and the third image fusion module is used for carrying out image fusion on the M frame environment images to obtain the second fusion image.
In one possible implementation, the vehicle control module 450 may be specifically configured to:
classifying each pixel point in the third fusion image to obtain category information of each pixel point, wherein the category information is that an obstacle exists or no obstacle exists;
performing example segmentation on the third fusion image to obtain a characteristic representation of each pixel point in the third fusion image, wherein the characteristic representation is used for distinguishing different barriers;
obtaining the position information of the obstacle in the area where the vehicle is located according to the category information and the characteristic representation of each pixel point;
and controlling the vehicle to run based on the position information of the obstacle.
In one possible implementation, the vehicle control module 450 may be specifically configured to:
screening target pixel points from all the pixel points, wherein the class information of the target pixel points is that barriers exist;
inquiring the feature representation corresponding to the target pixel point;
and clustering the characteristic representations corresponding to the target pixel points to obtain the target positions of the obstacles in the area where the vehicle is located.
In one possible implementation, the vehicle has an ultrasonic radar mounted thereon, and the first image acquisition module 410 may be specifically configured to:
acquiring the environment data collected by the ultrasonic radar at each sampling moment;
performing BEV conversion on each piece of environment data to obtain the environment image corresponding to each piece of environment data;
and determining the environment images of N continuous frames as the first image set, wherein environment images acquired by the vehicle at the current moment exist in the environment images of N continuous frames.
In one possible implementation, the apparatus further includes the following modules connected to the first image fusion module 420:
the position determining module is used for acquiring the current position of the vehicle;
the mark point determining module is used for determining the current position of the vehicle as the (i+1) th position mark point and determining the first fusion image as the fusion image determined by the vehicle at the (i+1) th position mark point when the distance between the current position and the (i) th position mark point is equal to the preset distance.
It should be noted that, because the content of information interaction and execution process between the above devices/units is based on the same concept as the method embodiment of the present application, specific functions and technical effects thereof may be referred to in the method embodiment section, and will not be described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
The present embodiment also provides a vehicle, referring to fig. 9, the vehicle 500 may include: at least one processor 510, a memory 520, and a computer program stored in the memory 520 and executable on the at least one processor 510, the processor 510, when executing the computer program, performing the steps of any of the various method embodiments described above, such as steps S101 to S105 in the embodiment shown in fig. 2. Alternatively, the processor 510 may perform the functions of the modules/units of the apparatus embodiments described above, such as the functions of the first image acquisition module 410 to the vehicle control module 450 shown in fig. 8, when executing the computer program.
By way of example, a computer program may be partitioned into one or more modules/units that are stored in memory 520 and executed by processor 510 to complete the present application. The one or more modules/units may be a series of computer program segments capable of performing specific functions for describing the execution of the computer program in the vehicle 500.
It will be appreciated by those skilled in the art that fig. 9 is merely an example of a vehicle and is not intended to be limiting of the vehicle, and may include more or fewer components than shown, or may combine certain components, or different components, such as input-output devices, network access devices, buses, etc.
The processor 510 may be a central processing unit (Central Processing Unit, CPU), but may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), off-the-shelf programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 520 may be an internal storage unit of the vehicle, or may be an external storage device of the vehicle, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card (Flash Card), or the like. The memory 520 is used to store the computer program and other programs and data required by the terminal device. The memory 520 may also be used to temporarily store data that has been output or is to be output.
The bus may be an industry standard architecture (Industry Standard Architecture, ISA) bus, an external device interconnect (Peripheral Component, PCI) bus, or an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus, among others. The buses may be divided into address buses, data buses, control buses, etc. For ease of illustration, the buses in the drawings of the present application are not limited to only one bus or one type of bus.
The vehicle control method provided by the embodiment of the application can be applied to terminal equipment such as computers, tablet computers, notebook computers, netbooks, personal digital assistants (personal digital assistant, PDA) and the like, and the specific type of the terminal equipment is not limited.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts that are not described or detailed in a particular embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed terminal device, apparatus and method may be implemented in other manners. For example, the above-described embodiments of the terminal device are merely illustrative; the division of the modules or units is merely a logical functional division, and there may be other division manners in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, and may be in electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow of the methods of the above embodiments may be implemented by instructing related hardware through a computer program; the computer program may be stored in a computer readable storage medium, and when executed by one or more processors, the computer program implements the steps of each of the method embodiments described above.
The above steps may also be implemented as a computer program product which, when run on a terminal device, causes the terminal device to perform the steps of the method embodiments described above.
The computer program comprises computer program code, which may be in source code form, object code form, an executable file, or some intermediate form, etc. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the relevant jurisdiction; for example, in some jurisdictions, computer readable media do not include electrical carrier signals and telecommunication signals.
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (10)

1. A vehicle control method characterized by comprising:
acquiring a first image set, wherein the first image set comprises N consecutive frames of environment images acquired by a vehicle, the environment image acquired by the vehicle at the current moment is among the N frames of environment images, and N is greater than or equal to 2;
performing image fusion on the N frames of environment images in the first image set to obtain a first fusion image;
acquiring a second fusion image, wherein the second fusion image is generated based on an environment image acquired when the vehicle is at a position mark point, and the position mark point is determined based on the running distance of the vehicle before the current moment;
performing image fusion on the first fusion image and the second fusion image to obtain a third fusion image;
and controlling the vehicle to run based on the third fusion image.
2. The method of claim 1, wherein the vehicle determines one position mark point for every preset distance traveled by the vehicle, the second fusion image is a fusion image generated based on the environment image acquired when the vehicle is at an i-th position mark point, and the i-th position mark point is the last position mark point determined before the current moment.
3. The method of claim 2, wherein the method of determining the second fused image comprises:
when the position of the vehicle is determined to be the i-th position mark point, acquiring M consecutive frames of environment images acquired by the vehicle, wherein the environment image acquired when the vehicle is at the i-th position mark point is among the M frames of environment images, M is greater than or equal to 2, and the distance between the i-th position mark point and the (i-1)-th position mark point is the preset distance;
and performing image fusion on the M frames of environment images to obtain the second fusion image.
4. The method of claim 1, wherein the controlling the vehicle to run based on the third fusion image comprises:
classifying each pixel point in the third fusion image to obtain category information of each pixel point, wherein the category information indicates that an obstacle exists or that no obstacle exists;
performing instance segmentation on the third fusion image to obtain a feature representation of each pixel point in the third fusion image, wherein the feature representation is used for distinguishing different obstacles;
obtaining the position information of the obstacle in the area where the vehicle is located according to the category information and the feature representation of each pixel point;
and controlling the vehicle to run based on the position information of the obstacle.
5. The method of claim 4, wherein the obtaining the position information of the obstacle in the area where the vehicle is located according to the category information and the feature representation of each pixel point comprises:
screening target pixel points from all the pixel points, wherein the category information of the target pixel points indicates that an obstacle exists;
querying the feature representations corresponding to the target pixel points;
and clustering the feature representations corresponding to the target pixel points to obtain the target positions of the obstacles in the area where the vehicle is located.
6. The method of any one of claims 1 to 5, wherein the vehicle has an ultrasonic radar mounted thereon, and wherein the acquiring the first image set comprises:
acquiring environmental data acquired by the ultrasonic radar at a sampling moment;
performing BEV conversion on the environmental data to obtain an environment image corresponding to the environmental data;
and determining N consecutive frames of environment images as the first image set, wherein the environment image acquired by the vehicle at the current moment is among the N frames of environment images.
7. The method according to claim 2 or 3, wherein after performing image fusion on the N frames of environment images in the first image set to obtain the first fusion image, the method further comprises:
acquiring the current position of the vehicle;
and when the distance between the current position and the i-th position mark point is equal to the preset distance, determining the current position of the vehicle as the (i+1)-th position mark point, and determining the first fusion image as the fusion image determined by the vehicle at the (i+1)-th position mark point.
8. A vehicle control apparatus characterized by comprising:
the first image acquisition module is used for acquiring a first image set, wherein the first image set comprises N consecutive frames of environment images acquired by a vehicle, the environment image acquired by the vehicle at the current moment is among the N frames of environment images, and N is greater than or equal to 2;
the first image fusion module is used for performing image fusion on the N frames of environment images in the first image set to obtain a first fusion image;
the second image acquisition module is used for acquiring a second fusion image, wherein the second fusion image is generated based on an environment image acquired when the vehicle is at a position mark point, and the position mark point is determined based on the running distance of the vehicle before the current moment;
the second image fusion module is used for carrying out image fusion on the first fusion image and the second fusion image to obtain a third fusion image;
and the vehicle control module is used for controlling the vehicle to run based on the third fusion image.
9. A vehicle comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the method according to any one of claims 1 to 7.
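A minimal Python sketch of the rolling fusion recited in claims 1-3 and 7 might look as follows. It keeps a short window of N BEV environment images fused into a first fusion image, refreshes a longer-horizon second fusion image whenever the vehicle has covered the preset distance and reaches a new position mark point, and fuses the two into the third fusion image used for control. The class name RollingFusion, the per-pixel maximum used as the fusion operator, and all parameter values are illustrative assumptions, not details taken from the application.

from collections import deque
import numpy as np

class RollingFusion:
    def __init__(self, n_frames=5, preset_distance=1.0):
        self.window = deque(maxlen=n_frames)    # last N BEV environment images
        self.preset_distance = preset_distance  # spacing of position mark points (metres)
        self.dist_since_mark = 0.0              # distance travelled since the last mark point
        self.second_fused = None                # fusion image kept from the last mark point

    @staticmethod
    def fuse(images):
        # Placeholder fusion operator: per-pixel maximum over aligned BEV frames.
        # The application leaves the concrete fusion operator to the implementation.
        return np.max(np.stack(images, axis=0), axis=0)

    def step(self, bev_image, distance_moved):
        """Consume one BEV frame and the distance travelled since the previous frame;
        return the third fusion image used to control the vehicle."""
        self.window.append(bev_image)
        first_fused = self.fuse(list(self.window))

        # Claim 7: once the vehicle has covered the preset distance, the current
        # position becomes the next position mark point and the first fusion image
        # is stored as the new second fusion image.
        self.dist_since_mark += distance_moved
        if self.second_fused is None or self.dist_since_mark >= self.preset_distance:
            self.second_fused = first_fused
            self.dist_since_mark = 0.0

        # Claim 1: fuse the short-window image with the mark-point image.
        return self.fuse([first_fused, self.second_fused])

In this reading, the second fusion image carries observations from slightly further back along the driven path, so fusing it with the short window gives the controller a wider effective field of view than any single frame.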
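Claims 4 and 5 combine per-pixel category information with per-pixel feature representations and cluster the obstacle pixels so that each cluster yields one obstacle and its position. A sketch of that post-processing step, assuming the segmentation network already exists and using DBSCAN as one possible clustering choice (the application does not name a specific clustering algorithm), might look like this:

import numpy as np
from sklearn.cluster import DBSCAN

def extract_obstacles(class_map, embeddings, eps=0.5, min_samples=10):
    """class_map:  (H, W) array, 1 where an obstacle is predicted, 0 otherwise.
    embeddings: (H, W, D) per-pixel feature representations from the instance head.
    Returns a list of (row, col) centroids, one per detected obstacle."""
    ys, xs = np.nonzero(class_map)             # target pixel points: category "obstacle exists"
    if len(ys) == 0:
        return []
    feats = embeddings[ys, xs]                 # look up each target pixel's feature representation
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(feats)

    centroids = []
    for lbl in set(labels):
        if lbl == -1:                          # DBSCAN noise label
            continue
        mask = labels == lbl
        centroids.append((float(ys[mask].mean()), float(xs[mask].mean())))
    return centroids                           # pixel positions; convert to metres via the BEV resolution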
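Claim 6 converts ultrasonic radar data collected at a sampling moment into a BEV (bird's eye view) environment image before fusion. One heavily simplified way to rasterise such data, assuming the echoes are available as bearing/range pairs in the vehicle frame and ignoring the sensor's beam width, is sketched below; the grid size and resolution are illustrative assumptions.

import numpy as np

def ultrasonic_to_bev(echoes, grid_size=200, metres_per_cell=0.05):
    """echoes: iterable of (bearing_rad, range_m) measured from the vehicle centre.
    Returns a (grid_size, grid_size) uint8 BEV image with the vehicle at the centre."""
    bev = np.zeros((grid_size, grid_size), dtype=np.uint8)
    centre = grid_size // 2
    for bearing, rng in echoes:
        x = rng * np.cos(bearing)   # forward offset, metres
        y = rng * np.sin(bearing)   # lateral offset, metres
        row = centre - int(round(x / metres_per_cell))
        col = centre + int(round(y / metres_per_cell))
        if 0 <= row < grid_size and 0 <= col < grid_size:
            bev[row, col] = 255     # mark the echo return as occupied
    return bev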
CN202311676673.4A 2023-12-08 2023-12-08 Vehicle control method and device, vehicle and storage medium Pending CN117542021A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311676673.4A CN117542021A (en) 2023-12-08 2023-12-08 Vehicle control method and device, vehicle and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311676673.4A CN117542021A (en) 2023-12-08 2023-12-08 Vehicle control method and device, vehicle and storage medium

Publications (1)

Publication Number Publication Date
CN117542021A true CN117542021A (en) 2024-02-09

Family

ID=89795760

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311676673.4A Pending CN117542021A (en) 2023-12-08 2023-12-08 Vehicle control method and device, vehicle and storage medium

Country Status (1)

Country Link
CN (1) CN117542021A (en)

Similar Documents

Publication Publication Date Title
US20200265710A1 (en) Travelling track prediction method and device for vehicle
CN110796007B (en) Scene recognition method and computing device
CN110532916B (en) Motion trail determination method and device
CN108133484B (en) Automatic driving processing method and device based on scene segmentation and computing equipment
JP7220169B2 (en) Information processing method, device, storage medium, and program
CN112528807B (en) Method and device for predicting running track, electronic equipment and storage medium
CN113554643B (en) Target detection method and device, electronic equipment and storage medium
CN112444258A (en) Method for judging drivable area, intelligent driving system and intelligent automobile
CN114475593A (en) Travel track prediction method, vehicle, and computer-readable storage medium
CN114694115A (en) Road obstacle detection method, device, equipment and storage medium
WO2024098992A1 (en) Vehicle reversing detection method and apparatus
CN113253278A (en) Parking space identification method and device and computer storage medium
CN117079238A (en) Road edge detection method, device, equipment and storage medium
CN116664498A (en) Training method of parking space detection model, parking space detection method, device and equipment
CN117542021A (en) Vehicle control method and device, vehicle and storage medium
CN115416651A (en) Method and device for monitoring obstacles in driving process and electronic equipment
CN115482672A (en) Vehicle reverse running detection method and device, terminal equipment and storage medium
CN116503695B (en) Training method of target detection model, target detection method and device
CN117437792B (en) Real-time road traffic state monitoring method, device and system based on edge calculation
CN117854032A (en) Data labeling and obstacle recognition method and device, terminal equipment and medium
CN117237899A (en) Method, device and equipment for determining position relationship between vehicle and lane line
WO2020073272A1 (en) Snapshot image to train an event detector
WO2020073268A1 (en) Snapshot image to train roadmodel
WO2020073270A1 (en) Snapshot image of traffic scenario
WO2020073271A1 (en) Snapshot image of traffic scenario

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination