CN114240816A - Road environment sensing method and device, storage medium, electronic equipment and vehicle - Google Patents


Info

Publication number
CN114240816A
CN114240816A (application CN202210168552.8A)
Authority
CN
China
Prior art keywords
road environment
image
recognition
target
different
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210168552.8A
Other languages
Chinese (zh)
Inventor
龚骏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Momenta Suzhou Technology Co Ltd
Original Assignee
Momenta Suzhou Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Momenta Suzhou Technology Co Ltd filed Critical Momenta Suzhou Technology Co Ltd
Priority claimed from CN202210168552.8A
Publication of CN114240816A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20212 - Image combination
    • G06T 2207/20221 - Image fusion; image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a road environment sensing method and device, a storage medium, electronic equipment and a vehicle, which can improve the efficiency and accuracy of road environment sensing. The method comprises the following steps: acquiring an original road environment image collected by a sensor; adjusting the attributes of the original road environment image according to the different attribute requirements that different road environment recognition models place on their input images, to obtain a road environment image to be recognized for each road environment recognition model; performing target object recognition on the corresponding road environment images to be recognized in parallel, based on the different road environment recognition models, to obtain initial recognition result images of the different road environment recognition models; restoring the attributes of the different initial recognition result images according to the attribute information of the original road environment image, to obtain multiple target recognition result images with the same attributes as the original road environment image; and fusing the multiple target recognition result images to obtain a target road environment image containing all recognition results.

Description

Road environment sensing method and device, storage medium, electronic equipment and vehicle
Technical Field
The application relates to the technical field of automobiles, in particular to a road environment sensing method and device, a storage medium, electronic equipment and a vehicle.
Background
With the rapid development of automobile technology, automatic driving is gradually being accepted by automobile manufacturers and users. Automatic driving can minimize the driving risk of the automobile and relieve the driver of the heavy driving task, so it is widely regarded as a major trend in future automobile development.
The existing automatic driving pipeline can be divided into perception, positioning, planning and control according to the operation flow. For perception, an autonomous vehicle generally relies on its own installed sensors, such as cameras, millimeter-wave radar, laser radar (lidar) and ultrasonic radar. In the related art, after a road environment image collected by a camera is acquired, a vehicle-mounted single-core CPU (Central Processing Unit) is called to perform target recognition on the road environment image based on one integrated neural network model capable of recognizing various kinds of information, for example lane lines, pedestrians and vehicles, so as to assist driving according to the recognition result. However, when the road network is complex and the integrated neural network model must identify many types of targets, the complexity and difficulty of the model's algorithm increase, and its recognition accuracy and efficiency are greatly reduced.
Disclosure of Invention
The application provides a road environment sensing method and device, a storage medium, electronic equipment and a vehicle, which can solve the problem of low efficiency and accuracy that arises in the related art when road environment sensing is performed with a single integrated neural network model.
The specific technical scheme is as follows:
in a first aspect, an embodiment of the present application provides a method for sensing a road environment, where the method includes:
acquiring an original road environment image acquired by a sensor;
adjusting the attributes of the original road environment image according to the different attribute requirements that different road environment recognition models place on their input images, to obtain a road environment image to be recognized for each road environment recognition model, wherein the different road environment recognition models are respectively used for recognizing different target objects in road environment images;
performing target object recognition on the corresponding road environment images to be recognized in parallel, based on the different road environment recognition models, to obtain initial recognition result images of the different road environment recognition models;
restoring the attributes of the different initial recognition result images according to the attribute information of the original road environment image, to obtain multiple target recognition result images with the same attributes as the original road environment image;
and fusing the multiple target recognition result images to obtain a target road environment image containing all recognition results.
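As an illustrative sketch only (not the claimed implementation), the steps above can be expressed in Python. The model names, input-size requirements, and detection format below are invented for illustration; each hypothetical model recognizes a single target type and runs in parallel on an image adjusted to its own requirements:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins: each "model" requires a given input width and
# returns detections for a single target type (lane lines, pedestrians, ...).
def make_model(name, required_width):
    def model(image):
        assert image["width"] == required_width  # attribute requirement met
        return [{"type": name, "x": 10, "width": image["width"]}]
    return model

models = {"lane": make_model("lane", 640), "pedestrian": make_model("pedestrian", 320)}
requirements = {"lane": 640, "pedestrian": 320}

def adjust(original, width):
    # Step 2: adapt the original image to one model's input requirement.
    return {"width": width, "pixels": original["pixels"]}

def restore(result, original, adjusted_width):
    # Step 4: restore attributes, scaling coordinates back to the original image.
    scale = original["width"] / adjusted_width
    return [{**r, "x": r["x"] * scale, "width": original["width"]} for r in result]

original = {"width": 1280, "pixels": object()}

# Step 3: run the per-target models in parallel.
with ThreadPoolExecutor() as pool:
    futures = {name: pool.submit(m, adjust(original, requirements[name]))
               for name, m in models.items()}
    raw = {name: f.result() for name, f in futures.items()}

# Steps 4 and 5: restore each result, then fuse all results into one set.
fused = [d for name, res in raw.items()
         for d in restore(res, original, requirements[name])]
```

The fused list here plays the role of the target road environment image containing all recognition results.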
Through the scheme, compared with the related art in which a single integrated neural network model identifies multiple target objects, different target objects can be recognized in parallel based on multiple road environment recognition models that each identify a single target object, and the multiple recognition results can be fused to obtain a target road environment image containing all recognition results, which simplifies the road environment recognition models and improves road environment sensing efficiency. Moreover, before object recognition is performed based on a road environment recognition model, the attribute information of the original road environment image is adjusted to meet the input requirements of that model, and after the initial recognition result images of the different road environment recognition models are obtained, attribute restoration is performed on them to obtain multiple target recognition result images with the same attributes as the original road environment image, which further improves the accuracy of road environment sensing.
In a first possible implementation manner of the first aspect, the parallel target object recognition of the corresponding road environment image to be recognized based on different road environment recognition models to obtain initial recognition result images of the different road environment recognition models includes:
calling multiple CPU cores of one central processing unit (CPU) to perform target object recognition in parallel on the corresponding road environment images to be recognized, each core based on a different road environment recognition model, to obtain initial recognition result images of the different road environment recognition models, wherein the CPU cores correspond to the road environment recognition models one-to-one;
or calling a graphics processing unit (GPU) to perform target object recognition in parallel on the corresponding road environment images to be recognized, based on the different road environment recognition models, to obtain initial recognition result images of the different road environment recognition models.
According to the scheme, the target objects in the road environment images to be recognized by the different road environment recognition models can be recognized in parallel by calling multiple CPU cores of a single CPU or a single GPU, without calling multiple CPUs, so road environment sensing efficiency is improved and hardware resources are saved.
In a second possible implementation manner of the first aspect, invoking multiple CPU cores of a central processing unit (CPU) to perform parallel target object recognition on corresponding road environment images to be recognized based on different road environment recognition models, respectively, to obtain initial recognition result images of the different road environment recognition models, includes:
acquiring a configuration file, wherein the configuration file comprises a mapping relation between CPU cores and a road environment recognition model, and calling the CPU cores to perform target object recognition on corresponding road environment images to be recognized respectively based on different road environment recognition models according to the mapping relation to obtain initial recognition result images of different road environment recognition models;
or randomly calling a plurality of CPU cores with the same number as the road environment recognition models, and using the CPU cores to perform target object recognition on the corresponding road environment images to be recognized respectively based on different road environment recognition models in parallel to obtain initial recognition result images of different road environment recognition models.
According to the scheme, when the multiple CPU cores of the CPU are called, they can be called according to a mapping relation between CPU cores and road environment recognition models configured in a configuration file, or they can be called randomly, so that parallel efficiency can be maximized under different conditions. For example, when the performance of all CPU cores is identical, the cores can be called randomly without spending time on a configuration file; when the cores' performance differs, the optimal mapping relation between CPU cores and road environment recognition models can be configured, improving parallel efficiency.
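A minimal sketch of how such a configuration file might be read and applied. The JSON format, the model names, and Linux-style core pinning via os.sched_setaffinity are all assumptions for illustration; the patent does not specify a file format or pinning mechanism:

```python
import json
import os

# Hypothetical config format: model name -> CPU core index.
config_text = '{"lane_model": 0, "pedestrian_model": 1, "vehicle_model": 2}'
mapping = json.loads(config_text)

def run_on_core(core_id, model_fn, image):
    # Pin the calling worker to its mapped core where the OS supports it
    # (Linux exposes os.sched_setaffinity); fall back silently elsewhere.
    if hasattr(os, "sched_setaffinity"):
        try:
            os.sched_setaffinity(0, {core_id})
        except OSError:
            pass  # the mapped core may not exist on this machine
    return model_fn(image)
```

In practice each model would run in its own worker process so the one-to-one core-to-model mapping holds for the whole recognition step.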
In a third possible implementation manner of the first aspect, the obtaining the configuration file includes:
acquiring a preset configuration file;
or detecting the performance of all CPU cores in the CPU, screening out target CPU cores meeting the corresponding preset performance requirements of different road environment recognition models from all the CPU cores, and writing the mapping relation between the target CPU cores and the road environment recognition models into a configuration file.
According to the scheme, the configuration file can be preset manually according to practical experience, which improves the accuracy of the mapping relation; alternatively, the performance of all CPU cores in the CPU can be detected automatically and the configuration file generated from the detection results and the preset performance requirements of the road environment recognition models, which saves labor and improves the efficiency of generating the configuration file.
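The automatic alternative, detecting core performance and writing the mapping, could be sketched as follows. The micro-benchmark and the "high"/"low" performance requirements are invented placeholders; a real probe would pin itself to each core and run a representative workload:

```python
import json
import time

def benchmark_core(core_id):
    # Illustrative micro-benchmark only; a real detector would pin to
    # core_id first (e.g. via os.sched_setaffinity) before timing.
    start = time.perf_counter()
    sum(i * i for i in range(50_000))
    return 1.0 / (time.perf_counter() - start)  # higher score = faster core

# Hypothetical preset performance requirements per recognition model.
requirements = {"vehicle_model": "high", "lane_model": "low"}

scores = {core: benchmark_core(core) for core in range(2)}
# Assign the fastest cores to the most demanding models.
fast_first = sorted(scores, key=scores.get, reverse=True)
order = sorted(requirements, key=lambda m: requirements[m] != "high")
mapping = dict(zip(order, fast_first))

config = json.dumps(mapping)  # written out as the configuration file
```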
In a fourth possible implementation manner of the first aspect, the fusing the multiple target recognition result images to obtain a target road environment image including all recognition results includes:
mapping all recognition results in the multiple target recognition result images to the original road environment image, and determining the mapped original road environment image as the target road environment image;
or selecting a target recognition result image to be mapped from the plurality of target recognition result images, mapping all recognition results in the target recognition result images except the target recognition result image to be mapped in the target recognition result images to be mapped, and determining the mapped target recognition result image to be the target road environment image.
According to the scheme, the multiple target recognition result images are not output or used directly as the final result; instead, all recognition results are mapped into the same image, so that a single image records every recognition result. This improves the efficiency with which users subsequently view the recognition results and with which the image containing them is used.
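The first fusion option, mapping all recognition results into the original image, might look like the following sketch, where each result image is represented simply by the detections it carries (a simplifying assumption for illustration):

```python
import copy

# Each per-model result image is reduced here to the detections it holds.
original = {"pixels": [[0] * 8 for _ in range(8)], "results": []}
lane_result = {"results": [{"type": "lane", "box": (0, 3, 7, 4)}]}
pedestrian_result = {"results": [{"type": "pedestrian", "box": (2, 1, 3, 2)}]}

def fuse(original_image, result_images):
    # Map every recognition result into one copy of the original image,
    # yielding a single target road environment image with all results.
    target = copy.deepcopy(original_image)
    for img in result_images:
        target["results"].extend(img["results"])
    return target

target = fuse(original, [lane_result, pedestrian_result])
```

The second option differs only in the destination: one of the result images is chosen as the mapping target instead of the original image.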
In a fifth possible implementation manner of the first aspect, after the multiple target recognition result images are subjected to fusion processing to obtain a target road environment image including all recognition results, the method further includes:
calling an OpenCV library function;
and marking all recognition results in the target road environment image by using preset symbols, and outputting the target road environment image containing the marks.
According to the scheme, all recognition results can be marked through OpenCV library functions, so that the user can visually check the recognition results, improving the efficiency of reading them.
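With OpenCV this marking is typically done with cv2.rectangle and cv2.putText; the minimal sketch below draws a hollow box, standing in for the "preset symbol", directly on a NumPy array so it runs without OpenCV installed:

```python
import numpy as np

def mark_result(image, box, value=255):
    # Draw a hollow rectangle around one detection; with OpenCV this
    # would be cv2.rectangle(image, (x1, y1), (x2, y2), color, thickness).
    x1, y1, x2, y2 = box
    image[y1, x1:x2 + 1] = value  # top edge
    image[y2, x1:x2 + 1] = value  # bottom edge
    image[y1:y2 + 1, x1] = value  # left edge
    image[y1:y2 + 1, x2] = value  # right edge
    return image

target = np.zeros((10, 10), dtype=np.uint8)
marked = mark_result(target, (2, 3, 7, 8))
```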
In a sixth possible implementation manner of the first aspect, performing attribute restoration on different initial recognition result images according to attribute information of the original road environment image, to obtain multiple target recognition result images with the same attribute as that of the original road environment image, includes:
and respectively performing attribute restoration on different initial recognition result images according to the attribute information of the original road environment image under the condition that all the initial recognition result images contain the recognition results to obtain a plurality of target recognition result images with the same attribute as the original road environment image.
According to the scheme, the attribute restoration operation is executed only when all the initial recognition result images contain recognition results; otherwise it is not executed. Thus, when every target object is a required recognition target, redundant target road environment images are reduced and the efficiency of obtaining the final required target road environment image is improved.
In a seventh possible implementation manner of the first aspect, when the attribute requirements include an image size requirement and an image color requirement, respectively adjusting the attributes of the original road environment image according to different attribute requirements of different road environment recognition models on an input image, to obtain road environment images to be recognized corresponding to different road environment recognition models, including:
and aiming at the road environment recognition model to be adjusted, respectively adjusting the image size and the image color of the original road environment image to the image size and the image color which meet the image size requirement and the image color requirement corresponding to the road environment recognition model to be adjusted, and obtaining the road environment image to be recognized corresponding to the road environment recognition model to be adjusted.
According to the scheme, the image size and image color of the original road environment image can be adjusted according to each road environment recognition model's requirements on its input image, so that a road environment image to be recognized satisfying each model's image size and image color requirements is obtained, improving the efficiency and accuracy with which each road environment recognition model recognizes target objects in its corresponding image.
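A sketch of this attribute adjustment using plain NumPy operations in place of the cv2.resize and cv2.cvtColor calls that would normally be used; the function shape and target size are assumptions, while the luma weights are the standard RGB-to-gray coefficients:

```python
import numpy as np

def adjust_attributes(image, target_hw, to_gray):
    # Image-size requirement: nearest-neighbour resample to target_hw
    # (cv2.resize would normally be used here).
    h, w = image.shape[:2]
    th, tw = target_hw
    rows = np.arange(th) * h // th
    cols = np.arange(tw) * w // tw
    resized = image[rows][:, cols]
    # Image-color requirement: RGB -> grayscale with the usual luma
    # weights (cv2.cvtColor with COLOR_RGB2GRAY uses the same ones).
    if to_gray and resized.ndim == 3:
        resized = (resized @ np.array([0.299, 0.587, 0.114])).astype(image.dtype)
    return resized

original = np.full((720, 1280, 3), 100, dtype=np.uint8)
prepared = adjust_attributes(original, (360, 640), to_gray=True)
```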
In an eighth possible implementation manner of the first aspect, performing attribute restoration on different initial recognition result images according to attribute information of the original road environment image, to obtain multiple target recognition result images with the same attribute as the original road environment image, includes:
and respectively reducing the image size and the image color of the initial identification result image to be reduced to be the same as the image size and the image color of the original road environment image aiming at the initial identification result image to be reduced, and obtaining a target identification result image corresponding to the initial identification result image to be reduced.
According to the scheme, after a road environment recognition model has recognized the original road environment image whose image size and image color were adjusted, the image size and image color of the resulting initial recognition result image can be restored to be the same as those of the original road environment image. This improves the efficiency and accuracy of the subsequent fusion processing and ensures that each recognition result in the final target road environment image keeps the same proportion to the actual road environment.
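Attribute restoration can be sketched as the inverse of the adjustment step: resampling the result image back to the original size and rescaling the detection boxes by the same factors, so every result keeps its proportion of the actual road scene. The detection-box format is an assumption for illustration:

```python
import numpy as np

def restore_attributes(result_image, boxes, original_shape):
    # Undo the earlier downscale: resample back to the original size and
    # scale each (x1, y1, x2, y2) box by the same factors.
    oh, ow = original_shape[:2]
    rh, rw = result_image.shape[:2]
    rows = np.arange(oh) * rh // oh
    cols = np.arange(ow) * rw // ow
    restored = result_image[rows][:, cols]
    sy, sx = oh / rh, ow / rw
    restored_boxes = [(x1 * sx, y1 * sy, x2 * sx, y2 * sy)
                      for (x1, y1, x2, y2) in boxes]
    return restored, restored_boxes

small = np.zeros((360, 640), dtype=np.uint8)
restored, boxes = restore_attributes(small, [(10, 20, 30, 40)], (720, 1280))
```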
In a second aspect, an embodiment of the present application provides a road environment sensing device, where the device includes:
the acquisition unit is configured to acquire an original road environment image acquired by the sensor;
the adjusting unit is configured to respectively adjust the attributes of the original road environment image according to different attribute requirements of different road environment recognition models on an input image, so as to obtain road environment images to be recognized corresponding to the different road environment recognition models, wherein the different road environment recognition models are respectively used for recognizing different target objects in a road environment;
the recognition unit is configured to perform target object recognition on corresponding road environment images to be recognized in parallel based on different road environment recognition models to obtain initial recognition result images of the different road environment recognition models;
the restoring unit is configured to respectively perform attribute restoration on different initial recognition result images according to attribute information of the original road environment image to obtain a plurality of target recognition result images with the same attributes as the original road environment image;
and the fusion unit is configured to perform fusion processing on the multiple target recognition result images to obtain a target road environment image containing all recognition results.
In a first possible implementation manner of the second aspect, the identification unit includes a first identification module or a second identification module;
the first identification module is configured to call a plurality of CPU cores of a CPU (central processing unit) to perform target object identification on corresponding road environment images to be identified respectively on the basis of different road environment identification models in parallel to obtain initial identification result images of the different road environment identification models, wherein the CPU cores correspond to the road environment identification models one to one;
and the second identification module is configured to call the GPU to respectively perform target object identification on the corresponding road environment images to be identified in parallel based on different road environment identification models so as to obtain initial identification result images of the different road environment identification models.
In a second possible implementation manner of the second aspect, the first identification module includes a first identification submodule or a second identification submodule;
the first identification submodule is configured to obtain a configuration file, wherein the configuration file comprises a mapping relation between CPU cores and road environment identification models, and the plurality of CPU cores are called to perform target object identification on corresponding road environment images to be identified respectively based on different road environment identification models according to the mapping relation so as to obtain initial identification result images of the different road environment identification models;
and the second identification submodule is configured to randomly call a plurality of CPU cores with the same number as that of the road environment identification models, and the plurality of CPU cores are used for identifying the corresponding road environment images to be identified in parallel on the basis of different road environment identification models respectively to obtain initial identification result images of the different road environment identification models.
In a third possible implementation manner of the second aspect, the first identification submodule is configured to obtain a preset configuration file; or detecting the performance of all CPU cores in the CPU, screening out target CPU cores meeting the corresponding preset performance requirements of different road environment recognition models from all the CPU cores, and writing the mapping relation between the target CPU cores and the road environment recognition models into a configuration file.
In a fourth possible implementation manner of the second aspect, the fusion unit includes a first fusion module or a second fusion module;
the first fusion module is configured to map all recognition results in the multiple target recognition result images into the original road environment image, and determine the mapped original road environment image as the target road environment image;
the second fusion module is configured to select a target recognition result image to be mapped from the plurality of target recognition result images, map all recognition results in the target recognition result images except the target recognition result image to be mapped in the plurality of target recognition result images to the target recognition result image to be mapped, and determine the mapped target recognition result image to be the target road environment image.
In a fifth possible implementation manner of the second aspect, the apparatus further includes:
the calling unit is configured to call an OpenCV library function after the multiple target recognition result images are subjected to fusion processing to obtain target road environment images containing all recognition results;
a marking unit configured to mark all recognition results in the target road environment image with preset symbols and output the target road environment image including the mark.
In a sixth possible implementation manner of the second aspect, the restoring unit is configured to perform attribute restoration on different initial recognition result images according to the attribute information of the original road environment image under the condition that all the initial recognition result images contain recognition results, and obtain a plurality of target recognition result images with the same attribute as the original road environment image.
In a seventh possible implementation manner of the second aspect, the adjusting unit is configured to, when the attribute requirements include an image size requirement and an image color requirement, adjust, for a to-be-adjusted road environment recognition model, an image size and an image color of the original road environment image to an image size and an image color that satisfy the image size requirement and the image color requirement corresponding to the to-be-adjusted road environment recognition model, respectively, and obtain the to-be-identified road environment image corresponding to the to-be-adjusted road environment recognition model.
In an eighth possible implementation manner of the second aspect, the restoring unit is configured to, for an initial recognition result image to be restored, respectively restore the image size and the image color of the initial recognition result image to be restored to be the same as the image size and the image color of the original road environment image, and obtain a target recognition result image corresponding to the initial recognition result image to be restored.
Through the scheme, compared with the related art in which a single integrated neural network model identifies multiple target objects, different target objects can be recognized in parallel based on multiple road environment recognition models that each identify a single target object, and the multiple recognition results can be fused to obtain a target road environment image containing all recognition results, which simplifies the road environment recognition models and improves road environment sensing efficiency. Moreover, before object recognition is performed based on a road environment recognition model, the attribute information of the original road environment image is adjusted to meet the input requirements of that model, and after the initial recognition result images of the different road environment recognition models are obtained, attribute restoration is performed on them to obtain multiple target recognition result images with the same attributes as the original road environment image, which further improves the accuracy of road environment sensing.
In a third aspect, the present application provides a storage medium, on which a computer program is stored, where the program, when executed by a processor, implements the method according to any one of the embodiments of the first aspect.
In a fourth aspect, an embodiment of the present application provides an electronic device, including:
one or more processors;
a storage device for storing one or more programs,
which, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of the embodiments of the first aspect.
In a fifth aspect, an embodiment of the present application provides a vehicle including the device according to any one of the embodiments of the second aspect, or the electronic device according to the fourth aspect.
In a sixth aspect, the present application provides a computer program, where the computer program includes program instructions, and the program instructions, when executed by a computer, implement the method according to any one of the embodiments of the first aspect.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is to be understood that the drawings in the following description show only some embodiments of the application; a person skilled in the art can obtain further drawings from them without inventive effort.
Fig. 1 is a schematic flowchart of a road environment sensing method according to an embodiment of the present disclosure;
fig. 2 is an exemplary diagram of a road environment sensing result provided by an embodiment of the present application;
FIG. 3 is a schematic flowchart of another road environment sensing method according to an embodiment of the present disclosure;
fig. 4 is a schematic flowchart of another road environment sensing method provided in the embodiment of the present application;
fig. 5 is a block diagram illustrating a road environment sensing device according to an embodiment of the present disclosure;
FIG. 6 is a schematic view of a vehicle according to an embodiment of the present disclosure.
Detailed Description
The technical solution in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the described embodiments are merely a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without inventive step, are within the scope of the present disclosure.
It should be noted that the terms "comprises" and "comprising," and any variations thereof, in the examples and figures herein, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Fig. 1 is a road environment sensing method provided by an embodiment of the present application, where the method may be applied to a vehicle or a server, and the method may include the following steps:
s110: and acquiring an original road environment image acquired by a sensor.
The sensor includes one or more image sensors mounted on the vehicle for capturing raw road environment images around the vehicle. When one image sensor is used, it may be an image sensor with a 360-degree acquisition range, or a front-view image sensor for acquiring information on the view in front of the vehicle. When a plurality of image sensors are used, they may be installed in different directions of the vehicle to collect road environment images in those directions, and the road environment images from the different directions are spliced to obtain a panoramic original road environment image.
When the embodiment of the application is applied to a vehicle, the vehicle-mounted device in the vehicle CAN directly acquire the original road environment image acquired by the sensor through a Controller Area Network (CAN) bus, and execute the subsequent steps. Specifically, an SDK (Software Development Kit) may be installed in the vehicle-mounted device, and the method provided in the embodiment of the present application is executed based on the SDK. Among these, an SDK is generally a collection of development tools that build application software for a particular software package, software framework, hardware platform, operating system, and the like.
When the method and the device are applied to the server, the original road environment image can be reported to the server after the sensor in the vehicle acquires the original road environment image, so that the server can execute the subsequent steps after obtaining the original road environment image.
S120: and respectively adjusting the attributes of the original road environment image according to different attribute requirements of different road environment recognition models on the input image to obtain the road environment image to be recognized corresponding to the different road environment recognition models.
The road environment recognition model may be a target detection model, and different road environment recognition models are respectively used for recognizing different target objects in the road environment image, such as vehicles, pedestrians, road ground signs (such as lane lines and zebra crossings), guideboards, traffic lights, and the like. The road environment recognition model is a neural network model obtained by training according to a road environment image including a target object label, for example, a plurality of road environment images may be collected first, then pedestrian labels are added to the plurality of road environment images, and finally the plurality of road environment images with the pedestrian labels added are input to an initial road environment recognition model for training to finally obtain a convergent road environment recognition model.
In order to improve the convergence efficiency and the identification accuracy, different attribute requirements can be configured for different road environment identification models, so that before the road environment identification model is trained, the attributes of the training samples are adjusted according to the attribute requirements, and then the road environment identification model is trained. In this case, when the road environment recognition model is required to recognize the target object, the attribute of the original road environment image may be adjusted according to the different attribute requirements of the different road environment recognition models on the input image, so as to obtain the road environment image to be recognized corresponding to the different road environment recognition models, and then perform the target recognition on the road environment image to be recognized.
When the attribute requirements include an image size requirement and an image color requirement, then, for each road environment recognition model to be adjusted for, the image size and the image color of the original road environment image are respectively adjusted to an image size and an image color that meet the image size requirement and the image color requirement corresponding to that road environment recognition model, so as to obtain the road environment image to be recognized corresponding to that road environment recognition model.
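By way of illustrative example (not limiting), the per-model attribute adjustment described above may be sketched as follows. The target sizes follow the example given later in the description; the nearest-neighbour resampling and the luminance-weighted gray conversion are assumptions chosen only to keep the sketch self-contained, and a production pipeline would use an image library's interpolated resize instead:

```python
import numpy as np

def adjust_attributes(image, target_size, to_gray):
    """Resize a BGR image (H, W, 3) to target_size=(width, height) and
    optionally convert it to a gray-scale map.

    Nearest-neighbour index resampling keeps this sketch dependency-free.
    """
    h, w = image.shape[:2]
    tw, th = target_size
    rows = np.arange(th) * h // th  # source row for each output row
    cols = np.arange(tw) * w // tw  # source column for each output column
    resized = image[rows[:, None], cols]
    if to_gray:
        # BGR -> gray with the usual luminance weights
        resized = (resized * np.array([0.114, 0.587, 0.299])).sum(axis=2)
    return resized

# Original 1920x1080 BGR image; the two target sizes correspond to the
# hypothetical models A and B used in the example below.
original = np.zeros((1080, 1920, 3), dtype=np.uint8)
a1 = adjust_attributes(original, (1792, 896), to_gray=True)
b1 = adjust_attributes(original, (1024, 384), to_gray=True)
print(a1.shape, b1.shape)  # (896, 1792) (384, 1024)
```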
S130: and identifying the target object in parallel on the basis of different road environment identification models on the corresponding road environment image to be identified to obtain initial identification result images of the different road environment identification models.
Parallel target object recognition can be implemented based on a plurality of CPU cores of a CPU, or based on a GPU (Graphics Processing Unit). The initial recognition result image may include a tag of the recognition result, i.e., a tag of the target object.
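A minimal sketch of this parallel recognition step is given below. The two detector functions are hypothetical placeholders for real road environment recognition models, and a thread pool is assumed only for illustration; the CPU-core and GPU variants are described in the later embodiments:

```python
from concurrent.futures import ThreadPoolExecutor

def detect_vehicles(image):
    # Placeholder for a road environment recognition model that detects
    # vehicles; a real model would return a labelled result image.
    return {"model": "A", "tags": ["vehicle"], "image": image}

def detect_lane_lines(image):
    # Placeholder for a model that detects road ground signs (lane lines).
    return {"model": "B", "tags": ["lane_line"], "image": image}

# One preprocessed road environment image to be recognized per model.
jobs = [(detect_vehicles, "image_a1"), (detect_lane_lines, "image_b1")]

with ThreadPoolExecutor(max_workers=len(jobs)) as pool:
    futures = [pool.submit(model, image) for model, image in jobs]
    initial_results = [f.result() for f in futures]

print([r["model"] for r in initial_results])  # ['A', 'B']
```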
S140: and respectively performing attribute restoration on different initial recognition result images according to the attribute information of the original road environment image to obtain a plurality of target recognition result images with the same attribute as the original road environment image.
In practical applications, an initial recognition result image may contain no recognition result, either because the corresponding target object does not appear in the original road environment image or because recognition fails. When the purpose of recognition is to confirm that the original road environment image contains all the target objects and that all of them can be recognized, then, in order to reduce redundant target road environment images and improve the efficiency of obtaining the finally required target road environment image, step S140 (performing attribute restoration on the different initial recognition result images according to the attribute information of the original road environment image to obtain a plurality of target recognition result images with the same attributes as the original road environment image) is executed only when every initial recognition result image contains a recognition result; when any initial recognition result image contains no recognition result, step S140 is not executed.
In contrast, when recognition is performed to identify as many of the target objects present in the original road environment image as possible, all of the initial recognition result images that contain a recognition result (which may be referred to as target initial recognition result images) are acquired, and attribute restoration is performed on the different target initial recognition result images according to the attribute information of the original road environment image, so as to obtain at least one target recognition result image with the same attributes as the original road environment image; the initial recognition result images that contain no recognition result may be discarded.
When the attribute requirements include an image size requirement and an image color requirement, this step is implemented as follows: for each initial recognition result image to be restored, the image size and the image color of that image are restored to the same image size and image color as the original road environment image, so as to obtain the target recognition result image corresponding to the initial recognition result image to be restored.
For example, assume that the original road environment image has an image size of 1920 × 1080 and an image color of a BGR 3-channel color map. According to the attribute requirements of a road environment recognition model A for recognizing pedestrians or vehicles, the image size of the original road environment image is adjusted to 1792 × 896 and the image color is adjusted to a gray-scale map, so that a road environment image a1 to be recognized corresponding to the road environment recognition model A is obtained. The pedestrians or vehicles in the road environment image a1 to be recognized are recognized based on the road environment recognition model A to obtain an initial recognition result image a2 corresponding to the road environment recognition model A. Then 128 pixels are added to the initial recognition result image a2 in the x-axis direction of the pixel coordinate system and 184 pixels are added in the y-axis direction, so that the image size is restored from 1792 × 896 to 1920 × 1080, and the image color is restored from the gray-scale map to the BGR 3-channel color map, thereby obtaining a target recognition result image a3.
According to the attribute requirements of a road environment recognition model B for recognizing road routes, the image size of the original road environment image is adjusted (for example, cropped and scaled) to 1024 × 384 and the image color is adjusted to a gray-scale map, so that a road environment image B1 to be recognized corresponding to the road environment recognition model B is obtained. The road routes in the road environment image B1 to be recognized are recognized based on the road environment recognition model B to obtain an initial recognition result image B2 corresponding to the road environment recognition model B. Then 192 pixels are added to the initial recognition result image B2 in the y-axis direction of the pixel coordinate system and the whole image is enlarged, so that the image size is restored from 1024 × 384 to 1920 × 1080, and the image color is restored from the gray-scale map to the BGR 3-channel color map, thereby obtaining a target recognition result image B3.
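The two restoration paths in this example may be sketched as follows (a hypothetical illustration: the x-axis padding equals the size difference 1920 − 1792 = 128 pixels and the y-axis padding equals 1080 − 896 = 184 pixels for model A; for model B the 384-pixel-high result is padded by 192 pixels in y and the whole image enlarged; gray results are expanded back to 3 channels; the nearest-neighbour resampling is an assumption):

```python
import numpy as np

def nn_resize(img, width, height):
    """Nearest-neighbour resize of a 2-D array to (height, width)."""
    rows = np.arange(height) * img.shape[0] // height
    cols = np.arange(width) * img.shape[1] // width
    return img[rows[:, None], cols]

def restore_a(result):
    """Model A: pad a 1792x896 gray result by 128 px in x and 184 px in y,
    then expand gray -> 3 channels, giving 1920x1080."""
    h, w = result.shape
    padded = np.zeros((h + 184, w + 128), dtype=result.dtype)
    padded[:h, :w] = result
    return np.repeat(padded[:, :, None], 3, axis=2)

def restore_b(result):
    """Model B: pad a 1024x384 gray result by 192 px in y (-> 1024x576),
    enlarge the whole image to 1920x1080, then expand to 3 channels."""
    h, w = result.shape
    padded = np.zeros((h + 192, w), dtype=result.dtype)
    padded[:h, :] = result
    scaled = nn_resize(padded, 1920, 1080)
    return np.repeat(scaled[:, :, None], 3, axis=2)

a3 = restore_a(np.zeros((896, 1792), dtype=np.uint8))
b3 = restore_b(np.zeros((384, 1024), dtype=np.uint8))
print(a3.shape, b3.shape)  # (1080, 1920, 3) (1080, 1920, 3)
```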
S150: and performing fusion processing on the multiple target recognition result images to obtain a target road environment image containing all recognition results.
In order to fuse all the recognition results into the same image, this step is specifically implemented as follows: mapping all the recognition results in the plurality of target recognition result images to the original road environment image, and determining the mapped original road environment image as the target road environment image; or selecting a target recognition result image to be mapped from the plurality of target recognition result images, mapping all the recognition results in the remaining target recognition result images to the target recognition result image to be mapped, and determining the mapped target recognition result image to be mapped as the target road environment image. The mapping may be realized according to the position of each recognition result in its target recognition result image and its position in the original road environment image (or in the target recognition result image to be mapped).
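The first fusion option (mapping every recognition result onto the original image by position) may be sketched as below; the dict-based result representation, labels, and coordinates are assumptions for illustration only:

```python
def fuse_results(original_image, target_result_images):
    """Map all recognition results from the restored result images onto a
    single target road environment image (here: the original image).

    Each result image is assumed to carry its detections as a list of
    (label, (x1, y1, x2, y2)) entries already expressed in the original
    image's pixel coordinate system, which the attribute restoration
    step makes possible.
    """
    fused = {"image": original_image, "detections": []}
    for result in target_result_images:
        fused["detections"].extend(result["detections"])
    return fused

# Hypothetical restored results from two recognition models.
a3 = {"detections": [("vehicle", (100, 200, 300, 400))]}
b3 = {"detections": [("lane_line", (0, 500, 1920, 540))]}
target = fuse_results("original_1920x1080", [a3, b3])
print(len(target["detections"]))  # 2
```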
Therefore, in the embodiment of the present application, the obtained plurality of target recognition result images are not output or used as the final result; instead, all the recognition results are mapped to the same image, so that one image records all the recognition results. This improves both the efficiency with which a user subsequently checks the recognition results and the efficiency of subsequent use of the image containing the recognition results. In addition, the target road environment image may be output to a display screen of the in-vehicle device, and may also be projected onto the front windshield by a Head Up Display (HUD) system to provide a driving assistance function.
Compared with the related art, in which one comprehensive neural network model is used to recognize a plurality of target objects, the road environment sensing method provided by the embodiment of the present application recognizes different target objects in parallel based on a plurality of road environment recognition models each dedicated to a single target object, and fuses the plurality of recognition results to obtain a target road environment image containing all the recognition results, which simplifies the road environment recognition models and improves the road environment sensing efficiency. In addition, before object recognition is performed based on a road environment recognition model, the attributes of the original road environment image are adjusted to meet the input requirements of that model; after the initial recognition result images of the different road environment recognition models are obtained, attribute restoration is performed on the different initial recognition result images to obtain a plurality of target recognition result images with the same attributes as the original road environment image, which further improves the accuracy of road environment sensing.
In an embodiment, in order to enable a user to visually check a recognition result and improve the efficiency of reading the recognition result, after a plurality of target recognition result images are subjected to fusion processing to obtain a target road environment image containing all recognition results, an OpenCV library function may be further called, all recognition results in the target road environment image are marked by using preset symbols, and the target road environment image containing the marks is output. In addition, in order to distinguish different kinds of target objects, different formats of marks may be adopted for different kinds of target objects, and the format of the marks includes solid lines, dotted lines, line colors, and the like. As shown in fig. 2, the vehicles are marked with a solid frame and the guideboards are marked with a dashed frame.
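The marking step may be sketched as follows. The description above calls an OpenCV library function for this purpose; to keep the example self-contained, the solid and dashed frames are drawn here directly with NumPy, and the box coordinates, colors, and dash length are assumptions for illustration:

```python
import numpy as np

def draw_frame(img, box, color, dashed=False, dash=12):
    """Draw a rectangular frame on a BGR image; a dashed frame keeps only
    every other `dash`-pixel segment, so different mark formats can
    distinguish target object types (e.g., solid for vehicles, dashed
    for guideboards, as in fig. 2)."""
    x1, y1, x2, y2 = box
    xs = np.arange(x1, x2)
    ys = np.arange(y1, y2)
    if dashed:
        xs = xs[(xs - x1) % (2 * dash) < dash]
        ys = ys[(ys - y1) % (2 * dash) < dash]
    img[y1, xs] = color       # top edge
    img[y2 - 1, xs] = color   # bottom edge
    img[ys, x1] = color       # left edge
    img[ys, x2 - 1] = color   # right edge

canvas = np.zeros((1080, 1920, 3), dtype=np.uint8)
draw_frame(canvas, (100, 200, 300, 400), (0, 0, 255))               # vehicle: solid red
draw_frame(canvas, (500, 100, 700, 250), (0, 255, 0), dashed=True)  # guideboard: dashed green
print(int((canvas != 0).any(axis=2).sum()))  # number of marked pixels
```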
Fig. 3 is a road environment sensing method according to another embodiment of the present application, which may include the following steps:
s210: and acquiring an original road environment image acquired by a sensor.
S220: and respectively adjusting the attributes of the original road environment image according to different attribute requirements of different road environment recognition models on the input image to obtain the road environment image to be recognized corresponding to the different road environment recognition models.
The different road environment recognition models are respectively used for recognizing different target objects in the road environment image.
S230: and calling a plurality of CPU cores of the CPU to perform target object recognition on the corresponding road environment images to be recognized respectively based on different road environment recognition models, and obtaining initial recognition result images of the different road environment recognition models.
The CPU cores correspond to the road environment recognition models one by one.
Specific implementation manners of the step include, but are not limited to, the following two:
the first method is as follows:
and acquiring a configuration file, wherein the configuration file comprises a mapping relation between the CPU cores and the road environment recognition models, and calling the CPU cores to perform target object recognition on the corresponding road environment images to be recognized respectively based on different road environment recognition models according to the mapping relation so as to acquire initial recognition result images of the different road environment recognition models.
The method for acquiring the configuration file comprises the following steps: acquiring a preset configuration file; or detecting the performance of all CPU cores in the CPU, screening out target CPU cores meeting the corresponding preset performance requirements of different road environment recognition models from all the CPU cores, and writing the mapping relation between the target CPU cores and the road environment recognition models into a configuration file.
For example, suppose the requirements of 3 road environment recognition models on CPU core performance are, from highest to lowest, road environment recognition model 1, road environment recognition model 2, and road environment recognition model 3, and the CPU has 5 CPU cores whose performance, from highest to lowest, is CPU core 2, CPU core 1, CPU core 3, CPU core 4, and CPU core 5. The resulting mapping relations are then: road environment recognition model 1 corresponds to CPU core 2, road environment recognition model 2 corresponds to CPU core 1, and road environment recognition model 3 corresponds to CPU core 3.
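This pairing amounts to ranking the cores by measured performance and assigning them, in order, to the models sorted by performance requirement. A sketch of how such a mapping could be generated for the configuration file (the model names and performance scores are hypothetical):

```python
def build_core_mapping(models_by_requirement, core_scores):
    """Assign the highest-performing remaining CPU core to the most
    demanding road environment recognition model.

    models_by_requirement: model names ordered from highest to lowest
        performance requirement.
    core_scores: {core_name: measured performance score}.
    """
    ranked_cores = sorted(core_scores, key=core_scores.get, reverse=True)
    return dict(zip(models_by_requirement, ranked_cores))

mapping = build_core_mapping(
    ["model_1", "model_2", "model_3"],
    {"core_1": 90, "core_2": 95, "core_3": 80, "core_4": 70, "core_5": 60},
)
print(mapping)  # {'model_1': 'core_2', 'model_2': 'core_1', 'model_3': 'core_3'}
```

The result matches the example above: the most demanding model gets the fastest core, and the two slowest cores are left unassigned.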
In the embodiment of the present application, the configuration file may be preset manually according to practical experience, which improves the accuracy of the mapping relations; alternatively, the performance of all the CPU cores in the CPU may be detected automatically, and the configuration file generated automatically according to the detection result and the preset performance requirements corresponding to the road environment recognition models, which saves labor and improves the efficiency of generating the configuration file. In addition, invoking the CPU cores through the configuration file allows the optimal CPU core to execute each road environment recognition model, especially in scenarios where the performance of the CPU cores is not identical, thereby improving the efficiency of parallel processing.
The second method comprises the following steps: and randomly calling a plurality of CPU cores with the same number as the road environment recognition models, and using the plurality of CPU cores to perform target object recognition on the corresponding road environment images to be recognized in parallel based on different road environment recognition models respectively to obtain initial recognition result images of the different road environment recognition models. The method is mainly applied to the scene that the performance of a plurality of CPU cores is completely the same.
S240: and respectively performing attribute restoration on different initial recognition result images according to the attribute information of the original road environment image to obtain a plurality of target recognition result images with the same attribute as the original road environment image.
S250: and performing fusion processing on the multiple target recognition result images to obtain a target road environment image containing all recognition results.
Steps S210, S220, S240, and S250 are the same as the specific implementation manners of steps S110, S120, S140, and S150 in the foregoing embodiments, and are not described herein again.
The road environment sensing method provided by this embodiment enables the different road environment recognition models to recognize the target objects in their respective road environment images to be recognized in parallel by invoking a plurality of CPU cores of a single CPU, without invoking a plurality of CPUs, which improves road environment sensing efficiency and saves hardware resources.
Fig. 4 is a road environment sensing method according to another embodiment of the present application, which may include the following steps:
s310: and acquiring an original road environment image acquired by a sensor.
S320: and respectively adjusting the attributes of the original road environment image according to different attribute requirements of different road environment recognition models on the input image to obtain the road environment image to be recognized corresponding to the different road environment recognition models.
The different road environment recognition models are respectively used for recognizing different target objects in the road environment image.
S330: and calling the GPU to perform target object recognition on the corresponding road environment images to be recognized respectively based on different road environment recognition models, and obtaining initial recognition result images of the different road environment recognition models.
The GPU, also known as a display core, visual processor, or display chip, is a microprocessor dedicated to image and graphics related operations on personal computers, workstations, game consoles, and some mobile devices (e.g., tablet computers, smart phones, etc.). The GPU reduces the graphics card's dependence on the CPU and performs part of the work originally done by the CPU. In particular, the core technologies adopted by the GPU for 3D graphics processing include hardware T & L (Transform and Lighting, i.e., geometric transformation and lighting processing), cubic environment texture mapping and vertex blending, texture compression and bump mapping, a dual-texture four-pixel 256-bit rendering engine, and the like; the hardware T & L technology can be said to be a hallmark of the GPU.
GPUs come in various architectures, such as NVidia Tesla, NVidia Fermi, NVidia Maxwell, NVidia Kepler, and NVidia Turing; under any of these architectures, the GPU contains a plurality of units, such as stream processors, that can process images in parallel. Image recognition operations can therefore be performed in parallel by invoking the GPU.
S340: and respectively performing attribute restoration on different initial recognition result images according to the attribute information of the original road environment image to obtain a plurality of target recognition result images with the same attribute as the original road environment image.
S350: and performing fusion processing on the multiple target recognition result images to obtain a target road environment image containing all recognition results.
Steps S310, S320, S340 and S350 are the same as the specific implementation manners of steps S110, S120, S140 and S150 in the foregoing embodiments, and are not described herein again.
According to the road environment sensing method provided by this embodiment, the different road environment recognition models can recognize the target objects in their respective road environment images to be recognized in parallel by invoking a single GPU, without invoking a plurality of CPUs, which improves road environment sensing efficiency and saves hardware resources.
Corresponding to the above method embodiment, the present application provides a road environment sensing apparatus, which may be applied to a vehicle or a server, as shown in fig. 5, and the apparatus includes:
an acquisition unit 41 configured to acquire an original road environment image acquired by a sensor;
the adjusting unit 42 is configured to adjust the attributes of the original road environment image according to different attribute requirements of different road environment recognition models on the input image, so as to obtain road environment images to be recognized corresponding to the different road environment recognition models, where the different road environment recognition models are respectively used for recognizing different target objects in a road environment;
the recognition unit 43 is configured to perform target object recognition on the corresponding road environment images to be recognized in parallel based on different road environment recognition models to obtain initial recognition result images of the different road environment recognition models;
the restoring unit 44 is configured to perform attribute restoration on different initial recognition result images according to the attribute information of the original road environment image, so as to obtain a plurality of target recognition result images with the same attributes as the original road environment image;
and a fusion unit 45 configured to perform fusion processing on the multiple target recognition result images to obtain a target road environment image including all recognition results.
In one embodiment, the identification unit 43 comprises a first identification module or a second identification module;
the first identification module is configured to call a plurality of CPU cores of a CPU (central processing unit) to parallelly identify a target object on the corresponding road environment image to be identified respectively based on different road environment identification models to obtain initial identification result images of the different road environment identification models, wherein the CPU cores correspond to the road environment identification models one by one;
and the second identification module is configured to call the GPU to respectively perform target object identification on the corresponding road environment images to be identified in parallel based on different road environment identification models so as to obtain initial identification result images of the different road environment identification models.
In one embodiment, the first identification module includes either a first identification submodule or a second identification submodule;
the first identification submodule is configured to acquire a configuration file, wherein the configuration file comprises a mapping relation between a CPU core and a road environment identification model, and according to the mapping relation, a plurality of CPU cores are called to perform target object identification on corresponding road environment images to be identified respectively based on different road environment identification models in parallel to acquire initial identification result images of different road environment identification models;
and the second identification submodule is configured to randomly call a plurality of CPU cores with the same number as the road environment identification models, and the plurality of CPU cores are used for identifying the corresponding road environment images to be identified in parallel on the basis of different road environment identification models respectively to obtain initial identification result images of the different road environment identification models.
In one embodiment, the first identification submodule is configured to obtain a preset configuration file; or detecting the performance of all CPU cores in the CPU, screening out target CPU cores meeting the corresponding preset performance requirements of different road environment recognition models from all the CPU cores, and writing the mapping relation between the target CPU cores and the road environment recognition models into a configuration file.
In one embodiment, the fusion unit 45 comprises a first fusion module or a second fusion module;
the first fusion module is configured to map all recognition results in the multiple target recognition result images to an original road environment image and determine the mapped original road environment image as a target road environment image;
and the second fusion module is configured to select a target recognition result image to be mapped from the plurality of target recognition result images, map all the recognition results in the target recognition result images other than the target recognition result image to be mapped to the target recognition result image to be mapped, and determine the mapped target recognition result image to be mapped as the target road environment image.
In one embodiment, the apparatus further comprises:
the calling unit is configured to call an OpenCV library function after fusion processing is carried out on a plurality of target recognition result images to obtain target road environment images containing all recognition results;
a marking unit configured to mark all recognition results in the target road environment image with preset symbols and output the target road environment image including the mark.
In an embodiment, the adjusting unit 42 is configured to, when the attribute requirement includes an image size requirement and an image color requirement, adjust the image size and the image color of the original road environment image to the image size and the image color meeting the image size requirement and the image color requirement corresponding to the road environment identification model to be adjusted, respectively, for the road environment identification model to be adjusted, and obtain the road environment image to be identified corresponding to the road environment identification model to be adjusted.
In one embodiment, the restoring unit 44 is configured to restore the image size and the image color of the initial recognition result image to be restored to be the same as those of the original road environment image, respectively, for the initial recognition result image to be restored, and obtain a target recognition result image corresponding to the initial recognition result image to be restored.
Based on the above embodiments, another embodiment of the present application provides a storage medium having a computer program stored thereon, where the computer program is executed by a processor to implement the method according to any one of the above embodiments.
Based on the above embodiments, another embodiment of the present application provides an electronic device, including:
one or more processors;
a storage device for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement a method as in any one of the embodiments described above.
Based on the above embodiments, another embodiment of the present application provides a vehicle including the apparatus according to any one of the above embodiments, or including the electronic device according to the above embodiments.
As shown in fig. 6, the vehicle includes a GPS (Global Positioning System) positioning device 51, a V2X (Vehicle-to-Everything) device 52, a T-Box (Telematics Box) 53, a radar 54, and a camera (i.e., image sensor) 55. The GPS positioning device 51 is used for obtaining vehicle position information; the V2X device 52 is used for communicating with other vehicles, roadside devices, and the like; the radar 54 and/or the camera 55 are used for sensing road environment information in front of the vehicle and can be arranged at the front and/or rear of the vehicle; and the T-Box 53 serves as a wireless gateway that provides a remote communication interface for the whole vehicle through functions such as 4G/5G remote wireless communication, GPS satellite positioning, acceleration sensing, and CAN communication, and provides services including vehicle data acquisition, driving track recording, vehicle fault monitoring, vehicle remote inquiry and control (locking and unlocking, air conditioner control, window control, engine torque limitation, engine starting and stopping), driving behavior analysis, and the like.
The above device embodiment corresponds to the method embodiment and has the same technical effect; for a specific description, reference may be made to the method embodiment, which is not repeated here. Those of ordinary skill in the art will understand that the figures are merely schematic representations of one embodiment, and that the blocks or processes in the figures are not necessarily required to practice the present application.
Those of ordinary skill in the art will understand that: modules in the devices in the embodiments may be distributed in the devices in the embodiments according to the description of the embodiments, or may be located in one or more devices different from the embodiments with corresponding changes. The modules of the above embodiments may be combined into one module, or further split into multiple sub-modules.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and that such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present application.

Claims (16)

1. A method for road environment perception, the method comprising:
acquiring an original road environment image collected by a sensor;
respectively adjusting the attributes of the original road environment image according to different attribute requirements of different road environment recognition models on an input image to obtain road environment images to be recognized corresponding to the different road environment recognition models, wherein the different road environment recognition models are respectively used for recognizing different target objects in the road environment images;
carrying out target object recognition on corresponding road environment images to be recognized in parallel based on different road environment recognition models to obtain initial recognition result images of the different road environment recognition models;
respectively performing attribute restoration on different initial recognition result images according to the attribute information of the original road environment image to obtain a plurality of target recognition result images with the same attribute as the original road environment image;
and performing fusion processing on the multiple target recognition result images to obtain a target road environment image containing all recognition results.
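The pipeline of claim 1 — per-model attribute adjustment, recognition, attribute restoration, and fusion — can be sketched in Python. Everything below is an illustrative reading of the claim, not the patent's implementation: the model interface, the nearest-neighbour resize, and the "non-zero pixels are recognition results" fusion rule are all assumptions.

```python
import numpy as np

def adjust(image, size, gray):
    # Attribute adjustment per model input requirement (hypothetical scheme):
    # nearest-neighbour resize plus optional colour-to-grayscale conversion.
    h, w = size
    ys = np.arange(h) * image.shape[0] // h
    xs = np.arange(w) * image.shape[1] // w
    out = image[ys][:, xs]
    if gray and out.ndim == 3:
        out = out.mean(axis=2).astype(image.dtype)
    return out

def restore(result, original):
    # Attribute restoration: bring a recognition result image back to the
    # original image's size and colour layout.
    h, w = original.shape[:2]
    ys = np.arange(h) * result.shape[0] // h
    xs = np.arange(w) * result.shape[1] // w
    out = result[ys][:, xs]
    if out.ndim == 2 and original.ndim == 3:
        out = np.repeat(out[:, :, None], original.shape[2], axis=2)
    return out

def perceive(original, models):
    # models: list of ((size, gray), recognize_fn) pairs — an assumed API.
    restored = []
    for (size, gray), recognize in models:
        to_recognize = adjust(original, size, gray)
        initial = recognize(to_recognize)            # per-model recognition
        restored.append(restore(initial, original))  # attribute restoration
    # Fusion: map all recognition results (here, non-zero pixels) onto the
    # original road environment image.
    fused = original.copy()
    for r in restored:
        mask = r > 0
        fused[mask] = r[mask]
    return fused
```

With one stub model requiring a 4x4 grayscale input, `perceive` returns a fused image with the original's size and channel count, which is the property claims 1 and 9 require.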
2. The method of claim 1, wherein the step of performing target object recognition on the corresponding road environment images to be recognized in parallel based on different road environment recognition models to obtain initial recognition result images of the different road environment recognition models comprises:
calling a plurality of CPU cores of a central processing unit (CPU) to perform, in parallel, target object recognition on the corresponding road environment images to be recognized based on different road environment recognition models, so as to obtain initial recognition result images of the different road environment recognition models, wherein the CPU cores correspond to the road environment recognition models one-to-one;
or calling a graphics processing unit (GPU) to perform target object recognition on the corresponding road environment images to be recognized in parallel based on different road environment recognition models, so as to obtain initial recognition result images of the different road environment recognition models.
3. The method according to claim 2, wherein the step of calling a plurality of CPU cores of a central processing unit (CPU) to perform parallel target object recognition on the corresponding road environment images to be recognized based on different road environment recognition models to obtain initial recognition result images of the different road environment recognition models comprises:
acquiring a configuration file, wherein the configuration file comprises a mapping relation between CPU cores and road environment recognition models, and calling the CPU cores, according to the mapping relation, to perform target object recognition on the corresponding road environment images to be recognized based on different road environment recognition models, so as to obtain initial recognition result images of the different road environment recognition models;
or randomly calling a plurality of CPU cores equal in number to the road environment recognition models, and using the CPU cores to perform, in parallel, target object recognition on the corresponding road environment images to be recognized based on different road environment recognition models, so as to obtain initial recognition result images of the different road environment recognition models.
4. The method of claim 3, wherein obtaining the configuration file comprises:
acquiring a preset configuration file;
or detecting the performance of all CPU cores in the CPU, screening out target CPU cores meeting the corresponding preset performance requirements of different road environment recognition models from all the CPU cores, and writing the mapping relation between the target CPU cores and the road environment recognition models into a configuration file.
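Claims 3 and 4 describe a configuration file mapping CPU cores one-to-one to recognition models. One way such a mapping could be realized is sketched below; the JSON layout, the model names, and the use of `os.sched_setaffinity` (a Linux-only call) for core pinning are all illustrative assumptions, and the recognition itself is stubbed out.

```python
import json
import os
from concurrent.futures import ThreadPoolExecutor

# Hypothetical configuration file content: a one-to-one mapping from CPU core
# index to recognition model name (the names are illustrative, not from the patent).
CONFIG = json.loads('{"0": "lane_model", "1": "sign_model", "2": "obstacle_model"}')

def _pin_to_core(core):
    # Pin the calling thread to the configured core, if the platform supports
    # it (Linux) and the core actually exists on this machine.
    if hasattr(os, "sched_setaffinity") and core in os.sched_getaffinity(0):
        os.sched_setaffinity(0, {core})

def run_model(core, model_name, image):
    _pin_to_core(core)
    # Stub: a real system would run the model bound to this core and return
    # its initial recognition result image.
    return f"{model_name}:{image}"

def recognize_parallel(image):
    # One worker per configured core/model pair, executed in parallel.
    with ThreadPoolExecutor(max_workers=len(CONFIG)) as pool:
        futures = [pool.submit(run_model, int(core), name, image)
                   for core, name in CONFIG.items()]
        return [f.result() for f in futures]
```

The claim's alternative branch — detecting core performance and writing the mapping into the configuration file — would simply generate `CONFIG` at startup instead of reading a preset file.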
5. The method according to claim 1, wherein the fusing the plurality of target recognition result images to obtain the target road environment image including all recognition results comprises:
mapping all recognition results in the multiple target recognition result images to the original road environment image, and determining the mapped original road environment image as the target road environment image;
or selecting a target recognition result image to be mapped from the plurality of target recognition result images, mapping all recognition results in the target recognition result images other than the target recognition result image to be mapped into the target recognition result image to be mapped, and determining the mapped target recognition result image to be mapped as the target road environment image.
6. The method according to claim 1, wherein after the fusion processing is performed on the plurality of target recognition result images to obtain the target road environment image containing all recognition results, the method further comprises:
calling an OpenCV library function;
and marking all recognition results in the target road environment image by using preset symbols, and outputting the target road environment image containing the marks.
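Claim 6's marking step would naturally use OpenCV drawing functions such as `cv2.rectangle` and `cv2.putText`. The dependency-free numpy sketch below draws the same one-pixel rectangle borders; the box coordinates, colour, and function name are illustrative, not from the patent.

```python
import numpy as np

def mark_results(image, boxes, value=(0, 255, 0)):
    # Mark each recognition result with a preset symbol — here a one-pixel
    # rectangle border. With OpenCV this would be
    # cv2.rectangle(image, (x1, y1), (x2, y2), value, 1); plain array
    # slicing is used instead so the sketch has no external dependencies.
    out = image.copy()
    for x1, y1, x2, y2 in boxes:
        out[y1, x1:x2 + 1] = value   # top edge
        out[y2, x1:x2 + 1] = value   # bottom edge
        out[y1:y2 + 1, x1] = value   # left edge
        out[y1:y2 + 1, x2] = value   # right edge
    return out
```

Marking a copy rather than the input keeps the fused target road environment image available unmodified for downstream consumers.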
7. The method according to claim 1, wherein performing attribute restoration on different initial recognition result images according to attribute information of the original road environment image to obtain a plurality of target recognition result images with the same attribute as the original road environment image comprises:
and respectively performing attribute restoration on different initial recognition result images according to the attribute information of the original road environment image under the condition that all the initial recognition result images contain the recognition results to obtain a plurality of target recognition result images with the same attribute as the original road environment image.
8. The method according to any one of claims 1 to 7, wherein when the attribute requirements include an image size requirement and an image color requirement, respectively adjusting the attributes of the original road environment image according to different attribute requirements of different road environment recognition models on an input image to obtain road environment images to be recognized corresponding to different road environment recognition models, comprises:
and aiming at the road environment recognition model to be adjusted, respectively adjusting the image size and the image color of the original road environment image to the image size and the image color which meet the image size requirement and the image color requirement corresponding to the road environment recognition model to be adjusted, and obtaining the road environment image to be recognized corresponding to the road environment recognition model to be adjusted.
9. The method according to claim 8, wherein performing attribute restoration on different initial recognition result images according to attribute information of the original road environment image to obtain a plurality of target recognition result images with the same attribute as the original road environment image comprises:
and respectively reducing the image size and the image color of the initial identification result image to be reduced to be the same as the image size and the image color of the original road environment image aiming at the initial identification result image to be reduced, and obtaining a target identification result image corresponding to the initial identification result image to be reduced.
10. A road environment sensing apparatus, comprising:
the acquisition unit is configured to acquire an original road environment image acquired by the sensor;
the adjusting unit is configured to respectively adjust the attributes of the original road environment image according to different attribute requirements of different road environment recognition models on an input image, so as to obtain road environment images to be recognized corresponding to the different road environment recognition models, wherein the different road environment recognition models are respectively used for recognizing different target objects in a road environment;
the recognition unit is configured to perform target object recognition on corresponding road environment images to be recognized in parallel based on different road environment recognition models to obtain initial recognition result images of the different road environment recognition models;
the restoring unit is configured to respectively perform attribute restoration on different initial recognition result images according to attribute information of the original road environment image to obtain a plurality of target recognition result images with the same attributes as the original road environment image;
and the fusion unit is configured to perform fusion processing on the multiple target recognition result images to obtain a target road environment image containing all recognition results.
11. The apparatus of claim 10, wherein the identification unit comprises a first identification module or a second identification module;
the first identification module is configured to call a plurality of CPU cores of a central processing unit (CPU) to perform, in parallel, target object recognition on the corresponding road environment images to be recognized based on different road environment recognition models, so as to obtain initial recognition result images of the different road environment recognition models, wherein the CPU cores correspond to the road environment recognition models one-to-one;
and the second identification module is configured to call the GPU to respectively perform target object identification on the corresponding road environment images to be identified in parallel based on different road environment identification models so as to obtain initial identification result images of the different road environment identification models.
12. The apparatus of claim 10, wherein the fusion unit comprises a first fusion module or a second fusion module;
the first fusion module is configured to map all recognition results in the multiple target recognition result images into the original road environment image, and determine the mapped original road environment image as the target road environment image;
the second fusion module is configured to select a target recognition result image to be mapped from the plurality of target recognition result images, map all recognition results in the target recognition result images other than the target recognition result image to be mapped into the target recognition result image to be mapped, and determine the mapped target recognition result image to be mapped as the target road environment image.
13. The apparatus according to any one of claims 10-12, further comprising:
the calling unit is configured to call an OpenCV library function after the multiple target recognition result images are subjected to fusion processing to obtain target road environment images containing all recognition results;
a marking unit configured to mark all recognition results in the target road environment image with preset symbols and output the target road environment image including the mark.
14. A storage medium on which a computer program is stored, wherein the program, when executed by a processor, carries out the method of any one of claims 1-9.
15. An electronic device, characterized in that the electronic device comprises:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-9.
16. A vehicle comprising an apparatus according to any of claims 10-13 or comprising an electronic device according to claim 15.
CN202210168552.8A 2022-02-24 2022-02-24 Road environment sensing method and device, storage medium, electronic equipment and vehicle Pending CN114240816A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210168552.8A CN114240816A (en) 2022-02-24 2022-02-24 Road environment sensing method and device, storage medium, electronic equipment and vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210168552.8A CN114240816A (en) 2022-02-24 2022-02-24 Road environment sensing method and device, storage medium, electronic equipment and vehicle

Publications (1)

Publication Number Publication Date
CN114240816A true CN114240816A (en) 2022-03-25

Family

ID=80748106

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210168552.8A Pending CN114240816A (en) 2022-02-24 2022-02-24 Road environment sensing method and device, storage medium, electronic equipment and vehicle

Country Status (1)

Country Link
CN (1) CN114240816A (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106845547A (en) * 2017-01-23 2017-06-13 重庆邮电大学 A kind of intelligent automobile positioning and road markings identifying system and method based on camera
CN112232312A (en) * 2020-12-10 2021-01-15 智道网联科技(北京)有限公司 Automatic driving method and device based on deep learning and electronic equipment


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114821531A (en) * 2022-04-25 2022-07-29 广州优创电子有限公司 Lane line recognition image display system based on electronic outside rear-view mirror ADAS
CN114821531B (en) * 2022-04-25 2023-03-28 广州优创电子有限公司 Lane line recognition image display system based on electronic exterior rearview mirror ADAS
CN115056784A (en) * 2022-07-04 2022-09-16 小米汽车科技有限公司 Vehicle control method, device, vehicle, storage medium and chip
CN115056784B (en) * 2022-07-04 2023-12-05 小米汽车科技有限公司 Vehicle control method, device, vehicle, storage medium and chip

Similar Documents

Publication Publication Date Title
US11967109B2 (en) Vehicle localization using cameras
CN106980813B (en) Gaze generation for machine learning
US9082038B2 (en) Dram c adjustment of automatic license plate recognition processing based on vehicle class information
CN109961522B (en) Image projection method, device, equipment and storage medium
CN114240816A (en) Road environment sensing method and device, storage medium, electronic equipment and vehicle
CN111931683B (en) Image recognition method, device and computer readable storage medium
CN114549369B (en) Data restoration method and device, computer and readable storage medium
CN112132216B (en) Vehicle type recognition method and device, electronic equipment and storage medium
CN112037142A (en) Image denoising method and device, computer and readable storage medium
CN113920101A (en) Target detection method, device, equipment and storage medium
CN114677848B (en) Perception early warning system, method, device and computer program product
CN114820679A (en) Image annotation method and device, electronic equipment and storage medium
CN111191607A (en) Method, apparatus, and storage medium for determining steering information of vehicle
CN115588188A (en) Locomotive, vehicle-mounted terminal and driver behavior identification method
CN113221756A (en) Traffic sign detection method and related equipment
CN112241963A (en) Lane line identification method and system based on vehicle-mounted video and electronic equipment
CN113902047B (en) Image element matching method, device, equipment and storage medium
CN115618602A (en) Lane-level scene simulation method and system
CN114359147A (en) Crack detection method, crack detection device, server and storage medium
CN111753663B (en) Target detection method and device
CN114913329A (en) Image processing method, semantic segmentation network training method and device
CN113869440A (en) Image processing method, apparatus, device, medium, and program product
CN114120260A (en) Method and system for identifying travelable area, computer device, and storage medium
CN117173693B (en) 3D target detection method, electronic device, medium and driving device
CN117274957B (en) Road traffic sign detection method and system based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220325