CN115203457A - Image retrieval method, image retrieval device, vehicle, storage medium and chip - Google Patents

Image retrieval method, image retrieval device, vehicle, storage medium and chip

Info

Publication number
CN115203457A
Authority
CN
China
Prior art keywords
image
historical
target
environment
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210836988.XA
Other languages
Chinese (zh)
Other versions
CN115203457B (en)
Inventor
路卫杰
解智
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiaomi Automobile Technology Co Ltd
Original Assignee
Xiaomi Automobile Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaomi Automobile Technology Co Ltd filed Critical Xiaomi Automobile Technology Co Ltd
Priority to CN202210836988.XA priority Critical patent/CN115203457B/en
Publication of CN115203457A publication Critical patent/CN115203457A/en
Application granted granted Critical
Publication of CN115203457B publication Critical patent/CN115203457B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/5866Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53Querying
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/546Message passing systems or structures, e.g. queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Data Mining & Analysis (AREA)
  • Library & Information Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present disclosure relates to an image retrieval method, an apparatus, a vehicle, a storage medium, and a chip. The method comprises: determining a model keyword corresponding to a target automatic driving model to be trained; retrieving, according to image tags corresponding to a plurality of historical environment images, a plurality of target environment images matching the model keyword from the plurality of historical environment images, wherein the image tags represent attribute information corresponding to the historical environment images; and using the plurality of target environment images as sample images for training the target automatic driving model. In other words, the required sample images can be quickly retrieved from the plurality of historical environment images according to the image tags and used to train the target automatic driving model in a targeted manner, so that the trained target automatic driving model is more accurate and the safety of automatic driving is improved.

Description

Image retrieval method, image retrieval device, vehicle, storage medium and chip
Technical Field
The present disclosure relates to the field of vehicle technologies, and in particular, to an image retrieval method, an image retrieval device, a vehicle, a storage medium, and a chip.
Background
At present, the realization of the automatic driving technology mainly depends on an automatic driving model based on machine learning, and the automatic driving model determines a driving strategy according to driving environment data acquired during driving so as to control a vehicle to automatically execute corresponding actions. In order to improve the safety of automatic driving, higher requirements are also put on automatic driving models.
In the related art, massive numbers of environment images are collected to train the automatic driving model and thereby improve its accuracy. However, such a massive collection of environment images may contain a large amount of redundant data, which makes the accuracy of model training relatively low; how to retrieve, from massive environment images, the environment images capable of training an automatic driving model in a targeted manner has therefore become an urgent problem to be solved.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides an image retrieval method, apparatus, vehicle, storage medium, and chip.
According to a first aspect of the embodiments of the present disclosure, there is provided an image retrieval method, including:
determining a model keyword corresponding to a target automatic driving model to be trained;
according to image tags corresponding to a plurality of historical environment images, retrieving a plurality of target environment images matched with the model keywords from the plurality of historical environment images, wherein the image tags are used for representing attribute information corresponding to the historical environment images;
and taking a plurality of target environment images as sample images for training the target automatic driving model.
Optionally, the retrieving, according to image tags corresponding to a plurality of historical environment images, a plurality of target environment images matching the model keyword from the plurality of historical environment images includes:
retrieving a plurality of target image tags matched with the model keywords from image tags corresponding to a plurality of historical environment images;
and taking the historical environment images corresponding to the plurality of target image tags as the target environment images.
Optionally, the method further comprises:
and responding to a preview request message triggered by a user, acquiring a preview image corresponding to the pre-stored sample image, and displaying the preview image, wherein the storage space occupied by the preview image is smaller than that of the sample image.
Optionally, before the retrieving, according to the image tags corresponding to the plurality of historical environment images, the plurality of target environment images matching the model keyword from the plurality of historical environment images, the method further includes:
acquiring a plurality of pre-generated historical image data, wherein the historical image data comprises the historical environment image and an image label corresponding to the historical environment image;
the retrieving, from the plurality of historical environment images, a plurality of target environment images matching the model keyword according to image tags corresponding to the plurality of historical environment images includes:
retrieving a plurality of target historical image data matched with the model keyword from a plurality of historical image data according to an image tag in the plurality of historical image data;
and taking a historical environment image in the plurality of target historical image data as the target environment image.
Optionally, the historical image data is generated in advance by:
acquiring vehicle data in a historical time period, wherein the vehicle data comprises an environment image, sensor data, controller data and map data;
dividing the vehicle data into a plurality of subdata according to a preset time interval;
and regarding each subdata, taking an environment image in the subdata as the historical environment image, and determining an image tag corresponding to the historical environment image to obtain the historical image data.
Optionally, the image tag includes a plurality of image tags, and the determining the image tag corresponding to the historical environmental image includes:
and inputting the subdata into a plurality of label obtaining models trained in advance to obtain the image label output by each label obtaining model, wherein different label obtaining models are used for obtaining different image labels.
Optionally, the method further comprises:
after each label obtaining model outputs the image label, determining a target message queue corresponding to the subdata from a plurality of preset message queues according to the vehicle identification corresponding to the subdata;
and storing the historical image data to a search database through the target message queue.
Optionally, before the storing the historical image data to a search database through the target message queue, the method further includes:
determining whether an image tag corresponding to the historical image data is a preset tag;
the storing the historical image data to a search database via the target message queue comprises:
and under the condition that the image tag corresponding to the historical image data is the preset tag, storing the historical image data to the search database through the target message queue.
Optionally, the method further comprises:
and determining a preview image corresponding to the environment image in the subdata for each subdata, and storing the preview image.
According to a second aspect of the embodiments of the present disclosure, there is provided an image retrieval apparatus including:
the automatic driving training system comprises a first determining module, a second determining module and a control module, wherein the first determining module is configured to determine a model keyword corresponding to a target automatic driving model to be trained;
a retrieval module configured to retrieve a plurality of target environment images matched with the model keyword from a plurality of historical environment images according to image tags corresponding to the plurality of historical environment images, wherein the image tags are used for representing attribute information corresponding to the historical environment images;
a first obtaining module configured to use a plurality of the target environment images as sample images for training the target automatic driving model.
Optionally, the retrieving module is further configured to:
retrieving a plurality of target image tags matched with the model keywords from image tags corresponding to a plurality of historical environment images;
and taking the historical environment images corresponding to the plurality of target image tags as the target environment images.
Optionally, the apparatus further comprises:
and the display module is configured to respond to a preview request message triggered by a user, acquire a preview image corresponding to the pre-stored sample image, and display the preview image, wherein the storage space occupied by the preview image is smaller than that of the sample image.
Optionally, the apparatus further comprises:
the second acquisition module is used for acquiring a plurality of pre-generated historical image data, and the historical image data comprises the historical environment image and an image label corresponding to the historical environment image;
the retrieval module further configured to:
retrieving a plurality of target historical image data matched with the model keyword from a plurality of historical image data according to an image tag in the plurality of historical image data;
and taking a historical environment image in a plurality of target historical image data as the target environment image.
Optionally, the apparatus further comprises:
a data generation module configured to acquire vehicle data over a historical period of time, the vehicle data including environmental images, sensor data, controller data, map data; dividing the vehicle data into a plurality of subdata according to a preset time interval; and regarding each subdata, taking an environment image in the subdata as the historical environment image, and determining an image label corresponding to the historical environment image to obtain the historical image data.
Optionally, the image tag includes a plurality of image tags, and the determining the image tag corresponding to the historical environmental image includes:
and inputting the subdata into a plurality of label obtaining models trained in advance to obtain the image label output by each label obtaining model, wherein different label obtaining models are used for obtaining different image labels.
Optionally, the apparatus further comprises:
the second determining module is configured to determine a target message queue corresponding to the subdata from a plurality of preset message queues according to the vehicle identifier corresponding to the subdata after each label obtaining model outputs the image label;
a storage module configured to store the historical image data to a search database through the target message queue.
Optionally, the apparatus further comprises:
a third determining module configured to determine whether an image tag corresponding to the historical image data is a preset tag;
the storage module further configured to:
and under the condition that the image tag corresponding to the historical image data is the preset tag, storing the historical image data to the search database through the target message queue.
Optionally, the apparatus further comprises:
a fourth determining module configured to determine, for each of the child data, a preview image corresponding to an environment image in the child data, and store the preview image.
According to a third aspect of the embodiments of the present disclosure, there is provided a vehicle including:
a first processor;
a memory for storing processor-executable instructions;
wherein the first processor is configured to:
the steps of the method of the first aspect of the present disclosure are implemented.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the method of the first aspect of the present disclosure.
According to a fifth aspect of embodiments of the present disclosure, there is provided a chip comprising a second processor and an interface; the second processor is configured to read instructions to perform the method of the first aspect of the disclosure.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects: determining a model keyword corresponding to a target automatic driving model to be trained; according to image tags corresponding to a plurality of historical environment images, retrieving a plurality of target environment images matched with the model keywords from the plurality of historical environment images, wherein the image tags are used for representing attribute information corresponding to the historical environment images; and taking a plurality of target environment images as sample images for training the target automatic driving model. That is to say, the present disclosure can quickly retrieve a required sample image from a plurality of historical environment images according to an image tag, and the sample image can be used for training the target automatic driving model in a targeted manner, so that the accuracy of the trained target automatic driving model is higher, and the safety of automatic driving is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow diagram illustrating an image retrieval method according to an exemplary embodiment;
FIG. 2 is a flow diagram illustrating another image retrieval method according to an exemplary embodiment;
FIG. 3 is a flowchart illustrating an image data generation method according to an exemplary embodiment;
FIG. 4 is a flow diagram illustrating a data storage process according to an exemplary embodiment;
FIG. 5 is a block diagram illustrating an image retrieval apparatus according to an exemplary embodiment;
FIG. 6 is a block diagram illustrating another image retrieval device according to an exemplary embodiment;
FIG. 7 is a block diagram illustrating another image retrieval device according to an exemplary embodiment;
FIG. 8 is a functional block diagram schematic of a vehicle shown in an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
It should be noted that all actions of acquiring signals, information or data in the present application are performed under the premise of complying with the corresponding data protection regulation policy of the country of the location and obtaining the authorization given by the owner of the corresponding device.
First, an application scenario of the present disclosure will be explained. At present, the sample images used to train an automatic driving model are environment images collected by a vehicle while driving. Normally, the vehicle collects environment images of different driving scenes during driving; for example, an environment image may be captured while the vehicle is turning, while the vehicle is stopped at a red light, or when a person appears in front of the vehicle. An automatic driving system comprises a plurality of automatic driving models, and different automatic driving models are used to obtain different decision information; for example, the automatic driving models may include a turn processing model and an obstacle avoidance processing model.
In the related art, in order to improve the accuracy of an automatic driving model, target environment images strongly correlated with that model are determined from a large number of environment images and used as sample images to train the model. For example, when training the turn processing model, only environment images related to turning scenes may be selected. However, because the number of environment images collected while the vehicle is driving is huge, the collection may contain a large amount of redundant data, which lowers the accuracy of model training.
In order to solve the technical problem, the present disclosure provides an image retrieval method, an apparatus, a vehicle, a storage medium, and a chip, in which a required sample image is quickly retrieved from a plurality of historical environment images according to an image tag, and the sample image can be used to train the target automatic driving model in a targeted manner, so that the accuracy of the trained target automatic driving model is higher, and the safety of automatic driving is improved.
The disclosure is described below with reference to specific examples.
Fig. 1 is a flowchart illustrating an image retrieval method according to an exemplary embodiment, which may include, as shown in fig. 1:
s101, determining model keywords corresponding to a target automatic driving model to be trained.
In this step, a model keyword corresponding to the target automatic driving model may be determined according to the function of the target automatic driving model. For example, if the function of the target automatic driving model is obstacle detection, its model keyword may be determined to be "obstacle". The model keyword may also be a combination of a plurality of search terms; for example, it may be "time point + vehicle model + curve".
And S102, searching a plurality of target environment images matched with the model key words from a plurality of historical environment images according to the image labels corresponding to the plurality of historical environment images.
The image tag may be used to represent attribute information corresponding to the historical environment image, and for example, the image tag may include an image type tag, a time point tag, and a time period tag. The historical environment image may be a plurality of frames of images, for example, the historical environment image may be a video image of a period of time, and the historical environment image may also be a frame of image of a time point, which is not limited by the present disclosure.
In this step, after the model keyword corresponding to the target automatic driving model is determined, target image tags matching the model keyword may be retrieved from the image tags corresponding to the plurality of historical environment images by a conventional character matching algorithm, and the historical environment images corresponding to the target image tags are used as the target environment images. For example, if the model keyword corresponding to the target automatic driving model is "obstacle", target image tags matching "obstacle" may be determined from the plurality of image tags; for instance, if an image tag is "pedestrian" or "bicycle", that image tag may be determined to be a target image tag.
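The tag-matching step described above can be sketched as follows. This is a minimal sketch: the synonym table, the record format, and the helper names are illustrative assumptions, not part of the disclosed method.

```python
# Minimal sketch of step S102: retrieve historical environment images whose
# image tags match a model keyword using plain substring (character) matching.
# TAG_SYNONYMS is an illustrative assumption mapping a keyword such as
# "obstacle" to tags that should count as matches.
TAG_SYNONYMS = {"obstacle": {"pedestrian", "bicycle", "vehicle"}}

def matches(keyword: str, tag: str) -> bool:
    # A tag matches if it contains the keyword or is a known synonym of it.
    return keyword in tag or tag in TAG_SYNONYMS.get(keyword, set())

def retrieve_target_images(keyword, tagged_images):
    """tagged_images: list of (image_id, [tags]) pairs."""
    return [img for img, tags in tagged_images
            if any(matches(keyword, t) for t in tags)]

history = [("img_001", ["pedestrian", "daytime"]),
           ("img_002", ["curve", "night"]),
           ("img_003", ["bicycle", "rain"])]
targets = retrieve_target_images("obstacle", history)
```

With the sample `history` above, the keyword "obstacle" selects `img_001` and `img_003` via the synonym table, while "curve" selects `img_002` by direct substring match.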
And S103, taking a plurality of target environment images as sample images for training the target automatic driving model.
The sample images may include a data set to be labeled, a training data set, and a test data set: the data set to be labeled is labeled manually, the target automatic driving model is trained with the labeled data set and the training data set, and the test data set is used to test the accuracy of the target automatic driving model.
In this step, after obtaining the plurality of target environment images, the plurality of target environment images may be used as sample images to train the target neural network model, so as to obtain the target automatic driving model.
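The three-way split of the retrieved sample images described above can be sketched as follows. The 60/20/20 ratio, the fixed seed, and the function name are assumptions for illustration; the patent does not specify them.

```python
# Illustrative split of retrieved sample images into the three data sets
# mentioned above: to-be-labeled, training, and test. Ratios are assumed.
import random

def split_samples(images, ratios=(0.6, 0.2, 0.2), seed=0):
    imgs = list(images)
    random.Random(seed).shuffle(imgs)   # deterministic shuffle for the sketch
    a = int(len(imgs) * ratios[0])
    b = a + int(len(imgs) * ratios[1])
    return imgs[:a], imgs[a:b], imgs[b:]

to_label, train_set, test_set = split_samples([f"img_{i:03d}" for i in range(10)])
```

Every retrieved image lands in exactly one of the three sets, so no sample is both trained and tested on.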
In a possible implementation manner, after the sample image of the target automatic driving model is obtained, a preview image corresponding to the pre-stored sample image may be obtained in response to a preview request message triggered by a user, and the preview image is displayed. Wherein the preview image occupies a smaller storage space than the sample image, and may be, for example, an image of a lower resolution generated from the sample image. Therefore, when the user needs to preview the sample image, the preview image corresponding to the sample image can be sent to the browser, and the preview image is displayed through the browser, so that the storage resource is saved.
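The lower-resolution preview image mentioned above can be sketched as a simple nearest-neighbor downscale. Representing the image as a nested list of pixel values is an illustrative simplification; a real system would decode and resize actual image files.

```python
# Hedged sketch of preview-image generation: downscale a full-resolution
# sample image (here a 2-D grid of pixel values) by keeping every
# `factor`-th row and column, so the preview occupies less storage.
def make_preview(image, factor=2):
    """Nearest-neighbor downscale of a nested-list pixel grid."""
    return [row[::factor] for row in image[::factor]]

full = [[r * 10 + c for c in range(4)] for r in range(4)]  # 4x4 "image"
preview = make_preview(full)                               # 2x2 preview
```

The preview holds a quarter of the original pixels here, which is the storage saving the browser-preview flow relies on.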
By adopting the method, the required sample image is quickly retrieved from the plurality of historical environment images according to the image label, and the sample image can be used for training the target automatic driving model in a targeted manner, so that the accuracy of the trained target automatic driving model is higher, and the safety of automatic driving is improved.
Fig. 2 is a flowchart illustrating another image retrieval method according to an exemplary embodiment, which may further include, as shown in fig. 2:
and S104, acquiring a plurality of pieces of pre-generated historical image data.
The historical image data may include the historical environment image and an image tag corresponding to the historical environment image, and different historical image data may be image data of different time periods, or image data of different tags in the same time period.
Accordingly, step S102 may be:
retrieving a plurality of target historical image data matched with the model keyword from a plurality of historical image data according to image tags in the plurality of historical image data; and taking a history environment image in a plurality of target history image data as the target environment image.
For example, after determining a model keyword corresponding to the automatic driving model, a plurality of historical image data may be acquired, and then, a plurality of target image tags matching the model keyword are determined from among image tags of the plurality of historical image data, the historical image data corresponding to the plurality of target image tags is taken as the target historical image data, and a historical environment image in the target historical image data is taken as the target environment image.
Where the historical image data may be generated in advance from vehicle data collected while the vehicle is traveling within a historical period of time, fig. 3 is a flowchart illustrating an image data generation method according to an exemplary embodiment, as shown in fig. 3, the method may include:
and S31, acquiring vehicle data in the historical time period.
The vehicle data may include an environment image, sensor data, controller data, and map data, the environment image may be acquired by a camera mounted on the vehicle, the sensor data may be acquired by a sensor mounted on the vehicle, the controller data may be acquired by a controller of the vehicle, and the controller data may include vehicle driving data, for example, the controller data may include turning data, speed information, and the like.
In this step, during the running of the vehicle, the camera, the sensor, the controller, and the like of the vehicle may be controlled to collect the vehicle data, and store the vehicle data. The vehicle data may include data collected by the vehicle at different operating conditions and different time periods.
And S32, dividing the vehicle data into a plurality of subdata according to a preset time interval.
Wherein the preset time interval may be set empirically, and for example, the preset time interval may be 20s.
In this step, after the vehicle data is obtained, the preset time interval may be obtained, and the historical time period is divided into a plurality of sub-time periods according to the preset time interval, where the data of each sub-time period is the sub-data.
And S33, regarding each subdata, taking the environment image in the subdata as the historical environment image, and determining an image tag corresponding to the historical environment image to obtain the historical image data.
In this step, after obtaining the plurality of sub-data, the environment image in the sub-data may be determined, and the environment image may be used as the historical environment image. Then, the subdata may be input into a plurality of label obtaining models trained in advance to obtain an image label output by each label obtaining model, and different label obtaining models are used for obtaining different image labels.
The tag acquisition model may include an image type acquisition model, a time point acquisition model, a time period acquisition model, and the like.
For example, for each sub data, the sub data may be input into the image type obtaining model, the time point obtaining model and the time period obtaining model respectively, an image type tag output by the image type obtaining model is obtained, a time point tag output by the time point obtaining model is obtained, and a time period tag output by the time period obtaining model is obtained. It should be noted that the image tag corresponding to each historical environment image may be different, and for example, the image tag corresponding to the historical environment image a may include an image type tag and a time point tag, and the image tag corresponding to the historical environment image B may include an image type tag and a time period tag.
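The fan-out over several pre-trained label obtaining models can be sketched as follows. The stand-in functions and their outputs are placeholders for trained networks, not real models.

```python
# Sketch of the multi-model tagging step: each pre-trained label obtaining
# model yields a different kind of image tag for one subdata chunk.
def image_type_model(subdata):   return {"image_type": "curve"}
def time_point_model(subdata):   return {"time_point": "08:30"}
def time_period_model(subdata):  return {"time_period": "morning"}

LABEL_MODELS = [image_type_model, time_point_model, time_period_model]

def tag_subdata(subdata):
    tags = {}
    for model in LABEL_MODELS:
        tags.update(model(subdata))  # each model contributes its own tag type
    return tags
```

Because each model contributes a distinct key, the merged dictionary reflects the statement that different label obtaining models produce different image tags.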
In a possible implementation manner, after each tag obtaining model outputs the image tag, a target message queue corresponding to the subdata may be determined from a plurality of preset message queues according to the vehicle identifier corresponding to the subdata, and the historical image data may be stored to a search database through the target message queue. The message queue may be, for example, a Kafka message queue known in the art.
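One way to realize the vehicle-to-queue mapping described above is a deterministic hash of the vehicle identifier, so that all subdata of the same vehicle always selects the same queue. This is a sketch under that assumption (the patent does not specify the mapping function); it mirrors keyed partitioning in Kafka, where messages with the same key go to the same partition.

```python
import hashlib

def pick_queue(vehicle_id: str, num_queues: int = 3) -> int:
    """Stably map a vehicle identifier to one of `num_queues` message queues.

    A deterministic hash guarantees that all subdata of the same vehicle
    flows through the same queue, preserving per-vehicle ordering and
    avoiding concurrent writes for one vehicle's historical image data.
    The choice of MD5 and the queue count of 3 are illustrative.
    """
    digest = hashlib.md5(vehicle_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_queues
```

Because the mapping depends only on the identifier, retries and restarts route a vehicle's data to the same queue every time.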
Illustratively, fig. 4 is a data storage flowchart according to an exemplary embodiment. As shown in fig. 4, the historical image data may be stored to the search database through message queue 1, message queue 2, or message queue 3. In this way, historical image data of the same vehicle is stored in the search database through the same message queue, which avoids data loss caused by concurrency conflicts during data storage. It should be noted that the message queues in fig. 4 are only an example; more message queues may be included, and the present disclosure is not limited in this respect.
When the historical image data is stored, for each historical image data, the plurality of image tags corresponding to the historical image data may be aggregated into one document. For example, the plurality of image tags may be stitched by a stitching algorithm in the related art to obtain tag document data corresponding to the historical image data, and the tag document data is then stored. Therefore, each image tag does not need to be stored separately, which reduces the data update frequency and saves memory resources.
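The aggregation step can be sketched as joining all tags of one historical image into a single stitched string. The tag names, separator, and `name=value` layout below are assumptions for illustration; the patent only requires that the tags end up in one document.

```python
def aggregate_tags(tags: dict) -> str:
    """Stitch all image tags of one historical image into a single document.

    `tags` maps tag names to values, e.g.
    {"image_type": "urban", "time_point": "08:30"}; the names are
    illustrative. Sorting makes the stitched document deterministic, and
    storing one document instead of one record per tag reduces the number
    of updates against the search database.
    """
    return ";".join(f"{k}={v}" for k, v in sorted(tags.items()))
```

A single write of the stitched document replaces several per-tag writes, which is where the reduced update frequency comes from.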
By adopting the above method, a corresponding image tag can be set for each historical image data, and during image retrieval the required sample images can be quickly found by comparing the image tags with the model keywords, thereby improving the efficiency of image retrieval.
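The keyword-to-tag comparison above can be sketched as a filter over the stored tag documents. The substring matching rule and the `(image_id, tag_document)` pair layout are simplifying assumptions; a production search database would use an inverted index instead of a linear scan.

```python
def retrieve_target_images(historical, keywords):
    """Select historical environment images whose tags match any model keyword.

    `historical` is assumed to be a list of (image_id, tag_document) pairs,
    with tag documents stitched as "name=value;..." strings. An image is a
    match if any model keyword occurs in its tag document.
    """
    matched = []
    for image_id, tag_doc in historical:
        if any(kw in tag_doc for kw in keywords):
            matched.append(image_id)
    return matched
```

For instance, a target automatic driving model to be trained on rainy scenes would use the keyword "rainy" and receive only images tagged with that type.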
Fig. 5 is a block diagram illustrating an image retrieval apparatus according to an exemplary embodiment, which may include, as shown in fig. 5:
a first determining module 501, configured to determine a model keyword corresponding to a target automatic driving model to be trained;
a retrieving module 502 configured to retrieve a plurality of target environment images matching the model keyword from a plurality of historical environment images according to image tags corresponding to the historical environment images, wherein the image tags are used for representing attribute information corresponding to the historical environment images;
a first obtaining module 503 configured to use a plurality of the target environment images as sample images for training the target automatic driving model.
Optionally, the retrieving module 502 is further configured to:
searching a plurality of target image labels matched with the model keywords from a plurality of image labels corresponding to the historical environment images;
and taking the historical environment images corresponding to the plurality of target image labels as the target environment images.
Optionally, fig. 6 is a block diagram illustrating another image retrieval apparatus according to an exemplary embodiment. As shown in fig. 6, the apparatus further includes:
the display module 504 is configured to, in response to a preview request message triggered by a user, obtain a preview image corresponding to the pre-stored sample image, and display the preview image, where a storage space occupied by the preview image is smaller than that of the sample image.
Optionally, fig. 7 is a block diagram illustrating another image retrieval apparatus according to an exemplary embodiment. As shown in fig. 7, the apparatus further includes:
a second obtaining module 505, configured to obtain a plurality of pre-generated historical image data, where the historical image data includes the historical environment image and an image tag corresponding to the historical environment image;
the retrieval module 502 is further configured to:
retrieving a plurality of target historical image data matched with the model keyword from a plurality of historical image data according to image tags in the plurality of historical image data;
and taking a history environment image in a plurality of target history image data as the target environment image.
Optionally, the apparatus further comprises:
a data generation module 506 configured to acquire vehicle data over a historical period of time, the vehicle data including an environment image, sensor data, controller data, and map data; dividing the vehicle data into a plurality of subdata according to a preset time interval; and regarding each subdata, taking the environment image in the subdata as the historical environment image, and determining an image tag corresponding to the historical environment image to obtain the historical image data.
Optionally, the image tag includes a plurality of image tags, and the determining the image tag corresponding to the historical environment image includes:
and inputting the subdata into a plurality of label obtaining models trained in advance to obtain the image label output by each label obtaining model, wherein different label obtaining models are used for obtaining different image labels.
Optionally, the apparatus further comprises:
a second determining module 507, configured to determine, according to the vehicle identifier corresponding to the subdata after each of the tag obtaining models outputs the image tag, a target message queue corresponding to the subdata from a plurality of preset message queues;
a storage module 508 configured to store the historical image data to a search database through the target message queue.
Optionally, the apparatus further comprises:
a third determining module 509 configured to determine whether an image tag corresponding to the historical image data is a preset tag;
the storage module 508, further configured to:
and under the condition that the image tag corresponding to the historical image data is the preset tag, storing the historical image data to the search database through the target message queue.
Optionally, the apparatus further comprises:
a fourth determining module 510, configured to determine, for each subdata, a preview image corresponding to an environment image in the subdata, and store the preview image.
By means of the above apparatus, the required sample images are quickly retrieved from the plurality of historical environment images according to the image tags, and the sample images can be used for training the target automatic driving model in a targeted manner, so that the trained target automatic driving model is more accurate and the safety of automatic driving is improved.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
The present disclosure also provides a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the image retrieval method provided by the present disclosure.
The apparatus may be part of a stand-alone electronic device. For example, in an embodiment, the apparatus may be an Integrated Circuit (IC) or a chip, where the IC may be a single IC or a collection of multiple ICs; the chip may include, but is not limited to, the following categories: a GPU (Graphics Processing Unit), a CPU (Central Processing Unit), an FPGA (Field Programmable Gate Array), a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an SoC (System on Chip), and the like. The integrated circuit or chip may be configured to execute executable instructions (or code) to implement the image retrieval method described above. The executable instructions may be stored in the integrated circuit or chip, or may be retrieved from another apparatus or device; for example, the integrated circuit or chip may include a second processor, a memory, and an interface for communicating with other apparatuses. The executable instructions may be stored in the memory and, when executed by the second processor, implement the image retrieval method described above; alternatively, the integrated circuit or chip may receive the executable instructions through the interface and transmit them to the second processor for execution, so as to implement the image retrieval method.
Referring to fig. 8, fig. 8 is a functional block diagram of a vehicle 600 according to an exemplary embodiment. The vehicle 600 may be configured in a fully or partially autonomous driving mode. For example, the vehicle 600 may acquire environmental information of its surroundings through the sensing system 620 and derive an automatic driving strategy based on an analysis of the surrounding environmental information to implement full automatic driving, or present the analysis result to the user to implement partial automatic driving.
Vehicle 600 may include various subsystems such as infotainment system 610, perception system 620, decision control system 630, drive system 640, and computing platform 650. Alternatively, vehicle 600 may include more or fewer subsystems, and each subsystem may include multiple components. In addition, each of the sub-systems and components of the vehicle 600 may be interconnected by wire or wirelessly.
In some embodiments, the infotainment system 610 may include a communication system 611, an entertainment system 612, and a navigation system 613.
The communication system 611 may comprise a wireless communication system that may communicate wirelessly with one or more devices, either directly or via a communication network. For example, the wireless communication system may use 3G cellular communication, such as CDMA, EVDO, or GSM/GPRS; 4G cellular communication, such as LTE; or 5G cellular communication. The wireless communication system may communicate with a Wireless Local Area Network (WLAN) using WiFi. In some embodiments, the wireless communication system may communicate directly with a device using an infrared link, Bluetooth, or ZigBee. Other wireless protocols may also be used, such as various vehicular communication systems; for example, the wireless communication system may include one or more Dedicated Short Range Communications (DSRC) devices, which may support public and/or private data communications between vehicles and/or roadside stations.
The entertainment system 612 may include a display device, a microphone, and a speaker. Based on the entertainment system, a user may listen to the radio or play music in the car; alternatively, a mobile phone may communicate with the vehicle to project its screen onto the display device. The display device may be a touch screen, and the user may operate it by touching the screen.
In some cases, the voice signal of the user may be acquired through a microphone, and certain control of the vehicle 600 by the user, such as adjusting the temperature in the vehicle, etc., may be implemented according to the analysis of the voice signal of the user. In other cases, music may be played to the user through a stereo.
The navigation system 613 may include a map service provided by a map provider to provide navigation of a route of travel for the vehicle 600, and the navigation system 613 may be used in conjunction with a global positioning system 621 and an inertial measurement unit 622 of the vehicle. The map service provided by the map provider can be a two-dimensional map or a high-precision map.
The sensing system 620 may include several types of sensors that sense information about the environment surrounding the vehicle 600. For example, the sensing system 620 may include a global positioning system 621 (which may be a GPS system, a BeiDou system, or another positioning system), an Inertial Measurement Unit (IMU) 622, a laser radar 623, a millimeter wave radar 624, an ultrasonic radar 625, and a camera 626. The sensing system 620 may also include sensors that monitor internal systems of the vehicle 600 (e.g., an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensor data from one or more of these sensors may be used to detect objects and their corresponding characteristics (position, shape, orientation, velocity, etc.). Such detection and identification is a critical function of the safe operation of the vehicle 600.
Global positioning system 621 is used to estimate the geographic location of vehicle 600.
The inertial measurement unit 622 is used to sense a pose change of the vehicle 600 based on the inertial acceleration. In some embodiments, inertial measurement unit 622 may be a combination of accelerometers and gyroscopes.
Lidar 623 utilizes laser light to sense objects in the environment in which vehicle 600 is located. In some embodiments, lidar 623 may include one or more laser sources, laser scanners, and one or more detectors, among other system components.
The millimeter-wave radar 624 utilizes radio signals to sense objects within the surrounding environment of the vehicle 600. In some embodiments, in addition to sensing objects, the millimeter-wave radar 624 may also be used to sense the speed and/or heading of objects.
The ultrasonic radar 625 may sense objects around the vehicle 600 using ultrasonic signals.
The camera 626 is used to capture image information of the surroundings of the vehicle 600. The camera 626 may include a monocular camera, a binocular camera, a structured light camera, a panoramic camera, and the like, and the image information acquired by the camera 626 may include still images or video stream information.
Decision control system 630 includes a computing system 631 that makes analytical decisions based on information acquired by sensing system 620. Decision control system 630 further includes a vehicle control unit 632 that controls the powertrain of vehicle 600, as well as a steering system 633, a throttle 634, and a brake system 635 for controlling vehicle 600.
The computing system 631 may operate to process and analyze the various information acquired by the perception system 620 to identify objects and/or features in the environment surrounding the vehicle 600. The objects may include pedestrians or animals, and the features may include traffic signals, road boundaries, and obstacles. The computing system 631 may use object recognition algorithms, Structure from Motion (SFM) algorithms, video tracking, and the like. In some embodiments, the computing system 631 may be used to map an environment, track objects, estimate the speed of objects, and so forth. The computing system 631 may analyze the various information obtained and derive a control strategy for the vehicle.
The vehicle control unit 632 may be used to perform coordinated control of the power battery and the engine 641 of the vehicle to improve the power performance of the vehicle 600.
The steering system 633 is operable to adjust the heading of the vehicle 600. For example, in one embodiment, it may be a steering wheel system.
The throttle 634 is used to control the operating speed of the engine 641 and thus the speed of the vehicle 600.
The brake system 635 is used to control the deceleration of the vehicle 600. The braking system 635 may use friction to slow the wheel 644. In some embodiments, the braking system 635 may convert the kinetic energy of the wheels 644 into electrical current. The braking system 635 may also take other forms to slow the rotational speed of the wheels 644 to control the speed of the vehicle 600.
The drive system 640 may include components that provide powered motion to the vehicle 600. In one embodiment, the drive system 640 may include an engine 641, an energy source 642, a transmission 643, and wheels 644. The engine 641 may be an internal combustion engine, an electric motor, an air compression engine, or a combination of engine types, such as a hybrid engine consisting of a gasoline engine and an electric motor, or a hybrid engine consisting of an internal combustion engine and an air compression engine. The engine 641 converts the energy source 642 into mechanical energy.
Examples of energy sources 642 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electrical power. The energy source 642 may also provide energy to other systems of the vehicle 600.
The transmission 643 may transmit mechanical power from the engine 641 to the wheels 644. The transmission 643 may include a gearbox, a differential, and a drive shaft. In one embodiment, the transmission 643 may also include other components, such as clutches. Wherein the drive shaft may include one or more axles that may be coupled to one or more wheels 644.
Some or all of the functions of the vehicle 600 are controlled by the computing platform 650. The computing platform 650 can include at least one first processor 651, which first processor 651 can execute instructions 653 stored in a non-transitory computer-readable medium, such as memory 652. In some embodiments, the computing platform 650 may also be a plurality of computing devices that control individual components or subsystems of the vehicle 600 in a distributed manner.
The first processor 651 may be any conventional processor, such as a commercially available CPU. Alternatively, the first processor 651 may also include a Graphics Processing Unit (GPU), a Field Programmable Gate Array (FPGA), a System on Chip (SoC), an Application Specific Integrated Circuit (ASIC), or a combination thereof. Although fig. 8 functionally illustrates a processor, memory, and other elements of a computer in the same block, those skilled in the art will appreciate that the processor, computer, or memory may actually comprise multiple processors, computers, or memories that may or may not be stored within the same physical housing. For example, the memory may be a hard drive or other storage medium located in a different housing than the computer. Thus, references to a processor or computer are to be understood as including references to a collection of processors, computers, or memories that may or may not operate in parallel. Rather than using a single processor to perform the steps described herein, some components, such as the steering component and the retarding component, may each have their own processor that performs only computations related to the component-specific functions.
In the disclosed embodiment, the first processor 651 may perform the image retrieval method described above.
In various aspects described herein, the first processor 651 may be located remotely from the vehicle and in wireless communication with the vehicle. In other aspects, some of the processes described herein are executed on a processor disposed within the vehicle and others are executed by a remote processor, including taking the steps necessary to execute a single maneuver.
In some embodiments, the memory 652 can include instructions 653 (e.g., program logic), which instructions 653 can be executed by the first processor 651 to perform various functions of the vehicle 600. The memory 652 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the infotainment system 610, the perception system 620, the decision control system 630, the drive system 640.
In addition to instructions 653, memory 652 may also store data such as road maps, route information, the location, direction, speed, and other such vehicle data of the vehicle, as well as other information. Such information may be used by the vehicle 600 and the computing platform 650 during operation of the vehicle 600 in autonomous, semi-autonomous, and/or manual modes.
Computing platform 650 may control functions of vehicle 600 based on inputs received from various subsystems (e.g., drive system 640, perception system 620, and decision control system 630). For example, computing platform 650 may utilize input from decision control system 630 in order to control steering system 633 to avoid obstacles detected by perception system 620. In some embodiments, the computing platform 650 is operable to provide control over many aspects of the vehicle 600 and its subsystems.
Optionally, one or more of these components described above may be mounted or associated separately from the vehicle 600. For example, the memory 652 may exist partially or completely separate from the vehicle 600. The aforementioned components may be communicatively coupled together in a wired and/or wireless manner.
Optionally, the above components are only an example, in an actual application, components in the above modules may be added or deleted according to an actual need, and fig. 8 should not be construed as limiting the embodiment of the present disclosure.
An autonomous automobile traveling on a roadway, such as the vehicle 600 above, may identify objects within its surrounding environment to determine an adjustment to its current speed. The object may be another vehicle, a traffic control device, or another type of object. In some examples, each identified object may be considered independently, and the respective characteristics of the object, such as its current speed, acceleration, and separation from the vehicle, may be used to determine the speed to which the autonomous vehicle is to be adjusted.
Optionally, the vehicle 600 or a sensing and computing device associated with the vehicle 600 (e.g., computing system 631, computing platform 650) may predict the behavior of the identified object based on characteristics of the identified object and the state of the surrounding environment (e.g., traffic, rain, ice on the road, etc.). Optionally, because the behavior of each identified object may depend on the others, all identified objects may also be considered together to predict the behavior of a single identified object. The vehicle 600 is able to adjust its speed based on the predicted behavior of the identified object. In other words, the autonomous vehicle is able to determine what steady state it will need to adjust to (e.g., accelerate, decelerate, or stop) based on the predicted behavior of the object. In this process, other factors may also be considered to determine the speed of the vehicle 600, such as the lateral position of the vehicle 600 in the road being traveled, the curvature of the road, the proximity of static and dynamic objects, and so forth.
In addition to providing instructions to adjust the speed of the autonomous vehicle, the computing device may also provide instructions to modify the steering angle of the vehicle 600 to cause the autonomous vehicle to follow a given trajectory and/or maintain a safe lateral and longitudinal distance from objects in the vicinity of the autonomous vehicle (e.g., vehicles in adjacent lanes on the road).
The vehicle 600 may be any type of vehicle, such as a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a recreational vehicle, a train, etc., and the disclosed embodiment is not particularly limited.
In another exemplary embodiment, a computer program product is also provided, which comprises a computer program executable by a programmable apparatus, the computer program having code portions for performing the image retrieval method described above when executed by the programmable apparatus.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements that have been described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (13)

1. An image retrieval method, comprising:
determining a model keyword corresponding to a target automatic driving model to be trained;
according to image tags corresponding to a plurality of historical environment images, retrieving a plurality of target environment images matched with the model keywords from the plurality of historical environment images, wherein the image tags are used for representing attribute information corresponding to the historical environment images;
and taking a plurality of target environment images as sample images for training the target automatic driving model.
2. The method of claim 1, wherein the retrieving, from the plurality of historical environment images, the plurality of target environment images matching the model keyword according to the image tags corresponding to the plurality of historical environment images comprises:
retrieving a plurality of target image tags matched with the model keywords from image tags corresponding to a plurality of historical environment images;
and taking the historical environment images corresponding to the target image labels as the target environment images.
3. The method of claim 1, further comprising:
and responding to a preview request message triggered by a user, acquiring a preview image corresponding to the pre-stored sample image, and displaying the preview image, wherein the storage space occupied by the preview image is smaller than that of the sample image.
4. The method according to any one of claims 1-3, wherein before retrieving the plurality of target environment images matching the model keyword from the plurality of historical environment images according to the image tags corresponding to the plurality of historical environment images, the method further comprises:
acquiring a plurality of pre-generated historical image data, wherein the historical image data comprises the historical environment image and an image label corresponding to the historical environment image;
the retrieving, from the plurality of historical environment images, a plurality of target environment images matching the model keyword according to image tags corresponding to the plurality of historical environment images includes:
retrieving a plurality of target historical image data matched with the model keyword from a plurality of the historical image data according to an image label in the plurality of the historical image data;
and taking a historical environment image in the plurality of target historical image data as the target environment image.
5. The method according to claim 4, wherein the historical image data is pre-generated by:
acquiring vehicle data in a historical time period, wherein the vehicle data comprises an environment image, sensor data, controller data and map data;
dividing the vehicle data into a plurality of subdata according to a preset time interval;
and regarding each subdata, taking an environment image in the subdata as the historical environment image, and determining an image tag corresponding to the historical environment image to obtain the historical image data.
6. The method of claim 5, wherein the image tag comprises a plurality of image tags, and wherein determining the image tag corresponding to the historical environmental image comprises:
and inputting the subdata into a plurality of label obtaining models trained in advance to obtain the image label output by each label obtaining model, wherein different label obtaining models are used for obtaining different image labels.
7. The method of claim 6, further comprising:
after each label obtaining model outputs the image label, determining a target message queue corresponding to the subdata from a plurality of preset message queues according to the vehicle identification corresponding to the subdata;
and storing the historical image data to a search database through the target message queue.
8. The method of claim 7, wherein prior to said storing said historical image data to a search database via said target message queue, said method further comprises:
determining whether an image tag corresponding to the historical image data is a preset tag;
the storing the historical image data to a search database via the target message queue comprises:
and under the condition that the image tag corresponding to the historical image data is the preset tag, storing the historical image data to the search database through the target message queue.
9. The method of claim 5, further comprising:
and determining a preview image corresponding to the environment image in the subdata for each subdata, and storing the preview image.
10. An image retrieval apparatus, comprising:
the automatic driving training system comprises a first determining module, a second determining module and a control module, wherein the first determining module is configured to determine a model keyword corresponding to a target automatic driving model to be trained;
a retrieval module configured to retrieve a plurality of target environment images matched with the model keyword from a plurality of historical environment images according to image tags corresponding to the plurality of historical environment images, wherein the image tags are used for representing attribute information corresponding to the historical environment images;
a first acquisition module configured to take a plurality of the target environment images as sample images for training the target automatic driving model.
11. A vehicle, characterized by comprising:
a first processor;
a memory for storing processor-executable instructions;
wherein the first processor is configured to:
the steps of implementing the method of any one of claims 1 to 9.
12. A computer-readable storage medium, on which computer program instructions are stored, which program instructions, when executed by a processor, carry out the steps of the method according to any one of claims 1 to 9.
13. A chip comprising a second processor and an interface; the second processor is configured to read instructions to perform the method of any of claims 1-9.
CN202210836988.XA 2022-07-15 2022-07-15 Image retrieval method, device, vehicle, storage medium and chip Active CN115203457B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210836988.XA CN115203457B (en) 2022-07-15 2022-07-15 Image retrieval method, device, vehicle, storage medium and chip


Publications (2)

Publication Number Publication Date
CN115203457A true CN115203457A (en) 2022-10-18
CN115203457B CN115203457B (en) 2023-11-14

Family

ID=83582599

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210836988.XA Active CN115203457B (en) 2022-07-15 2022-07-15 Image retrieval method, device, vehicle, storage medium and chip

Country Status (1)

Country Link
CN (1) CN115203457B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107609502A (en) * 2017-09-05 2018-01-19 百度在线网络技术(北京)有限公司 Method and apparatus for controlling automatic driving vehicle
US10061322B1 (en) * 2017-04-06 2018-08-28 GM Global Technology Operations LLC Systems and methods for determining the lighting state of a vehicle
CN111104536A (en) * 2019-12-24 2020-05-05 Oppo广东移动通信有限公司 Picture searching method, device, terminal and storage medium
CN111368789A (en) * 2020-03-18 2020-07-03 腾讯科技(深圳)有限公司 Image recognition method and device, computer equipment and storage medium
CN111694973A (en) * 2020-06-09 2020-09-22 北京百度网讯科技有限公司 Model training method and device for automatic driving scene and electronic equipment
WO2021006870A1 (en) * 2019-07-08 2021-01-14 Huawei Technologies Co., Ltd. Vehicular autonomy-level functions
CN113255445A (en) * 2021-04-20 2021-08-13 杭州飞步科技有限公司 Multitask model training and image processing method, device, equipment and storage medium
CN113762044A (en) * 2021-05-06 2021-12-07 腾讯科技(深圳)有限公司 Road recognition method, road recognition device, computer equipment and storage medium
CN113989785A (en) * 2021-12-01 2022-01-28 天津天瞳威势电子科技有限公司 Driving scene classification method, device, equipment and storage medium
CN114118413A (en) * 2021-11-30 2022-03-01 上海商汤临港智能科技有限公司 Network training and equipment control method, device, equipment and storage medium
CN114461853A (en) * 2022-01-28 2022-05-10 腾讯科技(深圳)有限公司 Training sample generation method, device and equipment of video scene classification model
CN114610628A (en) * 2022-03-16 2022-06-10 阿波罗智联(北京)科技有限公司 Scene library establishing and testing method, device, equipment, medium and program product

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHAO CHUANG: "Automatic Generation Algorithm and Implementation of Cognitive Maps for Autonomous Driving", China Master's Theses Full-text Database (Engineering Science and Technology II), no. 01, pages 035-522 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115937421A (en) * 2022-12-13 2023-04-07 昆易电子科技(上海)有限公司 Method for generating simulation video data, image generating device and readable storage medium
CN115937421B (en) * 2022-12-13 2024-04-02 昆易电子科技(上海)有限公司 Method for generating simulated video data, image generating device and readable storage medium

Also Published As

Publication number Publication date
CN115203457B (en) 2023-11-14

Similar Documents

Publication Title
CN115042821B (en) Vehicle control method, vehicle control device, vehicle and storage medium
CN114882464B (en) Multi-task model training method, multi-task processing method, device and vehicle
CN115035494A (en) Image processing method, image processing device, vehicle, storage medium and chip
CN115123257A (en) Method and device for identifying position of road deceleration strip, vehicle, storage medium and chip
CN115203457B (en) Image retrieval method, device, vehicle, storage medium and chip
CN115056784B (en) Vehicle control method, device, vehicle, storage medium and chip
US20240017719A1 (en) Mapping method and apparatus, vehicle, readable storage medium, and chip
CN114842440B (en) Automatic driving environment sensing method and device, vehicle and readable storage medium
CN115202234B (en) Simulation test method and device, storage medium and vehicle
CN114842455B (en) Obstacle detection method, device, equipment, medium, chip and vehicle
CN115205311B (en) Image processing method, device, vehicle, medium and chip
CN115221151B (en) Vehicle data transmission method and device, vehicle, storage medium and chip
CN114782638B (en) Method and device for generating lane line, vehicle, storage medium and chip
CN115100630B (en) Obstacle detection method, obstacle detection device, vehicle, medium and chip
CN115205848A (en) Target detection method, target detection device, vehicle, storage medium and chip
CN115042814A (en) Traffic light state identification method and device, vehicle and storage medium
CN115334109A (en) System architecture, transmission method, vehicle, medium and chip for traffic signal identification
CN115330923A (en) Point cloud data rendering method and device, vehicle, readable storage medium and chip
CN115205179A (en) Image fusion method and device, vehicle and storage medium
CN114880408A (en) Scene construction method, device, medium and chip
CN115063639B (en) Model generation method, image semantic segmentation device, vehicle and medium
CN115147794B (en) Lane line determining method, lane line determining device, vehicle, medium and chip
CN115082772B (en) Location identification method, location identification device, vehicle, storage medium and chip
CN115082886B (en) Target detection method, device, storage medium, chip and vehicle
CN115257628B (en) Vehicle control method, device, storage medium, vehicle and chip

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant