CN115203457B - Image retrieval method, device, vehicle, storage medium and chip - Google Patents

Image retrieval method, device, vehicle, storage medium and chip

Info

Publication number
CN115203457B
CN115203457B (application CN202210836988.XA)
Authority
CN
China
Prior art keywords
image
historical
data
target
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210836988.XA
Other languages
Chinese (zh)
Other versions
CN115203457A (en)
Inventor
路卫杰
解智
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiaomi Automobile Technology Co Ltd
Original Assignee
Xiaomi Automobile Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaomi Automobile Technology Co Ltd filed Critical Xiaomi Automobile Technology Co Ltd
Priority to CN202210836988.XA
Publication of CN115203457A
Application granted
Publication of CN115203457B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 - Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/5866 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 - Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53 - Querying
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 - Multiprogramming arrangements
    • G06F9/54 - Interprogram communication
    • G06F9/546 - Message passing systems or structures, e.g. queues
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Data Mining & Analysis (AREA)
  • Library & Information Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present disclosure relates to an image retrieval method, apparatus, vehicle, storage medium, and chip. The method includes: determining a model keyword corresponding to a target automatic driving model to be trained; retrieving, from a plurality of historical environment images, a plurality of target environment images matching the model keyword according to image tags corresponding to the historical environment images, the image tags being used to represent attribute information corresponding to the historical environment images; and using the plurality of target environment images as sample images for training the target automatic driving model. That is, the present disclosure can quickly retrieve the required sample images from a large number of historical environment images according to the image tags, and these sample images can train the target automatic driving model in a targeted manner, so that the trained target automatic driving model is more accurate, thereby improving the safety of automatic driving.

Description

Image retrieval method, device, vehicle, storage medium and chip
Technical Field
The disclosure relates to the technical field of vehicles, and in particular relates to an image retrieval method, an image retrieval device, a vehicle, a storage medium and a chip.
Background
The implementation of automatic driving technology mainly depends on machine-learning-based automatic driving models: an automatic driving model determines a driving strategy from the driving environment data collected while the vehicle is traveling, and thereby controls the vehicle to automatically execute the corresponding actions. Improving the safety of automatic driving therefore places higher requirements on the automatic driving model.
In the related art, an automatic driving model is trained on a massive collection of environmental images in order to improve its accuracy. However, such a large collection may contain a great deal of redundant data, which lowers the accuracy of model training. How to retrieve, from a large number of environmental images, those images that can train an automatic driving model in a targeted manner is therefore a problem to be solved.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides an image retrieval method, apparatus, vehicle, storage medium, and chip.
According to a first aspect of an embodiment of the present disclosure, there is provided an image retrieval method, including:
determining model keywords corresponding to a target automatic driving model to be trained;
according to image labels corresponding to a plurality of historical environment images, a plurality of target environment images matched with the model keywords are retrieved from a plurality of historical environment images, and the image labels are used for representing attribute information corresponding to the historical environment images;
and taking a plurality of target environment images as sample images for training the target automatic driving model.
Optionally, the retrieving, from a plurality of historical environment images, a plurality of target environment images matching the model keyword according to image tags corresponding to the historical environment images includes:
retrieving a plurality of target image labels matched with the model keywords from the image labels corresponding to a plurality of historical environment images;
and taking the historical environment images corresponding to the target image labels as the target environment images.
Optionally, the method further comprises:
responding to a preview request message triggered by a user, acquiring a pre-stored preview image corresponding to the sample image, and displaying the preview image, wherein the storage space occupied by the preview image is smaller than that of the sample image.
Optionally, before the image labels corresponding to the plurality of historical environment images are used for retrieving the plurality of target environment images matched with the model keywords from the plurality of historical environment images, the method further comprises:
acquiring a plurality of pre-generated historical image data, wherein the historical image data comprises the historical environment image and image tags corresponding to the historical environment image;
the retrieving, from the plurality of historical environment images, of the plurality of target environment images matching the model keyword according to the image tags corresponding to the historical environment images comprises the following steps:
retrieving a plurality of target historical image data matched with the model keyword from a plurality of historical image data according to image tags in the historical image data;
and taking historical environment images in the plurality of target historical image data as the target environment images.
Optionally, the historical image data is pre-generated by:
acquiring vehicle data in a historical time period, wherein the vehicle data comprises environment images, sensor data, controller data and map data;
dividing the vehicle data into a plurality of sub-data according to a preset time interval;
and regarding each piece of sub-data, taking the environment image in the sub-data as the historical environment image, and determining an image tag corresponding to the historical environment image to obtain the historical image data.
Optionally, the image tag includes a plurality of image tags, and the determining the image tag corresponding to the historical environment image includes:
inputting the sub data into a plurality of pre-trained tag acquisition models to acquire image tags output by each tag acquisition model, wherein different tag acquisition models are used for acquiring different image tags.
Optionally, the method further comprises:
after each tag acquisition model outputs the image tag, determining a target message queue corresponding to the sub-data from a plurality of preset message queues according to the vehicle identifier corresponding to the sub-data;
and storing the historical image data to a search database through the target message queue.
Optionally, before the storing the historical image data to a search database through the target message queue, the method further comprises:
determining whether an image tag corresponding to the historical image data is a preset tag or not;
the storing the historical image data to a search database via the target message queue includes:
and storing the historical image data to the search database through the target message queue under the condition that the image label corresponding to the historical image data is the preset label.
Optionally, the method further comprises:
and determining a preview image corresponding to the environment image in the sub data according to each sub data, and storing the preview image.
According to a second aspect of the embodiments of the present disclosure, there is provided an image retrieval apparatus including:
The first determining module is configured to determine model keywords corresponding to a target automatic driving model to be trained;
a retrieval module configured to retrieve a plurality of target environment images matched with the model keywords from a plurality of historical environment images according to image tags corresponding to the historical environment images, wherein the image tags are used for representing attribute information corresponding to the historical environment images;
a first acquisition module configured to use a plurality of the target environmental images as sample images for training the target autopilot model.
Optionally, the retrieval module is further configured to:
retrieving a plurality of target image labels matched with the model keywords from the image labels corresponding to a plurality of historical environment images;
and taking the historical environment images corresponding to the target image labels as the target environment images.
Optionally, the apparatus further comprises:
the display module is configured to respond to a preview request message triggered by a user, acquire a preview image corresponding to the pre-stored sample image, and display the preview image, wherein the storage space occupied by the preview image is smaller than that of the sample image.
Optionally, the apparatus further comprises:
a second acquisition module for acquiring a plurality of pre-generated historical image data, wherein the historical image data comprises the historical environment image and image tags corresponding to the historical environment image;
the retrieval module is further configured to:
retrieving a plurality of target historical image data matched with the model keyword from a plurality of historical image data according to image tags in the historical image data;
and taking historical environment images in the plurality of target historical image data as the target environment images.
Optionally, the apparatus further comprises:
a data generation module configured to acquire vehicle data over a historical period of time, the vehicle data including an environmental image, sensor data, controller data, map data; dividing the vehicle data into a plurality of sub-data according to a preset time interval; and regarding each piece of sub-data, taking the environment image in the sub-data as the historical environment image, and determining an image tag corresponding to the historical environment image to obtain the historical image data.
Optionally, the image tag includes a plurality of image tags, and the determining the image tag corresponding to the historical environment image includes:
Inputting the sub data into a plurality of pre-trained tag acquisition models to acquire image tags output by each tag acquisition model, wherein different tag acquisition models are used for acquiring different image tags.
Optionally, the apparatus further comprises:
the second determining module is configured to determine a target message queue corresponding to the sub-data from a plurality of preset message queues according to the vehicle identifier corresponding to the sub-data after each tag acquisition model outputs the image tag;
a storage module configured to store the historical image data to a search database through the target message queue.
Optionally, the apparatus further comprises:
a third determining module configured to determine whether an image tag corresponding to the historical image data is a preset tag;
the memory module is further configured to:
and storing the historical image data to the search database through the target message queue under the condition that the image label corresponding to the historical image data is the preset label.
Optionally, the apparatus further comprises:
and a fourth determining module configured to determine, for each of the sub data, a preview image corresponding to an environment image in the sub data, and store the preview image.
According to a third aspect of embodiments of the present disclosure, there is provided a vehicle comprising:
a first processor;
a memory for storing processor-executable instructions;
wherein the first processor is configured to:
implementing the steps of the method described in the first aspect of the present disclosure.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the method of the first aspect of the present disclosure.
According to a fifth aspect of embodiments of the present disclosure, there is provided a chip comprising a second processor and an interface; the second processor is configured to read instructions to perform the method of the first aspect of the present disclosure.
The technical solution provided by the embodiments of the present disclosure may have the following beneficial effects: a model keyword corresponding to a target automatic driving model to be trained is determined; a plurality of target environment images matching the model keyword are retrieved from a plurality of historical environment images according to image tags corresponding to the historical environment images, the image tags being used to represent attribute information corresponding to the historical environment images; and the plurality of target environment images are used as sample images for training the target automatic driving model. That is, the present disclosure can quickly retrieve the required sample images from a large number of historical environment images according to the image tags, and these sample images can train the target automatic driving model in a targeted manner, so that the trained target automatic driving model is more accurate, thereby improving the safety of automatic driving.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flowchart illustrating an image retrieval method according to an exemplary embodiment;
FIG. 2 is a flowchart illustrating another image retrieval method according to an exemplary embodiment;
FIG. 3 is a flowchart illustrating a method of generating image data according to an exemplary embodiment;
FIG. 4 is a data storage flow diagram illustrating an exemplary embodiment;
FIG. 5 is a block diagram of an image retrieval apparatus, according to an exemplary embodiment;
FIG. 6 is a block diagram of another image retrieval apparatus shown according to an exemplary embodiment;
FIG. 7 is a block diagram of another image retrieval apparatus shown in accordance with an exemplary embodiment;
FIG. 8 is a functional block diagram of a vehicle, shown in an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
It should be noted that, all actions of acquiring signals, information or data in the present application are performed under the condition of conforming to the corresponding data protection rule policy of the country of the location and obtaining the authorization given by the owner of the corresponding device.
First, an application scenario of the present disclosure will be described. At present, the sample images used to train an automatic driving model are environmental images collected by a vehicle while driving. In general, a vehicle collects environmental images of different driving scenes during travel; for example, an environmental image may be captured while the vehicle is turning, while the vehicle is stopped at a red light, or when a pedestrian suddenly appears in front of the vehicle. An automatic driving system, meanwhile, comprises a plurality of automatic driving models, and different automatic driving models are used to obtain different decision information; for example, the automatic driving models may include a turn handling model and an obstacle avoidance handling model.
In the related art, in order to improve the accuracy of an automatic driving model, target environment images strongly related to that model are determined from a large number of environmental images and used as sample images to train it. For example, when training the turn handling model, only the environmental images related to turning scenes may be selected. However, because the number of environmental images collected while the vehicle is driving is huge, a large collection may contain considerable redundant data, which lowers the accuracy of model training.
In order to solve the technical problems, the disclosure provides an image retrieval method, an image retrieval device, a vehicle, a storage medium and a chip, wherein a required sample image is rapidly retrieved from a plurality of historical environment images according to an image tag, and the sample image can pertinently train the target automatic driving model, so that the accuracy of the target automatic driving model obtained by training is higher, and the safety of automatic driving is improved.
The present disclosure is described below in connection with specific embodiments.
FIG. 1 is a flowchart illustrating an image retrieval method, as shown in FIG. 1, according to an exemplary embodiment, which may include:
s101, determining model keywords corresponding to a target automatic driving model to be trained.
In this step, the model keyword corresponding to the target automatic driving model may be determined according to the function of that model. For example, if the function of the target automatic driving model is obstacle detection, the corresponding model keyword may be determined to be "obstacle". The model keyword may also be a combination of several search terms; for example, it may be "time point + vehicle model + curve".
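The mapping from a model's function to its keyword(s) can be sketched as a simple lookup. This is an illustrative sketch only: the model names, keyword values, and the `MODEL_KEYWORDS` table are assumptions, not values from the disclosure.

```python
# Hypothetical mapping from an autopilot model's function to its model
# keywords (step S101); all names and values are illustrative only.
MODEL_KEYWORDS = {
    "obstacle_detection": ["obstacle"],
    "turn_handling": ["curve"],
}


def model_keywords(model_name, extra_terms=()):
    """Return the search terms for a model, optionally combined with
    additional terms such as a time point or a vehicle model."""
    return MODEL_KEYWORDS.get(model_name, []) + list(extra_terms)
```

A combined keyword such as "time point + vehicle model + curve" then corresponds to calling `model_keywords("turn_handling", ["time point", "vehicle model"])`.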
S102, searching a plurality of target environment images matched with the model keywords from a plurality of historical environment images according to image labels corresponding to the historical environment images.
The image tag may be used to represent attribute information corresponding to the historical environment image, and the image tag may include an image type tag, a time point tag, and a time period tag, for example. The historical environment image may be a multi-frame image, for example, the historical environment image may be a video image of a period of time, and the historical environment image may also be a frame image of a point of time, which is not limited in this disclosure.
In this step, after the model keyword corresponding to the target automatic driving model is determined, target image tags matching the model keyword may be retrieved from the image tags corresponding to the plurality of historical environment images by an existing character-matching algorithm, and the historical environment images corresponding to those target image tags are used as the target environment images. For example, if the model keyword corresponding to the target automatic driving model is "obstacle", the target image tags matching "obstacle" may be determined from the plurality of image tags; tags such as "pedestrian" or "bicycle" would then be determined to be target image tags.
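The retrieval step can be sketched as follows. The synonym table is an assumption standing in for whatever matching the disclosure's character-matching algorithm performs (it is what lets the tags "pedestrian" and "bicycle" match the keyword "obstacle" in the example above); all identifiers are illustrative.

```python
# Illustrative match sets: which tags count as hits for a keyword.
# The disclosure's example has "pedestrian"/"bicycle" matching "obstacle".
KEYWORD_MATCHES = {"obstacle": {"obstacle", "pedestrian", "bicycle"}}


def retrieve_target_images(images_with_tags, keyword):
    """Return the ids of historical environment images whose tag set
    intersects the keyword's match set (falling back to an exact-tag
    match for keywords not in the table)."""
    match_set = KEYWORD_MATCHES.get(keyword, {keyword})
    return sorted(
        image_id
        for image_id, tags in images_with_tags.items()
        if tags & match_set
    )
```

Under these assumptions, images tagged "pedestrian" or "bicycle" are returned for the keyword "obstacle", and images tagged only "rain" are not.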
S103, taking a plurality of target environment images as sample images for training the target automatic driving model.
The sample images may be divided into a data set to be labeled, a training data set, and a test data set: the data set to be labeled is intended for manual labeling, the target automatic driving model is trained with the labeled data set together with the training data set, and the test data set is used to test the accuracy of the target automatic driving model.
In this step, after the plurality of target environment images are obtained, they may be used as sample images to train a target neural network model, thereby obtaining the trained target automatic driving model.
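A simple way to partition the retrieved sample images into the three data sets described above might look like the following; the split fractions are assumptions for illustration, not values from the disclosure.

```python
def split_samples(samples, to_label_frac=0.2, test_frac=0.2):
    """Split retrieved sample images into (to_label, train, test):
    a set for manual labeling, a training set, and a test set.
    Fractions are illustrative; a real pipeline would likely shuffle
    deterministically before slicing."""
    n = len(samples)
    n_label = int(n * to_label_frac)
    n_test = int(n * test_frac)
    to_label = samples[:n_label]
    test = samples[n_label:n_label + n_test]
    train = samples[n_label + n_test:]
    return to_label, train, test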
In one possible implementation, after the sample images for the target automatic driving model are obtained, a pre-stored preview image corresponding to a sample image may be acquired and displayed in response to a preview request message triggered by a user. The preview image occupies less storage space than the sample image; for example, it may be a lower-resolution image generated from the sample image. Thus, when a user needs to preview a sample image, the corresponding preview image can be sent to the browser and displayed there, which saves storage resources.
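The preview idea, a smaller image derived from the sample, can be illustrated with a naive downsampler over a 2-D pixel grid. This is a sketch of the concept only; a real system would re-encode with an image library rather than subsample raw pixels.

```python
def make_preview(pixels, factor=2):
    """Downsample a 2-D grid of pixel values by keeping every
    `factor`-th row and column, producing a lower-resolution preview
    that occupies roughly 1/factor**2 of the original storage."""
    return [row[::factor] for row in pixels[::factor]]
```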
By adopting the method, the required sample image can be quickly retrieved from a plurality of historical environment images according to the image labels, and the sample image can train the target automatic driving model in a targeted manner, so that the accuracy of the target automatic driving model obtained by training is higher, and the safety of automatic driving is improved.
FIG. 2 is a flowchart illustrating another image retrieval method according to an exemplary embodiment, as shown in FIG. 2, which may further include:
s104, acquiring a plurality of historical image data which are generated in advance.
The historical image data may include the historical environment image and image labels corresponding to the historical environment image, and different historical image data may be image data in different time periods or image data of different labels in the same time period.
Accordingly, step S102 may be:
retrieving a plurality of target historical image data matching the model keyword from a plurality of the historical image data according to image tags in the plurality of the historical image data; and taking the historical environment images in the plurality of target historical image data as the target environment images.
For example, after determining a model keyword corresponding to the automatic driving model, a plurality of history image data may be acquired, and then, from among image tags of the plurality of history image data, a plurality of target image tags matching the model keyword are determined, the history image data corresponding to the plurality of target image tags is taken as the target history image data, and a history environmental image in the target history image data is taken as the target environmental image.
Wherein the historical image data may be pre-generated from vehicle data collected as the vehicle travels over a historical period of time, fig. 3 is a flowchart illustrating an image data generation method according to an exemplary embodiment, as shown in fig. 3, which may include:
s31, acquiring vehicle data in a historical time period.
The vehicle data may include environmental images, sensor data, controller data, and map data. The environmental images may be collected by a camera mounted on the vehicle, the sensor data may be collected by sensors mounted on the vehicle, and the controller data may be collected by a controller of the vehicle and may include vehicle driving data such as turning data and speed information.
In this step, during the running of the vehicle, the camera, sensors, controller, and so on of the vehicle may be controlled to collect and store the vehicle data. The vehicle data may include data collected by the vehicle under different working conditions and in different time periods.
S32, dividing the vehicle data into a plurality of sub-data according to a preset time interval.
Wherein the preset time interval may be empirically set, and the preset time interval may be 20s, for example.
In this step, after the vehicle data is acquired, the preset time interval may be obtained and the historical time period divided into a plurality of sub-time periods according to that interval; the data within each sub-time period constitutes one piece of sub-data.
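The division in step S32 can be sketched as grouping timestamped records into fixed-width windows; 20 s is the example interval given above, and the record format here is an assumption for illustration.

```python
def split_into_subdata(records, interval=20.0):
    """Group (timestamp_seconds, payload) records, assumed sorted by
    time, into consecutive windows of `interval` seconds; each window's
    payloads form one piece of sub-data. Empty windows are skipped."""
    if not records:
        return []
    start = records[0][0]
    windows = {}
    for ts, payload in records:
        idx = int((ts - start) // interval)
        windows.setdefault(idx, []).append(payload)
    return [windows[i] for i in sorted(windows)]
```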
S33, regarding each piece of sub-data, taking the environment image in the sub-data as the historical environment image, and determining an image label corresponding to the historical environment image to obtain the historical image data.
In this step, after a plurality of sub-data are obtained, an environmental image in the sub-data may be determined, and the environmental image may be taken as the history environmental image. Then, the sub-data may be input into a plurality of pre-trained tag acquisition models to acquire image tags output by each of the tag acquisition models, and different tag acquisition models are used to acquire different image tags.
The tag acquisition model may include an image type acquisition model, a point-in-time acquisition model, a period acquisition model, and the like, among others.
For each piece of sub-data, the sub-data may be input into the image type acquisition model, the time point acquisition model, and the time period acquisition model, and the image type tag, time point tag, and time period tag output by the respective models may be acquired. It should be noted that the image tags corresponding to different historical environment images may differ; for example, the image tags corresponding to historical environment image A may include an image type tag and a time point tag, while those corresponding to historical environment image B may include an image type tag and a time period tag.
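The fan-out through several tag acquisition models can be sketched as below. Each "model" here is just a callable returning one kind of tag (or nothing), standing in for the trained models the description refers to; all names and the dict-based sub-data format are assumptions.

```python
# Stand-ins for the trained tag acquisition models (illustrative only):
# each returns its kind of tag, or None when the sub-data lacks it.
def image_type_model(sub):
    return sub.get("image_type")


def time_point_model(sub):
    return sub.get("time_point")


def time_period_model(sub):
    return sub.get("time_period")


def collect_tags(sub_data, models):
    """Run one piece of sub-data through every tag acquisition model
    and keep the non-empty outputs, so different images naturally end
    up with different tag sets."""
    return [tag for tag in (m(sub_data) for m in models) if tag is not None]
```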
In one possible implementation manner, after each tag acquisition model outputs the image tag, determining a target message queue corresponding to the sub-data from a plurality of preset message queues according to the vehicle identifier corresponding to the sub-data; the historical image data is stored to a search database via the target message queue. Wherein the message queue may be a prior art Kafka message queue.
Illustratively, FIG. 4 is a flow chart illustrating a data storage process according to an exemplary embodiment. As shown in FIG. 4, the historical image data may be stored to the search database via message queue 1, message queue 2, or message queue 3. In this way, the historical image data of the same vehicle is always stored to the search database through the same message queue, avoiding data loss caused by concurrency conflicts during storage. It should be noted that the message queues in FIG. 4 are merely exemplary; more message queues may be included, which is not limited in this disclosure.
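The queue selection described above amounts to a stable mapping from vehicle identifier to queue index, so that all data from one vehicle always passes through the same queue. A minimal sketch, assuming three queues (a Kafka deployment could achieve the same effect with key-based partitioning):

```python
import hashlib

NUM_QUEUES = 3  # e.g. message queues 1-3 in FIG. 4

def target_queue(vehicle_id: str, num_queues: int = NUM_QUEUES) -> int:
    """Map a vehicle identifier to a fixed message-queue index.

    A stable digest (not Python's randomized hash()) guarantees the
    same vehicle routes to the same queue across processes and restarts.
    """
    digest = hashlib.sha256(vehicle_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_queues

# All sub-data of one vehicle lands in one queue, so its historical
# image data is written to the search database serially, avoiding
# concurrent-write conflicts.
q1 = target_queue("vehicle-001")
q2 = target_queue("vehicle-001")
```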
When the historical image data is stored, for each piece of historical image data, the plurality of image tags corresponding to it may be aggregated into one piece of document data, for example by concatenating the image tags with an existing concatenation algorithm, to obtain tag document data corresponding to the historical image data, and the tag document data is stored. In this way, each image tag does not need to be stored separately, which reduces the frequency of data updates and saves memory resources.
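The tag-aggregation step can be sketched as below; the JSON document layout is an assumption for illustration, since the embodiment only requires that the tags be combined into a single piece of document data:

```python
import json

def build_tag_document(history_id, image_tags):
    """Aggregate all image tags of one piece of historical image data
    into a single document, so the tags are stored and updated as one
    unit instead of one record per tag."""
    return json.dumps({"history_id": history_id, "tags": image_tags},
                      sort_keys=True)

doc = build_tag_document(
    "hist-42", {"image_type": "rainy_day", "time_period": "evening"})
```

Writing one document per piece of historical image data is what lets the search database index all of its tags with a single insert.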
By adopting the method, the corresponding image label can be set for each historical image data, and the sample image can be quickly searched by comparing the image label with the model keywords during image searching, so that the efficiency of image searching is improved.
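The retrieval step itself reduces to selecting the historical image data whose tags match at least one model keyword. A sketch with hypothetical field names (a real system would issue this as a query against the search database):

```python
def retrieve_target_images(historical_image_data, model_keywords):
    """Return the historical environment images whose image tags
    match at least one keyword of the target autopilot model."""
    keywords = set(model_keywords)
    return [item["environment_image"]
            for item in historical_image_data
            if keywords & set(item["image_tags"])]

data = [
    {"environment_image": "img_a", "image_tags": ["rainy_day", "evening"]},
    {"environment_image": "img_b", "image_tags": ["sunny_day", "noon"]},
]
samples = retrieve_target_images(data, ["rainy_day"])
# samples == ["img_a"]
```

Because the comparison runs on short tag strings rather than on image content, it is fast regardless of image size, which is the efficiency gain the paragraph above describes.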
Fig. 5 is a block diagram of an image retrieval apparatus according to an exemplary embodiment, as shown in fig. 5, the apparatus may include:
a first determining module 501 configured to determine model keywords corresponding to a target autopilot model to be trained;
a retrieving module 502 configured to retrieve a plurality of target environment images matching the model keyword from a plurality of historical environment images according to image tags corresponding to the plurality of historical environment images, the image tags being used for characterizing attribute information corresponding to the historical environment images;
a first acquisition module 503 is configured to use a plurality of the target environmental images as sample images for training the target autopilot model.
Optionally, the retrieving module 502 is further configured to:
retrieving a plurality of target image tags matched with the model keywords from a plurality of image tags corresponding to the historical environment images;
and taking the historical environment images corresponding to the plurality of target image labels as the target environment images.
Optionally, fig. 6 is a block diagram of another image retrieval apparatus according to an exemplary embodiment, as shown in fig. 6, the apparatus further comprising:
a display module 504 configured to, in response to a preview request message triggered by a user, obtain a pre-stored preview image corresponding to the sample image and display the preview image, where the preview image occupies less storage space than the sample image.
Alternatively, fig. 7 is a block diagram of another image retrieval apparatus according to an exemplary embodiment, as shown in fig. 7, the apparatus further comprising:
a second obtaining module 505, configured to obtain a plurality of history image data that are generated in advance, where the history image data includes the history environment image and an image tag corresponding to the history environment image;
the retrieval module 502 is further configured to:
retrieving a plurality of target historical image data matching the model keyword from a plurality of the historical image data according to image tags in the plurality of the historical image data;
and taking the historical environment images in the plurality of target historical image data as the target environment images.
Optionally, the apparatus further comprises:
a data generation module 506 configured to acquire vehicle data including an environmental image, sensor data, controller data, map data over a historical period of time; dividing the vehicle data into a plurality of sub-data according to a preset time interval; and regarding each piece of sub-data, taking the environment image in the sub-data as the historical environment image, and determining an image label corresponding to the historical environment image to obtain the historical image data.
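The division into sub-data by a preset time interval can be sketched as fixed-window bucketing of timestamped records; the record fields below are hypothetical stand-ins for the environment image, sensor, controller, and map data:

```python
from collections import defaultdict

def split_into_sub_data(vehicle_data, interval_s=60):
    """Group timestamped vehicle records into fixed-length time
    windows; each window becomes one piece of sub-data."""
    windows = defaultdict(list)
    for record in vehicle_data:
        windows[record["t"] // interval_s].append(record)
    # one piece of sub-data per window, in time order
    return [windows[k] for k in sorted(windows)]

records = [{"t": 5, "kind": "image"}, {"t": 61, "kind": "sensor"},
           {"t": 70, "kind": "image"}]
sub_data = split_into_sub_data(records, interval_s=60)
# two windows: [0, 60) and [60, 120)
```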
Optionally, the image tag includes a plurality of image tags, and the determining the image tag corresponding to the historical environment image includes:
inputting the sub data into a plurality of pre-trained tag acquisition models to acquire image tags output by each tag acquisition model, wherein different tag acquisition models are used for acquiring different image tags.
Optionally, the apparatus further comprises:
a second determining module 507, configured to determine, after each tag acquisition model outputs the image tag, a target message queue corresponding to the sub-data from a plurality of preset message queues according to the vehicle identifier corresponding to the sub-data;
a storage module 508 is configured to store the historical image data to a search database via the target message queue.
Optionally, the apparatus further comprises:
a third determining module 509 configured to determine whether an image tag corresponding to the historical image data is a preset tag;
the storage module 508 is further configured to:
and storing the historical image data to the search database through the target message queue under the condition that the image label corresponding to the historical image data is the preset label.
Optionally, the apparatus further comprises:
A fourth determining module 510 is configured to determine, for each of the sub-data, a preview image corresponding to the environment image in the sub-data, and store the preview image.
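The preview-image idea, a smaller image stored alongside the full sample, can be sketched as simple downsampling. The nested-list pixel layout is a toy assumption; a real implementation would produce a compressed thumbnail:

```python
def make_preview(pixels, stride=2):
    """Downsample an image (a list of pixel rows) by keeping every
    `stride`-th row and column, so the preview occupies a fraction of
    the storage of the full environment image."""
    return [row[::stride] for row in pixels[::stride]]

full = [[(r, c) for c in range(4)] for r in range(4)]  # 4x4 "image"
preview = make_preview(full)                           # 2x2 preview
```

Serving this smaller image in response to a preview request avoids transferring and rendering the full-resolution sample image.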
According to the device, the required sample images can be quickly retrieved from the plurality of historical environment images according to the image labels, and the sample images can train the target automatic driving model in a targeted manner, so that the accuracy of the target automatic driving model obtained by training is higher, and the safety of automatic driving is improved.
The specific manner in which the various modules perform the operations in the apparatus of the above embodiments has been described in detail in connection with the embodiments of the method and will not be repeated here.
The present disclosure also provides a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the image retrieval method provided by the present disclosure.
The apparatus may be a stand-alone electronic device or part of one. For example, in one embodiment, the apparatus may be an integrated circuit (IC) or a chip, where the integrated circuit may be a single IC or a collection of ICs; the chip may include, but is not limited to, a GPU (graphics processing unit), CPU (central processing unit), FPGA (field-programmable gate array), DSP (digital signal processor), ASIC (application-specific integrated circuit), SoC (system on chip), and the like. The integrated circuit or chip may be configured to execute executable instructions (or code) to implement the image retrieval method described above. The executable instructions may be stored on the integrated circuit or chip, or may be retrieved from another device or apparatus; for example, the integrated circuit or chip may include a second processor, a memory, and an interface for communicating with other devices. The executable instructions may be stored in the memory, and when executed by the second processor, implement the image retrieval method described above; alternatively, the integrated circuit or chip may receive the executable instructions through the interface and transmit them to the second processor for execution, so as to implement the image retrieval method described above.
Referring to fig. 8, fig. 8 is a functional block diagram of a vehicle 600 according to an exemplary embodiment. The vehicle 600 may be configured in a fully or partially autonomous mode. For example, the vehicle 600 may obtain environmental information of its surroundings through the perception system 620 and derive an automatic driving strategy based on analysis of the surrounding environmental information to achieve full automatic driving, or present the analysis results to the user to achieve partial automatic driving.
The vehicle 600 may include various subsystems, such as an infotainment system 610, a perception system 620, a decision control system 630, a drive system 640, and a computing platform 650. Alternatively, vehicle 600 may include more or fewer subsystems, and each subsystem may include multiple components. In addition, each of the subsystems and components of vehicle 600 may be interconnected via wires or wirelessly.
In some embodiments, the infotainment system 610 may include a communication system 611, an entertainment system 612, and a navigation system 613.
The communication system 611 may comprise a wireless communication system that can communicate wirelessly with one or more devices, either directly or via a communication network. For example, the wireless communication system may use 3G cellular communication such as CDMA, EVDO, or GSM/GPRS, 4G cellular communication such as LTE, or 5G cellular communication. The wireless communication system may communicate with a wireless local area network (WLAN) using WiFi. In some embodiments, the wireless communication system may communicate directly with a device using an infrared link, Bluetooth, or ZigBee. Other wireless protocols, such as various vehicle communication systems, are also possible; for example, the wireless communication system may include one or more dedicated short range communications (DSRC) devices, which may carry public and/or private data communications between vehicles and/or roadside stations.
Entertainment system 612 may include a display device, a microphone, and a speaker. Based on the entertainment system, a user may listen to broadcasts or play music in the vehicle; alternatively, a mobile phone may communicate with the vehicle and mirror its screen onto the display device. The display device may be a touch screen, which the user may operate by touching it.
In some cases, the user's voice signal may be acquired through the microphone, and certain controls of the vehicle 600 may be implemented based on analysis of the voice signal, such as adjusting the temperature inside the vehicle. In other cases, music may be played to the user through the speaker.
The navigation system 613 may include a map service provided by a map provider to provide navigation of a travel route for the vehicle 600, and the navigation system 613 may be used with the global positioning system 621 and the inertial measurement unit 622 of the vehicle. The map service provided by the map provider may be a two-dimensional map or a high-precision map.
The perception system 620 may include several types of sensors that sense information about the environment surrounding the vehicle 600. For example, the sensing system 620 may include a global positioning system 621 (which may be a GPS system, a BeiDou system, or another positioning system), an inertial measurement unit (IMU) 622, a lidar 623, a millimeter-wave radar 624, an ultrasonic radar 625, and a camera 626. The sensing system 620 may also include sensors that monitor internal systems of the vehicle 600 (e.g., an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensor data from one or more of these sensors may be used to detect objects and their corresponding characteristics (location, shape, direction, speed, etc.). Such detection and identification is a critical function of the safe operation of the vehicle 600.
The global positioning system 621 is used to estimate the geographic location of the vehicle 600.
The inertial measurement unit 622 is configured to sense a change in the pose of the vehicle 600 based on inertial acceleration. In some embodiments, inertial measurement unit 622 may be a combination of an accelerometer and a gyroscope.
The lidar 623 uses a laser to sense objects in the environment in which the vehicle 600 is located. In some embodiments, lidar 623 may include one or more laser sources, a laser scanner, and one or more detectors, among other system components.
The millimeter-wave radar 624 utilizes radio signals to sense objects within the surrounding environment of the vehicle 600. In some embodiments, millimeter-wave radar 624 may be used to sense the speed and/or heading of an object in addition to sensing the object.
The ultrasonic radar 625 may utilize ultrasonic signals to sense objects around the vehicle 600.
The image pickup device 626 is used to capture image information of the surrounding environment of the vehicle 600. The image capturing device 626 may include a monocular camera, a binocular camera, a structured light camera, a panoramic camera, etc., and the image information acquired by the image capturing device 626 may include still images or video stream information.
The decision control system 630 includes a computing system 631 that makes analysis decisions based on information acquired by the perception system 620. The decision control system 630 also includes a vehicle controller 632 that controls the powertrain of the vehicle 600, as well as a steering system 633, a throttle 634, and a braking system 635 for controlling the vehicle 600.
The computing system 631 may be operable to process and analyze the various information acquired by the perception system 620 in order to identify targets, objects, and/or features in the environment surrounding the vehicle 600. The targets may include pedestrians or animals and the objects and/or features may include traffic signals, road boundaries, and obstacles. The computing system 631 may use object recognition algorithms, in-motion restoration structure (Structure from Motion, SFM) algorithms, video tracking, and the like. In some embodiments, the computing system 631 may be used to map the environment, track objects, estimate the speed of objects, and so forth. The computing system 631 may analyze the acquired various information and derive control strategies for the vehicle.
The vehicle controller 632 may be configured to coordinate control of the power battery and the engine 641 of the vehicle to enhance the power performance of the vehicle 600.
Steering system 633 is operable to adjust the direction of travel of vehicle 600. For example, in one embodiment it may be a steering wheel system.
Throttle 634 is used to control the operating speed of engine 641 and thereby the speed of vehicle 600.
The braking system 635 is used to control deceleration of the vehicle 600. The braking system 635 may use friction to slow the wheels 644. In some embodiments, the braking system 635 may convert kinetic energy of the wheels 644 into electrical current. The braking system 635 may take other forms to slow the rotational speed of the wheels 644 to control the speed of the vehicle 600.
The drive system 640 may include components that provide powered movement of the vehicle 600. In one embodiment, the drive system 640 may include an engine 641, an energy source 642, a transmission 643, and wheels 644. The engine 641 may be an internal combustion engine, an electric motor, an air compression engine, or other types of engine combinations, such as a hybrid engine of a gasoline engine and an electric motor, or a hybrid engine of an internal combustion engine and an air compression engine. The engine 641 converts the energy source 642 into mechanical energy.
Examples of energy sources 642 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electricity. The energy source 642 may also provide energy to other systems of the vehicle 600.
The transmission 643 may transfer mechanical power from the engine 641 to wheels 644. The transmission 643 may include a gearbox, a differential, and a driveshaft. In one embodiment, the transmission 643 may also include other devices, such as a clutch. Wherein the drive shaft may include one or more axles that may be coupled to one or more wheels 644.
Some or all of the functions of the vehicle 600 are controlled by the computing platform 650. The computing platform 650 may include at least one first processor 651, which first processor 651 may execute instructions 653 stored in a non-transitory computer-readable medium, such as memory 652. In some embodiments, computing platform 650 may also be a plurality of computing devices that control individual components or subsystems of vehicle 600 in a distributed manner.
The first processor 651 may be any conventional processor, such as a commercially available CPU. Alternatively, the first processor 651 may also include, for example, a graphics processing unit (GPU), a field-programmable gate array (FPGA), a system on chip (SoC), an application-specific integrated circuit (ASIC), or a combination thereof. Although FIG. 8 functionally illustrates the processor, memory, and other elements of a computer in the same block, it will be understood by those of ordinary skill in the art that the processor, computer, or memory may in fact comprise multiple processors, computers, or memories that may or may not be stored within the same physical housing. For example, the memory may be a hard disk drive or other storage medium located in a different housing than the computer. Thus, references to a processor or computer will be understood to include references to a collection of processors, computers, or memories that may or may not operate in parallel. Rather than a single processor performing all of the steps described herein, some components, such as the steering component and the deceleration component, may each have their own processor that performs only calculations related to that component's specific function.
In the presently disclosed embodiments, the first processor 651 may perform the image retrieval method described above.
In various aspects described herein, the first processor 651 can be located remotely from the vehicle and communicate with it wirelessly. In other aspects, some of the processes described herein are executed on a processor disposed within the vehicle while others are executed by a remote processor, including taking the steps necessary to perform a single maneuver.
In some embodiments, memory 652 may contain instructions 653 (e.g., program logic), which instructions 653 may be executed by first processor 651 to perform various functions of vehicle 600. Memory 652 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of infotainment system 610, perception system 620, decision control system 630, drive system 640.
In addition to instructions 653, memory 652 may store data such as road maps, route information, vehicle location, direction, speed, and other such vehicle data, as well as other information. Such information may be used by the vehicle 600 and the computing platform 650 during operation of the vehicle 600 in autonomous, semi-autonomous, and/or manual modes.
The computing platform 650 may control the functions of the vehicle 600 based on inputs received from various subsystems (e.g., the drive system 640, the perception system 620, and the decision control system 630). For example, computing platform 650 may utilize input from decision control system 630 in order to control steering system 633 to avoid obstacles detected by perception system 620. In some embodiments, computing platform 650 is operable to provide control over many aspects of vehicle 600 and its subsystems.
Alternatively, one or more of these components may be mounted separately from or associated with vehicle 600. For example, the memory 652 may exist partially or completely separate from the vehicle 600. The above components may be communicatively coupled together in a wired and/or wireless manner.
Alternatively, the above components are only an example, and in practical applications, components in the above modules may be added or deleted according to actual needs, and fig. 8 should not be construed as limiting the embodiments of the present disclosure.
An autonomous car traveling on a road, such as the vehicle 600 above, may identify objects within its surrounding environment to determine an adjustment to its current speed. The object may be another vehicle, a traffic control device, or another type of object. In some examples, each identified object may be considered independently, and its respective characteristics, such as its current speed, acceleration, and spacing from the vehicle, may be used to determine the speed to which the autonomous car is to adjust.
Alternatively, the vehicle 600 or a sensing and computing device associated with the vehicle 600 (e.g., computing system 631, computing platform 650) may predict the behavior of an identified object based on the characteristics of the identified object and the state of the surrounding environment (e.g., traffic, rain, ice on the road, etc.). Optionally, since the behaviors of the identified objects depend on one another, all of the identified objects can also be considered together to predict the behavior of a single identified object. The vehicle 600 is able to adjust its speed based on the predicted behavior of the identified objects. In other words, the autonomous car is able to determine what stable state the vehicle will need to adjust to (e.g., accelerate, decelerate, or stop) based on the predicted behavior of the objects. In this process, other factors may also be considered to determine the speed of the vehicle 600, such as the lateral position of the vehicle 600 in the road on which it is traveling, the curvature of the road, and the proximity of static and dynamic objects.
In addition to providing instructions to adjust the speed of the autonomous vehicle, the computing device may also provide instructions to modify the steering angle of the vehicle 600 so that the autonomous vehicle follows a given trajectory and/or maintains safe lateral and longitudinal distances from objects in the vicinity of the autonomous vehicle (e.g., vehicles in adjacent lanes on a roadway).
The vehicle 600 may be various types of traveling tools, such as a car, a truck, a motorcycle, a bus, a ship, an airplane, a helicopter, a recreational vehicle, a train, etc., and embodiments of the present disclosure are not particularly limited.
In another exemplary embodiment, a computer program product is also provided, comprising a computer program executable by a programmable apparatus, the computer program having code portions for performing the above-described image retrieval method when executed by the programmable apparatus.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any adaptations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (12)

1. An image retrieval method, comprising:
determining model keywords corresponding to a target automatic driving model to be trained;
according to image labels corresponding to a plurality of historical environment images, a plurality of target environment images matched with the model keywords are retrieved from the historical environment images, and the image labels are used for representing attribute information corresponding to the historical environment images;
taking the plurality of target environment images as sample images for training the target automatic driving model;
the determining the model keywords corresponding to the target automatic driving model to be trained comprises the following steps:
determining model keywords corresponding to the target automatic driving model according to the function of the target automatic driving model;
before the image labels corresponding to the plurality of historical environment images are searched for the plurality of target environment images matched with the model keywords from the plurality of historical environment images, the method further comprises:
acquiring a plurality of pre-generated historical image data, wherein the historical image data comprises the historical environment image and image tags corresponding to the historical environment image;
the history image data is generated in advance by:
acquiring vehicle data in a historical time period, wherein the vehicle data comprises environment images, sensor data, controller data and map data;
dividing the vehicle data into a plurality of sub-data according to a preset time interval;
and regarding each piece of sub-data, taking the environment image in the sub-data as the historical environment image, and determining an image tag corresponding to the historical environment image to obtain the historical image data.
2. The method of claim 1, wherein retrieving a plurality of target environment images matching the model keyword from the plurality of historical environment images according to image tags corresponding to the plurality of historical environment images comprises:
retrieving a plurality of target image tags matched with the model keywords from the image tags corresponding to the plurality of historical environment images;
and taking the historical environment images corresponding to the target image labels as the target environment images.
3. The method according to claim 1, wherein the method further comprises:
responding to a preview request message triggered by a user, acquiring a pre-stored preview image corresponding to the sample image, and displaying the preview image, wherein the storage space occupied by the preview image is smaller than that of the sample image.
4. A method according to any one of claims 1-3, wherein retrieving a plurality of target environmental images matching the model keyword from a plurality of historical environmental images according to image tags corresponding to the plurality of historical environmental images comprises:
retrieving a plurality of target historical image data matched with the model keyword from the plurality of historical image data according to the image tags in the plurality of historical image data;
and taking the historical environment images in the plurality of target historical image data as the target environment images.
5. The method of claim 1, wherein the image tags comprise a plurality, and wherein the determining the image tag corresponding to the historical environmental image comprises:
inputting the sub data into a plurality of pre-trained tag acquisition models to acquire image tags output by each tag acquisition model, wherein different tag acquisition models are used for acquiring different image tags.
6. The method of claim 5, wherein the method further comprises:
after each tag acquisition model outputs the image tag, determining a target message queue corresponding to the sub-data from a plurality of preset message queues according to the vehicle identifier corresponding to the sub-data;
and storing the historical image data to a search database through the target message queue.
7. The method of claim 6, wherein prior to said storing said historical image data to a search database via said target message queue, said method further comprises:
determining whether an image tag corresponding to the historical image data is a preset tag or not;
the storing the historical image data to a search database via the target message queue includes:
and storing the historical image data to the search database through the target message queue under the condition that the image label corresponding to the historical image data is the preset label.
8. The method according to claim 1, wherein the method further comprises:
and determining a preview image corresponding to the environment image in the sub data according to each sub data, and storing the preview image.
9. An image retrieval apparatus, comprising:
the first determining module is configured to determine model keywords corresponding to a target automatic driving model to be trained;
a retrieval module configured to retrieve a plurality of target environment images matching the model keyword from a plurality of historical environment images according to image tags corresponding to the historical environment images, the image tags being used for characterizing attribute information corresponding to the historical environment images;
a first acquisition module configured to take the plurality of target environmental images as sample images for training the target autopilot model;
the first determination module is configured to:
determining model keywords corresponding to the target automatic driving model according to the function of the target automatic driving model;
the apparatus further comprises:
a second acquisition module for acquiring a plurality of pre-generated historical image data, wherein the historical image data comprises the historical environment image and image tags corresponding to the historical environment image;
a data generation module configured to acquire vehicle data over a historical period of time, the vehicle data including an environmental image, sensor data, controller data, map data; dividing the vehicle data into a plurality of sub-data according to a preset time interval; and regarding each piece of sub-data, taking the environment image in the sub-data as the historical environment image, and determining an image tag corresponding to the historical environment image to obtain the historical image data.
10. A vehicle, characterized by comprising:
a first processor;
a memory for storing processor-executable instructions;
wherein the first processor is configured to:
steps for carrying out the method of any one of claims 1-8.
11. A computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the steps of the method of any of claims 1-8.
12. A chip comprising a second processor and an interface; the second processor is configured to read instructions to perform the method of any one of claims 1-8.
CN202210836988.XA 2022-07-15 2022-07-15 Image retrieval method, device, vehicle, storage medium and chip Active CN115203457B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210836988.XA CN115203457B (en) 2022-07-15 2022-07-15 Image retrieval method, device, vehicle, storage medium and chip


Publications (2)

Publication Number Publication Date
CN115203457A CN115203457A (en) 2022-10-18
CN115203457B true CN115203457B (en) 2023-11-14

Family

ID=83582599

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210836988.XA Active CN115203457B (en) 2022-07-15 2022-07-15 Image retrieval method, device, vehicle, storage medium and chip

Country Status (1)

Country Link
CN (1) CN115203457B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115937421B (en) * 2022-12-13 2024-04-02 昆易电子科技(上海)有限公司 Method for generating simulated video data, image generating device and readable storage medium

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107609502A (en) * 2017-09-05 2018-01-19 百度在线网络技术(北京)有限公司 Method and apparatus for controlling automatic driving vehicle
US10061322B1 (en) * 2017-04-06 2018-08-28 GM Global Technology Operations LLC Systems and methods for determining the lighting state of a vehicle
CN111104536A (en) * 2019-12-24 2020-05-05 Oppo广东移动通信有限公司 Picture searching method, device, terminal and storage medium
CN111368789A (en) * 2020-03-18 2020-07-03 腾讯科技(深圳)有限公司 Image recognition method and device, computer equipment and storage medium
CN111694973A (en) * 2020-06-09 2020-09-22 北京百度网讯科技有限公司 Model training method and device for automatic driving scene and electronic equipment
WO2021006870A1 (en) * 2019-07-08 2021-01-14 Huawei Technologies Co., Ltd. Vehicular autonomy-level functions
CN113255445A (en) * 2021-04-20 2021-08-13 杭州飞步科技有限公司 Multitask model training and image processing method, device, equipment and storage medium
CN113762044A (en) * 2021-05-06 2021-12-07 腾讯科技(深圳)有限公司 Road recognition method, road recognition device, computer equipment and storage medium
CN113989785A (en) * 2021-12-01 2022-01-28 天津天瞳威势电子科技有限公司 Driving scene classification method, device, equipment and storage medium
CN114118413A (en) * 2021-11-30 2022-03-01 上海商汤临港智能科技有限公司 Network training and equipment control method, device, equipment and storage medium
CN114461853A (en) * 2022-01-28 2022-05-10 腾讯科技(深圳)有限公司 Training sample generation method, device and equipment of video scene classification model
CN114610628A (en) * 2022-03-16 2022-06-10 阿波罗智联(北京)科技有限公司 Scene library establishing and testing method, device, equipment, medium and program product

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Automatic Generation Algorithm and Implementation of a Cognitive Map for Autonomous Driving"; Shao Chuang; China Master's Theses Full-text Database (Engineering Science and Technology II), No. 01; C035-522 *

Also Published As

Publication number Publication date
CN115203457A (en) 2022-10-18

Similar Documents

Publication Publication Date Title
US20240017719A1 (en) Mapping method and apparatus, vehicle, readable storage medium, and chip
CN115123257B (en) Pavement deceleration strip position identification method and device, vehicle, storage medium and chip
CN115042821B (en) Vehicle control method, vehicle control device, vehicle and storage medium
CN114882464B (en) Multi-task model training method, multi-task processing method, device and vehicle
CN115035494A (en) Image processing method, image processing device, vehicle, storage medium and chip
CN115203457B (en) Image retrieval method, device, vehicle, storage medium and chip
CN115056784B (en) Vehicle control method, device, vehicle, storage medium and chip
CN115100630B (en) Obstacle detection method, obstacle detection device, vehicle, medium and chip
CN115205311B (en) Image processing method, device, vehicle, medium and chip
CN115202234B (en) Simulation test method and device, storage medium and vehicle
CN115330923B (en) Point cloud data rendering method and device, vehicle, readable storage medium and chip
CN115221151B (en) Vehicle data transmission method and device, vehicle, storage medium and chip
US20230410535A1 (en) Method and apparatus for generating lane line, vehicle, storage medium and chip
CN114842440B (en) Automatic driving environment sensing method and device, vehicle and readable storage medium
CN114842455B (en) Obstacle detection method, device, equipment, medium, chip and vehicle
CN115205848A (en) Target detection method, target detection device, vehicle, storage medium and chip
CN115334109A (en) System architecture, transmission method, vehicle, medium and chip for traffic signal identification
CN115205179A (en) Image fusion method and device, vehicle and storage medium
CN115042814A (en) Traffic light state identification method and device, vehicle and storage medium
CN114880408A (en) Scene construction method, device, medium and chip
CN115147794B (en) Lane line determining method, lane line determining device, vehicle, medium and chip
CN115063639B (en) Model generation method, image semantic segmentation device, vehicle and medium
CN114972824B (en) Rod detection method, device, vehicle and storage medium
CN115082772B (en) Location identification method, location identification device, vehicle, storage medium and chip
CN115082886B (en) Target detection method, device, storage medium, chip and vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant