CN114937185A - Image sample acquisition method and device, electronic equipment and storage medium - Google Patents

Image sample acquisition method and device, electronic equipment and storage medium Download PDF

Info

Publication number
CN114937185A
Authority
CN
China
Prior art keywords
detection result
image
detection
sample set
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210636995.5A
Other languages
Chinese (zh)
Inventor
吴俊法
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apollo Zhilian Beijing Technology Co Ltd
Original Assignee
Apollo Zhilian Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Apollo Zhilian Beijing Technology Co Ltd filed Critical Apollo Zhilian Beijing Technology Co Ltd
Priority to CN202210636995.5A priority Critical patent/CN114937185A/en
Publication of CN114937185A publication Critical patent/CN114937185A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an image sample acquisition method and device, an electronic device, and a storage medium, relating to the field of computer technology, and in particular to artificial intelligence fields such as automatic driving, Internet of Vehicles, intelligent cabins, computer vision, and deep learning. The implementation scheme is as follows: acquire an image and a first detection result of the image from a first vehicle end; detect the image with a first detection model to obtain a second detection result of the image; and determine the sample set to which the image belongs according to the comparison result between the first detection result and the second detection result. Because the sample set to which the image belongs is determined by comparing the detection result from the vehicle end with the detection result of the first detection model, the cost of acquiring image samples is reduced.

Description

Image sample acquisition method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of computer technology, in particular to artificial intelligence fields such as automatic driving, Internet of Vehicles, intelligent cabins, computer vision, and deep learning, and specifically to an image sample acquisition method and device, an electronic device, and a storage medium.
Background
In scenarios such as visual recognition and AR (Augmented Reality) navigation, collecting image samples has always been difficult, and the cost is high when image samples are collected solely through manual labeling.
Disclosure of Invention
The application provides an image sample acquisition method, an image sample acquisition device, electronic equipment and a storage medium.
According to an aspect of the present application, there is provided an image sample acquisition method, including:
acquiring an image from a first vehicle end and a first detection result of the image;
detecting the image by adopting a first detection model to obtain a second detection result of the image;
and determining a sample set to which the image belongs according to a comparison result between the first detection result and the second detection result.
According to another aspect of the present application, there is provided an image sample acquiring device comprising:
the first acquisition module is used for acquiring an image from a first vehicle end and a first detection result of the image;
the second acquisition module is used for detecting the image by adopting the first detection model so as to acquire a second detection result of the image;
and the determining module is used for determining a sample set to which the image belongs according to a comparison result between the first detection result and the second detection result.
According to another aspect of the present application, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the above embodiments.
According to another aspect of the present application, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method according to the above-described embodiments.
According to another aspect of the present application, a computer program product is provided, comprising a computer program which, when executed by a processor, performs the steps of the method of the above-described embodiments.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
fig. 1 is a schematic flowchart of an image sample collecting method according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of an image sample acquisition method according to another embodiment of the present application;
fig. 3 is a schematic flowchart of an image sample acquisition method according to another embodiment of the present application;
fig. 4 is a schematic structural diagram of an image sample acquisition device according to an embodiment of the present application;
fig. 5 is a block diagram of an electronic device for implementing an image sample acquisition method according to an embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding, and these details should be regarded as exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present application. Likewise, descriptions of well-known functions and structures are omitted from the following description for clarity and conciseness.
An image sample acquisition method, an apparatus, an electronic device, and a storage medium according to embodiments of the present application are described below with reference to the drawings.
Artificial intelligence is the discipline that studies how to make computers simulate certain human thinking processes and intelligent behaviors (such as learning, reasoning, thinking, and planning), and it spans both hardware-level and software-level technologies. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, and big data processing; artificial intelligence software technologies include computer vision, speech recognition, natural language processing, deep learning, big data processing, and knowledge graph technologies.
The concept of the Internet of Vehicles derives from the Internet of Things: vehicles in motion serve as the objects of information perception, and a new generation of information and communication technologies provides network connections between vehicles and X (vehicles, people, roads, and service platforms). This improves the overall intelligent driving level of vehicles, provides users with safe, comfortable, intelligent, and efficient driving experiences and traffic services, and at the same time improves traffic operation efficiency and the intelligence of social traffic services.
The intelligent cabin transforms the riding space inside the vehicle so that the driving and riding experience becomes more comfortable and intelligent.
Computer vision is the science of how to make machines "see": cameras and computers replace human eyes to identify, track, and measure targets, and further image processing is performed so that the result is an image more suitable for human observation or for transmission to an instrument for detection.
Deep learning is a relatively new research direction in the field of machine learning. It learns the intrinsic laws and representation levels of sample data, and the information obtained in the learning process is very helpful for interpreting data such as text, images, and sounds. Its ultimate goal is to give machines human-like analysis and learning capabilities, enabling them to recognize data such as text, images, and sounds.
Fig. 1 is a schematic flowchart of an image sample collecting method according to an embodiment of the present disclosure.
The image sample acquisition method may be executed by the server side: the detection result of the vehicle-end model is compared with the detection result of the server-side first detection model, and the sample set to which the image belongs is determined according to the comparison result, thereby reducing the cost of sample acquisition.
The electronic device may be any device with computing capability, for example, a personal computer, a mobile terminal, a server, and the like, and the mobile terminal may be a hardware device with various operating systems, touch screens, and/or display screens, such as an in-vehicle device, a mobile phone, a tablet computer, a personal digital assistant, a wearable device, and the like.
As shown in fig. 1, the image sample collection method includes:
Step 101, obtaining an image from a first vehicle end and a first detection result of the image.
In the application, a detection model can be arranged in the first vehicle end, and the detection model can be used for detecting the image to obtain a first detection result, or other detection modes can be adopted for detecting the image to obtain the first detection result.
Taking the AR navigation scene as an example, the detection model at the first vehicle end may be a model for detecting whether there is a preceding vehicle in the image, or a model for detecting whether there is a pedestrian in the image, or a model for detecting whether there is a non-motor vehicle in the image, or the like.
In the present application, the first vehicle end is installed on a vehicle, and images can be collected through a camera on the vehicle. The first vehicle end can use a detection model to detect the images to obtain the first detection results, and then send the images and the corresponding detection results to the server side.
For example, after the road-test data acquisition configuration item on the vehicle is enabled, the vehicle can automatically collect image data during the road test, and the image detection model on the vehicle can detect the images to obtain detection results. After the road test is finished, if the network currently used by the vehicle is a wireless network, the user can be prompted to upload the images and the corresponding detection results, or they can be uploaded directly. If the images and the corresponding detection results are not uploaded, a prompt to upload them can be given the next time a wireless connection is established.
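As a rough illustration of this vehicle-end flow, the following Python sketch buffers detected frames and uploads them only over a wireless network; all names (on_frame, upload_after_road_test, the stand-in callables) are hypothetical and are not part of the disclosure:

collected = []

def on_frame(image, detect):
    """Run the on-vehicle detection model on a collected frame and buffer the image/result pair."""
    first_result, first_confidence = detect(image)
    collected.append({"image": image, "result": first_result, "confidence": first_confidence})

def upload_after_road_test(is_wireless, upload):
    """Upload buffered pairs over Wi-Fi; otherwise keep them for the next wireless connection."""
    if not is_wireless:
        return False
    while collected:
        upload(collected.pop(0))
    return True

# Toy usage with stand-in callables.
on_frame("frame_001.jpg", detect=lambda img: ("preceding vehicle", 0.92))
upload_after_road_test(is_wireless=True, upload=print)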
In the application, the first vehicle end may also detect each frame image in the acquired video, obtain a first detection result of each frame image, and send the video and the first detection result of each frame image in the video to the server end.
Step 102, detecting the image by using the first detection model to obtain a second detection result of the image.
In the present application, the server is provided with the first detection model, and the server can detect the image using the first detection model to obtain the second detection result. If the first detection result was obtained by the first vehicle end detecting the image with a detection model, then the first detection model has the same function as the detection model used by the first vehicle end; for example, both are models for detecting whether the image contains a pedestrian.
After the server acquires the image from the first vehicle end and the first detection result of the image, the server may detect the image by using the first detection model to acquire the second detection result of the image.
Step 103, determining a sample set to which the image belongs according to a comparison result between the first detection result and the second detection result.
In the application, the first detection result and the second detection result may be compared, and the sample set to which the image belongs may be determined according to the comparison result of the first detection result and the second detection result, so as to divide the image into the corresponding sample sets.
The sample set to which the image belongs may be one of a standard sample set, a false detection sample set, a missed detection sample set, and the like.
For example, if the first detection result is consistent with the second detection result, the first detection result may be considered correct, and the sample set to which the image belongs may be determined to be the standard sample set. For another example, a first confidence of the first detection result from the vehicle end and a second confidence of the second detection result output by the first detection model may also be obtained, and when the first detection result is the same as the second detection result and both confidences are greater than a preset threshold, the sample set to which the image belongs is determined to be the standard sample set.
In order to improve the accuracy of image sample acquisition, in the present application the server may also compare the first detection result with the second detection result for each image within a continuous period of time, and if the first detection results are the same as the second detection results, one frame may be selected from those frames and added to the standard sample set.
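As a minimal sketch of this comparison, the following Python snippet adds an image to the standard sample set only when the two detection results match and both confidences exceed a threshold; the 0.8 threshold and all names are illustrative assumptions, not values from the disclosure:

CONFIDENCE_THRESHOLD = 0.8  # hypothetical preset threshold

def add_to_standard_set(image, first_result, first_conf, second_result, second_conf, standard_set):
    """Add the image to the standard sample set when the vehicle-end (first) result
    matches the server-side (second) result and both confidences are high enough."""
    if (first_result == second_result
            and first_conf > CONFIDENCE_THRESHOLD
            and second_conf > CONFIDENCE_THRESHOLD):
        # The server-side result doubles as the label, so no manual annotation is needed.
        standard_set.append({"image": image, "label": second_result})
        return True
    return False

# Toy usage with illustrative values.
standard_samples = []
add_to_standard_set("frame_001.jpg", "pedestrian", 0.93, "pedestrian", 0.97, standard_samples)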
It should be noted that, in the technical solution of the present application, the acquisition, storage, and application of the personal information of the users involved all comply with the provisions of the relevant laws and regulations and do not violate public order and good customs.
In the embodiment of the present application, an image and a first detection result of the image are obtained from a first vehicle end; the image is detected by the first detection model to obtain a second detection result of the image; and the sample set to which the image belongs is determined according to the comparison result between the first detection result and the second detection result. In this way, the sample set to which the image belongs is determined according to the comparison between the detection result from the vehicle end and the detection result of the first detection model, which reduces the cost of acquiring image samples.
Fig. 2 is a schematic flow chart of an image sample acquisition method according to another embodiment of the present application.
As shown in fig. 2, the image sample collection method includes:
Step 201, an image from a first vehicle end and a first detection result of the image are obtained.
In this application, a detection model is disposed in the first vehicle end, and may be referred to as a second detection model for convenience of distinction, and the second detection model may be obtained by compressing the first detection model.
For example, a first detection model with higher accuracy can be obtained by training on a large amount of data, and a second detection model that is convenient to run at the vehicle end can be obtained by compressing the first detection model, for example through pruning, distillation, or quantization. Because model accuracy may be reduced in the process of compressing the first detection model, the evaluation index value of the first detection model can be higher than that of the second detection model; that is, the accuracy of the detection result of the first detection model can be higher than that of the detection result of the second detection model. Obtaining the second detection model by compressing the first detection model thus enriches the ways in which the vehicle-end model can be obtained.
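As one illustration of the compression step, the sketch below uses knowledge distillation in PyTorch to train a small student (standing in for the second detection model) against a larger teacher (standing in for the first detection model); the toy architectures, temperature, and loss weighting are assumptions for illustration and do not come from the disclosure:

import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy teacher (larger) and student (smaller) binary classifiers over 3x32x32 images.
teacher = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU(), nn.Linear(256, 2))
student = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 32), nn.ReLU(), nn.Linear(32, 2))

def distillation_loss(images, labels, temperature=4.0, alpha=0.5):
    """Blend the hard-label loss with a soft-label loss distilled from the teacher."""
    with torch.no_grad():
        teacher_logits = teacher(images)
    student_logits = student(images)
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    return alpha * hard + (1 - alpha) * soft

# Toy batch: 8 RGB 32x32 images with binary labels (e.g. "preceding vehicle" yes/no).
loss = distillation_loss(torch.randn(8, 3, 32, 32), torch.randint(0, 2, (8,)))
loss.backward()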
It should be noted that the first vehicle end may have multiple image detection models, and the first vehicle end may detect the image by using the multiple detection models to obtain corresponding detection results, and send the image and the detection results of each model to the server.
Step 202, detecting the image by using the first detection model to obtain a second detection result of the image.
In the present application, step 202 is similar to the content described in the above embodiments, and therefore is not described herein again.
The second detection model is obtained by compressing the first detection model, and the accuracy of the detection result of the first detection model is higher than that of the second detection model, so that the second detection result obtained by the detection of the first detection model can be used as a reference.
Step 203, a first confidence of a first detection result from the first vehicle end and a second confidence of a second detection result output by the first detection model are obtained.
In the present application, when the second detection model outputs the first detection result, it may also output the confidence of the first detection result; the first vehicle end may send this first confidence to the server end, so the server end can obtain the first confidence of the first detection result from the first vehicle end. The first confidence represents the credibility of the first detection result.
When the first detection model of the server detects the image and outputs the second detection result, it can also output the second confidence of the second detection result, so the server can obtain the second confidence of the second detection result. The second confidence represents the credibility of the second detection result.
Step 204, determining that the sample set to which the image belongs is a standard sample set when the first detection result is consistent with the second detection result and the first confidence and the second confidence are both greater than a preset threshold.
In the present application, the second detection result can be used as a reference and compared with the first detection result. If the first detection result is consistent with the second detection result and both the first confidence and the second confidence are greater than the preset threshold, the first detection result can be considered correct, the sample set to which the image belongs can be determined to be the standard sample set, the image can be added to the standard sample set, and the second detection result can be used as the label of the image; manual labeling is therefore not needed, which reduces cost.
In the present application, images with correct detection results can thus be placed into the standard sample set, and the image samples in the standard sample set increase the amount of available samples, which facilitates subsequent updating of the first detection model.
Step 205, determining that the sample set to which the image belongs is an error detection sample set when the first detection result is inconsistent with the second detection result.
In this application, if the first detection result is inconsistent with the second detection result, the first detection result may be considered incorrect, and it may be determined that the sample set to which the image belongs is the error detection sample set. The image samples in the error detection sample set are images for which the detection results of the second detection model are incorrect. In this way, erroneously detected images can be placed into the error detection sample set, and the image samples in the error detection sample set can subsequently be used to verify the updated second detection model, so as to solve the erroneous detection problem.
Step 206, determining that the sample set to which the image belongs is a missed detection sample set when the first detection result is empty and the second detection result is not empty.
In this application, if the first detection result is empty and the second detection result is not empty, it may be considered that the second detection model failed to detect the target in the image, that is, a missed detection occurred, and it may be determined that the sample set to which the image belongs is the missed detection sample set. The image samples in the missed detection sample set are the images missed by the second detection model.
In this way, missed-detection images can be placed into the missed detection sample set, and the image samples in that set can subsequently be used to test the updated second detection model, so as to solve the missed detection problem.
Step 207, determining that the sample set to which the image belongs is a false detection sample set when the first detection result is not empty and the second detection result is empty.
In this application, if the first detection result is not empty and the second detection result is empty, the image may be a falsely detected image; the sample set to which the image belongs is then determined to be the false detection sample set, and the image is added to the false detection sample set.
In this way, falsely detected images can be placed into the false detection sample set, and the image samples in that set can subsequently be used to check the updated second detection model, so as to solve the false detection problem.
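Putting steps 204 to 207 together, a rough Python sketch of the branching might look like the following, where a detection result is either None (nothing detected) or a label string; the set names, the threshold, and the fallback branch are illustrative assumptions:

CONFIDENCE_THRESHOLD = 0.8  # hypothetical preset threshold

def classify_sample(first_result, first_conf, second_result, second_conf):
    """Return the sample set for the image, using the server-side (second) result as the reference."""
    if first_result is None and second_result is not None:
        return "missed_detection"   # step 206: the vehicle-end model missed the target
    if first_result is not None and second_result is None:
        return "false_detection"    # step 207: the vehicle-end model reported a target that is not there
    if first_result != second_result:
        return "error_detection"    # step 205: the two results disagree
    if first_conf > CONFIDENCE_THRESHOLD and second_conf > CONFIDENCE_THRESHOLD:
        return "standard"           # step 204: consistent results with high confidence
    return "undecided"              # consistent but low confidence; this case is not covered by the disclosure

print(classify_sample("pedestrian", 0.91, "pedestrian", 0.95))  # -> standard
print(classify_sample(None, 0.0, "pedestrian", 0.88))           # -> missed_detection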
In this application, the second detection models on the first vehicle end and the first detection models on the server end are in one-to-one correspondence. For example, suppose the first vehicle end is provided with a model for detecting whether an image contains a pedestrian and a model for detecting whether an image contains a non-motor vehicle, and the server end is provided with the two corresponding models. If the first vehicle end sends the detection results of both models to the server end, the server end can compare the vehicle-end pedestrian detection result with the detection result of the corresponding server-end model and determine, according to that comparison, the sample set to which the image belongs for that detection model; likewise, it can compare the vehicle-end non-motor-vehicle detection result with the detection result of the corresponding server-end model and determine, according to that comparison, the sample set to which the image belongs for that detection model.
In the embodiment of the application, the sample set to which the image belongs can be determined according to whether the first detection result is consistent with the second detection result, and the image is divided into the corresponding sample sets, so that different types of samples are collected, and the subsequent optimization of the second detection model is facilitated.
Fig. 3 is a schematic flowchart of an image sample acquisition method according to another embodiment of the present application.
As shown in fig. 3, the image sample acquisition method includes:
Step 301, an image from a first vehicle end and a first detection result of the image are obtained.
Step 302, detecting the image by using the first detection model to obtain a second detection result of the image.
In the present application, steps 301 to 302 are similar to those described in the above embodiments, and therefore are not described herein again.
Step 303, obtaining a current state of a second vehicle end, wherein a detection model is provided at the second vehicle end.
In order to improve the accuracy of image sample acquisition, in the present application the image can also be detected by other vehicle ends, and the sample set to which the image belongs is determined based on the detection results of multiple vehicle ends.
In the present application, the second vehicle end can report its state to the server end at preset time intervals. Alternatively, the server end may send a status inquiry request to the second vehicle end, the second vehicle end returns its current state to the server end, and the server end thereby obtains the current state of the second vehicle end.
In practical applications, the vehicle end is usually in an idle state during certain time periods; based on this, the server end may send a state inquiry request to the second vehicle end during a preset time period, for example, from 8 p.m. to 4 p.m. the next day.
It should be noted that the second vehicle end may be one or multiple, and the present application is not limited thereto.
Step 304, sending the image to the second vehicle end when the state is the idle state.
In this application, when the second vehicle end is in the idle state, the server may send the image to the second vehicle end, so that the second vehicle end detects the image.
To improve efficiency, the second vehicle end can register with the server end in advance and enable the automatic detection function; the server end can then send the image and the corresponding automatic detection script to the second vehicle end, and the second vehicle end obtains the image and the automatic detection script and starts automatic detection.
Step 305, a third detection result of the image from the second vehicle end is obtained.
In the present application, the second vehicle end may be provided with the same detection model as the first vehicle end, that is, the second detection model, and the second vehicle end may use the second detection model to detect the image to obtain the third detection result of the image, or may use other methods to detect the image to obtain the third detection result. The second vehicle end sends the third detection result to the server end, so that the server end can obtain the third detection result of the image from the second vehicle end.
For example, the server obtains the image uploaded by vehicle end A and the corresponding first detection result; upon determining that vehicle ends B, C, and D are in the idle state, the server sends the image to vehicle ends B, C, and D respectively, and obtains the detection results of the image returned by vehicle ends B, C, and D.
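A small Python sketch of this dispatch is shown below, assuming the server keeps the reported state of each second vehicle end in a dictionary and uses a hypothetical send_for_detection() helper to deliver the image and collect the third detection results; none of these names come from the disclosure:

def collect_third_results(image, vehicle_states, send_for_detection):
    """Send the image to every second vehicle end that reports an idle state
    and gather the detection results they return."""
    third_results = {}
    for vehicle_id, state in vehicle_states.items():
        if state != "idle":
            continue  # skip vehicle ends that are currently busy
        third_results[vehicle_id] = send_for_detection(vehicle_id, image)
    return third_results

# Toy usage: vehicle ends B and D are idle, C is busy.
states = {"B": "idle", "C": "busy", "D": "idle"}
results = collect_third_results("frame_001.jpg", states,
                                send_for_detection=lambda vid, img: "pedestrian")
print(results)  # {'B': 'pedestrian', 'D': 'pedestrian'}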
It should be noted that, in the present application, step 302 and steps 303 to 305 may be performed in the order above, steps 303 to 305 may be performed before step 302, or step 302 and step 303 may be performed simultaneously; this is not limited in the present application.
Step 306, determining a sample set to which the image belongs according to the comparison result of the first detection result, the second detection result, and the third detection result.
In the present application, the first detection result, the second detection result, and the third detection result may be compared, and the sample set to which the image belongs may be determined according to the comparison result. For example, if the first detection result, the second detection result, and the third detection result are the same, it may be determined that the sample set to which the image belongs is the standard sample set.
Alternatively, the first vehicle end and the second vehicle end may both be provided with the second detection model, which is obtained by compressing the first detection model, so the second detection result of the server end may be used as a reference, and the first detection result and the third detection result are each compared with the second detection result.
If at least one of the first detection result and the third detection result is consistent with the second detection result, the sample set to which the image belongs is determined to be the standard sample set. If both the first detection result and the third detection result differ from the second detection result, the detection results of the vehicle-end second detection model for the image are incorrect, so the sample set to which the image belongs is determined to be the error detection sample set and the image is added to the error detection sample set.
If the first detection result and the third detection result are both empty and the second detection result is not empty, it can be determined that the sample set to which the image belongs is the missed detection sample set. If the first detection result and the third detection result are both not empty and the second detection result of the first detection model is empty, the sample set to which the image belongs can be determined to be the false detection sample set.
Thus, the images may be divided into respective sets of samples according to a comparison of the first, second and third detection results.
In the present application, the first detection result and the third detection result are both obtained by detecting the image with the second detection model, and the second detection model is obtained by compressing the first detection model. If the first detection result is inconsistent with the second detection result while the third detection result is consistent with the second detection result, the inconsistency between the first detection result and the second detection result is not a problem of the second detection model but is caused by low hardware performance of the first vehicle end, and the server end can then send prompt information to the first vehicle end. The prompt information is used to prompt the first vehicle end to update its hardware.
For example, suppose the second detection model is used to detect whether the image contains a non-motor vehicle. If the detection result of first vehicle end A indicates no non-motor vehicle, while the detection results of second vehicle ends B, C, and D and the detection result of the server end all indicate a non-motor vehicle, the detection result of first vehicle end A is incorrect, possibly because of poor hardware performance of first vehicle end A.
Therefore, when it is determined, based on the detection results of multiple vehicle ends and the detection result of the server end, that a detection result is wrong because of poor hardware performance of the first vehicle end, hardware-update prompt information is sent to the first vehicle end, which improves intelligence.
If both the first detection result and the third detection result differ from the second detection result, indicating that the incorrect detection results are caused by the second detection model itself, the server may send prompt information to the first vehicle end and the second vehicle end to prompt them to update the second detection model.
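Combining step 306 with the prompting logic above, a hedged Python sketch might look like the following; the prompt strings and the function name are illustrative assumptions only:

def evaluate_with_second_vehicle_ends(first_result, second_result, third_results):
    """Classify the image against the server-side (second) result and decide which prompt, if any, to send back."""
    agree_first = first_result == second_result
    agree_third = any(r == second_result for r in third_results.values())

    if agree_first or agree_third:
        sample_set = "standard"       # at least one vehicle-end result matches the reference
    else:
        sample_set = "error_detection"

    prompt = None
    if not agree_first and agree_third:
        # Only the first vehicle end disagrees: likely a hardware issue there.
        prompt = "update_hardware(first vehicle end)"
    elif not agree_first and not agree_third:
        # All vehicle ends disagree with the server: likely a model issue.
        prompt = "update_second_detection_model(all vehicle ends)"
    return sample_set, prompt

print(evaluate_with_second_vehicle_ends(
    "no non-motor vehicle", "non-motor vehicle",
    {"B": "non-motor vehicle", "C": "non-motor vehicle", "D": "non-motor vehicle"}))
# -> ('standard', 'update_hardware(first vehicle end)')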
Therefore, by acquiring detection results in different hardware environments, comparing them with the detection result of the server, and determining the sample set to which the image belongs according to the comparison result, image sample collection bias caused by the hardware of a single device can be avoided, and the reliability of the result is improved.
In this embodiment of the application, a current state of the second vehicle end may also be obtained, and when the current state is an idle state, the image is sent to the second vehicle end, a third detection result of the image from the second vehicle end is obtained, and a sample set to which the image belongs is determined according to a comparison result of the first detection result, the second detection result, and the third detection result. Therefore, the detection results of the plurality of vehicle ends are obtained, the detection results of the plurality of vehicle ends are compared with the detection results of the server end, and the sample set to which the image belongs is determined according to the comparison results, so that the reliability of the result is improved.
In order to implement the above embodiments, an image sample collecting device is further provided in the embodiments of the present application. Fig. 4 is a schematic structural diagram of an image sample acquisition device according to an embodiment of the present application.
As shown in fig. 4, the image sample acquiring device 400 includes:
a first obtaining module 410, configured to obtain an image from a first vehicle end and a first detection result of the image;
a second obtaining module 420, configured to detect the image by using the first detection model to obtain a second detection result of the image;
the determining module 430 is configured to determine a sample set to which the image belongs according to a comparison result between the first detection result and the second detection result.
In a possible implementation manner of the embodiment of the present application, the apparatus may further include:
the third acquisition module is used for acquiring a first confidence coefficient of a first detection result from the first vehicle end and a second confidence coefficient of a second detection result output by the first detection model;
the determining module 430 is configured to determine that the sample set to which the image belongs is a standard sample set when the first detection result is consistent with the second detection result and the first confidence and the second confidence are both greater than a preset threshold.
In a possible implementation manner of the embodiment of the application, the first detection result is obtained by detecting the image by using a second detection model at the first vehicle end, and the second detection model is obtained by compressing the first detection model.
In a possible implementation manner of the embodiment of the present application, the determining module 430 is configured to:
and under the condition that the first detection result is inconsistent with the second detection result, determining that the sample set to which the image belongs is an error detection sample set.
In a possible implementation manner of the embodiment of the present application, the determining module 430 is configured to:
and under the condition that the first detection result is empty and the second detection result is not empty, determining the sample set to which the image belongs as a missing detection sample set.
In a possible implementation manner of the embodiment of the present application, the determining module 430 is configured to:
and under the condition that the first detection result is not empty and the second detection result is empty, determining that the sample set to which the image belongs is a false detection sample set.
In a possible implementation manner of the embodiment of the present application, the apparatus may further include:
the fourth acquisition module is used for acquiring the current state of the second vehicle end;
the sending module is used for sending the image to the second vehicle end under the condition that the state is the idle state;
the fourth acquisition module is also used for acquiring a third detection result of the image from the second vehicle end;
the determining module 430 is configured to determine a sample set to which the image belongs according to a comparison result of the first detection result, the second detection result, and the third detection result.
In a possible implementation manner of the embodiment of the application, the first detection result and the third detection result are obtained by detecting an image by using a second detection model, and the second detection model is obtained by compressing the first detection model; a determining module 430 configured to:
determining a sample set to which the image belongs as a standard sample set under the condition that at least one of the first detection result and the third detection result is consistent with the second detection result;
and under the condition that the first detection result and the third detection result are different from the second detection result, determining that the sample set to which the image belongs is an error detection sample set.
In a possible implementation manner of the embodiment of the present application, the sending module is further configured to:
and sending prompt information to the first vehicle end under the condition that the first detection result is inconsistent with the second detection result and the third detection result is consistent with the second detection result, wherein the prompt information is used for prompting the first vehicle end to update hardware.
It should be noted that the explanation of the embodiment of the image sample acquiring method is also applicable to the image sample acquiring device of the embodiment, and therefore, the explanation is not repeated herein.
In the embodiment of the present application, an image and a first detection result of the image are obtained from a first vehicle end; the image is detected by the first detection model to obtain a second detection result of the image; and the sample set to which the image belongs is determined according to the comparison result between the first detection result and the second detection result. In this way, the sample set to which the image belongs is determined according to the comparison between the detection result from the vehicle end and the detection result of the first detection model, which reduces the cost of acquiring image samples.
According to embodiments of the present application, an electronic device, a readable storage medium, and a computer program product are also provided.
FIG. 5 illustrates a schematic block diagram of an example electronic device 500 that can be used to implement embodiments of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital processors, cellular telephones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 5, the device 500 includes a computing unit 501, which can perform various appropriate actions and processes in accordance with a computer program stored in a ROM (Read-Only Memory) 502 or a computer program loaded from a storage unit 508 into a RAM (Random Access Memory) 503. In the RAM 503, various programs and data required for the operation of the device 500 can also be stored. The computing unit 501, the ROM 502, and the RAM 503 are connected to each other by a bus 504. An I/O (Input/Output) interface 505 is also connected to the bus 504.
A number of components in the device 500 are connected to the I/O interface 505, including: an input unit 506 such as a keyboard, a mouse, or the like; an output unit 507 such as various types of displays, speakers, and the like; a storage unit 508, such as a magnetic disk, optical disk, or the like; and a communication unit 509 such as a network card, modem, wireless communication transceiver, etc. The communication unit 509 allows the device 500 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 501 may be any of a variety of general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 501 include, but are not limited to, a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), various dedicated AI (Artificial Intelligence) computing chips, various computing units running machine learning model algorithms, a DSP (Digital Signal Processor), and any suitable processor, controller, microcontroller, and the like. The computing unit 501 performs the various methods and processes described above, such as the image sample acquisition method. For example, in some embodiments, the image sample acquisition method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 500 via the ROM 502 and/or the communication unit 509. When the computer program is loaded into the RAM 503 and executed by the computing unit 501, one or more steps of the image sample acquisition method described above may be performed. Alternatively, in other embodiments, the computing unit 501 may be configured to perform the image sample acquisition method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuitry, FPGAs (Field Programmable Gate Arrays), ASICs (Application-Specific Integrated Circuits), ASSPs (Application-Specific Standard Products), SOCs (Systems On Chip), CPLDs (Complex Programmable Logic Devices), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present application may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this application, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a RAM, a ROM, an EPROM (erasable Programmable Read-Only-Memory) or flash Memory, an optical fiber, a CD-ROM (Compact Disc Read-Only-Memory), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a Display device (e.g., a CRT (Cathode Ray Tube) or LCD (Liquid Crystal Display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user may provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: LAN (Local Area Network), WAN (Wide Area Network), Internet and blockchain networks.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The Server may be a cloud Server, which is also called a cloud computing Server or a cloud host, and is a host product in a cloud computing service system, so as to solve the defects of high management difficulty and weak service expansibility in a conventional physical host and a VPS (Virtual Private Server). The server may also be a server of a distributed system, or a server incorporating a blockchain.
According to an embodiment of the present application, there is also provided a computer program product; when the instructions in the computer program product are executed by a processor, the image sample acquisition method proposed in the above embodiments of the present application is performed.
It should be understood that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present application can be achieved; no limitation is imposed herein.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (21)

1. An image sample acquisition method, comprising:
acquiring an image from a first vehicle end and a first detection result of the image;
detecting the image by adopting a first detection model to obtain a second detection result of the image;
and determining a sample set to which the image belongs according to a comparison result between the first detection result and the second detection result.
2. The method of claim 1, further comprising:
obtaining a first confidence degree of the first detection result from the first vehicle end and a second confidence degree of the second detection result output by the first detection model;
the determining a sample set to which the image belongs according to the comparison result between the first detection result and the second detection result includes:
and under the condition that the first detection result is consistent with the second detection result and the first confidence coefficient and the second confidence coefficient are both greater than a preset threshold value, determining that the sample set to which the image belongs is a standard sample set.
3. The method of claim 1, wherein the first detection result is obtained by detecting the image by the first vehicle end using a second detection model, and the second detection model is obtained by compressing the first detection model.
4. The method of claim 3, wherein the determining the set of samples to which the image belongs based on the comparison between the first detection result and the second detection result comprises:
and determining that the sample set to which the image belongs is a wrong detection sample set when the first detection result is inconsistent with the second detection result.
5. The method of claim 3, wherein the determining the set of samples to which the image belongs based on the comparison between the first detection result and the second detection result comprises:
and determining that the sample set to which the image belongs is a missing detection sample set under the condition that the first detection result is empty and the second detection result is not empty.
6. The method of claim 3, wherein the determining the set of samples to which the image belongs based on the comparison between the first detection result and the second detection result comprises:
and determining that the sample set to which the image belongs is a false detection sample set when the first detection result is not empty and the second detection result is empty.
7. The method of claim 1, wherein after said acquiring the image from the first vehicle end and the first detection result of the image, the method further comprises:
acquiring the current state of a second vehicle end;
sending the image to the second vehicle end under the condition that the state is an idle state;
obtaining a third detection result of the image from the second vehicle end;
the determining a sample set to which the image belongs according to the comparison result between the first detection result and the second detection result includes:
and determining a sample set to which the image belongs according to the comparison result of the first detection result, the second detection result and the third detection result.
8. The method of claim 7, wherein the first detection result and the third detection result are obtained by detecting the image by using a second detection model, and the second detection model is obtained by compressing the first detection model;
the determining, according to a comparison result of the first detection result, the second detection result, and the third detection result, a sample set to which the image belongs includes:
determining a sample set to which the image belongs as a standard sample set when at least one of the first detection result and the third detection result is consistent with the second detection result;
and under the condition that the first detection result and the third detection result are different from the second detection result, determining that the sample set to which the image belongs is an error detection sample set.
9. The method of claim 8, further comprising:
and sending prompt information to the first vehicle end under the condition that the first detection result is inconsistent with the second detection result and the third detection result is consistent with the second detection result, wherein the prompt information is used for prompting the first vehicle end to update hardware.
10. An image sample acquisition device comprising:
the first acquisition module is used for acquiring an image from a first vehicle end and a first detection result of the image;
the second acquisition module is used for detecting the image by adopting the first detection model so as to acquire a second detection result of the image;
and the determining module is used for determining a sample set to which the image belongs according to a comparison result between the first detection result and the second detection result.
11. The apparatus of claim 10, further comprising:
a third obtaining module, configured to obtain a first confidence of the first detection result from the first vehicle end and a second confidence of the second detection result output by the first detection model;
the determining module is configured to determine that a sample set to which the image belongs is a standard sample set when the first detection result is consistent with the second detection result and the first confidence and the second confidence are both greater than a preset threshold.
12. The apparatus of claim 10, wherein the first detection result is obtained by detecting the image by using a second detection model at the first vehicle end, and the second detection model is obtained by compressing the first detection model.
13. The apparatus of claim 12, wherein the means for determining is configured to:
and determining that the sample set to which the image belongs is a wrong detection sample set when the first detection result is inconsistent with the second detection result.
14. The apparatus of claim 12, wherein the means for determining is configured to:
and under the condition that the first detection result is empty and the second detection result is not empty, determining that the sample set to which the image belongs is a missing detection sample set.
15. The apparatus of claim 12, wherein the means for determining is configured to:
and determining that the sample set to which the image belongs is a false detection sample set when the first detection result is not empty and the second detection result is empty.
16. The apparatus of claim 10, further comprising:
the fourth obtaining module is used for obtaining the current state of the second vehicle end;
a sending module, configured to send the image to the second vehicle end when the current state is an idle state;
the fourth obtaining module is further configured to obtain a third detection result of the image from the second vehicle end;
the determining module is configured to determine a sample set to which the image belongs according to a comparison result of the first detection result, the second detection result, and the third detection result.
17. The apparatus of claim 16, wherein the first detection result and the third detection result are obtained by detecting the image by using a second detection model, and the second detection model is obtained by compressing the first detection model; the determining module is configured to:
determining a sample set to which the image belongs as a standard sample set when at least one of the first detection result and the third detection result is consistent with the second detection result;
and under the condition that the first detection result and the third detection result are different from the second detection result, determining that the sample set to which the image belongs is an error detection sample set.
18. The apparatus of claim 17, wherein the means for transmitting is further configured to:
and sending prompt information to the first vehicle end under the condition that the first detection result is inconsistent with the second detection result and the third detection result is consistent with the second detection result, wherein the prompt information is used for prompting the first vehicle end to update hardware.
19. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-9.
20. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-9.
21. A computer program product comprising a computer program which, when executed by a processor, carries out the steps of the method of any one of claims 1-9.
CN202210636995.5A 2022-06-07 2022-06-07 Image sample acquisition method and device, electronic equipment and storage medium Pending CN114937185A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210636995.5A CN114937185A (en) 2022-06-07 2022-06-07 Image sample acquisition method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210636995.5A CN114937185A (en) 2022-06-07 2022-06-07 Image sample acquisition method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114937185A true CN114937185A (en) 2022-08-23

Family

ID=82865891

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210636995.5A Pending CN114937185A (en) 2022-06-07 2022-06-07 Image sample acquisition method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114937185A (en)

Similar Documents

Publication Publication Date Title
CN112633384B (en) Object recognition method and device based on image recognition model and electronic equipment
CN112857268B (en) Object area measuring method, device, electronic equipment and storage medium
KR102616470B1 (en) Method and apparatus for detecting mobile traffic light, electronic device, and storag medium
CN114648676A (en) Point cloud processing model training and point cloud instance segmentation method and device
CN114494815A (en) Neural network training method, target detection method, device, equipment and medium
CN114821581A (en) Image recognition method and method for training image recognition model
CN114495103B (en) Text recognition method and device, electronic equipment and medium
CN113591580B (en) Image annotation method and device, electronic equipment and storage medium
CN114723949A (en) Three-dimensional scene segmentation method and method for training segmentation model
CN114238790A (en) Method, apparatus, device and storage medium for determining maximum perception range
CN113723305A (en) Image and video detection method, device, electronic equipment and medium
CN113705716A (en) Image recognition model training method and device, cloud control platform and automatic driving vehicle
CN112784102A (en) Video retrieval method and device and electronic equipment
CN115797660A (en) Image detection method, image detection device, electronic equipment and storage medium
CN114937185A (en) Image sample acquisition method and device, electronic equipment and storage medium
CN115294648A (en) Man-machine gesture interaction method and device, mobile terminal and storage medium
CN114998963A (en) Image detection method and method for training image detection model
CN113869317A (en) License plate recognition method and device, electronic equipment and storage medium
CN114429631A (en) Three-dimensional object detection method, device, equipment and storage medium
CN113704314A (en) Data analysis method and device, electronic equipment and storage medium
CN115131562B (en) Three-dimensional scene segmentation method, model training method, device and electronic equipment
CN115019048B (en) Three-dimensional scene segmentation method, model training method and device and electronic equipment
CN114445711B (en) Image detection method, image detection device, electronic equipment and storage medium
CN117615363B (en) Method, device and equipment for analyzing personnel in target vehicle based on signaling data
CN113012439B (en) Vehicle detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination