CN115115822A - Vehicle-end image processing method and device, vehicle, storage medium and chip - Google Patents

Vehicle-end image processing method and device, vehicle, storage medium and chip

Info

Publication number
CN115115822A
CN115115822A (application CN202210771048.7A)
Authority
CN
China
Prior art keywords: image, processed, vehicle, frame image, reference frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210771048.7A
Other languages
Chinese (zh)
Other versions
CN115115822B (en)
Inventor
徐梦龙
胡佳高
王飞
杨赫
汪真
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Pinecone Electronic Co Ltd
Xiaomi Automobile Technology Co Ltd
Original Assignee
Beijing Xiaomi Pinecone Electronic Co Ltd
Xiaomi Automobile Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Pinecone Electronic Co Ltd, Xiaomi Automobile Technology Co Ltd filed Critical Beijing Xiaomi Pinecone Electronic Co Ltd
Priority to CN202210771048.7A
Publication of CN115115822A
Application granted
Publication of CN115115822B
Legal status: Active

Classifications

    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V 10/245 Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 20/70 Labelling scene content, e.g. deriving syntactic or semantic representations
    • G06V 2201/07 Target detection

Abstract

The disclosure relates to a vehicle-end image processing method and device, a vehicle, a storage medium and a chip, and relates to the technical field of automatic driving. The method includes: acquiring a reference frame image and an image to be processed through a vehicle-mounted camera; determining the difference degree between the reference frame image and the image to be processed; and, in response to the difference degree being greater than a preset difference degree, returning the image to be processed to a data storage center as a key frame image, the key frame image being used for training an image recognition model. The reference frame image is the first frame image acquired by the vehicle-mounted camera, or a key frame image determined before the image to be processed. With the vehicle-end image processing method provided by the disclosure, only a small number of key frame images are output, which reduces the amount of manual work needed to re-label key frame images.

Description

Vehicle-end image processing method and device, vehicle, storage medium and chip
Technical Field
The present disclosure relates to the field of automatic driving technologies, and in particular, to a method and an apparatus for processing an image at a vehicle end, a vehicle, a storage medium, and a chip.
Background
At present, during automatic navigation, a vehicle uses an image recognition model to recognize people, animals, buildings and other objects in the information about its surrounding environment, so as to plan a driving path that automatically avoids obstacles.
In the related art, multiple frames of images are input into an image recognition model; the model identifies key frame images from these frames, together with the positioning frames and categories of the objects in those key frame images; a worker re-labels the positioning frames and categories of the key frame images; and finally the re-labeled key frame images are input into the image recognition model to retrain it.
However, the number of key frame images identified by the image recognition model is huge, so the number of key frame images that workers need to label is correspondingly large, and manually labeling such a large number of key frame images is time-consuming and labor-intensive.
Disclosure of Invention
In order to overcome the problems in the related art, the present disclosure provides a vehicle-end image processing method and device, a vehicle, a storage medium, and a chip.
According to a first aspect of the embodiments of the present disclosure, there is provided a vehicle-end image processing method, including:
acquiring a reference frame image and an image to be processed by a vehicle-mounted camera;
determining the difference degree between the reference frame image and the image to be processed;
in response to the difference degree being greater than a preset difference degree, returning the image to be processed to a data storage center as a key frame image, where the key frame image is used for training an image recognition model;
the reference frame image being the first frame image acquired by the vehicle-mounted camera, or a key frame image determined before the image to be processed.
Optionally, taking the image to be processed as a key frame image in response to the difference degree being greater than a preset difference degree includes:
determining the difference value between the number of objects in the reference frame image and the number of objects in the image to be processed;
determining that the difference degree is greater than the preset difference degree when the difference value is greater than a preset difference value;
and taking the image to be processed as the key frame image in response to the difference degree being greater than the preset difference degree.
Optionally, taking the image to be processed as a key frame image in response to the difference degree being greater than a preset difference degree includes:
determining a first category of an object in the reference frame image and a second category of an object in the image to be processed;
determining that the difference degree is greater than the preset difference degree when the first category is different from the second category;
and taking the image to be processed as the key frame image in response to the difference degree being greater than the preset difference degree.
Optionally, taking the image to be processed as a key frame image in response to the difference degree being greater than a preset difference degree includes:
determining a first positioning frame of an object in the reference frame image and a second positioning frame of an object in the image to be processed;
determining that the difference degree is greater than the preset difference degree when the similarity between the first positioning frame and the second positioning frame is less than a preset similarity;
and taking the image to be processed as the key frame image in response to the difference degree being greater than the preset difference degree.
Optionally, before determining the difference between the reference frame image and the image to be processed, the method further includes:
determining the size of a positioning frame of each object in the reference frame image and the image to be processed;
and removing, from the reference frame image and the image to be processed, a first target object whose positioning frame size is smaller than a preset size.
Optionally, before determining the difference between the reference frame image and the image to be processed, the method further includes:
determining the position of a positioning frame of each object in the reference frame image and the image to be processed;
removing, from the reference frame image, a second target object whose positioning frame is located at the edge of the reference frame image;
and removing, from the image to be processed, a third target object whose positioning frame is located at the edge of the image to be processed.
According to a second aspect of the embodiments of the present disclosure, there is provided a vehicle-end image processing apparatus including:
the acquisition module is configured to acquire a reference frame image and an image to be processed through the vehicle-mounted camera;
a difference degree determination module configured to determine a difference degree between the reference frame image and the image to be processed;
a response module configured to, in response to the difference degree being greater than a preset difference degree, return the image to be processed to a data storage center as a key frame image, where the key frame image is used for training an image recognition model;
the reference frame image being the first frame image acquired by the vehicle-mounted camera, or a key frame image determined before the image to be processed.
According to a third aspect of the embodiments of the present disclosure, there is provided a vehicle including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
execute the vehicle-end image processing method provided by the first aspect of the present disclosure.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the steps of the vehicle-end image processing method provided by the first aspect of the present disclosure.
According to a fifth aspect of embodiments of the present disclosure, there is provided a chip comprising a processor and an interface; the processor is used for reading instructions to execute the steps of the vehicle-end image processing method provided by the first aspect of the disclosure.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
and in the case that the difference degree between the reference frame image and the image to be processed is greater than the preset difference degree, the image to be processed is transmitted back to the data storage center as the key frame image, instead of all the images to be processed with difference from the reference frame image being used as the key frame image. Therefore, the staff can label the key frame images in the data storage center, and then train the image recognition model with the labeled key frame images, so that the number of the manually labeled key frame images is reduced, and the workload of the staff is reduced.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow chart illustrating steps of a method for vehicle-end image processing according to an exemplary embodiment;
FIG. 2 is a schematic diagram illustrating a reference frame image in accordance with an exemplary embodiment;
FIG. 3 is a schematic diagram illustrating an image to be processed in accordance with an exemplary embodiment;
FIG. 4 is a block diagram illustrating an image processing apparatus at a vehicle end according to an exemplary embodiment;
FIG. 5 is a functional block diagram schematic of a vehicle shown in accordance with an exemplary embodiment;
FIG. 6 is a block diagram illustrating an apparatus in accordance with an exemplary embodiment;
FIG. 7 is a block diagram illustrating an apparatus in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
It should be noted that all actions of acquiring signals, information or data in the present application are performed under the premise of complying with the corresponding data protection regulation policy of the country of the location and obtaining the authorization given by the owner of the corresponding device.
Fig. 1 is a flowchart of a vehicle-end image processing method according to an exemplary embodiment. As shown in fig. 1, the vehicle-end image processing method is applied to a terminal and is used for processing images acquired by a vehicle-mounted camera, and specifically includes the following steps:
in step S11, a reference frame image and an image to be processed are acquired by the onboard camera.
In this step, the reference frame image and the image to be processed may be obtained through the image recognition model, and the difference degree between the reference frame image and the image to be processed is calculated to determine the key frame image; the image recognition model then recognizes the key frame image to obtain the positioning frame and the corresponding category of each object in the key frame image.
The positioning frame includes the position of the positioning frame on the key frame image and the size of the positioning frame; specifically, the positioning frame is used to represent the position of the object on the key frame image and the size of the object.
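For concreteness, the positioning frame and its object can be represented by a simple record such as the following Python sketch. The field names and layout are illustrative assumptions used by the later examples in this description; the disclosure does not prescribe a particular data structure.

```python
from dataclasses import dataclass

@dataclass
class PositioningFrame:
    """One recognized object: its positioning frame (position and size on the
    image) and its category. All field names are hypothetical."""
    x: float        # horizontal position of the frame on the image, in pixels
    y: float        # vertical position of the frame on the image, in pixels
    width: float    # frame width, representing the apparent size of the object
    height: float   # frame height
    category: str   # e.g. "person", "vehicle", "traffic light"
```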
The vehicle-mounted camera may be a vehicle-mounted panoramic camera, which can perceive the environment near the vehicle and recognize different types of target objects in the environment, such as people, animals, plants, and traffic signs.
In step S12, a degree of difference between the reference frame image and the image to be processed is determined.
In this step, the difference degree is a parameter for measuring the difference between the reference frame image and the image to be processed. The larger the difference degree is, the larger the difference between the reference frame image and the image to be processed is; the smaller the difference degree is, the smaller the difference between the reference frame image and the image to be processed is.
In step S13, in response to the difference degree being greater than a preset difference degree, the image to be processed is returned to the data storage center as a key frame image, where the key frame image is used for training the image recognition model.
In this step, when the vehicle-mounted camera shoots an actual scene, a plurality of images of the actual scene are obtained in real time and input into the image recognition model one by one. The image recognition model determines the difference degree between two frames of images; when the difference degree between the two frames is greater than the preset difference degree, the actual scene has changed, and the changed frame of the two can be used as a key frame image.
The image recognition model is used for recognizing the key frame images from the plurality of frame images and recognizing the positioning frames and the categories of the objects in the key frame images.
When the difference degree between the reference frame image and the image to be processed is greater than the preset difference degree, the image to be processed differs considerably from the reference frame image, and the image recognition model can take the image to be processed as a key frame image. When the difference degree is less than or equal to the preset difference degree, the image to be processed differs only slightly from the reference frame image; in this case the image recognition model can discard the image to be processed, receive the next frame of image to be processed, and calculate the difference degree between the reference frame image and that next frame.
The key frame image is the image to be processed whose difference degree, as currently determined by the image recognition model, is greater than the preset difference degree.
The image to be processed is the image currently to be processed by the image recognition model, or the image currently acquired by the vehicle-mounted camera.
The reference frame image is a first frame image acquired by the image recognition model, and the first frame image refers to a first frame image to be processed input into the image recognition model from a plurality of frame images acquired by the vehicle-mounted camera.
The reference frame image may also be a key frame image that was determined before the current image to be processed, that is, a key frame image whose determination precedes the determination of the current image to be processed as a key frame image. Specifically, the reference frame image is the most recent key frame image preceding the currently determined key frame image.
Illustratively, the image recognition model acquires to-be-processed images A, B, C and D sequentially output by the vehicle-mounted camera. The image recognition model takes to-be-processed image A as the reference frame image and compares it with to-be-processed image B; if the difference degree between image B and image A is determined to be greater than the preset difference degree, indicating that the scene in image B has changed compared with image A, image B is taken as the first key frame image.
The first key frame image (to-be-processed image B) is then compared with to-be-processed image C, and if the difference degree between image C and image B is greater than the preset difference degree, image C is taken as the second key frame image.
The second key frame image is then compared with to-be-processed image D. In this way, each subsequent frame of image to be processed is compared with the current reference frame image, and each key frame image obtained becomes the reference frame image against which subsequently acquired images to be processed are compared, thereby obtaining multiple key frame images.
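The rolling-reference behaviour of this example can be summarised by the sketch below. It is a hypothetical Python illustration, not the claimed implementation: the helper name and the comparison callback are assumptions, and the callback stands for any of the comparison modes A1 to A3 described later.

```python
def select_key_frames(frames, difference_exceeds_threshold):
    """Walk the frames output by the vehicle-mounted camera, keep the first
    frame as the initial reference, and promote every frame whose difference
    from the current reference exceeds the preset degree to key frame and new
    reference frame. `difference_exceeds_threshold(ref, img) -> bool` is the
    pluggable comparison (object count, category, or box similarity)."""
    key_frames = []
    reference = None
    for frame in frames:
        if reference is None:            # first frame acquired by the camera
            reference = frame
            continue
        if difference_exceeds_threshold(reference, frame):
            key_frames.append(frame)     # would be returned to the data storage center
            reference = frame            # the new key frame becomes the reference
        # otherwise the frame is discarded and the next frame is compared
    return key_frames
```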
In the related art, multiple frames of images are input into an image recognition model; the model identifies key frame images from these frames, together with the positioning frames and categories of the objects in those key frame images; a worker re-labels the positioning frames and categories of the key frame images; and finally the re-labeled key frame images are input into the image recognition model to retrain it.
In this process, the image recognition model checks whether a difference exists between every two adjacent frames among the multiple frames, and once a difference exists, the later of the two adjacent frames is used as a key frame image. As a result, the number of key frame images output by the image recognition model is huge, and the workload of manual re-labeling is correspondingly large.
However, in traffic sign detection, rod detection, traffic light detection, obstacle detection and the like, the image recognition model realizes detection by recognizing the positioning frame and the category of each object in the image. The accuracy requirement for determining key frame images is therefore not high: even a slight brightness change between two frames (for example, a small brightness difference between the two images) or a slight content change (for example, slight movement of an object) does not affect the positioning frames and categories recognized by the image recognition model. In this case, treating an image with only a slight brightness change or a slight object change as a key frame image gives the workers an excessive amount of labeling work.
Therefore, in order to reduce the workload of re-labeling key frame images, when the difference degree between the reference frame image and the image to be processed is greater than the preset difference degree, the image to be processed is returned to the data storage center as a key frame image; the staff label the key frame image in the data storage center to obtain a labeled key frame image, and the key frame image and its label are then used as input to train the image recognition model. When the difference degree between the reference frame image and the image to be processed is less than or equal to the preset difference degree, the image to be processed shows only a slight brightness or content change compared with the reference frame image, and it is not returned to the data storage center as a key frame image.
In this process, only the images to be processed with larger differences are used as key frame images, rather than every image to be processed that differs from the reference frame image, so the number of key frame images that need to be labeled manually is reduced and the workload of the workers is reduced. Moreover, in scenes such as traffic sign detection, rod detection, traffic light detection or obstacle detection, discarding images to be processed with only slight brightness or content changes does not affect recognition results such as the positioning frames and categories of the objects recognized by the image recognition model, so the recognition accuracy of the image recognition model is also guaranteed to a certain extent.
In one possible implementation, whether the difference between the reference frame image and the image to be processed is greater than a preset difference may be determined in the following ways.
Mode A1: determining the difference value between the number of objects in the reference frame image and the number of objects in the image to be processed; determining that the difference degree is greater than the preset difference degree when the difference value is greater than a preset difference value; and taking the image to be processed as the key frame image in response to the difference degree being greater than the preset difference degree.
In this embodiment, when the number of objects in the image to be processed differs from the number of objects in the reference frame image, the image to be processed has changed compared with the reference frame image, but whether the change affects the recognition result of the image recognition model needs to be further determined.
Specifically, when the difference value is less than or equal to the preset difference value, the image to be processed has changed only slightly compared with the reference frame image and the recognition result of the image recognition model is not affected; in order to reduce the amount of manual annotation, the image to be processed does not need to be used as a key frame image. When the difference value is greater than the preset difference value, the image to be processed has changed considerably compared with the reference frame image, and the recognition result of the image recognition model may be affected.
The preset difference value may be determined according to a specific application scenario of the image recognition model, and the disclosure is not limited herein.
For example, referring to the reference frame image shown in fig. 2 and the to-be-processed image shown in fig. 3, with a preset difference value of 0: the number of objects recognized in the reference frame image of fig. 2 is 4, namely the objects in the four positioning frames labeled 1, 2, 3 and 4; the number of objects recognized in the to-be-processed image of fig. 3 is 3, namely the objects in the three positioning frames labeled 5, 6 and 7. The difference value between the two is 1, which is greater than the preset difference value of 0, so the to-be-processed image has changed considerably compared with the reference frame image, and the to-be-processed image shown in fig. 3 can be used as the key frame image.
The image recognition model can be trained in advance with images of a plurality of objects so that it has the capability of recognizing objects in an image; when the image recognition model processes the image to be processed or the reference frame image, it can recognize the objects in that image and determine their number.
For example, referring to fig. 2, the image recognition model may be trained by using four sub-images of a telegraph pole, a traffic light, a vehicle and a person in four positioning frames of 1, 2, 3 and 4 in fig. 2 in advance, and when the image recognition model receives the image shown in fig. 2, the number of objects in the image may be determined to be 4.
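As an illustration of mode A1 only, the object-count check could be written as follows (a Python sketch building on the PositioningFrame record above; the function name is hypothetical, and the default preset difference value of 0 matches the fig. 2 / fig. 3 example).

```python
def differs_by_object_count(ref_objects, img_objects, preset_difference=0):
    """Mode A1 sketch: the difference degree is treated as greater than the
    preset difference degree when the difference between the object counts of
    the two frames exceeds the preset difference value."""
    difference = abs(len(ref_objects) - len(img_objects))
    return difference > preset_difference
```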
Mode A2: determining a first category of an object in the reference frame image and a second category of an object in the image to be processed; determining that the difference degree is greater than the preset difference degree when the first category is different from the second category; and taking the image to be processed as the key frame image in response to the difference degree being greater than the preset difference degree.
In this embodiment, when the first category of an object in the reference frame image differs from the second category of an object in the image to be processed, the image to be processed has changed compared with the reference frame image, but whether the change affects the recognition result of the image recognition model needs to be further determined.
Specifically, when the first category is the same as the second category, the image to be processed has changed only slightly or not at all compared with the reference frame image, the recognition result of the image recognition model is not affected, and, in order to reduce the amount of manual annotation, the image to be processed does not need to be used as a key frame image. When the first category is different from the second category, the image to be processed has changed considerably compared with the reference frame image, and the recognition result of the image recognition model is affected.
Comparing the first category with the second category means comparing whether the category of the objects that the image recognition model is concerned with has changed between the reference frame image and the image to be processed. The category includes the name, shape, type and the like of an object, and is set according to the scene in which the image recognition model is used; different objects have different types, and objects of the same type may have different names.
For example, when the category the image recognition model is concerned with is the name: suppose the model recognizes three objects in the reference frame image named warning signboard, prohibition signboard and indication signboard, and three objects in the image to be processed named warning signboard, prohibition signboard and road construction safety signboard. Although two objects share the same names in both images, the remaining object is named an indication signboard in the reference frame image and a road construction safety signboard in the image to be processed; since these names differ, it can be determined that the category of the objects in the reference frame image is not the same as the category of the objects in the image to be processed.
For example, when the category the image recognition model is concerned with is the shape: suppose the model recognizes three objects in the reference frame image shaped as a circle, a square and a triangle, and three objects in the image to be processed shaped as a circle, a square and a trapezoid. Although two objects share the same shapes in both images, the remaining object is a triangle in the reference frame image and a trapezoid in the image to be processed; since these shapes differ, it can be determined that the category of the objects in the reference frame image is not the same as the category of the objects in the image to be processed.
For example, when the category the image recognition model is concerned with is the type: suppose the model recognizes three objects in the reference frame image whose types are a person, a vehicle and a dog, and three objects in the image to be processed whose types are a person, a vehicle and a horse. Although two objects share the same types in both images, the remaining object is of the dog type in the reference frame image and of the horse type in the image to be processed; since these types differ, it can be determined that the category of the objects in the reference frame image is not the same as the category of the objects in the image to be processed.
Therefore, the image recognition model can be configured in advance with the objects it is concerned with and the categories of those objects, so that after receiving the reference frame image and the image to be processed it can recognize the concerned objects and determine whether the categories of the objects differ between the two frames.
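A minimal Python sketch of the mode A2 comparison under the same assumptions: categories are compared as multisets, so a change in any single object's category (as in the signboard example) is detected even when the object counts are identical.

```python
from collections import Counter

def differs_by_category(ref_objects, img_objects):
    """Mode A2 sketch: compare the categories (name, shape or type, depending
    on what the model is concerned with) of the objects in the two frames."""
    ref_categories = Counter(obj.category for obj in ref_objects)
    img_categories = Counter(obj.category for obj in img_objects)
    return ref_categories != img_categories
```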
Mode A3: determining a first positioning frame of an object in the reference frame image and a second positioning frame of an object in the image to be processed; determining that the difference degree is greater than the preset difference degree when the similarity between the first positioning frame and the second positioning frame is less than a preset similarity; and taking the image to be processed as the key frame image in response to the difference degree being greater than the preset difference degree.
In this embodiment, when the first positioning frame of an object in the reference frame image is not similar to the second positioning frame of an object in the image to be processed, the image to be processed has changed compared with the reference frame image, but whether the change affects the recognition result of the image recognition model needs to be further determined.
Specifically, the first positioning frames of the objects in the reference frame image and the second positioning frames of the objects in the image to be processed may be determined, the similarity between each first positioning frame and each second positioning frame may be calculated to obtain an IoU (Intersection over Union) matrix, and the maximum similarity may be determined from the similarities in the IoU matrix. When the maximum similarity is greater than or equal to the preset similarity, the image to be processed is similar to the reference frame image, having changed only slightly or not at all, the recognition result of the image recognition model is not affected, and, in order to reduce the amount of manual labeling, the image to be processed does not need to be used as a key frame image. When the maximum similarity is less than the preset similarity, the image to be processed is not similar to the reference frame image and has changed considerably; the recognition result of the image recognition model is affected, and, in order to guarantee the recognition accuracy of the image recognition model, the image to be processed can be used as the key frame image so that a worker can further label it.
For example, suppose the reference frame image and the image to be processed each contain three objects. The image recognition model recognizes first positioning frames A, B and C for the three objects in the reference frame image, and second positioning frames A, B and C for the three objects in the image to be processed. When determining the maximum similarity, the size of first positioning frame A is compared in turn with the sizes of second positioning frames A, B and C, then the size of first positioning frame B is compared in turn with the sizes of second positioning frames A, B and C, and finally the size of first positioning frame C is compared in turn with the sizes of second positioning frames A, B and C, giving 9 similarities in total; the maximum similarity is the largest of these 9 values.
When the similarity between the size of a first positioning frame and the size of a second positioning frame is smaller than the maximum similarity, the two objects may be different objects; for example, the similarity between the size of the first positioning frame of a puppy in the reference frame image and the size of the second positioning frame of a puppy in the image to be processed is small, and the two objects are different objects. In this case the similarity can be discarded and not compared with the preset similarity.
When the similarity between the size of a first positioning frame and the size of a second positioning frame is the maximum similarity, the two objects are possibly the same object; for example, the size of the first positioning frame of a giant dog in the reference frame image is similar to the size of the second positioning frame of a giant dog in the image to be processed. In this case, the similarity of the two giant dogs' positioning frame sizes can be compared with the preset similarity: if it is less than the preset similarity, the two giant dogs are determined not to be the same dog, and the image to be processed can be used as the key frame image; if it is greater than or equal to the preset similarity, the two giant dogs are determined to be the same dog, the image to be processed has not changed obviously compared with the reference frame image, and the image to be processed can be screened out.
Calculating the similarity between the size of the first positioning frame and the size of the second positioning frame means calculating how close the length, width and height of the first positioning frame are to those of the second positioning frame: the sum of the differences between the length, width and height of the first positioning frame and those of the second positioning frame is calculated, and the similarity is derived from it. The sum of differences and the similarity are inversely proportional: the smaller the sum of differences, the greater the similarity between the size of the first positioning frame and the size of the second positioning frame.
Specifically, the difference between the lengths of the two positioning frames, the difference between their widths and the difference between their heights are calculated, and the similarity between the sizes of the two positioning frames is then obtained from the inverse proportion between the sum of these three differences and the similarity. Because a smaller sum of differences means a greater similarity, the pair of positioning frames with the smallest sum of differences among all the calculated sums has the maximum similarity, and this maximum similarity between a first positioning frame and a second positioning frame can then be compared with the preset similarity.
For example, if the sums of differences between first positioning frame A and second positioning frames A, B, C and D are 1, 2, 3 and 4 respectively, then the similarities between first positioning frame A and second positioning frames A, B, C and D are 1, 1/2, 1/3 and 1/4 respectively. The similarity of 1 between first positioning frame A and second positioning frame A can be taken as the maximum similarity and compared with the preset similarity to determine whether the image to be processed should be used as the key frame image.
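The following Python sketch illustrates mode A3 using the reciprocal-of-summed-size-differences similarity from the example above (width and height stand in for the length, width and height of the positioning frame); the description equally mentions using an IoU matrix for the same purpose, and the threshold value here is an assumption.

```python
def differs_by_box_similarity(ref_boxes, img_boxes, preset_similarity=0.5):
    """Mode A3 sketch: take the maximum pairwise similarity between first and
    second positioning frames; the image to be processed is a key frame only
    when even that maximum falls below the preset similarity."""
    best = 0.0
    for a in ref_boxes:
        for b in img_boxes:
            diff_sum = abs(a.width - b.width) + abs(a.height - b.height)
            # similarity is inversely proportional to the sum of differences
            similarity = float("inf") if diff_sum == 0 else 1.0 / diff_sum
            best = max(best, similarity)
    return best < preset_similarity
```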
Among the above three modes A1, A2 and A3, one mode may be selected to determine the difference degree between the reference frame image and the image to be processed; the difference degree may also be determined in the order of mode A1, mode A2 and mode A3, in the order of mode A1 and mode A3, or in the order of mode A2 and mode A3, and the present application is not limited herein.
Specifically, determining the difference degree in the order of mode A1, mode A2 and mode A3 includes: when the difference value between the number of objects in the reference frame image and the number of objects in the image to be processed is less than or equal to the preset difference value, mode A1 cannot determine whether the image to be processed is a key frame image, and the first category of the objects in the reference frame image and the second category of the objects in the image to be processed are determined; when the first category is the same as the second category, mode A2 cannot determine whether the image to be processed is a key frame image either, and the similarity between the first positioning frames of the objects in the reference frame image and the second positioning frames of the objects in the image to be processed is determined; when that similarity is less than the preset similarity, the difference degree is determined to be greater than the preset difference degree, and the image to be processed is used as the key frame image.
In this process, because counting the objects requires less computation than comparing the object categories, which in turn requires less computation than calculating the positioning-frame similarity, the image to be processed is determined to be a key frame image as soon as the difference value between the object counts of the reference frame image and the image to be processed is greater than the preset difference value, and the object categories and the positioning-frame similarity no longer need to be calculated; this reduces the amount of data processing and allows the key frame image to be determined more quickly. Likewise, when the first category and the second category are found to be different, the positioning-frame similarity does not need to be calculated, which also reduces the amount of data processing. Data processing is thus reduced by calculating layer by layer, and when one layer cannot determine the key frame image, the next layer complements it, avoiding inaccurate key frame determination caused by an error in a single layer and guaranteeing the accuracy of the determined key frame images.
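Chaining the three modes in this cheapest-first order might look like the Python sketch below, reusing the helper functions from the earlier sketches; the function names and threshold values remain assumptions.

```python
def is_key_frame(ref_objects, img_objects,
                 preset_difference=0, preset_similarity=0.5):
    """Layered check in increasing order of computational cost: object count
    first, then categories, then positioning-frame similarity. Each cheaper
    layer can decide "key frame" on its own; the next layer runs only when
    the previous one cannot decide."""
    if differs_by_object_count(ref_objects, img_objects, preset_difference):
        return True      # mode A1 already decides, no further calculation needed
    if differs_by_category(ref_objects, img_objects):
        return True      # mode A2 decides, box similarity not needed
    return differs_by_box_similarity(ref_objects, img_objects, preset_similarity)
```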
In a possible implementation manner, before the difference degree between the reference frame image and the image to be processed is determined, the images need to be preprocessed in order to guarantee the accuracy with which the image recognition model determines the key frame image. Specifically, the preprocessing includes the following modes:
Mode B1: determining the positioning frame size of each object in the reference frame image and the image to be processed; and removing, from the reference frame image and the image to be processed, a first target object whose positioning frame size is smaller than a preset size.
Specifically, a first target object with a first positioning frame size smaller than a preset size is removed from the reference frame image, and a first target object with a second positioning frame size smaller than the preset size is removed from the image to be processed.
In this way, when the image recognition model recognizes the reference frame image and the image to be processed, a first target object whose positioning frame size is smaller than the preset size either has little influence on the whole image or is a recognition error of the image recognition model; in order to prevent the first target object from affecting the calculation of the similarity between the first positioning frames in the reference frame image and the second positioning frames in the image to be processed, the first target object can be removed. Because the first target object is small, removing it does not affect the recognition result of the image recognition model; and if the image recognition model made an error, removing the first target object avoids miscalculating the similarity between positioning frames due to its inaccurate recognition.
When the positioning frame sizes of the reference frame image are determined: if the reference frame image is the first frame image acquired by the image recognition model, the positioning frame sizes of the reference frame image still need to be recognized and determined; if the reference frame image is a key frame image, the image recognition model has already determined, when determining that key frame image, the positioning frames of its objects (including positioning frame size and positioning frame position) and the categories of those objects, so the positioning frame sizes in the reference frame image do not need to be recognized again, the existing positioning frame sizes are used directly in the similarity calculation with the positioning frame sizes of the image to be processed, and the data processing amount of the image recognition model is reduced.
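A short Python sketch of the mode B1 filter (the function name and the separate width/height thresholds are assumptions; the preset size itself depends on the application scenario).

```python
def remove_small_objects(objects, min_width, min_height):
    """Mode B1 preprocessing sketch: drop first target objects whose
    positioning frame is smaller than the preset size, applied to both the
    reference frame image and the image to be processed before the difference
    degree is computed."""
    return [obj for obj in objects
            if obj.width >= min_width and obj.height >= min_height]
```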
Mode B2: determining the positioning frame position of each object in the reference frame image and the image to be processed; removing, from the reference frame image, a second target object whose positioning frame is located at the edge of the reference frame image; and removing, from the image to be processed, a third target object whose positioning frame is located at the edge of the image to be processed.
In this way, when a first positioning frame is located at the image edge of the reference frame image and/or a second positioning frame is located at the image edge of the image to be processed, the boundaries of those positioning frames are incomplete; the similarity calculated from the size of an incomplete first positioning frame and the size of an incomplete second positioning frame is inaccurate, and the key frame image determined from it is therefore inaccurate.
In order to improve the accuracy of the similarity between the first positioning frame and the second positioning frame, the second target object whose positioning frame lies at the image edge may be removed from the reference frame image, and the third target object whose positioning frame lies at the image edge may be removed from the image to be processed. In this way, the remaining first positioning frames in the reference frame image are complete, the remaining second positioning frames in the image to be processed are complete, and the similarity is calculated between complete first positioning frames and complete second positioning frames, which guarantees the accuracy of the similarity between the sizes of the first and second positioning frames and makes the determined key frame image more accurate.
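A corresponding Python sketch of the mode B2 filter (again with assumed names; the optional margin is an illustrative addition for treating near-edge frames the same way as frames exactly on the edge).

```python
def remove_edge_objects(objects, image_width, image_height, margin=0):
    """Mode B2 preprocessing sketch: drop objects whose positioning frame
    touches the image edge, since a clipped frame makes the size-similarity
    calculation unreliable."""
    kept = []
    for obj in objects:
        touches_edge = (obj.x <= margin or obj.y <= margin
                        or obj.x + obj.width >= image_width - margin
                        or obj.y + obj.height >= image_height - margin)
        if not touches_edge:
            kept.append(obj)
    return kept
```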
Fig. 4 is a block diagram illustrating an image processing apparatus at a vehicle end according to an exemplary embodiment. Referring to fig. 4, the apparatus 120 includes: an obtaining module 121, a difference determining module 122 and a responding module 123.
An acquisition module 121 configured to acquire a reference frame image and an image to be processed by a vehicle-mounted camera;
a difference degree determination module 122 configured to determine a difference degree between the reference frame image and the image to be processed;
a response module 123 configured to, in response to the difference degree being greater than a preset difference degree, return the image to be processed to a data storage center as a key frame image, where the key frame image is used for training an image recognition model;
the reference frame image being the first frame image acquired by the vehicle-mounted camera, or a key frame image determined before the image to be processed.
Optionally, the response module 123 includes:
a difference determination module configured to determine a difference between the number of objects in the reference frame image and the number of objects in the image to be processed;
a first determining module configured to determine that the difference degree is greater than the preset difference degree when the difference value is greater than a preset difference value;
a first response module configured to take the image to be processed as the key frame image in response to the difference degree being greater than the preset difference degree.
Optionally, the response module 123 includes:
a category determination module configured to determine a first category of an object in the reference frame image and a second category of an object in the image to be processed;
a second determination module configured to determine that the degree of difference is greater than the preset degree of difference if the first category is different from the second category;
a second response module configured to take the image to be processed as the key frame image in response to the difference degree being greater than the preset difference degree.
Optionally, the response module 123 includes:
a positioning frame determining module configured to determine a first positioning frame of an object in the reference frame image and a second positioning frame of the object in the image to be processed;
a third determining module configured to determine that the difference degree is greater than the preset difference degree when the similarity between the first positioning frame and the second positioning frame is less than the preset similarity;
and a third response module configured to take the image to be processed as the key frame image in response to the difference degree being greater than the preset difference degree.
Optionally, the vehicle-side image processing apparatus 120 further includes:
a positioning frame size determination module configured to determine a positioning frame size of each object in the reference frame image and the image to be processed;
the first removing module is configured to remove the first target object of which the size of the positioning frame is smaller than a preset size from the reference frame image and the image to be processed.
Optionally, the vehicle-end image processing device 120 further includes:
a positioning frame position determining module configured to determine a positioning frame position of each object in the reference frame image and the image to be processed;
the second eliminating module is configured to eliminate a second target object of the positioning frame at the edge of the reference frame image from the reference frame image;
and the third eliminating module is configured to eliminate a third target object of the positioning frame, which is positioned at the edge of the image to be processed, from the image to be processed.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
The present disclosure also provides a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the vehicle-side image processing method provided by the present disclosure.
Referring to fig. 5, fig. 5 is a functional block diagram of a vehicle 600 according to an exemplary embodiment. The vehicle 600 may be configured in a fully or partially autonomous driving mode. For example, the vehicle 600 may acquire environmental information of its surroundings through the sensing system 620 and derive an automatic driving strategy based on an analysis of the surrounding environmental information to implement full automatic driving, or present the analysis result to the user to implement partial automatic driving.
Vehicle 600 may include various subsystems such as infotainment system 610, perception system 620, decision control system 630, drive system 640, and computing platform 650. Alternatively, vehicle 600 may include more or fewer subsystems, and each subsystem may include multiple components. In addition, each of the sub-systems and components of the vehicle 600 may be interconnected by wire or wirelessly.
In some embodiments, the infotainment system 610 may include a communication system 611, an entertainment system 612, and a navigation system 613.
The communication system 611 may comprise a wireless communication system that may communicate wirelessly with one or more devices, either directly or via a communication network. For example, the wireless communication system may use 3G cellular communication, such as CDMA, EVDO or GSM/GPRS, 4G cellular communication, such as LTE, or 5G cellular communication. The wireless communication system may communicate with a Wireless Local Area Network (WLAN) using WiFi. In some embodiments, the wireless communication system may utilize an infrared link, Bluetooth, or ZigBee to communicate directly with a device. Other wireless protocols, such as various vehicular communication systems, may also be used; for example, the wireless communication system may include one or more Dedicated Short Range Communications (DSRC) devices, which may carry public and/or private data communications between vehicles and/or roadside stations.
The entertainment system 612 may include a display device, a microphone and a speaker. Based on the entertainment system, a user may listen to the radio or play music in the car; alternatively, a mobile phone may communicate with the vehicle so that its screen is projected onto the display device. The display device may be touch-controlled, and a user may operate it by touching the screen.
In some cases, the voice signal of the user may be acquired through a microphone, and certain control of the vehicle 600 by the user, such as adjusting the temperature in the vehicle, etc., may be implemented according to the analysis of the voice signal of the user. In other cases, music may be played to the user through a stereo.
The navigation system 613 may include a map service provided by a map provider to provide navigation of a route of travel for the vehicle 600, and the navigation system 613 may be used in conjunction with a global positioning system 621 and an inertial measurement unit 622 of the vehicle. The map service provided by the map provider can be a two-dimensional map or a high-precision map.
The sensing system 620 may include several sensors that sense information about the environment surrounding the vehicle 600. For example, the sensing system 620 may include a global positioning system 621 (the global positioning system may be a GPS system, a Beidou system, or another positioning system), an Inertial Measurement Unit (IMU) 622, a laser radar 623, a millimeter wave radar 624, an ultrasonic radar 625, and a camera 626. The sensing system 620 may also include sensors that monitor internal systems of the vehicle 600 (e.g., an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensor data from one or more of these sensors may be used to detect objects and their corresponding characteristics (position, shape, orientation, velocity, etc.). Such detection and identification is a critical function for the safe operation of the vehicle 600.
Global positioning system 621 is used to estimate the geographic location of vehicle 600.
The inertial measurement unit 622 is used to sense a pose change of the vehicle 600 based on the inertial acceleration. In some embodiments, inertial measurement unit 622 may be a combination of accelerometers and gyroscopes.
Lidar 623 utilizes laser light to sense objects in the environment in which vehicle 600 is located. In some embodiments, lidar 623 may include one or more laser sources, laser scanners, and one or more detectors, among other system components.
The millimeter-wave radar 624 utilizes radio signals to sense objects within the surrounding environment of the vehicle 600. In some embodiments, in addition to sensing objects, the millimeter-wave radar 624 may also be used to sense the speed and/or heading of objects.
The ultrasonic radar 625 may sense objects around the vehicle 600 using ultrasonic signals.
The camera 626 is used to capture image information of the surrounding environment of the vehicle 600. The camera 626 may include a monocular camera, a binocular camera, a structured light camera, a panoramic camera, and the like, and the image information acquired by the camera 626 may include still images or video stream information.
The decision control system 630 includes a computing system 631 that makes analytical decisions based on information acquired by the sensing system 620. The decision control system 630 further includes a vehicle control unit 632 that controls the powertrain of the vehicle 600, as well as a steering system 633, a throttle 634, and a brake system 635 for controlling the vehicle 600.
The computing system 631 may operate to process and analyze the various information acquired by the perception system 620 in order to identify objects and/or features in the environment surrounding the vehicle 600. The objects may include pedestrians or animals, and the features may include traffic signals, road boundaries, and obstacles. The computing system 631 may use techniques such as object recognition algorithms, Structure from Motion (SFM) algorithms, and video tracking. In some embodiments, the computing system 631 may be used to map an environment, track objects, estimate the speed of objects, and so forth. The computing system 631 may analyze the various information obtained and derive a control strategy for the vehicle.
The vehicle control unit 632 may be used to perform coordinated control on the power battery and the engine 641 of the vehicle to improve the power performance of the vehicle 600.
The steering system 633 is operable to adjust the heading of the vehicle 600. For example, in one embodiment, the steering system 633 may be a steering wheel system.
The throttle 634 is used to control the operating speed of the engine 641 and thus the speed of the vehicle 600.
The brake system 635 is used to control the deceleration of the vehicle 600. The brake system 635 may use friction to slow the wheels 644. In some embodiments, the brake system 635 may convert the kinetic energy of the wheels 644 into electric current. The brake system 635 may also take other forms to slow the rotational speed of the wheels 644 to control the speed of the vehicle 600.
The drive system 640 may include components that provide powered motion to the vehicle 600. In one embodiment, the drive system 640 may include an engine 641, an energy source 642, a transmission 643, and wheels 644. The engine 641 may be an internal combustion engine, an electric motor, an air compression engine, or another combination of engine types, such as a hybrid engine consisting of a gasoline engine and an electric motor, or a hybrid engine consisting of an internal combustion engine and an air compression engine. The engine 641 converts the energy source 642 into mechanical energy.
Examples of energy sources 642 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electrical power. The energy source 642 may also provide energy to other systems of the vehicle 600.
The transmission 643 may transmit mechanical power from the engine 641 to the wheels 644. The transmission 643 may include a gearbox, a differential, and a drive shaft. In one embodiment, the transmission 643 may also include other components, such as clutches. Wherein the drive shaft may include one or more axles that may be coupled to one or more wheels 644.
Some or all of the functions of the vehicle 600 are controlled by the computing platform 650. The computing platform 650 can include at least one first processor 651, which first processor 651 can execute instructions 653 stored in a non-transitory computer-readable medium, such as first memory 652. In some embodiments, computing platform 650 may also be a plurality of computing devices that control individual components or subsystems of vehicle 600 in a distributed manner.
The first processor 651 may be any conventional processor, such as a commercially available CPU. Alternatively, the first processor 651 may also include a processor such as a Graphics Processing Unit (GPU), a Field Programmable Gate Array (FPGA), a System On Chip (SOC), an Application Specific Integrated Circuit (ASIC), or a combination thereof. Although fig. 5 functionally illustrates processors, memories, and other elements of a computer in the same block, one of ordinary skill in the art will appreciate that the processors, computers, or memories may actually comprise multiple processors, computers, or memories that may or may not be stored within the same physical housing. For example, the memory may be a hard drive or other storage medium located in a different enclosure than the computer. Thus, references to a processor or computer are to be understood as including references to a collection of processors, computers, or memories that may or may not operate in parallel. Rather than using a single processor to perform the steps described herein, some components, such as the steering component and the retarding component, may each have their own processor that performs only computations related to the component-specific functions.
In the embodiment of the present disclosure, the first processor 651 may perform the above-described vehicle-end image processing method.
In various aspects described herein, the first processor 651 may be located remotely from the vehicle and in wireless communication with the vehicle. In other aspects, some of the processes described herein are executed on a processor disposed within the vehicle and others are executed by a remote processor, including taking the steps necessary to perform a single maneuver.
In some embodiments, the first memory 652 can contain instructions 653 (e.g., program logic), which instructions 653 can be executed by the first processor 651 to perform various functions of the vehicle 600. The first memory 652 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the infotainment system 610, the perception system 620, the decision control system 630, the drive system 640.
In addition to instructions 653, first memory 652 may also store data such as road maps, route information, the location, direction, speed, and other such vehicle data of the vehicle, as well as other information. Such information may be used by the vehicle 600 and the computing platform 650 during operation of the vehicle 600 in autonomous, semi-autonomous, and/or manual modes.
The computing platform 650 may control functions of the vehicle 600 based on inputs received from various subsystems (e.g., the drive system 640, the perception system 620, and the decision control system 630). For example, computing platform 650 may utilize input from decision control system 630 in order to control steering system 633 to avoid obstacles detected by perception system 620. In some embodiments, the computing platform 650 is operable to provide control over many aspects of the vehicle 600 and its subsystems.
Optionally, one or more of these components described above may be mounted or associated separately from the vehicle 600. For example, the first memory 652 may exist partially or completely separately from the vehicle 600. The aforementioned components may be communicatively coupled together in a wired and/or wireless manner.
Optionally, the above components are only an example; in an actual application, components in the above modules may be added or deleted according to actual needs, and fig. 5 should not be construed as limiting the embodiment of the present disclosure.
An autonomous automobile traveling on a roadway, such as the vehicle 600 above, may identify objects within its surrounding environment to determine an adjustment to its current speed. The object may be another vehicle, a traffic control device, or another type of object. In some examples, each identified object may be considered independently, and the respective characteristics of the object, such as its current speed, acceleration, and separation from the vehicle, may be used to determine the speed to which the autonomous vehicle is to be adjusted.
Optionally, the vehicle 600 or a sensory and computing device associated with the vehicle 600 (e.g., the computing system 631, the computing platform 650) may predict the behavior of an identified object based on the characteristics of the identified object and the state of the surrounding environment (e.g., traffic, rain, ice on the road, etc.). Optionally, the behavior of each identified object may depend on the behavior of the others, so the behavior of a single identified object may also be predicted by taking all of the identified objects into account together. The vehicle 600 is able to adjust its speed based on the predicted behavior of the identified object. In other words, the autonomous vehicle is able to determine, based on the predicted behavior of the object, the stable state to which the vehicle will need to adjust (e.g., accelerate, decelerate, or stop). In this process, other factors may also be considered to determine the speed of the vehicle 600, such as the lateral position of the vehicle 600 in the road being traveled, the curvature of the road, and the proximity of static and dynamic objects.
In addition to providing instructions to adjust the speed of the autonomous vehicle, the computing device may also provide instructions to modify the steering angle of the vehicle 600 to cause the autonomous vehicle to follow a given trajectory and/or maintain a safe lateral and longitudinal distance from objects in the vicinity of the autonomous vehicle (e.g., vehicles in adjacent lanes on the road).
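For illustration only, the following minimal sketch shows how such a speed-adjustment decision could be organized around the predicted behavior of identified objects. All class names, fields, and thresholds here are hypothetical assumptions introduced for this example and are not part of the disclosure.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TrackedObject:
    """Hypothetical attributes of an identified object near the vehicle."""
    distance_m: float          # separation from the autonomous vehicle, in meters
    closing_speed_mps: float   # positive when the gap to the object is shrinking
    predicted_to_cut_in: bool  # predicted behavior of the object

def target_speed(objects: List[TrackedObject], current_speed_mps: float) -> float:
    """Pick a target speed from the predicted behavior of the identified objects.
    The thresholds below are placeholders, not values taken from the disclosure."""
    target = current_speed_mps
    for obj in objects:
        # Decelerate when an object is predicted to cut in or is closing from nearby.
        if obj.predicted_to_cut_in or (obj.distance_m < 30.0 and obj.closing_speed_mps > 0.0):
            target = min(target, current_speed_mps * 0.8)
        # Stop when an object is very close.
        if obj.distance_m < 10.0:
            target = 0.0
    return target
```

A real planner would, of course, also account for the lateral position of the vehicle, road curvature, and comfort limits, as noted above.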
The vehicle 600 may be any type of vehicle, such as a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a recreational vehicle, a train, etc., and the disclosed embodiment is not particularly limited.
Fig. 6 is a block diagram illustrating an apparatus 800 for determining a key frame image according to an example embodiment. For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, a vehicle, and the like.
Referring to fig. 6, the apparatus 800 may include one or more of the following components: a first processing component 802, a second memory 804, a first power component 806, a multimedia component 808, an audio component 810, a first input/output interface 812, a sensor component 814, and a communication component 816.
The first processing component 802 generally controls overall operation of the device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The first processing component 802 may include one or more second processors 820 to execute instructions to perform all or a portion of the steps of the vehicle-side image processing method. Further, the first processing component 802 can include one or more modules that facilitate interaction between the first processing component 802 and other components. For example, the first processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the first processing component 802.
The second memory 804 is configured to store various types of data to support operations at the apparatus 800. Examples of such data include instructions for any application or method operating on device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The second memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
A first power supply component 806 provides power to the various components of the device 800. The first power component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 800.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 800 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the second memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The first input/output interface 812 provides an interface between the first processing component 802 and a peripheral interface module, which may be a keyboard, click wheel, button, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the device 800. For example, the sensor assembly 814 may detect the open/closed status of the device 800 and the relative positioning of components, such as a display and keypad of the device 800. The sensor assembly 814 may also detect a change in the position of the device 800 or a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in the temperature of the device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communications between the apparatus 800 and other devices in a wired or wireless manner. The device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described vehicle-end image processing method.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the second memory 804 comprising instructions, executable by the second processor 820 of the apparatus 800 to perform the vehicle-end image processing method described above is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
The apparatus may be a part of a stand-alone electronic device. For example, in an embodiment, the apparatus may be an Integrated Circuit (IC) or a chip, where the IC may be a single IC or a collection of multiple ICs; the chip may include, but is not limited to, the following categories: a GPU (Graphics Processing Unit), a CPU (Central Processing Unit), an FPGA (Field Programmable Gate Array), a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an SOC (System on Chip), and the like. The integrated circuit or the chip may be used to execute executable instructions (or code) to implement the vehicle-end image processing method. The executable instructions may be stored in the integrated circuit or chip, or may be retrieved from another device or apparatus; for example, the integrated circuit or chip may include a processor, a memory, and an interface for communicating with other devices. The executable instructions may be stored in the processor, and when executed by the processor, implement the vehicle-end image processing method; alternatively, the integrated circuit or the chip may receive the executable instructions through the interface and transmit them to the processor for execution, so as to implement the vehicle-end image processing method.
In another exemplary embodiment, a computer program product is also provided, which comprises a computer program executable by a programmable apparatus, the computer program having code portions for performing the above-described vehicle-end image processing method when executed by the programmable apparatus.
Fig. 7 is a block diagram illustrating an apparatus 1900 for determining a key frame image according to an example embodiment. For example, the apparatus 1900 may be provided as a server. Referring to FIG. 7, the apparatus 1900 includes a second processing component 1922, which further includes one or more processors, and memory resources represented by a third memory 1932 for storing instructions (e.g., application programs) executable by the second processing component 1922. The application programs stored in the third memory 1932 may include one or more modules, each of which corresponds to a set of instructions. Further, the second processing component 1922 is configured to execute the instructions to perform the vehicle-end image processing method described above.
The device 1900 may also include a second power component 1926 configured to perform power management of the device 1900, a wired or wireless network interface 1950 configured to connect the device 1900 to a network, and a second input/output interface 1958. The device 1900 may operate based on an operating system stored in the third memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A vehicle-end image processing method is characterized by comprising the following steps:
acquiring a reference frame image and an image to be processed by a vehicle-mounted camera;
determining the difference degree between the reference frame image and the image to be processed;
responding to the state that the difference degree is larger than the preset difference degree, and transmitting the image to be processed back to a data storage center as a key frame image, wherein the key frame image is used for training an image recognition model;
the reference frame image is a first frame image acquired by the vehicle-mounted camera, or a previously determined key frame image that sequentially precedes the image to be processed.
2. The vehicle-end image processing method according to claim 1, wherein the taking the image to be processed as a key frame image in response to the state where the degree of difference is greater than a preset degree of difference includes:
determining the difference value between the number of objects in the reference frame image and the number of objects in the image to be processed;
determining that the difference degree is greater than a preset difference degree under the condition that the difference value is greater than the preset difference value;
and taking the image to be processed as the key frame image in response to the state that the difference degree is greater than the preset difference degree.
3. The vehicle-end image processing method according to claim 1, wherein the taking the image to be processed as a key frame image in response to the state where the degree of difference is greater than a preset degree of difference includes:
determining a first class of an object in the reference frame image and a second class of the object in the image to be processed;
determining that the difference degree is greater than the preset difference degree when the first category is different from the second category;
and taking the image to be processed as the key frame image in response to the state that the difference degree is greater than the preset difference degree.
4. The vehicle-end image processing method according to claim 1, wherein the taking the image to be processed as a key frame image in response to the state where the degree of difference is greater than a preset degree of difference includes:
determining a first positioning frame of an object in the reference frame image and a second positioning frame of the object in the image to be processed;
determining that the difference degree is greater than a preset difference degree under the condition that the similarity between the first positioning frame and the second positioning frame is smaller than the preset similarity;
and taking the image to be processed as a key frame image in response to the state that the difference degree is greater than the preset difference degree.
5. The vehicle-end image processing method according to claim 1, wherein before determining the degree of difference between the reference frame image and the image to be processed, the method further comprises:
determining the size of a positioning frame of each object in the reference frame image and the image to be processed;
and removing the first target object of which the size of the positioning frame is smaller than a preset size from the reference frame image and the image to be processed.
6. The vehicle-end image processing method according to claim 1, wherein before determining the degree of difference between the reference frame image and the image to be processed, the method further comprises:
determining the position of a positioning frame of each object in the reference frame image and the image to be processed;
removing a second target object of the positioning frame at the edge of the reference frame image from the reference frame image;
and removing a third target object of the positioning frame positioned at the edge of the image to be processed from the image to be processed.
7. An image processing apparatus at a vehicle end, comprising:
the acquisition module is configured to acquire a reference frame image and an image to be processed through the vehicle-mounted camera;
a difference degree determination module configured to determine a difference degree between the reference frame image and the image to be processed;
the response module is configured to respond to a state that the difference degree is larger than a preset difference degree, and return the image to be processed to a data storage center as a key frame image, wherein the key frame image is used for training an image recognition model;
the reference frame image is a first frame image acquired by the vehicle-mounted camera, or a previously determined key frame image that sequentially precedes the image to be processed.
8. A vehicle, characterized by comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
the steps of executing the vehicle-end image processing method according to any one of claims 1 to 6.
9. A computer-readable storage medium having stored thereon computer program instructions, the program instructions when executed by a processor implementing the steps of the method for processing images at a vehicle end according to any of claims 1 to 6.
10. A chip comprising a processor and an interface; the processor is used for reading instructions to execute the steps of the vehicle-end image processing method according to any one of claims 1-6.
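For readability, the following is a minimal, non-limiting sketch in Python of the key-frame screening described in claims 1 to 6. Every function name, data structure, and threshold below is an assumption introduced only for illustration and does not appear in the disclosure; in particular, intersection-over-union is used here as just one possible measure of positioning-frame similarity, and treating the first acquired frame as a key frame is likewise an assumption.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Detection:
    """A detected object: category label and positioning frame (x1, y1, x2, y2) in pixels."""
    category: str
    box: tuple

def filter_small_boxes(dets: List[Detection], min_size: float = 32.0) -> List[Detection]:
    """Claim 5: drop objects whose positioning frame is smaller than a preset size."""
    return [d for d in dets
            if (d.box[2] - d.box[0]) >= min_size and (d.box[3] - d.box[1]) >= min_size]

def filter_edge_boxes(dets: List[Detection], img_w: int, img_h: int,
                      margin: int = 8) -> List[Detection]:
    """Claim 6: drop objects whose positioning frame lies at the image edge."""
    kept = []
    for d in dets:
        x1, y1, x2, y2 = d.box
        if x1 > margin and y1 > margin and x2 < img_w - margin and y2 < img_h - margin:
            kept.append(d)
    return kept

def iou(a: tuple, b: tuple) -> float:
    """Intersection-over-union of two boxes, one possible positioning-frame similarity (claim 4)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def is_key_frame(ref: List[Detection], cur: List[Detection],
                 max_count_diff: int = 1,
                 min_box_similarity: float = 0.5) -> bool:
    """Return True when the difference degree between the reference frame image and the
    image to be processed exceeds the preset difference degree (claims 2 to 4)."""
    # Claim 2: difference in the number of objects exceeds the preset difference value.
    if abs(len(ref) - len(cur)) > max_count_diff:
        return True
    # Claim 3: object categories differ between the two images.
    if sorted(d.category for d in ref) != sorted(d.category for d in cur):
        return True
    # Claim 4: positioning-frame similarity below the preset similarity.
    for r, c in zip(sorted(ref, key=lambda d: d.box), sorted(cur, key=lambda d: d.box)):
        if iou(r.box, c.box) < min_box_similarity:
            return True
    return False

def process_frame(reference: Optional[List[Detection]], current: List[Detection],
                  img_w: int, img_h: int) -> bool:
    """Claim 1: compare the image to be processed with the reference frame image and decide
    whether to return it to the data storage center as a key frame for model training."""
    current = filter_edge_boxes(filter_small_boxes(current), img_w, img_h)
    if reference is None:
        # First frame acquired by the vehicle-mounted camera; keeping it as a key frame
        # here is an assumption of this sketch.
        return True
    reference = filter_edge_boxes(filter_small_boxes(reference), img_w, img_h)
    return is_key_frame(reference, current)
```

In such a sketch, the calling side would keep the most recently returned key frame as the reference frame, so that the reference frame is either the first frame acquired by the vehicle-mounted camera or the key frame determined most recently before the image to be processed, consistent with claim 1.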
CN202210771048.7A 2022-06-30 2022-06-30 Vehicle-end image processing method and device, vehicle, storage medium and chip Active CN115115822B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210771048.7A CN115115822B (en) 2022-06-30 2022-06-30 Vehicle-end image processing method and device, vehicle, storage medium and chip

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210771048.7A CN115115822B (en) 2022-06-30 2022-06-30 Vehicle-end image processing method and device, vehicle, storage medium and chip

Publications (2)

Publication Number Publication Date
CN115115822A true CN115115822A (en) 2022-09-27
CN115115822B CN115115822B (en) 2023-10-31

Family

ID=83329981

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210771048.7A Active CN115115822B (en) 2022-06-30 2022-06-30 Vehicle-end image processing method and device, vehicle, storage medium and chip

Country Status (1)

Country Link
CN (1) CN115115822B (en)

Citations (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016061885A (en) * 2014-09-17 2016-04-25 ヤフー株式会社 Advertisement display device, advertisement display method and advertisement display program
US20170154221A1 (en) * 2015-12-01 2017-06-01 Xiaomi Inc. Video categorization method and apparatus, and storage medium
WO2017113691A1 (en) * 2015-12-29 2017-07-06 乐视控股(北京)有限公司 Method and device for identifying video characteristics
WO2018058321A1 (en) * 2016-09-27 2018-04-05 SZ DJI Technology Co., Ltd. Method and system for creating video abstraction from image data captured by a movable object
WO2019114405A1 (en) * 2017-12-13 2019-06-20 北京市商汤科技开发有限公司 Video recognition and training method and apparatus, electronic device and medium
CN110189242A (en) * 2019-05-06 2019-08-30 百度在线网络技术(北京)有限公司 Image processing method and device
CN110287876A (en) * 2019-06-25 2019-09-27 黑龙江电力调度实业有限公司 A kind of content identification method based on video image
CN110599486A (en) * 2019-09-20 2019-12-20 福州大学 Method and system for detecting video plagiarism
CN110740266A (en) * 2019-11-01 2020-01-31 Oppo广东移动通信有限公司 Image frame selection method and device, storage medium and electronic equipment
CN110852289A (en) * 2019-11-16 2020-02-28 公安部交通管理科学研究所 Method for extracting information of vehicle and driver based on mobile video
CN110941594A (en) * 2019-12-16 2020-03-31 北京奇艺世纪科技有限公司 Splitting method and device of video file, electronic equipment and storage medium
WO2020088134A1 (en) * 2018-10-31 2020-05-07 Oppo广东移动通信有限公司 Video correction method and device, electronic apparatus, and computer-readable storage medium
CN111324874A (en) * 2020-01-21 2020-06-23 支付宝实验室(新加坡)有限公司 Certificate authenticity identification method and device
CN111429517A (en) * 2020-03-23 2020-07-17 Oppo广东移动通信有限公司 Relocation method, relocation device, storage medium and electronic device
CN111523438A (en) * 2020-04-20 2020-08-11 支付宝实验室(新加坡)有限公司 Living body identification method, terminal device and electronic device
CN111552837A (en) * 2020-05-08 2020-08-18 深圳市英威诺科技有限公司 Animal video tag automatic generation method based on deep learning, terminal and medium
CN111629262A (en) * 2020-05-08 2020-09-04 Oppo广东移动通信有限公司 Video image processing method and device, electronic equipment and storage medium
CN111626137A (en) * 2020-04-29 2020-09-04 平安国际智慧城市科技股份有限公司 Video-based motion evaluation method and device, computer equipment and storage medium
CN111797262A (en) * 2020-06-24 2020-10-20 北京小米松果电子有限公司 Poetry generation method and device, electronic equipment and storage medium
CN112419261A (en) * 2020-11-19 2021-02-26 江汉大学 Visual acquisition method and device with abnormal point removing function
CN112560684A (en) * 2020-12-16 2021-03-26 北京百度网讯科技有限公司 Lane line detection method, lane line detection device, electronic apparatus, storage medium, and vehicle
CN112700489A (en) * 2020-12-30 2021-04-23 武汉大学 Ship-based video image sea ice thickness measurement method and system based on deep learning
WO2021083242A1 (en) * 2019-10-31 2021-05-06 Oppo广东移动通信有限公司 Map constructing method, positioning method and system, wireless communication terminal, and computer-readable medium
EP3819820A2 (en) * 2020-06-28 2021-05-12 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and apparatus for recognizing key identifier in video, device and storage medium
WO2021092815A1 (en) * 2019-11-13 2021-05-20 深圳市大疆创新科技有限公司 Identification method, temperature measurement method, device and storage medium
CN113139560A (en) * 2020-01-17 2021-07-20 北京达佳互联信息技术有限公司 Training method and device of video processing model, and video processing method and device
WO2021160184A1 (en) * 2020-02-14 2021-08-19 Huawei Technologies Co., Ltd. Target detection method, training method, electronic device, and computer-readable medium
CN113313112A (en) * 2021-05-31 2021-08-27 浙江商汤科技开发有限公司 Image processing method and device, computer equipment and storage medium
CN113722541A (en) * 2021-08-30 2021-11-30 深圳市商汤科技有限公司 Video fingerprint generation method and device, electronic equipment and storage medium
US20220004773A1 (en) * 2020-07-06 2022-01-06 Electronics And Telecommunications Research Institute Apparatus for training recognition model, apparatus for analyzing video, and apparatus for providing video search service
CN114139015A (en) * 2021-11-30 2022-03-04 招商局金融科技有限公司 Video storage method, device, equipment and medium based on key event identification
CN114187557A (en) * 2021-12-15 2022-03-15 北京字节跳动网络技术有限公司 Method, device, readable medium and electronic equipment for determining key frame
CN114325856A (en) * 2021-11-30 2022-04-12 国网河南省电力公司周口供电公司 Power transmission line foreign matter monitoring method based on edge calculation
US20220137636A1 (en) * 2020-10-30 2022-05-05 Uatc, Llc Systems and Methods for Simultaneous Localization and Mapping Using Asynchronous Multi-View Cameras
CN114494982A (en) * 2022-04-08 2022-05-13 北京嘉沐安科技有限公司 Live video big data accurate recommendation method and system based on artificial intelligence
CN114550053A (en) * 2022-02-24 2022-05-27 深圳壹账通科技服务有限公司 Traffic accident responsibility determination method, device, computer equipment and storage medium
WO2022111168A1 (en) * 2020-11-26 2022-06-02 腾讯音乐娱乐科技(深圳)有限公司 Video classification method and apparatus
KR20220074782A (en) * 2020-11-27 2022-06-03 삼성전자주식회사 Method and device for simultaneous localization and mapping (slam)

Patent Citations (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016061885A (en) * 2014-09-17 2016-04-25 ヤフー株式会社 Advertisement display device, advertisement display method and advertisement display program
US20170154221A1 (en) * 2015-12-01 2017-06-01 Xiaomi Inc. Video categorization method and apparatus, and storage medium
WO2017113691A1 (en) * 2015-12-29 2017-07-06 乐视控股(北京)有限公司 Method and device for identifying video characteristics
WO2018058321A1 (en) * 2016-09-27 2018-04-05 SZ DJI Technology Co., Ltd. Method and system for creating video abstraction from image data captured by a movable object
CN109792543A (en) * 2016-09-27 2019-05-21 深圳市大疆创新科技有限公司 According to the method and system of mobile article captured image data creation video abstraction
WO2019114405A1 (en) * 2017-12-13 2019-06-20 北京市商汤科技开发有限公司 Video recognition and training method and apparatus, electronic device and medium
WO2020088134A1 (en) * 2018-10-31 2020-05-07 Oppo广东移动通信有限公司 Video correction method and device, electronic apparatus, and computer-readable storage medium
CN110189242A (en) * 2019-05-06 2019-08-30 百度在线网络技术(北京)有限公司 Image processing method and device
CN110287876A (en) * 2019-06-25 2019-09-27 黑龙江电力调度实业有限公司 A kind of content identification method based on video image
CN110599486A (en) * 2019-09-20 2019-12-20 福州大学 Method and system for detecting video plagiarism
WO2021083242A1 (en) * 2019-10-31 2021-05-06 Oppo广东移动通信有限公司 Map constructing method, positioning method and system, wireless communication terminal, and computer-readable medium
CN110740266A (en) * 2019-11-01 2020-01-31 Oppo广东移动通信有限公司 Image frame selection method and device, storage medium and electronic equipment
WO2021092815A1 (en) * 2019-11-13 2021-05-20 深圳市大疆创新科技有限公司 Identification method, temperature measurement method, device and storage medium
CN110852289A (en) * 2019-11-16 2020-02-28 公安部交通管理科学研究所 Method for extracting information of vehicle and driver based on mobile video
CN110941594A (en) * 2019-12-16 2020-03-31 北京奇艺世纪科技有限公司 Splitting method and device of video file, electronic equipment and storage medium
CN113139560A (en) * 2020-01-17 2021-07-20 北京达佳互联信息技术有限公司 Training method and device of video processing model, and video processing method and device
CN111324874A (en) * 2020-01-21 2020-06-23 支付宝实验室(新加坡)有限公司 Certificate authenticity identification method and device
WO2021160184A1 (en) * 2020-02-14 2021-08-19 Huawei Technologies Co., Ltd. Target detection method, training method, electronic device, and computer-readable medium
CN111429517A (en) * 2020-03-23 2020-07-17 Oppo广东移动通信有限公司 Relocation method, relocation device, storage medium and electronic device
CN111523438A (en) * 2020-04-20 2020-08-11 支付宝实验室(新加坡)有限公司 Living body identification method, terminal device and electronic device
CN111626137A (en) * 2020-04-29 2020-09-04 平安国际智慧城市科技股份有限公司 Video-based motion evaluation method and device, computer equipment and storage medium
CN111552837A (en) * 2020-05-08 2020-08-18 深圳市英威诺科技有限公司 Animal video tag automatic generation method based on deep learning, terminal and medium
CN111629262A (en) * 2020-05-08 2020-09-04 Oppo广东移动通信有限公司 Video image processing method and device, electronic equipment and storage medium
CN111797262A (en) * 2020-06-24 2020-10-20 北京小米松果电子有限公司 Poetry generation method and device, electronic equipment and storage medium
EP3819820A2 (en) * 2020-06-28 2021-05-12 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and apparatus for recognizing key identifier in video, device and storage medium
US20220004773A1 (en) * 2020-07-06 2022-01-06 Electronics And Telecommunications Research Institute Apparatus for training recognition model, apparatus for analyzing video, and apparatus for providing video search service
US20220137636A1 (en) * 2020-10-30 2022-05-05 Uatc, Llc Systems and Methods for Simultaneous Localization and Mapping Using Asynchronous Multi-View Cameras
CN112419261A (en) * 2020-11-19 2021-02-26 江汉大学 Visual acquisition method and device with abnormal point removing function
WO2022111168A1 (en) * 2020-11-26 2022-06-02 腾讯音乐娱乐科技(深圳)有限公司 Video classification method and apparatus
KR20220074782A (en) * 2020-11-27 2022-06-03 삼성전자주식회사 Method and device for simultaneous localization and mapping (slam)
CN112560684A (en) * 2020-12-16 2021-03-26 北京百度网讯科技有限公司 Lane line detection method, lane line detection device, electronic apparatus, storage medium, and vehicle
CN112700489A (en) * 2020-12-30 2021-04-23 武汉大学 Ship-based video image sea ice thickness measurement method and system based on deep learning
CN113313112A (en) * 2021-05-31 2021-08-27 浙江商汤科技开发有限公司 Image processing method and device, computer equipment and storage medium
CN113722541A (en) * 2021-08-30 2021-11-30 深圳市商汤科技有限公司 Video fingerprint generation method and device, electronic equipment and storage medium
CN114325856A (en) * 2021-11-30 2022-04-12 国网河南省电力公司周口供电公司 Power transmission line foreign matter monitoring method based on edge calculation
CN114139015A (en) * 2021-11-30 2022-03-04 招商局金融科技有限公司 Video storage method, device, equipment and medium based on key event identification
CN114187557A (en) * 2021-12-15 2022-03-15 北京字节跳动网络技术有限公司 Method, device, readable medium and electronic equipment for determining key frame
CN114550053A (en) * 2022-02-24 2022-05-27 深圳壹账通科技服务有限公司 Traffic accident responsibility determination method, device, computer equipment and storage medium
CN114494982A (en) * 2022-04-08 2022-05-13 北京嘉沐安科技有限公司 Live video big data accurate recommendation method and system based on artificial intelligence

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SHIZE HUANG et al.: "A Novel Method to Perceive Self-Vehicle State Based on Vehicle Video by Image Similarity Calculation", IEEE Open Journal of Instrumentation and Measurement, vol. 1, pages 1-11, XP011914515, DOI: 10.1109/OJIM.2022.3186051 *
CHE Ling (车领): "Research on Motion Perception and Street-Crossing Intention Prediction of Pedestrians by Intelligent Vehicles", China Master's Theses Full-text Database, Engineering Science and Technology II, no. 8, pages 035-151 *

Also Published As

Publication number Publication date
CN115115822B (en) 2023-10-31

Similar Documents

Publication Publication Date Title
CN114882464B (en) Multi-task model training method, multi-task processing method, device and vehicle
CN114935334B (en) Construction method and device of lane topological relation, vehicle, medium and chip
US20240017719A1 (en) Mapping method and apparatus, vehicle, readable storage medium, and chip
CN114771539B (en) Vehicle lane change decision method and device, storage medium and vehicle
CN114842455B (en) Obstacle detection method, device, equipment, medium, chip and vehicle
CN115164910B (en) Travel route generation method, travel route generation device, vehicle, storage medium, and chip
CN114756700B (en) Scene library establishing method and device, vehicle, storage medium and chip
CN115170630B (en) Map generation method, map generation device, electronic equipment, vehicle and storage medium
CN115203457B (en) Image retrieval method, device, vehicle, storage medium and chip
CN114880408A (en) Scene construction method, device, medium and chip
CN114973178A (en) Model training method, object recognition method, device, vehicle and storage medium
CN115649190A (en) Control method, device, medium, vehicle and chip for vehicle auxiliary braking
CN115205311A (en) Image processing method, image processing apparatus, vehicle, medium, and chip
CN114537450A (en) Vehicle control method, device, medium, chip, electronic device and vehicle
CN115115822B (en) Vehicle-end image processing method and device, vehicle, storage medium and chip
CN115042813B (en) Vehicle control method and device, storage medium and vehicle
CN114572219B (en) Automatic overtaking method and device, vehicle, storage medium and chip
CN114771514B (en) Vehicle running control method, device, equipment, medium, chip and vehicle
CN114821511B (en) Rod body detection method and device, vehicle, storage medium and chip
CN114789723B (en) Vehicle running control method and device, vehicle, storage medium and chip
CN115221260B (en) Data processing method, device, vehicle and storage medium
CN114842454B (en) Obstacle detection method, device, equipment, storage medium, chip and vehicle
CN115535004B (en) Distance generation method, device, storage medium and vehicle
CN115205804A (en) Image processing method, image processing apparatus, vehicle, medium, and chip
CN114954528A (en) Vehicle control method, device, vehicle, storage medium and chip

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant