CN111797659A - Driving assistance method and device, storage medium and electronic equipment - Google Patents

Driving assistance method and device, storage medium and electronic equipment

Info

Publication number
CN111797659A
Authority
CN
China
Prior art keywords
image
algorithm
probability value
pixel matrix
inputting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910282449.4A
Other languages
Chinese (zh)
Inventor
陈仲铭
何明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910282449.4A priority Critical patent/CN111797659A/en
Publication of CN111797659A publication Critical patent/CN111797659A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C3/00 Measuring distances in line of sight; Optical rangefinders
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01P MEASURING LINEAR OR ANGULAR SPEED, ACCELERATION, DECELERATION, OR SHOCK; INDICATING PRESENCE, ABSENCE, OR DIRECTION, OF MOVEMENT
    • G01P3/00 Measuring linear or angular speed; Measuring differences of linear or angular speeds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles

Abstract

The embodiment of the application discloses a driving assistance method, a driving assistance device, a storage medium, and an electronic device, wherein the driving assistance method comprises the following steps: acquiring a captured image and at least two pieces of bounding box prediction information in the image; acquiring the pixel matrix corresponding to each piece of bounding box prediction information; extracting image features from the pixel matrix according to a preset feature algorithm, and inputting the image features into a prediction algorithm to obtain a target probability value; when the target probability value is greater than a probability threshold, determining that the image comprises a preceding vehicle image, and obtaining the preceding vehicle position information according to the image; and obtaining the speed of and distance to the preceding vehicle according to the obtained current speed and the preceding vehicle position information, and determining a following strategy according to the preceding vehicle speed and distance. The accuracy of preceding vehicle identification can thereby be improved.

Description

Driving assistance method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of electronic technologies, and in particular, to a driving assistance method, a driving assistance device, a storage medium, and an electronic device.
Background
With the development of electronic technology, electronic devices such as smartphones have become more and more intelligent. For example, a user may implement navigation functionality with an electronic device. In some related technologies, electronic devices may also perform preceding vehicle recognition and provide a following strategy according to the recognition result. However, electronic devices in the related art identify the preceding vehicle inaccurately.
Disclosure of Invention
The embodiment of the application provides a driving assistance method, a driving assistance device, a storage medium, and an electronic device, which can improve the accuracy of preceding vehicle identification.
In a first aspect, an embodiment of the present application provides a driving assistance method, which includes:
acquiring a captured image, wherein the image comprises at least two pieces of bounding box prediction information of a preceding vehicle;
acquiring the pixel matrix corresponding to each piece of bounding box prediction information;
extracting image features from the pixel matrix according to a preset feature algorithm, and inputting the image features into a prediction algorithm to obtain a target probability value;
when the target probability value is greater than a probability threshold, determining that the image comprises a preceding vehicle image, and obtaining the preceding vehicle position information according to the image;
and obtaining the speed of and distance to the preceding vehicle according to the obtained current speed and the preceding vehicle position information, and determining a following strategy according to the preceding vehicle speed and distance.
In a second aspect, an embodiment of the present application further provides a driving assistance device, which includes:
a bounding box prediction information acquisition module, configured to acquire a captured image, wherein the image comprises at least two pieces of bounding box prediction information of a preceding vehicle;
a pixel matrix acquisition module, configured to acquire the pixel matrix corresponding to each piece of bounding box prediction information;
a target probability acquisition module, configured to extract image features from the pixel matrix according to a preset feature algorithm and input the image features into a prediction algorithm to obtain a target probability value;
a preceding vehicle position information acquisition module, configured to determine that the image comprises a preceding vehicle image when the target probability value is greater than a probability threshold, and obtain the preceding vehicle position information according to the image;
and a determination module, configured to obtain the speed of and distance to the preceding vehicle according to the obtained current speed and the preceding vehicle position information, and determine a following strategy according to the preceding vehicle speed and distance.
In a third aspect, embodiments of the present application further provide a storage medium having a computer program stored thereon, which, when run on a computer, causes the computer to perform the steps of the driving assistance method described above.
In a fourth aspect, an embodiment of the present application further provides an electronic device, where the electronic device includes a processor and a memory, where the memory stores a computer program, and the processor is configured to execute the steps of the driving assistance method by calling the computer program stored in the memory.
According to the driving assistance method, the driving assistance device, the storage medium, and the electronic device, at least two pieces of bounding box prediction information are obtained first; the pixel matrix corresponding to each piece of bounding box prediction information is then obtained; image features are extracted from the pixel matrix according to a preset feature algorithm and input into a prediction algorithm to obtain a target probability value; when the target probability value is greater than a probability threshold, it is determined that the image comprises a preceding vehicle image, and the preceding vehicle position information is obtained according to the image; and the speed of and distance to the preceding vehicle are obtained according to the obtained current speed and the preceding vehicle position information, and a following strategy is determined according to the preceding vehicle speed and distance. The accuracy of preceding vehicle identification can thereby be improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments will be briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 is a schematic view of an application scenario of a driving assistance method according to an embodiment of the present application.
Fig. 2 is a schematic flow chart of a driving assistance method according to an embodiment of the present application.
Fig. 3 is a schematic view of another application scenario of the driving assistance method according to the embodiment of the present application.
Fig. 4 is a schematic structural diagram of a driving assistance device according to an embodiment of the present application.
Fig. 5 is a schematic structural diagram of a first electronic device according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of a second electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without inventive step, are within the scope of the present application.
Referring to fig. 1, fig. 1 is a schematic view of an application scenario of the driving assistance method according to an embodiment of the present application. The driving assistance method is applied to an electronic device. The electronic device may be a smartphone, a tablet, a gaming device, an Augmented Reality (AR) device, an automobile, a data storage device, an audio playback device, a video playback device, a notebook, a desktop computing device, or a wearable device such as a watch, glasses, a helmet, an electronic bracelet, an electronic necklace, or electronic clothing. A panoramic perception architecture is provided in the electronic device. The panoramic perception architecture is the integration of hardware and software that implements the driving assistance method in the electronic device, and it comprises an information perception layer, a data processing layer, a feature extraction layer, a scene modeling layer, and an intelligent service layer.
The information perception layer is used to acquire information about the electronic device itself and/or information from the external environment. The information perception layer may comprise a plurality of sensors, for example a distance sensor, a magnetic field sensor, a light sensor, an acceleration sensor, a fingerprint sensor, a Hall sensor, a position sensor, a gyroscope, an inertial sensor, an attitude sensor, a barometer, and a heart rate sensor.
Here, the distance sensor may be used to detect the distance between the electronic device and an external object. The magnetic field sensor may be used to detect magnetic field information of the environment in which the electronic device is located. The light sensor may be used to detect light information of that environment. The acceleration sensor may be used to detect acceleration data of the electronic device. The fingerprint sensor may be used to collect fingerprint information of a user. The Hall sensor is a magnetic field sensor based on the Hall effect and can be used for automatic control of the electronic device. The position sensor may be used to detect the current geographic location of the electronic device. The gyroscope may be used to detect the angular velocity of the electronic device in various directions. The inertial sensor may be used to detect motion data of the electronic device. The attitude sensor may be used to sense attitude information of the electronic device. The barometer may be used to detect the air pressure of the environment in which the electronic device is located. The heart rate sensor may be used to detect heart rate information of the user.
The data processing layer is used to process the data acquired by the information perception layer. For example, the data processing layer may perform data cleaning, data integration, data transformation, data reduction, and the like on the acquired data.
The data cleaning refers to cleaning a large amount of data acquired by the information sensing layer to remove invalid data and repeated data. The data integration refers to integrating a plurality of single-dimensional data acquired by the information perception layer into a higher or more abstract dimension so as to comprehensively process the data of the plurality of single dimensions. The data transformation refers to performing data type conversion or format conversion on the data acquired by the information sensing layer so that the transformed data can meet the processing requirement. The data reduction means that the data volume is reduced to the maximum extent on the premise of keeping the original appearance of the data as much as possible.
The feature extraction layer is used to extract the features contained in the data processed by the data processing layer. The extracted features may reflect the state of the electronic device itself, the state of the user, or the state of the environment in which the electronic device is located.
The feature extraction layer may extract features or process the extracted features by methods such as filter methods, wrapper methods, or ensemble methods.
Filter methods filter the extracted features to remove redundant feature data. Wrapper methods screen the extracted features against a model. Ensemble methods integrate multiple feature extraction methods to construct a more efficient and more accurate feature extraction method.
The scene modeling layer is used for building a model according to the features extracted by the feature extraction layer, and the obtained model can be used for representing the state of the electronic equipment, the state of a user, the environment state and the like. For example, the scenario modeling layer may construct a key value model, a pattern identification model, a graph model, an entity relation model, an object-oriented model, and the like according to the features extracted by the feature extraction layer.
The intelligent service layer is used for providing intelligent services for the user according to the model constructed by the scene modeling layer. For example, the intelligent service layer can provide basic application services for users, perform system intelligent optimization for electronic equipment, and provide personalized intelligent services for users.
In addition, the panoramic perception architecture may further comprise a number of algorithms, each of which can be used to analyse and process data, and which together form an algorithm library. For example, the algorithm library may include Markov models, latent Dirichlet allocation, Bayesian classification, support vector machines, K-means clustering, K-nearest neighbours, conditional random fields, residual networks, long short-term memory networks, convolutional neural networks, and recurrent neural networks.
The embodiment of the application provides a driving assistance method that can be applied to an electronic device. The electronic device may be a smartphone, a tablet, a gaming device, an Augmented Reality (AR) device, an automobile, a data storage device, an audio playback device, a video playback device, a notebook, a desktop computing device, or a wearable device such as a watch, glasses, a helmet, an electronic bracelet, an electronic necklace, or electronic clothing.
Referring to fig. 2, fig. 2 is a schematic flow chart of a driving assistance method according to an embodiment of the present application. The driving assistance method comprises the following steps:
101, a captured image and at least two pieces of bounding box prediction information in the image are acquired.
In some embodiments, the image may be captured in real time by a camera shooting through the windshield in front of the user's vehicle. The camera may be fixed to the windscreen or placed at the front of the vehicle. For example, the camera may be placed on a base fixed within the vehicle, such as at a location within the cab.
The camera may be a vehicle-mounted camera or a detachable camera. A detachable camera may be an independent camera directly connected to the vehicle-mounted system, or part of an electronic device such as a navigator or a smartphone. In the following, the camera is exemplified as a detachable camera belonging to an electronic device such as a smartphone.
After the electronic device acquires the captured image, at least two pieces of bounding box prediction information in the image are acquired, where each piece of bounding box prediction information describes a bounding box that may contain a preceding vehicle image. Illustratively, a bounding box algorithm may be used to obtain the at least two pieces of bounding box prediction information.
In some embodiments, the bounding box algorithm used to obtain the bounding box prediction information needs to be built (trained) in advance. The bounding box algorithm serves to obtain, from an image captured by the vehicle-mounted camera, bounding box prediction information for the vehicle positions in the image. Specifically, for each image detected from the vehicle-mounted camera, the predefined bounding box algorithm directly yields the corresponding candidate bounding box positions in the image as the input of the next step.
The bounding box algorithm may be generated as follows. In images containing a preceding vehicle collected by a camera (for example, about 30,000 images may be used), the bounding box position of the preceding vehicle is manually annotated, i.e., the coordinate information [x, y, w, h] of the corresponding vehicle in the image, where x and y are the coordinates of the upper-left corner of the vehicle, and w and h are the pixel offsets of the lower-right corner of the vehicle position from the upper-left corner. A clustering algorithm (e.g., k-means) or a classification algorithm (e.g., kNN) is then applied to this bounding box information to obtain the positions of approximately 200 bounding boxes. The resulting bounding box positions collectively represent the roughly 200 positions at which a preceding vehicle is most likely to appear in the image collected by the camera, as in the sketch below.
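A minimal sketch of this offline generation step follows, assuming the annotated boxes are already available as a list of [x, y, w, h] values; the function name, the choice of scikit-learn's KMeans, and the n_boxes parameter are illustrative, not the patent's fixed implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def generate_candidate_boxes(annotated_boxes, n_boxes=200):
    """Cluster manually annotated [x, y, w, h] boxes into ~200 candidates."""
    boxes = np.asarray(annotated_boxes, dtype=np.float32)  # shape (N, 4)
    kmeans = KMeans(n_clusters=n_boxes, n_init=10).fit(boxes)
    # Each cluster centre stands for one of the positions at which a
    # preceding vehicle is most likely to appear in the camera image.
    return kmeans.cluster_centers_
```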
102, the pixel matrix corresponding to each piece of bounding box prediction information is acquired.
Each piece of bounding box prediction information specifies one bounding box, and all the pixel information inside that bounding box forms a pixel matrix. Thus, at least two pixel matrices are derived from the at least two pieces of bounding box prediction information, as in the sketch below.
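A sketch of this cropping step, assuming the image is an H x W x 3 array and each piece of prediction information is an [x, y, w, h] box as defined above; the names are illustrative.

```python
import numpy as np

def extract_pixel_matrices(image, candidate_boxes):
    """Crop one pixel matrix per piece of bounding box prediction info."""
    matrices = []
    for x, y, w, h in candidate_boxes:
        x, y, w, h = int(x), int(y), int(w), int(h)
        matrices.append(image[y:y + h, x:x + w])  # pixels inside this box
    return matrices  # at least two matrices, one per candidate box
```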
103, image features are extracted from the pixel matrix according to a preset feature algorithm, and the image features are input into a prediction algorithm to obtain a target probability value.
The electronic device extracts image features from each pixel matrix according to a preset feature algorithm; all the extracted image features are then input into a prediction algorithm, which obtains a target probability value from the input information.
The preset feature algorithm may be the histogram of oriented gradients (HOG) feature algorithm, the HAAR feature algorithm, principal component analysis (PCA), or the like. The prediction algorithm may be a support vector machine (SVM) algorithm, a neural network algorithm, or the like.
In some embodiments, the electronic device extracting image features from the pixel matrix according to a preset feature algorithm and inputting the image features into a prediction algorithm to obtain the target probability value specifically includes:
the preset feature algorithm comprises a first feature algorithm and a second feature algorithm; a first image feature is extracted from the image according to the first feature algorithm, and a second image feature is extracted from the image according to the second feature algorithm;
the prediction algorithm comprises a first prediction algorithm and a second prediction algorithm; the first image feature is input into the first prediction algorithm to obtain a first probability value, and the second image feature is input into the second prediction algorithm to obtain a second probability value;
and the first probability value and the second probability value are fused to obtain the target probability value, as in the generic sketch below.
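The following generic sketch shows this two-branch scheme; the concrete feature and prediction algorithms are left as parameters, since the embodiment deliberately does not fix them here, and all names are illustrative.

```python
def two_branch_probability(pixel_matrix,
                           feature_algo_1, feature_algo_2,
                           predictor_1, predictor_2,
                           fuse):
    """Two feature algorithms feed two prediction algorithms; the two
    probability values are fused into one target probability value."""
    f1 = feature_algo_1(pixel_matrix)   # first image feature
    f2 = feature_algo_2(pixel_matrix)   # second image feature
    p1 = predictor_1(f1)                # first probability value
    p2 = predictor_2(f2)                # second probability value
    return fuse(p1, p2)                 # target probability value
```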
In some embodiments, the electronic device extracting image features from the pixel matrix according to a preset feature algorithm and inputting the image features into a prediction algorithm to obtain the target probability value specifically includes:
the electronic device acquiring a first sub-image feature of the pixel matrix according to the histogram of oriented gradients (HOG) feature algorithm, and acquiring a second sub-image feature of the pixel matrix according to the HAAR algorithm;
merging the first sub-image feature and the second sub-image feature to obtain a first image feature;
extracting a second image feature from the pixel matrix according to principal component analysis (PCA);
inputting the first image feature into a support vector machine (SVM) algorithm to obtain a first probability value;
inputting the second image feature into a neural network algorithm to obtain a second probability value;
and fusing the first probability value and the second probability value to obtain the target probability value.
Specifically, the position pixel information corresponding to each bounding box position obtained in step 101 is extracted from the image and stored separately as a tensor A = {A_1, A_2, ..., A_n}, where each A_i represents the pixel matrix corresponding to one bounding box.
For each pixel matrix in the tensor A, the histogram of oriented gradients operator is used to compute its features, giving the first sub-image feature A_HOG.
Likewise, the HAAR operator is applied to each pixel matrix in the tensor A to compute its features, giving the second sub-image feature A_HAAR.
The first sub-image feature A_HOG and the second sub-image feature A_HAAR are merged; specifically, matrix superposition may be used, yielding a new feature vector, the first image feature A_COMB.
A principal component analysis algorithm is used to extract the principal components of the pixel channels of each pixel matrix in the tensor A as the second image feature A_PCA.
The first image feature is input into the support vector machine algorithm to obtain a first probability value. Specifically, an SVM vehicle classifier is used to obtain the first probability value P_svm of whether a preceding vehicle is present.
The second image feature is input into the neural network algorithm to obtain a second probability value. Specifically, the neural network classifier is used to obtain the second probability value P_dnn of whether a preceding vehicle is present.
It should be noted that, in the training stage of the support vector machine algorithm and the neural network algorithm, single vehicle-rear images are used (containing only the rear of the vehicle and the shaded area between the tires and the vehicle body, with no other background). In addition, samples of special vehicles of different shapes, such as modified vehicles, motorcycles, electric scooters, trucks, and cement mixers, need to be added in a balanced manner.
In some embodiments, since a single classifier cannot fully guarantee accuracy, using multiple classifiers can improve the accuracy of vehicle identification. The first probability value is therefore given a weight w_1 and the second probability value a weight w_2; each classifier's output score is multiplied by its weight and the results are summed to determine whether the image belongs to the preceding vehicle image: R = w_1 × P_svm + w_2 × P_dnn. For example, the weight w_1 may be 0.6 and the weight w_2 may be 0.4, although other values are of course also possible. A minimal sketch follows.
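A minimal sketch of this weighted fusion, using the example weights w_1 = 0.6 and w_2 = 0.4 from the text:

```python
def fuse_probabilities(p_svm, p_dnn, w1=0.6, w2=0.4):
    """R = w1 * P_svm + w2 * P_dnn."""
    return w1 * p_svm + w2 * p_dnn

# Example: R = 0.6 * 0.9 + 0.4 * 0.7 = 0.82, which exceeds a probability
# threshold of 0.7, so the candidate is judged a preceding vehicle image.
```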
In some embodiments, extracting image features from the pixel matrix according to a preset feature algorithm and inputting the image features into a prediction algorithm to obtain a target probability value specifically includes:
acquiring the image features of the pixel matrix according to the histogram of oriented gradients feature algorithm, or acquiring the image features of the pixel matrix according to the HAAR algorithm;
and inputting the image features into a support vector machine algorithm to obtain the target probability value.
In some embodiments, extracting image features from the pixel matrix according to a preset feature algorithm and inputting the image features into a prediction algorithm to obtain a target probability value specifically includes:
acquiring a first sub-image feature of the pixel matrix according to the histogram of oriented gradients feature algorithm, and acquiring a second sub-image feature of the pixel matrix according to the HAAR algorithm;
performing matrix superposition on the first sub-image feature and the second sub-image feature to obtain the image features;
and inputting the image features into a support vector machine algorithm to obtain the target probability value, as in the sketch below.
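A sketch of this variant, stacking HOG and HAAR features into a single vector for one SVM; scikit-image's hog and haar_like_feature are assumed stand-ins for the unspecified operators, and the 24 x 24 patch size is an illustrative choice to keep the HAAR feature count manageable.

```python
import numpy as np
from skimage.feature import hog, haar_like_feature
from skimage.transform import integral_image, resize

def combined_feature(gray_patch, size=(24, 24)):
    """HOG and HAAR sub-image features, superposed into one vector."""
    patch = resize(gray_patch, size)                # normalise patch size
    f_hog = hog(patch, pixels_per_cell=(8, 8),      # first sub-image feature
                cells_per_block=(2, 2))
    ii = integral_image(patch)
    f_haar = haar_like_feature(ii, 0, 0,            # second sub-image feature
                               size[1], size[0],
                               feature_type='type-2-x')
    return np.concatenate([f_hog, f_haar])          # stacked feature vector
```

The stacked vector would then be scored by an SVM classifier (for example scikit-learn's SVC with probability estimates) to yield the target probability value.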
In some embodiments, extracting image features from the pixel matrix according to a preset feature algorithm, and inputting the image features into a prediction algorithm to obtain a target probability value specifically includes:
extracting image features from the pixel matrix according to a principal component analysis algorithm;
and inputting the image features into a neural network algorithm to obtain the target probability value.
104, when the target probability value is greater than the probability threshold, it is determined that the image comprises a preceding vehicle image, and the preceding vehicle position information is obtained according to the image.
When the target probability value is greater than the probability threshold, it is determined that the image comprises a preceding vehicle image; for example, if the target probability value is 0.8 and the probability threshold is 0.7, the image is judged to contain a vehicle image. The preceding vehicle position information can then be obtained from the image. For example, a coordinate transformation algorithm obtains an affine transformation matrix from calibration distance information calibrated in advance; the coordinate position of the bounding box prediction information is multiplied by the affine transformation matrix to obtain the offset of the real-world coordinates from the vehicle carrying the terminal device, determining the actual coordinate distance of the preceding vehicle, as in the sketch below.
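A sketch of this coordinate conversion, assuming a pre-calibrated 3 x 3 transformation matrix M obtained from the calibration distance information; the choice of the box's bottom centre as the ground contact point is an illustrative assumption.

```python
import numpy as np

def box_to_world_offset(box, M):
    """Map a bounding box [x, y, w, h] in the image to a real-world
    offset from the vehicle carrying the terminal device."""
    x, y, w, h = box
    p = np.array([x + w / 2.0, y + h, 1.0])  # bottom centre, homogeneous
    q = M @ p
    # For a pure affine matrix q[2] == 1; the division also covers the
    # more general homography case.
    return q[:2] / q[2]
```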
In some embodiments, obtaining the preceding vehicle position information from the image includes:
associating each piece of bounding box prediction information with its corresponding target probability value;
and obtaining the target bounding box prediction information according to a non-maximum suppression algorithm, and determining the preceding vehicle position information according to the target bounding box information.
That is, using the bounding box information and the corresponding target probability values of the preceding vehicle judgment results, a non-maximum suppression (NMS) algorithm is applied to the candidate preceding vehicle positions to obtain the final vehicle position information, as in the sketch below.
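A minimal NMS sketch, assuming boxes given as [x, y, w, h] with their fused target probability values as scores; the IoU threshold of 0.5 is an illustrative choice.

```python
import numpy as np

def non_maximum_suppression(boxes, scores, iou_threshold=0.5):
    """Keep the highest-scoring boxes, dropping overlapping candidates."""
    boxes = np.asarray(boxes, dtype=np.float32)
    order = np.argsort(scores)[::-1]                  # best score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        # Intersection of box i with all remaining boxes.
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 0] + boxes[i, 2], boxes[rest, 0] + boxes[rest, 2])
        y2 = np.minimum(boxes[i, 1] + boxes[i, 3], boxes[rest, 1] + boxes[rest, 3])
        inter = np.maximum(0.0, x2 - x1) * np.maximum(0.0, y2 - y1)
        union = boxes[i, 2] * boxes[i, 3] + boxes[rest, 2] * boxes[rest, 3] - inter
        order = rest[inter / union <= iou_threshold]  # drop large overlaps
    return keep  # indices of the retained target bounding boxes
```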
105, the speed of and distance to the preceding vehicle are obtained according to the obtained current speed and the preceding vehicle position information, and a following strategy is determined according to the preceding vehicle speed and distance.
The current speed can be obtained directly from the control system of the vehicle. Panoramic perception information of the electronic device may also be used: the movement information of the vehicle (including its current speed) may be determined from the accelerometer, GPS positioning information, gyroscope, and the like of the electronic device. The speed of and distance to the preceding vehicle are then determined from the vehicle's movement information and the previously acquired preceding vehicle position information, where the preceding vehicle distance is the distance between the preceding vehicle and the vehicle carrying the camera. A following strategy is determined from the obtained preceding vehicle speed and distance. For example, when the current vehicle speed is above 80 km/h and the preceding vehicle distance is greater than the safe distance, the vehicle can be controlled to follow automatically, or the user can be prompted to follow. When the current speed is above 60 km/h and the preceding vehicle distance is greater than the safe distance, the vehicle can be controlled to follow and then overtake, or the user can be advised to overtake. Automatic overtaking or an overtaking reminder can be based on lane information from other cameras; for example, if the other cameras observe that the left lane has no vehicle within a safe distance (e.g., no vehicle within 300 meters ahead or behind), automatic overtaking or an overtaking reminder is triggered. A decision sketch follows.
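A hedged sketch of this decision logic; the 80 km/h and 60 km/h thresholds and the clear-lane check follow the examples above, while the function name and returned labels are illustrative.

```python
def following_strategy(current_speed_kmh, lead_distance_m, safe_distance_m,
                       left_lane_clear=False):
    """Map current speed and preceding vehicle distance to a strategy."""
    if lead_distance_m > safe_distance_m:
        if current_speed_kmh > 80:
            return "auto-follow"      # or prompt the user to follow
        if current_speed_kmh > 60:
            # Overtaking is only suggested when the other cameras report
            # the left lane clear within the safe range (e.g. 300 m).
            return "overtake" if left_lane_clear else "follow"
    return "keep-distance"            # fall back to maintaining distance
```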
In this embodiment, the preceding vehicle is detected with the rear camera of an electronic device such as a smartphone, and the current speed is obtained from a speed sensor, an acceleration sensor, GPS information, or the vehicle's central control system; the speed of the driven vehicle can then be controlled and a reasonable following state determined from the current speed and the detected preceding vehicle information. Detection works across vehicle makes and types, including modified vehicles, motorcycles, electric scooters, trucks, and cement mixers; the heavy detection cost of deep learning based approaches can largely be avoided; and the device can be conveniently adjusted and moved, making the scheme easy to deploy.
Referring to fig. 3, fig. 3 is a schematic view of another application scenario of the driving assistance method according to the embodiment of the present application. The driving assistance method specifically comprises the following steps. First, a camera of the electronic device obtains an image. A bounding box algorithm then processes the image to obtain a plurality of pieces of bounding box prediction information (e.g., 200), each corresponding to one pixel matrix. Next, the first sub-image feature of each pixel matrix is acquired according to the histogram of oriented gradients (HOG) feature algorithm, the second sub-image feature of each pixel matrix is acquired according to the HAAR algorithm, and the two are merged to obtain the first image feature; the second image feature is extracted from the pixel matrix according to principal component analysis (PCA). The first image feature and the second image feature are then input into a discrimination module comprising an SVM classifier and a neural network classifier. The first image feature is input into the SVM classifier, which outputs a first probability P_svm of whether a preceding vehicle is present. The second image feature is input into the neural network classifier, which outputs a second probability P_dnn of whether a preceding vehicle is present. The discrimination model is trained separately on single vehicle-rear images (containing only the rear tires of the vehicle and the shaded area between the tires and the vehicle body, with no other background), with samples of special vehicles of different shapes, such as modified vehicles, motorcycles, electric scooters, trucks, and cement mixers, added in a balanced manner. Since one classifier cannot fully guarantee accuracy, using multiple classifiers improves vehicle identification: the first probability P_svm is given a weight w_1 and the second probability P_dnn a weight w_2, each output classification score is multiplied by its weight, and the results are added to obtain the target probability R = w_1 × P_svm + w_2 × P_dnn, which determines whether the image finally belongs to the preceding vehicle image.
The target probability R is input into a vehicle position information module. Using the bounding boxes and the corresponding preceding vehicle judgment results (target probabilities R), the vehicle position information module applies a non-maximum suppression (NMS) algorithm to the candidate preceding vehicle positions to obtain the final vehicle position information.
The vehicle position information is input into a coordinate transformation module. The coordinate transformation module obtains the affine transformation matrix from the calibration distance information calibrated in advance, multiplies the coordinate position of the bounding box by the affine transformation matrix to obtain the offset of the real-world coordinates from the vehicle carrying the terminal device, and thereby determines the actual coordinate distance of the preceding vehicle.
In some embodiments, the electronic device uses the panoramic perception architecture to obtain the current vehicle speed and distance traveled, from which the distance and speed of the preceding vehicle are inferred. Specifically, the movement information of the electronic device (i.e., of the vehicle) can be determined from the acceleration sensor, GPS positioning information, and gyroscope, and from it the moving distance and speed of the preceding vehicle. Finally, a following strategy is determined according to the moving distance and speed of the preceding vehicle and fed back to the central control system of the vehicle for following control, or reported to the user (for example, by voice playback or video display).
Illustratively, the electronic device uses the information perception layer of the panoramic perception architecture to acquire information about the user's electronic device (for example, operation information, user behavior information, information collected by each sensor, device state information, display content, download information, and the like); the data processing layer processes this information (for example, deleting invalid data and removing duplicates); the feature extraction layer extracts the required information (for example, image information collected by the image sensor) from the processed data; the scene modeling layer acquires at least two pieces of bounding box prediction information in the target information, acquires the pixel matrix corresponding to each piece of bounding box prediction information, extracts image features from the pixel matrix according to the preset feature algorithm, inputs the image features into the prediction algorithm to obtain the target probability value, and, when the target probability value is greater than the probability threshold, determines that the image comprises a preceding vehicle image and obtains the preceding vehicle position information from the image. Finally, the intelligent service layer obtains the speed of and distance to the preceding vehicle according to the obtained current speed and the preceding vehicle position information, and determines a following strategy according to the preceding vehicle speed and distance.

Referring to fig. 4, fig. 4 is a schematic structural diagram of a driving assistance device according to an embodiment of the present disclosure. The driving assistance device includes a bounding box prediction information acquisition module 301, a pixel matrix acquisition module 302, a target probability acquisition module 303, a preceding vehicle position information acquisition module 304, and a determination module 305.
The bounding box prediction information acquisition module 301 is configured to acquire the captured image, where the image includes at least two pieces of bounding box prediction information of the preceding vehicle.
The pixel matrix acquisition module 302 is configured to acquire the pixel matrix corresponding to each piece of bounding box prediction information.
The target probability acquisition module 303 is configured to extract image features from the pixel matrix according to a preset feature algorithm and input the image features into a prediction algorithm to obtain a target probability value.
The preceding vehicle position information acquisition module 304 is configured to determine that the image includes a preceding vehicle image when the target probability value is greater than the probability threshold, and obtain the preceding vehicle position information according to the image.
The determination module 305 is configured to obtain the speed of and distance to the preceding vehicle according to the obtained current speed and the preceding vehicle position information, and determine a following strategy according to the preceding vehicle speed and distance.
In some embodiments, the target probability acquisition module 303 is further configured such that the preset feature algorithm includes a first feature algorithm and a second feature algorithm; a first image feature is extracted from the image according to the first feature algorithm, and a second image feature is extracted from the image according to the second feature algorithm; the prediction algorithm includes a first prediction algorithm and a second prediction algorithm; the first image feature is input into the first prediction algorithm to obtain a first probability value, and the second image feature is input into the second prediction algorithm to obtain a second probability value; and the first probability value and the second probability value are fused to obtain the target probability value.
In some embodiments, the target probability acquisition module 303 is further configured to acquire a first sub-image feature of the pixel matrix according to the histogram of oriented gradients feature algorithm and a second sub-image feature of the pixel matrix according to the HAAR algorithm; merge the first sub-image feature and the second sub-image feature to obtain a first image feature; extract a second image feature from the pixel matrix according to a principal component analysis algorithm; input the first image feature into a support vector machine algorithm to obtain a first probability value; input the second image feature into a neural network algorithm to obtain a second probability value; and fuse the first probability value and the second probability value to obtain the target probability value.
In some embodiments, the target probability acquisition module 303 is further configured to acquire the image features of the pixel matrix according to the histogram of oriented gradients feature algorithm, or acquire the image features of the pixel matrix according to the HAAR algorithm; and input the image features into a support vector machine algorithm to obtain the target probability value.
In some embodiments, the target probability acquisition module 303 is further configured to acquire a first sub-image feature of the pixel matrix according to the histogram of oriented gradients feature algorithm and a second sub-image feature of the pixel matrix according to the HAAR algorithm; perform matrix superposition on the first sub-image feature and the second sub-image feature to obtain the image features; and input the image features into a support vector machine algorithm to obtain the target probability value.
In some embodiments, the target probability acquisition module 303 is further configured to extract image features from the pixel matrix according to a principal component analysis algorithm, and input the image features into a neural network algorithm to obtain the target probability value.
In some embodiments, the preceding vehicle position information acquisition module 304 is further configured to associate each piece of bounding box prediction information with its corresponding target probability value; obtain the target bounding box prediction information according to a non-maximum suppression algorithm; and determine the preceding vehicle position information according to the target bounding box information.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. The electronic device 600 comprises, among other things, a processor 601 and a memory 602. The processor 601 is electrically connected to the memory 602.
The processor 601 is the control center of the electronic device 600. It connects the various parts of the electronic device through various interfaces and lines, and performs the various functions of the electronic device and processes data by running or calling the computer program stored in the memory 602 and calling the data stored in the memory 602, thereby monitoring the electronic device as a whole.
In this embodiment, the processor 601 in the electronic device 600 loads instructions corresponding to the processes of one or more computer programs into the memory 602, and runs the computer program stored in the memory 602 to implement various functions along the following steps:
acquiring a captured image and at least two pieces of bounding box prediction information in the image;
acquiring the pixel matrix corresponding to each piece of bounding box prediction information;
extracting image features from the pixel matrix according to a preset feature algorithm, and inputting the image features into a prediction algorithm to obtain a target probability value;
when the target probability value is greater than the probability threshold, determining that the image comprises a preceding vehicle image, and obtaining the preceding vehicle position information according to the image;
and obtaining the speed of and distance to the preceding vehicle according to the obtained current speed and the preceding vehicle position information, and determining a following strategy according to the preceding vehicle speed and distance.
In some embodiments, when extracting image features from the pixel matrix according to a preset feature algorithm and inputting the image features into a prediction algorithm to obtain a target probability value, the processor 601 performs the following steps:
the preset feature algorithm comprises a first feature algorithm and a second feature algorithm; a first image feature is extracted from the image according to the first feature algorithm, and a second image feature is extracted from the image according to the second feature algorithm;
the prediction algorithm comprises a first prediction algorithm and a second prediction algorithm; the first image feature is input into the first prediction algorithm to obtain a first probability value, and the second image feature is input into the second prediction algorithm to obtain a second probability value;
and the first probability value and the second probability value are fused to obtain the target probability value.
in some embodiments, when extracting image features from the pixel matrix according to a preset feature algorithm and inputting the image features into a prediction algorithm to obtain a target probability value, the processor 601 performs the following steps:
acquiring a first sub-image feature of the pixel matrix according to the histogram of oriented gradients feature algorithm, and acquiring a second sub-image feature of the pixel matrix according to the HAAR algorithm;
merging the first sub-image feature and the second sub-image feature to obtain a first image feature;
extracting a second image feature from the pixel matrix according to a principal component analysis algorithm;
inputting the first image feature into a support vector machine algorithm to obtain a first probability value;
inputting the second image feature into a neural network algorithm to obtain a second probability value;
and fusing the first probability value and the second probability value to obtain the target probability value.
In some embodiments, when extracting image features from the pixel matrix according to a preset feature algorithm and inputting the image features into a prediction algorithm to obtain a target probability value, the processor 601 performs the following steps:
acquiring the image features of the pixel matrix according to the histogram of oriented gradients feature algorithm, or acquiring the image features of the pixel matrix according to the HAAR algorithm;
and inputting the image features into a support vector machine algorithm to obtain a target probability value.
In some embodiments, when extracting image features from the pixel matrix according to a preset feature algorithm and inputting the image features into a prediction algorithm to obtain a target probability value, the processor 601 performs the following steps:
acquiring a first sub-image feature of the pixel matrix according to the histogram of oriented gradients feature algorithm, and acquiring a second sub-image feature of the pixel matrix according to the HAAR algorithm;
performing matrix superposition on the first sub-image feature and the second sub-image feature to obtain the image features;
and inputting the image features into a support vector machine algorithm to obtain a target probability value.
In some embodiments, when extracting image features from the pixel matrix according to a preset feature algorithm and inputting the image features into a prediction algorithm to obtain a target probability value, the processor 601 performs the following steps:
extracting image features from the pixel matrix according to a principal component analysis algorithm;
and inputting the image features into a neural network algorithm to obtain the target probability value.
In some embodiments, when obtaining the preceding vehicle position information from the image, the processor 601 performs the following steps:
associating each piece of bounding box prediction information with its corresponding target probability value;
and obtaining the target bounding box prediction information according to a non-maximum suppression algorithm, and determining the preceding vehicle position information according to the target bounding box information.
In some embodiments, please refer to fig. 6, and fig. 6 is a second structural diagram of an electronic device according to an embodiment of the present disclosure.
The electronic device 600 further includes: a display screen 603, a control circuit 604, an input unit 605, a sensor 606, and a power supply 607. The processor 601 is electrically connected to the display screen 603, the control circuit 604, the input unit 605, the sensor 606, and the power supply 607.
The display screen 603 may be used to display information entered by or provided to the user as well as various graphical user interfaces of the electronic device, which may be comprised of images, text, icons, video, and any combination thereof.
The control circuit 604 is electrically connected to the display screen 603, and is configured to control the display screen 603 to display information.
The input unit 605 may be used to receive input numbers, character information, or user characteristic information (e.g., a fingerprint), and generate a keyboard, mouse, joystick, optical, or trackball signal input related to user setting and function control. The input unit 605 may include a fingerprint recognition module.
The sensor 606 is used to collect information of the electronic device itself or information of the user or external environment information. For example, the sensor 606 may include a plurality of sensors such as a distance sensor, a magnetic field sensor, a light sensor, an acceleration sensor, a fingerprint sensor, a hall sensor, a position sensor, a gyroscope, an inertial sensor, an attitude sensor, a barometer, a heart rate sensor, and the like.
The power supply 607 is used to power the various components of the electronic device 600. In some embodiments, the power supply 607 may be logically coupled to the processor 601 through a power management system, such that the power management system may manage charging, discharging, and power consumption management functions.
Although not shown in fig. 6, the electronic device 600 may further include a camera, a bluetooth module, and the like, which are not described in detail herein.
As can be seen from the above, an embodiment of the present application provides an electronic device whose processor performs the following steps: acquiring a captured image and at least two pieces of bounding box prediction information in the image; acquiring the pixel matrix corresponding to each piece of bounding box prediction information; extracting image features from the pixel matrix according to a preset feature algorithm, and inputting the image features into a prediction algorithm to obtain a target probability value; when the target probability value is greater than the probability threshold, determining that the image comprises a preceding vehicle image, and obtaining the preceding vehicle position information according to the image; and obtaining the speed of and distance to the preceding vehicle according to the obtained current speed and the preceding vehicle position information, and determining a following strategy according to the preceding vehicle speed and distance.
The embodiment of the present application further provides a storage medium, in which a computer program is stored, and when the computer program runs on a computer, the computer executes the driving assistance method according to any one of the above embodiments.
For example, in some embodiments, when the computer program is run on a computer, the computer performs the steps of:
acquiring a captured image and at least two pieces of bounding box prediction information in the image;
acquiring the pixel matrix corresponding to each piece of bounding box prediction information;
extracting image features from the pixel matrix according to a preset feature algorithm, and inputting the image features into a prediction algorithm to obtain a target probability value;
when the target probability value is greater than the probability threshold, determining that the image comprises a preceding vehicle image, and obtaining the preceding vehicle position information according to the image;
and obtaining the speed of and distance to the preceding vehicle according to the obtained current speed and the preceding vehicle position information, and determining a following strategy according to the preceding vehicle speed and distance.
It should be noted that, all or part of the steps in the methods of the above embodiments may be implemented by hardware related to instructions of a computer program, which may be stored in a computer-readable storage medium, which may include, but is not limited to: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
The driving assistance method, the driving assistance device, the storage medium, and the electronic device provided in the embodiments of the present application are described in detail above. The principle and the implementation of the present application are explained herein by applying specific examples, and the above description of the embodiments is only used to help understand the method and the core idea of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. A driving assistance method, characterized by comprising:
acquiring a captured image and at least two pieces of bounding box prediction information in the image;
acquiring a pixel matrix corresponding to each piece of bounding box prediction information;
extracting image features from the pixel matrix according to a preset feature algorithm, and inputting the image features into a prediction algorithm to obtain a target probability value;
when the target probability value is greater than a probability threshold, determining that the image comprises a preceding vehicle image, and obtaining preceding vehicle position information according to the image;
and obtaining the speed and distance of the preceding vehicle according to the obtained current speed and the preceding vehicle position information, and determining a following strategy according to the speed and distance of the preceding vehicle.
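The claim leaves open how the speed and distance of the preceding vehicle are computed from the position information. As a hedged illustration only, the Python sketch below uses a monocular pinhole-model range estimate and a simple time-headway following policy; the focal length, assumed vehicle width, and headway thresholds are all invented for the example.

```python
FOCAL_LENGTH_PX = 1000.0   # assumed camera focal length in pixels
CAR_WIDTH_M = 1.8          # assumed physical width of a typical car

def distance_from_box(box_width_px):
    """Pinhole-model range estimate (an assumption; the claim does not fix
    a ranging model): distance = focal_length * real_width / pixel_width."""
    return FOCAL_LENGTH_PX * CAR_WIDTH_M / box_width_px

def preceding_vehicle_speed(d_prev, d_curr, dt, own_speed):
    """Relative speed from the change in range between two frames, added to
    the own vehicle's current speed."""
    return own_speed + (d_curr - d_prev) / dt

def following_strategy(own_speed, distance, min_headway_s=2.0):
    """Toy time-headway policy: keep at least min_headway_s seconds of gap."""
    headway = distance / max(own_speed, 0.1)  # avoid division by zero
    if headway < min_headway_s:
        return "decelerate"
    if headway > 2 * min_headway_s:
        return "accelerate"
    return "hold speed"
```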
2. The driving assistance method according to claim 1, wherein the extracting image features from the pixel matrix according to a preset feature algorithm and inputting the image features into a prediction algorithm to obtain a target probability value comprises:
the preset feature algorithm comprising a first feature algorithm and a second feature algorithm, extracting a first image feature from the image according to the first feature algorithm, and extracting a second image feature from the image according to the second feature algorithm;
the prediction algorithm comprising a first prediction algorithm and a second prediction algorithm, inputting the first image feature into the first prediction algorithm to obtain a first probability value, and inputting the second image feature into the second prediction algorithm to obtain a second probability value;
and fusing the first probability value and the second probability value to obtain the target probability value.
3. The driving assistance method according to claim 2, wherein the extracting image features from the pixel matrix according to a preset feature algorithm and inputting the image features into a prediction algorithm to obtain a target probability value comprises:
acquiring a first sub-image feature of the pixel matrix according to a histogram of oriented gradients (HOG) feature algorithm, and acquiring a second sub-image feature of the pixel matrix according to a HAAR algorithm;
merging the first sub-image feature and the second sub-image feature to obtain the first image feature;
extracting the second image feature from the pixel matrix according to a principal component analysis algorithm;
inputting the first image feature into a support vector machine algorithm to obtain the first probability value;
inputting the second image feature into a neural network algorithm to obtain the second probability value;
and fusing the first probability value and the second probability value to obtain the target probability value.
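As a non-authoritative sketch of the two-branch scheme in claims 2 and 3, the following Python code extracts HOG and Haar-like sub-features for a support vector machine and PCA features for a small neural network, then fuses the two probabilities. The patch size, classifier settings, toy training data, and the weighted-average fusion rule are all assumptions; the claims do not fix them.

```python
import numpy as np
from skimage.feature import hog, haar_like_feature
from skimage.transform import integral_image
from sklearn.svm import SVC
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

PATCH = 16  # assumed size the pixel matrix is resized to

def first_image_feature(gray_patch):
    """First branch: HOG sub-feature merged with a Haar-like sub-feature."""
    hog_vec = hog(gray_patch, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2))
    ii = integral_image(gray_patch)
    haar_vec = haar_like_feature(ii, 0, 0, PATCH, PATCH,
                                 feature_type='type-2-x')
    return np.concatenate([hog_vec, haar_vec])  # merged first image feature

# Toy training data: random patches with random labels, just so the sketch
# runs end to end; a real system would train on labelled vehicle patches.
rng = np.random.default_rng(0)
patches = rng.random((40, PATCH, PATCH))
labels = rng.integers(0, 2, 40)

X1 = np.array([first_image_feature(p) for p in patches])
svm = SVC(probability=True).fit(X1, labels)           # first prediction algorithm

flat = patches.reshape(40, -1)
pca = PCA(n_components=16).fit(flat)                  # second feature algorithm
mlp = MLPClassifier(hidden_layer_sizes=(32,),         # second prediction algorithm
                    max_iter=500).fit(pca.transform(flat), labels)

def target_probability(gray_patch, w=0.5):
    """Fuse the two branch probabilities. A weighted average is one simple
    fusion rule; the claims do not specify which rule is used."""
    p1 = svm.predict_proba(first_image_feature(gray_patch)[None])[0, 1]
    p2 = mlp.predict_proba(pca.transform(gray_patch.reshape(1, -1)))[0, 1]
    return w * p1 + (1 - w) * p2
```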
4. The driving assistance method according to claim 1, wherein the extracting image features from the pixel matrix according to a preset feature algorithm and inputting the image features into a prediction algorithm to obtain a target probability value comprises:
acquiring the image features of the pixel matrix according to a histogram of oriented gradients (HOG) feature algorithm, or acquiring the image features of the pixel matrix according to a HAAR algorithm;
and inputting the image features into a support vector machine algorithm to obtain the target probability value.
5. The driving assistance method according to claim 1, wherein the extracting image features from the pixel matrix according to a preset feature algorithm and inputting the image features into a prediction algorithm to obtain a target probability value comprises:
acquiring a first sub-image feature of the pixel matrix according to a histogram of oriented gradients (HOG) feature algorithm, and acquiring a second sub-image feature of the pixel matrix according to a HAAR algorithm;
superimposing the first sub-image feature and the second sub-image feature as a matrix to obtain the image features;
and inputting the image features into a support vector machine algorithm to obtain the target probability value.
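One plausible reading of the "matrix superposition" in claim 5, in contrast to the feature merging of claim 3, is stacking the two sub-feature vectors row-wise and then flattening the result for the SVM. A tiny hedged sketch, assuming both sub-features have equal length:

```python
import numpy as np

# Hypothetical sub-feature vectors of equal length from the same pixel matrix.
hog_vec = np.random.rand(64)   # stands in for the HOG sub-image feature
haar_vec = np.random.rand(64)  # stands in for the HAAR sub-image feature

# Stack as rows of a 2 x 64 matrix, then flatten so a single SVM can consume it.
stacked = np.vstack([hog_vec, haar_vec])
svm_input = stacked.reshape(1, -1)  # shape (1, 128)
```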
6. The driving assistance method according to claim 1, wherein the extracting image features from the pixel matrix according to a preset feature algorithm and inputting the image features into a prediction algorithm to obtain a target probability value comprises:
extracting the image features from the pixel matrix according to a principal component analysis algorithm;
and inputting the image features into a neural network algorithm to obtain the target probability value.
7. The driving assistance method according to claim 1, wherein the obtaining the preceding vehicle position information according to the image comprises:
associating each piece of bounding box prediction information with its corresponding target probability value;
and obtaining target bounding box prediction information according to a non-maximum suppression algorithm, and determining the preceding vehicle position information according to the target bounding box prediction information.
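Claim 7 relies on a standard non-maximum suppression step to select the target bounding box from the scored candidates. A self-contained greedy NMS sketch in Python (the IoU threshold and the corner-coordinate box format are assumptions):

```python
import numpy as np

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: keep the highest-scoring box, discard boxes that overlap
    it beyond iou_threshold, and repeat. `boxes` is an N x 4 float array of
    (x1, y1, x2, y2); `scores` holds the associated target probability values.
    Returns the indices of the kept boxes."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the top box with the remaining boxes.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = ((boxes[order[1:], 2] - boxes[order[1:], 0]) *
                 (boxes[order[1:], 3] - boxes[order[1:], 1]))
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_threshold]
    return keep
```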
8. A driving assistance apparatus, characterized by comprising:
a bounding box prediction information acquisition module, configured to acquire a captured image, the image comprising at least two pieces of bounding box prediction information of a preceding vehicle;
a pixel matrix acquisition module, configured to acquire a pixel matrix corresponding to each piece of bounding box prediction information;
a target probability acquisition module, configured to extract image features from the pixel matrix according to a preset feature algorithm and input the image features into a prediction algorithm to obtain a target probability value;
a preceding vehicle position information acquisition module, configured to determine that the image comprises a preceding vehicle image when the target probability value is greater than a probability threshold, and to obtain preceding vehicle position information according to the image;
and a determining module, configured to obtain the speed and distance of the preceding vehicle according to the obtained current speed and the preceding vehicle position information, and to determine a following strategy according to the speed and distance of the preceding vehicle.
9. A storage medium having a computer program stored thereon, characterized in that, when the computer program runs on a computer, the computer is caused to execute the driving assistance method according to any one of claims 1 to 7.
10. An electronic device, characterized in that the electronic device comprises a processor and a memory, the memory storing a computer program, and the processor being configured to execute the driving assistance method according to any one of claims 1 to 7 by invoking the computer program stored in the memory.
CN201910282449.4A 2019-04-09 2019-04-09 Driving assistance method and device, storage medium and electronic equipment Pending CN111797659A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910282449.4A CN111797659A (en) 2019-04-09 2019-04-09 Driving assistance method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN111797659A true CN111797659A (en) 2020-10-20

Family

ID=72805365

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910282449.4A Pending CN111797659A (en) 2019-04-09 2019-04-09 Driving assistance method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111797659A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106203267A (en) * 2016-06-28 2016-12-07 成都之达科技有限公司 Vehicle collision avoidance method based on machine vision
CN107563256A (en) * 2016-06-30 2018-01-09 北京旷视科技有限公司 Aid in driving information production method and device, DAS (Driver Assistant System)
CN108248508A (en) * 2016-12-29 2018-07-06 乐视汽车(北京)有限公司 Driving safety display methods, system, medium and electronic equipment
WO2019037498A1 (en) * 2017-08-25 2019-02-28 腾讯科技(深圳)有限公司 Active tracking method, device and system
CN108182393A (en) * 2017-12-22 2018-06-19 上海信耀电子有限公司 A kind of automobile and its front truck tracking and system of application
CN109284757A (en) * 2018-08-31 2019-01-29 湖南星汉数智科技有限公司 A kind of licence plate recognition method, device, computer installation and computer readable storage medium
CN109409288A (en) * 2018-10-25 2019-03-01 北京市商汤科技开发有限公司 Image processing method, device, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ROSS GIRSHICK et al., "Rich feature hierarchies for accurate object detection and semantic segmentation", arXiv, 31 October 2014, pages 1-21 *
MENG Ke; WU Chaozhong; CHEN Zhijun; LYU Nengchao; DENG Chao; LIU Gang: "Human-vehicle collision risk identification and intelligent vehicle control system", Journal of Transport Information and Safety, no. 06

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113433339A (en) * 2021-06-17 2021-09-24 武汉唯理科技有限公司 Speed measuring method and system based on double cameras, computer equipment and readable medium
CN113433339B (en) * 2021-06-17 2023-09-08 武汉唯理科技有限公司 Speed measuring method and system based on double cameras, computer equipment and readable medium

Similar Documents

Publication Title
US10417816B2 (en) System and method for digital environment reconstruction
CN110136199B (en) Camera-based vehicle positioning and mapping method and device
EP3961485A1 (en) Image processing method, apparatus and device, and storage medium
CN111797657A (en) Vehicle peripheral obstacle detection method, device, storage medium, and electronic apparatus
Gu et al. Intelligent driving data recorder in smartphone using deep neural network-based speedometer and scene understanding
CN113591872A (en) Data processing system, object detection method and device
WO2023185354A1 (en) Real location navigation method and apparatus, and device, storage medium and program product
CN111798521B (en) Calibration method and device, storage medium and electronic equipment
CN111192341A (en) Method and device for generating high-precision map, automatic driving equipment and storage medium
CN111797302A (en) Model processing method and device, storage medium and electronic equipment
CN113205515B (en) Target detection method, device and computer storage medium
CN111797659A (en) Driving assistance method and device, storage medium and electronic equipment
CN116012822B (en) Fatigue driving identification method and device and electronic equipment
CN116823884A (en) Multi-target tracking method, system, computer equipment and storage medium
CN112818979A (en) Text recognition method, device, equipment and storage medium
CN111796663B (en) Scene recognition model updating method and device, storage medium and electronic equipment
CN111797656B (en) Face key point detection method and device, storage medium and electronic equipment
CN111797658A (en) Lane line recognition method and device, storage medium and electronic device
CN111797869A (en) Model training method and device, storage medium and electronic equipment
US20220245829A1 (en) Movement status learning apparatus, movement status recognition apparatus, model learning method, movement status recognition method and program
CN111797875B (en) Scene modeling method and device, storage medium and electronic equipment
CN113705279B (en) Method and device for identifying position of target object
CN110031654B (en) Posture identifying method and recording medium
CN111797868A (en) Scene recognition model modeling method and device, storage medium and electronic equipment
WO2020073270A1 (en) Snapshot image of traffic scenario

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination