CN111340880A - Method and apparatus for generating a predictive model - Google Patents

Method and apparatus for generating a predictive model

Info

Publication number
CN111340880A
CN111340880A (application CN202010097489.4A)
Authority
CN
China
Prior art keywords
vehicle
sample
angle
value
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010097489.4A
Other languages
Chinese (zh)
Other versions
CN111340880B (en)
Inventor
蒋旻悦
谭啸
孙昊
文石磊
章宏武
丁二锐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010097489.4A priority Critical patent/CN111340880B/en
Publication of CN111340880A publication Critical patent/CN111340880A/en
Application granted granted Critical
Publication of CN111340880B publication Critical patent/CN111340880B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75Determining position or orientation of objects or cameras using feature-based methods involving models
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present disclosure disclose methods and apparatus for generating a predictive model. One embodiment of the method comprises: acquiring a sample set, wherein a sample comprises a sample image shot by a camera device and a sample result, the sample result represents an included angle between a target direction corresponding to a vehicle in the sample image and the driving direction of the vehicle, and the target direction corresponding to the vehicle is the direction of the connecting line between the actual position of the vehicle and the actual position of the camera device; selecting samples from the sample set, and executing the following training steps: inputting a sample image in a selected sample into an initial model to obtain a prediction result corresponding to the sample image; determining the value of a loss function according to the obtained prediction result and the sample result in the selected sample; and, in response to determining according to the value of the loss function that training of the initial model is complete, determining the initial model as the predictive model. This embodiment enables a predictive model to be trained, so that the driving direction of a vehicle can be determined using the predictive model.

Description

Method and apparatus for generating a predictive model
Technical Field
Embodiments of the present disclosure relate to the field of computer technologies, and in particular, to a method and an apparatus for generating a prediction model.
Background
Unmanned vehicles are also known as driverless cars, smart cars, wheeled mobile robots, and the like. At present, an unmanned vehicle relies mainly on an in-vehicle, computer-based intelligent driving system (such as a vehicle-mounted sensing system) to sense the environment around the vehicle, and controls the steering, speed, and the like of the vehicle according to the roads, obstacles, and other information obtained by sensing, so that the vehicle can travel on the road. How to ensure the safety and reliability of unmanned driving is one of the main research directions in the field of unmanned vehicle research.
During the driving process of the unmanned vehicle, there are usually many other vehicles on the road. Therefore, if the driving directions of other vehicles around can be accurately judged, the unmanned vehicle can better plan the driving route of the unmanned vehicle so as to ensure the safe driving of the unmanned vehicle.
Disclosure of Invention
Embodiments of the present disclosure propose methods and apparatuses for generating a predictive model.
In a first aspect, an embodiment of the present disclosure provides a method for generating a prediction model, the method including: acquiring a sample set, wherein samples in the sample set comprise sample images shot by a camera device and sample results corresponding to the sample images, the sample results are used for representing an included angle between a target direction corresponding to a vehicle in the sample images and a driving direction of the vehicle, and the target direction corresponding to the vehicle is a direction of a connecting line between an actual position of the vehicle and an actual position of the camera device; selecting samples from the sample set, and performing the following training steps: inputting a sample image in the selected sample into the initial model to obtain a prediction result corresponding to the sample image; determining the value of the loss function according to the obtained prediction result and the sample result in the selected sample; in response to determining that the initial model training is complete according to the value of the loss function, the initial model is determined to be a predictive model.
In some embodiments, the training step further comprises: in response to determining, according to the value of the loss function, that the initial model has not been completely trained, adjusting parameters of the initial model, reselecting samples from the sample set, and continuing to perform the training step using the adjusted initial model as the initial model.
In some embodiments, the sample result includes an angle interval identifier and a relative angle, where the angle interval identifier is used to indicate an angle interval in which an included angle between a target direction corresponding to the vehicle and a driving direction of the vehicle is located, and the relative angle is used to represent a position of the included angle between the target direction corresponding to the vehicle and the driving direction of the vehicle in the angle interval in which the included angle is located, where the angle interval is obtained by dividing the omnidirectional angle by a preset angle unit.
In some embodiments, the relative angle is used to characterize the difference between the angle between the corresponding target direction of the vehicle and the driving direction of the vehicle and the half angle of the angle interval in which it is located.
In some embodiments, the sample results include sine values and/or cosine values corresponding to an angle between a target direction corresponding to the vehicle and a direction of travel of the vehicle.
In some embodiments, determining the value of the loss function based on the obtained prediction and the sample result in the selected sample comprises: obtaining a first loss value by utilizing an L1 loss function according to the sine value in the obtained prediction result and the sine value in the corresponding sample result; obtaining a second loss value by utilizing an L1 loss function according to a cosine value in the obtained prediction result and a cosine value in the corresponding sample result; determining a weight corresponding to the first loss value, wherein the weight corresponding to the first loss value is inversely proportional to a derivative of the sine function at an included angle between a target direction corresponding to the vehicle and a driving direction of the vehicle in the sample image characterized by the obtained prediction result; determining a weight corresponding to the second loss value, wherein the weight corresponding to the second loss value is inversely proportional to a derivative of the cosine function at an included angle between a target direction corresponding to the vehicle in the sample image characterized by the obtained prediction result and the driving direction of the vehicle; the value of the loss function is determined based on a weighted sum of the first loss value and the second loss value.
In a second aspect, embodiments of the present disclosure provide a method for predicting a driving direction, the method including: shooting an image by utilizing a shooting device arranged on the unmanned vehicle; inputting the image into a prediction model to obtain a prediction result for representing an included angle between a target direction corresponding to the vehicle in the image and a driving direction of the vehicle, wherein the prediction model is generated by the method described in any one of the above implementation manners of the first aspect, and the target direction corresponding to the vehicle is a direction in which a connecting line between an actual position of the vehicle and an actual position of the camera device is located; and determining the driving direction of the vehicle in the image according to the obtained prediction result.
In a third aspect, an embodiment of the present disclosure provides an apparatus for generating a prediction model, the apparatus including: the system comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is configured to acquire a sample set, samples in the sample set comprise sample images shot by a camera device and sample results corresponding to the sample images, the sample results are used for representing included angles between target directions corresponding to vehicles in the sample images and the driving directions of the vehicles, and the target directions corresponding to the vehicles are directions of connecting lines between the actual positions of the vehicles and the actual positions of the camera device; a training unit configured to select samples from a set of samples and to perform the following training steps: inputting a sample image in the selected sample into the initial model to obtain a prediction result corresponding to the sample image; determining the value of the loss function according to the obtained prediction result and the sample result in the selected sample; in response to determining that the initial model training is complete according to the value of the loss function, the initial model is determined to be a predictive model.
In some embodiments, the training step further comprises: in response to determining, according to the value of the loss function, that the initial model has not been completely trained, adjusting parameters of the initial model, reselecting samples from the sample set, and continuing to perform the training step using the adjusted initial model as the initial model.
In some embodiments, the sample result includes an angle interval identifier and a relative angle, where the angle interval identifier is used to indicate an angle interval in which an included angle between a target direction corresponding to the vehicle and a driving direction of the vehicle is located, and the relative angle is used to represent a position of the included angle between the target direction corresponding to the vehicle and the driving direction of the vehicle in the angle interval in which the included angle is located, where the angle interval is obtained by dividing the omnidirectional angle by a preset angle unit.
In some embodiments, the relative angle is used to characterize the difference between the angle between the corresponding target direction of the vehicle and the driving direction of the vehicle and the half angle of the angle interval in which it is located.
In some embodiments, the sample results include sine values and/or cosine values corresponding to an angle between a target direction corresponding to the vehicle and a direction of travel of the vehicle.
In some embodiments, the training unit is further configured to obtain a first loss value by using an L1 loss function according to a sine value in the obtained prediction result and a sine value in the corresponding sample result; obtaining a second loss value by utilizing an L1 loss function according to a cosine value in the obtained prediction result and a cosine value in the corresponding sample result; determining a weight corresponding to the first loss value, wherein the weight corresponding to the first loss value is inversely proportional to a derivative of the sine function at an included angle between a target direction corresponding to the vehicle and a driving direction of the vehicle in the sample image characterized by the obtained prediction result; determining a weight corresponding to the second loss value, wherein the weight corresponding to the second loss value is inversely proportional to a derivative of the cosine function at an included angle between a target direction corresponding to the vehicle in the sample image characterized by the obtained prediction result and the driving direction of the vehicle; the value of the loss function is determined based on a weighted sum of the first loss value and the second loss value.
In a fourth aspect, an embodiment of the present disclosure provides an apparatus for predicting a driving direction, the apparatus including: a photographing unit configured to photograph an image with a photographing device provided on an unmanned vehicle; a prediction unit configured to input the image into a prediction model, and obtain a prediction result for representing an included angle between a target direction corresponding to the vehicle in the image and a driving direction of the vehicle, wherein the prediction model is generated by the method described in any one of the above-mentioned implementation manners of the first aspect, and the target direction corresponding to the vehicle is a direction in which a connecting line between an actual position of the vehicle and an actual position of the camera device is located; a determination unit configured to determine a traveling direction of the vehicle in the image according to the obtained prediction result.
In a fifth aspect, an embodiment of the present disclosure provides an electronic device, including: one or more processors; storage means for storing one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method as described in any implementation of the first aspect.
In a sixth aspect, embodiments of the present disclosure provide a computer-readable medium on which a computer program is stored, which computer program, when executed by a processor, implements the method as described in any of the implementations of the first aspect.
According to the method and apparatus for generating a prediction model provided by the embodiments of the present disclosure, a large number of training samples comprising sample images and sample results are acquired, and a prediction model is obtained through training. The sample result represents the included angle between the target direction corresponding to a vehicle in the sample image and the driving direction of the vehicle, and the target direction corresponding to the vehicle is the direction of the connecting line between the actual position of the vehicle and the actual position of the camera device. The trained prediction model can therefore be used to obtain a prediction result representing the included angle between the direction of the line connecting the actual position of a photographed vehicle and the actual position of the camera device and the driving direction of that vehicle, and the driving direction of the photographed vehicle can then be calculated from this prediction result.
Drawings
Other features, objects and advantages of the disclosure will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present disclosure may be applied;
FIG. 2 is a flow diagram of one embodiment of a method for generating a predictive model according to the present disclosure;
FIG. 3 is a flow diagram of yet another embodiment of a method for generating a predictive model in accordance with an embodiment of the present disclosure;
FIG. 4 is a flow diagram of one embodiment of a method for predicting a direction of travel according to the present disclosure;
FIG. 5 is a schematic block diagram illustrating one embodiment of an apparatus for generating a predictive model according to the present disclosure;
FIG. 6 is a schematic block diagram illustrating one embodiment of an apparatus for predicting a direction of travel according to the present disclosure;
FIG. 7 is a schematic structural diagram of an electronic device suitable for use in implementing embodiments of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates an exemplary architecture 100 to which embodiments of the disclosed method for generating a predictive model or apparatus for generating a predictive model may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The terminal devices 101, 102, 103 interact with a server 105 via a network 104 to receive or send messages or the like. Various client applications may be installed on the terminal devices 101, 102, 103. Such as image processing-type applications, browser-type applications, search-type applications, and so forth.
The terminal apparatuses 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices that support image processing, including but not limited to smart phones, tablet computers, e-book readers, laptop portable computers, desktop computers, and the like. When the terminal apparatuses 101, 102, 103 are software, they can be installed in the electronic apparatuses listed above. It may be implemented as multiple pieces of software or software modules (e.g., multiple pieces of software or software modules to provide distributed services) or as a single piece of software or software module. And is not particularly limited herein.
The server 105 may be a server providing various services, for example a server training an initial model from a sample set sent by the terminal devices 101, 102, 103 to derive a predictive model. Further, the server 105 may further process the received or collected image by using the trained initial model to obtain an output result corresponding to the image, and further may determine the driving direction of the vehicle in the captured image according to the output result.
It should be noted that the sample set may also be directly stored locally in the server 105, and the server 105 may directly extract the locally stored sample set to train the initial model, in which case, the terminal devices 101, 102, and 103 and the network 104 may not be present.
It should be noted that the method for generating the prediction model provided by the embodiment of the present disclosure is generally performed by the server 105, and accordingly, the apparatus for generating the prediction model is generally disposed in the server 105.
It should be noted that the terminal apparatuses 101, 102, and 103 may also have an image processing function. At this time, the terminal devices 101, 102, 103 may also train the initial model based on the sample set to obtain the prediction model. In this case, the method for generating the prediction model may be executed by the terminal devices 101, 102, and 103, and accordingly, the device for generating the prediction model may be provided in the terminal devices 101, 102, and 103. At this point, the exemplary system architecture 100 may not have the server 105 and the network 104.
The server 105 may be hardware or software. When the server 105 is hardware, it may be implemented as a distributed server cluster composed of a plurality of servers, or may be implemented as a single server. When the server 105 is software, it may be implemented as multiple pieces of software or software modules (e.g., multiple pieces of software or software modules used to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for generating a predictive model in accordance with the present disclosure is shown. The method for generating a predictive model comprises the steps of:
step 201, a sample set is obtained, wherein samples in the sample set comprise sample images shot by a camera and sample results corresponding to the sample images.
In this embodiment, the executing agent (e.g., server 105 shown in fig. 1) of the method for generating a predictive model may obtain the sample set from a local or other storage device (e.g., terminal devices 101, 102, 103 or a database shown in fig. 1).
In the present embodiment, the image pickup device may refer to various devices capable of taking an image. For example, the camera may include, but is not limited to, a camera, a video camera, and the like. Wherein the sample image taken with the camera may include at least one vehicle.
Here, the vehicle may refer to various types of drivable vehicles. For example, a vehicle may include, but is not limited to, an automobile, a passenger car, a truck, a bus, and the like. It should be understood that when the sample image includes more than two vehicles, the types of the respective vehicles included in the sample image may be the same or different.
Alternatively, the camera may take a sample image on the road. For example, the imaging device may be provided at the road side. For another example, the imaging device may be provided on an unmanned vehicle.
The sample result corresponding to the sample image can be used for representing the included angle between the target direction corresponding to the vehicle in the sample image and the driving direction of the vehicle. The target direction corresponding to the vehicle may be a direction in which a connection line between an actual position of the vehicle and an actual position of the camera device is located.
Wherein the actual position may refer to a geographic position. The actual position of the vehicle and the actual position of the camera device may be represented based on various coordinate systems, depending on the actual application requirements. Of course, the actual position of the vehicle and the actual position of the camera device should be represented in the same coordinate system.
If the sample image includes more than two vehicles, the sample result corresponding to the sample image may include sub-sample results corresponding to the vehicles in the sample image. That is, the sample result may include sub-sample results respectively representing an angle between a target direction and a driving direction corresponding to each vehicle in the sample image.
According to different application scenes or application requirements, different methods for representing the included angle between the target direction corresponding to the vehicle and the driving direction of the vehicle can be adopted. For example, the sample result may include an angle between a target direction corresponding to the vehicle and a direction of travel of the vehicle. For another example, the sample result may include a normalized value of an angle between a target direction corresponding to the vehicle and a direction of travel of the vehicle.
For another example, the sample results may include a sine value and/or a cosine value of an angle between a corresponding target direction of the vehicle and a direction of travel of the vehicle. In this case, an included angle between a target direction corresponding to the vehicle and a driving direction of the vehicle is represented by using the sine value and/or the cosine value.
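As a non-authoritative illustration of these representations, the sketch below encodes an included angle (given in degrees) as a raw angle, a normalized value, and a sine/cosine pair; the function name and the normalization to [0, 1) are assumptions made for the example.

```python
import math

def encode_sample_result(angle_deg: float) -> dict:
    """Hypothetical encodings of the included angle between the target
    direction (vehicle-to-camera line) and the vehicle's driving direction."""
    angle_rad = math.radians(angle_deg)
    return {
        "angle_deg": angle_deg,           # raw included angle
        "normalized": angle_deg / 360.0,  # normalized value (assumed range [0, 1))
        "sin": math.sin(angle_rad),       # sine representation
        "cos": math.cos(angle_rad),       # cosine representation
    }
```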
Step 202, selecting samples from the sample set, and performing the following training steps 2021 to 2023:
in this embodiment, the samples can be flexibly selected from the sample set according to the actual application scenario. For example, a preset number of samples may be randomly chosen from a set of samples. The preset number can be preset by a technician according to actual application requirements.
Step 2021, inputting the sample image in the selected sample into the initial model to obtain a prediction result corresponding to the sample image.
In this step, the initial model may be various types of untrained or trained artificial neural networks. For example, the initial model may be a deep learning model. The initial model may also be a model that combines a variety of untrained or trained artificial neural networks. Of course, technicians can also build initial models by utilizing various existing deep learning frameworks according to actual application requirements.
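For concreteness, the sketch below shows what such an initial model might look like, assuming PyTorch as the deep learning framework; the small convolutional backbone and the sine/cosine output head are illustrative assumptions, not an architecture prescribed by this disclosure.

```python
import torch
import torch.nn as nn

class AngleRegressor(nn.Module):
    """Hypothetical initial model: sample image in, (sin, cos) of the included angle out."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)  # predicts [sin, cos] of the included angle

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = self.backbone(x).flatten(1)
        return self.head(features)

model = AngleRegressor()                          # the "initial model"
prediction = model(torch.randn(4, 3, 224, 224))   # a batch of 4 sample images
```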
It should be understood that if more than two samples are selected, the sample images in the selected samples may be respectively input to the initial model to obtain the prediction results output by the initial model for each of the sample images.
Step 2022, determining a value of the loss function according to the obtained prediction result and the sample result in the selected samples.
In this embodiment, the value of the loss function may be determined based on a comparison of the obtained prediction result and the sample result in the selected sample. Wherein, technicians can flexibly design the loss function according to actual application requirements in advance. For example, the loss function may include, but is not limited to, an L1 loss function, a smooth L1 loss function, an L2 loss function, and the like.
Step 2023, in response to determining that the initial model training is complete according to the value of the loss function, determining the initial model as the predictive model.
In this embodiment, whether the initial model is trained can be determined according to the value of the loss function. The manner of determining whether the initial model is trained can be flexibly set by a technician according to actual application requirements. For example, whether the initial model is trained can be determined by determining whether the value of the loss function is not greater than a preset loss threshold. If the value of the loss function is greater than the loss threshold, it may be determined that the initial model is untrained.
In some optional implementations of this embodiment, in response to determining, according to the value of the loss function, that the initial model has not been completely trained, the parameters of the initial model are adjusted, samples are reselected from the sample set, and the training step 202 is continued using the adjusted initial model as the initial model.
Here, after the value of the loss function is obtained, the parameters of each network layer of the initial model can be adjusted using algorithms such as gradient descent and back propagation. Generally, multiple rounds of iterative training are required before training is complete.
During training, the manner of determining whether the initial model has been trained can be chosen flexibly. For example, when the initial model is trained for the first time, whether it has been trained may be determined according to the value of the loss function and a preset loss threshold, or the initial model may simply be regarded as not yet fully trained regardless of the value of the loss function. After the parameters of the initial model have been adjusted, whether the initial model has been trained may be determined according to the difference between the values of the loss function obtained over several consecutive parameter adjustments and a preset difference threshold.
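A minimal sketch of such an iterative training loop is shown below, assuming PyTorch; the stopping thresholds, the SGD optimizer, and the `select_samples` and `loss_fn` callables are placeholders introduced only for illustration.

```python
import torch

def train_until_complete(model, select_samples, loss_fn,
                         loss_threshold=0.01, diff_threshold=1e-4, lr=1e-3):
    """Hypothetical loop: adjust parameters by gradient descent and back
    propagation until the loss, or its change between rounds, is small enough."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    previous_loss = None
    while True:
        images, targets = select_samples()       # reselect samples each round
        loss = loss_fn(model(images), targets)
        done = loss.item() <= loss_threshold or (
            previous_loss is not None
            and abs(loss.item() - previous_loss) <= diff_threshold
        )
        if done:
            return model                         # training considered complete
        previous_loss = loss.item()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()                         # adjust the model parameters
```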
In some optional implementations of this embodiment, the sample result may include an angle interval identifier and a relative angle. The angle interval identifier can be used to indicate the angle interval in which the included angle between the target direction corresponding to the vehicle and the driving direction of the vehicle is located. The angle intervals may be obtained by dividing the omnidirectional angle by a preset angle unit. The relative angle may be used to characterize the position, within that angle interval, of the included angle between the target direction corresponding to the vehicle and the driving direction of the vehicle. In other words, the relative angle may be used to characterize the included angle between the target direction corresponding to the vehicle and the driving direction of the vehicle relative to the angle interval in which the included angle is located.
The omnidirectional angle can be flexibly set by technical personnel according to actual application requirements. For example, the omnidirectional angle may be 360 ° or 180 °. The preset angle unit may also be preset by a technician. For example, the preset angle unit may be 30 °, 60 °, or the like. Taking the preset angle unit as 60 degrees and the omnidirectional angle as 360 degrees as an example, six angle intervals of 0-60 degrees, 60-120 degrees, 120-180 degrees, 180-240 degrees, 240-300 degrees and 300-360 degrees can be obtained. If the angle between the target direction corresponding to the vehicle and the traveling direction of the vehicle is 115 °, it can be determined that the angle between the target direction corresponding to the vehicle and the traveling direction of the vehicle is in the above-described 60 ° to 120 ° angle range.
Wherein, the relative angle can be flexibly determined by adopting various representation methods. For example, the relative angle may be used to characterize the difference between the angle between the corresponding target direction of the vehicle and the driving direction of the vehicle and the starting angle of the angular interval in which it is located.
For example, if the included angle between the target direction corresponding to the vehicle and the driving direction of the vehicle is 115°, the included angle lies in the 60°–120° angle interval. The relative angle is then 115° − 60° = 55°, i.e. the difference between the included angle and the starting angle of the interval.
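As a small illustrative sketch (assuming a 60° angle unit over a 360° omnidirectional angle and a zero-based interval identifier), the interval identifier and the start-based relative angle for the 115° example could be computed as follows:

```python
def interval_and_relative_angle(angle_deg, unit_deg=60, full_deg=360):
    """Return the angle-interval identifier and the angle relative to the
    interval's starting angle, e.g. 115 degrees -> interval 1 (60-120), relative 55."""
    angle_deg = angle_deg % full_deg
    interval_id = int(angle_deg // unit_deg)   # 0 for 0-60, 1 for 60-120, ...
    relative = angle_deg - interval_id * unit_deg
    return interval_id, relative

assert interval_and_relative_angle(115) == (1, 55)
```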
A plurality of angle intervals are obtained by dividing the omnidirectional angle according to a preset angle unit, so that the included angle between the target direction corresponding to the vehicle and the driving direction of the vehicle can be represented by using the angle interval identification and the position located in the angle interval. The representation mode can reduce the value range of the relative angle and is beneficial to reducing the output error of the prediction model.
In some optional implementation manners of this embodiment, any two adjacent angle intervals in each angle interval obtained by dividing the omnidirectional angle by the preset angle unit may have an intersection.
As an example, the resulting angle intervals may be as follows: 330°–90° (i.e., 330°–360° together with 0°–90°), 60°–150°, 120°–210°, 180°–270°, 240°–330°, and 300°–30° (i.e., 300°–360° together with 0°–30°), so that every two adjacent intervals share an intersection.
Dividing the omnidirectional angle so that any two adjacent angle intervals have an intersection helps to ensure the continuity and stability of the output result of the prediction model.
It should be noted that, when any two adjacent angle intervals in each divided angle interval have an intersection, for an angle in any two adjacent angle intervals, the interval identifier corresponding to the angle may include an angle interval identifier corresponding to each of the two adjacent angle intervals.
Alternatively, the relative angle may be used to represent the difference between the included angle between the target direction corresponding to the vehicle and the driving direction of the vehicle and the half angle of the angle interval in which the included angle is located. Here, the half angle of an angle interval refers to the angle at the center of the interval.
For example, if the included angle between the target direction corresponding to the vehicle and the driving direction of the vehicle is 115°, the included angle lies in the 60°–120° angle interval. The half angle of the 60°–120° interval is 60° + (120° − 60°)/2 = 90°. In this case, the relative angle is 115° − 90° = 25°.
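The sketch below (same assumptions as before: a 60° unit and degrees as the working unit) computes this half-angle-based relative angle for the 115° example:

```python
def relative_to_half_angle(angle_deg, unit_deg=60, full_deg=360):
    """Angle expressed relative to the half angle (center) of its interval,
    e.g. 115 degrees lies in 60-120, whose half angle is 90, giving 25."""
    angle_deg = angle_deg % full_deg
    start = (angle_deg // unit_deg) * unit_deg
    half_angle = start + unit_deg / 2
    return angle_deg - half_angle

assert relative_to_half_angle(115) == 25.0
```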
Representing the included angle between the target direction corresponding to the vehicle and the driving direction of the vehicle by its difference from the half angle of the angle interval in which it is located further reduces the value range of the relative angle, and thus further reduces the output error of the prediction model.
In some optional implementations of the present embodiment, an included angle between a corresponding target direction of the vehicle and a driving direction of the vehicle may be characterized by a corresponding sine value and/or cosine value. At this time, the sample result may include a sine value and/or a cosine value corresponding to an angle between a target direction corresponding to the vehicle and a driving direction of the vehicle.
In some optional implementations of this embodiment, when the sample result includes the angle interval identifier and the relative angle, the position of the included angle between the target direction corresponding to the vehicle and the driving direction of the vehicle within its angle interval may also be characterized by the corresponding sine value and/or cosine value. In this case, the relative angle may include the sine value and/or the cosine value of the included angle taken relative to the angle interval in which it is located.
It should be noted that, the representation method of the included angle between the target direction corresponding to the vehicle and the driving direction of the vehicle may be flexibly set. For example, a coordinate system and a reference position may be selected, and the relative angle may be used to characterize an angle between a corresponding target direction of the vehicle and a direction of travel of the vehicle. The present disclosure is not so limited.
In some optional implementation manners of this embodiment, when the sample result uses a sine value and a cosine value to represent an angle, a value of the loss function may be determined according to the obtained prediction result and a sample result in the selected sample by the following steps:
and step one, according to the sine value in the obtained prediction result and the sine value in the corresponding sample result, obtaining a first loss value by using an L1 loss function.
Since the L1 loss function calculates the loss value using the least absolute deviation, the absolute value of the difference between the sine value in the prediction result and the sine value in the corresponding sample result may be determined as the first loss value.
And step two, obtaining a second loss value by utilizing an L1 loss function according to the cosine value in the obtained prediction result and the cosine value in the corresponding sample result.
In this step, an absolute value of a difference between a cosine value in the prediction result and a cosine value in the corresponding sample result may be determined as the second loss value.
And step three, determining the weight corresponding to the first loss value according to the sine value in the obtained prediction result.
In this step, the first weight may be inversely proportional to a derivative of the sine function at an angle between a target direction corresponding to the vehicle in the sample image characterized by the obtained prediction result and the driving direction of the vehicle.
The specific calculation mode of the first weight can be flexibly set by technicians according to actual application scenarios. For example, the first weight may be an inverse of a derivative of a sine function at an angle between a target direction corresponding to the vehicle in the sample image characterized by the obtained prediction result and the driving direction of the vehicle. For another example, the sum of the derivative of the sine function at the included angle between the target direction corresponding to the vehicle in the sample image characterized by the obtained prediction result and the driving direction of the vehicle and the preset adjustment value may be calculated to obtain an adjusted derivative, and then the reciprocal of the obtained adjusted derivative may be determined as the first weight.
And step four, determining the weight corresponding to the second loss value according to the cosine value in the obtained prediction result.
In this step, the second weight may be inversely proportional to a derivative of the cosine function at an angle between a target direction corresponding to the vehicle in the sample image represented by the obtained prediction result and the driving direction of the vehicle.
Similar to the first weight, the specific calculation manner of the second weight can be flexibly set by a technician according to an actual application scenario. For example, the second weight value may be an inverse of a derivative of a cosine function at an angle between a target direction corresponding to the vehicle in the sample image characterized by the obtained prediction result and the traveling direction of the vehicle. For another example, the sum of the derivative of the cosine function at the included angle between the target direction corresponding to the vehicle in the sample image represented by the obtained prediction result and the driving direction of the vehicle and the preset adjustment value may be calculated to obtain an adjusted derivative, and then the reciprocal of the obtained adjusted derivative may be determined as the second weight.
And step five, determining the value of the loss function according to the weighted sum of the first loss value and the second loss value.
In this step, the value of the loss function may be proportional to the weighted sum of the first loss value and the second loss value. For example, the weighted sum of the first loss value and the second loss value may be used directly as the value of the loss function.
Because the sine function and the cosine function are themselves non-linear, the accuracy of the calculated L1 loss value is affected. By weighting the L1 loss values with the reciprocals of the derivatives of the sine function and the cosine function at the corresponding included angle, a more accurate value of the loss function can be obtained, thereby improving the accuracy of the output result of the prediction model.
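A sketch of steps one to five above is given below, assuming PyTorch tensors. Taking the absolute value of each derivative and adding a small constant `eps` before inverting (one of the adjustment-value variants mentioned above) are assumptions made here to keep the weights finite; they are not mandated by the text.

```python
import torch

def weighted_sin_cos_l1_loss(pred_sin, pred_cos, target_sin, target_cos, eps=0.1):
    """Sketch of a weighted L1 loss over predicted sine/cosine values."""
    # Steps one and two: plain L1 losses on the sine and cosine values.
    loss_sin = (pred_sin - target_sin).abs()
    loss_cos = (pred_cos - target_cos).abs()

    # Recover the predicted included angle to evaluate the derivatives at it.
    angle = torch.atan2(pred_sin, pred_cos)

    # Steps three and four: weights inversely proportional to the derivatives,
    # |d(sin)/d(angle)| = |cos(angle)| and |d(cos)/d(angle)| = |sin(angle)|.
    weight_sin = 1.0 / (torch.cos(angle).abs() + eps)
    weight_cos = 1.0 / (torch.sin(angle).abs() + eps)

    # Step five: the value of the loss function as the weighted sum.
    return (weight_sin * loss_sin + weight_cos * loss_cos).mean()
```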
According to the method provided by the above embodiment of the present disclosure, the prediction model is trained to regress the included angle between the target direction corresponding to a vehicle, that is, the direction of the line connecting the actual position of the vehicle and the actual position of the camera device, and the driving direction of the vehicle. Because the appearance of the vehicle presented in the captured image changes synchronously with this included angle, the accuracy and stability of the output result of the prediction model are improved.
With further reference to fig. 3, fig. 3 shows a flow 300 of yet another embodiment of a method for generating a predictive model according to the present disclosure. The flow 300 of the method for generating a predictive model includes the steps of:
Step 301, a sample set is obtained, where a sample in the sample set includes a sample image captured by a camera device and a sample result corresponding to the sample image, the sample result includes an angle interval identifier and a relative angle, and the relative angle includes the sine value and the cosine value of the difference between the included angle between the target direction corresponding to the vehicle and the driving direction of the vehicle and the half angle of the angle interval in which the included angle is located.
Step 302, selecting a sample from the sample set, and performing the following training steps 3021 to 3024:
and step 3021, inputting a sample image in the selected sample into the initial model to obtain a prediction result corresponding to the sample image.
Step 3022, the value of the loss function is determined by the following steps 30221-30225:
step 30221, obtaining a first loss value by using an L1 loss function according to the sine value in the obtained prediction result and the sine value in the corresponding sample result.
And step 30222, obtaining a second loss value by using an L1 loss function according to the cosine value in the obtained prediction result and the cosine value in the corresponding sample result.
Step 30223, determining the weight corresponding to the first loss value.
In this step, the weight corresponding to the first loss value may be inversely proportional to the derivative of the sine function evaluated at the difference, characterized by the relative angle in the obtained prediction result, between the included angle between the target direction corresponding to the vehicle and the driving direction of the vehicle and the half angle of the angle interval in which the included angle is located. For example, this difference may first be determined from the sine value and the cosine value included in the relative angle of the prediction result, and the reciprocal of the derivative of the sine function at the obtained difference may then be calculated as the weight corresponding to the first loss value.
Step 30224, determining the weight corresponding to the second loss value.
In this step, the weight corresponding to the second loss value may be inversely proportional to the derivative of the cosine function evaluated at the difference, characterized by the relative angle in the obtained prediction result, between the included angle between the target direction corresponding to the vehicle and the driving direction of the vehicle and the half angle of the angle interval in which the included angle is located.
For example, this difference may first be determined from the sine value and the cosine value included in the relative angle of the prediction result, and the reciprocal of the derivative of the cosine function at the obtained difference may then be calculated as the weight corresponding to the second loss value.
Step 30225, the weighted sum of the first loss value and the second loss value is determined as the value of the loss function.
Step 3023, in response to determining that the initial model training is complete based on the value of the loss function, determining the initial model as the predictive model.
Step 3024, in response to determining, according to the value of the loss function, that the initial model has not been completely trained, adjusting parameters of the initial model, reselecting a sample from the sample set, and continuing to perform the training step 302 using the adjusted initial model as the initial model.
For details of steps 301 and 302 not described above, reference may be made to the related description of steps 201 and 202 in the embodiment corresponding to fig. 2, which is not repeated here.
According to the method provided by the above embodiment of the present disclosure, dividing the omnidirectional angle into a plurality of angle intervals reduces the value range of the relative angle involved in training, which reduces the output error of the prediction model and improves the stability of its output result. Meanwhile, using the reciprocals of the derivatives of the sine function and the cosine function at the corresponding angle as weights to adjust the value of the loss function reduces the non-linear influence of the sine and cosine functions, thereby improving the accuracy of the output result of the prediction model.
With further reference to FIG. 4, a flow 400 of one embodiment of a method for predicting a direction of travel is shown. The process 400 of the method for predicting a driving direction includes the steps of:
step 401, shooting an image by using a shooting device arranged on the unmanned vehicle.
In the present embodiment, the execution subject of the method for predicting the traveling direction may be the same as or different from the execution subject of the method described in the embodiment corresponding to fig. 2. The camera device may refer to various devices for taking an image that can be installed in the unmanned vehicle. For example, the camera may include, but is not limited to, a camera, a video camera, and the like. Wherein the image captured with the camera device may include at least one vehicle.
Step 402, inputting the image into a prediction model to obtain a prediction result for representing an included angle between a target direction corresponding to the vehicle in the image and the driving direction of the vehicle.
In this embodiment, the prediction model may be generated by the method described in the embodiment corresponding to fig. 2 or fig. 3. The target direction corresponding to the vehicle may be a direction in which a line between an actual position of the vehicle and an actual position of the image pickup device is located.
In step 403, the driving direction of the vehicle in the image is determined according to the obtained prediction result.
After the prediction result is obtained, the actual position of the camera device arranged on the unmanned vehicle and the position of the vehicle in the image can be combined to calculate the driving direction of the vehicle in the image.
The unmanned vehicle can detect its own actual position, from which the actual position of the camera device arranged on the unmanned vehicle is obtained. The actual position of the vehicle in the image can be determined using various methods. For example, it may be calculated from the extrinsic and/or intrinsic parameters of the camera device, based on image analysis and computer vision techniques.
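As an illustrative, deliberately simplified sketch of this post-processing step, the function below combines the predicted included angle with the direction of the camera-to-vehicle connecting line in a 2D world coordinate system; the planar coordinates, the angle convention, and the choice of the positive solution of the included angle are assumptions made for the example.

```python
import math

def driving_direction(camera_pos, vehicle_pos, predicted_angle_rad):
    """Hypothetical post-processing: combine the predicted included angle with
    the direction of the camera-to-vehicle connecting line.

    camera_pos, vehicle_pos: (x, y) in the same world coordinate system.
    Returns one candidate heading of the vehicle in radians; the included angle
    alone leaves a sign ambiguity that would need to be resolved separately.
    """
    dx = vehicle_pos[0] - camera_pos[0]
    dy = vehicle_pos[1] - camera_pos[1]
    target_direction = math.atan2(dy, dx)   # direction of the connecting line
    return (target_direction + predicted_angle_rad) % (2 * math.pi)

heading = driving_direction((0.0, 0.0), (10.0, 5.0), math.radians(115))
```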
It should be understood that when the captured image includes more than two vehicles, the respective directions of travel of the respective vehicles may be determined.
According to the method provided by the above embodiment of the present disclosure, a trained prediction model is used to obtain a prediction result representing the included angle between the target direction corresponding to a vehicle in an image captured by the unmanned vehicle and the driving direction of that vehicle, where the target direction corresponding to the vehicle is the direction of the connecting line between the actual position of the vehicle and the actual position of the camera device. The driving direction of each vehicle in the image can then be determined according to the obtained prediction results. On this basis, the unmanned vehicle can adjust its own driving trajectory in time during driving according to the determined driving directions of the surrounding vehicles, thereby ensuring the safety of the driving process.
With further reference to fig. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an apparatus for generating a prediction model, which corresponds to the method embodiment shown in fig. 2, and which may be applied in various electronic devices in particular.
As shown in fig. 5, the apparatus 500 for generating a prediction model provided in the present embodiment includes an obtaining unit 501 and a training unit 502. The obtaining unit 501 is configured to obtain a sample set, where samples in the sample set include a sample image captured by the camera and a sample result corresponding to the sample image, where the sample result is used to represent an included angle between a target direction corresponding to the vehicle in the sample image and a driving direction of the vehicle, and the target direction corresponding to the vehicle is a direction in which a connection line between an actual position of the vehicle and an actual position of the camera is located; the training unit 502 is configured to select samples from a sample set and to perform the following training steps: inputting a sample image in the selected sample into the initial model to obtain a prediction result corresponding to the sample image; determining the value of the loss function according to the obtained prediction result and the sample result in the selected sample; in response to determining that the initial model training is complete according to the value of the loss function, the initial model is determined to be a predictive model.
In the present embodiment, in the apparatus 500 for generating a prediction model: the specific processing of the obtaining unit 501 and the training unit 502 and the technical effects thereof can refer to the related descriptions of step 201 and step 202 in the corresponding embodiment of fig. 2, which are not repeated herein.
In some optional implementation manners of this embodiment, the training step further includes: in response to determining, according to the value of the loss function, that the initial model has not been completely trained, adjusting parameters of the initial model, reselecting samples from the sample set, and continuing to perform the training step using the adjusted initial model as the initial model.
In some optional implementation manners of this embodiment, the sample result includes an angle section identifier and a relative angle, where the angle section identifier is used to indicate an angle section where an included angle between a target direction corresponding to the vehicle and a driving direction of the vehicle is located, and the relative angle is used to represent a position of the included angle between the target direction corresponding to the vehicle and the driving direction of the vehicle in the angle section where the included angle is located, where the angle section is obtained by dividing the omnidirectional angle by a preset angle unit.
In some optional implementations of the embodiment, the relative angle is used to represent a difference between an included angle between a corresponding target direction of the vehicle and a driving direction of the vehicle and a half angle of an angle interval in which the included angle is located.
In some optional implementations of the embodiment, the sample result includes a sine value and/or a cosine value corresponding to an angle between a target direction corresponding to the vehicle and a driving direction of the vehicle.
In some optional implementations of the present embodiment, the training unit 502 is further configured to obtain a first loss value by using an L1 loss function according to a sine value in the obtained prediction result and a sine value in the corresponding sample result; obtaining a second loss value by utilizing an L1 loss function according to a cosine value in the obtained prediction result and a cosine value in the corresponding sample result; determining a weight corresponding to the first loss value, wherein the weight corresponding to the first loss value is inversely proportional to a derivative of the sine function at an included angle between a target direction corresponding to the vehicle and a driving direction of the vehicle in the sample image characterized by the obtained prediction result; determining a weight corresponding to the second loss value, wherein the weight corresponding to the second loss value is inversely proportional to a derivative of the cosine function at an included angle between a target direction corresponding to the vehicle in the sample image characterized by the obtained prediction result and the driving direction of the vehicle; the value of the loss function is determined based on a weighted sum of the first loss value and the second loss value.
The apparatus provided by the above embodiment of the present disclosure acquires, through the acquisition unit, a sample set in which samples include a sample image captured by a camera device and a sample result corresponding to the sample image, where the sample result characterizes the included angle between the target direction corresponding to a vehicle in the sample image and the driving direction of the vehicle, and the target direction corresponding to the vehicle is the direction of the connecting line between the actual position of the vehicle and the actual position of the camera device. The training unit selects samples from the sample set and performs the following training steps: inputting a sample image in the selected sample into the initial model to obtain a prediction result corresponding to the sample image; determining the value of the loss function according to the obtained prediction result and the sample result in the selected sample; and, in response to determining that training of the initial model is complete according to the value of the loss function, determining the initial model as the prediction model. By training the prediction model in this way, the regressed quantity is the included angle between the target direction corresponding to the vehicle, that is, the direction of the connecting line between the actual position of the vehicle and the actual position of the camera device, and the driving direction of the vehicle. Because the appearance of the vehicle presented in the captured image changes synchronously with this included angle, the accuracy and stability of the output result of the prediction model are improved.
With further reference to fig. 6, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an apparatus for predicting a driving direction, which corresponds to the method embodiment shown in fig. 4 and is particularly applicable to various electronic devices.
As shown in fig. 6, the apparatus 600 for predicting a traveling direction provided by the present embodiment includes a photographing unit 601, a prediction unit 602, and a determination unit 603. The photographing unit 601 is configured to photograph an image with a photographing device provided on an unmanned vehicle; the prediction unit 602 is configured to input the image into a prediction model to obtain a prediction result for representing an included angle between a target direction corresponding to the vehicle in the image and a driving direction of the vehicle, wherein the prediction model is generated by the method described in the embodiment of fig. 2, and the target direction corresponding to the vehicle is the direction of the connecting line between the actual position of the vehicle and the actual position of the camera device; and the determination unit 603 is configured to determine the traveling direction of the vehicle in the image according to the obtained prediction result.
In the present embodiment, in the apparatus 600 for predicting a traveling direction: for the specific processing of the photographing unit 601, the prediction unit 602, and the determination unit 603 and the technical effects thereof, reference may be made to the related descriptions of step 401, step 402, and step 403 in the corresponding embodiment of fig. 4, which are not repeated here.
In the device provided by the above embodiment of the present disclosure, the photographing unit captures an image with a photographing device provided on an unmanned vehicle; the prediction unit inputs the image into a prediction model to obtain a prediction result for representing an included angle between a target direction corresponding to a vehicle in the image and a driving direction of the vehicle, wherein the prediction model is generated by the method described in the embodiment of fig. 2, and the target direction corresponding to the vehicle is the direction of the connecting line between the actual position of the vehicle and the actual position of the camera device; and the determining unit determines the driving direction of the vehicle in the image according to the obtained prediction result. The unmanned vehicle can thereby adjust its own driving track in time according to the determined driving directions of surrounding vehicles, ensuring the safety of the driving process.
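As a rough illustration of the last step, the sketch below assumes the image-side detection has already been mapped to an estimated ground-plane position of the vehicle, and recovers a heading by rotating the target direction (the connecting line from the camera device to the vehicle) by the predicted included angle. The counter-clockwise sign convention and the function names are assumptions of this sketch, not details given in this disclosure.

```python
import math

def driving_direction(camera_xy, vehicle_xy, predicted_angle_deg):
    """Recover the driving direction (heading in degrees) of a detected vehicle.

    target direction: the direction of the connecting line from the camera device
    to the vehicle; the prediction model outputs the included angle between that
    direction and the vehicle's driving direction.
    """
    dx = vehicle_xy[0] - camera_xy[0]
    dy = vehicle_xy[1] - camera_xy[1]
    target_dir_deg = math.degrees(math.atan2(dy, dx))   # direction of the connecting line
    return (target_dir_deg + predicted_angle_deg) % 360.0
```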
Referring now to FIG. 7, a block diagram of an electronic device (e.g., the server of FIG. 1) 700 suitable for use in implementing embodiments of the present disclosure is shown. The server shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 7, the electronic device 700 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 701 that may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage device 708 into a random access memory (RAM) 703. In the RAM 703, various programs and data necessary for the operation of the electronic device 700 are also stored. The processing device 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
Generally, the following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 707 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 708 including, for example, magnetic tape, hard disk, etc.; and a communication device 709. The communication means 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. While fig. 7 illustrates an electronic device 700 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 7 may represent one device or may represent multiple devices as desired.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via the communication means 709, or may be installed from the storage means 708, or may be installed from the ROM 702. The computer program, when executed by the processing device 701, performs the above-described functions defined in the methods of embodiments of the present disclosure.
It should be noted that the computer readable medium described in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In embodiments of the present disclosure, however, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring a sample set, wherein samples in the sample set comprise sample images shot by a camera device and sample results corresponding to the sample images, the sample results are used for representing an included angle between a target direction corresponding to a vehicle in the sample images and a driving direction of the vehicle, and the target direction corresponding to the vehicle is a direction of a connecting line between an actual position of the vehicle and an actual position of the camera device; selecting samples from the sample set, and performing the following training steps: inputting a sample image in the selected sample into the initial model to obtain a prediction result corresponding to the sample image; determining the value of the loss function according to the obtained prediction result and the sample result in the selected sample; in response to determining that the initial model training is complete according to the value of the loss function, the initial model is determined to be a predictive model.
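A minimal sketch of the training step those programs could carry out is shown below, assuming a PyTorch-style model, data loader, optimizer, and loss function; the stopping criteria (a loss threshold and a step cap) are illustrative assumptions, since the text only says that completion is determined from the value of the loss function.

```python
import torch

def train_prediction_model(initial_model, sample_loader, loss_fn,
                           optimizer, loss_threshold=0.01, max_steps=10000):
    """Train the initial model on (sample_image, sample_result) pairs until the
    value of the loss function indicates training is complete, then return the
    model as the prediction model."""
    step = 0
    for sample_image, sample_result in sample_loader:
        prediction = initial_model(sample_image)        # prediction result for the sample image
        loss = loss_fn(prediction, sample_result)       # value of the loss function

        if loss.item() < loss_threshold or step >= max_steps:
            return initial_model                        # training complete: use as prediction model

        optimizer.zero_grad()                           # otherwise adjust parameters and continue
        loss.backward()
        optimizer.step()
        step += 1
    return initial_model
```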
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The described units may also be provided in a processor, which may, for example, be described as: a processor comprising an acquisition unit and a training unit. The names of these units do not, in some cases, constitute a limitation on the units themselves; for example, the acquisition unit may also be described as "a unit that acquires a sample set".
The foregoing description is only a description of the preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention referred to in the embodiments of the present disclosure is not limited to the specific combination of the above-mentioned technical features, and also encompasses other technical solutions formed by any combination of the above-mentioned technical features or their equivalents without departing from the above inventive concept, for example, technical solutions formed by mutually replacing the above-mentioned features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (11)

1. A method for generating a predictive model, comprising:
obtaining a sample set, wherein samples in the sample set comprise sample images shot by a camera device and sample results corresponding to the sample images, the sample results are used for representing an included angle between a target direction corresponding to a vehicle in the sample images and a driving direction of the vehicle, and the target direction corresponding to the vehicle is a direction of a connecting line between an actual position of the vehicle and an actual position of the camera device;
selecting samples from the sample set, and performing the following training steps: inputting a sample image in the selected sample into the initial model to obtain a prediction result corresponding to the sample image; determining the value of the loss function according to the obtained prediction result and the sample result in the selected sample; in response to determining that the initial model training is complete according to the value of the loss function, the initial model is determined to be a predictive model.
2. The method of claim 1, wherein the training step further comprises:
and in response to determining that the initial model is not trained completely according to the value of the loss function, adjusting parameters of the initial model, reselecting the sample from the sample set, and continuing to execute the training step by using the adjusted initial model as the initial model.
3. The method according to claim 1, wherein the sample result comprises an angle section identifier and a relative angle, wherein the angle section identifier is used for indicating an angle section in which an included angle between a target direction corresponding to the vehicle and a driving direction of the vehicle is located, and the relative angle is used for representing a position of the included angle between the target direction corresponding to the vehicle and the driving direction of the vehicle in the angle section in which the included angle is located, wherein the angle section is obtained by dividing the omnidirectional angle by a preset angle unit.
4. A method according to claim 3, wherein the relative angle is used to characterize the difference between the angle between the corresponding target direction of the vehicle and the direction of travel of the vehicle and the half angle of the angular interval in which it is located.
5. The method of claim 1, wherein the sample results comprise sine values and/or cosine values corresponding to an angle between a target direction corresponding to the vehicle and a direction of travel of the vehicle.
6. The method of claim 5, wherein determining the value of the loss function based on the obtained prediction and the sample results in the selected samples comprises:
obtaining a first loss value by utilizing an L1 loss function according to the sine value in the obtained prediction result and the sine value in the corresponding sample result;
obtaining a second loss value by utilizing an L1 loss function according to a cosine value in the obtained prediction result and a cosine value in the corresponding sample result;
determining a weight corresponding to the first loss value, wherein the weight corresponding to the first loss value is inversely proportional to a derivative of a sine function at an included angle between a target direction corresponding to the vehicle and a driving direction of the vehicle in a sample image characterized by the obtained prediction result;
determining a weight corresponding to the second loss value, wherein the weight corresponding to the second loss value is inversely proportional to a derivative of a cosine function at an included angle between a target direction corresponding to the vehicle and a driving direction of the vehicle in the sample image characterized by the obtained prediction result;
determining a value of a loss function based on a weighted sum of the first loss value and the second loss value.
7. A method for predicting a direction of travel, comprising:
shooting an image by utilizing a shooting device arranged on the unmanned vehicle;
inputting the image into a prediction model to obtain a prediction result for representing an included angle between a target direction corresponding to the vehicle in the image and a driving direction of the vehicle, wherein the prediction model is generated by the method according to one of claims 1 to 6, and the target direction corresponding to the vehicle is a direction of a connecting line between an actual position of the vehicle and an actual position of the camera device;
and determining the driving direction of the vehicle in the image according to the obtained prediction result.
8. An apparatus for generating a predictive model, comprising:
the system comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is configured to acquire a sample set, samples in the sample set comprise sample images shot by a camera device and sample results corresponding to the sample images, the sample results are used for representing included angles between target directions corresponding to vehicles in the sample images and the driving directions of the vehicles, and the target directions corresponding to the vehicles are directions of connecting lines between the actual positions of the vehicles and the actual positions of the camera device;
a training unit configured to select samples from the set of samples and to perform the following training steps: inputting a sample image in the selected sample into the initial model to obtain a prediction result corresponding to the sample image; determining the value of the loss function according to the obtained prediction result and the sample result in the selected sample; in response to determining that the initial model training is complete according to the value of the loss function, the initial model is determined to be a predictive model.
9. An apparatus for predicting a direction of travel, comprising:
a photographing unit configured to photograph an image with a photographing device provided on an unmanned vehicle;
a prediction unit configured to input the image into a prediction model, and obtain a prediction result for representing an included angle between a target direction corresponding to the vehicle in the image and a driving direction of the vehicle, wherein the prediction model is generated by the method according to one of claims 1 to 6, and the target direction corresponding to the vehicle is a direction in which a connecting line between an actual position of the vehicle and an actual position of the camera device is located;
a determination unit configured to determine a traveling direction of the vehicle in the image according to the obtained prediction result.
10. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-6.
11. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-6.
CN202010097489.4A 2020-02-17 2020-02-17 Method and apparatus for generating predictive model Active CN111340880B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010097489.4A CN111340880B (en) 2020-02-17 2020-02-17 Method and apparatus for generating predictive model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010097489.4A CN111340880B (en) 2020-02-17 2020-02-17 Method and apparatus for generating predictive model

Publications (2)

Publication Number Publication Date
CN111340880A true CN111340880A (en) 2020-06-26
CN111340880B CN111340880B (en) 2023-08-04

Family

ID=71187000

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010097489.4A Active CN111340880B (en) 2020-02-17 2020-02-17 Method and apparatus for generating predictive model

Country Status (1)

Country Link
CN (1) CN111340880B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130016877A1 (en) * 2011-07-15 2013-01-17 International Business Machines Corporation Multi-view object detection using appearance model transfer from similar scenes
US20130236858A1 (en) * 2012-03-08 2013-09-12 Industrial Technology Research Institute Surrounding bird view monitoring image generation method and training method, automobile-side device, and training device thereof
CN105752081A (en) * 2014-12-30 2016-07-13 株式会社万都 Lane Change Control Device And Control Method
CN109389863A (en) * 2017-08-02 2019-02-26 华为技术有限公司 Reminding method and relevant device
CN107554430A (en) * 2017-09-20 2018-01-09 京东方科技集团股份有限公司 Vehicle blind zone view method, apparatus, terminal, system and vehicle
CN107703937A (en) * 2017-09-22 2018-02-16 南京轻力舟智能科技有限公司 Automatic Guided Vehicle system and its conflict evading method based on convolutional neural networks
CN108053067A (en) * 2017-12-12 2018-05-18 深圳市易成自动驾驶技术有限公司 Planing method, device and the computer readable storage medium of optimal path
CN109087485A (en) * 2018-08-30 2018-12-25 Oppo广东移动通信有限公司 Assisting automobile driver method, apparatus, intelligent glasses and storage medium
CN110356405A (en) * 2019-07-23 2019-10-22 桂林电子科技大学 Vehicle auxiliary travelling method, apparatus, computer equipment and readable storage medium storing program for executing
CN110400490A (en) * 2019-08-08 2019-11-01 腾讯科技(深圳)有限公司 Trajectory predictions method and apparatus

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
PAUL E. RYBSKI et al.: "Visual Classification of Coarse Vehicle Orientation using Histogram of Oriented Gradients Features" *
ZHU Huiqiang: "Research on Vehicle Behavior Analysis Technology Based on Video Tracking" *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113029146A (en) * 2021-03-02 2021-06-25 北京白龙马云行科技有限公司 Navigation action prediction model training method, navigation action generation method and device
CN113344237A (en) * 2021-03-24 2021-09-03 安徽超视野智能科技有限公司 Illegal vehicle route prediction method

Also Published As

Publication number Publication date
CN111340880B (en) 2023-08-04

Similar Documents

Publication Publication Date Title
CN111461981B (en) Error estimation method and device for point cloud stitching algorithm
CN110163153B (en) Method and device for recognizing traffic sign board boundary
CN110110029B (en) Method and device for lane matching
CN109213144A (en) Man-machine interface (HMI) framework
CN115616937B (en) Automatic driving simulation test method, device, equipment and computer readable medium
CN110619666B (en) Method and device for calibrating camera
CN111353453B (en) Obstacle detection method and device for vehicle
CN113126624B (en) Automatic driving simulation test method, device, electronic equipment and medium
CN111340880B (en) Method and apparatus for generating predictive model
CN113892275A (en) Positioning method, positioning device, electronic equipment and storage medium
CN111766891A (en) Method and device for controlling flight of unmanned aerial vehicle
CN111469781B (en) Method and apparatus for outputting information in an information processing system
CN111461980B (en) Performance estimation method and device of point cloud stitching algorithm
CN113409393B (en) Method and device for identifying traffic sign
CN115512336B (en) Vehicle positioning method and device based on street lamp light source and electronic equipment
CN108595095B (en) Method and device for simulating movement locus of target body based on gesture control
CN113920174A (en) Point cloud registration method, device, equipment, medium and automatic driving vehicle
CN112859109B (en) Unmanned aerial vehicle panoramic image processing method and device and electronic equipment
CN110588666B (en) Method and device for controlling vehicle running
US11508241B2 (en) Parking area mapping using image-stream derived vehicle description and space information
CN111402148B (en) Information processing method and apparatus for automatically driving vehicle
CN110033088B (en) Method and device for estimating GPS data
CN111383337A (en) Method and device for identifying objects
CN111461982B (en) Method and apparatus for splicing point clouds
CN115848358B (en) Vehicle parking method, device, electronic equipment and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant