CN111340880B - Method and apparatus for generating predictive model - Google Patents

Info

Publication number
CN111340880B
Authority
CN
China
Prior art keywords
vehicle
sample
value
angle
image
Prior art date
Legal status
Active
Application number
CN202010097489.4A
Other languages
Chinese (zh)
Other versions
CN111340880A (en)
Inventor
蒋旻悦
谭啸
孙昊
文石磊
章宏武
丁二锐
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010097489.4A
Publication of CN111340880A
Application granted
Publication of CN111340880B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75 - Determining position or orientation of objects or cameras using feature-based methods involving models
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

Embodiments of the present disclosure disclose methods and apparatus for generating a predictive model. One embodiment of the method comprises: acquiring a sample set, wherein a sample comprises a sample image and a sample result, the sample image is captured with a camera device, the sample result characterizes an included angle between a target direction corresponding to a vehicle in the sample image and a traveling direction of the vehicle, and the target direction corresponding to the vehicle is the direction of the line between the actual position of the vehicle and the actual position of the camera device; selecting samples from the sample set, and performing the following training steps: inputting a sample image in the selected samples into an initial model to obtain a prediction result corresponding to the sample image; determining a value of a loss function according to the obtained prediction result and the sample result in the selected samples; and in response to determining, based on the value of the loss function, that training of the initial model is complete, determining the initial model to be the prediction model. The embodiment realizes training of the prediction model, so that the traveling direction of a vehicle can be determined using the prediction model.

Description

Method and apparatus for generating predictive model
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular, to a method and apparatus for generating a predictive model.
Background
Unmanned vehicles are also known as driverless vehicles, intelligent vehicles, wheeled mobile robots, and the like. At present, an unmanned vehicle relies mainly on an intelligent pilot inside the vehicle, centered on a computer system (such as an on-board sensing system), to perceive the surrounding environment of the unmanned vehicle, and controls the steering, speed, and the like of the vehicle according to the perceived information about roads, obstacles, and so on, so that the vehicle can travel on the road. How to ensure the safety and reliability of unmanned-vehicle driving is one of the main research directions in the field of unmanned-vehicle research.
During the travel of an unmanned vehicle, there are often many other traveling vehicles on the road. Therefore, if the driving directions of other surrounding vehicles can be accurately determined, the unmanned vehicle can better plan its own driving route so as to ensure safe driving.
Disclosure of Invention
Embodiments of the present disclosure propose methods and apparatus for generating a predictive model.
In a first aspect, embodiments of the present disclosure provide a method for generating a predictive model, the method comprising: the method comprises the steps that a sample set is obtained, wherein a sample in the sample set comprises a sample image shot by using a camera device and a sample result corresponding to the sample image, the sample result is used for representing an included angle between a target direction corresponding to a vehicle in the sample image and a running direction of the vehicle, and the target direction corresponding to the vehicle is a direction in which a connecting line between an actual position of the vehicle and an actual position of the camera device is located; selecting a sample from the sample set, and performing the training steps of: inputting a sample image in the selected sample into an initial model to obtain a prediction result corresponding to the sample image; determining a value of a loss function according to the obtained prediction result and a sample result in the selected samples; in response to determining that the initial model training is complete based on the value of the loss function, the initial model is determined to be a predictive model.
In some embodiments, the training step further comprises: and in response to determining that the initial model is not trained according to the value of the loss function, adjusting parameters of the initial model, and re-selecting samples from the sample set, using the adjusted initial model as the initial model, and continuing to execute the training step.
In some embodiments, the sample result includes an angle interval identifier and a relative angle, where the angle interval identifier is used to indicate an angle interval where an included angle between a corresponding target direction of the vehicle and a driving direction of the vehicle is located, and the relative angle is used to characterize a position of the included angle between the corresponding target direction of the vehicle and the driving direction of the vehicle in the angle interval where the included angle is located, where the angle interval is obtained by dividing an omni-directional angle by a preset angle unit.
In some embodiments, the relative angle is used to characterize the difference between the included angle between the corresponding target direction of the vehicle and the direction of travel of the vehicle and the half angle of the angular interval in which it is located.
In some embodiments, the sample results include sine and/or cosine values corresponding to an angle between a target direction corresponding to the vehicle and a direction of travel of the vehicle.
In some embodiments, determining the value of the loss function based on the obtained prediction result and the sample result in the selected samples comprises: obtaining a first loss value by using an L1 loss function according to the sine value in the obtained prediction result and the sine value in the corresponding sample result; obtaining a second loss value by using an L1 loss function according to the cosine value in the obtained prediction result and the cosine value in the corresponding sample result; determining a weight corresponding to the first loss value, wherein the weight corresponding to the first loss value is inversely proportional to a derivative of the sine function at an included angle between a target direction corresponding to the vehicle and a running direction of the vehicle in a sample image represented by the obtained prediction result; determining a weight corresponding to the second loss value, wherein the weight corresponding to the second loss value is inversely proportional to the derivative of the cosine function at an included angle between a target direction corresponding to the vehicle and a running direction of the vehicle in a sample image represented by the obtained prediction result; the value of the loss function is determined from a weighted sum of the first loss value and the second loss value.
In a second aspect, embodiments of the present disclosure provide a method for predicting a driving direction, the method comprising: shooting images by using a shooting device arranged on the unmanned vehicle; inputting the image into a prediction model to obtain a prediction result for representing an included angle between a target direction corresponding to the vehicle in the image and a running direction of the vehicle, wherein the prediction model is generated by a method described in any implementation manner of the first aspect, and the target direction corresponding to the vehicle is a direction in which a connecting line between an actual position of the vehicle and an actual position of the image pickup device is located; and determining the running direction of the vehicle in the image according to the obtained prediction result.
In a third aspect, embodiments of the present disclosure provide an apparatus for generating a predictive model, the apparatus comprising: an acquisition unit configured to acquire a sample set, wherein a sample in the sample set comprises a sample image shot by an image pickup device and a sample result corresponding to the sample image, the sample result is used for representing an included angle between a target direction corresponding to a vehicle in the sample image and a running direction of the vehicle, and the target direction corresponding to the vehicle is a direction in which a connecting line between an actual position of the vehicle and an actual position of the image pickup device is located; and a training unit configured to select samples from the sample set, and to perform the following training steps: inputting a sample image in the selected sample into an initial model to obtain a prediction result corresponding to the sample image; determining a value of a loss function according to the obtained prediction result and a sample result in the selected samples; in response to determining that the initial model training is complete based on the value of the loss function, determining the initial model as a predictive model.
In some embodiments, the training step further comprises: and in response to determining that the initial model is not trained according to the value of the loss function, adjusting parameters of the initial model, and re-selecting samples from the sample set, using the adjusted initial model as the initial model, and continuing to execute the training step.
In some embodiments, the sample result includes an angle interval identifier and a relative angle, where the angle interval identifier is used to indicate an angle interval where an included angle between a corresponding target direction of the vehicle and a driving direction of the vehicle is located, and the relative angle is used to characterize a position of the included angle between the corresponding target direction of the vehicle and the driving direction of the vehicle in the angle interval where the included angle is located, where the angle interval is obtained by dividing an omni-directional angle by a preset angle unit.
In some embodiments, the relative angle is used to characterize the difference between the included angle between the corresponding target direction of the vehicle and the direction of travel of the vehicle and the half angle of the angular interval in which it is located.
In some embodiments, the sample results include sine and/or cosine values corresponding to an angle between a target direction corresponding to the vehicle and a direction of travel of the vehicle.
In some embodiments, the training unit is further configured to obtain a first loss value using an L1 loss function according to the sine value in the obtained prediction result and the sine value in the corresponding sample result; obtaining a second loss value by using an L1 loss function according to the cosine value in the obtained prediction result and the cosine value in the corresponding sample result; determining a weight corresponding to the first loss value, wherein the weight corresponding to the first loss value is inversely proportional to a derivative of the sine function at an included angle between a target direction corresponding to the vehicle and a running direction of the vehicle in a sample image represented by the obtained prediction result; determining a weight corresponding to the second loss value, wherein the weight corresponding to the second loss value is inversely proportional to the derivative of the cosine function at an included angle between a target direction corresponding to the vehicle and a running direction of the vehicle in a sample image represented by the obtained prediction result; the value of the loss function is determined from a weighted sum of the first loss value and the second loss value.
In a fourth aspect, embodiments of the present disclosure provide an apparatus for predicting a driving direction, the apparatus comprising: a photographing unit configured to photograph an image using a photographing device provided on the unmanned vehicle; the prediction unit is configured to input the image into a prediction model to obtain a prediction result used for representing an included angle between a target direction corresponding to the vehicle in the image and a running direction of the vehicle, wherein the prediction model is generated by the method described in any implementation manner of the first aspect, and the target direction corresponding to the vehicle is a direction in which a connecting line between an actual position of the vehicle and an actual position of the camera device is located; and a determining unit configured to determine a traveling direction of the vehicle in the image based on the obtained prediction result.
In a fifth aspect, embodiments of the present disclosure provide an electronic device comprising: one or more processors; a storage means for storing one or more programs; the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method as described in any of the implementations of the first aspect.
In a sixth aspect, embodiments of the present disclosure provide a computer readable medium having stored thereon a computer program which, when executed by a processor, implements a method as described in any of the implementations of the first aspect.
According to the method and apparatus for generating a prediction model provided by the embodiments of the present disclosure, a large number of training samples, each comprising a sample image and a sample result, are collected, and a prediction model is obtained through training. The sample result characterizes the included angle between the target direction corresponding to a vehicle in the sample image and the traveling direction of the vehicle, where the target direction corresponding to the vehicle is the direction of the line between the actual position of the vehicle and the actual position of the camera device. Consequently, the trained prediction model can output a prediction result characterizing the included angle between the direction of the line from the actual position of a photographed vehicle to the actual position of the camera device and the traveling direction of that vehicle, and the traveling direction of the photographed vehicle can then be calculated from this prediction result.
Drawings
Other features, objects and advantages of the present disclosure will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the following drawings:
FIG. 1 is an exemplary system architecture diagram in which an embodiment of the present disclosure may be applied;
FIG. 2 is a flow chart of one embodiment of a method for generating a predictive model in accordance with the present disclosure;
FIG. 3 is a flow chart of yet another embodiment of a method for generating a predictive model in accordance with an embodiment of the disclosure;
FIG. 4 is a flow chart of one embodiment of a method for predicting a direction of travel according to the present disclosure;
FIG. 5 is a schematic structural diagram of one embodiment of an apparatus for generating a predictive model according to the present disclosure;
FIG. 6 is a schematic structural view of one embodiment of an apparatus for predicting a direction of travel according to the present disclosure;
fig. 7 is a schematic structural diagram of an electronic device suitable for use in implementing embodiments of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings.
It should be noted that, without conflict, the embodiments of the present disclosure and features of the embodiments may be combined with each other. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
FIG. 1 illustrates an exemplary architecture 100 to which embodiments of the methods of the present disclosure for generating a predictive model or apparatus for generating a predictive model may be applied.
As shown in fig. 1, a system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The terminal devices 101, 102, 103 interact with the server 105 via the network 104 to receive or send messages or the like. Various client applications can be installed on the terminal devices 101, 102, 103. Such as an image processing class application, a browser class application, a search class application, etc.
The terminal devices 101, 102, 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices supporting image processing, including but not limited to smartphones, tablets, electronic book readers, laptop and desktop computers, and the like. When the terminal devices 101, 102, 103 are software, they can be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (e.g., multiple pieces of software or software modules for providing distributed services) or as a single piece of software or software module. This is not specifically limited here.
The server 105 may be a server providing various services, such as a server training an initial model from a set of samples sent by the terminal devices 101, 102, 103 to obtain a predictive model. Further, the server 105 may further process the received or collected image by using the initial model after training to obtain an output result corresponding to the image, and further determine the driving direction of the vehicle in the captured image according to the output result.
The sample set may also be stored directly and locally on the server 105, in which case the server 105 can directly extract the locally stored sample set to train the initial model; in this case, the terminal devices 101, 102, 103 and the network 104 may not be present.
It should be noted that, the method for generating a prediction model provided by the embodiments of the present disclosure is generally performed by the server 105, and accordingly, the apparatus for generating a prediction model is generally disposed in the server 105.
It should also be noted that the terminal apparatuses 101, 102, 103 may also be provided with an image processing function. At this time, the terminal devices 101, 102, 103 may also train the initial model based on the sample set to obtain the prediction model. At this time, the method for generating the prediction model may also be performed by the terminal devices 101, 102, 103, and correspondingly, the means for generating the prediction model may also be provided in the terminal devices 101, 102, 103. At this point, the exemplary system architecture 100 may not have the server 105 and network 104 present.
The server 105 may be hardware or software. When the server 105 is hardware, it may be implemented as a distributed server cluster formed by a plurality of servers, or as a single server. When server 105 is software, it may be implemented as multiple software or software modules (e.g., multiple software or software modules for providing distributed services), or as a single software or software module. The present invention is not particularly limited herein.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to fig. 2, a flow 200 of one embodiment of a method for generating a predictive model in accordance with the present disclosure is shown. The method for generating a predictive model comprises the steps of:
in step 201, a sample set is acquired, where a sample in the sample set includes a sample image captured by an imaging device and a sample result corresponding to the sample image.
In this embodiment, the execution subject of the method for generating a predictive model (e.g., server 105 shown in fig. 1) may obtain a sample set from a local or other storage device (e.g., terminal devices 101, 102, 103, or database, etc., shown in fig. 1).
In the present embodiment, the image pickup device may refer to various devices capable of capturing images. For example, the camera device may include, but is not limited to, a camera, a video camera, and the like. Wherein the sample image captured with the image capturing device may include at least one vehicle.
Among them, the vehicle may refer to various types of drivable vehicles. For example, the vehicle may include, but is not limited to, an automobile, a passenger car, a van, a bus, and the like. It should be understood that when the sample image includes more than two vehicles, the types of the respective vehicles included in the sample image may be the same or different.
Alternatively, the image pickup device may take a sample image on the road. For example, the image pickup device may be provided on the road side. For another example, the imaging device may be provided on an unmanned vehicle.
The sample result corresponding to the sample image may be used to characterize an angle between a target direction corresponding to the vehicle in the sample image and a traveling direction of the vehicle. The target direction corresponding to the vehicle may be a direction in which a line between an actual position of the vehicle and an actual position of the image capturing device is located.
Wherein the actual position may refer to a geographic position. The actual position of the vehicle and the actual position of the image pickup device can be represented based on various coordinate systems according to actual application requirements. Of course, the actual position of the vehicle and the actual position of the imaging device should be represented in the same coordinate system.
If the sample image includes more than two vehicles, the sample result corresponding to the sample image may include sub-sample results corresponding to each vehicle in the sample image. I.e. the sample results may comprise sub-sample results, each for characterizing an angle between a respective target direction and a respective driving direction of a respective vehicle in the sample image.
According to different application scenes or application requirements, different methods for representing the included angle between the corresponding target direction of the vehicle and the running direction of the vehicle can be adopted. For example, the sample result may include an angle between a corresponding target direction of the vehicle and a traveling direction of the vehicle. For another example, the sample result may include a normalized value of an angle between a corresponding target direction of the vehicle and a traveling direction of the vehicle.
For another example, the sample result may include a sine value and/or a cosine value of an angle between a corresponding target direction of the vehicle and a traveling direction of the vehicle. At this time, the sine value and/or the cosine value are used to represent the included angle between the corresponding target direction of the vehicle and the running direction of the vehicle.
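Where the sample result characterizes the included angle with a sine value and/or a cosine value, a minimal sketch of such an encoding, and of recovering an angle from a predicted sine/cosine pair, might look as follows; the function names and the use of degrees are illustrative assumptions rather than requirements of the present disclosure.

```python
# Sketch: encode an included angle (in degrees, an assumed convention) as (sine, cosine),
# and recover an angle from a predicted (sine, cosine) pair.
import math

def angle_to_sample_result(angle_deg: float) -> tuple[float, float]:
    rad = math.radians(angle_deg)
    return math.sin(rad), math.cos(rad)

def prediction_to_angle(sin_val: float, cos_val: float) -> float:
    # atan2 resolves the quadrant, giving an angle in [0, 360) degrees.
    return math.degrees(math.atan2(sin_val, cos_val)) % 360.0

sin_v, cos_v = angle_to_sample_result(115.0)
print(round(prediction_to_angle(sin_v, cos_v), 1))  # 115.0
```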
Step 202, selecting samples from the sample set, and performing the training steps of steps 2021-2023 as follows:
In this embodiment, samples may be flexibly selected from the sample set according to an actual application scenario. For example, a predetermined number of samples may be randomly selected from the sample set. The preset number can be preset by a technician according to actual application requirements.
Step 2021, inputting the sample image in the selected sample to the initial model to obtain a prediction result corresponding to the sample image.
In this step, the initial model may be various types of untrained or trained artificial neural networks. For example, the initial model may be a deep learning model. The initial model may also be a model that combines a variety of untrained or trained artificial neural networks. Of course, the technician can also build the initial model by using various existing deep learning frameworks according to actual application requirements.
It should be understood that if more than one sample is selected, the sample images in the selected samples may each be input to the initial model, so as to obtain the prediction results output by the initial model for the respective sample images.
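The concrete architecture of the initial model is not prescribed above; as one hedged illustration only, a small convolutional network built with an existing deep-learning framework (PyTorch is assumed here) could map a batch of sample images to one two-dimensional prediction per image, for example the sine and cosine of the included angle.

```python
# Illustrative only: neither this architecture nor the framework is prescribed by the present disclosure.
import torch
import torch.nn as nn

class InitialModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Two outputs per image: predicted sine and cosine of the included angle.
        self.head = nn.Linear(32, 2)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        features = self.backbone(images).flatten(1)
        return self.head(features)

# A batch of sample images (N, 3, H, W) yields one prediction per sample image, as in step 2021.
predictions = InitialModel()(torch.randn(4, 3, 128, 128))  # shape (4, 2)
```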
Step 2022, determining a value of the loss function according to the obtained prediction result and the sample result in the selected samples.
In this embodiment, the value of the loss function may be determined according to the result of comparing the obtained prediction result with the sample result in the selected samples. Wherein, the technician can flexibly design the loss function according to the practical application requirement in advance. For example, the loss function may include, but is not limited to, an L1 loss function, a smooth L1 loss function, an L2 loss function, and the like.
Step 2023, in response to determining that the initial model training is complete based on the value of the loss function, determines the initial model as the predictive model.
In this embodiment, it may be determined whether the initial model is trained according to the value of the loss function. The manner of determining whether the initial model is trained can be flexibly set by the technician according to the actual application requirements. For example, it may be determined whether the initial model is trained by determining whether the value of the loss function is not greater than a preset loss threshold. If the value of the loss function is greater than the loss threshold, it may be determined that the initial model is not trained to be complete.
In some alternative implementations of the present embodiment, in response to determining that the initial model is not trained according to the value of the loss function, parameters of the initial model are adjusted, and the samples are re-selected from the sample set, the training step 202 continues using the adjusted initial model as the initial model.
Wherein, after obtaining the value of the loss function, the parameters of each network layer of the initial model can be adjusted by using algorithms such as gradient descent and back propagation. Generally, multiple iterations of training are required before training is complete.
In the training process, various ways of determining whether the initial model has finished training can be adopted flexibly. For example, when the initial model is trained for the first time, whether training is complete may be determined by comparing the value of the loss function with a preset loss threshold. After the parameters of the initial model have been adjusted several consecutive times, whether training is complete may be determined according to the differences between the values of the loss function obtained after these consecutive parameter adjustments and a preset difference threshold.
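A compact sketch of the iterative training procedure described above (parameter adjustment by gradient descent and back propagation, with a loss-threshold stopping test) is given below; the optimizer, learning rate, batch size, threshold, and the sample_set.random_batch helper are assumptions introduced only for illustration.

```python
# Sketch of the training step of flow 200; hyperparameters and the data helper are assumed.
import torch

def train_prediction_model(initial_model, sample_set, loss_fn,
                           batch_size=16, loss_threshold=1e-3, max_iterations=10000):
    optimizer = torch.optim.SGD(initial_model.parameters(), lr=0.01)
    for _ in range(max_iterations):
        # Select samples from the sample set (here: a hypothetical random-batch helper).
        sample_images, sample_results = sample_set.random_batch(batch_size)
        predictions = initial_model(sample_images)       # step 2021
        loss = loss_fn(predictions, sample_results)      # step 2022
        if loss.item() <= loss_threshold:                # training complete (step 2023)
            return initial_model                         # the initial model becomes the prediction model
        optimizer.zero_grad()                            # otherwise adjust parameters and continue
        loss.backward()
        optimizer.step()
    return initial_model
```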
In some alternative implementations of the present embodiment, the sample result may include an angle interval identifier and a relative angle. The angle interval identifier may be used to indicate the angle interval in which the included angle between the target direction corresponding to the vehicle and the traveling direction of the vehicle is located. The angle intervals can be obtained by dividing the omni-directional angle by a preset angle unit. The relative angle may be used to characterize the position, within that angle interval, of the included angle between the target direction corresponding to the vehicle and the traveling direction of the vehicle. In other words, the relative angle characterizes the included angle relative to the angle interval in which it lies.
The omni-directional angle can be flexibly set by a technician according to actual application requirements. For example, the omni-directional angle may be 360 ° or 180 °. The preset angle units may also be preset by a technician. For example, the preset angle unit may be 30 °, 60 °, or the like. Taking the preset angle unit of 60 degrees as an example for dividing the omni-directional angle of 360 degrees, six angle intervals of 0-60 degrees, 60-120 degrees, 120-180 degrees, 180-240 degrees, 240-300 degrees and 300-360 degrees can be obtained. If the included angle between the corresponding target direction of the vehicle and the running direction of the vehicle is 115 degrees, the included angle between the corresponding target direction of the vehicle and the running direction of the vehicle can be determined to be in the angle interval of 60-120 degrees.
Wherein the relative angle can be flexibly determined by various representation methods. For example, the relative angle may be used to characterize the difference between the angle between the corresponding target direction of the vehicle and the direction of travel of the vehicle, and the starting angle of the angular interval in which it is located.
For example, if the included angle between the corresponding target direction of the vehicle and the traveling direction of the vehicle is 115 °, the included angle is in the angle range of 60 ° to 120 °. The relative angle may be 55 deg. from the difference between 115 deg. and the starting angle 60 deg. of the angular interval.
The omni-directional angle is divided according to the preset angle unit to obtain a plurality of angle sections, so that the angle section mark and the position in the angle section can be utilized to represent the included angle between the corresponding target direction of the vehicle and the running direction of the vehicle. The representation mode can reduce the value range of the relative angle, and is beneficial to reducing the output error of the prediction model.
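As a concrete, non-authoritative illustration of this representation, the sketch below divides a 360° omni-directional angle into 60° intervals and computes the angle interval identifier together with the relative angle measured from the starting angle of the interval, reproducing the 115° example above; the identifier scheme (0 for 0°-60°, 1 for 60°-120°, and so on) is an assumption.

```python
# Non-overlapping 60-degree intervals over a 360-degree omni-directional angle (assumed scheme).
ANGLE_UNIT = 60.0

def interval_id_and_relative_angle(angle_deg: float) -> tuple[int, float]:
    angle_deg %= 360.0
    interval_id = int(angle_deg // ANGLE_UNIT)             # 0 -> 0-60 deg, 1 -> 60-120 deg, ...
    relative_angle = angle_deg - interval_id * ANGLE_UNIT   # offset from the interval's starting angle
    return interval_id, relative_angle

print(interval_id_and_relative_angle(115.0))  # (1, 55.0): interval 60-120 deg, relative angle 55 deg
```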
In some optional implementations of this embodiment, any two adjacent angle intervals among the angle intervals obtained by dividing the omni-directional angle by the preset angle unit may have an intersection.
As an example, the angle intervals obtained may be as follows: 330°-90° (i.e., 330°-360° together with 0°-90°), 60°-150°, 120°-210°, 180°-270°, 240°-330°, and 300°-30° (i.e., 300°-360° together with 0°-30°), so that any two adjacent intervals overlap.
Allowing any two adjacent angle intervals among the divided angle intervals to have an intersection helps ensure the continuity and stability of the output result of the prediction model.
When any two adjacent angle intervals among the divided angle intervals have an intersection, for an included angle that falls within the intersection of two adjacent angle intervals, the corresponding interval identifier may include the angle interval identifiers of both of the adjacent intervals in which the angle is located.
Alternatively, the relative angle may be used to characterize the difference between the angle between the corresponding target direction of the vehicle and the direction of travel of the vehicle and the half angle of the angular interval in which it is located. Wherein the half angle of the angle interval may refer to an angle located at the center of the angle interval.
For example, if the included angle between the corresponding target direction of the vehicle and the traveling direction of the vehicle is 115 °, the included angle is in the angle range of 60 ° to 120 °. The half angle of the angle interval 60 ° to 120 ° is 60 ° + ((120 ° -60 °)/2) =90°. At this time, the relative angle may be a difference of 25 ° between 115 ° and the half angle 90 ° of the angle section.
Characterizing the included angle between the target direction corresponding to the vehicle and the traveling direction of the vehicle by its difference from the half angle of the angle interval in which it is located can further reduce the value range of the relative angle, thereby further reducing the output error of the prediction model.
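Under the same assumptions as the previous sketch, the half-angle variant measures the relative angle from the center of the interval instead of its starting angle, so its value range shrinks to roughly half.

```python
# Relative angle measured from the half angle (center) of the 60-degree interval (assumed scheme).
ANGLE_UNIT = 60.0

def relative_angle_to_half_angle(angle_deg: float) -> float:
    angle_deg %= 360.0
    interval_start = (angle_deg // ANGLE_UNIT) * ANGLE_UNIT
    half_angle = interval_start + ANGLE_UNIT / 2.0   # e.g. 90 deg for the interval 60-120 deg
    return angle_deg - half_angle

print(relative_angle_to_half_angle(115.0))  # 25.0
```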
In some optional implementations of the present embodiment, the angle between the corresponding target direction of the vehicle and the driving direction of the vehicle may be represented by a corresponding sine value and/or cosine value. At this time, the sample result may include a sine value and/or a cosine value corresponding to an angle between a target direction corresponding to the vehicle and a traveling direction of the vehicle.
In some optional implementations of this embodiment, when the sample result includes an angle interval identifier and a relative angle, the position of the included angle between the corresponding target direction of the vehicle and the driving direction of the vehicle in the angle interval in which the included angle is located may be represented by using the corresponding sine value and/or cosine value. At this time, the relative angle may include the sine value and/or cosine value corresponding to the position, within its angle interval, of the included angle between the target direction corresponding to the vehicle and the traveling direction of the vehicle.
It should be noted that the method of characterizing the included angle between the target direction corresponding to the vehicle and the traveling direction of the vehicle may be set flexibly. For example, a coordinate system and a reference direction may be selected, and the included angle between the target direction corresponding to the vehicle and the traveling direction of the vehicle may be characterized with respect to them. The present disclosure is not limited in this regard.
In some optional implementations of this embodiment, when the sample result characterizes the angle with a sine value and a cosine value, the value of the loss function may be determined according to the obtained prediction result and the sample result in the selected sample by:
step one, obtaining a first loss value by using an L1 loss function according to the sine value in the obtained prediction result and the sine value in the corresponding sample result.
The L1 loss function calculates the loss value as the least absolute deviation. Thus, the absolute value of the difference between the sine value in the prediction result and the sine value in the corresponding sample result may be determined as the first loss value.
And step two, obtaining a second loss value by using an L1 loss function according to the cosine value in the obtained prediction result and the cosine value in the corresponding sample result.
In this step, the absolute value of the difference between the cosine value in the prediction result and the cosine value in the corresponding sample result may be determined as the second loss value.
And thirdly, determining the weight corresponding to the first loss value according to the sine value in the obtained prediction result.
In this step, the first weight may be inversely proportional to the derivative of the sine function at the angle between the direction of travel of the vehicle and the target direction corresponding to the vehicle in the sample image characterized by the obtained prediction result.
The specific calculation mode of the first weight can be flexibly set by a technician according to the actual application scene. For example, the first weight may be the inverse of the derivative of the sinusoidal function at the angle between the corresponding target direction of the vehicle and the direction of travel of the vehicle in the sample image characterized by the resulting prediction. For another example, the sum of the derivative of the sine function at the included angle between the corresponding target direction of the vehicle and the traveling direction of the vehicle in the sample image represented by the obtained prediction result and the preset adjustment value may be calculated first, the adjusted derivative may be obtained, and then the inverse of the obtained adjusted derivative may be determined as the first weight.
And step four, determining the weight corresponding to the second loss value according to the cosine value in the obtained prediction result.
In this step, the second weight may be inversely proportional to the derivative of the cosine function at the angle between the direction of travel of the vehicle and the target direction corresponding to the vehicle in the sample image characterized by the obtained prediction result.
Similar to the first weight, the specific calculation mode of the second weight can be flexibly set by a technician according to the actual application scene. For example, the second weight may be the inverse of the derivative of the cosine function at the angle between the corresponding target direction of the vehicle and the direction of travel of the vehicle in the sample image characterized by the resulting prediction. For another example, the sum of the derivative of the cosine function at the included angle between the corresponding target direction of the vehicle and the traveling direction of the vehicle in the sample image represented by the obtained prediction result and the preset adjustment value may be calculated first, the adjusted derivative may be obtained, and then the reciprocal of the obtained adjusted derivative may be determined as the second weight.
And fifthly, determining the value of the loss function according to the weighted sum of the first loss value and the second loss value.
In this step, the value of the penalty function may be proportional to a weighted sum of the first penalty value and the second penalty value. For example, a weighted sum of the first loss value and the second loss value may be determined directly as the value of the loss function.
The accuracy of the value calculated with the L1 loss function is affected by the nonlinearity of the sine function and the cosine function themselves. By using the reciprocals of the derivatives of the sine function and the cosine function at the corresponding included angle to determine the weights of the values of the L1 loss function, a more accurate value of the loss function can be obtained, thereby improving the accuracy of the output result of the prediction model.
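The following sketch gathers steps one through five into a single weighted L1 loss; it assumes PyTorch, and the small constant eps added to each derivative before taking its reciprocal stands in for the preset adjustment value mentioned above, its magnitude being an assumption. Detaching the angle when computing the weights, so that the weights act as constants during back propagation, is likewise a design choice of this sketch rather than a requirement.

```python
# Sketch of the weighted L1 loss over predicted sine/cosine values; eps and the detach are assumptions.
import torch

def weighted_l1_loss(pred_sin, pred_cos, target_sin, target_cos, eps=0.05):
    first_loss = torch.abs(pred_sin - target_sin)     # step one: L1 loss on sine values
    second_loss = torch.abs(pred_cos - target_cos)    # step two: L1 loss on cosine values

    # Included angle characterized by the obtained prediction result.
    angle = torch.atan2(pred_sin.detach(), pred_cos.detach())

    # Step three: weight inversely proportional to d/dx sin(x) = cos(x) at that angle.
    first_weight = 1.0 / (torch.abs(torch.cos(angle)) + eps)
    # Step four: weight inversely proportional to d/dx cos(x) = -sin(x) at that angle.
    second_weight = 1.0 / (torch.abs(torch.sin(angle)) + eps)

    # Step five: the value of the loss function is the weighted sum (averaged over the batch).
    return (first_weight * first_loss + second_weight * second_loss).mean()
```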
According to the method provided by the above embodiment of the present disclosure, the prediction model is trained to regress the included angle between the target direction corresponding to the vehicle, that is, the direction of the line between the actual position of the vehicle and the actual position of the camera device, and the traveling direction of the vehicle. This angle keeps the appearance change of the vehicle shown in the captured image synchronized with the traveling direction of the vehicle, thereby improving the accuracy and stability of the output result of the prediction model.
With further reference to fig. 3, fig. 3 is a flow 300 of yet another embodiment of a method for generating a predictive model in accordance with the present embodiment. The flow 300 of the method for generating a predictive model includes the steps of:
step 301, obtaining a sample set, wherein a sample in the sample set comprises a sample image shot by using an imaging device and a sample result corresponding to the sample image, the sample result comprises an angle interval mark and a relative angle, and the relative angle comprises a sine value and a cosine value of a difference value between an included angle between a corresponding target direction of a vehicle and a driving direction of the vehicle and a half angle of an angle interval where the vehicle is located.
Step 302, selecting samples from the sample set, and performing training steps of steps 3021 to 3023 as follows:
step 3021, inputting the sample image in the selected sample to the initial model to obtain a prediction result corresponding to the sample image.
Step 3022, determining the value of the loss function by steps 30221-30225 as follows:
step 30221, obtaining a first loss value by using the L1 loss function according to the sine value in the obtained prediction result and the sine value in the corresponding sample result.
Step 30222, obtaining a second loss value by using the L1 loss function according to the cosine value in the obtained prediction result and the cosine value in the corresponding sample result.
In step 30223, a weight corresponding to the first loss value is determined.
In this step, the weight corresponding to the first loss value may be inversely proportional to the derivative of the sine function at the difference between the included angle between the target direction corresponding to the vehicle and the traveling direction of the vehicle, characterized by the relative angle in the obtained prediction result, and the half angle of the angle interval in which it is located. For example, the difference between the included angle between the target direction corresponding to the vehicle and the traveling direction of the vehicle and the half angle of the angle interval in which the included angle is located may first be determined according to the sine value and the cosine value included in the relative angle in the prediction result, and then the reciprocal of the derivative of the sine function at the obtained difference is calculated as the weight corresponding to the first loss value.
In step 30224, a weight corresponding to the second loss value is determined.
In this step, the weight corresponding to the second loss value may be inversely proportional to the derivative of the cosine function at the difference between the angle between the target direction corresponding to the vehicle and the traveling direction of the vehicle, which is characterized by the relative angle in the obtained prediction result, and the half angle of the angle section in which it is located.
For example, the difference between the included angle between the target direction corresponding to the vehicle and the traveling direction of the vehicle and the half angle of the angle interval in which the included angle is located may first be determined according to the sine value and the cosine value included in the relative angle in the prediction result, and then the reciprocal of the derivative of the cosine function at the obtained difference is calculated as the weight corresponding to the second loss value.
In step 30225, a weighted sum of the first loss value and the second loss value is determined as the value of the loss function.
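Compared with the previous loss sketch, the only difference in flow 300 is that the derivatives are evaluated at the difference between the included angle and the half angle of its interval, recovered here from the predicted sine and cosine of that difference; the eps constant and the detach remain assumptions of the sketch.

```python
# Sketch: weights for the first and second loss values in flow 300, evaluated at the relative angle.
import torch

def flow_300_weights(pred_rel_sin, pred_rel_cos, eps=0.05):
    # Difference between the included angle and the half angle of its interval,
    # as characterized by the relative angle in the obtained prediction result.
    rel_angle = torch.atan2(pred_rel_sin.detach(), pred_rel_cos.detach())
    first_weight = 1.0 / (torch.abs(torch.cos(rel_angle)) + eps)   # 1 / |d sin(x)/dx|
    second_weight = 1.0 / (torch.abs(torch.sin(rel_angle)) + eps)  # 1 / |d cos(x)/dx|
    return first_weight, second_weight
```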
In response to determining that the initial model training is complete based on the value of the loss function, the initial model is determined to be the predictive model, step 3023.
In step 3024, in response to determining that the initial model is not trained according to the value of the loss function, parameters of the initial model are adjusted, and the samples are re-selected from the sample set, and the training step 302 is continued using the adjusted initial model as the initial model.
The specific implementation of the steps not described in the above steps 301 and 302 may refer to the relevant descriptions of the steps 201 and 202 in the corresponding embodiment of fig. 2, which are not described herein again.
According to the method provided by the embodiment of the disclosure, dividing the omni-directional angle into angle intervals can reduce the value range of the relative angle involved in the training process, which helps reduce the output error of the prediction model and improve the stability of its output result. Meanwhile, using the reciprocals of the derivatives of the sine function and the cosine function at the corresponding angles as weights to adjust the value of the loss function can reduce the nonlinear influence of the sine and cosine functions, thereby improving the accuracy of the output result of the prediction model.
With further reference to fig. 4, a flow 400 of one embodiment of a method for predicting a direction of travel is shown. The flow 400 of the method for predicting a driving direction comprises the steps of:
in step 401, an image is captured by a capturing device provided in the unmanned vehicle.
In the present embodiment, the execution subject of the method for predicting the traveling direction may be the same as or different from the execution subject of the method described in the corresponding embodiment of fig. 2. Among them, the image pickup device may refer to various devices for photographing images that can be provided on an unmanned vehicle. For example, the camera device may include, but is not limited to, a camera, a video camera, and the like. Wherein the image captured with the image capturing device may include at least one vehicle.
And step 402, inputting the image into a prediction model to obtain a prediction result for representing the included angle between the corresponding target direction of the vehicle in the image and the running direction of the vehicle.
In this embodiment, the prediction model may be generated by the method described in the corresponding embodiment of fig. 2 or fig. 3. The target direction to which the vehicle corresponds may be a direction in which a line between an actual position of the vehicle and an actual position of the image pickup device is located.
Step 403, determining the running direction of the vehicle in the image according to the obtained prediction result.
After the prediction result is obtained, the actual position of the camera device arranged on the unmanned vehicle and the position of the vehicle in the image can be combined to calculate the running direction of the vehicle in the image.
The unmanned vehicle can detect its own actual position, from which the actual position of the camera device provided on it is obtained. The actual position of the vehicle in the image may be determined using various methods. For example, the actual position of the vehicle in the image may be calculated from the extrinsic and/or intrinsic parameters of the camera device, based on image analysis and computer vision techniques.
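One hedged way to combine these quantities is sketched below: compute the azimuth of the line from the camera device to the vehicle (the target direction) in a shared planar coordinate system, then offset it by the predicted included angle; the planar-coordinate assumption and the sign convention of the offset are illustrative and not fixed by the present disclosure.

```python
# Sketch: derive the driving direction from the predicted included angle and the two actual positions.
import math

def driving_direction(camera_xy, vehicle_xy, predicted_angle_deg):
    # Target direction: azimuth of the line from the camera device to the vehicle (assumed convention).
    dx = vehicle_xy[0] - camera_xy[0]
    dy = vehicle_xy[1] - camera_xy[1]
    target_direction_deg = math.degrees(math.atan2(dy, dx)) % 360.0
    # Driving direction: target direction offset by the predicted included angle (assumed sign).
    return (target_direction_deg + predicted_angle_deg) % 360.0

print(driving_direction((0.0, 0.0), (10.0, 5.0), 115.0))
```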
It should be understood that when the captured image includes more than two vehicles, the respective traveling directions of the respective vehicles may be determined separately.
According to the method provided by the embodiment of the disclosure, the trained prediction model is utilized to obtain the prediction result for representing the included angle between the corresponding target direction of the vehicle and the running direction of the vehicle in the image shot by the unmanned vehicle, and the corresponding target direction of the vehicle is the direction of the connecting line between the actual position of the vehicle and the actual position of the camera device, so that the running direction of each vehicle in the image can be determined according to the obtained prediction result. Based on the method, the unmanned vehicle can timely adjust the self running track according to the determined running direction of the surrounding vehicles in the running process, so that the safety of the running process is ensured.
With further reference to fig. 5, as an implementation of the method shown in the above figures, the present disclosure provides an embodiment of an apparatus for generating a predictive model, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable in various electronic devices.
As shown in fig. 5, the apparatus 500 for generating a prediction model provided in the present embodiment includes an acquisition unit 501 and a training unit 502. The obtaining unit 501 is configured to obtain a sample set, where a sample in the sample set includes a sample image captured by using an imaging device and a sample result corresponding to the sample image, where the sample result is used to characterize an included angle between a target direction corresponding to a vehicle in the sample image and a driving direction of the vehicle, and the target direction corresponding to the vehicle is a direction in which a line between an actual position of the vehicle and an actual position of the imaging device is located; training unit 502 is configured to select samples from the sample set and perform the training steps of: inputting a sample image in the selected sample into an initial model to obtain a prediction result corresponding to the sample image; determining a value of a loss function according to the obtained prediction result and a sample result in the selected samples; in response to determining that the initial model training is complete based on the value of the loss function, the initial model is determined to be a predictive model.
In the present embodiment, in the apparatus 500 for generating a prediction model: the specific processing of the acquiring unit 501 and the training unit 502 and the technical effects thereof may refer to the descriptions related to step 201 and step 202 in the corresponding embodiment of fig. 2, and are not described herein.
In some optional implementations of this embodiment, the training step further includes: and in response to determining that the initial model is not trained according to the value of the loss function, adjusting parameters of the initial model, and re-selecting samples from the sample set, using the adjusted initial model as the initial model, and continuing to execute the training step.
In some optional implementations of this embodiment, the sample result includes an angle interval identifier and a relative angle, where the angle interval identifier is used to indicate an angle interval where an included angle between a corresponding target direction of the vehicle and a driving direction of the vehicle is located, and the relative angle is used to characterize a position of the included angle between the corresponding target direction of the vehicle and the driving direction of the vehicle in the angle interval where the included angle is located, where the angle interval is obtained by dividing an omni-directional angle by a preset angle unit.
In some optional implementations of this embodiment, the relative angle is used to characterize a difference between an included angle between a corresponding target direction of the vehicle and a traveling direction of the vehicle and a half angle of an angle interval in which the vehicle is located.
In some optional implementations of this embodiment, the sample result includes a sine value and/or a cosine value corresponding to an angle between a target direction corresponding to the vehicle and a traveling direction of the vehicle.
In some optional implementations of this embodiment, the training unit 502 is further configured to obtain a first loss value according to the obtained sine value in the predicted result and the sine value in the corresponding sample result by using an L1 loss function; obtaining a second loss value by using an L1 loss function according to the cosine value in the obtained prediction result and the cosine value in the corresponding sample result; determining a weight corresponding to the first loss value, wherein the weight corresponding to the first loss value is inversely proportional to a derivative of the sine function at an included angle between a target direction corresponding to the vehicle and a running direction of the vehicle in a sample image represented by the obtained prediction result; determining a weight corresponding to the second loss value, wherein the weight corresponding to the second loss value is inversely proportional to the derivative of the cosine function at an included angle between a target direction corresponding to the vehicle and a running direction of the vehicle in a sample image represented by the obtained prediction result; the value of the loss function is determined from a weighted sum of the first loss value and the second loss value.
The apparatus provided by the above embodiment of the present disclosure acquires, through the acquisition unit, a sample set, where a sample in the sample set includes a sample image captured with an image capturing device and a sample result corresponding to the sample image, the sample result is used to characterize the included angle between the target direction corresponding to a vehicle in the sample image and the traveling direction of the vehicle, and the target direction corresponding to the vehicle is the direction of the line between the actual position of the vehicle and the actual position of the image capturing device; the training unit selects samples from the sample set and performs the following training steps: inputting a sample image in the selected samples into an initial model to obtain a prediction result corresponding to the sample image; determining a value of a loss function according to the obtained prediction result and the sample result in the selected samples; and in response to determining that training of the initial model is complete based on the value of the loss function, determining the initial model as the prediction model. By training the prediction model in this way to regress the included angle between the target direction corresponding to the vehicle, that is, the direction of the line between the actual position of the vehicle and the actual position of the image pickup device, and the traveling direction of the vehicle, the appearance change of the vehicle shown in the captured image is kept synchronized with the traveling direction of the vehicle, thereby improving the accuracy and stability of the output result of the prediction model.
With further reference to fig. 6, as an implementation of the method shown in the above figures, the present disclosure provides an embodiment of an apparatus for predicting a driving direction, which corresponds to the method embodiment shown in fig. 4, and which is particularly applicable to various electronic devices.
As shown in fig. 6, the apparatus 600 for predicting a traveling direction provided by the present embodiment includes a photographing unit 601, a prediction unit 602, and a determination unit 603. The photographing unit 601 is configured to photograph an image with a photographing device provided on the unmanned vehicle; the prediction unit 602 is configured to input the image into a prediction model to obtain a prediction result representing an included angle between a target direction corresponding to a vehicle in the image and a driving direction of the vehicle, where the prediction model is generated by the method described in the embodiment of fig. 2, and the target direction corresponding to the vehicle is the direction in which the line between the actual position of the vehicle and the actual position of the image capturing device lies; and the determination unit 603 is configured to determine the traveling direction of the vehicle in the image based on the obtained prediction result.
In the apparatus 600 for predicting the traveling direction of the present embodiment, the specific processing of the photographing unit 601, the prediction unit 602, and the determination unit 603 and their technical effects may refer to the descriptions of steps 401, 402, and 403 in the corresponding embodiment of fig. 4, and are not repeated here.
The device provided by the above embodiment of the present disclosure captures an image by a photographing unit using a photographing device provided on an unmanned vehicle; the prediction unit inputs the image into a prediction model to obtain a prediction result representing an included angle between a target direction corresponding to a vehicle in the image and a driving direction of the vehicle, where the prediction model is generated by the method described in any implementation manner of the first aspect, and the target direction corresponding to the vehicle is the direction in which the line between the actual position of the vehicle and the actual position of the image capturing device lies; and the determination unit determines the driving direction of the vehicle in the image according to the obtained prediction result, so that the unmanned vehicle can adjust its driving trajectory in time according to the determined driving directions of surrounding vehicles, ensuring safety during driving.
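As a rough, non-authoritative illustration of how the determination unit might turn the prediction result into a driving direction, the sketch below decodes the included angle from predicted sine/cosine values and combines it with the bearing of the target direction (the camera-to-vehicle line); the ground-plane coordinates, the sign convention of simply adding the angle, and the function name are all assumptions, since the present disclosure does not fix these details.

```python
import math

def driving_direction_deg(pred_sin: float, pred_cos: float,
                          camera_xy: tuple, vehicle_xy: tuple) -> float:
    """Estimate a global driving-direction bearing (degrees) for the vehicle in the image."""
    # Bearing of the target direction: the line from the camera to the vehicle.
    dx = vehicle_xy[0] - camera_xy[0]
    dy = vehicle_xy[1] - camera_xy[1]
    target_bearing = math.degrees(math.atan2(dy, dx))

    # Included angle between the target direction and the driving direction,
    # decoded from the predicted sine and cosine values.
    included_angle = math.degrees(math.atan2(pred_sin, pred_cos))

    return (target_bearing + included_angle) % 360.0
```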
Referring now to fig. 7, a schematic diagram of an electronic device (e.g., server in fig. 1) 700 suitable for use in implementing embodiments of the present disclosure is shown. The server illustrated in fig. 7 is merely an example, and should not be construed as limiting the functionality and scope of use of the embodiments of the present disclosure in any way.
As shown in fig. 7, the electronic device 700 may include a processing means (e.g., a central processor, a graphics processor, etc.) 701, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage means 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the electronic device 700 are also stored. The processing device 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
In general, the following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 707 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 708 including, for example, magnetic tape, hard disk, etc.; and a communication device 709. The communication means 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. While fig. 7 shows an electronic device 700 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead. Each block shown in fig. 7 may represent one device or a plurality of devices as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via communication device 709, or installed from storage 708, or installed from ROM 702. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 701.
It should be noted that, the computer readable medium according to the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In an embodiment of the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. Whereas in embodiments of the present disclosure, the computer-readable signal medium may comprise a data signal propagated in baseband or as part of a carrier wave, with computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: the method comprises the steps that a sample set is obtained, wherein a sample in the sample set comprises a sample image shot by using a camera device and a sample result corresponding to the sample image, the sample result is used for representing an included angle between a target direction corresponding to a vehicle in the sample image and a running direction of the vehicle, and the target direction corresponding to the vehicle is a direction in which a connecting line between an actual position of the vehicle and an actual position of the camera device is located; selecting a sample from the sample set, and performing the training steps of: inputting a sample image in the selected sample into an initial model to obtain a prediction result corresponding to the sample image; determining a value of a loss function according to the obtained prediction result and a sample result in the selected samples; in response to determining that the initial model training is complete based on the value of the loss function, the initial model is determined to be a predictive model.
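The training steps recited above can be pictured with the toy NumPy loop below; the linear stand-in model, the synthetic features, the plain (unweighted) L1 loss, and the loss-threshold completion test are illustrative assumptions only, whereas an actual implementation would train a neural network on the sample images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the sample set: synthetic "image features" and sine/cosine
# targets for the included angle (a real system derives these from the sample
# images and sample results).
theta = rng.uniform(0.0, 2.0 * np.pi, size=256)
features = np.column_stack([np.sin(theta), np.cos(theta), rng.normal(size=(256, 2))])
targets = np.column_stack([np.sin(theta), np.cos(theta)])

# Initial model: a single linear layer predicting [sin, cos].
W = rng.normal(scale=0.1, size=(features.shape[1], 2))
b = np.zeros(2)
loss_threshold = 0.05  # assumed criterion for judging training complete

for step in range(2000):
    idx = rng.choice(len(features), size=32, replace=False)  # select samples from the sample set
    x, y = features[idx], targets[idx]
    pred = x @ W + b                        # prediction results for the selected samples
    loss = np.mean(np.abs(pred - y))        # value of the (plain L1) loss function
    if loss < loss_threshold:               # training judged complete from the loss value
        break
    lr = 0.1 / (1.0 + 0.05 * step)          # decaying step size for the subgradient update
    grad = np.sign(pred - y) / pred.size    # subgradient of the L1 loss w.r.t. the predictions
    W -= lr * x.T @ grad                    # adjust the model parameters and continue training
    b -= lr * grad.sum(axis=0)

print("stopped at step", step, "with loss", round(float(loss), 4))
```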
Computer program code for carrying out operations of embodiments of the present disclosure may be written in one or more programming languages, including object oriented programming languages such as Java, Smalltalk, or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments described in the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The described units may also be provided in a processor, for example, described as: a processor includes an acquisition unit and a training unit. The names of these units do not in any way constitute a limitation of the unit itself, for example, the acquisition unit may also be described as "unit that acquires a sample set".
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combination of the above technical features, and also encompasses other technical solutions formed by any combination of the above technical features or their equivalents without departing from the spirit of the invention, for example, technical solutions in which the above features are interchanged with (but not limited to) features having similar functions disclosed in the embodiments of the present disclosure.

Claims (11)

1. A method for generating a predictive model, comprising:
the method comprises the steps that a sample set is obtained, wherein a sample in the sample set comprises a sample image shot by using an image pickup device and a sample result corresponding to the sample image, the sample result is used for representing an included angle between a target direction corresponding to a vehicle in the sample image and a running direction of the vehicle, and the target direction corresponding to the vehicle is a direction in which a connecting line between an actual position of the vehicle and an actual position of the image pickup device is located;
selecting samples from the sample set, and performing the training steps of: inputting a sample image in the selected sample into an initial model to obtain a prediction result corresponding to the sample image; determining a value of a loss function according to the obtained prediction result and a sample result in the selected samples; in response to determining that the initial model training is complete based on the value of the loss function, the initial model is determined to be a predictive model.
2. The method of claim 1, wherein the training step further comprises:
and in response to determining that the initial model is not trained according to the value of the loss function, adjusting parameters of the initial model, and re-selecting samples from the sample set, continuing to execute the training step by using the adjusted initial model as the initial model.
3. The method of claim 1, wherein the sample result comprises an angle interval identifier and a relative angle, wherein the angle interval identifier is used for indicating an angle interval where an included angle between a corresponding target direction of the vehicle and a running direction of the vehicle is located, and the relative angle is used for representing a position of the included angle between the corresponding target direction of the vehicle and the running direction of the vehicle in the angle interval where the included angle is located, and wherein the angle interval is obtained by dividing an omni-directional angle by a preset angle unit.
4. A method according to claim 3, wherein the relative angle is used to characterize the difference between the included angle between the corresponding target direction of the vehicle and the direction of travel of the vehicle and the half angle of the angular interval in which the included angle lies.
5. The method of claim 1, wherein the sample result comprises a sine value and/or a cosine value corresponding to an angle between a target direction corresponding to the vehicle and a driving direction of the vehicle.
6. The method of claim 5, wherein the determining a value of the loss function based on the obtained prediction result and the sample result in the selected samples comprises:
obtaining a first loss value by using an L1 loss function according to the sine value in the obtained prediction result and the sine value in the corresponding sample result;
obtaining a second loss value by using an L1 loss function according to the cosine value in the obtained prediction result and the cosine value in the corresponding sample result;
determining a weight corresponding to the first loss value, wherein the weight corresponding to the first loss value is inversely proportional to a derivative of a sine function at an included angle between a target direction corresponding to the vehicle and a running direction of the vehicle in a sample image represented by the obtained prediction result;
determining a weight corresponding to the second loss value, wherein the weight corresponding to the second loss value is inversely proportional to a derivative of a cosine function at an included angle between a target direction corresponding to the vehicle and a running direction of the vehicle in a sample image represented by the obtained prediction result;
and determining a value of a loss function according to a weighted sum of the first loss value and the second loss value.
7. A method for predicting a direction of travel, comprising:
shooting an image by using a shooting device arranged on the unmanned vehicle;
inputting the image into a prediction model to obtain a prediction result used for representing an included angle between a target direction corresponding to the vehicle in the image and a running direction of the vehicle, wherein the prediction model is generated by the method according to one of claims 1 to 6, and the target direction corresponding to the vehicle is a direction in which a connecting line between an actual position of the vehicle and an actual position of the camera device is located;
and determining the running direction of the vehicle in the image according to the obtained prediction result.
8. An apparatus for generating a predictive model, comprising:
an acquisition unit configured to acquire a sample set, wherein a sample in the sample set includes a sample image captured by using an imaging device and a sample result corresponding to the sample image, and the sample result is used for representing an included angle between a target direction corresponding to a vehicle in the sample image and a driving direction of the vehicle, and the target direction corresponding to the vehicle is a direction in which a line between an actual position of the vehicle and an actual position of the imaging device is located;
a training unit configured to select samples from the set of samples, and to perform the training steps of: inputting a sample image in the selected sample into an initial model to obtain a prediction result corresponding to the sample image; determining a value of a loss function according to the obtained prediction result and a sample result in the selected samples; in response to determining that the initial model training is complete based on the value of the loss function, the initial model is determined to be a predictive model.
9. An apparatus for predicting a direction of travel, comprising:
a photographing unit configured to photograph an image using a photographing device provided on the unmanned vehicle;
a prediction unit configured to input the image into a prediction model, and obtain a prediction result for representing an included angle between a target direction corresponding to a vehicle in the image and a driving direction of the vehicle, wherein the prediction model is generated by the method according to one of claims 1 to 6, and the target direction corresponding to the vehicle is a direction in which a line between an actual position of the vehicle and an actual position of the image capturing device is located;
and a determining unit configured to determine a traveling direction of the vehicle in the image based on the obtained prediction result.
10. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-6.
11. A computer readable medium having stored thereon a computer program, wherein the program when executed by a processor implements the method of any of claims 1-6.
CN202010097489.4A 2020-02-17 2020-02-17 Method and apparatus for generating predictive model Active CN111340880B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010097489.4A CN111340880B (en) 2020-02-17 2020-02-17 Method and apparatus for generating predictive model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010097489.4A CN111340880B (en) 2020-02-17 2020-02-17 Method and apparatus for generating predictive model

Publications (2)

Publication Number Publication Date
CN111340880A CN111340880A (en) 2020-06-26
CN111340880B true CN111340880B (en) 2023-08-04

Family

ID=71187000

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010097489.4A Active CN111340880B (en) 2020-02-17 2020-02-17 Method and apparatus for generating predictive model

Country Status (1)

Country Link
CN (1) CN111340880B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113029146A (en) * 2021-03-02 2021-06-25 北京白龙马云行科技有限公司 Navigation action prediction model training method, navigation action generation method and device
CN113344237A (en) * 2021-03-24 2021-09-03 安徽超视野智能科技有限公司 Illegal vehicle route prediction method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8498448B2 (en) * 2011-07-15 2013-07-30 International Business Machines Corporation Multi-view object detection using appearance model transfer from similar scenes
TWI491524B (en) * 2012-03-08 2015-07-11 Ind Tech Res Inst Surrounding bird view monitoring image generation method and training method, automobile-side device, and training device thereof

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105752081A (en) * 2014-12-30 2016-07-13 株式会社万都 Lane Change Control Device And Control Method
CN109389863A (en) * 2017-08-02 2019-02-26 华为技术有限公司 Reminding method and relevant device
CN107554430A (en) * 2017-09-20 2018-01-09 京东方科技集团股份有限公司 Vehicle blind zone view method, apparatus, terminal, system and vehicle
CN107703937A (en) * 2017-09-22 2018-02-16 南京轻力舟智能科技有限公司 Automatic Guided Vehicle system and its conflict evading method based on convolutional neural networks
CN108053067A (en) * 2017-12-12 2018-05-18 深圳市易成自动驾驶技术有限公司 Planing method, device and the computer readable storage medium of optimal path
CN109087485A (en) * 2018-08-30 2018-12-25 Oppo广东移动通信有限公司 Assisting automobile driver method, apparatus, intelligent glasses and storage medium
CN110356405A (en) * 2019-07-23 2019-10-22 桂林电子科技大学 Vehicle auxiliary travelling method, apparatus, computer equipment and readable storage medium storing program for executing
CN110400490A (en) * 2019-08-08 2019-11-01 腾讯科技(深圳)有限公司 Trajectory predictions method and apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhu Huiqiang. Research on Vehicle Behavior Analysis Technology Based on Video Tracking. China Master's Theses Full-text Database, Information Science and Technology Series, 2012, No. 01, I138-365. *

Also Published As

Publication number Publication date
CN111340880A (en) 2020-06-26

Similar Documents

Publication Publication Date Title
CN111626208B (en) Method and device for detecting small objects
JP6644742B2 (en) Algorithms and infrastructure for robust and efficient vehicle positioning
US10803324B1 (en) Adaptive, self-evolving learning and testing platform for self-driving and real-time map construction
CN110163153B (en) Method and device for recognizing traffic sign board boundary
CN111079619A (en) Method and apparatus for detecting target object in image
CN110654381B (en) Method and device for controlling a vehicle
CN107830869B (en) Information output method and apparatus for vehicle
CN111340880B (en) Method and apparatus for generating predictive model
CN109213144A (en) Man-machine interface (HMI) framework
JP2015076077A (en) Traffic volume estimation system,terminal device, traffic volume estimation method and traffic volume estimation program
CN111401255B (en) Method and device for identifying bifurcation junctions
JP6224344B2 (en) Information processing apparatus, information processing method, information processing system, and information processing program
CN111461980B (en) Performance estimation method and device of point cloud stitching algorithm
JP2019087969A (en) Travel field investigation support device
CN113409393B (en) Method and device for identifying traffic sign
CN112558036B (en) Method and device for outputting information
CN110135517B (en) Method and device for obtaining vehicle similarity
CN115512336B (en) Vehicle positioning method and device based on street lamp light source and electronic equipment
CN116776151A (en) Automatic driving model capable of performing autonomous interaction with outside personnel and training method
CN110588666B (en) Method and device for controlling vehicle running
CN113312403B (en) Map acquisition method and device, electronic equipment and storage medium
CN115556769A (en) Obstacle state quantity determination method and device, electronic device and medium
CN112859109B (en) Unmanned aerial vehicle panoramic image processing method and device and electronic equipment
CN111383337B (en) Method and device for identifying objects
CN113920174A (en) Point cloud registration method, device, equipment, medium and automatic driving vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant