CN111680730A - Method and device for generating geographic fence, computer equipment and storage medium - Google Patents

Method and device for generating geographic fence, computer equipment and storage medium Download PDF

Info

Publication number
CN111680730A
CN111680730A
Authority
CN
China
Prior art keywords
model
fence
initial
data set
target image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010485969.8A
Other languages
Chinese (zh)
Inventor
姜云鹏
刘洋
宋林桓
于小洲
孙连明
崔茂源
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
FAW Group Corp
Original Assignee
FAW Group Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by FAW Group Corp filed Critical FAW Group Corp
Priority to CN202010485969.8A priority Critical patent/CN111680730A/en
Publication of CN111680730A publication Critical patent/CN111680730A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application relates to a method and a device for generating a geo-fence, a computer device, and a storage medium. The method comprises the following steps: acquiring a target image, wherein the target image comprises a target road condition rule of a vehicle; inputting the target image into a trained fence generation model to obtain a classification result of the target image; and when the classification result is that the automatic driving function is allowed to be started, determining the target road condition rule as the geo-fence of the vehicle. With this technical solution, the computer device can directly predict the classification result of the target image through the trained fence generation model and automatically generate the geo-fence of the vehicle based on the classification result, so that manual design is not needed and labor cost is reduced. Moreover, the computer device can classify a target image containing any target road condition rule, i.e., all road condition rules can be exhausted, which expands the application range of the geo-fence and improves the extensibility of the automatic driving function.

Description

Method and device for generating geographic fence, computer equipment and storage medium
Technical Field
The present application relates to the field of automatic driving technologies, and in particular, to a method and an apparatus for generating a geo-fence, a computer device, and a storage medium.
Background
The geo-fence is a precondition for starting the automatic driving function of a vehicle and is one of the core conditions for guaranteeing the safety of automatic driving. Generally, within the geo-fence, the sensors of the vehicle can work normally, which ensures the accuracy and robustness of identification and positioning and thereby the safety of automatic driving; outside the geo-fence, the sensors of the vehicle may not work normally, so the vehicle cannot start, or must disable, the automatic driving function, and the driver operates the vehicle instead, thereby ensuring the safety of the user.
In conventional techniques, geo-fences are typically designed manually. Specifically, an engineer determines the threshold range within which each sensor of the vehicle can operate normally according to the performance indexes and actual parameters of that sensor, and then, based on these threshold ranges, designs road condition rules under which all sensors operate normally; these rules serve as the geo-fence of the vehicle.
However, this conventional method requires manual design and thus has a high labor cost; moreover, it is difficult to exhaust all rules, which limits the application range of the automatic driving function.
Disclosure of Invention
Based on this, it is necessary to provide a geo-fence generation method, an apparatus, a computer device, and a storage medium for solving the technical problems that the labor cost of the conventional geo-fence generation method is high, all rules are difficult to exhaust, and the application range of the automatic driving function is limited.
In a first aspect, an embodiment of the present application provides a method for generating a geo-fence, including:
acquiring a target image, wherein the target image comprises a target road condition rule of a vehicle;
inputting the target image into a trained fence generation model to obtain a classification result of the target image;
and when the classification result is that the automatic driving function is allowed to be started, determining the target road condition rule as the geographic fence of the vehicle.
In a second aspect, an embodiment of the present application provides a generation apparatus of a geo-fence, including:
a first acquisition module, configured to acquire a target image, wherein the target image comprises a target road condition rule of a vehicle;
the classification module is used for inputting the target images into a trained fence generation model to obtain classification results of the target images;
and the determining module is used for determining the target road condition rule as the geographic fence of the vehicle when the classification result is that the automatic driving function is allowed to be started.
In a third aspect, an embodiment of the present application provides a computer device, including a memory and a processor, where the memory stores a computer program, and the processor implements the method for generating a geo-fence provided in the first aspect of the embodiment of the present application when executing the computer program.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method for generating a geo-fence provided in the first aspect of the embodiment of the present application.
According to the geo-fence generation method and apparatus, the computer device, and the storage medium, a target image is acquired, the target image is input into a trained fence generation model to obtain a classification result of the target image, and when the classification result is that the automatic driving function is allowed to be started, the target road condition rule contained in the target image is determined as the geo-fence of the vehicle. With this technical solution, the computer device can directly predict the classification result of the target image through the trained fence generation model and automatically generate the geo-fence of the vehicle based on the classification result, so that manual design is not needed and labor cost is reduced. Moreover, the computer device can classify a target image containing any target road condition rule, i.e., all road condition rules can be exhausted, which expands the application range of the geo-fence and improves the extensibility of the automatic driving function.
Drawings
Fig. 1 is a schematic flow chart of a method for generating a geo-fence according to an embodiment of the present application;
fig. 2 is another schematic flow chart of a geo-fence generation method provided in an embodiment of the present application;
fig. 3 is a schematic flowchart of a method for generating a geo-fence according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a geo-fence generation apparatus according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions in the embodiments of the present application are further described in detail by the following embodiments in combination with the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It should be noted that the execution subject of the method embodiments described below may be a geo-fence generation apparatus, which may be implemented, by software, hardware, or a combination of software and hardware, as part of or all of a computer device. Optionally, the computer device may be an independent server or a server cluster formed by a plurality of servers; of course, the computer device may also be an electronic device that has a data processing function and can interact with an external device or a user, such as a personal computer (PC), a mobile terminal, or a portable device, and the specific form of the computer device is not limited in this embodiment. The method embodiments described below are described by taking a computer device as the execution subject.
Fig. 1 is a flowchart illustrating a method for generating a geo-fence according to an embodiment of the present application. The present embodiment is directed to a specific process of how a computer device automatically generates a geofence. As shown in fig. 1, the method may include:
and S101, acquiring a target image.
Specifically, the target image is an image for which a geo-fence needs to be generated, and may be a photograph or a video frame. The geo-fence refers to a set of rule conditions under which the automatic driving function of the vehicle may be started. The target image contains a target road condition rule of the vehicle, that is, the target image contains a driving scene of the vehicle, and different target images contain different driving scenes. Optionally, the target road condition rule may include at least one of the curvature, slope, geographical condition, weather condition, and lighting condition of the expected travel path of the vehicle. The geographical condition may be a terrain or position condition (such as longitude, latitude, and altitude), the weather condition mainly includes sunny days, rainy days, snow, wind, frost, and the like, and the lighting condition mainly includes sufficient illumination, insufficient illumination, air temperature, and the like.
The computer device can capture the driving scene of the vehicle with a high-resolution camera, thereby obtaining the target image. Alternatively, target images may be stored in advance in an album of the computer device; when a geo-fence needs to be generated, the computer device selects from the album the target image for which the geo-fence is to be generated. The computer device may also acquire the target image from an external device; for example, target images of the vehicle may be stored in the cloud, and when a geo-fence needs to be generated, the computer device obtains the target image from the cloud.
S102, inputting the target image into a trained fence generation model to obtain a classification result of the target image.
In particular, the fence generation model may be a deep learning model. Of course, the fence generation model may also be another machine learning model, such as a random forest model. The parameters of the fence generation model may be obtained through fish swarm algorithm optimization; that is, the computer device may optimize the parameters of the fence generation model with a fish swarm algorithm, so that optimal parameters can be found within an acceptable time, which improves both the training efficiency of the fence generation model and the accuracy of the classification results it predicts.
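Purely as a non-limiting illustration, the following Python sketch shows a greatly simplified fish swarm (artificial fish swarm) style search over two hyperparameters of the fence generation model; only the "prey" behaviour is modelled, and the fitness function, the hyperparameters chosen (learning rate and hidden size), and all numeric settings are assumptions, since the disclosure does not specify them.

```python
# Greatly simplified artificial-fish-swarm-style hyperparameter search (sketch).
# The toy fitness surface below stands in for "train the fence model and
# return its validation accuracy"; all bounds and constants are assumptions.
import random

BOUNDS = ((1e-4, 1e-1), (16, 512))   # assumed: (learning rate, hidden size)

def fitness(params):
    lr, hidden = params
    # Toy surface peaking near lr=0.01, hidden=128 (illustration only).
    return -((lr - 0.01) ** 2) - ((hidden - 128) ** 2) / 1e4

def clamp(params):
    return [min(max(v, lo), hi) for v, (lo, hi) in zip(params, BOUNDS)]

def fish_swarm_search(n_fish=10, steps=30, visual=0.3, step_size=0.5):
    fish = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(n_fish)]
    best = max(fish, key=fitness)
    for _ in range(steps):
        for i, f in enumerate(fish):
            # Prey behaviour: probe a random point within the visual range and
            # move toward it only if its fitness is better.
            probe = clamp([v + random.uniform(-visual, visual) * (hi - lo)
                           for v, (lo, hi) in zip(f, BOUNDS)])
            if fitness(probe) > fitness(f):
                fish[i] = clamp([v + step_size * (p - v)
                                 for v, p in zip(f, probe)])
        best = max(fish + [best], key=fitness)
    return best

print(fish_swarm_search())   # approaches [~0.01, ~128] on the toy surface
```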
After the target image is obtained, the computer device inputs the target image into the trained fence generation model, which classifies the target image to obtain its classification result. The classification result is either that the automatic driving function is allowed to be started or that the automatic driving function is disabled. Optionally, when the fence generation model is a random forest model, in this step the computer device inputs the target image into the trained random forest model, each decision tree in the random forest classifies the target image to obtain its own classification result, a simple majority vote is then taken over these results, and the final classification result is selected according to the vote. The final classification result is either that the automatic driving function is allowed in the driving scene contained in the target image or that the automatic driving function is disabled in that scene.
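As a non-limiting illustration of the random forest variant just described, the sketch below uses scikit-learn; it assumes each image has already been reduced to a fixed-length feature vector, a step the disclosure does not describe, and the feature dimension and training data here are placeholders.

```python
# Minimal random-forest classification sketch; feature extraction is assumed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.random((200, 64))            # placeholder image feature vectors
y_train = rng.integers(0, 2, 200)          # 1 = autopilot allowed, 0 = disabled

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)

target_features = rng.random((1, 64))      # features of the target image
# Each decision tree casts a vote; predict() returns the simple-majority class.
result = int(forest.predict(target_features)[0])
print("allow automatic driving" if result == 1 else "disable automatic driving")
```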
S103, when the classification result is that the automatic driving function is allowed to be started, determining the target road condition rule as the geographic fence of the vehicle.
Specifically, after the classification result of the target image is obtained, when the computer device determines that the classification result is that the automatic driving function is allowed to be started, the computer device determines the target road condition rule included in the target image as the geo-fence of the vehicle, and stores the geo-fence. In this way, the computer device can control the vehicle to turn on the autopilot function when the vehicle is within the geofence to improve the intelligence of the drive.
According to the method for generating a geo-fence provided by this embodiment, a target image is acquired, the target image is input into a trained fence generation model to obtain a classification result of the target image, and when the classification result is that the automatic driving function is allowed to be started, the target road condition rule contained in the target image is determined as the geo-fence of the vehicle. With this technical solution, the computer device can directly predict the classification result of the target image through the trained fence generation model and automatically generate the geo-fence of the vehicle based on the classification result, so that manual design is not needed and labor cost is reduced. Moreover, the computer device can classify a target image containing any target road condition rule, i.e., all road condition rules can be exhausted, which expands the application range of the geo-fence and improves the extensibility of the automatic driving function.
In one embodiment, an acquisition process of the fence generation model is further provided. On the basis of the foregoing embodiment, optionally, as shown in fig. 2, before the foregoing S101, the method may further include:
s201, acquiring an original data set.
Specifically, the original data set includes training images and annotation labels of the training images, where a label indicates whether the vehicle can start the automatic driving function. The value of a label may be 1 or 0: 1 indicates that the vehicle is allowed to start the automatic driving function, and 0 indicates that it is not. The computer device can capture the vehicle in different driving scenes with a high-resolution camera, thereby obtaining a plurality of training images containing different driving scenes. A driving scene corresponds to a target road condition rule of the vehicle and includes at least one of the curvature, slope, geographical condition, weather condition, and lighting condition of the expected travel path. The computer device then annotates the training images with labeling software: training images in which the automatic driving function can be started normally are labeled 1, and training images in which it cannot are labeled 0. Through this process, the computer device obtains the original data set for model training.
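A minimal sketch of how such a labeled original data set might be organised is given below; the folder names and file format are assumptions and are not prescribed by the disclosure.

```python
# Assumed folder layout: raw_data/autopilot_allowed/*.jpg  -> label 1
#                        raw_data/autopilot_disabled/*.jpg -> label 0
from pathlib import Path

def build_raw_dataset(root="raw_data"):
    samples = []
    for folder, label in (("autopilot_allowed", 1), ("autopilot_disabled", 0)):
        for path in sorted(Path(root, folder).glob("*.jpg")):
            samples.append((str(path), label))
    return samples   # list of (image path, annotation label) pairs
```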
S202, performing model training on a preset initial model by using the original data set to obtain the fence generation model.
Specifically, the model structure of the initial model is the same as that of the fence generation model. The initial model may also be a deep learning model, although the initial model may also be other machine learning models. In the model training process, the computer equipment can initialize each layer of parameters of the initial model, and optimize and adjust the initial values of each layer of parameters by adopting the original data set until the convergence condition of the model is reached, so that the fence generation model is obtained.
Optionally, before performing model training on the initial model with the original data set, the computer device also needs to preprocess the original data set and apply data enhancement. During preprocessing, in order to make training more comprehensive and the trained fence generation model more robust, the original data set adopted by the computer device may include image data of several different resolutions. The computer device therefore needs to perform operations such as registration, resampling, non-uniformity correction, and gray-histogram matching on the various image data. To facilitate subsequent processing, the original data set also needs to be normalized.
To increase the amount of data available during model training, the computer device may expand the original data set. As one alternative, the computer device may expand the data by horizontal mirroring (flipping) of the images. As another alternative, the computer device may crop each image, i.e., randomly select a location as the cropping center and crop the image around it.
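As one possible realisation of the preprocessing and expansion steps above, the torchvision sketch below resizes mixed-resolution inputs, applies horizontal mirroring and random cropping, and normalises the result; all sizes and normalisation constants are assumptions.

```python
# Assumed preprocessing/augmentation pipeline (sizes and statistics are placeholders).
from torchvision import transforms

augment = transforms.Compose([
    transforms.Resize((256, 256)),          # unify images of different resolutions
    transforms.RandomHorizontalFlip(),      # horizontal mirror expansion
    transforms.RandomCrop(224),             # crop around a randomly chosen centre
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```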
Optionally, referring to fig. 3, the process of S202 may include the following steps:
s202a, dividing the original data set into a training data set and a verification data set according to a preset proportion.
The preset proportion can be set according to actual requirements; optionally, it can be set to 8:2, i.e., the computer device divides the original data set into a training data set and a validation data set at a ratio of 8:2. Model training is then performed with the training data set, and the model obtained from that training is verified with the validation data set.
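A short sketch of the 8:2 division using scikit-learn is shown below, assuming raw_dataset is the list of (image, label) pairs built earlier; the random seed is an assumption.

```python
# Divide the original data set 8:2 into training and validation subsets (sketch).
from sklearn.model_selection import train_test_split

train_set, val_set = train_test_split(raw_dataset, test_size=0.2, random_state=42)
```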
S202b, inputting the training data set into the initial model, and determining actual values of parameters of each layer of the initial model.
Specifically, the training data set includes training images and labeling labels of the training images. The value of the label of the training image can be 1 or 0, when the value of the label of the training image is 1, the automatic driving function can be started in the driving scene contained in the training image, and when the value of the label of the training image is 0, the automatic driving function cannot be started in the driving scene contained in the training image.
Optionally, the process of S202b may be: inputting the training data set into the initial model to obtain a prediction label of the training image; determining a loss value of a loss function corresponding to the initial model according to the prediction label and the label of the training image; when the loss value of the loss function is larger than a preset threshold value, adjusting the initial value of each layer of parameter of the initial model until the loss value of the loss function is smaller than or equal to the preset threshold value; and determining the current value of each layer parameter of the initial model as the actual value of each layer parameter.
Specifically, the computer device inputs the training data set into the initial model, estimates a prediction label for each training image, and calculates the loss value of the loss function corresponding to the initial model based on the estimated prediction label and the actual annotation label of the training image. The loss function may be a cross-entropy loss function; of course, it may also be another type of loss function, which is not limited in this embodiment. When the loss value of the loss function is smaller than or equal to a preset threshold value, the computer device determines the current values of the parameters of each layer of the initial model as the actual values of those parameters. When the loss value of the loss function is larger than the preset threshold value, the computer device adjusts the initial values of the parameters of each layer of the initial model, retrains the adjusted initial model with the training data set, and recalculates the loss value, until the loss value of the loss function is smaller than or equal to the preset threshold value; at that point, the current values of the parameters of each layer of the adjusted initial model are determined as the actual values.
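The sketch below illustrates the training criterion described in this step with PyTorch: iterate until the average cross-entropy loss is at or below a preset threshold. The model architecture, optimiser, learning rate, and threshold value are all assumptions.

```python
# Train the initial model until the average cross-entropy loss <= threshold (sketch).
import torch
import torch.nn as nn

def train_until_threshold(model, train_loader, threshold=0.05, max_epochs=100, lr=1e-3):
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(max_epochs):
        epoch_loss = 0.0
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)  # predicted vs. annotated labels
            loss.backward()
            optimizer.step()                         # adjust each layer's parameters
            epoch_loss += loss.item()
        if epoch_loss / len(train_loader) <= threshold:
            break                                    # current values become the actual values
    return model
```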
S202c, updating the initial values of the parameters of each layer into the actual values to obtain an initial generation model.
S202d, inputting the verification data set into the initial generative model, and determining the classification accuracy of the initial generative model.
The verification data set comprises a verification image and an annotation label of the verification image. And the computer equipment inputs the verification data set into an initial generation model obtained through training of the training data set, predicts a prediction label of the verification image, and calculates the classification accuracy of the initial generation model by adopting a preset calculation formula based on the prediction label and an actual labeling label of the verification image.
S202e, adjusting the parameter value of the initial generated model according to the classification accuracy rate to obtain the fence generated model.
When the calculated classification accuracy is smaller than the preset accuracy, the computer device can adjust the parameter values of the parameters of each layer of the initial generated model, recalculate the classification accuracy of the initial generated model based on the initial generated model after parameter adjustment and the verification data set until the classification accuracy is larger than or equal to the preset accuracy, and at the moment, take the initial generated model after parameter adjustment as the final fence generated model.
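A corresponding sketch of S202d and S202e is given below: the classification accuracy is measured on the validation set and, while it stays below a preset accuracy, the parameters are adjusted; here the adjustment is simply further training, an assumed stand-in for the adjustment the text leaves unspecified, and the target accuracy is also an assumption.

```python
# Validate, then keep adjusting until the preset accuracy is reached (sketch).
import torch

def classification_accuracy(model, val_loader):
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for images, labels in val_loader:
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
    return correct / total

def tune_until_accurate(model, train_loader, val_loader, target_acc=0.95, max_rounds=10):
    for _ in range(max_rounds):
        if classification_accuracy(model, val_loader) >= target_acc:
            break
        model = train_until_threshold(model, train_loader)  # reuse the sketch above
    return model
```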
In this embodiment, the computer device divides the original data set into a training data set and a verification data set, and performs optimization adjustment on parameters of the initial model based on the training data set and the verification data set, so that the estimation accuracy of the finally obtained fence generation model is high. Therefore, when the target images are classified by using the fence generation model, the accuracy of the obtained classification result is higher, and the safety of starting the automatic driving function is guaranteed.
Fig. 4 is a schematic structural diagram of a geo-fence generation apparatus according to an embodiment of the present application. As shown in fig. 4, the apparatus may include: a first obtaining module 10, a classification module 11 and a determination module 12.
Specifically, the first obtaining module 10 is configured to obtain a target image, where the target image includes a target road condition rule of a vehicle;
the classification module 11 is configured to input the target image into a trained fence generation model, so as to obtain a classification result of the target image;
the determining module 12 is configured to determine the target road condition rule as the geo-fence of the vehicle when the classification result indicates that the automatic driving function is allowed to be turned on.
The geo-fence generation apparatus provided by the embodiment of the present application acquires a target image, inputs the target image into a trained fence generation model to obtain a classification result of the target image, and, when the classification result is that the automatic driving function is allowed to be started, determines the target road condition rule contained in the target image as the geo-fence of the vehicle. With this technical solution, the computer device can directly predict the classification result of the target image through the trained fence generation model and automatically generate the geo-fence of the vehicle based on the classification result, so that manual design is not needed and labor cost is reduced. Moreover, the computer device can classify a target image containing any target road condition rule, i.e., all road condition rules can be exhausted, which expands the application range of the geo-fence and improves the extensibility of the automatic driving function.
On the basis of the above embodiment, optionally, the apparatus may further include: the system comprises a second acquisition module and a model training module.
Specifically, the second obtaining module is configured to obtain an original data set before the first obtaining module 10 obtains the target image, where the original data set includes a training image and a label of the training image, and the label is used to indicate whether the vehicle can start an automatic driving function;
the model training module is used for performing model training on a preset initial model by adopting the original data set to obtain the fence generating model, wherein the model structure of the initial model is the same as that of the fence generating model.
On the basis of the foregoing embodiment, optionally, the model training module includes: the device comprises a data set dividing unit, a first determining unit, an updating unit, a second determining unit and an adjusting unit.
Specifically, the data set dividing unit is used for dividing the original data set into a training data set and a verification data set according to a preset proportion;
the first determining unit is used for inputting the training data set into the initial model and determining the actual value of each layer parameter of the initial model;
the updating unit is used for updating the initial values of the parameters of each layer into the actual values to obtain an initial generation model;
the second determination unit is used for inputting the verification data set into the initial generative model and determining the classification accuracy of the initial generative model;
and the adjusting unit is used for adjusting the parameter value of the initial generation model according to the classification accuracy rate to obtain the fence generation model.
On the basis of the foregoing embodiment, optionally, the first determining unit is specifically configured to input the training data set into the initial model to obtain a prediction label of the training image; determining a loss value of a loss function of the initial model according to the prediction label and the label of the training image; when the loss value of the loss function is larger than a preset threshold value, adjusting the initial value of each layer of parameter of the initial model until the loss value of the loss function is smaller than or equal to the preset threshold value; and determining the current value of each layer parameter of the initial model as the actual value of each layer parameter.
Optionally, the loss function is a cross-entropy loss function.
Optionally, the target road condition rule includes at least one of curvature, gradient, geographical condition, weather condition and illumination condition of the expected traveling path of the vehicle.
Optionally, the fence generation model is a deep learning model.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 5. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used to store data generated during the generation of the geofence. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a geo-fence generation method.
Those skilled in the art will appreciate that the architecture shown in fig. 5 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, the computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the following steps when executing the computer program:
acquiring a target image, wherein the target image comprises a target road condition rule of a vehicle;
inputting the target image into a trained fence generation model to obtain a classification result of the target image;
and when the classification result is that the automatic driving function is allowed to be started, determining the target road condition rule as the geographic fence of the vehicle.
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring an original data set, wherein the original data set comprises a training image and a label of the training image, and the label is used for indicating whether an automatic driving function of the vehicle can be started or not; and performing model training on a preset initial model by adopting the original data set to obtain the fence generating model, wherein the initial model and the fence generating model have the same model structure.
In one embodiment, the processor, when executing the computer program, further performs the steps of: dividing the original data set into a training data set and a verification data set according to a preset proportion; inputting the training data set into the initial model, and determining the actual value of each layer of parameters of the initial model; updating the initial values of the parameters of each layer to the actual values to obtain an initial generation model; inputting the verification data set into the initial generative model, and determining the classification accuracy of the initial generative model; and adjusting the parameter value of the initial generation model according to the classification accuracy to obtain the fence generation model.
In one embodiment, the processor, when executing the computer program, further performs the steps of: inputting the training data set into the initial model to obtain a prediction label of the training image; determining a loss value of a loss function corresponding to the initial model according to the prediction label and the label of the training image; when the loss value of the loss function is larger than a preset threshold value, adjusting the initial value of each layer of parameter of the initial model until the loss value of the loss function is smaller than or equal to the preset threshold value; and determining the current value of each layer parameter of the initial model as the actual value of each layer parameter.
Optionally, the loss function is a cross-entropy loss function.
Optionally, the target road condition rule includes at least one of curvature, gradient, geographical condition, weather condition and illumination condition of the expected traveling path of the vehicle.
Optionally, the fence generation model is a deep learning model.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring a target image, wherein the target image comprises a target road condition rule of a vehicle;
inputting the target image into a trained fence generation model to obtain a classification result of the target image;
and when the classification result is that the automatic driving function is allowed to be started, determining the target road condition rule as the geographic fence of the vehicle.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring an original data set, wherein the original data set comprises a training image and a label of the training image, and the label is used for indicating whether an automatic driving function of the vehicle can be started or not; and performing model training on a preset initial model by adopting the original data set to obtain the fence generating model, wherein the initial model and the fence generating model have the same model structure.
In one embodiment, the computer program when executed by the processor further performs the steps of: dividing the original data set into a training data set and a verification data set according to a preset proportion; inputting the training data set into the initial model, and determining the actual value of each layer of parameters of the initial model; updating the initial values of the parameters of each layer to the actual values to obtain an initial generation model; inputting the verification data set into the initial generative model, and determining the classification accuracy of the initial generative model; and adjusting the parameter value of the initial generation model according to the classification accuracy to obtain the fence generation model.
In one embodiment, the computer program when executed by the processor further performs the steps of: inputting the training data set into the initial model to obtain a prediction label of the training image; determining a loss value of a loss function corresponding to the initial model according to the prediction label and the label of the training image; when the loss value of the loss function is larger than a preset threshold value, adjusting the initial value of each layer of parameter of the initial model until the loss value of the loss function is smaller than or equal to the preset threshold value; and determining the current value of each layer parameter of the initial model as the actual value of each layer parameter.
Optionally, the loss function is a cross-entropy loss function.
Optionally, the target road condition rule includes at least one of curvature, gradient, geographical condition, weather condition and illumination condition of the expected traveling path of the vehicle.
Optionally, the fence generation model is a deep learning model.
The device for generating a geo-fence, the computer device and the storage medium provided in the above embodiments may execute the method for generating a geo-fence provided in any embodiment of the present application, and have corresponding functional modules and beneficial effects for executing the method. For technical details not described in detail in the above embodiments, reference may be made to a method for generating a geo-fence provided in any of the embodiments of the present application.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A geo-fencing generation method, comprising:
acquiring a target image, wherein the target image comprises a target road condition rule of a vehicle;
inputting the target image into a trained fence generation model to obtain a classification result of the target image;
and when the classification result is that the automatic driving function is allowed to be started, determining the target road condition rule as the geographic fence of the vehicle.
2. The method of claim 1, further comprising, prior to said acquiring a target image:
acquiring an original data set, wherein the original data set comprises a training image and a label of the training image, and the label is used for indicating whether an automatic driving function of the vehicle can be started or not;
and performing model training on a preset initial model by adopting the original data set to obtain the fence generating model, wherein the initial model and the fence generating model have the same model structure.
3. The method according to claim 2, wherein the performing model training on the preset initial model by using the original data set to obtain the fence generation model comprises:
dividing the original data set into a training data set and a verification data set according to a preset proportion;
inputting the training data set into the initial model, and determining the actual value of each layer of parameters of the initial model;
updating the initial values of the parameters of each layer to the actual values to obtain an initial generation model;
inputting the verification data set into the initial generative model, and determining the classification accuracy of the initial generative model;
and adjusting the parameter value of the initial generation model according to the classification accuracy to obtain the fence generation model.
4. The method of claim 3, wherein inputting the training data set to the initial model, determining actual values of parameters of layers of the initial model, comprises:
inputting the training data set into the initial model to obtain a prediction label of the training image;
determining a loss value of a loss function corresponding to the initial model according to the prediction label and the label of the training image;
when the loss value of the loss function is larger than a preset threshold value, adjusting the initial value of each layer of parameter of the initial model until the loss value of the loss function is smaller than or equal to the preset threshold value;
and determining the current value of each layer parameter of the initial model as the actual value of each layer parameter.
5. The method of claim 4, wherein the loss function is a cross-entropy loss function.
6. The method of any one of claims 1-5, wherein the target road condition rule includes at least one of the curvature, grade, geographical condition, weather condition, and lighting condition of an intended travel path of the vehicle.
7. The method of any one of claims 1 to 5, wherein the fence generation model is a deep learning model.
8. A geo-fencing generation apparatus, comprising:
a first acquisition module, configured to acquire a target image, wherein the target image comprises a target road condition rule of a vehicle;
the classification module is used for inputting the target images into a trained fence generation model to obtain classification results of the target images;
and the determining module is used for determining the target road condition rule as the geographic fence of the vehicle when the classification result is that the automatic driving function is allowed to be started.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202010485969.8A 2020-06-01 2020-06-01 Method and device for generating geographic fence, computer equipment and storage medium Pending CN111680730A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010485969.8A CN111680730A (en) 2020-06-01 2020-06-01 Method and device for generating geographic fence, computer equipment and storage medium


Publications (1)

Publication Number Publication Date
CN111680730A 2020-09-18

Family

ID=72453712

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010485969.8A Pending CN111680730A (en) 2020-06-01 2020-06-01 Method and device for generating geographic fence, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111680730A (en)


Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010028260A1 (en) * 2008-09-04 2010-03-11 United Parcel Service Of America, Inc. Driver training systems
US20130204517A1 (en) * 2012-02-08 2013-08-08 Ford Global Technologies, Llc Method and Apparatus for Alerting a Driver of Warning Conditions
US20150088792A1 (en) * 2013-09-25 2015-03-26 Google Inc. Learning Geofence Models Directly
CN104063509A (en) * 2014-07-09 2014-09-24 武汉大学 Information pushing system and method based on mobile geofence
KR20160143252A (en) * 2015-06-05 2016-12-14 현대자동차주식회사 Geofence setting method, vehicle electronic system, vehicle management center, program and recording medium
US9838843B1 (en) * 2016-10-13 2017-12-05 Adobe Systems Incorporated Generating data-driven geo-fences
CN109117709A (en) * 2017-06-23 2019-01-01 优步技术公司 Collision avoidance system for automatic driving vehicle
CN107765278A (en) * 2017-10-20 2018-03-06 珠海汇迪科技有限公司 A kind of method that geography fence is automatically generated based on GPS track
CN108318902A (en) * 2017-11-22 2018-07-24 和芯星通(上海)科技有限公司 Adaptive geo-fence detection method and device, electronic equipment and management method
CN110053479A (en) * 2018-01-15 2019-07-26 福特全球技术公司 Vehicle geography fence based on crowd
CN108345875A (en) * 2018-04-08 2018-07-31 北京初速度科技有限公司 Wheeled region detection model training method, detection method and device
CN110378483A (en) * 2018-04-12 2019-10-25 百度(美国)有限责任公司 The system and method for training machine learning model being deployed on analog platform
CN110377024A (en) * 2018-04-13 2019-10-25 百度(美国)有限责任公司 Automaticdata for automatic driving vehicle marks
CN109358614A (en) * 2018-08-30 2019-02-19 深圳市易成自动驾驶技术有限公司 Automatic Pilot method, system, device and readable storage medium storing program for executing
CN109461321A (en) * 2018-12-26 2019-03-12 爱驰汽车有限公司 Automatic Pilot fence update method, system, equipment and storage medium
CN110300175A (en) * 2019-07-02 2019-10-01 腾讯科技(深圳)有限公司 Information push method, device, storage medium and server
CN111002924A (en) * 2019-11-25 2020-04-14 长城汽车股份有限公司 Energy-saving control method and device of automatic driving system and automatic driving system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LANNY AGUSTINE et al.: "Vehicle Security and Management System on GPS Assisted Vehicle Using Geofence and Google Map", Proceedings of Second International Conference on Electrical Systems, Technology and Information 2015 (ICESTI 2015) *
强明辉 et al.: "基于UAV地面站避让系统的地理围栏算法设计与仿真" (Design and simulation of a geofence algorithm based on a UAV ground station avoidance system), 《专题研究与综述》 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112261584A (en) * 2020-10-13 2021-01-22 恒大新能源汽车投资控股集团有限公司 Geographic fence determination method and device and electronic equipment
CN112529104A (en) * 2020-12-23 2021-03-19 东软睿驰汽车技术(沈阳)有限公司 Vehicle fault prediction model generation method, fault prediction method and device
CN113473374A (en) * 2021-06-29 2021-10-01 重庆长安汽车股份有限公司 Automatic driving area management method and system based on geo-fencing technology
CN114440904A (en) * 2022-01-28 2022-05-06 中国第一汽车股份有限公司 Geographic fence data updating method, device, medium and equipment
CN114440904B (en) * 2022-01-28 2024-03-15 中国第一汽车股份有限公司 Geofence data updating method, device, medium and equipment
CN116030079A (en) * 2023-03-29 2023-04-28 北京嘀嘀无限科技发展有限公司 Geofence partitioning method, device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
CN111680730A (en) Method and device for generating geographic fence, computer equipment and storage medium
WO2022156520A1 (en) Cloud-road collaborative automatic driving model training method and system, and cloud-road collaborative automatic driving model calling method and system
US10962979B2 (en) System and method for multitask processing for autonomous vehicle computation and control
CN108528458B (en) System and method for vehicle dimension prediction
US10259468B2 (en) Active vehicle performance tuning based on driver behavior
KR102373430B1 (en) Method and device for providing personalized and calibrated adaptive deep learning model for the user of an autonomous vehicle
CN111507172A (en) Method and device for supporting safe automatic driving by predicting movement of surrounding object
US20190353487A1 (en) Positioning a terminal device based on deep learning
CN111325086B (en) Information processing system, program, and information processing method
CN111709471B (en) Object detection model training method and object detection method and device
CN108382204A (en) Speed control for vehicle
CN113383283B (en) Perceptual information processing method, apparatus, computer device, and storage medium
US12013251B2 (en) Dynamic map generation with focus on construction and localization field of technology
CN112749589A (en) Method and device for determining routing inspection path and storage medium
US12045023B2 (en) Electronic control device and neural network update system
US20220196432A1 (en) System and method for determining location and orientation of an object in a space
KR102521657B1 (en) Method and apparatus of controlling vehicel
CN113095194A (en) Image classification method and device, storage medium and electronic equipment
CN114761185B (en) Robot and method for controlling robot
CN113942364A (en) Method and device for controlling parking air conditioner and parking air conditioner
US11858511B2 (en) Control system for a motor vehicle and method for adapting the control system
CN116626670B (en) Automatic driving model generation method and device, vehicle and storage medium
CN113632100A (en) Traffic light state identification method and device, computer equipment and storage medium
CN113486719A (en) Vehicle destination prediction method, vehicle destination prediction device, computer equipment and storage medium
CN115661798B (en) Method and device for determining target area, vehicle and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200918